\section{Introduction}
One of the most well-known and long-standing problems in computer science is the question of whether $\ensuremath{\p=\np}$. A solution to the problem would have wide-ranging implications for everything from economics to cybersecurity. To this end, many have claimed to have found a proof that either $\ensuremath{\p=\np}$ or $\ensuremath{\p\neq\np}$. However, to date no such claim has been found to be correct.
There are various methods for attempting such a proof. One such method uses lower bounds on the complexity of circuits: by showing that a known NP-complete problem has circuit complexity with an exponential lower bound, one shows that $\ensuremath{\p\neq\np}$. In his paper ``A solution of the P versus NP problem based on specific property of clique function''~\cite{sima2019}, Sima tries to do precisely this. Sima analyzes the clique function and attempts to show that its circuit complexity is exponential, thus showing that $\ensuremath{\p\neq\np}$.
In this paper, we will first present some definitions and some prior work that Sima uses in his argument. We will then present Sima's argument and describe where Sima's argument fails due to making an improper generalization and failing to consider the connection between a Boolean variable and its negation. Finally we will provide an example that demonstrates the hole in his algorithm.
\section{Preliminaries}
The following are the needed definitions and an existing theorem that
will be used.
\subsubsection*{Boolean Functions}
A Boolean function of $k$ variables is a function $f: \{0, 1\}^k \rightarrow \{0,1\}$.
Boolean functions (of $k$ variables) can be expressed as Boolean (propositional) formulas with $k$ variables and the logical operators ($\wedge, \lor, \lnot$).
A Boolean function $f$ of $k$ variables is called monotone if
\begin{equation*}
(\forall w, w'\in \{0,1\}^k : w\leq w')\ [f(w)\leq f(w')],
\end{equation*}
where $\leq$ on $\{0,1\}^k$ denotes the coordinatewise (bitwise) order: $w \leq w'$ if and only if $w_i \leq w'_i$ for every $i$.
\noindent Note: If a function is monotone, then changing a 0 to a 1 in the input will never cause a decrease in the output, and changing a 1 to a 0 in the input will never cause an increase in the output.
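For readers who wish to experiment, the condition in the note above can be checked by brute force for small $k$. The following Python sketch is our own illustration (not from Sima's paper); the helper name \texttt{is\_monotone} is hypothetical.

```python
from itertools import product

def is_monotone(f, k):
    """Brute-force check that f: {0,1}^k -> {0,1} is monotone under the
    coordinatewise order (w <= w' iff w[i] <= w'[i] for every i)."""
    pts = list(product([0, 1], repeat=k))
    return all(f(w) <= f(wp)
               for w in pts for wp in pts
               if all(a <= b for a, b in zip(w, wp)))

# OR is monotone; XOR is not (flipping a 0 to a 1 can drop the output).
assert is_monotone(lambda w: w[0] | w[1], 2)
assert not is_monotone(lambda w: w[0] ^ w[1], 2)
```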
\subsubsection*{Boolean Circuits}
A Boolean circuit is a directed acyclic graph with \emph{gate} nodes and \emph{input} nodes. Gate nodes can be one of three types corresponding to the logical operators AND ($\wedge$), OR ($\lor$), and NOT ($\lnot$) and have indegrees of 2, 2 and 1 respectively and unbounded outdegrees. For his purposes, Sima expresses Boolean circuits as Boolean expressions ($f(x_1, x_2, x_3,\dots)$) where input nodes correspond to the variables, and gate nodes correspond to logical operators. This allows him to work with and modify expressions without working expressly with circuits and their diagrams. It should be noted that the number of logical operators in an expression may not be directly correlated to the complexity of the corresponding circuit as gates within circuits are allowed to output to multiple other gates. However, this does not affect Sima's argument as he uses these expressions to argue about the correctness (behavior) of the circuits rather than their complexity.
A Boolean circuit with no NOT gates is called \emph{monotone}.
A Boolean circuit in which only the input nodes are negated (only the input nodes are inputs to NOT gates) is called a \emph{standard} circuit. Because negations occur only at the input nodes, one can rewrite such a circuit,
\begin{equation*}
f(x_1, x_2,\dots,x_n),
\end{equation*}
as a circuit with twice as many input nodes but no negations,
\begin{equation*}
f(x_1, x_2,\dots,x_n, \lnot x_1, \lnot x_2,\dots,\lnot x_n).
\end{equation*}
In this manner one removes the NOT gates; however, in any valid assignment of the variables the doubled inputs remain linked: $(\forall i \in \{1,\dots,n\})\ [x_i = \var{NOT} (\lnot x_i)]$.
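The rewriting can be made concrete with a small Python sketch (our illustration; the toy circuit \texttt{g} and the names below are hypothetical). An internal NOT gate is pushed to the inputs via De Morgan's law, the negated inputs are then treated as fresh variables, and the two circuits agree on every valid assignment.

```python
def g(x1, x2):
    """A circuit with an internal NOT gate: NOT(x1 AND x2)."""
    return 1 - (x1 & x2)

def g_std(x1, x2, nx1, nx2):
    """A standard-form version on doubled inputs: De Morgan gives
    NOT(x1 AND x2) = NOT-x1 OR NOT-x2, and the negated inputs are
    treated as the fresh variables nx1, nx2."""
    return nx1 | nx2

# On every valid assignment (nxi = 1 - xi) the two circuits agree.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert g(x1, x2) == g_std(x1, x2, 1 - x1, 1 - x2)
```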
\subsubsection*{Circuit Complexity}
The \emph{circuit complexity} of a Boolean function is the size (number of gates) of the smallest Boolean circuit that computes the Boolean function.
The \emph{standard circuit complexity} of a Boolean function is the size (number of gates) of the smallest standard circuit that computes the Boolean function.
The \emph{monotone circuit complexity} of a Boolean function is the size (number of gates) of the smallest monotone circuit that computes the Boolean function.
\subsubsection*{The Clique Function and its Monotone Complexity}
For $1\leq s \leq m$, let $\var {CLIQUE}(m, s)$ be the function of $n = \binom{m}{2}$ variables representing the edges of an undirected graph \textit{G} on \textit{m} vertices, whose value is 1 if and only if \textit{G} contains an \textit{s}-clique.
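For concreteness, $\var{CLIQUE}(m, s)$ can be evaluated by brute force in a few lines of Python. This sketch is our own; the dictionary \texttt{edge}, mapping each of the $\binom{m}{2}$ vertex pairs to 0 or 1, is just one hypothetical encoding of the edge variables.

```python
from itertools import combinations

def clique(m, s, edge):
    """Brute-force CLIQUE(m, s): `edge` maps each unordered pair (i, j),
    i < j, of vertices in range(m) to 0 or 1. Returns 1 iff the graph
    contains an s-clique."""
    for verts in combinations(range(m), s):
        if all(edge[(i, j)] for i, j in combinations(verts, 2)):
            return 1
    return 0

# A triangle on vertices 0, 1, 2 of a 4-vertex graph.
edge = {p: 0 for p in combinations(range(4), 2)}
for p in [(0, 1), (0, 2), (1, 2)]:
    edge[p] = 1
assert clique(4, 3, edge) == 1   # contains a 3-clique
assert clique(4, 4, edge) == 0   # but no 4-clique
```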
The monotone complexity of the clique function is exponential. Razborov initially showed a superpolynomial lower bound. This was improved by Alon and Boppana~\cite{monotonecomplexity} to be exponential.
\section{Summary of Sima's Argument}
Sima builds his argument by attempting to fill in the holes left by Alon and Boppana's analysis of the lower bounds for the monotone complexity of the clique function. Their paper is only able to establish lower bounds for the monotone complexity of the clique function. The nonmonotone complexity of the clique function is thus left unbounded. Sima argues that the nonmonotone complexity of the clique function is in fact greater than or equal to that of the monotone complexity of the clique function.
In order to show this, Sima attempts to transform a nonmonotone circuit for the clique function into a monotone circuit for the clique function without increasing its size beyond a polynomial factor. He first considers standard circuits (as defined previously). He claims that any circuit can be transformed into a standard circuit by at most doubling the number of gates. Because of this, the difference in complexity between standard and non-standard circuits is at most a factor of 2, thus allowing him to consider only standard circuits. From this point he makes his main argument.
He considers the standard Boolean circuit $f(x_1, x_2,\dots,x_n, \lnot x_1, \lnot x_2,\dots,\lnot x_n)$ that computes the clique function, $\var{CLIQUE}(m, s)$. At this point he makes the argument that replacing any one of the negated variables ($\lnot x_i$) with 1 (TRUE) will result in a circuit that computes the same function. To prove this, he first argues that one can ``extract" any negated variable in the circuit, moving it to the front of the formula. This results in a formula of the form\footnote{Throughout this paper, we assume the standard
operator precedence rules, and in particular that $\lnot$ has higher precedence
than~$\wedge$, which itself has higher precedence than $\lor$.}
\begin{equation*}
f=\lnot x_i \wedge \var{TermA} \lor \var{TermB} \lor \var{TermC} \dots,
\end{equation*}
where $x_i$ is the extracted variable and none of $\var {TermA}$, $\var {TermB}$, $\var {TermC}$\dots include $\lnot x_i$.
He then uses this extracted form of the formula to argue that replacing $\lnot x_1$ (or some other arbitrary negated variable) with 1 does not change the value of $f$ when $f$ calculates the clique function. The argument is as follows.
Sima starts with the extracted form of the standard clique formula where $\lnot x_1$ is the extracted variable:
\begin{equation*}
f=\lnot x_1 \wedge \var{TermA} \lor \var{TermB} \lor \var{TermC} \dots\,\,.
\end{equation*}
There are then two cases he considers. Sima refers to $\lnot x_1 \wedge \var{TermA}$ as $\var{Term1}$, with $\var{Term1part1}$ referring to $\lnot x_1$ and $\var{Term1part2}$ referring to $\var{TermA}$.
Case 1: $\var{Term1part2}$ has a value of 0.\newline
In this case he argues that because this second part of $\var{Term1}$ has a value of 0, clearly no matter what you set $\lnot x_1$ to, the value of $\var{Term1}$ will be 0. Thus the output of the circuit will not change.
Case 2: $\var{Term1part2}$ has a value of 1.\newline
In this case, he argues that if $\var{Term1part2}$ has a value of 1 then the value of the entire function is also 1. The reasoning he gives is that if $\lnot x_1$ is also 1, then $x_1$ takes the value 0. As such, by the definition of the clique function, the edge corresponding to $x_1$ is disconnected and thus $\lnot x_1$ (and $x_1$) has no contribution to the size of the clique. Thus as long as the value of $\var{Term1part2}$ is 1, the value of the circuit will be 1 no matter what $\lnot x_1$ is.
The final step in Sima's proof is to argue that since this aforementioned process can be done with any of the negated variables ($\lnot x_1,\ \lnot x_2,\dots, \lnot x_n$), you can (sequentially) replace all the negated variables with the value 1, and the resulting circuit will compute the clique function correctly. Furthermore he states that since this replacement will clearly not increase the complexity of the circuit, and (as you are starting with a standard circuit) the resulting circuit will be monotone, standard circuits for the clique function are no smaller than monotone circuits for the clique function. Since the monotone circuit complexity of the clique function was proven to be exponential, the standard circuit complexity (and thus the complexity overall) of the clique function is also exponential. As such he concludes that $\ensuremath{\p\neq\np}$.
\section{Critique of Sima's Argument}
On its surface, Sima's argument seems sound. He builds off Alon and Boppana's finding about the monotone complexity of the clique function by converting a nonmonotone circuit into a monotone one without increasing its complexity. However, the problem lies in his argument about the conversion process. From here, let us follow his argument more closely.
He starts with an arbitrary standard circuit that computes the clique function. This is then expressed in the form $f = f(x_1, x_2,\dots,x_n, \lnot x_1, \lnot x_2,\dots,\lnot x_n)$. His first claim is that any of the negated variables ($\lnot x_i$) can be extracted, putting the circuit into the form $f=\lnot x_i \wedge \var{TermA} \lor \var{TermB} \lor \var{TermC} \dots$, where each of the terms $\var{TermA}$, $\var{TermB}$,\dots does not contain $\lnot x_i$. This step is in fact trivial: by expanding the Boolean formula into sum-of-products form and then reorganizing and factoring out $\lnot x_i$, the desired form is easily achieved.
He then considers extracting one of the negated variables (using $\lnot x_1$ as his example, but extending the argument to any negated variable) and makes arguments for two cases concerning the value of $\var{Term1}$ (again where $\var{Term1} = \lnot x_1 \wedge \var{TermA}$, $\var{Term1part1} = \lnot x_1$, and $\var{Term1part2} = \var{TermA}$).
In the first case he considers when $\var{Term1part2}$ has a value of 0. His argument in this case is sound. When $\var{Term1part2}$ has a value of 0, the contribution from $\var{Term1}$ to the overall function will be 0 no matter the value of $\lnot x_1$.
However, in his second case he runs into trouble. When the value of $\var{Term1part2}$ is 1, he again tries to present an argument for why setting the value of $\lnot x_1$ to 1 will not change the value of the function. But here he misconstrues what he is actually proving and fails to account for the connection between $x_1$ and $\lnot x_1$. His initial argument is true: if $\var{Term1part1}$ is 1, then clearly the value of $f$ will be 1. Because the clique function is monotone, this does in fact mean that the $x_1$ edge has no contribution to the value of the clique function, as adding an edge to the graph (changing the value of $\lnot x_1$ from 1 to 0, and vice versa for $x_1$) will not cause a clique in the graph to disappear. As such, any assignment of $\lnot x_1$ will result in the same value of the function. A more formal description of what Sima proves follows below.
\begin{definition}
Let $A$ be an assignment of the Boolean variables $x_1, x_2, x_3,\dots,x_n$. Define $A'$ to be the same assignment as $A$ except with the value of $x_1$ reversed (changed from 0 to 1 or vice versa).
\end{definition}
\begin{claim}
If both $\var{Term1part1}$ and $\var{Term1part2}$ evaluate to 1 given an assignment $A$, then both $f(A) = 1$ and $f(A') = 1$. The same is true with $A$ and $A'$ swapped, as $A = A''$.
\end{claim}
\noindent
By his argument Sima is able to prove the claim above: that if a variable assignment $A$, or its corresponding assignment $A'$, in which the assigned value of $x_1$ is reversed, causes the value of $\var{Term1}$ to be 1, then BOTH $f(A) = 1$ and $f(A') = 1$. However, Sima mistakenly assumes that all assignments where $\var{Term1part2} = 1$ fall into the form of $A$ or $A'$. This appears, at first, to be true as $\var{Term1part1} = \lnot x_1$. So by flipping the value of $\lnot x_1$ (i.e. changing from $A$ to $A'$) you can always make $\var{Term1}$ evaluate to 1. However, this ignores the connection between $x_1$ and $\lnot x_1$. Because $x_1$ must be equal to the negation of $\lnot x_1$ in any valid assignment, it is possible for there to be variable assignments $B$ for which the value of $\var{Term1part2}$ is 1, while neither $B$ nor $B'$ result in both $\var{Term1part1}$ AND $\var{Term1part2}$ having a value of 1. We will describe one such case below.
\subsection{An Illustrative Example}
Consider the following example:
Let $f(x_1, x_2,\dots,x_n)$ be a monotone circuit that computes the clique function, $\var{CLIQUE}(m, s)$, for some $s\geq 3$.
Now append to the circuit (via the logical OR operator) the term
\begin{equation*}
\lnot x_1 \wedge x_1.
\end{equation*}
The resulting circuit $f'$ is now
\begin{equation*}\label{fp}
f'(x_1, x_2,\dots,x_n) = x_1 \wedge \lnot x_1 \lor f(x_1, x_2,\dots,x_n),
\end{equation*}
or in standard form
\begin{equation*}
f'(x_1, x_2,\dots,x_n, \lnot x_1) = x_1 \wedge \lnot x_1 \lor f(x_1, x_2,\dots,x_n).
\end{equation*}
Since the appended term is NEVER satisfied and is adjoined to $f$ via an OR operator, the resulting $f'$ will have the same behavior as $f$ and will also calculate $\var{CLIQUE}(m,s)$ correctly. Now, following Sima's process and extracting the negated variable $\lnot x_1$, the result is
\begin{equation*}
f'(x_1, x_2,\dots,x_n, \lnot x_1) = \lnot x_1 \wedge (x_1) \lor \var{TermB} \lor \var{TermC}\dots,
\end{equation*}
where $\var{TermB}$, $\var{TermC}$,\dots are terms (without negation) containing any of the variables except $\lnot x_1$. Note that since $f$ is a monotone circuit, the only negated variable will be the $\lnot x_1$ that was just introduced. What Sima refers to as $\var{Term1}$ in this case is $\lnot x_1 \wedge x_1$, with $\var{Term1part1}$ being $\lnot x_1$ and $\var{Term1part2}$ being $x_1$. Now, per Sima's algorithm, set $\lnot x_1$ to 1, resulting in the following circuit $f''$:
\begin{eqnarray*}
f''(x_1, x_2,\dots,x_n, 1) &=& 1 \wedge (x_1) \lor \var{TermB}\dots\\& =& x_1 \lor \var{TermB}\dots\,\,.
\end{eqnarray*}
From here it is easy to see that as long as $x_1$ has a value of 1, the value of $f''$ will be 1. Looking at the equation from another perspective, even if ONLY $x_1$ has a value of 1 and all the other variables are set to 0, the circuit will still output 1. However, returning to the definition of the clique function, a graph with only one edge cannot contain a clique of any size greater than 2. Thus this new $f''$ clearly does not compute the same function as $f$.
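This failure can be verified mechanically. The Python sketch below is our own illustration: it instantiates $f$ as a brute-force $\var{CLIQUE}(4,3)$, builds $f'$, applies Sima's substitution $\lnot x_1 := 1$ to obtain $f''$, and checks that $f''$ wrongly accepts the single-edge graph. The names \texttt{f\_prime}, \texttt{f\_double\_prime}, and the pair-indexed encoding are hypothetical choices for this demonstration.

```python
from itertools import combinations

PAIRS = list(combinations(range(4), 2))  # the n = C(4,2) = 6 edge variables

def f(x):
    """A monotone computation of CLIQUE(4, 3); x maps each vertex pair
    to 0 or 1."""
    return int(any(all(x[p] for p in combinations(v, 2))
                   for v in combinations(range(4), 3)))

def f_prime(x, not_x1):
    """f' = (x_1 AND NOT-x_1) OR f, with NOT-x_1 as a separate input."""
    return (x[PAIRS[0]] & not_x1) | f(x)

def f_double_prime(x):
    """Sima's substitution NOT-x_1 := 1 applied to f': yields x_1 OR f."""
    return x[PAIRS[0]] | f(x)

# The single-edge graph: only x_1 (the edge PAIRS[0]) is present.
single_edge = {p: 0 for p in PAIRS}
single_edge[PAIRS[0]] = 1
assert f(single_edge) == 0               # one edge: no triangle
assert f_prime(single_edge, 0) == 0      # f' still correct (valid: 0 = 1 - x_1)
assert f_double_prime(single_edge) == 1  # f'' wrongly outputs 1
```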
The reason Sima's argument fails is that the set of all variable assignments $A$ for which both $\var{Term1part1}$ and $\var{Term1part2}$ evaluate to 1, together with their corresponding assignments $A'$, does not equal the set of all assignments for which $\var{Term1part2}$ evaluates to 1. This is stated more formally below. Let $\var{Term1part2}(A)$ be the value of $\var{Term1part2}$ given the assignment $A$, and similarly for $\var{Term1part1}(A)$. Then
\begin{align*}
\{A \mid \var{Term1part2}(A) = 1,\var{Term1part1}(A) = 1\}\ \cup\ &\\
\{A \mid \var{Term1part2}(A') = 1,\var{Term1part1}(A') = 1\} &\neq \{A \mid \var{Term1part2}(A) = 1\}.
\end{align*}
\noindent
In fact, there are no valid assignments $A$ (and thus no corresponding $A'$) for which both $\var{Term1part1}$ and $\var{Term1part2}$, in the example presented in this section, evaluate to 1. However, any assignment in which $x_1$ is 1 causes $\var{Term1part2}$ to evaluate to 1. Because $x_1$ and $\lnot x_1$, although treated as different variables in the standard formula, must be negations of each other in any valid assignment, it is possible to create cases that Sima's argument fails to cover such as the one presented above.
By failing to consider this connection between $\lnot x_1$ and $x_1$, Sima makes an intuitive generalization that, although on the surface may seem reasonable, leaves loopholes that can change the behavior of the function. The above example illustrates the error in his logic and provides a specific counterexample to his process.
\section{Conclusion}
Sima's argument attempts to build off of the findings of Alon and Boppana~\cite{monotonecomplexity} in order to extend the exponential lower bound for the monotone circuit complexity of the clique function, to the nonmonotone circuit complexity of the clique function. He describes a clever attempt to convert standard (nonmonotone) circuits into monotone circuits and attempts to prove that this conversion holds in the case of the clique function. However, in doing so he makes an unfounded generalization in his argument. This results in specific cases that can be exploited by using the connection between variables and their negations.
By describing his flaw and presenting a counterexample to his process, we have demonstrated that his method is not satisfactory. It is possible that through using a more specific description of what the minimal nonmonotone circuit of the clique function looks like, one could sidestep the problems that we have described in this paper and establish an exponential lower bound on the nonmonotone complexity of the clique function. In fact, in 2005 Amano and Maruoka were able to show that the lower bound for the complexity of nonmonotone circuits for the clique function with at most $1/6\log(\log(n))$ negation gates is in fact superpolynomial~\cite{smallnegs}. However, until such time as we have a better understanding of the clique function and its properties, a proof such as presented in Sima's paper is not possible.
\section{Introduction}
\label{sec:intro}
The interesting functors between model categories are the \emph{left
Quillen functors} and \emph{right Quillen functors} (see
\cite{MCATL}*{Def.~8.5.2}).
In this paper, we study Quillen functors between diagram categories
with the Reedy model category structure (see Theorem~\ref{thm:RFib}).
In more detail, if $\cat C$ is a Reedy category (see
Definition~\ref{def:ReedyCat}) and $\cat M$ is a model category, then there is
a \emph{Reedy model category structure} on the category $\cat M^{\cat
C}$ of $\cat C$-diagrams in $\cat M$ (see Definition~\ref{def:DiagCat} and
Theorem~\ref{thm:RFib}). The original (and most well known) examples of
Reedy model category structures are the model categories of
\emph{cosimplicial objects in a model category} and of
\emph{simplicial objects in a model category}.
Any functor $G\colon \cat C \to \cat D$ between Reedy categories
induces a functor
\begin{displaymath}
G^{*}\colon \cat M^{\cat D} \longrightarrow \cat M^{\cat C}
\end{displaymath}
of diagram categories (see Definition~\ref{def:InducedDiag}), and it is
important to know when such a functor $G^{*}$ is a left or a right
Quillen functor, since, for example, a right Quillen functor takes
fibrant objects to fibrant objects, and takes weak equivalences
between fibrant objects to weak equivalences (see
Proposition~\ref{prop:QuillenNice}). The results in this paper provide a
complete characterization of the Reedy functors (functors between
Reedy categories that preserve the structure; see
Definition~\ref{def:subreedy}) between diagram categories for which this is
the case for all model categories $\cat M$.
To be clear, we point out that for any Reedy functor $G\colon \cat C
\to \cat D$ there exist model categories $\cat M$ such that the
induced functor $G^{*}\colon \cat M^{\cat D} \to \cat M^{\cat C}$ is a
(right or left) Quillen functor. For example, if $\cat M$ is a model
category in which the weak equivalences are the isomorphisms of $\cat
M$ and all maps of $\cat M$ are both cofibrations and fibrations, then
\emph{every} Reedy functor $G\colon \cat C \to \cat D$ induces a right
Quillen functor $G^{*}\colon \cat M^{\cat D} \to \cat M^{\cat C}$
(which is also a left Quillen functor). In this paper, we
characterize those Reedy functors that induce right Quillen functors
for \emph{all} model categories $\cat M$. More precisely, we have:
\begin{thm}
\label{thm:GoodisGood}
If $G\colon \cat C \to \cat D$ is a Reedy functor (see
Definition~\ref{def:subreedy}), then the induced functor of diagram
categories $G^{*}\colon \cat M^{\cat D} \to \cat M^{\cat C}$ is a
right Quillen functor for every model category $\cat M$ if and only
if $G$ is a fibering Reedy functor (see Definition~\ref{def:goodsub}).
\end{thm}
We also have a dual result:
\begin{thm}
\label{thm:MainCofibering}
If $G\colon \cat C \to \cat D$ is a Reedy functor, then the induced
functor of diagram categories $G^{*}\colon \cat M^{\cat D} \to \cat
M^{\cat C}$ is a left Quillen functor for every model category $\cat
M$ if and only if $G$ is a cofibering Reedy functor (see
Definition~\ref{def:goodsub}).
\end{thm}
In an attempt to make these results accessible to a more general
audience, we've included a description of some background material
that is well known to the experts. The structure of the paper is as
follows: We provide some background on Reedy categories and functors
in Section~\ref{sec:background}, including discussions of filtrations,
opposites, Quillen functors, and cofinality. The only new content for
this part is in Section~\ref{sec:ReedyFunc}, where we define inverse and
direct $\cat C$-factorizations and (co)fibering Reedy functors, and
prove some results about them. We then discuss several examples and
applications of Theorem~\ref{thm:GoodisGood} and
Theorem~\ref{thm:MainCofibering} in Section~\ref{sec:Examples}. More precisely,
we look at the subdiagrams given by truncations, diagrams defined as
skeleta, and three kinds of subdiagrams determined by (co)simplicial
and multi(co)simplicial diagrams: restricted (co)simplicial objects,
diagonals of multi(co)simplicial objects, and slices of
multi(co)simplicial objects. We then finally present the proofs of
Theorem~\ref{thm:GoodisGood} and Theorem~\ref{thm:MainCofibering} in
Section~\ref{sec:Proof}. Theorem~\ref{thm:GoodisGood} will follow immediately
from Theorem~\ref{thm:FiberingThree}, which is its slight elaboration.
Theorem~\ref{thm:MainCofibering} can be proved by dualizing the proof of
Theorem~\ref{thm:GoodisGood}, but we will instead derive it in
Section~\ref{sec:PrfCofibering} from Theorem~\ref{thm:GoodisGood} and a careful
discussion of opposite categories.
\section{Reedy model category structures}
\label{sec:background}
In this section, we give the definitions and results needed for the
statements and proofs of our theorems. We assume the reader is
familiar with the basic language of model categories. The material
here is standard, with the exception of Section~\ref{sec:ReedyFunc} where
the key notions for characterizing Quillen functors between Reedy
model categories are introduced (Definition~\ref{def:CFactors} and
Definition~\ref{def:goodsub}).
\subsection{Reedy categories and their diagram categories}
\label{sec:ReedyCat}
\begin{defn}
\label{def:ReedyCat}
A \emph{Reedy category} is a small category $\cat C$ together with
two subcategories $\drc{\cat C}$ (the \emph{direct subcategory}) and
$\inv{\cat C}$ (the \emph{inverse subcategory}), both of which
contain all the objects of $\cat C$, in which every object can be
assigned a nonnegative integer (called its \emph{degree}) such that
\begin{enumerate}
\item Every non-identity map of $\drc{\cat C}$ raises degree.
\item Every non-identity map of $\inv{\cat C}$ lowers degree.
\item Every map $g$ in $\cat C$ has a unique factorization $g = \drc
g \inv g$ where $\drc g$ is in $\drc{\cat C}$ and $\inv g$ is in
$\inv{\cat C}$.
\end{enumerate}
\end{defn}
\begin{rem} The function that assigns to every object of a Reedy
category its degree is not a part of the structure, but we will
generally assume that such a degree function has been chosen.
\end{rem}
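\begin{rem}
  To fix ideas, we recall the standard motivating example, implicit in
  the mention of (co)simplicial objects in Section~\ref{sec:intro}. The
  simplex category $\Delta$, with objects the finite totally ordered
  sets $[n] = \{0, 1, \ldots, n\}$ for $n \ge 0$ and maps the
  order-preserving functions, is a Reedy category: take the degree of
  $[n]$ to be $n$, let $\drc{\Delta}$ consist of the injective maps
  (which raise degree) and $\inv{\Delta}$ of the surjective maps (which
  lower degree); every order-preserving map then factors uniquely as a
  surjection followed by an injection. The diagram categories $\cat
  M^{\Delta}$ and $\cat M^{\Delta^{\mathrm{op}}}$ carry the Reedy model
  category structures on cosimplicial and simplicial objects,
  respectively.
\end{rem}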
\begin{defn}
\label{def:DiagCat}
Let $\cat C$ be a Reedy category and let $\cat M$ be a model
category.
\begin{enumerate}
\item A \emph{$\cat C$-diagram in $\cat M$} is a functor from $\cat
C$ to $\cat M$.
\item The category $\cat M^{\cat C}$ of $\cat C$-diagrams in $\cat
M$ is the category with objects the functors from $\cat C$ to
$\cat M$ and with morphisms the natural transformations of such
functors.
\end{enumerate}
\end{defn}
In order to describe the \emph{Reedy model category structure} on the
diagram category $\cat M^{\cat C}$ in Theorem~\ref{thm:RFib}, we first
define the \emph{latching maps} and
\emph{matching maps} of a $\cat C$-diagram in $\cat M$ as follows.
\begin{defn}
\label{def:Matchobj}
Let $\cat C$ be a Reedy category, let $\cat M$ be a model category,
let $\diag X$ and $\diag Y$ be $\cat C$-diagrams in $\cat M$, let
$f\colon \diag X \to \diag Y$ be a map of diagrams, and let $\alpha$
be an object of $\cat C$.
\begin{enumerate}
\item The \emph{latching category} $\latchcat{\cat C}{\alpha}$ of
$\cat C$ at $\alpha$ is the full subcategory of
$\overcat{\drc{\cat C}}{\alpha}$ (the category of objects of
$\drc{\cat C}$ over $\alpha$; see \cite{MCATL}*{Def.~11.8.1})
containing all of the objects except the identity map of $\alpha$.
\item The \emph{latching object} of $\diag X$ at $\alpha$ is
\begin{displaymath}
\mathrm{L}_{\alpha}\diag X =
\colim_{\latchcat{\cat C}{\alpha}} \diag X
\end{displaymath}
and the \emph{latching map} of $\diag X$ at $\alpha$ is the
natural map
\begin{displaymath}
\mathrm{L}_{\alpha}\diag X \longrightarrow
\diag X_{\alpha} \rlap{\enspace .}
\end{displaymath}
We will use $\mathrm{L}_{\alpha}^{\cat C}\diag X$ to denote the
latching object if the indexing category is not obvious.
\item The \emph{relative latching map} of $f\colon \diag X \to \diag
Y$ at $\alpha$ is the natural map
\begin{displaymath}
\pushout{\diag X_{\alpha}}{\mathrm{L}_{\alpha} \diag X}
{\mathrm{L}_{\alpha} \diag Y} \longrightarrow
\diag Y_{\alpha} \rlap{\enspace .}
\end{displaymath}
\item The \emph{matching category} $\matchcat{\cat C}{\alpha}$ of
$\cat C$ at $\alpha$ is the full subcategory of
$\undercat{\inv{\cat C}}{\alpha}$ (the category of objects of
$\inv{\cat C}$ under $\alpha$; see \cite{MCATL}*{Def.~11.8.3})
containing all of the objects except the identity map of $\alpha$.
\item The \emph{matching object} of $\diag X$ at $\alpha$ is
\begin{displaymath}
\mathrm{M}_{\alpha}\diag X =
\lim_{\matchcat{\cat C}{\alpha}} \diag X
\end{displaymath}
and the \emph{matching map} of $\diag X$ at $\alpha$ is the
natural map
\begin{displaymath}
\diag X_{\alpha} \longrightarrow \mathrm{M}_{\alpha}\diag X \rlap{\enspace .}
\end{displaymath}
We will use $\mathrm{M}_{\alpha}^{\cat C}\diag X$ to denote the
matching object if the indexing category is not obvious.
\item The \emph{relative matching map} of $f\colon \diag X \to \diag
Y$ at $\alpha$ is the map
\begin{displaymath}
\diag X_{\alpha} \longrightarrow
\pullback{\diag Y_{\alpha}}{\mathrm{M}_{\alpha}\diag Y}
\mathrm{M}_{\alpha}{\diag X} \rlap{\enspace .}
\end{displaymath}
\end{enumerate}
\end{defn}
\begin{thm}[\cite{MCATL}*{Def.~15.3.3 and Thm.~15.3.4}]
\label{thm:RFib}
Let $\cat C$ be a Reedy category and let $\cat M$ be a model
category. There is a model category structure on the category $\cat
M^{\cat C}$ of $\cat C$-diagrams in $\cat M$, called the \emph{Reedy
model category structure}, in which a map $f\colon \diag X \to
\diag Y$ of $\cat C$-diagrams in $\cat M$ is
\begin{itemize}
\item a \emph{weak equivalence} if for every object $\alpha$ of
$\cat C$ the map $f_{\alpha}\colon \diag X_{\alpha} \to \diag
Y_{\alpha}$ is a weak equivalence in $\cat M$,
\item a \emph{cofibration} if for every object $\alpha$ of $\cat C$
the relative latching map $\pushout{\diag
X_{\alpha}}{\mathrm{L}_{\alpha}\diag X}{\mathrm{L}_{\alpha}\diag Y} \to
\diag Y_{\alpha}$ (see Definition~\ref{def:Matchobj}) is a cofibration in
$\cat M$, and
\item a \emph{fibration} if for every object $\alpha$ of $\cat C$
the relative matching map $\diag X_{\alpha} \to \pullback{\diag
Y_{\alpha}}{\mathrm{M}_{\alpha} \diag Y}{\mathrm{M}_{\alpha} \diag X}$
(see Definition~\ref{def:Matchobj}) is a fibration in $\cat M$.
\end{itemize}
\end{thm}
We also record the following standard result; we will have use for it
in the proof of Proposition~\ref{prop:MatchIso}.
\begin{prop}
\label{prop:DetectIso}
If $\cat M$ is a category and $f\colon X \to Y$ is a map in $\cat
M$, then $f$ is an isomorphism if and only if it induces an
isomorphism of the sets of maps $f_{*}\colon \cat M(W,X) \to \cat
M(W,Y)$ for every object $W$ of $\cat M$.
\end{prop}
\begin{proof}
If $g\colon Y \to X$ is an inverse for $f$, then $g_{*}\colon \cat
M(W,Y) \to \cat M(W,X)$ is an inverse for $f_{*}$.
Conversely, if $f_{*}\colon \cat M(W,X) \to \cat M(W,Y)$ is an
isomorphism for every object $W$ of $\cat M$, then $f_{*}\colon \cat
M(Y,X) \to \cat M(Y,Y)$ is an epimorphism, and so there is a map
$g\colon Y \to X$ such that $fg = 1_{Y}$. We then have two maps
$gf,1_{X}\colon X \to X$, and
\begin{displaymath}
f_{*}(gf) = fgf = 1_{Y}f = f = f_{*}(1_{X}) \rlap{\enspace .}
\end{displaymath}
Since $f_{*}\colon \cat M(X,X) \to \cat M(X,Y)$ is a monomorphism,
this implies that $gf = 1_{X}$.
\end{proof}
\subsection{Filtrations of Reedy categories}
\label{sec:ReedyFilt}
The notion of a filtration of a Reedy category will be used in the
proof of Theorem~\ref{thm:RtGdNec}.
\begin{defn}
\label{def:filtration}
If $\cat C$ is a Reedy category (with a chosen degree function) and
$n$ is a nonnegative integer, the \emph{$n$'th filtration}
$\mathrm{F}^{n}\cat C$ of $\cat C$ (also called the \emph{$n$'th truncation}
$\cat C^{\le n}$ of $\cat C$) is the full subcategory of $\cat C$
with objects the objects of $\cat C$ of degree at most $n$.
\end{defn}
The following is a direct consequence of the definitions.
\begin{prop}
\label{prop:SeqFilt}
If $\cat C$ is a Reedy category then each of its filtrations
$\mathrm{F}^{n}\cat C$ is a Reedy category with $\drc{\mathrm{F}^{n}\cat C} =
\drc{\cat C} \cap \mathrm{F}^{n}\cat C$ and $\inv{\mathrm{F}^{n}\cat C} = \inv{\cat
C} \cap \mathrm{F}^{n}\cat C$, and $\cat C$ equals the union of the
increasing sequence of subcategories $\mathrm{F}^{0}\cat C \subset
\mathrm{F}^{1}\cat C \subset \mathrm{F}^{2}\cat C \subset \cdots$.
\end{prop}
The following will be used in the proof of Theorem~\ref{thm:RtGdNec} (which
is one direction of Theorem~\ref{thm:GoodisGood}).
\begin{prop}[\cite{MCATL}*{Thm.~15.2.1 and Cor.~15.2.9}]
\label{prop:ConstructFilt}
For $n > 0$, extending a diagram $\diag X$ on $\mathrm{F}^{n-1}\cat D$ to
one on $\mathrm{F}^{n}\cat D$ consists of choosing, for every object
$\gamma$ of degree $n$, an object $\diag X_{\gamma}$ and a
factorization $\mathrm{L}_{\gamma} \diag X \to \diag X_{\gamma} \to
\mathrm{M}_{\gamma}\diag X$ of the natural map $\mathrm{L}_{\gamma} \diag X
\to \mathrm{M}_{\gamma}\diag X$ from the latching object of $\diag X$ at
$\gamma$ to the matching object of $\diag X$ at $\gamma$.
\end{prop}
\subsection{Reedy functors}
\label{sec:ReedyFunc}
In Definition~\ref{def:subreedy} we introduce the notion of a \emph{Reedy
functor} between Reedy categories; this is a functor that preserves
the Reedy structure.
\begin{defn}
\label{def:subreedy}
If $\cat C$ and $\cat D$ are Reedy categories, then a \emph{Reedy
functor} $G\colon \cat C \to \cat D$ is a functor that takes
$\drc{\cat C}$ into $\drc{\cat D}$ and takes $\inv{\cat C}$ into
$\inv{\cat D}$. If $\cat D$ is a Reedy category, then a \emph{Reedy
subcategory} of $\cat D$ is a subcategory $\cat C$ of $\cat D$
that is a Reedy category for which the inclusion functor $\cat C \to
\cat D$ is a Reedy functor.
\end{defn}
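For example, if $\cat C$ is a Reedy category and $n \ge 0$, then
Proposition~\ref{prop:SeqFilt} implies that the inclusion
\begin{displaymath}
  \mathrm{F}^{n}\cat C \longrightarrow \cat C
\end{displaymath}
of the $n$'th filtration (see Definition~\ref{def:filtration}) takes
$\drc{\mathrm{F}^{n}\cat C}$ into $\drc{\cat C}$ and $\inv{\mathrm{F}^{n}\cat C}$
into $\inv{\cat C}$, and so it is a Reedy functor and $\mathrm{F}^{n}\cat
C$ is a Reedy subcategory of $\cat C$.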
Note that a Reedy functor is \emph{not} required to respect the
filtrations on the Reedy categories $\cat C$ and $\cat D$ (see
Definition~\ref{def:filtration}). Thus, a Reedy functor might take
non-identity maps to identity maps (see, e.g.,
Proposition~\ref{prop:MatchIso}).
\begin{defn}
\label{def:InducedDiag}
If $G\colon \cat C \to \cat D$ is a Reedy functor between Reedy
categories and $\cat M$ is a model category, then $G$ induces a
functor of diagram categories $G^{*}\colon \cat M^{\cat D} \to \cat
M^{\cat C}$ under which
\begin{itemize}
\item a functor $\diag X\colon \cat D \to \cat M$ goes to the
functor $G^{*}\diag X\colon \cat C \to \cat M$ that is the
composition $\cat C \xrightarrow{G} \cat D \xrightarrow{\diag X}
\cat M$ (so that for an object $\alpha$ of $\cat C$ we have
$(G^{*}\diag X)_{\alpha} = \diag X_{G\alpha}$) and
\item a natural transformation of $\cat D$-diagrams $f\colon \diag X
  \to \diag Y$ goes to the natural transformation of $\cat
  C$-diagrams $G^{*}f$ that at an object $\alpha$ of $\cat C$ is the
  map $f_{G\alpha}\colon \diag X_{G\alpha} \to \diag Y_{G\alpha}$ in
  $\cat M$.
\end{itemize}
\end{defn}
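In particular, if $G\colon \cat C \to \cat D$ is the inclusion of a
subcategory, then $G^{*}$ is the restriction functor
\begin{displaymath}
  G^{*}\colon \cat M^{\cat D} \longrightarrow \cat M^{\cat C},
  \qquad (G^{*}\diag X)_{\alpha} = \diag X_{\alpha},
\end{displaymath}
which forgets the values of a diagram on the objects of $\cat D$ that
are not in $\cat C$.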
The main results of this paper (Theorem~\ref{thm:GoodisGood} and
Theorem~\ref{thm:MainCofibering}) determine when the functor $G^{*}\colon
\cat M^{\cat D} \to \cat M^{\cat C}$ is either a left Quillen functor
or a right Quillen functor for all model categories $\cat M$. The
characterizations will depend on the notions of the \emph{category of
inverse $\cat C$-factorizations} of a map in $\inv{\cat D}$ and the
\emph{category of direct $\cat C$-factorizations} of a map in
$\drc{\cat D}$.
\begin{defn}
\label{def:CFactors}
Let $G\colon \cat C \to \cat D$ be a Reedy functor between Reedy
categories, let $\alpha$ be an object of $\cat C$, and let $\beta$
be an object of $\cat D$.
\begin{enumerate}
\item If $\sigma\colon G\alpha \to \beta$ is a map in $\inv{\cat
D}$, then the \emph{category of inverse $\cat C$-factorizations}
of $(\alpha, \sigma)$ is the category $\invfact{\cat
C}{\alpha}{\sigma}$ in which
\begin{itemize}
\item an object is a pair
\begin{displaymath}
\bigl((\nu\colon \alpha \to \gamma),
(\mu\colon G\gamma\to \beta)\bigr)
\end{displaymath}
consisting of a non-identity map $\nu\colon \alpha \to \gamma$
in $\inv{\cat C}$ and a map $\mu\colon G\gamma \to \beta$ in
$\inv{\cat D}$ such that the diagram
\begin{displaymath}
\xymatrix@=.6em{
{G\alpha} \ar[rr]^{G\nu} \ar[dr]_{\sigma}
&& {G\gamma} \ar[dl]^{\mu}\\
& {\beta}
}
\end{displaymath}
commutes, and
\item a map from $\bigl((\nu\colon \alpha \to \gamma), (\mu\colon
G\gamma\to \beta)\bigr)$ to $\bigl((\nu'\colon \alpha \to
\gamma'), (\mu'\colon G\gamma'\to \beta)\bigr)$ is a map
$\tau\colon \gamma \to \gamma'$ in $\inv{\cat C}$ such that the
triangles
      \begin{displaymath}
        \vcenter{
          \xymatrix@=.6em{
            &{\alpha} \ar[dl]_{{\nu}} \ar[dr]^{\nu'}\\
            {\gamma} \ar[rr]_{\tau}
            && {\gamma'}
          }}
        \qquad\text{and}\qquad
        \vcenter{
          \xymatrix@=.6em{
            {G\gamma} \ar[rr]^{G\tau} \ar[dr]_{\mu}
            && {G\gamma'} \ar[dl]^{\mu'}\\
            & {\beta}
          }}
      \end{displaymath}
commute.
\end{itemize}
We will often refer just to the map $\sigma$ when the object
$\alpha$ is obvious. In particular, when $G\colon \cat C \to \cat
D$ is the inclusion of a subcategory the object $\alpha$ is
determined by the morphism $\sigma$, and we will often refer to
the \emph{category of inverse $\cat C$-factorizations of $\sigma$}.
\item If $\sigma\colon \beta \to G\alpha$ is a map in $\drc{\cat
D}$, then the \emph{category of direct $\cat C$-factorizations}
of $(\alpha, \sigma)$ is the category $\drcfact{\cat
C}{\alpha}{\sigma}$ in which
\begin{itemize}
\item an object is a pair
\begin{displaymath}
\bigl((\nu\colon \gamma \to \alpha),
(\mu\colon \beta \to G\gamma)\bigr)
\end{displaymath}
consisting of a non-identity map $\nu\colon \gamma \to \alpha$
in $\drc{\cat C}$ and a map $\mu\colon \beta \to G\gamma$ in
$\drc{\cat D}$ such that the diagram
\begin{displaymath}
\xymatrix@=.6em{
{\beta} \ar[rr]^{\mu} \ar[dr]_{\sigma}
&& {G\gamma} \ar[dl]^{G\nu}\\
& {G\alpha}
}
\end{displaymath}
commutes, and
\item a map from $\bigl((\nu\colon \gamma \to \alpha), (\mu\colon
\beta \to G\gamma)\bigr)$ to $\bigl((\nu'\colon \gamma' \to
\alpha), (\mu'\colon \beta \to G\gamma')\bigr)$ is a map
$\tau\colon \gamma \to \gamma'$ in $\drc{\cat C}$ such that the
triangles
      \begin{displaymath}
        \vcenter{
          \xymatrix@=.8em{
            & {\alpha}\\
            {\gamma} \ar[rr]_{\tau} \ar[ur]^{\nu}
            && {\gamma'} \ar[ul]_{\nu'}
          }}
        \qquad\text{and}\qquad
        \vcenter{
          \xymatrix@=.8em{
            {G\gamma} \ar[rr]^{G\tau}
            && {G\gamma'}\\
            & {\beta} \ar[ul]^{\mu} \ar[ur]_{\mu'}
          }}
      \end{displaymath}
commute.
\end{itemize}
We will often refer just to the map $\sigma$ when the object
$\alpha$ is obvious. In particular, when $G\colon \cat C \to \cat
D$ is the inclusion of a subcategory the object $\alpha$ is
determined by the morphism $\sigma$, and we will often refer to
the \emph{category of direct $\cat C$-factorizations of $\sigma$}.
\end{enumerate}
\end{defn}
\begin{prop}
\label{prop:CategoryOfFacts}
Let $G\colon \cat C \to \cat D$ be a Reedy functor between Reedy
categories, let $\alpha$ be an object of $\cat C$, and let $\beta$
be an object of $\cat D$.
\begin{enumerate}
\item If $\sigma\colon G\alpha \to \beta$ is a map in $\inv{\cat
D}$, then we have an induced functor
\begin{displaymath}
G_{*}\colon \matchcat{\cat C}{\alpha} \longrightarrow
\undercat{\inv{\cat D}}{G\alpha}
\end{displaymath}
from the matching category of $\cat C$ at $\alpha$ to the category
of objects of $\inv{\cat D}$ under $G\alpha$ that takes the object
$\alpha \to \gamma$ of $\matchcat{\cat C}{\alpha}$ to the object
$G\alpha \to G\gamma$ of $\undercat{\inv{\cat D}}{G\alpha}$, and
the category $\invfact{\cat C}{\alpha}{\sigma}$ of inverse $\cat
C$-factorizations of $(\alpha, \sigma)$ (see
Definition~\ref{def:CFactors}) is the category $\overcat{G_{*}}{\sigma}$
of objects of $\matchcat{\cat C}{\alpha}$ over $\sigma$.
\item If $\sigma\colon \beta \to G\alpha$ is a map in $\drc{\cat
D}$, then we have an induced functor
\begin{displaymath}
G_{*}\colon \latchcat{\cat C}{\alpha} \longrightarrow
\overcat{\drc{\cat D}}{G\alpha}
\end{displaymath}
from the latching category of $\cat C$ at $\alpha$ to the category
of objects of $\drc{\cat D}$ over $G\alpha$ that takes the object
$\gamma \to \alpha$ of $\latchcat{\cat C}{\alpha}$ to the object
$G\gamma \to G\alpha$ of $\overcat{\drc{\cat D}}{G\alpha}$, and
the category $\drcfact{\cat C}{\alpha}{\sigma}$ of direct $\cat
C$-factorizations of $(\alpha, \sigma)$ is the category
$\undercat{G_{*}}{\sigma}$ of objects of $\latchcat{\cat
C}{\alpha}$ under $\sigma$.
\end{enumerate}
\end{prop}
\begin{proof}
We will prove part~1; the proof of part~2 is similar. An object of
$\overcat{G_{*}}{\sigma}$ is a pair $\bigl((\nu\colon \alpha \to
\gamma), (\mu\colon G\gamma \to \beta)\bigr)$ where $\nu\colon
\alpha \to \gamma$ is an object of $\matchcat{\cat C}{\alpha}$ and
$\mu\colon G\gamma \to \beta$ is a map in $\inv{\cat D}$ that makes
the triangle
  \begin{displaymath}
    \xymatrix@=.6em{
      &{G\alpha} \ar[dl]_{{G\nu}} \ar[dr]^{\sigma}\\
      {G\gamma} \ar[rr]_{\mu}
      && {\beta}
    }
  \end{displaymath}
commute. A map from $\bigl((\nu\colon \alpha \to \gamma),
(\mu\colon G\gamma \to \beta)\bigr)$ to $\bigl((\nu'\colon \alpha
\to \gamma'), (\mu'\colon G\gamma' \to \beta)\bigr)$ is a map
$\tau\colon \gamma \to \gamma'$ in $\inv{\cat C}$ that makes the
triangles
  \begin{displaymath}
    \vcenter{
      \xymatrix@=.8em{
        &{\alpha} \ar[dl]_{\nu} \ar[dr]^{\nu'}\\
        {\gamma} \ar[rr]_{\tau}
        && {\gamma'}
      }}
    \qquad\text{and}\qquad
    \vcenter{
      \xymatrix@=.6em{
        {G\gamma} \ar[rr]^{G\tau} \ar[dr]_{\mu}
        && {G\gamma'} \ar[dl]^{\mu'}\\
        & {\beta}
      }}
  \end{displaymath}
commute. This is exactly the definition of the category of inverse
$\cat C$-factorizations of $(\alpha, \sigma)$.
\end{proof}
\begin{prop}
\label{prop:CatFacts}
Let $\cat C$ and $\cat D$ be Reedy categories, let $G\colon \cat C
\to \cat D$ be a Reedy functor, and let $\alpha$ be an object of
$\cat C$.
\begin{enumerate}
\item If $G$ takes every non-identity map $\alpha \to \gamma$ in
$\inv{\cat C}$ to a non-identity map in $\inv{\cat D}$, then there
is an induced functor of matching categories
\begin{displaymath}
G_{*}\colon \matchcat{\cat C}{\alpha}
\to \matchcat{\cat D}{G\alpha}
\end{displaymath}
(see Definition~\ref{def:Matchobj}) that takes the object $\sigma\colon
\alpha \to \gamma$ of $\matchcat{\cat C}{\alpha}$ to the object
$G\sigma\colon G\alpha \to G\gamma$ of $\matchcat{\cat
D}{G\alpha}$. If $\beta$ is an object of $\cat D$ and
$\sigma\colon G\alpha \to \beta$ is a map in $\inv{\cat D}$, then
the category $\invfact{\cat C}{\alpha}{\sigma}$ of inverse $\cat
C$-factorizations of $(\alpha, \sigma)$ (see
Definition~\ref{def:CFactors}) is the category $\overcat{G_{*}}{\sigma}$
of objects of $\matchcat{\cat C}{\alpha}$ over $\sigma$.
\item If $G$ takes every non-identity map $\gamma \to \alpha$ in
$\drc{\cat C}$ to a non-identity map in $\drc{\cat D}$, then there
is an induced functor of latching categories
\begin{displaymath}
G_{*}\colon \latchcat{\cat C}{\alpha}
\to \latchcat{\cat D}{G\alpha}
\end{displaymath}
(see Definition~\ref{def:Matchobj}) that takes the object $\sigma\colon
\gamma \to \alpha$ of $\latchcat{\cat C}{\alpha}$ to the object
$G\sigma\colon G\gamma \to G\alpha$ of $\latchcat{\cat
D}{G\alpha}$. If $\beta$ is an object of $\cat D$ and
$\sigma\colon \beta \to G\alpha$ is a map in $\drc{\cat D}$, then
the category $\drcfact{\cat C}{\alpha}{\sigma}$ of direct $\cat
C$-factorizations of $(\alpha, \sigma)$ is the category
$\undercat{G_{*}}{\sigma}$ of objects of $\latchcat{\cat
C}{\alpha}$ under $\sigma$.
\end{enumerate}
\end{prop}
\begin{proof}
This is identical to the proof of Proposition~\ref{prop:CategoryOfFacts},
except that the requirement that certain non-identity maps go to
non-identity maps ensures (in part~1) that the functor $G_{*}\colon
\matchcat{\cat C}{\alpha} \to \undercat{\inv{\cat D}}{G\alpha}$
factors through the subcategory $\matchcat{\cat D}{G\alpha}$ of
$\undercat{\inv{\cat D}}{G\alpha}$ and (in part~2) that the functor
$G_{*}\colon \latchcat{\cat C}{\alpha} \to \overcat{\drc{\cat
D}}{G\alpha}$ factors through the subcategory $\latchcat{\cat
D}{G\alpha}$ of $\overcat{\drc{\cat D}}{G\alpha}$.
\end{proof}
The following is the main definition of this section; it is used in
the statements of our main theorems (Theorem~\ref{thm:GoodisGood} and
Theorem~\ref{thm:MainCofibering}).
\begin{defn}
\label{def:goodsub}
Let $G\colon \cat C \to \cat D$ be a Reedy functor between Reedy
categories.
\begin{enumerate}
\item The Reedy functor $G$ is a \emph{fibering Reedy functor} if
for every object $\alpha$ in $\cat C$, every object $\beta$ in
$\cat D$, and every map $\sigma\colon G\alpha \to \beta$ in
    $\inv{\cat D}$, the nerve of $\invfact{\cat C}{\alpha}{\sigma}$,
    the category of inverse $\cat C$-factorizations of $(\alpha,
    \sigma)$ (see Definition~\ref{def:CFactors}), is either empty or
    connected.
If $\cat C$ is a Reedy subcategory of $\cat D$ and if the
inclusion is a fibering Reedy functor, then we will call $\cat C$
a \emph{fibering Reedy subcategory} of $\cat D$.
\item The Reedy functor $G$ is a \emph{cofibering Reedy functor} if
for every object $\alpha$ in $\cat C$, every object $\beta$ in
$\cat D$, and every map $\sigma\colon \beta \to G\alpha$ in
    $\drc{\cat D}$, the nerve of $\drcfact{\cat C}{\alpha}{\sigma}$,
    the category of direct $\cat C$-factorizations of $(\alpha,
    \sigma)$ (see Definition~\ref{def:CFactors}), is either empty or
    connected.
If $\cat C$ is a Reedy subcategory of $\cat D$ and if the
inclusion is a cofibering Reedy functor, then we will call $\cat
C$ a \emph{cofibering Reedy subcategory} of $\cat D$.
\end{enumerate}
\end{defn}
Examples of fibering Reedy functors and of cofibering Reedy functors
(and of Reedy functors that are not fibering and Reedy functors that
are not cofibering) are given in Section~\ref{sec:Examples}.
\subsection{Opposites}
\label{sec:Opposites}
The results in this section will be used in the proof of
Theorem~\ref{thm:MainCofibering}, which can be found in
Section~\ref{sec:PrfCofibering}.
\begin{prop}
\label{prop:OpReedy}
If $\cat C$ is a Reedy category, then the opposite category $\cat
C^{\mathrm{op}}$ is a Reedy category in which $\drc{\cat C^{\mathrm{op}}} = (\inv{\cat
C})^{\mathrm{op}}$ and $\inv{\cat C^{\mathrm{op}}} = (\drc{\cat C})^{\mathrm{op}}$.
\end{prop}
\begin{proof}
A degree function for $\cat C$ will serve as a degree function for
$\cat C^{\mathrm{op}}$, and factorizations $\sigma = \tau\mu$ in $\cat C$ with
$\mu \in \inv{\cat C}$ and $\tau \in \drc{\cat C}$ correspond to
factorizations $\sigma^{\mathrm{op}} = \mu^{\mathrm{op}} \tau^{\mathrm{op}}$ in $\cat C^{\mathrm{op}}$ with
$\mu^{\mathrm{op}} \in (\inv{\cat C})^{\mathrm{op}} = \drc{\cat C^{\mathrm{op}}}$ and $\tau^{\mathrm{op}} \in
(\drc{\cat C})^{\mathrm{op}} = \inv{\cat C^{\mathrm{op}}}$.
\end{proof}
\begin{prop}
\label{prop:OpReedyFunc}
If $\cat C$ and $\cat D$ are Reedy categories, then a functor
$G\colon \cat C \to \cat D$ is a Reedy functor if and only if its
opposite $G^{\mathrm{op}}\colon \cat C^{\mathrm{op}} \to \cat D^{\mathrm{op}}$ is a Reedy functor.
\end{prop}
\begin{proof}
This follows from Proposition~\ref{prop:OpReedy}.
\end{proof}
\begin{lem}
\label{lem:FactOpFact}
Let $G\colon \cat C \to \cat D$ be a Reedy functor between Reedy
categories, let $\alpha$ be an object of $\cat C$, and let $\beta$
be an object of $\cat D$.
\begin{enumerate}
\item If $\sigma\colon G\alpha \to \beta$ is a map in $\inv{\cat
D}$, then the opposite of the category of inverse $\cat
C$-factorizations of $(\alpha, \sigma)$ is the category of direct
$\cat C^{\mathrm{op}}$-factorizations of $(\alpha, \sigma^{\mathrm{op}}\colon \beta \to
G\alpha)$ in $\drc{\cat D^{\mathrm{op}}}$.
\item If $\sigma\colon \beta \to G\alpha$ is a map in $\drc{\cat
D}$, then the opposite of the category of direct $\cat
C$-factorizations of $(\alpha, \sigma)$ is the category of inverse
$\cat C^{\mathrm{op}}$-factorizations of $(\alpha, \sigma^{\mathrm{op}}\colon G\alpha
\to \beta)$ in $\inv{\cat D^{\mathrm{op}}}$.
\end{enumerate}
\end{lem}
\begin{proof}
We will prove part~(1); part~(2) will then follow from applying
part~(1) to $\sigma^{\mathrm{op}}\colon G\alpha \to \beta$ in $\cat C^{\mathrm{op}}$ and
remembering that $(\cat C^{\mathrm{op}})^{\mathrm{op}} = \cat C$ and $(\cat D^{\mathrm{op}})^{\mathrm{op}} =
\cat D$.
Let $\sigma\colon G\alpha \to \beta$ be a map in $\inv{\cat D}$.
Recall from Definition~\ref{def:CFactors} that
\begin{itemize}
\item an object of the category of inverse $\cat C$-factorizations
of $(\alpha, \sigma\colon G\alpha \to \beta)$ is a pair
\begin{displaymath}
\bigl((\nu\colon \alpha \to \gamma),
(\mu\colon G\gamma\to \beta)\bigr)
\end{displaymath}
consisting of a non-identity map $\nu\colon \alpha \to \gamma$
in $\inv{\cat C}$ and a map $\mu\colon G\gamma \to \beta$ in
$\inv{\cat D}$ such that the composition $G\alpha
\xrightarrow{G\nu} G\gamma \xrightarrow{\mu} \beta$ equals
$\sigma$, and
\item a map from $\bigl((\nu\colon \alpha \to \gamma), (\mu\colon
G\gamma\to \beta)\bigr)$ to $\bigl((\nu'\colon \alpha \to
\gamma'), (\mu'\colon G\gamma'\to \beta)\bigr)$ is a map
$\tau\colon \gamma \to \gamma'$ in $\inv{\cat C}$ such that the
triangles
    \begin{displaymath}
      \vcenter{
        \xymatrix@=.6em{
          &{\alpha} \ar[dl]_{{\nu}} \ar[dr]^{\nu'}\\
          {\gamma} \ar[rr]_{\tau}
          && {\gamma'}
        }}
      \qquad\text{and}\qquad
      \vcenter{
        \xymatrix@=.6em{
          {G\gamma} \ar[rr]^{G\tau} \ar[dr]_{\mu}
          && {G\gamma'} \ar[dl]^{\mu'}\\
          & {\beta}
        }}
    \end{displaymath}
commute.
\end{itemize}
The opposite of this category has the same objects, but
\begin{itemize}
\item a non-identity map $\nu\colon \alpha \to \gamma$ in $\inv{\cat
C}$ is equivalently a non-identity map $\nu^{\mathrm{op}}\colon \gamma \to
\alpha$ in $(\inv{\cat C})^{\mathrm{op}} = \drc{\cat C^{\mathrm{op}}}$, and
\item a factorization $G\alpha \xrightarrow{G\nu} G\gamma
\xrightarrow{\mu} \beta$ of $\sigma$ such that $\mu \in \inv{\cat
D}$ is equivalently a factorization $\beta \xrightarrow{\mu^{\mathrm{op}}}
G\gamma \xrightarrow{G\nu^{\mathrm{op}}} G\alpha$ of $\sigma^{\mathrm{op}}\colon \beta
\to G\alpha$ in $(\inv{\cat D})^{\mathrm{op}} = \drc{\cat D^{\mathrm{op}}}$
\end{itemize}
Thus, the opposite category can be described as the category in
which
\begin{itemize}
  \item an object is a pair
\begin{displaymath}
\bigl((\nu^{\mathrm{op}}\colon \gamma \to \alpha),
(\mu^{\mathrm{op}}\colon \beta \to G\gamma)\bigr)
\end{displaymath}
consisting of a non-identity map $\nu^{\mathrm{op}}\colon \gamma \to \alpha$
in $(\inv{\cat C})^{\mathrm{op}} = \drc{\cat C^{\mathrm{op}}}$ and a map $\mu^{\mathrm{op}}\colon
\beta \to G\gamma$ in $(\inv{\cat D})^{\mathrm{op}} = \drc{\cat D^{\mathrm{op}}}$ such
that the composition $\beta \xrightarrow{\mu^{\mathrm{op}}} G\gamma
\xrightarrow{G\nu^{\mathrm{op}}} G\alpha$ equals $\sigma^{\mathrm{op}}$, and
\item a map from $\bigl((\nu^{\mathrm{op}}\colon \gamma \to \alpha),
(\mu^{\mathrm{op}}\colon \beta \to G\gamma)\bigr)$ to $\bigl(((\nu')^{\mathrm{op}}\colon
\gamma' \to \alpha), ((\mu')^{\mathrm{op}}\colon \beta \to G\gamma')\bigr)$
is a map $\tau^{\mathrm{op}}\colon \gamma' \to \gamma$ in $(\inv{\cat C})^{\mathrm{op}}
= \drc{\cat C^{\mathrm{op}}}$ such that the triangles
    \begin{displaymath}
      \vcenter{
        \xymatrix@=.6em{
          & {\alpha}\\
          {\gamma} \ar[ur]^{\nu^{\mathrm{op}}}
          && {\gamma'} \ar[ll]^{\tau^{\mathrm{op}}} \ar[ul]_{(\nu')^{\mathrm{op}}}
        }}
      \qquad\text{and}\qquad
      \vcenter{
        \xymatrix@=.6em{
          {G\gamma}
          && {G\gamma'} \ar[ll]_{G\tau^{\mathrm{op}}}\\
          & {\beta} \ar[ul]^{\mu^{\mathrm{op}}} \ar[ur]_{(\mu')^{\mathrm{op}}}
        }}
    \end{displaymath}
commute.
\end{itemize}
This is exactly the category of direct $\cat C^{\mathrm{op}}$-factorizations of
$(\alpha, \sigma^{\mathrm{op}}\colon \beta \to G\alpha)$ in $\drc{\cat D^{\mathrm{op}}}$.
\end{proof}
\begin{prop}
\label{prop:OpFiberingCofibering}
If $G\colon \cat C \to \cat D$ is a Reedy functor between Reedy
categories, then $G$ is a fibering Reedy functor if and only if
$G^{\mathrm{op}}\colon \cat C^{\mathrm{op}} \to \cat D^{\mathrm{op}}$ is a cofibering Reedy functor.
\end{prop}
\begin{proof}
Since the nerve of a category is empty or connected if and only if
the nerve of the opposite category is, respectively, empty or
connected, this follows from Lemma~\ref{lem:FactOpFact}.
\end{proof}
\begin{lem}
\label{lem:OpMatch}
Let $\diag X$ be a $\cat C$-diagram in $\cat M$ (which can also be
viewed as a $\cat C^{\mathrm{op}}$-diagram in $\cat M^{\mathrm{op}}$), and let $\alpha$ be
an object of $\cat C$.
\begin{enumerate}
  \item The latching object $\mathrm{L}^{\cat C}_{\alpha}$ of $\diag X$ as
    a $\cat C$-diagram in $\cat M$ at $\alpha$ is the matching object
    $\mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha}$ of $\diag X$ as a $\cat
    C^{\mathrm{op}}$-diagram in $\cat M^{\mathrm{op}}$ at $\alpha$, and the opposite of the
    latching map $\mathrm{L}^{\cat C}_{\alpha}\diag X \to \diag X_{\alpha}$ of
    $\diag X$ as a $\cat C$-diagram in $\cat M$ at $\alpha$ is the
    matching map $\diag X_{\alpha} \to \mathrm{L}^{\cat C}_{\alpha}\diag X =
    \mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha}\diag X$ of $\diag X$ as a $\cat
    C^{\mathrm{op}}$-diagram in $\cat M^{\mathrm{op}}$ at $\alpha$.
  \item The matching object $\mathrm{M}^{\cat C}_{\alpha}$ of $\diag X$ as
    a $\cat C$-diagram in $\cat M$ at $\alpha$ is the latching object
    $\mathrm{L}^{\cat C^{\mathrm{op}}}_{\alpha}$ of $\diag X$ as a $\cat
    C^{\mathrm{op}}$-diagram in $\cat M^{\mathrm{op}}$ at $\alpha$, and the opposite of the
    matching map $\diag X_{\alpha} \to \mathrm{M}^{\cat C}_{\alpha}\diag X$ of
    $\diag X$ as a $\cat C$-diagram in $\cat M$ at $\alpha$ is the
    latching map $\mathrm{L}^{\cat C^{\mathrm{op}}}_{\alpha}\diag X = \mathrm{M}^{\cat
    C}_{\alpha}\diag X \to \diag X_{\alpha}$ of $\diag X$ as a $\cat
    C^{\mathrm{op}}$-diagram in $\cat M^{\mathrm{op}}$ at $\alpha$.
\end{enumerate}
\end{lem}
\begin{proof}
We will prove part~1; part~2 then follows by applying part~1 to the
$\cat C^{\mathrm{op}}$-diagram $\diag X$ in $\cat M^{\mathrm{op}}$ and remembering that
$(\cat C^{\mathrm{op}})^{\mathrm{op}} = \cat C$ and $(\cat M^{\mathrm{op}})^{\mathrm{op}} = \cat M$.
The latching object $\mathrm{L}^{\cat C}_{\alpha} \diag X$ of $\diag X$
at $\alpha$ is the colimit of the diagram in $\cat M$ with an object
$\diag X_{\beta}$ for every non-identity map $\sigma\colon \beta \to
\alpha$ in $\drc{\cat C}$ and a map $\mu_{*}\colon \diag X_{\beta}
\to \diag X_{\gamma}$ for every commutative triangle
  \begin{displaymath}
    \xymatrix@=.6em{
      & {\alpha}\\
      {\beta} \ar[rr]_{\mu} \ar[ur]^-{\sigma}
      && {\gamma} \ar[ul]_-{\tau}
    }
  \end{displaymath}
in $\drc{\cat C}$ in which $\sigma$ and $\tau$ are non-identity
maps. Thus, $\mathrm{L}^{\cat C}_{\alpha}\diag X$ can also be described
as the limit of the diagram in $\cat M^{\mathrm{op}}$ with one object $\diag
X_{\beta}$ for every non-identity map $\sigma^{\mathrm{op}}\colon \alpha \to
\beta$ in $(\drc{\cat C})^{\mathrm{op}} = \inv{\cat C^{\mathrm{op}}}$ and a map
$(\mu^{\mathrm{op}})_{*}\colon \diag X_{\gamma} \to \diag X_{\beta}$ for every
commutative triangle
  \begin{displaymath}
    \xymatrix@=.6em{
      & {\alpha} \ar[dl]_{\sigma^{\mathrm{op}}} \ar[dr]^{\tau^{\mathrm{op}}}\\
      {\beta}
      && {\gamma} \ar[ll]^{\mu^{\mathrm{op}}}
    }
  \end{displaymath}
in $(\drc{\cat C})^{\mathrm{op}} = \inv{\cat C^{\mathrm{op}}}$ in which $\sigma^{\mathrm{op}}$ and
$\tau^{\mathrm{op}}$ are non-identity maps. Thus, $\mathrm{L}^{\cat C}_{\alpha}
\diag X = \mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha} \diag X$.
The latching map $\mathrm{L}^{\cat C}_{\alpha}\diag X \to \diag
X_{\alpha}$ is the unique map in $\cat M$ such that for every
non-identity map $\sigma\colon \beta \to \alpha$ in $\drc{\cat C}$
the triangle
  \begin{displaymath}
    \xymatrix@=.8em{
      & {\diag X_{\alpha}}\\
      {\diag X_{\beta}} \ar[r] \ar[ur]^{\sigma_{*}}
      & {\mathrm{L}^{\cat C}_{\alpha}\diag X} \ar[u]
    }
  \end{displaymath}
commutes, and so the opposite of the latching map is the unique map
$\diag X_{\alpha} \to \mathrm{L}^{\cat C}_{\alpha}\diag X = \mathrm{M}^{\cat
C^{\mathrm{op}}}_{\alpha}\diag X$ in $\cat M^{\mathrm{op}}$ such that for every
non-identity map $\sigma^{\mathrm{op}}\colon \alpha \to \beta$ in $(\drc{\cat
C})^{\mathrm{op}} = \inv{\cat C^{\mathrm{op}}}$ the triangle
  \begin{displaymath}
    \xymatrix@=.8em{
      & {\diag X_{\alpha}} \ar[d] \ar[dl]_{(\sigma^{\mathrm{op}})_{*}}\\
      {\diag X_{\beta}}
      & {\mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha}\diag X} \ar[l]
    }
  \end{displaymath}
commutes, i.e., the opposite of the latching map of $\diag X$ at
$\alpha$ in $\cat C$ is the matching map of $\diag X$ at $\alpha$ in
$\cat C^{\mathrm{op}}$.
\end{proof}
\begin{lem}
\label{lem:OpRelMatch}
Let $f\colon \diag X \to \diag Y$ be a map of $\cat C$-diagrams in
$\cat M$ and let $\alpha$ be an object of $\cat C$.
\begin{enumerate}
\item The opposite of the relative latching map (see
Definition~\ref{def:Matchobj}) of $f$ at $\alpha$ is the relative matching
map of the map $f^{\mathrm{op}}\colon \diag Y \to \diag X$ of $\cat
C^{\mathrm{op}}$-diagrams in $\cat M^{\mathrm{op}}$ at $\alpha$.
\item The opposite of the relative matching map (see
Definition~\ref{def:Matchobj}) of $f$ at $\alpha$ is the relative latching
map of the map $f^{\mathrm{op}}\colon \diag Y \to \diag X$ of $\cat
C^{\mathrm{op}}$-diagrams in $\cat M^{\mathrm{op}}$ at $\alpha$.
\end{enumerate}
\end{lem}
\begin{proof}
We will prove part~(1); part~(2) then follows by applying part~(1) to the
map of $\cat C^{\mathrm{op}}$-diagrams $f^{\mathrm{op}}\colon \diag Y \to \diag X$ in
$\cat M^{\mathrm{op}}$ and remembering that $(\cat C^{\mathrm{op}})^{\mathrm{op}} = \cat C$ and
$(\cat M^{\mathrm{op}})^{\mathrm{op}} = \cat M$.
If $P = \pushout{\diag X_{\alpha}}{\mathrm{L}^{\cat C}_{\alpha}\diag
X}{\mathrm{L}^{\cat C}_{\alpha}\diag Y}$, then the relative latching
map is the unique map $P \to \diag Y_{\alpha}$ that makes the diagram
  \begin{displaymath}
    \xymatrix@=.8em{
      {\mathrm{L}^{\cat C}_{\alpha}\diag X} \ar[rr] \ar[dd]
      && {\mathrm{L}^{\cat C}_{\alpha}\diag Y} \ar[dd] \ar@{..>}[dl]\\
      & {P} \ar[dr]\\
      {\diag X_{\alpha}} \ar[rr] \ar@{..>}[ur]
      && {\diag Y_{\alpha}}
    }
  \end{displaymath}
commute. The opposite of that diagram is the diagram
  \begin{displaymath}
    \xymatrix@=.8em{
      {\mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha}\diag X}
      && {\mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha}\diag Y} \ar[ll]\\
      & {P} \ar@{..>}[ur] \ar@{..>}[dl]\\
      {\diag X_{\alpha}} \ar[uu]
      && {\diag Y_{\alpha}} \ar[uu] \ar[ll] \ar[ul]
    }
  \end{displaymath}
in $\cat M^{\mathrm{op}}$ (see Lemma~\ref{lem:OpMatch}), in which $P =
\pullback{\diag X_{\alpha}}{\mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha}\diag
X}{\mathrm{M}^{\cat C^{\mathrm{op}}}_{\alpha}\diag Y}$, and the opposite of the
relative latching map is the unique map in $\cat M^{\mathrm{op}}$ that makes
this diagram commute, i.e., it is the relative matching map.
\end{proof}
\begin{prop}
\label{prop:OppositeReedy}
If $\cat M$ is a model category and $\cat C$ is a Reedy category,
then the opposite $(\cat M^{\cat C})^{\mathrm{op}}$ of the Reedy model category
$\cat M^{\cat C}$ (see Definition~\ref{def:DiagCat}) is naturally isomorphic
as a model category to the Reedy model category $(\cat M^{\mathrm{op}})^{\cat
C^{\mathrm{op}}}$.
\end{prop}
\begin{proof}
The opposite $(\cat M^{\cat C})^{\mathrm{op}}$ of $\cat M^{\cat C}$ is a model
category in which
\begin{itemize}
\item the cofibrations of $(\cat M^{\cat C})^{\mathrm{op}}$ are the opposites
of the fibrations of $\cat M^{\cat C}$,
\item the fibrations of $(\cat M^{\cat C})^{\mathrm{op}}$ are the opposites of
the cofibrations of $\cat M^{\cat C}$, and
\item the weak equivalences of $(\cat M^{\cat C})^{\mathrm{op}}$ are the
opposites of the weak equivalences of $\cat M^{\cat C}$.
\end{itemize}
Proposition~\ref{prop:OpReedy} implies that we have a Reedy model category
structure on $(\cat M^{\mathrm{op}})^{\cat C^{\mathrm{op}}}$. The objects and maps of
  $(\cat M^{\cat C})^{\mathrm{op}}$ coincide with those of $(\cat M^{\mathrm{op}})^{\cat
  C^{\mathrm{op}}}$, and so we need only show that the model category
structures coincide. This follows because the opposites of the
objectwise weak equivalences of $\cat M^{\cat C}$ are the objectwise
weak equivalences of $(\cat M^{\mathrm{op}})^{\cat C^{\mathrm{op}}}$, and
Lemma~\ref{lem:OpRelMatch} implies that the opposites of the
cofibrations of $\cat M^{\cat C}$ are the fibrations of $(\cat
M^{\mathrm{op}})^{\cat C^{\mathrm{op}}}$ and that the opposites of the fibrations of $\cat
M^{\cat C}$ are the cofibrations of $(\cat M^{\mathrm{op}})^{\cat C^{\mathrm{op}}}$ (see
Theorem~\ref{thm:RFib}).
\end{proof}
\subsection{Quillen functors}
\label{sec:QuillenFunc}
\begin{defn}
\label{def:QuilFunc}
Let $\cat M$ and $\cat N$ be model categories and let $\adj{G}{\cat
M}{\cat N}{U}$ be a pair of adjoint functors. The functor $G$ is
a \emph{left Quillen functor} and the functor $U$ is a \emph{right
Quillen functor} if
\begin{itemize}
\item the left adjoint $G$ preserves both cofibrations and trivial
cofibrations, and
\item the right adjoint $U$ preserves both fibrations and trivial
fibrations.
\end{itemize}
\end{defn}
\begin{prop}
\label{prop:QuilFunc}
If $\cat M$ and $\cat N$ are model categories and $\adj{G}{\cat
M}{\cat N}{U}$ is a pair of adjoint functors, then the following
are equivalent:
\begin{enumerate}
\item The left adjoint $G$ is a left Quillen functor and the right
adjoint $U$ is a right Quillen functor.
\item The left adjoint $G$ preserves both cofibrations and trivial
cofibrations.
\item The right adjoint $U$ preserves both fibrations and trivial
fibrations.
\end{enumerate}
\end{prop}
\begin{proof}
This is \cite{MCATL}*{Prop.~8.5.3}.
\end{proof}
\begin{prop}
\label{prop:QuillenNice}
Let $\cat M$ and $\cat N$ be model categories and let $\adj{G}{\cat
M}{\cat N}{U}$ be a pair of adjoint functors.
\begin{enumerate}
\item If $G$ is a left Quillen functor, then $G$ takes cofibrant
objects of $\cat M$ to cofibrant objects of $\cat N$ and takes
weak equivalences between cofibrant objects in $\cat M$ to weak
equivalences between cofibrant objects of $\cat N$.
\item If $U$ is a right Quillen functor, then $U$ takes fibrant
objects of $\cat N$ to fibrant objects of $\cat M$ and takes weak
equivalences between fibrant objects in $\cat N$ to weak
equivalences between fibrant objects of $\cat M$.
\end{enumerate}
\end{prop}
\begin{proof}
Since left adjoints take initial objects to initial objects, if the
left adjoint $G$ takes cofibrations to cofibrations then it takes
cofibrant objects to cofibrant objects. The statement about weak
equivalences follows from \cite{MCATL}*{Cor.~7.7.2}.
Dually, since right adjoints take terminal objects to terminal
objects, if the right adjoint $U$ takes fibrations to fibrations
then it takes fibrant objects to fibrant objects. The statement
about weak equivalences follows from \cite{MCATL}*{Cor.~7.7.2}.
\end{proof}
\begin{prop}
\label{prop:OpQuillen}
A functor between model categories $G\colon \cat M \to \cat N$ is a
left Quillen functor if and only if its opposite $G^{\mathrm{op}}\colon \cat
M^{\mathrm{op}} \to \cat N^{\mathrm{op}}$ is a right Quillen functor.
\end{prop}
\begin{proof}
This follows because the cofibrations and trivial cofibrations of
$\cat M^{\mathrm{op}}$ are the opposites of the fibrations and trivial
fibrations, respectively, of $\cat M$ and the fibrations and trivial
fibrations of $\cat M^{\mathrm{op}}$ are the opposites of the cofibrations and
trivial cofibrations, respectively, of $\cat M$ (with a similar
statement for $\cat N$).
\end{proof}
\subsection{Cofinality}
\label{sec:Cofinality}
\begin{defn}
\label{def:cofinal}
Let $\cat A$ and $\cat B$ be small categories and let $G\colon \cat
A \to \cat B$ be a functor.
\begin{itemize}
\item The functor $G$ is \emph{left cofinal} (or \emph{initial}) if
for every object $\alpha$ of $\cat B$ the nerve
$\mathrm{N}\overcat{G}{\alpha}$ of the overcategory $\overcat{G}{\alpha}$
is non-empty and connected. If in addition $G$ is the inclusion
of a subcategory, then we will say that $\cat A$ is a \emph{left
cofinal subcategory} (or \emph{initial subcategory}) of $\cat B$.
\item The functor $G$ is \emph{right cofinal} (or \emph{terminal})
if for every object $\alpha$ of $\cat B$ the nerve
$\mathrm{N}\undercat{G}{\alpha}$ of the undercategory
$\undercat{G}{\alpha}$ is non-empty and connected. If in addition
$G$ is the inclusion of a subcategory, then we will say that $\cat
A$ is a \emph{right cofinal subcategory} (or \emph{terminal
subcategory}) of $\cat B$.
\end{itemize}
\end{defn}
For the proof of the following, see \cite{MCATL}*{Thm.~14.2.5}.
\begin{thm}
\label{thm:CofinalIso}
Let $\cat A$ and $\cat B$ be small categories and let $G\colon \cat
A \to \cat B$ be a functor.
\begin{enumerate}
\item The functor $G$ is left cofinal if and only if for every
complete category $\cat M$ (i.e., every category in which all small
limits exist) and every diagram $\diag X\colon \cat B \to \cat M$
the natural map $\lim_{\cat B}\diag X \to \lim_{\cat A} G^{*}\diag
X$ is an isomorphism.
\item The functor $G$ is right cofinal if and only if for every
cocomplete category $\cat M$ (i.e., every category in which all
small colimits exist) and every diagram $\diag X\colon \cat B \to
\cat M$ the natural map $\colim_{\cat A} G^{*}\diag X \to
\colim_{\cat B}\diag X$ is an isomorphism.
\end{enumerate}
\end{thm}
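For example, if $\beta$ is an initial object of $\cat B$ and $G\colon
\{\beta\} \to \cat B$ is the inclusion of the full subcategory with
the single object $\beta$, then for every object $\alpha$ of $\cat B$
the overcategory $\overcat{G}{\alpha}$ has exactly one object (the
unique map $\beta \to \alpha$), and so its nerve is non-empty and
connected and $G$ is left cofinal; Theorem~\ref{thm:CofinalIso} then
recovers the familiar isomorphism
\begin{displaymath}
  \lim_{\cat B} \diag X \cong \diag X_{\beta}
\end{displaymath}
for every diagram $\diag X\colon \cat B \to \cat M$ in a complete
category $\cat M$. Dually, if $\beta$ is a terminal object of $\cat
B$, then $G$ is right cofinal and $\colim_{\cat B} \diag X \cong
\diag X_{\beta}$.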
\section{Examples}
\label{sec:Examples}
In this section, we present various examples to illustrate
Theorem~\ref{thm:GoodisGood} and Theorem~\ref{thm:MainCofibering}.
\subsection{A Reedy functor that is not fibering}
\label{sec:notfibering}
The following is an example of a Reedy subcategory that is not
fibering.
\begin{ex}
\label{ex:NotGood}
Let $\cat D$ be the category
\begin{displaymath}
\vcenter{
\xymatrix@=1ex{
& {\alpha} \ar[dl]_{p} \ar[dr]^{r}\\
{\gamma} \ar[dr]_{q}
&& {\delta} \ar[dl]^{s}\\
  & {\beta}
  }}
  \qquad
  \text{in which $qp = sr$.}
\end{displaymath}
\begin{itemize}
\item Let $\alpha$ be of degree $2$,
\item let $\gamma$ and $\delta$ be of degree $1$, and
\item let $\beta$ be of degree $0$.
\end{itemize}
$\cat D$ is then a Reedy category in which $\inv{\cat D} = \cat D$
and $\drc{\cat D}$ has only identity maps.
Let $\cat C$ be the full subcategory of $\cat D$ on the objects
$\{\alpha, \gamma, \delta\}$, and let $\cat C$ have the structure of
a Reedy category that makes it a Reedy subcategory of $\cat D$.
Although $\cat C$ is a Reedy subcategory of $\cat D$, it is not a
fibering Reedy subcategory: the map $qp\colon \alpha \to \beta$ in
$\inv{\cat D}$ has exactly two factorizations whose first map is a
non-identity map in $\inv{\cat C}$ and whose second map is in
$\inv{\cat D}$, namely $q\circ p$ and $s\circ r$, and neither of
those factorizations maps to the other; thus the nerve of the
category of such factorizations is non-empty but not connected.
Theorem~\ref{thm:GoodisGood} thus implies that there is a model category
$\cat M$ such that the restriction functor $\cat M^{\cat D} \to \cat
M^{\cat C}$ is not a right Quillen functor.
\end{ex}
\subsection{A Reedy functor that is not cofibering}
\label{sec:notcofibering}
Proposition~\ref{prop:OpFiberingCofibering} implies that the opposite of
Example~\ref{ex:NotGood} is a Reedy subcategory that is not cofibering.
\subsection{Truncations}
\begin{prop}
\label{prop:TruncFib}
If $\cat C$ is a Reedy category and $n \ge 0$, then the inclusion
functor $G\colon \cat C^{\le n} \to \cat C$ (see
Definition~\ref{def:filtration}) is both a fibering Reedy functor and a
cofibering Reedy functor.
\end{prop}
\begin{proof}
We will prove that the inclusion is a fibering Reedy functor; the
proof that it is a cofibering Reedy functor is similar.
If $\degree(\alpha) \le n$, then the inclusion functor $G\colon \cat
C^{\le n} \to \cat C$ induces an isomorphism of undercategories
$G_{*}\colon \undercat{\inv{\cat C^{\le n}}}{\alpha} \to
\undercat{\inv{\cat C}}{\alpha}$. Let $\sigma\colon \alpha \to
\beta$ be a map in $\inv{\cat C}$. If $\sigma$ is the identity map,
then the category of inverse $\cat C$-factorizations of $\sigma$ is
empty; if $\sigma$ is not an identity map, then the object
$\bigl((\sigma\colon \alpha \to \beta), 1_{\beta}\bigr)$ is a
terminal object of the category of inverse $\cat C$-factorizations
of $\sigma$, and so the nerve of the category of inverse $\cat
C$-factorizations of $\sigma$ is connected. Thus, $G$ is fibering.
\end{proof}
\begin{prop}
\label{prop:LRQuil}
If $\cat M$ is a model category, $\cat C$ is a Reedy category, and
$n \ge 0$, then the restriction functor $\cat M^{\cat C} \to \cat
M^{\cat C^{\le n}}$ (see Definition~\ref{def:filtration}) is both a left
Quillen functor and a right Quillen functor.
\end{prop}
\begin{proof}
This follows from Proposition~\ref{prop:TruncFib}, Theorem~\ref{thm:GoodisGood},
and Theorem~\ref{thm:MainCofibering}.
\end{proof}
Proposition~\ref{prop:LRQuil} extends to products of Reedy categories as
follows.
\begin{prop}
\label{prop:TruncOne}
If $\cat C$ and $\cat D$ are Reedy categories, $\cat M$ is a model
category, and $n \ge 0$, then the restriction functor $\cat M^{\cat
C \times \cat D} \to \cat M^{(\cat C^{\le n}\times \cat D)}$ (see
Definition~\ref{def:filtration}) is both a left Quillen functor and a right
Quillen functor.
\end{prop}
\begin{proof}
The category $\cat M^{\cat C\times \cat D}$ of $(\cat C\times\cat
D)$-diagrams in $\cat M$ is isomorphic as a model category to the
category $(\cat M^{\cat D})^{\cat C}$ of $\cat C$-diagrams in $\cat
M^{\cat D}$ (see \cite{MCATL}*{Thm.~15.5.2}), and so the result
follows from Proposition~\ref{prop:LRQuil}.
\end{proof}
\begin{prop}
\label{prop:TruncAll}
If $\cat M$ is a model category, $m$ is a positive integer, and for
$1 \le i \le m$ we have a Reedy category $\cat C_{i}$ and a
nonnegative integer $n_{i}$, then the restriction functor
\begin{displaymath}
    \cat M^{\cat C_{1}\times \cat C_{2}\times \cdots \times \cat C_{m}}
    \longrightarrow
    \cat M^{\cat C_{1}^{\le n_{1}}\times \cat C_{2}^{\le n_{2}}\times
      \cdots \times \cat C_{m}^{\le n_{m}}}
\end{displaymath}
(see Definition~\ref{def:filtration}) is both a left Quillen functor and a
right Quillen functor.
\end{prop}
\begin{proof}
The restriction functor is the composition of the restriction
functors
\begin{multline*}
    \cat M^{\cat C_{1}\times \cat C_{2}\times \cdots \times \cat C_{m}}
    \longrightarrow
    \cat M^{\cat C_{1}^{\le n_{1}}\times \cat C_{2}\times \cdots \times
      \cat C_{m}}\\
    \longrightarrow
    \cat M^{\cat C_{1}^{\le n_{1}}\times \cat C_{2}^{\le n_{2}}\times
      \cdots \times \cat C_{m}} \longrightarrow \cdots \longrightarrow
    \cat M^{\cat C_{1}^{\le n_{1}}\times \cat C_{2}^{\le n_{2}}\times
      \cdots \times \cat C_{m}^{\le n_{m}}}
\end{multline*}
and so the result follows from Proposition~\ref{prop:TruncOne}.
\end{proof}
\subsection{Skeleta}
\label{sec:skeleta}
\begin{defn}
\label{def:skeleton}
Let $\cat C$ be a Reedy category, let $n \ge 0$, and let $\cat M$ be
a model category.
\begin{enumerate}
\item Since $\cat M$ is cocomplete, the restriction functor $\cat
M^{\cat C} \to \cat M^{\cat C^{\le n}}$ has a left adjoint
$\mathbf{L}\colon \cat M^{\cat C^{\le n}} \to \cat M^{\cat C}$ (see
\cite{borceux-I}*{Thm.~3.7.2}), and we define the
\emph{$n$-skeleton functor} $\skel{n}\colon \cat M^{\cat C} \to
\cat M^{\cat C}$ to be the composition
\begin{displaymath}
\xymatrix@C=5em{
{\cat M^{\cat C}} \ar[r]^-{\text{restriction}}
& {\cat M^{\cat C^{\le n}}} \ar[r]^-{\mathbf{L}}
& {\cat M^{\cat C} \rlap{\enspace .}}
}
\end{displaymath}
\item Since $\cat M$ is complete, the restriction functor $\cat
M^{\cat C} \to \cat M^{\cat C^{\le n}}$ has a right adjoint
$\mathbf{R}\colon \cat M^{\cat C^{\le n}} \to \cat M^{\cat C}$ (see
\cite{borceux-I}*{Thm.~3.7.2}), and we define the
\emph{$n$-coskeleton functor} $\coskel{n}\colon \cat M^{\cat C} \to
\cat M^{\cat C}$ to be the composition
\begin{displaymath}
\xymatrix@C=5em{
{\cat M^{\cat C}} \ar[r]^-{\text{restriction}}
& {\cat M^{\cat C^{\le n}}} \ar[r]^-{\mathbf{R}}
& {\cat M^{\cat C} \rlap{\enspace .}}
}
\end{displaymath}
\end{enumerate}
\end{defn}
\begin{prop}
\label{prop:SkelQuil}
If $\cat C$ is a Reedy category, $n \ge 0$, and $\cat M$ is a model
category, then
\begin{enumerate}
  \item the $n$-skeleton functor $\skel{n}\colon \cat M^{\cat C} \to
    \cat M^{\cat C}$ is a left Quillen functor, and
  \item the $n$-coskeleton functor $\coskel{n}\colon \cat M^{\cat C}
    \to \cat M^{\cat C}$ is a right Quillen functor.
\end{enumerate}
\end{prop}
\begin{proof}
Since the restriction functor is a right Quillen functor (see
Proposition~\ref{prop:LRQuil}), its left adjoint is a left Quillen functor
(see Proposition~\ref{prop:QuilFunc}). Since the restriction is also a left
Quillen functor (see Proposition~\ref{prop:LRQuil}), its composition with
its left adjoint is a left Quillen functor. Similarly, the
composition of restriction with its right adjoint is a right Quillen
functor.
\end{proof}
\subsection{(Multi)cosimplicial and (multi)simplicial objects}
\label{sec:(multi)(co)simplicial}
In this section we consider simplicial and cosimplicial diagrams, as
well as their multidimensional versions, $m$-cosimplicial and
$m$-simplicial diagrams (see Definition~\ref{def:Delta}). Simplicial and
cosimplicial diagrams are standard tools in homotopy theory, while
$m$-simplicial and $m$-cosimplicial ones have seen an increase in
usage in recent years, most notably through their appearance in the
calculus of functors (see \cites{cosimplcalc, FTHoLinks}).
The important questions are whether the restrictions to various
subdiagrams of $m$-simplicial and $m$-cosimplicial diagrams are
Quillen functors (and the answer will be yes in all cases). The
subdiagrams we will look at are the restricted (co)simplicial objects,
diagonals of $m$-(co)simplicial objects, and slices of
$m$-(co)simplicial objects. These are considered in Sections
\ref{sec:Restricted}, \ref{sec:diagonal}, and \ref{sec:slice},
respectively. In particular, the fibrancy of the slices of a fibrant
$m$-dimensional cosimplicial object is needed to justify taking its
totalization one dimension at a time, as is done in both
\cite{cosimplcalc} and \cite{FTHoLinks}. This and some further
results about totalizations of $m$-cosimplicial objects will be
addressed in future work.
We begin by recalling the definitions:
\begin{defn}
\label{def:Delta}
For every nonnegative integer $n$, we let $[n]$ denote the ordered
set $(0, 1, 2, \ldots, n)$.
\begin{enumerate}
\item The \emph{cosimplicial indexing category} $\boldsymbol\Delta$ is the
category with objects the $[n]$ for $n \ge 0$ and with
$\boldsymbol\Delta\bigl([n],[k]\bigr)$ the set of weakly monotone functions $[n]
\to [k]$.
\item A \emph{cosimplicial object} in a category $\cat M$ is a
functor from $\boldsymbol\Delta$ to $\cat M$.
\item If $m$ is a positive integer, then an \emph{$m$-cosimplicial
object} in $\cat M$ is a functor from $\boldsymbol\Delta^{m}$ to $\cat M$.
  \item The \emph{simplicial indexing category} $\boldsymbol\Delta^{\mathrm{op}}$ is the opposite
    category of $\boldsymbol\Delta$.
\item A \emph{simplicial object} in a category $\cat M$ is a functor
from $\boldsymbol\Delta^{\mathrm{op}}$ to $\cat M$.
\item If $m$ is a positive integer, then an \emph{$m$-simplicial
object} in $\cat M$ is a functor from $(\boldsymbol\Delta^{m})^{\mathrm{op}} =
(\boldsymbol\Delta^{\mathrm{op}})^{m}$ to $\cat M$.
\end{enumerate}
\end{defn}
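Concretely, the maps of $\boldsymbol\Delta$ are generated under composition by the coface maps $d^{i}\colon [n-1] \to [n]$ for $0 \le i \le n$ (the injective order preserving maps whose image omits $i$) and the codegeneracy maps $s^{i}\colon [n+1] \to [n]$ for $0 \le i \le n$ (the surjective order preserving maps that take the value $i$ twice), and so a cosimplicial object in $\cat M$ amounts to a sequence of objects $\diag X^{0}, \diag X^{1}, \diag X^{2}, \ldots$ together with coface and codegeneracy operators satisfying the cosimplicial identities.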
\subsubsection{Restricted cosimplicial objects and restricted simplicial
objects}
\label{sec:Restricted}
For examples of fibering Reedy subcategories and cofibering Reedy
subcategories that include all of the objects, we consider the
restricted cosimplicial (or semi-cosimplicial) and restricted
simplicial (or semi-simplicial) indexing categories.
\begin{defn}
\label{def:DeltaRest}
For $n$ a nonnegative integer, let $[n]$ denote the ordered set
$(0, 1, 2, \ldots, n)$.
\begin{enumerate}
\item The \emph{restricted cosimplicial indexing category} $\boldsymbol\Delta_{\mathrm{rest}}$
is the category with objects the ordered sets $[n]$ for $n \ge 0$
and with $\boldsymbol\Delta_{\mathrm{rest}}\bigl([n], [k]\bigr)$ the \emph{injective} order
preserving maps $[n] \to [k]$.
The category $\boldsymbol\Delta_{\mathrm{rest}}$ is thus a subcategory of $\boldsymbol\Delta$, the
cosimplicial indexing category (see Definition~\ref{def:Delta}).
\item The \emph{restricted simplicial indexing category}
$\boldsymbol\Delta_{\mathrm{rest}}^{\mathrm{op}}$ is the opposite of the restricted cosimplicial
indexing category.
\item If $\cat M$ is a category, then a \emph{restricted
cosimplicial object} in $\cat M$ is a functor from $\boldsymbol\Delta_{\mathrm{rest}}$ to
$\cat M$.
\item If $\cat M$ is a category, a \emph{restricted simplicial
object} in $\cat M$ is a functor from $(\boldsymbol\Delta_{\mathrm{rest}})^{\mathrm{op}}$ to $\cat
M$.
\end{enumerate}
\end{defn}
If we let $G\colon \boldsymbol\Delta_{\mathrm{rest}} \to \boldsymbol\Delta$ be the inclusion, then for $\diag
X$ a cosimplicial object in $\cat M$ the induced diagram $G^{*}\diag
X$ is a restricted cosimplicial object in $\cat M$, called the
\emph{underlying restricted cosimplicial object} of $\diag X$; it is
obtained from $\diag X$ by ``forgetting the codegeneracy operators''.
Similarly, if we let $G\colon \boldsymbol\Delta_{\mathrm{rest}}^{\mathrm{op}} \to \boldsymbol\Delta^{\mathrm{op}}$ be the inclusion,
then for $\diag Y$ a simplicial object in $\cat M$ the induced diagram
$G^{*}\diag Y$ is a restricted simplicial object in $\cat M$, called
the \emph{underlying restricted simplicial object} of $\diag Y$,
obtained from $\diag Y$ by ``forgetting the degeneracy operators''.
\begin{thm}
\label{thm:RestCosFib}
\leavevmode
\begin{enumerate}
\item The inclusion $\boldsymbol\Delta_{\mathrm{rest}} \to \boldsymbol\Delta$ of the restricted cosimplicial
indexing category into the cosimplicial indexing category is both
a fibering Reedy functor and a cofibering Reedy functor.
\item The inclusion $\boldsymbol\Delta_{\mathrm{rest}}^{\mathrm{op}} \to \boldsymbol\Delta^{\mathrm{op}}$ of the restricted
simplicial indexing category into the simplicial indexing category
is both a fibering Reedy functor and a cofibering Reedy functor.
\end{enumerate}
\end{thm}
\begin{proof}
We will prove part~1; part~2 will then follow from
Proposition~\ref{prop:OpFiberingCofibering}.
We first prove that the inclusion $\boldsymbol\Delta_{\mathrm{rest}} \to \boldsymbol\Delta$ is the inclusion
of a cofibering Reedy subcategory. Let $\sigma\colon \beta \to
\alpha$ be a map in $\drc{\boldsymbol\Delta}$. If $\sigma$ is an identity map,
then the category of direct $\boldsymbol\Delta_{\mathrm{rest}}$-factorizations of $\sigma$ is
empty. If $\sigma$ is not an identity map, then
$\bigl((\sigma\colon \beta \to \alpha), 1_{\beta}\bigr)$ is an
object of the category of direct $\boldsymbol\Delta_{\mathrm{rest}}$-factorizations of
$\sigma$ that maps to every other object of that category, and so
the nerve of that category is connected.
We now prove that the inclusion $\boldsymbol\Delta_{\mathrm{rest}} \to \boldsymbol\Delta$ is the inclusion
of a fibering Reedy subcategory. Let $\sigma\colon \alpha \to
\beta$ be a map in $\inv{\boldsymbol\Delta}$. Since there are no non-identity
maps in $\inv{\boldsymbol\Delta_{\mathrm{rest}}}$, the category of inverse
$\boldsymbol\Delta_{\mathrm{rest}}$-factorizations of $\sigma$ is empty.
\end{proof}
\begin{thm}
\label{thm:RestCosQuil}
Let $\cat M$ be a model category.
\begin{enumerate}
\item The functor $\cat M^{\boldsymbol\Delta} \to \cat M^{\boldsymbol\Delta_{\mathrm{rest}}}$ that ``forgets
the codegeneracies'' of a cosimplicial object is both a left
Quillen functor and a right Quillen functor.
\item The functor $\cat M^{\boldsymbol\Delta^{\mathrm{op}}} \to \cat M^{\boldsymbol\Delta_{\mathrm{rest}}^{\mathrm{op}}}$ that
``forgets the degeneracies'' of a simplicial object is both a left
Quillen functor and a right Quillen functor.
\end{enumerate}
\end{thm}
\begin{proof}
This follows from Theorem~\ref{thm:RestCosFib}, Theorem~\ref{thm:GoodisGood},
and Theorem~\ref{thm:MainCofibering}.
\end{proof}
\subsubsection{Diagonals of multicosimplicial and multisimplicial
objects}
\label{sec:diagonal}
\begin{defn}
\label{def:diag}
Let $m$ be a positive integer.
\begin{enumerate}
\item The \emph{diagonal embedding} of the category $\boldsymbol\Delta$ into
$\boldsymbol\Delta^{m}$ is the functor $D\colon \boldsymbol\Delta \to \boldsymbol\Delta^{m}$ that takes the
object $[k]$ of $\boldsymbol\Delta$ to the object $\bigl(\underbrace{[k], [k],
\ldots, [k]}_{\text{$m$ times}}\bigr)$ of $\boldsymbol\Delta^{m}$ and the
morphism $\phi\colon [p] \to [q]$ of $\boldsymbol\Delta$ to the morphism
$(\phi^{m})$ of $\boldsymbol\Delta^{m}$.
\item If $\cat M$ is a category and $\diag X$ is an $m$-cosimplicial
object in $\cat M$, then the \emph{diagonal} $\diagon\diag X$ of
$\diag X$ is the cosimplicial object in $\cat M$ that is the
composition
\begin{displaymath}
\boldsymbol\Delta \xrightarrow{D} \boldsymbol\Delta^{m} \xrightarrow{\diag X} \cat M\rlap{\enspace ,}
\end{displaymath}
so that $(\diagon\diag X)^{k} = \diag X^{(k, k, \ldots, k)}$.
\item If $\cat M$ is a category and $\diag X$ is an $m$-simplicial
object in $\cat M$, then the \emph{diagonal} $\diagon\diag X$ of
$\diag X$ is the simplicial object in $\cat M$ that is the
composition
\begin{displaymath}
\boldsymbol\Delta^{\mathrm{op}} \xrightarrow{D^{\mathrm{op}}} (\boldsymbol\Delta^{m})^{\mathrm{op}} = (\boldsymbol\Delta^{\mathrm{op}})^{m}
\xrightarrow{\diag X} \cat M\rlap{\enspace ,}
\end{displaymath}
so that $(\diagon\diag X)_{k} = \diag X_{(k, k, \ldots, k)}$.
\end{enumerate}
\end{defn}
\begin{thm}
\label{thm:DiagFibr}
Let $m$ be a positive integer.
\begin{enumerate}
\item The diagonal embedding $D\colon \boldsymbol\Delta \to \boldsymbol\Delta^{m}$ is a fibering
Reedy functor.
\item The diagonal embedding $D^{\mathrm{op}}\colon \boldsymbol\Delta^{\mathrm{op}} \to (\boldsymbol\Delta^{m})^{\mathrm{op}} =
(\boldsymbol\Delta^{\mathrm{op}})^{m}$ is a cofibering Reedy functor.
\end{enumerate}
\end{thm}
\begin{proof}
We will prove part~1; part~2 will then follow from
Proposition~\ref{prop:OpFiberingCofibering}.
We will identify $\boldsymbol\Delta$ with its image in $\boldsymbol\Delta^{m}$, so that the
objects of $\boldsymbol\Delta$ are the $m$-tuples $\bigl([k],[k],\ldots,
[k]\bigr)$. If $(\alpha_{1},\alpha_{2}, \ldots, \alpha_{m})\colon
\bigl([k],[k],\ldots, [k]\bigr) \to \bigl([p_{1}], [p_{2}], \ldots,
[p_{m}]\bigr)$ is a map in $\inv{\boldsymbol\Delta^{m}}$, then
\cite{diagn}*{Lem.~5.1} implies that it has a terminal factorization
through a diagonal object of $\boldsymbol\Delta^{m}$. If that terminal
factorization is through the identity map of $\bigl([k],[k], \ldots,
[k]\bigr)$, then the category of inverse $\boldsymbol\Delta$-factorizations of
$(\alpha_{1},\alpha_{2},\ldots, \alpha_{m})$ is empty; if that
terminal factorization is not through the identity map, then it is a
terminal object of the category of inverse $\boldsymbol\Delta$-factorizations of
$(\alpha_{1},\alpha_{2}, \ldots, \alpha_{m})$, and so the nerve of
that category is connected.
\end{proof}
Part~1 of the following corollary appears in \cite{diagn}.
\begin{cor}
\label{cor:DiagFibr}
Let $m$ be a positive integer and let $\cat M$ be a model category.
\begin{enumerate}
\item The functor that takes an $m$-cosimplicial object in $\cat M$
to its diagonal cosimplicial object is a right Quillen functor.
\item The functor that takes an $m$-simplicial object in $\cat M$ to
its diagonal simplicial object is a left Quillen functor.
\end{enumerate}
\end{cor}
\begin{proof}
This follows from Theorem~\ref{thm:DiagFibr}, Theorem~\ref{thm:GoodisGood},
and Theorem~\ref{thm:MainCofibering}.
\end{proof}
\subsubsection{Slices of multicosimplicial and multisimplicial
objects}
\label{sec:slice}
\begin{defn}
\label{def:SliceCat}
Let $n$ be a positive integer and for $1 \le i \le n$ let $\cat
C_{i}$ be a category. If $K$ is a subset of $\{1, 2, \ldots, n\}$,
then a \emph{$K$-slice} of the product category $\prod_{i=1}^{n}
\cat C_{i}$ is the category $\prod_{i \in K} \cat C_{i}$. (If $K$
consists of a single integer $j$, then we will use the term
\emph{$j$-slice} to refer to the $K$-slice.) An \emph{inclusion of
the $K$-slice} is a functor $\prod_{i\in K} \cat C_{i} \to
\prod_{i=1}^{n} \cat C_{i}$ defined by choosing an object
$\alpha_{i}$ of $\cat C_{i}$ for $i \in \bigl(\{1, 2, \ldots,
n\}-K\bigr)$ and inserting $\alpha_{i}$ into the $i$'th coordinate
for $i \in \bigl(\{1, 2, \ldots, n\}-K\bigr)$.
\end{defn}
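For example, with $n = 2$ and $K = \{1\}$, an inclusion of the $1$-slice is a functor $\cat C_{1} \to \cat C_{1} \times \cat C_{2}$ given on objects by $\gamma \mapsto (\gamma, \alpha_{2})$ and on maps by $\sigma \mapsto (\sigma, 1_{\alpha_{2}})$ for some fixed object $\alpha_{2}$ of $\cat C_{2}$.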
\begin{thm}
\label{thm:SliceFibCofib}
Let $n$ be a positive integer and for $1 \le i \le n$ let $\cat
C_{i}$ be a Reedy category. For every subset $K$ of $\{1, 2,
\ldots, n\}$ both the product $\prod_{i=1}^{n}\cat C_{i}$ and the
product $\prod_{i\in K}\cat C_{i}$ are Reedy categories (see
\cite{MCATL}*{Prop.~15.1.6}), and every inclusion of a $K$-slice
$\prod_{i\in K}\cat C_{i} \to \prod_{i=1}^{n}\cat C_{i}$ (see
Definition~\ref{def:SliceCat}) is both a fibering Reedy functor and a
cofibering Reedy functor.
\end{thm}
\begin{proof}
We will show that every inclusion is a fibering Reedy functor; the
proof that it is a cofibering Reedy functor is similar (and also
follows from applying the fibering case to the inclusion
$\prod_{i\in K}\cat C_{i}^{\mathrm{op}} \to \prod_{i=1}^{n}\cat C_{i}^{\mathrm{op}}$; see
Proposition~\ref{prop:OpFiberingCofibering}). We will assume that $K =
\{1,2\}$; the other cases are similar.
  Let $(\beta_{1}, \beta_{2}, \alpha_{3}, \alpha_{4}, \ldots,
  \alpha_{n})$ be an object of $\prod_{i=1}^{n}\cat C_{i}$ in the
  image of the inclusion of the $K$-slice, and let
\begin{displaymath}
(\sigma_{1},\sigma_{2},\ldots, \sigma_{n})\colon
(\beta_{1},\beta_{2},\alpha_{3},\alpha_{4},\ldots, \alpha_{n})
\longrightarrow
(\gamma_{1},\gamma_{2},\ldots,\gamma_{n})
\end{displaymath}
be a map in $\inv{\prod_{i=1}^{n}\cat C_{i}}$. Since
$\inv{\prod_{i=1}^{n}\cat C_{i}} = \prod_{i=1}^{n}\inv{\cat C_{i}}$,
each $\sigma_{i} \in \inv{\cat C_{i}}$. If $\sigma_{1}$ and
$\sigma_{2}$ are both identity maps, then the category of inverse
$\prod_{i \in K}\cat C_{i}$-factorizations of
$(\sigma_{1},\sigma_{2},\ldots, \sigma_{n})$ is empty. Otherwise,
the category of inverse $\prod_{i \in K}\cat C_{i}$-factorizations
of $(\sigma_{1},\sigma_{2},\ldots, \sigma_{n})$ contains the object
\begin{multline*}
(\beta_{1},\beta_{2},\alpha_{3},\alpha_{4},\ldots,\alpha_{n})
    \xrightarrow{(\sigma_{1},\sigma_{2},1_{\alpha_{3}}, 1_{\alpha_{4}},
      \ldots, 1_{\alpha_{n}})}
    (\gamma_{1},\gamma_{2},\alpha_{3},\alpha_{4},\ldots,\alpha_{n})\\
    \xrightarrow{(1_{\gamma_{1}},1_{\gamma_{2}},\sigma_{3},\sigma_{4},
\ldots, \sigma_{n})}
(\gamma_{1},\gamma_{2},\ldots, \gamma_{n})
\end{multline*}
and every other object of the category of inverse $\prod_{i \in
K}\cat C_{i}$-factorizations of $(\sigma_{1},\sigma_{2},\ldots,
\sigma_{n})$ maps to this one. Thus the nerve of the category of
inverse $\prod_{i \in K}\cat C_{i}$-factorizations of
$(\sigma_{1},\sigma_{2},\ldots, \sigma_{n})$ is connected.
\end{proof}
\begin{thm}
\label{thm:GenSliceQF}
If $\cat M$ is a model category, $n$, $\cat C_{i}$ for $1 \le i \le
n$, and $K$ are as in Theorem~\ref{thm:SliceFibCofib}, and the functor
$\prod_{i\in K}\cat C_{i} \to \prod_{i=1}^{n}\cat C_{i}$ is the
inclusion of a $K$-slice, then the restriction functor
\begin{displaymath}
\cat M^{(\prod_{i=1}^{n}\cat C_{i})} \longrightarrow
\cat M^{(\prod_{i \in K}\cat C_{i})}
\end{displaymath}
is both a left Quillen functor and a right Quillen functor.
\end{thm}
\begin{proof}
This follows from Theorem~\ref{thm:GoodisGood},
Theorem~\ref{thm:MainCofibering}, and Theorem~\ref{thm:SliceFibCofib}.
\end{proof}
\begin{defn}
\label{def:slice}
Let $\cat M$ be a model category and let $m$ be a positive integer.
\begin{enumerate}
\item If $\diag X$ is an $m$-cosimplicial object in $\cat M$, then a
\emph{slice} of $\diag X$ is a cosimplicial object in $\cat M$
defined by restricting all but one factor of $\boldsymbol\Delta^{m}$.
\item If $\diag X$ is an $m$-simplicial object in $\cat M$, then a
\emph{slice} of $\diag X$ is a simplicial object in $\cat M$
defined by restricting all but one factor of $(\boldsymbol\Delta^{\mathrm{op}})^{m}$.
\end{enumerate}
\end{defn}
\begin{thm}
\label{thm:SliceQF}
Let $\cat M$ be a model category and let $m$ be a positive integer.
\begin{enumerate}
\item The functor $\cat M^{\boldsymbol\Delta^{m}} \to \cat M^{\boldsymbol\Delta}$ that restricts
a multicosimplicial object to a slice (see Definition~\ref{def:slice}) is
    both a left Quillen functor and a right Quillen functor.
\item The functor $\cat M^{(\boldsymbol\Delta^{\mathrm{op}})^{m}} \to \cat M^{\boldsymbol\Delta^{\mathrm{op}}}$ that
restricts a multisimplicial object to a slice is both a left
Quillen functor and a right Quillen functor.
\end{enumerate}
\end{thm}
\begin{proof}
This follows from Theorem~\ref{thm:GenSliceQF}.
\end{proof}
\begin{cor}
\label{cor:SliceQF}
Let $\cat M$ be a model category and let $m$ be a positive integer.
\begin{enumerate}
\item If $\diag X$ is a fibrant $m$-cosimplicial object in $\cat M$,
then every slice of $\diag X$ is a fibrant cosimplicial object.
\item If $\diag X$ is a cofibrant $m$-simplicial object in $\cat M$,
then every slice of $\diag X$ is a cofibrant simplicial object.
\end{enumerate}
\end{cor}
\begin{proof}
This follows from Theorem~\ref{thm:SliceQF}.
\end{proof}
\section{Proofs of the main theorems}
\label{sec:Proof}
Our main result, Theorem~\ref{thm:GoodisGood}, will follow immediately from
Theorem~\ref{thm:FiberingThree} below (the latter is an elaboration of the
former). The proof of its dual, Theorem~\ref{thm:MainCofibering}, will use
Theorem~\ref{thm:GoodisGood} and can be found in
Section~\ref{sec:PrfCofibering}.
\begin{thm}
\label{thm:FiberingThree}
If $G\colon \cat C \to \cat D$ is a Reedy functor between Reedy
categories, then the following are equivalent:
\begin{enumerate}
\item The functor $G$ is a fibering Reedy functor (see
Definition~\ref{def:goodsub}).
\item For every model category $\cat M$ the induced functor of
diagram categories $G^{*}\colon \cat M^{\cat D} \to \cat M^{\cat
C}$ is a right Quillen functor.
\item For every model category $\cat M$ the induced functor of
diagram categories $G^{*}\colon \cat M^{\cat D} \to \cat M^{\cat
C}$ takes fibrant objects of $\cat M^{\cat D}$ to fibrant
objects of $\cat M^{\cat C}$.
\end{enumerate}
\end{thm}
\begin{proof} The proof will be completed by the proofs of
Theorem~\ref{thm:RightQuil} and Theorem~\ref{thm:RtGdNec} below. More
precisely, we will have
\begin{displaymath}
\xymatrix@=7em{
{(1)} \ar@{=>}[r]^{\text{Theorem~\ref{thm:RightQuil}}}
& {(2)} \ar@{=>}[r]^{\text{Proposition~\ref{prop:QuillenNice}}}
& {(3)} \ar@{=>}[r]^{\text{Theorem~\ref{thm:RtGdNec}}}
& {(1)}
}\qedhere
\end{displaymath}
\end{proof}
\begin{thm}
\label{thm:RightQuil}
If $G\colon \cat C \to \cat D$ is a fibering Reedy functor and $\cat
M$ is a model category, then the induced functor of diagram
categories $G^{*}\colon \cat M^{\cat D} \to \cat M^{\cat C}$ is a
right Quillen functor.
\end{thm}
\begin{thm}
\label{thm:RtGdNec}
If $G\colon \cat C \to \cat D$ is a Reedy functor that is not a
fibering Reedy functor, then there is a fibrant $\cat D$-diagram of
topological spaces for which the induced $\cat C$-diagram is not
fibrant.
\end{thm}
The proof of Theorem~\ref{thm:RightQuil} is given in
Section~\ref{sec:ProofRightQuil}, while the proof of Theorem~\ref{thm:RtGdNec}
can be found in Section~\ref{sec:RtGdNec}.
In summary, the proofs of our main results, Theorem~\ref{thm:GoodisGood}
and Theorem~\ref{thm:MainCofibering}, thus have the following structure:
\begin{displaymath}
\vcenter{
\xymatrix@C=1.5em{
\txt{Theorem~\ref{thm:RightQuil} \\
(Section~\ref{sec:ProofRightQuil})}
\ar@{=>}[dr]
\\
& \text{Theorem~\ref{thm:FiberingThree}} \ar@{=>}[r]
& \text{Theorem~\ref{thm:GoodisGood}} \ar@{=>}[d]
\\
\txt{Theorem~\ref{thm:RtGdNec} \\
(Section~\ref{sec:RtGdNec})}
\ar@{=>}[ur]
&
&\txt{Theorem~\ref{thm:MainCofibering} \\
      (Section~\ref{sec:PrfCofibering})}
    }}
\end{displaymath}
\subsection{Proof of Theorem~\ref{thm:RightQuil}}
\label{sec:ProofRightQuil}
We work backward, first giving the proof of the main result. The
completion of that proof will depend on two key assertions,
Proposition~\ref{prop:MatchFib} and Proposition~\ref{prop:MatchIso}, whose proofs are
given in Sections \ref{sec:PrfMatchFib} and \ref{sec:PrfMatchIso}.
The assumption that we have a fibering Reedy functor is used only in
the proofs of Proposition~\ref{prop:MatchFib} and Proposition~\ref{prop:isoprime} (the
latter is used in the proof of the former).
\begin{proof}[Proof of Theorem~\ref{thm:RightQuil}]
Since $\cat M$ is cocomplete, the left adjoint of $G^{*}$ exists
(see \cite{borceux-I}*{Thm.~3.7.2} or
\cite{McL:categories}*{p.~235}). Thus, to show that the induced
functor $\cat M^{\cat D} \to \cat M^{\cat C}$ is a right Quillen
functor, we need only show that it preserves fibrations and trivial
fibrations (see Proposition~\ref{prop:QuilFunc}). Since the weak
equivalences in $\cat M^{\cat D}$ and $\cat M^{\cat C}$ are the
objectwise ones, any weak equivalence in $\cat M^{\cat D}$ induces a
weak equivalence in $\cat M^{\cat C}$. Thus, if we show that the
induced functor preserves fibrations, then we will also know that it
takes maps that are both fibrations and weak equivalences to maps
that are both fibrations and weak equivalences, i.e., that it also
preserves trivial fibrations.
To show that the induced functor $\cat M^{\cat D} \to \cat M^{\cat
C}$ preserves fibrations, let $\diag X \to \diag Y$ be a fibration
of $\cat D$-diagrams in $\cat M$; we will let $G^{*}\diag X$ and
$G^{*}\diag Y$ denote the induced diagrams on $\cat C$. For every
object $\alpha$ of $\cat C$, the matching objects of $\diag X$ and
$\diag Y$ at $\alpha$ in $\cat M^{\cat C}$ are
\begin{displaymath}
\mathrm{M}_{\alpha}^{\cat C} G^{*}\diag X =
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag X
\qquad\text{and}\qquad
\mathrm{M}_{\alpha}^{\cat C} G^{*}\diag Y =
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag Y
\end{displaymath}
and we define $P_{\alpha}^{\cat C}$ by letting the diagram
\begin{equation}
\label{diag:DefPB}
\vcenter{
\xymatrix{
{P_{\alpha}^{\cat C}} \ar@{..>}[r] \ar@{..>}[d]
& {(G^{*}\diag Y)_{\alpha}} \ar[d]\\
{\mathrm{M}_{\alpha}^{\cat C} G^{*}\diag X} \ar[r]
        & {\mathrm{M}_{\alpha}^{\cat C} G^{*}\diag Y}
      }}
  \end{equation}
be a pullback; we must show that the relative matching map
$(G^{*}\diag X)_{\alpha} \to P_{\alpha}^{\cat C}$ is a fibration
(see Theorem~\ref{thm:RFib}), and there are two cases:
\begin{enumerate}
\item There is a non-identity map $\alpha \to \gamma$ in $\inv{\cat
C}$ that $G$ takes to the identity map of $G\alpha$.
\item $G$ takes every non-identity map $\alpha \to \gamma$ in
$\inv{\cat C}$ to a non-identity map in $\inv{\cat D}$.
\end{enumerate}
In the first case, Proposition~\ref{prop:MatchIso} (in
Section~\ref{sec:PrfMatchIso} below) implies that the pullback
Diagram~\ref{diag:DefPB} is isomorphic to the diagram
\begin{displaymath}
\xymatrix{
{P^{\cat C}_{\alpha}} \ar[r] \ar[d]
& {(G^{*}\diag Y)_{\alpha}}
\ar[d]^{1_{(G^{*}\diag Y)_{\alpha}}}\\
{(G^{*}\diag X)_{\alpha}} \ar[r]
      & {(G^{*}\diag Y)_{\alpha}}
    }
  \end{displaymath}
in which the vertical map on the left is an isomorphism $P^{\cat
C}_{\alpha} \approx (G^{*}\diag X)_{\alpha}$. Thus, the composition
of the relative matching map with that isomorphism is the identity
map of $(G^{*}\diag X)_{\alpha}$, and so the relative matching map
is an isomorphism $(G^{*}\diag X)_{\alpha} \to P^{\cat C}_{\alpha}$,
and is thus a fibration.
We are left with the second case, and so we can assume that $G$
takes every non-identity map $\alpha \to \gamma$ in $\inv{\cat C}$
to a non-identity map in $\inv{\cat D}$. In this case, $G$ induces
a functor $G_{*}\colon \matchcat{\cat C}{\alpha} \to \matchcat{\cat
D}{G\alpha}$ that takes the object $f\colon \alpha \to \gamma$ of
$\matchcat{\cat C}{\alpha}$ to the object $Gf\colon G\alpha \to
G\gamma$ of $\matchcat{\cat D}{G\alpha}$ (see
Proposition~\ref{prop:CatFacts}).
The matching objects of $\diag X$ and $\diag Y$ at $G\alpha$ in
$\cat M^{\cat D}$ are
\begin{displaymath}
\mathrm{M}_{G\alpha}^{\cat D} \diag X =
\lim_{\matchcat{\cat D}{G\alpha}} \diag X
\qquad\text{and}\qquad
\mathrm{M}_{G\alpha}^{\cat D} \diag Y =
\lim_{\matchcat{\cat D}{G\alpha}} \diag Y
\end{displaymath}
and we define $P_{G\alpha}^{\cat D}$ by letting the diagram
\begin{displaymath}
\xymatrix{
{P_{G\alpha}^{\cat D}} \ar@{..>}[r] \ar@{..>}[d]
& {\diag Y_{G\alpha}} \ar[d]\\
{\mathrm{M}_{G\alpha}^{\cat D} \diag X}
\ar[r]
      & {\mathrm{M}_{G\alpha}^{\cat D} \diag Y}
    }
  \end{displaymath}
be a pullback. The functor $G_{*}\colon \matchcat{\cat C}{\alpha}
\to \matchcat{\cat D}{G\alpha}$ (see Proposition~\ref{prop:CatFacts})
induces natural maps
\begin{align*}
\mathrm{M}_{G\alpha}^{\cat D}\diag X =
\lim_{\matchcat{\cat D}{G\alpha}} \diag X &\longrightarrow
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag X =
\mathrm{M}_{\alpha}^{\cat C}G^{*}\diag X\\
\mathrm{M}_{G\alpha}^{\cat D}\diag Y =
\lim_{\matchcat{\cat D}{G\alpha}} \diag Y &\longrightarrow
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag Y =
\mathrm{M}_{\alpha}^{\cat C}G^{*}\diag Y
\end{align*}
and so we have a map of pullbacks and relative matching maps
\begin{displaymath}
\xymatrix@=.8em{
& {(G^{*}\diag X)_{\alpha}} \ar[dr]\\
{\diag X_{G\alpha}} \ar@{=}[ur] \ar[dr]
&& {P^{\cat C}_{\alpha}} \ar[rr] \ar'[d][dd]
&& {(G^{*}\diag Y)_{\alpha}} \ar[dd]\\
& {P^{\cat D}_{G\alpha}} \ar[ur] \ar[rr] \ar[dd]
&& {\diag Y_{G\alpha}} \ar[ur] \ar[dd]\\
&& {\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag X} \ar'[r][rr]
&& {\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag Y}\\
& {\mathrm{M}^{\cat D}_{G\alpha}\diag X} \ar[ur] \ar[rr]
&& {\mathrm{M}^{\cat D}_{G\alpha}\diag Y} \ar[ur]
}
\end{displaymath}
and our map $(G^{*}\diag X)_{\alpha} \to P_{\alpha}^{\cat C}$ equals
the composition
\begin{displaymath}
(G^{*}\diag X)_{\alpha} =
\diag X_{G\alpha} \longrightarrow P_{G\alpha}^{\cat D}
\longrightarrow P_{\alpha}^{\cat C} \rlap{\enspace .}
\end{displaymath}
Since the map $\diag X \to \diag Y$ is a fibration in $\cat M^{\cat
D}$, the relative matching map $\diag X_{G\alpha} \to
P_{G\alpha}^{\cat D}$ is a fibration (see Theorem~\ref{thm:RFib}), and so
it is sufficient to show that the natural map
\begin{equation}
\label{eq:Matches}
P_{G\alpha}^{\cat D} \longrightarrow P_{\alpha}^{\cat C}
\end{equation}
is a fibration. That statement is the content of
Proposition~\ref{prop:MatchFib} (in Section~\ref{sec:PrfMatchFib}, below) which
(along with Proposition~\ref{prop:MatchIso} in Section~\ref{sec:PrfMatchIso})
will complete the proof of Theorem~\ref{thm:RightQuil}.
\end{proof}
\subsection{Statement and proof of Proposition~\ref{prop:MatchFib}}
\label{sec:PrfMatchFib}
The purpose of this section is to state and prove the following
proposition, which (along with Proposition~\ref{prop:MatchIso} in
Section~\ref{sec:PrfMatchIso}) will complete the proof of
Theorem~\ref{thm:RightQuil}.
\begin{prop}
\label{prop:MatchFib}
For every object $\alpha$ of $\cat C$, the map
\begin{displaymath}
P_{G\alpha}^{\cat D} \longrightarrow P_{\alpha}^{\cat C}
\end{displaymath}
from \eqref{eq:Matches} is a fibration.
\end{prop}
The proof of Proposition~\ref{prop:MatchFib} is intricate, but it does not
require any new definitions. To aid the reader, here is the structure
of the argument:
\begin{equation}
\label{eq:RightQuilFlow}
\vcenter{
\xymatrix@C=1.5em{
\text{Proposition~\ref{prop:isoprime}}\ar@{=>}[r]
& \text{Proposition~\ref{prop:MatchFib}}
& \\
\text{Lemma~\ref{lem:reedy}}\ar@{=>}[r]
& \text{Proposition~\ref{prop:IndFibr}}\ar@{=>}[u]
& \text{Lemma~\ref{lem:lastfib}}\ar@{=>}[l]\\
& \txt{Lemma~\ref{lem:pbCtoCprime} \& \\
Diagram~\ref{diag:bigcube}}\ar@{=>}[r]
& \text{Lemma~\ref{lem:PBf}}\ar@{=>}[u]
    }
  }
\end{equation}
We will start with the proof of Proposition~\ref{prop:MatchFib} and then, as
in the proof of Theorem~\ref{thm:RightQuil}, we will work our way backward
from it.
\begin{proof}[Proof of Proposition~\ref{prop:MatchFib}]
If the degree of $\alpha$ is $k$, we define a nested sequence of
subcategories of $\matchcat{\cat D}{G\alpha}$
\begin{equation}
\label{eq:MatchFibCats}
\cat A_{-1} \subset \cat A_{0} \subset \cat A_{1} \subset \cdots
\subset \cat A_{k-1} = \matchcat{\cat D}{G\alpha}
\end{equation}
by letting $\cat A_{i}$ for $-1 \le i \le k-1$ be the full
subcategory of $\matchcat{\cat D}{G\alpha}$ with objects the union
of
\begin{itemize}
\item the objects of $\matchcat{\cat D}{G\alpha}$ whose target is of
degree at most $i$, and
\item the image under $G_{*}\colon \matchcat{\cat C}{\alpha} \to
\matchcat{\cat D}{G\alpha}$ (see Proposition~\ref{prop:CatFacts}) of the
objects of $\matchcat{\cat C}{\alpha}$.
\end{itemize}
The functor $G_{*}\colon \matchcat{\cat C}{\alpha} \to
\matchcat{\cat D}{G\alpha}$ factors through $\cat A_{-1}$ and, since
there are no objects of negative degree, this functor, which by
abuse of notation we will also call $G_{*}\colon \matchcat{\cat
C}{\alpha} \to \cat A_{-1}$, maps onto the objects of $\cat A_{-1}$.
In fact, we claim that the functor $G_{*}\colon \matchcat{\cat
C}{\alpha} \to \cat A_{-1}$ is left cofinal (see
Definition~\ref{def:cofinal}) and thus induces isomorphisms
\begin{displaymath}
\lim_{\cat A_{-1}} \diag X \approx
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag X
\qquad\text{and}\qquad
\lim_{\cat A_{-1}}
\diag Y \approx \lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag Y
\end{displaymath}
(see Theorem~\ref{thm:CofinalIso}). To see this, note that every object
of $\cat A_{-1}$ is of the form $G\sigma\colon G\alpha \to G\beta$
for some object $\sigma\colon \alpha \to \beta$ of $\matchcat{\cat
C}{\alpha}$. For every object $\sigma\colon \alpha \to \beta$ of
$\matchcat{\cat C}{\alpha}$, an object of the overcategory
$\bovercat{G_{*}}{(G\sigma\colon G\alpha \to G\beta)}$ is a pair
\begin{displaymath}
\bigl((\nu\colon \alpha \to \gamma),
(\mu\colon G\gamma \to G\beta)\bigr)
\end{displaymath}
where $\nu\colon \alpha \to \gamma$ is an object in $\matchcat{\cat
C}{\alpha}$ and $\mu\colon G\gamma \to G\beta$ is a map in
$\inv{\cat D}$ such that the triangle
\begin{displaymath}
\xymatrix@=.8em{
&{G\alpha} \ar[dl]_{G\nu} \ar[dr]^{G\sigma}\\
{G\gamma} \ar[rr]_{\mu}
&& {G\beta}
      }
    \end{displaymath}
commutes, and a map
\begin{displaymath}
\bigl((\nu\colon \alpha \to \gamma),
(\mu\colon G\gamma \to G\beta)\bigr) \longrightarrow
\bigl((\nu'\colon \alpha \to \gamma'),
(\mu'\colon G\gamma' \to G\beta)\bigr)
\end{displaymath}
is a map $\tau\colon \gamma \to \gamma'$ in $\inv{\cat C}$ that
makes the diagrams
\begin{displaymath}
\vcenter{
\xymatrix@=.8em{
&{\alpha} \ar[dl]_{\nu} \ar[dr]^{\nu'}\\
{\gamma} \ar[rr]_{\tau}
&& {\gamma'}
        }
      }
      \qquad\text{and}\qquad
\vcenter{
\xymatrix@=.6em{
{G\gamma} \ar[rr]^{G\tau} \ar[dr]_{\mu}
&& {G\gamma'} \ar[dl]^{\mu'}\\
          & {G\beta}
        }
      }
    \end{displaymath}
commute. Thus, this overcategory is exactly the category of inverse
$\cat C$-factorizations of $(\alpha, G\sigma)$ (see
Proposition~\ref{prop:CatFacts}) and so (since $G$ is a fibering Reedy
functor) its nerve must be either empty or connected. Since it is
not empty (it contains the vertex $(\alpha \xrightarrow{\sigma}
\beta, 1_{G\beta})$), it is connected, and so $G_{*}\colon
\matchcat{\cat C}{\alpha} \to \cat A_{-1}$ is left cofinal.
The sequence of inclusions of categories \eqref{eq:MatchFibCats}
thus induces sequences of maps
\begin{displaymath}
\begin{gathered}
\lim_{\matchcat{\cat D}{G\alpha}} \diag X = \lim_{\cat A_{k-1}}
\diag X \to \lim_{\cat A_{k-2}} \diag X \to \cdots \to
\lim_{\cat A_{0}} \diag X \to \lim_{\cat A_{-1}} \diag X \approx
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag X\\
\lim_{\matchcat{\cat D}{G\alpha}} \diag Y = \lim_{\cat A_{k-1}}
\diag Y \to \lim_{\cat A_{k-2}} \diag Y \to \cdots \to
\lim_{\cat A_{0}} \diag Y \to \lim_{\cat A_{-1}} \diag Y \approx
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag Y\rlap{\enspace .}
\end{gathered}
\end{displaymath}
For $-1 \le i \le k-1$ we let $P_{i}$ be the pullback
\begin{displaymath}
\xymatrix{
{P_{i}} \ar@{..>}[r] \ar@{..>}[d]
& {\diag Y_{G\alpha}} \ar[d]\\
{\lim_{\cat A_{i}}\diag X} \ar[r]
& {\lim_{\cat A_{i}}\diag Y \rlap{\enspace .}}
}
\end{displaymath}
Since we have an evident map of diagrams
\begin{displaymath}
\Big(\lim_{\cat A_{i+1}}\diag X\to \lim_{\cat A_{i+1}}\diag
Y\leftarrow \diag Y_{G\alpha} \Big) \longrightarrow
\Big(\lim_{\cat A_{i}}\diag X\to \lim_{\cat A_{i}}\diag
Y\leftarrow \diag Y_{G\alpha} \Big)
\end{displaymath}
we also get an induced map $P_{i+1}\to P_i$ of pullbacks.
We thus have a factorization of
\eqref{eq:Matches} as
\begin{displaymath}
P_{G\alpha}^{\cat D} = P_{k-1} \longrightarrow P_{k-2}
\longrightarrow \cdots \longrightarrow
P_{-1} \approx P_{\alpha}^{\cat C} \rlap{\enspace ,}
\end{displaymath}
and we will show that the map $P_{i+1} \to P_{i}$ is a fibration for
$-1 \le i \le k-2$.
The objects of $\cat A_{i+1}$ that are not in $\cat A_{i}$ are maps
$G\alpha \to \beta$ where $\beta$ is of degree $i+1$, and this set
of maps can be divided into two subsets:
\begin{itemize}
\item the set $S_{i+1}$ of maps $G\alpha\to\beta$ for which the
category of inverse $\cat C$-factorizations of $(\alpha, G\alpha
\to \beta)$ is nonempty, and
\item the set $T_{i+1}$ of maps for which the category of inverse
$\cat C$-factorizations of $(\alpha, G\alpha \to \beta)$ is empty.
\end{itemize}
We let $\cat A'_{i+1}$ be the full subcategory of $\matchcat{\cat
D}{G\alpha}$ with objects the union of $S_{i+1}$ with the objects
of $\cat A_{i}$, and define $P'_{i+1}$ as the pullback
\begin{displaymath}
\xymatrix{
{P'_{i+1}} \ar@{..>}[r] \ar@{..>}[d]
& {\diag Y_{G\alpha}} \ar[d]\\
{\lim_{\cat A'_{i+1}}\diag X} \ar[r]
& {\lim_{\cat A'_{i+1}}\diag Y \rlap{\enspace .}}
      }
    \end{displaymath}
We have inclusions of categories $\cat A_{i} \subset \cat A'_{i+1}
\subset \cat A_{i+1}$, and the maps
\begin{displaymath}
\lim_{\cat A_{i+1}} \diag X \longrightarrow
\lim_{\cat A_{i}}\diag X
\qquad\text{and}\qquad
\lim_{\cat A_{i+1}} \diag Y \longrightarrow
\lim_{\cat A_{i}}\diag Y
\end{displaymath}
factor as
\begin{displaymath}
\lim_{\cat A_{i+1}} \diag X \longrightarrow
\lim_{\cat A'_{i+1}} \diag X \longrightarrow
\lim_{\cat A_{i}}\diag X
\qquad\text{and}\qquad
\lim_{\cat A_{i+1}} \diag Y \longrightarrow
\lim_{\cat A'_{i+1}} \diag Y \longrightarrow
\lim_{\cat A_{i}}\diag Y \rlap{\enspace .}
\end{displaymath}
These factorizations induce a factorization
\begin{equation}
\label{eq:factorization}
P_{i+1} \longrightarrow P'_{i+1} \longrightarrow P_{i}
\end{equation}
of the map $P_{i+1} \to P_{i}$, and we have the commutative diagram
\begin{displaymath}
\xymatrix@=.8em{
&& {P_{i}} \ar[rrr] \ar'[d]'[dd][ddd]
&&& {\diag Y_{G\alpha}} \ar[ddd]\\
& {P'_{i+1}} \ar[ur] \ar[rrr] \ar'[d][ddd]
&&& {\diag Y_{G\alpha}} \ar@{=}[ur] \ar[ddd]\\
{P_{i+1}} \ar[ur] \ar[rrr] \ar[ddd]
&&& {\diag Y_{G\alpha}} \ar@{=}[ur] \ar[ddd]\\
&& {\lim_{\cat A_{i}}\diag X} \ar'[r]'[rr][rrr]
&&& {\lim_{\cat A_{i}}\diag Y}\\
& {\lim_{\cat A'_{i+1}}\diag X} \ar[ur] \ar'[rr][rrr]
&&& {\lim_{\cat A'_{i+1}}\diag Y} \ar[ur]\\
{\lim_{\cat A_{i+1}}\diag X} \ar[ur] \ar[rrr]
&&& {\lim_{\cat A_{i+1}}\diag Y} \ar[ur]
}
\end{displaymath}
Proposition~\ref{prop:isoprime} below asserts that the map $P'_{i+1} \to
P_{i}$ is an isomorphism and Proposition~\ref{prop:IndFibr} asserts that the
map $P_{i+1} \to P'_{i+1}$ is a fibration. Hence, the map
$P_{G\alpha}^{\cat D}\to P_{\alpha}^{\cat C}$ is a fibration as
well.
\end{proof}
\begin{prop}
\label{prop:isoprime}
For $-1 \le i \le k-2$, the map $P'_{i+1} \to P_{i}$ in
\eqref{eq:factorization} is an isomorphism.
\end{prop}
\begin{proof}
Let $\sigma\colon G\alpha \to \beta$ be an object of $\cat A'_{i+1}$
that is not in $\cat A_{i}$. The objects of $\overcat{\cat
A_{i}}{\sigma}$ are commutative diagrams
\begin{displaymath}
\xymatrix@=.6em{
& {G\alpha} \ar[dl]_{\nu} \ar[dr]^{\sigma}\\
{\gamma} \ar[rr]_{\mu}
&& {\beta}
      }
    \end{displaymath}
where $\nu\colon G\alpha \to \gamma$ is in $\cat A_{i}$ and $\mu$ is
in $\inv{\cat D}$. Since $\beta$ is of degree $i+1$ and $\mu$
lowers degree (because $\mu$ cannot be an identity map, since
$\sigma$ isn't in $\cat A_{i}$), the degree of $\gamma$ must be
greater than $i+1$, and so the map $\nu\colon G\alpha \to \gamma$
must be of the form $G\nu'\colon G\alpha \to G\gamma'$ for some map
$\nu'\colon \alpha \to \gamma'$ in $\matchcat{\cat C}{\alpha}$.
Thus, the objects of $\overcat{\cat A_{i}}{\sigma}$ are pairs
$\bigl((\nu'\colon \alpha\to \gamma'), (\mu\colon G\gamma' \to
\beta)\bigr)$ where $\nu'\colon \alpha \to \gamma'$ is a
non-identity map of $\inv{\cat C}$, $\mu\colon G\gamma' \to \beta$
is in $\inv{\cat D}$, and $\mu\circ G\nu' = \sigma$, and
$\overcat{\cat A_{i}}{\sigma}$ is the category of inverse $\cat
C$-factorizations of $(\alpha, \sigma)$ (see
Proposition~\ref{prop:CatFacts}). Since $G$ is a fibering Reedy functor,
the nerve of the category of inverse $\cat C$-factorizations of
$(\alpha, \sigma)$ is either empty or connected. Since it is
nonempty (because $\sigma\colon G\alpha \to \beta$ is an element of
$S_{i+1}$), the nerve of the overcategory $\overcat{\cat
A_{i}}{\sigma}$ is nonempty and connected, and so the inclusion
$\cat A_{i} \subset \cat A'_{i+1}$ is left cofinal (see
Definition~\ref{def:cofinal}). Thus, the maps $\lim_{\cat A'_{i+1}} \diag X
\to \lim_{\cat A_{i}}\diag X$ and $\lim_{\cat A'_{i+1}} \diag Y \to
\lim_{\cat A_{i}}\diag Y$ are isomorphisms (see
Theorem~\ref{thm:CofinalIso}), and so the induced map $P'_{i+1} \to
P_{i}$ is an isomorphism.
\end{proof}
\begin{prop}
\label{prop:IndFibr}
For $-1 \le i \le k-2$, the map $P_{i+1} \to P'_{i+1}$ in
\eqref{eq:factorization} is a fibration.
\end{prop}
The proof of Proposition~\ref{prop:IndFibr} is more intricate; the reader
might wish to refer to the chart \eqref{eq:RightQuilFlow} for its
structure. Before we can present it, we will need several lemmas.
For the first one, the reader should recall the definition of the sets
$T_i$ from the proof of Proposition~\ref{prop:MatchFib}.
\begin{lem}
\label{lem:pbCtoCprime}
For every $\cat D$-diagram $\diag Z$ in $\cat M$ there is a natural
pullback square
\begin{equation}
\label{diag:pbCtoCprime}
\vcenter{
\xymatrix{
{\lim_{\cat A_{i+1}} \diag Z} \ar[r] \ar[d]
& {\lim_{\cat A'_{i+1}} \diag Z} \ar[d]\\
{\prod_{(G\alpha\to\beta)\in T_{i+1}}
\hspace{-1.5em}\diag Z_{\beta}} \ar[r]
& {\prod_{(G\alpha\to\beta)\in T_{i+1}}
\lim_{\matchcat{\cat D}{\beta}} \diag Z \rlap{\enspace .}}
      }
    }
  \end{equation}
\end{lem}
\begin{proof}
For every element $\sigma\colon G\alpha\to\beta$ of $T_{i+1}$, every
object of the matching category $\matchcat{\cat D}{\beta}$ is a map
to an object of degree at most $i$, and so we have a functor
$\matchcat{\cat D}{\beta} \to \cat A'_{i+1}$ that takes $\beta \to
\gamma$ to the composition $G\alpha\xrightarrow{\sigma}
\beta\to\gamma$; this induces the map $\lim_{\cat A'_{i+1}} \diag Z
\longrightarrow \lim_{\matchcat{\cat D}{\beta}} \diag Z$ that is the
projection of the right hand vertical map onto the factor indexed by
$\sigma$. We thus have a commutative square as in
Diagram~\ref{diag:pbCtoCprime}.
The objects of $\cat A_{i+1}$ are the objects of $\cat A'_{i+1}$
together with the elements of $T_{i+1}$, and so a map to $\lim_{\cat
A_{i+1}} \diag Z$ is determined by its postcompositions with the
above maps to $\lim_{\cat A'_{i+1}} \diag Z$ and
$\prod_{(G\alpha\to\beta)\in T_{i+1}} \diag Z_{\beta}$. Since there
are no non-identity maps in $\cat A_{i+1}$ with codomain an element
of $T_{i+1}$, and the only non-identity maps with domain an element
$G\alpha\to\beta$ of $T_{i+1}$ are the objects of the matching
category $\matchcat{\cat D}{\beta}$, maps to $\lim_{\cat A'_{i+1}}
  \diag Z$ and to $\prod_{(G\alpha\to\beta)\in T_{i+1}} \diag Z_{\beta}$
determine a map to $\lim_{\cat A_{i+1}} \diag Z$ if and only if
their compositions to $\prod_{(G\alpha\to\beta)\in T_{i+1}}
\lim_{\matchcat{\cat D}{\beta}} \diag Z$ agree. Thus, the diagram
is a pullback square.
\end{proof}
Now define $Q$ and $R$ by letting the squares
\begin{equation}
\label{diag:pullbacks}
\vcenter{
\xymatrix{
{Q} \ar@{..>}[r] \ar@{..>}[d]
& {\lim_{\cat A'_{i+1}} \diag X} \ar[d]\\
{\lim_{\cat A_{i+1}} \diag Y} \ar[r]
& {\lim_{\cat A'_{i+1}} \diag Y}
      }
    }
    \qquad\text{and}\quad
\vcenter{
\xymatrix@=1.7em{
{R} \ar@{..>}[r] \ar@{..>}[d]
& {\prod_{(G\alpha\to\beta)\in T_{i+1}}
\lim_{\matchcat{\cat D}{\beta}} \diag X} \ar[d]\\
{\prod_{(G\alpha\to\beta)\in T_{i+1}}
\hspace{-1.5em}\diag Y_{\beta}} \ar[r]
& {\prod_{(G\alpha\to\beta)\in T_{i+1}}
\lim_{\matchcat{\cat D}{\beta}} \diag Y}
      }
    }
  \end{equation}
be pullbacks, and consider the commutative diagram
\begin{equation}
\label{diag:bigcube}
\vcenter{
\xymatrix@=.8em{
{\lim_{\cat A_{i+1}} \diag X} \ar[rrr]^{s} \ar[drr]^(.7){a}
\ar[ddr]_{\delta} \ar[ddd]_{u}
&&& {\lim_{\cat A'_{i+1}} \diag X} \ar[ddr]^{\beta}
\ar'[dd][ddd]^{v}\\
&& {Q} \ar[ur]_{c} \ar[dl]_{d} \ar'[d][ddd]^(.3){g}\\
& {\lim_{\cat A_{i+1}}\diag Y} \ar[rrr]^(.7){s'}
\ar[ddd]_(.75){u'}
&&& {\lim_{\cat A'_{i+1}}\diag Y} \ar[ddd]^{v'}\\
{\hspace{-1em}\prod_{(G\alpha\to\beta)\in T_{i+1}}
\hspace{-1.5em}\diag X_{\beta}} \ar[ddr]_{\gamma}
\ar[drr]|!{[ru];[rdd]}{\hole}^(.75){b}
\ar[rrr]|!{[ruu];[rd]}{\hole}^{t}
|!{[uurr];[rrd]}{\hole}
&&& {\prod_{\substack{\phantom{X}\\(G\alpha\to\beta)\in T_{i+1}}}
\hspace{-1.5em}\lim_{\matchcat{\cat D}{\beta}}\diag X} \ar[ddr]\\
&& {R} \ar[ur]_-{e} \ar[dl]^(.3){f}\\
& {\hspace{-1.5em}\prod_{(G\alpha\to\beta)\in T_{i+1}}
\hspace{-1.5em}\diag Y_{\beta}} \ar[rrr]_-{t'}
&&& {\hspace{-1.5em}
\prod_{\substack{\phantom{X}\\(G\alpha\to\beta)\in T_{i+1}}}
\hspace{-1.5em}\lim_{\matchcat{\cat D}{\beta}}\diag Y}
      }
    }
  \end{equation}
Lemma~\ref{lem:pbCtoCprime} implies that the front and back rectangles
are pullbacks.
\begin{lem}
\label{lem:PBf}
The square
\begin{equation}
\label{diag:PBf}
\vcenter{
\xymatrix{
{\lim_{\cat A_{i+1}}\diag X} \ar[r]^-{a} \ar[d]_{u}
& {Q} \ar[d]^{g}\\
{\prod_{(G\alpha\to\beta)\in T_{i+1}}
          \hspace{-1.5em}\diag X_{\beta}} \ar[r]_-{b}
& {R}
      }
    }
  \end{equation}
is a pullback.
\end{lem}
\begin{proof}
Let $W$ be an object of $\cat M$ and let $h\colon W \to
\prod_{(G\alpha\to\beta)\in T_{i+1}} \diag X_{\beta}$ and $k\colon W
\to Q$ be maps such that $gk = bh$; we will show that there is a
unique map $\phi\colon W \to \lim_{\cat A_{i+1}} \diag X$ such that
$a\phi = k$ and $u\phi = h$.
\begin{displaymath}
\xymatrix{
{W} \ar@/^3ex/[drr]^{k} \ar@/_3ex/[ddr]_{h}
\ar@{..>}[dr]^{\phi}\\
&{\lim_{\cat A_{i+1}} \diag X} \ar[r]^-{a} \ar[d]_{u}
& {Q} \ar[d]^{g}\\
&{\prod_{(G\alpha\to\beta)\in T_{i+1}}
\hspace{-1.5em}\diag X_{\beta}} \ar[r]_-{b}
& {R}
    }
  \end{displaymath}
The map $ck\colon W \to \lim_{\cat A'_{i+1}} \diag X$ has the
property that $v(ck) = egk = ebh = th$, and since the back rectangle
of Diagram~\ref{diag:bigcube} is a pullback, the maps $ck$ and $h$
induce a map $\phi\colon W \to \lim_{\cat A_{i+1}} \diag X$ such
that $u\phi = h$ and $s\phi = ck$. We must show that $a\phi = k$,
and since $Q$ is a pullback as in Diagram~\ref{diag:pullbacks}, this is
equivalent to showing that $ca\phi = ck$ and $da\phi = dk$.
Since $ck = s\phi = ca\phi$, we need only show that $da\phi = dk$.
Since the front rectangle of Diagram~\ref{diag:bigcube} is a pullback,
it is sufficient to show that $s'da\phi = s'dk$ and $u'da\phi =
u'dk$. For the first of those, we have
\begin{displaymath}
    s'da\phi = s'\delta\phi = \beta s\phi = \beta ck = s'dk
\end{displaymath}
and for the second, we have
\begin{displaymath}
u'da\phi = u'\delta\phi = \gamma u\phi = fbu\phi = fbh = fgk =
u'dk \rlap{\enspace .}
\end{displaymath}
Thus, the map $\phi$ satisfies $a\phi = k$.
To see that $\phi$ is the unique such map, let $\psi\colon W \to
\lim_{\cat A_{i+1}} \diag X$ be another map such that $a\psi = k$
and $u\psi = h$. We will show that $s\psi = s\phi$ and $u\psi =
u\phi$; since the back rectangle of Diagram~\ref{diag:bigcube} is a
pullback, this will imply that $\psi = \phi$.
Since $u\psi = h = u\phi$, we need only show that $s\psi = s\phi$,
which follows because $s\psi = ca\psi = ck = s\phi$.
\end{proof}
\begin{lem}
\label{lem:lastfib}
If $\diag X \to \diag Y$ is a fibration of $\cat D$-diagrams, then
the natural map
\begin{displaymath}
\lim_{\cat A_{i+1}} \diag X \longrightarrow
Q = \pullback{\lim_{\cat A'_{i+1}}\diag X}
{(\lim_{\cat A'_{i+1}}\diag Y)}{\lim_{\cat A_{i+1}}\diag Y}
\end{displaymath}
is a fibration.
\end{lem}
\begin{proof}
Lemma~\ref{lem:PBf} gives us the pullback square in Diagram~\ref{diag:PBf}
where $Q$ and $R$ are defined by the pullbacks in
Diagram~\ref{diag:pullbacks}. Since $\diag X \to \diag Y$ is a
fibration of $\cat D$-diagrams, the map $\prod_{(G\alpha\to\beta)\in
T_{i+1}} \diag X_{\beta} \to R$ is a product of fibrations and is
thus a fibration, and so the map $\lim_{\cat A_{i+1}}\diag X \to Q =
\pullback{\lim_{\cat A'_{i+1}}\diag X} {(\lim_{\cat A'_{i+1}}\diag
Y)}{\lim_{\cat A_{i+1}}\diag Y}$ is a pullback of a fibration and
is thus a fibration.
\end{proof}
\begin{lem}[Reedy]
\label{lem:reedy}
If both the front and back squares in the diagram
\begin{displaymath}
\xymatrix@C=5ex@R=1em{
{A} \ar[rr] \ar[dr]_{f_{A}} \ar[dd]
&& {B} \ar[dr]^{f_{B}} \ar'[d][dd]\\
& {A'} \ar[rr] \ar[dd]
&& {B'} \ar[dd]\\
{C} \ar[dr]_{f_{C}} \ar'[r][rr]
&& {D} \ar[dr]^{f_{D}}\\
& {C'} \ar[rr]
&& {D'}
}
\end{displaymath}
are pullbacks and both $f_{B}\colon B \to B'$ and $C \to
\pullback{C'}{D'}{D}$ are fibrations, then $f_{A}\colon A \to A'$ is
a fibration.
\end{lem}
\begin{proof}
This is the dual of a lemma of Reedy (see
\cite{MCATL}*{Lem.~7.2.15 and Rem.~7.1.10}).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:IndFibr}]
We have a commutative diagram
\begin{displaymath}
\xymatrix@=.25em{
{P_{i+1}} \ar[rr] \ar[dr] \ar[dd]
      && {\diag Y_{G\alpha}} \ar[dr] \ar'[d][dd]\\
& {P'_{i+1}} \ar[rr] \ar[dd]
      && {\diag Y_{G\alpha}} \ar[dd]\\
{\lim_{\cat A_{i+1}}\diag X} \ar[dr] \ar'[r][rr]
&& {\lim_{\cat A_{i+1}}\diag Y} \ar[dr]\\
& {\lim_{\cat A'_{i+1}}\diag X} \ar[rr]
&& {\lim_{\cat A'_{i+1}}\diag Y}
}
\end{displaymath}
in which the front and back squares are pullbacks (by definition), and so
Lemma~\ref{lem:reedy} implies that it is sufficient to show that the
map
\begin{displaymath}
\lim_{\cat A_{i+1}} \diag X \longrightarrow
\pullback{\lim_{\cat A'_{i+1}}\diag X}
{(\lim_{\cat A'_{i+1}}\diag Y)}{\lim_{\cat A_{i+1}}\diag Y}
\end{displaymath}
is a fibration; that is the statement of Lemma~\ref{lem:lastfib}.
\end{proof}
\subsection{Statement and proof of Proposition~\ref{prop:MatchIso}}
\label{sec:PrfMatchIso}
The purpose of this section is to state and prove the following
proposition, which (along with Proposition~\ref{prop:MatchFib} in
Section~\ref{sec:PrfMatchFib}) completes the proof of
Theorem~\ref{thm:RightQuil}.
\begin{prop}
\label{prop:MatchIso}
Let $G\colon \cat C \to \cat D$ be a fibering Reedy functor and let
$\diag X$ be a $\cat D$-diagram in a model category $\cat M$. If
$\alpha$ is an object of $\cat C$ for which there is an object
$\alpha \to \gamma$ of $\matchcat{\cat C}{\alpha}$ (i.e., a
non-identity map $\alpha \to \gamma$ in $\inv{\cat C}$) that $G$
takes to an identity map in $\inv{\cat D}$, then the matching map
$(G^{*}\diag X)_{\alpha} \to \mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)$
of $G^{*}\diag X$ (see Definition~\ref{def:InducedDiag}) at $\alpha$ is an
isomorphism.
\end{prop}
The proof will require several preliminary definitions
and results.
\begin{defn}
\label{def:Gkernel}
The \emph{$G$-kernel at $\alpha$} is the full subcategory of the
matching category $\matchcat{\cat C}{\alpha}$ with objects the
non-identity maps $\alpha \to \gamma$ in $\inv{\cat C}$ that $G$
takes to the identity map of $G\alpha$.
\end{defn}
If $\alpha \to \gamma$ is an object of the $G$-kernel at $\alpha$,
then the map $(G^{*}\diag X)_{\alpha} \to (G^{*}\diag X)_{\gamma}$ is
the identity map.
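Indeed, this is immediate from the definition of the induced diagram
(see Definition~\ref{def:InducedDiag}): if $G(\alpha \to \gamma) =
1_{G\alpha}$, then
\begin{displaymath}
  (G^{*}\diag X)_{\alpha} = \diag X_{G\alpha} = \diag X_{G\gamma} =
  (G^{*}\diag X)_{\gamma}
\end{displaymath}
and the map $(G^{*}\diag X)_{\alpha} \to (G^{*}\diag X)_{\gamma}$ is
$\diag X$ applied to the identity map of $G\alpha$.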
\begin{lem}
\label{lem:IdSubcat}
Under the hypotheses of Proposition~\ref{prop:MatchIso}, the nerve of the
$G$-kernel at $\alpha$ is connected.
\end{lem}
\begin{proof}
Since $G$ is a fibering Reedy functor, the nerve of the category
$\invfact{\cat C}{\alpha}{1_{G\alpha}}$ of inverse $\cat
C$-factorizations of $(\alpha, 1_{G\alpha})$ is connected, and there
is an isomorphism from the $G$-kernel at $\alpha$ to $\invfact{\cat
C}{\alpha}{1_{G\alpha}}$ that takes the object $\alpha \to \gamma$
to the object $\bigl((\alpha \to \gamma), (1_{G\alpha})\bigr)$.
\end{proof}
The matching object $\mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)$ is the
limit of a $\matchcat{\cat C}{\alpha}$-diagram (which we will also
denote by $G^{*}\diag X$); we will refer to that diagram as the
\emph{matching diagram}. The restriction of the matching diagram to
the $G$-kernel at $\alpha$ is a diagram in which every object goes to
$\diag X_{G\alpha} = (G^{*}\diag X)_{\alpha}$ and every map goes to
the identity map of $\diag X_{G\alpha}$, because if there is a
commutative triangle
\begin{displaymath}
\xymatrix@=.8em{
&{\alpha} \ar[dl]_{f} \ar[dr]^{f'}\\
{\gamma} \ar[rr]_{\tau}
&& {\gamma'}
    }
  \end{displaymath}
in $\inv{\cat C}$ in which $Gf = Gf' = 1_{G\alpha}$, then $G\tau \circ
1_{G\alpha} = 1_{G\alpha}$, and so $G\tau = 1_{G\alpha}$. Together
with Lemma~\ref{lem:IdSubcat}, this implies the following.
\begin{lem}
\label{lem:IdDiag}
Under the hypotheses of Proposition~\ref{prop:MatchIso}, the restriction
of the matching diagram to the $G$-kernel at $\alpha$ is a connected
diagram in which every object goes to $\diag X_{G\alpha}$ and every
map goes to the identity map of $\diag X_{G\alpha}$.
\end{lem}
We will prove Proposition~\ref{prop:MatchIso} by showing that for every object
$W$ of $\cat M$ the matching map induces an isomorphism of sets of
maps
\begin{equation}
\label{eq:MatchMap}
\cat M \bigl(W, (G^{*}\diag X)_{\alpha}\bigr) \longrightarrow
\cat M\bigl(W,\mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)\bigr)
\end{equation}
(see Proposition~\ref{prop:DetectIso}). The matching object $\mathrm{M}^{\cat
C}_{\alpha}(G^{*}\diag X)$ is the limit of the matching diagram, and
so maps from $W$ to $\mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)$
correspond to maps from $W$ to the matching diagram.
Lemma~\ref{lem:IdDiag} implies that if we restrict the matching diagram
to the $G$-kernel at $\alpha$, then maps from $W$ to the restriction
of that diagram to the $G$-kernel at $\alpha$ correspond to maps from
$W$ to $(G^{*}\diag X)_{\alpha}$, and that fact allows us to define a
potential inverse to \eqref{eq:MatchMap}. All that remains is to show
that our potential inverse is actually an inverse.
If $\alpha \to \beta$ and $\alpha \to \gamma$ are objects of the
matching category and there is a map $\tau\colon (\alpha \to \beta)
\to (\alpha \to \gamma)$ in the matching category, i.e., a commutative
diagram
\begin{displaymath}
\xymatrix@=.6em{
& {\alpha} \ar[dl] \ar[dr]\\
{\beta} \ar[rr]_{\tau}
&& {\gamma \rlap{\enspace ,}}
}
\end{displaymath}
then for every object $W$ of $\cat M$ and map from $W$ to the matching
diagram, the projection of that map onto $(\alpha \to \gamma)$ is
entirely determined by its projection onto $(\alpha\to\beta)$; we will
describe this by saying that the object $(\alpha\to\gamma)$ is
\emph{controlled} by the object $(\alpha\to\beta)$. Similarly, if
there is a commutative triangle
\begin{displaymath}
\xymatrix@=.6em{
& {\alpha} \ar[dl] \ar[dr]\\
{\gamma} \ar[rr]_{\tau}
&& {\gamma'}
}
\end{displaymath}
in the matching category such that $G\tau$ is an identity map, then we
will say that the object $(\alpha\to\gamma)$ is \emph{controlled} by
the object $(\alpha\to\gamma')$ \emph{and} that the object
$(\alpha\to\gamma')$ is \emph{controlled} by the object $(\alpha\to
\gamma)$. We will show by a downward induction on degree that all
objects of the matching category are controlled by objects of the
$G$-kernel at $\alpha$ (see Definition~\ref{def:controlled} and
Proposition~\ref{prop:AllControlled}).
\begin{defn}
\label{def:Gequiv}
We define an equivalence relation on the set of objects of
$\matchcat{\cat C}{\alpha}$, called \emph{$G$-equivalence at
$\alpha$}, as the equivalence relation generated by the relation
under which $f\colon \alpha \to \gamma$ is equivalent to $f'\colon
\alpha \to \gamma'$ if there is a commutative triangle
\begin{displaymath}
\xymatrix@=.8em{
&{\alpha} \ar[dl]_{f} \ar[dr]^{f'}\\
{\gamma} \ar[rr]_{\tau}
&& {\gamma'}
      }
    \end{displaymath}
with $G\tau$ an identity map.
\end{defn}
If $f$ and $f'$ are $G$-equivalent at $\alpha$, then $Gf = Gf'$, and
there is a zig-zag of identity maps connecting $\diag X_{f}$ and
$\diag X_{f'}$ in the matching diagram.
\begin{defn}
\label{def:controlled}
We define the set of \emph{controlled objects} $\{\alpha \to
\gamma\}$ of the matching category $\matchcat{\cat C}{\alpha}$ by a
decreasing induction on $\degree(G\gamma)$:
\begin{enumerate}
\item If $\alpha \to \gamma$ is an object of $\matchcat{\cat
C}{\alpha}$ such that $\degree(G\gamma) = \degree(G\alpha)$
(i.e., if $G(\alpha\to\gamma) = 1_{G\alpha}$), then $\alpha \to
\gamma$ is controlled. (That is, all objects of the $G$-kernel at
$\alpha$ are controlled.)
\item If $0 \le n < \degree(G\alpha)$ and we have defined the
    controlled objects $\alpha \to \delta$ for $n < \degree(G\delta)
\le \degree(G\alpha)$, then we define an object $\alpha \to
\gamma$ with $\degree(G\gamma) = n$ to be controlled if it is
$G$-equivalent at $\alpha$ to an object $\alpha \to \gamma'$ that
has a factorization $\alpha \to \delta \to \gamma'$ in $\inv{\cat
C}$ such that $\alpha \to \delta$ is an object of
$\matchcat{\cat C}{\alpha}$ that is controlled.
\end{enumerate}
\end{defn}
\begin{ex}
Let $G\colon \cat C \to \cat D$ be the fibering Reedy functor
between Reedy categories as in the following diagram:
\begin{displaymath}
\xymatrix@R=1.5ex@C=1em{
& {\cat C} \ar[]+<4.5em,0ex>;[rrrrrr]+<-2.5em,0ex>^{G}
&&&&&& {\cat D}\\
& {\alpha} \ar[dr] \ar[dd] \ar[lddd]_{\sigma}
&&&&&& {a} \ar[dddd]^{f}\\
&& {\beta} \ar[dl]\\
& {\gamma} \ar[dd]^{\tau}\\
{\delta} \ar[dr]_{\mu}\\
& {\epsilon}
&&&&&& {b}
    }
  \end{displaymath}
where
\begin{itemize}
\item $\cat C$ has five objects, $\alpha$, $\beta$, $\gamma$,
$\delta$, and $\epsilon$ of degrees $4$, $3$, $2$, $1$, and $0$,
respectively, and the diagram commutes;
\item $\cat D$ has two objects, $a$ and $b$ of degrees $1$ and $0$,
respectively;
\item $G\alpha = G\beta = G\gamma = a$ and $G$ takes the maps
between them to $1_{a}$;
\item $G\delta = G\epsilon = b$ and $G\mu = 1_{b}$; and
\item $G\sigma = G\tau = f$.
\end{itemize}
Every object of $\matchcat{\cat C}{\alpha}$ is controlled:
\begin{itemize}
\item The objects $\alpha \to \beta$ and $\alpha \to\gamma$ are
controlled because of the first part of Definition~\ref{def:controlled}.
\item The object $\alpha \to \epsilon$ is controlled because it is
$G$-equivalent at $\alpha$ to itself and it factors as $\alpha
\to\gamma \to\epsilon$ with the object $\alpha \to \gamma$
controlled.
\item The object $\sigma$ is controlled because it is $G$-equivalent
at $\alpha$ to $\alpha \to \epsilon$ and the latter map factors as
$\alpha \to \gamma \to \epsilon$ where the object $\alpha \to
\gamma$ is controlled.
\end{itemize}
If $\diag X$ is a $\cat D$-diagram in a model category $\cat M$,
then the induced $\cat C$-diagram $G^{*}\diag X$ has
\begin{displaymath}
(G^{*}\diag X)_{\alpha} = (G^{*}\diag X)_{\beta} =
(G^{*}\diag X)_{\gamma} = \diag X_{a}
\qquad\text{and}\qquad
(G^{*}\diag X)_{\delta} = (G^{*}\diag X)_{\epsilon} =
\diag X_{b}\rlap{\enspace ,}
\end{displaymath}
and the matching object of $(G^{*}\diag X)$ at $\alpha$ is the limit
of the diagram
\begin{displaymath}
\xymatrix@C=1.5em@R=2ex{
&& {\diag X_{a}} \ar[dl]^{1_{\diag X_{a}}}\\
& {\diag X_{a}} \ar[dd]^{\diag X_{f}}\\
{\diag X_{b}} \ar[dr]_{1_{\diag X_{b}}}\\
& {\diag X_{b} \rlap{\enspace ;}}
    }
  \end{displaymath}
that limit is isomorphic to $\diag X_{a}$, as guaranteed by
Proposition~\ref{prop:MatchIso}.
\end{ex}
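To spell out the last computation in the example above: since the two
diagonal maps in that diagram are identity maps, its limit is the
pullback
\begin{displaymath}
  \diag X_{a} \times_{\diag X_{b}} \diag X_{b} \approx \diag X_{a}
  \rlap{\enspace ,}
\end{displaymath}
and under this isomorphism the matching map $(G^{*}\diag X)_{\alpha}
\to \mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)$ is the identity map of
$\diag X_{a}$.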
The set of controlled objects has the following property.
\begin{lem}
\label{lem:controlled}
Under the hypotheses of Proposition~\ref{prop:MatchIso}, if $W$ is an object
of $\cat M$ and $h,k\colon W \to \mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag
X)$ are two maps to the matching object of $G^{*}\diag X$ at
$\alpha$ whose projections onto at least one object of the
$G$-kernel at $\alpha$ agree, then their projections onto every
controlled object agree.
\end{lem}
\begin{proof}
This follows by a decreasing induction as in
Definition~\ref{def:controlled}, using Lemma~\ref{lem:IdDiag} and
Definition~\ref{def:controlled}.
\end{proof}
That every object in the example above was controlled was not an
accident, as shown by the following result.
\begin{prop}
\label{prop:AllControlled}
Under the hypotheses of Proposition~\ref{prop:MatchIso}, every object
$f\colon \alpha \to \gamma$ of $\matchcat{\cat C}{\alpha}$ is
controlled.
\end{prop}
\begin{proof}
We will show this by a decreasing induction on the degree of
$G\gamma$ in $\cat D$, beginning with $\degree(G\alpha)$. The
induction is begun because the objects $f\colon \alpha \to \gamma$
in $\matchcat{\cat C}{\alpha}$ with $\degree(G\gamma) =
\degree(G\alpha)$ are exactly the objects of the $G$-kernel at
$\alpha$, since a map in $\inv{\cat D}$ that does not lower degree
must be an identity map.
Suppose now that $0 \le n < \degree(G\alpha)$, that every object
$\alpha \to \delta$ in $\matchcat{\cat C}{\alpha}$ with
$\degree(G\delta) > n$ is controlled, and that $f\colon \alpha \to
\gamma$ is an object of $\matchcat{\cat C}{\alpha}$ with
$\degree(G\gamma) = n$. Consider the category $\invfact{\cat
C}{\alpha}{Gf}$ of inverse $\cat C$-factorizations of $(\alpha,
Gf\colon G\alpha \to G\gamma)$. That category contains the object
$\bigl((f\colon \alpha \to \gamma), (1_{G\gamma})\bigr)$ and, if
$g\colon \alpha \to \delta$ is an object of the $G$-kernel at
$\alpha$, then it also contains the object $\bigl((g\colon \alpha
\to \delta), (Gf\colon G\alpha \to G\gamma)\bigr)$. Since $G$ is a
fibering Reedy functor, the nerve of the category $\invfact{\cat
C}{\alpha}{Gf}$ is connected, and so there must be a zig-zag of
maps in $\invfact{\cat C}{\alpha}{Gf}$ connecting those two objects.
There is a functor from $\invfact{\cat C}{\alpha}{Gf}$ to
$\matchcat{\cat C}{\alpha}$ that takes the object
$\bigl((\nu\colon\alpha \to \delta), (\mu\colon G\delta \to
G\gamma)\bigr)$ to the object $\nu\colon \alpha \to \delta$. We
will show that there is a map $\tau\colon \epsilon \to \gamma'$ in
$\invfact{\cat C}{\alpha}{Gf}$ from an object $\bigl((h\colon \alpha
\to \epsilon), (G\epsilon \to G\gamma)\bigr)$ with
$\degree(G\epsilon) > \degree(G\gamma)$ to an object $\bigl((f'\colon
\alpha \to \gamma'), (1\colon G\gamma' \to G\gamma' =
G\gamma)\bigr)$ that is $G$-equivalent to $f$. The induction
hypothesis will then imply that $h\colon \alpha \to \epsilon$ is
controlled, and since the composition $\alpha \xrightarrow{h}
\epsilon \xrightarrow{\tau} \gamma'$ equals $f'\colon \alpha \to
\gamma'$, this will imply that $f\colon \alpha \to \gamma$ is
controlled.
We first show that if $\bigl((f'\colon \alpha \to \gamma'),
(1_{G\gamma})\bigr)$ is an object of $\invfact{\cat C}{\alpha}{Gf}$
such that $f'$ is $G$-equivalent at $\alpha$ to $f$, then that
object is not the domain of any map to an object $\bigl((h\colon
\alpha \to \epsilon), (G\epsilon \to G\gamma)\bigr)$ with
$\degree(G\epsilon) \neq \degree(G\gamma)$. If $f'\colon \alpha \to
\gamma'$ is an object of $\matchcat{\cat C}{\alpha}$ that is
$G$-equivalent at $\alpha$ to $f\colon \alpha \to \gamma$, then $Gf'
= Gf$, and if there is a map in $\invfact{\cat C}{\alpha}{Gf}$ from
$\bigl((f'\colon \alpha \to \gamma'), (1_{G\gamma})\bigr)$ to
another object, then that other object must be of the form
$\bigl((f''\colon \alpha \to \gamma''), (1_{G\gamma})\bigr)$ where
$f''$ is also $G$-equivalent at $\alpha$ to $f$. This is because if
$\tau\colon \gamma' \to \gamma''$ is a map in $\inv{\cat C}$ such
that $G\tau$ is \emph{not} an identity map, then $\degree(G\gamma'')
< \degree(G\gamma') = \degree(G\gamma)$ and so there is no object of
$\invfact{\cat C}{\alpha}{Gf}$ of the form $\bigl((\tau f'\colon
\alpha \to \gamma''), (G\gamma'' \to G\gamma)\bigr)$ (because an
identity map in a Reedy category cannot factor through a
degree-lowering map), and so there can be no such map.
Since there must be a zig-zag of maps in $\invfact{\cat
C}{\alpha}{Gf}$ connecting $\bigl((g\colon \alpha \to \delta),
(Gf\colon G\alpha \to G\gamma)\bigr)$ to some object
$\bigl((f'\colon \alpha \to \gamma'), (1_{G\gamma})\bigr)$ where
$f'\colon \alpha \to \gamma'$ is $G$-equivalent to $f\colon \alpha
\to \gamma$, there must be an object $\bigl((h\colon \alpha \to
\epsilon), (G\epsilon \to G\gamma)\bigr)$ of $\invfact{\cat
C}{\alpha}{Gf}$ and a map $\tau\colon \epsilon \to \gamma'$ to an
object $\bigl((f'\colon \alpha \to \gamma'), (1_{G\gamma})\bigr)$
where $f'\colon \alpha \to \gamma'$ is $G$-equivalent to $f$ and $h$
is not $G$-equivalent to $f$. If $\degree(G\epsilon) =
\degree(G\gamma)$, then $G\tau$ must be an identity map (and so $h$
must be $G$-equivalent to $f$) because there is a commutative
triangle
\begin{displaymath}
\xymatrix@=.8em{
{G\epsilon} \ar[rr]^{G\tau} \ar[dr]
&& {G\gamma'} \ar[dl]^{1_{G\gamma'}}\\
& {G\gamma'}
}
\end{displaymath}
in which the map $G\epsilon \to G\gamma'$ is a map of $\inv{\cat D}$
that does not lower degree and is thus an identity map. Thus, the
only way an object $\bigl((f'\colon \alpha \to \gamma'),
(1_{G\gamma})\bigr)$ with $f'$ being $G$-equivalent to $f$ can
connect via a zig-zag to an object $\bigl((h\colon \alpha \to
\epsilon), (G\epsilon \to G\gamma)\bigr)$ with $h$ not
$G$-equivalent to $f$ is by way of a map $\tau\colon \epsilon \to
\gamma'$ from an object $\bigl((h\colon \alpha \to \epsilon),
(G\epsilon \to G\gamma)\bigr)$ with $\degree(G\epsilon) >
\degree(G\gamma)$, which (by the induction hypothesis) implies that
$h\colon \alpha \to \epsilon$ is controlled. In this case, the
composition $\alpha \xrightarrow{h} \epsilon \xrightarrow{\tau}
\gamma'$ equals $f'\colon \alpha \to \gamma'$, and so $f\colon
\alpha \to \gamma$ is controlled. This completes the induction.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:MatchIso}]
Proposition~\ref{prop:DetectIso} implies that it is sufficient to show that
for every object $W$ of $\cat M$ the matching map $(G^{*}\diag
X)_{\alpha} \to \mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)$ induces an
isomorphism of the sets of maps
\begin{equation}
\label{eq:MatchIso}
\xymatrix{
{\cat M\bigl(W, (G^{*}\diag X)_{\alpha}\bigr)} \ar[r]^-{\approx}
& {\cat M\bigl(W, \mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)\bigr)
\rlap{\enspace .}}
}
\end{equation}
Let $W$ be an object of $\cat M$ and let $h\colon W \to \mathrm{M}^{\cat
C}_{\alpha}(G^{*}\diag X)$ be a map. If $\alpha \to \gamma$ is an
object of $\matchcat{\cat C}{\alpha}$ that is in the $G$-kernel at
$\alpha$, then $(G^{*}\diag X)_{(\alpha \to \gamma)} = (G^{*}\diag
X)_{\gamma} = (G^{*}\diag X)_{\alpha}$, and so the projection of $h$
onto $(G^{*}\diag X)_{(\alpha \to \gamma)}$ defines a map
$\hat h\colon W \to (G^{*}\diag X)_{\alpha}$. Lemma~\ref{lem:IdDiag}
implies that the map $\hat h$ is independent of the choice of object
of the $G$-kernel at $\alpha$.
The composition
\begin{displaymath}
\xymatrix{
{W} \ar[r]^-{\hat h}
& {(G^{*}\diag X)_{\alpha}} \ar[r]
& {\mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)}
}
\end{displaymath}
has the same projection onto $(G^{*}\diag X)_{(\alpha\to\gamma)}$ as
the map $h\colon W \to \mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)$;
since every object of $\matchcat{\cat C}{\alpha}$ is controlled (see
Proposition~\ref{prop:AllControlled}), these two maps agree on every
projection of $\mathrm{M}^{\cat C}_{\alpha}(G^{*}\diag X)$ (see
Lemma~\ref{lem:controlled}), and so they are equal; thus, the map
\eqref{eq:MatchIso} is a surjection. Since the composition of the
matching map with the projection $\mathrm{M}^{\cat
C}_{\alpha}(G^{*}\diag X) \to (G^{*}\diag X)_{(\alpha \to
\gamma)}$ is $\diag X \circ G$ applied to $\alpha\to\gamma$, which
is the identity map, $\hat h$ is the only possible lift to
$(G^{*}\diag X)_{\alpha}$ of $h$, and so the map \eqref{eq:MatchIso}
is also an injection, and so it is an isomorphism.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:RtGdNec}}
\label{sec:RtGdNec}
We will first construct the $\cat D$-diagram whose existence is
asserted in Theorem~\ref{thm:RtGdNec}. The proof of the theorem is then
structured as follows:
\begin{equation}
\label{eq:RtGdNecFlow}
\vcenter{
\xymatrix@C=1.5em{
& \text{Theorem~\ref{thm:RtGdNec}} \\
\text{Proposition~\ref{prop:FibD}}\ar@{=>}[ur]
& \text{Proposition~\ref{prop:NotFib}}\ar@{=>}[u] \\
\text{Proposition~\ref{prop:ProdInts}} \ar@{=>}[r]\ar@{=>}[ur]
& \text{Proposition~\ref{prop:MatchProd}}\ar@{=>}[u]
}}
\end{equation}
The theorem will follow immediately from Proposition~\ref{prop:FibD} (which
asserts that the $\cat D$-diagram is fibrant) and
Proposition~\ref{prop:NotFib} (which asserts that the induced $\cat C$-diagram
is not fibrant).
Our $\cat D$-diagram $\diag X$ will be a diagram in the standard model
category of topological spaces. Throughout its construction, the
reader should keep the square diagram from Example~\ref{ex:NotGood} in mind.
In that example, the diagram $\diag X$ that we construct here is the
functor that sends each object in that square to the interval $I$ with
all the maps going to the identity map, and $G\colon \cat C \to \cat
D$ is the inclusion of the diagram obtained by removing the degree
zero object $\beta$ from the square.
We will define the diagram $\diag X$ inductively over the filtrations
$\mathrm{F}^{n}\cat D$ of $\cat D$ (see Definition~\ref{def:filtration} and
Proposition~\ref{prop:ConstructFilt}). To start this inductive construction,
since $G\colon \cat C \to \cat D$ is not a fibering Reedy functor,
there are objects $\alpha\in \Ob(\cat C)$ and $\beta\in \Ob(\cat D)$
and a map $\sigma\colon G\alpha \to \beta$ in $\inv{\cat D}$ such that
the nerve of the category of inverse $\cat C$-factorizations of
$(\alpha, \sigma)$ (see Definition~\ref{def:CFactors}) is nonempty and not
connected. Let $n_{\beta}$ be the degree of $\beta$. We have two
cases:
\begin{itemize}
\item If $n_{\beta} = 0$, we begin by letting $\diag X\colon
\mathrm{F}^{0}\cat D \to \mathrm{Top}$ take $\beta$ to the unit interval $I$ and all
other objects of $\mathrm{F}^{0}\cat D$ to $*$ (the one-point space).
\item If $n_{\beta} > 0$, we begin by letting $\diag X\colon
\mathrm{F}^{(n_{\beta})-1}\cat D \to \mathrm{Top}$ be the constant functor at $*$
(the one-point space). Then, to extend $\diag X$ from
$\mathrm{F}^{(n_{\beta})-1}\cat D$ to $\mathrm{F}^{n_{\beta}}\cat D$, we let
$\diag X_{\beta} = I$, the unit interval. We factor
$\mathrm{L}_{\beta}\diag X \to \mathrm{M}_{\beta}\diag X$ as
\begin{displaymath}
\mathrm{L}_{\beta}\diag X \longrightarrow I \longrightarrow
\mathrm{M}_{\beta}\diag X
\end{displaymath}
where the first map is the constant map at $0 \in I$ and the second
map is the unique map $I \to *$ (since $\diag X_{\gamma} = *$ is the
terminal object of $\mathrm{Top}$ for all objects $\gamma$ of degree less
than $n_{\beta}$, that matching object is $*$). If $\gamma$ is any
other object of $\cat D$ of degree $n_{\beta}$, we let $\diag
X_{\gamma} = \mathrm{M}_{\gamma}\diag X$ and let $\mathrm{L}_{\gamma}\diag X
\to \diag X_{\gamma} \to \mathrm{M}_{\gamma}\diag X$ be the natural map
followed by the identity map.
\end{itemize}
We now define $\diag X\colon \mathrm{F}^{n}\cat D \to \mathrm{Top}$ for $n > n_{\beta}$
inductively on $n$ by letting $\diag X_{\gamma} = \mathrm{M}_{\gamma}\diag
X$ for every object $\gamma$ of degree $n$ and letting the
factorization $\mathrm{L}_{\gamma}\diag X \to \diag X_{\gamma} \to
\mathrm{M}_{\gamma}\diag X$ be the natural map followed by the identity
map.
\begin{prop}
\label{prop:FibD}
The $\cat D$-diagram of topological spaces $\diag X$ is fibrant.
\end{prop}
\begin{proof}
The matching map at the object $\beta$ of $\cat D$ is the map $I \to
*$, which is a fibration, and the matching map at every other object
of $\cat D$ is an identity map, which is also a fibration.
\end{proof}
\begin{prop}
\label{prop:ProdInts}
\leavevmode
\begin{enumerate}
\item For every object $\gamma$ in $\cat D$ the space $\diag
X_{\gamma}$ is homeomorphic to a product of unit intervals, one
for each map $\gamma \to \beta$ in $\inv{\cat D}$ (and so, for
objects $\gamma$ for which there are no maps $\gamma \to \beta$ in
$\inv{\cat D}$, the space $\diag X_{\gamma}$ is the empty product,
and is thus equal to the terminal object, the one-point space
$*$).
\item Under the isomorphisms of part~1, if $\tau\colon \gamma \to
\delta$ is a map in $\inv{\cat D}$, then the projection of $\diag
X_{\tau}\colon \diag X_{\gamma} \to \diag X_{\delta}$ onto the
factor $I$ of $\diag X_{\delta}$ indexed by a map $\mu\colon
\delta \to \beta$ in $\inv{\cat D}$ is the projection of $\diag
X_{\gamma}$ onto the factor $I$ of $\diag X_{\gamma}$ indexed by
$\mu\tau\colon \gamma \to \beta$.
\end{enumerate}
That is, the $\cat D$-diagram of topological spaces $\diag X$ is
isomorphic to the composition
\begin{displaymath}
\cat D \longrightarrow \mathrm{Set}^{\mathrm{op}} \longrightarrow \mathrm{Top}
\end{displaymath}
in which the first map takes an object $\gamma$ of $\cat D$ to
$\mathrm{Hom}_{\inv{\cat D}}(\gamma,\beta)$ and the second map takes a set
$S$ to a product, indexed by the elements of $S$, of copies of the
unit interval $I$.
\end{prop}
\begin{proof}
We will use an induction on $n$ to prove both parts of the
proposition simultaneously for the restriction of $\diag X$ to each
filtration $\mathrm{F}^{n}\cat D$ of $\cat D$. The induction is begun at $n
= n_{\beta}$ because the only map in $\mathrm{F}^{n_{\beta}}\inv{\cat D}$ to
$\beta$ is the identity map of $\beta$, the only object of
$\mathrm{F}^{n_{\beta}}\inv{\cat D}$ at which $\diag X$ is not a single
point is $\beta$, and $\diag X_{\beta} = I$.
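Spelled out, the base case of part~1 identifies $\diag X_{\beta}$ with the product of a single copy of $I$:

```latex
\begin{displaymath}
  \diag X_{\beta} = I \approx \prod_{(\beta\to\beta)\in\inv{\cat D}} I
  \rlap{\enspace ,}
\end{displaymath}
```

since the only map $\beta \to \beta$ in $\inv{\cat D}$ is $1_{\beta}$, while every other object of $\mathrm{F}^{n_{\beta}}\cat D$ admits no map to $\beta$ in $\inv{\cat D}$ and so contributes the empty product $*$.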
Suppose now that $n > n_{\beta}$, the statement is true for the
restriction of $\diag X$ to $\mathrm{F}^{n-1}\cat D$, and that $\gamma$ is
an object of degree $n$. The space $\diag X_{\gamma}$ is defined to
be the matching object $\mathrm{M}_{\gamma}\diag X =
\lim_{\matchcat{\cat D}{\gamma}} \diag X$. There is a discrete
subcategory ${\cat E}_{\gamma}$ of the matching category
$\matchcat{\cat D}{\gamma}$ consisting of the maps $\gamma \to
\beta$ in $\inv{\cat D}$, and so there is a projection map
\begin{displaymath}
\mathrm{M}_{\gamma}\diag X =
\lim_{\matchcat{\cat D}{\gamma}} \hspace{-0.5em}\diag X
\longrightarrow \lim_{\cat E_{\gamma}} \diag X =
\prod_{(\gamma \to \beta)\in\inv{\cat D}}
\hspace{-1em}\diag X_{\beta} =
\prod_{(\gamma \to \beta)\in\inv{\cat D}} \hspace{-1em}I \rlap{\enspace .}
\end{displaymath}
We will show that that projection map $p\colon \lim_{\matchcat{\cat
D}{\gamma}} \diag X \to \prod_{(\gamma\to \beta)\in\inv{\cat D}}
I$ is a homeomorphism by defining an inverse homeomorphism
\begin{displaymath}
q\colon \prod_{(\gamma\to \beta)\in\inv{\cat D}} \hspace{-1em}I
\longrightarrow \lim_{\matchcat{\cat D}{\gamma}}
\hspace{-0.5em}\diag X \rlap{\enspace .}
\end{displaymath}
We define the map $q$ by defining its projection onto $\diag
X_{(\tau\colon\gamma\to\delta)} = \diag X_{\delta}$ for each object
$(\tau\colon\gamma \to \delta)$ of $\matchcat{\cat D}{\gamma}$. The
induction hypothesis implies that $\diag X_{\tau} = \diag
X_{\delta}$ is isomorphic to $\prod_{(\delta \to \beta)\in\inv{\cat
D}} I$, and we let the projection onto the factor indexed by
$\mu\colon \delta \to \beta$ be the projection of $\prod_{(\gamma\to
\beta)\in\inv{\cat D}} I$ onto the factor indexed by
$\mu\tau\colon \gamma \to \beta$. To see that this defines a map to
$\lim_{\matchcat{\cat D}{\gamma}} \diag X$, let $\nu\colon \delta
\to \epsilon$ be a map from $\tau\colon \gamma \to \delta$ to
$\nu\tau\colon \gamma \to \epsilon$ in $\matchcat{\cat D}{\gamma}$
(see Diagram~\ref{diag:ProdInts}).
The induction hypothesis implies that the projection of the map
$\diag X_{\nu}\colon \diag X_{\tau} = \diag X_{\delta} \to \diag
X_{\nu\tau} = \diag X_{\epsilon}$ onto the factor of $\diag
X_{\epsilon}$ indexed by $\xi\colon \epsilon \to \beta$ in
$\inv{\cat D}$ is the projection of $\diag X_{\tau} = \diag
X_{\delta}$ onto the factor indexed by $\xi\nu\colon \delta \to
\beta$.
\begin{equation}
\label{diag:ProdInts}
\vcenter{
\xymatrix@C=1em{
&{\gamma} \ar[dl]_{\tau} \ar[dr]^{\nu\tau}\\
{\delta} \ar[rr]_{\nu}
&& {\epsilon} \ar[rr]_{\xi}
&& {\beta}
}}
\end{equation}
Thus, the projection of the composition
$\prod_{(\gamma\to\beta)\in\inv{\cat D}} I \to \diag X_{\tau} =
\diag X_{\delta} \xrightarrow{\diag X_{\nu}} \diag X_{\nu\tau} =
\diag X_{\epsilon}$ onto the factor indexed by $\xi\colon \epsilon
\to \beta$ equals the projection of
$\prod_{(\gamma\to\beta)\in\inv{\cat D}} I$ onto the factor indexed
by $\xi\nu\tau\colon \gamma \to \beta$, which equals that same
projection of the map $\prod_{(\gamma\to\beta)\in\inv{\cat D}} I \to
\diag X_{(\nu\tau\colon \gamma \to \epsilon)} = \diag X_{\epsilon}$. Thus,
we have defined the map $q$.
It is immediate from the definitions that $pq$ is the identity map
of $\prod_{(\gamma\to\beta)\in\inv{\cat D}} I$. To see that $qp$ is
the identity map of $\lim_{\matchcat{\cat D}{\gamma}} \diag X$, we
first note that the definitions immediately imply that the
projection of $qp$ onto each $\diag X_{(\gamma \to \beta)} = \diag
X_{\beta}$ equals the corresponding projection of the identity map
of $\lim_{\matchcat{\cat D}{\gamma}} \diag X$. If $\tau\colon
\gamma \to \delta$ is any other object of $\matchcat{\cat
D}{\gamma}$, then the induction hypothesis implies that $\diag
X_{\tau} = \diag X_{\delta}$ is homeomorphic to the product
$\prod_{(\delta\to\beta)\in\inv{\cat D}}I$. Every $\mu\colon \delta
\to \beta$ in $\inv{\cat D}$ defines a map $\mu_{*}\colon
(\tau\colon \gamma\to\delta) \to (\mu\tau\colon \gamma \to \beta)$
in $\matchcat{\cat D}{\gamma}$, and the induction hypothesis implies
that the map $\diag X_{\mu}\colon \diag X_{\tau} = \diag X_{\delta}
\to \diag X_{\mu\tau} = \diag X_{\beta} = I$ is projection onto the
factor indexed by $\mu$. Thus, for any map to $\lim_{\matchcat{\cat
D}{\gamma}} \diag X$, its projection onto $\diag X_{\tau} =
\diag X_{\delta}$ is determined by its projections onto the $\diag
X_{(\gamma\to\beta)\in\inv{\cat D}}$; since $qp$ and the identity
map agree on those projections, $qp$ must equal the identity map.
This completes the induction for part~1.
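To unpack the claim that $pq$ is the identity of $\prod_{(\gamma\to\beta)\in\inv{\cat D}} I$ (writing $\mathrm{pr}_{\mu}$, just for this remark, for the projection onto the factor indexed by $\mu\colon \gamma \to \beta$): the $\mu$-indexed factor of $pq$ is the projection of $q$ onto $\diag X_{(\mu\colon\gamma\to\beta)} = \diag X_{\beta} = I$, whose single factor is indexed by $1_{\beta}$, so the definition of $q$ gives

```latex
\begin{displaymath}
  \mathrm{pr}_{\mu} \circ (pq)
  = \mathrm{pr}_{1_{\beta}\circ\mu}
  = \mathrm{pr}_{\mu}
  \rlap{\enspace .}
\end{displaymath}
```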
For part~2, for every map $\tau\colon \gamma \to \delta$ in
$\inv{\cat D}$ the map $\diag X_{\tau}\colon \diag X_{\gamma} \to
\diag X_{\delta}$ equals the composition
\begin{displaymath}
\diag X_{\gamma} \longrightarrow
\lim_{\matchcat{\cat D}{\gamma}} \hspace{-0.5em}\diag X
\longrightarrow \diag X_{\delta}
\end{displaymath}
where the first map is the matching map of $\diag X$ at $\gamma$ and
the second is the projection from the limit $\lim_{\matchcat{\cat
D}{\gamma}} \diag X \to \diag X_{(\tau\colon \gamma \to \delta)}
= \diag X_{\delta}$ (this is the case for \emph{every} $\cat
D$-diagram in $\cat M$, not just for $\diag X$). Since the matching
map at every object other than $\beta$ is the identity map, the map
$\diag X_{\tau}\colon \diag X_{\gamma} \to \diag X_{\delta}$ is the
projection $\lim_{\matchcat{\cat D}{\gamma}}\diag X \to \diag
X_{(\tau\colon \gamma \to \delta)} = \diag X_{\delta}$. The
discussion in the previous paragraph shows that the projection of
$\diag X_{\tau}\colon \diag X_{\gamma} \to \diag X_{\delta}$ onto
the factor of $\diag X_{\delta}$ indexed by $\mu\colon \delta \to
\beta$ is the projection of $\diag X_{\gamma}$ onto the factor
indexed by $\mu\tau\colon \gamma \to \beta$. This completes the
induction for part~2.
\end{proof}
We now consider the diagram $G^{*}\diag X$ that $G\colon \cat C \to
\cat D$ induces on $\cat C$ from $\diag X$.
\begin{prop}
\label{prop:MatchProd}
The matching object $\mathrm{M}^{\cat C}_{\alpha} G^{*}\diag X =
\lim_{\matchcat{\cat C}{\alpha}} G^{*}\diag X$ of the induced
diagram on $\cat C$ at $\alpha$ is homeomorphic to a product of unit
intervals indexed by the union over the maps $\tau\colon G\alpha \to
\beta$ in $\inv{\cat D}$ of the sets of path components of the nerve
of the category of inverse $\cat C$-factorizations of $(\alpha,
\tau)$. That is,
\begin{displaymath}
\mathrm{M}^{\cat C}_{\alpha} G^{*}\diag X \approx
\prod_{(\tau\colon G\alpha\to\beta)\in\inv{\cat D}}
\left(\prod_{\pi_{0}\mathrm{N}\text{(category of inverse
$\cat C$-factorizations of $(\alpha, \tau)$)}}
\hspace{-4em}I \hspace{3.5em}\right)
\rlap{\enspace .}
\end{displaymath}
\end{prop}
\begin{proof}
Let $S = \coprod_{(\alpha \to \gamma)\in\Ob(\matchcat{\cat
C}{\alpha})} \inv{\cat D}(G\gamma,\beta)$, the disjoint union
over all objects $\alpha\to\gamma$ of $\matchcat{\cat C}{\alpha}$ of
the set of maps $\inv{\cat D}(G\gamma,\beta)$. An element of $S$ is
then an ordered pair $\bigl((\nu\colon \alpha \to \gamma),
(\mu\colon G\gamma \to \beta)\bigr)$ where $\nu\colon \alpha \to
\gamma$ is an object of $\matchcat{\cat C}{\alpha}$ and $\mu\colon
G\gamma \to \beta$ is a map in $\inv{\cat D}$, and is thus an object
of the category of inverse $\cat C$-factorizations of the
composition $(\alpha, G\alpha \xrightarrow{G\nu} G\gamma
\xrightarrow{\mu} \beta)$, i.e., of $(\alpha, \mu\circ G\nu\colon
G\alpha \to\beta)$. Every object of the category of inverse $\cat
C$-factorizations of every map $(\alpha, \tau\colon G\alpha \to
\beta)$ in $\inv{\cat D}$ appears exactly once, and so the set $S$
is the union over all maps $\tau\colon G\alpha \to \beta$ in
$\inv{\cat D}$ of the set of objects of the category of inverse
$\cat C$-factorizations of $(\alpha, \tau)$.
Proposition~\ref{prop:ProdInts} implies that for every object $\tau\colon
\alpha \to \gamma$ in $\matchcat{\cat C}{\alpha}$ the space
$(G^{*}\diag X)_{\tau} = (G^{*}\diag X)_{\gamma} = \diag
X_{G\gamma}$ is a product of unit intervals, one for each map
$G\gamma \to \beta$ in $\inv{\cat D}$, and so the product over all
objects $\tau\colon \alpha\to\gamma$ of $\matchcat{\cat C}{\alpha}$
of $(G^{*}\diag X)_{\tau} = (G^{*}\diag X)_{\gamma} = \diag
X_{G\gamma}$ is homeomorphic to the product of unit intervals
indexed by $S$, i.e.,
\begin{displaymath}
\prod_{(\alpha\to\gamma)\in\Ob(\matchcat{\cat C}{\alpha})}
\hspace{-3em} (G^{*}\diag X)_{\gamma}
\approx
\prod_{S} I \rlap{\enspace .}
\end{displaymath}
The matching object $\mathrm{M}^{\cat C}_{\alpha} G^{*}\diag X$ is a
subspace of that product. More specifically, it is the subspace
consisting of the points such that, for every map
\begin{displaymath}
\xymatrix@=.6em{
& {\alpha} \ar[dl]_{\nu} \ar[dr]^{\nu'}\\
{\gamma} \ar[rr]_{\tau}
&& {\gamma'}
}
\end{displaymath}
in $\matchcat{\cat C}{\alpha}$ from $\nu\colon \alpha \to \gamma$ to
$\nu'\colon \alpha \to \gamma'$ and every map $\mu'\colon G\gamma'
\to \beta$ in $\inv{\cat D}$, the projection onto the factor indexed
by $\bigl((\nu'\colon \alpha \to \gamma'), (\mu'\colon G\gamma' \to
\beta)\bigr)$ equals the projection onto the factor indexed by
$\bigl((\nu\colon \alpha \to \gamma), (\mu'\circ(G\tau)\colon
G\gamma \to \beta)\bigr)$.
Generate an equivalence relation on $S$ by letting $\bigl((\nu\colon
\alpha \to \gamma), (\mu\colon G\gamma \to \beta)\bigr)$ be
equivalent to $\bigl((\nu'\colon \alpha \to \gamma'), (\mu'\colon
G\gamma' \to \beta)\bigr)$ if there is a map $\tau\colon \gamma \to
\gamma'$ in $\inv{\cat C}$ such that $\tau\nu = \nu'$ and $\mu'\circ
(G\tau) = \mu$, i.e., if there is a map in the category of inverse
$\cat C$-factorizations of $(\alpha, \mu \circ (G\nu)\colon G\alpha
\to \beta)$ from $\bigl((\nu\colon \alpha\to\gamma), (\mu\colon
G\gamma \to \beta)\bigr)$ to $\bigl((\nu'\colon \alpha\to\gamma'),
(\mu'\colon G\gamma' \to\beta)\bigr)$; let $T$ be the set of
equivalence classes. This makes two objects in the category of
inverse $\cat C$-factorizations of a map equivalent if there is a
zig-zag of maps in that category from one to the other, i.e., if
those two objects are in the same component of the nerve, and so the
set $T$ is the disjoint union over all maps $\tau\colon G\alpha \to
\beta$ in $\inv{\cat D}$ of the set of components of the nerve of
the category of inverse $\cat C$-factorizations of $(\alpha, \tau)$,
i.e.,
\begin{displaymath}
T = \coprod_{(\tau\colon G\alpha\to\beta)\in\inv{\cat D}}
\hspace{-1.5em}\pi_{0}\mathrm{N}\bigl(\text{category of
inverse $\cat C$-factorizations of $(\alpha, \tau)$}\bigr)
\rlap{\enspace .}
\end{displaymath}
Let $T'$ be a set of representatives of the equivalence classes $T$
(i.e., let $T'$ consist of one element of $S$ from each equivalence
class); we will show that the composition
\begin{displaymath}
\xymatrix{
{\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag X} \ar[r]^-{\subset}
& {\prod_{S} I} \ar[r]^{p'}
& {\prod_{T'} I}
}
\end{displaymath}
(where $p'$ is the projection) is a homeomorphism. We will do that
by constructing an inverse $q\colon \prod_{T'} I \to \mathrm{M}^{\cat
C}_{\alpha} G^{*}\diag X$ to the map $p\colon \mathrm{M}^{\cat
C}_{\alpha} G^{*}\diag X \to \prod_{T'}I$ (where $p$ is the
restriction of $p'$ to $\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag X$).
We first construct a map $q'\colon \prod_{T'} I \to \prod_{S} I$ by
letting the projection of $q'$ onto the factor indexed by $s \in S$
be the projection of $\prod_{T'}I$ onto the factor indexed by the
unique $t \in T'$ that is equivalent to $s$. The description above
of the subspace $\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag X$ of $\prod_{S}
I$ makes it clear that $q'$ factors through $\mathrm{M}^{\cat
C}_{\alpha}G^{*}\diag X$ and thus defines a map $q\colon
\prod_{T'}I \to \mathrm{M}^{\cat C}_{\alpha} G^{*}\diag X$.
The composition $pq$ equals the identity of $\prod_{T'}I$ because
the composition $p'q'$ equals the identity of $\prod_{T'}I$. To see
that the composition $qp$ equals the identity of $\mathrm{M}^{\cat
C}_{\alpha}G^{*}\diag X$, it is sufficient to see that the
projection of $qp$ onto the factor $I$ indexed by every element $s$
of $S$ agrees with that of the identity map of $\mathrm{M}^{\cat
C}_{\alpha}G^{*}\diag X$. Since the projections of points in
$\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag X$ onto factors indexed by
equivalent elements of $S$ are equal, and it is immediate that the
projection of $\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag X$ onto a factor
indexed by an element of the set of representatives $T'$ agrees with
the corresponding projection of $qp$, the projections for every
element of $S$ must agree, and so $qp$ equals the identity of
$\mathrm{M}^{\cat C}_{\alpha}G^{*}\diag X$.
\end{proof}
\begin{prop}
\label{prop:NotFib}
The diagram $G^{*}\diag X$ induced on $\cat C$ is not a fibrant
$\cat C$-diagram.
\end{prop}
\begin{proof}
We will show that the matching map $(G^{*}\diag X)_{\alpha} \to
\mathrm{M}^{\cat C}_{\alpha} G^{*}\diag X$ of the induced $\cat
C$-diagram at $\alpha$ is not a fibration. Since the matching
object $\mathrm{M}^{\cat C}_{\alpha} G^{*}\diag X$ is a product of unit
intervals (see Proposition~\ref{prop:MatchProd}), it is path connected, and
so if the matching map were a fibration, it would be surjective. We
will show that the matching map is not surjective.
Since $\sigma\colon G\alpha \to \beta$ is a map in $\inv{\cat D}$
such that the nerve of the category of inverse $\cat
C$-factorizations of $(\alpha, \sigma)$ is not connected, we can
choose objects $(\nu\colon \alpha \to \gamma, \mu\colon G\gamma \to
\beta)$ and $(\nu'\colon \alpha \to \gamma', \mu'\colon G\gamma' \to
\beta)$ of that category that represent different path components of
that nerve. Since $\mu\circ (G\nu) = \mu'\circ (G\nu')$,
Proposition~\ref{prop:ProdInts} implies that the projections of the matching
map onto the copies of $I$ indexed by those objects are equal, and
so the projection onto the $I\times I$ indexed by that pair of
components factors as the composition $(G^{*}\diag X)_{\alpha} \to I \to
I\times I$, where that second map is the diagonal map and is thus
not surjective.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:RtGdNec}]
This follows from Proposition~\ref{prop:FibD} and Proposition~\ref{prop:NotFib}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:MainCofibering}}
\label{sec:PrfCofibering}
Since $\cat M$ is complete, the right adjoint of $G^{*}$ exists and
can be constructed pointwise (see \cite{borceux-I}*{Thm.~3.7.2} or
\cite{McL:categories}*{p.~235}), and Theorem~\ref{thm:GoodisGood} implies
that $(G^{\mathrm{op}})^{*}\colon (\cat M^{\mathrm{op}})^{\cat D^{\mathrm{op}}} \to (\cat M^{\mathrm{op}})^{\cat
C^{\mathrm{op}}}$ is a right Quillen functor for every model category $\cat
M^{\mathrm{op}}$ if and only if $G^{\mathrm{op}}$ is fibering (because every model category
$\cat N$ is of the form $\cat M^{\mathrm{op}}$ for $\cat M = \cat N^{\mathrm{op}}$).
Proposition~\ref{prop:OpFiberingCofibering} implies that the functor $G\colon
\cat C \to \cat D$ is cofibering if and only if its opposite
$G^{\mathrm{op}}\colon \cat C^{\mathrm{op}} \to \cat D^{\mathrm{op}}$ is fibering, and
Theorem~\ref{thm:GoodisGood} implies that this is the case if and only if
$(G^{\mathrm{op}})^{*}\colon (\cat M^{\mathrm{op}})^{\cat D^{\mathrm{op}}} \to (\cat M^{\mathrm{op}})^{\cat C^{\mathrm{op}}}$
is a right Quillen functor for every model category $\cat M^{\mathrm{op}}$, which
is the case if and only if $G^{*}\colon \cat M^{\cat D} \to \cat
M^{\cat C}$ is a left Quillen functor for every model category $\cat
M$ (see Proposition~\ref{prop:OpQuillen} and Proposition~\ref{prop:OppositeReedy}).
\qed
\begin{bibdiv}
\begin{biblist}
\bib{borceux-I}{book}{
author={Borceux, Francis},
title={Handbook of categorical algebra. 1},
series={Encyclopedia of Mathematics and its Applications},
volume={50},
note={Basic category theory},
publisher={Cambridge University Press, Cambridge},
date={1994},
pages={xvi+345},
}
\bib{cosimplcalc}{article}{
author={Eldred, Rosona},
title={Cosimplicial models for the limit of the {G}oodwillie tower},
date={2012},
eprint={http://arxiv.org/pdf/1108.0114v5}
}
\bib{MCATL}{book}{
author={Hirschhorn, Philip S.},
title={Model categories and their localizations},
series={Mathematical Surveys and Monographs},
volume={99},
publisher={American Mathematical Society, Providence, RI},
date={2003},
pages={xvi+457},
}
\bib{diagn}{article}{
author={Hirschhorn, Philip S.},
title={The diagonal of a multicosimplicial object},
date={2015},
eprint={http://arxiv.org/abs/1506.06837}
}
\bib{FTHoLinks}{article}{
author={Koytcheff, Robin},
author={Munson, Brian A.},
author={Voli{\'c}, Ismar},
title={Configuration space integrals and the cohomology of the space
of homotopy string links},
journal={J. Knot Theory Ramifications},
volume={22},
date={2013},
number={11},
pages={1--73},
}
\bib{McL:categories}{book}{
author={MacLane, Saunders},
title={Categories for the working mathematician},
note={Graduate Texts in Mathematics, Vol. 5},
publisher={Springer-Verlag, New York-Berlin},
date={1971},
pages={ix+262},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
\label{sec.introduction}
Several studies have revealed that the ability of humans to make predictions is not only essential for motor control, but it is also fundamental for high level cognitive functions including action recognition, understanding, imitation, mental replay, and social cognition \citep{Wolpert2001}.
Improving the ability of robots to make predictions is a promising direction to enhance their skills, not only in motor control and prediction of their own body, but also in fostering the understanding of others' actions.
Well-established learning systems for motor prediction and control~\citep{Wolpert1998,Kawato1999,demiris2006hierarchical}
are built on internal models, namely \textit{forward} and \textit{inverse} models. The former provides a prediction of the state of the agent given the current state and an action, while the latter provides a mapping in the opposite direction: given a target state and the current state, it retrieves the action to bring the system from the current state to the target.
Assuming existing similarities between agents, the internal model used to predict one's own actions can be instrumental to predict the (\textit{visual}) consequences of someone else's actions \citep{demiris2006hierarchical, demiris2014mirror}.
The assumption of the existence of similarities between agents poses a challenge in robotics, known as the correspondence problem \citep{hafner2005interpersonal,alissandrakis2002imitation,nehaniv1998mapping}. This paper does not address this problem. Instead, we assume that the robot has access to visual information from an egocentric point of view. A solution to address general scenarios where the spatial perspective that the robot acquires of its own and of others' actions is different has been proposed for example in \citep{JohnsonDemiris05, Fischer2016}. However, in this work it is assumed that agents share the same perspective (the same assumption is generally made in similar applications \citep{baraglia2015motor,copete2016motor}).
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{fig1.pdf}
\caption{Overview of the learning architecture.
The self-learned model can be used to reconstruct missing data, make predictions, and control the robot's motion.
When observing others, only the visual information is available.
The model learned can reconstruct the multimodal state of the robot, including the proprioceptive, visual, tactile, sound and the motor commands data, from partial information (left).
The model can also be used to make predictions in the futures, by feeding reconstructed data back to the model (center).
Finally, the model can generate motor commands that can be issue directly to the robot's joints to imitate others' visual trajectories (right).
}
\label{fig.architecture}
\end{figure*}
While several studies have focused on predicting outcomes of actions of the agent (\textit{e.g.} learning a forward model) or actions of others
(\textit{e.g.} human trajectories from images or videos)
\citep{kamel2018deep,kamel2019efficient,kamel2019investigation},
in this paper the goal is to learn a model of the self that can be applied to predict and imitate the visual perception of another agent from an egocentric point of view.
The proposed architecture is based on a self-learned model, which is built, trained and updated only using the experience accumulated by the agent.
The advantage of self-learned models is that they can be used without specific prior knowledge about the robot, for example its morphology or predefined forward and inverse models. This information might be unavailable in some cases, such as in soft robotics or after a mechanical damage. Self-learned models can enable robots to learn on their own how to behave in those circumstances \citep{cully2015robots, kriegman2019automated}.
However, one of the major obstacles in using self-learned internal models to predict motion of others is the intrinsic difference between the available data. While the model is learned and exploited by the agent using a whole range of available sensory modalities, only the visual information is available when observing someone else's motion.
In this paper, we overcome this challenge by implementing a model
which is able to retrieve the missing sensory information and motor commands needed for mimicking and predicting the visual trajectories of another agent's action.
As a result, the main contribution of this paper is a learning architecture that uses
a multimodal variational autoencoder in a versatile manner to
(1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual perception of another agent from an egocentric point of view, and (3) imitate the observed agent's visual trajectory.
This architecture represents a unified representation of the traditional forward and inverse model leveraging their synergy to implement functions that are fundamental for autonomous systems. An overview of the proposed learning architecture is shown in Fig.~\ref{fig.architecture}.
Variational autoencoders \citep{kingma2013auto, rezende2014stochastic} have recently emerged as one of the most popular approaches for unsupervised learning of complex distributions of data.
One of their key characteristics is that they can model the probability distribution of the reconstructed data and its distribution in the latent space.
In this paper, we extend a traditional variational autoencoder model to reconstruct the probability distribution of non-observed modalities (\textit{e.g.} joint positions and velocities) given observed modalities (\textit{e.g.} visual position of the end-effector). Using probability distributions is particularly important in the case of robotics applications, as it allows the system to take into account the redundancy of the system.
Typically, several joint positions lead to the same end-effector position, and such relationships can be captured by the learned conditional probability distribution.
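This one-to-many relationship can be illustrated with a toy two-link planar arm (link lengths and angles below are illustrative, not the iCub's): two distinct joint configurations reach exactly the same end-effector position, so a deterministic inverse mapping is ill-posed, while a conditional distribution over joint angles is not.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector (x, y)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# Elbow-down and elbow-up configurations reaching the same point: the inverse
# mapping is one-to-many, which is what a learned conditional distribution
# over joint angles can capture (a point estimate cannot).
q_down = np.array([0.3, 0.8])
q_up = np.array([0.3 + 0.8, -0.8])  # mirrored configuration (equal link lengths)

assert np.allclose(fk(q_down), fk(q_up))
print(fk(q_down))
```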
An important aspect of this work is also the training strategy used to learn this model. Specifically, we propose to train the model to reconstruct the input even when only part of it is available, by adopting a denoising approach. Our experiments, presented in Section \ref{sec.experiments}, show that this training method also improves the performance of various alternative models on the task at hand.
The paper is organized as follows: the multimodal variational autoencoder implementation is introduced in Section \ref{sec.methodology}.
Experiments have been performed by using a humanoid iCub robot
and results are reported and discussed in Sections \ref{sec.experiments} and \ref{sec.discussion}, respectively.
\section{Related work}
\label{sec.relatedwork}
\paragraph*{Learning internal models in robotics}
Learning algorithms have proven to be an effective means of building internal models for robots. Learning strategies achieve flexibility and adaptability in building robots' kinematic and dynamic models, by incorporating uncertainties and nonlinearities, as well as dynamical changes due to wear, and in limiting the influence of specific engineered settings.
Many approaches to learn controllers for robots have been proposed, including for example reinforcement learning \citep{sutton1998reinforcement,abbeel2007application} and learning by demonstration \citep{argall2009survey,billard2008robot}.
Various implementations have been proposed, such as Gaussian processes \citep{deisenroth2011pilco,williams2009multi}, neural networks \citep{miller1995neural,kawato1988hierarchical}
and more recently deep neural networks \citep{Hinton2006,levine2016end}.
The majority of these studies have focused on learning controllers,
where the goal is to learn a policy or an inverse model in order to generate motor commands given a target input.
Typically, learning forward models has been less investigated in traditional robotics because they can be directly defined based on the kinematic structure of the robot. However,
learning such models is fundamental to provide robots with a prediction capability, enabling predictions not only about their own actions but also about the actions of others.
Forward and inverse models learning is a general approach to allow robots to learn new skills. Forward models generate state predictions from current state and action, while inverse models generate actions from states. These two capabilities enable robots to perform predictions, ``mental simulation'', planning, and control \citep{Wolpert1998,Kawato1999,demiris2006hierarchical}.
In developmental robotics, such models are acquired by designing learning mechanisms to let a robot build its own perceptive and behavioral repertoire. The focus is to investigate the acquisition of motor skills from sensorimotor interaction with the environment \citep{Lungarella2003}.
As a result, the developmental approach aims to endow robots with all the learning capabilities that may be necessary to build rich and flexible sensorimotor representations
\citep{Sigaud2016}.
Several studies have addressed the problem of learning internal models from sensorimotor data through exploration strategies, including for example learning of visuomotor models \citep{droniou2012autonomous,vicente2016online}, learning of dynamics models \citep{calandra2015learning}, and learning from multiple sensory signals and possible partial information \citep{fitzpatrick2006reinforcing,vicente2016online,ruesch2008multimodal}.
Internal models (forward and inverse models) are usually learned separately \citep{Wolpert1998,Kawato1999,demiris2006hierarchical}:
the forward model is used to make predictions, and the inverse model is used for control.
The method proposed in our paper instead achieves these two capabilities in conjunction. This can be a valuable asset, for example in terms of number of parameters used (one network instead of multiple ones). Our proposed approach also provides a compact yet powerful model that can achieve satisfactory performance on both prediction and control tasks.
One powerful way to learn internal models is imitation, considered a fundamental part of learning in humans and used as a mechanism of learning for robots \citep{Demiris2005crib}.
The ability to predict someone else's movements inherently incorporates the necessity of understanding others' motion, being able to simulate it by developing learning as well as imitation skills.
A vast literature exists in the robotics domain addressing imitation, in particular the paradigm of learning by imitation
\citep{schaal2003computational,calinon2010probabilistic,lopes2005visual},
and the related correspondence problem \citep{hafner2005interpersonal,alissandrakis2002imitation,nehaniv1998mapping}
arising from the structural (kinematic/dynamic) differences between a demonstrator and a learner agent. Imitation can happen at different levels, such as at the action level, or at the effect level \citep{nehaniv2001like}.
Recently, advances on motion analysis and estimation have been proposed \citep{kamel2018deep,kamel2019efficient,kamel2019investigation}, and these techniques have also been applied to humanoid robot motion learning through sensorimotor representation and physical interactions \citep{shimizu2014robust}.
In this paper, we use a trajectory level imitation, as an instrumental example of application of our proposed multimodal learning approach. Also, although the correspondence problem has an important role in the context of learning by imitation, we refer the reader to the relevant literature to solve this problem, and we focus the paper on the multimodal learning approach instead.
\paragraph*{Multimodal learning}
In the fields of sensor fusion and pattern recognition, several works have addressed the problem of learning representations from multiple sources, \textit{e.g.} text and audio or text and images~\citep{ramisa2017breakingnews,poria2016fusing}.
In \citep{Ngiam2011}, a multimodal deep learning approach was proposed, able to cope with data of different types, such as visual and audio data, with cross-modal learning and reconstruction.
Some work on multimodal learning in robotics was proposed in \citep{zambelli2016,zambelli2016tcds}.
Recent literature has started to address the challenging problem of learning from multiple data sources, using variational inference models (\textit{e.g.} variational autoencoders). Among others, two recent works have shown great potential: the joint multimodal VAE \citep{suzuki2016joint}, and the product-of-experts-based multimodal VAE \citep{wu2018multimodal}. The former learns a joint distribution between two modalities, but trains a new inference network for each multimodal subset, which is generally impractical and arguably intractable. The latter uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multimodal inference problem. Although these methods have been shown to achieve good results in domains such as image processing and text-to-vision tasks, they do not address the problem of multimodal learning from different sensors on a real robot. Such a domain is fundamentally different since the data collected by the robot while acting are generally noisy time series of unscaled and heterogeneous data.
The main contributions of our work compared to \citep{suzuki2016joint,wu2018multimodal} are the application domain and the ability of our method to generate actions. Our work is the first, to the best of our knowledge, to use a multimodal formulation of variational autoencoders on a real robotic domain. While in \citep{suzuki2016joint,wu2018multimodal} the addressed domains are purely self-supervised learning applications, not involving actions or control tasks, in this work we successfully use a multimodal VAE model to go beyond self-supervision and achieve imitation, prediction and control tasks.
In \citep{Droniou2015}, an architecture based on deep networks was proposed to make a humanoid robot iCub learn a task from multiple perceptual modalities (namely proprioception, vision, audio).
While the method proposed in that paper learns the cross-modal relationships between sensory modalities, it is not able to deal explicitly with missing information. In contrast, the architecture that we propose here can successfully retrieve missing modalities and use them to both predict and control motion.
Finally, \citep{baraglia2015motor,copete2016motor} have applied deep autoencoders to make a robot predict others' actions through predictive learning, showing how a robot can use a self-acquired model to make predictions of others' goals. In those works, the sequences of signals used for learning are given through kinesthetic teaching. In contrast, in this paper we use a fully autonomous exploration for the robot to acquire its own sensorimotor data. Furthermore, the variational autoencoder that we propose in this paper is a more general and versatile model for robots to not only predict self and others' motion, but also to perform imitation tasks. It also presents one major advantage compared to the model proposed in \citep{copete2016motor}, namely the ability to capture the redundancy of the robotic system.
\section{Methodology}
\label{sec.methodology}
\subsection{Multimodal variational autoencoder}
A variational autoencoder (VAE) \citep{kingma2013auto, rezende2014stochastic} is a latent variable generative model. It consists of an encoder that maps the input data $x$ into a latent representation $z=\text{encoder}(x)$, and of a decoder that reconstructs the input from the latent code, that is $\hat{x}=\text{decoder}(z)$. Encoder and decoder are neural networks, parameterized by $\phi$ and $\theta$, respectively. The lower-dimensional latent space where $z$ lives is stochastic: the encoder, denoted as $q_\phi (z | x)$, outputs a probability density, generally (as also in our case) a Gaussian distribution. The latent representation $z$ can then be sampled from this distribution. The decoder is denoted as $p_\theta(x| z)$: it takes as input the latent representation $z$, and outputs the parameters of a distribution representing the reconstructed input.
The variational autoencoder model can also be written as
$p_\theta(x, z) =
p(z)p_\theta(x|z)$ where $p(z)$ is a prior, usually Gaussian, and $p_\theta(x| z)$ is the decoder.
The information bottleneck given by the mapping of the input into a lower-dimensional latent space leads to a loss of information. The reconstruction log-likelihood $\log p_\theta (x| z)$ is a measure of how effectively the decoder has learned to reconstruct an input $x$ given its latent representation $z$.
The training goal is then to maximize the marginal log-likelihood of the data. Because this is intractable~\citep{rezende2014stochastic}, the evidence lower bound (ELBO) is instead optimized, by leveraging the inference network (encoder), $q_\phi(z|x)$, which serves as a tractable distribution. The ELBO is defined as:
\begin{equation}
ELBO(x) \triangleq \mathbb{E}_{q_\phi(z|x)}
\left[\lambda \log p_\theta(x|z)\right] - \beta \, KL\left[q_\phi(z|x), p(z)\right]
\end{equation}
where $KL[p, q]$ is the Kullback-Leibler divergence between distributions $p$ and $q$, while $\lambda$ \citep{wu2018multimodal} and $\beta$ \citep{higgins2016beta} are parameters balancing the terms in the ELBO. The ELBO is then optimized via stochastic gradient descent, using the reparameterization trick to estimate the gradient \citep{kingma2013auto, rezende2014stochastic}.
In practice, since the main focus of this study is the reconstruction capability of the model, we chose $\beta=0$ and trained our architecture on the reconstruction loss alone, which we found improved the reconstruction performance.
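As a minimal numerical sketch of this objective, the following computes a single-sample Monte Carlo estimate of the ELBO with the reparameterization trick; the decoder is an arbitrary stand-in function with a unit-variance Gaussian likelihood, not the paper's actual network:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu, logvar):
    """Closed-form KL[ N(mu, exp(logvar)) || N(0, I) ]."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def elbo(x, mu_z, logvar_z, decode, lam=1.0, beta=1.0):
    """Single-sample Monte Carlo estimate of the ELBO.
    `decode(z)` returns the mean of a unit-variance Gaussian over x."""
    eps = rng.standard_normal(mu_z.shape)
    z = mu_z + np.exp(0.5 * logvar_z) * eps      # reparameterization trick
    x_hat = decode(z)
    log_px = -0.5 * np.sum((x - x_hat) ** 2)     # Gaussian log-likelihood (up to a constant)
    return lam * log_px - beta * gaussian_kl(mu_z, logvar_z)

# Toy check: identity decoder, latent distribution centered on the input.
x = np.array([0.5, -0.2])
print(elbo(x, mu_z=x, logvar_z=np.full(2, -4.0), decode=lambda z: z))
```

With $\beta=0$, only the reconstruction term remains, matching the training objective used in this work.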
In this paper, we extend standard variational autoencoders to multimodal sensorimotor data.
Our multimodal VAE is formed of multiple encoders and decoders, one for each sensory modality. Each encoder and decoder is an independent neural network, not sharing weights with other modalities' networks. The latent representation is however shared: each encoder maps its input (one sensory modality) into the shared code $z$, as depicted in Fig.~\ref{fig.mdvae}. Each decoder then reconstructs its particular output (one sensory modality) from the shared code.
The main difference that characterizes the \textit{multimodal} learning approach compared to a standard VAE is that the sub-networks can be used to process each modality, and shared layers can be used to learn cross-modal relations (see Fig. \ref{fig.mdvae}).
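A minimal sketch of this structure, with purely linear, randomly initialized encoders and decoders standing in for the trained sub-networks (dimensionalities are illustrative), is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: proprioception (4 joints), vision (4 coords), touch (1).
dims = {"proprio": 4, "vision": 4, "touch": 1}
latent_dim = 3

# One independent (here linear) encoder/decoder pair per modality.
enc = {m: rng.standard_normal((latent_dim, d)) * 0.1 for m, d in dims.items()}
dec = {m: rng.standard_normal((d, latent_dim)) * 0.1 for m, d in dims.items()}

def encode(inputs):
    """Map each observed modality into the shared latent code (here simply the
    mean of the per-modality projections; the real network uses a learned
    shared layer)."""
    codes = [enc[m] @ x for m, x in inputs.items()]
    return np.mean(codes, axis=0)

def decode(z):
    """Every decoder reconstructs its own modality from the shared code."""
    return {m: W @ z for m, W in dec.items()}

# Even with only vision observed, every modality gets a reconstruction.
z = encode({"vision": np.array([0.1, -0.3, 0.2, 0.0])})
out = decode(z)
print(sorted(out))
```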
\begin{figure}[ht]
\centering
\includegraphics[width=.9\columnwidth]{fig2.pdf}
\caption{Multimodal Variational Autoencoder used in this work. The input layer is composed of multimodal sensorimotor data. Each modality is encoded and decoded by separate autoencoders (shown with different colors). A shared layer (in light blue, in the center) allows the network to learn a shared representation across the different modalities. This architecture is trained with complete as well as partial data (see Table~\ref{tab.training_struc}). Each uni-modal autoencoder can be trained separately, allowing for single modality learning. The cross-modality representations are also learned through the shared layer. The output of the network consists of the mean and variance of the reconstruction of each different data part. Details about the parameters of the network included in this figure are further explained in the Appendix. N-ReLU represents a fully connected layer with N neurons and the ReLU activation function. N-ReLU x2 indicates that two N-ReLU layers are created in parallel, one to encode the mean and the other to encode the variance of the output distribution.}
\label{fig.mdvae}
\end{figure}
The $\lambda$ parameters are here used to balance losses from different sensor modalities.
In order to put more emphasis on modalities described by fewer dimensions (\textit{e.g.} the tactile and sound modalities),
we compute independent loss values for each modality ($m$) and weight them according to their dimensionality ($D_m$), that is $\lambda_m = 1/D_m$.
Then the sum of the independent reconstruction loss terms is optimized.
The scaling factor given by the dimensionality of each modality allows us to balance the importance of each modality when combining them in the optimization step. That is, when optimizing the reconstruction loss, the $\lambda$ weights account for the fact that each modality, and each corresponding unimodal sub-network, has a different dimensionality.
This approach helps the network learn even the most difficult parts of the state space, such as discrete or binary dimensions of the sensory space (see the tactile example in Figure~\ref{fig.data_self}).
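The weighting scheme can be sketched as follows (modalities and dimensionalities are illustrative):

```python
import numpy as np

def weighted_recon_loss(targets, recons):
    """Sum of per-modality squared-error losses, each scaled by 1/D_m so that
    low-dimensional modalities (e.g. 1-D touch) are not drowned out."""
    total = 0.0
    for m in targets:
        d_m = targets[m].size
        total += (1.0 / d_m) * np.sum((targets[m] - recons[m]) ** 2)
    return total

targets = {"proprio": np.zeros(4), "touch": np.zeros(1)}
recons = {"proprio": np.full(4, 0.1), "touch": np.full(1, 0.1)}
# Both modalities now contribute equally despite the 4:1 dimensionality gap.
print(weighted_recon_loss(targets, recons))
```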
This type of variational model presents various advantages in a robotic framework.
First, the ability of variational autoencoders to learn the distribution of a dataset in latent space is a powerful feature to generate a shared representation of the different modalities. For instance, the latent representation can be used to learn relationships and dependencies present in the sensorimotor experience of robots. This can be leveraged to generate new artificial perception by sampling from the latent distribution in the latent space.
Second, this shared latent representation also allows the robot to reconstruct missing modalities. For example, if data from a sensor is unavailable, this model can be used to model the probability distribution of the data that should be observed from this sensor conditioned on the data from other sensors of the robot.
Finally, their ability to predict probability distributions is fundamental to take into account the redundancy of complex robots, such as the iCub humanoid robot used in this study. With this property, the model can capture the fact that for a given end-effector position, several joint configurations are possible.
Details of the network implemented and used in this work are reported in Appendix~\ref{app.architecture}.
\subsection{Training the Multimodal Variational Autoencoder}
\label{sec.meth_train}
An important contribution of this work is the training strategy used to learn the proposed model. We propose to train the model to reconstruct the input even when only part of it is available, by adopting a denoising approach. While in the following paragraphs the proposed training approach is presented relative to the multimodal variational autoencoder introduced earlier, this strategy is generic and can be applied to other architectures, such as the reconstruction model proposed in \citep{Droniou2015}, as demonstrated by the experimental results.
In the Experiments section, we show that the proposed training strategy also improves the performance of various alternative models on the task at hand.
The training dataset contains multimodal sensorimotor data collected during a self-exploration phase. Data are captured from different sensors of the robot, such as the position of the hand in the robot's visual space, tactile and sound data, and proprioception (joint positions) from the motor encoders. In particular, the position of the hand in the visual space is extracted by considering the center point of a tracking window around the moving hand. All data are then normalized to take values in the range $[-1,1]$. More details regarding the data acquisition and the database are presented in Section \ref{sec.experiments_setup}.
Time series data from the self-exploration dataset recorded are shown in Fig.~\ref{fig.data_self}. Denote by ${\mathbf{u}_t}$ the vector of velocity commands issued at time $t$, ${\mathbf{q}_t}$ the vector of joint positions (proprioception), ${\mathbf{v}_t}$ the vector of the visual position, $\mathbf{p}_t$ the tactile signal and $\mathbf{s}_t$ the sound signal at time $t$. Note that other modalities can also be included. The input of the architecture is a multi-dimensional vector ${\mathbf{y}_t=[\mathbf{q}_t,\mathbf{q}_{t-1},\mathbf{v}_t,\mathbf{v}_{t-1},\mathbf{p}_t,\mathbf{p}_{t-1},\mathbf{s}_t,\mathbf{s}_{t-1},\mathbf{u}_t,\mathbf{u}_{t-1}]}$, which contains both data from time $t$ and $t-1$ to capture the temporal relationship between the different modalities.
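Assembling the input vector can be sketched as follows; the per-modality dimensionalities (4 joints, 4 visual coordinates, 1-D touch and sound, 4 velocity commands) are consistent with the experimental setup described in Section \ref{sec.experiments_setup}, and missing modalities are marked with a flag value of $-2$, outside the normalized data range:

```python
import numpy as np

FLAG = -2.0  # placeholder for a non-observed modality (outside the [-1, 1] data range)

def build_input(q_t, q_tm1, v_t, v_tm1, p_t, p_tm1, s_t, s_tm1, u_t, u_tm1):
    """Assemble y_t = [q_t, q_{t-1}, v_t, v_{t-1}, p_t, p_{t-1}, s_t, s_{t-1},
    u_t, u_{t-1}]; pass None for a modality that is not observed."""
    parts = []
    for x, dim in [(q_t, 4), (q_tm1, 4), (v_t, 4), (v_tm1, 4), (p_t, 1),
                   (p_tm1, 1), (s_t, 1), (s_tm1, 1), (u_t, 4), (u_tm1, 4)]:
        parts.append(np.full(dim, FLAG) if x is None else np.asarray(x, float))
    return np.concatenate(parts)

# Vision-only observation: everything except v_t and v_{t-1} is flagged.
y = build_input(None, None, np.zeros(4), np.zeros(4),
                None, None, None, None, None, None)
print(y.shape)  # → (28,)
```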
The network is trained on both complete and partial samples of the training dataset collected during the robot self-exploration. To do so, the original dataset is augmented with samples that require the network to reconstruct the missing modalities given only one of them. This is realized by duplicating the dataset, while using a flag value (namely the arbitrary value -2, which is outside the range of any sensorimotor signal after normalization) to denote the non-observable modalities. The training dataset follows the structure in Table~\ref{tab.training_struc} to enable the network to perform predictions and reconstruction in multiple conditions of missing information.
More specifically, the augmented training set is formed by concatenating the original complete set of data collected during motor babbling and normalized to values between -1 and 1, with mutilated versions of itself. The final dataset is then (1) the complete data at time $t$ and $t-1$, concatenated to (2) data including only time $t-1$, concatenated to (3) data including only proprioception at time $t-1$ and vision at time $t$ and $t-1$, concatenated to (4) data including only vision at $t$ and $t-1$.
At each training step, a batch is randomly sampled from the augmented dataset and fed to the multimodal VAE model. The batch may contain only partial data, but the training objective forces the network to try to reconstruct the target complete sensorimotor state (\textit{i.e.} $\mathbf{y}_t$).
Because the model is trained using the combination of complete and partial data as described above, the latent representation is shaped in such a way that it is robust to missing data; similarly, the sub-networks weights are learned to also be robust to missing inputs.
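The construction of the augmented dataset can be sketched as follows (the column layout and dimensionalities are illustrative assumptions):

```python
import numpy as np

FLAG = -2.0
# Column slices of y (assumed layout: q_t, q_{t-1}, v_t, v_{t-1}, p_t, p_{t-1},
# s_t, s_{t-1}, u_t, u_{t-1}).
SLICES = {"q_t": slice(0, 4), "q_tm1": slice(4, 8), "v_t": slice(8, 12),
          "v_tm1": slice(12, 16), "p_t": slice(16, 17), "p_tm1": slice(17, 18),
          "s_t": slice(18, 19), "s_tm1": slice(19, 20), "u_t": slice(20, 24),
          "u_tm1": slice(24, 28)}

def mask(data, keep):
    """Copy of `data` with every modality not in `keep` replaced by the flag."""
    out = np.full_like(data, FLAG)
    for name in keep:
        out[:, SLICES[name]] = data[:, SLICES[name]]
    return out

def augment(data):
    """Build the four cases: (1) full, (2) t-1 only, (3) q_{t-1} plus vision,
    (4) vision only; then concatenate them."""
    return np.vstack([
        data,
        mask(data, ["q_tm1", "v_tm1", "p_tm1", "s_tm1", "u_tm1"]),
        mask(data, ["q_tm1", "v_t", "v_tm1"]),
        mask(data, ["v_t", "v_tm1"]),
    ])

data = np.zeros((7380, 28))
print(augment(data).shape)  # → (29520, 28)
```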
\begin{table}
\centering
\caption{Training dataset structure: original dataset (1) augmented with samples that only include partial data (2-3-4). Each row corresponds to a dataset of 7380 datapoints. Colored cells indicate that the corresponding modality is present in the dataset. For the cases (2-3-4), missing modality data (cells in gray) is replaced with the value $-2$. The datasets (1), (2), (3), and (4) are concatenated. The proposed model is trained on the augmented dataset, that is, the concatenation of the four (1-2-3-4).
\label{tab.training_struc}}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{| l || c | c | c | c | c | c | c | c | c | c |}
\hline
(1) & \cellcolor{red!35}$\mathbf{q_t}$ & \cellcolor{red!35}$\mathbf{q_{t-1}}$ & \cellcolor{yellow!35}$\mathbf{v_t}$ & \cellcolor{yellow!35}$\mathbf{v_{t-1}}$ & \cellcolor{green!35}$\mathbf{p_t}$ & \cellcolor{green!35}$\mathbf{p_{t-1}}$ & \cellcolor{blue!35}$\mathbf{s_t}$ & \cellcolor{blue!35}$\mathbf{s_{t-1}}$ & \cellcolor{purple!35}$\mathbf{u_t}$ & \cellcolor{purple!35}$\mathbf{u_{t-1}}$ \\ \hline
(2)&\cellcolor{gray!15}- & \cellcolor{red!35}$\mathbf{q}_{t-1}$ & \cellcolor{gray!15}- & \cellcolor{yellow!35}$\mathbf{v}_{t-1}$& \cellcolor{gray!15}- & \cellcolor{green!35}$\mathbf{p}_{t-1}$& \cellcolor{gray!15}- & \cellcolor{blue!35}$\mathbf{s}_{t-1}$& \cellcolor{gray!15}- &\cellcolor{purple!35} $\mathbf{u}_{t-1}$ \\\hline
(3)&\cellcolor{gray!15}- &\cellcolor{red!35} $\mathbf{q}_{t-1}$ & \cellcolor{yellow!35}$\mathbf{v}_t$ & \cellcolor{yellow!35}$\mathbf{v}_{t-1}$& \cellcolor{gray!15}- & \cellcolor{gray!15}- & \cellcolor{gray!15}- & \cellcolor{gray!15}- & \cellcolor{gray!15}- & \cellcolor{gray!15}- \\\hline
(4)&\cellcolor{gray!15}- & \cellcolor{gray!15}- & \cellcolor{yellow!35}$\mathbf{v}_t$ & \cellcolor{yellow!35}$\mathbf{v}_{t-1}$& \cellcolor{gray!15} - & \cellcolor{gray!15} - & \cellcolor{gray!15} - & \cellcolor{gray!15} - & \cellcolor{gray!15} - & \cellcolor{gray!15} - \\
\hline
\end{tabular}
}
\end{table}
\subsection{One model for multiple tasks}
One of the major assets of our proposed model is its versatility, that is the possibility of using the same learned model to achieve different goals. In this section, we present how the learned multimodal variational autoencoder can be deployed to achieve three different objectives:
\begin{enumerate}
\item reconstructing missing data;
\item predicting the robot's own sensorimotor data and visual trajectories from other data sources (\textit{e.g.} other agents, other datasets);
\item controlling the robot in an online control loop.
\end{enumerate}
In these three cases, the training, structure, and parameters of the neural network remain the same: the learned model and network used for learning \textit{do not change} even when different sets of input are available. We argue that this is a key aspect of our method: one single model can be trained and learned to capture a comprehensive internal model from multimodal data, and to cope even when part of this data is not available.
Details for each of the aforementioned functions that the model can achieve are given in the remaining part of the section.
\subsubsection{Reconstructing missing data}
Similar to denoising autoencoders, the proposed multimodal VAE is trained to reconstruct missing data.
Missing modalities are set to $-2$ (as explained in Section~\ref{sec.meth_train}), while the network outputs the probability distribution of the reconstructed inputs.
This is fundamental to address the problem at the origin of this work, that is the ability to predict the visual trajectory of others taken from egocentric visual information
by relying on internal models of the self. In such an application, an agent learns internal representations of its sensorimotor space, in particular relating motor actions with multimodal sensory effects \citep{demiris2006hierarchical, demiris2014mirror, Pickering2014}. However, when observing someone else performing an action, only the visual information is available. The agent, which relies on full information from all its sensors, must then be able to retrieve the missing information and interpret the observed motion in relation with its own internal representations. The architecture proposed in this paper allows robots to achieve this by reconstructing the missing sensorimotor information; for example reconstructing joint configuration, touch, sound and motor information from observations of the visual input, or time step $t$ from observations at time $t-1$.
\subsubsection{Predicting the robot's and others' visual trajectories}
While data from all sensory modalities are available to the agent when learning the models, only the visual input, from an egocentric perspective, is available when observing others.
This implies that only data referring to the visual input are available in $\mathbf{y}$ (see (4) in Table~\ref{tab.training_struc}: this part of the augmented dataset only contains visual data at time $t$ and $t-1$; training on this part of the dataset allows the network to learn to predict the missing modalities from only visual information).
In this respect, the reconstruction of missing modalities described above plays a key role. The neural network can act as a forward model to predict the next sensorimotor state $\mathbf{y_{t+1}}$ from the current state of the agent $\mathbf{y_{t}}$ (see line (2) in Table~\ref{tab.training_struc}: this part of the augmented dataset only contains data at time $t-1$; training on this part of the dataset allows the network to learn to predict the next time step when only the previous observation is available). However, when observing someone else, the current state of the agent is not fully available, as only vision information can be observed.
To perform predictions, the network first needs to infer the full current sensorimotor state from the partial observation, and only then predict future states.
We first feed the model with $\mathbf{y_{t-1}}$, and let the model reconstruct $\mathbf{y_{t}}$; then we feed the reconstructed signal $\mathbf{y_{t}}$ back as if it were the $t-1$ observation, and let the network reconstruct the missing part, that is $\mathbf{y_{t+1}}$.
In summary, the network first reconstructs the current sensorimotor perceptions of the observed agent and then uses these reconstructed perceptions to predict the next state of the agent.
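This two-step rollout can be sketched as follows, with an arbitrary stand-in function in place of the trained network:

```python
import numpy as np

def rollout(model, y_prev, steps):
    """Multi-step prediction: reconstruct the full state from the previous one,
    then feed the reconstruction back in as the new 'previous' observation."""
    trajectory = []
    y = y_prev
    for _ in range(steps):
        y = model(y)  # reconstruct / predict the next full sensorimotor state
        trajectory.append(y)
    return trajectory

# Stand-in model (a simple contraction), just to exercise the loop.
traj = rollout(lambda y: 0.9 * y, np.ones(28), steps=3)
print(len(traj))  # → 3
```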
\subsubsection{Controlling the robot in an online control loop}
In addition to the abilities of the architecture to reconstruct and predict the visual trajectories of other agents' motion, the learned model can be used as a controller for the robot. In particular, we show how the model can be placed in a control loop to regulate the sensory state of the robot given a target state. This approach can be used in imitation learning scenarios, for instance, where the robot imitates a target trajectory. In our scenario, the robot observes someone else's visual trajectory from an egocentric point of view and uses the learned model to replicate such trajectory.
The control loop is depicted in Fig.~\ref{fig.architecture} (rightmost diagram).
Notably, the joint and visual configurations ($\mathbf{q_{t-1}},\mathbf{v_{t-1}}$) of the robot are fed back to the network in order to provide the correct current state at each time step. This prevents the network from drifting during the online cycles of the control loop, due to the dependencies between different input modalities. For example, moving to areas of the sensory space that lie far from the training data leads to increased uncertainty. This condition is made more severe by the multimodal nature of the data, which come independently from diverse sensors. The feedback loop implemented to provide the network with the real current data from the robot helps prevent the accumulation of errors in different state dimensions.
It is also important to emphasize that using the learned network as a controller for the robot is not a trivial application, since the network itself represents a model of the robotic system. The ability of the network to produce motor commands is then key to achieve a controller behavior, but this is not sufficient to implement an effective controller. It is important to provide the network with all the sensory information that can help the model to learn the kinematics and dynamics of the system, in particular the sensory states at two consecutive time steps. This is key for the network to build meaningful representations of the robot kinematics and dynamics, and in turn to generate sensible motor commands.
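The loop can be sketched as follows; the input layout, the slice holding $\mathbf{u}_t$, and the stand-in robot and model are illustrative assumptions:

```python
import numpy as np

FLAG = -2.0

def control_loop(model, read_state, send_command, target_traj):
    """Online imitation loop: at every step, the measured (q_{t-1}, v_{t-1}) is
    fed back to the model together with the target visual position, and the
    reconstructed motor command is issued to the robot. Feeding back the real
    state prevents drift from accumulating over cycles."""
    for v_target in target_traj:
        q_prev, v_prev = read_state()
        y_in = np.concatenate([np.full(4, FLAG), q_prev,  # q_t unknown
                               v_target, v_prev,          # v_t set to the target
                               np.full(12, FLAG)])        # touch/sound/commands unknown
        u_t = model(y_in)[20:24]                          # slice assumed to hold u_t
        send_command(u_t)

# Minimal stand-in robot and model, just to exercise the loop.
state = {"q": np.zeros(4), "v": np.zeros(4)}
def read_state(): return state["q"], state["v"]
def send_command(u): state["q"] = state["q"] + u
control_loop(lambda y: np.full(28, 0.1), read_state, send_command,
             target_traj=[np.ones(4)] * 5)
print(state["q"])
```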
\section{Experiments}
\label{sec.experiments}
\subsection{Experimental setup}
\label{sec.experiments_setup}
We have demonstrated our proposed approach using a humanoid iCub robot. In our scenario, the robot is interacting with a piano keyboard.
The architecture is trained using data collected from the robot
through experience, by performing pseudo-random self-exploratory movements (motor babbling).
Then, the robot uses the learned architecture to (1) reconstruct missing sensory modalities, (2) predict its own sensorimotor state and predict visual trajectories of another agent from an egocentric point of view, and (3) imitate the observed agent's trajectories.
During the experiments, the iCub robot moves its right arm, while keeping its head still in a fixed position.
Four joints of one of the robot's arms are used during motor babbling.
The joints' positions ($q_0,...,q_3$) are acquired from the motor encoders attached to each joint\footnote{The initial joints configuration of the robot's arm is $q_0$=-35 deg, $q_1$=35 deg, $q_2$=0 deg, $q_3$=50 deg (with $q_0,...,q_3$ corresponding to the shoulder pitch, roll, yaw, and elbow flexion, respectively), the wrist is fixed in the standard neutral position, the index finger extended in the neutral position and the rest of the fingers folded. The joint configuration of the robot's head is the standard neutral one, except for the first two joints of the neck which are turned 12 degrees rightwards and downwards.}.
Visual information encoding the position of the hand in the 2D visual field of the robot is acquired from the robot's eye cameras
(using a resolution of $320\times 240$ pixels for the image frames),
with coordinates $x_R,y_R$ and $x_L,y_L$ for the right and left eye, respectively. This is obtained by tracking the hand of the robot using OpenCV features and computing the mean of the tracked feature points, thus obtaining the two coordinates in the 2D frames. This approach is a coarse representation of the visual information available to the robot. An alternative would be to extract visual information directly from pixels using a convolutional neural network (CNN). However, the coarse approximation obtained with the simple visual tracker was sufficient for the experiments presented in the following paragraphs, and we leave the implementation of a CNN as future work.
A binary one-dimensional tactile signal is acquired from the robot's artificial skin, which consists of a network of taxels (``tactile pixels'').
More specifically, the 60 tactile signals acquired from the robot's hand skin are normalized, averaged and binarized using an empirically fixed threshold. The result is a one-dimensional signal that is equal to 1 when a contact is perceived (\textit{i.e.} when the average of the signals is above the fixed threshold), or 0 otherwise.
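The binarization step described above can be sketched as follows. The threshold value and the per-sample normalization are illustrative assumptions; the paper only states that the threshold is fixed empirically.

```python
import numpy as np

def binarize_touch(taxels, threshold=0.1):
    """Collapse the 60 hand-skin taxel readings into one binary contact signal.

    The signals are normalized, averaged, and compared against a fixed
    threshold. `threshold=0.1` and max-normalization are hypothetical choices.
    """
    taxels = np.asarray(taxels, dtype=float)
    peak = taxels.max()
    normalized = taxels / peak if peak > 0 else taxels
    # 1 when a contact is perceived (average above threshold), 0 otherwise
    return 1 if normalized.mean() > threshold else 0
```

With no pressure on any taxel the output is 0; with uniform strong pressure it is 1.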
Sound data is acquired from the piano keyboard, in the form of a one-dimensional vector containing the MIDI information related to the key played. MIDI is a symbolic representation of musical information incorporating both timing and velocity for each note played, which is thus associated with a specific integer number.
The commands sent to the robot's motors ($u_0,...,u_3$) to perform autonomous self-exploration (motor babbling) are velocity references.
No prior knowledge is assumed on the robot's kinematic or dynamic structure. The choice of using velocity commands aims to keep this prior knowledge to a minimum by avoiding reliance on the robot's inverse kinematics.
However, our method can accommodate other implementation choices, such as position or torque control.
Self-exploration is realized by performing motor babbling on one of the robot's arm.
Random sinusoidal motor commands are sent to the motors as velocity commands defined for each joint $j$ as
$
u_j(t) = \alpha_j \sin (2\pi\omega t ),
$
where the amplitudes $\alpha_j$ are sampled for each joint at each cycle from a uniform distribution $\mathcal{U}(-\bar{u},\bar{u})$, and the frequency $\omega$ is fixed so that each cycle starts and terminates at zero (\textit{i.e.} null velocity).
Normalization is finally applied to all data to obtain signals in the range $[-1,1]$.
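The babbling command generation and normalization can be sketched as follows. The amplitude bound \texttt{u\_max} and the number of steps per cycle are hypothetical values, not those used on the robot.

```python
import numpy as np

def babbling_cycle(n_joints=4, u_max=10.0, cycle_steps=100, rng=None):
    """One motor-babbling cycle: sinusoidal velocity commands per joint.

    Amplitudes alpha_j are drawn from U(-u_max, u_max) at each cycle; the
    frequency is chosen so the cycle starts at zero velocity. `u_max` and
    `cycle_steps` are illustrative, not the values used in the experiments.
    """
    rng = rng or np.random.default_rng(0)
    alpha = rng.uniform(-u_max, u_max, size=n_joints)    # per-joint amplitude
    t = np.arange(cycle_steps) / cycle_steps             # one full period
    u = alpha[:, None] * np.sin(2 * np.pi * t)[None, :]  # shape (joints, time)
    return u / u_max                                     # normalized to [-1, 1]
```

Since $\sin(0)=0$, each cycle starts at null velocity, and dividing by the amplitude bound gives signals in $[-1,1]$.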
The dataset collected from motor babbling contains 7380 data points, corresponding to approximately 30 minutes of exploration. This dataset is then augmented in order to train the network, as explained in the previous section.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig3.pdf}
\caption{Data from self-exploration.
Joint positions are recorded from the motor encoders, visual positions (4D) are acquired from the RGB cameras of the robot's eyes, random sinusoidal velocity commands are issued to the arm joints to perform motor babbling.
\label{fig.data_self}}
\end{figure}
The input fed to the network is a 28-dimensional vector, including two four-dimensional joint position vectors ($\mathbf{q_t,q_{t-1}}$), two four-dimensional visual position vectors ($\mathbf{v_t,v_{t-1}}$), two one-dimensional tactile vectors ($\mathbf{p_t,p_{t-1}}$), two one-dimensional sound vectors ($\mathbf{s_t,s_{t-1}}$), and two four-dimensional motor commands vectors ($\mathbf{u_t,u_{t-1}}$).
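The assembly of this 28-dimensional input can be illustrated with a short sketch; the ordering of the modalities within the vector is an assumption made for illustration.

```python
import numpy as np

def build_input(q_t, q_tm1, v_t, v_tm1, p_t, p_tm1, s_t, s_tm1, u_t, u_tm1):
    """Concatenate the two-time-step multimodal state into the 28-D input:
    joints (4+4), vision (4+4), touch (1+1), sound (1+1), motors (4+4)."""
    parts = [q_t, q_tm1, v_t, v_tm1, p_t, p_tm1, s_t, s_tm1, u_t, u_tm1]
    return np.concatenate([np.atleast_1d(np.asarray(x, dtype=float))
                           for x in parts])

x = build_input(np.zeros(4), np.zeros(4),   # joint positions q_t, q_{t-1}
                np.zeros(4), np.zeros(4),   # visual positions v_t, v_{t-1}
                0.0, 0.0,                   # touch p_t, p_{t-1}
                0.0, 0.0,                   # sound s_t, s_{t-1}
                np.zeros(4), np.zeros(4))   # motor commands u_t, u_{t-1}
```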
We performed extensive evaluation tests of our proposed method. Three different datasets have been used: test data from the robot self-exploration, data from an RGB-D camera of a human playing a piano keyboard, and data from an RGB-D camera of the Imperial-PRL KSC Dataset\footnote{Dataset available at www.imperial.ac.uk/PersonalRobotics} (data used in \citep{chang2017learning} to validate kinematic structure correspondences methods).
To demonstrate our proposed method in practice, we show that the iCub robot is able to leverage its prediction capability to plan its own actions to imitate a human on the piano keyboard.
More details about the datasets used (including the number of datapoints and training specifications) are provided in Appendix~\ref{app.datatrain}.
\begin{figure*}[!h]
\centering
\includegraphics[width=0.95\textwidth]{fig4.pdf}
\caption{Reconstruction results of multimodal data: proprioception ($q_0,...,q_3$), vision ($x_L, y_L, x_R, y_R$), motor commands ($u_0,...,u_3$), touch and sound. Blue lines show the reconstructed data given complete input (case 1 in Table~\ref{tab.training_struc}), and orange lines show the reconstruction results with partial input (case 4 in Table~\ref{tab.training_struc}). Shaded areas represent the variance of the predicted Gaussian distribution of the reconstructed signals. The multimodal variational autoencoder is able to reconstruct the visual position accurately. Reconstruction results on the joint and motor spaces display the effect of the redundancy of the robot's arm: the same visual position can be reconstructed using diverse configurations, and applying diverse motor commands.
Reconstruction errors occur simultaneously on different degrees of freedom, according to the robot's kinematic structure. The redundancy effect is particularly evident for the second and third joints (${q_1, q_2}$).
A representation of the degrees of freedom of the iCub arm is depicted in the lower-right picture.
}
\label{fig.vae-redundancy}
\end{figure*}
\subsection{Architecture structure}
The network implemented\footnote{Tensorflow \citep{tensorflow2015-whitepaper} has been used for the implementation of the Multimodal Variational Autoencoder.} consists of five unimodal sub-networks, for the proprioceptive (joint positions), visual, tactile, sound and motor modalities, respectively. The encoders, one for each unimodal sub-network, consist of two fully connected layers, while the decoders consist of three fully connected layers.
For the proprioception, visual and motor networks, the two encoder layers consist of 40 and 20 units, respectively, and the three decoder layers consist of 40, 8 and 8 units. For the tactile and sound networks, the two encoder layers consist of 10 and 5 units, respectively, and the three decoder layers consist of 10, 2 and 2 units.
The ReLU activation function is used throughout the network for each layer.
The difference in the number of units is to take into account that tactile and sound data are two-dimensional vectors, while the other modalities consist of eight-dimensional vectors.
The outputs of all the unimodal encoders are concatenated to feed into the shared network, which consists of a two-layer encoder with 100 and 28 units, and a two-layer decoder with 100 and 70 units \footnote{
The source code and the dataset used for this experiment can be downloaded at \url{github.com/ImperialCollegeLondon/Zambelli2019_RAS_multimodal_VAE}. }.
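At a purely shape level, the encoder side of the architecture can be sketched as follows. The weights here are random, and the three 8-D sub-networks share parameters only for brevity; in the actual model each unimodal encoder has its own parameters and biases, the shared code is variational, and the decoders are omitted from this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0)

def dense(n_in, n_out):
    # Fully connected layer: weight matrix only; biases omitted for brevity.
    return rng.standard_normal((n_in, n_out)) * 0.01

def encode(x, layers):
    for W in layers:
        x = relu(x @ W)
    return x

# Unimodal encoders: joints/vision/motor inputs are 8-D (two 4-D time steps),
# touch/sound inputs are 2-D (two 1-D time steps).
enc_big   = [dense(8, 40), dense(40, 20)]   # q, v, u sub-networks
enc_small = [dense(2, 10), dense(10, 5)]    # p, s sub-networks

codes = ([encode(np.ones(8), enc_big) for _ in range(3)]
         + [encode(np.ones(2), enc_small) for _ in range(2)])
z_in = np.concatenate(codes)                # 3*20 + 2*5 = 70 units

shared_enc = [dense(70, 100), dense(100, 28)]
z = encode(z_in, shared_enc)                # 28-D shared code
```

The concatenated unimodal codes form a 70-dimensional vector, which the shared encoder maps through 100 units down to the 28-dimensional shared representation.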
\subsection{Sensorimotor data reconstruction}
In this section, we present experiments that demonstrate the performance of the proposed system to reconstruct sensorimotor data from complete and from partial observations, that is when all inputs are available and when only a subset of modalities is available. The experiments show that the proposed architecture can effectively reconstruct the data in all cases.
The Multimodal Variational Autoencoder is first trained using datapoints explored during motor babbling.
The dataset collected during babbling is split into a training dataset and a test dataset.
As described in Section~\ref{sec.methodology}, the network is trained on both complete and partial data of the training set.
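One way to realize the missing-modality augmentation is to zero out selected modality slices of each training vector. The slice layout and the fill value below are illustrative assumptions.

```python
import numpy as np

# Slices of the 28-D input occupied by each modality at times (t, t-1).
# This layout is an assumption for illustration.
MODALITY_SLICES = {
    'q': slice(0, 8), 'v': slice(8, 16), 'p': slice(16, 18),
    's': slice(18, 20), 'u': slice(20, 28),
}

def mask_modalities(x, missing, fill=0.0):
    """Return a copy of input x with the named modalities replaced by `fill`,
    emulating one augmentation case (e.g. 'vision only' keeps only 'v')."""
    x = np.array(x, dtype=float, copy=True)
    for m in missing:
        x[MODALITY_SLICES[m]] = fill
    return x
```

For example, the "vision only" case is obtained by masking every modality except \texttt{'v'}.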
In order to evaluate the reconstruction ability of the network, we first assess whether the encoding and decoding of the variational autoencoder manage to retrieve complete input data (when all the modalities are present).
Then we tested the model on the reconstruction of missing modalities, using only the visual information as input.
The experiments conducted show that the learned network achieves strong results, both in terms of reconstruction and in terms of capturing the complexity of the system.
The network is able to provide an estimate of the reconstructed input even when the majority of the modality dimensions are missing. Importantly, the model is also able to provide a measure of the uncertainty due, for example, to the redundancy of the system.
Results of the reconstruction obtained using the multimodal variational autoencoder are shown in Fig.~\ref{fig.vae-redundancy}.
This figure shows the reconstruction results on the joints, motor, tactile, sound and visual spaces, obtained with both complete and partial input data.
The data used for this experiment belongs to the dataset collected from the robot self-exploration phase, but have not been used during the training of the model.
It is possible to note that while the reconstruction of the visual signals is very accurate, the reconstruction of the joints' positions and of the motor commands presents a peculiar behavior. In particular, reconstruction errors occur simultaneously for diverse joints.
A closer analysis of these results shows that these joints are actually related in the kinematic structure of the robot: one joint can compensate for, or contribute to, the movements of the other.
The results shown in Fig.~\ref{fig.vae-redundancy} demonstrate how this redundancy is captured by the multimodal variational autoencoder, thus demonstrating the power of this type of network on such difficult tasks. More specifically, the multimodal variational autoencoder is able to learn the general sensorimotor structure underlying the robot's movements rather than single trajectories or single motion sequences. In other words, a robot learns that there can be diverse configurations to achieve a target (for example a visual target).
For instance, it can be seen that for $q_1$ and $q_2$ the variance of the reconstruction is particularly large. This comes from the fact that several joint configurations can explain the visual information provided to the architecture.
We note that the true data to be reconstructed remains most of the time within the confidence range of the reconstruction.
The results obtained show another interesting capability of the learned network, namely the ability of learning a forward kinematics only using 2D images from the robot's cameras, while not having direct access to the depth information of the 3D position of the hand in the robot's operational space. This allows the system to avoid the use of stereo vision algorithms (with the related calibration and matching issues), while having the possibility to rely on the on-board 2D RGB cameras.
The mean squared errors of the reconstructed sensorimotor signals on test data for each modality have been computed to provide a quantitative account of the network performance.
In Table~\ref{tab.rec}, we report the error scores obtained both when complete and partial data are provided to the network.
Note that the error scores achieved with partial data are comparable to those obtained when feeding complete data to the network,
with the only exception of the touch modality, which remains a challenge due to its binary nature.
This shows that the performance of the network is generally not degraded significantly when the input data consists only of partial data (\textit{i.e.} vision only).
This also shows that the network has successfully learned not only a direct reconstruction of each single modality but also cross-relations between the modalities and the way to reconstruct one of them provided only visual data are available.
The values reported in Table~\ref{tab.rec} show the accuracy of the proposed method. The values are reported as percentages (relative to the dataset ranges) to enable direct comparison across the modalities. To better appreciate them, however, consider that the $0.46\%$ and $1.39\%$ mean squared errors in joint space correspond to mean errors of $1.29$ and $2.24$ degrees in joint angles, respectively. Similarly, the $0.05\%$ mean squared error in vision space corresponds to an average error of about $1.85$ pixels in the original image frames, and the mean squared errors of $1.29\%$ and $2.32\%$ in the motor commands represent average errors of $2.16$ and $2.89$ degrees per second, respectively.
\begin{table}
\centering
\caption{ Mean squared error percentages for each dimension of the multimodal reconstructed signal on test data.
\label{tab.rec}}
\resizebox{\columnwidth}{!}{
\begin{tabular}{l | c | c}
& Rec. complete data &Rec. partial data\\
\hline
$\mathbf{q}$ & 0.46\% [0.45; 0.48]\% & 1.39\% [1.37; 1.44]\%\\
$\mathbf{v}$ & 0.05\% [0.04; 0.07]\% & 0.05\% [0.03; 0.06]\%\\
$\mathbf{p}$ & 2.35\% [1.74; 3.66]\% & 9.42\% [9.07; 10.44]\%\\
$\mathbf{s}$ & 3.35\% [0.70; 4.18]\% & 3.95\% [3.35; 4.18]\%\\
$\mathbf{u}$ & 1.29\% [0.67; 1.60]\% & 2.32\% [2.29; 2.37]\%\\
[0.5ex] \hline
\end{tabular}
}
\end{table}
\subsection{Predict own sensorimotor states and visual trajectories of others}
In this section, we present experiments which demonstrate the ability of the proposed architecture to predict the robot's own sensorimotor state and to predict visual trajectories of another agent from an egocentric point of view. These experiments show that the proposed architecture can effectively predict future states by using the multimodal representations learned during training.
Condition (2) in Table~\ref{tab.training_struc} was critical to achieve this behavior.
The prediction task requires the network to infer future sensorimotor states given the current one (see case (2) in Table~\ref{tab.training_struc}). This is realized by feeding the inferred missing time step (\textit{i.e.} the time step $t$) back to the network as the new time step $t-1$, letting the network infer the new time step $t$, which is in fact the prediction at $t+1$.
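This recursive prediction scheme can be sketched generically; here \texttt{net} is a toy stand-in for the trained model's reconstruct-the-missing-step pass.

```python
def predict_ahead(net, state_tm1, n_steps):
    """Iterate the learned model: the inferred time step t is fed back as the
    new t-1, yielding predictions 1, 2, ..., n_steps ahead.

    `net` is any callable mapping the (t-1) state to the inferred t state; it
    stands in for the trained autoencoder, which additionally reconstructs
    the other missing modalities at each step.
    """
    trajectory = []
    state = state_tm1
    for _ in range(n_steps):
        state = net(state)        # infer the missing time step t
        trajectory.append(state)  # prediction at t, then t+1, ...
    return trajectory

# toy stand-in for the network: a damped linear map
traj = predict_ahead(lambda s: 0.5 * s, state_tm1=1.0, n_steps=3)
```

The same loop, run for many steps, gives the multi-step predictions discussed below; the accumulation of error at each iteration is what makes long-horizon prediction challenging.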
\begin{figure}[pt]
\centering
\includegraphics[width=\columnwidth]{fig5.pdf}
\caption{
Prediction results using the learned model to predict the visual trajectories (with coordinates $x_L,y_L,x_R,y_R$) of the robot's own motion (a representative part of the trajectories is depicted).
Solid black lines represent the real data (part of the test database), while blue lines represent the predicted mean and the shaded light blue areas the predicted variance (uncertainty) of the model.
In each plot, on the horizontal axes are the time steps, while on the vertical axes are the magnitude (normalized) of each of the four dimensions of the visual state.
\label{fig.self_pred}}
\end{figure}
First, we have evaluated the proposed architecture using test data from the robot's own data collected from motor babbling.
Results of the predictions of the visual trajectories obtained on data explored during motor babbling are shown in Fig.~\ref{fig.self_pred}.
The mean squared prediction error score obtained in this experiment is $0.21\%$ (corresponding to less than 4 pixels). The experiment is carried out on the test database, that is, the part of the data from the robot's self-exploration which was not used for training the model.
These results show that the network is able to effectively make accurate predictions by first reconstructing missing data from visual positions only, and then iterating the process for a second time in order to achieve the next step prediction.
We have also tested the architecture on multi-step ahead predictions. At each time step, the predicted next state is used as the input of the network to predict an additional step ahead. This process can be repeated as long as necessary. The results in Fig.~\ref{fig.multistep} show that the model is capable of predicting the visual trajectory of the on-going swing of the robot (the starting state of the prediction being 2 time steps after the beginning of the swing). The predicted trajectory (in blue) matches accurately the ground truth trajectory (in black) for more than 20 time steps. The prediction accuracy at 50 time steps is $0.42\%$ (less than $5$ pixels).
Then, the model converges to a stable periodic swing pattern which differs from the actual trajectory of the robot.
Note that obtaining stable long-term predictions with this type of approach is a challenging problem: this approach tends to diverge quickly because of the accumulation of error; also, note that the model is expected to be unable to predict the movements of the robot after the first swing, as each swing is independent.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig6.pdf}
\caption{
Prediction over multiple time steps using the learned model to predict the visual trajectories (with coordinates $x_L,y_L,x_R,y_R$) of the robot's own motion (a representative part of the trajectories is depicted).
Solid black lines represent the real data, while blue lines represent the predicted mean.
\label{fig.multistep}}
\end{figure}
Then we evaluated the architecture on data collected from the observation of other agents.
Using the multimodal variational autoencoder trained on data of the robot itself, the robot is able to make predictions also of others' motion trajectories in the visual space.
When observing others, the robot has access to the visual information only, from its egocentric point of view. The learned model
is then used to retrieve the motor commands (together with the other missing sensory modalities) that would enable the robot to reproduce the observed trajectory, that is, to perform a mental simulation of the observed action.
Experiments were carried out using two different datasets. The first test dataset consists of movements of a human playing a piano keyboard, recorded by the authors using an RGB-D camera (Fig.~\ref{fig.martinaPianokinectdata}). The second test dataset is part of the Imperial-PRL KSC Dataset (data used in \citep{chang2017learning} to validate kinematic structure correspondences methods). It contains Kinect data of a human moving his hands (represented in Fig.~\ref{fig.maximekinectdata}). The 3D visual positions of these two datasets were then translated into 2D data by using two of the three available dimensions. This corresponds to a coarse approximation of the projection of the 3D trajectories onto the two cameras of the robot.
While the first dataset is similar to the self-exploration dataset in terms of scenario and application, the second one is significantly different, involving the free motion of the human arms, which are not confined within the scope of a keyboard.
The first test dataset allows us to demonstrate that the robot can effectively reconstruct and predict another agent performing a sequence of motions similar to those performed in the motor babbling phase by using the learned internal models.
The second test dataset allows us to demonstrate that the robot is able to reconstruct and predict visual trajectories of others' motion using the learned models also when the type of motion is significantly different from the data acquired by the robot from self-exploration.
\begin{figure}
\centering
\subfloat[Human piano playing.]{\label{fig.martinaPianokinectdata} \includegraphics[height=2.8cm]{fig7a.pdf}}
\hfill
\subfloat[Imperial-PRL KSC Dataset.]{\label{fig.maximekinectdata} \includegraphics[height=2.8cm]{fig7b.pdf}}
\caption{
\protect\subref{fig.martinaPianokinectdata} Kinect data of a human upper-body movements while playing a piano keyboard with one hand.
\protect\subref{fig.maximekinectdata} Kinect data from the Imperial-PRL KSC Dataset.
The trajectory of the left hand $\mathbf{v}_{\!_{\text{PRL}}}$ has been used as test dataset.
}
\end{figure}
Results are shown in Fig.~\ref{fig.predother}: the left plot shows the prediction performance on the kinect data collected from a human playing a piano keyboard (see Fig.~\ref{fig.martinaPianokinectdata}), and the right plot shows the prediction performance on the kinect data from the Imperial-PRL dataset (specifically on $\mathbf{v}_{\!_{\text{PRL}}}$, see Fig.~\ref{fig.maximekinectdata}).
The corresponding mean squared error scores obtained are $0.64\%$ and $0.69\%$ (corresponding to about 6 to 7 pixels) for the two datasets, respectively.
The results achieved demonstrate that the proposed architecture obtains predictions of visual trajectories of others' motion by only making use of internal models of self.
\begin{figure}
\includegraphics[width=\columnwidth]{fig8.pdf}
\caption{Predictions of others' trajectories. Solid black lines represent the real data, while blue lines represent the predicted mean and the shaded light blue areas the predicted variance (uncertainty) of the prediction model. Prediction of human playing a piano keyboard (left) and prediction of the left hand motion $\mathbf{v}_{\!_{\text{PRL}}}$ (right).}
\label{fig.predother}
\end{figure}
\subsection{Imitate the observed agent's trajectories}
In this section, we present experiments that demonstrate the ability of the proposed architecture to use the learned multimodal representations to control the robot to imitate an observed agent's visual trajectories. Condition (3) in Table~\ref{tab.training_struc} was critical to achieve this behavior.
The experiments presented here show that the robot can successfully follow demonstrated/target visual trajectories, only using the learned multimodal representations.
The learned model can be used in a control loop (rightmost diagram in Fig.~\ref{fig.architecture}).
By deploying the learned model as a controller, it is possible to implement, for example, imitation tasks, where the robot needs to track trajectories in the sensory space. The learned model is able to reconstruct the motor commands necessary to achieve reference trajectories. The retrieved motor commands can then be issued to the robot's motors.
For this experiment, we have used two datasets: target trajectories from motor babbling, and data observed from the human playing two keys on the piano keyboard.
The first dataset consists of trajectories from the part of the babbling dataset that has not been used for training the network. This test dataset thus contains data that have not been seen by the network before, though they are similar to the data used for training. In particular, the associations between positions in the sensory space and corresponding values of the velocity motor commands are similar.
The second dataset is more challenging, particularly because it may contain visual positions that were not contained in the training set, which can in turn lead to combinations of the multimodal dimensions of the input that the network has never been presented with before.
The objective is for the robot to imitate the observed target trajectory. The target trajectory is used as reference and fed to the network in place of $\mathbf{v_t}$, while the current visual position of the robot and the current joint configuration of the robot ($\mathbf{v_{t-1}}$ and $\mathbf{q_{t-1}}$) are fed back to the network. All the other modalities are considered missing, in particular the motor commands that are produced by the network online after each new observation.
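One control cycle can then be sketched as follows; \texttt{fake\_net} is a toy stand-in for the trained model, which in practice reconstructs all missing modalities, including the motor command $\mathbf{u_t}$.

```python
def imitation_step(net, v_target, v_prev, q_prev):
    """One cycle of the imitation controller: the demonstrated visual target
    replaces v_t, the robot's measured v_{t-1} and q_{t-1} close the loop,
    all other modalities are treated as missing, and the reconstructed motor
    command u_t is returned to be issued to the motors.

    `net` stands in for the trained multimodal autoencoder.
    """
    inferred = net(v_t=v_target, v_tm1=v_prev, q_tm1=q_prev)
    return inferred['u_t']  # velocity command sent to the robot

# toy stand-in: a proportional "network" that moves toward the target
fake_net = lambda v_t, v_tm1, q_tm1: {'u_t': 0.1 * (v_t - v_tm1)}
u = imitation_step(fake_net, v_target=2.0, v_prev=1.0, q_prev=0.0)
```

Repeating this step with fresh sensor feedback at each cycle implements the closed control loop of the rightmost diagram in Fig.~\ref{fig.architecture}.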
In the first experiment, we have compared the proposed method with the Cartesian controller available on the iCub.
The stereo vision system of the iCub is used to determine the 3D position in the Cartesian space associated with 2D visual inputs. This information is then used by the Cartesian controller to reach the target positions.
Results obtained on the first dataset are represented in Fig.~\ref{fig.resultsctrl}. The trajectories depicted in this figure are, consistently with the visual data used throughout this article, those captured from the robot's first person view.
It can be noted that our proposed method generates a trajectory that is more accurate than the one from the built-in Cartesian controller. It is important to note that the visual information available to the Cartesian controller is the same used by the proposed method, hence the calibration of the cameras together with the whole experimental setup is in common. This observation allows us to conclude that the proposed method overall surpasses the built-in controller in performing the task.
The mean squared error score achieved by the proposed model on this task on the four-dimensional visual data is only $0.48\%$ (corresponding to an error of less than 6 pixels), a very low value considering the resolution of the image ($320\times 240$ pixels) and the precision of the visual data encoding the hand position throughout the experiments.
The built-in Cartesian controller achieved a less accurate tracking of the reference visual trajectory, with a mean squared error score on the four-dimensional visual data of $1.07\%$, that is more than double the error achieved with the proposed method.
This difference is likely related to the fact that the reference data come from the OpenCV tracker used to detect the 2D position of the hand in each image; these 2D positions are probably not a perfect representation of the 3D position of the hand. This is likely causing the stereo-vision module to produce inaccurate target positions for the built-in Cartesian controller. Thus, we hypothesize that this succession of inaccuracies leads to a less accurate reproduction of the trajectory. Nevertheless, it is interesting to see that our proposed model manages to generate a better trajectory, while using the same data and without the need for the prior knowledge contained in the built-in Cartesian controller.
\begin{figure}
\includegraphics[width=\columnwidth]{fig9.pdf}
\caption{Results of the imitation task realized by using the built-in Cartesian controller (yellow line) and the learned model (green line) to control online the robot's movements. The proposed method outperformed the built-in model, achieving a more accurate tracking of the reference visual trajectory (gray line).
The left plot shows the 2D visual position representation of the reference and executed trajectories, while the right plots show the corresponding temporal profiles of the positions ($x$ and $y$ coordinates). For clarity of the representation, only the trajectories acquired from the left eye camera of the robot are depicted, while similar results were obtained from the right camera.
}
\label{fig.resultsctrl}
\end{figure}
The experiments on the second dataset also show that the proposed method allows the robot to use data observed from another agent and imitate the observed motion.
Results of 3 repetitions of this task are represented in Fig.~\ref{fig.resultsctrl_piano}. The mean squared error score achieved on this task on the four-dimensional visual data is $0.13\%$ (corresponding to an average of only 3 pixels error in the image frames).
The visual trajectory executed by the robot and represented in Fig.~\ref{fig.resultsctrl_piano} closely tracks the trajectory demonstrated. The robot is able to replicate the trajectory and successfully hit the two keys that were played by the demonstrator.
It is possible to note that the results on the $y$ coordinate are more accurate than those obtained on the $x$ coordinate. This reflects the structure of the actions performed during the exploration, which are used for training the model. While the exploratory movements spanned a wide range on the vertical direction, a smaller part of the space was explored on the horizontal direction.
We hypothesize that the bias observed in the network performance is related to the fact that the data acquired through the motor babbling exploration were also biased and constrained within a limited portion of the operational space. This limited, biased exploration allowed a more efficient data collection for the scope of the experiments and tasks described in this paper.
We discuss this point further in Section~\ref{sec.discussion}.
\begin{figure}
\includegraphics[width=\columnwidth]{fig10.pdf}
\caption{Results of the imitation task on the data collected from a human playing a piano keyboard. The proposed method (colored lines) allows the robot to effectively track the reference visual trajectory (black line).
The left plot shows the 2D visual position representation of the reference and executed trajectories, while the right plots show the corresponding temporal profiles of the positions ($x$ and $y$ coordinates). For clarity of the representation, only 3 of the repetitions performed on the task are represented, and only the trajectories acquired from the left eye camera of the robot are depicted (analogous results were obtained from the right camera).
}
\label{fig.resultsctrl_piano}
\end{figure}
\subsection{Results summary}
In summary, the proposed method achieves accurate reconstruction and prediction; moreover, it is able to generate control signals to imitate visual trajectories consistently and accurately.
We report in Table~\ref{tab.results_summary} a summary of the quantitative results obtained and described in the previous subsections.
The proposed method achieved low prediction errors across the different tasks considered: the model was able to predict with errors that can be considered negligible with respect to the state and action spaces (\textit{e.g.} less than 2 degrees angles for joint positions, less than 6 pixels in the vision space).
\begin{table}[]
\centering
\caption{Accuracy scores summary: low prediction error is achieved on all the considered tasks, as only small discrepancies to the reference are measured.
\label{tab.results_summary}}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{@{}ll@{}}
\toprule
\multicolumn{1}{l|}{Task} & \begin{tabular}[c]{@{}l@{}}Accuracy \\ (percentage scores, relative to the dataset ranges)\end{tabular} \\ \midrule
\multicolumn{1}{l|}{Reconstruction} & \begin{tabular}[c]{@{}l@{}}Joint pos.: 0.46\% ($\approx 1.29$ degrees)\\ Vision: 0.05\% ($\approx 1.85$ pixels)\\ Touch: 2.35\% \\ Sound: 3.35\%\\ Motor c.: 1.29\% ($\approx 2.16$ degrees per sec.)\end{tabular} \\\midrule
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Reconstruction\\ from partial data\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Joint pos.: 1.39\% ($\approx 2.24$ degrees)\\ Vision: 0.05\% ($\approx 1.85$ pixels)\\ Touch: 9.42\%\\ Sound: 3.95\%\\ Motor c.: 2.32\% ($\approx 2.89$ degrees per sec.)\end{tabular} \\ \midrule
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Prediction\\ of self motion\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Single step: 0.21\% ($<4$ pixels)\\ Multi-step: 0.42\% ($<5$ pixels, after 50 time steps)\end{tabular}
\\ \midrule
\multicolumn{1}{l|}{\begin{tabular}[c]{@{}l@{}}Prediction\\ of others' motion\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Piano playing: 0.64\% ($\approx 6$ pixels)\\ Imperial-PRL-KSC: 0.69\% ($\approx 6$ pixels)\end{tabular} \\ \midrule
\multicolumn{1}{l|}{Imitation} & \begin{tabular}[c]{@{}l@{}}Motor babbling: 0.48\% ($\approx 5$ pixels)\\ Playing keys: 0.13\% ($\approx 3$ pixels)\end{tabular} \\ \bottomrule
\end{tabular}
}
\end{table}
\subsection{Comparison with other methods}
An important aspect of this work is that the training procedure can be applied to other neural network architectures with reconstruction capabilities.
The augmentation of the dataset with different arrangements of missing modalities enables the construction of a single model capable of executing several tasks. In order to illustrate the possibility of applying this training procedure to other networks, and to compare the accuracy of the proposed model against alternatives, we have tested several other architectures:
\begin{itemize}
\item[(i)] Vanilla VAE: a standard VAE model (\textit{e.g.}~\citep{kingma2013auto}) trained in a denoising fashion on the dataset without missing modalities, by using a probability of 30\% to set some values of the inputs to $0$.
\item[(ii)] Vanilla VAE trained with our proposed training method, on the augmented database.
\item[(iii)] The multimodal architecture proposed in \citep{Droniou2015}. This architecture learns a shared latent representation and classification of the inputs, and can be used to reconstruct missing modalities. It is trained as (i), which is also the approach proposed by the authors. The implementation of this architecture is based on the source code provided by its authors. The sizes of the different fully connected layers have been selected to match those of our proposed architecture.
\item[(iv)] The multimodal architecture proposed in \citep{Droniou2015} trained with our augmented database.
\item[(v)] Two independent models, namely a forward and an inverse model, implemented by feed-forward neural networks.
\end{itemize}
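As an illustration of the two training regimes compared above, the following sketch contrasts the per-value denoising corruption of baseline (i) with the whole-modality masking used in our augmented database. The function and modality names are hypothetical placeholders and do not correspond to our actual implementation:

```python
import random

def denoise_corrupt(sample, p=0.3, rng=random):
    """Baseline (i): independently zero each input value with probability p."""
    return {name: [0.0 if rng.random() < p else v for v in values]
            for name, values in sample.items()}

def mask_arrangement(sample, missing):
    """Proposed augmentation: zero out the whole modalities listed in
    `missing`, leaving the other modalities untouched."""
    return {name: ([0.0] * len(values) if name in missing else list(values))
            for name, values in sample.items()}

def augment(dataset, arrangements):
    """Build (input, target) pairs, one per arrangement of missing
    modalities; the reconstruction target is always the complete sample."""
    return [(mask_arrangement(sample, missing), sample)
            for sample in dataset
            for missing in arrangements]
```

In this reading, the same reconstruction network can be trained either way; only the corruption applied to its inputs differs.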
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{fig11.pdf}
\caption{Visualization of performance scores (prediction error) of the proposed method and other methods. Our training strategy clearly improves performance of reconstruction methods, including Vanilla VAE and the model proposed in \citep{Droniou2015}. Our method performs equally or better than the alternatives in the complex fully sensorimotor state estimation task.}
\label{fig.chart}
\end{figure}
Implementation details of the different architectures are given in~\ref{app.architecture}.
We considered three representative cases for comparison, namely:
\begin{itemize}
\item prediction of the current sensory state from the previous one (case 2 of Table~\ref{tab.training_struc}); this case corresponds to the forward model function;
\item prediction of the motor commands from the visual information only (case 4 of Table~\ref{tab.training_struc}); this case corresponds to the inverse model function;
\item prediction of the whole sensorimotor state from the external visual information and the current joints configuration (case 3 of Table~\ref{tab.training_struc}); this case corresponds to the imitation scenario.
\end{itemize}
Fig.~\ref{fig.chart} provides a visual representation of the performance comparisons in terms of prediction error.
Table~\ref{tab.rec_comparison} summarizes the MSE scores obtained by the models compared.
From the results presented in Table~\ref{tab.rec_comparison}, we are able to draw the following conclusions.
First, the proposed training strategy consistently improves the performance of the considered models, allowing the MSE scores to drop to approximately half of the original scores in the case of the model from \citep{Droniou2015}, and to a small fraction of them in the case of the vanilla VAE. Also, the proposed multimodal VAE outperforms a vanilla VAE model: we argue that this is because the proposed multimodal model can learn both modality-specific and cross-modality features thanks to the modular structure of the encoder/decoder and the joint probability distribution learned in the latent encoding.
The comparison with the two independent forward and inverse models demonstrates that the proposed architecture performs better because it can fulfill the two functions (of forward and inverse model) simultaneously.
In this comparison, the predictions from the forward model are used for the ``forward model'' case, and the predictions from the inverse model for the ``inverse model'' case, respectively.
To achieve the third case (imitation case) the forward and inverse models must feed each other in order to produce the whole sensorimotor state from the visual and proprioception information: first, the inverse model is applied to obtain the motor commands, which are then used by the forward model to produce the sensory state prediction. Despite each individual model being (almost) perfectly suited to its own function (note the lowest scores achieved), the combination of the two does not achieve the best performance on the imitation case.
On the contrary, the proposed architecture outperforms this baseline.
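The chaining of the two independent models in the imitation case can be sketched as follows; the model interfaces are hypothetical placeholders, not our actual networks:

```python
def imitation_via_chained_models(inverse_model, forward_model,
                                 prev_state, observed_vision):
    """Baseline (v) applied to the imitation case: the inverse model
    infers motor commands from the observed vision, and the forward
    model then predicts the resulting sensory state. Any error of the
    inverse step propagates into the forward step, which is one reason
    the chained pipeline underperforms a joint model on imitation even
    when each component is accurate in isolation."""
    motor_commands = inverse_model(prev_state, observed_vision)
    predicted_sensory = forward_model(prev_state, motor_commands)
    return motor_commands, predicted_sensory
```

By contrast, the proposed architecture produces the full sensorimotor state in a single pass, without an intermediate hand-off between two separately trained models.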
\begin{table}
\centering
\caption{ Accuracy of different architectures on the tasks presented in this paper. The training and evaluation of the different models have been replicated 10 times. The results are presented in the form of percentages indicating: median [first quartile; third quartile].
\label{tab.rec_comparison}}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{l | c c c}
\hline\hline
Method &
\begin{tabular}[c]{@{}c@{}}Prediction of \\ sensory state \\ (\textit{forward model})\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Prediction of \\ motor command \\ (\textit{inverse model})\end{tabular} &
\begin{tabular}[c]{@{}c@{}}Prediction of \\ full sensorimotor state \\ in imitation case\end{tabular}
\\ [0.5ex] \hline\hline \\ [-1ex]
Our &\textbf{1.13\% [0.96; 1.22]\%} & \textbf{2.31\% [2.28; 2.34]\%} & \textbf{1.52\% [1.45; 1.67]\%}
\\ \hline
\\[-1ex]
Vanilla VAE & 19.37\% [12.48; 35.39]\% & 22.65\% [12.47; 49.08]\% & 8.43\% [6.86; 9.16]\% \\ \hline
\\[-1ex]
\begin{tabular}[l]{@{}l@{}}Vanilla VAE \\ $\quad$+ our training method \end{tabular} & 1.75\% [1.63; 1.82]\% & 3.31\% [3.24; 3.73]\% & 1.76\% [1.72; 1.81]\%
\\ \hline
\\[-1ex]
Model from \citep{Droniou2015} & 2.36\% [2.07;2.61]\% & 5.72\% [5.66; 5.77]\% & 3.56\% [3.22; 3.71]\%
\\ \hline \\[-1ex]
\begin{tabular}[l]{@{}l@{}}Model from \citep{Droniou2015} \\ $\quad$+ our training method \end{tabular} & \textbf{1.09\% [1.06; 1.12]\%} & 3.03\% [2.91; 3.06]\% & \textbf{1.45\% [1.43; 1.47]\%}
\\ \hline
\\[-1ex]
\begin{tabular}[l]{@{}l@{}}Indep. forward \\ $\quad$ and inverse models \end{tabular} & 0.51\% [0.49; 0.53]\% & 0.24\% [0.23; 0.26]\% & 3.39\% [3.04; 3.64]\%
\\ [0.5ex] \hline
\end{tabular}
}
\end{table}
\section{Discussion}
\label{sec.discussion}
The results presented in this study show that a robot can learn to predict the visual trajectories of another agent from an egocentric point of view by exploiting only self-learned internal models.
In this study, it has been argued that one of the main challenges in achieving predictions of others only based on internal models of self is the difference of the available data:
while the whole set of sensorimotor data is available when the robot is acting and exploring, only visual information is available when the robot observes another agent.
This motivated the proposed strategy to reconstruct and infer the missing information.
In particular, the proposed training strategy has proven crucial to improving model performance, and
the proposed variational autoencoder allowed a robot to learn probability distributions over different sensorimotor modalities which capture the kinematic redundancy of the robot's motions.
The choice of the variational autoencoder was motivated by its capability of modelling data uncertainty, through a learned posterior distribution represented by the mean and the variance of a Gaussian distribution.
The multimodal formulation, moreover, allows us to combine different representations of different types of data into a single distribution (the learned posterior distribution), that gracefully merges the different sources of information.
In addition, the encoder-decoder structure of the variational autoencoder is ideal for reconstruction and self-supervised learning purposes, hence a perfect fit for the objective of this work: that is to reproduce (reconstruct, predict, generate) signals during inference, after training on exploration (self-collected) data.
The choice of a variational autoencoder model instead of a classical autoencoder also allowed us to leverage the advantages of generative models.
Variational autoencoders model the input data by means of a distribution, generally (as in our case) a Gaussian distribution, defined by a mean and a variance. This allows the model to capture a more general and flexible underlying structure of the data compared to other models (such as standard autoencoders or encoder-decoder models). In our case, the distribution is action-conditioned since part of the input includes the motor commands. This means that the posterior distribution learned during training captures the correspondences between actions and sensor observations, and learns that the same observation can correspond to different actions. This is shown in Figure~\ref{fig.vae-redundancy}: despite the fact that joint $q_2$ does not follow the prescribed trajectory, the visual trajectory (as well as the tactile and acoustic ones) is tracked accurately. This is because the same visual position of the hand can be achieved by a number of different joint configurations (redundancy). The fact that the variance of joint $q_2$ is significantly bigger than the variance of the other joints supports this claim, because it represents the uncertainty of this particular joint motion.
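The Gaussian posterior at the heart of this mechanism can be illustrated with the standard reparameterization used in VAEs; this is a generic sketch of the sampling step, not our network code:

```python
import math
import random

def sample_posterior(mu, log_var, rng=random):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1)
    and sigma = exp(log_var / 2). A dimension with a large learned
    variance (as for joint q_2 in Fig. fig.vae-redundancy) is sampled
    with correspondingly large spread, expressing the model's
    uncertainty about that dimension while leaving low-variance
    dimensions essentially deterministic."""
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]
```

Repeated draws from such a posterior therefore realize different joint configurations that all map to the same visual hand position, which is how the model encodes kinematic redundancy.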
The proposed approach could be enhanced by enforcing the variational autoencoder to learn a latent space of a prescribed shape, from which inputs can be sampled in a more meaningful manner to generate synthetic sensorimotor data.
Although we leave the exploration of this direction for future work, we believe this is a strong and promising characteristic of the chosen model in the context of multimodal learning.
Another key characteristic of the proposed multimodal variational autoencoder is that this model can learn both modality-specific and cross-modality features thanks to the modular structure of the encoder/decoder and the joint probability distribution learned in the latent encoding.
A limitation of the current implementation is the dependence of the reconstruction accuracy on the explored sensorimotor space. In particular, it is possible that combinations of sensory states reached during an imitation task are far from the training set of states used in the training of the network. In this case the network ``guesses'' motor commands by sampling from the learned distribution, but the reconstruction accuracy is usually poor due to the lack of samples resembling the observed new sensory state.
The problem of generalizing to unexplored regions of the space is indeed a very interesting and still largely unsolved problem in robotics as well as in exploration methods in other domains (\textit{e.g.} machine learning, reinforcement learning, multi-task learning, \textit{etc.}). One possibility to improve our current method would be to enlarge the exploration space to include a larger region of the multimodal space (\textit{e.g.} bigger areas of Cartesian/joint space). This would come with the problem of having to acquire a larger amount of data, thus making learning of the model slower.
A possible direction is the implementation of more sophisticated exploration strategies, for instance curiosity-based strategies \citep{maestre2015bootstrapping,baranes2010intrinsically}, or to exploit the generative nature of the model as mentioned earlier.
Finally, in this paper, we designed the tasks in a way that the robot and the human are both capable of executing it. It would be interesting in future works to investigate how to identify and address the situation when the task cannot be fulfilled by the robot.
\section{Conclusion and Future Work}
This work takes inspiration from cognitive studies showing that humans can predict others' actions by using their own internal models \citep{demiris2014mirror}. Following this direction, we have implemented a new architecture that allows a robot to predict visual trajectories of other agents' actions by using only self-learned internal models.
In this paper, we introduced a strategic training approach and a multimodal learning architecture that allow a robot to (1) reconstruct missing sensory modalities, (2) predict its own sensorimotor state and the visual trajectories of another agent from an egocentric point of view, and (3) imitate the observed agent's trajectories.
This versatility represents a major advantage of the proposed approach, that can thus be applied in different applications to address different objectives (\textit{e.g.} prediction, control, etc.).
This architecture leverages advantages of developmental robotics and of deep learning,
and has been evaluated extensively on different datasets and set-ups.
In future work, we will investigate how to leverage the generative capabilities of the network, and how this method can be combined with more advanced exploration strategies (such as curiosity-based strategies) in order to acquire a self-perception database that covers the robot and environment states as much as possible \citep{maestre2015bootstrapping,baranes2010intrinsically}. The presented method will also be combined with perspective taking mechanisms \citep{JohnsonDemiris05, Fischer2016} to enable prediction of future states from different viewpoints.
\section*{Acknowledgements}
This work was supported by an EPSRC doctoral scholarship (Grant Number 1507722), EU FP7 project WYSIWYD under Grant 612139, and EU Horizon2020 project PAL under Grant 643783-RIA.
\section{Introduction}
It is not often pointed out that the Universe has recently undergone a bounce {\it in connection space}
(not to be confused with a possible metric bounce at the Planck epoch).
The natural connection variable in homogeneous cosmological models is the inverse comoving Hubble parameter, here called $b$,
as opposed to the expansion factor $a$ in metric space (with $b=\dot a/N$ on-shell for a lapse function $N$). This is precisely
the variable used in characterizing the horizon structure of the Universe. It is well established (\cite{accelexp} and references therein)
that $b$ has recently transitioned from a decreasing function of time (associated with decelerated expansion) to an increasing function of time (accelerated expansion), due to $\Lambda$ or more generally a form of dark energy taking over. If we choose the connection representation in quantum cosmology the Universe has, therefore, in the recent few billion years of its life undergone a bounce or a reflection.
Reflection is one of the best ways to highlight quantum wave-like behavior~\cite{interfreflex}, sometimes with paradoxical results~\cite{quantumreflex}. The incident and reflected waves interfere, introducing oscillations in the probability, or ``ringing'', which affects the classical limit. Such interference transforms traveling waves into stationary waves, leading to effects not dissimilar to those investigated in~\cite{randono}. Independently of this, turning points in the
effective potential, dividing classically allowed and forbidden regions, are always regions where the WKB or semiclassical limit potentially
breaks down, revealing fully quantum behavior. The point of this paper is to initiate an investigation into this matter, specifically
into whether the extremes of ``quantum reflection'' could ever be felt by our recent Universe.
In this study we will base ourselves on recent work where a relational time (converting the Wheeler-DeWitt equation into a
Schr\"odinger-like equation) is obtained by demoting the constants of Nature to constants-on-shell only~\cite{JoaoLetter,JoaoPaper} (i.e.,
quantities which are constant as a result of the equations of motion, rather than being fixed parameters in the action). The conjugates
of such ``constants'' supply excellent physical time variables.
This method is nothing but an extension of unimodular gravity~\cite{unimod1} as formulated in~\cite{unimod}, where the demoted
constant is the cosmological constant, $\Lambda$, and its conjugate time is Misner's volume time~\cite{misner}.
Extensions targeting other constants (for example Newton's constant) have been considered before, notably in the context of the sequester~\cite{padilla, pad}
in the form~\cite{pad1} (where the associated ``times" are called ``fluxes"), or more recently in~\cite{vikman,vikman1}.
Regarding the Wheeler--DeWitt equation in this fashion, one finds that the fixed constant solutions appear as mono-chromatic partial waves.
By ``de-constantizing'' the constants the general
solution is a superposition of such partial waves, with amplitudes that depend on the ``de-constants''. Such superpositions can form wave packets with better normalizability properties. In this paper we investigate the simplest toy model exhibiting
a $b$-bounce, a mixture of radiation and $\Lambda$, subject to the deconstantization of $\Lambda$ and of a radiation variable (which can be
the gravitational coupling $G$). The wave packets we build thus move in two alternative time variables, the description being simpler~\cite{JoaoPaper} in terms
of the clock associated with the dominant specie (e.g., Misner time during Lambda domination). The $b$-bounce is the interesting epoch
where the ``time-zone'' changes.
The plan of this paper is as follows. In Section~\ref{classical} we set up the classical theory
highlighting the connection rather than the metric, with a view to quantization in the connection representation (Section~\ref{quantumMSS}).
We stress the large number of decision forks in the connection representation (thus, leading
to non-equivalent theories with respect to quantizations based upon the metric). Notably, beside factor ordering issues, we have ambiguities
in the {\it order} of the quantum equation. Thus we find two distinct theories for our toy model: one first, another second order.
We seek solutions to the second order theory in Sec.~\ref{2ndordertheory}, but encounter a number of mathematical problems that hinder
progress. In contrast, we produce explicit solutions to the first order theory in Section~\ref{firstordersln}, albeit at the cost of several approximations that may erase or soften important quantum behavior. Gaussian wave packets are found, and the motion of their peaks reproduces the semiclassical limit.
At the bounce they do exhibit ``ringing'' in $|\psi|^2$, as in all other quantum mechanical reflections.
However, with at least one definition of inner product and unitarity, within the semiclassical approximation this ``ringing'' disappears from the probability, as shown in Section~\ref{inner}. Nonetheless in Section~\ref{phenom} we find hints of interesting phenomenology: even within the semiclassical approximation, for a period around the bounce, the Universe is ruled by a double peaked distribution biased towards the value of
$b$ at the bounce. This could be observable.
Whether the features found/erased in Sections~\ref{firstordersln}, \ref{inner} and \ref{phenom} vanish or become more pronounced in a
realistic model with fewer approximations is left to future work (e.g.,~\cite{brunobounce}), as we discuss in a concluding Section.
\section{Classical theory}\label{classical}
We study a cosmological model with two candidate matter clocks, modeled as perfect fluids with equation of state parameter $w=\frac{1}{3}$ (radiation) and $w=-1$ (dark energy), respectively. In minisuperspace, these fluids can be characterized by their energy density $\rho$ or equivalently by a conserved quantity $\rho a^{3(w+1)}$. This conserved quantity is canonically conjugate to a clock variable, and hence particularly convenient to use.
Reduction of the Einstein--Hilbert action (with appropriate boundary term) to a homogeneous and isotropic minisuperspace model yields
\begin{equation}
S_{{\rm GR}} = \frac{3V_c}{8\pi G}\int {\rm d}t \left(\dot{b}a^2+N a\left(b^2+k\right)\right)
\end{equation}
where $b$ is conjugate to the squared scale factor $a^2$; varying with respect to $b$ gives $b=\dot{a}/N$, as stated above. $k=0,\,\pm 1$ is the usual spatial curvature parameter, and $V_c$ is the coordinate volume of each 3-dimensional slice.
A perfect fluid action in minisuperspace can be defined by~\cite{Brown}
\begin{equation}
S_{{\rm fl}} = \int {\rm d}t \left(U\dot\tau - N a^3 V_c \,\rho\left(\frac{U}{a^3 V_c}\right)\right)
\end{equation}
where $U$ is the total particle number (whose conservation is ensured by the first term) and $\tau$ is a Lagrange multiplier. For a fluid with equation of state parameter $w$, $\rho(n)=\rho_0 n^{1+w}$ for some $\rho_0$ where $n=U/(a^3 V_c)$ is the particle number density. Now introducing a new variable $m=\frac{8\pi G \rho_0}{3V_c} (\frac{U}{V_c})^{1+w}$, conservation of $U$ is equivalent to conservation of $m$, and we can define an equivalent fluid action (see also~\cite{GielenTurok,GielenMenendez})
\begin{equation}
S^{(w)}_{{\rm fl}} = \frac{3V_c}{8\pi G}\int {\rm d}t \left(\dot{m}\chi-N\frac{m}{a^{3w}}\right)\,.
\end{equation}
The total action for gravity with two fluids is then
\begin{eqnarray}
S_{{\rm GR}} + S^{(\frac{1}{3})}_{{\rm fl}}+S^{(-1)}_{{\rm fl}} &=& \frac{3V_c}{8\pi G}\int {\rm d}t \left[\dot{b}a^2+\dot{m}\chi_1+\dot{\Lambda}\chi_2\right.
\label{totalac}
\\&&\left.\quad-N a\left(-(b^2+k)+\frac{m}{a^2}+\frac{\Lambda}{3} a^2\right)\right]\nonumber
\end{eqnarray}
where we now write $m$ for the conserved quantity associated to radiation and $\Lambda/3$ for the ``cosmological integration constant'' of dark energy. (The latter is equivalent to the way in which the cosmological constant emerges in unimodular gravity~\cite{unimod}; the factor of 3 ensures consistency with the usual definition of $\Lambda$.) We will assume that $m$ and $\Lambda$ are positive: other solutions are of less direct interest in cosmology. Classically the values of such conserved quantities can be fixed once and for all. In the quantum theory discussed below, we will only be interested in semiclassical states sharply peaked around some positive $m$ and $\Lambda$ values, even though the corresponding operators are defined with eigenvalues covering the whole real line in order to simplify the technical aspects of the theory.
The Lagrangian is in canonical form $\mathcal{L}=p_i\dot{q}^i-\mathcal{H}$, which implies the nonvanishing Poisson brackets
\begin{equation}\label{PB}
\{b,a^2\}=\{m,\chi_1\}=\{\Lambda,\chi_2\}=\frac{8\pi G}{3 V_c}
\end{equation}
and Hamiltonian
\begin{equation}
\mathcal{H} = \frac{3V_c}{8\pi G}N a\left(-(b^2+k)+\frac{m}{a^2}+\frac{\Lambda}{3} a^2\right)\,.
\label{hamiltonian}
\end{equation}
Importantly, this Hamiltonian is {\em linear} in $m$ and $\Lambda$; for a suitable choice of lapse given by the appropriate power of $a$, the equations of motion for $\chi_1$ and $\chi_2$ can be brought into the form $\dot\chi_i=-1$; if one allows for a negative lapse $\dot\chi_i=1$ would also be possible. Hence, in such a gauge either $\chi_1$ or $\chi_2$ is identified with (minus) the time coordinate~\cite{GielenMenendez,JoaoLetter}.
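Explicitly, using the brackets (\ref{PB}) and the Hamiltonian (\ref{hamiltonian}), Hamilton's equations for the clock variables read
\begin{equation}
\dot\chi_1=\{\chi_1,\mathcal{H}\}=-\frac{N}{a}\,,\qquad
\dot\chi_2=\{\chi_2,\mathcal{H}\}=-\frac{N a^3}{3}\,,
\end{equation}
so that the lapse choice $N=a$ yields $\dot\chi_1=-1$ (a radiation clock), while $N=3/a^3$ yields $\dot\chi_2=-1$ (a clock proportional to four-volume time).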
We could apply any canonical transformation upon these variables, in particular point transformations from constants to functions of themselves (inducing time conjugates proportional to the original one, the proportionality factor being a function of the constants). In particular it will be convenient to introduce the canonically transformed pair
\begin{equation}
\phi=\frac{3}{\Lambda}\,; \quad T_\phi= -3\frac{\chi_2}{\phi^2}
\label{canontransf}
\end{equation}
instead of $\Lambda$ and $\chi_2$.
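One can check that this transformation is indeed canonical: using Eq.~(\ref{PB}),
\begin{equation}
\{\phi,T_\phi\}=\frac{\partial\phi}{\partial\Lambda}\,\frac{\partial T_\phi}{\partial\chi_2}\,\{\Lambda,\chi_2\}=\left(-\frac{3}{\Lambda^2}\right)\left(-\frac{3}{\phi^2}\right)\frac{8\pi G}{3V_c}=\frac{8\pi G}{3V_c}\,,
\end{equation}
since $\Lambda\phi=3$.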
Evidently, variation of Eq.~(\ref{totalac}) with respect to $N$ leads to a Hamiltonian constraint
\begin{equation}
-(b^2+k)+\frac{m}{a^2}+\frac{\Lambda}{3} a^2=0
\label{hamconst}
\end{equation}
which is equivalent to the Friedmann equation. We will think of $b$ as a ``coordinate'' and of $a^2$ as a ``momentum'' variable, and introduce the shorthand $V(b)\equiv b^2 + k$ viewing the $b$ dependence in Eq.~(\ref{hamconst}) as a potential, whereas the $a^2$-dependent terms play the role of kinetic terms.
If we use the variables (\ref{canontransf}) from now on, we can give the two solutions to the constraint in terms of $a^2$ as
\begin{equation}
a^2_\pm = \frac{\phi}{2}\left(V(b)\pm\sqrt{V(b)^2-4m/\phi}\right)
\label{sqrtsolution}
\end{equation}
which can be seen as two constraints, linear in $a^2$, which taken together are equivalent to the original (\ref{hamconst}) which is quadratic in $a^2$. We could write this alternatively as
\begin{equation}
h_\pm(b)a_\pm^2-\phi:=\frac{2a_\pm^2}{V(b)\pm \sqrt{V(b)^2-4m/\phi}} - \phi = 0
\label{linearized}
\end{equation}
in terms of the ``linearizing'' conserved quantity $\phi$, as suggested in~\cite{JoaoLetter,JoaoPaper}. The negative sign solution in Eq.~(\ref{sqrtsolution}) corresponds to a regime in which radiation dominates ($\phi m\gg a^4$) whereas the positive sign corresponds to $\Lambda$ domination, as one can see by checking which solution survives in the $m\rightarrow 0$ or $\Lambda\rightarrow 0$ ($\phi\rightarrow\infty$) limit.
The equations of motion arising from Eq.~(\ref{hamconst}) can be solved numerically\footnote{Analytical solutions can be given in conformal time in terms of Jacobi elliptic functions~\cite{twosheet}.}, which shows explicitly how the classical solutions transition from a radiation-dominated to a $\Lambda$-dominated branch of Eq.~(\ref{sqrtsolution}). We plot some examples (one for $k=0$ and one for $k=1$) in Fig.~\ref{solutionplot}. Notice that the point of handover between the two branches (which is when radiation and dark energy have equal energy densities, $\phi m=a^4$) corresponds to a ``bounce'' in $b$, where $\dot{b}=0$. This bounce of course happens at a time where the Universe is overall still expanding. It happens when $V(b)=2\sqrt{m/\phi}$, or equivalently when
\begin{equation}\label{b0}
b^2=b_0^2:=2\sqrt{\frac{m}{\phi}}-k\,.
\end{equation}
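Equivalently, note that the two roots in Eq.~(\ref{sqrtsolution}) obey
\begin{equation}
a^2_+ + a^2_- = \phi\,V(b)\,,\qquad a^2_+\,a^2_- = m\phi\,,
\end{equation}
so the $b$-bounce is precisely the degenerate point $a^2_+=a^2_-$, where the square root in Eq.~(\ref{sqrtsolution}) vanishes and $a^4=m\phi$.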
\begin{figure}[htp]
\includegraphics[scale=0.8]{plotk0-eps-converted-to.pdf}
\includegraphics[scale=0.8]{plotkp-eps-converted-to.pdf}
\caption{Cosmological solutions with initial data (set at $t=0$) $a=1$, $b=2$, and $m=1.2$, (top: $k=0,\,\Lambda=8.4$, bottom: $k=1,\,\Lambda=11.4$). These follow the radiation-dominated (orange dotted) branch at small $a^2$ but the $\Lambda$-dominated (green dashed) branch at large $a^2$. The time coordinate is defined by setting $N=1/a$.}
\label{solutionplot}
\end{figure}
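As a cross-check of Fig.~\ref{solutionplot}, the classical dynamics can be reproduced with a minimal integrator. In the $N=1/a$ gauge the equations of motion reduce to ${\rm d}(a^2)/{\rm d}t=2b$ and ${\rm d}b/{\rm d}t=\Lambda/3-m/a^4$ (the curvature $k$ drops out of the flow and enters only through the initial data), and they conserve the constraint (\ref{hamconst}). The following Python sketch, illustrative only and not the code used for the figures, integrates these with a fourth order Runge--Kutta step:

```python
def rhs(state, m, lam):
    """Equations of motion in the N = 1/a gauge:
    d(a^2)/dt = 2 b,  db/dt = Lambda/3 - m/a^4."""
    A, b = state  # A stands for a^2
    return (2.0 * b, lam / 3.0 - m / A ** 2)

def rk4_step(state, dt, m, lam):
    """One classical fourth order Runge-Kutta step."""
    k1 = rhs(state, m, lam)
    k2 = rhs((state[0] + 0.5 * dt * k1[0], state[1] + 0.5 * dt * k1[1]), m, lam)
    k3 = rhs((state[0] + 0.5 * dt * k2[0], state[1] + 0.5 * dt * k2[1]), m, lam)
    k4 = rhs((state[0] + dt * k3[0], state[1] + dt * k3[1]), m, lam)
    return (state[0] + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            state[1] + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def integrate(A0, b0, m, lam, dt, steps):
    """Return the trajectory [(a^2, b), ...]; use dt < 0 to run
    backward in time, e.g. to locate the b-bounce in the past."""
    traj = [(A0, b0)]
    state = (A0, b0)
    for _ in range(steps):
        state = rk4_step(state, dt, m, lam)
        traj.append(state)
    return traj
```

Run backward from the $k=0$ initial data of Fig.~\ref{solutionplot} ($a^2=1$, $b=2$, $m=1.2$, $\Lambda=8.4$), $b$ decreases to its minimum $b_0=(2\sqrt{m\Lambda/3})^{1/2}\approx 1.91$ at $a^4=m\phi$ and then grows again into the radiation era, with the constraint conserved to numerical precision.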
It is important to realize that a linearized form of the constraints based on Eq.~(\ref{sqrtsolution}) leads to the same dynamical equations as the ones arising from Eq.~(\ref{hamiltonian}): for the Hamiltonian
\begin{equation}
\mathcal{H}_{\pm}= \frac{3 V_c}{8\pi G}\left(a^2 - \frac{\phi}{2}\left(V(b)\pm \sqrt{V(b)^2-4m/\phi}\right)\right)
\label{linearhamilt}
\end{equation}
we obtain
\begin{equation}
\frac{{\rm d}b}{{\rm d}t}=1\,,\quad \frac{{\rm d}(a^2)}{{\rm d}t}=\phi b \left(1\pm\frac{V(b)}{\sqrt{V(b)^2-4m/\phi}}\right)\,.
\label{eom}
\end{equation}
This form of the dynamics corresponds to a gauge in which $b$ plays the role of time and we are expressing the solution for $a^2$ in ``relational'' form $a^2(b)$. The second equation in Eq.~(\ref{eom}) can be obtained from Hamilton's equations for Eq.~(\ref{hamiltonian}) by using $\frac{{\rm d}(a^2)}{{\rm d}b}\equiv \frac{{\rm d}(a^2)}{{\rm d}t}/\frac{{\rm d}b}{{\rm d}t}$ and substituting in one of the solutions for $a^2(b)$ given by Eq.~(\ref{sqrtsolution}). Of course this way of defining things can only ever reproduce one branch of the dynamics corresponding to one of the two possible sign choices; the equations of motion break down at the turning point $\phi m =a^4$, where one should flip from $\mathcal{H}_+$ to $\mathcal{H}_-$ or vice versa and where both the parametrization $a^2(b)$ and the gauge choice $\dot{b}=1$ in Eq.~(\ref{eom}) fail. In this sense, the ambiguities in passing from Eq.~(\ref{hamiltonian}) to the linearized form (\ref{sqrtsolution}) are related to the failure of $b$ to be a good global clock for this system, a situation frequently discussed in the literature on constrained systems~\cite{IshamRovelli}.
\section{Quantum theory}\label{quantumMSS}
Minisuperspace quantization follows from promoting the first Poisson bracket in (\ref{PB}) to
\begin{equation} \label{com1}
\left[ \hat{b},\hat{a}^{2}\right] ={\rm i}\frac{l_{P}^{2}}{3V_{c}},
\end{equation}%
where $l_{P}=\sqrt{8\pi G_{N}\hbar }$ is the reduced Planck length.
Given our focus on a bounce in connection space, we choose the representation
diagonalizing $b$, so that
\begin{equation}\label{a2inb}
\hat a^2=-{\rm i}\frac{l_P^2}{3V_c}\frac{\partial}{\partial b}=:-{\rm i}\mathfrak{h}\frac{\partial}{\partial b}
\end{equation}
where we are introducing a shorthand for the ``effective Planck parameter'' $\mathfrak{h}$ as in~\cite{BarrowMagueijo}.
By choosing this representation we are making a very non-innocuous decision, leading to minimal quantum theories which
are not dual to the most obvious ones based on the metric representation.
When implementing the Hamiltonian constraint, in the metric representation all matter contents (subject to a given theory of gravity)
share the same gravity-fixed kinetic term, with the different equations of state $w$ reflected in different powers of $a$ in the effective potential,
$U(a)$, as is well known (e.g.,~\cite{vil-rev}).
In contrast, in the connection representation all matter fillings share the same gravity-fixed effective potential $V(b)=b^2+k$ introduced below Eq.~(\ref{hamconst}), with different
matter components appearing as different kinetic terms, induced by their different powers of $a^2\rightarrow -{\rm i}\mathfrak{h}\partial/\partial b$.
As a result the connection representation leads to further ambiguities quantizing these theories, besides the usual
factor ordering ambiguities. In addition to these, we have an ambiguity in the {\it order} of
the quantum equation (with a non-trivial interaction between the two issues). In the specific model we are studying here, we already discussed this issue for the classical theory above. We can work with a single Hamiltonian constraint (\ref{hamconst}) which is quadratic in $a^2$,
\begin{equation}\label{quartic}
\frac{a^4}{\phi}-V(b)a^2+m=0\,,
\end{equation}
with the middle term providing ordering problems; or we can write Eq.~(\ref{quartic}) as $(a^2-a^2_+)(a^2-a^2_-)=0$ with $a_\pm^2$ given in Eq.~(\ref{sqrtsolution}), and quantize a Hamiltonian constraint written as a two-branch condition
\begin{equation}\label{a2twobranch}
\hat a^2-\frac{\phi}{2}\left(V(b)\pm\sqrt {V(b)^2-4m/\phi}\right)=0.
\end{equation}
The two branches then naturally link with the mono-fluid prescriptions in~\cite{JoaoLetter,JoaoPaper}
when $\Lambda$ or radiation dominate (as we will see in detail later). For more complicated cosmological models in which multiple components with different powers of $a^2$ are present the situation can clearly become more complicated, with additional ambiguities in how to impose the Hamiltonian constraint. Notice also that an analogous linearization would have been possible in the metric representation, by writing Eq.~(\ref{quartic}) as $(b-b_+)(b-b_-)=0$ in terms of the two solutions for $b(a^2)$. We see no reason to expect that the resulting theories obtained by applying this procedure to either $b$ or $a^2$ would be related by Fourier transform.
We therefore have in hand two distinct quantum theories based on applying Eq.~(\ref{a2inb}) to either Eq.~(\ref{quartic}),
leading to
\begin{equation}\label{quarticeq}
\left[\frac{(\hat a^2)^2}{\phi}-V(b)\hat a^2+m\right]\psi=0\,,
\end{equation}
or to Eq.~(\ref{a2twobranch}), leading to
\begin{equation}\label{a2twobrancheq1}
\left[h_\pm(b)\hat a^2-\phi\right]\psi =0
\end{equation}
with $h_\pm(b)$ defined in Eq.~(\ref{linearized}).
One results in a second order formulation; the other in a two-branch first order formulation.
These theories are different and there is no reason
why one (with any ordering) should be equivalent to the other. Indeed, they are not. Let us define operators
\begin{equation}
D_\pm=\hat a^2-a_\pm^2(b;m,\phi)
\end{equation}
where we work (for now) in a representation in which $m$ and $\phi$ act as multiplication operators. These operators clearly do not commute:
\begin{equation}
\left[ D_+,D_-\right] \neq 0\,.
\end{equation}
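This can be verified symbolically in the representation $\hat a^2=-{\rm i}\mathfrak{h}\,\partial_b$ (an assumption here, consistent with the differential realizations used below), acting on a test function and taking $k=0$ for brevity:

```python
import sympy as sp

b, m, phi, hbar = sp.symbols('b m phi hbar', positive=True)
f = sp.Function('f')(b)
V = b**2  # V(b) = b^2 + k, taking k = 0 for brevity

s = sp.sqrt(V**2 - 4*m/phi)
a2p, a2m = phi*(V + s)/2, phi*(V - s)/2  # a^2_+/-(b; m, phi)

def D(a2, g):
    # D_+/- = \hat a^2 - a^2_+/-(b), with \hat a^2 = -i hbar d/db (assumed)
    return -sp.I*hbar*sp.diff(g, b) - a2*g

comm = sp.simplify(D(a2p, D(a2m, f)) - D(a2m, D(a2p, f)))
print(comm)
# = i hbar d/db(a2m - a2p) * f, nonzero because the roots depend on b
```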
The second order formulation, based on the constraint (\ref{quartic}), has an equation of the form
\begin{equation}
:D_+ D_-:\psi=0
\end{equation}
where the $:$ denote some conventional ``normal ordering'', for example keeping the $b$ to the left of the $a^2$.
The first order formulation defined by Eq.~(\ref{a2twobrancheq1}) leads to a pair of equations
\begin{equation}
D_+\psi=0\lor D_-\psi=0
\label{firstorderdef}
\end{equation}
(note that an ordering prescription is implied here).
In keeping with the philosophy of quantum mechanics, in the presence of a situation which classically corresponds to an ``OR'' disjunction,
we superpose the separate results upon quantization, so that the space of solutions is still a vector space as in standard quantum mechanics. A generic element of this solution space will satisfy neither $D_+\psi=0$ nor $D_-\psi=0$.
To understand the difference between the two types of theory, we can compare with a simple quantum mechanics Hamiltonian $H=p^2/2m + V(x)$. Quantizing the relation $E=H(p,x)$ leads to a Schr\"odinger equation that is second order in $x$ derivatives (and which, depending on the form of $V(x)$, may not be solvable analytically). Alternatively, we could replace this fixed energy relation by {\em two} conditions $p\mp{\sqrt{ 2m(E -V(x))}}=0$ linear in $p$; these would be analogous to the conditions $D_\pm = 0$ appearing in our quantum cosmology model. In the quantum mechanics case, quantizing the linear relations and taking superpositions of their respective solutions results in a set of plane wave solutions, different to those of the second order theory. The interpretation of these plane wave solutions would be as the lowest order WKB/eikonal approximation to the theory given by the initial Schr\"odinger equation. Hence, while these approaches agree in producing the same classical dynamics (away from turning points where $p$ can change sign), the two quantum theories give different predictions in terms of $\hbar$-dependent corrections to the classical limit. In quantum cosmology, we do not know which type of quantization is ``correct'' and we saw at the end of Section~\ref{classical} that the classical cosmological dynamics can be equally described by the linear Hamiltonian (\ref{linearhamilt}) or by the original (\ref{hamiltonian}). In the quantum theory we can then follow either a first order or a second order approach as separate theories, with the difference between them becoming relevant at next-to-lowest order in $\hbar$. Again, we stress that this ambiguity goes beyond the issue of ordering ambiguities: it is about different {\em classical} representations of the same dynamics used as starting points for quantization. {\it The strategy proposed here is a new type of quantization procedure} compared to most of the existing quantum cosmology literature.
Indeed no ordering prescription for the second order formulation would lead to the total space of solutions of the first order formulation. By choosing $:D_+ D_-:\, = D_+ D_-$, for example, the solutions of $D_-\psi=0$ would be present in the second order formulation but not those of $D_+\psi=0$ (and vice versa). One might prefer a symmetric ordering $:D_+ D_-:\, = (D_+ D_- + D_- D_+)/2$ but the resulting equation would not be solved by solutions of either $D_-\psi=0$ or $D_+\psi=0$. If we start from a second order formulation in which we keep all $b$ to the left,
\begin{equation}\label{seconddiff}
\left(\frac{(\hat a^2)^2}{\phi}-V(b)\hat a^2+m\right)\psi=0\,,
\end{equation}
we do not exactly recover any of the solutions of the first order formulation, and even asymptotically (in regions in which either $m$ or $\Lambda$ dominates) we can only recover
the $D_-\psi=0$ solutions (and the radiation solutions in~\cite{JoaoLetter}). Indeed, by letting $\phi\rightarrow \infty$, Eq.~(\ref{seconddiff})
reduces to
\begin{equation}
\left(-V(b)\hat a^2+m\right)\psi=0
\end{equation}
which
asymptotically is the same as $D_-\psi=0$ (since $a_-^2\approx m/V(b)$ when $V(b)^2\gg 4m/\phi$). However, for $m=0$ we get
\begin{equation}\label{asymptsec}
\left(\frac{\hat a^4}{\phi}- V(b) \hat a^2 \right)\psi =0
\end{equation}
with $V(b)$ to the left of $\hat{a}^2$. Thus, we {\it cannot} factor out $\hat a^2$ on the left, to obtain
\begin{equation}
\left(\frac{1}{V(b)}\hat a^2-\phi\right)\psi=0
\end{equation}
and so force some solutions to asymptotically match those of $D_+\psi=0$ and the pure $\Lambda$ solutions of~\cite{JoaoLetter}. The solutions of (\ref{asymptsec}) instead match those
studied in~\cite{Ngai}. They are not the Chern--Simons state but the integral of the Chern--Simons state.
From the second order perspective, in order to reproduce the solutions of the first order theory one would need to put the $b$ to the left or right depending on the branch we look at.
The ordering in one formulation can therefore never be matched by the ordering in the other~\footnote{Apart from the forceful two-branched ordering $:D_+ D_-:\, \equiv D_+ D_-\lor D_- D_+$, of course.}.
\section{Solutions in the second order formulation}
\label{2ndordertheory}
In our model, as in the example of a general potential in the usual Schr\"odinger equation, the second order theory is more difficult to solve. If we add a possible operator ordering correction proportional to $[\hat{b}^2,\hat{a}^2]=2{\rm i}\mathfrak{h}\hat{b}$ to Eq.~(\ref{seconddiff}), we obtain the more general form
\begin{equation}\label{seconddiff2}
\left(\frac{(\hat a^2)^2}{\phi}+{\rm i}\xi\mathfrak{h} b-V(b)\hat a^2+m\right)\psi=0
\end{equation}
where $\xi$ is a free parameter (which could be fixed by self-consistency arguments; for instance, requiring the Hamiltonian constraint to be self-adjoint with respect to a standard $L^2$ inner product would imply $\xi=1$).
We can eliminate the first derivative in Eq.~(\ref{seconddiff2}) by making the ansatz
\begin{equation}
\psi(b,m,\phi)=e^{\frac{{\rm i}}{2\mathfrak{h}}\phi\left(\frac{b^3}{3}+b k\right)}\chi(b,m,\phi)
\end{equation}
so that $\chi$ now has to satisfy
\begin{equation}
\label{effectiveschr}
\left(-\frac{\mathfrak{h}^2}{\phi}\frac{\partial^2}{\partial b^2}+\left(m+{\rm i} \mathfrak{h} (\xi-1)b-\frac{\phi}{4}V(b)^2\right)\right)\chi=0
\end{equation}
which we recognize (with $\xi=1$) as a standard Schr\"odinger equation with a (negative) quartic potential. One can write down the general solution to this problem in terms of tri-confluent Heun functions (see, e.g.,~\cite{heunbook}),
\begin{eqnarray}
\chi&=&c_1(m,\phi)e^{-\frac{{\rm i}}{2\mathfrak{h}}\phi\left(\frac{b^3}{3}+b k\right)}H_T\left(\frac{m\phi}{\mathfrak{h}^2};-{\rm i}\frac{\phi}{\mathfrak{h}},-{\rm i}\frac{k\phi}{\mathfrak{h}},0,-{\rm i}\frac{\phi}{\mathfrak{h}};b\right)\nonumber
\\&&+c_2(m,\phi)e^{\frac{{\rm i}}{2\mathfrak{h}}\phi\left(\frac{b^3}{3}+b k\right)}H_T\left(\frac{m\phi}{\mathfrak{h}^2};{\rm i}\frac{\phi}{\mathfrak{h}},{\rm i}\frac{k\phi}{\mathfrak{h}},0,{\rm i}\frac{\phi}{\mathfrak{h}};b\right)\nonumber
\end{eqnarray}
where the tri-confluent Heun functions $H_T$ are normalized by defining them to be solutions to the tri-confluent Heun differential equation subject to the boundary conditions $f(0)=1$ and $f'(0)=0$. These are defined in terms of a power series around $b=0$, so that we get
\begin{equation}
\psi = c_1 + c_2 + {\rm i}\,c_2\frac{k\phi}{\mathfrak{h}}b+\frac{m\phi(c_1+c_2)-k^2\phi^2c_2}{2\mathfrak{h}^2}b^2+O(b^3)\,.
\end{equation}
These solutions could be useful for setting ``no-bounce'' boundary conditions at $b=0$ (now referring to a bounce in the scale factor), in the classically forbidden region. An immediate issue however is that tri-confluent Heun functions defined in this way diverge badly at large $b$, and are hence not very useful for studying the classically allowed region. While they can be written down for arbitrary $\xi$, there seems to be no particular value which allows for more elementary expressions or analytical functions that are well-defined for all $b$.
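The reduction of Eq.~(\ref{seconddiff2}) to Eq.~(\ref{effectiveschr}) under the exponential ansatz above can be verified symbolically, again assuming the representation $\hat a^2=-{\rm i}\mathfrak{h}\,\partial_b$ (with $V(b)=b^2+k$ ordered to the left); a sympy sketch:

```python
import sympy as sp

b, k, m, phi, hbar, xi = sp.symbols('b k m phi hbar xi', real=True)
chi = sp.Function('chi')(b)
V = b**2 + k

# the ansatz: psi = exp(i phi (b^3/3 + b k) / (2 hbar)) * chi(b)
phase = sp.exp(sp.I*phi*(b**3/3 + b*k)/(2*hbar))
psi = phase*chi

def a2hat(g):
    # assumed representation \hat a^2 = -i hbar d/db
    return -sp.I*hbar*sp.diff(g, b)

# left-hand side of Eq. (seconddiff2), with V(b) ordered to the left
lhs = a2hat(a2hat(psi))/phi + sp.I*xi*hbar*b*psi - V*a2hat(psi) + m*psi

# expected effective Schroedinger form, Eq. (effectiveschr)
expected = (-hbar**2/phi*sp.diff(chi, b, 2)
            + (m + sp.I*hbar*(xi - 1)*b - phi*V**2/4)*chi)

residual = sp.simplify(sp.expand(lhs - phase*expected))
print(residual)  # 0: the first-derivative term has been eliminated
```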
The divergences seen in these ``analytical'' solutions are rooted in the definition of these functions as a power series around $b=0$; full numerical solutions show no such divergence but decay at large $b$. This is reassuring, but one might prefer retaining analytical expressions that can at least be valid at large $b$. In this limit, we can obtain an approximate solution by setting $m=0$, $\xi=1$, and $V(b)^2=b^4$ (i.e., $k=0$) in Eq.~(\ref{effectiveschr}); the resulting differential equation has the general solution
\begin{equation}
\chi=\sqrt{b}\left(c_3(m,\phi)J_{-\frac{1}{6}}\left(\frac{\phi b^3}{6\mathfrak{h}}\right)+c_4(m,\phi)J_{\frac{1}{6}}\left(\frac{\phi b^3}{6\mathfrak{h}}\right)\right)
\label{besselsols}
\end{equation}
where $J_\nu(z)$ are Bessel functions. At large $b$, these Bessel functions have the asymptotic form
\begin{eqnarray}
\chi &\sim& \frac{2}{b}\sqrt{\frac{3\mathfrak{h}}{\pi\phi}}\left(c_3(m,\phi)\sin\left(\frac{\phi b^3}{6\mathfrak{h}}+\frac{\pi}{3}\right)+\right.\nonumber
\\&&\left.\qquad c_4(m,\phi)\sin\left(\frac{\phi b^3}{6\mathfrak{h}}+\frac{\pi}{6}\right)\right)\,.
\end{eqnarray}
These asymptotic solutions are plane waves in $b^3$ modulated by a prefactor decaying as $1/b$, so they are certainly well behaved at large $b$. These large $b$ solutions can be matched to the tri-confluent Heun functions at smaller values of $b$; see Fig.~\ref{matchfig} for an example. The result of this matching agrees perfectly with a numerically constructed solution. Of course, the coefficients $c_3$ and $c_4$ in Eq.~(\ref{besselsols}) which correspond to certain initial conditions are then also only known numerically. We have no good analytical control over these solutions where they are most interesting, in the region around $b=b_0$.
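As a cross-check, one can confirm numerically that the profile in Eq.~(\ref{besselsols}) solves the $m=0$, $k=0$ effective equation $\chi''+\tfrac{\phi^2}{4\mathfrak{h}^2}b^4\chi=0$; a sketch using finite differences (units and sample point are arbitrary):

```python
import numpy as np
from scipy.special import jv

phi_, h_ = 1.0, 1.0   # illustrative units

def chi(b):
    # one branch of Eq. (besselsols): sqrt(b) * J_{1/6}(phi b^3 / (6 h))
    return np.sqrt(b)*jv(1/6, phi_*b**3/(6*h_))

b, eps = 2.0, 1e-4
chi2 = (chi(b + eps) - 2*chi(b) + chi(b - eps))/eps**2  # central difference
residual = chi2 + phi_**2/(4*h_**2)*b**4*chi(b)
print(abs(residual))  # consistent with zero at finite-difference accuracy
```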
\begin{figure}[h]
\includegraphics[scale=0.8]{heunplot-eps-converted-to.pdf}
\includegraphics[scale=0.8]{matchplot-eps-converted-to.pdf}
\caption{Solutions with $m=1.2$, $k=0$, $\Lambda=8.4$ and in units $\mathfrak{h}=1$ (real part in blue, imaginary part in orange). The top plot shows a solution given in terms of a tri-confluent Heun function which diverges at $b\approx 6.8$. At the bottom we have matched this to an approximate solution (\ref{besselsols}) by matching the wave function and its derivative at $b=6$, which leads to a solution defined at arbitrarily large $b$. The classically allowed region is $b>b_0\approx 1.91$. This solution agrees with a numerical solution constructed from the same initial data $\psi(0)=1,\;\psi'(0)=0$ (black, dashed and dotted).}
\label{matchfig}
\end{figure}
If we interpret $|\psi|^2$ as a probability density, we see that this falls off as $1/b^2$ at large $b$ and so most of the probability would in fact be concentrated near the ``bounce'' $b=b_0$. One might be tempted to relate this property to the coincidence problem of cosmology, since it would suggest that an observer would be likely to find themselves not too far from equality between radiation and $\Lambda$, contrary to the naive expectation in classical cosmology that $\Lambda$ should dominate completely. Below we will compare this expectation with a more detailed calculation (and using a different measure) in the first order theory.
We can contrast these attempts at obtaining exact solutions to the second order theory with what would be the traditional approach in quantum cosmology, which is to resort to approximate semiclassical solutions. After all, the setup of quantum cosmology is anyway at best a semiclassical approximation to quantum gravity. If we start from a WKB-type ansatz $\psi(b,m,\phi)=A(b,m,\phi)e^{{\rm i} P(b,m,\phi)/\mathfrak{h}}$, truncation of Eq.~(\ref{seconddiff2}) to lowest order in $\mathfrak{h}$ implies that
\begin{equation}
\frac{1}{\phi}\left(\frac{\partial P}{\partial b}\right)^2-V(b)\frac{\partial P}{\partial b}+m=0\,,
\end{equation}
the Hamilton--Jacobi equation corresponding to Eq.~(\ref{quartic}). Its solutions are $\partial P/\partial b=a^2_\pm(b;m,\phi)$ with $a^2_\pm$ as in Eq.~(\ref{sqrtsolution}),
\begin{equation}
a^2_\pm = \frac{\phi}{2}\left(V(b)\pm\sqrt{V(b)^2-4m/\phi}\right)\,,
\end{equation}
and the general lowest-order WKB solution to the second order theory is
\begin{equation}
\psi = c_+(m,\phi)e^{\frac{{\rm i}}{\mathfrak{h}}\int^b {\rm d}b'\;a^2_+}+c_-(m,\phi)e^{\frac{{\rm i}}{\mathfrak{h}}\int^b {\rm d}b'\;a^2_-}\,.
\label{firstordersol}
\end{equation}
On the other hand, Eq.~(\ref{firstordersol}) is already the {\em exact} general solution of the first order theory we defined by Eq.~(\ref{firstorderdef}). These solutions are pure plane waves in the classically allowed region $|V(b)|\ge 2\sqrt{m/\phi}$ but have a growing or decaying exponential part in the classically forbidden region $|V(b)|<2\sqrt{m/\phi}$, as expected. In the next section we will discard the exponentially growing solution corresponding to $a^2_-$, but since this forbidden region is of finite extent there are no obvious normalizability arguments that mean it has to be excluded.
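That $a^2_\pm$ indeed solve the Hamilton--Jacobi equation can be confirmed symbolically; a short sympy check with $V(b)=b^2+k$:

```python
import sympy as sp

b, k, m, phi = sp.symbols('b k m phi', positive=True)
V = b**2 + k

residuals = []
for sign in (+1, -1):
    dPdb = phi*(V + sign*sp.sqrt(V**2 - 4*m/phi))/2  # = a^2_{+/-} = dP_{+/-}/db
    residuals.append(sp.simplify(dPdb**2/phi - V*dPdb + m))

print(residuals)  # [0, 0]: both branches solve the Hamilton-Jacobi equation
```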
\section{Detailed solution in the first order formulation}\label{firstordersln}
Needless to say, the first order formulation is easier to solve analytically and take further. In these theories
(e.g.,~\cite{JoaoPaper}) the general solution is a superposition for different values of ``constants'' $\bm\alpha$ of ``spatial''
monochromatic functions $\psi_s(b; \bm \alpha)$ (solving a Wheeler--DeWitt equation for fixed values of the $\bm\alpha$)
multiplied by the appropriate time evolution factor combining $\bm \alpha$ and their conjugates $\bm T$.
The total integral takes the form
\begin{equation}\label{gensolM}
\psi(b,\bm T)=\int \mathrm{d}{\bm \alpha}\, {\cal A}( \bm \alpha)\, \exp{\left[-\frac{{\rm i}}{\mathfrak{h}} \bm\alpha \cdot \bm T \right]}\psi_s(b; \bm \alpha)\,.
\end{equation}
The $\psi_s$ are conventionally normalized so that in the classically allowed region
\begin{equation}
|\psi_s|^2=\frac{1}{(2\pi\mathfrak{h})^{D}}
\end{equation}
where $D$ is the dimensionality of the deconstantized space, i.e., the number of conserved quantities $\bm\alpha$. The model studied in this paper corresponds to (see Eqs.~(\ref{totalac}) and (\ref{canontransf}))
\begin{equation}
{\bm \alpha}=\left(\phi\equiv \frac{3}{\Lambda},m\right)\,,\;{\bm T}=\left(T_\phi,T_m =\chi_1\right)
\end{equation}
with $D=2$.
\subsection{Monochromatic solutions}\label{monoch}
In our model, the $\psi_s(b; \bm \alpha)$ are defined to be the solutions to the two branches of Eq.~(\ref{a2twobrancheq1}), given by
\begin{equation}
\psi_{s\pm}(b;\phi,m)={\cal N} \exp{\left[\frac{{\rm i}}{\mathfrak{h} } \phi X_\pm (b;\phi,m) \right]}
\label{psisdef}
\end{equation}
with (see also Eq.~(\ref{firstordersol}))
\begin{equation}
X_\pm(b;\phi ,m )=\int^b_{b_0} \mathrm{d}\tilde b\,\frac{1}{2}\left(V(\tilde{b})\pm\sqrt{{V(\tilde{b})^2- 4 m/\phi }}\right),
\end{equation}
where the integration limit is chosen to be $b=b_0$, defined in Eq.~(\ref{b0}) as the value of $b$ at the bounce.
We plot these functions, with this choice of limits and for some particular choices of the parameters, in Fig.~\ref{Fig1}.
We see that for $b^2\gg b_0^2$ the $+/-$ branches have
\begin{eqnarray}
X_+(b;\phi, m)&\approx&X_\phi=\frac{b^3}{3} + kb\,,\\
X_-(b;\phi, m)&\approx&\frac{m}{\phi}X_r = \frac{m}{\phi}\int^b\frac{\mathrm{d}\tilde b}{\tilde{b}^2 + k}\,,
\end{eqnarray}
where $X_\phi$ and $X_r$ would be the corresponding functions appearing in the exponent for a model of pure $\Lambda$ (characterized by the quantity $\phi$) and a model of pure radiation. Hence this leads to the correct limits far away from the bounce~\cite{JoaoPaper},
\begin{eqnarray}
\psi_{s+}(b;\phi,m)&\approx &{\cal N} \exp{\left[\frac{{\rm i}}{\mathfrak{h}} \phi X _\phi (b) \right]}\,,\\
\psi_{s-}(b;\phi,m)&\approx &{\cal N} \exp{\left[\frac{{\rm i}}{\mathfrak{h}} m X_r (b) \right]}\,,
\label{xrdef}
\end{eqnarray}
up to a phase related to the limits of integration. This phase is irrelevant for the
$+$ wave, since $X_\phi$ diverges with $b$, so that the $b_0$ contribution quickly becomes negligible.
It does affect the $-$ wave, if we want to match with Eq.~(\ref{xrdef}) asymptotically.
Let us assume $k=0$~\footnote{The other cases are more complicated, as the Universe could
become curvature dominated before $\Lambda$ domination.}. Then, $X_-(b)\sim -\frac{1}{b}$ at large $b$, so
in order to have agreement between Eq.~(\ref{psisdef}) and Eq.~(\ref{xrdef}) we should subtract the extra phase obtained by using $b_0$ as the lower limit
of the integral, which we denote by
\begin{equation}\label{asymptphase}
\chi:= \frac{1}{\mathfrak{h}} \phi X_- (\infty)\,.
\end{equation}
We could also take the lower limit of the integral to be $\infty$ or absorb the phase (\ref{asymptphase}) into the $-$ amplitude defined in Eq.~(\ref{superpose}),
\begin{equation}
A_-\rightarrow A_-e^{{\rm i}\chi}\,.
\end{equation}
We have plotted the various options for defining $\psi_{s-}$ in Fig.~\ref{Fig2}.
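These asymptotic limits are easily verified from the integrands $X'_\pm(b)=\tfrac12(V\pm\sqrt{V^2-4m/\phi})$; a numerical sketch with illustrative parameters and $k=0$:

```python
import numpy as np

m, phi = 1.0, 100.0   # illustrative values
b = 50.0              # b^2 >> b0^2

V = b**2              # k = 0
s = np.sqrt(V**2 - 4*m/phi)
dXp = (V + s)/2       # X'_+(b)
dXm = (V - s)/2       # X'_-(b)

r_lambda = dXp/b**2      # -> 1: X'_+ approaches V(b) = X'_phi
r_rad = dXm*b**2*phi/m   # -> 1: X'_- approaches (m/phi)/b^2 = (m/phi) X'_r
print(r_lambda, r_rad)
```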
\begin{figure}
\center
\epsfig{file=Psipm.pdf,width=8.cm}
\caption{Imaginary part of the wave functions $\psi_{s\pm}$ (Lambda branch $+$ in blue, radiation branch $-$ in orange),
with $\mathfrak{h} =1$, $m=1$, $\phi=10^6$, defined with the
lower limit $b_0$ (here and in the following plots $b_0\approx 0.0447$). Notice how the oscillation frequency increases/decreases with $b$ for the $\Lambda$/radiation dominated branches. }
\label{Fig1}
\end{figure}
\begin{figure}
\center
\epsfig{file=Psimphases.pdf,width=8.cm}
\caption{Imaginary part of the wave functions $\psi_{s-}$ with $\mathfrak{h} =1$, $m=1$, $\phi=10^6$, defined with the
lower limit of integration $b_0$ (orange) and with lower limit infinity (green), compared with the asymptotic radiation dominated
wave function (blue).
}
\label{Fig2}
\end{figure}
The general solution for $b>b_0$ is the superposition
\begin{equation}
\psi_s(b )=A_+ \psi_{s+}(b )+ A_- \psi_{s-}(b)\,,
\label{superpose}
\end{equation}
where we dropped the $\phi$ and $m$ labels to lighten the notation.
In the $b<b_0$ region we have the usual evanescent wave\footnote{Here we shall assume that the amplitude for tunneling into the contracting region $b<-b_0<0$ is negligible.}.
The appropriate solution (i.e., the one that is exponentially suppressed, rather than blowing up) is
\begin{eqnarray}
\psi(b )&=&B \psi_{s-}(b )\nonumber\\
&=& B \exp{\left[\frac{{\rm i} }{\mathfrak{h}} \phi X_- (b;\phi,m) \right]}\\
&=& B\exp{\left[\frac{ \phi }{2\mathfrak{h} } \int^b_{b_0} \mathrm{d}\tilde b \left({\rm i} V(\tilde{b}) + \sqrt{{4m/\phi - V(\tilde{b})^2 }}\right) \right]}\nonumber\,.
\end{eqnarray}
Note that the limits of integration then ensure a negative sign for the real exponential. In addition to this there is also an oscillatory factor. This solution is plotted in Fig.~\ref{Fig3}.
\begin{figure}
\center
\epsfig{file=Psievan.pdf,width=8.cm}
\caption{Imaginary part of the evanescent wave function $\psi_{s-}$ valid for $b<b_0$ with $\mathfrak{h} =1$, $m=1$,
$\phi=10^6$, defined with the lower limit $b_0$. (Strictly speaking the integrals used are only valid for $b>0$ but we
ignore the region $b<0$.) }
\label{Fig3}
\end{figure}
Our problem is now
similar to a quantum reflection problem, but with significant novelties because the medium is highly dispersive.
Usually all we have to do is match the wave functions and their derivatives at the reflection point $b_0$ to get a fully defined wave function.
Given that $X_+(b_0)=X_-(b_0)=0$, imposing continuity at $b=b_0$
requires
\begin{equation}\label{cond1}
A_++A_-=B\,.
\end{equation}
However, imposing that the first derivative of $\psi_s$ is continuous
at $b=b_0$ produces the same
condition, given that $X'_\pm (b_0)=V(b_0)/2$. Second derivatives diverge as $b\rightarrow b_0$, as can be understood from the fact that this is a classical turning point and the monochromatic solutions are $e^{{\rm i} P(b,m,\phi)/\mathfrak{h}}$ where $P$ is the classical Hamilton--Jacobi function. We will require as a matching condition that these divergences have the same form as we approach $b_0$ from above or below. This leads to
\begin{equation}\label{cond2}
A_+- A_-={\rm i} B
\end{equation}
from a
term that diverges as $b\rightarrow b_0$. Hence
\begin{equation}
\frac{A_\pm}{B}=\frac{1\pm {\rm i}}{2}\,.
\end{equation}
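The linear system formed by the conditions (\ref{cond1}) and (\ref{cond2}) can be solved mechanically; a sympy sketch:

```python
import sympy as sp

Ap, Am, B = sp.symbols('A_p A_m B')
sol = sp.solve([sp.Eq(Ap + Am, B), sp.Eq(Ap - Am, sp.I*B)], [Ap, Am])
print(sp.simplify(sol[Ap]/B), sp.simplify(sol[Am]/B))  # (1+i)/2 and (1-i)/2
# note |A_+| = |A_-| = |B|/sqrt(2): incident and reflected amplitudes are equal
```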
For wave packets, the same conditions arise from imposing continuity
of the wave function and requiring that divergent first derivatives match, as we shall see below.
Specifically, in order to match the radiation dominated phase for the partial waves we should choose
\begin{eqnarray}
A_-&=&e^{-{\rm i}\chi}\,,\nonumber\\
B&=&\sqrt{2}e^{{\rm i}(-\chi +\pi/4)}\,,\nonumber\\
A_+&=&e^{{\rm i}(-\chi +\pi/2)}\,.
\label{fixedcoefs}
\end{eqnarray}
The resulting $\psi_s$ is plotted in Fig.~\ref{Fig4}. Suppressing for the moment the $\bm \alpha$ label, it has the form
\begin{eqnarray}\label{psis3terms}
\psi_s(b) &=&[A_+ \psi_{s+}(b )+ A_- \psi_{s-}(b)]\Theta(b-b_0)+\nonumber\\
&+&B\psi_{s-}(b)\Theta(b_0-b)
\end{eqnarray}
with the coefficients given by Eq.~(\ref{fixedcoefs}).
\begin{figure}
\center
\epsfig{file=PsiSfinal.pdf,width=8.cm}
\caption{Imaginary part of the full wave function $\psi_{s}$ normalized so as to match the asymptotic radiation dominated expression, with
parameters $\mathfrak{h} =1$, $m=1$,
$\phi=10^6$. The incident (orange) and reflected (blue) waves, when superposed, match the evanescent wave (green) up to second derivatives in this plot.}
\label{Fig4}
\end{figure}
\subsection{Wave packets}
To construct coherent/squeezed wave packets we must now evaluate Eq.~(\ref{gensolM}) with a
factorizable state,
\begin{equation}\label{Amp}
{\cal A}(\bm\alpha)=\prod_i {\cal A}_i(\alpha_i)=
\prod_i\frac{\exp{\left[-\frac{(\alpha_i-\alpha_{i0})^2}{4\sigma_{\alpha i}^2 }\right]}}
{(2\pi \sigma_{\alpha i} ^2)^{1/4}}\,.
\end{equation}
Given Eq.~(\ref{psis3terms}), this results in
\begin{eqnarray}\label{3pack}
\psi(b,\bm T)&=&[A_+ \psi_{+}(b,\bm T )+ A_- \psi_{-}(b,\bm T)]\Theta(b-b_0)+\nonumber\\
&+&B\psi_{-}(b,\bm T)\Theta(b_0-b)
\end{eqnarray}
with
\begin{equation}\label{pmpacks}
\psi_\pm (b,\bm T)=\int \mathrm{d}{\bm \alpha}\, {\cal A}( \bm \alpha)\, \exp{\left[-\frac{{\rm i} }{\mathfrak{h}} \bm\alpha \cdot \bm T \right]}\psi_{s\pm}(b; \bm \alpha)\,.
\end{equation}
These are the superposition of three wave packets: an incident one, coming from the radiation epoch; a reflected one, going into the $\Lambda$ epoch; and an evanescent packet in the classically forbidden region, significant around the ``time'' of the bounce.
We can now follow a saddle point approximation, as in Ref.~\cite{JoaoPaper}, appropriate for interpreting minisuperspace as a
dispersive medium, where the concept of group speed of a packet is crucial. Defining the
spatial phases $P_\pm $ from
\begin{equation}
\psi_{s\pm}(b,{\bm \alpha})={\cal N} \exp{\left[\frac{{\rm i} }{\mathfrak{h}} P_\pm (b,{\bm \alpha})\right]}
\end{equation}
so that
\begin{equation}\label{P}
P_\pm =\phi X_\pm=\phi \int^b_{b_0} \mathrm{d}\tilde b\,\frac{1}{2}\left(V(\tilde{b})\pm\sqrt{{V(\tilde{b})^2- 4 \frac{m}{\phi}}}\right)\,,
\end{equation}
we can approximate:
\begin{equation}\label{expansion}
P_\pm(b,\bm \alpha)\approx P_\pm (b;\bm \alpha_0)+\sum_i\frac{\partial P_\pm }{\partial \alpha_i}\biggr\rvert_{\bm \alpha_0}(\alpha_i-\alpha_{i0})\,.
\end{equation}
These $P_\pm$ again correspond to the two solutions for the classical Hamilton--Jacobi function of the model, as discussed before Eq.~(\ref{firstordersol}).
Then, for any factorizable amplitude, the wave functions (\ref{pmpacks}) simplify to
\begin{equation}
\psi_\pm (b,\bm T) \approx
e^{ \frac{i }{\mathfrak{h}} (P_\pm (b;\bm\alpha_0)-\bm \alpha_0\cdot \bm T)}
\prod_i \psi_{\pm i} (b,T_i)
\end{equation}
with
\begin{equation}\label{envelopes}
\psi_{\pm i}(b,T_i)=\int
\frac{\mathrm{d}\alpha_i}{\sqrt {2\pi\mathfrak{h} }}
\,{\cal A}_i(\alpha_i)\, e^ {-\frac{{\rm i} }{\mathfrak{h}} (\alpha_i-\alpha_{i0})\left(T_i- \frac{\partial P_\pm }{\partial \alpha_i}
\big\rvert_{\bm \alpha_0}
\right)}.
\end{equation}
The first factor is the monochromatic wave centered on $\bm\alpha_0$ derived in Section~\ref{monoch}, with the
time phases $\bm \alpha_0\cdot \bm T$ included. The other factors, $\psi_{\pm i}(b,T_i)$, describe envelopes moving with equations of motion
\begin{equation}
T_i=\frac{\partial P_\pm(b,\bm\alpha )}{\partial \alpha_i}\biggr\rvert_{\bm \alpha_0}\,.
\end{equation}
In the classically allowed region, the motion of the envelopes (and so of their peaks) reproduces the classical equations of motion
for both branches, throughout the whole trajectory, as proved in~\cite{JoaoPaper}. The packets move along
outgoing waves whose group speed
can be set to one using the
linearizing variable
\begin{equation}
X_{\pm i}^{\rm eff}(b)=\frac{\partial P_\pm(b,\bm\alpha )}{\partial \alpha_i}\biggr\rvert_{\bm \alpha_0}\,,
\end{equation}
so that $T_i=X_{\pm i}^{\rm eff}$.
Inserting (\ref{Amp}) in (\ref{envelopes}) we find that the envelopes in our case are the Gaussians
\begin{equation}\label{Gauss}
\psi_{\pm i}(b,T_i)=\frac{1}{(2\pi\sigma^2_{Ti})^{1/4}} \exp\left[-\frac{ (X_{\pm i}^{\rm eff}(b) -T_i)^2}{4 \sigma_{Ti} ^2}\right]\,,
\end{equation}
with $\sigma_{T_i}=\mathfrak{h}/(2\sigma_{\alpha i})$ saturating the Heisenberg inequality as expected for squeezed/coherent states.
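The Gaussian envelope and the width relation $\sigma_{T_i}=\mathfrak{h}/(2\sigma_{\alpha i})$ follow from a standard Gaussian Fourier transform, which can be checked numerically; a sketch of the integral (\ref{envelopes}) with illustrative values (writing $u$ for the hypothetical sample value of $T_i-X^{\rm eff}_{\pm i}$):

```python
import numpy as np
from scipy.integrate import quad

hbar, sigma_a = 1.0, 0.5   # illustrative values
u = 0.7                    # hypothetical sample value of T - X_eff

# integrand of Eq. (envelopes) with the Gaussian amplitude (Amp);
# the odd (sine) part integrates to zero, so only the cosine survives
def integrand(da):
    A = np.exp(-da**2/(4*sigma_a**2))/(2*np.pi*sigma_a**2)**0.25
    return A*np.cos(da*u/hbar)/np.sqrt(2*np.pi*hbar)

val, _ = quad(integrand, -np.inf, np.inf)

sigma_T = hbar/(2*sigma_a)
expected = np.exp(-u**2/(4*sigma_T**2))/(2*np.pi*sigma_T**2)**0.25
print(val, expected)  # equal: a Gaussian of width sigma_T = hbar/(2 sigma_a)
```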
It is interesting to see that the condition (\ref{cond2}) obtained in Section~\ref{monoch} from matching divergences in the second derivative of the plane waves can be derived from the first derivative of the wave packets. Recall that
\begin{eqnarray}
P_\pm(b_0,\bm\alpha)&=&0\,,\nonumber\\
P'_\pm(b_0,\bm\alpha)&=&\phi \frac{V(b_0)}{2}=\sqrt{m\phi}\,,
\end{eqnarray}
to which we should add
\begin{eqnarray}
X_{\pm i}^{\rm eff}(b_0)&=&0\,,
\label{xefflimits}\\
\lim_{b\rightarrow b_0}\left(\sqrt{V(b)^2-4m/\phi}\,X_{\pm i}^{{\rm eff}' }(b)\right)&=&\mp \phi\frac{\partial(m/\phi)}{\partial\alpha_i}\,.\nonumber
\end{eqnarray}
Leaving the $A_\pm$ and $B$ undefined in Eq.~(\ref{3pack}) we then find that continuity of the wave packet at $b=b_0$ requires
\begin{equation}
A_++A_-=B\,,
\end{equation}
i.e., Eq.~(\ref{cond1}), whereas the divergent terms in the first derivative at $b_0$ agree on both sides if
\begin{equation}
A_+- A_-={\rm i} B \,,
\end{equation}
i.e., condition (\ref{cond2}).
\begin{figure}[htp]
\includegraphics[scale=0.8]{PsiT8.pdf}
\includegraphics[scale=0.8]{PsiT0.pdf}
\includegraphics[scale=0.8]{PsiT-8.pdf}
\caption{Snapshots of the wave function in the classically allowed region $b\ge b_0$ for a wave packet with $\sigma_{Tm}=4$
at times $T_m=8,0,-8$. Note that on-shell $T_m=-(\eta-\eta_0)$, where $\eta$ (and $\eta_0$) is conformal time (and conformal time at the bounce). The envelope picks the right portion of the $\psi_s$, $+$ or $-$, away from the bounce. Close to the bounce, however, the $+$ and $-$ waves interfere. }
\label{reflectpsi}
\end{figure}
\begin{figure}
\includegraphics[scale=.9]{ProbT.pdf}
\caption{A plot of $|\psi|^2$ for the same situation as in Fig.~\ref{reflectpsi} at $T_m=12,8,4,0$.
(For the particular clock $T_m$ -- but not for a generic time variable -- this function is symmetric under $T_m\rightarrow-T_m$,
so for clarity we have refrained from plotting the equivalent $T_m<0$ snapshots.)}
\label{reflectprob}
\end{figure}
\subsection{Ringing of the wave function at the bounce}\label{ring}
As already studied in detail in~\cite{JoaoPaper}, the peaks of these wave packets follow the classical limit throughout
the whole trajectory, including the bounce, assuming they remain peaked and do not interfere. They are also bona fide
WKB states asymptotically, in the sense that they have a peaked broad envelope multiplying a fast oscillating phase
(the minority clock in general will not produce a coherent packet, but we leave that matter out of the discussion here).
The problem is that none of this applies at the bounce, where the incident and the reflected waves interfere, leading to ``ringing'' in the
probability. This is an example of how the superposition of two semiclassical states is itself not a semiclassical state.
To illustrate this point at its simplest, let us set $k=0$ and focus on the factor with the radiation time $T_m$, so that
\begin{equation}\label{Xeffm}
X^{{\rm eff}}_{\pm m}=\mp\int_{b_0}^b\frac{\mathrm{d}\tilde{b}}{\sqrt{\tilde{b}^4-b_0^4}}+{\rm const.}
\end{equation}
where ${\bm \alpha}_0=(\phi_0,m_0)$, and we used that for $k=0$, Eq.~(\ref{b0}) leads to $b_0^4=4m_0/\phi_0$.
A term constant in $b$, resulting from the dependence on $m$ in the limits of integration in (\ref{P}), can be neglected. We will evaluate our wavefunctions numerically, but we note that
in this case the integral can be expressed in terms of elliptic integrals of the first kind $F$,
\begin{equation}
X^{{\rm eff}}_{\pm m}=\mp\frac{{\rm i}}{b_0} F(\arcsin(b/b_0);-1)+{\rm const.}
\end{equation}
with another constant ($b$-independent) piece (which includes the constant imaginary part of the $F$ function, ensuring that the resulting $X^{{\rm eff}}_{\pm m}$ is real).
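The reduction to an elliptic integral can be checked numerically: the substitution $\tilde b=b_0/u$ maps the integral in Eq.~(\ref{Xeffm}) to the standard form $\int\mathrm{d}u/\sqrt{1-u^4}=F(\arcsin u;-1)$, which is manifestly real for $b_0/b\le u\le 1$. A sketch with mpmath (parameter values illustrative):

```python
import mpmath as mp

b0, b = 1.0, 2.0   # illustrative values

# direct integral in Eq. (Xeffm); the 1/sqrt singularity at b0 is integrable
direct = mp.quad(lambda t: 1/mp.sqrt(t**4 - b0**4), [b0, b])

# after the substitution t = b0/u
reduced = mp.quad(lambda v: 1/mp.sqrt(1 - v**4), [b0/b, 1])/b0

# the same quantity as a difference of incomplete elliptic integrals F(phi; -1)
elliptic = (mp.ellipf(mp.pi/2, -1) - mp.ellipf(mp.asin(b0/b), -1))/b0

print(direct, reduced, elliptic)  # all three agree
```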
For illustration purposes,
we then select a wave packet with $\sigma_{Tm}=4$ and follow it around the bounce
at $T_{m}=0$. Note that on-shell $T_m=-(\eta-\eta_0)$, where $\eta$ is conformal time (shifted by $\eta_0$ so that $T_m=0$ at the bounce),
so the conventional arrow of $T_m$ is reversed with respect to
that of $T_\phi$ or the thermodynamical arrow (see discussion in~\cite{JoaoPaper}). In Fig.~\ref{reflectpsi} we plot the wave function
away from the bounce on either side, and at the bounce. As we see, well away from the bounce,
the envelope picks the right portion of the $\psi_s$ as depicted in Fig.~\ref{Fig4}, $+$ or $-$ depending on whether $T$ is positive or negative.
Around the bounce $T=0$, however, the $+$ and $-$ waves clearly interfere (see middle plot).
As in standard reflections~\cite{interfreflex}, this interference could have implications for the probability, in the form of ``ringing''.
We illustrate this point with the traditional $|\psi|^2$, which contains the interference cross-term (but which, we stress, is not
a serious contender for a unitary definition of probability, as we will see in the next Section). If we were to compute
$|\psi|^2$ for the $\psi_+$ or $\psi_-$ in Fig.~\ref{Fig4} we would obtain a constant, in spite of the wave function oscillations.
Likewise, if we dress $\psi_+$ or $\psi_-$ with an envelope, these internal beatings will not appear in the separate $|\psi|^2$.
Close to the bounce, however, the interference between the $+$ and $-$ wave will appear as ringing in $|\psi|^2$ (see Fig.~\ref{reflectprob}) or any other measure displaying interference. A similar construction could be made with the packets locked on to the time $T_\phi$. Thus one clock hands over to the other.
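The ringing can be illustrated with the monochromatic waves alone, using schematic phases for the two branches (the true $X_\pm$ integrals are replaced here by their $k=0$ asymptotic forms, and all parameter values are arbitrary):

```python
import numpy as np

b = np.linspace(1.0, 3.0, 2000)
hbar, phi, m = 1.0, 30.0, 1.0   # illustrative values

# schematic asymptotic phases (k = 0): X_+ ~ b^3/3, X_- ~ -(m/phi)/b
psi_p = np.exp(1j*phi*(b**3/3)/hbar)
psi_m = np.exp(1j*m*(-1/b)/hbar)

Ap, Am = (1 + 1j)/2, (1 - 1j)/2  # ratios A_+/B, A_-/B from the matching
psi = Ap*psi_p + Am*psi_m

# each branch alone has |psi|^2 = const; the superposition "rings"
print(np.ptp(np.abs(psi_p)**2))  # ~0
print(np.ptp(np.abs(psi)**2))    # order-one oscillation from the cross-term
```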
We close this Section with two words of caution. First, this ringing is probably as observable as the one associated with the mesoscopic stationary waves
described in~\cite{randono}. Indeed the two are formally related. The Chern--Simons wave function described in~\cite{randono} translates (by Fourier transform~\cite{CSHHV}) into a Hartle--Hawking stationary wave function~\cite{HH}, which is nothing but the superposition of two Vilenkin traveling waves~\cite{vil-PRD} moving in opposite directions. The reflection studied here is precisely one such superposition in a different context and in $b$ space. The scale of the effect, however, is the same.
Secondly, we need to make sure that the probability is indeed associated with a function (like $|\psi|^2$) containing an interference cross-term, and work out the correct integration measure to obtain a unitary theory. At least with one definition of inner product, in the semiclassical approximation the ringing
disappears, as we now show.
\section{Inner product and probability measure}\label{inner}
Usually, the inner product and probability measure are inferred from the requirement of unitarity, i.e., time-independence of the inner
product, which in turn follows from a conserved current (see, e.g.,~\cite{vil-rev,vil-PRD}). As explained in Ref.~\cite{JoaoPaper}, in mono-fluid situations
this leaves us with three equivalent definitions, which we first review.
\subsection{Monofluids}\label{innermono}
For a single fluid with equation of state parameter $w$, the first-order version of the Hamiltonian constraint leads to a dynamical equation that can be written as
\begin{equation}\label{outeq}
\left((b^2+k)^{\frac{2}{1+3w}}\frac{\partial}{\partial b}+
\frac{\partial}{\partial T}
\right)\psi =: \left(\frac{\partial}{\partial X}+
\frac{\partial}{\partial T}
\right)\psi =0
\end{equation}
with $T$ dependent on $w$ and
\begin{equation}
X=\int\frac{{\rm d}b}{(b^2+k)^{\frac{2}{1+3w}}}.
\end{equation}
From such an equation we can infer a current $j^X=j^T=|\psi|^2$ satisfying the conservation law
\begin{equation}
\partial_X j^X +\partial_T j^T=0\,.
\end{equation}
The inner product can then be defined as
\begin{equation}\label{innX}
\langle\psi_1|\psi_2 \rangle=\int {\rm d}X \psi_1^\star(b(X),T)\psi_2(b(X),T)
\end{equation}
with unitarity enforced by current conservation:
\begin{equation}
\frac{\partial}{\partial T}\langle\psi_1|\psi_2 \rangle=-\int {\rm d}X \frac{\partial}{\partial X}(\psi_1^\star(b,T)\psi_2(b,T))=0\,.
\end{equation}
For this argument to be valid without the introduction of boundary conditions as in, e.g.,~\cite{GielenMenendez}, here and in the following we must assume that $X(b)$ takes values over the whole real line and is monotonic. This is true for many cases including the ones studied here, namely radiation and $\Lambda$ with $k=0$ (and also in the case of dust with $k=0$, studied in~\cite{brunobounce}).
We have then established that a useful integration measure for mono-fluids is
\begin{equation}
\mathrm{d}\mu(b)=\mathrm{d} X= \frac{\mathrm{d} b}{\left|(b^2+k)^{\frac{2}{1+3w}}\right|}.
\end{equation}
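As an illustration (a sketch, not part of the original derivation), these closed forms can be checked numerically for the two fluids used later, radiation ($w=1/3$) and $\Lambda$ ($w=-1$), with $k=0$: the exponent $2/(1+3w)$ equals $1$ and $-1$ respectively, giving $X=-1/b$ and $X=b^3/3$ up to integration constants. The test point below is arbitrary.

```python
import math

# Closed forms of X(b) for k = 0, dropping integration constants:
#   radiation (w = 1/3): exponent 2/(1+3w) = 1  -> X(b) = -1/b
#   Lambda    (w = -1) : exponent 2/(1+3w) = -1 -> X(b) = b^3/3
def X_rad(b):
    return -1.0 / b

def X_lam(b):
    return b**3 / 3.0

def measure(b, w, k=0.0):
    # dX/db = (b^2 + k)^(-2/(1+3w)), the mono-fluid measure factor
    return (b**2 + k) ** (-2.0 / (1.0 + 3.0 * w))

def deriv(f, b, h=1e-6):
    # central finite difference
    return (f(b + h) - f(b - h)) / (2.0 * h)

b = 1.7  # arbitrary test point
err_rad = abs(deriv(X_rad, b) - measure(b, 1.0 / 3.0))
err_lam = abs(deriv(X_lam, b) - measure(b, -1.0))
```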
The normalizability condition $|\langle\psi |\psi \rangle|=1$ supports using this measure to identify the probability.
Given the particular form of the general solution for mono-fluids,
\begin{equation}\label{wavepacketmono}
\psi (b,T)=\int \frac{\mathrm{d}\alpha}{\sqrt {2\pi\mathfrak{h} }} {\cal A}(\alpha) \exp{\left[\frac{{\rm i} }{\mathfrak{h}}\alpha (X(b) - T) \right]}\,,
\end{equation}
we can write (\ref{innX}) in the equivalent forms
\begin{eqnarray}
\langle\psi_1|\psi_2 \rangle&=&\int \mathrm{d} T \;\psi_1^\star(b,T)\psi_2(b,T)\,,\label{innT}\\
\langle\psi_1|\psi_2 \rangle &=&\int \mathrm{d}\alpha \; {\cal A}_1^\star (\alpha) {\cal A}_2(\alpha)\, .\label{innalpha}
\end{eqnarray}
\subsection{Multifluids with no bounce}\label{innermulti}
Unfortunately, not all of this construction generalizes to the transition regions of multi-fluids,
where an ``$X$'' variable can be defined, but in general depends on $\alpha$ as well as $b$ (even putting aside that
there may be multi-branch expressions if there is a bounce, a matter which we ignore at first).
We {\it may} propose that the inner product in a general multi-fluid setting be defined by the generalization of
(\ref{innalpha}),
\begin{equation}\label{innalpha1}
\langle\psi_1|\psi_2 \rangle=\int \mathrm{d}{\bm \alpha} \; {\cal A}_1^\star ({\bm \alpha}) {\cal A}_2({\bm \alpha})
\end{equation}
which,
by construction, is time-independent, and so unitarity is preserved. However, since $\psi_s$ in (\ref{gensolM})
is not a plane wave in some $X(b)$, its
expressions in terms of integrals in $b$ and $\bm T$ will not generally take the forms (\ref{innX}) and (\ref{innT}).
For example
\begin{eqnarray}
\langle\psi_1|\psi_2 \rangle&=&\int \mathrm{d}{\bm T}\, \mathrm{d}{\bm T' }\, \psi_1^\star(b,\bm T)\psi_2(b,\bm T')K(b,\bm T-\bm T')\nonumber
\end{eqnarray}
with
\begin{eqnarray}
K(b,\bm T-\bm T')
&=&\int \mathrm{d}\bm \alpha \frac{e^{-\frac{{\rm i}}{\mathfrak{h}}\bm\alpha\cdot(\bm T-\bm T')}}{(2\pi\mathfrak{h})^{2D}|\psi_s(b,\bm\alpha)|^2}\nonumber
\end{eqnarray}
so we recover Eq.~(\ref{innT}) iff $\psi_s$ is a pure phase\footnote{As we saw in Eq.~(\ref{3pack}), in the case of a bounce $\psi_s$ must be chosen to be a superposition of the solutions $\psi_{s+}$ and $\psi_{s-}$ in the classically allowed region, so this condition is not met.}.
Even if $\psi_s$ is a pure phase, we would not be able to recover a form like Eq.~(\ref{innX}) which would
require $\psi_s$ to be a plane wave in some $X$ only dependent on $b$. In general the kernel $K(b-b',\bm T)$ for the $X$ inner product
will not be diagonal, inducing an interesting new quantum effect\footnote{This would in principle interact with ``ringing'' in a case where incident
and reflected waves interfere.}.
\subsection{The semiclassical measure}\label{innersemi}
With the proviso that {\it this might erase important quantum information},
matters simplify within
the wave packet approximation (already used in Sec.~\ref{ring}).
Then, the calculation of the measure in terms of $b$ is straightforward. We call the measure thus inferred
the semiclassical measure, since it erases fully quantum effects, as we shall see.
Still ignoring the bounce (and so the double branch) setup,
we can regard minisuperspace for multi-fluids as a dispersive medium with the single
dispersion relation~\cite{JoaoPaper}
\begin{equation}
\bm \alpha \cdot \bm T-P(b,\bm \alpha)=0\,.
\end{equation}
If the
amplitude ${\cal A}({\bm \alpha})$ is factorizable and sufficiently peaked around ${\bm \alpha}_0$ we
can Taylor expand $P$ around $\bm\alpha_0$ to find
\begin{equation}\label{approxwavepack}
\psi\approx
e^{ \frac{{\rm i}}{\mathfrak{h}}(P(b;\bm\alpha_0)-\bm \alpha_0\cdot \bm T)}
\prod_i \psi_i(b,T_i)
\end{equation}
with (cf.~Eq.~(\ref{envelopes}))
\begin{eqnarray}
\psi_i(b,T_i)&=&\int \mathrm{d}\alpha_i\,{\cal A}(\alpha_i) \frac{e^ {-\frac{{\rm i}}{\mathfrak{h}} (\alpha_i-\alpha_{i0})(T_i-X^{\rm eff}_i
)}}{\sqrt{2\pi\mathfrak{h}}}\,,\\
X^{\rm eff}_i&=&\frac{\partial P}{\partial \alpha_i}{\Big|}_{\alpha_{i0}}\,.
\end{eqnarray}
Then, for
the space of all the functions with an ${\cal A}(\bm\alpha)$ factorized as $ {\cal A}(\bm\alpha)=\prod_{i=1}^D {\cal A}_i(\alpha_i)$
and peaked around the {\it same} ${\bm\alpha}_0$,
the definition (\ref{innalpha1}) simplifies to
\begin{equation}
\langle\psi_1|\psi_2 \rangle= \prod_{i=1}^{D}\int \mathrm{d}{\alpha_i} \; {\cal A}_{i1}^\star (\alpha_i) {\cal A}_{i2}(\alpha_i)
\end{equation}
and is equivalent to\footnote{The amplitude functions in this space, we stress, are not necessarily Gaussian and, if Gaussian, do not necessarily have to have the same variance, but they must all peak around the same $\bm\alpha_0$ for the argument to follow through.}
\begin{equation}
\langle\psi_1|\psi_2 \rangle=\prod_{i=1}^{D} \int \mathrm{d}{X^{{\rm eff}}_i} \psi _{i1}^\star (b,T_i) \psi_{ i2}(b,T_i)
\label{differentinnerprod}
\end{equation}
with $\mathrm{d} X^{\rm eff}_i=(\mathrm{d} X^{\rm eff}_i/\mathrm{d} b)\mathrm{d} b$.
Hence, in this approximation, in the presence of multiple times the probability factorizes:
\begin{equation}
{\cal P}(b,\bm T)=\prod_{i=1}^{D} {\cal P}_i(b,T_i)\,,
\end{equation}
and each factor is normalized with respect to the measure
\begin{equation}
\mathrm{d}\mu_i(b)=\mathrm{d} X^{{\rm eff}}_i
\end{equation}
which we identify as the semiclassical probability measure. This normalization implies that each ${\cal P}_i(b,T_i)$ can itself be seen as a probability distribution for $b$ at a particular value of $T_i$, with unspecified values for the other times.
\subsection{The case of a bounce}\label{innerbounce}
In our case $D=2$, so the wave function is the product of two independent factors, one for $m$ and one for $\phi$ (and their respective clocks).
The fact that there is a bounce in $b$ adds on an
extra complication. Indeed, each factor is the superposition of three terms:
the incident ($-$) wave, the reflected ($+$) wave, and the evanescent wave. A crucial novelty is that
$X^{{\rm eff}}_{i-}\in(-\infty,X_{i0 })$ and $X^{{\rm eff}}_{i+}\in(X_{i0},\infty)$, where $X_{i0}=X_{i-}(b_0)=X_{i+}(b_0)$. For example, $X_{i0}=0$ in the example $i=m$
used in the previous Section, cf.~Eq.~(\ref{Xeffm}). Therefore, when performing the manipulations leading to (\ref{differentinnerprod}), we find for the
cross term:
\begin{equation}
\int \mathrm{d}\alpha_i\; e^{{\rm i}\frac{\alpha_i}{\mathfrak{h}} (X^{\rm eff}_{i+}-X^{\rm eff '}_{i-})}=0
\end{equation}
except at the measure-zero point $b=b_0$, killing the cross term.
The requirement that $X^{\rm eff}_i$ covers the real line is satisfied, but with the joint domains of $X^{{\rm eff}}_{i+}$ and $X^{{\rm eff}}_{i-}$ only, and without cross terms. Therefore, for this inner product and in this approximation,
\begin{eqnarray}
\langle\psi_1|\psi_2 \rangle&=&\prod_{i=1}^{D} \Big( \int \mathrm{d}{X^{{\rm eff}}_{i+}} \psi _{i+1}^\star (b,T_i) \psi_{i+2}(b,T_i) \nonumber\\
&&+\int \mathrm{d}{X^{{\rm eff}}_{i-}} \psi _{i-1}^\star (b,T_i) \psi_{i-2}(b,T_i) \Big)
\label{approxnewinner}
\end{eqnarray}
and the interference between incident and reflected waves disappears. Moreover, the norm of a state only depends on the wave function in the classically allowed region. Calling this measure semiclassical therefore seems appropriate.
In conclusion, for $b\ge b_0$ the probability in terms of $b$ has the
form
\begin{equation}
\label{probxeff}
{\cal P}_i(b;T_i)=|\psi_{i+}|^2\left| \frac{\mathrm{d} X^{{\rm eff}}_{i+}}{\mathrm{d} b}\right|+
|\psi_{i-}|^2\left| \frac{\mathrm{d} X^{{\rm eff}}_{i-}}{\mathrm{d} b}\right|\,.
\end{equation}
For our model with radiation and $\Lambda$ and now assuming $k=0$ for simplicity, we have (cf.~Eq.~(\ref{xefflimits})) for the measure
factors:
\begin{equation}
\frac{\mathrm{d} X^{{\rm eff}}_{1\pm}}{\mathrm{d} b} = \frac{b^2}{2}\pm\frac{b^4-2m/\phi}{2\sqrt{b^4-b_0^4}}\,,\quad \frac{\mathrm{d} X^{{\rm eff}}_{2\pm}}{\mathrm{d} b} = \frac{\mp 1}{\sqrt{b^4-b_0^4}}\,.
\end{equation}
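As a sanity check on these expressions (an illustrative sketch with arbitrary parameter values, not taken from the model), one can verify numerically that well away from the bounce $\mathrm{d} X^{{\rm eff}}_{2\pm}/\mathrm{d} b\to 0$ and $\mathrm{d} X^{{\rm eff}}_{1-}/\mathrm{d} b\to 0$, while $\mathrm{d} X^{{\rm eff}}_{1+}/\mathrm{d} b\to b^2$, so only the $+$ branch of the first clock retains a large measure factor at large $b$:

```python
import math

def dX1(b, m, phi, b0, sign):
    # d X^eff_{1±}/db, cf. the first expression above
    return b**2 / 2.0 + sign * (b**4 - 2.0 * m / phi) / (2.0 * math.sqrt(b**4 - b0**4))

def dX2(b, b0, sign):
    # d X^eff_{2±}/db = -(±1)/sqrt(b^4 - b0^4), cf. the second expression above
    return -sign / math.sqrt(b**4 - b0**4)

m, phi, b0 = 1.0, 0.5, 1.0   # illustrative values only
b = 200.0                    # far from the bounce
ratio_plus = dX1(b, m, phi, b0, +1) / b**2   # tends to 1
tail_minus = dX1(b, m, phi, b0, -1)          # tends to 0
tail_2 = abs(dX2(b, b0, +1))                 # tends to 0
```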
In this semiclassical approximation, one can define an explicitly unitary notion of time evolution, focusing on one of the times $T_i$ and therefore on only one of the factors in (\ref{approxnewinner}). From the form of the inner product it is clear that a self-adjoint ``momentum'' operator is given by $-{\rm i}\mathfrak{h}\frac{\partial}{\partial X^{{\rm eff}}_{i\pm}}= -{\rm i}\mathfrak{h}\left( \frac{\mathrm{d} X^{{\rm eff}}_{i\pm}}{\mathrm{d} b}\right)^{-1}\frac{\partial}{\partial b}$, where in the first definition we think of $X^{{\rm eff}}_{i\pm}$ as a single variable going over the whole real line and in the second expression the sign depends on whether the operator acts on $\psi_{i+}$ or $\psi_{i-}$.
Moreover, the waves $\psi_{i\pm}$ are constructed to satisfy
\begin{equation}
{\rm i}\mathfrak{h}\frac{\partial}{\partial T_i}\psi_{i\pm} = -{\rm i}\mathfrak{h}\frac{\partial}{\partial X^{{\rm eff}}_{i\pm}}\psi_{i\pm}\,,
\end{equation}
see Eq.~(\ref{envelopes}) and the discussion below. Hence they satisfy a time evolution equation with a self-adjoint operator on the right-hand side, which is all that is needed.
\section{Towards phenomenology}\label{phenom}
One may rightly worry that our semiclassical inner product and other approximations have removed too much of the quantum behavior of the full theory.
For any state the probability to be in the classically forbidden region would always be exactly zero. The phenomenon of ``ringing'' is erased.
We need to go beyond the semiclassical measure and peaked wave-packet approximation to see these phenomena. And yet, even within these approximations we can infer some interesting phenomenology, which probably will survive the transition to a more realistic model~\cite{brunobounce}
involving pressure-less matter (rather than radiation) and $\Lambda$. We also refer to~\cite{brunobounce} for an investigation of effects
revealed within the semiclassical approximation closer to the bounce than considered here.
\begin{figure}
\includegraphics[scale=.9]{ProbTmeasure.pdf}
\caption{The probability with the semiclassical measure, for the same situation as in Fig.~\ref{reflectpsi}, at the various times $T_m=16,12,10,8,0$. We have verified explicitly that this probability density, unlike the function plotted in Fig.~\ref{reflectprob}, always integrates to unity.}
\label{reflectprobmeasure}
\end{figure}
In Fig.~\ref{reflectprobmeasure} we have replotted Fig.~\ref{reflectprob} using the semiclassical measure
(\ref{probxeff}) and Gaussian packets (\ref{Gauss}). Hence,
for the wave function factor associated with $m$ and $T_m$ we have
\begin{eqnarray}
{\cal P}_2(b;T_m)&=&\frac{e^{-\frac{ (X_{+ 2}^{\rm eff} -T_m)^2}{2 \sigma_{T2} ^2}}
+e^{-\frac{ (X_{- 2}^{\rm eff} -T_m)^2}{2 \sigma_{T2} ^2}}
}{\sqrt{2\pi\sigma^2_{T2}}\sqrt{b^4-b_0^4}}
\end{eqnarray}
without an interference term. At times well
away from the bounce, the measure factor goes like $1/b^2$, so for a sufficiently
peaked wave packet it factors out. However, for times near the bounce the measure factor is significant. It induces a soft divergence
as $b\rightarrow b_0$:
\begin{equation}\label{Pb0}
{\cal P}_2(b\rightarrow b_0;T_m)=\frac{\exp{\left[-\frac{ T_m^2}{2 \sigma_{T2} ^2}\right]}}
{\sqrt{2\pi\sigma^2_{T2}}b_0^{3/2}\sqrt{b -b_0}}
\end{equation}
which becomes exponentially suppressed when $|T_m|\gg\sigma_{T2}$ (for example in Fig.~\ref{reflectprobmeasure} this is hardly visible
already for $T_m=16$), but is otherwise significant.
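The $1/\sqrt{b-b_0}$ behavior in Eq.~(\ref{Pb0}) follows from the factorization $b^4-b_0^4=(b-b_0)(b^3+b^2b_0+bb_0^2+b_0^3)\to 4b_0^3\,(b-b_0)$, so that $\sqrt{b^4-b_0^4}\approx 2b_0^{3/2}\sqrt{b-b_0}$ near the bounce. A quick numeric check of this expansion (with an illustrative value of $b_0$):

```python
import math

b0 = 1.3  # illustrative

def rel_err(eps):
    # relative error of sqrt(b^4 - b0^4) ≈ 2 b0^{3/2} sqrt(b - b0) at b = b0 + eps
    b = b0 + eps
    exact = math.sqrt(b**4 - b0**4)
    approx = 2.0 * b0**1.5 * math.sqrt(b - b0)
    return abs(exact / approx - 1.0)

r_coarse, r_fine = rel_err(1e-3), rel_err(1e-6)
```

The relative error shrinks linearly with the distance to the bounce, as expected from the next term in the expansion.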
As we see in Fig.~\ref{reflectprobmeasure}, the measure factor therefore leads to a double peaked distribution, when the main peak (due to the Gaussian) is present (in this picture at $T_m=10, 12, 16$). The measure factor also shifts the
main peak of the distribution towards $b_0$, since it now follows
\begin{equation}
T_m-X^{\rm eff}_{i\pm}=\mp \frac{2b^3\sigma_{Tm}^2}{b^4-b_0^4}\,,
\end{equation}
valid at times when one of the waves dominates (incident or reflected), with the right-hand side due entirely to the measure effect.
We recall~\cite{JoaoPaper} that the classical trajectory is reproduced by $T_m=X^{\rm eff}_{i\pm}$.
At some critical time close to the bounce, the ``main'' peak disappears altogether (see $T_m=8$ in Fig.~\ref{reflectprobmeasure}),
the distribution retaining only a peak at $b=b_0$. This peak becomes sharper and sharper as $|T_m|\rightarrow 0$
(so the average value of $b$ will eventually be larger than the classical trajectory, although the peak of the distribution will now be below
the classically expected value, and stuck at $b_0$). A detailed study of how all these effects interact in a more concrete setting
is discussed elsewhere~\cite{brunobounce}, but all of this points to interesting phenomenology near the $b$-bounce at $b_0$.
The strength of the effects, and for how long they will be felt, depends on $\sigma_T$ for whichever clock is being used, which in turn depends
on the sharpness of its conjugate ``constant''. The sharper the progenitor constant, the larger the $\sigma_T$, and so the stronger the effect around the $b$-bounce.
How this fits in with other constraints pertaining to the life of the Universe well away from the $b$-bounce has to be taken into consideration. See, e.g., \cite{brunobounce} for a realistic model for which an examination of these details is more meaningful. We note that in real life it is the dominant clock for pressure-less matter (rather than for radiation) that is relevant. This could be the same as the dominant clock for radiation (for example, if both are derived from a deconstantization of Newton's $G$; see~\cite{pad1,twotimes}) or not.
\section{Conclusions}\label{concs}
In this paper we laid down the foundation work for studying the quantum effects of the bounce in $b$ which our Universe has recently experienced.
We investigated a toy model designed to be simple whilst testing the main issues of a transition from deceleration to acceleration:
a model with only radiation and $\Lambda$. The realistic case of a mixture of matter and $\Lambda$ is studied in~\cite{brunobounce}. Nonetheless we were able to unveil both promising and disappointing results.
Analogies with quantum reflection and ringing were found, but these will require going beyond the semiclassical approximation.
Specifically, the inner product issues presented in Section~\ref{inner} were tantalizing in that they point to new quantum effects,
namely in the non-local nature of probability, as highlighted in Section~\ref{innermulti}. However, as soon as the semiclassical approximation is
consistently applied to both solutions and inner product, even the usual interference of incident and reflected wave is erased (see
Section~\ref{innerbounce}).
Nonetheless, the semiclassical measure factor has a strong effect on the probabilities near the bounce, as was shown in Sections~\ref{innerbounce} and \ref{phenom}. It introduces a double peaked distribution for part of the trajectory\footnote{Strictly speaking the divergence at $b_0$ is always present, but at times well away from the bounce this is
negligible; cf. Eq.~(\ref{Pb0}).}. This eventually becomes single peaked, with the average $b$ shifting significantly from the classical trajectory.
The period over
which this could be potentially felt depends on the width of the clock, $\sigma_T$. This is not a priori fixed, since the concept of squeezing is not
well defined in a ``unimodular'' setting, as pointed out in~\cite{JoaoPaper}. Indeed, any deconstantized constant can be seen as the
constant momentum of an abstract free particle moving with uniform ``speed'' in a ``dimension'' which we identify with a time variable.
It is well known that, unlike for a harmonic oscillator or electromagnetic radiation~\cite{knight}, coherent states for a free particle lack a natural scale with which to define dimensionless quadratures and so the squeezing parameter~\cite{freecoh}. Hence deconstantized constants share this problem with the free particle\footnote{This is also found in the quantum treatment of the parity odd component of torsion appearing in first order theories~\cite{quantumtorsion}, or in any other quantum treatment of theories with trivial classical dynamics.}. Thus, an uncertainty in $T$ and $b$ of the order
of a few percent, felt over a significant redshift range around the bounce is a distinct possibility.
It is tempting to relate these findings to the so-called ``Hubble tension'' (see, e.g., Ref.~\cite{HubbleTension} and references therein), as is done in~\cite{brunobounce}.
It should be stressed that due to Heisenberg's uncertainty principle in action in this setting (involving constants and conjugate times),
if we define sharper clocks (so that the fluctuations studied herein are not observable), then it might be their conjugate constants that bear observable uncertainties.
This would invalidate the approximations used in this paper (namely those leading to wave packets and the semiclassical measure). Most
crucially,
$b_0$, the point of reflection, would not be sharply defined for such states, different partial waves reflecting at different ``walls'' and then
interfering.
Such a quantum state for our current Universe should not be so easily dismissed. It might be an excellent example of
cosmological quantum reflection.
We close with two comments. In spite of its ``toy'' nature, our paper does make a point of principle:
quantum cosmology could be here and now, rather than something swept under the carpet of the ``Planck epoch''.
This is not entirely new (see, e.g., Ref.~\cite{QCnow}), but it would be good to see such speculations get out of the toy model
doldrums. Obviously, important questions of interpretation would then emerge~\cite{Jonathan,Jonathan1}.
Finally we note that something similar to the bounce studied here happens in a reflection in the reverse direction at the end of inflation.
One may wonder about the interconnection between any effects studied here and re/pre-heating.
{\it Acknowledgments.} We would like to thank Bruno Alexandre, Jonathan Halliwell, Alisha Mariott-Best and Tony Padilla for discussions related to this paper. The work of SG was funded by the Royal Society through
a University Research Fellowship (UF160622) and a Research Grant (RGF$\backslash$R1$\backslash$180030). The work of JM was supported by the STFC Consolidated Grant ST/L00044X/1.
\section{Introduction}
It was observed in the well-known work of Refs.\cite{gro,pol} that QCD is an asymptotically free theory if the number of quark flavors $N_f$ is smaller than a certain critical value.
When $N_f \leq 16$ the one-loop $\beta$-function is negative and the coupling constant diminishes as the energy is increased. Above this number the $\beta$-function becomes positive
indicating the increase of the coupling constant with the momentum. At two-loop order the $\beta$-function receives a contribution with a different signal as observed by Caswell
\cite{cas}, and although at high momentum this contribution is perturbatively small for a small number of flavors, its effect is not trivial if this number is increased, leading to
a zero of the $\beta$-function, and to the so-called Banks-Zaks (BZ) fixed point \cite{BZ}. The importance of a non-trivial perturbative fixed point is not only related to the interest on
the different QCD phases, but this fixed point may lead to a conformal window with possible interesting consequences for beyond standard model physics \cite{san}.
The $\beta$-function above two loops is scheme dependent, and the existence of the Banks-Zaks fixed point has been tested perturbatively
for QCD up to four loops and in different schemes \cite{rit}. Therefore the fixed point location, as a function of $N_f$, has a small dependence on the
renormalization scheme, at the order of perturbation theory at which it was calculated, and, of course, the result can be considered reliable if
the next orders are indeed smaller. This motivated the interest in lattice simulations, not only for QCD, but also in different non-Abelian gauge
theories, and with different fermionic representations in order to determine the existence (or not) of this fixed point (see, for instance, \cite{app,pal,has} and references therein).
At four loops, as a function of $N_f$ and in the $\overline{MS}$ scheme, the QCD $\beta$-function for quarks in the fundamental representation is the following \cite{Beta4loop}:
\begin{equation}
\beta(a_s)=-b_0 a_s^2-b_1 a_s^3-b_2 a_s^4- b_3 a_s^5 + {\cal{O}}(a_s^6) \, ,
\label{betaper}
\end{equation}
where $a_s=\alpha_s/4\pi \equiv (g^2/4\pi)/4\pi$ and for SU(3)
\begin{eqnarray}
b_0 & \approx & 11-0.66667 N_f \, ,\\
b_1 & \approx & 102 - 12.6667 N_f \, , \\
b_2 & \approx & 1428.50 - 279.611 N_f + 6.01852 N_f^2 \, , \\
b_3 & \approx & 29243.0 - 6946.30 N_f + 405.089 N_f^2 \nonumber \\
& + & 1.49931 N_f^3 \, .
\label{eqs:Coefficients}
\end{eqnarray}
A zero of this $\beta$ function already appears if $N_f\geq 8$.
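This can be checked numerically. The sketch below (illustrative, not from the original) searches for a zero of the reduced polynomial $b_0+b_1 a_s+b_2 a_s^2+b_3 a_s^3$ by bisection, using the four-loop coefficients above; for $N_f=8$ it finds a zero near $a_s^*\approx 0.12$, i.e. $\alpha_s^*\approx 1.5$.

```python
import math

def coeffs(nf):
    # Four-loop MS-bar coefficients for SU(3), fundamental quarks (quoted above)
    b0 = 11 - 0.66667 * nf
    b1 = 102 - 12.6667 * nf
    b2 = 1428.50 - 279.611 * nf + 6.01852 * nf**2
    b3 = 29243.0 - 6946.30 * nf + 405.089 * nf**2 + 1.49931 * nf**3
    return b0, b1, b2, b3

def reduced(a, nf):
    # beta(a)/(-a^2); a positive zero is a nontrivial (Banks-Zaks) fixed point
    b0, b1, b2, b3 = coeffs(nf)
    return b0 + b1 * a + b2 * a**2 + b3 * a**3

def fixed_point(nf, lo=1e-4, hi=0.5):
    # bisection, assuming a single sign change in [lo, hi]
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if reduced(lo, nf) * reduced(mid, nf) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_star = fixed_point(8)              # a_s = alpha_s / (4 pi)
alpha_star = 4.0 * math.pi * a_star  # ~ 1.5 for N_f = 8
```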
Parallel to the Banks-Zaks scenario there are other discussions about a possible infrared (IR) freezing (or IR fixed point) of the QCD coupling constant \cite{prosp,dok}.
The concept of an IR finite QCD effective charge even for a small number of quarks was also extensively discussed by Grunberg \cite{gru}, who, in particular, presented an
interesting discussion of the relation of these effective charges to the conformal window originating in the Banks-Zaks expansion \cite{gru2}. These effective charges
naturally have a non-perturbative contribution that eliminates the Landau singularity for small $N_f$ \cite{gru2}. Another discussion following similar ideas can also be found in
Ref.\cite{brod}. Other analyses of the transition region to these non-perturbative fixed points are presented in Ref.\cite{GiesAlkofer}.
There is one particular effective charge that generates a fixed point associated with the existence of a dynamically generated gluon mass \cite{nat3}.
As shall be discussed ahead this charge is gauge invariant, has been obtained solving Schwinger-Dyson equations (SDE) for the QCD propagators and is consistent with lattice
simulations, but the main point that we shall discuss in this work is a comparison between these different fixed points, i.e. the BZ one and the one that appears when
gluons acquire a dynamically generated mass, which will be denoted DGM. Some of the questions that we will discuss here include: 1) Are the fixed points generated in these
different approaches numerically similar? 2) Are the anomalous dimensions associated with these fixed points varying in the same way and with similar values? 3) If these fixed points
are different for a given $N_f$, which one corresponds to a state of minimum energy?
The possible existence of a dynamically generated gluon mass was first discussed by Cornwall \cite{cor1}.
Since this seminal work there have been many others showing in detail that it is possible to write the SDE for the gluon propagator in a gauge invariant way \cite{pinch,papa3}, and from
it obtain a dynamical gluon mass \cite{s1,s2,s3,s4,s5,s6} and a gauge invariant IR finite frozen coupling \cite{g1,g2}. Lattice simulations are showing agreement with the SDE
\cite{la1,la2,la3,la4}. Even if the lattice simulations are performed in one specific gauge, it has been shown how the coupling constant obtained in the gauge invariant SDE approach
can be translated to a specific gauge, for instance the Landau gauge, and it was verified that the gap between the freezing values of the coupling constant obtained in different
gauges can be explained \cite{g1}. In Section II we will briefly discuss this effective charge and set the equations to show how it should vary with $N_f$.
This will allow us to compare the different $\beta$ functions, i.e. the BZ and the DGM ones. We will compute the different fixed points as a function of $N_f$. In Section III we will
check if these fixed points respect analyticity and compute their respective anomalous dimensions. In Section IV we provide some arguments about which scenario is the one leading
to the states of minimum of energy, or the one that should be chosen by Nature. In Section V we draw our conclusions. Within the approximations of this work, which rely basically
on extrapolations of the dynamical gluon masses at large $N_f$, we can compare the different $\beta$ functions (BZ and DGM) and their different fixed points, arguing that the state
of minimum energy in QCD, as a function of the coupling constant and up to the critical value $g^*$, are related to the dynamical gluon mass generation mechanism only above $N_f=9$.
\section{Fixed points and dynamical gluon masses}
Dynamical gauge boson masses appear in non-Abelian gauge theories as a consequence of the Schwinger mechanism, according to which if the gauge boson vacuum polarization
develops a pole at zero momentum transfer, the boson acquires a dynamical mass. The work of Ref.\cite{cor1} was the first one to obtain a gauge invariant SDE for the
gluon propagator, verifying the existence of the Schwinger mechanism in QCD. For this the introduction of the pinch technique was necessary, which, combined with
the background field method, leads to a particular truncation of the SDE obeying abelian Slavnov-Taylor identities and to a gauge invariant gluon self-energy (${\hat{\Delta}}(k^2)$),
allowing us to build a renormalization group invariant quantity ${\hat{d}}(k^2)=g^2 {\hat{\Delta}}(k^2)$, whose solution can be written as
\begin{equation}
{\hat{d}}^{-1}(k^2)= [k^2+m_g^2 (k^2)]\left\{ \beta_0 g^2 \ln \left(\frac{k^2+4m_g^2(k^2)}{\Lambda_{QCD}^2}\right)\right\} ,
\label{eqa1}
\end{equation}
where $g$ is the coupling constant, $\beta_0=(11N-2N_f)/(48\pi^2)$, $N=3$, $m_g(k^2)$ is the dynamical gluon mass
and $\Lambda_{QCD}$ is the characteristic QCD mass scale. The above equation can be
recognized as the (large) momentum-space equivalent of the inverse of the QCD Coulomb force, from which we can read off the effective coupling constant
\begin{equation}
g^2(k^2)=\frac{1}{\beta_0 \ln\Big[\frac{k^2+4 m_{g}^{2}(k^2)}{\Lambda_{QCD}^2}\Big]}=4\pi\alpha_s (k^2),
\label{eq:Coupling}
\end{equation}
which matches asymptotically with the perturbative coupling constant.
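For orientation (an illustrative evaluation with hypothetical parameter values inside the band discussed below, not the paper's fit), Eq.~(\ref{eq:Coupling}) with $N_f=3$, $m_g=500$ MeV and $\Lambda_{QCD}=300$ MeV freezes at $\alpha_s(0)\approx 0.6$, while giving $\alpha_s\approx 0.12$ at $k=100$ GeV:

```python
import math

def alpha_s(k2, mg, lam, nf=3, N=3):
    # Eq. (eq:Coupling) with a constant gluon mass; the running of m_g(k^2)
    # is neglected (it is immaterial both at k^2 = 0 and at k^2 >> m_g^2)
    beta0 = (11 * N - 2 * nf) / (48.0 * math.pi**2)
    g2 = 1.0 / (beta0 * math.log((k2 + 4.0 * mg**2) / lam**2))
    return g2 / (4.0 * math.pi)

mg, lam = 0.5, 0.3               # GeV; illustrative values only
frozen = alpha_s(0.0, mg, lam)   # IR freezing value, ~0.6
uv = alpha_s(100.0**2, mg, lam)  # ~0.12 at k = 100 GeV
```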
Several aspects of this mechanism are illustrated in Refs.\cite{s12} and \cite{cor3}, respectively for the cases of QCD in four and
three dimensions. A recent review about dynamical gluon mass generation can be found in Ref.\cite{aguida}, which contains early references about this mechanism. Perhaps the many
technical details of the pinch technique and background field method hampered the dissemination of these results, but there is unequivocal evidence from large volume lattice
simulations that the gluon acquires a dynamical mass \cite{la3,la4}, the results are fully consistent with the SDE for the gluon propagator \cite{sx}, and consistent with many
phenomenological calculations that depend to some extent on the IR QCD behavior \cite{nat6}.
The IR value of the dynamical gluon mass $m_g(k^2)$ is $m_g$ and it goes to zero at high momentum scales, with a running behavior roughly given by \cite{nat1}
\begin{equation}
m_g^2(k^2) \approx \frac{m_{g}^{4}}{k^2+m_{g}^{2}}.
\label{eq:mg}
\end{equation}
Without considering the running of the dynamical gluon mass with $k^2$, we have the following non-perturbative $\beta$-function \cite{cor2}
\begin{equation}
\beta_{DGM}=-\beta_0 g^3\Big(1-\frac{4 m_g^2}{\Lambda^2}e^{-\frac{1}{\beta_0 g^2}}\Big),
\label{eq:betacornwall}
\end{equation}
that will be denoted the DGM $\beta$ function, where $\Lambda=\Lambda_{QCD}$.
Note that a more detailed expression for the non-perturbative coupling constant and dynamical mass can be found in \cite{g2}; however,
for simplicity, we will continue to use Eq.(\ref{eq:Coupling}), which reflects the gross behavior of this QCD IR finite coupling. One expression for the $\beta$-function taking into
account the running gluon mass as described in Eq.(\ref{eq:mg}) is given by \cite{nat2}
\begin{equation}
\beta_{DGM}(k^2)=-\beta_0 g^3
\left(1-\frac{4 m_g^4 e^{-\frac{1}{g^2 \beta_0}}}{(m_g^2+k^2)\Lambda ^2}\left(1+\frac{k^2}{m_g^2+k^2}\right)\right),
\label{eq:betaMg_k2}
\end{equation}
although this running barely affects the fixed point location that can be observed in Eq.(\ref{eq:betacornwall}). At this point we recall that the fixed point occurs in the
$k\rightarrow 0$ limit. In this limit the fixed point of the non-perturbative $\beta$ function depends only on the $m_g/\Lambda$ ratio.
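Indeed, setting $\beta_{DGM}=0$ in Eq.~(\ref{eq:betacornwall}) for $g\neq 0$ gives $e^{-1/(\beta_0 g^{*2})}=\Lambda^2/(4m_g^2)$, i.e. $g^{*2}=1/[\beta_0 \ln(4m_g^2/\Lambda^2)]$, which depends only on the $m_g/\Lambda$ ratio (it requires $m_g/\Lambda>1/2$) and coincides with the $k^2\to 0$ limit of Eq.~(\ref{eq:Coupling}). A numeric check (with an illustrative ratio):

```python
import math

def beta_dgm(g, beta0, r):
    # Eq. (eq:betacornwall) with r = m_g / Lambda
    return -beta0 * g**3 * (1.0 - 4.0 * r**2 * math.exp(-1.0 / (beta0 * g**2)))

def g_star(beta0, r):
    # analytic zero for g != 0; real only if r > 1/2
    return math.sqrt(1.0 / (beta0 * math.log(4.0 * r**2)))

beta0 = 27.0 / (48.0 * math.pi**2)  # (11N - 2 N_f)/(48 pi^2) with N = 3, N_f = 3
r = 0.5 / 0.3                       # illustrative m_g / Lambda
gs = g_star(beta0, r)
residual = abs(beta_dgm(gs, beta0, r))
```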
It can be demonstrated that dynamical gluon mass generation implies the existence of a non-trivial fixed point \cite{nat3}, and
more importantly: Recently we verified that the QCD coupling of Eq.(\ref{eq:Coupling}) freezes in the infrared limit, when only three quarks are operative, at a relatively small
value \cite{nat4}, not too different from the coupling values that would come out of the zeros of the perturbative $\beta$ function given by Eq.(\ref{betaper}) at some
large $N_f$! This fact certainly justifies the comparison of the different mechanisms (BZ or DGM). Moreover, this $\beta$ function may have an important consequence for the stability
of the standard model \cite{nat2}. Finally, to know how Eq.(\ref{eq:betacornwall}) or Eq.(\ref{eq:betaMg_k2}) varies with $N_f$ we must know how $m_g (k^2)$ varies with this quantity.
It has been observed in QCD lattice simulations that the dynamical gluon mass increases when $N_f$ increases \cite{aya}. Similar results
were observed in the solution of the Schwinger-Dyson equation (SDE) for the gluon propagator including dynamical quarks \cite{agui3}. We can
quote typical $m_g(0)$ values of $373, \,\, 427, \,\, 470$ MeV respectively for $N_f = 0, \,\, 2, \,\, 4$ quarks \cite{aya}, while several low-energy phenomenological values,
obtained in the presence of two or three quarks, predict a somewhat larger value around $500$ MeV \cite{nat6}. Therefore we will conservatively assume that $373\leq m_g(0)\leq 500$
MeV for $N_f =0$. Note that it is not difficult to understand why $m_g$ should increase with $N_f$. When we increase the number of quarks we may expect that the strong force
diminishes (see, for instance, Eq.(\ref{eqa1})), because of the screening provided by extra quark loops. This force should be, at least asymptotically, proportional to the
product of the coupling constant times the gluon propagator. The coupling behaves as shown by Eq.(\ref{eq:Coupling}) and the propagator is roughly given by
$\Delta (k^2) \propto 1/(k^2 + m_g^2(k^2))$. The only way to decrease the force as $N_f$ is increased is with larger $m_g(0)$ values.
Therefore, using the lattice data \cite{aya}, we will assume the following different scaling laws to describe the dynamical gluon mass evolution as a function of $N_f$:
\begin{eqnarray}
m_g^{-1}(N_f) & = & m_{g_0}^{-1} \,\, (1-A_1 N_f)\label{eq:MgLinearPRD},\\
m_g^{-1}(N_f) & = & m_{g_0}^{-1} \,\, e^{-A_2 N_f}\label{eq:MgExpPRD},
\end{eqnarray}
where $m_{g_0}$ varies between $373$ and $500$ MeV and $A_1=0.05462$ and $A_2=0.05942$. With the extreme $m_{g_0}$ values we obtain a band
of possible $\beta$ function curves, reflecting the uncertainty in our knowledge about the dynamical gluon mass. Extrapolations like the ones of Eqs.\eqref{eq:MgLinearPRD} and
\eqref{eq:MgExpPRD} have been used in Refs.\cite{aya,cap} and are probably the best that we can do at the present status of lattice and SDE calculations.
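The two extrapolations can be compared directly; the short numerical sketch below evaluates Eqs.(\ref{eq:MgLinearPRD}) and (\ref{eq:MgExpPRD}) with the quoted $A_1$, $A_2$ and $m_{g_0}=440$ MeV (an illustrative choice), showing that both fits coincide at $N_f=0$ and grow monotonically with $N_f$.

```python
import math

A1, A2 = 0.05462, 0.05942    # fit parameters quoted in the text
MG0 = 0.440                  # illustrative m_{g_0} in GeV

def mg_linear(nf):
    """Eq.(MgLinearPRD): m_g^{-1} = m_{g_0}^{-1} (1 - A1 Nf)."""
    return MG0 / (1.0 - A1 * nf)

def mg_exp(nf):
    """Eq.(MgExpPRD): m_g^{-1} = m_{g_0}^{-1} exp(-A2 Nf)."""
    return MG0 * math.exp(A2 * nf)

for nf in range(0, 11, 2):
    rel = abs(mg_linear(nf) - mg_exp(nf)) / mg_linear(nf)
    print(f"Nf={nf:2d}  linear={mg_linear(nf):.3f} GeV  exp={mg_exp(nf):.3f} GeV  diff={100.0*rel:.1f}%")
```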
The different fits for the gluon mass evolution with $N_f$ are plotted in Fig.\eqref{fig:BnpNf_3}, where we can note that, with the
same $m_{g_0}$, there is no large difference between the linear and exponential fits for $N_f = 3$, but they start developing a small difference above $N_f=6$. The relative differences between the linear
and exponential fits for $N_f = 6, \, 8, \, 9, \, 10 $ are $1.24\%, \, 3\%,\, 3.3\%, \, 5\%$ respectively. We have also chosen an intermediate value of $m_{g_0}$
and computed the relative error with respect to the value $m_{g_0}=373$ MeV obtained with lattice data and the phenomenological value $m_{g_0}=500$ MeV. With $m_{g_0}=440$ MeV we found an
error of approximately $5\%$ when comparing the different curves at different (and large) $N_f$ values.
Choosing one appropriate fit, we can compare the BZ $\beta$ function (Eq.\eqref{betaper}) with the DGM one including a running gluon mass (Eq.\eqref{eq:betaMg_k2}),
confronting their different fixed points and discussing other consequences.
\begin{figure}[hbt]
\centerline{\includegraphics[width=8cm]{BnpNf_3}}
\caption{$\beta$ functions for the linear and exponential fits of Eqs.(\ref{eq:MgLinearPRD}-\ref{eq:MgExpPRD}) for $N_f=3$. \label{fig:BnpNf_3}}
\end{figure}
\begin{table}[htbp]
\caption{Fixed points ($\beta(g_s)=0$), i.e. critical coupling constant values ($g^*$), in the DGM approach with different
values of $m_{g_0}$.}
\begin{tabular}{@{}cccc@{}} \toprule
$N_f\,\,\,$ & $m_{g_0}$ & $m_{g_0}$ & $m_{g_0}$ \\
& (373 MeV) & (440 MeV) & (500 MeV) \\ \colrule
3\hphantom{00} & \hphantom{0} 2.63 & \hphantom{0} 2.47 & 2.37 \\
6\hphantom{00} & \hphantom{0} 2.79 & \hphantom{0} 2.64 & 2.54 \\
8\hphantom{00} & \hphantom{0} 2.98 & \hphantom{0} 2.83 & 2.74 \\
9\hphantom{00} & \hphantom{0} 3.11 & \hphantom{0} 2.97 & 2.87 \\
10\hphantom{0} & \hphantom{0} 3.29 & \hphantom{0} 3.14 & 3.03 \\ \botrule
\end{tabular}
\label{tbl:FP}
\end{table}
In Table \eqref{tbl:FP} we show the fixed points (coupling constant values) in the DGM approach (Eq.\eqref{eq:betaMg_k2}) for different numbers of flavors and three values of
$m_g(0)$. It is possible to see that $g^*$ at the fixed point becomes smaller when $m_{g_0}$ is increased, and also that for the same $m_{g_0}$ the value of the critical
coupling constant increases with the number of flavors $N_f$. Observing these results and the small difference between the fits, we will consider in all subsequent
calculations only the exponential fit of Eq.(\ref{eq:MgExpPRD}) and the $m_g(0)$ value of $440$ MeV.
We show in Fig.\eqref{fig:FP_Cor_Pert} the $\beta$ functions for both cases (BZ and DGM).
The fixed points in the BZ case start appearing for $N_f\geq 8$ [see Fig.\eqref{fig:FP_Cor_Pert}(b)], while in the DGM case they exist as long as
we have asymptotic freedom. The non-trivial fixed points appear at approximately the same values of the strong coupling ($g_s$). However, as we increase
$N_f$, the BZ fixed points occur at smaller coupling constant values, while in the DGM case we have exactly the opposite. The exact coupling
constant values for each fixed point can be better observed in Table \eqref{tbl:FP_BZ}. It is interesting to comment on how much the BZ fixed points depend on the order at which the
$\beta$ function is calculated in perturbation theory. Although two-, three-, and four-loop perturbative calculations of the $\beta$ function indicate an infrared fixed point in the
interval $8 \leq N_f\leq 16$, the recent five-loop calculation of this quantity \cite{Baikov} shows that $N_f=8$ does not present a non-trivial perturbative fixed point at this
level (see, for instance, the discussion in Refs.\cite{Stevenson1}, \cite{Rittov1}). This higher-loop result does not show the apparently mild convergent behavior of the lower-loop
contributions and is currently being independently checked; therefore it is still premature to anticipate
what is going to happen with the BZ scenario in higher-order computations of the $\beta$ function.
\begin{figure}[hbt]
\centering
\subfigure[$\beta$ function of Eq.\eqref{eq:betaMg_k2}]
{\resizebox{0.49\columnwidth}{!}{\includegraphics[width=0.4\textwidth,angle=0]{bcornwall.pdf}}}\hspace{0.05cm}
\subfigure[$\beta$ function of Eq.\eqref{betaper}]
{\resizebox{0.49\columnwidth}{!}{\includegraphics[width=0.4\textwidth,angle=0]{bBZ.pdf}}}
\caption{The DGM and BZ $\beta$ functions. Note that the non-trivial fixed points appear at approximately the same values of
the strong coupling ($g_s$), although they ``move'' in different directions as $N_f$ is changed.}
\label{fig:FP_Cor_Pert}
\end{figure}
\begin{table}[htbp]
\caption{Values of the coupling constant ($g_s^*$) at the fixed points ($\beta(g_s^*)=0$) for both approaches (BZ and DGM), with $N_f$ between 6-13.}
\begin{tabular}{@{}ccc@{}}\toprule
$N_f$ \hphantom{00} & \hphantom{00} BZ & \hphantom{0000} DGM \\ \colrule
6 \hphantom{00} & \hphantom{00} $\ast$ & \hphantom{0000} 2.64 \\
7 \hphantom{00} & \hphantom{00} $\ast$ & \hphantom{0000} 2.73 \\
8 \hphantom{00} & \hphantom{00} 4.41 & \hphantom{0000} 2.83 \\
10 \hphantom{00} & \hphantom{00} 3.20 & \hphantom{0000} 3.13 \\
11 \hphantom{00} & \hphantom{00} 2.80 & \hphantom{0000} 3.36 \\
12 \hphantom{00} & \hphantom{00} 2.43 & \hphantom{0000} 3.65 \\
13 \hphantom{00} & \hphantom{00} 2.06 & \hphantom{0000} 4.08 \\ \botrule
\end{tabular}
\label{tbl:FP_BZ}
\end{table}
Recall that we are comparing quantities obtained in different schemes. The non-perturbative effective coupling constant given by
Eq.(\ref{eq:Coupling}) has been determined as a function of Green's functions obtained from SDE solutions, through the combination of the pinch technique with the background field
method. Such an approach is gauge and renormalization-group invariant, i.e. independent of any renormalization mass $\mu$ \cite{cor4,aguida}. Note that in the end the fixed
point is only a function of $m_g/\Lambda$, which means that in principle we may have a certain stability in the fixed-point determination. However the SDE for the gluon propagator, from
which part of the information leading to the infrared coupling is obtained, has to be solved imposing that the non-perturbative propagator is equal to the perturbative one at some
high-energy scale $(\mu )$, or comparing the SDE propagator to the lattice data. After this, a $\mu$-independent coupling is obtained through one specific relation between two-point
correlators. On the other hand, the BZ $\beta$ function has a scheme dependence (${\overline{MS}}$) above the two-loop level, but its fixed point is relatively stable
\cite{rit}, so the comparison between the different scenarios that we have been discussing is worthwhile. Finally, the DGM coupling moves to higher $g_s$ values
as we increase $N_f$, and at some point even the non-perturbative method used to obtain this quantity may fail. Therefore in the next section we will verify whether these different
$\beta$ functions are analytic up to the fixed point and what can be said about their anomalous dimension.
\section{Analyticity and anomalous dimension}
The renormalization group behavior of the coupling constant is constrained by the analyticity condition as proposed by Krasnikov \cite{Krasnikov} as
\begin{equation}
\bigg\lvert \alpha_s \frac{d}{d\alpha_s}\Big(\frac{\beta(\alpha_s)}{\alpha_s}\Big) \bigg\rvert \leq 1.
\label{eq:Analyt}
\end{equation}
This is a perfect condition to test whether the different fixed points, or critical coupling constants, discussed in the previous sections can still be considered small enough to be
reliable, even if they were obtained with a non-perturbative method as in the DGM case. The problem of using Eq.(\ref{eq:Analyt}) is that it was obtained in the so-called ``natural''
scheme, where the ratio $\beta (\alpha_s)/\alpha_s$ is modified, as we change from one scheme to the other, only by a multiplicative constant, which is not a significant change
near a fixed point. Nevertheless we have a condition constraining the coupling constant renormalization-group behavior in one specific scheme, the BZ coupling determined up to four
loops in the $\overline{MS}$ scheme, and one non-perturbative coupling that is renormalization-group invariant but obtained in one truncation-dependent scheme. Many discussions on
the conformal window seem to indicate some stability in the critical coupling constant values obtained in different schemes \cite{rit,gra}. This fact does not justify a fully formal
comparison of these different quantities, although, as we shall see, we can still learn from it and extract some valuable information. Of course, a complete solution of all
these scheme differences, i.e. obtaining Eq.(\ref{eq:Analyt}) and the BZ and DGM $\beta$ functions in a scheme-independent way, is beyond the scope of this work.
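In practice, the condition of Eq.(\ref{eq:Analyt}) can be checked numerically for any given $\beta$ function. The sketch below uses a toy two-loop form with illustrative (non-QCD) coefficients, evaluating the left-hand side by a central finite difference and comparing it with the exact derivative for that toy form.

```python
def krasnikov_lhs(beta, alpha, h=1.0e-6):
    """Left-hand side of Eq.(eq:Analyt): alpha * d/dalpha [beta(alpha)/alpha],
    evaluated with a central finite difference."""
    f = lambda a: beta(a) / a
    return alpha * (f(alpha + h) - f(alpha - h)) / (2.0 * h)

# Toy two-loop beta function with illustrative (non-QCD) coefficients:
b0, b1 = 0.7, 0.3
beta_toy = lambda a: -b0 * a**2 - b1 * a**3

alpha = 0.5
lhs = krasnikov_lhs(beta_toy, alpha)
analytic = -b0 * alpha - 2.0 * b1 * alpha**2   # exact derivative for this toy form
print(lhs, analytic)   # both approximately -0.5: |lhs| < 1, inside the analytic region
```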
\begin{figure}[htb]
\centering
\subfigure[Analyticity condition applied to the DGM $\beta$ function for different $N_f$. For each $N_f$ the inequality \eqref{eq:Analyt} is satisfied only below one specific
$\alpha_s$ (or $g_s$) value.]
{\resizebox{0.49\columnwidth}{!}{\includegraphics[width=0.4\textwidth,angle=0]{AnDGM.pdf}}}\label{fig:Cornw_Anali}\hspace{0.05cm}
\subfigure[Analyticity condition applied to the perturbative (BZ) $\beta$ function for different $N_f$. ]
{\resizebox{0.49\columnwidth}{!}{\includegraphics[width=0.4\textwidth,angle=0]{AnBZ.pdf}}}\label{fig:Pertu_Anali}
\caption{Analyticity condition for both $\beta$ functions (BZ and DGM), where ${\textsl{F}}(g)$ stands for
$\alpha_s.[d/d\alpha_s (\beta(\alpha_s)/\alpha_s)]$. Fig (a) (DGM or Eq.\eqref{eq:betaMg_k2}) and Fig.(b)
(perturbative or Eq.\eqref{betaper}) show the analyticity condition as a function of the coupling constant $g_s$ and for different $N_f$.}
\label{fig:Analiticity1}
\end{figure}
We plot the left-hand side of Eq.(\ref{eq:Analyt}) in Fig.\eqref{fig:Analiticity1}. This figure allows us to see up to what value of the coupling
constant we can rely on our results. For instance, from Fig.\eqref{fig:Analiticity1}(a) we can see that the
inequality of Eq.(\ref{eq:Analyt}) is not fulfilled in the case of Eq.\eqref{eq:betaMg_k2} with $N_f=12$ for $\alpha_s \geq 1.1$.
Since we are particularly interested in what happens at the fixed point,
in Table \eqref{tbl:IneqFP} we show the value of the left-hand side of inequality \eqref{eq:Analyt} evaluated exactly at the fixed points for
both cases, BZ and DGM. These values show that, within our approximations, we are inside the analytic region. In particular, the
DGM $\beta$ function and the respective fixed point seem to be at the border of the analytic region. Since the derivative of Eq.(\ref{eq:betaMg_k2}) is linear in $\alpha_s$,
it is easy to understand why Eq.(\ref{eq:Analyt}) is saturated at the fixed point, where the left-hand side of Eq.(\ref{eq:Analyt}) is
proportional to $d\beta(\alpha_s)/d\alpha_s$.
\begin{table}[htbp]
\caption{Left-hand side value of the inequality \eqref{eq:Analyt} evaluated at the fixed points obtained from both $\beta$ functions (BZ and DGM).}
{\begin{tabular}{@{}ccc@{}}\toprule
$N_f$ \hphantom{00} & \hphantom{00} BZ & \hphantom{0000} DGM \\ \colrule
3 \hphantom{00} & \hphantom{00} $\ast$ & \hphantom{0000} 0.9998 \\
6 \hphantom{00} & \hphantom{00} $\ast$ & \hphantom{0000} 1.0000 \\
8 \hphantom{00} & \hphantom{00} 0.1053 & \hphantom{0000} 0.9999 \\
9 \hphantom{00} & \hphantom{00} 0.0583 & \hphantom{0000} 1.0000 \\
10 \hphantom{00} & \hphantom{00} 0.0340 & \hphantom{0000} 0.9997 \\
12 \hphantom{00} & \hphantom{00} 0.0112 & \hphantom{0000} 0.9998 \\ \botrule
\end{tabular}
\label{tbl:IneqFP}}
\end{table}
The BZ fixed points are in the analytic region and can certainly be considered perturbative fixed points; therefore we can
compute for these points the respective anomalous dimension. Several lattice simulations have tried to compute the quark mass anomalous dimension ($\gamma$) associated
with these fixed points, because a large anomalous dimension may solve the many problems of Technicolor (or composite) models \cite{san}.
The mass anomalous dimension exponent $\gamma$ up to ${\cal{O}}(\alpha_s^5)$ was determined in Ref.\cite{bai} in the ${\overline{MS}}$ scheme,
and is given by
\begin{equation}
\gamma = - 2 \gamma_m = \sum_{i=0}^\infty 2 (\gamma_m)_i a_{os}^{i+1} \, ,
\label{dima}
\end{equation}
where the $(\gamma_m)_i$ can be read from Ref.\cite{bai}, and in numerical form as a function of $N_f$ we have:
\begin{eqnarray}
\gamma_m & \approx & - a_{os} - a_{os}^2 (4.20833-0.138889 N_f) \nonumber \\
& - & a_{os}^3 (19.5156-2.28412 N_f -0.0270062 N_f^2) \nonumber \\
& - & a_{os}^4 (98.9434-19.1075 N_f +0.276163 N_f^2 \nonumber \\
& + & 0.00579322 N_f^3 ) \nonumber \\
& - & a_{os}^5 (559.7069 -143.6864 N_f +7.4824 N_f^2 \nonumber \\
& + & 0.1083 N_f^3 - 0.000085359 N_f^4) .
\label{gamam}
\end{eqnarray}
where $a_{os}=\alpha_s/\pi $.
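The truncated series of Eq.(\ref{gamam}) is straightforward to evaluate numerically. As a check, the sketch below transcribes its coefficients and reproduces, for instance, the BZ entries $\gamma \approx 0.09$ ($N_f=10$, $\alpha_s^*=0.82$) and $\gamma \approx 0.27$ ($N_f=12$, $\alpha_s^*=0.47$) of Table \eqref{tbl:gamma}.

```python
import math

def gamma_m(alpha_s, nf):
    """gamma_m of Eq.(gamam), truncated at O(a_os^5), with a_os = alpha_s/pi."""
    a = alpha_s / math.pi
    c = [
        1.0,
        4.20833 - 0.138889 * nf,
        19.5156 - 2.28412 * nf - 0.0270062 * nf**2,
        98.9434 - 19.1075 * nf + 0.276163 * nf**2 + 0.00579322 * nf**3,
        559.7069 - 143.6864 * nf + 7.4824 * nf**2 + 0.1083 * nf**3
        - 0.000085359 * nf**4,
    ]
    return -sum(ci * a**(i + 1) for i, ci in enumerate(c))

def gamma(alpha_s, nf):
    """gamma = -2 gamma_m, evaluated at a fixed-point coupling."""
    return -2.0 * gamma_m(alpha_s, nf)

print(round(gamma(0.82, 10), 2))   # 0.09, the BZ value for Nf = 10
print(round(gamma(0.47, 12), 2))   # 0.27, the BZ value for Nf = 12
```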
The DGM fixed points are at the border of the analytic region, and we may wonder whether we can reliably compute the
anomalous dimension with Eq.(\ref{gamam}) even in this case. It should be remembered that chiral symmetry breaking, or the dynamical
generation of quark masses, in the presence of
dynamically generated gluon masses is still a matter of debate \cite{corc,doff}, possibly being associated with the confinement
mechanism \cite{corc}, and demanding a non-perturbative anomalous dimension calculation.
However, in what follows we will simply assume that we can
use Eq.(\ref{gamam}) to compute the anomalous dimensions at the DGM fixed points.
The results are shown in Fig.\eqref{fig:GammaNf}.
\begin{figure}[hbt]
\centerline{\includegraphics[width=0.6\hsize]{gamma1.pdf}}
\caption{Anomalous dimension for different values of $N_f$ in the BZ and DGM cases.}
\label{fig:GammaNf}
\end{figure}
It is interesting to see in Fig.\eqref{fig:GammaNf} that the anomalous dimensions for the BZ and DGM $\beta$ functions have
different behaviors as a function of $N_f$, and this effect could probably be tested in lattice simulations.
We show in Table \eqref{tbl:gamma} the anomalous dimensions for the BZ and DGM cases respectively for
$N_f= 8, \, 9, \, 10, \, 12$. Note, however,
that, if Eq.(\ref{gamam}) is applied to the DGM case, we do have large $\gamma$ values for a small number of quarks. In general it is said that such values may be present in
walking gauge theories, but this is certainly not the QCD case with a small number of quarks.
\begin{table}[htbp]
\caption{Anomalous dimensions evaluated at the fixed points of the BZ and DGM $\beta$ functions for different values of $N_f$. $\alpha_s^{\ast}$ is the fixed point value of the
coupling constant for each $N_f$. }
{\begin{tabular}{@{}lcccc@{}}
\toprule
& \multicolumn{2}{c}{BZ} \hphantom{00} & \multicolumn{2}{c}{DGM} \\
\cline{2-5}
$N_f$ \hphantom{00} & $\alpha_s^{\ast}$ \hphantom{0} & $\gamma(\alpha_{s}^\ast)$ \hphantom{00} & $\alpha_s^{\ast}$ \hphantom{0} & $\gamma(\alpha_{s}^\ast)$ \\ \colrule
8 \hphantom{00} & 1.55 \hphantom{0} & -4.83 \hphantom{00} & 0.64 \hphantom{0} & 0.501 \\
9 \hphantom{00} & 1.07 \hphantom{0} & -0.60 \hphantom{00} & 0.70 \hphantom{0} & 0.39 \\
10 \hphantom{00} & 0.82 \hphantom{0} & 0.09 \hphantom{00} & 0.78 \hphantom{0} & 0.17 \\
12 \hphantom{00} & 0.47 \hphantom{0} & 0.27 \hphantom{00} & 1.06 \hphantom{0} & -0.88 \\ \botrule
\end{tabular}
\label{tbl:gamma}}
\end{table}
In principle these anomalous dimensions can be tested in lattice simulations, although they demand simulations with
extremely large volume lattices, since the calculation must be performed in a conformal regime. There are results
for the anomalous dimension with $N_f=12$ \cite{Appel, Aoki}, indicating a value in the range $\gamma_m \approx 0.4\,-\,0.5$.
Unfortunately this value is a factor of $2$ above the one predicted in the BZ case, and curiously is exactly in the region
of the $\gamma$ values in the DGM case, although this is true only up to $N_f\approx 10$. It should also be remembered
that these lattice simulations make use of the naive hyperscaling function $M_H \propto m^{1/(1+\gamma)}$ \cite{mira} determined
for a walking gauge theory. It is possible that physical masses do not necessarily follow such scaling, and, in particular, in
the case of scalar masses we have been advocating that these masses may scale differently according to the asymptotic behavior
of the dynamically generated fermion mass \cite{and1,and2,and3}.
We end this section stressing that the BZ approach is analytic (as usually claimed), and that the DGM approach also seems to be reliable up to the
border of the analytic region. Therefore we assumed that in both cases we can compute the anomalous dimension with the
perturbative expression, observing quite different behaviors for the mass anomalous dimension exponent, which could be
tested in lattice simulations.
\section{Minimum of energy: BZ or DGM?}
If the $\beta$ functions in these two approaches are comparable and lead to approximately the same fixed points for some $N_f$ values, can we determine which one leads to the actual
minimum of energy? The $\beta$ function can be related to the trace of the energy momentum tensor \cite{cre,cha,col}
\begin{equation}
\left< \theta_{\mu\mu} \right> = \frac{\beta (g)}{g} \left<G_{\mu\nu}G^{\mu\nu}\right> \, ,
\label{tem}
\end{equation}
which is proportional to the vacuum energy $\left< \Omega \right>$ as
\begin{equation}
\left< \Omega \right> = \frac{1}{4} \left< \theta_{\mu\mu} \right> .
\label{ve}
\end{equation}
The minimum of the vacuum energy is a scheme-independent quantity, and, in principle, this quantity could be used to discriminate
which $\beta$ function leads to the deepest minimum of energy. Of course Eq.\eqref{tem} will be computed in a quite simple approximation, not free of scheme dependence, although,
as discussed before, we have no reason to expect large deviations in the behavior of the $\beta$ functions.
In Eq.(\ref{tem}) the term $\left<G_{\mu\nu}G^{\mu\nu}\right>$
is proportional to the gluon condensate, which is a fully non-perturbative quantity \cite{shif}. In order to calculate the vacuum energy we must know how the gluon
condensate is modified as we change the number of flavors. Unfortunately there are, as far as we know, no lattice simulations of this quantity as a function
of $N_f$. On the other hand there are discussions about the condensate value related to the infrared behavior of the gluon propagator. For instance (see Ref.\cite{zak} and
references therein), the gluon condensate is expected to be of order $c m_t^4$, where $c$ is a constant and $m_t$ is a tachyonic mass that appears in the IR gluon propagator, which
can also be related to the confining potential.
One expression for the gluon condensate as a function of the dynamical gluon mass was determined in Ref.\cite{cor1} and also studied in Ref.\cite{gor} (see Eq.(6.17) of
Ref.\cite{cor1}):
\begin{equation}
\left<\frac{\alpha_s}{\pi}G_{\mu\nu}G^{\mu\nu} \right>=A\frac{3m_g^4}{4\pi^4 \beta_0 \ln\Big(\frac{4m_g^2}{\Lambda^2}\Big)},
\label{eq:gluoncondensate}
\end{equation}
where we are going to assume that $m_g$ is a function of $N_f$, as described by the exponential behavior of Eq.(\ref{eq:MgExpPRD}) with $m_{g_0}=440$ MeV, and $A$ ($\approx 7$)
is a constant such that $\left<\frac{\alpha_s}{\pi}G_{\mu\nu}G^{\mu\nu} \right>=0.012$ GeV$^4$ when $N_f=0$ \cite{shif}.
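The normalization can be checked by inverting Eq.(\ref{eq:gluoncondensate}) at $N_f=0$. With illustrative inputs ($m_{g_0}=440$ MeV, $\Lambda=300$ MeV, $\beta_0=11/16\pi^2$ --- assumptions of this sketch, not necessarily the exact values used here), one indeed recovers $A$ of order $6-7$:

```python
import math

def A_from_condensate(cond=0.012, mg=0.44, lam=0.30):
    """Invert Eq.(eq:gluoncondensate) at Nf = 0 for the normalization A.
    cond is <(alpha_s/pi) G G> in GeV^4; mg and lam are in GeV."""
    b0 = 11.0 / (16.0 * math.pi**2)       # beta_0 for Nf = 0
    return (cond * 4.0 * math.pi**4 * b0
            * math.log(4.0 * mg**2 / lam**2) / (3.0 * mg**4))

print(A_from_condensate())   # roughly 6.2 with these inputs
```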
This expression for the condensate is able to represent the gross behavior of this quantity as we vary $N_f$. For a fixed $N_f$ value and at one specific value $g$ of the coupling
constant, we can say that the state of minimum energy will happen at the smallest value of $\beta (g)$, although a complete answer to this problem demands a detailed calculation
of Eq.(\ref{tem}). Therefore, introducing Eq.\eqref{eq:gluoncondensate} together with Eq.\eqref{tem} into Eq.\eqref{ve}, we have
\begin{equation}
\left< \Omega \right> = \frac{3 A}{4\pi^2}\frac{\beta(g)}{g}m_g^4(N_f).
\label{eq:vacuum}
\end{equation}
The assumption of Eq.(\ref{eq:gluoncondensate}), a function monotonically increasing with $N_f$, is not fundamental to determine the minimum of energy, which is basically dictated by
the behavior of the $\beta$ function.
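As a numerical sketch of this point, the code below scans Eq.(\ref{eq:vacuum}) for the DGM $\beta$ function at $k\rightarrow 0$ with illustrative inputs ($N_f=10$, $\Lambda=300$ MeV, $A=7$, frozen $m_g=440$ MeV --- assumptions of this example, not the exact ones behind Fig.(\ref{fig:veNf})): the vacuum energy is negative for all couplings below the fixed point, returning to zero at the fixed point itself, with the deepest scanned point at an intermediate coupling.

```python
import math

def beta_dgm(g, mg=0.44, lam=0.30, nf=10):
    """DGM beta function at k -> 0 (illustrative GeV inputs, frozen m_g)."""
    b0 = (11.0 - 2.0 * nf / 3.0) / (16.0 * math.pi**2)
    return -b0 * g**3 * (1.0 - 4.0 * mg**2 * math.exp(-1.0 / (g**2 * b0)) / lam**2)

def omega(g, mg=0.44, A=7.0):
    """Vacuum energy of Eq.(eq:vacuum) in GeV^4."""
    return 3.0 * A / (4.0 * math.pi**2) * beta_dgm(g, mg) / g * mg**4

# Scan couplings below the fixed point: omega < 0 everywhere there (beta < 0),
# and the deepest point of the scan sits at an intermediate coupling.
gs = [0.05 * i for i in range(1, 80)]
vals = [omega(g) for g in gs]
g_min = gs[vals.index(min(vals))]
print(f"deepest scanned point: g = {g_min:.2f}, omega = {min(vals):.4f} GeV^4")
```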
Our results for the vacuum energy as a function of the coupling constant $g_s$ are shown in Fig.(\ref{fig:veNf}) for $N_f \approx 8 \, - \, 12$. Our results are surprising in the
following sense: for $N_f=8$ it seems that the BZ $\beta$ function is the one that leads to the deepest minimum of energy as a function of the coupling constant up to the critical
$g^*$ value, although $N_f=8$ is at the border of the conformal window and below this value the ``perturbative'' vacuum becomes unstable, i.e. if we define the vacuum energy
proportional to the $\beta_{BZ}$ function the vacuum is negative up to infinity for $N_f<8$. Above this $N_f$ value it is the DGM $\beta$ function that leads to the deepest state of
minimum energy. As we increase $N_f$ above $12$, the coupling constant in the DGM approach increases to a point where we cannot be sure how reliable the SDE truncation
leading to this solution still is. However, there is a clear possibility that the non-trivial fixed point observed in lattice simulations at $N_f=12$ is related to the DGM
mechanism.
\begin{figure}[htb]
\centering
\subfigure[]
{\resizebox{0.45\columnwidth}{!}{\includegraphics[width=0.4\textwidth,angle=0]{vacuumNf8.pdf}}}\label{fig:vevNf8}\\
\subfigure[]
{\resizebox{0.45\columnwidth}{!}{\includegraphics[width=0.4\textwidth,angle=0]{vacuumNf10.pdf}}}\label{fig:vevNf10}\\
\subfigure[]
{\resizebox{0.45\columnwidth}{!}{\includegraphics[width=0.4\textwidth,angle=0]{vacuumNf12.pdf}}}\label{fig:vevNf12}
\caption{Vacuum energy in both, BZ and DGM, approaches. Fig.(a) is for $N_f=8$, value that is in the limit of conformal window. Figs.(b) and (c) correspond to $N_f=10$
and $N_f=12$ respectively.}
\label{fig:veNf}
\end{figure}
We recall that a full calculation of the vacuum energy can be performed with the effective
potential for composite operators as a function of the complete QCD propagators \cite{cjt}. In this type of calculation, besides the contribution
of dynamical gluon masses, the inclusion of dynamical fermion masses should also be considered (see, for instance, Ref.\cite{gor}).
Our simple estimate of the vacuum energy does not consider the effect of fermions; however, this effect gives just a few percent contribution
to the vacuum energy for a small number of flavors \cite{gor}. As we increase the number of flavors the chiral symmetry is recovered, i.e. the dynamical generation
of fermion masses is probably erased \cite{for,tom,cap}, and the effect of fermions will probably not affect the value of
$\left< \Omega \right>$ calculated for a number of fermions around $8-12$, which is the region where the BZ and DGM approaches can be compared. Finally,
we also do not know how confinement may affect the dynamical masses of gluons and quarks and modify our simple estimate of the vacuum energy.
\section{CONCLUSIONS}
We have compared two different mechanisms (BZ and DGM) proposing the existence of non-trivial fixed points in QCD. The BZ approach is essentially perturbative while the DGM
one is non-perturbative. Their $\beta$ functions are quite similar and the fixed points occur at approximately the same coupling constant values for $N_f \approx 9-10$.
However, as we vary $N_f$ the values of the coupling constants associated with the fixed points move in different directions: decreasing when
$N_f$ increases in the BZ
approach, while exactly the opposite occurs in the DGM case.
Both $\beta$ functions and their coupling constants up to the critical value that determine the fixed points are in agreement with the analyticity constraint, and if we assume
that in both cases we can use perturbation theory to compute the anomalous dimension associated to each fixed point, we come to the conclusion that the DGM mechanism could be a
possible explanation for the anomalous dimension and fixed points within the conformal window.
Our results cover the range $N_f\approx 8\, - \,12$. Assuming that the gluon condensate can be calculated in terms of the non-perturbative
gluon propagator and its dynamical gluon mass, we observe an intriguing behavior of the vacuum energy calculated as a function of the
coupling constant up to the fixed point value: around $N_f=8$, which is at the border of the conformal window, it is the BZ approach that leads to the deepest
minimum of energy, and below this number of flavors the ``perturbative'' vacuum becomes unstable, in the sense that when $N_f<8$ the
BZ $\beta$ function is not bounded from below. As we increase $N_f$ the vacuum energy is dominated by the DGM mechanism. At larger $N_f$ values the coupling constant in the DGM
approach seems to increase to values where we cannot be sure that the SDE calculations
of the DGM approach are still reliable. Of course, in the DGM case we do have fixed points for a small number of quarks, and for
a naive (perturbative) calculation of the mass anomalous dimension exponent we obtain $\gamma$ in the range $0.4-0.5$ up to $N_f \approx 10$.
If the fixed points predicted in the DGM approach are not confirmed by lattice simulations this means that the extrapolations shown in Eqs. \eqref{eq:MgLinearPRD} and
\eqref{eq:MgExpPRD} are not correct or the $\beta$ function of Eq.\eqref{eq:betacornwall} does not correspond to a true minimum of energy.
It should be noted that we used simple approximations for the coupling constant (Eq.(\ref{eq:Coupling})), for the dynamical gluon mass behavior with the momentum (Eq.(\ref{eq:mg})),
and for the dependence of this mass (Eq.(\ref{eq:MgExpPRD})) and of the gluon condensate (Eq.(\ref{eq:gluoncondensate})) on the number of flavors. The dependence of these quantities
on $N_f$ introduces some uncertainty in our calculation, which is difficult to estimate, but should not modify the main results and characteristics of the DGM mechanism.
In particular, it has been strongly stressed that lattice simulations of the gluon propagator, and consequently of the dynamical gluon mass IR value, demand quite large
volume lattices \cite{cuca}. We also note that our calculation depends on the ratio $m_g/\Lambda$, and the $\Lambda$ value will depend on the scheme and the number of fermions
that we consider in order to obtain its physical value. If we consider recently reported $\Lambda^{(N_f)}_{\overline{MS}}$ determinations obtained in lattice simulations (e.g.
$\Lambda^{(2)}_{\overline{MS}} = 330^{+21}_{-54}$ MeV and $\Lambda^{(3)}_{\overline{MS}} = 336 (19)$ MeV \cite{Aoki1}), we can be confident that the uncertainty that we have in the $m_g$
determination exceeds by far the one that we have for $\Lambda$; for this reason we stress that our result is quite dependent on the ratio $m_g/\Lambda$ and the main source of
uncertainty resides in the $m_g$ variation with $N_f$.
We are comparing $\beta$ functions obtained in different schemes, and are assuming that their respective fixed points are relatively stable, making our
comparison worthwhile. Considering the proximity of the different fixed points, and their dependence on $N_f$, it would be important to test such a coincidence by
different methods. It is therefore imperative to have improved lattice calculations of the behavior of the dynamical gluon mass at large $N_f$, mainly to check extrapolations like the ones
of Eq.\eqref{eq:MgLinearPRD} and Eq.\eqref{eq:MgExpPRD}. The same can also be said about calculations of the gluon condensate as a function of $N_f$. This will allow better
estimates of the fixed points as well as of the vacuum energy.
\section*{Acknowledgments}
\vspace{-0.5cm}
This research was partially supported
by the Conselho Nac. de Desenv. Cient\'{\i}fico e Tecnol\'ogico
(CNPq), by the grants 2013/22079-8 and 2013/24065-4 of Funda\c c\~ao de Amparo \`a Pesquisa do
Estado de S\~ao Paulo (FA\-PES\-P) and by Coordena\c c\~ao de Aper\-fei\-\c coa\-mento
de Pessoal de N\'{\i}vel Superior (CAPES).
\begin {thebibliography}{99}
\bibitem{gro} D. J. Gross and F. J. Wilczek, {\it Phys. Rev. Lett.} {\bf 30}, 1343 (1973).
\bibitem{pol} H. D. Politzer, {\it Phys. Rev. Lett.} {\bf 30}, 1346 (1973).
\bibitem{cas} W. E. Caswell, {\it Phys. Rev. Lett.} {\bf 33}, 244 (1974).
\bibitem{BZ} Tom Banks, A. Zaks, {\it Nucl. Phys. B} {\bf 196}, 189 (1982).
\bibitem{san} F. Sannino, {\it Acta Phys. Polon. B} {\bf 40}, 3533 (2009).
\bibitem{rit} Thomas A. Ryttov, {\it Phys. Rev. D} {\bf 90}, 056007 (2014), {\it Phys. Rev. D} {\bf 91}, 039906 (2015).
\bibitem{app} T. Appelquist et al., arXiv: 1204.6000.
\bibitem{pal} M. P. Lombardo, K. Miura, T. J. N. da Silva and E. Pallante, {\it Int. J. Mod. Phys. A} {\bf 29}, 1445007 (2014).
\bibitem{has} A. Hasenfratz, D. Schaich and A. Veemala, {\it JHEP} {\bf 1506}, 143 (2015).
\bibitem{Beta4loop} T. van Ritbergen, J. A. Vermaseren, and S. A. Larin, {\it Phys. Lett. B} {\bf 400}, 379 (1997); M. Czakon, {\it Nucl. Phys. B} {\bf 710}, 485 (2005).
\bibitem{prosp} G. M. Prosperi, M. Raciti and C. Simolo, {\it Prog. Part. Nucl. Phys.} {\bf 58}, 387 (2007).
\bibitem{dok} Y. L. Dokshitzer, \textsl{Perturbative QCD and power corrections}, in \textsl{International Conference ``Frontiers of Matter"}, Blois, France,
June 1999, arXiv: 9911299.
\bibitem{gru} G. Grunberg, {\it Phys. Rev. D} {\bf 29}, 2315 (1984).
\bibitem{gru2} G. Grunberg,{\it JHEP} {\bf 08}, 019 (2001).
\bibitem{brod} S. J. Brodsky, E. Gardi, G. Grunberg and J. Rathsman, {\it Phys. Rev. D} {\bf 63}, 094017 (2001); S. J. Brodsky, G. F. de Teramond and A. Deur,
{\it Phys. Rev. D} {\bf 81}, 096010 (2010).
\bibitem{GiesAlkofer} Jens Braun and Holger Gies, {\it JHEP} {\bf 05}, 60 (2010); Markus Hopfer, Christian S. Fischer and Reinhard Alkofer, {\it JHEP} {\bf 11}, 035 (2014).
\bibitem{nat3} A. C. Aguilar, A. A. Natale and P. S. Rodrigues da Silva, {\it Phys. Rev. Lett.} {\bf 90}, 152001 (2003).
\bibitem{cor1} J. M. Cornwall, {\it Phys. Rev. D} {\bf 26}, 1453 (1982).
\bibitem{pinch} J.M. Cornwall, J. Papavassiliou and D. Binosi, {\it The Pinch Technique and its Applications to Non-Abelian Gauge Theories}, Cambridge University Press, 2011.
\bibitem{papa3} D. Binosi and J. Papavassiliou, {\it Phys. Rept.} {\bf 479}, 1 (2009).
\bibitem{s1} A. C. Aguilar and J. Papavassiliou, {\it JHEP} {\bf 12}, 012 (2006).
\bibitem{s2} A. C. Aguilar and J. Papavassiliou, {\it Phys. Rev. D} {\bf 81}, 034003 (2010).
\bibitem{s3} A. C. Aguilar, D. Binosi and J. Papavassiliou, {\it Phys. Rev. D} {\bf 84}, 085026 (2011).
\bibitem{s4} D. Binosi, D. Ibanez and J. Papavassiliou, {\it Phys. Rev. D} {\bf 86}, 085033 (2012).
\bibitem{s5} A. C. Aguilar, D. Binosi and J. Papavassiliou, {\it JHEP} {\bf 12}, 050 (2012).
\bibitem{s6} A. C. Aguilar, D. Binosi and J. Papavassiliou, {\it Phys. Rev. D} {\bf 89}, 085032 (2014).
\bibitem{g1} A. C. Aguilar, D. Binosi and J. Papavassiliou, {\it JHEP} {\bf 1007}, 002 (2010).
\bibitem{g2} A. C. Aguilar, D. Binosi, J. Papavassiliou and J. R.-Quintero, {\it Phys. Rev. D} {\bf 80}, 085018 (2009).
\bibitem{la1} I. L. Bogolubsky, E. M. Ilgenfritz, M. Muller-Preussker and A. Sternbeck, {\it Phys. Lett. B} {\bf 676}, 69 (2009).
\bibitem{la2} P. O. Bowman, et al., {\it Phys. Rev. D} {\bf 76}, 094505 (2007).
\bibitem{la3} A. Cucchieri and T. Mendes, {\it Phys. Rev. Lett.} {\bf 100}, 241601 (2008).
\bibitem{la4} A. Cucchieri and T. Mendes, {\it Phys. Rev. D} {\bf 81}, 016005 (2010).
\bibitem{s12} A. C. Aguilar and J. Papavassiliou, {\it Eur. Phys. J. A} {\bf 31}, 742 (2007).
\bibitem{cor3} J. M. Cornwall, {\it Phys. Rev. D} {\bf 93}, 025021 (2016).
\bibitem{aguida} A. C. Aguilar, D. Binosi and J. Papavassiliou, {\it Front. Phys. China} {\bf 11}, 111203 (2016).
\bibitem{sx} A. C. Aguilar, D. Binosi and J. Papavassiliou, {\it Phys. Rev. D} {\bf 78}, 025010 (2008).
\bibitem{nat6} A. A. Natale, {\it PoS QCD-TNT} {\bf 09}, 031 (2009); arXiv: 0910.5689.
\bibitem{nat1} A. C. Aguilar and A. A. Natale, {\it JHEP} {\bf 0408}, 057 (2004).
\bibitem{cor2} J. M. Cornwall, {\it PoS QCD-TNT-II}, 010 (2011); {\it arXiv}: 1111.0322.
\bibitem{nat2} J. D. Gomez and A. A. Natale, {\it Phys. Lett. B} {\bf 747}, 541 (2015).
\bibitem{nat4} J. D. Gomez and A. A. Natale, {\it Phys. Rev. D} {\bf 93}, 014027 (2016).
\bibitem{aya} A. Ayala et al., {\it Phys. Rev. D} {\bf 86}, 074512 (2012).
\bibitem{agui3} A. C. Aguilar, D. Binosi and J. Papavassiliou, {\it Phys. Rev. D} {\bf 88}, 074010 (2013).
\bibitem{Baikov} P. A. Baikov, K. G. Chetyrkin and J. H. Kunh, \it{arXiv:1606.08659.}
\bibitem{Stevenson1} P. M. Stevenson, \it{arXiv:1607.01670.}
\bibitem{Rittov1} T. A. Rittov and R. Shrock, \it{Phys. Rev. D} {\bf 94}, 105015; idem, \it{Phys. Rev. D} {\bf 94}, 125005.
\bibitem{Hasenfratz}A. Hasenfratz and D. Schaich, \it{arXiv:1610.10004.}
\bibitem{cor4} J. M. Cornwall, {\it arXiv:1211.2019; arXiv:1410.2214.}
\bibitem{Krasnikov} N.V. Krasnikov, {\it Nucl. Phys. B} {\bf 192}, 497 (1981).
\bibitem{gra} J. A. Gracey and R. M. Simms, {\it Phys. Rev. D} {\bf 91}, 085037 (2015).
\bibitem{bai} P. A. Baikov, K. G. Chetyrkin and J. H. Kuhn, {\it JHEP} {\bf 1410}, 76 (2014).
\bibitem{corc} J. M. Cornwall, {\it Phys. Rev. D} {\bf 83}, 076001 (2011).
\bibitem{doff} A. Doff, F. A. Machado and A. A. Natale, {\it Annals Phys.} {\bf 327}, 1030 (2012).
\bibitem{Appel} T. Appelquist, G.T. Fleming, M.F. Lin, E.T. Neil, and D.A. Schaich, {\it Phys. Rev. D} {\bf 84}, 054501 (2011).
\bibitem{Aoki} Yasumichi Aoki, et al, {\it Phys. Rev. D} {\bf 86}, 054506 (2012).
\bibitem{mira} V. A. Miransky, {\it Phys. Rev. D} {\bf 59}, 105003 (1999).
\bibitem{and1} A. Doff and A. A. Natale, {\it Int. J. Mod. Phys. A} {\bf 31}, 1650024 (2016).
\bibitem{and2} A. Doff, A. A. Natale and P. S. Rodrigues da Silva, {\it Phys. Rev. D} {\bf 80}, 055005 (2009).
\bibitem{and3} A. Doff, A. A. Natale and P. S. Rodrigues da Silva, {\it Phys. Rev. D} {\bf 77}, 075012 (2008).
\bibitem{cre} R. Crewter, {\it Phys. Rev. Lett.} {\bf 28}, 1421 (1972).
\bibitem{cha} M. Chanowitz and J. Ellis, {\it Phys. Lett. B} {\bf 40}, 397 (1972).
\bibitem{col} J. C. Collins, A. Duncan and S. D. Joglekar, {\it Phys. Rev. D} {\bf 16}, 438 (1977).
\bibitem{shif} M.A. Shifman, A.I. Vainshten, and V.I. Zakharov, {\it Nucl. Phys. B} {\bf 163}, 46 (1980).
\bibitem{zak} S. Narison and V. I. Zakharov, {\it Phys. Lett. B} {\bf 679}, 355 (2009).
\bibitem{gor} E. V. Gorbar and A. A. Natale, {\it Phys. Rev. D} {\bf 61}, 054012 (2000).
\bibitem{cjt} J. M. Cornwall, R. Jackiw and E. Tomboulis, {\it Phys. Rev. D} {\bf 10}, 2428 (1974).
\bibitem{for} Ph. de Forcrand, S. Kim and W. Unger, {\it JHEP} {\bf 13}, 051 (2013).
\bibitem{tom} E. T. Tomboulis, {\it Phys. Rev. D} {\bf 87}, 034513 (2013).
\bibitem{cap} R. M. Capdevilla, A. Doff and A. A. Natale, {\it Phys. Lett. B} {\bf 744}, 325 (2015).
\bibitem{cuca} A. Cucchieri and T. Mendes, {\it AIP Conf.Proc.} {\bf 1343}, 185 (2011); idem, {\it PoS QCD-TNT} {\bf 09}, 026 (2009).
\bibitem{Aoki1} S. Aoki \textit{et al.} \it{arXiv:1607.00299.}
\end{thebibliography}
\end{document}
\section{Introduction}
Let $M$ be a connected manifold. It is well-known that for any Cartan geometry $(\cG \to M, \omega)$ of a given type $(G,K)$ (henceforth, a ``$G/K$ geometry''), the Lie algebra of (infinitesimal) symmetries $\finf(\cG,\omega)$ (see Definition \ref{D:Cartan}) is always finite-dimensional. Indeed, we always have $\dim(\finf(\cG,\omega)) \leq \dim(G)$ and equality is realized by the flat model $(G \to G/K, \omega_{MC})$, where $\omega_{MC}$ is the Maurer--Cartan form on $G$. Hence, a natural question is: {\em Among all $G/K$ geometries $(\cG \to M, \omega)$ with $\dim(\finf(\cG,\omega)) < \dim(G)$, what is the maximum of $\dim(\finf(\cG,\omega))$?} Often there is a significant {\em gap} between this number and $\dim(G)$, i.e. there are forbidden dimensions, so the above question is sometimes referred to as the {\em gap problem}. However, as stated, the problem is rather too general. We will instead concentrate on classes of Cartan geometries which admit an equivalent description as an underlying geometric structure on $M$. The gap problem for various geometric structures has been studied since the time of Sophus Lie by many authors, including Fubini, Cartan, Yano, Wakakuwa, Vranceanu, Egorov, Obata, and Kobayashi -- see \cite{Fub1903, Ego1978, Kob1972} and references therein.
The prototypical example is Riemannian geometry, which in dimension $n$ corresponds to an $\oE(n) / \text{O}(n)$ geometry, where $\oE(n)$ is the Euclidean group and $\text{O}(n)$ is the orthogonal group. Maximal symmetry occurs (non-uniquely) for the flat model $\oE(n) / \text{O}(n) \cong \bbR^n$, and all other constant curvature spaces. Submaximal symmetry dimensions are given in Table \ref{F:submax-Riem} and were studied by Wang \cite{Wang1947} and Egorov \cite{Ego1955} for $n \geq 3$, and by Darboux and Koenigs in the $n=2$ case \cite{Dar1887}.
\begin{table}[h]
$\begin{array}{|c|c|c|} \hline
n & {\rm Max} & {\rm Submax}\\ \hline\hline
2 & 3 & 1 \\
3 & 6 & 4 \\
4 & 10 & 8 \\
\geq 5 & \binom{n+1}{2} & \binom{n}{2} + 1 \\[2pt] \hline
\end{array}$
\caption{Maximal / submaximal symmetry dimensions for Riemannian geometry}
\label{F:submax-Riem}
\end{table}
A {\em parabolic geometry} is a $G/P$ geometry, where $G$ is a real or complex \ss Lie group and $P \subset G$ is a parabolic subgroup. Many well-known geometric structures such as conformal, projective, CR, systems of 2nd order ODE, scalar 3rd order ODE, and many bracket-generating distributions, etc. are equivalently described as parabolic geometries. Indeed, there is an equivalence of categories between {\em regular, normal} parabolic geometries and underlying structures \cite{CS2009}. For such geometries, maximal symmetry occurs {\em uniquely} for the (locally) flat model. The gap problem we will focus on is: {\em Among all {\bf regular, normal} $G/P$ geometries $(\cG \to M, \omega)$ which are {\bf not locally flat}, what is the maximum of $\dim(\finf(\cG,\omega))$?} We denote this {\em submaximal symmetry dimension} by $\fS$. (Equivalently, if we let $\cS$ be the symmetry algebra of the underlying structure on $M$, then $\fS$ maximizes $\dim(\cS)$ among all structures which are not locally flat.)
In the context of parabolic geometries, previously known results are given in Table \ref{F:known-submax}, but this list comes with some caveats. For example, consider the geometry of generic rank 2 distributions on 5-manifolds \cite{Car1910}, also known as $(2,3,5)$-distributions, which is modelled on $G_2 / P_1$. Using his method of equivalence, Cartan identified a fundamental binary quartic invariant, whose root type gives a pointwise algebraic classification similar to the Petrov classification of the Weyl tensor in 4-dimensional Lorentzian (conformal) geometry. Cartan then identified the maximal amount of symmetry admitted {\em within each class with constant root type}, and in many cases exhibited local models either explicitly or via (closed) structure equations. The flat model has $G_2$ symmetry, which is 14-dimensional, and occurs precisely when the binary quartic vanishes everywhere. Submaximally symmetric models, having 7-dimensional symmetry, all occur within the class where the binary quartic has a single root of multiplicity $4$. However, since our only assumption is that the geometry is not locally flat, our result that $\fS=7$ is a sharpening of Cartan's result. For any (regular, normal) parabolic geometry, there is a fundamental tensorial invariant called {\em harmonic curvature} $\Kh$, valued in the Lie algebra cohomology group $H^2(\fg_-,\fg)$, whose vanishing everywhere characterizes local flatness. (The Cartan quartic and the Weyl tensor are instances of $\Kh$ for their respective geometries.) For us, ``submaximally symmetric'' assumes only that $\Kh$ does not vanish {\em at some point} in the neighbourhood under consideration.\footnote{The global problem of studying symmetry dimensions which are ``less than maximal'' is quite different. For example, take the flat model $G \to G/P$ and remove a point on the base and the fibre over it. This yields a (locally flat) $G/P$ geometry with {\em global} automorphism group isomorphic to $P$. 
In the $G_2 / P_1$ case, $\dim(P_1) = 9$.}
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|} \hline
Geometry & Model & Max & Submax & Citation\\ \hline\hline
\begin{tabular}{c} Scalar 2nd order ODE\\ mod point transformations\end{tabular} & $\text{SL}_3(\bbR) / P_{1,2}$ & 8 & 3 & Tresse (1896)\\ \hline
$2$-dim.\ projective structures & $\text{SL}_3(\bbR) / P_1$ & 8 & 3 & Tresse (1896)\\ \hline
$(2,3,5)$-distributions & $G_2 / P_1$ & 14 & 7 & Cartan (1910)\\ \hline
Projective structures (dim.\ $\rkg \geq 3$) & $\text{SL}_{\rkg+1}(\bbR) / P_1$ & $\rkg^2+2\rkg$ & $(\rkg-1)^2 + 4$ & Egorov (1951)\\ \hline
\begin{tabular}{c} Scalar 3rd order ODE\\ mod contact transformations \end{tabular} & $\text{Sp}_4(\bbR) / P_{1,2}$ & 10 & 5 & \begin{tabular}{c} Wafo Soh, Mahomed,\\ Qu (2002)\end{tabular}\\ \hline
Pairs of 2nd order ODE & $\text{SL}_4(\bbR) / P_{1,2}$ & 15 & 9 & \begin{tabular}{c}Casey, Dunajski,\\ Tod (2012)\end{tabular}\\ \hline
\end{tabular}
\caption{Previously known submaximal symmetry dimensions for parabolic geometries}
\label{F:known-submax}
\end{table}
The main results of this article are:
\begin{itemize}
\item For any complex or real regular, normal $G/P$ geometry we give a universal upper bound $\fS \leq \fU$ (Theorem \ref{T:upper}), where $\fU$ is algebraically determined -- see \eqref{E:fU}.
\item In complex or {\em split-real}\footnote{We refer to $\fg$ as split-real if it is a split real form of its complexification, e.g. $\fsl(n,\bbR)$, but not $\fsu(n)$.} cases, we:
\begin{itemize}
\item exhibit models with $\dim(\cS) = \fU$ in almost all cases (Theorem \ref{T:realize}). Thus, $\fS = \fU$ almost always (Theorem \ref{T:main-thm2}). Exceptions are also studied (Section \ref{S:exceptions}).
\item give a Dynkin diagram recipe to efficiently compute $\fU$ (Section \ref{S:Dynkin}).
\item establish local homogeneity of all submaximally symmetric models\footnote{This is not universally true outside the parabolic context, e.g. Killing fields for metrics on surfaces (see Table \ref{F:submax-Riem}).} near non-flat {\em regular} points (Theorem \ref{T:transitive}); the set of all such points is open and dense in $M$ (Lemma \ref{L:dense}).
\end{itemize}
\item We recover all results in Table \ref{F:known-submax}; some sample new results are given in Table \ref{F:sample-submax}. Our complete classification when $G$ is (complex or split-real) {\em simple} is presented in Appendix \ref{App:Submax}.
\end{itemize}
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|} \hline
Geometry & Range & Model & Max & Submax \\ \hline\hline
\begin{tabular}{c} Sig.\ $(p,q)$ conformal geometry\\ in dim.\ $n = p+q$ \end{tabular} & $p,q \geq 2$ & $\text{SO}_{p+1,q+1} / P_1$ & $\binom{n+2}{2}$ & $\binom{n-1}{2} +6$ \\ \hline
\begin{tabular}{c} Systems of 2nd order ODE \\ in $m$ dependent variables \end{tabular} & $m \geq 2$ & $\text{SL}_{m+2}(\bbR) / P_{1,2}$ & $(m+2)^2-1$ & $m^2 + 5$ \\ \hline
\begin{tabular}{c} Generic rank $\rkg$ distributions \\ on $\half \rkg(\rkg+1)$-dim.\ manifolds \end{tabular} & $\rkg \geq 3$ & $\text{SO}_{\rkg,\rkg+1} / P_\rkg$ & $\binom{2\rkg+1}{2}$ & $\mycase{\frac{\rkg(3\rkg-7)}{2} + 10, & \rkg \geq 4; \\ 11, & \rkg = 3 }$ \\ \hline
Lagrangean contact structures & $\rkg \geq 3$ & $\text{SL}_{\rkg+1}(\bbR) / P_{1,\rkg}$ & $\rkg^2 + 2\rkg$ & $(\rkg-1)^2 + 4$ \\ \hline
Contact projective structures & $\rkg \geq 2$ & $\text{Sp}_{2\rkg}(\bbR) / P_1$ & $\rkg(2\rkg+1)$ & $\mycase{2\rkg^2 - 5\rkg + 8, & \rkg \geq 3;\\ 5, & \rkg = 2}$\\ \hline
Contact path geometries & $\rkg \geq 3$ & $\text{Sp}_{2\rkg}(\bbR) / P_{1,2}$ & $\rkg(2\rkg+1)$ & $2\rkg^2 - 5\rkg + 9$\\ \hline
\begin{tabular}{c} Exotic parabolic contact\\ structure of type $E_8$ \end{tabular} & - & $E_8 / P_8$ & 248 & 147\\ \hline
\end{tabular}
\caption{Sample new results of submaximal symmetry dimensions for parabolic geometries}
\label{F:sample-submax}
\end{table}
The study of the gap problem is much more subtle for general real forms and we do not attempt to complete the picture here. However, in the conformal case, we exhibit local models in all non-Riemannian and non-Lorentzian signatures which realize the upper bound coming from complexified considerations, so we establish $\fS$ for these cases as well. The problem for Riemannian and Lorentzian conformal structures has been recently settled by B. Doubrov and D. The \cite{DT-Weyl}.
For arbitrary parabolic geometries, \v{C}ap \& Neusser \cite{CN2009} gave a general algebraic strategy for finding upper bounds on $\fS$ using Kostant's version of the Bott--Borel--Weil theorem \cite{Kos1963}. However, the implementation of their strategy must be carried out on a case-by-case basis; moreover, their upper bounds are in general not sharp. (See also Remark \ref{RM:CN}.) For structures determined by a bracket-generating distribution (not necessarily underlying parabolic geometries), another approach based on an elaboration of Tanaka theory \cite{Tan1970, Tan1979} was proposed in the works \cite{Kru2011} and \cite{Kru2012} of the first author; the latter reference contains also a review of some results on submaximal symmetry dimensions.
The main idea behind our approach is to combine Tanaka theory with the \v{C}ap--Neusser approach based on Kostant theory. This yields a {\em uniform algebraic approach} to the gap problem which is rooted in the structure theory of \ss Lie algebras. In contrast, earlier (sharp) results were obtained using a variety of techniques, e.g.\ computation of the algebra of all differential invariants of a pseudogroup (scalar 2nd order ODE), Cartan's method of equivalence ($(2,3,5)$-distributions), classification of Lie algebras of contact vector fields in the plane (3rd order ODE), or studying integrability conditions for the equations characterizing symmetries (projective structures). Some of these relied heavily on the low-dimensional setup.
In Section \ref{S:background}, we review background from Tanaka theory, representation theory, and parabolic geometry. We discuss the Yamaguchi prolongation and rigidity theorems, Kostant's theorem, as well as correspondence and twistor space constructions. Given a subalgebra $\fa_0$ of the reductive part $\fg_0$ of $\fp$, we introduce in Definition \ref{D:g-prolong} a slight variant $\prn^\fg(\fg_-,\fa_0)$ of Tanaka's original prolongation algebra $\prn(\fg_-,\fa_0)$.
Section \ref{S:PR-analysis} is representation-theoretic and focuses on the following questions: Letting $\phi \in H^2(\fg_-,\fg)$ be {\bf nonzero}, $\fann(\phi) \subset \fg_0$ its annihilator, and $\fa^\phi := \prn^\fg(\fg_-,\fann(\phi))$,
\begin{itemize}
\item {\em What is the maximum of $\dim(\fa^\phi)$?}
\item {\em In the maximal case, how can one describe the Lie algebra structure of $\fa^\phi$?}
\item {\em What is the maximal height of the grading on $\fa^\phi$?}
\end{itemize}
Over $\bbC$, we show in Lemma \ref{L:Tanaka-lw} that $\dim(\fa^\phi)$ must be maximized on a lowest weight vector $\phi_0$ of some $\fg_0$-irreducible submodule $\bbV_\mu \subseteq H^2(\fg_-,\fg)$. We then prove a new Dynkin diagram recipe in Section \ref{S:Dynkin} which gives a way to understand the structure of $\fa^{\phi_0}$. For example, the $E_8 / P_8$ case (Example \ref{ex:E8P8} and \ref{ex:E8P8-2}) becomes a trivial exercise. When $\fg$ is {\em simple}, regularity forces the grading height $\tilde\nu$ of $\fa^\phi$ (for any $\phi \neq 0$) to be highly constrained: $0 \leq \tilde\nu \leq 2$ always -- see Section \ref{S:height}.
Section \ref{S:main} is the core technical part of the paper. Let $(\cG \stackrel{\pi}{\to} M, \omega)$ be a regular, normal parabolic geometry and let $u \in \cG$. Any $\xi \in \finf(\cG,\omega)$ is uniquely determined by its value at $u$, so the isomorphism $\omega_u : T_u \cG \to \fg$ embeds $\finf(\cG,\omega)$ into $\fg$ as a vector subspace. Its image $\ff(u)$ becomes a filtered Lie algebra by restriction of the canonical $P$-invariant filtration on $\fg$, and the associated-graded $\fs(u) := \gr(\ff(u))$ is a graded subalgebra of $\fg$. If $x = \pi(u)$ is a {\em regular point} (Definition \ref{D:reg-pt}), then we prove the crucial fact that $\fs(u) \subseteq \fa^{\Kh(u)}$ (Proposition \ref{P:key-bracket}), which establishes a bridge to Tanaka theory. Since the set of regular points is open and dense in $M$ (Lemma \ref{L:dense}), any neighbourhood of a non-flat point contains a non-flat regular point, and this leads to our upper bound $\fS \leq \fU$ (Theorem \ref{T:upper}). In complex or split-real cases, realizability of $\fU$ (or more precisely, $\fU_\mu$, when restricting curvature type $\im(\Kh) \subseteq \bbV_\mu \subseteq H^2_+(\fg_-,\fg)$) is then addressed: for almost all cases, a model can be constructed by deforming the Lie algebra structure on $\fa^{\phi_0}$ by $\phi_0$ (Theorem \ref{T:realize}). We then investigate exceptional cases in Section \ref{S:exceptions}.
Section \ref{S:specific} contains finer analysis of specific geometries, and in particular we exhibit some submaximally symmetric models in local coordinates. We mention a few highlights.
\begin{itemize}
\item {\em Conformal geometry:} In addition to investigating the gap problem in general signatures, we find the maximum conformal symmetry dimensions for each Petrov type in the 4-dimensional Lorentzian case. In doing so, we discovered that a Petrov type II metric with four (conformal) Killing vectors was not known in the literature. We exhibit such a metric in Section \ref{S:4d-Lor}.
\item {\em Systems of 2nd order ODE:} We discuss the Fels invariants \cite{Fels1995} and their connection to harmonic curvature. There are two (degenerate) branches: the projective branch and the Segr\'e branch.\footnote{Equations in the Segr\'e branch correspond to the {\em torsion-free} path geometries in the sense of Grossman \cite{Gro2000}.} Submaximally symmetric models arise in the latter. We exhibit such a model and illustrate the twistor correspondence by exhibiting the corresponding Segr\'e geometry.
\item {\em Projective structures:} We recover Egorov's result \cite{Ego1951} by our algebraic method.
\item {\em $(2,3,5)$-distributions:} We establish $\fS = 7$, and then investigate maximal symmetry dimensions for each root type of the binary quartic invariant.
\item {\em Generic rank $\rkg$ distributions on $\binom{\rkg+1}{2}$-manifolds:} We obtain $\fS$ in general, and give a submaximally symmetric $(3,6)$-distribution along with its 11 symmetry generators.
\end{itemize}
{\bf Conventions:} We assume throughout that $M$ is a connected manifold. When working with real and complex parabolic geometries, our results are formulated in the smooth and holomorphic categories, respectively. We use {\em left} cosets and {\em right} principal bundles. For a Lie group $G$, we identify its Lie algebra $\fg := \text{Lie}(G)$ with the {\em left}-invariant vector fields on $G$. We write $A_\rkg, B_\rkg, C_\rkg, D_\rkg, G_2, F_4, E_6, E_7, E_8$ for the complex simple Lie algebras (see Appendix \ref{App:rep-data}), {\bf or} any complex Lie groups having these as their Lie algebras. (In Section \ref{S:specific}, we abuse this notation further by letting it refer to real forms, and specify the precise real form as necessary.) Parabolic subalgebras will be denoted $\fp$, $\fq$, and corresponding parabolic subgroups are $P,Q$.
We always assume that $G$ acts on $G/P$ {\em infinitesimally effectively}, i.e. the kernel of the $G$-action on $G/P$ is at most discrete. Equivalently, simple ideals of $\fg$ are not contained in $\fg_0$. (This avoids situations like $G = G' \times G''$ and $P = P' \times G''$, where the $G''$-action is not visible on $G/P$.)
For complex \ss Lie algebras, we draw our Dynkin diagrams with open white circles. This is the same notation as the Satake diagram for the corresponding split real form, and serves to emphasize that all our results are the same in both settings. We use the ``minus lowest weight'' Dynkin diagram convention (see Section \ref{S:Kostant}) when referring to irreducible $\fg_0$-modules. If $\fg$ is simple, we use $\lambda_\fg$ to denote its highest weight (root).\\
{\bf Acknowledgements}: We are grateful for many helpful discussions with Boris Doubrov, Mike Eastwood, Katharina Neusser, Katja Sagerschnig, and Travis Willse. Much progress on this paper was made during the conference ``The Interaction of Geometry and Representation Theory: Exploring New Frontiers'' devoted to Mike Eastwood's 60th birthday, and held in Vienna in September 2012 at the Erwin Schr\"odinger Institute. Boris Doubrov gave some key insights during this conference which led to the proof of Lemma \ref{L:Tanaka-lw}.
The representation theory software {\tt LiE}, as well as Ian Anderson's {\tt DifferentialGeometry} package in {\tt Maple} were invaluable tools for facilitating the analysis in this paper.
B.K. was supported by the University of Troms\o{} while visiting the Australian National University, where this work was initiated. The hospitality of ANU is gratefully acknowledged. D.T. was partially supported by an ARC Research Fellowship.
\section{Background}
\label{S:background}
We review some necessary background from Tanaka theory \cite{Tan1970, Tan1979, Zel2009}, representation theory and parabolic geometry \cite{CS2009}, including Kostant's version of the Bott--Borel--Weil theorem \cite{Kos1963, BE1989, CS2009}.
\subsection{Tanaka theory in a nutshell}
The aim of Tanaka theory is to study the equivalence problem for geometric structures. The given geometric data is a manifold $M$ with a (vector) distribution $D \subseteq TM$ endowed possibly with some additional structure on it, e.g.\ a sub-Riemannian metric or a conformal structure. For many interesting geometric structures, one can canonically associate a Cartan geometry $(\cG \to M,\omega)$ of some type $(G,K)$. We give an outline of these ideas.
Iterating Lie brackets of sections of $D$, we form the weak derived flag
\[
D =: D^{-1} \subset D^{-2} \subset D^{-3} \subset \cdots, \qbox{where} \Gamma(D^{i-1}) := [\Gamma(D^i), \Gamma(D^{-1})].
\]
We assume that all $D^i$ have constant rank, and $D$ is bracket-generating in $TM$, i.e. $D^{-\nu} = TM$ for some minimal $\nu \geq 1$ (called the {\em depth}). By construction, the Lie bracket satisfies $\Gamma(D^i) \times \Gamma(D^j) \to \Gamma(D^{i+j})$, so $M$ is a {\em filtered manifold}. On the associated graded of the above filtration
\[
\fm(x) = \fg_-(x) = \bop_{i < 0} \fg_i(x), \qquad \fg_i(x) = D^i(x) / D^{i+1}(x),
\]
the Lie bracket induces a tensorial bracket on $\fm(x)$ called the Levi bracket, which turns $\fm(x)$ into a graded nilpotent Lie algebra (GNLA) called the {\em symbol algebra at $x$}. We further assume that $D$ is of {\em constant type}, i.e. there is a fixed GNLA $\fm = \fg_-$ such that $\fm(x) \cong \fm$, $\forall x \in M$.
\begin{example} \label{ex:Monge} Any $(2,3,5)$-distribution $D$ is locally specified by an underdetermined ODE $z' = F(x,y,y',y'',z)$ (Monge equation) satisfying $F_{y'' y''} \neq 0$. On a 5-manifold $M$ with coordinates $(x,y,p,q,z)$, $D = D_F$ is spanned by $\p_q$ and $\frac{d}{dx} := \p_x + p \p_y + q\p_p + F \p_z$. We have
\[
\l[\p_q,\frac{d}{dx}\r] = \p_p + F_q \p_z, \quad
\l[\p_q, \p_p+F_q\p_z\r] = F_{qq} \p_z, \quad
\l[\frac{d}{dx}, \p_p + F_q\p_z\r] = - \p_y + H \p_z,
\]
for some differential function $H$ of $F$. Assuming $F_{qq} \neq 0$, we get a filtration $D = D^{-1} \subset D^{-2} \subset D^{-3} = TM$. Its associated graded is isomorphic to $\fg_- = \fg_{-1} \op \fg_{-2} \op \fg_{-3}$ with relations
\begin{align} \label{E:G2P1-symbol}
[e_{-1}^1, e_{-1}^2] = e_{-2}, \quad [e_{-1}^1, e_{-2}] = e_{-3}^1, \quad [e_{-1}^2, e_{-2}] = e_{-3}^2.
\end{align}
\end{example}
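The bracket computations in the example above can be checked symbolically. The following sketch (an illustration, not part of the original text) uses Python with {\tt sympy}, representing a vector field on $\bbR^5$ by its tuple of coefficients in the coordinates $(x,y,p,q,z)$; the concrete choice $F = q^2$ (the classical Hilbert--Cartan equation, for which $F_{qq} = 2 \neq 0$) is an assumption made only for this check.

```python
import sympy as sp

# Coordinates (x, y, p, q, z) on the 5-manifold, as in the example.
x, y, p, q, z = sp.symbols('x y p q z')
coords = (x, y, p, q, z)

def apply_vf(V, f):
    """Apply the vector field with coefficient tuple V to a function f."""
    return sum(V[i] * sp.diff(f, coords[i]) for i in range(5))

def bracket(X, Y):
    """Lie bracket of vector fields: [X, Y]^i = X(Y^i) - Y(X^i)."""
    return tuple(sp.expand(apply_vf(X, Y[i]) - apply_vf(Y, X[i])) for i in range(5))

F = q**2  # illustrative choice: Hilbert-Cartan equation z' = (y'')^2

Dq = (0, 0, 0, 1, 0)   # the field  d/dq
DX = (1, p, q, 0, F)   # the field  d/dx + p d/dy + q d/dp + F d/dz

B1 = bracket(Dq, DX)   # expect  d/dp + F_q d/dz
B2 = bracket(Dq, B1)   # expect  F_qq d/dz, here nonzero since F_qq = 2
B3 = bracket(DX, B1)   # expect  -d/dy + H d/dz  for some function H
print(B1, B2, B3)
```

For this particular $F$ the function $H$ happens to vanish, and the three brackets reproduce the growth $D^{-1} \subset D^{-2} \subset D^{-3} = TM$ with symbol algebra \eqref{E:G2P1-symbol}.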
Since $D$ has constant type, consider the graded frame bundle $\cF_{gr}(M) \stackrel{\pi}{\to} M$ whose total space consists of all GNLA isomorphisms $u : \fm \to \fm(x)$, where $x = \pi(u)$. Indeed, $\cF_{gr}(M) \to M$ is a (right) principal bundle with structure group $\text{Aut}_{gr}(\fm)$, the group of graded automorphisms of $\fm$. If $D$ is endowed with additional structure, we can reduce to a principal subbundle $\cG_0 \to M$ with structure group $G_0 \subseteq \text{Aut}_{gr}(\fm)$. For example, in conformal geometry, $D = TM$, and $\cG_0 \to M$ is the conformal frame bundle with $G_0 = \CO(\fm_{-1})$; in Example \ref{ex:Monge}, $G_0 \cong \text{Aut}_{gr}(\fm) \cong \text{GL}(\fg_{-1}) \cong \text{GL}_2$. From here, one constructs the {\em geometric prolongation} of the given structure, namely a tower of adapted bundles $...\to \cG_2 \to \cG_1 \to \cG_0 \to M$. We refer to \cite{Zel2009} for details on this procedure. This geometric prolongation is controlled by an {\em algebraic prolongation} which we describe below.
At the Lie algebra level, $\fg_0 \subseteq \fder_{gr}(\fm)$, where the latter is the algebra of graded derivations of $\fm$. Given $(\fm,\fg_0)$, the axioms for Tanaka's algebraic prolongation $\prn(\fm,\fg_0) := \bop_{i \in \bbZ} \fg_i(\fm,\fg_0)$ are:
\begin{align}
& \fg_{\leq 0}(\fm,\fg_0) = \fm \op \fg_0; \tag{T.1} \label{T1}\\
& \mbox{If $X \in \fg_i(\fm,\fg_0)$ for $i \geq 0$ satisfies $[X,\fg_{-1}] = 0$, then $X = 0$;} \tag{T.2} \label{T2}\\
& \mbox{$\prn(\fm,\fg_0)$ is the maximal graded Lie algebra satisfying \eqref{T1} and \eqref{T2}.} \tag{T.3} \label{T3}
\end{align}
Write $\prn(\fm) := \prn(\fm,\fder_{gr}(\fm))$. Up to isomorphism, there is a unique graded Lie algebra satisfying \eqref{T1}--\eqref{T3}. In fact, Tanaka gives an explicit inductive realization of $\fg := \prn(\fm,\fg_0)$; for $i > 0$,
\begin{align}
\fg_i = \l\{ f \in \bop_{j < 0} \fg_j^* \ot \fg_{j+i} \mid f([v_1,v_2]) = [f(v_1),v_2] + [v_1,f(v_2)], \,\, \forall v_1,v_2 \in \fm \r\}.
\label{E:alg-pr-explicit}
\end{align}
The brackets on $\fg$ are: (i) The given brackets on $\fm$ and $\fg_0$; (ii) If $f \in \fg_i$, $i \geq 0$, and $v \in \fm$, then define $[f,v] = -[v,f] := f(v)$; (iii) Brackets on the non-negative part are defined inductively: If $f_1 \in \fg_i$ and $f_2 \in \fg_j$, for $i,j \geq 0$, then define $[f_1,f_2] \in \fg_{i+j}$ by $ [f_1,f_2](v) := [f_1(v),f_2] + [f_1,f_2(v)]$, $v \in \fm$.
(Note $f_1(v) \in \fg_{<i}$ and $f_2(v) \in \fg_{<j}$, so the brackets on the right side are known by induction.)
Verification of the Jacobi identity is left to the interested reader.
\begin{remark} \label{RM:Hom}
Since $\fg_-$ is generated by $\fg_{-1}$, any $X \in \fg_i$, $i \geq 0$, is determined by its action on $\fg_{-1}$. By \eqref{T2} the Lie bracket on $\fg$ induces $\fg_i \hookrightarrow \fg_{-1}^* \ot \fg_{i-1}$, so $\fg_i \hookrightarrow \Hom(\ot^i \fg_{-1}, \fg_0) \hookrightarrow \Hom(\ot^{i+1} \fg_{-1}, \fg_{-1})$.
\end{remark}
Note that by \eqref{T2}, if $\fg_r = 0$ for some $r \geq 0$, then $\fg_i = 0$ for all $i > r$. This case of finite termination is particularly important.
\begin{thm}[Tanaka] \label{T:Tanaka} Let $\cG_0 \to M$ be a structure of constant type $(\fm,\fg_0)$ and $\fg = \prn(\fm,\fg_0)$. Suppose $r \geq 0$ is minimal such that $\fg_{r+1} = 0$. Then there exists a canonical frame on the $r$-th geometric prolongation $\cG_r$ of $\cG_0$. Moreover, any such structure can be described as a Cartan geometry of some type $(G,K)$, where $\fg = \text{Lie}(G)$ and $\fg_{\geq 0} =: \fk = \text{Lie}(K)$.
\end{thm}
We recall some definitions from Cartan geometry:
\begin{defn} \label{D:Cartan} A Cartan geometry $(\cG \to M, \omega)$ of type $(G,K)$ (or a ``$G/K$ geometry'') is a principal $K$-bundle $\cG \to M$ endowed with a Cartan connection $\omega \in \Omega^1(\cG,\fg)$ with defining properties: (i) $\omega$ is $K$-equivariant; (ii) $\omega(\zeta_Y) = Y$ for any $Y \in \fk$, where $\zeta_Y(u) = \l.\frac{d}{dt}\r|_{t=0} u \cdot \exp(tY)$; (iii) $\omega_u : T_u \cG \to \fg$ is a linear isomorphism for any $u \in \cG$. The curvature of $\omega$ is $d\omega + \frac{1}{2} [\omega,\omega] \in \Omega^2(\cG,\fg)$. Evaluation on $\omega^{-1}(X)$, $X \in \fg$, yields the curvature function $\kappa : \cG \to \bigwedge^2 \fg^* \ot \fg$, which is horizontal, so $\kappa : \cG \to \bigwedge^2 (\fg/\fk)^* \ot \fg$. The infinitesimal symmetries are $\finf(\cG,\omega) = \{ \xi \in \fX(\cG)^K \mid \cL_\xi \omega = 0 \}$.
\end{defn}
While calculating the Tanaka prolongation from given data $(\fm,\fg_0)$ is algorithmic, a naive application of \eqref{E:alg-pr-explicit} generally leads to a computationally intensive exercise in linear algebra. Moreover, even if this task is completed, there still remains the general problem of understanding the structure of the resulting Tanaka algebra. However, in the context of parabolic geometries, these aforementioned problems are resolved by Yamaguchi's prolongation theorem \cite{Yam1993}; see Theorem \ref{T:Y-pr-thm}.
A general situation in which the computation of the Tanaka prolongation is much simpler is when one already knows $\fg = \prn(\fm,\fg_0)$, and one is interested in $\fa = \prn(\fm,\fa_0)$ for some subalgebra $\fa_0 \subseteq \fg_0$. By Remark \ref{RM:Hom}, for $k > 0$, we have $\fa_k \hookrightarrow \fg_{-1}^* \ot \fa_{k-1} \hookrightarrow \fg_{-1}^* \ot \fg_{k-1}$, and $\fa_k \hookrightarrow \Hom(\ot^k \fg_{-1},\fa_0)$. The inclusion $\fa_0 \hookrightarrow \fg_0$ induces inclusions $\fa_k \hookrightarrow\fg_k$. Hence,
\begin{lemma} \label{L:T-subalg} Suppose $\fm = \fg_-$ is generated by $\fg_{-1}$. If $\fa_0 \subseteq \fg_0$ is any subalgebra, then $\fa := \prn(\fm,\fa_0) \hookrightarrow \fg := \prn(\fm,\fg_0)$. Indeed for $k > 0$,
\begin{align*}
\fa_k = \{ X \in \fg_k \mid [X,\fg_{-1}] \subset \fa_{k-1} \} = \{ X \in \fg_k \mid \text{ad}_{\fg_{-1}}^{k}(X) \subset \fa_0 \}.
\end{align*}
More precisely, $\fa_k = \{ X \in \fg_k \mid \text{ad}_{u_1} \circ \cdots \circ \text{ad}_{u_k}(X) \in \fa_0,\,\, \forall u_i \in \fg_{-1} \}$.
\end{lemma}
\subsection{Parabolic subalgebras} \label{S:parabolic}
Given a real or complex \ss Lie algebra $\fg$, a {\em $\bbZ$-grading} with {\em depth} $\nu \in \bbZ_{\geq 0}$ (or a {\em $\nu$-grading}) is a vector space decomposition
\begin{align} \label{E:g-graded}
\fg = \overbrace{\fg_{-\nu} \op ... \op \fg_{-1}}^{\fg_-} \op\, \overbrace{\fg_0 \op \fg_1 \op ... \op \fg_\nu}^{\fp},
\end{align}
with: (i) $\fg_{\pm \nu} \neq 0$; (ii) $[\fg_i,\fg_j] \subseteq \fg_{i+j}$, $\forall i,j$; (iii) $[\fg_j, \fg_{-1}] = \fg_{j-1}$ for $j < 0$, i.e. $\fg_{-1}$ is {\em bracket-generating} in $\fg_-$. The subalgebra $\fg_0$ is reductive with $\fg_0 = \fz(\fg_0) \op \fg_0^{ss}$, and each $\fg_j$ is a $\fg_0$-module. (Here, $\fz(\fg_0)$ is the centralizer of $\fg_0$ in $\fg$, and $\fg_0^{ss}$ is the \ss part of $\fg_0$.) Defining $\fg^i := \bop_{j \geq i} \fg_j$, we have $[\fg^i, \fg^j] \subseteq \fg^{i+j}$, so $\fg$ is canonically a filtered Lie algebra. A subalgebra $\fp \subset \fg$ is {\em parabolic} if $\fp = \fg^0 = \fg_{\geq 0}$ for some $\nu$-grading of $\fg$. (We call $\fp^{\opn} = \fg_{\leq 0}$ the {\em opposite} parabolic.) Each filtrand $\fg^i$ is $\fp$-invariant, and the quotient $\fg / \fp$ is naturally filtered. The Killing form $B(X,Y) = \tr(\text{ad}_X \circ \text{ad}_Y)$ is compatible with the grading and filtration on $\fg$, so induces the dualities $\fg_{-i} \cong (\fg_i)^*$ (as $\fg_0$-modules) and $\fg^i = (\fg^{-i+1})^\perp$ (annihilator with respect to $B$).
Let $G$ be a \ss Lie group with Lie algebra $\fg$ and parabolic subalgebra $\fp \subset \fg$. A subgroup $P \subset G$ is {\em parabolic} if it lies between the normalizer $N_G(\fp)$ and its connected component of the identity. Under the adjoint action, $P$ preserves the filtration on $\fg$. Define $G_0 \subset P$ to be the subgroup which preserves the grading on $\fg$; its Lie algebra is $\fg_0$. There is a decomposition $P = G_0 \ltimes P_+$ for some closed normal subgroup $P_+ \subset P$ with Lie algebra $\fp_+ := \fg_+$.
We focus now on the complex case and recall some representation theory. Let $\fb \subset \fg$ be a Borel subalgebra. This is equivalent to a choice of Cartan subalgebra $\fh \subset \fb$, a root system $\Delta \subset \fh^*$, and a basis of simple roots $\Delta^0 = \{ \alpha_1,..., \alpha_\rkg \}$, where $\rkg = \dim(\fh) = \rnk(\fg)$. (We use the Bourbaki ordering -- see Appendix \ref{App:rep-data}.) For any $\alpha \in \Delta$, write $\alpha = \sum_{i=1}^\rkg m_i(\alpha) \alpha_i$, where $m_i(\alpha) \in \bbZ$ are all non-negative or all non-positive, and $\fg_\alpha$ is the corresponding root space, with $\dim(\fg_\alpha) = 1$. If $\alpha, \beta \in \Delta$, we have: (i) $[\fg_\alpha,\fg_\beta] = \fg_{\alpha+\beta}$ if $\alpha+\beta \in \Delta$; (ii) $[\fg_\alpha,\fg_\beta] = 0$ if $\alpha + \beta \not\in\Delta$ and $\beta \neq -\alpha$; (iii) $[\fg_\alpha,\fg_{-\alpha}] \subset \fh$. If a subspace $\fk \subset \fg$ is a direct sum of root spaces and possibly some subspace of $\fh$, let $\Delta(\fk)$ denote the corresponding collection of roots. Thus, $\fb = \fh \op \bop_{\alpha \in \Delta^+} \fg_\alpha$ and $\Delta(\fb) = \Delta^+$. If $\fg$ is simple, there is a unique highest root, which we denote by $\lambda_\fg$.
The Killing form $B$ induces a symmetric pairing $\langle \cdot, \cdot \rangle$ on $\fh^*$. This determines the Cartan matrix $c_{ij} = \langle \alpha_i, \alpha_j^\vee \rangle \in \bbZ$, where $\alpha^\vee = \frac{2\alpha}{\langle \alpha, \alpha \rangle}$. We have $c_{ij}=2$ iff $i = j$; otherwise, $-3 \leq c_{ij} \leq 0$. The Dynkin diagram $\cD(\fg)$ of $\fg$ is the graph whose $i$-th node corresponds to $\alpha_i$, and nodes $i,j$ are connected by an edge of multiplicity $c_{ij} c_{ji} \leq 3$ (or disconnected if $c_{ij} c_{ji} = 0$). Moreover, if $c_{ij} c_{ji} = 2$ or $3$, this edge is directed towards the shorter root (i.e. if $c_{ij} = -1$, then it is directed from $j$ to $i$).
Index sets $I \subset \{ 1, ..., \rkg \}$ correspond to (standard) parabolic subalgebras $\fp \supset \fb$:
\begin{align*}
\fp \,\,\mapsto\,\, I_\fp := \{ i \mid \fg_{-\alpha_i} \not\subseteq \fp \}, \qquad\qquad
I \,\,\mapsto\,\, \fp_I := \fb \op \bop_{\substack{\alpha \in \Delta^+\\ m_i(\alpha) = 0,\, \forall i \in I}} \fg_{-\alpha}.
\end{align*}
We encode $\fp$ on $\cD(\fg)$ by putting a cross over all nodes corresponding to $I_\fp$. This marked Dynkin diagram will be denoted $\cD(\fg,\fp)$. Thus, $I_\fb = \{ 1, ..., \rkg \}$ and all nodes are crossed.
Let $\{ Z_1,...,Z_\rkg \}$ be the basis of $\fh$ dual to $\Delta^0 \subset \fh^*$. Fixing a parabolic $\fp$ determines the {\em grading element} $Z = Z_I = \sum_{i \in I_\fp} Z_i$, which induces a $\bbZ$-grading \eqref{E:g-graded} via $\fg_j = \{ x \in \fg \mid [Z,x] = j x \}$, and $\fz(\fg_0) \subset \fg_0$ is spanned by $\{ Z_i \}_{i \in I_\fp}$. The grading has depth $\nu = \max\{ Z(\lambda_{\fg'}) \mid \fg' \subset \fg \mbox{ simple ideal} \}$.
\begin{framed}
\begin{recipe} \label{R:dim}
Deleting the $I_\fp$ nodes from $\cD(\fg,\fp)$ yields $\cD(\fg_0^{ss})$, and $\dim(\fz(\fg_0)) = |I_\fp|$. Also, $\dim(\fg_-) = \half(\dim(\fg) - \dim(\fg_0))$ and $\dim(\fp) = \half(\dim(\fg) + \dim(\fg_0))$.
\end{recipe}
\end{framed}
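As a quick illustration of Recipe \ref{R:dim} (a routine check, using $\dim(\text{Lie}(G_2)) = 14$): for $\Gdd{xw}{}$, deleting the crossed node leaves a single uncrossed node, so $\fg_0^{ss} \cong \fsl_2(\bbC)$ and $\dim(\fg_0) = |I_\fp| + \dim(\fg_0^{ss}) = 1 + 3 = 4$. Hence
\[
\dim(\fg_-) = \half(14 - 4) = 5, \qquad \dim(\fp) = \half(14 + 4) = 9,
\]
consistent with the five-dimensional underlying geometry of $(2,3,5)$-distributions from Example \ref{ex:Monge}.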
Given a weight $\lambda \in \fh^*$, we refer to $Z(\lambda)$ as the {\em homogeneity} of $\lambda$.
Let $\{ \lambda_1,..., \lambda_\rkg \} \subset \fh^*$ denote the fundamental weights of $\fg$, defined by $\langle \lambda_i, \alpha_j^\vee \rangle = \delta_{ij}$. Encode $\lambda = \sum_{i=1}^\rkg r_i(\lambda) \lambda_i$, where $r_i(\lambda) := \langle \lambda, \alpha_i^\vee \rangle$, on a $\cD(\fg)$ or $\cD(\fg,\fp)$ by inscribing $r_i(\lambda)$ over the $i$-th node. If $r_i(\lambda)$ is nonnegative [integral] for all $i$, then $\lambda$ is $\fg$-dominant [integral]. If these statements hold only for $i \in \{ 1,..., \rkg \} \backslash I_\fp$, then $\lambda$ is $\fp$-dominant [integral]. The Cartan matrix is the transition matrix between $\{ \alpha_i \}_{i=1}^\rkg$ and $\{ \lambda_i \}_{i=1}^\rkg$,
\begin{align} \label{E:lambda}
\lambda = \sum_{i=1}^\rkg m_i \alpha_i = \sum_{i=1}^\rkg r_i \lambda_i \quad\Rightarrow\quad \sum_{i=1}^\rkg m_i c_{ij} = r_j.
\end{align}
Some distinguished weights are $\rho = \sum_{i=1}^\rkg \lambda_i$ and $\rho^\fp = \sum_{i \in I_\fp} \lambda_i$, as well as the highest weight (root) $\lambda_\fg$ of any simple Lie algebra $\fg$ (see Appendix \ref{App:rep-data}).
\begin{example} $G_2$ has $\lambda_\fg = \lambda_2 = 3\alpha_1 + 2\alpha_2$. Also, $\rho = \lambda_1 + \lambda_2$. If $I = \{ 1 \}$, then $Z = Z_1$, so $\lambda_\fg$ has homogeneity $+3$. Also, $\fp = \fp_I = \fg_{\geq 0} = \fb \op \fg_{-\alpha_2}$, and $\rho^\fp = \lambda_1$. We write
\[
\rho = \Gdd{ww}{1,1}, \qquad \lambda_\fg = \Gdd{ww}{0,1}, \qquad \fp = \Gdd{xw}{}, \qquad \rho^\fp = \Gdd{xw}{1,0}.
\]
\end{example}
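As a consistency check of \eqref{E:lambda} (not needed later): the Cartan matrix of $G_2$ in the Bourbaki ordering is $\begin{pmatrix} 2 & -1 \\ -3 & 2 \end{pmatrix}$, so for $\lambda_\fg = 3\alpha_1 + 2\alpha_2$,
\[
r_1 = 3 c_{11} + 2 c_{21} = 6 - 6 = 0, \qquad r_2 = 3 c_{12} + 2 c_{22} = -3 + 4 = 1,
\]
which confirms $\lambda_\fg = \lambda_2$ in the example above.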
Given any $\alpha \in \Delta$, the reflection $\sr_\alpha \in O(\fh^*)$ is defined by $\sr_\alpha(\beta) = \beta - \langle \beta, \alpha^\vee \rangle \alpha$. The Weyl group $W \subset O(\fh^*)$ is the group generated by all simple reflections $\sr_i := \sr_{\alpha_i}$. Representing $w \in W$ as a product of simple reflections, let $|w|$ denote its length, i.e. the minimal number of simple reflections occurring in such a representation. We will use the notation $(ij) := \sr_i \sr_j$, which means that $\sr_j$ acts first, followed by $\sr_i$. There is a simple Dynkin diagram recipe \cite{BE1989} for the (standard) $W$-action of a simple reflection on weights:
\begin{framed}
\begin{recipe} \label{R:reflection} To compute $\sr_i(\lambda)$, add $c = \langle \lambda, \alpha_i^\vee \rangle$ (i.e. $i$-th coefficient) to adjacent coefficients, with multiplicity if there is a multiple edge directed towards the adjacent node. Then replace $c$ by $-c$.
\end{recipe}
\end{framed}
We will also use the affine action of $W$, which is defined by
\begin{align} \label{E:aff-W}
w \cdot \lambda := w(\lambda + \rho) - \rho.
\end{align}
Defining the inversion set $\Phi_w = w(\Delta^-) \cap \Delta^+$, we have \cite[Proposition 3.2.14]{CS2009}:
\begin{align} \label{E:Phi-w}
|\Phi_w| = |w|, \qquad \sum_{\alpha \in \Phi_w} \alpha = -w\cdot 0.
\end{align}
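For instance, take $w = (12) = \sr_1 \sr_2$ in the Weyl group of $G_2$ (a direct verification of \eqref{E:Phi-w}): one computes $\Phi_w = \{ \alpha_1, \sr_1(\alpha_2) \} = \{ \alpha_1, 3\alpha_1 + \alpha_2 \}$, so $|\Phi_w| = 2 = |w|$, while
\[
\sum_{\alpha \in \Phi_w} \alpha = 4\alpha_1 + \alpha_2 = \rho - w(\rho) = -w \cdot 0.
\]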
The {\em Hasse diagram} is $W^\fp = \{ w \in W \mid \Phi_w \subset \Delta(\fg_+) \}$ with length $r$ elements $W^\fp(r)$. (Equivalently, $w \in W^\fp$ sends $\fg$-dominant weights to $\fp$-dominant weights.) By a lemma of Kostant \cite{BE1989},
\begin{framed}
\begin{recipe} \label{R:Hasse} $W^\fp \stackrel{1:1}{\longleftrightarrow}$ $W$-orbit through $\rho^\fp$ under the right action $(\rho^\fp,w) \mapsto w^{-1} \rho^\fp$. In particular, $w = (jk) \in W^\fp(2)$ iff $j \in I_\fp$ and $k \in (I_\fp \cup \cN(j)) \backslash \{ j \}$, where $\cN(j) = \{ i \mid c_{ij} \leq -1 \}$.
\end{recipe}
\end{framed}
The {\em right} action forces a {\em reversal} of the order of reflections from their order of application above.
\begin{example}[$G_2 / P_1$] \label{EX:G2P1-1} For $\Gdd{xw}{}$, $\fz(\fg_0) = \tspan\{ Z_1 \}$, $\fg_0^{ss} \cong \fsl_2(\bbC)$, and $W^\fp(2) = \{ (12) \}$ since
\[
\rho^\fp = \Gdd{xw}{1,0} \stackrel{\sr_1}{\lra} \Gdd{xw}{-1,1} \stackrel{\sr_2}{\lra} \Gdd{xw}{2,-1}.
\]
(This also follows from the second statement in Recipe \ref{R:Hasse}.) Using $w=(12)$ and $\lambda_\fg = \lambda_2 = \Gdd{xw}{0,1}$,
\[
w \cdot \lambda_\fg = (12) \cdot \l(\Gdd{xw}{0,1}\r) = (12) \l(\Gdd{xw}{1,2}\r) - \rho = (1) \l(\Gdd{xw}{7,-2}\r) - \rho= \Gdd{xw}{-8,4},
\]
and since $Z = Z_1$, then $-w\cdot \lambda_\fg = 8\lambda_1 - 4\lambda_2 = +4\alpha_1$ has homogeneity $+4$.
\end{example}
\subsection{Infinitesimal flag structures and prolongation}
Let $(\cG \to M, \omega)$ be a $G/P$ geometry with curvature function $\kappa : \cG \to \bigwedge^2(\fg/\fp)^* \ot \fg$. Since $\omega$ trivializes $T\cG \cong \cG \times \fg$, the filtration $\{ \fg^i \}_{i=-\nu}^\nu$ on $\fg$ induces a ($P$-invariant) filtration $\{ T^i \cG \}_{i=-\nu}^\nu$ of $T\cG$. This projects to a filtration $\{ T^i M \}_{i=-\nu}^{-1}$ of $TM = \cG \times_P (\fg / \fp)$, and defining the principal $G_0$-bundle $\cG_0 := \cG / P_+ \to M$, we have $\gr(TM) \cong \cG_0 \times_{G_0} \fg_-$. Then $M$ becomes a filtered manifold iff $\kappa(\fg_i, \fg_j) \subset \fg^{i+j}$ for all $i, j < 0$ \cite[Corollary 3.1.8]{CS2009}. In this case, the parabolic geometry is {\em regular} if the algebraic bracket on $\gr(TM)$ induced from $\fg_-$ is the same as the Levi bracket. Indeed, regularity is equivalent to $\kappa(\fg_i, \fg_j) \subset \fg^{i+j+1}$. In the latter case, we get a {\em regular infinitesimal flag structure} of type $(G,P)$ on $M$, i.e. a filtered manifold $M$ generated by $D = T^{-1} M$ together with a reduction of structure group of the frame bundle of $\gr(TM)$ to $\cG_0$ corresponding to $\text{Ad}: G_0 \to \text{Aut}_{gr}(\fg_-)$.
Conversely, a regular infinitesimal flag structure of type $(G,P)$ determines a regular parabolic geometry. There is an algebraic statement underlying this fact and motivated by the following question: Given $(\fg,\fp)$ and the resulting $\bbZ$-grading \eqref{E:g-graded}, what are $\prn(\fg_-,\fg_0)$ or $\prn(\fg_-)$?\footnote{In the \ss case, any simple ideal $\fg' \subset \fg$ contained in $\fg_0$ satisfies $[\fg',\fg_{-1}] = 0$. This violates Tanaka's axiom \eqref{T2}, so by convention, we exclude this case from all our considerations in this article.}
\begin{thm}[Yamaguchi's prolongation theorem \cite{Yam1993}] \label{T:Y-pr-thm} Let $\fg = \bop_{k \in \bbZ} \fg_k$ be a simple graded Lie algebra over $\bbC$ such that $\fg_{-1}$ generates $\fg_-$. Then $\fg \cong \prn(\fg_-)$ except for:
\begin{enumerate}[(a)]
\item 1-gradings: \quad $A_\rkg / P_k, \quad B_\rkg / P_1, \quad C_\rkg / P_\rkg, \quad
D_\rkg / P_1, \quad D_\rkg / P_\rkg,\quad E_6 / P_6, \quad E_7 / P_7$.
\item contact gradings, i.e. 2-gradings with $\dim(\fg_{-2}) = 1$, and non-degenerate bracket $\bigwedge^2 \fg_{-1} \to \fg_{-2}:$
\[
A_\rkg / P_{1,\rkg}, \quad B_\rkg / P_2, \quad C_\rkg / P_1, \quad D_\rkg / P_2, \quad G_2 / P_2, \quad F_4 / P_1, \quad E_6 / P_2, \quad E_7 / P_1, \quad E_8 / P_8.
\]
\item $(\fg,\fp) \cong (A_\rkg, P_{1, i})$ with $\rkg \geq 3$ and $i \neq 1, \rkg$, or $(C_\rkg, P_{1,\rkg})$ with $\rkg \geq 2$.
\end{enumerate}
Moreover, we always have $\fg \cong \prn(\fg_-,\fg_0)$ except when $(\fg,\fp) \cong (A_\rkg,P_1)$ or $(C_\rkg,P_1)$.
\end{thm}
If $\prn(\fg_-,\fg_0) \cong \fg$, then there is a regular parabolic geometry (e.g.\ the {\em normal} one -- see next section) which is completely determined by its underlying regular infinitesimal flag structure.
\begin{example} In Example \ref{ex:Monge}, the symbol algebra $\fg_-$ is isomorphic to the $\fg_-$ in \eqref{E:g-graded} arising from $\fp_1 \subset \text{Lie}(G_2) =: \fg$. Namely, we can choose root vectors such that
\[
e_{-1}^1 = e_{-\alpha_1}, \quad e_{-1}^2 = e_{-\alpha_1 - \alpha_2}, \quad e_{-2} = e_{-2\alpha_1 - \alpha_2}, \quad
e_{-3}^1 = e_{-3\alpha_1 - \alpha_2}, \quad e_{-3}^2 = e_{-3\alpha_1 - 2\alpha_2}
\]
satisfy the bracket relations \eqref{E:G2P1-symbol}. By Theorem \ref{T:Y-pr-thm}, $\prn(\fg_-) \cong \fg$, and so by Theorem \ref{T:Tanaka}, $(2,3,5)$-distributions can be described by a Cartan geometry of type $(G_2, P_1)$.
\end{example}
Generally, geometries with $\prn(\fg_-) \cong \fg$ have as underlying structure a bracket-generating distribution. For 1-gradings, the filtration on $M$ is trivial, so the geometry is determined by a reduction of structure group alone. These include conformal structures $(B_\rkg / P_1, D_\rkg / P_1)$, Segr\'e (almost Grassmannian) structures $(A_\rkg / P_k$ for $k \neq 1, \rkg)$, and almost spinorial structures $(D_\rkg / P_\rkg)$. Parabolic geometries with contact gradings are determined by a non-trivial filtration as well as a reduction of structure group, e.g.\ CR and Lagrangean contact $(A_\rkg / P_{1, \rkg})$, and Lie contact structures $(B_\rkg / P_2, D_\rkg /P_2)$. In the two exceptional cases, $A_\rkg / P_1$ and $C_\rkg / P_1$, $\prn(\fg_-,\fg_0)$ is infinite-dimensional. Algebraically, one needs to appropriately constrain the degree one component $\prn_1(\fg_-,\fg_0)$ to $\fg_1$ through additional structure, yielding projective and contact projective structures, respectively. We refer to the regular infinitesimal flag structure, or in the exceptional cases the reduction of $\prn_1(\fg_-,\fg_0)$ to $\fg_1$, as the {\em underlying structure} for a regular parabolic geometry.
Motivated by Lemma \ref{L:T-subalg} and Theorem \ref{T:Y-pr-thm}, we define a variant of Tanaka prolongation:
\begin{defn} \label{D:g-prolong} Let $\fg$ be a $\bbZ$-graded Lie algebra, and $\fa_0 \subset \fg_0$ a subalgebra. Define the graded subalgebra $\fa \subset \fg$ by: (i) $\fa_{\leq 0} := \fg_{\leq 0}$; (ii) $\fa_k = \{ X \in \fg_k \mid [X,\fg_{-1}] \subset \fa_{k-1} \}$ for $k > 0$. We will denote $\fa = \bop_k \fa_k$ by $\prn^\fg(\fg_-,\fa_0)$. (In particular, $\prn^\fg(\fg_-,\fg_0) = \fg$.)
\end{defn}
When $\fg$ is simple and $\fg/\fp$ is not $A_\rkg / P_1$ or $C_\rkg / P_1$, then $\prn^\fg(\fg_-,\fa_0)$ is the same as $\prn(\fg_-,\fa_0)$.
\subsection{Regular, normal parabolic geometries}
\label{S:reg-nor}
There is an equivalence of categories between regular, normal parabolic geometries and underlying structures. We articulate normality below.
Using the $P$-equivariant isomorphism $(\fg / \fp)^* \cong \fp_+$, consider the chain spaces $\bigwedge^k \fp_+ \ot \fg$. The ($P$-equivariant) Kostant codifferential $\p^*$, which is minus the Lie algebra homology differential, turns these chain spaces into a complex. A regular parabolic geometry is {\em normal} if the curvature function $\kappa : \cG \to \bigwedge^2 (\fg / \fp)^* \ot \fg$ satisfies $\p^* \kappa = 0$. While $\kappa$ is a complete obstruction to local flatness, it is a rather complicated object. But for {\em normal} geometries, since $(\p^*)^2 = 0$, we can define {\em harmonic curvature} $\Kh : \cG \to \ker(\p^*) / \im(\p^*)$, which is a much simpler object and which is still a complete obstruction to local flatness. (As such, we say that $x \in M$ is a {\em non-flat point} if $\Kh(u) \neq 0$ for some (any) $u \in \pi^{-1}(x)$.) While $\Kh$ is $P$-equivariant, $P_+$ acts trivially on $\ker(\p^*) / \im(\p^*)$, so $\Kh$ descends to a $G_0$-equivariant function on $\cG_0 = \cG / P_+$. As $G_0$-modules $(\fg/\fp)^* \cong \fg_-^* \cong \fp_+$, so consider the ($G_0$-equivariant) Lie algebra cohomology differential $\p$ acting on the co-chain spaces $\bigwedge^k (\fg_-)^* \ot \fg$, and the {\em Kostant Laplacian} $\Box := \p \p^* + \p^* \p$. By a lemma of Kostant, we have as $G_0$-modules
\[
\bigwedge{\!}^k (\fg_-)^* \ot \fg = \lefteqn{\overbrace{\phantom{\im(\partial^*) \op \ker(\Box)}}^{\ker(\partial^*)}}\im(\partial^*) \op \underbrace{\ker(\Box) \op \im(\partial)}_{\ker(\partial)}, \qquad
\ker(\Box) \cong \frac{\ker(\p^*)}{\im(\p^*)} \cong \frac{\ker(\p)}{\im(\p)} =: H^k(\fg_-,\fg).
\]
By $G_0$-equivariancy, $\Kh : \cG_0 \to H^2(\fg_-,\fg)$ maps fibres of $\cG_0 \to M$ to $G_0$-orbits in $H^2(\fg_-,\fg)$. Kostant's theorem (Section \ref{S:Kostant}) gives a complete description of the $G_0$-module structure of $H^2(\fg_-,\fg)$.
The grading element $Z$ acts on $H^2(\fg_-,\fg) \cong \ker(\Box) \subset \bigwedge^2 \fg_-^* \ot \fg$. Let $H^2_+(\fg_-,\fg)$ denote the submodules with positive homogeneity. Regularity is equivalent to $\im(\Kh) \subseteq H^2_+(\fg_-,\fg)$ \cite[Theorem 3.1.12]{CS2009}. This has the following important corollary which was mentioned in \cite[Section 2.5]{Cap2005b}:
\begin{prop} \label{P:loc-flat}
Let $(\cG \to M, \omega)$ be a regular, normal parabolic geometry of type $(G,P)$. If $\dim(\finf(\cG,\omega)) = \dim(\fg)$, then the geometry is locally flat.
\end{prop}
\begin{proof}
Given $0 \neq \phi \in H^2_+(\fg_-,\fg)$, the grading element $Z \in \fz(\fg_0)$ satisfies $Z \not\in \fann(\phi) \subsetneq \fg_0$. Let $u \in \cG$. We use two observations (see Section \ref{S:CNK}): (i) $\ff(u) := \im(\omega_u|_{\finf(\cG,\omega)}) \subset \fg$ is a filtered Lie algebra with $\fs(u) := \gr(\ff(u))$ a graded subalgebra of $\fg$; and (ii) $\fs_0(u) \subseteq \fann(\Kh(u))$. Hence, $\dim(\finf(\cG,\omega)) = \dim(\fs(u)) \leq \dim(\fg)$. If equality holds, then $Z \in \fs_0(u) \subseteq \fann(\Kh(u))$. Since $\Kh(u) \in H^2_+(\fg_-,\fg)$, this forces $\Kh(u) = 0$.
\end{proof}
\subsection{Kostant's theorem}
\label{S:Kostant}
Given $(\fg,\fp)$ and an irreducible $\fg$-module $\bbU$, we define $\p, \p^*, \Box$, and $H^r(\fg_-,\bbU)$ analogous to Section \ref{S:reg-nor}. Kostant's version of the Bott--Borel--Weil theorem \cite{Kos1963}, \cite{BE1989}, \cite{CS2009} efficiently computes $H^r(\fg_-,\bbU)$ as $\fg_0$-modules. Our main interest will be the $r=2$ case.
\begin{framed}
{\bf Notation:} Let $-\mu$ be a $\fp$-dominant weight. Let $\bbV_\mu$ be the irreducible $\fg_0$-module with {\em lowest} weight $\mu$. Denote this by the Dynkin diagram notation for $-\mu$. The homogeneity of $\bbV_\mu$ is $Z(\mu)$.
\end{framed}
\begin{remark}
A consequence of the seemingly perverse ``minus lowest weight'' convention \cite{BE1989} is that Kostant's algorithm below describes $H^2(\fg_-,\bbU)$ (as opposed to $H^2(\fp_+,\bbU)$).
\end{remark}
If $\fg_0^{ss} \neq 0$ and $-\mu$ is $\fp$-dominant, the theorem of lowest (highest) weight says there is a unique (up to isomorphism) $\fg_0^{ss}$-representation with lowest weight $\mu_0$, where $\mu_0$ is obtained by deleting the crossed nodes from $\mu$. (If $\fg_0^{ss} = 0$, take the trivial 1-dimensional representation.) We augment this to a $\fg_0$-module $\bbV_\mu$ by defining a $\fz(\fg_0)$-action: let $Z_i$ act by the scalar $Z_i(\mu)$ for any $i \in I_\fp$. Note that if $\lambda$ is $\fg$-dominant, then for any $w \in W^\fp$, $w\cdot \lambda$ is $\fp$-dominant (by definition of $W^\fp$).
\begin{framed}
\begin{recipe}[{\bf Kostant's theorem for $H^2(\fg_-,\bbU)$}] \label{R:Kostant} Let $\fg$ be complex \ss, $\fp \subset \fg$ a parabolic subalgebra, and $\bbU$ an irreducible $\fg$-module with highest weight $\lambda$. As a $\fg_0$-module,
\[
H^2(\fg_-,\bbU) \cong \bop_{w \in W^\fp(2)} \bbV_{-w\cdot \lambda}.
\]
For each $w = (jk) \in W^\fp(2)$, $\Phi_w = \{ \alpha_j, \sr_j(\alpha_k) \}$. Via the $\fg_0$-module isomorphism $H^2(\fg_-,\bbU) \cong \ker(\Box) \subset \bigwedge^2 \fg_-^* \ot \bbU$, the module $\bbV_{-w\cdot \lambda}$ has the unique (up to scale) lowest weight vector
\begin{align} \label{E:phi-0}
\phi_0 := e_{\alpha_j} \wedge e_{\sr_j(\alpha_k)} \ot v,
\end{align}
where $e_\gamma \in \fg_\gamma$ are root vectors, and $v \in \bbU$ is a weight vector with weight $w(-\lambda)$.
\end{recipe}
\end{framed}
\begin{example}[$G_2 / P_1$] \label{EX:G2P1-2} Continuing Example \ref{EX:G2P1-1}, where $\lambda_\fg = \lambda_2$ for $\fg = \text{Lie}(G_2)$, we now compute $H^2(\fg_-,\fg)$ as a $\fg_0 \cong \fgl_2(\bbC)$-module. Since the lowest root of $\fg_1$ is $\alpha_1 = 2\lambda_1 - \lambda_2$, then
\[
\fg_1 = \Gdd{xw}{-2,1} \quad\Rightarrow\quad H^2(\fg_-,\fg) = \bbV_{-w\cdot\lambda_\fg} = \Gdd{xw}{-8,4} = \bigodot{}^4(\fg_1) = \bigodot{}^4(\fg_{-1})^*,
\]
where $w = (12) \in W^\fp(2)$.
Since $\fg_1$ is the standard representation of $\fg_0$, then $H^2(\fg_-,\fg)$ is the space of binary quartics on $\fg_{-1}$. This recovers Cartan's result \cite{Car1910} concerning the fundamental (harmonic) curvature tensor for generic rank two distributions in dimension five. Furthermore,
\[
\Phi_w = \{ \alpha_1, 3\alpha_1 + \alpha_2 \}, \qquad w(-\lambda_\fg) = 3\lambda_1 - 2\lambda_2 = -\alpha_2.
\]
Thus, $H^2(\fg_-,\fg) \cong \ker(\Box) \subset \bigwedge^2 \fg_-^* \ot \fg$ has {\em lowest} weight vector $\phi_0 = e_{\alpha_1} \wedge e_{3\alpha_1 + \alpha_2} \ot e_{-\alpha_2}$.
\end{example}
For $H^r(\fg_-,\bbU)$, we make the obvious changes to Recipe \ref{R:Kostant}, e.g. $w \in W^\fp(r)$, etc. If $\fg$ is simple, the highest weight $\lambda_\fg$ is also the highest root, so $\alpha = w(-\lambda_\fg) \in \Delta$ and $v = e_\alpha \in \fg_\alpha$. If $\fg$ is \ss, say with decomposition into simple ideals $\fg = \fg' \times \fg''$, then $H^2(\fg_-,\fg) = H^2(\fg_-,\fg') \op H^2(\fg_-,\fg'')$, and we can apply Kostant's theorem to each factor. Note that in the \ss case, $\fg_0 = \fg_0' \times \fg_0''$ and writing $\mu = \mu' + \mu''$, we have $\bbV_\mu \cong \bbV_{\mu'} \boxtimes \bbV_{\mu''}$. Here, $\boxtimes$ is the ``external'' tensor product, so $\fg_0',\fg_0''$ act trivially on $\bbV_{\mu''},\bbV_{\mu'}$, respectively. More generally, when $\fg$ contains simple ideals $\fg'$ and $\fg''$, the $\fg_0$-irreducibles appearing in $H^2(\fg_-,\fg)$ come in three types (see Table \ref{F:H2-types}). (The analysis for the gap problem in the general (complex or split-real) \ss case will reduce to the case with no more than two factors -- see Remark \ref{E:reduce-to-two}.)
\begin{center}
\begin{table}[h]
$\begin{array}{cccc} \hline
\mbox{Label} & w \in W^\fp(2) & w\cdot\lambda_{\fg'} & \mbox{Submodule of...}\\ \hline
\mbox{Type I} & (j'k') \in W^{\fp'}(2) & (j'k') \cdot\lambda_{\fg'} & H^2(\fg_-',\fg') \boxtimes \bbC\\
\mbox{Type II} & (j'k'') \in W^{\fp'}(1) \times W^{\fp''}(1) & \sr_{j'} \cdot \lambda_{\fg'} + \sr_{k''} \cdot 0'' & H^1(\fg_-',\fg') \boxtimes \fg_1''\\
\mbox{Type III} & (j''k'') \in W^{\fp''}(2) & \lambda_{\fg'} + (j''k'') \cdot 0'' & \fg'_{-\nu'} \boxtimes H^2(\fg_-'',\bbC)\\ \hline
\end{array}$
\caption{Types of $\fg_0$-submodules in $H^2(\fg_-,\fg)$ when $\fg$ contains simple ideals $\fg'$ and $\fg''$}
\label{F:H2-types}
\end{table}
\end{center}
\subsection{Regularity and Yamaguchi's rigidity theorem}
\label{S:reg}
Let $\fg$ be complex \ss with a simple ideal having highest weight $\lambda$. Let us write the regularity condition $Z(-w\cdot \lambda) \geq 1$ in detail. Let $w = (jk) \in W^\fp(2)$, so $\Phi_w := w\Delta^- \cap \Delta^+ = \{ \alpha_j, \sr_j(\alpha_k) \}$. Using \eqref{E:aff-W}, \eqref{E:Phi-w}, and writing $r_a = \langle \lambda, \alpha_a^\vee \rangle \geq 0$ (since $\lambda$ is $\fg$-dominant),
\begin{align}
w\cdot \lambda &= w(\lambda) + w\cdot 0 = w(\lambda) - \sum_{\alpha \in \Phi_w} \alpha
= \sr_j (\sr_k(\lambda)) - \alpha_j - \sr_j(\alpha_k) \label{E:w-lambda}\\
&= \sr_j(\lambda - r_k \alpha_k) - \alpha_j - \sr_j(\alpha_k)
= \lambda - (r_j + 1) \alpha_j - (r_k +1)(\alpha_k - c_{kj} \alpha_j). \nonumber
\end{align}
From Recipe \ref{R:Hasse}, we know $j \in I$, so $Z(w\cdot \lambda) \leq -1$ is equivalent to
\begin{align} \label{E:reg}
Z(\lambda) \leq r_j + (r_k +1) (Z(\alpha_k) - c_{kj}).
\end{align}
\begin{defn} \label{D:reg} Let $\fg$ be complex simple with highest weight $\lambda_\fg$. Define
\[
W^\fp_+(2) := \{ w \in W^\fp(2) \mid Z(-w\cdot \lambda_\fg) \geq 1 \} \qRa
H^2_+(\fg_-,\fg) = \bop_{w \in W^\fp_+(2)} \bbV_{-w\cdot \lambda_\fg}.
\]
\end{defn}
\begin{example} \label{E:reg-quick}
Condition \eqref{E:reg} can be used to calculate $W^\fp_+(2)$. Consider $A_\rkg / P_{1,2,s,t}$ for $4 \leq s < t < \rkg$. Since $\lambda_\fg = \lambda_1 + \lambda_\rkg = \alpha_1 + ... + \alpha_\rkg$, then $Z(\lambda_\fg) = 4$. Since $r_j, r_k, Z(\alpha_k), -c_{kj} \in \{ 0, 1 \}$, then \eqref{E:reg} forces $r_k = Z(\alpha_k) = -c_{kj} = 1$, so $k=1$, $j=2$, and $W^\fp_+(2) = \{ (21) \}$, while $|W^\fp(2)| = 11$.
\end{example}
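Similarly, for $G_2/P_1$ and $w = (12)$ (a quick re-derivation of the homogeneity found in Example \ref{EX:G2P1-1}): here $Z(\lambda_\fg) = 3$, $r_1 = 0$, $r_2 = 1$, $Z(\alpha_2) = 0$, and $c_{21} = -3$, so the right side of \eqref{E:reg} is $0 + 2(0 + 3) = 6 \geq 3$. Hence $W^\fp_+(2) = W^\fp(2) = \{ (12) \}$, and by \eqref{E:w-lambda}, $Z(w \cdot \lambda_\fg) = 3 - 1 - 2 \cdot 3 = -4$, i.e. $-w \cdot \lambda_\fg$ has homogeneity $+4$.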
For regular, normal $G/P$ geometries, $\im(\Kh) \subset H^2_+(\fg_-,\fg)$. If $H^2_+(\fg_-,\fg) = 0$, then $\Kh \equiv 0$, so the geometry is locally flat. These geometries were classified by Yamaguchi \cite{Yam1993}, \cite{Yam1997} so we refer to them as {\em Yamaguchi-rigid}.\footnote{The list in \cite{Yam1993} contains some minor errors, which were corrected in \cite{Yam1997}.} For the gap problem, we need only study {\em Yamaguchi-nonrigid} geometries.
\begin{thm}[Yamaguchi's rigidity theorem] \label{T:Y-rigid} Let $\fg$ be a complex simple Lie algebra and $\fp$ a parabolic subalgebra. Then $H^2_+(\fg_-,\fg) \neq 0$ if and only if $(\fg,\fp)$ is: (i) 1-graded, (ii) a contact gradation, or (iii) listed in Table \ref{table:non-rigid}.
\end{thm}
\begin{table}[h]
$\begin{array}{|l|c|c|c|c|c|c|} \hline
G & \mbox{Range} & \mbox{2-graded} & \mbox{3-graded} & \mbox{4-graded} & \mbox{5-graded} & \mbox{6-graded} \\ \hline\hline
A_\rkg & \rkg \geq 4 & P_{1,s}, P_{2,s}, P_{s,s+1} & P_{1,2,s}, P_{1,s,\rkg} & P_{1,2,s,t} & - & -\\
& \rkg = 3 & P_{1,2} & P_{1,2,3} & - & - & -\\ \hline
B_\rkg & \rkg \geq 4 & P_3, P_\rkg & P_{1,2} & P_{2,3} & - & -\\
& \rkg = 3 & P_3 & P_{1,2}, P_{1,3} & P_{2,3} & P_{1,2,3} & -\\
& \rkg = 2 & - & P_{1,2} & - & - & - \\ \hline
C_\rkg & \rkg \geq 4 & P_2, P_{\rkg-1} & P_{1,\rkg}, P_{2,\rkg}, P_{\rkg-1,\rkg} & P_{1,2} & P_{1,2,\rkg} & P_{1,2,s}\, (s < \rkg)\\
& \rkg = 3 & P_2 & P_{1,3}, P_{2,3} & P_{1,2} & P_{1,2,3} & -\\ \hline
D_\rkg & \rkg \geq 5 & P_3, P_{1,\rkg} & P_{1,2} & P_{2,3}, P_{1,2,\rkg} & - & -\\
& \rkg = 4 & P_{1,4} & P_{1,2} & P_{1,2,4} & - & - \\ \hline
G_2 & -& - & P_1 & - & P_{1,2} & -\\ \hline
\end{array}$
\caption{Yamaguchi-nonrigid geometries, excluding 1-graded and parabolic contact geometries}
\label{table:non-rigid}
\end{table}
\subsection{Correspondence and twistor spaces}
\label{S:corr-twistor}
We review correspondence and twistor space constructions from \cite{Cap2005,CS2009}.
Let $G$ be a \ss Lie group, and $Q \subset P \subset G$ parabolic subgroups, so there is a projection $G / Q \to G / P$. Write the decompositions of $\fg$ corresponding to $\fq,\fp$ as
\begin{align} \label{E:pq-decomp}
\fg = \fq_- \op \fq_0 \op \fq_+ = \fp_- \op \fp_0 \op \fp_+,
\end{align}
where (see Lemma 4.4.1 in \cite{CS2009}),
\begin{align} \label{E:pq-grading}
I_\fp \subset I_\fq, \qquad \fp_\pm \subset \fq_\pm, \qquad \fq_0 \subset \fp_0, \qquad \fz(\fp_0) \subset \fz(\fq_0).
\end{align}
Given a $G/P$ geometry $(\cG \to N, \omega)$, the {\em correspondence space} for $Q \subset P$ is $\cC N = \cG / Q$. This carries a canonical $G/Q$ geometry $(\cG \to \cC N, \omega)$, and the fibers of $\cC N \to N$ are diffeomorphic to $P/Q$. The vertical subbundle $V \cC N \subset T\cC N$ corresponds to the $Q$-submodule $\fp/\fq \subset \fg / \fq$, and the Cartan curvature $\kappa^{\cC N}$ of $(\cG \to \cC N,\omega)$ satisfies $i_\xi \kappa^{\cC N} = 0$ for any $\xi \in \Gamma(V\cC N)$.
Conversely, given a $G/Q$ geometry $(\cG \to M, \omega)$, we may ask if there exists a $G/P$ geometry $(\cG \to N,\omega)$ such that $M$ is locally isomorphic to $\cC N$. If so, we call $N$ the (local) {\em twistor space} for $M$. From above, a necessary condition is that the subbundle $VM \subset TM$ induced from $\fp/\fq \subset \fg / \fq$ must satisfy $i_\xi \kappa = 0$ for all $\xi \in \Gamma(VM)$. Locally, this is sufficient, and moreover $(\cG \to N,\omega)$ is unique when $P/Q$ is assumed to be connected \cite{Cap2005} (or Corollary 4.4.1 in \cite{CS2009}). More conveniently still, it suffices to check only that $\im(\Kh)$ vanishes on $\fp \cap \fq_-$ (cf. Theorem 3.3 in \cite{Cap2005}).
Two additional nice properties of correspondence spaces include:
\begin{itemize}
\item $\finf(\cG \to N, \omega) = \finf(\cG \to \cC N, \omega)$, provided $P/Q$ is connected.
\item $(\cG \to N, \omega)$ is normal iff $(\cG \to \cC N, \omega)$ is normal.
\end{itemize}
In contrast to normality, regularity is not preserved in passing to a correspondence space.
\begin{prop} \label{P:twistor} Let $G$ be a complex semisimple Lie group, and $Q \subset G$ a parabolic subgroup. Consider a $G/Q$ geometry $(\cG \to M, \omega)$ with $\im(\Kh) \subset \bbV_{\mu}$, where $\mu = -w\cdot \lambda$, with $\lambda$ the highest weight of a simple ideal in $\fg$, and $w = (jk) \in W^\fq(2)$. Let {\rm (a)} $\fp := \fp_j$ if $c_{kj} < 0$; or, {\rm (b)} $\fp := \fp_{j,k}$ if $c_{kj} = 0$. Then $\fq \subset \fp \subset \fg$ and $\im(\Kh)$ consists of maps which vanish on $\fp \cap \fq_- \subset \fp_0$.
\end{prop}
\begin{proof} By Recipe \ref{R:Hasse}, $j \in I_\fq$, and $k \in (I_\fq \cup \cN(j)) \backslash \{ j \}$. From Recipe \ref{R:Kostant}, the lowest $\fq_0$-weight vector in the $\fq_0$-module $\bbV_{\mu}$ is $\phi_0 = e_{\alpha_j} \wedge e_{\sr_j(\alpha_k)} \ot e_{w(-\lambda)} \in \bigwedge^2 \fq_-^* \ot \fg$.
If $c_{kj} < 0$, take $\fp := \fp_j$. Then $\sr_j(\alpha_k) = \alpha_k - c_{kj} \alpha_j$ has positive $Z_{I_\fp}$-grading, so $\phi_0 \in \fp_1 \wedge \fp_+ \ot \fg$. Now recall that the $\fq_0$-module $\bbV_{\mu}$ is generated by applying $\fq_0$-raising operators to $\phi_0$. Since $\fq_0 \subset \fp_0$, and $\fp_1 \wedge \fp_+ \ot \fg$ is $\fp_0$-invariant (hence $\fq_0$-invariant), then $\bbV_{\mu} \subset \fp_1 \wedge \fp_+ \ot \fg$. Thus, any map in $\im(\Kh) \subset \bbV_{\mu}$ vanishes on $\fp_0$ and hence on $\fp \cap \fq_- \subset \fp_0$.
If $c_{kj} = 0$, take $\fp := \fp_{j,k}$. Then $\sr_j(\alpha_k) = \alpha_k$ has positive $Z_{I_\fp}$-grading, and proceed as before.
\end{proof}
\begin{cor} \label{C:P-exists}
Under the hypothesis of Proposition \ref{P:twistor}, if there exists a parabolic subgroup $P$ with $Q \subset P \subset G$, and Lie algebra $\fp$ given as in (a) or (b), then $(\cG \to M, \omega)$ admits a twistor space $N$ with $(\cG \to N, \omega)$ of type $(G,P)$.
\end{cor}
\begin{remark} \label{RM:Q-connected}
If $Q$ is connected, then by the subalgebra--subgroup correspondence, there exists a unique connected (parabolic) subgroup $P$ such that $Q \subset P \subset G$ with $\text{Lie}(P) = \fp$. This guarantees the existence of a twistor space, and provides a substantial simplification in the calculation of submaximal symmetry dimensions -- see Remark \ref{E:coverings}.
\end{remark}
\subsection{Annihilators}
\label{S:ann-g0}
One step towards understanding the gap problem is the following question: Given a {\em nonzero} $\phi \in \bbV_\mu$, where $-\mu$ is $\fp$-dominant, what is the maximal dimension of $\fann(\phi) \subset \fg_0$? A well-known consequence of the Borel fixed point theorem is that the $G_0$-action on $\bbP(\bbV_\mu)$ has a unique (Zariski) closed orbit $\cO$, namely the orbit of the lowest weight line $[\phi_0]$. This is the unique orbit of minimal dimension, so $\max_{0 \neq \phi \in \bbV_\mu} \dim(\fann(\phi) )$ is realized precisely when $[\phi] \in \cO$. Let $\mu_0$ be the weight of $\fg_0^{ss}$ obtained by deleting the crossed nodes from $\mu$. In $\bbP(\bbV_\mu)$, the stabilizer in $G_0^{ss}$ of $[\phi_0]$ is the ({\em opposite}) parabolic subgroup with crosses where $-\mu_0$ has positive coefficients.
\begin{defn} \label{D:Jw} Let $J_\mu = \{ j \not\in I \mid \langle \mu, \alpha_j^\vee \rangle \neq 0 \}$ (marked by asterisks on the Dynkin diagram), and $Z_{J_\mu} := \sum_{j\in J_\mu} Z_j$. Using the $Z_{J_\mu}$-grading, let $\fp_\mu^{\opn} = (\fg_0^{ss})_{\leq 0}$ be the (opposite) parabolic subalgebra of $\fg_0^{ss}$ corresponding to $J_\mu$. If $\fg$ is simple, write $J_w := J_{-w\cdot \lambda_\fg}$ and $\fp_w^{\opn} := \fp_\mu^{\opn}$.
\end{defn}
\begin{framed}
\begin{recipe} \label{R:a0} Let $\phi_0 \in \bbV_{\mu}$ be the lowest weight vector. Then $\fa_0 := \fann(\phi_0) \subsetneq \fz(\fg_0) \op \fp_\mu^{\opn}$, where
\begin{align}
\fa_0 & = \{ H \in \fh \mid \mu(H) = 0 \} \op \bop_{\gamma \in \Delta(\fg_{0,\leq0})} \fg_\gamma, \label{E:a0}\\
\dim(\fa_0) &= \dim(\fp_\mu^{\opn}) + |I_\fp| - 1 = \max_{0 \neq \phi \in \bbV_{\mu}} \dim(\fann(\phi) ). \label{E:dim-a0}
\end{align}
(Use Recipe \ref{R:dim} to compute the dimension of $\fp_\mu^{\opn} \subset \fg_0^{ss}$.)
\end{recipe}
\end{framed}
\begin{example}[$E_8 / P_8$] \label{ex:E8P8} This has $\lambda_\fg = \lambda_8$ and $W^\fp_+(2) = \{ w:=(87) \}$. We have $\dim(\fg_0) = 1 + \dim(E_7) = 134$, so $\dim(\fg_-) = \half( \dim(E_8) - \dim(\fg_0)) = \half(248 - 134) = 57$. Moreover,
\[
w\cdot \lambda_\fg = \Edd{wwwwwssx}{0,\quad 0,0,0,0,1,1,-4} \qRa J_w = \{ 6,7 \}.
\]
By Recipe \ref{R:a0}, $\fp^{\opn}_w \cong \fp_{6,7} \subset E_7$ with $\dim(\fa_0) = \dim(\fp^{\opn}_w) = \half(\dim(E_7) + 2 + \dim(D_5)) = 90$.
\end{example}
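The counts in this example can be reproduced mechanically. The following sketch (a sanity check, not part of the argument) hard-codes the standard dimensions $\dim E_8 = 248$, $\dim E_7 = 133$, $\dim D_5 = 45$ and verifies both $\dim(\fg_-) = 57$ and $\dim(\fa_0) = 90$:

```python
# Dimension bookkeeping for Example E8/P8, using the standard
# dimensions dim E8 = 248, dim E7 = 133, dim D5 = 45.
dim_E8, dim_E7, dim_D5 = 248, 133, 45

dim_g0 = 1 + dim_E7                    # g_0 = (1-dim center) + E7
dim_g_minus = (dim_E8 - dim_g0) // 2   # g_- and g_+ have equal dimension
assert dim_g_minus == 57

# dim(p_{6,7}^op in E7) = (dim E7 + dim g_0(E7/P_{6,7})) / 2, where
# g_0(E7/P_{6,7}) = 2-dim center + D5; here |I_p| - 1 = 0 adds nothing.
dim_a0 = (dim_E7 + 2 + dim_D5) // 2
assert dim_a0 == 90
```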
\begin{example}[$B_\rkg / P_1$ for $\rkg \geq 4$] \label{ex:conf-odd-ann} By Recipe \ref{R:dim}, $\fg_0 \cong B_{\rkg-1} \times \bbC$, and by Kostant (Recipe \ref{R:Kostant}),
\[
H^2(\fg_-,\fg) = \bbV_{\mu} =
\begin{tiny}
\begin{tikzpicture}[scale=0.75,baseline=-3pt]
\bond{0,0};
\bond{1,0};
\bond{2,0};
\tdots{3,0};
\dbond{r}{4,0};
\DDnode{x}{0,0}{-4};
\DDnode{w}{1,0}{0};
\DDnode{s}{2,0}{2};
\DDnode{w}{3,0}{0};
\DDnode{w}{4,0}{0};
\DDnode{w}{5,0}{0};
\useasboundingbox (-.4,-.2) rectangle (5.4,0.55);
\end{tikzpicture}
\end{tiny}, \qquad w = (12), \qquad Z(\mu) = +2.
\]
Removing the crossed node, $\fp_\mu^{\opn} \cong \fp_2 \subset B_{\rkg-1}$, and $(\fg_0^{ss})_0 = \bbC \times A_1 \times B_{\rkg-3}$. By Recipes \ref{R:a0} and \ref{R:dim},
\begin{align*}
\dim(\fa_0) &= \dim(\fp_\mu^{\opn}) = \half(\dim(B_{\rkg-1}) + \dim((\fg_0^{ss})_0)) \\
&= \half( (\rkg-1)(2\rkg-1) + 1 + 3 + (\rkg-3)(2\rkg-5)) = 2\rkg^2 - 7\rkg + 10.
\end{align*}
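The closed form $2\rkg^2 - 7\rkg + 10$ can be checked against the two summands above over a range of ranks (a verification sketch; `dim_B` encodes $\dim \fso(2n{+}1) = n(2n{+}1)$):

```python
# Check dim(a_0) = ( dim B_{l-1} + dim(C x A1 x B_{l-3}) ) / 2
# against the closed form 2l^2 - 7l + 10 for B_l/P_1, l >= 4.
def dim_B(n):        # dim so(2n+1) = n(2n+1)
    return n * (2 * n + 1)

for l in range(4, 12):
    lhs = (dim_B(l - 1) + 1 + 3 + dim_B(l - 3)) // 2
    assert lhs == 2 * l**2 - 7 * l + 10
```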
In terms of $H^2(\fg_-,\fg) \cong \ker(\Box) \subset \bigwedge^2 \fg_-^* \ot \fg$, we have $\phi_0 = e_{\alpha_1} \wedge e_{\alpha_1 + \alpha_2} \ot e_{-\alpha_2 - 2\alpha_3 - ... - 2\alpha_\rkg}$.
We describe $\fa_0$ explicitly. Taking $\bbC^{2\rkg+1}$ with standard anti-diagonal symmetric $\bbC$-bilinear form $g$, $\fso(2\rkg+1,\bbC)$ consists of $g$-skew matrices. Define $\epsilon_i \in \fh^*$ by
$\epsilon_i( \diag(a_1,...,a_\rkg,0,-a_\rkg,...,-a_1)) = a_i$.
The simple roots are $\alpha_1 = \epsilon_1 - \epsilon_2$, ..., $\alpha_{\rkg-1} = \epsilon_{\rkg-1} - \epsilon_\rkg$, $\alpha_\rkg = \epsilon_\rkg$. Hence,
\begin{align*}
\mu &= 4\lambda_1 - 2\lambda_3 = 2\alpha_1 - 2\alpha_3 - ... - 2\alpha_{\rkg-1} - 2\alpha_\rkg
= 2 (\epsilon_1 - \epsilon_2 - \epsilon_3).
\end{align*}
By Recipe \ref{R:a0}, this data completely determines $\fa_0$, e.g. if $\rkg=4$, then
\[
\fa_0 = \mat{c|ccccccc|c}{
a + d& & & & & & & &0\\ \hline
& a & b & & & & & 0 &\\
& c & d & & & & 0 & &\\
& z_1 & z_2 & z_3 & z_4 & 0 & & &\\
& z_5 & z_6 & z_7 & 0 & -z_4 & & &\\
& z_8 & z_9 & 0 & -z_7 & -z_3 & & &\\
& z_{10} & 0 & -z_9 & -z_6 & -z_2 & -d & -b&\\
& 0 & -z_{10} & -z_8 & -z_5 & -z_1 & -c & -a &\\ \hline
0 & &&&&&&&-a-d}.
\]
(The condition $\mu(H) = 0$ determines the diagonal, and $\fa_0$ contains $\fg_\gamma \subset \fg_0$ iff $Z_{J_w}(\gamma) \leq 0$.)
\end{example}
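One can also confirm mechanically that the displayed matrix is $g$-skew and depends on $14 = 2\cdot 4^2 - 7\cdot 4 + 10$ free parameters. The sketch below (an illustration only) substitutes generic integers for $a,b,c,d,z_1,\dots,z_{10}$ and tests the skew condition $M_{ij} = -M_{8-j,8-i}$ (indices from $0$) imposed by the anti-diagonal form:

```python
import random

# Generic integer values stand in for the 14 parameters a, b, c, d, z1..z10;
# the skew condition below holds identically in these parameters.
a, b, c, d = (random.randint(1, 9) for _ in range(4))
z = [random.randint(1, 9) for _ in range(11)]   # z[1], ..., z[10] used

M = [[0] * 9 for _ in range(9)]
M[0][0] = a + d
M[1][1], M[1][2] = a, b
M[2][1], M[2][2] = c, d
M[3][1:5] = [z[1], z[2], z[3], z[4]]
M[4][1:4] = [z[5], z[6], z[7]]; M[4][5] = -z[4]
M[5][1], M[5][2] = z[8], z[9];  M[5][4], M[5][5] = -z[7], -z[3]
M[6][1] = z[10]; M[6][3:6] = [-z[9], -z[6], -z[2]]; M[6][6], M[6][7] = -d, -b
M[7][2:6] = [-z[10], -z[8], -z[5], -z[1]]; M[7][6], M[7][7] = -c, -a
M[8][8] = -a - d

# g-skewness w.r.t. the anti-diagonal symmetric form: M[i][j] = -M[8-j][8-i].
assert all(M[i][j] == -M[8 - j][8 - i] for i in range(9) for j in range(9))
# Parameter count matches dim(a_0) = 2*4^2 - 7*4 + 10 = 14.
assert 4 + 10 == 2 * 4**2 - 7 * 4 + 10
```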
\section{Prolongation analysis}
\label{S:PR-analysis}
This section is representation-theoretic, focusing on the variant of the Tanaka prolongation introduced in Definition \ref{D:g-prolong}. We earlier defined $\fa^\phi := \prn^\fg(\fg_-,\fann(\phi))$, and for regular, normal parabolic geometries, we will be interested in
\begin{align} \label{E:fU}
\fU := \max\l\{ \dim(\fa^\phi) \mid 0 \neq \phi \in H^2_+(\fg_-,\fg) \r\}.
\end{align}
From the first line in the proof of Proposition \ref{P:loc-flat}, $\fU < \dim(\fg)$.
If $H^2_+(\fg_-,\fg) = \bop_i \bbV_i$ is the decomposition into $\fg_0$-irreducibles, and $\phi = \sum_i \phi_i$, where $\phi_i \in \bbV_i$ for each $i$, then $\fann(\phi) \subset \fann(\phi_i)$. By Lemma \ref{L:T-subalg}, $\fa^\phi \subset \fa^{\phi_i}$, so $\fU = \max_i (\max_{0 \neq \phi_i \in \bbV_i} \dim(\fa^{\phi_i}))$.
Thus, it suffices to understand $\max_{0 \neq \phi \in \bbV} \dim(\fa^\phi)$, where $\bbV$ is an irreducible $\fg_0$-module. When $\fg$ is complex \ss, each such submodule in $H^2_+(\fg_-,\fg)$ is of the form $\bbV_\mu$, where $\mu = -w\cdot\lambda$, with $\lambda$ the highest weight of a simple ideal in $\fg$, and $w \in W^\fp(2)$ (so $w\cdot\lambda$ is $\fp$-dominant). So we have
\begin{align} \label{E:fU-mu}
\fU = \max_{\bbV_\mu \subset H^2_+(\fg_-,\fg)} \fU_\mu, \qquad
\fU_\mu := \max\l\{ \dim(\fa^\phi) \mid 0 \neq \phi \in \bbV_\mu \r\}.
\end{align}
Define $\fa(\mu) := \fa^{\phi_0}$, where $\phi_0$ is the lowest weight vector of $\bbV_\mu$. (When $\fg$ is complex simple, we will also use the notation $\fa(w) := \fa(-w\cdot\lambda_\fg)$ for $w \in W^\fp(2)$.) After proving $\fU_\mu = \dim(\fa(\mu))$, we establish a Dynkin diagram recipe to compute $\fU_\mu$, and then study the structure of $\fa(\mu)$.
\subsection{Maximizing the Tanaka prolongation}
\label{S:max-Tanaka}
\begin{lemma} \label{L:Tanaka-lw} Let $G$ be a complex \ss Lie group, and $P$ a parabolic subgroup. Let $\bbV_\mu$ be an irreducible $G_0$-module, and $\phi_0 \in \bbV_\mu$ the lowest $\fg_0$-weight vector. Then for any {\em nonzero} $\phi \in \bbV_\mu$, $\dim(\fa^\phi_k) \leq \dim(\fa^{\phi_0}_k)$, $\forall k \geq 0$.
Thus, $\fU_\mu = \dim(\fa(\mu)) = \dim(\fa^{\phi_0})$. Moreover, $\dim(\fa^\phi) = \dim(\fa^{\phi_0})$ iff $[\phi] \in G_0 \cdot [\phi_0] \subset \bbP(\bbV_\mu)$.
\end{lemma}
\begin{remark} In general, $\fann(\phi) \not\subset \fann(\phi_0)$, and $\fa^\phi_k \not\subset \fa^{\phi_0}_k$.
\end{remark}
\begin{proof}[Proof of Lemma \ref{L:Tanaka-lw}] If $\fg_0^{ss} = 0$, then irreducibility implies $\bbV_\mu \cong \bbC$, and the result is immediate. So suppose $\fg_0^{ss} \neq 0$. The $k=0$ case was established in Section \ref{S:ann-g0}, so let $k \geq 1$. Define $\Psi_k : \bbP( \bbV_\mu ) \to \bbZ_{\geq 0}$ by $\Psi_k([\phi]) = \dim(\fa_k^\phi)$. By Lemma \ref{L:T-subalg},
\begin{align*}
\fa_k^\phi &= \prn_k^\fg(\fg_-,\fann(\phi)) = \{ X \in \fg_k \mid \text{ad}_{\fg_{-1}}^k(X) \cdot \phi = 0 \}\\
&= \{ X \in \fg_k \mid (\text{ad}_{Y_{i_1}} \circ ... \circ \text{ad}_{Y_{i_k}}(X)) \cdot \phi = 0,\,\, \forall Y_{i_j} \in \fg_{-1} \}.
\end{align*}
The function $(\text{ad}_{Y_{i_1}} \circ ... \circ \text{ad}_{Y_{i_k}}(X)) \cdot \phi$ is a multilinear function of $X,Y_{i_1},...,Y_{i_k},\phi$. Its vanishing is determined by evaluating all $Y_{i_1},...,Y_{i_k}$ on all tuples of basis elements $\{ e_i \}$ of $\fg_{-1}$. So there is a bilinear function $T(X,\phi)$ such that $\fa_k^\phi = \{ X \in \fg_k \mid T(X,\phi) = 0 \}$. Choosing any basis of $\fg_k$, there is a matrix $M(\phi)$ such that $X \in \fa_k^\phi$ if and only if its coordinate vector is in $\ker(M(\phi))$. Since the rank of a matrix is a lower semi-continuous function of its entries, and $M(\phi)$ depends linearly on $\phi$, then $\rnk(M(\phi))$ is lower semi-continuous in $\phi$ (and depends only on $[\phi]$). We have $\dim(\fa_k^\phi) = \dim(\fg_k) - \rnk(M(\phi))$, so $\Psi_k$ is upper semi-continuous.
Since $\bbV_\mu$ is irreducible and $G_0^{ss}$ is non-trivial, then $\cO = G_0 \cdot [\phi_0]$ is the {\em unique} (Zariski) closed $G_0$-orbit in $\bbP(\bbV_\mu)$. Consequently, $[\phi_0]$ is in the (Zariski) closure of {\em every} $G_0$-orbit in $\bbP(\bbV_\mu)$. For any nonzero $\phi \in \bbV_\mu$, there is a sequence $\{ g_n \} \subset G_0$ such that $g_n \cdot [\phi] = [g_n \cdot \phi] \to [\phi_0]$ as $n \to \infty$. For any $n$, $\fann(g_n \cdot \phi) = \text{Ad}_{g_n}(\fann(\phi))$, so $\fa_k^{g_n \cdot \phi} = \text{Ad}_{g_n}(\fa_k^\phi)$, and hence $\Psi_k$ is constant on $G_0$-orbits. Thus, $\Psi_k([\phi]) = \Psi_k([g_n\cdot\phi]) = \Psi_k(g_n \cdot [\phi]) \leq \Psi_k([\phi_0])$, by upper semi-continuity of $\Psi_k$.
The final statement follows since $\dim(\fa_0^\phi) = \dim(\fa_0^{\phi_0})$ iff $[\phi] \in G_0 \cdot [\phi_0] \subset \bbP(\bbV_\mu)$.
\end{proof}
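The semicontinuity mechanism in the proof can be illustrated on a toy family (purely illustrative; this is not the actual $M(\phi)$ of the lemma): for a matrix depending linearly on a parameter, the rank can only drop on a closed locus, so the kernel dimension is minimal at a generic parameter and can only jump up in the limit:

```python
# Toy model of the semicontinuity step: M(t) depends linearly on t, so
# rank(M(t)) is lower semicontinuous and dim ker(M(t)) = 2 - rank(M(t))
# is upper semicontinuous: generic t gives the minimal kernel, and the
# kernel can only grow at special parameters (here t = 0).
def rank2x2(m):
    (p, q), (r, s) = m
    if p * s - q * r != 0:
        return 2
    return 1 if any(x != 0 for row in m for x in row) else 0

def dim_ker(t):
    return 2 - rank2x2([[1, 0], [0, t]])

assert all(dim_ker(t) == 0 for t in (1, 2, -3, 7))  # generic parameter
assert dim_ker(0) == 1                              # kernel jumps UP at t = 0
```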
Henceforth, we study $\fa(\mu)$. In Recipe \ref{R:a0}, we gave a description of $\fa_0$. In the next section, we show that the structure of $\fa_+$ can be easily determined.
\subsection{A Dynkin diagram recipe}
\label{S:Dynkin}
Let $-\mu$ be a $\fp$-dominant weight.
\begin{defn} \label{D:Iw} Define $I_\mu := \{ j \in I \mid \langle \mu, \alpha_j^\vee \rangle = 0 \}$. If $\fg$ is simple, write $I_w := I_{-w\cdot \lambda_\fg}$.
\end{defn}
\begin{framed}
{\bf Notation:} We augment our Dynkin diagram notation for $\mu$ by putting a {\em square} around $I_\mu$ crossed nodes, and an {\em asterisk} on $J_\mu$ nodes. (See Definition \ref{D:Jw}.) Refer to this as $\cD(\fg,\fp,\mu)$.
\end{framed}
\begin{example}[$C_6 / P_{1,2,5}$, $w = (21)$] \label{E:NPRC} We have $\lambda_\fg = 2\lambda_1$, and
\[
w\cdot \lambda_\fg = \NPRC, \quad\mbox{ i.e. }\quad I_w = \{ 1, 5 \}, \quad J_w = \{ 3 \}.
\]
\end{example}
\begin{defn}
Given $S \subset \{ 1, ..., \rkg \}$, let $Z_S = \sum_{s \in S} Z_s$ and $\widetilde{Z}_S = (Z_s)_{s \in S}$ denote the corresponding total grading and multi-grading elements.
\end{defn}
Each $\fg_j$ decomposes into $\fg_0$-irreducibles using $\widetilde{Z}_I$. In turn, these decompose into $\fg_{0,0}$-irreducibles using $\widetilde{Z}_{J_\mu}$. Writing $1_i \in \bbZ^I$ for the tuple which is zero except for a $1$ in the $i$-th position, we have
\begin{align} \label{E:g00-decomp}
\begin{array}{lll}
\mbox{$\fg_0$-module decomposition:} & \displaystyle\fg_1 = \bop_{i \in I} \fg_{1_i}, & \Delta(\fg_{1_i}) = \{ \alpha \in \Delta(\fg_1) \mid Z_i(\alpha) = 1 \};\\
\mbox{$\fg_{0,0}$-module decomposition:}& \displaystyle\fg_{1_i} = \bop_{A \geq 0} \fg_{1_i,A}, & \Delta(\fg_{1_i,A}) = \{ \alpha \in \Delta(\fg_{1_i}) \mid \widetilde{Z}_{J_\mu}(\alpha) = A \}.
\end{array}
\end{align}
Here, $A \in \bbZ^{J_\mu}$ is a multi-index, and ``$A \geq 0$'' means {\em all} entries are non-negative.
Note the unique lowest root in $\Delta(\fg_{1_i,0})$ is $\alpha_i$.
\begin{defn}
If $\cS_1,\cS_2 \subset \Delta$, define $\cS_1\,\dot{+}\, \cS_2 := \{ \alpha + \beta \mid \alpha \in \cS_1, \beta \in \cS_2 \} \cap \Delta$.
\end{defn}
\begin{lemma} \label{L:DD-a1} If $-\mu$ is a $\fp$-dominant weight with $I_\mu \neq \emptyset$, and $\fa = \fa(\mu)$, then
\begin{enumerate}
\item[\rm (a)] $\fa_+$ is a direct sum of root spaces;
\item[\rm (b)] $\Delta(\fa_1) = \{ \alpha \in \Delta(\fg_1) \mid Z_{I_\mu}(\alpha) = 1,\, Z_{J_\mu}(\alpha) = 0 \}$;
\item[\rm (c)] $\Delta(\fa_j) \subseteq \Delta(\fa_{j-1}) \,\dot{+}\, \Delta(\fa_1) \subseteq \{ \alpha \in \Delta(\fg_j) \mid Z_{I_\mu}(\alpha) = j,\, Z_{J_\mu}(\alpha) = 0 \}$, $\forall j \geq 2$.
\end{enumerate}
\end{lemma}
\begin{proof} From Recipe \ref{R:a0}, $\fa_0$ is a $\fg_{0,0}$-module (hence, an $\fh$-module). Since $\fg_-$ is a $\fg_{0,0}$-module, then so is each $\fa_k$. By Lemma \ref{L:T-subalg}, $\fa_k \subset \fg_k$, and all $\fg_\gamma \subset \fg_k$ are 1-dimensional, so (a) is proven.
Prior to calculating $\fa_1$, we recall a few facts. For any nonzero $\gamma \in \fh^*$, $\exists ! H_\gamma \in \fh$ such that $\gamma(H) = B(H,H_\gamma)$, by nondegeneracy of the Killing form $B$. Hence, $\alpha(H_\gamma) = \gamma(H_\alpha) = B(H_\alpha,H_\gamma) =: \langle \alpha, \gamma \rangle$. Choosing root vectors $e_\gamma \in \fg_\gamma$ ($\gamma \in \Delta$), we have $[e_\gamma,e_{-\gamma}] = c_\gamma H_\gamma$, where $c_\gamma = B(e_\gamma, e_{-\gamma}) \neq 0$.
Now calculate $\fa_1 = \{ X \in \fg_1 \mid [X,\fg_{-1}] \subset \fa_0 \}$ by testing $e_\alpha$ for $\alpha \in \Delta(\fg_1)$. Let $\beta \in \Delta(\fg_1)$. Then recalling $\fa_0$ from \eqref{E:a0}, we have
\Ben[(i)]
\item $\beta = \alpha$: $[e_\alpha,e_{-\alpha}] = c_\alpha H_\alpha$. Thus, if $H_\alpha \not\in \fa_0$, i.e.\ $\langle \mu,\alpha \rangle \neq 0$, then $\alpha \not\in \Delta(\fa_1)$.
\item $\beta \neq \alpha$: If $\alpha - \beta \not\in \Delta$, then $[e_\alpha, e_{-\beta}] = 0$ (no constraints). Otherwise, if $\alpha - \beta \in \Delta$, then $[e_\alpha, e_{-\beta}] = f_{\alpha\beta} e_{\alpha - \beta}$, where $f_{\alpha\beta} \neq 0$. Thus, $\alpha - \beta \in \Delta(\fg_{0,+})$ iff $e_{\alpha - \beta} \not\in \fa_0$ iff $\alpha \not\in \Delta(\fa_1)$.
\Een
Defining $\cT_1 = \{ \alpha \in \Delta(\fg_1) \mid \langle \mu, \alpha^\vee \rangle \neq 0 \}$ and
$\cT_2 = \{ \alpha \in \Delta(\fg_1) \mid \exists \beta \in \Delta(\fg_1) \mbox{ with } \alpha - \beta \in \Delta(\fg_{0,+}) \}$, we have $\Delta(\fa_1) = \Delta(\fg_1) \backslash (\cT_1 \cup \cT_2)$.
Now use the $\fg_{0,0}$-module decomposition \eqref{E:g00-decomp}. By Schur's lemma, for each $\fg_{1_i,A}$, either $\fg_{1_i,A} \subset \fa_1$ or $\fa_1\cap \fg_{1_i,A} = 0$. Thus, it suffices to test the lowest root in $\Delta(\fg_{1_i,A})$.
Fix $i \in I$ and suppose $A > 0$ (i.e.\ {\em at least} one entry of $A$ is positive), and let $\alpha \in \Delta(\fg_{1_i,A})$ be the lowest root. Since $A > 0$, $\alpha$ is not the lowest root of $\Delta(\fg_{1_i})$, i.e.\ $\alpha \neq \alpha_i$. However, since $\fg_{1_i}$ is an irreducible $\fg_0$-module, there is a sequence of simple roots in $\Delta(\fg_0)$ (possibly repeated) such that
\[
\alpha - \alpha_{j_1} - ... - \alpha_{j_m} = \alpha_i, \qquad j_k \in \{ 1,..., \rkg \} \backslash I, \qquad \rkg = \rnk(\fg),
\]
and $\alpha - \alpha_{j_1} - ... - \alpha_{j_k} \in \Delta$ for $1 \leq k \leq m$. In particular, $\beta = \alpha - \alpha_{j_1} \in \Delta(\fg_1)$. Since $\alpha$ is the lowest root of $\Delta(\fg_{1_i,A})$, then $j_1 \in J_\mu$ and $\alpha_{j_1} \in \Delta(\fg_{0,+})$. Hence, $\alpha \in \cT_2$, so $\Delta(\fg_{1_i,A}) \subset \cT_2$. Thus, $\Delta(\fa_1) = \Delta(\fg_1) \backslash (\cT_1 \cup \cT_2) \subset \bigcup_{i\in I} \Delta(\fg_{1_i,0})$. For the $A=0$ case, the lowest root of $\Delta(\fg_{1_i,0})$ is $\alpha_i$, and clearly $\alpha_i \not\in \cT_2$. We have $i \not\in I_\mu$ iff $\alpha_i \in \cT_1$. Thus, claim (b) follows from:
\begin{align} \label{E:a1}
\Delta(\fa_1) = \bigcup_{i \in I_\mu} \Delta(\fg_{1_i,0}).
\end{align}
Let $j \geq 2$ and $\gamma \in \Delta(\fa_j) \subset \Delta(\fg_j)$. Then $\gamma = \alpha + \beta$ for some $\alpha \in \Delta(\fg_{j-1})$ and $\beta \in \Delta(\fg_1)$, since $\fg_1$ is bracket-generating in $\fg_+$. By definition, $\fa_j = \{ X \in \fg_j \mid [X,\fg_{-1}] \subset \fa_{j-1} \}$, so $\alpha = \gamma - \beta \in \Delta(\fa_{j-1})$. Since $\fa$ is graded, then $[\fa_j, \fa_{-(j-1)}] \subset \fa_1$, so $\beta = \gamma - \alpha \in \Delta(\fa_1)$. Using (b), we obtain (c).
\end{proof}
\begin{thm} \label{T:DD} If $\fg$ is complex \ss, $\fp \subset \fg$ a parabolic subalgebra, $-\mu$ is a $\fp$-dominant weight with $I_\mu \neq \emptyset$, and $\fa = \fa(\mu)$, then
\begin{enumerate}
\item[\rm (a)] $\fa_1$ generates $\fa_j$ for all $j \geq 1$ under the Lie bracket.
\item[\rm (b)] $\Delta(\fa_j) = \{ \alpha \in \Delta(\fg_j) \mid Z_{I_\mu}(\alpha) = j,\, Z_{J_\mu}(\alpha) = 0 \}$ for $j \geq 1$.
\end{enumerate}
\end{thm}
\begin{proof} The base case $j=1$ is trivial for (a), while for (b) it follows from part (b) of Lemma \ref{L:DD-a1}. Assume the induction hypothesis for both (a) and (b) for $1 \leq j \leq r$. By part (c) of Lemma \ref{L:DD-a1},
\begin{align} \label{E:a1-bracket}
\Delta(\fa_{r+1}) \subseteq \Delta(\fa_r) \,\dot{+}\, \Delta(\fa_1) \subseteq \{ \alpha \in \Delta(\fg_{r+1}) \mid Z_{I_\mu}(\alpha) = r+1,\, Z_{J_\mu}(\alpha) = 0 \} =: \cS_{r+1}.
\end{align}
Conversely, let $\gamma \in \cS_{r+1}$ and $\beta \in \Delta(\fg_1)$. If $\alpha = \gamma - \beta \in \Delta$, then $\alpha \in \Delta(\fg_r)$. Since $Z_{I\backslash I_\mu}(\gamma) = 0$, then $Z_{I_\mu}(\beta) = 1$ and $Z_{J_\mu}(\beta) = 0$, so $\beta \in \Delta(\fa_1)$. Thus, $Z_{I_\mu}(\alpha) = r$ and $Z_{J_\mu}(\alpha) = 0$, so by the induction hypothesis, $\alpha \in \Delta(\fa_r)$, and $\gamma \in \Delta(\fa_{r+1})$. Hence, the inclusions in \eqref{E:a1-bracket} are equalities.
\end{proof}
To the data $(\fg,\fp,\mu)$ when $I_\mu \neq \emptyset$, we associate a {\em reduced geometry} $(\overline\fg,\overline\fp)$ via the following recipe:
\begin{framed}
\begin{recipe} \label{R:red-geom} On $\cD(\fg,\fp,\mu)$, remove all nodes corresponding to $I \backslash I_\mu$ and $J_\mu$, and any edges connected to these nodes. In the resulting diagram, remove any connected components which do not contain an $I_\mu$ node. This defines the Dynkin diagram $\cD(\overline\fg,\overline\fp)$ for the reduced geometry $(\overline\fg, \overline\fp)$.
\end{recipe}
\end{framed}
It is clear that the data $\Delta(\fa_+)$ for $(\fg,\fp,\mu)$, and $\Delta(\overline\fg_+)$ for $(\overline\fg,\overline\fp)$ naturally correspond. We use Recipes \ref{R:dim}, \ref{R:a0}, and \ref{R:red-geom} to compute $\dim(\fg_-), \dim(\fa_0)$, and $\dim(\fa_+)$ respectively; their sum is $\dim(\fa)$.
\begin{example}[$E_8 / P_8$] \label{ex:E8P8-2} From Example \ref{ex:E8P8}, $I_w = \emptyset$, so $\fa_+ = 0$. Hence, $\dim(\fa) = 57 + 90 = 147$.
\end{example}
\begin{example}[$A_\rkg / P_1 \times A_\rkg / P_\rkg$] Let us ornament indices in the first and second factors by primes and double primes, respectively. Consider the Type II case where $w = (1',\rkg'')$, $\lambda' := \lambda_{1'} + \lambda_{\rkg'}$ (i.e.\ the highest weight in the first $A_\rkg$ factor) and $\mu = -w\cdot \lambda'$. Then $\bbV_\mu$ is
\begin{align*}
\bbV_{-\sr_{1'} \cdot \lambda'} \boxtimes \bbV_{-\sr_{\rkg''} \cdot 0''}
&= \projwts{xswwws}{-3,2,0,0,0,1} \boxtimes \projwts{wwwwsx}{0,0,0,0,1,-2}.
\end{align*}
Since $I_\mu = \emptyset$, then $\fa(\mu)$ has no positive part. We have $\fp^{\opn}_\mu \cong \fp_{1',(\rkg-1)'} \times \fp_{(\rkg-1)''} \subset A_{\rkg-1} \times A_{\rkg-1}$, and
\begin{align*}
\dim(\fa) &= \dim(\fg_-) + \dim(\fa_0) = \dim(\fg_-) + 1 + \dim(\fp^{\opn}_\mu) \\
&= 2\rkg + 1 + \half( \rkg^2 - 1 + 2 + (\rkg-2)^2 - 1) + \half( \rkg^2 - 1 + 1 + (\rkg-1)^2 - 1)\\
&= 2\rkg + 1 + (\rkg^2 - 2\rkg + 2) + ( \rkg^2 - \rkg ) = 2\rkg^2 - \rkg + 3.
\end{align*}
\end{example}
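The final formula $2\rkg^2 - \rkg + 3$ can be checked by recomputing the two parabolic summands for a range of ranks (a verification sketch; `dim_A` encodes $\dim \fsl(n{+}1) = (n{+}1)^2 - 1$):

```python
# Check dim(a) = dim(g_-) + 1 + dim(p^op) = 2l^2 - l + 3 for
# A_l/P_1 x A_l/P_l in the Type II case w = (1', l'').
def dim_A(n):        # dim sl(n+1) = (n+1)^2 - 1
    return (n + 1) ** 2 - 1

for l in range(3, 12):
    dim_p1 = (dim_A(l - 1) + 2 + dim_A(l - 3)) // 2   # p_{1,l-1} in A_{l-1}
    dim_p2 = (dim_A(l - 1) + 1 + dim_A(l - 2)) // 2   # p_{l-1} in A_{l-1}
    total = 2 * l + 1 + dim_p1 + dim_p2
    assert total == 2 * l**2 - l + 3
```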
\begin{example}[$C_6 / P_{1,2,5}$, $w = (21)$] Applying Recipe \ref{R:red-geom} to Example \ref{E:NPRC}, we obtain:
\[
\NPRC \quad\leadsto\quad \Aone{x}{}\quad \Cthree{wxw}{}, \quad\mbox{ i.e. }\quad \bar\fg / \bar\fp = A_1 / P_1 \times C_3 / P_2.
\]
Since $(\bar\fg, \bar\fp)$ is 2-graded, $\dim(\bar\fg_1) = 5$, and $\dim(\bar\fg_2) = 3$, then $\dim(\fa_1) = 5$ and $\dim(\fa_2) = 3$. Indeed,
\begin{align*}
\Delta(\fa_1) : &\quad \alpha_1, \quad \alpha_5, \quad \alpha_4 + \alpha_5, \quad \alpha_5 + \alpha_6, \quad \alpha_4 + \alpha_5 + \alpha_6;\\
\Delta(\fa_2) : &\quad 2\alpha_5 + \alpha_6, \quad \alpha_4 + 2\alpha_5 + \alpha_6, \quad 2\alpha_4 + 2\alpha_5 + \alpha_6.
\end{align*}
We have $\dim(\fg_-) = \half( \dim(C_6) - 3 - \dim(A_2 \times A_1) ) = \half ( 78 - 3 - 11 ) = 32$. Since $\fp^{\opn}_w \cong \fp_1 \times A_1 \subset A_2 \times A_1$, then $\dim(\fa_0) = 2 + \dim(\fp^{\opn}_w) = 11$. Thus, $\dim(\fa) = 32 + 11 + 8 = 51$.
\end{example}
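The root counts above can be verified by brute force (a sanity check, not part of the argument): enumerate the $36$ positive roots of $C_6$ in simple-root coordinates; by Theorem \ref{T:DD}, $\Delta(\fa_j)$ consists of the roots with zero coefficient over the removed nodes $2$ and $3$ (cf. Recipe \ref{R:red-geom}) whose $\alpha_1$- and $\alpha_5$-coefficients sum to $j$:

```python
# Enumerate the positive roots of C6 as coefficient tuples over the simple
# roots a1..a6 (a_k = e_k - e_{k+1} for k <= 5, a6 = 2*e_6), then count
# Delta(a_1), Delta(a_2) for C6/P_{1,2,5}, w = (21).
def c6_positive_roots():
    roots = []
    for i in range(1, 7):
        for j in range(i + 1, 7):      # eps_i - eps_j
            roots.append(tuple(1 if i <= k < j else 0 for k in range(1, 7)))
        for j in range(i + 1, 7):      # eps_i + eps_j
            roots.append(tuple((1 if i <= k < j else 0)
                               + (2 if j <= k <= 5 else 0)
                               + (1 if k == 6 else 0) for k in range(1, 7)))
        roots.append(tuple((2 if i <= k <= 5 else 0) + (1 if k == 6 else 0)
                           for k in range(1, 7)))   # 2*eps_i
    return roots

pos = c6_positive_roots()
assert len(pos) == 36                               # dim C6 = 78 = 2*36 + 6

def dim_a(j):   # roots supported away from nodes 2, 3 with Z_{1}+Z_{5} = j
    return sum(1 for a in pos if a[1] == a[2] == 0 and a[0] + a[4] == j)

assert (dim_a(1), dim_a(2), dim_a(3)) == (5, 3, 0)
assert sum(1 for a in pos if a[0] + a[1] + a[4] >= 1) == 32   # dim(g_-)
```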
In fact, this last example is rather exceptional: in Theorem \ref{T:NPR}, we show that if $\fg$ is simple and $w \in W^\fp_+(2)$, then $\fa(w)$ almost always has $\fa_2 = 0$.
\subsection{Prolongation-rigidity} \label{S:PR}
\begin{defn} \label{D:PR} If $\fg$ is complex \ss and $-\mu$ is $\fp$-dominant, then $(\fg,\fp,\mu)$ (or $(\fg,\fp,w)$ with $\mu = -w\cdot \lambda_\fg$ if $\fg$ is simple) is {\em prolongation-rigid} (PR) if $\fa^\phi_+ = 0$ for any {\em nonzero} $\phi \in \bbV_{\mu}$. Here, $\mu$ is not assumed to have positive homogeneity. Otherwise, it is {\em non-prolongation-rigid} (NPR).
If $\fg$ is real or complex, $(\fg,\fp)$ is {\em \PRp} if $\fa^\phi_+ = 0$ for any {\em nonzero} $\phi \in H^2_+(\fg_-,\fg)$. Otherwise, it is {\em N\PRp}.
\end{defn}
\begin{remark} \label{RM:PR} The following observations are immediate:
\Ben
\item $(\fg,\fp)$ is \PRp if and only if $(\fg,\fp,\mu)$ is PR for all $\bbV_\mu \subset H^2_+(\fg_-,\fg)$.
\item Any Yamaguchi-rigid $(\fg,\fp)$ has $H^2_+(\fg_-,\fg) = 0$, so is (vacuously) \PRp.
\item If $(\fg,\fp)$ is real and is N\PRp, then its complexification $(\fg_\bbC,\fp_\bbC)$ will also be N\PRp.
\Een
\end{remark}
Let $\fg$ be complex \ss. An immediate consequence of \eqref{E:a1} is:
\begin{cor} \label{C:PRw} If $-\mu$ is $\fp$-dominant, then $(\fg,\fp,\mu)$ is PR iff $I_\mu = \emptyset$.
\end{cor}
Equivalently, in terms of our augmented marked Dynkin diagram notation,
\begin{framed}
\begin{recipe} \label{R:squares} $(\fg,\fp,\mu)$ is PR iff $\cD(\fg,\fp,\mu)$ has no squares.
\end{recipe}
\end{framed}
\begin{example}[$G_2 / P_1$] From Example \ref{EX:G2P1-1}, $W_+^\fp(2) = \{ (12) \}$, and $(12) \cdot \lambda_2 = \Gdd{xw}{-8,4}$. The coefficient over the cross is nonzero, i.e.\ no squares, and so $G_2 / P_1$ is \PRp.
\end{example}
\begin{example}[$F_4 / P_{1,2}$] Since $F_4 / P_{1,2}$ is Yamaguchi-rigid (see Theorem \ref{T:Y-rigid}), it is \PRp. However, $w=(21) \in W^\fp(2) \backslash W^\fp_+(2)$ gives $w \cdot \lambda_1 = \Fdd{xxsw}{0,-4,6,0}$, so the triple $(\fg,\fp,w)$ is NPR. Its reduced geometry is $A_1/P_1$, so $\fa = \fa(w)$ has 1-dimensional $\fa_{+}=\fa_1$.
\end{example}
Now restrict to $\mu = -w\cdot \lambda$, where $\lambda$ is the highest weight of a simple ideal in $\fg$ and $w = (jk) \in W^\fp(2)$. Let $r_a := \langle \lambda, \alpha_a^\vee \rangle$ and $i \in I$. Then $i \in I_\mu$ is equivalent (using \eqref{E:w-lambda} and Recipe \ref{R:Hasse}) to
\begin{align} \label{E:PR-a}
0 = r_i - (r_j + 1 ) c_{ji} - (r_k +1) (c_{ki} - c_{kj} c_{ji}),
\end{align}
where $i,j \in I$, and $k \in (I \cup \cN(j)) \backslash \{ j \}$. Note $r_a \geq 0$ for all $a$ since $\lambda$ is $\fg$-dominant. Hence,
\begin{itemize}
\item If $i=j$, then $c_{ji} = 2$ and $c_{kj} \leq 0$ (since $k \neq j$). So \eqref{E:PR-a} implies $0 = r_j + 2 - (r_k +1) c_{kj} \geq 2$,
a contradiction. So assume $i \neq j$ below, i.e.\ $c_{ij}, c_{ji} \leq 0$.
\item If $k \in \cN(j) \backslash I$ or $i,j,k\in I$ are distinct, then $c_{ki}, c_{kj} \leq 0$, so \eqref{E:PR-a} implies
\begin{align*}
0 &= \underbrace{r_i}_{\geq 0} - \underbrace{(r_j + 1)}_{\geq 1} \underbrace{c_{ji}}_{\leq 0} - \underbrace{(r_k +1)}_{\geq 1} (\underbrace{c_{ki} - c_{kj} c_{ji}}_{\leq 0}) \qRa r_i = c_{ji} = c_{ki} = 0.
\end{align*}
\item If $k=i$, then \eqref{E:PR-a} implies $(r_i+1) (2 - c_{ij} c_{ji}) = r_i - (r_j+1) c_{ji} \geq 0$. Thus, $0 \leq c_{ij} c_{ji} \leq 2$. This forces $c_{ij} c_{ji} = 1$, so $c_{ij} = c_{ji} = -1$, and \eqref{E:PR-a} reduces to $r_j = 0$.
\end{itemize}
Summarizing, \eqref{E:PR-a} is equivalent to:
\begin{align}
\l\{\begin{array}{rll}
{\rm (a):} & |I| \geq 2, \quad i,j \in I \mbox{ distinct}, & k \in \cN(j) \backslash I, \quad 0 = r_i = c_{ji} = c_{ki}; \quad\mbox{ or},\\
{\rm (b):} & |I| \geq 3, \quad i,j,k \in I \mbox{ distinct}, & 0 = r_i = c_{ji} = c_{ki}; \quad\mbox{ or},\\
{\rm (c):} & |I| \geq 2, \quad i,j \in I \mbox{ distinct}, & k=i,\quad c_{ij} = c_{ji} = -1,\quad r_j = 0.
\end{array} \r. \tag{F.1} \label{E:F1}
\end{align}
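To illustrate \eqref{E:PR-a} and the trichotomy above on a concrete triple (a verification sketch using the $C_6$ Cartan pairings $c_{ji} = \langle \alpha_j, \alpha_i^\vee \rangle$), take $C_6/P_{1,2,5}$ with $w = (21)$ from Example \ref{E:NPRC}: then $i=1$ realizes case (c), $i=5$ realizes case (b), and the computation recovers $I_w = \{1,5\}$:

```python
# Evaluate (E:PR-a) for C6/P_{1,2,5}, w = (21): j = 2, k = 1, and
# lambda = 2*lambda_1, so r = (2,0,0,0,0,0).  Conventions: c[j][i] =
# <alpha_j, alpha_i^vee>, with alpha_6 the long simple root of C6.
n = 6
c = [[0] * (n + 1) for _ in range(n + 1)]   # 1-indexed Cartan pairings
for i in range(1, n + 1):
    c[i][i] = 2
for i in range(1, n - 1):                   # adjacent short roots 1..5
    c[i][i + 1] = c[i + 1][i] = -1
c[5][6], c[6][5] = -1, -2                   # <alpha_6, alpha_5^vee> = -2

r = [0, 2, 0, 0, 0, 0, 0]                   # r[0] unused
j, k = 2, 1

def lhs(i):    # right-hand side of (E:PR-a); vanishes iff i is in I_mu
    return r[i] - (r[j] + 1) * c[j][i] - (r[k] + 1) * (c[k][i] - c[k][j] * c[j][i])

I_w = [i for i in (1, 2, 5) if lhs(i) == 0]
assert I_w == [1, 5]
assert lhs(2) == -5      # node 2 stays out of I_mu
```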
\begin{prop} \label{P:PR} Let $\fg$ be complex semisimple and $\lambda$ the highest weight of a simple ideal in $\fg$. Let $w = (jk) \in W^\fp(2)$ and $\mu = -w\cdot \lambda$. Then $(\fg,\fp,\mu)$ is PR if: (a) $|I_\fp| = 1$, or (b) $I_\fp = \{ j,k \}$ and $c_{jk} =0$.
\end{prop}
\begin{proof}
In both cases, it is impossible to satisfy \eqref{E:F1}.
\end{proof}
Now, impose regularity \eqref{E:reg} in each case to obtain:
\begin{align}
\l\{\begin{array}{rll}
{\rm (a):} & Z(\lambda) \leq r_j - (r_k +1) c_{kj};\\
{\rm (b):} & Z(\lambda) \leq r_j + (r_k +1)(1 - c_{kj});\\
{\rm (c):} & Z(\lambda) \leq 2(r_i +1).
\end{array} \r. \tag{F.2} \label{E:F2}
\end{align}
\begin{example} \label{ex:NPR-ss}
Suppose $\fg = \fg' \times \fg''$ is the decomposition into simple ideals, with $I_{\fp'} \neq \emptyset$. If $w' = (j'k') \in W^{\fp'}_+(2)$, then $(\fg,\fp,-w'\cdot \lambda_{\fg'})$ is NPR (provided $I_{\fp''} \neq \emptyset$). Namely, choosing $i \in I_{\fp''}$, we can satisfy both \eqref{E:F1}(a) and \eqref{E:F2}(a). The cohomology component corresponding to $(\fg,\fp,-w'\cdot \lambda_{\fg'})$ lies inside $H^2_+(\fg_-',\fg') \boxtimes \bbC$, so $\fa^\phi_+ \supset \fg_+''$. Thus, in the general \ss case, $(\fg,\fp)$ is always N\PRp.
\end{example}
If $\fg$ is {\em simple}, the N\PRp condition on $(\fg,\fp)$ is restrictive. In particular, since the highest weight $\lambda_\fg$ is also the highest root, then $\lambda_\fg = \sum_{a=1}^\rkg m_a \alpha_a$ with $m_a \geq 1$, so we must have $|I| \leq Z(\lambda_\fg)$.
\begin{cor} \label{C:g-PR} Let $\fg$ be complex simple. Then $(\fg,\fp)$ is \PRp if: {\rm (a)} $\fg$ is 1-graded; {\rm (b)} $\fg$ has a contact gradation; or {\rm (c)} $\fg$ is a Lie algebra of exceptional type.
\end{cor}
\begin{proof}
All Yamaguchi-rigid $(\fg,\fp)$ are \PRp. By Proposition \ref{P:PR}, it suffices to consider those Yamaguchi-nonrigid $(\fg,\fp)$ with $\fp$ neither maximal nor of the form $\fp_{j,k}$ with $c_{jk} = 0$. Among 1-gradings and contact gradings, this leaves $A_2 / P_{1,2}$ (see Theorem \ref{T:Y-pr-thm}), and among Lie algebras of exceptional type, this leaves $G_2 / P_{1,2}$ (see Theorem \ref{T:Y-rigid}). Both fail \eqref{E:F1}.
\end{proof}
Hence, over $\bbR$ or $\bbC$, almost all known gap results, cf. Table \ref{F:known-submax}, have been for \PRp geometries. (We note $C_2 / P_{1,2}$ also fails \eqref{E:F1}.) The only exception is the geometry of pairs of 2nd order ODEs $(A_3 / P_{1,2})$, which is N\PRp (Section \ref{S:2-ODE}). In the next section, we classify all N\PRp $(\fg,\fp)$ when $\fg$ is simple.
\subsection{Classical flag varieties}
We now analyze \eqref{E:F1} and \eqref{E:F2} from Section \ref{S:PR} when $G/P_I$ is a classical complex flag variety (so $G$ is a classical complex {\em simple} Lie group), and $|I| \geq 2$. Our labelling (a), (b), (c) below corresponds to the cases in \eqref{E:F1} and \eqref{E:F2}. Note that we only list $G/P$ up to Dynkin diagram symmetries, e.g. $D_\rkg / P_{\rkg-1} \cong D_\rkg / P_\rkg$.
\subsubsection{$B_\rkg$ or $D_\rkg$ case} Here, $c_{kj} \in \{ 0,-1\}$ iff $\fg = D_\rkg$. Also, $0 \leq r_j + r_k \leq 1$ (since $j \neq k$) using\footnote{We treat $B_2 = C_2$ together with the general $C_\rkg$ case.}
\begin{align} \label{E:BD-lambda}
\lambda = \lambda_2 = \l\{ \begin{array}{lll}
\alpha_1 + 2\alpha_2 + ... + 2\alpha_\rkg, & \fg = B_\rkg & (\rkg \geq 3);\\
\alpha_1 + 2\alpha_2 + ... + 2\alpha_{\rkg-2} + \alpha_{\rkg-1} + \alpha_\rkg, & \fg = D_\rkg & (\rkg \geq 4).
\end{array} \r.
\end{align}
\begin{enumerate}[(a)]
\item Assume $c_{kj} = -2$, so $\fg = B_\rkg$ and $(k,j) = (\rkg-1,\rkg)$. From \eqref{E:F1}, $c_{ki} = 0$, so $\rkg \geq 4$, $r_j = r_k = 0$. From \eqref{E:F2}, $|I| = Z(\lambda) = 2$, which contradicts \eqref{E:BD-lambda}. Thus, $c_{kj} \in \{ 0, -1 \}$. Then $r_j - (r_k+1) c_{kj} \leq 2$, so from \eqref{E:F2}, $|I| = Z(\lambda) = 2$ and exactly one of $j,k$ is $2$. If $\fg = B_\rkg$ or $j=2$, then $Z(\lambda) \geq 3$, so $\fg = D_\rkg$ and $k=2 \not\in I$. Hence, \framebox{$D_\rkg / P_{1,\rkg}$, $\rkg \geq 5$; $w = (12)$; $i=\rkg$}
\item Assume $\fg = B_\rkg$. From \eqref{E:F2}, $|I| \geq 3$, so $Z(\lambda) \geq 5$. But since $0 \leq r_j + r_k \leq 1$ and $c_{kj} \leq -1$, then $r_j + (r_k+1)(1-c_{kj}) \geq 5$ only if $r_j = 0$, $r_k = 1$, $c_{kj} = -2$, and hence $(k,j) = (\rkg-1,\rkg)$. From \eqref{E:F1}, $c_{ki} = 0$, so $\rkg \geq 4$, $r_j = r_k = 0$. From \eqref{E:F2}, $Z(\lambda) = 3$, a contradiction.
Thus, $\fg = D_\rkg$. Assuming $r_j = 1$, then $r_k = 0$ and \eqref{E:F2} implies $|I|=Z(\lambda) = 3$. But $j = 2\in I$, so by \eqref{E:BD-lambda}, $Z(\lambda) \geq 4$, a contradiction. Hence, $r_j = 0$, and \eqref{E:F2} implies $r_k = 1$, $c_{kj} = -1$, and $3 \leq |I| \leq Z(\lambda) \leq 4$. From \eqref{E:BD-lambda}, $I = \{ 1,2, \rkg \}$, so: \framebox{$D_\rkg / P_{1,2,\rkg}$, $\rkg \geq 5$; $w = (12)$; $i=\rkg$}
\item Assume $r_i = 0$, so from \eqref{E:F2}, $|I| = Z_I(\lambda) = 2$. Using \eqref{E:BD-lambda}, this is impossible given $c_{ij} = -1$ by \eqref{E:F1}. Thus, $r_i = 1$, and $2 \leq |I| \leq Z_I(\lambda) \leq 4$. Keeping in mind $c_{ij} = -1$, we have:
\begin{itemize}
\item \framebox{$B_\rkg / P_{1,2}$, $\rkg \geq 3$; $w = (12)$; $i=2$} and \framebox{$D_\rkg / P_{1,2}$, $\rkg \geq 4$; $w = (12)$; $i=2$}
\item \framebox{$B_\rkg / P_{2,3}$, $\rkg \geq 4$; $w = (32)$; $i=2$} and \framebox{$D_\rkg / P_{2,3}$, $\rkg \geq 4$; $w = (32)$; $i=2$}
\item \framebox{$D_\rkg / P_{1,2,\rkg}$, $\rkg \geq 5$; $w = (12)$; $i=2$}. Also, \framebox{$D_4 / P_{1,2,4}$; $w = (12)$ or $(42)$; $i=2$}
\end{itemize}
\end{enumerate}
\subsubsection{$C_\rkg$ case} Here, $c_{kj} \in \{ 0,-1, -2\}$ with $c_{kj} = -2$ iff $(k,j)=(\rkg,\rkg-1)$. Also, $0 \leq r_j + r_k \leq 2$,
\begin{align} \label{E:C-lambda}
\lambda = 2\lambda_1 = 2\alpha_1 + ... + 2\alpha_{\rkg-1} + \alpha_\rkg \quad (\rkg \geq 2).
\end{align}
\begin{enumerate}[(a)]
\item Assuming $c_{kj} = -2$, then $(k,j)= (\rkg,\rkg-1)$. Since $i,j,k$ are distinct, $c_{ji} = 0$, $r_i = 0$, then $\rkg \geq 5$ and $r_j = r_k = 0$. From \eqref{E:F2}, $|I|= Z(\lambda) = 2$, but $i,j \in I\backslash \{ \rkg \}$ (distinct) implies $Z(\lambda) \geq 4$ (contradiction). Thus, $c_{kj} = -1$. From \eqref{E:F2}, \eqref{E:C-lambda}, $|I| = 2$, $Z(\lambda) = 3$ with $j=1$ or $k=1$.
\begin{enumerate}
\item $(j,k) = (1,2)$: Since $c_{ki} = 0$, then \framebox{$C_\rkg / P_{1,\rkg}$, $\rkg \geq 4$; $w = (12)$; $i=\rkg$}.
\item $(k,j) = (1,2)$: Since $c_{ji} = 0$, then \framebox{$C_\rkg / P_{2,\rkg}$, $\rkg \geq 4$; $w = (21)$; $i=\rkg$}.
\end{enumerate}
\item Assume $c_{kj} = -2$, so $(k,j) = (\rkg,\rkg-1)$ and since $r_i = c_{ji} = c_{ki} = 0$, then $2 \leq i \leq \rkg-3$, and $r_j = r_k = 0$. From \eqref{E:F2}, $|I| = Z(\lambda) = 3$, but then $Z(\lambda) \geq 5$ by \eqref{E:C-lambda} (contradiction). Similarly, $c_{kj} = 0$ yields $|I| = Z(\lambda) = 3$ and the same contradiction. Thus,
\[
c_{kj} = -1, \qquad 3 \leq |I| \leq Z(\lambda) \leq r_j + 2(r_k+1),
\]
with either $(j,k)=(1,2)$ or $(2,1)$. In the former case, $Z(\lambda) \leq 4$, but for $|I| \geq 3$, $Z(\lambda) \geq 5$, a contradiction. Thus, $(j,k) = (2,1)$ and $Z(\lambda) \leq 6$, so $|I| = 3$. Keeping in mind $c_{ji} = c_{ki} = 0$, we have: \framebox{$C_\rkg / P_{1,2,s}$, with $4 \leq s \leq \rkg$, $w = (21)$, $i=s$}.
\item Since $c_{ij} = -1$, then $\rkg \geq 3$. From \eqref{E:F2} and \eqref{E:C-lambda}, we cannot have $r_i=0$. Thus, $r_i=2$, $i=1$, $2 \leq |I| \leq Z(\lambda) \leq 6$ and hence $|I| \leq 3$. Since $c_{ij} = c_{ji} = -1$, then $j=2$. Two possibilities:
\begin{enumerate}
\item \framebox{$C_\rkg / P_{1,2}$, $\rkg \geq 3$; $w = (21)$; $i=1$}
\item \framebox{$C_\rkg / P_{1,2,s}$, $3 \leq s \leq \rkg$; $w = (21)$; $i=1$}
\end{enumerate}
\end{enumerate}
\subsubsection{$A_\rkg$ case}
Here, $c_{kj} \in \{ 0,-1\}$ and $0 \leq r_j + r_k \leq 2$ (since $j \neq k$) using
\[
\lambda = \lambda_1 + \lambda_\rkg = \alpha_1 + ... + \alpha_\rkg \quad (\rkg \geq 1).
\]
\begin{enumerate}[(a)]
\item We have $\rkg \geq 3$ and $c_{kj} = -1$, so from \eqref{E:F2}, $2 \leq |I| = Z_I(\lambda) \leq r_j + r_k + 1$.
\begin{enumerate}[(i)]
\item $r_j = 1$: We may take $j=1$, so $k=2 \not\in I$, $r_k = 0$, and $|I| = Z_I(\lambda) = 2$. Since $r_i = 0$ and $c_{ji} = c_{ki} = 0$, then $4 \leq i \leq \rkg-1$. Thus, \framebox{$A_\rkg / P_{1,s}$, $4 \leq s \leq \rkg-1$; $w = (12)$; $i=s$}.
\item $r_j = 0$: We have $2 \leq |I| = Z_I(\lambda) \leq 1 + r_k \leq 2$, so $r_k = 1$ and $|I| = 2$. We may take $k=1 \not\in I$, and $j=2 \leq \rkg-1$. Since $c_{ji} = 0$, then: \framebox{$A_\rkg / P_{2,s}$, $4 \leq s \leq \rkg-1$; $w = (21)$; $i=s$}.
\end{enumerate}
\item From \eqref{E:F2}, $3 \leq |I| = Z_I(\lambda) \leq r_j + (r_k +1) (1 - c_{kj})$.
\begin{enumerate}
\item $c_{kj} = 0$: We have $\rkg \geq 3$ and $|I| = Z_I(\lambda) = 3$ with $r_j=r_k=1$, so we may take $(j,k) = (1,\rkg)$. Recalling $c_{ji} = c_{ki} = 0$, we have: \framebox{$A_\rkg / P_{1,s,\rkg}$, $3 \leq s \leq \rkg-2$; $w = (1,\rkg)$; $i=s$}.
\item $c_{kj} = -1$: We cannot have $r_j = r_k = 0$, nor $r_j = r_k = 1$ (since $\rkg \geq 3$). Recalling $r_i = c_{ji} = c_{ki} = 0$,
\begin{enumerate}
\item $(r_j,r_k) = (1,0)$: $|I| = 3$, \framebox{$A_\rkg / P_{1,2,s}$, $4 \leq s \leq \rkg-1$; $w = (12)$; $i=s$}
\item $(r_j,r_k) = (0,1)$:
\begin{enumerate}
\item $|I| = 3$, \framebox{$A_\rkg / P_{1,2,s}$, $4 \leq s \leq \rkg-1$; $w = (21)$; $i=s$}.
\item $|I| = 4$, \framebox{$A_\rkg / P_{1,2,s,t}$, $4 \leq s<t \leq \rkg$; $w = (21)$; $i=s$ or $i=t \leq \rkg-1$}.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\item From \eqref{E:F1} and \eqref{E:F2}, $c_{ij} = c_{ji} = -1$, $r_j = 0$, $2\leq |I| = Z_I(\lambda) \leq 2(r_i + 1)$.
\begin{enumerate}
\item $r_i = 0$: $|I| = 2$, \framebox{$A_\rkg / P_{s,s+1}$, $2 \leq s \leq \rkg-2$; $w = (jk)$, $i=k$ with $\{ j,k \} = \{ s, s+1 \}$}.
\item $r_i = 1$: $2 \leq |I| \leq 4$.
\begin{enumerate}
\item $|I| = 2$: \framebox{$A_\rkg / P_{1,2}$, $\rkg \geq 3$; $w = (21)$; $i=1$}
\item $|I| = 3$: \framebox{$A_\rkg / P_{1,2,s}$, $3 \leq s \leq \rkg$; $w = (21)$; $i=1$}
\item $|I| = 4$: \framebox{$A_\rkg / P_{1,2,s,t}$, $3 \leq s < t \leq \rkg$; $w = (21)$; $i=1$}
\end{enumerate}
\end{enumerate}
\end{enumerate}
We summarize our classification in Table \ref{F:NPR}. (More detail is given in Appendix \ref{App:Submax}.)
\begin{table}[h]
\begin{tabular}{cc} $\begin{array}{|c|c|c|} \hline
G & P & \mbox{Range} \\ \hline\hline
A_\rkg & P_{1,2} & \rkg \geq 3 \\
& P_{1,2,s} & 3 \leq s \leq \rkg\\
& P_{1,2,s,t} & 3 \leq s < t \leq \rkg\\
& P_{1,s,\rkg} & 3 \leq s \leq \rkg-2 \\
& P_{i,i+1} & 2 \leq i \leq \rkg-2\\
& P_{1,i} & 4 \leq i \leq \rkg-1 \\
& P_{2,i} & 4 \leq i \leq \rkg-1 \\ \hline
B_\rkg & P_{1,2} & \rkg \geq 3 \\
& P_{2,3} & \rkg \geq 4 \\ \hline
\end{array}$ &
$\begin{array}{|c|c|c|} \hline
G & P & \mbox{Range} \\ \hline\hline
C_\rkg & P_{1,2} & \rkg \geq 3 \\
& P_{1,\rkg} & \rkg \geq 4 \\
& P_{2,\rkg} & \rkg \geq 4 \\
& P_{1,2,s} & 3 \leq s \leq \rkg\\ \hline
D_\rkg & P_{1,2} & \rkg \geq 4 \\
& P_{1,\rkg} & \rkg \geq 5 \\
& P_{2,3} & \rkg \geq 5 \\
& P_{1,2,\rkg} & \rkg \geq 4 \\ \hline
\end{array}$
\end{tabular}
\caption{Classification of N\PRp geometries $G/P$ with $G$ simple}
\label{F:NPR}
\end{table}
\subsection{Nested parabolic subalgebras}
\label{S:corr-Tanaka}
We use the notation of Section \ref{S:corr-twistor}. For nested parabolic subalgebras $\fq \subset \fp \subset \fg$ (so $I_\fp \subset I_\fq$), we have \eqref{E:pq-decomp} and \eqref{E:pq-grading}.
\begin{thm} \label{T:corr-Tanaka} Let $\fg$ be complex \ss. Let $\fq \subset \fp \subset \fg$ be parabolic subalgebras, and let $\lambda$ be the highest weight of a simple ideal in $\fg$. Let $w \in W^\fp(2) \subset W^\fq(2)$ and $\mu = -w\cdot \lambda$. Choosing root vectors in $\fg$, define $\phi_0$ as in \eqref{E:phi-0}. Let $\fa = \prn^\fg(\fp_-,\fann_{\fp_0}(\phi_0))$ and $\fb = \prn^\fg(\fq_-, \fann_{\fq_0}(\phi_0))$, which are respectively $Z_{I_\fp}$-graded and $Z_{I_\fq}$-graded. Then as $Z_{I_\fp}$-graded Lie algebras, $\fa = \fb \subset \fg$.
\end{thm}
\begin{proof}
Write $w = (jk) \in W^\fp(2) \subset W^\fq(2)$, so by Recipe \ref{R:Hasse}, $k \neq j \in I_\fp$, and if $c_{kj} = 0$, then $k \in I_\fp$. We always have $\fq \subset \fp \subset \fp_j \subset \fg$, but if $c_{kj} = 0$, we also have $ \fq \subset \fp \subset \fp_{j,k} \subset \fg$. Hence, it suffices to consider either $\fp = \fp_j$ if $c_{kj} < 0$, or $\fp = \fp_{j,k}$ if $c_{kj} = 0$. We assume this below.
By Proposition \ref{P:PR}, $(\fg,\fp,\mu)$ is PR, so $\fa = \fp_- \op \fa_0$. Thus, $I^\fp_\mu = \emptyset$. We know $\fp_- \subset \fq_-$, and by Recipe \ref{R:a0}, $\ker(\mu) \subset \fa_0 \cap \fb_0$. Let us show $\Delta(\fa_0) \subset \Delta(\fb)$.
Given $\alpha \in \Delta(\fa_0)$, we have $Z_{I_\fp}(\alpha) = 0$ and $Z_{J_\mu^\fp}(\alpha) \leq 0$. Since $\fq_- \op \fq_{0,\leq 0} \subset \fb$, we may assume $\alpha \in \Delta(\fq_{0,+} \op \fq_+) \subset \Delta^+$, hence $Z_{J_\mu^\fp}(\alpha) = 0$. (The nonzero coefficients of a root must all have the same sign.) Since $J^\fp_\mu = J^\fq_\mu \,\dot\cup\, (I_\fq \backslash (I_\fp \cup I^\fq_\mu))$, then:
\Ben[(i)]
\item $Z_{J_\mu^\fq}(\alpha) = 0$. Thus, $\alpha \not\in \Delta(\fq_{0,+})$, so $\alpha \in \Delta(\fq_+)$, i.e.\ $r = Z_{I_\fq}(\alpha) > 0$;
\item $Z_{I_\fq \backslash (I_\fp \cup I^\fq_\mu)}(\alpha) = 0$. Since $Z_{I_\fp}(\alpha) = 0$, then $Z_{I_\fq \backslash I_\mu^\fq}(\alpha) = 0$.
\Een
Therefore, $r = Z_{I_\mu^\fq}(\alpha) > 0$. By Theorem \ref{T:DD}, $\alpha \in \Delta(\fb_r)$. Thus, $\fa_0 \subset \fb$. Hence, $\fa \subset \fb$.
Conversely, let $\alpha \in \Delta(\fb)$. We show that $\alpha \in \Delta(\fa)$. We have three cases:
\Ben[(i)]
\item $Z_{I_\fp}(\alpha) < 0$: Then $\alpha \in \Delta(\fp_-) \subset \Delta(\fa)$.
\item $Z_{I_\fp}(\alpha) = 0$: Assume $Z_{J_\mu^\fp}(\alpha) > 0$, so $Z_{J^\fq_\mu}(\alpha) \geq 0$ since $J^\fq_\mu \subset J^\fp_\mu$. Hence, $\alpha \in \Delta(\fb_{\geq 0})$, so by Theorem \ref{T:DD}, $Z_{J_\mu^\fq}(\alpha) \leq 0$. Thus, $Z_{J_\mu^\fq}(\alpha) = 0$ and $Z_{J_\mu^\fp \backslash J_\mu^\fq}(\alpha) > 0$. Since $J^\fp_\mu \backslash J^\fq_\mu \subset I_\fq \backslash I^\fq_\mu$, then $Z_{I_\fq \backslash I^\fq_\mu}(\alpha) > 0$. But by Theorem \ref{T:DD}, $Z_{I_\fq \backslash I^\fq_\mu}(\alpha) = 0$, a contradiction. Thus, $Z_{J_\mu^\fp}(\alpha) \leq 0$, so $\alpha \in \Delta(\fa_0)$.
\item $Z_{I_\fp}(\alpha) > 0$: Since $I_\fp \subset I_\fq \backslash I^\fq_\mu$, then $Z_{I_\fq \backslash I^\fq_\mu}(\alpha) > 0$ and $\alpha \in \Delta(\fb_+)$. But the latter implies $Z_{I_\fq \backslash I^\fq_\mu}(\alpha) = 0$ by Theorem \ref{T:DD}, a contradiction.
\Een
Thus, $\fb = \fp_- \op \fa_0 = \fa$.
\end{proof}
\newcommand\ph{\hphantom{-1}}
\begin{example} Consider $w = (21)$ for $A_6 / P_2$, $A_6 / P_{1,2}$, $A_6 / P_{1,2,4}$, and $A_6 / P_{1,2,4,5}$. In each case, $w \in W^\fp_+(2)$, and the grading on $\fa(w)$ is given below. (Diagonal entries are suppressed.) In all four cases, we obtain the same ($Z_2$-graded) Lie algebra $\fa(w) \subset A_6 = \fsl_7(\bbC)$.
\begin{tiny}
\begin{center}
\begin{tabular}{cc}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$*$ & 0 & \ph & \ph & \ph & \ph & \ph \\ \hline
0 & $*$ & \ph & \ph & \ph & \ph & \ph \\ \hline
-1 & -1 & $*$ & \ph & \ph & \ph & \ph \\ \hline
-1 & -1 & 0 & $*$ & 0 & 0 & \ph \\ \hline
-1 & -1 & 0 & 0 & $*$ & 0 & \ph \\ \hline
-1 & -1 & 0 & 0 & 0 & $*$ & \ph \\ \hline
-1 & -1 & 0 & 0 & 0 & 0 & $*$ \\ \hline
\end{tabular} &
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
$*$ & 1 & \ph & \ph & \ph & \ph & \ph \\ \hline
-1 & $*$ & \ph & \ph & \ph & \ph & \ph \\ \hline
-2 & -1 & $*$ & \ph & \ph & \ph & \ph \\ \hline
-2 & -1 & 0 & $*$ & 0 & 0 & \ph \\ \hline
-2 & -1 & 0 & 0 & $*$ & 0 & \ph \\ \hline
-2 & -1 & 0 & 0 & 0 & $*$ & \ph \\ \hline
-2 & -1 & 0 & 0 & 0 & 0 & $*$ \\ \hline
\end{tabular} \\
$A_6 / P_2$ & $A_6 / P_{1,2}$ \\ \\
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
$*$ & 1 & \ph & \ph & \ph & \ph & \ph \\ \hline
-1 & $*$ & \ph & \ph & \ph & \ph & \ph \\ \hline
-2 & -1 & $*$ & \ph & \ph & \ph & \ph \\ \hline
-2 & -1 & 0 & $*$ & 1 & 1 & \ph \\ \hline
-3 & -2 & -1 & -1 & $*$ & 0 & \ph \\ \hline
-3 & -2 & -1 & -1 & 0 & $*$ & \ph \\ \hline
-3 & -2 & -1 & -1 & 0 & 0 & $*$ \\ \hline
\end{tabular} &
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
$*$ & 1 & \ph & \ph & \ph & \ph & \ph\\ \hline
-1 & $*$ & \ph & \ph & \ph & \ph & \ph \\ \hline
-2 & -1 & $*$ & \ph & \ph & \ph & \ph \\ \hline
-2 & -1 & 0 & $*$ & 1 & 2 & \ph \\ \hline
-3 & -2 & -1 & -1 & $*$ & 1 & \ph \\ \hline
-4 & -3 & -2 & -2 & -1 & $*$ & \ph \\ \hline
-4 & -3 & -2 & -2 & -1 & 0 & $*$ \\ \hline
\end{tabular} \\
$A_6 / P_{1,2,4}$ & $A_6 / P_{1,2,4,5}$
\end{tabular}
\end{center}
\end{tiny}
\end{example}
\subsection{Prolongation height}
\label{S:height}
The {\em depth} of the $Z$-grading on $\fa$ is the maximal $\nu \geq 0$ such that $\fa_{-\nu} \neq 0$. The {\em height} of the $Z$-grading on $\fa$ is the maximal $\tilde\nu \geq 0$ such that $\fa_{\tilde\nu} \neq 0$.
\begin{thm} \label{T:NPR} Let $\fg$ be a complex simple Lie algebra, $\fp \subset \fg$ a parabolic subalgebra. If $(\fg,\fp)$ is N\PRp, then the height of $\fa = \fa(w)$, where $w \in W^\fp_+(2)$, is always $\tilde\nu=1$, except for
\[
\begin{array}{c|c|c|c|c|c}
\mbox{Label} & G/P & \mbox{Range} & w \in W^\fp_+(2) & J_w & I_w \\ \hline
\mbox{NPR--A} & A_\rkg / P_{1,2,s,t} & 4 \leq s < t < \rkg & (21) & \{ 3, \rkg \} & \{ 1, s, t \} \\
\mbox{NPR--C} & C_\rkg / P_{1,2,s} & 4 \leq s < \rkg & (21) & \{ 3 \} & \{ 1 , s \}
\end{array}
\]
In these two exceptional cases, $\tilde\nu = 2$ and
\[
\begin{array}{c|c|c|c}
& \dim(\fa_0) & \dim(\fa_1) & \dim(\fa_2)\\ \hline
{\rm (A):} & (s-3)(s-2) + (t-s)^2 + 2 + (\rkg - t)(\rkg-t+1) & 1 + (t-s)(\rkg - 3 - t + s) & (s-3)(\rkg - t)\\
{\rm (C):} & 2 + (s-3)(s-2) + (\rkg-s)(2\rkg-2s+1)& 1 + 2(s-3)(\rkg - s) & \binom{s-2}{2}
\end{array}
\]
\end{thm}
\begin{proof}
Examine Tables \ref{F:AB} and \ref{F:CDG}. Except for NPR-A and NPR-C, $(\overline\fg,\overline\fp)$ is always 1-graded, so $\fa_2 = 0$. For NPR-A and NPR-C, $(\overline\fg,\overline\fp)$ is 2-graded, so $\fa_3 = 0$. The remaining data listed above is calculated using the Dynkin diagram recipes.
\end{proof}
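As a quick sanity check of these formulas (a direct substitution of the smallest admissible parameters), take $(\rkg,s,t) = (6,4,5)$ in case (A) and $(\rkg,s) = (5,4)$ in case (C); both give
\[
(\dim(\fa_0), \dim(\fa_1), \dim(\fa_2)) = (7,\, 3,\, 1).
\]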
If we remove the regularity assumption, higher prolongations can exist.
\begin{example} For $\rkg \geq 5$, consider $A_\rkg / B = A_\rkg / P_{1,2, \ldots, \rkg}$ and $w = (21)$. This yields
\[
\begin{tiny}
\begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\DDnode{q}{0,0}{0};
\bond{0,0};
\DDnode{x}{1,0}{-4};
\bond{1,0};
\DDnode{x}{2,0}{3};
\bond{2,0};
\DDnode{q}{3,0}{0};
\tdots{3,0};
\DDnode{q}{4,0}{0};
\bond{4,0};
\DDnode{x}{5,0}{1};
\useasboundingbox (-.4,-.2) rectangle (5.4,0.55);
\end{tikzpicture}
\end{tiny}, \qquad \mbox{homogeneity} = 5 - \rkg.
\]
Thus, $J_w = \emptyset$, $I_w = \{ 1, 4, 5, \ldots, \rkg-1 \}$, so $\bar\fg / \bar\fp \cong A_1 / P_1 \times A_{\rkg-4} / P_{1,...,\rkg - 4}$. Note $\nu = \rkg$, while $\tilde\nu = \rkg - 4$.
\end{example}
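For instance, substituting $\rkg = 7$ into the data above gives homogeneity $5 - \rkg = -2$ and
\[
\nu = 7, \qquad \tilde\nu = 3,
\]
illustrating how, once regularity is dropped, the height can exceed the values $\tilde\nu \leq 2$ occurring in Theorem \ref{T:NPR}.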
\section{Submaximal symmetry dimensions}
\label{S:main}
Define the {\em submaximal symmetry dimension} $\fS$ as the maximum $\dim(\finf(\cG,\omega))$ among all {\bf regular, normal} $G/P$ geometries $(\cG \stackrel{\pi}{\to} M, \omega)$ which are {\bf not locally flat}. (Equivalently, $\fS$ maximizes $\dim(\cS)$, where $\cS$ is the symmetry algebra of an underlying structure which is not locally flat.)
\subsection{Filtration and embedding of the symmetry algebra}
\label{S:CNK} Let us review some known results.
{\em \v{C}ap \cite{Cap2005b}, \v{C}ap--Neusser \cite{CN2009}}: Let $(\cG \stackrel{\pi}{\to} M, \omega)$ be a regular, normal $G/P$ geometry. Let $u \in \cG$, and $x = \pi(u)$. The map $\xi\mapsto\omega(\xi(u))$ defines a linear injection $\widetilde\cS := \finf(\cG, \omega) \hookrightarrow \fg$.\footnote{They state this for the global automorphism algebra $\mathfrak{aut}(\cG,\omega)$, but it is established for $\finf(\cG, \omega)$ in their proof.} Letting $\ff(u) \subset \fg$ denote its image and $\kappa$ the curvature function, the bracket on $\finf(\cG, \omega)$ is mapped to the operation
\begin{align*}
[X,Y]_{\ff(u)} := [X,Y] - \kappa_u(X,Y)
\end{align*}
on $\ff(u)$. The filtration on $\fg$ restricts to $\ff(u)$, and by regularity, $\ff(u)$ is a filtered Lie algebra (generally not a subalgebra of $\fg$). The associated-graded
\begin{align*}
\fs(u) := \gr(\ff(u)) = \bop_i \ff^i(u) / \ff^{i+1}(u) \subset \fg
\end{align*}
is a graded subalgebra. (Note that $\gr(\fg) \cong \fg$.) Moreover, $\fs_0(u) \subseteq \fann(\Kh(u)) \subseteq \fg_0$ \cite[Proof of (3) in Corollary 5]{CN2009}. Though stated for bracket-generating distributions, their argument works in general: Given $\xi \in \finf(\cG,\omega)$, if $Y = \omega(\xi(u)) \in \fp$, then the curve $\varphi_t(u) = u \cdot \exp(tY)$ has vertical tangent vector $\xi(u) = \zeta_Y(u) = \l.\frac{d}{dt}\r|_{t=0} \varphi_t(u)$. By $P$-equivariancy of $\Kh$,
\begin{align*}
0 = (\xi \cdot \Kh)(u) = \l.\frac{d}{dt}\r|_{t=0} \Kh(\varphi_t(u)) = \l.\frac{d}{dt}\r|_{t=0} \exp(-tY) \cdot (\Kh(u)) = -Y \cdot (\Kh(u)).
\end{align*}
Complete reducibility of $H^2_+(\fg_-,\fg)$ implies that this action depends only on $Y \mod \fp_+ \in \fp / \fp_+ \cong \fg_0$. If $\Kh(u) \neq 0$, Kostant's theorem (see Recipe \ref{R:a0}) yields a bound on $\dim(\fs_0(u))$.
\begin{remark} \label{RM:CN}
For non-flat geometries, \v{C}ap \& Neusser bound $\dim(\finf(\cG,\omega))$ via the maximum dimension of any proper graded subalgebra $\fb \subset \fg$ with $\dim(\fb_0)$ no larger than the Kostant bound. However, this strategy lacks finer information about the annihilating subalgebra, i.e.\ they use \eqref{E:dim-a0}, while we will additionally use \eqref{E:a0}. Thus, in general, their upper bounds will not be sharp.
\end{remark}
The filtration of $\fg$ induces a $P$-invariant filtration $\{ T^i \cG \}_{i=-\nu}^\nu$ of $T\cG$, and a filtration of $\wt\cS$:
\begin{align} \label{E:S-filtration}
\wt\cS(x)^i &= \{ \xi \in \wt\cS \mid \xi(u) \in T^i_u \cG,\,\, \forall u \in \pi^{-1}(x) \}, \qquad -\nu \leq i \leq \nu.
\end{align}
This projects to a filtration $\{ \cS(x)^i \}_{i=-\nu}^\nu$ of the symmetry algebra $\cS$ of the underlying structure on $M$. We have $\omega_u(\wt\cS(x)^i) = \ff(u)^i$, and $\fs(u) = \gr(\ff(u)) \cong \gr(\cS(x)) =: \fs(x)$ are canonically isomorphic. For bracket-generating distributions (not necessarily underlying parabolic geometries), a filtration $\{ \cS(x)^i \}$ of $\cS$ was constructed in \cite{Kru2011}. We summarize its definition and properties below.\\
{\em Kruglikov \cite{Kru2011}}: Let $\{ D^i \}_{i=-\nu}^{-1}$ be a filtration of $TM$, with $D := D^{-1}$ bracket-generating and suppose each $D^i$ has constant rank.\footnote{These conditions hold for underlying structures of parabolic geometries.} Given $x \in M$, $\fg_i(x) := D^i_x / D^{i+1}_x$ for $i < 0$. The Tanaka prolongation $\fg(x) = \prn(\fg_-(x))$ has $\fg_i(x) \hookrightarrow \Hom(\ot^{i+1} \fg_{-1}(x), \fg_{-1}(x))$ for $i \geq 0$, cf. Remark \ref{RM:Hom}.
Letting $\ev_x : \cS \to T_x M$ be the evaluation map, define $\cS(x)^i :=\ev_x^{-1}(D^i)$ for $i < 0$, and $\cS(x)^0 := \ker(\ev_x)$. Inductively, for $i \geq 0$, given $\bX\in\cS(x)^i$, there is a map $\Psi^{i+1}_\bX:\ot^{i+1}D_x\to T_xM$ given by
\begin{align*}
\Psi^{i+1}_\bX(Y_1,\dots,Y_{i+1})=\bigl[\bigl[\dots\bigl[[\bX,\bY_1],\bY_2\bigr],\dots\bigr],\bY_{i+1}\bigr](x),
\end{align*}
where $\bY_j \in \Gamma(D)$ and $\bY_j(x) = Y_j$. Moreover, $\im(\Psi^{i+1}_\bX) \subseteq D_x$. Define $\cS(x)^{i+1} :=\{\bX\in\cS(x)^i \mid \Psi_\bX^{i+1}=0\}$. Then $\cS = \cS(x)$ is a filtered Lie algebra, and $\fs_i(x) \hookrightarrow \fg_i(x)$ via $\bX \mod \cS(x)^{i+1} \mapsto \Psi^{i+1}_\bX$.
There is a filtration\footnote{In the complex setting, we replace $C^\infty(M)$ by the algebra of germs of holomorphic functions about the point $x$.} on $C^\infty(M)$ by ideals $\mu^i_x$. Let $\mu^1_x = \{ f \in C^\infty(M) \mid f(x) = 0 \}$ and for $i \geq 0$,
\begin{align*}
\mu^{i+1}_x = \{ f \in C^\infty(M) \mid (\bY_1 \cdots \bY_t \cdot f)(x) = 0, \,\forall \bY_j \in \Gamma(D), \, 0 \leq t \leq i \}.
\end{align*}
For any $\bY \in \Gamma(D)$, we have $\bY \cdot \mu^{i+1}_x \subset \mu^i_x$. Let $\{ \bZ_{jk} \}_{j=1,...,\nu}$ be a local framing near $x$ with $\bZ_{jk} \in \Gamma(D^{-j}) \backslash \Gamma(D^{-j-1})$. Let $\bX \in \cS$. Then
\begin{align} \label{E:Boris-filtration}
i \geq 0: \qquad \bX = \sum_{j,k} f_{jk} \bZ_{jk}\in\mathcal{S}(x)^i \qbox{iff} f_{jk}\in\mu_x^{i+j},\quad \forall j,k.
\end{align}
The ``only if'' direction is proved in \cite[Lemma 1]{Kru2011}, while an easy induction establishes the converse.
\begin{example}[$G_2 / P_1$] \label{EX:G2P1-S-filtration} Consider $D$ spanned by $\p_q$ and $\tilde\p_x := \p_x + p \p_y + q \p_p + (p^3+q^2) \p_z$. Then
\[
\cS: \qquad \bX_1 = \p_x, \quad \bX_2 = \p_y, \quad \bX_3 = \p_z, \quad \bX_4 = x\p_x - y\p_y - 2p\p_p - 3q\p_q - 5z\p_z.
\]
At $\bu_0 = (x_0,y_0,p_0,q_0,z_0)$, the dimensions of $\fs_i = \fs_i(\bu_0)$ are
\[
\begin{array}{c|cccc}
& \fs_{-3} & \fs_{-2} & \fs_{-1} & \fs_{0} \\ \hline
q_0 \neq 0 \mbox{ or } p_0 \neq 0 & 2 & 1 & 1 & 0\\
q_0 = p_0 = 0 & 2 & 0 & 1 & 1
\end{array}
\]
When $p_0=q_0=0$, $\cS(\bu_0)^0$ is spanned by $\bT = \bX_4 - x_0 \bX_1 + y_0 \bX_2 + 5z_0 \bX_3$. Writing $\bT = \sum_{j,k} f_{jk} \bZ_{jk}$ in the framing $(\bZ_{11}, \bZ_{12}, \bZ_{21}, \bZ_{31}, \bZ_{32}) = (\p_q, \tilde\p_x, \p_p + 2q\p_z, \p_y, \p_z)$ shows $f_{jk} \in \mu^j_{\bu_0}$, but not all $f_{jk} \in \mu^{j+1}_{\bu_0}$, so $\bT \not\in \cS(\bu_0)^1$. Note $[\bT,\p_q] = 3\p_q$, which at $\bu_0$ is not contained in $\fs_{-1}$. Thus, $[\fs_0,\fg_{-1}] \not\subset \fs_{-1}$ when $p_0=q_0=0$.
\end{example}
\begin{remark}
The above results in Kruglikov's approach still hold if $D$ admits additional structure. In the parabolic case, Kruglikov's filtration $\{ \cS(x)^i \}$ agrees with that of \v{C}ap--Neusser, and \eqref{E:Boris-filtration} gives a means to compute it explicitly. Henceforth, we will mainly rely on the \v{C}ap--Neusser presentation.
\end{remark}
\subsection{A universal upper bound}
\label{S:upper}
\begin{defn} \label{D:reg-pt}
We say that $x \in M$ is a {\em regular point} if there exists a neighbourhood $N_x \subset M$ of $x$ such that for $-\nu \leq j \leq \nu$, $\dim(\fs_j(y))$ is a constant function of $y \in N_x$. Otherwise, $x$ is {\em irregular}.
\end{defn}
We have a tower of bundles $\cG =: \cG_\nu \ra ... \ra \cG_0 \to M$, with $\cG_i = \cG / P_+^{i+1} = \cG / \exp(\fg^{i+1})$, projections $p_i : \cG \to \cG_i$ and $\pi_i : \cG_i \to M$, where $i=-1,..., \nu$. (If $i=-1$, let $p_{-1} = \pi$, i.e. the projection to $M$.) Given any $\xi \in \fX(\cG)^P$, let $\xi^{(i)} = (p_i)_* \xi \in \fX(\cG_i)$. By $P$-invariancy, $\wt\cS$ descends to $\cS^{(i)} \subset \fX(\cG_i)^{P / P^{i+1}_+}$. Given $x \in M$, the filtration $\wt\cS(x)$ projects to a filtration $\cS^{(i)}(x)$, and from \eqref{E:S-filtration}, $\cS^{(i)}(x)^{i+1} = \{ \xi^{(i)} \in \cS^{(i)} \mid \xi^{(i)}(u_i) = 0, \,\forall u_i \in \pi_i^{-1}(x) \}$. Thus, for any $u_i \in \pi_i^{-1}(x)$, $\cS^{(i)}|_{u_i} \subset T_{u_i} \cG_i$ has dimension $\dim\, \cS - \sum_{j=i+1}^\nu \dim\, \fs_j(x)$. If $x$ is a regular point, then $\cS^{(i)}$ yields a constant rank distribution on a neighbourhood of $\pi_i^{-1}(x) \subset \cG_i$. By the Frobenius theorem, there exist local rectifying coordinates and a local foliation by maximal integral submanifolds of $\cS^{(i)}$. We use this to prove the following ``filtered'' generalization of a result of Morimoto \cite[Proposition 4.1]{Mor1977}.
\begin{prop} \label{P:key-bracket} Let $x \in M$ be a regular point. Then for any $i$, $[\fs_{i+1}(u),\fg_{-1}] \subset \fs_i(u)$. In particular, for $i > 0$, the $i$-th graded component of $\prn^\fg(\fg_-,\fs_0(u))$ contains $\fs_i(u)$.
\end{prop}
If $\cS$ is {\em transitive at $x$}, i.e.\ $\ev_x(\cS) = T_x M$, then $[\fs_{i+1}(x),\fg_{-1}(x)] = [\fs_{i+1}(x),\fs_{-1}(x)] \subset \fs_i(x)$ immediately follows since $\fs(x)$ is a graded Lie algebra. Equivalently, $[\fs_{i+1}(u),\fg_{-1}] \subset \fs_i(u)$.
\begin{example} In Example \ref{EX:G2P1-S-filtration}, the regular points are precisely those with $p_0 \neq 0$ or $q_0 \neq 0$. At all irregular points, the first claim of Proposition \ref{P:key-bracket} fails.
\end{example}
\begin{proof}[Proof of Proposition \ref{P:key-bracket}]
Suppose $i \geq -1$. For any $\xi \in \wt\cS = \finf(\cG,\omega)$ and $\eta \in \fX(\cG)$, we have $0 = (\cL_\xi \omega)(\eta) = \xi \cdot \omega(\eta) - \omega([\xi,\eta])$. Let $\xi \in \wt\cS(x)^{i+1}$, and $\eta \in \Gamma(T^{-1} \cG)^P$. Let $u \in \pi^{-1}(x)$, so $X = \omega(\xi(u)) \in \ff(u)^{i+1} \subset \fg^{i+1}$, and $Y = \omega(\eta(u)) \in \fg^{-1}$. Since $i \geq -1$, then $X \in \fp$, so $X = \omega(\zeta_X(u))$, where $\zeta_X$ is a fundamental vertical vector field. By $P$-equivariancy of $\omega(\eta)$,
\begin{align*}
\omega([\xi,\eta](u)) &= (\xi \cdot \omega(\eta))(u) = (\zeta_X \cdot \omega(\eta))(u) = \l.\frac{d}{dt}\r|_{t=0} \omega(\eta)(u \cdot \exp(tX)) = -[X,Y].
\end{align*}
In particular, since $[X,Y] \in \fg^i$, then $[\xi,\eta](u) \in T^i_u \cG$.
Let $u_i = p_i(u) \in \cG_i$. Since $\xi \in \wt\cS(x)^{i+1}$, then $\xi^{(i)}(u_i) = 0$. Since $x$ is a regular point, there exist local (functionally independent) functions $F_1,..., F_{t_i}$ on $\cG_i$ (smooth in the real setting or holomorphic in the complex setting), where $t_i = \dim\,\cG_i - \sum_{j=-\nu}^i\dim\,\fs_j(x)$, whose joint level sets define the local foliation by maximal integral submanifolds of $\cS^{(i)}$. In particular, for our $\xi \in \wt\cS(x)^{i+1}$, we have $\xi^{(i)} \cdot F_j = 0$ for $j=1,..., t_i$, and since $\xi^{(i)}(u_i) = 0$, we have $ ([\xi^{(i)},\eta^{(i)}] \cdot F_j)(u_i) = 0$. Thus, $[\xi^{(i)},\eta^{(i)}](u_i) \in \cS^{(i)}|_{u_i} = (p_i)_*( \wt\cS|_u )$ is tangent to the foliation. Since any element of $\wt\cS$ is uniquely determined by its value at $u$, then $[\xi^{(i)},\eta^{(i)}](u_i) = \xi'^{(i)}(u_i)$ for some $\xi' \in \wt\cS$. Equivalently,
\[
[\xi,\eta](u) = \xi'(u) + \chi(u) \in T^i_u \cG, \qquad \xi'(u) \in \wt\cS|_u, \quad \chi(u) \in T^{i+1}_u \cG.
\]
Hence, $\omega([\xi,\eta](u)) \in \ff(u)^i + \fg^{i+1}$. Thus, $[\ff(u)^{i+1}, \fg^{-1}] \subset \ff(u)^i + \fg^{i+1}$, so $[\fs_{i+1}(u),\fg_{-1}] \subset \fs_i(u)$.
Now suppose $i \leq -2$. Since $x$ is regular, then $D^j_\cS|_y := D^j_y \cap \cS|_y$, $j \leq -1$, define constant rank distributions near $x$. Let $\bX = \sum_a f_a \bX_a \in \Gamma(D^{i+1}_\cS)$, where $\{ \bX_a \}$ is a basis of $\cS$. Given $\bY \in \Gamma(D)$,
\[
[\bX,\bY](x) = \sum_a \l( f_a(x) [\bX_a,\bY](x) - (\bY \cdot f_a)(x) \bX_a(x) \r) \in (D_x + \cS|_x) \cap D^i_x.
\]
Quotienting by $D^{i+1}_x \supset D_x$ implies that with respect to the Levi bracket, $[\fs_{i+1}(x),\fg_{-1}(x)] \subset \fs_i(x)$, so $[\fs_{i+1}(u),\fg_{-1}] \subset \fs_i(u)$. The second claim immediately follows from the first and Definition \ref{D:g-prolong}.
\end{proof}
\begin{lemma} \label{L:dense} The set of regular points is open and dense in $M$.
\end{lemma}
\begin{proof} For $-\nu \leq i \leq \nu$, define $q_i^\pm : M \to \bbZ$ by $q^-_i(x)= \sum_{j=-\nu}^i \dim\,\fs_j(x)$ and $q^+_i(x)=\dim\,\cS-q^-_i(x)=\dim\,\cS(x)^{i+1} = \sum_{j=i+1}^\nu \dim\,\fs_j(x)$. Then $q_i^-$ is lower semi-continuous because linear independence of $\xi^{(i)}_1,\dots,\xi^{(i)}_s \in \cS^{(i)}$ at $u_i \in \cG_i$ persists near $u_i$. Hence, $q_i^+$ is upper semi-continuous. Given $U \subset M$, define $m_i(U) = \{ x \in U \mid q_i^+(x) \leq q_i^+(y),\, \forall y \in U \}$. Upper semi-continuity implies that if $\emptyset \neq U \subset M$ is open, then $\emptyset \neq m_i(U) \subset M$ is open.
Denote $M_0 := M$, and define $N_{j+1} = m_{-\nu} \circ\, \cdots\, \circ m_\nu(M_j)$ and $M_{j+1} = M \backslash (\bar{N}_1 \cup \cdots \cup \bar{N}_{j+1})$ for $j \geq 0$. Each $M_j \subset M$ is open, so $N_{j+1} \subset M$ is open; when the former is non-empty, so is the latter. Each $\im(q_i^+) \subset [0,\dim\,\cS] \cap \bbZ$ is finite, so there exists a minimal $k \geq 0$ with $M_{k+1} = \emptyset$. The open set $N = N_1 \cup \cdots \cup N_{k+1}$ is the set of all regular points, which is dense in $M$ since $\bar{N} = M$.
\end{proof}
From Section \ref{S:PR-analysis}, $\fa^\phi := \prn^\fg(\fg_-,\fann(\phi))$ and
$\fU := \max\l\{ \dim(\fa^\phi) \mid 0 \neq \phi \in H^2_+(\fg_-,\fg) \r\} < \dim(\fg)$.
\begin{thm} \label{T:upper} Let $G$ be a real or complex semisimple Lie group and $P \subset G$ a parabolic subgroup.
Let $(\cG \stackrel{\pi}{\to} M, \omega)$ be a regular, normal $G/P$ geometry, $x \in M$ a regular point, and $u \in \pi^{-1}(x)$. Then $\fs(u) \subseteq \fa^{\Kh(u)}$ is a graded subalgebra, and $\dim(\finf(\cG,\omega)) \leq \dim(\fa^{\Kh(u)})$. Thus, $\fS \leq \fU < \dim(\fg)$.
\end{thm}
\begin{proof} Using Proposition \ref{P:key-bracket} and $\fs_0(u) \subseteq \fann(\Kh(u))$, we have $\fs(u) \subseteq \fg_- \op \fs_{\geq 0}(u) \subseteq \prn^\fg(\fg_-,\fs_0(u)) \linebreak\subseteq \fa^{\Kh(u)}$. Thus, $\dim(\finf(\cG,\omega)) = \dim(\fs(u)) \leq \dim(\fa^{\Kh(u)})$.
If the geometry is not locally flat, let $N = \{ x \in M \mid \Kh(u) \neq 0, \, \forall u \in \pi^{-1}(x) \}$. Then $N$ is non-empty and open, so by Lemma \ref{L:dense}, $N$ contains a (non-flat) regular point $x$. Hence, $\dim(\finf(\cG,\omega)) \leq \dim(\fa^{\Kh(u)}) \leq \fU$ for any $u \in \pi^{-1}(x)$. Thus, $\fS \leq \fU < \dim(\fg)$.
\end{proof}
\begin{remark}
Only $[\fs_{i+1}(u),\fg_{-1}] \subseteq \fs_i(u)$ for $i \geq 0$ was essential for proving Theorem \ref{T:upper}. This relied only on local constancy of $\dim\,\fs_j(y)$ for $j > 0$, so our ``regular point'' definition could be weakened. The property $[\fs_0(u),\fg_{-1}] \subseteq \fs_{-1}(u)$ will be used in proving Theorems \ref{T:Petrov} and \ref{T:G2P1-bounds}.
\end{remark}
\begin{remark} \label{RM:upper}
Over $\bbR$, $\fS \leq \fU \leq \fU_\bbC$, where $\fU_\bbC$ is the $\fU$ for the complexification $(\fg_\bbC, \fp_\bbC)$ of $(\fg,\fp)$.
\end{remark}
\begin{cor} \label{C:transitive} Let $G$ be a real or complex semisimple Lie group and $P \subset G$ a parabolic subgroup. For regular, normal $G/P$ geometries, suppose that $\fS = \fU$. Then any submaximally symmetric model $(\cG \stackrel{\pi}{\to} M, \omega)$ is locally homogeneous near a non-flat regular point $x \in M$.
\end{cor}
\begin{proof} Since $\fS = \fU$, any submaximally symmetric model has $\fU = \dim(\finf(\cG,\omega)) = \dim(\fs(u)) \leq \dim(\fa^{\Kh(u)}) \leq \fU$ for any $u \in \pi^{-1}(x)$, by Theorem \ref{T:upper}. Hence, $\fs(u) = \fa^{\Kh(u)} \supset \fg_-$, so $\fs(x) = \gr(\cS(x)) \supset \fg_-$, i.e. $\cS$ is transitive at $x$, so the result follows by Lie's third theorem.
\end{proof}
\begin{remark} \label{RM:constrained} By $P$-equivariancy of $\Kh$ and complete reducibility of $H^2_+(\fg_-,\fg)$, fibres of $\cG \to M$ are mapped by $\Kh$ into $G_0$-orbits in $H^2_+(\fg_-,\fg)$. Hence, we can further constrain $\Kh$ in two basic ways, and since regular points are dense (Lemma \ref{L:dense}), we get analogues of Theorem \ref{T:upper}:
\Ben[(i)]
\item Let $0 \neq \bbV_i \subset H^2_+(\fg_-,\fg)$ be a $G_0$-submodule, and define $\fS_i$, $\fU_i$ analogous to $\fS$, $\fU$, e.g. $\fS_i$ requires $\im(\Kh) \subset \bbV_i$. Then $\fS_i \leq \fU_i$. In particular, if $\fg$ is complex \ss, we have $\fS_\mu \leq \fU_\mu$. From Section \ref{S:PR-analysis}, $\fU_\mu = \dim(\fa(\mu))$ is easily computable.
\item Let $0 \neq \cO \subset H^2_+(\fg_-,\fg)$ be $G_0$-invariant. Analogously define $\fU_\cO$ and $\fS_\cO$. Then $\fS_\cO \leq \fU_\cO$.
\Een
If either $\fS_i = \fU_i$ or $\fS_\cO = \fU_\cO$, then as in the proof of Corollary \ref{C:transitive}, any model realizing the equality must be locally homogeneous near a non-flat regular point.
\end{remark}
We will use (ii) in Section \ref{S:4d-Lor} to study the Petrov types in 4-dimensional Lorentzian conformal geometry, and in Section \ref{S:G2P1} for $(2,3,5)$-distributions with $\Kh$ having constant root type.
\subsection{Establishing submaximal symmetry dimensions}
\label{S:realize}
In this section, we show that in the {\em complex} (or {\em split-real}) case, the upper bound $\fS_\mu \leq \fU_\mu = \dim(\fa(\mu))$ (for regular, normal geometries) obtained from Theorem \ref{T:upper} and Remark \ref{RM:constrained} is almost always sharp. Let us outline the strategy: Kostant's theorem (Recipe \ref{R:Kostant}) gives an explicit description of the lowest $\fg_0$-weight vector $\phi_0 \in \bbV_\mu$. Using the $\fg_0$-module isomorphism $H^2(\fg_-,\fg) \cong \ker(\Box) \subset \bigwedge^2 \fg_-^* \ot \fg$, we have $\phi_0$ naturally realized in $\bigwedge^2 \fg_-^* \ot \fg$. When $\im(\phi_0) \subset \fa := \fa^{\phi_0}$, we can regard $\phi_0 \in \bigwedge^2 \fg_-^* \ot \fa \hookrightarrow \bigwedge^2 \fa^* \ot \fa$. Now, we {\em deform the Lie bracket on $\fa$ by $\phi_0$.} Namely, define $\ff$ to be the vector space $\fa$ with the deformed bracket
\begin{align} \label{E:f-bracket}
[\cdot,\cdot]_\ff := [\cdot,\cdot] - \phi_0(\cdot,\cdot),
\end{align}
and, by Lemma \ref{L:def-a} below, $[\cdot,\cdot]_\ff$ is indeed a Lie bracket. Since $\phi_0$ vanishes on $\fp = \fg_{\geq 0}$, then $\fk := \fa_{\geq 0}$ is a subalgebra of both $\fa$ and $\ff$. We construct a {\em local} homogeneous space $M = F/K$ (i.e.\ $F$ may only be a local Lie group) and $P$-bundle $\cG = F \times_K P$ with an $F$-invariant normal Cartan connection $\omega$ whose curvature is determined by $\phi_0$. Thus, $\fS_\mu \geq \dim(\finf(\cG,\omega)) \geq \dim(\ff) = \dim(\fa)$. (In fact, this construction does not depend on regularity.)
\begin{lemma} \label{L:def-a} Let $\lambda$ be the highest weight of a simple ideal in $\fg$ and $w \in W^\fp(2)$. Suppose that $w(-\lambda) \in \Delta^-$. Let $\phi_0$ be the lowest weight vector of $\bbV_{-w\cdot\lambda}$ and $\fa := \fa^{\phi_0}$. Define $\ff$ to be the vector space $\fa$ with deformed Lie bracket \eqref{E:f-bracket}. Then $\ff$ is a Lie algebra.
\end{lemma}
\begin{proof} We have $w(-\lambda) \in \Delta^- = \Delta(\fg_-) \cup \Delta^-(\fg_0)$. All root vectors from $\Delta^-(\fg_0)$ annihilate the lowest $\fg_0$-weight vector $\phi_0$, hence $e_{w(-\lambda)} \in \fg_- \op\fann(\phi_0) \subset \fa$. Thus, $\phi_0 = e_{\alpha_j} \wedge e_{\sr_j(\alpha_k)} \ot e_{w(-\lambda)}$ (see \eqref{E:phi-0}) has $\im(\phi_0) \subset \fa$, so we can regard $\phi_0 \in \bigwedge^2 \fa^* \ot \fa$. Note $e_{\alpha_j}, e_{\sr_j(\alpha_k)} \in \fg_+ \cong \fg_-^*$ vanish on $\fa_{\geq 0}$.
By Theorem \ref{T:corr-Tanaka}, the Lie algebra structure of $\fa$ is the same if we take $\fp = \fp_j$ if $c_{kj} < 0$ or $\fp = \fp_{j,k}$ if $c_{kj} = 0$. Assuming this, then by Proposition \ref{P:PR}, $(\fg,\fp,-w\cdot\lambda)$ is PR, i.e. $\fa = \fg_- \op \fa_0$.
It suffices to verify the Jacobi identity $\Jac_\ff(x,y,z) = 0$ for $[\cdot,\cdot]_\ff$. For $x,y,z \in \ff$,
\begin{align*}
[[x,y]_\ff,z]_\ff &= [[x,y]-\phi_0(x,y),z]_\ff \\
& = [[x,y],z] - \phi_0([x,y],z) - [\phi_0(x,y),z] + \phi_0(\phi_0(x,y),z).
\end{align*}
Since $\phi_0$ vanishes on $\fa_0$, clearly $\Jac_\ff(x,y,z) = 0$ when at least two of $x,y,z \in \fa_0$. Suppose $x,y \in \fg_-$ and $z \in \fa_0$. Since $\Jac_\fa(x,y,z) = 0$ and $z \in \fa_0 = \fann(\phi_0)$, then
\[
\Jac_\ff(x,y,z)=[z,\phi_0(x,y)] - \phi_0([z,x],y) - \phi_0([y,z],x) = (z\cdot \phi_0)(x,y) = 0.
\]
Finally, suppose $x,y,z \in \fg_-$. Since $\phi_0 \in \ker(\Box)$, it is in particular $\p$-closed. This means
\begin{align}
0 = (\p\phi_0)(x,y,z) &= [x,\phi_0(y,z)] - [y,\phi_0(x,z)] + [z,\phi_0(x,y)] \label{E:d-phi}\\
&\qquad - \phi_0([x,y],z) + \phi_0([x,z],y) - \phi_0([y,z],x), \qquad \forall x,y,z \in \fg_-. \nonumber
\end{align}
This equation uses the bracket on $\fg$, but restricts to $\fa$ since $\phi_0$ has image in $\fa$, and the bracket on $\fa$ is inherited from that of $\fg$ by Lemma \ref{L:T-subalg}. Using \eqref{E:d-phi} and $\Jac_\fa(x,y,z) = 0$, we obtain
\begin{align*}
\Jac_\ff(x,y,z)
= \phi_0(\phi_0(x,y),z) + \phi_0(\phi_0(y,z),x) + \phi_0(\phi_0(z,x),y).
\end{align*}
Since $\im(\phi_0)$ is spanned by $e_{w(-\lambda)}$, it suffices to show $w(-\lambda) \not\in \{ -\alpha_j, -\sr_j(\alpha_k) \}$. If $w(-\lambda) = -\alpha_j$, then $\lambda = -\sr_k(\alpha_j) \in \Delta^-$. Otherwise, if $w(-\lambda) = -\sr_j(\alpha_k)$, then $\lambda = -\alpha_k \in \Delta^-$. But these contradict $\lambda \in \Delta^+$. Thus, $[\cdot,\cdot]_\ff$ is a Lie bracket on $\ff$.
\end{proof}
Let us determine all exceptions:
\begin{lemma} \label{L:exceptions} Let $\fg$ be complex simple. Let $w \in W^\fp$ satisfy $w(-\lambda_\fg) \in \Delta^+$.
\Ben[(a)]
\item If $|w| = 2$, then $\fg / \fp$ is one of: \quad $A_2 / \fp_1, \quad A_2 / \fp_{1,2},\quad B_2 / \fp_1,\quad B_2 / \fp_{1,2}$.
\item If $|w| = 1$, then $\fg / \fp \cong A_1 / \fp_1$.
\Een
For $B_2 / \fp_{1,2}$, $w(-\lambda_\fg) \in \Delta^+$ if $w = (12)$, but not for $w = (21)$.
\end{lemma}
\begin{proof} Part (b) is obvious, so we prove (a).
Since $W^\fp(2) \neq \emptyset$, then $\rnk(\fg) \geq 2$. Write $w = (jk)$. If $w(-\lambda_\fg) \in \Delta^+$, then $w(-\lambda_\fg) \in w\Delta^- \cap \Delta^+ = \Phi_w = \{ \alpha_j, \sr_j(\alpha_k) \}$. There are two cases:
\begin{enumerate}[(i)]
\item $w(-\lambda_\fg) = \alpha_j$: Then $\lambda_\fg = \sr_k(\alpha_j) = \alpha_j - c_{jk} \alpha_k$, which is the highest root, so $\rnk(\fg) = 2$.
\item $w(-\lambda_\fg) = \sr_j(\alpha_k)$: Then $\lambda_\fg = \alpha_k$. Hence, $\rnk(\fg) = 1$, a contradiction.
\end{enumerate}
For $B_2 / P_2$, $\exists! w = (21) \in W^\fp(2)$, so $w(-\lambda_\fg) = w(-2\lambda_1) = 2\lambda_1 - 2\lambda_2 = -\alpha_2 \not\in \Delta^+$. We verify the final claim for $B_2 / P_{1,2}$ directly.
\end{proof}
\begin{remark}[Semisimple case] \label{RM:ss-exceptions} Part (a) of Lemma \ref{L:exceptions} identifies the exceptions for all Type I modules. Part (b) indicates that exceptional Type II modules must be associated with factors in $\fg / \fp$ of the form $A_1 / \fp_1 \times \fg' / \fp'$ and $w = (1,k')$, where the $1$ comes from the first factor, $k' \in I_{\fp'}$ is arbitrary, and $\lambda = 2\lambda_1$, i.e. the highest weight of $A_1$. (Note $\phi_0 = e_{\alpha_1} \wedge e_{\alpha_{k'}} \ot e_{\alpha_1}$, which has weight $-w\cdot \lambda$ and homogeneity $+3$.) There are no exceptions associated to Type III modules.
\end{remark}
We first establish a result outside the parabolic context:
\begin{lemma} \label{L:loc-model} Let $G$ be a real or complex Lie group, and $P \subset G$ a closed subgroup. Let $\ff$ be a Lie algebra, $\fk \subset \ff$ a subalgebra, and $\iota : \fk \to \fp$ a Lie algebra injection. Suppose there exists a linear map $\vartheta : \ff \to \fg$ such that: (i) $\vartheta|_\fk = \iota$; (ii) $\vartheta([z,x]_\ff) = [\iota(z),\vartheta(x)]$, $\forall z \in \fk$ and $\forall x \in \ff$; (iii) $\vartheta$ induces an isomorphism from $\ff / \fk$ to $\fg / \fp$. Then there exists a $G/P$ geometry $(\cG \to M, \omega)$ with $\finf(\cG,\omega)$ containing a subalgebra isomorphic to $\ff$.
\end{lemma}
\begin{proof} Let $K$ and $F$ be the unique connected, simply-connected Lie groups with Lie algebras $\fk$ and $\ff$, respectively. There are unique Lie group homomorphisms $\Phi : K \to F$ and $\Psi : K \to P$ whose differentials $\Phi' : \fk \to \ff$ and $\Psi' : \fk \to \fp$ are the given inclusions $\fk \hookrightarrow \ff$ and $\iota : \fk \hookrightarrow \fp$.\\
{\em Case 1: $\Phi$ is injective and $\Phi(K) \subset F$ is a Lie subgroup}. Define $\cG=F\times_K P = (F \times P) / \!\sim_K$, where
$(u,p) \sim_K (u\cdot\Phi(k),\Psi(k^{-1})\cdot p)$ for any $k \in K$. This has a natural projection $\cG\to M := F/\Phi(K)$ (note $\Phi(K) \subset F$ is closed). Since $\Phi$ is injective, identify $K$ with its image $\Phi(K)$. Any $F$-invariant Cartan connection $\omega$ on $\cG$ corresponds \cite{Ham2007}, \cite{CS2009} to a linear map $\vartheta : \ff \to \fg$ satisfying (i)--(iii). The curvature function of $\omega = \omega_\vartheta$ corresponds to
\begin{align} \label{E:kappa-alpha}
\kappa_\vartheta(x,y) = [\vartheta(x),\vartheta(y)] - \vartheta([x,y]_\ff) \qquad \forall x,y \in \ff.
\end{align}
By construction, $\ff\subset\finf(\cG,\omega)$.\\
{\em Case 2: $\Phi$ is not injective or $\Phi(K) \subset F$ is not a Lie subgroup}. Since $\Phi'$ is injective, $\Phi$ has discrete kernel. In place of $F$ in the definition of the Cartan bundle $\cG$ in Case 1, we will define a {\em semi-local} Lie group $U$ with $\text{Lie}(U) = \ff$, together with an inclusion $K \hookrightarrow U$ as a closed subgroup acting on $U$ on the right. (``Semi-locality'' will be clarified below.) Then $\ff$-invariant Cartan connections on $\cG = U \times_K P \to M :=U/K$ correspond to linear maps $\vartheta : \ff \to \fg$ as in Case 1.
Let $\fv$ be a vector space complementary to $\fk$ in $\ff$. Choose two norms
$\|\cdot\|_\fv$, $\|\cdot\|_\fk$ on these vector spaces and
define the norm on the Lie algebra $\ff=\fv\oplus\fk$ by the formula
$\|v+k\|_\ff=\max\{\|v\|_\fv,\|k\|_\fk\}$ for $v\in\fv$,
$k\in\fk$. Then the balls in these spaces satisfy: $B^\ff_\epsilon=B^\fv_\epsilon\times B^\fk_\epsilon$.
We choose $\epsilon>0$ so small that the maps $\exp:B^\fk_{5\epsilon}\to K$,
$\exp:B^\ff_{5\epsilon}\to F$ are injective,
$\exp(B^\fk_{5\epsilon})\cap\Phi^{-1}(1_F)=1_K$,
and we will further specify $\epsilon$ below. One of the requirements is that
$\exp\cdot\exp:B^\fv_\epsilon\times B^\fk_\epsilon\to F$ is an embedding.
This holds for small $\epsilon$ because
$\log\bigl(\exp(\epsilon v)\exp(\epsilon k)\exp(-\epsilon v-\epsilon k)\bigr)=o(\epsilon)$
by the Baker--Campbell--Hausdorff formula ($\|v\|_\fv\le1$, $\|k\|_\fk\le1$),
implying that for $\epsilon\ll1$, $U_0:=\exp(B^\fv_\epsilon)\cdot\exp(B^\fk_\epsilon)
\subset\exp(B^\ff_{2\epsilon})$ and
$U_0^{-1}=\{g^{-1} \mid g\in U_0\}=
\exp(B^\fk_\epsilon)\cdot\exp(B^\fv_\epsilon)
\subset\exp(B^\ff_{2\epsilon})$.
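The Baker--Campbell--Hausdorff estimate being used is, to lowest order,
\begin{align*}
\log\bigl(\exp(\epsilon v)\exp(\epsilon k)\bigr) = \epsilon(v+k) + \tfrac{\epsilon^2}{2}[v,k] + O(\epsilon^3),
\end{align*}
so multiplying by $\exp(-\epsilon v-\epsilon k)$ leaves only terms of order $\epsilon^2$, uniformly over $\|v\|_\fv\le1$, $\|k\|_\fk\le1$; hence $\exp(\epsilon v)\exp(\epsilon k) \in \exp(B^\ff_{2\epsilon})$ for $\epsilon \ll 1$.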
Let $K_0=\exp(B^\fk_\epsilon)\subset K$ and $V_0=\Phi(K_0)\subset U_0$.
Denote $U_0^2:=U_0^{-1}\cdot U_0\subset F$ and
$V_0^2=(U_0^2\cap\im(\Phi))_0$, where the subscript zero denotes the connected component of $1_F$.
We have $V_0^2=\Phi(K_0^2)$ for a unique connected neighborhood $K_0^2$ of $1_K$.
Denote also $K_0^4:=K_0^2\cdot K_0^2$. Using the Baker--Campbell--Hausdorff formula as above we get
$U_0^2\subset\exp(B^\ff_{3\epsilon})$, $K_0^2\subset\exp(B^\fk_{3\epsilon})$
and $K_0^4\subset\exp(B^\fk_{5\epsilon})$
for small enough $\epsilon$. This implies that
$\Phi:K_0^4\to V_0^4=V_0^2\cdot V_0^2\subset F$ is an embedding and that
$V_0^4\cap U_0^2=V_0^2$. Indeed, from $V_0^4\subset\exp(B^\ff_{5\epsilon})$ we get:
$\log(V_0^4\cap U_0^2)\subset\fk\cap\log(U_0^2)=\log(V_0^2)\subset\ff$.
Consequently $V_0^4\cap U_0^2\subset V_0^2$ and the reverse inclusion is obvious.
Consider the sheaf of neighborhoods $\{U_k=U_0\cdot\Phi(k)\}_{k\in K}$ over $K$.
We introduce an equivalence relation on it as follows. Take $x_1\in U_{k_1}$ and $x_2\in U_{k_2}$,
i.e. $x_1=u_1\cdot \Phi(k_1)$, $x_2=u_2\cdot \Phi(k_2)$ for some $u_1,u_2\in U_0$.
We let $x_1\sim x_2$ iff $u_2=u_1\cdot\Phi(k_1\,k_2^{-1})$ and $k_1\,k_2^{-1}\in K_0^2$.
Let us check that it is an equivalence relation. That $\sim$ is reflexive is obvious and the
symmetry follows from $(K_0^2)^{-1}=K_0^2$ $\Leftrightarrow$ $(U_0^2)^{-1}=U_0^2$.
Let now $x_1\sim x_2$ and $x_2\sim x_3$. Then
$u_3=u_2\cdot\Phi(k_2\,k_3^{-1})=u_1\cdot\Phi(k_1\,k_2^{-1})\cdot\Phi(k_2\,k_3^{-1})=u_1\cdot\Phi(k_1\,k_3^{-1})$
and $k_1\,k_3^{-1}=(k_1\,k_2^{-1})(k_2\,k_3^{-1})\in K_0^4$. On the other hand
$\Phi(k_1\,k_3^{-1})=u_1^{-1}u_3\in U_0^2$, and these two imply $k_1\,k_3^{-1}\in K_0^2$.
Thus we get transitivity.
Define $U = \sqcup_{k \in K} U_k/\!\sim$. This is a manifold with the chart centered at $k$
given by the embedding $R_{\Phi(k)}:U_0\to U$. Overlap maps are given by the equivalence relation.
The group $K$ is embedded into $U$ by the formula $K\ni k\mapsto R_{\Phi(k)}(1_F)\in U_k\hookrightarrow U$.
Indeed, if $k\sim k'$ i.e. $1_F\cdot\Phi(k)=1_F\cdot\Phi(k')$ and $k'k^{-1}\in K_0^2$,
then $\Phi(k'k^{-1})=1_F$ yields $k=k'$. This implies that topologically $U\simeq B^\fv_\epsilon\times K$.
In fact, we can make this isomorphism canonical by strengthening the condition on $\epsilon$
(taking it possibly smaller). Namely let us require that
$\exp\cdot\exp:B^\fv_\epsilon\times B^\fk_{5\epsilon}\to F$ is an embedding.
Every element of $U$ can be represented in the form
$U_k\ni u_0\cdot\Phi(k)=\exp(\epsilon v)\cdot\Phi(\exp(\epsilon\kappa)\cdot k)$, where
$\|v\|_\fv\le1$ and $\|\kappa\|_\fk\le1$.
We map this element to $(\epsilon v,\exp(\epsilon\kappa)\cdot k)$.
Independence of the representative follows from the above condition on the map
$\exp\cdot\exp$.
Moreover, $U$ is a semi-local Lie group, which means that
for any two points $k_1,k_2\in K$ there exist two neighborhoods $U_{k_i}'\subset U_{k_i}$ containing
$\exp(B^\fk_\epsilon)\cdot\Phi(k_i)$ in $U$
such that the multiplication (induced from $F$)
acts as $U'_{k_1}\times U'_{k_2}\to U_{k_1\cdot k_2}\subset U$ by
$(u_1\cdot\Phi(k_1))\cdot(u_2\cdot\Phi(k_2))=(u_1\cdot\mathop{Ad}_{\Phi(k_1)}u_2)\cdot\Phi(k_1k_2)$.
Notice that $u_1\cdot\mathop{Ad}_{\Phi(k_1)}u_2\in U_0$ if $u_1,u_2\in U_0$ are sufficiently close
to $1_F$. (This condition specifies $U_{k_i}'$.)
Similarly, we impose locality of the inverse along $K$, as a map
$U'_k\to U_{k^{-1}}$ for some neighborhood $U_k'\subset U_k$ of $\exp(B^\fk_\epsilon)\cdot\Phi(k)$ given by
$(u\cdot\Phi(k))^{-1}=\mathop{Ad}_{\Phi(k^{-1})}u^{-1}\cdot\Phi(k^{-1})$.
The semi-local Lie group $U$ is naturally homomorphically mapped to $F$ (this is an immersion,
not necessarily an embedding). The chart $U_{1_K}\hookrightarrow U$ is isomorphically mapped to
$U_0\subset F$. Therefore the Lie algebra of $U$ is naturally identified with $\ff$.
In addition, $K\hookrightarrow U$ is a closed subgroup,
which acts from the right on $U$.
The orbit of the right $K$-action is given by:
$K\ni k\mapsto u_0\cdot\Phi(k)\in U_k\hookrightarrow U$, $u_0\in U_0$.
All orbits are isomorphic to $K$ and the two orbits either do not intersect or coincide.
Moreover (provided we adopt the stronger condition on $\epsilon$ above) we can
represent the orbit uniquely via $u_0$ by claiming that $u_0\in\exp(B^\fv_\epsilon)$.
The orbit space is then $U/K=U_0/K_0=\exp(B^\fv_\epsilon)\simeq B^\fv_\epsilon$,
and this is a neighborhood of the point $K/K\equiv0\in B^\fv_\epsilon$.
Thus we get the local homogeneous space as the desired model.
Now define the Cartan bundle $\cG=U\times_K P = (U \times P) /\sim_{\Psi}$, where
$(q_1,p_1)\sim_{\Psi}(q_2,p_2)$ iff $q_2=q_1\cdot k$ and $p_2=\Psi(k)^{-1} \cdot p_1$ for some $k \in K$.
This $\cG$ is a principal $P$-bundle over $U/K \simeq B^\fv_\epsilon$,
in particular $\cG\simeq B^\fv_\epsilon\times P$ topologically. We conclude as in Case 1.
\end{proof}
\begin{thm}[Realizability] \label{T:realize} Let $G$ be a complex \ss Lie group, $P \subset G$ a parabolic subgroup, $\lambda$ the highest weight of a simple ideal in $\fg$, $w \in W^\fp(2)$, and $\mu = -w\cdot \lambda$. Suppose $w(-\lambda) \in \Delta^-$. Then there exists a normal $G/P$ geometry $(\cG \to M, \omega)$ such that:
\begin{enumerate}[(a)]
\item $\im(\Kh) \subset \bbV_\mu$;
\item $\dim(\finf(\cG,\omega)) \geq \dim(\fa(\mu))$;
\item the geometry is regular iff $\mu$ has positive homogeneity.
\end{enumerate}
\end{thm}
\begin{proof} Define $\ff$ to be the vector space $\fa := \fa(\mu)$ with deformed bracket \eqref{E:f-bracket}. Since $w(-\lambda) \in \Delta^-$, then by Lemma \ref{L:def-a}, $\ff$ is a Lie algebra. Since $\phi_0$ is trivial on $\fk := \fa_{\geq 0}$, then $\fk \subset \ff$ is a subalgebra. Also, $\fk \subset \fp = \fg_{\geq 0}$ is a subalgebra. Since $\ff = \fa$ as vector spaces, and $\fa \subset \fg$, take $\vartheta : \ff \to \fg$ to be the induced {\em vector space} inclusion. Then (i)--(iii) in Lemma \ref{L:loc-model} hold, so an $\ff$-invariant $G/P$ geometry $(\cG \to M, \omega_\vartheta)$ exists (so (b) holds) with curvature function determined by
\begin{align*}
\kappa_\vartheta(x,y) = [\vartheta(x),\vartheta(y)] - \vartheta([x,y]_\ff) = [x,y] - [x,y]_\ff = \phi_0(x,y), \qquad \forall x,y \in \ff.
\end{align*}
Since $\phi_0 \in \ker(\Box)$, then $\p^*\phi_0 = 0$, so $(\cG \to M, \omega)$ is normal, and (a), (c) are immediate.
\end{proof}
Recall $\fS_\mu$ and $\fU_\mu$ from Remark \ref{RM:constrained}. If $\fg$ is simple, we also write $\fa(w) := \fa(-w\cdot\lambda_\fg)$.
\begin{example} \label{ex:B2P12} For $B_2 / P_{1,2}$, $\lambda_\fg = 2\lambda_2$. Both $w = (12)$ and $w' = (21)$ are PR with $\dim(\fa(w)) = \dim(\fa(w')) = 5$, so $\fS \leq 5$. Since $w'(-\lambda_\fg) \in \Delta^-$, then by Theorem \ref{T:realize}, a model with 5-dimensional symmetry exists, so $\fS = 5$. This has twistor space type $B_2 / P_2$ (also non-exceptional). We will see in Section \ref{S:exceptions} that 5 is not realizable in the $w = (12)$ branch, having twistor type $B_2 / P_1$.
\end{example}
For {\em split real forms}, notions of roots and weights from complex representation theory are similarly defined. All preceding statements and proofs continue to hold. Combining Theorems \ref{T:upper} and \ref{T:realize}, Corollary \ref{C:transitive} and Remark \ref{RM:constrained}, together with Lemma \ref{L:exceptions}, we obtain:
\begin{thm} \label{T:main-thm} Let $G$ be a complex or split-real \ss Lie group, $P \subset G$ a parabolic subgroup, $\lambda$ the highest weight of a simple ideal in $\fg$, and $w \in W^\fp(2)$; suppose that $\mu = -w\cdot \lambda$ satisfies $Z(\mu) > 0$. We always have $\fS_\mu \leq \fU_\mu$. If $w(-\lambda) \in \Delta^-$ or $G/P$ does not contain the factors $A_1 / P_1,\, A_2 / P_1,\, A_2 / P_{1,2},\, B_2 / P_1,\, B_2 / P_{1,2}$, then $\fS_\mu = \fU_\mu = \dim(\fa(\mu))$ and any submaximally symmetric model is locally homogeneous about a non-flat regular point.
\end{thm}
From Theorem \ref{T:upper} and \eqref{E:fU-mu}, $\fS \leq \fU = \max_\mu \fU_\mu$, and aside from a few exceptions, this equals $\max_\mu \fS_\mu$ (where all $\mu$ are of the form indicated in the theorem above), which implies $\fS = \fU$. By Example \ref{ex:B2P12}, this also holds for $B_2 / P_{1,2}$. Thus, we have:
\begin{thm} \label{T:main-thm2}
Let $G$ be a complex or split-real \ss Lie group, $P \subset G$ a parabolic subgroup. If $G/P$ does not contain the factors $A_1 / P_1,\, A_2 / P_1,\, A_2 / P_{1,2},\, B_2 / P_1$, then
\[
\fS = \fU = \max\{ \fU_\mu \mid \bbV_\mu \subset H^2_+(\fg_-,\fg) \}, \qquad \fU_\mu = \dim(\fa(\mu)),
\]
and any submaximally symmetric model is locally homogeneous about a non-flat regular point.
\end{thm}
\begin{remark} \label{E:reduce-to-two}
In the general (complex or split-real) \ss case, the gap problem reduces to studying each $\bbV_\mu$, which is one of the three types in Table \ref{F:H2-types}. Each such $\bbV_\mu$ is essentially generated by at most two simple factors: any remaining simple factor $(\hat\fg,\hat\fp)$ yields a flat factor for the geometry, so $\dim(\hat\fg)$ is automatically included in the submaximal symmetry dimension count.
\end{remark}
\begin{remark} \label{E:coverings}
Aside from the exceptions in Theorem \ref{T:main-thm}, given $(\fg,\fp,\mu)$, $\fS_\mu$ is the same for all choices of Lie groups $(G,P)$ with $\text{Lie}(G) = \fg$ and $\text{Lie}(P) = \fp$. Consequently, for the purpose of simplifying the calculation of $\fS_\mu$, we may assume that $P$ is {\em connected} so that we can pass to the minimal twistor space. (See Remark \ref{RM:Q-connected} and Corollary \ref{C:P-exists}.)
\end{remark}
\subsection{Deformations}
A graded subalgebra $\fa \subset \fg$ admits a canonical filtration, i.e. $\fa^i = \bop_{j \geq i} \fa_j$.
\begin{defn}
A graded subalgebra $\fa$ is {\em filtration-rigid} if any filtered Lie algebra $\ff$ with $\gr(\ff) = \fa$ as graded Lie algebras has $\ff \cong \fa$ as filtered Lie algebras.
\end{defn}
\begin{prop} \label{P:FR} Let $G$ be a real or complex \ss Lie group and $P \subset G$ a parabolic subgroup, and suppose that $\prn(\fg_-,\fg_0) \cong \fg$. Let $(\cG \stackrel{\pi}{\to} M, \omega)$ be a regular, normal $G/P$ geometry which is not locally flat. Given $u \in \cG$ with $\Kh(u) \neq 0$, if $\fg_- \subset \fs(u)$, then $\fs(u)$ is not filtration-rigid.
\end{prop}
\begin{proof} Since $\prn(\fg_-,\fg_0) \cong \fg$, the geometry is completely determined by its regular infinitesimal flag structure. Suppose for contradiction that $\fs(u)$ is filtration-rigid, so $\ff(u) \cong \fs(u)$; then $\ff(u)$, and hence $\cS(x)$, is graded, where $x = \pi(u)$. Then $\cS(x)/\cS(x)^0\cong \fg_- =: \fm$ acts transitively on $M$ at $x$. Thus, $M$ can be locally identified near $x$ with the simply-connected Lie group $M(\fm)$ with Lie algebra $\fm$. This is endowed with the $M(\fm)$-invariant distribution generated by $\fg_{-1}$, and there is an $M(\fm)$-invariant principal $G_0$-bundle $\cG_0 \to M(\fm)$, where $G_0 \subseteq \text{Aut}_{gr}(\fm)$, which is a reduction of the graded frame bundle over $M(\fm)$.
The triple $(M(\fm),\fg_{-1},\fg_0)$ is the standard model in Tanaka theory, and since $\prn(\fg_-,\fg_0) \cong \fg$ is finite-dimensional, its symmetry algebra $\cS$ is isomorphic to $\fg$ \cite[Section 6]{Tan1970}.
But since $\Kh(u) \neq 0$, we have $\dim(\cS) \leq \fS < \dim(\fg)$, cf.\ Proposition \ref{P:loc-flat} or Theorem \ref{T:upper}, a contradiction.
\end{proof}
In the exceptional cases, we will show that $\fa(\mu)$ is filtration-rigid, and hence $\fS_\mu < \dim(\fa(\mu))$.
\begin{example} \label{EX:proj-FR}
Without the $\prn(\fg_-,\fg_0) \cong \fg$ assumption, Proposition \ref{P:FR} generally fails. Consider projective geometry $(A_\rkg / P_1)$. The condition $\fg_- \subset \fs(u)$ implies local homogeneity, and since $\fg_-$ is abelian, then near $\pi(u)$, we have local coordinates $(t,x^1,...,x^m)$, where $m = \rkg - 1$, and symmetries $\p_t, \p_{x^1},..., \p_{x^m}$ of the projective structure. The geodesic equations \eqref{E:proj-ODE} admit these symmetries, so all coefficients in these equations are constants. This means that by a projective change, we may take the $\Gamma^a_{bc}$ to be constants. For $m \geq 2$, this does not imply the structure is flat, e.g. taking all $\Gamma^a_{bc} = 0$ except $\Gamma^1_{01} = -\frac{1}{2}$, the Fels invariants \eqref{E:Fels-inv} satisfy $S \equiv 0$ and $T \neq 0$.
However, for $m=1$, the Tresse invariants \eqref{E:Tresse} vanish everywhere for $y'' = F(y')$, where $F$ is a cubic polynomial. Thus, Proposition \ref{P:FR} is true for 2-dimensional projective structures.
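Indeed, for $y'' = F(y')$ all terms of $I_1$ in \eqref{E:Tresse} involving $y$-derivatives vanish, and
\[
\frac{d}{dx}(F_{pp}) = F F_{ppp}, \qquad
I_1 = \frac{d}{dx}(F F_{ppp}) - F_p \cdot F F_{ppp} = F^2 F_{pppp},
\]
so $I_1 = I_2 = 0$ precisely when $F$ is a cubic polynomial in $p$.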
\end{example}
Consider $\fa = \bop_i \fa_i$ with (graded) basis $\{ e_i^\alpha \}$ and structure equations
$[e_i^\alpha,e_j^\beta]=\sum_\gamma c^{\alpha\beta}_{ij\gamma} e^\gamma_{i+j}$. Any filtered Lie algebra $\ff \subset \fg$ with $\gr(\ff) = \fa$ has a (filtered) basis $\{ f_i^\alpha\}$ with
\begin{align} \label{E:f-deform}
[f_i^\alpha,f_j^\beta]_\ff =\sum_\gamma c^{\alpha\beta}_{ij\gamma} f_{i+j}^\gamma+\sum\limits_{k>i+j} \sum_\gamma b^{\alpha\beta k}_{ij\gamma} f_k^\gamma,
\end{align}
and we consider $\ff$ up to filtration-preserving automorphisms
\[
f_i^\alpha\mapsto f_i^\alpha+\sum_{j > i} \sum_\beta \chi^{\alpha j}_{i\beta} f^\beta_j.
\]
We refer to the terms in the double summation in \eqref{E:f-deform} as {\em tails}. So establishing filtration-rigidity amounts to eliminating the presence of tails (in some basis).
\begin{example}
Let $\fg_0$ be a semisimple Lie algebra, and $V=\fg_{-1}$ an irreducible $\fg_0$-representation such that $\bigwedge^2 V$ contains neither $V$ nor the adjoint representation of $\fg_0$ as a summand.
Then the graded Lie algebra $\fg=\fg_{-1}\oplus\fg_0$ is filtration-rigid.
Indeed, let $\ff$ be a filtered Lie algebra with $\gr(\ff) = \fg$. Then $\ff^0 \cong \fg_0$.
By semisimplicity of $\fg_0$, the $\fg_0$-module $\ff$ is completely reducible, and so
$\ff=\fg_{-1}\oplus\fg_0$ as $\fg_0$-modules. By hypothesis, the bracket $\bigwedge^2\fg_{-1}\to\fg$ being $\fg_0$-equivariant must vanish.
In contrast, the case of $\fg_0=\fso(n)$ and $V = \bbR^n$ has $\bigwedge^2 V\simeq\fg_0$, so filtered deformations exist. By Schur's lemma, the deformations are parametrized by $R \in \bbR$, and these yield all the maximally symmetric models in Table \ref{F:submax-Riem}. The corresponding algebras for $R > 0$, $R = 0$, $R < 0$ are, respectively: $\fg = \fso(n+1)$, $ \bbR^n \rtimes\fso(n)$, and $\fso(n,1)$.
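Concretely, with $\langle\cdot,\cdot\rangle$ the standard inner product on $V = \bbR^n$, these deformations are given by
\[
[x,y]_R := R\, x \wedge y \in \fso(n), \qquad (x \wedge y)z := \langle y,z\rangle x - \langle x,z\rangle y, \qquad x,y,z \in V,
\]
with all other brackets as in $\bbR^n \rtimes \fso(n)$. The Jacobi identity holds for every $R$, and rescaling $V$ by $c$ rescales $R$ by $c^{-2}$, so only the sign of $R$ matters.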
\end{example}
\begin{prop} \label{P:f-rigid} Let $\ff$ be a filtered Lie algebra, $\gr(\ff) =: \fa = \fa_{-\hat\nu} \op ... \op \fa_0 \op ... \op \fa_{\tilde\nu}$ its associated-graded, and let $\tau_j : \ff^j \to \fa_j$ denote the canonical projections. Suppose there exists $h \in \fa_0$ such that $\text{ad}_h$ is diagonal in some (graded) basis $\{ e_i^\alpha \}$ of $\fa$, and let $\cE_i = \Spec(\text{ad}_h|_{\fa_i})$. If $\cE_i \cap \cE_j = \emptyset$ for distinct $i,j$, then for any $\tilde{h} \in \ff^0$ with $\tau_0(\tilde{h}) = h$, there is a (filtered) basis $\{ f_i^\alpha \}$ of $\ff$, with $\tau_i(f_i^\alpha) = e_i^\alpha$, and with respect to which $\text{ad}_{\tilde{h}}$ is diagonal. Moreover, if:\footnote{Here
$T+T :=\{a+b \mid a,b\in T\}$ and $\bigwedge^2 T :=\{a+b \mid a, b \in T, \, a \neq b\}$.}
\Ben[(i)]
\item $\forall i \neq j$: $(\cE_i + \cE_j) \cap \bigcup_{k > i+j} \cE_k = \emptyset$; and
\item $\forall i$: $\bigwedge^2 \cE_i \cap \bigcup_{k > 2i} \cE_k = \emptyset$,
\Een
then $\fa$ is filtration-rigid.
\end{prop}
\begin{proof}
Use (reverse) induction on $r$, where $-\hat\nu \leq r \leq \tilde\nu$. The $r = \tilde\nu$ case is trivial since $\tau_{\tilde\nu}|_{\ff^{\tilde\nu}} : \ff^{\tilde\nu} \to \fa_{\tilde\nu}$ is a Lie algebra isomorphism. Assume there is a basis $\{ f_i^\alpha \}_{i=r+1}^{\tilde\nu}$ of $\ff^{r+1}$ with $\tau_i(f_i^\alpha) = e_i^\alpha$ such that $\text{ad}_{\tilde{h}}$ is diagonal (with eigenvalues $\mu_i^\alpha$). We have $[\tilde{h},f_r^\alpha] = \mu^\alpha_r f_r^\alpha + \sum_{k > r} \sum_\gamma b^{\alpha,k}_\gamma f_k^\gamma$. Since $\cE_i \cap \cE_j = \emptyset$ for $i \neq j$, then $\mu^\alpha_r \neq \mu^\beta_{r+1}$, so letting $\hat{f}_r^\alpha = f_r^\alpha + \sum_\beta \sigma^\alpha_\beta f_{r+1}^\beta$ with $\sigma^\alpha_\beta = (\mu^\alpha_r - \mu^\beta_{r+1})^{-1} b^{\alpha,r+1}_\beta$, we can normalize $b^{\alpha, r+1}_\beta = 0$. Iteratively, we normalize all $b^{\alpha,k}_\gamma = 0$.
We have $[\tilde{h}, [f_i^\alpha,f_j^\beta]] = (\mu^\alpha_i + \mu^\beta_j) [f_i^\alpha,f_j^\beta]$. If $i \neq j$, then by (i), there is no tail. If $i = j$, then for $\alpha \neq \beta$, (ii) implies there is no tail. Thus, $\fa$ is filtration-rigid.
\end{proof}
\subsection{Exceptional cases and local homogeneity}
\label{S:exceptions}
We now turn to the remaining exceptional cases from Lemma \ref{L:exceptions} and Theorem \ref{T:main-thm}. Our first result in this section is:
\begin{prop} \label{P:ex} For complex or real (regular, normal) $A_2 / P_1$, $A_2 / P_{1,2}$, $B_2 / P_1$ geometries, $\fS = \fU - 1$, given in Table \ref{F:rk2-ex}. For complex or split-real (regular, normal) $G/P = A_1 / P_1 \times G'/P'$ geometries with $\Kh$ concentrated in the (Type II) module $\bbV_{\mu}$, with $\mu = -w\cdot 2\lambda_1$ corresponding to $w = (1,k') \in W^\fp(2)$, we have $\fU_\mu - 1 \leq \fS_\mu \leq \fU_\mu$.
\end{prop}
The rank two exceptions when $\fg$ is simple are given in Table \ref{F:rk2-ex}. These are all \PRp by Recipe \ref{R:squares}. By Example \ref{ex:B2P12}, $B_2 / P_{1,2}$ is not exceptional concerning $\fS$, so we exclude this here.
\begin{center}
\begin{table}[h]
\begin{tabular}{|c|c|l|c|} \hline
$G/P$ & Underlying structures & \multicolumn{1}{c|}{$H^2_+(\fg_-,\fg)$} & $\fS$ \\ \hline\hline
\raisebox{0.05in}{$A_2 / P_1$} & \raisebox{0.05in}{2-dimensional projective structures} & $\Atwo{xs}{-5,1}$ & \raisebox{0.05in}{$3$}\\ \hline
$A_2 / P_{1,2}$ & \begin{tabular}{c} scalar 2nd order ODE up to point transformations \\ (3-dimensional CR structures) \end{tabular}& $\Atwo{xx}{-5,1} \op \Atwo{xx}{1,-5}$ & $3$\\ \hline
$B_2 / P_1$ & \begin{tabular}{c} 3-dimensional conformal Lorentzian \\ (or Riemannian) structures \end{tabular} & $\Btwo{xs}{-5,4}$ & $4$\\ \hline
\end{tabular}
\caption{Rank two exceptions and (complex or real) underlying structures}
\label{F:rk2-ex}
\end{table}
\end{center}
For lower bounds, we exhibit models. For upper bounds, it suffices (by Remark \ref{RM:upper}) to work over $\bbC$, and we show that $\fa^{\phi_0} \subset \fg$ is filtration-rigid using Proposition \ref{P:f-rigid}. By Proposition \ref{P:FR}, this forces $\fS = \fU - 1$ for all three cases. (By Example \ref{EX:proj-FR}, this is valid in the $A_2 / P_1$ case.) For the $A_1 / P_1 \times G'/P'$ exception, we exhibit a model with at least $\fU_\mu-1$ symmetries.
Our second result generalizes Corollary \ref{C:transitive} and Remark \ref{RM:constrained}, which considered the general case, i.e. when $\fS = \fU$ or $\fS_\mu = \fU_\mu$. Considering each exceptional case in turn, we prove:
\begin{thm} \label{T:transitive} Among all complex or split-real (regular, normal) $G/P$ geometries, all submaximally symmetric models are locally homogeneous near a non-flat regular point. Given a $\fg_0$-irreducible $\bbV_\mu \subset H^2_+(\fg_-,\fg)$, the same conclusion holds with the restriction that $\im(\Kh) \subset \bbV_\mu$.
\end{thm}
Given a regular point $x$ and $u \in \pi^{-1}(x)$, this requires showing that $\fg_- \subset \fs(u)$.
\begin{remark} \label{RM:transitive} Since $x$ is regular, then $\fs(u) \subset \fa^{\Kh(u)}$. If $\Kh(u)$ is not in the $G_0$-orbit of $\phi_0 \in \bbV_\mu$, then $\dim(\fa^{\Kh(u)}) < \dim(\fa^{\phi_0}) = \fU_\mu$ by Lemma \ref{L:Tanaka-lw}. In this case, if $\dim(\fs(u)) \geq \fU_\mu -1$, then $\fs(u) = \fa^{\Kh(u)} \supset \fg_-$. Thus, it suffices to consider $\Kh(u)$ in the $G_0$-orbit of $\phi_0$, and by acting on $u$ by $G_0 \subset P$, we may assume $\Kh(u) = \phi_0$, and $\fs(u) \subsetneq \fa^{\phi_0}$ has codimension one.
Since $\fg_- \subset \fg$ is bracket-generating, then intransitivity ($\fg_- \not\subset \fs(u)$) would mean $\fs_{-1}(u) \neq \fg_{-1}$, and $\fs_0(u) = \fa_0$. Now recall from Proposition \ref{P:key-bracket} that $[\fs_0(u),\fg_{-1}] \subset \fs_{-1}(u) \neq \fg_{-1}$. In each case, we will verify that $[\fa_0,\fg_{-1}] = \fg_{-1}$. This yields a contradiction, thereby establishing local homogeneity.
\end{remark}
\subsubsection{2-dimensional projective structures}
\label{S:2d-proj}
Consider $\nabla$ specified in coordinates $(x^0,x^1) = (x,y)$ by
\begin{align*}
\Gamma^0_{11} = \frac{\varepsilon}{x}, \qquad \Gamma^0_{00} = -\frac{1}{2x},
\end{align*}
and zero otherwise, where $\varepsilon = \pm 1$. This corresponds by \eqref{E:proj-ODE} to the ODE $xy'' = \varepsilon (y')^3 - \half y'$, which is non-flat: in \eqref{E:Tresse}, $I_2 = 0$ and $I_1 = \frac{18\varepsilon y'}{x^3} \not\equiv 0$. We have $\fS \geq 3$ since the projective structure $[\nabla]$ has symmetries (i.e.\ point symmetries of the ODE):
\begin{align*}
\bX_1 = \p_y, \qquad \bX_2 = x\p_x + y\p_y, \qquad \bX_3 = 2xy\p_x + y^2 \p_y.
\end{align*}
For $A_2 / P_1$, $H^2_+(\fg_-,\fg) = \Atwo{xw}{-5,1} \cong \bigwedge^2 \fg_1 \ot \fg_1$. Indeed, $\Kh$ is the {\em Liouville tensor} $L = (L_1 dx + L_2 dy) \ot (dx \wedge dy)$. Noting $5\lambda_1 - \lambda_2 = 3\alpha_1 + \alpha_2 = 3(\epsilon_1 - \epsilon_2) + \epsilon_2 - \epsilon_3$ and $\epsilon_1 + \epsilon_2 + \epsilon_3 = 0$,
\begin{align*}
\fa = \fa_{-1} \op \fa_0 = \l\{\l(\begin{array}{c|cc}
h & 0 & 0\\ \hline u & 4h & 0 \\ v & s & -5h
\end{array}\r)\r\} = \tspan\{ e_{-1}^1, e_{-1}^2, s_0, h_0 \}.
\end{align*}
We have $\text{ad}_{h_0}$ diagonal with $\cE_{-1} = \{ 3, -6 \}$, $\cE_0 = \{ -9, 0 \}$. Hence, $\fa$ is filtration-rigid, and $\fS \leq 3$. Clearly, $[\fa_0,\fg_{-1}] = \fg_{-1}$, hence local homogeneity follows.
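In detail, the hypotheses of Proposition \ref{P:f-rigid} hold here:
\[
\cE_{-1} + \cE_0 = \{-15,-6,3\}, \qquad \bigwedge\nolimits^2 \cE_{-1} = \{-3\}, \qquad \bigwedge\nolimits^2 \cE_0 = \{-9\},
\]
so (i) holds since $(\cE_{-1}+\cE_0) \cap \cE_0 = \emptyset$ (only $k = 0$ satisfies $k > -1$), and (ii) holds since $\bigwedge^2\cE_{-1}$ is disjoint from $\cE_{-1} \cup \cE_0$, while no $k > 0$ occurs.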
\subsubsection{Scalar 2nd order ODE}
\label{S:scalar-2-ODE}
Given $y'' = F(x,y,y')$, there are the Tresse relative invariants \cite{Tresse1896}:
\begin{align} \label{E:Tresse}
I_1 = \l( \frac{d}{dx} \r)^2 (F_{pp}) - F_p \frac{d}{dx} (F_{pp}) - 4 \frac{d}{dx} (F_{yp}) + 4 F_p F_{yp} - 3 F_y F_{pp} + 6 F_{yy}, \qquad I_2 = F_{pppp},
\end{align}
where $p = y'$, and $\frac{d}{dx} = \p_x + p \p_y + F \p_p$. There are two lines $L',L'' \subset H^2_+(\fg_-,\fg)$ associated to $w = (12)$ and $(21)$. Two-dimensional projective structures correspond to ODEs with $I_2 = 0$.
Suppose the geometry is not locally flat. The set $U = \{ z \in M \mid I_1(z) I_2(z) \neq 0 \}$ is open. If $U \neq \emptyset$, it contains a regular point $z$, and for any $u \in \pi^{-1}(z)$, $\fa^{\Kh(u)} = \fg_-$, so $\dim(\cS) \leq 3$. If $U = \emptyset$, then exactly one of the open sets $U_i = \{ z \in M \mid I_i(z) \neq 0 \}$ is non-empty. If $U_2 = \emptyset$, then by Section \ref{S:2d-proj}, $\dim(\cS) \leq 3$. By the duality $A_\rkg / P_1 \cong A_\rkg / P_2$, we conclude the same if $U_1 = \emptyset$. Hence, $\fS =3$. Local homogeneity follows because the same holds for 2-dimensional projective structures.
\begin{example}
Given $0 \neq a \in \bbR$, $y'' = \frac{1}{x} \l(y' - (y')^3 + a(1-(y')^2)^{3/2}\r)$ admits point symmetries
\begin{align*}
\bX_1 = \p_y, \qquad \bX_2 =x\p_x+y\p_y, \qquad \bX_3 = 2xy \p_x + (x^2 + y^2)\p_y,
\end{align*}
so is submaximally symmetric. Generically, $I_1 I_2 \neq 0$, so $\im(\Kh)$ is not contained in only one of $L'$ or $L''$. On $(x,y,p)$-space, $\cS$ is obtained by prolongation -- see \eqref{E:prolongation}. This gives $\bX_1^{(1)} = \bX_1$, $\bX_2^{(1)} = \bX_2$, and $\bX_3^{(1)} = \bX_3 + 2x(1-p^2) \p_p$. Since $x\neq 0$, $\cS$ is generally transitive, except along $p = \pm 1$. Thus, the singular set consists of: (i) points for which the right hand side of the ODE is not smooth (violating our blanket convention), and (ii) irregular points for $\cS$.
\end{example}
The other real form of $A_2 / P_{1,2}$ geometries yields 3-dimensional CR structures. In \cite{Car1932}, Cartan gave many examples of such structures having 3-dimensional symmetry. Thus, $\fS = 3$ here as well.
\subsubsection{3-dimensional conformal structures} \label{S:3d-conformal} Note the models:
\begin{align*}
\begin{array}{c|c|c}
& \mbox{Metric} & \mbox{Killing symmetries}\\ \hline
\mbox{Riemannian} & dx^2 + dy^2 + (dz - xdy)^2 & \p_y, \quad \p_z, \quad \p_x + y \p_z, \quad y\p_x - x\p_y + \half (y^2 - x^2) \p_z\\
\mbox{Lorentzian} & dxdy + (dz - xdy)^2 & \p_y, \quad \p_z, \quad \p_x + y \p_z, \quad x\p_x - y\p_y
\end{array}
\end{align*}
These have nonzero Cotton tensor, so are not locally conformally flat. Hence, $\fS \geq 4$.
For $B_2 / P_1$, $H^2_+(\fg_-,\fg) = \Btwo{xw}{-5,4}$, and $5\lambda_1 - 4\lambda_2 = 3\alpha_1 + \alpha_2 = 3(\epsilon_1 - \epsilon_2) + \epsilon_2 = 3\epsilon_1 - 2\epsilon_2$:
\begin{align*}
\fa = \fg_{-1} \op \fa_0 = \l\{\l(\begin{array}{c|ccc|c}
2h & 0 & 0 & 0 & 0\\ \hline
u & 3h & 0 & 0 & 0\\
v & s & 0 & 0 & 0\\
w & 0 & -s & -3h & 0\\ \hline
0 & -w & -v & -u & -2h
\end{array}\r)\r\} = \tspan\{ e_{-1}^1, e_{-1}^2, e_{-1}^3, s_0, h_0 \}.
\end{align*}
We have $\text{ad}_{h_0}$ diagonal, $\cE_{-1} = \{ 1, -2, -5 \}$, $\cE_0 = \{ -3, 0 \}$. Hence, $\fa$ is filtration-rigid, and $\fS \leq 4$.
Clearly, $[\fa_0,\fg_{-1}] = \fg_{-1}$, hence local homogeneity follows. This handles the Lorentzian case.
The Riemannian case is simpler. Here, $\fg_0 = \bbR \op \fso(3)$, and $H^2_+(\fg_-,\fg)$ is isomorphic to the space of $3 \times 3$ symmetric trace-free matrices with action $A \cdot \phi = [A,\phi]$. Since any such $\phi$ is orthogonally diagonalizable, we get $\fS \leq \fU = 4$ from considering the two non-trivial cases:
\begin{enumerate}
\item $\phi$ has three distinct eigenvalues: $\fann(\phi) = 0$, so $\fa^\phi = \fg_-$, and $\dim(\fa^\phi) = 3$.
\item $\phi$ has two equal nonzero eigenvalues: $\dim(\fann(\phi)) = 1$, so $\dim(\fa^\phi) = 4$.
\end{enumerate}
\subsubsection{Semisimple exceptions}
Consider $G/P = A_1 / P_1 \times G'/P'$, $w = (1,k') \in W^\fp(2)$, and highest weight $2\lambda_1$ of $A_1$. Since we will consider only geometries with $\Kh$ concentrated in $\bbV_\mu$ with
\begin{align*}
\mu = -w\cdot 2\lambda_1 = -\sr_1 \cdot 2\lambda_1 - \sr_{k'} \cdot 0' = 4\lambda_1 + \alpha_{k'} = 2\alpha_1 + \alpha_{k'},
\end{align*}
we may assume that $G'$ is simple. The lowest weight vector of $\bbV_\mu$ is $\phi_0 = e_{\alpha_1} \wedge e_{\alpha_{k'}} \ot e_{\alpha_1}$, and $\fU_\mu = \dim(\fa^{\phi_0})$. We will construct a model with at least $\fU_\mu -1$ symmetries.
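The displayed computation uses the affine action $w \cdot \lambda = w(\lambda + \rho) - \rho$: on the $A_1$-factor, $\rho = \lambda_1$ and $\alpha_1 = 2\lambda_1$, so
\[
-\sr_1 \cdot 2\lambda_1 = -\bigl(\sr_1(3\lambda_1) - \lambda_1\bigr) = 4\lambda_1 = 2\alpha_1, \qquad
-\sr_{k'} \cdot 0' = -\bigl(\sr_{k'}(\rho') - \rho'\bigr) = \alpha_{k'}.
\]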
By Theorem \ref{T:corr-Tanaka}, the Lie algebra structure of $\fa^{\phi_0}$ is the same if we consider $\fp' = \fp_{k'}$ to be maximal parabolic. Then by Proposition \ref{P:PR}, $(\fg,\fp,\mu)$ is PR. Take a basis $H,X,Y$ of $\fsl_2(\bbC)$ with the standard relations $[H,X] = X$, $[H,Y] = -Y$, $[X,Y] = H$. Identifying $e_{-\alpha_1}$ with $Y$, its Killing dual $e_{\alpha_1}$ is $\frac{X}{2}$. We have $\alpha_1(H) = 1$, i.e. $H$ acts as the grading element with respect to $P_1$. Thus, we have $\fa = \fa^{\phi_0} = \fg_- \op \fa_0^{\phi_0}$, where $\fg_- = \tspan\{ Y \} \op \fg_-'$, and
\[
\fa_0 = \fh_0 \op \bop_{\gamma \in \Delta(\fg'_{0,\leq 0})} \fg'_\gamma, \qquad \fh_0 := \l\{ -\frac{1}{2} \alpha_{k'}(H') H + H' \mid H' \in \fh' \r\}.
\]
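The coefficient $-\frac{1}{2}\alpha_{k'}(H')$ in $\fh_0$ is forced by annihilation of $\phi_0$: since $\phi_0$ has weight $\mu = 2\alpha_1 + \alpha_{k'}$, and $\alpha_1(H) = 1$, $\alpha_1(H') = 0 = \alpha_{k'}(H)$, each $H'' = -\frac{1}{2}\alpha_{k'}(H') H + H' \in \fh_0$ satisfies
\[
\mu(H'') = 2\alpha_1(H'') + \alpha_{k'}(H'') = -\alpha_{k'}(H') + \alpha_{k'}(H') = 0.
\]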
Let $\ker(\alpha_{k'}) = \{ H' \in \fh' \mid \alpha_{k'}(H') = 0 \}$, and define
\[
\tilde\fa_0' = \ker(\alpha_{k'}) \op \bop_{\gamma \in \Delta(\fg'_{0,\leq 0})} \fg'_\gamma, \qquad \tilde\fa = \fg_- \op \tilde\fa_0' = \tspan\{ Y \} \op \fg_-' \op \tilde\fa_0'.
\]
As in Section \ref{S:realize}, we define $\tilde\ff$ to be the vector space $\tilde\fa$ with the same brackets, but with the single additional relation $[Y,e_{-\alpha_{k'}}]_{\tilde\ff} = Y$. Equivalently, $[Y,u]_{\tilde\ff} = \lambda(u) Y$ for all $u \in \tilde\ff$, where $\lambda$ is the linear functional Killing dual to $e_{-\alpha_{k'}}$, i.e.\ the Killing pairing against $e_{\alpha_{k'}}$, normalized so that $\lambda(e_{-\alpha_{k'}}) = 1$. For $\tilde\ff$ to be a Lie algebra, it suffices to check $0 = \Jac_{\tilde\ff}(Y,u,v)$ for all $u,v \in \tilde\ff$, or equivalently, $\lambda([u,v]_{\tilde\ff}) = 0$. Clearly, this holds for $u,v \in \fg_-$. If $u \in \tilde\fa_0'$, then $u \cdot \phi_0 = 0$, which is equivalent to $\lambda \circ \text{ad}_u = 0$ since $u \cdot e_{\alpha_1} = 0$. Thus, $\lambda([u,v]_{\tilde\ff}) = \lambda([u,v]) = 0$, so $\tilde\ff$ is a Lie algebra.
Let $\tilde\fk = \tilde\fa_0' \subset \tilde\ff$. Define the linear map $\vartheta : \tilde\ff \to \fg$ by $\vartheta=\text{id}+H\otimes e_{\alpha_{k'}} -\frac{1}{2} X \otimes e_{\alpha_1}$. Thus, $\vartheta|_{\tilde{\fk}}$ is the natural inclusion, and $\vartheta$ maps all root vectors in $\fg_- \subset \tilde\ff$ over to $\fg$ naturally except for
\[
\vartheta(Y) = Y - \frac{1}{2} X, \qquad \vartheta(e_{-\alpha_{k'}}) = e_{-\alpha_{k'}} + H.
\]
For $u \in \tilde\fk$ and $v \in \tilde\ff$, since $Y, e_{-\alpha_{k'}} \perp\im(\tad_u)$, then $\vartheta([u,v]_{\tilde\ff}) = \vartheta([u,v]) = [u,v] = [u,\vartheta(v)] = [\vartheta(u),\vartheta(v)]$, so $\vartheta$ is $\tilde\fk$-equivariant. Also, $\vartheta$ induces a linear isomorphism $\tilde\ff / \tilde\fk \cong \fg / \fp$.
As in Section \ref{S:realize}, the simplifying reduction to $\fp' = \fp_{k'}$ was mainly used to check that $\tilde\ff$ satisfies the Jacobi identity. Now for the original geometry, $\tilde\fa = \tspan\{ Y \} \op \prn^{\fg'}(\fg_-',\fann(e_{\alpha_{k'}}))$ is the same $\tilde\fa$ as above (as Lie algebras), and similarly define $\tilde\ff$ and $\vartheta$ as above. Since $\tilde\fk = \tilde\fa'_{\geq 0}$ is contained in the $\tilde\fk$ stated above for $\fp_{k'}$, then $\vartheta$ defines an $\tilde\ff$-invariant Cartan connection $\omega_\vartheta$ on some $\cG = F \times_K P$ over $M = F/K$ (where $F$ may only be a local Lie group).
Using \eqref{E:kappa-alpha}, $\kappa_\vartheta$ vanishes except for
\begin{align*}
\kappa_\vartheta(Y,e_{-\alpha_{k'}}) = [\vartheta(Y),\vartheta(e_{-\alpha_{k'}})] - \vartheta([Y,e_{-\alpha_{k'}}]_{\tilde\ff})
= \l[Y - \frac{1}{2} X, e_{-\alpha_{k'}} + H\r] - \vartheta(Y) = X,
\end{align*}
i.e. $\kappa_\vartheta = \phi_0$. Thus, the geometry is regular and normal, and $\fU_\mu -1 \leq \fS_\mu \leq \fU_\mu$.
Finally, $[\fa_0, \fg_{-1}] \supset [\fh_0,\fg_{-1}] = \fg_{-1}$, so local homogeneity follows.
This completes the proof of both Proposition \ref{P:ex} and Theorem \ref{T:transitive}.
\section{Results and local models for specific geometries}
\label{S:specific}
In this section, we illustrate the use of Theorems \ref{T:main-thm} and \ref{T:DD} for the computation of submaximal dimensions for specific geometries. We also refer the reader to Appendix \ref{App:Submax} which summarizes our computational algorithm.
\subsection{Conformal geometry}
In dimension $n = p+q \geq 3$, conformal geometry is the prototypical underlying structure of a parabolic geometry. The model space is the (pseudo-) conformal sphere $\mathbb{S}^{p,q}$. Embedded as the (null) projective quadric in $\bbP(\mathbb{R}^{p+1,q+1})$, we have $\mathbb{S}^{p,q} \cong \text{SO}_{p+1,q+1} / P_1$, where $P_1$ is the stabilizer of a null line. The Lie algebra $\fg = \fso_{p+1,q+1}$ is a real form of
\[
\fso_{n+2}(\bbC) = \l\{ \begin{array}{ll} B_\rkg, & \mbox{$n = 2\rkg - 1$ odd};\\ D_\rkg, & \mbox{$n = 2\rkg - 2$ even.} \end{array} \r.
\]
and is 1-graded by $P_1$. The space $\bbW = H^2(\fg_-,\fg)$ is the space of Weyl tensors (homogeneity $+2$) for $n \geq 4$ and Cotton tensors (homogeneity $+3$) for $n=3$. The geometry is \PRp by Corollary \ref{C:g-PR}.
First work over $\bbC$, so $\fg = \fso_{n+2}(\bbC)$, and $\fg_0^{ss}$ is either $B_{\rkg-1}$ or $D_{\rkg-1}$. For $H^2_+(\fg_-,\fg)$, Kostant gives \begin{align*}
& B_2 / P_1: \begin{tiny} \begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\dbond{r}{0,0};
\DDnode{x}{0,0}{-5};
\DDnode{s}{1,0}{4};
\useasboundingbox (-.4,-.2) rectangle (1.2,0.55);
\end{tikzpicture} \end{tiny}, \quad
B_3 / P_1: \begin{tiny} \begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\bond{0,0};
\dbond{r}{1,0};
\DDnode{x}{0,0}{-4};
\DDnode{w}{1,0}{0};
\DDnode{s}{2,0}{4};
\useasboundingbox (-.4,-.2) rectangle (2.2,0.55);
\end{tikzpicture} \end{tiny}, \quad
B_\rkg / P_1\,\, (\rkg \geq 4): \begin{tiny} \begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\bond{0,0};
\bond{1,0};
\bond{2,0};
\tdots{3,0};
\dbond{r}{4,0};
\DDnode{x}{0,0}{-4};
\DDnode{w}{1,0}{0};
\DDnode{s}{2,0}{2};
\DDnode{w}{3,0}{0};
\DDnode{w}{4,0}{0};
\DDnode{w}{5,0}{0};
\useasboundingbox (-.4,-.2) rectangle (5.4,0.55);
\end{tikzpicture} \end{tiny}, \\
& D_3 / P_1: \begin{tiny} \begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\diagbond{u}{0,0};
\diagbond{d}{0,0};
\DDnode{x}{0,0}{\!\!\!\!-4};
\DDnode{s}{0.5,0.865}{4};
\DDnode{w}{0.5,-0.865}{0};
\useasboundingbox (-.4,-.2) rectangle (0.7,0.55);
\end{tikzpicture} \end{tiny} \op
\begin{tiny} \begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\diagbond{u}{0,0};
\diagbond{d}{0,0};
\DDnode{x}{0,0}{\!\!\!\!-4};
\DDnode{w}{0.5,0.865}{0};
\DDnode{s}{0.5,-0.865}{4};
\useasboundingbox (-.4,-.2) rectangle (0.7,0.55);
\end{tikzpicture} \end{tiny}, \quad
D_4 / P_1: \begin{tiny} \begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\bond{0,0};
\diagbond{u}{1,0};
\diagbond{d}{1,0};
\DDnode{x}{0,0}{-4};
\DDnode{w}{1,0}{0};
\DDnode{s}{1.5,0.865}{2};
\DDnode{s}{1.5,-0.865}{2};
\useasboundingbox (-.4,-.2) rectangle (1.7,0.55);
\end{tikzpicture} \end{tiny}, \quad
D_\rkg / P_1\,\, (\rkg \geq 5): \begin{tiny} \begin{tikzpicture}[scale=\myscale,baseline=-3pt]
\bond{0,0};
\bond{1,0};
\bond{2,0};
\tdots{3,0};
\diagbond{u}{4,0};
\diagbond{d}{4,0};
\DDnode{x}{0,0}{-4};
\DDnode{w}{1,0}{0};
\DDnode{s}{2,0}{2};
\DDnode{w}{3,0}{0};
\DDnode{w}{4,0}{0};
\DDnode{w}{4.5,0.865}{0};
\DDnode{w}{4.5,-0.865}{0};
\useasboundingbox (-.4,-.2) rectangle (5.4,0.55);
\end{tikzpicture} \end{tiny}
\end{align*}
Above, $W^\fp_+(2) = \{ (12) \}$ always, except $W^\fp_+(2) = \{ (12), (13) \}$ for $D_3 / P_1$. The $n=3$ case was studied in Section \ref{S:3d-conformal}, so let $n \geq 4$. Then:
\Ben
\item $n \geq 5$: $\lambda_\fg = \lambda_2$, $w = (12)$, $J_w = \{ 3 \}$. Look at $\fp_w^{\opn} \cong \fp_2 \subset \fg_0^{ss}$. Then using Recipe \ref{R:dim},
\begin{align*}
\dim(\fa_0) &= \dim(\fp_2)
= \mycase{
\half\l(\dim(B_{\rkg-1}) + 1 + \dim(A_1) + \dim(B_{\rkg-3})\r), &\quad n \mbox{ odd};\\
\half\l(\dim(D_{\rkg-1}) + 1 + \dim(A_1) + \dim(D_{\rkg-3})\r), &\quad n \mbox{ even};}\\
&= \l\{ \begin{array}{ll}
2\rkg^2 - 7\rkg + 10, & n \mbox{ odd}; \\
2\rkg^2 - 9\rkg + 14, & n \mbox{ even}
\end{array} \r. \qRa \dim(\fa) = \dim(\fg_{-1}) + \dim(\fa_0) = \binom{n-1}{2} + 6.
\end{align*}
\item $n = 4$: $D_3 / P_1$, $\lambda_\fg = \lambda_2 + \lambda_3$, $\fg_0^{ss} = A_1 \times A_1$. For $w = (12)$, $J_w = \{ 3 \}$, $\fp_w^{\opn} \cong A_1 \times \fp_1 \subset \fg_0^{ss}$, so $\dim(\fa_0) = 5$ and $\dim(\fa(w)) = 9$. By symmetry, for $w = (13)$, $\dim(\fa(w)) = 9$ as well.
\Een
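As an arithmetic cross-check of these counts (ours; using $\dim(\fg_{-1}) = n$, and writing $n = 2\rkg - 1$ in the odd case, $n = 2\rkg - 2$ in the even case), both cases indeed yield the same closed form:
\begin{align*}
n \mbox{ odd}: &\quad (2\rkg - 1) + (2\rkg^2 - 7\rkg + 10) = 2\rkg^2 - 5\rkg + 9 = \binom{2\rkg-2}{2} + 6 = \binom{n-1}{2} + 6;\\
n \mbox{ even}: &\quad (2\rkg - 2) + (2\rkg^2 - 9\rkg + 14) = 2\rkg^2 - 7\rkg + 12 = \binom{2\rkg-3}{2} + 6 = \binom{n-1}{2} + 6.
\end{align*}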
Thus, $\fU_\bbC = \binom{n-1}{2} + 6$ for $n \geq 4$. In any signature over $\bbR$, $\fS \leq \fU \leq \fU_\bbC$ by Remark \ref{RM:upper}. For $n \geq 4$, $\fa_0$ is related to $\fp_2 \subset \fg_0^{ss}$, which is the Lie algebra of the stabilizer of a null 2-plane (in the standard representation of $\fg_0^{ss}$). These do not exist in Riemannian and Lorentzian signatures, so $\fS \leq \fU < \fU_\bbC$ in these cases. In all other signatures, we exhibit an explicit model realizing the upper bound.
\subsubsection{Non-Riemannian and non-Lorentzian signatures}
\label{S:nR-nL}
Consider the $(2,2)$ pp-wave metric:\footnote{In \cite{Kru2012}, the signature $(2,2)$ pp-wave metric was announced as having submaximal conformal symmetry dimension. No proof was given there, but using tools developed in this paper, we can confirm that this is indeed correct.}
\begin{align} \label{E:pp-2,2}
\ppmetric{2,2} = y^2 dw^2 + dw dx + dy dz.
\end{align}
This has 9-dimensional conformal symmetry algebra spanned by
\begin{align*}
\bX_1 &= \p_x, \qquad
\bX_2 = \p_z, \qquad
\bX_3 = \p_w, \qquad
\bX_4 = -y\p_x + w \p_z, \\
\bX_5 &= 3(z + yw^2)\p_x - 3w\p_y - w^3 \p_z, \qquad
\bX_6 = 2yw\p_x - \p_y - w^2 \p_z, \\
\bX_7 &= -x\p_x - y\p_y + w\p_w + z\p_z, \qquad
\bX_8 = 2 y^3 \p_x - 3 y \p_w + 3 x \p_z,\\
\bT &= 2x \p_x + y \p_y + z\p_z.
\end{align*}
Note that $\bT$ is a homothety, while $\bX_1, ..., \bX_8$ are Killing fields.
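The Killing and homothety properties above can be verified mechanically. The following SymPy sketch (ours, purely illustrative) checks a sample of the fields against \eqref{E:pp-2,2} via the coordinate formula $(\cL_V \metric)_{ab} = V^c \p_c \metric_{ab} + \metric_{cb}\,\p_a V^c + \metric_{ac}\,\p_b V^c$:

```python
import sympy as sp

w, x, y, z = sp.symbols('w x y z')
coords = [w, x, y, z]

# (2,2) pp-wave metric g = y^2 dw^2 + dw dx + dy dz, in coordinates (w, x, y, z);
# the cross terms dw dx and dy dz are symmetric products, hence the 1/2 entries
g = sp.Matrix([[y**2, sp.Rational(1, 2), 0, 0],
               [sp.Rational(1, 2), 0, 0, 0],
               [0, 0, 0, sp.Rational(1, 2)],
               [0, 0, sp.Rational(1, 2), 0]])

def lie_g(V):
    # (L_V g)_{ab} = V^c d_c g_{ab} + g_{cb} d_a V^c + g_{ac} d_b V^c
    L = sp.zeros(4, 4)
    for a in range(4):
        for b in range(4):
            L[a, b] = sum(V[c]*sp.diff(g[a, b], coords[c])
                          + g[c, b]*sp.diff(V[c], coords[a])
                          + g[a, c]*sp.diff(V[c], coords[b]) for c in range(4))
    return sp.simplify(L)

X4 = [0, -y, 0, w]            # X_4 = -y d_x + w d_z
X8 = [-3*y, 2*y**3, 0, 3*x]   # X_8 = 2y^3 d_x - 3y d_w + 3x d_z
T  = [0, 2*x, y, z]           # T  = 2x d_x + y d_y + z d_z

assert lie_g(X4) == sp.zeros(4, 4)                    # Killing
assert lie_g(X8) == sp.zeros(4, 4)                    # Killing
assert sp.simplify(lie_g(T) - 2*g) == sp.zeros(4, 4)  # homothety: L_T g = 2 g
```

The remaining fields $\bX_1, \bX_2, \bX_3, \bX_5, \bX_6, \bX_7$ can be checked the same way.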
\begin{lemma} Let $n = p+q +4$, and $\eucmetric{p,q}$ the flat Euclidean metric of signature $(p,q)$. Then $\metric = \ppmetric{2,2} + \eucmetric{p,q}$ has conformal symmetry algebra of dimension $\binom{n-1}{2} + 6$.
\end{lemma}
\begin{proof} It is straightforward to check that $\ppmetric{2,2}$ is not conformally flat, so neither is $\metric$. Writing $\epsilon_i = \pm 1$, the metric $\eucmetric{p,q} = \sum_{i=1}^{p+q} \epsilon_i (du_i)^2$ admits the Killing fields $\bU_i = \p_{u_i}$ and $\bV_{ij} = \epsilon_i u_i \p_{u_j} - \epsilon_j u_j \p_{u_i}$, where $i < j$. The metric $\metric$ admits the Killing fields $\bU_i$, $\bV_{ij}$, $\bX_k$, as well as $\bY_i = 2\epsilon_i u_i\p_z - y \p_{u_i}$, and $\bW_i = 2\epsilon_i u_i \p_x - w\p_{u_i}$.
The vector field $\tilde\bT = \bT + \sum_{i=1}^{p+q} u_i \p_{u_i}$ is a homothety for $\metric$.
\end{proof}
Thus, we have proven:
\begin{thm} \label{T:conf-sharp} For conformal geometry in dimension $n \geq 4$, we have $\fS \leq \binom{n-1}{2} + 6$. Except for Riemannian and Lorentzian signatures, this upper bound is sharp. For $n=3$, we have $\fS = 4$.
\end{thm}
\subsubsection{4-dimensional Lorentzian}
\label{S:4d-Lor}
Here, $\fg = \fso_{2,4}$ and $\fg_0 \cong \bbR \op \fsl_2(\bbC)_\bbR$, where $\fsl_2(\bbC)_\bbR \cong \fso(1,3)$ refers to the {\em real} Lie algebra underlying $\fsl_2(\bbC)$. As before, $\bbW = H^2(\fg_-,\fg)$ has homogeneity $+2$, which gives the action of the grading element $Z \in \fz(\fg_0) \cong \bbR$.
As a $\fsl_2(\bbC)_\bbR$-representation, $\bbW \cong \bigodot{}^{\!4} (\bbC^2)$, which is irreducible. Thus, Weyl tensors are identified with complex binary quartics, and their classification according to root type is precisely the well-known Petrov classification \cite{Pet1954}, \cite{Pen1960}.
To each Petrov type, there is a collection $\cO$ of $G_0$-orbits. To calculate $\fU_\cO$, it suffices to maximize $\fa_0^\phi = \fann(\phi)$ among representative elements from these orbits. Then $\fS_\cO \leq \fU_\cO$ by Remark \ref{RM:constrained}. Let $\{ x,y \}$ be the standard $\bbC$-basis of $\bbC^2$, and let
$H = \pmat{1 & 0 \\ 0 & -1}$,
$X = \pmat{0 & 1 \\ 0 & 0}$,
$Y = \pmat{0 & 0 \\ 1 & 0}$
be the standard $\bbC$-basis for $\fsl_2(\bbC)$, so $\fsl_2(\bbC)_\bbR$ has $\bbR$-basis $\{ H, iH, X, iX, Y, iY \}$. A simple case analysis yields Table \ref{tbl:Petrov} (except for the last column).
\begin{table}[h]
$\begin{array}{|c|c|c|c|c|c|} \hline
\mbox{Type} & \mbox{Normal form $\phi$} & \bbR\mbox{-basis for } \fa_0^\phi & \dim(\fa^\phi)& \fa^\phi \mbox{ filtration-rigid?} \\ \hline
\mbox{N} & x^4 & X, iX, 2Z- H & 7 & \times\\
\mbox{III} & x^3 y & Z- H & 5 & \checkmark\\
\mbox{D} & x^2 y^2 & H, iH & 6 & \times\\
\mbox{II} & x^2 y(x-y) & \cdot & 4 & \times\\
\mbox{I} & xy(x-y)(x-ky) & \cdot & 4 & \times\\ \hline
\end{array}$
\caption{Petrov types and upper bounds on conformal symmetry dimensions}
\label{tbl:Petrov}
\end{table}
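The case analysis behind the annihilator column of Table \ref{tbl:Petrov} can be spot-checked by computer algebra. Below is a small SymPy sketch (ours): $\fsl_2(\bbC)$ acts on binary quartics by the derivation extending the standard action on $\bbC^2$, and the scalar $+2$ by which $Z$ acts on $\bbW$ (its homogeneity) is encoded explicitly:

```python
import sympy as sp

x, y = sp.symbols('x y')

def act(A, p):
    # derivation action of A = ((a,b),(c,d)) in gl(2,C) on a binary form p(x,y),
    # extending A e1 = a e1 + c e2, A e2 = b e1 + d e2 on C^2 = span{x, y}
    (a, b), (c, d) = A
    return sp.expand((a*x + c*y)*sp.diff(p, x) + (b*x + d*y)*sp.diff(p, y))

H = ((1, 0), (0, -1))
X = ((0, 1), (0, 0))
Zs = 2  # Z acts on W by the scalar +2 (homogeneity of H^2)

phiN   = x**4        # type N
phiIII = x**3*y      # type III
phiD   = x**2*y**2   # type D

checks = [
    act(X, phiN),                  # X . x^4 = 0  (hence also iX)
    2*Zs*phiN - act(H, phiN),      # (2Z - H) . x^4 = 0
    Zs*phiIII - act(H, phiIII),    # (Z - H) . x^3 y = 0
    act(H, phiD),                  # H . x^2 y^2 = 0  (hence also iH)
]
assert all(sp.simplify(c) == 0 for c in checks)
```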
The following representative metrics and their conformal symmetries establish sharpness of the upper bounds (and filtration non-rigidity) in all cases except type III.
\Ben
\item[N:] $\ppmetric{3,1} = dy^2 + dz^2 + dwdx + y^2 dw^2$ (signature $(3,1)$ pp-wave):
\begin{align*}
\bX_1 &= \p_x, \quad \bX_2 = \p_z, \quad \bX_3 = \p_w, \quad
\bX_4 = -2z\p_x + w\p_z, \\
\bX_5 &= e^{-w}(2y\p_x + \p_y), \quad
\bX_6 = e^w(-2y\p_x + \p_y), \quad \bT = 2x\p_x + y\p_y + z\p_z.
\end{align*}
\item[III:] $\metric = \frac{3}{|\Lambda|} dz^2 + e^{4z} dx^2 + 4 e^z dx dy + 2e^{-2z}(dy^2 + du dx)$ (Kaigorodov metric \cite[(12.35)]{SKMH2003}):
\[
\bX_1 = \p_u, \quad \bX_2 = \p_x, \quad \bX_3 = \p_y, \quad \bX_4 = 2x \p_x - y \p_y - \p_z - 4u\p_u.
\]
Another solution is
$\metric = \frac{r^2}{x^3} (dx^2 + dy^2) - 2 du dr + \frac{3}{2} x du^2$ (Siklos metric \cite[(38.1)]{SKMH2003}):
\[
\bX_1 = \p_y, \quad \bX_2 = \p_u, \quad \bX_3 = 2(x\p_x + y\p_y) + r\p_r - u\p_u, \quad \bT = u\p_u + r\p_r.
\]
\item[D:] $\metric = a^2(dx^2 + \sinh^2(x) dy^2) + b^2(dz^2 - \sinh^2(z) dt^2)$ ($a,b$ constant):
\begin{align*}
\bX_1 &= \p_y, \quad \bX_2 = \p_t, \quad
\bX_3 = e^{-t} (\p_z + \coth(z) \p_t), \quad
\bX_4 = e^t (\p_z - \coth(z) \p_t), \\
\bX_5 &= -\cos(y) \p_x + \coth(x) \sin(y) \p_y, \quad
\bX_6 = \sin(y) \p_x + \coth(x) \cos(y) \p_y.
\end{align*}
This is a product of two spaces of constant curvature \cite[(12.8)]{SKMH2003}.
\item[II:] $\metric = dz^2+ e^{-2z} (dy^2 + 2dxdu) - 35 e^{4z} dx^2 + e^{-8z} du^2$
\[
\bX_1 = \p_u, \quad \bX_2 = \p_x, \quad \bX_3 = \p_y, \quad \bX_4 = 2x \p_x - y \p_y - \p_z - 4u\p_u.
\]
This metric is not Einstein, has Ricci scalar a nonzero constant, and appears to be new. In \cite{SKMH2003}, Table 38.3 incorrectly lists (12.29) as a type II metric with 4-dimensional isometry group, while (12.29) is in fact type D, as indicated at the bottom of p.179. Also, the type II metric (13.65) is indicated as having four Killing vectors (13.66), but the fourth listed vector field is incorrect. Professor Malcolm MacCallum has indicated to us that no type II metric with 4-dimensional isometry group appears to have been known in the literature.
Our strategy for finding such a type II metric was to obtain a deformation of the type III Kaigorodov metric while preserving its symmetries. This led to the ansatz
\[
\metric = dz^2 + a_1 e^{-2z} dy^2 + a_2 e^{-2z} dxdu + a_3 e^{4z} dx^2 + a_4 e^z dy dx + a_5 e^{-8z} du^2,
\]
where $a_i$ are constants. Imposing the type II condition led to the metric indicated above.
\item[I:] $\metric = dx^2 + e^{-2x} dy^2 + e^x \cos(\sqrt{3} x) (dz^2 - dt^2) - 2e^x\sin(\sqrt{3}x) dz dt$ (Petrov metric \cite[(12.14)]{SKMH2003}):
\[
\bX_1 = \p_y, \quad \bX_2 = \p_z, \quad \bX_3 = \p_t, \quad \bX_4 = \p_x + y \p_y + \half (\sqrt{3} t - z) \p_z - \half ( t + \sqrt{3} z) \p_t.
\]
\Een
\begin{thm} \label{T:Petrov} In 4-dimensional Lorentzian geometry, the maximal dimension of the conformal symmetry algebra for metrics of constant Petrov type is:
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|} \hline
Petrov type & N & III & D & II & I\\ \hline
max. sym. dim. & 7 & 4 & 6 & 4 & 4 \\ \hline
\end{tabular}
\end{center}
All models realizing these upper bounds are locally homogeneous near a regular point.
\end{thm}
\begin{proof} For type III metrics, we must show that the associated (5-dimensional) $\fa = \fg_{-1} \op \fa_0$ is filtration-rigid.
As a $\fsl_2(\bbC)_\bbR$-representation, $\fg_{-1}$ is the space of Hermitian $2 \times 2$-matrices $\cH$ with the action of $A \in \fsl_2(\bbC)_\bbR$ given by $M \mapsto AM + M A^*$. For $\cH$, take the standard basis
\[
e_{-1}^1 = \pmat{1 & 0\\ 0 & 0}, \quad
e_{-1}^2 = \pmat{0 & 1\\ 1 & 0}, \quad
e_{-1}^3 = \pmat{0 & i\\ -i & 0}, \quad
e_{-1}^4 = \pmat{0 & 0\\ 0 & 1}.
\]
Since $H$ acts by $\diag(2,0,0,-2)$, and $Z$ acts as $-1$ on $\fg_{-1}$, then $e_0 := Z-H$ acts diagonally with eigenvalues $\cE_{-1} = \{ -3, -1, -1, 1 \}$, $\cE_0 = \{ 0 \}$. (All other brackets on $\fa$ are trivial.) Let $\ff$ be a filtered deformation of $\fa$. The first hypothesis of Proposition \ref{P:f-rigid} is satisfied, as is (i), but (ii) is not. Following the argument there, we still have $\tilde{e}_0 \in \ff^0$ which acts diagonally in some basis $\{ f_i^\alpha \}$ with the same eigenvalues as above, and with the only other non-trivial brackets $[f_{-1}^2,f_{-1}^4] = a \tilde{e}_0$, $[f_{-1}^3,f_{-1}^4] = b \tilde{e}_0$. By the Jacobi identity, $0 = \Jac_\ff(f_{-1}^2, f_{-1}^3,f_{-1}^4) = bf_{-1}^2 - af_{-1}^3$, so $a = b=0$. Thus, $\fa$ is filtration-rigid.
In non-type III cases where the upper bound is realized, $\fg_- \subset \fs(u)$, so local homogeneity follows. In the type III case, $[\fa_0,\fg_{-1}] = \fg_{-1}$, and Remark \ref{RM:transitive} gives the result.
\end{proof}
\subsubsection{General Lorentzian and Riemannian}
For the Lorentzian case, we mimic the construction from Section \ref{S:nR-nL}. Consider $\metric = \ppmetric{3,1} + \eucmetric{p,0}$, where $\ppmetric{3,1}$ and its (seven) conformal symmetries were given in Section \ref{S:4d-Lor}, and $\eucmetric{p,0} = \sum_{i=1}^p (du^i)^2$ is the flat metric with Killing fields $\bU_i = \p_{u^i}$ and $\bV_{ij} = u^i \p_{u^j} - u^j \p_{u^i}$. The metric $\metric$ admits the Killing fields $\bX_k$, $\bU_i$, $\bV_{ij}$, as well as $\bY_i = u^i \p_z - z \p_{u^i}$, and $\bW_i = 2 u^i \p_x - w\p_{u^i}$. In addition, there is the homothety $\wt\bT = \bT + \sum_{i=1}^{p} u^i \p_{u^i}$, and so $\dim(\cS) = \binom{n-1}{2} + 4$. Since the (complex) bound of $\binom{n-1}{2} + 6$ is not realizable, it remains to show that $\binom{n-1}{2} + 5$ is not realizable either. This is done in \cite{DT-Weyl}, from which it follows that $\fS = \binom{n-1}{2} + 4$.
Given a Riemannian metric $\metric$ with nowhere vanishing Weyl tensor, there exists a conformally equivalent metric $\tilde\metric = \lambda^2 \metric$ such that all conformal Killing fields of $\metric$ are Killing fields for $\tilde\metric$ \cite{Nag1958}. Since the symmetry algebra is determined by restriction to any neighborhood, $\fS$ in the conformal Riemannian case is equal to the maximal dimension of the isometry algebra among metrics which are not conformally flat. According to Egorov's final result in \cite{Ego1962} (incorporating results in his earlier article \cite{Ego1956}), this number is exactly $\binom{n-1}{2} + 3$. This is realized by the product of spheres $\bbS^2 \times \bbS^{n-2}$ with their round metrics, which is not conformally flat for $n \geq 4$ and has $\text{SO}_3 \times \text{SO}_{n-1}$ symmetry. However, we caution that there are omitted exceptions to Egorov's statement when $n=4$ and $n=6$.\footnote{For $n=4$, the submaximally symmetric structure is {\em unique}: $\bbC\bbP^2$, cf. Egorov \cite{Ego1955}.} In particular, with its Fubini--Study metric, $\bbC\bbP^m$ is not conformally flat, has real dimension $n=2m$, and has $\text{SU}(m+1)$ symmetry group of dimension $m^2 + 2m$. This is strictly less than $\binom{n-1}{2} + 3$ for $n > 8$, equal to it when $n=8$, and greater than it when $n=4$ or $6$.
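For convenience, the comparison in the last two sentences amounts to the elementary identity (our remark; $n = 2m$):
\[
\dim \text{SU}(m+1) - \l( \binom{n-1}{2} + 3 \r) = m^2 + 2m - (2m^2 - 3m + 4) = -(m-1)(m-4),
\]
which is negative for $m > 4$ (i.e.\ $n > 8$), zero for $m = 4$ ($n = 8$), and positive for $m = 2,3$ ($n = 4,6$).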
For an algebraic proof of $\fS$ in the conformal Lorentzian and Riemannian cases, we refer the reader to the article of B. Doubrov and D. The \cite{DT-Weyl}.
\subsection{Scalar 3rd order ODE}
Let $J^k = J^k(\bbR,\bbR)$. Consider $J^3$ with its contact system $\cC^3$. In canonical local coordinates $(x,y,p,q,r)$, $\cC^3$ is locally spanned by the 1-forms $\{ dy - pdx, dp-qdx, dq-rdx \}$. A 3rd order ODE $y''' = F(x,y,y',y'')$ determines a hypersurface $M^4 = \{ r = F(x,y,p,q) \} \subset J^3$ transverse to the projection $\pi^3_2 : J^3 \to J^2$. Pulling $\cC^3$ back to $M$, we have two line fields $(L,\Pi)$ on $M$: (i) $L$ is spanned by $\frac{d}{dx} := \p_x + p\p_y + q\p_p + F\p_q$, and integral curves correspond to solutions of $y''' = F(x,y,y',y'')$; and (ii) $\Pi$ is spanned by $\p_q$. This is the fibre of $J^2 \to J^1$ (and hence $M \to J^1$), which is distinguished by B\"acklund's theorem. The data $(L,\Pi)$ encodes the geometry of scalar 3rd order ODE modulo contact transformations.
This is the underlying structure for a $G/P = C_2 / P_{1,2}$ geometry, where $C_2 = \text{Sp}(4,\bbR)$. There are two well-known relative invariants due to Chern \cite{Chern1939} and Sato--Yoshikawa \cite{SY1998}:
\begin{align*}
I_1 = -F_y - \frac{1}{3} F_p F_q - \frac{2}{27} F_q{}^3 + \frac{1}{2} \frac{d}{dx} (F_p) + \frac{1}{3} F_q \frac{d}{dx}(F_q) - \frac{1}{6} \l( \frac{d}{dx}\r)^2 (F_q), \qquad I_2 = F_{qqqq}.
\end{align*}
These respectively define the {\em conformal} branch ($I_1 = 0$) and {\em contact projective} branch ($I_2=0$). For more details, see also \cite{GN2009}. By Kostant, $H^2_+(\fg_-,\fg)$ has two 1-dimensional components of homogeneity $+3$ and $+4$, and $I_1$ and $I_2$ correspond to $(\Kh)_{+3}$ and $(\Kh)_{+4}$ respectively. We have:
\[
\begin{array}{ccccccc}
w & \bbV_{-w\cdot\lambda_\fg} & (Z_1,Z_2) & \phi_0 & \dim(\fa(w)) & \fS_w & \mbox{Twistor space type}\\ \hline
(12) & \Ctwo{xx}{-6,3} & (+3,0) & e_{\alpha_1} \wedge e_{2\alpha_1 + \alpha_2} \ot e_{-\alpha_2} & 5 & 5 & C_2 / P_1\\
(21) & \Ctwo{xx}{4,-5} & (+1,+3) & e_{\alpha_2} \wedge e_{\alpha_1 + \alpha_2} \ot e_{\alpha_2} & 5 & 4 & C_2 / P_2
\end{array}
\]
The geometry is \PRp. If $\phi \in H^2_+(\fg_-,\fg)$ has nonzero components in both submodules, then $\fa_0^\phi = 0$, and $\dim(\fa^\phi) = \dim(\fg_-) = 4$.
The $w = (21)$ branch is exceptional (3-dimensional conformal -- see Section \ref{S:3d-conformal}), and $\fS_w = \fU_w - 1 = \dim(\fa(w)) - 1 = 4$ here. For $w = (12)$, $\fS_w = \fU_w = \dim(\fa(w)) = 5$. Indeed, for any $a \in \bbR$, $y''' = ay' + y$ has five point (contact) symmetries:
\begin{align*}
\bX_1 = \p_x, \qquad \bX_2 = y\p_y, \qquad \bX_i = \eta_i(x) \p_y, \quad i=3,4,5,
\end{align*}
where the $\eta_i(x)$ are three linearly independent solutions of $y''' = ay' + y$. This proves:
\begin{thm}
For scalar 3rd order ODE modulo contact transformations, $\fS = 5$. Any submaximally symmetric model is in the contact projective branch ($I_2=0$), and is locally homogeneous near a non-flat regular point.
\end{thm}
\begin{remark}
The result $\fS = 5$ for 3rd order ODE is not explicitly stated in \cite{WMQ2002}, but can be readily deduced as follows. Sophus Lie classified all finite-dimensional irreducible Lie algebras of {\em contact} vector fields. There are only three, $L_6, L_7, L_{10}$, with $\dim(L_i) = i$, where $L_6 \subset L_7$ and $L_6 \subset L_{10}$. The only 3rd order ODE invariant under $L_6$ (and hence under $L_7$ and $L_{10}$) is the trivial equation $y''' = 0$. In \cite{WMQ2002}, a classification of all {\em point} symmetry algebras for 3rd order ODE and representative equations is given. Aside from $y''' = 0$ and $y''' = \frac{3 (y'')^2}{2 y'}$ (which are {\em contact}-equivalent), having point symmetry algebras of dimensions 7 and 6 respectively, all others have dimension at most 5.
\end{remark}
\subsection{Systems of 2nd order ODE}
\label{S:2-ODE}
Let $A_\rkg = \text{SL}(\rkg+1,\bbR)$ and $\rkg \geq 2$.
If $\rkg \neq 3$, the underlying structure for an $A_\rkg / P_{1,2}$ geometry is a system of 2nd order ODE in $m = \rkg - 1$ dependent variables,
\begin{align} \label{E:2-ODE}
\ddot{x}^i = f^i(t,x^j,\dot{x}^j), \qquad 1 \leq i \leq m,
\end{align}
with equivalence up to {\em point transformations}. (This is the same for $\rkg = 3$ provided that $(\Kh)_{+1} \equiv 0$.) This is {\em path geometry}. The $m=1$ case was discussed in Section \ref{S:scalar-2-ODE}, so here we discuss the $m \geq 2$ case, which was studied by Fels \cite{Fels1995} and Grossman \cite{Gro2000}. Let $J^k = J^k(\bbR,\bbR^m)$ with its contact system $\cC^k$. Geometrically, \eqref{E:2-ODE} defines a submanifold $M \subset J^2$ transverse to $\pi^2_1 : J^2 \to J^1$, and point transformations refer to diffeomorphisms of $J^2$ which preserve $\cC^2$ and which are prolongations of arbitrary diffeomorphisms of $J^0$. On $J^2$, in canonical local coordinates $(t,x^j,p^j,q^j)$, $\cC^2$ is locally spanned by the 1-forms $\{ dx^j - p^j dt, \, dp^j - q^j dt \}$, and $M = \{ q^i = f^i(t,x^j,p^j) \}$ fibers over $J^0$. This is an ``external'' description of the geometry. Pulling back $\cC^2$ to $M$ yields an ``internal'' description: a manifold of dimension $n = 2m+1$, with coordinates $(t,x^i,p^i)$, and a distribution containing:
\begin{enumerate}
\item a distinguished line field $L$ spanned by $\frac{d}{dt} := \p_t + p^i \p_{x^i} + f^i \p_{p^i}$ whose integral curves correspond to solutions of \eqref{E:2-ODE}, and
\item an integrable $m$-dimensional distribution $\Pi$ spanned by $\{ \p_{p^i} \}_{i=1}^m$. This is the fibre of the submersion $M \to J^0$, which is preserved by all point transformations.
\end{enumerate}
The flat model is $\ddot{x}^i = 0$, $1 \leq i \leq m$, whose symmetry algebra has dimension $\dim(A_\rkg) = \rkg^2 + 2\rkg$.
In \cite{Fels1995}, Fels studied the geometry of systems of 2nd order ODE using Cartan's method of equivalence and derived the following invariants. Given \eqref{E:2-ODE}, define
\begin{align*}
G^i_{jkl} = \frac{\p^3 f^i}{\p p^j \p p^k \p p^l}, \qquad F^i_j = \half \frac{d}{dt} \l( \frac{\p f^i}{\p p^j} \r) - \frac{\p f^i}{\p x^j} - \frac{1}{4} \frac{\p f^i}{\p p^k} \frac{\p f^k}{\p p^j}.
\end{align*}
The {\em Fels invariants}, namely the Fels curvature and torsion, are respectively:
\begin{align} \label{E:Fels-inv}
S^i_{jkl} = G^i_{jkl} - \frac{3}{m+2} G^r_{r(jk} \delta^i_{l)}, \qquad T^i_j = F^i_j - \frac{1}{m} \delta^i_j F^k_k.
\end{align}
The Tanaka prolongation of the symbol algebra $\fg_-$ and structure group $G_0$ is, by Yamaguchi, $\fg \cong \fsl(\rkg+1,\bbR)$ with 2-grading given by
\begin{align*}
\fg_0 = \bbR^2 \op \fsl(\Pi_{-1}), \qquad \fg_{\pm 1} = L_{\pm 1} \op \Pi_{\pm 1}, \qquad \fg_{\pm 2} = L_{\pm 1} \ot \Pi_{\pm 1}.
\end{align*}
For $\rkg \geq 4$, $W^\fp_+(2) = \{ (21), (12) \}$, which generate homogeneity $+3$ and $+2$ modules respectively:
\begin{align} \label{E:H2-ODE}
H^2_+(\fg_-,\fg) &= \pathwts{qxswws}{0,-4,3,0,0,1} \op \pathwts{xxswws}{-4,1,1,0,0,1}\\
&= L_1 \ot \l(\bigodot{}^3 \Pi_1 \ot \Pi_{-1}\r)_0 \op L_1^{\ot 2} \ot\l( \Pi_1 \ot \Pi_{-1}\r)_0, \nonumber
\end{align}
where $(\Pi_{-1})^* \cong \Pi_1$ via the Killing form, so the ``$0$'' subscripts indicate trace-free with respect to contractions.
For the tensor descriptions above, we used
\begin{align*}
L_{-1} &= \pathwts{xxwwww}{2,-1,0,0,0,0}, \qquad
L_1 = \pathwts{xxwwww}{-2,1,0,0,0,0}, \\
\Pi_{-1} &= \pathwts{xxwwww}{-1,1,0,0,0,1}, \qquad
\Pi_{1} = \pathwts{xxwwww}{1,-2,1,0,0,0}. \nonumber
\end{align*}
(E.g.\ the lowest roots of $\Pi_1$ and $\Pi_{-1}$ are $\alpha_2$ and $-\alpha_2 - ... - \alpha_\rkg$, so by the ``minus lowest weight convention'', we convert their negatives into weight notation using the Cartan matrix -- see \eqref{E:lambda}.)
For $\rkg = 3$, there is additionally the 1-dimensional module $\Athree{xxw}{4,-4,0} = \bigwedge^2 \Pi_1 \ot L_{-1}$, which has homogeneity $+1$ and corresponds to the word $w = (23) \in W^\fp_+(2)$. For ODE, we must have $(\Kh)_{+1} \equiv 0$. We discuss $A_3 / P_{1,2}$ geometries with $(\Kh)_{+1} \neq 0$ in Section \ref{S:A3P12-dist}.
The tensors $S$ and $T$ correspond to $(\Kh)_{+3}$ and $(\Kh)_{+2}$ respectively. By the discussion on correspondence and twistor spaces (Section \ref{S:corr-twistor}), we have
\begin{itemize}
\item $S \equiv 0$ iff $(\Kh)_{+3} \equiv 0$ iff the geometry admits an $A_\rkg / P_1$ description, i.e.\ \eqref{E:2-ODE} are the equations for (unparametrized) geodesics of a projective connection, cf. Section \ref{S:projective}.
\item $T \equiv 0$ iff $(\Kh)_{+2} \equiv 0$ iff the geometry admits an $A_\rkg / P_2 \cong \tGr(2,\bbR^{\rkg+1})$ description. This is a 1-graded geometry with underlying structure a manifold $N^{2m}$, endowed with a field $\cS \subset \bbP(TN)$ of type $(2,m)$-Segr\'e varieties. This is a {\em Segr\'e} (or {\em almost Grassmannian}) {\em structure}.\footnote{When $m=2$, this is equivalent to a signature $(2,2)$ conformal structure. Note $A_3 / P_2 \cong D_3 / P_1$.}
\end{itemize}
In \cite{Gro2000}, Grossman studies \eqref{E:2-ODE} satisfying $T \equiv 0$, which he calls {\em torsion-free} path geometries. If such a geometry is moreover ``geodesic'' (i.e.\ $S\equiv 0$), then $\Kh \equiv 0$, and so the geometry is locally flat, cf. \cite[Theorem 1]{Gro2000}. With these preparations, we state our new result:
\begin{thm} For systems of 2nd order ODE in $m$ dependent variables,
$\fS = \l\{ \begin{array}{cl} m^2 + 5, & m \geq 2;\\ 3, & m=1. \end{array} \r.$
When $m\geq 2$, any submaximally symmetric model satisfies $T \equiv 0$ (so is not geodesic).
\end{thm}
\begin{proof} Let $m = \rkg - 1 \geq 2$. Look at $A_\rkg / P_{1,2}$. (For $m=2$, we require $(\Kh)_{+1} \equiv 0$.) For $w = (12)$, $\fS_w = m^2+4$; see Section \ref{S:projective}. For $w = (21)$, we have by \eqref{E:H2-ODE}, $J_w = \{ 3, \rkg \}$ and $I_w = \{ 1 \}$, so the geometry is N\PRp. By Recipe \ref{R:red-geom}, the reduced geometry is $\bar\fg / \bar\fp \cong A_1 / P_1$, so $\dim(\fa_1) = 1$ and $\dim(\fa_2) = 0$. From $J_w = \{ 3, \rkg \}$ and Recipe \ref{R:a0}, $\fp_w^{\opn} \cong \fp_{1,\rkg-2} \subset A_{\rkg-2}$, and $\dim(\fa_0) = 1 + \dim(\fp_w^{\opn}) = m^2 - 2m + 3$, by Recipe \ref{R:dim}. Hence, $\fS_w = \fU_w = \dim(\fa(w)) = \dim(\fg_-) + \dim(\fa_0) + \dim(\fa_1) = 2m + 1 + (m^2 - 2m + 3) + 1 = m^2 + 5$.
\end{proof}
For any $m \geq 2$, the following is a submaximally symmetric model with its (point) symmetries:
\begin{align} \label{E:2-ODE-sm}
\begin{array}{c|c}
\l\{ \begin{array}{l@{\,}c@{\,\,}ll}
\ddot{x}^1 &=& 0, \\
&\vdots\\
\ddot{x}^{m-1} &=& 0, \\
\ddot{x}^m &=& (\dot{x}^1)^3.
\end{array} \r. &
\begin{array}{lll}
\bT = \p_t, \quad \bX_i = \p_{x^i}, \quad \bA_j = t\p_{x^j}, \quad
\bB_j^k = x^k \p_{x^j}\\
\qquad (1 \leq i \leq m, \quad 2 \leq j \leq m, \quad 1 \leq k < m)\\
\bC = 2 t\p_{x^1} + 3 (x^1)^2\p_{x^m}, \quad
\bD_1 = x^1 \p_{x^1} + 3 x^m\p_{x^m}, \\
\bD_2 = t \p_t - x^m \p_{x^m}, \quad
\bS = t^2 \p_t + t \sum_{i=1}^m x^i\p_{x^i}+\half (x^1)^3 \p_{x^m}
\end{array}\\
\end{array}
\end{align}
The symmetries are stated as vector fields $\bV = \xi \p_t + \varphi^i \p_{x^i}$ on $J^0$, but these admit unique prolongations $\bV^{(1)} = \bV + \varphi^i_{(1)} \p_{p^i}$ on $J^1$ and $\bV^{(2)} = \bV^{(1)} + \varphi^i_{(2)} \p_{q^i}$ on $J^2$ via the formula
\begin{align} \label{E:prolongation}
\varphi^i_{(1)} = \frac{d\varphi^i}{dt} - p^i \frac{d\xi}{dt}, \qquad
\varphi^i_{(2)} = \frac{d\varphi^i_{(1)}}{dt} - q^i \frac{d\xi}{dt}.
\end{align}
Externally, $\bV^{(2)}$ are everywhere tangent to $M \subset J^2$. Internally, $\bV^{(1)}$ are identified with vector fields on $M$ which preserve the geometric data described earlier.
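As an illustration of \eqref{E:prolongation}, one can verify mechanically that $\bS$ from \eqref{E:2-ODE-sm} is a symmetry in the $m=2$ case: the symmetry conditions are $\varphi^1_{(2)} = 0$ and $\varphi^2_{(2)} = 3(p^1)^2 \varphi^1_{(1)}$ on the equation manifold. A SymPy sketch (ours, illustrative):

```python
import sympy as sp

t, x1, x2, p1, p2, q1, q2 = sp.symbols('t x1 x2 p1 p2 q1 q2')

def Dt(expr):
    # total derivative on J^2 for one independent, two dependent variables
    return (sp.diff(expr, t) + p1*sp.diff(expr, x1) + p2*sp.diff(expr, x2)
            + q1*sp.diff(expr, p1) + q2*sp.diff(expr, p2))

def prolong(xi, ph1, ph2):
    # prolongation formula (E:prolongation)
    ph1_1 = Dt(ph1) - p1*Dt(xi)
    ph2_1 = Dt(ph2) - p2*Dt(xi)
    ph1_2 = Dt(ph1_1) - q1*Dt(xi)
    ph2_2 = Dt(ph2_1) - q2*Dt(xi)
    return ph1_1, ph2_1, ph1_2, ph2_2

# S = t^2 d_t + t x^1 d_{x^1} + (t x^2 + (x^1)^3/2) d_{x^2}   (m = 2)
xi, ph1, ph2 = t**2, t*x1, t*x2 + x1**3/2
f1_1, f2_1, f1_2, f2_2 = prolong(xi, ph1, ph2)

# restrict to the equation manifold: xddot^1 = 0, xddot^2 = (xdot^1)^3
on_eq = {q1: 0, q2: p1**3}
assert sp.simplify(f1_2.subs(on_eq)) == 0
assert sp.simplify((f2_2 - 3*p1**2*f1_1).subs(on_eq)) == 0
```

The other generators in \eqref{E:2-ODE-sm} can be checked the same way.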
\begin{remark}
The model \eqref{E:2-ODE-sm} was found in the $m=2$ case by Casey et al. \cite[(3.5)]{CDT2012}. For the projective branch $(S \equiv 0)$, they used Egorov's result \cite{Ego1951} (see also our Section \ref{S:projective} below) to assert 8 as the submaximal symmetry dimension. For the conformal (Segr\'e) branch $(T\equiv 0)$, their model ``{\em corresponds to a Ricci-flat ASD conformal structure with only one constant non-vanishing component of the ASD Weyl tensor}''. They conclude submaximality in this branch and hence for pairs of ODE. While their conclusion is correct, we note that:
\begin{enumerate}
\item The position of the single non-vanishing component is relevant in assessing submaximal symmetry: it must correspond to a lowest (or highest) weight vector. However, this argument fails in Riemannian and Lorentzian signatures: lowest weight vectors do not exist, and the most symmetric (degenerate) Weyl tensors have many non-vanishing components \cite{DT-Weyl}.
\item The case where both $S \neq 0$ and $T \neq 0$ is not considered. While one expects any submaximal model to satisfy either $S \equiv 0$ or $T \equiv 0$, this needs to be proven. (See our Theorem \ref{T:main-thm2}.)
\end{enumerate}
\end{remark}
Since $T \equiv 0$, let us exhibit the $(2,m)$-Segr\'e structure. The twistor space $N$ is the quotient of $M$ by the integral curves of $\frac{d}{dt}$, i.e.\ it is the solution space of the ODE. The solution of \eqref{E:2-ODE-sm} is
\begin{align} \label{E:2-ODE-soln}
\begin{array}{lllll}
& x^1 = a_1 t + b_1, & ...\,\,, &
x^{m-1} = a_{m-1} t + b_{m-1}, &
x^m = \half (a_1)^3 t^2 + a_m t + b_m\\
\Rightarrow & p^1 = a_1, & ...\,\,, &
p^{m-1} = a_{m-1}, &
p^m = (a_1)^3 t + a_m,
\end{array}
\end{align}
where $z = (a_1,...,a_m, b_1,..., b_m)$ are parameters (local coordinates on $N$). The map $\Psi : M \to N$ is
\begin{align*}
\l\{\begin{array}{lllll}
a_1 = p^1, & ...\,\,, & a_{m-1} = p^{m-1}, & a_m = p^m - (p^1)^3 t,\\
b_1 = x^1 - p^1 t, & ...\,\,, & b_{m-1} = x^{m-1} - p^{m-1} t, & b_m = x^m + \half (p^1)^3 t^2 - p^m t.
\end{array} \r.
\end{align*}
Fixing $z \in N$, $\Psi^{-1}(z)$ is a solution curve $c$ in $M$ given by \eqref{E:2-ODE-soln}, which induces a curve $\zeta(t) = \Psi_*(\Pi|_{c(t)})$ in $\tGr(m,T_z N)$. Indeed, $\zeta(t)$ is the span of $\bZ_i(t) = \Psi_*(\p_{p^i}|_{c(t)}) \in T_z N$, where
\begin{align} \label{E:Push-Pi}
\begin{array}{ll}
\bZ_1(t) &= \p_{a_1} - 3 (a_1)^2 t \p_{a_m} - t \p_{b_1} + \frac{3}{2} (a_1)^2 t^2 \p_{b_m}, \\
\bZ_2(t) &= \p_{a_2} - t \p_{b_2}, \quad
...\,\,,\quad
\bZ_m(t) = \p_{a_m} - t \p_{b_m}.
\end{array}
\end{align}
Define a $2 \times m$ matrix of 1-forms $\Theta = \{ \Theta^A_i \}$ on $N$ given by
\begin{align} \label{E:Segre-sm}
\Theta^1_i = \l\{\begin{array}{ll} da_i, & i \neq m;\\ da_m - \frac{3}{2} (a_1)^2 db_1, & i = m \end{array} \r., \qquad \Theta^2_i = db_i, \qquad 1 \leq i \leq m.
\end{align}
The $2 \times 2$ minors of $\Theta$ give the Segr\'e equations cutting out a Segr\'e variety $\cS|_z \subset \bbP(T_z N)$,
\begin{align} \label{E:Segre-eqns}
\Theta^A_i \Theta^B_j - \Theta^A_j \Theta^B_i = 0, \quad 1 \leq i, j \leq m, \quad 1 \leq A,B \leq 2.
\end{align}
These quadratic equations characterize the image of the Segr\'e embedding $\sigma : \bbP^1 \times \bbP^{m-1} \hookrightarrow \bbP^{2m-1} \cong \bbP(T_z N)$, $[u] \times [v] \mapsto [u\ot v]$. The curve $\zeta(t)$ given by \eqref{E:Push-Pi} satisfies \eqref{E:Segre-eqns} and so $\zeta(t)$ is contained in and moreover fills out $\cS|_z$ as $t$ varies (regarding $t$ as a projective parameter).
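A direct check makes this explicit. Evaluating the coframe \eqref{E:Segre-sm} on the vectors \eqref{E:Push-Pi}, for $j \geq 2$ we get $\Theta^1_i(\bZ_j) = \delta_{ij}$ and $\Theta^2_i(\bZ_j) = -t\,\delta_{ij}$, while
\begin{align*}
\Theta^1_i(\bZ_1) = \l\{\begin{array}{cl} 1, & i = 1;\\ -\frac{3}{2}(a_1)^2 t, & i = m;\\ 0, & \mbox{otherwise}, \end{array}\r. \qquad
\Theta^2_i(\bZ_1) = -t\,\Theta^1_i(\bZ_1).
\end{align*}
Hence $\Theta^2_i(\bZ_j) = -t\,\Theta^1_i(\bZ_j)$ for all $i,j$, so every $2 \times 2$ minor in \eqref{E:Segre-eqns} vanishes on $\zeta(t)$, and $\zeta(t)$ is the $m$-ruling plane corresponding to $[u] = [1:-t] \in \bbP^1$, which sweeps out all of $\bbP^1$ as $t$ varies.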
Symmetries of the Segr\'e structure corresponding to \eqref{E:Segre-sm} are easily calculated: given any symmetry $\bV$ of the ODE \eqref{E:2-ODE-sm}, compute $\widetilde\bV = \Psi_*(\bV^{(1)})$. This yields
\begin{align*}
&\widetilde\bT = -(a_1)^3 \p_{a_m} - \sum_{i=1}^m a_i \p_{b_i}, \qquad \widetilde\bX_i = \p_{b_i}, \qquad
\widetilde\bA_j = \p_{a_j}, \qquad \widetilde\bB_j^k = a_k \p_{a_j} + b_k \p_{b_j}, \\
& \qquad\qquad\qquad (1 \leq i \leq m, \quad 2 \leq j \leq m, \quad 1 \leq k < m) \\
&\widetilde\bC = 2\p_{a_1} + 6 a_1 b_1 \p_{a_m} + 3(b_1)^2 \p_{b_m}, \qquad
\widetilde\bD_1 = a_1 \p_{a_1} + 3 a_m \p_{a_m} + b_1 \p_{b_1} + 3 b_m \p_{b_m},\\
&\widetilde\bD_2 = -\sum_{k=1}^{m-1} a_k \p_{a_k} - 2a_m \p_{a_m} - b_m \p_{b_m},\qquad
\widetilde\bS = \sum_{i=1}^m b_i \p_{a_i} + \frac{3}{2} a_1 (b_1)^2 \p_{a_m} + \half (b_1)^3 \p_{b_m}.
\end{align*}
Direct calculation verifies that the ideal generated by the left hand side of \eqref{E:Segre-eqns} is preserved under $\cL_{\widetilde\bV}$, so these are indeed symmetries of the Segr\'e structure.
When $m=2$, \eqref{E:Segre-eqns} is a single quadratic equation $\metric := \Theta^1_1 \Theta^2_2 - \Theta^1_2 \Theta^2_1 = 0$, so a Segr\'e structure is the same (up to a sign) as a split signature conformal structure $[\metric]$ in dimension four (and $\Psi_*(\Pi)$ are the null planes). Changing coordinates, $\metric$ is simply the pp-wave given in \eqref{E:pp-2,2}.
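Explicitly, with $m = 2$, \eqref{E:Segre-sm} gives
\begin{align*}
\metric = \Theta^1_1 \Theta^2_2 - \Theta^1_2 \Theta^2_1
= da_1\, db_2 - da_2\, db_1 + \frac{3}{2} (a_1)^2\, (db_1)^2,
\end{align*}
with symmetric products of 1-forms understood.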
Any Segr\'e variety contains two distinguished {\em rulings}, i.e.\ maximal (projective) linear spaces. With respect to the Segr\'e embedding $\sigma$, writing $\sigma_u(v) = \sigma(u,v) = \sigma^v(u)$, we have for \eqref{E:Segre-sm}:
\begin{itemize}
\item $m$-ruling: $\sigma_u : \bbP^{m-1} \to \bbP^{2m-1}$ for $[u] \in \bbP^1$. Each $m$-plane in $\cR|_z \subset \tGr(m,T_z N)$ has basis
\begin{align*}
u_1\p_{a_1} + u_2\l( \p_{b_1} + \frac{3}{2} (a_1)^2 \p_{a_m}\r), \quad
u_1\p_{a_2} + u_2\p_{b_2}, \quad ...\,\,, \quad u_1\p_{a_m} + u_2\p_{b_m} \quad \in T_z N.
\end{align*}
\item $2$-ruling: $\sigma^v : \bbP^1 \to \bbP^{2m-1}$ for $[v] \in \bbP^{m-1}$. Each $2$-plane in $\cP|_z \subset \tGr(2,T_z N)$ has basis
\begin{align*}
v_1 \p_{a_1} + ... + v_m \p_{a_m}, \quad v_1 \l( \frac{3}{2} (a_1)^2 \p_{a_m} + \p_{b_1}\r) + v_2 \p_{b_2} + ... + v_m \p_{b_m} \quad \in T_z N.
\end{align*}
\end{itemize}
This gives submanifolds $\cR \subset \tGr(m,TN)$ and $\cP \subset \tGr(2,TN)$. From \eqref{E:Push-Pi}, we see that $\Psi_*(\Pi) \subset \cR$. The image under $\Psi$ of each fibre of $M \to J^0$ is an $m$-dimensional submanifold $\Sigma \subset N$, with $T_z \Sigma \in \cR|_z \subset T_z N$ for any $z \in \Sigma$. Conversely, every element of $\cR$ is tangent to such a submanifold, so $\cR$ is {\em integrable}. In fact, Grossman \cite{Gro2000} shows that integrability of the $m$-ruling $\cR$ is a general feature of Segr\'e structures arising from 2nd order ODE systems by establishing an isomorphism of the bundles $M \to N$ and the $\bbP^1$-bundle $P_\Delta \to N$ whose fibres parametrize the $m$-ruling. The $2$-ruling $\cP$ is never integrable for such structures, except for the flat model.
For both ODE and the associated Segr\'e structures, the $T \equiv 0$ branch has $\Kh$ concentrated in the respective modules corresponding to $w = (21)$. For Segr\'e structures, there is an additional $(23)$-branch. For structures in this branch, $\cR$ is non-integrable while $\cP$ is integrable.
\begin{thm} Let $m \geq 2$. For $(2,m)$-Segr\'e structures,
\begin{align*}
\fS = \fS_{(21)} = m^2 + 5, \qquad \fS_{(23)} = \l\{ \begin{array}{cl} m^2 - m + 8, & m \geq 3;\\ 9, & m=2. \end{array} \r.
\end{align*}
\end{thm}
\begin{proof} Consider $A_\rkg / P_2$ with $\rkg = m+1$. We have established $\fS_{(21)} = m^2 + 5$. If $m=2$, then $A_3 / P_2 \cong D_3 / P_1$, which we have already considered. Letting $m \geq 3$ and $w = (23)$, $J_w = \{ 1, 4, \rkg \}$ and $I_w = \emptyset$, so PR. By Recipes \ref{R:a0} and \ref{R:dim}, $\fp_w^{\opn} \cong \fp_1 \times \fp_{2, \rkg-2} \subset A_1 \times A_{\rkg-2}$, and $\dim(\fa_0) = \dim(\fp_w^{\opn}) = m^2-3m + 8$. Hence, $\fS_w = \fU_w = \dim(\fa(w)) = \dim(\fg_-) + \dim(\fa_0) = m^2 - m + 8$.
\end{proof}
Thus, the Segr\'e structure \eqref{E:Segre-sm} in the $(21)$-branch is indeed submaximally symmetric. When $m=2$ or $3$ there is also a submaximally symmetric model in the $(23)$-branch.
\subsection{Projective structures}
\label{S:projective}
Two torsion-free affine connections $\nabla, \nabla'$ are equivalent if their {\em unparametrized} geodesics are the same, and a {\em projective connection} is such an equivalence class $[\nabla]$. These are the underlying structures for a (regular, normal) $A_\rkg / P_1$ geometry, or equivalently (see Section \ref{S:2-ODE}) an $A_\rkg / P_{1,2}$ geometry with vanishing Fels curvature ($S\equiv 0$).
Locally, take coordinates $(x^a)_{a=0}^{\rkg-1}$ on an $\rkg$-manifold $N$ and define $\nabla$ via its Christoffel symbols
\begin{align*}
\nabla_{\p_{x^a}} \p_{x^b} = \Gamma^c_{ab} \p_{x^c}, \qquad \Gamma^c_{ab} = \Gamma^c_{(ab)}, \qquad 0 \leq a,b,c \leq \rkg-1 =: m.
\end{align*}
The geodesic equation $\nabla_{\sigma'} \sigma' = 0$ for a curve $\sigma = \sigma(t)$ takes the local form $(x^a)'' + \Gamma^a_{bc} (x^b)' (x^c)' = 0$. Locally, this always has solutions, so relabelling coordinates if necessary we may assume $(x^0)' \neq 0$, and we can solve for $t = t(x^0)$. We eliminate dependence on the parameter $t$ by regarding $x^i$, $i > 0$, as functions of $x^0$. By the chain rule, the geodesic equations become
\begin{align}
\ddot{x}^i &= \Gamma^0_{ab} \dot{x}^i \dot{x}^a \dot{x}^b - \Gamma^i_{ab} \dot{x}^a \dot{x}^b, \label{E:proj-ODE}\\
&= \Gamma^0_{jk} \dot{x}^i \dot{x}^j \dot{x}^k + 2\Gamma^0_{0j} \dot{x}^i \dot{x}^j -\Gamma^i_{jk} \dot{x}^j \dot{x}^k + \Gamma^0_{00} \dot{x}^i - 2\Gamma^i_{0j} \dot{x}^j -\Gamma^i_{00},\qquad 1 \leq i,j,k \leq m, \nonumber
\end{align}
and we consider this system up to point transformations. Internally, the ODE structure is defined on a $(2m+1)$-manifold $M$ with local coordinates $(t,x^j, p^j)$. Projective equivalence $\nabla \mapsto \nabla'$ is equivalently given operationally by
$\Gamma^a_{bc}\mapsto\Gamma^a_{bc}+\delta^a_b\Upsilon_c+\delta^a_c\Upsilon_b$, where $\Upsilon$ is an arbitrary 1-form. As expected, \eqref{E:proj-ODE} is invariant under these changes.
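Indeed, under $\Gamma^a_{bc} \mapsto \Gamma^a_{bc} + \delta^a_b \Upsilon_c + \delta^a_c \Upsilon_b$ the two terms on the right-hand side of \eqref{E:proj-ODE} change by
\begin{align*}
\l(\delta^0_a \Upsilon_b + \delta^0_b \Upsilon_a\r) \dot{x}^a \dot{x}^b\, \dot{x}^i = 2 \Upsilon_a \dot{x}^a\, \dot{x}^i
\qbox{and}
\l(\delta^i_a \Upsilon_b + \delta^i_b \Upsilon_a\r) \dot{x}^a \dot{x}^b = 2 \Upsilon_a \dot{x}^a\, \dot{x}^i
\end{align*}
respectively (using $\dot{x}^0 = 1$), and these cancel in the difference.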
Egorov \cite{Ego1951} studied the gap problem for projective structures by analyzing the integrability conditions for symmetries. Let us reprove his results using our Lie algebraic methods. Observe that the model $A_\rkg / P_1$ gives a 1-grading $\fg = \fg_{-1} \op \fg_0 \op \fg_1$, where $\dim(\fg_{-1}) = \rkg$, and $\fg_0 \cong \bbC \times A_m$. By Kostant, we have $W^\fp_+(2) = \{ (12) \}$, and
\begin{align*}
H^2_+(\fg_-,\fg) = \projwts{xsswws}{-4,1,1,0,0,1} = \l( \bigwedge{\!}^2 \fg_1 \ot \fsl(\fg_{-1}) \r)_0, \qquad \rkg \geq 3.
\end{align*}
\begin{thm} Let $\rkg \geq 2$. For projective structures on $\rkg$-manifolds,
$\fS = \l\{ \begin{array}{cl} (\rkg-1)^2 + 4, & \rkg \geq 3; \\ 3, & \rkg=2. \end{array} \r.$
\end{thm}
\begin{proof} The $\rkg=2$ case is exceptional (see Section \ref{S:exceptions}), so let $\rkg \geq 3$.
For $w = (12)$, $J_w = \{ 2, 3, \rkg \}$, $I_w = \emptyset$, and $\fa(w) = \fg_- \op \fa_0$. Then $\fp_w^{\opn} \cong \fp_{1,2,\rkg-1} \subset A_{\rkg-1}$, and $\dim(\fa_0) = \dim(\fp_w^{\opn}) = \rkg^2 - 3\rkg + 5$. Then $\fS = \fU_w = \dim(\fa(w)) = \dim(\fg_-) + \dim(\fa_0) = \rkg^2 - 2\rkg + 5 = (\rkg-1)^2 + 4$.
\end{proof}
Egorov \cite{Ego1951} also gave the following submaximally symmetric model for $m = \rkg - 1 \geq 2$:
\begin{align} \label{E:Egorov-model}
\Gamma^0_{12} = \Gamma^0_{21} = x^1, \qquad \Gamma^c_{ab} = 0 \mbox{ otherwise}.
\end{align}
The Weyl curvature of this connection has components
\begin{align*}
W_{121}{}^0 = -W_{211}{}^0 = 1, \qquad W_{abc}{}^d = 0 \mbox{ otherwise}.
\end{align*}
The associated 2nd order ODE system is
\begin{align} \label{E:Egorov-ODE}
\ddot{x}^i = 2 x^1 \dot{x}^1 \dot{x}^2 \dot{x}^i \qquad (1 \leq i \leq m).
\end{align}
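This is immediate from \eqref{E:proj-ODE}: with $\Gamma^0_{12} = \Gamma^0_{21} = x^1$ the only non-vanishing Christoffel symbols, the only surviving term is
\begin{align*}
\ddot{x}^i = \Gamma^0_{jk}\, \dot{x}^i \dot{x}^j \dot{x}^k = 2 x^1 \dot{x}^1 \dot{x}^2\, \dot{x}^i.
\end{align*}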
Its $m^2 + 4$ point symmetries are (writing $t := x^0$):
\begin{align} \label{E:Egorov-syms}
\l\{ \begin{array}{ll}
\p_t,\quad \p_{x^2},\ \dots,\ \p_{x^m},\quad x^i\p_t,\quad x^i\p_{x^j}\quad (i\ge1,\ j\ge3),\\
2t\p_t+x^1\p_{x^1},\quad t\p_t+x^2\p_{x^2},\quad x^1x^2\p_t - \p_{x^1},\quad (x^1)^3\p_t - 3x^1\p_{x^2}.
\end{array} \r.
\end{align}
where, as in Section \ref{S:2-ODE}, the actual symmetries on $(t,x^j,p^j)$-space $M$ are the prolongation of these vector fields. Since $\{ \p_{p^i} \}_{i=1}^m$ spans the vertical subspace for the projection $M \to N$, then \eqref{E:Egorov-syms} are also the symmetries of the projective connection determined by \eqref{E:Egorov-model}. The Fels torsion \eqref{E:Fels-inv} $T = (T^i_j)$ of \eqref{E:Egorov-ODE} has matrix rank 1 and components
\begin{align*}
T_1^i=-p^1p^2 p^i,\qquad T_2^i=(p^1)^2 p^i,\qquad T_j^i=0,\qbox{for} j \geq 3.
\end{align*}
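Equivalently, $T$ factors as
\begin{align*}
T^i_j = p^i c_j, \qquad c = \l(-p^1 p^2,\ (p^1)^2,\ 0,\ \ldots,\ 0\r),
\end{align*}
which makes the rank condition manifest: $T$ has matrix rank 1 wherever $p^1 \neq 0$.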
\subsection{Bracket-generating distributions}
\subsubsection{$G_2 / P_1$ geometry} \label{S:G2P1} We work over $\bbC$. (Over $\bbR$, use the split-real form of $G_2$.) The underlying geometry for a (regular, normal) $G_2 / P_1$ geometry is a $(2,3,5)$-distribution. The equivalence problem for such structures was studied by Cartan in his well-known five-variables paper \cite{Car1910}. The flat model, in Monge form (see Example \ref{ex:Monge}), is the Hilbert--Cartan equation $z' = (y'')^2$.
Writing $\fg = \text{Lie}(G_2)$, $\fp_1$ induces a 3-grading on $\fg$, with $\fg_0 \cong \fgl_2(\bbC)$. In Example \ref{EX:G2P1-2}, we saw $H^2_+(\fg_-,\fg) \cong \Gdd{xs}{-8,4} \cong \bigodot^4(\fg_{-1})^* \cong \bigodot^4 (\bbC^2)^*$, i.e. binary quartics.
\begin{thm} For $(2,3,5)$-distributions, $\fS = 7$.
\end{thm}
\begin{proof}
For $w = (12)$, $J_w = \{ 2\}$, $I_w = \emptyset$, so $G_2 / P_1$ is \PRp, and $\fa(w) = \fg_- \op \fa_0$. Hence, $\fp_w^{\opn} \cong \fp_1 \subset A_1$, so $\dim(\fa_0) = \dim(\fp_w^{\opn}) = 2$. Thus, $\fS = \dim(\fa(w)) = \dim(\fg_-) + \dim(\fa_0) = 5 + 2 = 7$.
\end{proof}
Using the classification of $G_0 \cong \text{GL}_2(\bbC)$ orbits in $\bigodot^4(\bbC^2)^*$, we analyze each root type (similar to the Petrov analysis in Section \ref{S:4d-Lor}). Let $\{ e_1, e_2\}$ and $\{ \omega_1, \omega_2 \}$ be (Killing) dual bases in $\fg_{-1}$ and $\fg_1$ respectively. Any element of $\fg_0 = \fgl(\fg_{-1})$ acts on $\bigodot^4 (\fg_{-1})^*$, e.g. $\pmat{a & b \\ c & d} \cdot \omega_1^4 = -4(a \omega_1 + b\omega_2) \omega_1^3$. For a $\fgl_2(\bbC)$ basis, take
$X= \pmat{0 & 0\\ 1 & 0}$,
$Y= \pmat{0 & 1\\ 0 & 0}$,
$H= \pmat{1 & 0\\ 0 & -1}$,
$I= \pmat{1 & 0\\ 0 & 1}$.
(Note $Z = -I$.) Symmetry bounds for each root type are given in Table \ref{F:G2-P1}.
Some models, found either by Cartan \cite{Car1910} or Strazzullo \cite[Section 6.10]{Str2009}, are given in Table \ref{F:G2-P1-ex}. We refer to \cite{Str2009} for the symmetry algebras.
\begin{table}[h]
$\begin{array}{|c|c|c|c|c|} \hline
\mbox{Type} & \mbox{Normal form $\phi$} & \mbox{Basis for } \fa_0^\phi & \dim(\fa^\phi) & \fa^\phi \mbox{ filtration-rigid?} \\ \hline
{}(4) & \omega_1^4 & X, H-I & 7 & \times\\
{}(3,1) & \omega_1^3 \omega_2 & 2H - I & 6 & \checkmark\\
{}(2,2) & \omega_1^2 \omega_2^2 & H & 6 & \times\\
{}(2,1,1) & \omega_1^2 \omega_2 (\omega_1-\omega_2) & \cdot & 5 & \times\\
{}(1,1,1,1) & \omega_1 \omega_2 (\omega_1 - \omega_2)(\omega_1-k\omega_2) & \cdot & 5 & \times\\ \hline
\end{array}$
\caption{Root types and upper bounds on symmetry dimensions for $G_2 / P_1$ geometries}
\label{F:G2-P1}
\end{table}
\begin{table}[h]
$\begin{array}{|c|c|c|c|c|} \hline
\mbox{Root type} & F(x,y,p,q,z) & \dim(\cS)& \mbox{Reference in \cite{Str2009}}\\ \hline
{}(4) & q^3 & 7 & 6.1.1\\
{}(3,1) & p^3 + q^2 & 4 & 6.1.3\\
{}(2,2) & y + \ln(q) & 6 & 6.3.4\\
{}(2,1,1) & pq^2 & 5 & 6.1.6\\
{}(1,1,1,1) & pq^3 & 5 & 6.1.7\\ \hline
\end{array}$
\caption{Some model $(2,3,5)$-distributions for each root type}
\label{F:G2-P1-ex}
\end{table}
\begin{thm} \label{T:G2P1-bounds} Among $(2,3,5)$-distributions with constant root type:
\begin{center}
\begin{tabular}{|c||c|c|c|c|c|} \hline
Root type & $(4)$ & $(3,1)$ & $(2,2)$ & $(2,1,1)$ & $(1,1,1,1)$\\ \hline
max. $\dim(\cS)$ & $7$ & $4$ or $5$ & $6$ & $5$ & $5$ \\ \hline
\end{tabular}
\end{center}
Except for type $(3,1)$, all models realizing these upper bounds are locally homogeneous near a regular point.\footnote{R.L. Bryant \cite{BryComm} analyzed the $(3,1)$ case by continuing Cartan's method of equivalence \cite{Car1910} and showed that no models with $5$-dimensional symmetry exist. Thus, $4$ is the maximal symmetry dimension for type $(3,1)$.}
\end{thm}
\begin{proof} We show that in type $(3,1)$, $\fa = \fg_- \op \fa_0$ is filtration-rigid. Here, $e_0 := 2H-I$ spans $\fa_0$. Take a basis of $\fg_-$ as in \eqref{E:G2P1-symbol}. Then $\text{ad}_{e_0}$ is diagonal with
\[
\cE_0 = \{ 0 \}, \quad \cE_{-1} = \{ 1, -3 \}, \quad \cE_{-2} = \{ -2 \}, \quad \cE_{-3} = \{ -1, -5 \}.
\]
Let $\ff$ be a filtered deformation of $\fa$. The first hypothesis of Proposition \ref{P:f-rigid} is satisfied, as is (i), but (ii) is not. Similar to the Petrov type III case, there is $\tilde{e}_0 \in \ff^0$ which acts diagonally in some basis $\{ f_i^\alpha \}$ with the same eigenvalues as above, and with the only other non-trivial brackets
\[
[f_{-1}^1,f_{-1}^2] = f_{-2}, \quad [f_{-1}^1, f_{-2}] = f_{-3}^1, \quad [f_{-1}^2, f_{-2}] = f_{-3}^2, \quad
[f_{-1}^1,f_{-3}^1] = a \tilde{e}_0, \quad [f_{-2},f_{-3}^1] = b f_{-1}^2.
\]
Since $0 = \Jac_\ff(f_{-1}^1, f_{-3}^1, f_{-3}^2 ) = -5a f_{-3}^2$ and $0 = \Jac_\ff(f_{-1}^1, f_{-2}, f_{-3}^1 ) = (2a - b) f_{-2}$, then $\fa$ is filtration-rigid. Aside from type $(3,1)$, when the upper bound is realized, $\fg_- \subset \fs(u)$.
\end{proof}
Theorem \ref{T:G2P1-bounds} recovers the bounds found by Cartan \cite{Car1910} using his method of equivalence.
\subsubsection{$A_3 / P_{1,2}$ geometries} \label{S:A3P12-dist}
As in Section \ref{S:2-ODE}, the underlying structure for regular, normal $A_3 / P_{1,2}$ geometries is still a distribution $D$ on a 5-manifold with a splitting $D = L \op \Pi$ into rank 1 and 2 subbundles, but now $\Pi$ is not necessarily integrable; instead $[\Pi,\Pi] \subseteq D$ \cite{CS2009,BEGN2012}. Having $(\Kh)_{+1}$ nowhere vanishing is equivalent to $[\Pi,\Pi] = D$, in which case $\Pi$ becomes a $(2,3,5)$-distribution.
\begin{example} On a 5-manifold $(x,y,p,q,z)$, consider $\Pi$ spanned by $\{ \p_q, \p_x + p \p_y + q\p_p + q^2 \p_z \}$ (i.e.\ the flat model for $(2,3,5)$-distributions) and $L = \{ \p_p + 2q\p_z \}$. Then $(\Pi,L)$ has $\cS$ given by
\begin{align*}
& \bX_1 = \p_x, \quad \bX_2 = \p_y, \quad \bX_3 = \p_z, \quad \bX_4 = x\p_y + \p_p, \quad \bX_5 = x^2 \p_y + 2x\p_p + 2\p_q + 4p\p_z, \\
& \bX_6 = q\p_x + \l(pq - \frac{z}{2}\r) \p_y + \frac{q^2}{2} \p_p + \frac{q^3}{3} \p_z, \quad
\bX_7 = \frac{x^3}{6} \p_y + \frac{x^2}{2} \p_p + x\p_q + (2px - 2y) \p_z, \\
& \bX_8 = x\p_x + y\p_y - q\p_q - z\p_z, \quad
\bX_9 = y\p_y + p\p_p + q\p_q + 2z\p_z.
\end{align*}
Abstractly, in the flat $G_2 / P_1$ model, $\Pi$ and $L$ correspond to the subspaces of $\text{Lie}(G_2) / \fp_1$ spanned by $\{ e_{-\alpha_1}, e_{-\alpha_1 - \alpha_2} \}$ and $\{ e_{-2\alpha_1 - \alpha_2} \}$ (modulo $\fp_1$). Only $\fgl_2(\bbC) \cong \fg_0 \subset \fp_1$ stabilizes both $\Pi$ and $L$. Hence, $\cS \cong \fp_1^{\opn}$ (via a Lie algebra anti-automorphism).
\end{example}
\begin{prop} Consider a $(2,3,5)$-distribution $\Pi$ and a line field $L$ such that $[\Pi,\Pi] = L \op \Pi$. The infinitesimal symmetry algebra of $(\Pi,L)$ is at most 9-dimensional, and this bound is sharp.
\end{prop}
\begin{proof}
For $A_3 / P_{1,2}$, $\fg_0 \cong \bbC^2 \times A_1$, and $H^2(\fg_-,\fg)_{+1} = \bbV_{-w\cdot\lambda_\fg} = \Athree{xxw}{4,-4,0}$, where $w = (23) \in W^\fp_+(2)$. We have $J_w = I_w = \emptyset$, so $\fa(w) = \fg_- \op \fa_0$, $\fp_w^{\opn} \cong A_1$, and $\dim(\fa_0) = |I_\fp| - 1 + \dim(\fp_w^{\opn}) = 4$. Then $\fS_w = \dim(\fa(w)) = \dim(\fg_-) + \dim(\fa_0) = 5 + 4 = 9$.
\end{proof}
\begin{remark}
As indicated above, the structure is in fact not just a bracket-generating distribution, since $D$ carries a splitting. However, it fits in naturally after the discussion of $G_2 / P_1$ geometry.
\end{remark}
\subsubsection{$B_\rkg / P_\rkg$ geometry, $\rkg \geq 3$} Let $\fg \cong B_\rkg \cong \text{SO}_{2\rkg+1}(\bbC)$, but all statements here still hold when considering $\text{SO}_{\rkg+1,\rkg}$. The underlying structure for a (regular, normal) $B_\rkg / P_\rkg$ geometry is a rank $\rkg$ distribution on a $\binom{\rkg+1}{2}$-manifold, with generic growth $(\rkg,\binom{\rkg+1}{2})$. This is a 2-graded geometry with $\fg_0 \cong \bbC \times A_{\rkg-1} \cong \fgl_\rkg(\bbC)$, and Levi bracket inducing the $\fg_0$-module isomorphism $\bigwedge^2 \fg_{-1} \cong \fg_{-2}$. The $\rkg = 3$ case was studied by R.L. Bryant \cite{Bry1979}, \cite{Bry2006}.
\begin{thm} Let $\rkg \geq 3$. For $(\rkg,\binom{\rkg+1}{2})$-distributions,
$\fS = \l\{ \begin{array}{cl} \frac{\rkg (3\rkg - 7)}{2} + 10, & \rkg \geq 4;\\ 11, & \rkg = 3. \end{array} \r.$
\end{thm}
\begin{proof} We have $W^\fp_+(2) = \{ (\rkg,\rkg-1) \}$. For $w = (\rkg,\rkg-1)$, $I_w = \emptyset$, so $\fa(w) = \fg_- \op \fa_0$, and
\Ben
\item $\rkg \geq 4$: $J_w = \{ 2, \rkg - 2, \rkg -1 \}$, so $\fp_w^{\opn} \cong \fp_{2, \rkg-2, \rkg-1} \subset A_{\rkg-1}$, and $\dim(\fa_0) = \frac{\rkg(3\rkg-7)}{2} + 10 - \binom{\rkg+1}{2}$;
\item $\rkg = 3$: $J_w = \{ 1, 2 \}$, so $\fp_w^{\opn} \cong \fp_{1,2} \subset A_2$, and $\dim(\fa_0) = 5$;
\Een
since $\dim(\fa_0) = \dim(\fp_w^{\opn})$. Then $\fS = \dim(\fa(w)) = \dim(\fg_-) + \dim(\fa_0)$ yields the result.
\end{proof}
Let us exhibit some models in the $\rkg = 3$ case. On a 6-manifold $(x_1,x_2,x_3,y_1,y_2,y_3)$, we specify 3-plane distributions by the vanishing of three 1-forms. The flat model is
\[
\theta_1 = dy_1 - x_3 dx_2, \qquad
\theta_2 = dy_2 - x_1 dx_3, \qquad
\theta_3 = dy_3 - x_2 dx_1,
\]
which has $B_3$ symmetry algebra ($21$-dimensional). Two new models are
\[
\begin{array}{cl@{\qquad}l@{\qquad}l}
\mbox{(a)}: & \theta_1 = dy_1 - x_3 dx_2, & \theta_2 = dy_2 - x_1 dx_3, & \theta_3 = dy_3 - x_2 dx_1 - x_1{}^2 x_3{}^2 dx_3;\\
\mbox{(b)}: & \theta_1 = dy_1 - x_3 dx_2, & \theta_2 = dy_2 - x_1 dx_3, & \theta_3 = dy_3 - (x_2 + y_3) dx_1.
\end{array}
\]
The respective 11 and 10-dimensional symmetry algebras (found using the {\tt DifferentialGeometry} package in {\tt Maple}) are given in Table \ref{F:B3P3-models}.
\begin{table}[h]
$\begin{array}{|c|l|} \hline
& \multicolumn{1}{c|}{\mbox{Symmetries}} \\ \hline\hline
\multirow{8}{*}{(a)} &
\bX_1 = \p_{y_1}, \quad \bX_2 = \p_{y_2}, \quad \bX_3 = \p_{y_3}, \quad
\bX_4 = \p_{x_2}+x_1 \p_{y_3}, \\
& \bX_5 = 2 x_1 \p_{x_1} - x_2 \p_{x_2} - x_3 \p_{x_3} - 2 y_1 \p_{y_1} + y_2 \p_{y_2} + y_3 \p_{y_3},\\
& \bX_6 = -x_1 \p_{x_1} + 2 x_2 \p_{x_2} + x_3 \p_{x_3} + 3 y_1 \p_{y_1} + y_3 \p_{y_3},\\
& \bX_7 = x_1 \p_{x_2} + (x_1 x_3 - y_2) \p_{y_1} + \half x_1{}^2 \p_{y_3}, \quad
\bX_8 = x_3 \p_{x_2} + \half x_3{}^2 \p_{y_1} + (x_1 x_3 - y_2) \p_{y_3},\\
& \bX_9 = \frac{1}{2} \p_{x_1} + \frac{1}{3} x_3{}^3 \p_{x_2} + \frac{1}{4} x_3{}^4 \p_{y_1} + \frac{1}{2} x_3 \p_{y_2}+ \frac{1}{3} x_1 x_3{}^3 \p_{y_3},\\
& \bX_{10} = x_3 \p_{x_1} + \frac{1}{2} x_3{}^4 \p_{x_2} + \frac{2}{5} x_3{}^5 \p_{y_1} + \frac{1}{2} x_3{}^2 \p_{y_2}
+ (\half x_1 x_3{}^4 - y_1 + x_2 x_3) \p_{y_3},\\
& \bX_{11} = (2 x_1 x_3{}^2 + 4 x_3 y_2) \p_{x_2} + 3 \p_{x_3}
+ (3 x_2 +2 x_1 x_3{}^3 + 2 x_3{}^2 y_2 ) \p_{y_1} \\
& \qquad\qquad + (x_1{}^2 x_3{}^2 + 4 x_1 x_3 y_2 - 2 y_2{}^2) \p_{y_3}\\ \hline
\multirow{4}{*}{(b)} &
\bX_1 = \p_{y_1}, \quad \bX_2 = \p_{y_2}, \quad \bX_3 = \p_{x_2} - \p_{y_3}, \quad
\bX_4 = e^{x_1} \p_{y_3}, \quad
\bX_5 = \p_{x_3} + x_2 \p_{y_1}, \\
& \bX_6 = \p_{x_1} + x_3 \p_{y_2}, \quad
\bX_7 = x_2 \p_{x_2} + y_1 \p_{y_1} + y_3 \p_{y_3}, \quad
\bX_8 = x_3 \p_{x_3} + y_1 \p_{y_1} + y_2 \p_{y_2}, \\
& \bX_9 = e^{-x_1} (\p_{x_3}+ (y_3+x_2) \p_{y_1} + (1+x_1) \p_{y_2}), \quad
\bX_{10} = x_1 \p_{x_2} + (x_1 x_3 - y_2) \p_{y_1} - (1+x_1) \p_{y_3}\\ \hline
\end{array}$
\caption{Symmetries of new models for generic rank 3 distributions on 6-manifolds}
\label{F:B3P3-models}
\end{table}
Model (a) is submaximally symmetric and realizes the abstract model $\ff / \fk$ in Theorem \ref{T:realize}. We now outline its construction. Take $\bbC^7$ with anti-diagonal symmetric $\bbC$-bilinear form, $\epsilon_i : \fh \to \bbC$, and simple roots as in Example \ref{ex:conf-odd-ann}. By Kostant's theorem,
\[
H^2(\fg_-,\fg) \cong \Bthree{ssx}{2,2,-6} = \bbV_{-w\cdot\lambda_\fg}, \qquad w = (32),
\]
with lowest weight $-w\cdot\lambda_2 = -\alpha_1 + 3\alpha_3 = -\epsilon_1 + \epsilon_2 + 3\epsilon_3$ (homogeneity $+3$). Since $\Phi_w = \{ \alpha_2 + 2\alpha_3, \alpha_3 \}$, and $w(-\lambda_2) = -\lambda_1 - \lambda_2 + 2\lambda_3 = -\alpha_1 - \alpha_2$, then $ \phi_0 = e_{\alpha_2 + 2\alpha_3} \wedge e_{\alpha_3} \ot e_{-\alpha_1 - \alpha_2}$.
Letting $\fa_0 = \fann(\phi_0)$,
\[
\fa = \fg_- \op \fa_0 = \mat{ccc|c|ccc}{ a_{22} + 3a_{33} & 0 & 0 & 0 & 0 & 0 & 0\\
a_{21} & a_{22} & 0 & 0 & 0 & 0 & 0\\
a_{31} & a_{32} & a_{33} & 0 & 0 & 0 & 0\\ \hline
a_{41} & a_{42} & a_{43} & 0 & 0 & 0 & 0\\ \hline
a_{51} & a_{52} & 0 & -a_{43} & -a_{33} & 0 & 0\\
a_{61} & 0 & -a_{52} & -a_{42} & -a_{32} & -a_{22} & 0\\
0 & -a_{61} & -a_{51} & -a_{41} & -a_{31} & -a_{21} & -(a_{22}+3a_{33})},
\]
and let $E_{ij}$ denote the matrix with $a_{ij} = 1$ and $0$'s elsewhere. Calculating matrix commutators determines the Lie algebra structure on $\fa$. If we take the following as root vectors
\[
e_{-\alpha_2 - 2\alpha_3} = E_{52} \in \fg_{-2}; \qquad
e_{-\alpha_3} = E_{43} \in \fg_{-1}; \qquad
e_{-\alpha_1 - \alpha_2} = E_{31} \in \fg_0,
\]
then $\phi_0$ yields the deformation $[E_{52},E_{43}] = E_{31}$ of $\fa$. (Note that $[E_{52}, E_{43}] = 0$ in $\fa$.) This yields a filtered deformation $\ff$ of $\fa$, and model (a) is a local model for $\ff / \fk$, where $\fk = \fa_0$.
\newpage
\chapter{Secondary $\Ext$-groups associated to pair algebras}\label{sext}
In this chapter we introduce algebraically secondary $\Ext$-groups $\Ext_B$
over a pair algebra $B$. In \cite{BJ5} we already studied secondary
$\Ext$-groups in an additive track category which yield the $\Ext$-groups
$\Ext_B$ as a special case if one considers the track category of
$B$-modules. In chapter \ref{E3} we shall see that the E$_3$-term of the
Adams spectral sequence is given by secondary $\Ext$-groups over the pair
algebra ${\mathscr B}$ of secondary cohomology operations.
\section{Modules over pair algebras}
\label{secodjf}
We here recall from \cite{Baues} the notion of pair modules, pair
algebras, and pair modules over a pair algebra $B$. The category
$B$-${\mathbf{Mod}}$ of pair modules over $B$ is an additive track category in which
we consider secondary resolutions as defined in \cite{BJ5}. Using such
secondary resolutions we shall obtain the secondary derived functors
$\Ext_B$ in section \ref{secodif}.
Let $k$ be a commutative ring with unit and let ${\mathbf{Mod}}$ be the category of
$k$-modules and $k$-linear maps. This is a symmetric monoidal category via
the tensor product $A\!\ox\!B$ over $k$ of $k$-modules $A$, $B$. A \emph{pair}
of modules is a morphism
\begin{equation}
X=\left(X_1\xto\d X_0\right)
\end{equation}
in ${\mathbf{Mod}}$. We write $\pi_0(X)=\coker\d$ and $\pi_1(X)=\ker\d$. A
\emph{morphism} $f:X\to Y$ of pairs is a commutative diagram
$$
\xymatrix
{
X_1\ar[r]^{f_1}\ar[d]_\d&Y_1\ar[d]^\d\\
X_0\ar[r]^{f_0}&Y_0.
}
$$
Evidently pairs with these morphisms form a category ${\calli{Pair}}({\mathbf{Mod}})$ and
one has functors
$$
\pi_1, \pi_0 : {\calli{Pair}}({\mathbf{Mod}})\to{\mathbf{Mod}}.
$$
A pair morphism is called a
\emph{weak equivalence} if it induces isomorphisms on $\pi_0$ and $\pi_1$.
Clearly a pair in ${\mathbf{Mod}}$ coincides with a chain complex concentrated in
degrees 0 and 1. For two pairs $X$ and $Y$ the tensor product of the
complexes corresponding to them is concentrated in degrees 0, 1 and 2
and is given by
$$
X_1\!\ox\!Y_1\xto{\d_1}X_1\!\ox\!Y_0\oplus
X_0\!\ox\!Y_1\xto{\d_0}X_0\!\ox\!Y_0
$$
with $\d_0=(\d\ox1,1\ox\d)$ and $\d_1=(-1\ox\d,\d\ox1)$. Truncating $X\ox Y$
we get the pair
$$
X\bar\otimes Y=
\left((X\bar\otimes Y)_1=\coker(\d_1)\xto\d X_0\ox Y_0=(X\bar\otimes Y)_0\right)
$$
with $\d$ induced by $\d_0$.
\begin{Remark}\label{trunc}
Note that the full embedding of the category of pairs into the category of
chain complexes induced by the above identification has a left adjoint
$\Tr$ given by truncation: for a chain complex
$$
C=\left(...\to C_2\xto{\d_1}C_1\xto{\d_0}C_0\xto{\d_{-1}}C_{-1}\to...\right),
$$
one has
$$
\Tr(C)=\left(\coker(\d_1)\xto{\bar{\d_0}}C_0\right),
$$
with $\bar{\d_0}$ induced by $\d_0$. Then clearly one has
$$
X\bar\otimes Y=\Tr(X\ox Y).
$$
Using the fact that $\Tr$ is a reflection onto a full subcategory, one
easily checks that the category ${\calli{Pair}}({\mathbf{Mod}})$ together with the tensor
product $\bar\otimes$ and unit $k=(0\to k)$ is a symmetric monoidal category,
and $\Tr$ is a monoidal functor.
\end{Remark}
We define the tensor product $A\ox B$ of two graded modules in the usual
way, i.~e. by
$$
(A\ox B)^n=\bigoplus_{i+j=n}A^i\ox B^j.
$$
A \emph{pair module} is a graded object of ${\calli{Pair}}({\mathbf{Mod}})$, i.~e. a
sequence $X^n=(\d:X_1^n\to X_0^n)$ of pairs in ${\mathbf{Mod}}$. We identify such a
pair module $X$ with the underlying morphism $\d$ of degree 0 between
graded modules
$$
X=\left(X_1\xto\d X_0\right).
$$
Now the tensor product $X\bar\otimes Y$ of graded pair modules $X$, $Y$ is defined by
\begin{equation}\label{grapr}
(X\bar\otimes Y)^n=\bigoplus_{i+j=n}X^i\bar\otimes Y^j.
\end{equation}
This defines a monoidal structure on the category of graded pair modules.
Morphisms in this category are of degree 0.
For two morphisms $f,g:X\to Y$ between graded pair modules, a
\emph{homotopy} $H:f\then g$ is a morphism $H:X_0\to Y_1$ of degree 0 as in
the diagram
\begin{equation}\label{homoto}
\alignbox{
\xymatrix
{
X_1\ar@<.5ex>[r]^{f_1}\ar@<-.5ex>[r]_{g_1}\ar[d]_\d&Y_1\ar[d]^\d\\
X_0\ar@<.5ex>[r]^{f_0}\ar@<-.5ex>[r]_{g_0}\ar[ur]|H&Y_0,
}}
\end{equation}
satisfying $f_0-g_0=\d H$ and $f_1-g_1=H\d$.
A \emph{pair algebra} $B$ is a monoid in the monoidal category of graded pair
modules, with multiplication
$$
\mu:B\bar\otimes B\to B.
$$
We assume that $B$ is concentrated in nonnegative degrees, that is $B^n=0$
for $n<0$.
A \emph{left $B$-module} is a graded pair module $M$ together with a left
action
$$
\mu:B\bar\otimes M\to M
$$
of the monoid $B$ on $M$.
More explicitly pair algebras and modules over them can be described as
follows.
\begin{Definition}
A \emph{pair algebra} $B$ is a graded pair
$$
\d:B_1\to B_0
$$
in ${\mathbf{Mod}}$ with $B_1^n=B_0^n=0$ for $n<0$ such that $B_0$ is a graded
algebra in ${\mathbf{Mod}}$, $B_1$ is a graded $B_0$-$B_0$-bimodule, and $\d$ is a
bimodule homomorphism. Moreover for $x,y\in B_1$ the equality
$$
\d(x)y=x\d(y)
$$
holds in $B_1$.
\end{Definition}
It is easy to see that there results an exact sequence of graded
$B_0$-$B_0$-bimodules
$$
0\to\pi_1B\to B_1\xto\d B_0\to\pi_0B\to0
$$
where in fact $\pi_0B$ is a $k$-algebra, $\pi_1B$ is a
$\pi_0B$-$\pi_0B$-bimodule, and $B_0\to\pi_0(B)$ is a homomorphism of
algebras.
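Note that the last equality in the definition is exactly what makes $\pi_1B$ a $\pi_0B$-$\pi_0B$-bimodule: for $x \in \pi_1B = \ker\d$ and $b \in B_1$ one has
$$
x\d(b) = \d(x)b = 0, \qquad \d(b)x = b\d(x) = 0,
$$
so the actions of $B_0$ on $\pi_1B$ kill the image of $\d$ and hence factor through $\pi_0B = \coker\d$.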
\begin{Definition}\label{bmod}
A \emph{(left) module} over a pair algebra $B$ is a graded pair
$M=(\d:M_1\to M_0)$ in ${\mathbf{Mod}}$ such that $M_1$ and $M_0$ are left
$B_0$-modules and $\d$ is $B_0$-linear. Moreover, a $B_0$-linear map
$$
\bar\mu:B_1\!\otimes_{B_0}\!M_0\to M_1
$$
is given fitting in the commutative diagram
$$
\xymatrix{
B_1\otimes_{B_0}M_1\ar[r]^{1\ox\d}\ar[d]_\mu&B_1\otimes_{B_0}M_0\ar[dl]^{\bar\mu}\ar[d]^\mu\\
M_1\ar[r]_\d&M_0,
}
$$
where $\mu(b\ox m)=\d(b)m$ for $b\in B_1$ and $m\in M_1\cup M_0$.
For an indeterminate element $x$ of degree $n=|x|$ let $B[x]$ denote the
$B$-module with $B[x]_i$ consisting of expressions $bx$ with $b\in B_i$,
$i=0,1$, with $bx$ having degree $|b|+n$, and structure maps given by
$\d(bx)=\d(b)x$, $\mu(b'\ox bx)=(b'b)x$ and $\bar\mu(b'\ox bx)=(b'b)x$.
A \emph{free} $B$-module is a direct sum of several copies of modules of
the form $B[x]$, with $x\in I$ for some set $I$ of indeterminates of
possibly different degrees. It will be denoted
$$
B[I]=\bigoplus_{x\in I}B[x].
$$
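Since kernels and cokernels commute with direct sums, one computes directly
$$
\pi_0B[I] \cong \bigoplus_{x\in I}(\pi_0B)x, \qquad \pi_1B[I] \cong \bigoplus_{x\in I}(\pi_1B)x;
$$
in particular $\pi_0B[I]$ is a free $\pi_0B$-module on the generating set $I$.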
For a left $B$-module $M$ one has the exact sequence of $B_0$-modules
$$
0\to\pi_1M\to M_1\xto\d M_0\to\pi_0M\to0
$$
where $\pi_0M$ and $\pi_1M$ are actually $\pi_0B$-modules.
Let $B$-${\mathbf{Mod}}$ be the category of left modules over the pair algebra $B$.
Morphisms $f=(f_0,f_1):M\to N$ are pair morphisms which are
$B$-equivariant, that is, $f_0$ and $f_1$ are $B_0$-equivariant and
compatible with $\bar\mu$ above, i.~e. the diagram
$$
\xymatrix{
B_1\ox_{B_0}M_0\ar[r]^-{\bar\mu}\ar[d]_{1\ox f_0}&M_1\ar[d]^{f_1}\\
B_1\ox_{B_0}N_0\ar[r]^-{\bar\mu}&N_1
}
$$
commutes.
For two such maps $f,g:M\to N$ a track $H:f\then g$ is a degree zero map
\begin{equation}\label{track}
H:M_0\to N_1
\end{equation}
satisfying $f_0-g_0=\d H$ and $f_1-g_1=H\d$ such that $H$ is
$B_0$-equivariant. For tracks $H:f\then g$, $K:g\then h$ their composition
$K{\scriptstyle\Box} H:f\then h$ is $K+H$.
\end{Definition}
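Note that $K{\scriptstyle\Box}H$ is again a track $f\then h$, since
$$
\d(K+H)=(g_0-h_0)+(f_0-g_0)=f_0-h_0, \qquad (K+H)\d=f_1-h_1,
$$
and a sum of $B_0$-equivariant maps is $B_0$-equivariant.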
\begin{Proposition}
For a pair algebra $B$, the category $B$-${\mathbf{Mod}}$ with the above track
structure is a well-defined additive track category.
\end{Proposition}
\begin{proof}
For a morphism $f=(f_0,f_1):M\to N$ between $B$-modules, one has
$$
\Aut(f)=\set{H\in\Hom_{B_0}(M_0,N_1)\ |\ \d H=0,\ H\d=0}\cong\Hom_{\pi_0B}(\pi_0M,\pi_1N).
$$
Since this group is abelian, by \cite{Baues&JibladzeI} we know that $B$-${\mathbf{Mod}}$
is a linear track extension of its homotopy category by the bifunctor $D$
with $D(M,N)=\Hom_{\pi_0B}(\pi_0M,\pi_1N)$. It thus remains to show that the
homotopy category is additive and the bifunctor $D$ is biadditive.
By definition the set of morphisms $[M,N]$ between objects $M$, $N$ in the
homotopy category is given by the exact sequence of abelian groups
$$
\Hom_{B_0}(M_0,N_1)\to\Hom_B(M,N)\onto[M,N].
$$
This makes evident the abelian group structure on the hom-sets $[M,N]$. Bilinearity
of composition follows from consideration of the commutative diagram
$$
\xymatrix{
\Hom_{B_0}(M_0,N_1)\!\ox\!\Hom_B(N,P)\oplus
\Hom_B(M,N)\!\ox\!\Hom_{B_0}(N_0,P_1)\ar[d]\ar[r]^-\mu
&\Hom_{B_0}(M_0,P_1)\ar[d]\\
\Hom_B(M,N)\ox\Hom_B(N,P)\ar[r]\ar@{->>}[d]
&\Hom_B(M,P)\ar@{->>}[d]\\
[M,N]\ox[N,P]\ar@{-->}[r]
&[M,P]
}
$$
with exact columns, where $\mu(H\!\ox\!g+f\!\ox\!K)=g_1H+Kf_0$. It also shows
that the functor $B$-${\mathbf{Mod}}\to B$-${\mathbf{Mod}}_\simeq$ is linear. Since this functor
is the identity on objects, it follows that the homotopy category is additive.
Now note that both functors $\pi_0$, $\pi_1$ factor to define functors on
$B$-${\mathbf{Mod}}_\simeq$. Since these functors are evidently additive, it follows that
$D=\Hom_{\pi_0B}(\pi_0,\pi_1)$ is a biadditive bifunctor.
\end{proof}
\begin{Lemma}\label{freehom}
If $M$ is a free $B$-module, then the canonical map
$$
[M,N]\to\Hom_{\pi_0B}(\pi_0M,\pi_0N)
$$
is an isomorphism for any $B$-module $N$.
\end{Lemma}
\begin{proof}
Let $(g_i)_{i\in I}$ be a free generating set for $M$. Given a
$\pi_0(B)$-equivariant homomorphism $f:\pi_0M\to\pi_0N$, define its lifting
$\tilde f$ to $M$ by specifying $\tilde f(g_i)=n_i$, with $n_i$ chosen
arbitrarily from the class $f([g_i])=[n_i]$.
To show injectivity of the map, suppose given $f:M\to N$ with $\pi_0f=0$. This means
that $\im f_0\subset\im\d$, so we can choose $H(g_i)\in N_1$ in such a way
that $\d H(g_i)=f_0(g_i)$. This then extends uniquely to a $B_0$-module
homomorphism $H:M_0\to N_1$ with $\d H=f_0$; moreover any element of $M_1$ is a linear
combination of elements of the form $b_1g_i$ with $b_1\in B_1$, and for
these one has $H\d(b_1g_i)=H(\d(b_1)g_i)=\d(b_1)H(g_i)$. But
$f_1(b_1g_i)=b_1f_0(g_i)=b_1\d H(g_i)=\d(b_1)H(g_i)$ too, so $H\d=f_1$.
This shows that $f$ is nullhomotopic.
\end{proof}
\section{$\Sigma$-structure}
\begin{Definition}
The \emph{suspension} $\Sigma X$ of a graded object $X=(X^n)_{n\in{\mathbb Z}}$ is
given by degree shift, $(\Sigma X)^n=X^{n-1}$.
\end{Definition}
Let $\Sigma:X\to\Sigma X$ be the map of degree 1 given by the identity. If
$X$ is a left $A$-module over the graded algebra $A$ then $\Sigma X$ is a
left $A$-module via
\begin{equation}\label{suspact}
a\cdot\Sigma x=(-1)^{|a|}\Sigma(a\cdot x)
\end{equation}
for $a\in A$, $x\in X$. On the other hand if $\Sigma X$ is a right $A$-module then
$(\Sigma x)\cdot a=\Sigma(x\cdot a)$ yields the right $A$-module structure
on $\Sigma X$.
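For instance, \eqref{suspact} is indeed a left action: for homogeneous $a,b\in A$
and $x\in X$ one checks
$$
a\cdot(b\cdot\Sigma x)
=(-1)^{|b|}a\cdot\Sigma(b\cdot x)
=(-1)^{|a|+|b|}\Sigma(a\cdot(b\cdot x))
=(-1)^{|ab|}\Sigma((ab)\cdot x)
=(ab)\cdot\Sigma x.
$$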
\begin{Definition}\label{sigmas}
A \emph{$\Sigma$-module} is a graded pair module $X=(\d:X_1\to X_0)$
equipped with an isomorphism
$$
\sigma:\pi_1X\cong\Sigma\pi_0X
$$
of graded $k$-modules. We then call $\sigma$ a \emph{$\Sigma$-structure} of
$X$. A $\Sigma$-map between $\Sigma$-modules is a map $f$ between pair
modules such that $\sigma(\pi_1f)=\Sigma(\pi_0f)\sigma$. If $X$ is a pair
algebra then a $\Sigma$-structure is an isomorphism of
$\pi_0X$-$\pi_0X$-bimodules. If $X$ is a left module over a pair algebra $B$
then a $\Sigma$-structure of $X$ is an isomorphism $\sigma$ of left
$\pi_0B$-modules. Let
$$
(B\textrm{-}{\mathbf{Mod}})^\Sigma\subset B\textrm{-}{\mathbf{Mod}}
$$
be the track category of $B$-modules with $\Sigma$-structure and
$\Sigma$-maps.
\end{Definition}
\begin{Lemma}
Suspension of a $B$-module $M$ has by \eqref{suspact} the structure of a
$B$-module and $\Sigma M$ has a $\Sigma$-structure if $M$ has one.
\end{Lemma}
\begin{proof}
Given $\sigma:\pi_1M\cong\Sigma\pi_0M$ one defines a $\Sigma$-structure on
$\Sigma M$ via
$$
\pi_1\Sigma
M=\Sigma\pi_1M\xto{\Sigma\sigma}\Sigma\Sigma\pi_0M=\Sigma\pi_0\Sigma M.
$$
\end{proof}
Hence we get suspension functors between track categories
$$
\xymatrix{
B\textrm{-}{\mathbf{Mod}}\ar[r]^\Sigma&B\textrm{-}{\mathbf{Mod}}\\
(B\textrm{-}{\mathbf{Mod}})^\Sigma\ar[u]\ar[r]^\Sigma&(B\textrm{-}{\mathbf{Mod}})^\Sigma.\ar[u]
}
$$
\begin{Lemma}\label{add}
The track category $(B\mathrm{-}{\mathbf{Mod}})^\Sigma$ is ${\mathbb L}$-additive in the sense of
\cite{BJ5}, with ${\mathbb L}=\Sigma^{-1}$, as well as ${\mathbb R}$-additive,
with ${\mathbb R}=\Sigma$.
\end{Lemma}
\begin{proof}
The statement of the lemma means that the bifunctor
$$
D(M,N)=\Aut(0_{M,N})
$$
is either left- or right-representable, i.~e. there is an endofunctor
${\mathbb L}$, respectively ${\mathbb R}$ of $(B$-${\mathbf{Mod}})^\Sigma$ and a binatural
isomorphism $D(M,N)\cong[{\mathbb L} M,N]$, resp. $D(M,N)\cong[M,{\mathbb R} N]$.
Now by \eqref{track}, a track in $\Aut(0_{M,N})$ is a $B_0$-module
homomorphism $H:M_0\to N_1$ with $\d H=H\d=0$; hence
$$
D(M,N)\cong\Hom_{\pi_0B}(\pi_0M,\pi_1N)\cong\Hom_{\pi_0B}(\pi_0\Sigma^{-1}
M,\pi_0N)\cong\Hom_{\pi_0B}(\pi_0M,\pi_0\Sigma N).
$$
\end{proof}
\begin{Lemma}
If $B$ is a pair algebra with $\Sigma$-structure then each free $B$-module
has a $\Sigma$-structure.
\end{Lemma}
\begin{proof}
This is clear from the description of free modules in \ref{bmod}.
\end{proof}
\section{The secondary differential over pair algebras}\label{secodif}
For a pair algebra $B$ with a $\Sigma$-structure, for a $\Sigma$-module
$M$ over $B$, and a module $N$ over $B$ we now define the \emph{secondary differential}
$$
d_{(2)}:\Ext^n_{\pi_0B}(\pi_0M,\pi_0N)\to\Ext^{n+2}_{\pi_0B}(\pi_0M,\pi_1N).
$$
Here $d_{(2)}=d_{(2)}(M,N)$ depends on the $B$-modules $M$ and $N$ and is
natural in $M$ and $N$ with respect to maps in $(B\mathrm{-}{\mathbf{Mod}})^\Sigma$. For the
definition of $d_{(2)}$ we consider secondary chain complexes and secondary
resolutions. In \cite{BJ5} such a construction was performed in the
generality of an arbitrary ${\mathbb L}$-additive track category. We will first
present the construction of $d_{(2)}$ for the track category of pair
modules and then will indicate how this construction is a particular case
of the more general situation discussed in \cite{BJ5}.
\begin{Definition}\label{secs}
For a pair algebra $B$, a \emph{secondary chain complex} $M_\bullet$ in $B$-${\mathbf{Mod}}$
is given by a diagram
$$
\xymatrix@!{
...\ar[r]
&M_{n+2,1}\ar[r]^{d_{n+1,1}}\ar[d]|\hole^>(.75){\d_{n+2}}
&M_{n+1,1}\ar[r]^{d_{n,1}}\ar[d]|\hole^>(.75){\d_{n+1}}
&M_{n,1}\ar[r]^{d_{n-1,1}}\ar[d]|\hole^>(.75){\d_n}
&M_{n-1,1}\ar[r]\ar[d]|\hole^>(.75){\d_{n-1}}
&...\\
...\ar[r]\ar[urr]^<(.3){H_{n+1}}
&M_{n+2,0}\ar[r]_{d_{n+1,0}}\ar[urr]^<(.3){H_n}
&M_{n+1,0}\ar[r]_{d_{n,0}}\ar[urr]^<(.3){H_{n-1}}
&M_{n,0}\ar[r]_{d_{n-1,0}}\ar[urr]
&M_{n-1,0}\ar[r]
&...\\
}
$$
where each $M_n=(\partial_n:M_{n,1}\to M_{n,0})$ is a $B$-module, each
$d_n=(d_{n,0},d_{n,1})$ is a morphism in $B$-${\mathbf{Mod}}$, each $H_n$ is
$B_0$-linear and moreover the identities
\begin{align*}
d_{n,0}d_{n+1,0}&=\d_nH_n\\
d_{n,1}d_{n+1,1}&=H_n\d_{n+2}\\
\intertext{and}
H_nd_{n+2,0}&=d_{n,1}H_{n+1}
\end{align*}
hold for all $n\in{\mathbb Z}$. We thus see that in this case a secondary complex is
the same as a graded version of a \emph{multicomplex} (see e.~g. \cite{Meyer}) with only two
nonzero rows.
One then defines the \emph{total complex} Tot$(M_\bullet)$ to be
$$
...\ot
M_{n-1,0}\oplus M_{n-2,1}
\xot{
\left(
\begin{smallmatrix}
d_{n-1,0}&-\d_{n-1}\\
H_{n-2}&-d_{n-2,1}
\end{smallmatrix}
\right)
}
M_{n,0}\oplus M_{n-1,1}
\xot{
\left(
\begin{smallmatrix}
d_{n,0}&-\d_n\\
H_{n-1}&-d_{n-1,1}
\end{smallmatrix}
\right)
}
M_{n+1,0}\oplus M_{n,1}
\ot
...
$$
Cycles and boundaries in this complex will be called secondary cycles,
resp. secondary boundaries of $M_\bullet$. Thus a secondary $n$-cycle in
$M_\bullet$ is a pair $(c,\gamma)$ with $c\in M_{n,0}$, $\gamma\in
M_{n-1,1}$ such that $d_{n-1,0}c=\d_{n-1}\gamma$,
$H_{n-2}c=d_{n-2,1}\gamma$ and such a cycle is a
boundary iff there exist $b\in M_{n+1,0}$ and $\beta\in M_{n,1}$ with
$c=d_{n,0}b+\d_n\beta$ and $\gamma=H_{n-1}b+d_{n-1,1}\beta$. A secondary
complex $M_\bullet$ is called \emph{exact} if its total complex is exact, that is, if
secondary cycles are secondary boundaries.
\end{Definition}
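Note that the three identities required in the above definition are precisely
the conditions making Tot$(M_\bullet)$ a chain complex. Indeed, the composite of
two successive differentials is
$$
\left(
\begin{smallmatrix}
d_{n-1,0}&-\d_{n-1}\\
H_{n-2}&-d_{n-2,1}
\end{smallmatrix}
\right)
\left(
\begin{smallmatrix}
d_{n,0}&-\d_n\\
H_{n-1}&-d_{n-1,1}
\end{smallmatrix}
\right)
=
\left(
\begin{smallmatrix}
d_{n-1,0}d_{n,0}-\d_{n-1}H_{n-1}&-d_{n-1,0}\d_n+\d_{n-1}d_{n-1,1}\\
H_{n-2}d_{n,0}-d_{n-2,1}H_{n-1}&-H_{n-2}\d_n+d_{n-2,1}d_{n-1,1}
\end{smallmatrix}
\right),
$$
where the diagonal entries vanish by $d_{n-1,0}d_{n,0}=\d_{n-1}H_{n-1}$ and
$d_{n-2,1}d_{n-1,1}=H_{n-2}\d_n$, the lower left entry by
$H_{n-2}d_{n,0}=d_{n-2,1}H_{n-1}$, and the upper right entry since $d_{n-1}$ is a
map of pair modules, i.~e. $\d_{n-1}d_{n-1,1}=d_{n-1,0}\d_n$.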
Let us now consider a secondary chain complex $M_\bullet$ in $B$-${\mathbf{Mod}}$. It
is clear then that
$$
...\to\pi_0M_{n+2}\xto{\pi_0d_{n+1}}\pi_0M_{n+1}\xto{\pi_0d_n}\pi_0M_n\xto{\pi_0d_{n-1}}\pi_0M_{n-1}\to...
\leqno{\pi_0M_\bullet:}
$$
is a chain complex of $\pi_0B$-modules. The next result corresponds to
\cite{BJ5}*{lemma 3.5}.
\begin{Proposition}\label{exact}
Let $M_\bullet$ be a secondary complex consisting of $\Sigma$-modules and
$\Sigma$-maps between them. If $\pi_0(M_\bullet)$ is an exact complex then
$M_\bullet$ is an exact secondary complex. Conversely, if $\pi_0M_\bullet$ is
bounded below then secondary exactness of $M_\bullet$ implies exactness of
$\pi_0M_\bullet$.
\end{Proposition}
\begin{proof} The proof consists in translating the argument from the
analogous general statement in \cite{BJ5} to our setting.
Suppose first that $\pi_0M_\bullet$ is an exact complex, and consider a
secondary cycle $(c,\gamma)\in M_{n,0}\oplus M_{n-1,1}$, i.~e. one has
$d_{n-1,0}c=\d_{n-1}\gamma$ and $H_{n-2}c=d_{n-2,1}\gamma$. Then in
particular $[c]\in\pi_0M_n$ is a cycle, so there exists
$[b]\in\pi_0M_{n+1}$ with $[c]=\pi_0(d_n)[b]$. Take $b\in[b]$, then
$c-d_{n,0}b=\d_n\beta$ for some $\beta\in M_{n,1}$. Consider
$\delta=\gamma-H_{n-1}b-d_{n-1,1}\beta$. One has
$\d_{n-1}\delta=\d_{n-1}\gamma-\d_{n-1}H_{n-1}b-\d_{n-1}d_{n-1,1}\beta=d_{n-1,0}c-d_{n-1,0}d_{n,0}b-d_{n-1,0}\d_n\beta=0$,
so that $\delta$ is an element of $\pi_1M_{n-1}$. Moreover
$d_{n-2,1}\delta=d_{n-2,1}\gamma-d_{n-2,1}H_{n-1}b-d_{n-2,1}d_{n-1,1}\beta=H_{n-2}c-H_{n-2}d_{n,0}b-H_{n-2}\d_n\beta=0$,
i.~e. $\delta$ is a cycle in $\pi_1M_\bullet$. Since by assumption
$\pi_0M_\bullet$ is exact, taking into account the $\Sigma$-structure
$\pi_1M_\bullet$ is exact too, so that there exists $\psi\in\pi_1M_n$
with $\delta=d_{n-1,1}\psi$. Define $\tilde\beta=\beta+\psi$. Then
$d_{n,0}b+\d_n\tilde\beta=d_{n,0}b+\d_n\beta=c$ since $\psi\in\ker\d_n$.
Moreover
$d_{n-1,1}\tilde\beta=d_{n-1,1}\beta+d_{n-1,1}\psi=d_{n-1,1}\beta+\delta=\gamma-H_{n-1}b$,
which means that $(c,\gamma)$ is the boundary of $(b,\tilde\beta)$. Thus
$M_\bullet$ is an exact secondary complex.
Conversely suppose $M_\bullet$ is exact, and $\pi_0M_\bullet$ bounded
below. Given a cycle $[c]\in\pi_0(M_n)$, represent it by a $c\in M_{n,0}$.
Then $\pi_0d_{n-1}[c]=0$ implies $d_{n-1,0}c\in\im\d_{n-1}$, so there is a
$\gamma\in M_{n-1,1}$ such that $d_{n-1,0}c=\d_{n-1}\gamma$. Consider
$\omega=d_{n-2,1}\gamma-H_{n-2}c$. One has
$\d_{n-2}\omega=\d_{n-2}d_{n-2,1}\gamma-\d_{n-2}H_{n-2}c=d_{n-2,0}\d_{n-1}\gamma-d_{n-2,0}d_{n-1,0}c=0$,
i.~e. $\omega$ is an element of $\pi_1M_{n-2}$. Moreover
$d_{n-3,1}\omega=d_{n-3,1}d_{n-2,1}\gamma-d_{n-3,1}H_{n-2}c=H_{n-3}\d_{n-1}\gamma-H_{n-3}d_{n-1,0}c=0$,
so $\omega$ is an $(n-2)$-dimensional cycle in $\pi_1M_\bullet$. Using the
$\Sigma$-structure, this then gives an $(n-3)$-dimensional cycle in
$\pi_0M_\bullet$. Now since $\pi_0M_\bullet$ is bounded below, we may
assume by induction that it is exact in dimension $n-3$, so that $\omega$
is a boundary. That is, there exists $\alpha\in\pi_1M_{n-1}$ with
$d_{n-2,1}\alpha=\omega$. Define $\tilde\gamma=\gamma-\alpha$; then one
has
$d_{n-2,1}\tilde\gamma=d_{n-2,1}\gamma-d_{n-2,1}\alpha=d_{n-2,1}\gamma-\omega=H_{n-2}c$.
Moreover $\d_{n-1}\tilde\gamma=\d_{n-1}\gamma=d_{n-1,0}c$ since
$\alpha\in\ker\d_{n-1}$. Thus $(c,\tilde\gamma)$ is a secondary cycle,
and by secondary exactness of $M_\bullet$ there exists a pair $(b,\beta)$
with $c=d_{n,0}b+\d_n\beta$. Then $[c]=\pi_0(d_n)[b]$, i.~e. $c$ is a
boundary.
\end{proof}
\begin{Definition}
Let $B$ be a pair algebra with $\Sigma$-structure.
A \emph{secondary resolution} of a $\Sigma$-module $M=(\d:M_1\to M_0)$ over
$B$ is an exact secondary complex $F_\bullet$ in $(B\mathrm{-}{\mathbf{Mod}})^\Sigma$ of the form
$$
\xymatrix@!{
...\ar[r]\ar@{}[d]|\cdots
&F_{31}\ar[r]^{d_{21}}\ar[d]|\hole^>(.75){\d_3}
&F_{21}\ar[r]^{d_{11}}\ar[d]|\hole^>(.75){\d_2}
&F_{11}\ar[r]^{d_{01}}\ar[d]|\hole^>(.75){\d_1}
&F_{01}\ar[r]^{\epsilon_1}\ar[d]|\hole^>(.75){\d_0}
&M_1\ar[d]|\hole^>(.9)\d\ar[r]
&0\ar[r]\ar[d]|\hole
&0\ar[r]\ar[d]
&...\ar@{}[d]|\cdots
\\
...\ar[r]\ar[urr]^<(.3){H_2}
&F_{30}\ar[r]_{d_{20}}\ar[urr]^<(.3){H_1}
&F_{20}\ar[r]_{d_{10}}\ar[urr]^<(.3){H_0}
&F_{10}\ar[r]_{d_{00}}\ar[urr]^<(.3){\hat\epsilon}
&F_{00}\ar[r]_{\epsilon_0}\ar[urr]
&M_0\ar[r]\ar[urr]
&0\ar[r]
&0\ar[r]
&...
}
$$
where each $F_n=(\d_n:F_{n1}\to F_{n0})$ is a free $B$-module.
\end{Definition}
It follows from \ref{exact} that for any secondary resolution $F_\bullet$
of a $B$-module $M$ with $\Sigma$-structure, $\pi_0F_\bullet$ will be a
free resolution of the $\pi_0B$-module $\pi_0M$, so that in particular one
has
$$
\Ext^n_{\pi_0B}(\pi_0M,U)\cong H^n\Hom(\pi_0F_\bullet,U)
$$
for all $n$ and any $\pi_0B$-module $U$.
\begin{Definition}\label{secod}
Given a pair algebra $B$ with $\Sigma$-structure, a $\Sigma$-module $M$
over $B$, a module $N$ over $B$ and a secondary resolution $F_\bullet$ of $M$, we define
the \emph{secondary differential}
$$
d_{(2)}:\Ext^n_{\pi_0B}(\pi_0M,\pi_0N)\to\Ext^{n+2}_{\pi_0B}(\pi_0M,\pi_1N)
$$
in the following way. Suppose given a class
$[c]\in\Ext^n_{\pi_0B}(\pi_0M,\pi_0N)$.
First represent it by some element in $\Hom_{\pi_0B}(\pi_0F_n,\pi_0N)$
which is a cocycle, i.~e. its composite with $\pi_0(d_n)$ is 0. By
\ref{freehom} we know that the natural maps
$$
[F_n,N]\to\Hom_{\pi_0B}(\pi_0F_n,\pi_0N)
$$
are isomorphisms, hence to any such element corresponds a homotopy class
in $[F_n,N]$ which is also a cocycle, i.~e. the value of $[d_n,N]$ on it is
zero. Take a representative map $c:F_n\to N$ from this homotopy class. Then
$cd_n$ is nullhomotopic, so we can find a $B_0$-equivariant map $H:F_{n+1,0}\to N_1$
such that in the diagram
$$
\xymatrix@!{
&F_{n+2,1}\ar[r]^{d_{n+1,1}}\ar[d]_{\d_{n+2}}
&F_{n+1,1}\ar[r]^{d_{n,1}}\ar[d]|\hole^>(.75){\d_{n+1}}
&F_{n,1}\ar[r]^{c_1}\ar[d]|\hole^>(.75){\d_n}
&N_1\ar[d]^\d\\
&F_{n+2,0}\ar[r]_{d_{n+1,0}}\ar[urr]^<(.3){H_n}
&F_{n+1,0}\ar[r]_{d_{n,0}}\ar[urr]^<(.3)H
&F_{n,0}\ar[r]_{c_0}
&N_0
}
$$
one has $c_0d_{n,0}=\d H$, $c_1d_{n,1}=H\d_{n+1}$ and $\d c_1=c_0\d_n$.
Then taking $\Gamma=c_1H_n-Hd_{n+1,0}$ one has $\d\Gamma=0$,
$\Gamma\d_{n+2}=0$, so $\Gamma$ determines a map
$\bar\Gamma:\coker\d_{n+2}\to\ker\d$, i.~e. from $\pi_0F_{n+2}$ to
$\pi_1N$. Moreover $\bar\Gamma\pi_0(d_{n+2})=0$, so it is a cocycle in
$\Hom(\pi_0(F_\bullet),\pi_1N)$ and we define
$$
d_{(2)}[c]=[\bar\Gamma]\in\Ext^{n+2}_{\pi_0B}(\pi_0M,\pi_1N).
$$
\end{Definition}
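The properties of $\Gamma$ claimed in the above definition are checked
directly: one has
$$
\d\Gamma=\d c_1H_n-\d Hd_{n+1,0}=c_0\d_nH_n-c_0d_{n,0}d_{n+1,0}=0
$$
and
$$
\Gamma\d_{n+2}=c_1H_n\d_{n+2}-Hd_{n+1,0}\d_{n+2}=c_1d_{n,1}d_{n+1,1}-H\d_{n+1}d_{n+1,1}=0,
$$
using $\d c_1=c_0\d_n$, $\d H=c_0d_{n,0}$, $\d_nH_n=d_{n,0}d_{n+1,0}$,
$H_n\d_{n+2}=d_{n,1}d_{n+1,1}$, $d_{n+1,0}\d_{n+2}=\d_{n+1}d_{n+1,1}$ and
$H\d_{n+1}=c_1d_{n,1}$.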
\begin{Definition}\label{secodef}
Let $M$ and $N$ be $B$-modules with $\Sigma$-structure. Then also all
the $B$-modules $\Sigma^kM$, $\Sigma^kN$ have $\Sigma$-structures and we
get by \ref{secod} the secondary differential
$$
\xymatrix@!C=16em{
\Ext^n_{\pi_0B}(\pi_0M,\pi_0\Sigma^kN)\ar@{=}[d]\ar[r]^{d_{(2)}(M,\Sigma^kN)}&\Ext^{n+2}_{\pi_0B}(\pi_0M,\pi_1\Sigma^kN)\ar@{=}[d]\\
\Ext^n_{\pi_0B}(\pi_0M,\Sigma^k\pi_0N)\ar[r]^d&\Ext^{n+2}_{\pi_0B}(\pi_0M,\Sigma^{k+1}\pi_0N).
}
$$
In case the composite
$$
\Ext^{n-2}_{\pi_0B}(\pi_0M,\Sigma^{k-1}\pi_0N)\xto d\Ext^n_{\pi_0B}(\pi_0M,\Sigma^k\pi_0N)\xto d\Ext^{n+2}_{\pi_0B}(\pi_0M,\Sigma^{k+1}\pi_0N)
$$
vanishes we define the \emph{secondary $\Ext$-groups} to be the quotient
groups
$$
\Ext^n_B(M,N)^k:=\ker d/\im d.
$$
\end{Definition}
\begin{Theorem}\label{sigmacoinc}
For a $\Sigma$-algebra $B$, a $B$-module $M$ with $\Sigma$-structure and
any $B$-module $N$, the secondary differential $d_{(2)}$ in \ref{secod}
coincides with the secondary differential
$$
d_{(2)}:\Ext^n_{\mathbf a}(M,N)\to\Ext^{n+2}_{\mathbf a}(M,N)
$$
from \cite{BJ5}*{Section 4} as constructed for the ${\mathbb L}$-additive track
category $(B\mathrm{-}{\mathbf{Mod}})^\Sigma$ in \ref{add}, relative to the
subcategory ${\mathbf b}$ of free $B$-modules with ${\mathbf a}={\mathbf b}_\simeq$.
\end{Theorem}
\begin{proof}
We begin by recalling the appropriate notions from \cite{BJ5}. There
secondary chain complexes $A_\bullet=(A_n,d_n,\delta_n)_{n\in{\mathbb Z}}$ are
defined in an arbitrary additive track category ${\mathbf B}$. They consist of objects
$A_n$, morphisms $d_n:A_{n+1}\to A_n$ and tracks
$\delta_n:d_nd_{n+1}\then0_{A_{n+2},A_n}$, $n\in{\mathbb Z}$, such that the
equality of tracks
$$
\delta_nd_{n+2}=d_n\delta_{n+1}
$$
holds for all $n$. For an object $X$, an $X$-valued $n$-cycle in a
secondary chain complex $A_\bullet$ is defined to be a pair $(c,\gamma)$
consisting of a morphism $c:X\to A_n$ and a track
$\gamma:d_{n-1}c\then0_{X,A_{n-1}}$ such that the equality of tracks
$$
\delta_{n-2}c=d_{n-2}\gamma
$$
is satisfied. Such a cycle is called a \emph{boundary} if there exists a
map $b:X\to A_{n+1}$ and a track $\beta:c\then d_nb$ such that the equality
$$
\gamma=\delta_{n-1}b{\scriptstyle\Box} d_{n-1}\beta
$$
holds. Here the right hand side is given by track addition. A secondary chain
complex is called $X$-exact if every $X$-valued
cycle in it is a boundary. Similarly it is called \emph{${\mathbf b}$-exact}, if it
is $X$-exact for every object $X$ in ${\mathbf b}$, where ${\mathbf b}$ is a track
subcategory of ${\mathbf B}$. A secondary ${\mathbf b}$-resolution of an object $A$ is a
${\mathbf b}$-exact secondary chain complex $A_\bullet$ with $A_n=0$ for $n<-1$,
$A_{-1}=A$, $A_n\in{\mathbf b}$ for $n\ne-1$; the last differentials will be then
denoted $d_{-1}=\epsilon:A_0\to A$, $\delta_{-1}=\hat\epsilon:\epsilon
d_0\to0_{A_1,A}$ and the pair $(\epsilon,\hat\epsilon)$ will be called the
\emph{augmentation} of the resolution. It is clear that any secondary
chain complex $(A_\bullet,d_\bullet,\delta_\bullet)$ in ${\mathbf B}$ gives rise to
a chain complex $(A_\bullet,[d_\bullet])$, in the ordinary sense, in the
homotopy category ${\mathbf B}_\simeq$ of ${\mathbf B}$. Moreover if ${\mathbf B}$ is $\Sigma$-additive,
i.~e. there exists a functor $\Sigma$ and isomorphisms
$\Aut(0_{X,Y})\cong[\Sigma X,Y]$, natural in $X$, $Y$, then ${\mathbf b}$-exactness
of $(A_\bullet,d_\bullet,\delta_\bullet)$ implies ${\mathbf b}_\simeq$-exactness of
$(A_\bullet,[d_\bullet])$ in the sense that the chain complex of abelian
groups $[X,(A_\bullet,[d_\bullet])]$ will be exact for each $X\in{\mathbf b}$. In
\cite{BJ5}, the notion of ${\mathbf b}_\simeq$-relative derived functors has been
developed using such ${\mathbf b}_\simeq$-resolutions, which we also recall.
For an additive subcategory ${\mathbf a}={\mathbf b}_\simeq$ of the homotopy category ${\mathbf B}_\simeq$,
the ${\mathbf a}$-relative left derived functors ${\mathrm L}^{\mathbf a}_nF$, $n\ge0$, of a
functor $F:{\mathbf B}_\simeq\to{\mathscr A}$ from ${\mathbf B}_\simeq$ to an abelian category $\mathscr
A$ are defined by
$$
({\mathrm L}^{\mathbf a}_nF)A=H_n(F(A_\bullet)),
$$
where $A_\bullet$ is given by any ${\mathbf a}$-resolution of $A$. Similarly,
the ${\mathbf a}$-relative right derived functors of a contravariant functor
$F:{\mathbf B}_\simeq^{\mathrm{op}}\to{\mathscr A}$ are given by
$$
({\mathrm R}_{\mathbf a}^nF)A=H^n(F(A_\bullet)).
$$
In particular, for the contravariant functor $F=[\_,B]$ we get the ${\mathbf a}$-relative $\Ext$-groups
$$
\Ext^n_{\mathbf a}(A,B):=({\mathrm R}_{\mathbf a}^n[\_,B])A=H^n([A_\bullet,B])
$$
for any ${\mathbf a}$-exact resolution $A_\bullet$ of $A$.
Similarly, for the contravariant functor $\Aut(0_{\_,B})$ which assigns to
an object $A$ the group $\Aut(0_{A,B})$ of all tracks
$\alpha:0_{A,B}\then0_{A,B}$ from the zero map $A\to*\to B$ to itself, one
gets the groups of ${\mathbf a}$-derived automorphisms
$$
\Aut^n_{\mathbf a}(A,B):=({\mathrm R}^n_{\mathbf a}\Aut(0_{\_,B}))(A).
$$
It is proved in \cite{BJ5} that under mild conditions (existence of a
subset of ${\mathbf a}$ such that every object of ${\mathbf a}$ is a direct summand of a
direct sum of objects from that subset) every object has an
${\mathbf a}$-resolution, and that the resulting groups do not depend on the choice
of a resolution.
We next recall the construction of the secondary differential from
\cite{BJ5}. This is a map of the form
$$
d_{(2)}:\Ext^n_{\mathbf a}(A,B)\to\Aut^n_{\mathbf a}(0_{A,B});
$$
it is constructed from any secondary ${\mathbf b}$-resolution
$(A_\bullet,d_\bullet,\delta_\bullet,\epsilon,\hat\epsilon)$ of the object
$A$. Given an element $[c]\in\Ext^n_{\mathbf a}(A,B)$, one first represents it
by an $n$-cocycle in $[(A_\bullet,[d_\bullet]),B]$, i.~e. by a homotopy
class $[c]\in[A_n,B]$ with $[cd_n]=0$. One then chooses an actual
representative $c:A_n\to B$ of it in ${\mathbf B}$ and a track $\gamma:0\then
cd_n$. It can be shown that the composite track
$\Gamma=c\delta_n{\scriptstyle\Box}\gamma d_{n+1}\in\Aut(0_{A_{n+2},B})$ satisfies
$\Gamma d_{n+1}=0$, so it is an $(n+2)$-cocycle in the cochain complex
$\Aut(0_{(A_\bullet,[d_\bullet]),B})\cong[(\Sigma A_\bullet,[\Sigma
d_\bullet]),B]$, so determines a cohomology class
d_{(2)}([c])=[\Gamma]\in\Ext^{n+2}_{\mathbf a}(\Sigma A,B)$. It is proved in
\cite{BJ5}*{4.2} that the above construction indeed does not depend on the
choices made.
Now turning to our situation, it is straightforward to verify that a
secondary chain complex in the sense of \cite{BJ5} in the track category
$B$-${\mathbf{Mod}}$ is the same as a secondary chain complex in the sense of \ref{secs}, and
that the two notions of exactness coincide. In particular then the notions
of resolution are also equivalent.
The track subcategory ${\mathbf b}$ of free modules is generated by coproducts from
a single object, so ${\mathbf b}_\simeq$-resolutions of any $B$-module exist. In fact
it follows from \cite{BJ5}*{2.13} that any $B$-module has a secondary
${\mathbf b}$-resolution too.
Moreover there are natural isomorphisms
$$
\Aut(0_{M,N})\cong\Hom_{\pi_0B}(\pi_0M,\pi_1N).
$$
Indeed a track from the zero map to itself is a $B_0$-module homomorphism
$H:M_0\to N_1$ with $\d H=0$, $H\d=0$, so $H$ factors through $M_0\onto\pi_0M$
and over $\pi_1N\rightarrowtail N_1$.
Hence the proof is finished with the following lemma.
\end{proof}
\begin{Lemma}
For any $B$-modules $M$, $N$ there are isomorphisms
$$
\Ext^n_{\mathbf a}(M,N)\cong\Ext^n_{\pi_0B}(\pi_0M,\pi_0N)
$$
and
$$
({\mathrm R}^n_{\mathbf a}(\Hom_{\pi_0B}(\pi_0(\_),\pi_1N)))(M)\cong\Ext^n_{\pi_0B}(\pi_0M,\pi_1N).
$$
\end{Lemma}
\begin{proof}
By definition the groups $\Ext^*_{\mathbf a}(M,N)$, respectively
$({\mathrm R}^n_{\mathbf a}(\Hom_{B_0}(\pi_0(\_),\pi_1N)))(M)$, are cohomology groups of the
complex $[F_\bullet,N]$, resp. $\Hom_{\pi_0B}(\pi_0(F_\bullet),\pi_1N)$,
where $F_\bullet$ is some ${\mathbf a}$-resolution of $M$. We can choose for
$F_\bullet$ some secondary ${\mathbf b}$-resolution of $M$. Then $\pi_0F_\bullet$
is a free $\pi_0B$-resolution of $\pi_0M$, which makes evident the second
isomorphism. For the first, just note in addition that by \ref{freehom}
$[F_\bullet,N]$ is isomorphic to $\Hom_{B_0}(\pi_0(F_\bullet),\pi_0N)$.
\end{proof}
\chapter{The pair algebra ${\mathscr B}$ of secondary cohomology
operations}\label{secsteen}
The algebra ${\mathscr B}$ of secondary cohomology operations is a pair algebra with
$\Sigma$-structure which as a Hopf algebra was explicitly computed in
\cite{Baues}. In particular the multiplication map $A$ of ${\mathscr B}$ was
determined in \cite{Baues} by an algorithm. In this chapter we recall the
topological definition of the pair algebra ${\mathscr B}$ and the definition of the
multiplication map $A$. The main results of this work will provide methods for
the computation of $A$ or its dual multiplication map $A_*$.
We express in terms of $A$ the secondary $\Ext$-groups $\Ext_{\mathscr B}$ over the pair algebra ${\mathscr B}$.
This yields the computation of the E$_3$-term of the Adams spectral sequence
in the next chapter.
\section{The track category of spectra}
In this section we introduce the notion of stable maps and stable tracks
between spectra. This yields the track category of spectra. See also
\cite{Baues}*{section 2.5}.
\begin{Definition}\label{sp}
A \emph{spectrum} $X$ is a sequence of maps
$$
X_i\xto r\Omega X_{i+1},\ i\in{\mathbb Z}
$$
in the category ${\mathbf{Top}^*}$ of pointed spaces. It is called an \emph{$\Omega$-spectrum} if
$r$ is a homotopy equivalence for all $i$.
A \emph{stable homotopy class} $f:X\to Y$ between spectra is a sequence of
homotopy classes $f_i\in[X_i,Y_i]$ such that the squares
$$
\xymatrix{
X_i\ar[r]^{f_i}\ar[d]^r&Y_i\ar[d]^r\\
\Omega X_{i+1}\ar[r]^{\Omega f_{i+1}}&\Omega Y_{i+1}
}
$$
commute in ${\mathbf{Top}^*}_\simeq$. The category ${\mathbf{Spec}}$ consists of spectra and stable
homotopy classes as morphisms. Its full subcategory $\Omega$-${\mathbf{Spec}}$
consisting of $\Omega$-spectra is equivalent to the homotopy category
of spectra considered as a Quillen model category as in the work on symmetric spectra of
M. Hovey, B. Shipley and J. Smith \cite{Hoveyetal}. For us the classical notion of a spectrum
as above is sufficient.
A \emph{stable map} $f=(f_i,\tilde f_i)_i:X\to Y$ between spectra is a sequence
of diagrams in the track category $\hog{{\mathbf{Top}^*}}$ $(i\in{\mathbb Z})$
$$
\xymatrix{
X_i\ar[r]^{f_i}\ar[d]_r\drtwocell\omit{^{\tilde f_i\ }}&Y_i\ar[d]^r\\
\Omega X_{i+1}\ar[r]_{\Omega f_{i+1}}&\Omega Y_{i+1}.
}
$$
The obvious composition of such maps yields the category
$$
\hog{{\mathbf{Spec}}}_0.
$$
It is the underlying category of a track category $\hog{{\mathbf{Spec}}}$ with tracks
$(H:f\then g)\in\hog{{\mathbf{Spec}}}_1$ given by sequences
$$
H_i:f_i\then g_i
$$
of tracks in ${\mathbf{Top}^*}$ such that the diagrams
$$
\xymatrix@C=8em@!R=3em{
X_i\ar[r]_{f_i}\ar[d]_r\drtwocell\omit{^{\tilde
f_i\ }}\ruppertwocell^{g_i}{^H_i\ }
&Y_i\ar[d]^r\\
\Omega X_{i+1}\ar[r]^{\Omega f_{i+1}}\rlowertwocell_{\Omega
g_{i+1}}{_\Omega H_{i+1}\hskip4em}
&\Omega Y_{i+1}
}
$$
paste to $\tilde g_i$. This yields a well-defined track category
$\hog{{\mathbf{Spec}}}$. Moreover
$$
\hog{{\mathbf{Spec}}}_\simeq\cong{\mathbf{Spec}}
$$
is an isomorphism of categories. Let $\hog{X,Y}$ be the groupoid of
morphisms $X\to Y$ in $\hog{{\mathbf{Spec}}}_0$ and let $\hog{X,Y}_1^0$ be the set of
pairs $(f,H)$ where $f:X\to Y$ is a map and $H:f\then0$ is a track in
$\hog{{\mathbf{Spec}}}$, i.~e. a stable homotopy class of nullhomotopies for $f$.
For a spectrum $X$ let $\Sigma^kX$ be the \emph{shifted spectrum} with
$(\Sigma^kX)_n=X_{n+k}$ and the commutative diagram
$$
\xymatrix{
(\Sigma^kX)_n\ar@{=}[d]\ar[r]^-r&\Omega(\Sigma^kX)_{n+1}\ar@{=}[d]\\
X_{n+k}\ar[r]^-r&\Omega(X_{n+k+1})
}
$$
defining $r$ for $\Sigma^kX$. A map $f:Y\to\Sigma^kX$ is also called a map
\emph{of degree $k$} from $Y$ to $X$.
\end{Definition}
\section{The pair algebra ${\mathscr B}$ and secondary cohomology of spectra as a
${\mathscr B}$-module}
The secondary cohomology of a space was introduced in \cite{Baues}*{section
6.3}. We here consider the corresponding notion of secondary cohomology of
a spectrum.
Let ${\mathbb F}$ be a prime field ${\mathbb F}={\mathbb Z}/p{\mathbb Z}$ and let $Z$ denote the Eilenberg-Mac
Lane spectrum with
$$
Z^n=K({\mathbb F},n)
$$
chosen as in \cite{Baues}. Here $Z^n$ is a topological ${\mathbb F}$-vector space
and the homotopy equivalence $Z^n\to\Omega Z^{n+1}$ is ${\mathbb F}$-linear. This
shows that for a spectrum $X$ the sets $\hog{X,\Sigma^kZ}_0$ and
$\hog{X,\Sigma^kZ}_1^0$, of stable maps and stable 0-tracks respectively, are
${\mathbb F}$-vector spaces.
We now recall the definition of the pair algebra ${\mathscr B}=(\d:{\mathscr B}_1\to{\mathscr B}_0)$
of secondary cohomology operations from \cite{Baues}. Let ${\mathbb G}={\mathbb Z}/p^2{\mathbb Z}$ and let
$$
{\mathscr B}_0=T_{\mathbb G}(E_{\mathscr A})
$$
be the ${\mathbb G}$-tensor algebra generated by the subset
$$
E_{\mathscr A}=
\begin{cases}
\set{\Sq^1,\Sq^2,...}&\textrm{for $p=2$},\\
\set{{\mathrm P}^1,{\mathrm P}^2,...}\cup\set{\beta,\beta{\mathrm P}^1,\beta{\mathrm P}^2,...}&\textrm{for odd
$p$}
\end{cases}
$$
of the mod $p$ Steenrod algebra ${\mathscr A}$. We define ${\mathscr B}_1$ by the
pullback diagram of graded abelian groups
\begin{equation}\label{defb1}
\begin{aligned}
\xymatrix{
&\Sigma{\mathscr A}\ar@{ >->}[d]\\
{\mathscr B}_1\ar[r]\ar[d]_\d\ar@{}[dr]|<{\Bigg{\lrcorner}}&\hog{Z,\Sigma^*Z}_1^0\ar[d]^\d\\
{\mathscr B}_0\ar[r]^-s&\hog{Z,\Sigma^*Z}_0\ar@{->>}[d]\\
&{\mathscr A}.
}
\end{aligned}
\end{equation}
in which the right hand column is an exact sequence. Here we choose for
$\alpha\in E_{\mathscr A}$ a stable map $s(\alpha):Z\to\Sigma^{|\alpha|}Z$
representing $\alpha$ and we define $s$ to be the ${\mathbb G}$-linear map given on
monomials $a_1\cdots a_n$ in the free monoid Mon$(E_{\mathscr A})$ generated by
$E_{\mathscr A}$ by the composites
$$
s(a_1\cdots a_n)=s(a_1)\cdots s(a_n).
$$
It is proved in \cite{Baues}*{5.2.3} that $s$ defines a pseudofunctor, that is,
there is a well-defined track
$$
\Gamma:s(a\cdot b)\then s(a)\circ s(b)
$$
for $a,b\in{\mathscr B}_0$ such that for any $a$, $b$, $c$ pasting of tracks in the
diagram
$$
\includegraphics{lens.eps}
$$
yields the identity track. Now ${\mathscr B}_1$ is a ${\mathscr B}_0$-${\mathscr B}_0$-bimodule by
defining
$$
a(b,z)=(a\cdot b,a\bullet z)
$$
with $a\bullet z$ given by pasting $s(a)z$ and $\Gamma$. Similarly
$$
(b,z)a=(b\cdot a,z\bullet a)
$$
where $z\bullet a$ is obtained by pasting $zs(a)$ and $\Gamma$. Then it is
shown in \cite{Baues} that ${\mathscr B}=(\d:{\mathscr B}_1\to{\mathscr B}_0)$ is a well-defined pair
algebra with $\pi_0{\mathscr B}={\mathscr A}$ and $\Sigma$-structure $\pi_1{\mathscr B}=\Sigma{\mathscr A}$.
For a spectrum $X$ let
$$
{\mathscr H}(X)_0={\mathscr B}_0\hog{X,\Sigma^*Z}_0
$$
be the free ${\mathscr B}_0$-module generated by the graded set
$\hog{X,\Sigma^*Z}_0$. We define ${\mathscr H}(X)_1$ by the pullback diagram
$$
\xymatrix{
&\Sigma H^*X\ar@{ >->}[d]\\
{\mathscr H}(X)_1\ar[r]\ar[d]_\d\ar@{}[dr]|<{\Bigg{\lrcorner}}&\hog{X,\Sigma^*Z}_1^0\ar[d]^\d\\
{\mathscr H}(X)_0\ar[r]^-s&\hog{X,\Sigma^*Z}_0\ar@{->>}[d]\\
&H^*X
}
$$
where $s$ is the ${\mathbb G}$-linear map which is the identity on generators and is
defined on words $a_1\cdots a_n\cdot u$ by the composite $s(a_1)\cdots
s(a_n)s(u)$ for $a_i$ as above and $u\in\hog{X,\Sigma^*Z}_0$. Again $s$ is
a pseudofunctor and with actions $\bullet$ defined as above we see that the
graded pair module
$$
{\mathscr H}(X)=\left({\mathscr H}(X)_1\xto\d{\mathscr H}(X)_0\right)
$$
is a ${\mathscr B}$-module. We call ${\mathscr H}(X)$ the \emph{secondary cohomology} of the
spectrum $X$. Of course ${\mathscr H}(X)$ has a $\Sigma$-structure in the sense of
\ref{sigmas} above.
\begin{Example}
Let ${\mathbb G}^\Sigma$ be the ${\mathscr B}$-module given by the augmentation ${\mathscr B}\to{\mathbb G}^
\Sigma$ in \cite{Baues}. Recall that ${\mathbb G}^\Sigma$ is the pair
$$
{\mathbb G}^\Sigma=\left({\mathbb F}\oplus\Sigma{\mathbb F}\xto\d{\mathbb G}\right)
$$
with $\restr\d{\mathbb F}$ the inclusion and $\restr\d{\Sigma{\mathbb F}}=0$. Then the
sphere spectrum $S^0$ admits a weak equivalence of ${\mathscr B}$-modules
$$
{\mathscr H}(S^0)\xto\sim{\mathbb G}^\Sigma.
$$
Compare \cite{Baues}*{12.1.5}.
\end{Example}
\chapter{Computation of the ${\mathrm E}_3$-term of the Adams spectral
sequence as a secondary $\Ext$-group}\label{E3}
We show that the E$_3$-term of the Adams spectral sequence (computing stable
maps in $\set{Y,X}^*_p$) is given by the secondary $\Ext$-groups
$$
{\mathrm E}_3(Y,X)=\Ext_{\mathscr B}({\mathscr H} X,{\mathscr H} Y).
$$
Here ${\mathscr H} X$ is the secondary cohomology of the spectrum $X$ which is the
${\mathscr B}$-module ${\mathbb G}^\Sigma$ if $X$ is the sphere spectrum $S^0$. This leads to
an algorithm for the computation of the group
$$
{\mathrm E}_3(S^0,S^0)=\Ext_{\mathscr B}({\mathbb G}^\Sigma,{\mathbb G}^\Sigma)
$$
which is a new explicit approximation of stable homotopy groups of spheres
improving the Adams approximation
$$
{\mathrm E}_2(S^0,S^0)=\Ext_{\mathscr A}({\mathbb F},{\mathbb F}).
$$
An implementation of our algorithm has by now computed ${\mathrm E}_3(S^0,S^0)$
up to degree 40. In this range our results confirm the known results in the
literature, see for example the book of Ravenel \cite{Ravenel}.
\section{The ${\mathrm E}_3$-term of the Adams spectral sequence}
We are now ready to formulate the algebraic equivalent of the E$_3$-term of
the Adams spectral sequence. Let $X$ be a spectrum of finite type and $Y$ a
finite dimensional spectrum. Then for each prime $p$ there is a spectral
sequence ${\mathrm E}_*={\mathrm E}_*(Y,X)$ with
\begin{align*}
{\mathrm E}_*&\Longrightarrow[Y,\Sigma^*X]_p\\
{\mathrm E}_2&=\Ext_{\mathscr A}(H^*X,H^*Y).
\end{align*}
\begin{Theorem}\label{e3der}
The ${\mathrm E}_3$-term ${\mathrm E}_3={\mathrm E}_3(Y,X)$ of the Adams spectral sequence is given by
the secondary $\Ext$ group defined in \ref{secodef}
$$
{\mathrm E}_3=\Ext_{\mathscr B}({\mathscr H} X,{\mathscr H} Y).
$$
\end{Theorem}
\begin{Corollary}
If $X$ and $Y$ are both the sphere spectrum we get
$$
{\mathrm E}_3(S^0,S^0)=\Ext_{\mathscr B}({\mathbb G}^\Sigma,{\mathbb G}^\Sigma).
$$
\end{Corollary}
Since the pair algebra ${\mathscr B}$ is completely computed in \cite{Baues}, we see
that ${\mathrm E}_3(S^0,S^0)$ is algebraically determined. This leads to the
algorithm below computing ${\mathrm E}_3(S^0,S^0)$.
The proof of \ref{e3der} is based on the following result in \cite{Baues}.
Consider the track categories
\begin{align*}
{\mathbf b}&\subset\hog{{\mathbf{Spec}}}\\
{\mathbf b}'&\subset({\mathscr B}\mathrm{-}{\mathbf{Mod}})^\Sigma
\end{align*}
where $\hog{{\mathbf{Spec}}}$ is the track category of spectra in \ref{sp} and
$({\mathscr B}\mathrm{-}{\mathbf{Mod}})^\Sigma$ is the track category of ${\mathscr B}$-modules with
$\Sigma$-structure in \ref{sigmas} with the pair algebra ${\mathscr B}$ defined by
\eqref{defb1}. Let ${\mathbf b}$ be the full track subcategory of $\hog{{\mathbf{Spec}}}$
consisting of finite products of shifted Eilenberg-Mac Lane
spectra $\Sigma^kZ^*$. Moreover let ${\mathbf b}'$ be the full track subcategory of
$({\mathscr B}\mathrm{-}{\mathbf{Mod}})^\Sigma$ consisting of finitely generated free
${\mathscr B}$-modules. As in \cite{BJ5}*{4.3} we obtain for spectra $X$, $Y$ in
\ref{e3der} the track categories
\begin{align*}
\set{Y,X}{\mathbf b}&\subset\hog{{\mathbf{Spec}}}\\
{\mathbf b}'\set{{\mathscr H} X,{\mathscr H} Y}&\subset({\mathscr B}\mathrm{-}{\mathbf{Mod}})^\Sigma
\end{align*}
with $\set{Y,X}{\mathbf b}$ obtained by adding to ${\mathbf b}$ the objects $X$, $Y$ and all
morphisms and tracks from $\hog{X,Z}$, $\hog{Y,Z}$ for all objects $Z$ in
${\mathbf b}$. It is proved in \cite{Baues}*{5.5.6} that the following result holds
which shows that we can apply \cite{BJ5}*{5.1}.
\begin{Theocite}\label{strictif}
There is a strict track equivalence
$$
(\set{Y,X}{\mathbf b})^{\mathrm{op}}\xto\sim{\mathbf b}'\set{{\mathscr H} X,{\mathscr H} Y}.
$$
\end{Theocite}
\qed
\begin{proof}[Proof of \ref{e3der}]
By the main result 7.3 in \cite{BJ5} we have a description of the
differential $d_{(2)}$ in the Adams spectral sequence by the following
commutative diagram
$$
\xymatrix{
\Ext^n_{{\mathbf a}^{\mathrm{op}}}(X,Y)^m\ar[r]^{d_{(2)}}\ar[d]^\cong
&\Ext^{n+2}_{{\mathbf a}^{\mathrm{op}}}(X,Y)^{m+1}\ar[d]^\cong\\
\Ext_{\mathscr A}^n(H^*X,H^*Y)^m\ar[r]^{d_{(2)}}
&\Ext_{\mathscr A}^{n+2}(H^*X,H^*Y)^{m+1}
}
$$
where ${\mathbf a}={\mathbf b}_\simeq$. On the other hand the differential
$d_{(2)}$ defining the secondary $\Ext$-group $\Ext_{\mathscr B}({\mathscr H} X,{\mathscr H} Y)$ is by \ref{sigmacoinc}
given by the commutative diagram
$$
\xymatrix{
\Ext^n_{{\mathbf a}'}({\mathscr H} X,{\mathscr H} Y)^m\ar[r]\ar@{=}[d]
&\Ext^{n+2}_{{\mathbf a}'}({\mathscr H} X,{\mathscr H} Y)^{m+1}\ar@{=}[d]\\
\Ext_{\mathscr A}^n(H^*X,H^*Y)^m\ar[r]
&\Ext_{\mathscr A}^{n+2}(H^*X,H^*Y)^{m+1}
}
$$
where ${\mathbf a}'={\mathbf b}'_\simeq$. Now \cite{BJ5}*{5.1} shows by \ref{strictif} that the top rows of these diagrams coincide.
\end{proof}
\section{The algorithm for the computation of $d_{(2)}$ on
$\Ext_{\mathscr A}({\mathbb F},{\mathbb F})$ in terms of the multiplication maps}\label{d2}
Suppose now given some projective resolution of the left ${\mathscr A}$-module ${\mathbb F}$. For definiteness, we will work with the minimal
resolution
\begin{equation}\label{minireso}
{\mathbb F}\ot{\mathscr A}\brk{g_0^0}\ot{\mathscr A}\brk{g_1^{2^n}\mid
n\ge0}\ot{\mathscr A}\brk{g_2^{2^i+2^j}\mid
|i-j|\ne1}\ot...,
\end{equation}
where $g_m^d$, $d\ge m$, is a generator of the $m$-th resolving module in degree $d$.
Sometimes there is more than one generator with the same $m$ and $d$, in which
case the further ones will be denoted by $'g_m^d$, $''g_m^d$, \dots.
These generators and values of the differential on them can be computed
effectively; for example, $d(g_1^{2^n})=\Sq^{2^n}g_0^0$ and
$d(g_m^m)=\Sq^1g_{m-1}^{m-1}$; moreover, e.~g., an
algorithm from \cite{Bruner} gives
$$
\alignbox{
d(g_2^4)&=\Sq^3g_1^1+\Sq^2g_1^2\\
d(g_2^5)&=\Sq^4g_1^1+\Sq^2\Sq^1g_1^2+\Sq^1g_1^4\\
d(g_2^8)&=\Sq^6g_1^2+(\Sq^4+\Sq^3\Sq^1)g_1^4\\
d(g_2^9)&=\Sq^8g_1^1+(\Sq^5+\Sq^4\Sq^1)g_1^4+\Sq^1g_1^8\\
d(g_2^{10})&=(\Sq^8+\Sq^5\Sq^2\Sq^1)g_1^2+(\Sq^5\Sq^1+\Sq^4\Sq^2)g_1^4+\Sq^2g_1^8\\
d(g_2^{16})&=(\Sq^{12}+\Sq^9\Sq^2\Sq^1+\Sq^8\Sq^3\Sq^1)g_1^4+(\Sq^8+\Sq^7\Sq^1+\Sq^6\Sq^2)g_1^8\\
\cdots,\\
d(g_3^6)&=\Sq^4g_2^2+\Sq^2g_2^4+\Sq^1g_2^5\\
d(g_3^{10})&=\Sq^8g_2^2+(\Sq^5+\Sq^4\Sq^1)g_2^5+\Sq^1g_2^9\\
d(g_3^{11})&=(\Sq^7+\Sq^4\Sq^2\Sq^1)g_2^4+\Sq^6g_2^5+\Sq^2\Sq^1g_2^8\\
d(g_3^{12})&=\Sq^8g_2^4+(\Sq^6\Sq^1+\Sq^5\Sq^2)g_2^5+(\Sq^4+\Sq^3\Sq^1)g_2^8+\Sq^3g_2^9+\Sq^2g_2^{10}\\
\cdots,}
$$
$$
\alignbox{
d(g_4^{11})&=\Sq^8g_3^3+(\Sq^5+\Sq^4\Sq^1)g_3^6+\Sq^1g_3^{10}\\
d(g_4^{13})&=\Sq^8\Sq^2g_3^3+(\Sq^7+\Sq^4\Sq^2\Sq^1)g_3^6+\Sq^2\Sq^1g_3^{10}+\Sq^2g_3^{11}\\
\cdots,\\
d(g_5^{14})&=\Sq^{10}g_4^4+\Sq^2\Sq^1g_4^{11}\\
d(g_5^{16})&=\Sq^{12}g_4^4+\Sq^4\Sq^1g_4^{11}+\Sq^3g_4^{13}\\
\cdots,
\\
d(g_6^{16})&=\Sq^{11}g_5^5+\Sq^2g_5^{14}\\
\cdots,
}
$$
etc.
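As a quick consistency check (added here for illustration, using only the Adem relation $\Sq^2\Sq^2=\Sq^3\Sq^1$), the composite $dd$ indeed vanishes in ${\mathscr A}$ on, say, $g_2^4$:

```latex
$$
\alignbox{
dd(g_2^4)&=\Sq^3d(g_1^1)+\Sq^2d(g_1^2)\\
&=\Sq^3\Sq^1g_0^0+\Sq^2\Sq^2g_0^0=(\Sq^3\Sq^1+\Sq^2\Sq^2)g_0^0=0.
}
$$
```

In ${\mathscr B}_0$, by contrast, the corresponding element is in general a nontrivial element of $R_{\mathscr B}$.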
Understanding the above formul\ae\ as matrices (i.~e. applying
$\chi$ degreewise to them), one sees that each such resolution gives rise to a sequence
of ${\mathscr B}$-module homomorphisms
\begin{equation}\label{precomplex}
{\mathbb G}^\Sigma\ot{\mathscr B}\brk{g_0^0}\ot{\mathscr B}\brk{g_1^{2^n}\mid
n\ge0}\ot{\mathscr B}\brk{g_2^{2^i+2^j}\mid
|i-j|\ne1}\ot...,
\end{equation}
which is far from being exact --- in fact even the composites of consecutive
maps are not zero. In more detail, one has commutative diagrams
$$
\xymatrix{
2{\mathbb G}\ar[d]
&R_{\mathscr B}^0g_0^0\ar[l]_{\epsilon_0}\ar[d]
&0\ar[l]\ar[d]
&...\ar[l]\\
{\mathbb G}
&{\mathscr B}_0^0g_0^0\ar[l]_{\epsilon_0}
&0\ar[l]
&...\ar[l]
}
$$
in degree 0,
$$
\xymatrix{
{\mathbb F}\ar[d]
&R_{\mathscr B}^1g_0^0\oplus{\mathscr A}^0g_0^0\ar[l]_-{(0,\epsilon)}\ar[d]
&R_{\mathscr B}^0g_1^1\ar[l]_-{\binom d0}\ar[d]
&0\ar[l]\ar[d]
&...\ar[l]\\
0
&{\mathscr B}_0^1g_0^0\ar[l]
&{\mathscr B}_0^0g_1^1\ar[l]_d
&0\ar[l]
&...\ar[l]
}
$$
in degree 1,
$$
\xymatrix{
0\ar[d]
&R_{\mathscr B}^2g_0^0\oplus{\mathscr A}^1g_0^0\ar[l]\ar[d]
&\left(R_{\mathscr B}^1g_1^1\oplus R_{\mathscr B}^0g_1^2\right)\oplus{\mathscr
A}^0g_1^1\ar[l]_-{\smat{d&0\\0&d}}\ar[d]
&R_{\mathscr B}^0g_2^2\ar[l]_-{\binom d0}\ar[d]
&0\ar[l]\ar[d]
&...\ar[l]\\
0
&{\mathscr B}_0^2g_0^0\ar[l]
&{\mathscr B}_0^1g_1^1\oplus{\mathscr B}_0^0g_1^2\ar[l]_-d
&{\mathscr B}_0^0g_2^2\ar[l]_-d
&0\ar[l]
&...\ar[l]
}
$$
in degree 2, ...
$$
\xymatrix{
0\ar[d]
&R_{\mathscr B}^ng_0^0\oplus{\mathscr A}^{n-1}g_0^0\ar[l]\ar[d]
&\bigoplus_{2^i\le n}R_{\mathscr B}^{n-2^i}g_1^{2^i}
\oplus\bigoplus_{2^i\le n-1}{\mathscr A}^{n-1-2^i}g_1^{2^i}\ar[l]_-{\smat{d&0\\0&d}}\ar[d]
&...\ar[l]\\
0
&{\mathscr B}_0^ng_0^0\ar[l]
&\bigoplus_{2^i\le n}{\mathscr B}_0^{n-2^i}g_1^{2^i}\ar[l]_-d
&...\ar[l]
}
$$
in degree $n$, etc.
Our task is then to complete these diagrams into an exact secondary complex
via certain (degree preserving) maps
$$
\delta_m=\binom{\delta^R_m}{\delta^{\mathscr A}_m}:{\mathscr B}_0\brk{g_{m+2}^n\mid n}\to(R_{\mathscr B}\oplus\Sigma{\mathscr A})\brk{g_m^n\mid
n}.
$$
Now for these maps to form a secondary complex, according to
\ref{secs}.1 one must have $\d\delta=d_0d_0$,
$\delta\d=d_1d_1$, and $d_1\delta=\delta d_0$. One sees easily that
these equations together with the requirement that $\delta$ be a left
${\mathscr B}_0$-module homomorphism are equivalent to
\begin{align}
\delta^R&=dd,\\
\label{deltaeqs}\delta^{\mathscr A}(bg)&=\pi(b)\delta^{\mathscr A}(g)+A(\pi(b),dd(g)),\\
d\delta^{\mathscr A}&=\delta^{\mathscr A} d,
\end{align}
for $b\in{\mathscr B}_0$, $g$ one of the $g_m^n$, and $A(a,rg):=A(a,r)g$ for
$a\in{\mathscr A}$, $r\in R_{\mathscr B}$. Hence $\delta$ is completely determined by the elements
\begin{equation}\label{deltagen}
\delta^{\mathscr A}_m(g_{m+2}^n)\in\bigoplus_k{\mathscr A}^{n-k-1}\brk{g_m^k}
\end{equation}
which, to form a secondary complex, are only required to satisfy
$$
d\delta_m^{\mathscr A}(g_{m+2}^n)=\delta_{m-1}^{\mathscr A} d(g_{m+2}^n),
$$
where on the right $\delta_{m-1}^{\mathscr A}$ is extended to
${\mathscr B}_0\brk{g_{m+1}^*}$ via \eqref{deltaeqs}. In addition secondary
exactness must hold, which by \ref{secs} means that the (ordinary) complex
$$
\ot
{\mathscr B}_0\brk{g_{m-1}^*}\oplus(R_{\mathscr B}\oplus\Sigma{\mathscr A})\brk{g_{m-2}^*}
\ot
{\mathscr B}_0\brk{g_m^*}\oplus(R_{\mathscr B}\oplus\Sigma{\mathscr A})\brk{g_{m-1}^*}
\ot
{\mathscr B}_0\brk{g_{m+1}^*}\oplus(R_{\mathscr B}\oplus\Sigma{\mathscr A})\brk{g_m^*}
\ot
$$
with differentials
$$
\smat{d_{m+1}&i_{m+1}&0\\d_md_{m+1}&d_m&0\\\delta_m^{\mathscr A}&0&d_m}:
{\mathscr B}_0\brk{g_{m+2}^*}\oplus R_{\mathscr B}\brk{g_{m+1}^*}\oplus\Sigma{\mathscr A}\brk{g_{m+1}^*}
\to
{\mathscr B}_0\brk{g_{m+1}^*}\oplus R_{\mathscr B}\brk{g_m^*}\oplus\Sigma{\mathscr A}\brk{g_m^*}
$$
is exact. Then straightforward checking shows that one can eliminate $R_{\mathscr B}$ from this complex altogether, so that its exactness is equivalent to the exactness of a smaller complex
$$
\ot
{\mathscr B}_0\brk{g_{m-1}^*}\oplus\Sigma{\mathscr A}\brk{g_{m-2}^*}
\ot
{\mathscr B}_0\brk{g_m^*}\oplus\Sigma{\mathscr A}\brk{g_{m-1}^*}
\ot
{\mathscr B}_0\brk{g_{m+1}^*}\oplus\Sigma{\mathscr A}\brk{g_m^*}
\ot
$$
with differentials
$$
\smat{d_{m+1}&0\\\delta_m^{\mathscr A}&d_m}:
{\mathscr B}_0\brk{g_{m+2}^*}\oplus\Sigma{\mathscr A}\brk{g_{m+1}^*}
\to
{\mathscr B}_0\brk{g_{m+1}^*}\oplus\Sigma{\mathscr A}\brk{g_m^*}.
$$
Note also that by \eqref{deltaeqs} $\delta^{\mathscr A}$ factors through
$\pi$ to give
$$
\bar\delta_m:{\mathscr A}\brk{g^*_{m+2}}\to\Sigma{\mathscr A}\brk{g^*_m}.
$$
It follows that secondary exactness of the resulting complex is equivalent
to exactness of the \emph{mapping cone} of this $\bar\delta$, i.~e. to the
requirement that $\bar\delta$ is a quasiisomorphism. On the other hand, the
complex $({\mathscr A}\brk{g^*_*},d_*)$ is acyclic by construction, so any
of its self-maps is a quasiisomorphism. We thus obtain
\begin{Theorem}\label{delta}
Completions of the diagram \eqref{precomplex}
to an exact secondary complex are in one-to-one correspondence with
maps $\delta_m:{\mathscr A}\brk{g_{m+2}^*}\to\Sigma{\mathscr A}\brk{g^*_m}$
satisfying
\begin{equation}\label{maineq}
d\delta g=\delta dg,
\end{equation}
with $\delta(ag)$ for $a\in{\mathscr A}$ defined by
$$
\delta(ag)=a\delta(g)+A(a,ddg)
$$
where $A(a,rg)$ for $r\in R_{\mathscr B}$ is interpreted as $A(a,r)g$.
\end{Theorem}
\qed
Later in chapter \ref{diff} we will need to dualize the map $\delta$. For this purpose it is more convenient to reformulate the conditions in \ref{delta} above in terms of commutative diagrams.
Let
$$
W_p=\bigoplus_{q\ge0}W_p^q
$$
denote the free graded ${\mathbb G}$-module spanned by the generators $g_p^q$, so that we can write
$$
{\mathscr B}_0\brk{g_p^q\ \mid\ q\ge0}={\mathscr B}_0\ox W_p.
$$
The differential in the ${\mathscr B}$-lifting of \eqref{minireso}, being ${\mathscr B}$-equivariant, is then given by the composite
$$
{\mathscr B}_0\ox W_{p+1}\xto{1\ox d}{\mathscr B}_0\ox{\mathscr B}_0\ox W_p\xto{m\ox1}{\mathscr B}_0\ox W_p,
$$
where
$$
d:W_{p+1}\to{\mathscr B}_0\ox W_p
$$
is the restriction of this differential to the generators. As a linear
operator, this $d$ is given by the same matrix as the one giving the operator
of the same name in \eqref{minireso}, i.~e. it is obtained by applying the map
$\chi$ componentwise to the latter.
Moreover let us denote
$$
V_p=W_p\ox{\mathbb F},
$$
so that similarly to the above the differential of \eqref{minireso} itself can be given by the same formul\ae, with ${\mathscr A}$ in place of ${\mathscr B}_0$ and $V_p$ in place of $W_p$. Then by \ref{delta} the whole map $\delta$ is determined by its restriction
$$
\delta^{\mathscr A}:V_{p+2}\to\Sigma{\mathscr A}\ox V_p
$$
(cf. \eqref{deltagen}). Indeed \ref{delta} implies that $\delta$ is given by the sum of the two composites in the diagram
\begin{equation}\label{deltasum}
\alignbox{
\xymatrix{
&{\mathscr A}\ox\Sigma{\mathscr A}\ox V_p\ar[dr]^{m\ox1}\\
{\mathscr A}\ox V_{p+2}\ar[ur]^{1\ox\delta^{\mathscr A}}\ar[dr]_{1\ox\ph}&&\Sigma{\mathscr A}\ox V_p.\\
&{\mathscr A}\ox R_{\mathscr B}\ox V_p\ar[ur]_{A\ox1}
}
}
\end{equation}
Here we set $\ph=dd\ox{\mathbb F}$, where the map $dd$ is the composite
$$
W_{p+2}\xto d{\mathscr B}_0\ox W_{p+1}\xto{1\ox d}{\mathscr B}_0\ox{\mathscr B}_0\ox W_p\xto{m\ox1}{\mathscr B}_0\ox W_p
$$
whose image, as we know, lies in
$$
R_{\mathscr B}\ox W_p\subset{\mathscr B}_0\ox W_p.
$$
In other words, there is a commutative diagram
\begin{equation}\label{dd}
\alignbox{
\xymatrix{
&{\mathscr B}_0\ox W_{p+1}\ar[rr]^{1\ox d}&&{\mathscr B}_0\ox{\mathscr B}_0\ox W_p\ar[dr]^{m\ox1}\\
W_{p+2}\ar@{->>}[d]\ar[ur]^d\ar@{-->}[drr]_{dd}&&&&{\mathscr B}_0\ox W_p\\
V_{p+2}\ar@{-->}[drr]_\ph&&R_{\mathscr B}\ox W_p\ar@{ >->}[urr]\ar@{->>}[d]\\
&&R_{\mathscr B}\ox V_p
}
}
\end{equation}
Then in terms of the above diagrams of ${\mathbb F}$-vector spaces, the condition of \ref{delta} can be expressed as follows:
\begin{Corollary}
Completions of \eqref{precomplex} to a secondary resolution are in one-to-one correspondence with sequences of maps
$$
\delta^{\mathscr A}_p:V_{p+2}\to\Sigma{\mathscr A}\ox V_p,\ \ p\ge0
$$
making the diagrams below commute, with $\ph$ defined by \eqref{dd}.
\begin{equation}\label{diadelta}
\alignbox{
\xymatrix{
&\Sigma{\mathscr A}\ox V_{p+1}\ar[r]^-{1\ox d}&\Sigma{\mathscr A}\ox{\mathscr A}\ox V_p\ar[dr]^{m\ox1}\\
V_{p+3}\ar[ur]^{\delta^{\mathscr A}_{p+1}}\ar[dr]_d&&{\mathscr A}\ox R_{\mathscr B}\ox V_p\ar[r]^{A\ox1}&\Sigma{\mathscr A}\ox V_p\\
&{\mathscr A}\ox V_{p+2}\ar[ur]^{1\ox\ph}\ar@{}[urr]|{+}\ar[r]_-{1\ox\delta^{\mathscr A}_p}&{\mathscr A}\ox\Sigma{\mathscr A}\ox V_p\ar[ur]_{m\ox1}
}
}
\end{equation}
\end{Corollary}\qed
We can use this to construct the secondary resolution inductively.
Just start by introducing values of $\delta$ on the generators as
expressions with indeterminate coefficients; the equation \eqref{maineq} will
impose linear conditions on these coefficients. These are then solved
degree by degree. For example, in degree 2 one may have
$$
\delta(g_2^2)=\eta_2^2(\Sq^1)\Sq^1g_0^0
$$
for some $\eta_2^2(\Sq^1)\in{\mathbb F}$. Similarly in degree 3 one may have
$$
\delta(g_3^3)=\eta_3^3(\Sq^1)\Sq^1g_1^1+\eta_3^3(1)g_1^2.
$$
Then one will get
$$
d\delta(g_3^3)=\eta_3^3(\Sq^1)\Sq^1d(g_1^1)+\eta_3^3(1)d(g_1^2)=\eta_3^3(\Sq^1)\Sq^1\Sq^1g_0^0+\eta_3^3(1)\Sq^2g_0^0=\eta_3^3(1)\Sq^2g_0^0
$$
and
\begin{multline*}
\delta d(g_3^3)=\delta(\Sq^1g_2^2)\\
=\Sq^1\delta(g_2^2)+A(\Sq^1,dd(g_2^2))=\eta_2^2(\Sq^1)\Sq^1\Sq^1g_0^0+A(\Sq^1,d(\Sq^1g_1^1))=A(\Sq^1,\Sq^1\Sq^1g_0^0)=0;
\end{multline*}
thus \eqref{maineq} forces $\eta^3_3(1)=0$.
Similarly one puts $\delta(g_m^d)=\sum_{m-2\le d'\le d-1}\sum_a\eta_m^d(a)ag_{m-2}^{d'}$,
with $a$ running over a basis in ${\mathscr A}^{d-1-d'}$, and then substituting this in
\eqref{maineq} gives linear equations on the numbers $\eta_m^d(a)$. Solving
these equations and choosing the remaining $\eta$'s arbitrarily then gives
values of the differential $\delta$ in the secondary resolution.
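The procedure just described reduces, in each degree, to solving a system of linear equations over ${\mathbb F}={\mathbb F}_2$. The following is a minimal illustrative sketch of that linear-algebra step only; the matrix and right-hand side below are invented toy data, not an actual system arising from \eqref{maineq}.

```python
# Toy illustration of the degree-by-degree step: solve A x = b over GF(2),
# where x collects the unknown coefficients eta and each row of A encodes
# one linear condition imposed by d*delta = delta*d.
# (The system below is made up for illustration; it is NOT resolution data.)

def solve_gf2(A, b):
    """Return one solution x of A x = b over GF(2), or None if inconsistent.
    Free variables (eta's left undetermined by the conditions) are set to 0,
    mirroring an arbitrary choice of the remaining coefficients."""
    rows, cols = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(rows)]  # augmented matrix
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    # inconsistent if an all-zero row has nonzero right-hand side
    for i in range(r, rows):
        if M[i][cols]:
            return None
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = M[i][cols]
    return x

# three unknown coefficients, two linear conditions:
A = [[1, 1, 0],
     [0, 1, 1]]
b = [1, 0]
x = solve_gf2(A, b)
assert x is not None
assert all(sum(a * xi for a, xi in zip(row, x)) % 2 == rhs
           for row, rhs in zip(A, b))
```

In the actual computation the unknowns in a fixed degree are the numbers $\eta_m^d(a)$ and the rows come from comparing coefficients of basis monomials of ${\mathscr A}$ in \eqref{maineq}.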
Then finally to obtain the secondary differential
$$
d_{(2)}:\Ext^n_{\mathscr A}({\mathbb F},{\mathbb F})^m\to\Ext^{n+2}_{\mathscr A}({\mathbb F},{\mathbb F})^{m+1}
$$
from this $\delta$, one just applies the functor $\Hom_{\mathscr A}(\_,{\mathbb F})$ to the
initial minimal resolution and calculates the map induced by $\delta$ on
cohomology of the resulting cochain complex, i.~e. on $\Ext^*_{\mathscr A}({\mathbb F},{\mathbb F})$.
In fact since \eqref{minireso} is a minimal resolution, the value of
$\Hom_{\mathscr A}(\_,{\mathbb F})$ on it coincides with its own cohomology and is the
${\mathbb F}$-vector space of those linear maps ${\mathscr A}\brk{g_*^*}\to{\mathbb F}$ which vanish
on all elements of the form $ag_*^*$ with $a$ of positive degree.
Let us then identify $\Ext^*_{\mathscr A}({\mathbb F},{\mathbb F})$ with this space and choose a basis
in it consisting of elements $\hat g_m^d$ defined as the maps sending the
generator $g_m^d$ to 1 and all other generators to 0. One then has
$$
(d_{(2)}(\hat g_m^d))(g_{m'}^{d'})=\hat g_m^d\delta(g_{m'}^{d'}).
$$
The right hand side is nonzero precisely when $g_m^d$ appears in
$\delta(g_{m'}^{d'})$ with coefficient 1, i.~e. one has
\begin{equation}\label{d2gen}
d_{(2)}(\hat g_m^d)=\sum_{\textrm{$g_m^d$ appears in
$\delta(g_{m+2}^{d+1})$}}\hat g_{m+2}^{d+1}.
\end{equation}
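In terms of data, \eqref{d2gen} is a simple lookup in the stored table of values of $\delta$. A hypothetical sketch follows; the tuple encoding of generators and the sample entries are illustrative inventions (the first entry is modeled loosely on $\delta(g_3^{17})$, with most terms omitted).

```python
# delta is stored as: generator -> list of (operation, target generator)
# terms, where operation "1" denotes the identity of A (coefficient 1).
# Generators g_m^d are encoded as tuples ("g", m, d).  Toy data only:
delta = {
    ("g", 3, 17): [("1", ("g", 1, 16)), ("Sq12", ("g", 1, 4))],
    ("g", 3, 18): [("Sq16", ("g", 1, 1))],
}

def d2(generator):
    """List the generators g_{m+2}^{d+1} in whose delta-value the given
    g_m^d appears with coefficient 1 -- these are the summands of
    d_(2) applied to the dual basis element."""
    return [src for src, terms in delta.items()
            if ("1", generator) in terms]

# g_1^16 appears with coefficient 1 only in delta(g_3^17):
assert d2(("g", 1, 16)) == [("g", 3, 17)]
# g_1^4 appears only with the operation Sq^12, so it contributes nothing:
assert d2(("g", 1, 4)) == []
```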
For example, looking at the table of values of $\delta$ below we see that
the first instance of a $g_m^d$ appearing with coefficient 1 in a value of
$\delta$ on a generator is
$$
\delta(g_3^{17})=g_1^{16}+ \Sq^{12} g_1^4+\Sq^{10}\Sq^4
g_1^2+(\Sq^9\Sq^4\Sq^2+\Sq^{10}\Sq^5+\Sq^{11}\Sq^4)g_1^1.
$$
This means
$$
d_{(2)}(\hat g_1^{16})=\hat g_3^{17}
$$
and moreover $d_{(2)}(\hat g_m^d)=0$ for all $g_m^d$ with $d<17$ (one can
check all cases for each given $d$ since the number of generators $g_m^d$
for each given $d$ is finite).
Treating similarly the rest of the table below we find that the only
nonzero values of $d_{(2)}$ on generators of degree $<40$ are as follows:
$$
\begin{array}{rl}
d_{(2)}(\hat g_1^{16})&=\hat g_3^{17}\\
d_{(2)}(\hat g_4^{21})&=\hat g_6^{22}\\
d_{(2)}(\hat g_4^{22})&=\hat g_6^{23}\\
d_{(2)}(\hat g_5^{23})&=\hat g_7^{24}\\
d_{(2)}(\hat g_7^{30})&=\hat g_9^{31}\\
d_{(2)}(\hat g_8^{31})&=\hat g_{10}^{32}\\
d_{(2)}(\hat g_1^{32})&=\hat g_3^{33}\\
d_{(2)}(\hat g_2^{33})&=\hat g_4^{34}\\
d_{(2)}(\hat g_7^{33})&=\hat g_9^{34}\\
d_{(2)}(\hat g_8^{33})&=\hat g_{10}^{34}\\
d_{(2)}({}'\hat g_3^{34})&=\hat g_5^{35}\\
d_{(2)}(\hat g_8^{34})&=\hat g_{10}^{35}\\
d_{(2)}(\hat g_7^{36})&=\hat g_9^{37}\\
d_{(2)}(\hat g_8^{37})&=\hat g_{10}^{38}.
\end{array}
$$
These data can be summarized in the following picture, thus confirming
calculations presented in Ravenel's book \cite{Ravenel}.
\
\newcount\s
\def\latticebody{
\s=\latticeA
\advance\s by\latticeB
\ifnum\s<40\drop{.}\else\fi}
\ \hfill
\xy
*\xybox{
0;<.3cm,0cm>:
,0,{\xylattice0{40}0{15}}
,(0,0)*{\bullet}
,(0,1)*{\bullet}
,(0,2)*{\bullet}
,(0,3)*{\bullet}
,(0,4)*{\bullet}
,(0,5)*{\bullet}
,(0,6)*{\bullet}
,(0,7)*{\bullet}
,(0,8)*{\bullet}
,(0,9)*{\bullet}
,(0,10)*{\bullet}
,(0,11)*{\bullet}
,(0,12)*{\bullet}
,(0,13)*{\bullet}
,(0,14)*{\bullet}
,(0,15)*{\bullet}
,(1,1)*{\bullet}
,(3,1)*{\bullet}
,(7,1)*{\bullet}
,(15,1)*{\circ}="1.16"
,(31,1)*{\circ}="1.32"
,(2,2)*{\bullet}
,(3,2)*{\bullet}
,(6,2)*{\bullet}
,(7,2)*{\bullet}
,(8,2)*{\bullet}
,(14,2)*{\bullet}
,(15,2)*{\bullet}
,(16,2)*{\bullet}
,(18,2)*{\bullet}
,(30,2)*{\bullet}
,(31,2)*{\circ}="2.33"
,(32,2)*{\bullet}
,(34,2)*{\bullet}
,(3,3)*{\bullet}
,(7,3)*{\bullet}
,(8,3)*{\bullet}
,(9,3)*{\bullet}
,(14,3)*{\circ};"1.16"**\dir{-}
,(15,3)*{\bullet}
,(17,3)*{\bullet}
,(18,3)*{\bullet}
,(19,3)*{\bullet}
,(21,3)*{\bullet}
,(30,3)*{\circ};"1.32"**\dir{-}
,(31.15,2.85)*{\bullet}
,(30.85,3.15)*{\circ}="3.34"
,(33,3)*{\bullet}
,(34,3)*{\bullet}
,(7,4)*{\bullet}
,(9,4)*{\bullet}
,(14,4)*{\bullet}
,(15,4)*{\bullet}
,(17,4)*{\circ}="4.21"
,(18.15,3.85)*{\bullet}
,(17.85,4.15)*{\circ}="4.22"
,(20,4)*{\bullet}
,(22,4)*{\bullet}
,(23,4)*{\bullet}
,(30,4)*{\circ};"2.33"**\dir{-}
,(31,4)*{\bullet}
,(32,4)*{\bullet}
,(33,4)*{\bullet}
,(34,4)*{\bullet}
,(9,5)*{\bullet}
,(11,5)*{\bullet}
,(14,5)*{\bullet}
,(14.85,5.15)*{\bullet}
,(15.15,4.85)*{\bullet}
,(17,5)*{\bullet}
,(18,5)*{\circ}="5.23"
,(20,5)*{\bullet}
,(21,5)*{\bullet}
,(23,5)*{\bullet}
,(24,5)*{\bullet}
,(30,5)*{\circ};"3.34"**\dir{-}
,(30.85,5.15)*{\bullet}
,(31.15,4.85)*{\bullet}
,(33,5)*{\bullet}
,(10,6)*{\bullet}
,(11,6)*{\bullet}
,(14,6)*{\bullet}
,(15,6)*{\bullet}
,(16,6)*{\circ};"4.21"**\dir{-}
,(17,6)*{\circ};"4.22"**\dir{-}
,(20,6)*{\bullet}
,(23,6)*{\bullet}
,(26,6)*{\bullet}
,(30,6)*{\bullet}
,(31,6)*{\bullet}
,(32,6)*{\bullet}
,(11,7)*{\bullet}
,(15,7)*{\bullet}
,(16,7)*{\bullet}
,(17,7)*{\circ};"5.23"**\dir{-}
,(23,7)*{\circ}="7.30"
,(26,7)*{\circ}="7.33"
,(29,7)*{\circ}="7.36"
,(30,7)*{\bullet}
,(31,7)*{\bullet}
,(32,7)*{\bullet}
,(15,8)*{\bullet}
,(17,8)*{\bullet}
,(22,8)*{\bullet}
,(23,8)*{\circ}="8.31"
,(25,8)*{\circ}="8.33"
,(26,8)*{\circ}="8.34"
,(28,8)*{\bullet}
,(29,8)*{\circ}="8.37"
,(30,8)*{\bullet}
,(30.85,8.15)*{\bullet}
,(31.15,7.85)*{\bullet}
,(17,9)*{\bullet}
,(19,9)*{\bullet}
,(22,9)*{\circ};"7.30"**\dir{-}
,(22.85,9.15)*{\bullet}
,(23.15,8.85)*{\bullet}
,(25,9)*{\circ};"7.33"**\dir{-}
,(26,9)*{\circ}="9.35"
,(28,9)*{\circ};"7.36"**\dir{-}
,(29,9)*{\bullet}
,(30,9)*{\bullet}
,(18,10)*{\bullet}
,(19,10)*{\bullet}
,(22,10)*{\circ};"8.31"**\dir{-}
,(23,10)*{\bullet}
,(24,10)*{\circ};"8.33"**\dir{-}
,(25,10)*{\circ};"8.34"**\dir{-}
,(28,10)*{\circ};"8.37"**\dir{-}
,(19,11)*{\bullet}
,(23,11)*{\bullet}
,(24,11)*{\bullet}
,(25,11)*{\circ};"9.35"**\dir{-}
,(23,12)*{\bullet}
,(25,12)*{\bullet}
,(25,13)*{\bullet}
,(0,17)*{\ }
,(42,0)*{\ }
}="O"
,{"O"+L \ar "O"+R*+!LD{d-m}}
,{"O"+D \ar "O"+U*+!RD{m}}
\endxy
\hfill\
\section{The table of values of the differential $\delta$ in the secondary
resolution for ${\mathbb G}^\Sigma$}
The following table presents results of computer calculations of the
differential $\delta$. Note that it does not have invariant meaning since
it depends on the choices involved in the determination of the multiplication
map $A$, of the resolution, and of those indeterminate coefficients
$\eta_m^d(a)$ which remain undetermined after the conditions
\eqref{maineq} are satisfied. The resulting secondary differential
$d_{(2)}$ however does not depend on these choices and is canonically
determined.
$$
\begin{array}{rl}
\delta(g_2^2) &= 0\\
\ \\
\delta(g_3^3) &= 0\\
\ \\
\delta(g_2^4) &= 0\\
\delta(g_4^4) &= 0\\
\ \\
\delta(g_2^5) &= 0\\
\delta(g_5^5) &= 0\\
\ \\
\delta(g_3^6) &= \Sq^4 g_1^1\\
\delta(g_6^6) &= 0\\
\ \\
\delta(g_7^7) &= 0\\
\ \\
\delta(g_2^8) &= 0\\
\delta(g_8^8) &= 0\\
\ \\
\delta(g_2^9) &= 0\\
\delta(g_9^9) &= 0\\
\ \\
\delta(g_2^{10}) &= 0\\
\delta(g_3^{10}) &= (\Sq^4\Sq^2\Sq^1 + \Sq^7) g_1^2\\
&+ \Sq^8 g_1^1\\
\delta(g_{10}^{10}) &= 0\\
\end{array}
$$
\hfill
$$
\begin{array}{rl}
\delta(g_3^{11}) &= (\Sq^7\Sq^1 + \Sq^8) g_1^2\\
&+ \Sq^6\Sq^3 g_1^1\\
\delta(g_4^{11}) &= \Sq^5 g_2^5\\
&+\Sq^4\Sq^2 g_2^4\\
\delta(g_{11}^{11}) &= 0\\
\ \\
\delta(g_3^{12}) &= \Sq^7\Sq^3 g_1^1\\
\delta(g_{12}^{12}) &= 0\\
\ \\
\delta(g_4^{13}) &= \Sq^4 g_2^8\\
&+ (\Sq^7 + \Sq^5\Sq^2) g_2^5\\
&+ (\Sq^8 + \Sq^6\Sq^2) g_2^4\\
&+ (\Sq^7\Sq^3 + \Sq^8\Sq^2 + \Sq^{10}) g_2^2\\
\delta(g_{13}^{13}) &= 0\\
\ \\
\delta(g_5^{14}) &= \Sq^4\Sq^2\Sq^1 g_3^6\\
&+ (\Sq^7\Sq^3 + \Sq^8\Sq^2) g_3^3\\
\delta(g_{14}^{14}) &= 0\\
\ \\
\delta(g_2^{16}) &= 0\\
\delta(g_5^{16}) &=\Sq^3 g_3^{12}\\
&+ \Sq^4 g_3^{11}\\
&+ \Sq^5 g_3^{10}\\
&+ \Sq^{10}\Sq^2 g_3^3\\
\delta(g_6^{16}) &= 0\\
\ \\
\delta(g_2^{17}) &= 0\\
\delta(g_3^{17}) &= g_1^{16}\\
&+ \Sq^{12} g_1^4\\
&+\Sq^{10}\Sq^4 g_1^2\\
&+ (\Sq^9\Sq^4\Sq^2 + \Sq^{10}\Sq^5 + \Sq^{11}\Sq^4) g_1^1\\
\delta(g_6^{17}) &= (\Sq^5 + \Sq^4\Sq^1) g_4^{11}\\
&+ (\Sq^{12} +\Sq^{10}\Sq^2) g_4^4\\
\ \\
\delta(g_2^{18}) &= 0\\
\delta(g_3^{18}) &=(\Sq^{11}\Sq^4 + \Sq^8\Sq^4\Sq^2\Sq^1) g_1^2\\
&+(\Sq^{10}\Sq^4\Sq^2 + \Sq^{11}\Sq^5 + \Sq^{12}\Sq^4 +\Sq^{14}\Sq^2 + \Sq^{16}) g_1^1\\
\delta(g_4^{18}) &=(\Sq^6\Sq^1 + \Sq^7) g_2^{10}\\
&+(\Sq^6\Sq^3 + \Sq^7\Sq^2 + \Sq^9) g_2^8\\
&+\Sq^8\Sq^4g_2^5\\
&+(\Sq^{10}\Sq^2\Sq^1 + \Sq^{13} + \Sq^{11}\Sq^2 +\Sq^{12}\Sq^1)g_2^4\\
&+(\Sq^9\Sq^4\Sq^2 + \Sq^{15} + \Sq^{12}\Sq^3 +\Sq^{10}\Sq^5)g_2^2\\
\delta(g_7^{18}) &= \Sq^2\Sq^1 g_5^{14}\\
\ \\
\delta(g_4^{19}) &=\Sq^9 g_2^9\\
&+(\Sq^{10} + \Sq^8\Sq^2) g_2^8\\
&+ \Sq^{11}\Sq^2g_2^5\\
&+ ( \Sq^{11}\Sq^2\Sq^1 + \Sq^{13}\Sq^1 + \Sq^8\Sq^4\Sq^2 + \Sq^{10}\Sq^3\Sq^1) g_2^4\\
&+(\Sq^{14}\Sq^2 +\Sq^{10}\Sq^4\Sq^2 + \Sq^{12}\Sq^4) g_2^2\\
\delta(g_5^{19}) &=\Sq^1 g_3^{17}\\
&+ \Sq^4\Sq^2 g_3^{12}\\
&+ \Sq^4\Sq^2\Sq^1 g_3^{11}\\
&+ (\Sq^6\Sq^2 + \Sq^8) g_3^{10}\\
&+ (\Sq^8\Sq^4 +\Sq^{11}\Sq^1) g_3^6\\
&+ (\Sq^{13}\Sq^2 + \Sq^{10}\Sq^5 + \Sq^{15} + \Sq^{11}\Sq^4) g_3^3
\end{array}
$$
$$
\begin{array}{rl}
\delta(g_2^{20}) &= 0\\
\delta(g_3^{20}) &= (\Sq^{15} + \Sq^9\Sq^4\Sq^2) g_1^4\\
&+ (\Sq^{12}\Sq^5 + \Sq^{13}\Sq^4 + \Sq^{16}\Sq^1) g_1^2\\
&+ (\Sq^{11}\Sq^5\Sq^2 + \Sq^{15}\Sq^3 + \Sq^{18} + \Sq^{12}\Sq^6)g_1^1\\
\delta(g_5^{20}) &= \Sq^4\Sq^2\Sq^1 g_3^{12}\\
&+ (\Sq^7\Sq^1 + \Sq^8)g_3^{11}\\
&+ (\Sq^{10}\Sq^3 + \Sq^8\Sq^4\Sq^1 + \Sq^{13} + \Sq^{11}\Sq^2)g_3^6\\
&+ (\Sq^{13}\Sq^3 + \Sq^{10}\Sq^4\Sq^2 + \Sq^{11}\Sq^5 + \Sq^{12}\Sq^4)g_3^3\\
\delta({}'g_5^{20}) &= \Sq^5\Sq^2 g_3^{12}\\
&+ \Sq^7\Sq^2 g_3^{10}\\
&+ (\Sq^{12}\Sq^1 + \Sq^{10}\Sq^3 + \Sq^8\Sq^4\Sq^1 + \Sq^{10}\Sq^2\Sq^1
+ \Sq^{11}\Sq^2) g_3^6\\
&+ (\Sq^{14}\Sq^2 + \Sq^{13}\Sq^3 + \Sq^{11}\Sq^5 + \Sq^{16} + \Sq^{12}\Sq^4)g_3^3\\
\delta(g_6^{20}) &= (\Sq^6\Sq^2 + \Sq^8) g_4^{11}\\
&+ (\Sq^{13}\Sq^2 + \Sq^{15} + \Sq^{11}\Sq^4) g_4^4\\
\ \\
\delta(g_3^{21}) &= (\Sq^{15}\Sq^2\Sq^1 + \Sq^{17}\Sq^1 +
\Sq^{12}\Sq^6) g_1^2\\
&+ ( \Sq^{13}\Sq^4\Sq^2 + \Sq^{15}\Sq^4 + \Sq^{16}\Sq^3 + \Sq^{17}\Sq^2 + \Sq^{19}) g_1^1\\
\delta(g_4^{21}) &=\Sq^3 g_2^{17}\\
&+ (\Sq^{10} +\Sq^9\Sq^1) g_2^{10}\\
&+ (\Sq^9\Sq^3 + \Sq^{11}\Sq^1) g_2^8\\
&+ (\Sq^{15} +\Sq^{13}\Sq^2 + \Sq^{10}\Sq^5) g_2^5\\
&+ ( \Sq^{13}\Sq^2\Sq^1 + \Sq^{12}\Sq^3\Sq^1 + \Sq^{12}\Sq^4 + \Sq^9\Sq^4\Sq^2\Sq^1 + \Sq^{10}\Sq^4\Sq^2) g_2^4\\
&+ (\Sq^{16}\Sq^2 + \Sq^{12}\Sq^6 + \Sq^{15}\Sq^3) g_2^2\\
\delta(g_6^{21}) &= (\Sq^7 + \Sq^6\Sq^1) g_4^{13}\\
&+ (\Sq^9 + \Sq^8\Sq^1) g_4^{11}\\
&+ \Sq^{11}\Sq^5 g_4^4\\
\ \\
\delta(g_3^{22}) &=\Sq^{17} g_1^4\\
&+ (\Sq^{16}\Sq^2\Sq^1 + \Sq^{13}\Sq^6 +\Sq^{12}\Sq^4\Sq^2\Sq^1 + \Sq^{12}\Sq^6\Sq^1) g_1^2\\
&+ ( \Sq^{13}\Sq^5\Sq^2 + \Sq^{17}\Sq^3 + \Sq^{18}\Sq^2 +\Sq^{14}\Sq^4\Sq^2)g_1^1\\
\delta(g_4^{22}) &=\Sq^4 g_2^{17}\\
&+ \Sq^{11} g_2^{10}\\
&+ (\Sq^{12} + \Sq^9\Sq^3) g_2^9\\
&+ (\Sq^9\Sq^4 + \Sq^{13} + \Sq^8\Sq^4\Sq^1)g_2^8\\
&+ \Sq^{12}\Sq^4g_2^5\\
&+ \Sq^{15}\Sq^2 g_2^4\\
&+ (\Sq^{13}\Sq^4\Sq^2 + \Sq^{19} + \Sq^{13}\Sq^6 + \Sq^{14}\Sq^5)g_2^2\\
\delta({}'g_4^{22}) &= \Sq^2\Sq^1 g_2^{18}\\
&+ (\Sq^8\Sq^4 + \Sq^{12}) g_2^9\\
&+ (\Sq^9\Sq^4 + \Sq^{13} + \Sq^{12}\Sq^1) g_2^8\\
&+ (\Sq^{16} + \Sq^{13}\Sq^3) g_2^5\\
&+ (\Sq^{15}\Sq^2 + \Sq^{16}\Sq^1 + \Sq^{13}\Sq^4 + \Sq^{11}\Sq^4\Sq^2)g_2^4\\
&+ (\Sq^{14}\Sq^5 + \Sq^{19} + \Sq^{17}\Sq^2) g_2^2\\
\delta(g_5^{22}) &= (\Sq^7\Sq^2 + \Sq^6\Sq^2\Sq^1 + \Sq^6\Sq^3)g_3^{12}\\
&+ \Sq^{10} g_3^{11} + (\Sq^9\Sq^2 + \Sq^8\Sq^3 + \Sq^{11}) g_3^{10}\\
&+(\Sq^{14}\Sq^1 + \Sq^{11}\Sq^3\Sq^1 + \Sq^{12}\Sq^3 +\Sq^{13}\Sq^2)g_3^6\\
&+ \Sq^{13}\Sq^5 g_3^3\\
\delta(g_6^{22}) &= g_4^{21}\\
&+ (\Sq^6\Sq^2 + \Sq^8 + \Sq^7\Sq^1) g_4^{13}\\
&+ \Sq^{10} g_4^{11}\\
&+ (\Sq^{13}\Sq^4 + \Sq^{15}\Sq^2 + \Sq^{17}) g_4^4\\
\delta(g_7^{22}) &= (\Sq^{13}\Sq^3 + \Sq^{14}\Sq^2 + \Sq^{16}) g_5^5\\
\end{array}
$$
\endinput
\chapter[Hopf pair algebras and Hopf pair coalgebras]{Hopf pair algebras and Hopf pair coalgebras representing the algebra
of secondary cohomology operations}\label{Hpa}
We describe a modification ${\mathscr B}^{\mathbb F}$ of the algebra ${\mathscr B}$ of secondary
cohomology operations in chapter \ref{secsteen} which is suitable for
dualization. The resulting object ${\mathscr B}^{\mathbb F}$ and the dual object ${\mathscr B}_{\mathbb F}$
will be used to give an alternative description of the multiplication map $A$
and the dual multiplication map $A_*$. All triple Massey products in the
Steenrod algebra can be deduced from ${\mathscr B}^{\mathbb F}$ or ${\mathscr B}_{\mathbb F}$ and from $A$ and
$A_*$.
We first recall the notions of pair modules and pair algebras from chapter
\ref{sext} and give the corresponding dual notions. Next we define the
concept of $M$-algebras and $N$-coalgebras, where $M$ is a folding system and
$N$ an unfolding system. An $M$-algebra is a variation on the notion of a
$[p]$-algebra from \cite{Baues}. We show that the algebra ${\mathscr B}$ of secondary
cohomology operations gives rise to a comonoid ${\mathscr B}^{\mathbb F}$ in the monoidal
category of $M$-algebras, and we describe the dual object ${\mathscr B}_{\mathbb F}$, which is
a monoid in the monoidal category of $N$-coalgebras.
In chapter \ref{LS} we study the algebraic objects ${\mathscr B}^{\mathbb F}$ and ${\mathscr B}_{\mathbb F}$ in terms
of generators. This way we obtain explicit descriptions which can be used
for computations. In particular we characterize algebraically
multiplication maps $A_\phi$ and comultiplication maps $A^\psi$ which
determine ${\mathscr B}^{\mathbb F}$ and ${\mathscr B}_{\mathbb F}$ completely, see sections \ref{rcomp},
\ref{bcomp}, \ref{cobcomp}. For the dual object ${\mathscr B}_{\mathbb F}$ the inclusion of
polynomial algebras ${\mathscr A}_*\subset{\mathscr F}_*$ will be crucial. Here ${\mathscr A}_*$ is the
Milnor dual of the Steenrod algebra and ${\mathscr F}_*$ is the dual of a free
associative algebra.
\section{Pair modules and pair algebras}\label{pairs}
Here we recall from \ref{secodjf} the following notation in order to prepare the
reader for the dualization of this notation in the next section.
Let $k$ be a commutative ring (usually it will be actually a prime field
${\mathbb F}={\mathbb F}_p={\mathbb Z}/p{\mathbb Z}$ for some prime $p$) and let ${\mathbf{Mod}}$ be the category of
finite dimensional $k$-modules (i.~e. $k$-vector spaces) and $k$-linear
maps.
A \emph{pair module} is a
homomorphism
\begin{equation}\label{pair}
X=\left(X_1\xto\d X_0\right)
\end{equation}
in ${\mathbf{Mod}}$. We write $\pi_0(X)=\coker\d$ and $\pi_1(X)=\ker\d$.
For two pair modules $X$ and $Y$ the tensor product of the
complexes corresponding to them is concentrated in degrees 0, 1 and 2
and is given by
\begin{equation}\label{tens}
X_1\!\ox\!Y_1\xto{\d_1}X_1\!\ox\!Y_0\oplus
X_0\!\ox\!Y_1\xto{\d_0}X_0\!\ox\!Y_0
\end{equation}
with $\d_0=(\d\ox1,1\ox\d)$ and $\d_1=\binom{-1\ox\d}{\d\ox1}$. Truncating this
chain complex we get the pair module
$$
X\bar\ox Y=
\left((X\bar\ox Y)_1=\coker(\d_1)\xto\d X_0\ox Y_0
=(X\bar\ox Y)_0\right)
$$
with $\d$ induced by $\d_0$. Clearly one has $\pi_0(X\bar\ox Y)\cong\pi_0(X)\ox\pi_0(Y)$
and
\begin{equation}\label{tox}
\pi_1(X\bar\ox Y)\cong\pi_1(X)\!\ox\!\pi_0(Y)\oplus\pi_0(X)\!\ox\!\pi_1(Y).
\end{equation}
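Since $\pi_0(X)=\coker\d$ and $\pi_1(X)=\ker\d$, over the field $k={\mathbb F}_2$ both dimensions are determined by the rank of $\d$. A small numerical sketch (the matrix below is an arbitrary example, not taken from the text):

```python
# A pair module over F_2 is a linear map d: X_1 -> X_0, given by a matrix
# with rows indexed by a basis of X_0 and columns by a basis of X_1.
#   dim pi_0 = dim X_0 - rank(d)   (cokernel)
#   dim pi_1 = dim X_1 - rank(d)   (kernel)

def rank_gf2(M):
    """Rank of a matrix over GF(2) by Gaussian elimination."""
    M = [row[:] for row in M]
    rank, rows = 0, len(M)
    cols = len(M[0]) if rows else 0
    for c in range(cols):
        piv = next((i for i in range(rank, rows) if M[i][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(rows):
            if i != rank and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[rank])]
        rank += 1
    return rank

def pi_dims(d, dim_X1, dim_X0):
    """Return (dim pi_0, dim pi_1) of the pair module d: X_1 -> X_0."""
    r = rank_gf2(d) if d else 0
    return dim_X0 - r, dim_X1 - r

# d: F_2^3 -> F_2^2 of rank 2, so pi_0 = 0 and pi_1 is one-dimensional:
d = [[1, 0, 1],
     [0, 1, 1]]
assert pi_dims(d, 3, 2) == (0, 1)
```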
We next consider the category ${\mathbf{Mod}}^\cdot$ of \emph{graded} modules, i.~e.
graded objects in ${\mathbf{Mod}}$ (graded $k$-vector spaces
$A^\cdot=(A^n)_{n\in{\mathbb Z}}$ with upper indices, which in each degree have
finite dimension). For graded modules $A^\cdot$, $B^\cdot$ we define their
graded tensor product $A^\cdot\ox B^\cdot$ in the usual way with
an interchange
\begin{equation}\label{symm}
T_{A^\cdot,B^\cdot}:A^\cdot\ox B^\cdot\xto\cong B^\cdot\ox A^\cdot .
\end{equation}
A \emph{graded pair module} is a graded object of ${\mathbf{Mod}}_*$, i.~e. a
sequence $X^n=(\d^n:X_1^n\to X_0^n)$ with $n\in{\mathbb Z}$ of pair modules.
The tensor product $X^\cdot\bar\ox Y^\cdot$ of graded pair modules
$X^\cdot$, $Y^\cdot$ is defined by
\begin{equation}\label{grpr}
(X^\cdot\bar\ox Y^\cdot)^n=\bigoplus_{i+j=n}X^i\bar\ox Y^j.
\end{equation}
For two morphisms $f,g:X^\cdot\to Y^\cdot$ between graded pair modules, a
\emph{homotopy} $H:f\then g$ is a morphism $H:X^\cdot_0\to Y^\cdot_1$ of degree 0 satisfying
\begin{equation}
f_0-g_0=\d H \textrm{ and } f_1-g_1=H\d .
\end{equation}
\begin{Definition}
A \emph{pair algebra} $B^\cdot$ is a graded pair module, i.~e. an object
$$
\d^\cdot:B^\cdot_1\to B^\cdot_0
$$
in ${\mathbf{Mod}}^\cdot_*$ with $B_1^n=B_0^n=0$ for $n<0$ such that $B^\cdot_0$ is a graded
algebra in ${\mathbf{Mod}}^\cdot$, $B^\cdot_1$ is a graded $B^\cdot_0$-$B^\cdot_0$-bimodule, and
$\d^\cdot$ is a bimodule homomorphism. Moreover for $x,y\in B^\cdot_1$ the equality
\begin{equation}\label{paireq}
\d(x)y=x\d(y)
\end{equation}
holds in $B^\cdot_1$.
\end{Definition}
It is easy to see that a graded pair algebra $B^\cdot$ yields an exact
sequence of graded $B^\cdot_0$-$B^\cdot_0$-bimodules
\begin{equation}\label{piseq}
0\to\pi_1B^\cdot\to B^\cdot_1\xto\d B^\cdot_0\to\pi_0B^\cdot\to0
\end{equation}
where in fact $\pi_0B^\cdot$ is a graded $k$-algebra, $\pi_1B^\cdot$ is a graded
$\pi_0B^\cdot$-$\pi_0B^\cdot$-bimodule, and $B^\cdot_0\to\pi_0B^\cdot$ is a homomorphism of
graded $k$-algebras.
The tensor product of pair algebras has a natural pair algebra structure,
as it happens in any symmetric monoidal category.
We are mainly interested in two examples of pair algebras defined below in
sections \ref{grel} and \ref{sec} respectively: the \emph{${\mathbb G}$-relation
pair algebra ${\mathscr R}$ of the Steenrod algebra ${\mathscr A}$} and the \emph{pair algebra
${\mathscr B}$ of secondary cohomology operations} deduced from
\cite{Baues}*{5.5.2}.
By the work of Milnor \cite{Milnor} it is well known that the dual of the
Steenrod algebra ${\mathscr A}$ is a polynomial algebra and this fact yields
important algebraic properties of ${\mathscr A}$. For this reason we also consider
the dual of the ${\mathbb G}$-relation pair algebra ${\mathscr R}$ of ${\mathscr A}$ and the dual of
the pair algebra ${\mathscr B}$ of secondary cohomology operations. The duality
functor $D$ is studied in the next section.
\section{Pair comodules and pair coalgebras}\label{coalg}
This section is exactly dual to the previous one. There is a contravariant
self-equivalence of categories
$$
D=\Hom_k(\_,k):{\mathbf{Mod}}^{\mathrm{op}}\to{\mathbf{Mod}}
$$
which carries a vector space $V$ in ${\mathbf{Mod}}$ to its dual
$$
DV=\Hom_k(V,k).
$$
We also denote the dual of $V$ by $V_*=DV$, for example, the dual of the
Steenrod algebra ${\mathscr A}$ is ${\mathscr A}_*=D({\mathscr A})$. We can apply the functor
$\Hom_k(\_,k)$ to dualize straightforwardly all notions of section
\ref{pairs}. Explicitly, one gets:
A \emph{pair comodule} is a homomorphism
\begin{equation}\label{copair}
X=\left(X^1\xot dX^0\right)
\end{equation}
in ${\mathbf{Mod}}$. We write $\pi^0(X)=\ker d$ and $\pi^1(X)=\coker d$.
The dual of a pair module $X$ is a pair comodule
\begin{align*}
DX&=\Hom_k(X,k)\\
&=(D\d:DX_0\to DX_1)
\end{align*}
with $(DX)^i=D(X_i)$.
A \emph{morphism} $f:X\to Y$ of pair comodules is a commutative diagram
$$
\xymatrix
{
X^1\ar[r]^{f^1}&Y^1\\
X^0\ar[u]^d\ar[r]^{f^0}&Y^0\ar[u]^d.
}
$$
Evidently pair comodules with these morphisms form a category ${\mathbf{Mod}}^*$ and
one has functors
$$
\pi^0, \pi^1 : {\mathbf{Mod}}^*\to{\mathbf{Mod}}
$$
which are compatible with the duality functor $D$, that is, for any pair
module $X$ one has
$$
\pi^i(DX)=D(\pi_iX)\textrm{ for }i=0,1.
$$
A morphism of pair comodules is called a \emph{weak equivalence} if it
induces isomorphisms on $\pi^0$ and $\pi^1$.
Clearly a pair comodule is the same as a cochain complex concentrated in
degrees 0 and 1. For two pair comodules $X$ and $Y$ the tensor product
of the cochain complexes is concentrated in degrees 0, 1 and 2 and is
given by
$$
X^1\!\ox\!Y^1\xot{d^1}X^1\!\ox\!Y^0\oplus
X^0\!\ox\!Y^1\xot{d^0}X^0\!\ox\!Y^0
$$
with $d^0=\binom{d\ox1}{1\ox d}$ and $d^1=(-1\ox d,d\ox1)$. Cotruncating this
cochain complex we get the pair comodule
$$
X\dblb\ox Y=
\left((X\dblb\ox Y)^1=\ker(d^1)\xot dX^0\ox Y^0=(X\dblb\ox Y)^0\right)
$$
with $d$ induced by $d^0$. One readily checks the natural isomorphism
\begin{equation}
D(X\bar\ox Y)\cong DX\dblb\ox DY.
\end{equation}
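This is an instance of the elementary fact that on finite dimensional vector spaces the duality $D$ interchanges kernels and cokernels: for a $k$-linear map $f:V\to W$ there are natural isomorphisms
$$
D(\coker f)\cong\ker(Df),\qquad D(\ker f)\cong\coker(Df),
$$
the first one sending a functional $W/f(V)\to k$ to its composite with the projection $W\onto W/f(V)$. Applied degreewise to the differentials of $X\ox Y$, this identifies the cotruncation of $DX\ox DY$ with the dual of the truncation of $X\ox Y$.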
\begin{Remark}[compare \ref{trunc}]
Note that the full embedding of the category of pair comodules into the category of
cochain complexes induced by the above identification has a right adjoint
$\Tr^*$ given by cotruncation: for a cochain complex
$$
C^*=\left(...\ot C^2\xot{d^1}C^1\xot{d^0}C^0\xot{d^{-1}}C^{-1}\ot...\right),
$$
one has
$$
\Tr^*(C^*)=\left(\ker(d^1)\xot{{\bar d}^0}C^0\right),
$$
with ${\bar d}^0$ induced by $d^0$. Then clearly one has
$$
X\dblb\ox Y=\Tr^*(X\ox Y).
$$
Using the fact that $\Tr^*$ is a coreflection onto a full subcategory, one
easily checks that the category ${\mathbf{Mod}}^*$ together with the tensor
product $\dblb\ox$ and unit $k^*=(0\ot k)$ is a symmetric monoidal category,
and $\Tr^*$ is a monoidal functor.
\end{Remark}
We next consider the category ${\mathbf{Mod}}_\bullet$ of \emph{graded} modules, i.~e.
graded objects in ${\mathbf{Mod}}$ (graded $k$-vector spaces
$A_\cdot=(A_n)_{n\in{\mathbb Z}}$ with lower indices, which in each degree have
finite dimension). For graded modules $A_\cdot$, $B_\cdot$ we define their
graded tensor product $A_\cdot\ox B_\cdot$ again in the usual way, i.~e.
by
$$
(A_\cdot\ox B_\cdot)_n=\bigoplus_{i+j=n}A_i\ox B_j.
$$
A \emph{graded pair comodule} is a graded object of ${\mathbf{Mod}}^*$, i.~e. a
sequence $X_n=(d_n:X^0_n\to X^1_n)$ with $n\in{\mathbb Z}$ of pair comodules. We can also identify such a
graded pair comodule $X_\cdot$ with the underlying morphism $d$ of degree 0 between
graded modules
$$
X_\cdot=\left(X_\cdot^1\xot{d_\cdot}X_\cdot^0\right).
$$
Now the tensor product $X_\cdot\dblb\ox Y_\cdot$ of graded pair comodules
$X_\cdot$, $Y_\cdot$ is defined by
\begin{equation}\label{cogrpr}
(X_\cdot\dblb\ox Y_\cdot)_n=\bigoplus_{i+j=n}X_i\dblb\ox Y_j.
\end{equation}
This defines a monoidal structure on the category ${\mathbf{Mod}}_\bullet^*$ of graded
pair comodules. Morphisms in this category are of degree 0.
For two morphisms $f,g:X_\cdot\to Y_\cdot$ between graded pair comodules, a
\emph{homotopy} $H:f\then g$ is a morphism $H:X_\cdot^1\to Y_\cdot^0$ of degree 0 as in
the diagram
\begin{equation}\label{cohomot}
\alignbox{
\xymatrix
{
X_\cdot^1\ar[dr]|H\ar@<.5ex>[r]^{f^1}\ar@<-.5ex>[r]_{g^1}&Y_\cdot^1\\
X_\cdot^0\ar[u]_d\ar@<.5ex>[r]^{f^0}\ar@<-.5ex>[r]_{g^0}&Y_\cdot^0\ar[u]^d,
}}
\end{equation}
satisfying $f^0-g^0=Hd$ and $f^1-g^1=dH$.
A \emph{pair coalgebra} $B_\cdot$ is a comonoid in the monoidal category of graded pair
comodules, with the diagonal
$$
\delta:B_\cdot\to B_\cdot\dblb\ox B_\cdot.
$$
We assume that $B_\cdot$ is concentrated in nonnegative degrees, that is,
$B^0_n=B^1_n=0$ for $n<0$.
Of course the duality functor $D$ yields a duality functor
$$
D:({\mathbf{Mod}}^\cdot_*)^{\mathrm{op}}\to{\mathbf{Mod}}_\cdot^*
$$
which is compatible with the monoidal structure, i.~e.
$$
D(X^\cdot\bar\ox Y^\cdot)\cong(DX^\cdot)\dblb\ox(DY^\cdot).
$$
We also write $D(X^\cdot)=X_\cdot$.
More explicitly pair coalgebras can be described as follows.
\begin{Definition}
A \emph{pair coalgebra} $B_\cdot$ is a graded pair comodule, i.~e. an object
$$
d_\cdot:B_\cdot^0\to B_\cdot^1
$$
in ${\mathbf{Mod}}_\bullet^*$ with $B^1_n=B^0_n=0$ for $n<0$ such that $B_\cdot^0$ is
a graded coalgebra in ${\mathbf{Mod}}_\bullet$, $B_\cdot^1$ is a graded
$B_\cdot^0$-$B_\cdot^0$-bicomodule, and $d_\cdot$ is a bicomodule
homomorphism. Moreover the diagram
$$
\xymatrix{
B_\cdot^1\ar[r]^\lambda\ar[d]_\rho&
B_\cdot^0\ox B_\cdot^1\ar[d]_{d_\cdot\ox1}\\
B_\cdot^1\ox B_\cdot^0\ar[r]^{1\ox d_\cdot}
&B_\cdot^1\ox B_\cdot^1
}
$$
commutes, where $\lambda$, resp. $\rho$ is the left, resp. right coaction.
\end{Definition}
It is easy to see that there results an exact sequence of graded
$B_\cdot^0$-$B_\cdot^0$-bicomodules dual to \eqref{piseq}
\begin{equation}\label{copiseq}
0\ot\pi^1B_\cdot\ot B_\cdot^1\xot{d_\cdot}B_\cdot^0\ot\pi^0B_\cdot\ot0
\end{equation}
where in fact $\pi^0B_\cdot$ is a graded $k$-coalgebra, $\pi^1B_\cdot$ is a graded
$\pi^0B_\cdot$-$\pi^0B_\cdot$-bicomodule, and $B_\cdot^0\ot\pi^0B_\cdot$ is a homomorphism of
graded $k$-coalgebras.
One sees easily that the notions in this section correspond to those in the
previous section under the duality functor $D=\Hom_k(\_,k)$. In particular,
$D$ carries (graded) pair algebras to (graded) pair coalgebras.
\section{Folding systems}
In this section we associate to a ``right module system'' $M$ a category
of $M$-algebras ${\mathbf{Alg}}^\r_M$ which is a monoidal category if $M$ is a
``folding system''. Our main examples given by the ${\mathbb G}$-relation pair
algebra ${\mathscr R}$ of the Steenrod algebra ${\mathscr A}$ and by the pair algebra ${\mathscr B}$ of
secondary cohomology operations are in fact comonoids in monoidal
categories of such type, see sections \ref{grel} and \ref{sec}. This
generalizes the well known fact that the Steenrod algebra ${\mathscr A}$ is a Hopf
algebra, i.~e. a comonoid in the category of algebras.
\begin{Definition}
Let ${\mathbf A}$ be a subcategory of the category of graded $k$-algebras. A
\emph{right module system} $M$ over ${\mathbf A}$ is an assignment, to each
$A\in{\mathbf A}$, of a right $A$-module $M(A)$, and, to each homomorphism
$f:A\to A'$ in ${\mathbf A}$, of a homomorphism $f_*:M(A)\to M(A')$ which is
$f$-equivariant, i.~e.
$$
f_*(xa)=f_*(x)f(a)
$$
for any $a\in A$, $x\in M(A)$. The assignment must be functorial, i.~e. one
must have $(\id_A)_*=\id_{M(A)}$ for all $A$ and $(fg)_*=f_*g_*$ for all
composable $f$, $g$.
There are the obvious similar notions of a \emph{left module system} and a
\emph{bimodule system} on a category of graded $k$-algebras ${\mathbf A}$. Clearly
any bimodule system can be considered as a left module system and a right
module system by forgetting part of the structure.
\end{Definition}
\begin{Examples}\label{sysex}
One obvious example is the bimodule system ${\mathbbm1}$ given by ${\mathbbm1}(A)=A$,
$f_*=f$ for all $A$ and $f$. Another example is the bimodule system
$\Sigma$ given by the suspension. That is, $\Sigma A$ is given by the shift
$$
\Sigma:A^{n-1}\xto\cong(\Sigma A)^n
$$
($n\in{\mathbb Z}$), given by the identity map of the underlying vector space and denoted by $\Sigma$. The bimodule
structure for $a,m\in A$ is given by
\begin{align*}
a(\Sigma m)&=(-1)^{\deg(a)}\Sigma(am),\\
(\Sigma m)a&=\Sigma(ma).
\end{align*}
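The sign in the left action is forced by associativity; for instance, for $a,b,m\in A$,
$$
a\bigl(b(\Sigma m)\bigr)
=(-1)^{\deg(b)}a\Sigma(bm)
=(-1)^{\deg(a)+\deg(b)}\Sigma(abm)
=(-1)^{\deg(ab)}\Sigma(abm)
=(ab)(\Sigma m).
$$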
We shall need the \emph{interchange} of $\Sigma$ which for graded modules
$U$, $V$, $W$ is the isomorphism
\begin{equation}\label{sigma}
\sigma_{U,V,W}:U\ox(\Sigma V)\ox W\xto\cong\Sigma(U\ox V\ox W)
\end{equation}
which carries $u\ox\Sigma v\ox w$ to $(-1)^{\deg(u)}\Sigma(u\ox v\ox w)$.
Clearly a direct sum of module systems is again a module system of the
same kind, so that in particular we get a bimodule system ${\mathbbm1}\oplus\Sigma$
with $({\mathbbm1}\oplus\Sigma)(A)=A\oplus\Sigma A$.
\end{Examples}
We are mainly interested in the bimodule system ${\mathbbm1}$ and the bimodule
system ${\mathbbm1}\oplus\Sigma$ which are in fact both folding systems, see
\bref{foldex} below.
\begin{Definition}\label{malg}
For a right module system $M$ on the category of algebras ${\mathbf A}$ and an algebra
$A$ from ${\mathbf A}$, an \emph{$M$-algebra of type $A$} is a pair $D_*=(\d:D_1\to D_0)$
with $\pi_0(D_*)=A$ and $\pi_1(D_*)=M(A)$, such that $D_0$ is a
$k$-algebra, the quotient homomorphism $D_0\onto\pi_0D_*=A$ is a
homomorphism of algebras, $D_1$ is a right $D_0$-module, $\d$ is a
homomorphism of right $D_0$-modules, and the induced structure of a right
$\pi_0(D_*)$-module on $\pi_1(D_*)$ coincides with the original right
$A$-module structure on $M(A)$. For $A$, $A'$ in ${\mathbf A}$, an $M$-algebra $D_*$ of
type $A$, and another one $D'_*$ of type $A'$, a morphism $D_*\to D'_*$ of
$M$-algebras is defined to be a commutative diagram of the form
$$
\xymatrix{
0\ar[r]
&M(A)\ar[r]\ar[d]_{f_*}
&D_1\ar[r]^\d\ar[d]^{f_1}
&D_0\ar[r]\ar[d]^{f_0}
&A\ar[r]\ar[d]^f
&0\\
0\ar[r]
&M(A')\ar[r]
&D'_1\ar[r]_{\d'}
&D'_0\ar[r]
&A'\ar[r]
&0
}
$$
where $f_0$ is a homomorphism of algebras and $f_1$ is a right
$f_0$-equivariant $k$-linear map. It is clear how to compose such
morphisms, so that $M$-algebras form a category which we denote
${\mathbf{Alg}}^\r_M$.
With obvious modifications, we also get notions of $M$-algebra of type $A$
when $M$ is a left module system or a bimodule system; the corresponding
categories of algebras will be denoted by ${\mathbf{Alg}}^\l_M$ and ${\mathbf{Alg}}^\b_M$,
respectively. Moreover, for a bimodule system $M$ there is also a further
full subcategory
$$
{\mathbf{Alg}}^{\mathit{pair}}_M\subset{\mathbf{Alg}}^\b_M
$$
whose objects, called \emph{$M$-pair algebras} are those $M$-algebras which
satisfy the pair algebra equation $(\d x)y=x\d y$ for all $x,y\in D_1$.
\end{Definition}
\begin{Remark}\label{initial}
Note that if ${\mathbf A}$ contains $k$, then ${\mathbf{Alg}}^?_M$ has an initial object
given by the $M$-algebra $I=(0:M(k)\to k)$ of type $k$. Moreover if ${\mathbf A}$
contains the trivial algebra $0$, then ${\mathbf{Alg}}^?_M$ also has a terminal
object --- the $M$-algebra $(0:M(0)\to0)$ of type $0$. Here ? stands for
$\l$, $\r$ or $\b$ if $M$ is a left-, right-, or bimodule system,
respectively.
\end{Remark}
\begin{Definition}\label{defold}
Let ${\mathbf A}$ be a category of graded algebras as above which in addition is closed
under tensor product, i.~e. $k$ belongs to ${\mathbf A}$ and for any $A$, $A'$ from
${\mathbf A}$ the algebra $A\ox_kA'$ also belongs to ${\mathbf A}$. A \emph{right folding system}
on ${\mathbf A}$ is then defined to be a right module system $M$ on ${\mathbf A}$ together with
the collection of right $A\ox_kA'$-module homomorphisms
\begin{align*}
\lambda_{A,A'}&:A\ox_kM(A')\to M(A\ox_kA'),\\
\rho_{A,A'}&:M(A)\ox_kA'\to M(A\ox_kA')
\end{align*}
for all $A$, $A'$ in ${\mathbf A}$ which are natural in the sense that for any
homomorphisms $f:A\to A_1$, $f':A'\to A'_1$ in ${\mathbf A}$ the diagrams
\begin{equation}\label{natfold}
\alignbox{
\xymatrix{
A\ox_kM(A')\ar[r]^{\lambda_{A,A'}}\ar[d]_{f\ox f'_*}
&M(A\ox_kA')\ar[d]^{(f\ox f')_*}\\
A_1\ox_kM(A'_1)\ar[r]^{\lambda_{A_1,A'_1}}
&M(A_1\ox_kA'_1)
}}
,\ \ \ \ \ \ \ \ \ \ \ \ \
\alignbox{
\xymatrix{
M(A)\ox_kA'\ar[r]^{\rho_{A,A'}}\ar[d]_{f_*\ox f'}
&M(A\ox_kA')\ar[d]^{(f\ox f')_*}\\
M(A_1)\ox_kA'_1\ar[r]^{\rho_{A_1,A'_1}}
&M(A_1\ox_kA'_1)
}
}
\end{equation}
commute. Moreover the homomorphisms
\begin{equation}\label{unitfold}
\alignbox{
\lambda_{k,A}&:k\ox_kM(A)\to M(k\ox_kA),\\
\rho_{A,k}&:M(A)\ox_kk\to M(A\ox_kk)
}
\end{equation}
must coincide with the obvious isomorphisms and the diagrams
\begin{gather}\label{leftfold}
\alignbox{\xymatrix{
&A\ox_kM(A'\ox_kA'')\ar[dr]^{\lambda_{A,A'\ox_kA''}}\\
A\ox_kA'\ox_kM(A'')\ar[rr]^{\lambda_{A\ox_kA',A''}}\ar[ur]^{1\ox\lambda_{A',A''}}
&&M(A\ox_kA'\ox_kA''),
}}\\\label{rightfold}
\alignbox{\xymatrix{
&M(A\ox_kA')\ox_kA''\ar[dr]^{\rho_{A\ox_kA',A''}}\\
M(A)\ox_kA'\ox_kA''\ar[rr]^{\rho_{A,A'\ox_kA''}}\ar[ur]^{\rho_{A,A'}\ox1}
&&M(A\ox_kA'\ox_kA''),
}}\\\label{midfold}
\alignbox{\xymatrix{
&M(A\ox_kA')\ox_kA''\ar[dr]^{\rho_{A\ox_kA',A''}}\\
A\ox_kM(A')\ox_kA''\ar[ur]^{\lambda_{A,A'}\ox1}\ar[dr]_{1\ox\rho_{A',A''}}
&&M(A\ox_kA'\ox_kA'')\\
&A\ox_kM(A'\ox_kA'')\ar[ur]_{\lambda_{A,A'\ox_kA''}}
}}
\end{gather}
must commute for all $A$, $A'$, $A''$ in ${\mathbf A}$. A folding system is called
\emph{symmetric} if in addition the diagrams
$$
\xymatrix{
A\ox_kM(A')\ar[r]^{\lambda_{A,A'}}\ar[d]_{T_{A,M(A')}}&M(A\ox_kA')\ar[d]^{M(T_{A,A'})}\\
M(A')\ox_kA\ar[r]^{\rho_{A',A}}&M(A'\ox_kA)
}
$$
commute for all $A$, $A'$, where $T$ is the graded
interchange operator given in \eqref{symm}.
Once again, we have the corresponding obvious notions of a left folding
system and a bifolding system.
\end{Definition}
For a right folding system $M$, the category ${\mathbf{Alg}}^\r_M$ has a monoidal
structure given by the \emph{folding product} $\hat\ox$ below. Given an
$M$-algebra $D$ of type $A$ and another one, $D'$ of type $A'$, we define
an $M$-pair algebra $D\hat\ox D'$ of type $A\ox A'$ as the lower row in the
diagram
\begin{equation}\label{fopr}
\alignbox{
\xymatrix{
0\ar[r]
&A\!\ox\!M(A')\oplus M(A)\!\ox\!A'
\ar[r]\ar[d]_{(\lambda_{A,A'},\rho_{A,A'})}\ar@{}[dr]|{\textrm{push}}
&(D\bar\ox D')_1\ar[r]^{\d_{\bar\ox}}\ar[d]
&(D\bar\ox D')_0\ar[r]\ar@{=}[d]
&A\ox A'\ar[r]\ar@{=}[d]
&0\\
0\ar[r]
&M(A\ox A')\ar[r]
&(D\hat\ox D')_1\ar[r]_{\d_{\hat\ox}}
&D_0\ox D'_0\ar[r]
&A\ox A'\ar[r]
&0.
}
}
\end{equation}
Here the leftmost square is required to be pushout, and the upper row is
exact by \bref{tox}.
\begin{Proposition}\label{monofold}
For any right (resp. left, bi-) folding system $M$, the folding product
defines a monoidal structure on ${\mathbf{Alg}}^\r_M$ (resp. ${\mathbf{Alg}}^\l_M$,
${\mathbf{Alg}}^\b_M$, ${\mathbf{Alg}}^{\mathit{pair}}_M$), with unit object $I=(0:M(k)\to
k)$. If moreover the folding system is symmetric, then this monoidal
structure is symmetric.
\end{Proposition}
We will only use the monoidal categories ${\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}$ and
${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$.
\begin{proof}
To begin with, let us show that $\hat\ox$ is functorial, i.~e. for any
morphisms $f:D\to E$, $f':D'\to E'$ in ${\mathbf{Alg}}_M$ (with $E$, $E'$ of types
$B$, $B'$, respectively), let us define a morphism
$f\hat\ox f':D\hat\ox D'\to E\hat\ox E'$ in a way compatible with
identities and composition. We put $(f\hat\ox f')_0=f_0\ox f'_0$, and
define $(f\hat\ox f')_1$ as the unique homomorphism making the following
diagram commute:
$$
\xymatrix@!C=6em{
B\!\ox\!M(B')\oplus M(B)\!\ox\!B'\ar[rrr]\ar[ddd]_{(\lambda_{B,B'},\rho_{B,B'})}
&&&(E\bar\ox E')_1\ar[ddd]\\
&A\!\ox\!M(A')\oplus M(A)\!\ox\!A'
\ar[r]\ar[d]_{(\lambda_{A,A'},\rho_{A,A'})}
\ar[ul]_{f\!\ox\!f'_*\oplus f_*\!\ox\!f'}
&(D\bar\ox D')_1\ar[d]\ar[ur]^{(f\bar\ox f')_1}\\
&M(A\ox A')\ar[r]\ar[ld]^{(f\ox f')_*}
&(D\hat\ox D')_1\ar@{-->}[dr]_{(f\hat\ox f')_1}\\
M(B\ox B')\ar[rrr]
&&&(E\hat\ox E')_1
}
$$
where the left hand trapezoid commutes by \eqref{natfold}. Using the universal
property of pushout it is clear that right equivariance of $f_1$ and $f_1'$ implies
that of $(f\hat\ox f')_1$ so that this indeed defines a morphism in
${\mathbf{Alg}}_M$. The same universality implies compatibility with composition.
Next to show that $I=(0:M(k)\to k)$ is a unit object first note that for an
$M$-algebra $D$ by \bref{trunc} one has
$$
I\bar\ox D=\Tr_*\left(M(k)\ox D_1\xto{\binom0{1\ox\d}}D_1\oplus
M(k)\!\ox\!D_0\xto{(\d,0)}D_0\right)\cong\left(D_1\oplus
M(k)\!\ox\!A\xto{(\d,0)}D_0\right).
$$
From this using \eqref{unitfold} it is easy to see that $(I\hat\ox D)_1$ is given by the pushout
$$
\xymatrix{
M(A)\oplus
M(k)\!\ox\!A\ar[r]^{\mathrm{incl}\oplus1}\ar[d]_{\mathrm{proj}}
&D_1\oplus M(k)\!\ox\!A\ar[d]\\
M(A)\ar[r]&(I\hat\ox D)_1
}
$$
so that there is a canonical isomorphism $(I\hat\ox D)_1\cong D_1$ compatible
with the canonical isomorphism $k\ox D_0\cong D_0$. Symmetrically, one
constructs the isomorphism $D\hat\ox I\cong D$.
Turning now to associativity, first note that the tensor product
\eqref{tens} can be equivalently stated as defining $(D\bar\ox D')_1$ by
the requirement that the diagram
$$
\xymatrix@!=1.5em{
&D_1\ox D'_1\ar[dr]\ar[dl]\ar@{}[dd]|{\textrm{push}}\\
D_0\ox D'_1\ar[dr]&&D_1\ox D'_0\ar[dl]\\
&(D\bar\ox D')_1
}
$$
be pushout. Then combining diagrams we see that $(D\hat\ox D')_1$ can be
equivalently defined as the colimit of the following diagram:
\begin{equation}\label{altox}
\alignbox{
\xymatrix@!=2.5em{
D_0\ox M(A')\ar[d]\ar[dr]|\hole
&D_1\ox D'_1\ar[dr]\ar[dl]
&M(A)\ox D'_0\ar[d]\ar[dl]|\hole\\
D_0\ox D'_1
&M(A\ox A')
&D_1\ox D'_0
}
}
\end{equation}
where the map $D_0\ox M(A')\to M(A\ox A')$ is the composite $D_0\ox
M(A')\to A\ox M(A')\to M(A\ox A')$ and similarly for $M(A)\ox D'_0\to
M(A\ox A')$. Hence $((D\hat\ox D')\hat\ox D'')_1$ is given by the colimit of the diagram
$$
\xymatrix@!=5em{
D_0\ox D'_0\ox M(A'')\ar[d]\ar[dr]|\hole
&(D\hat\ox D')_1\ox D''_1\ar[dr]\ar[dl]
&M(A\ox A')\ox D''_0\ar[d]\ar[dl]|\hole\\
D_0\ox D'_0\ox D''_1
&M(A\ox A'\ox A'')
&(D\hat\ox D')_1\ox D''_0.
}
$$
Substituting here the diagram for $(D\hat\ox D')_1$ we obtain that this is
the same as the colimit of a diagram of the form
$$
\xymatrix@!C=6em{
&&&&D_0\ox D'_1\ox D''_0\\
&&D_0\ox D'_1\ox D''_1\ar[dll]\ar[urr]
&D_0\ox M(A')\ox D''_0\ar[ur]\ar[dl]\\
D_0\ox D'_0\ox D''_1
&D_0\ox D'_1\ox M(A'')\ar[l]\ar[r]
&M(A\ox A'\ox A'')
&&D_1\ox D'_1\ox D''_0\ar[uu]\ar[dd]\\
&&D_1\ox D'_0\ox D''_1\ar[ull]\ar[drr]
&M(A)\ox D'_0\ox D''_0\ar[ul]\ar[dr]\\
&&&&D_1\ox D'_0\ox D''_0.
}
$$
Treating now $(D\hat\ox(D'\hat\ox D''))_1$ in the same way we obtain that
it is the colimit of a diagram with the same objects; then, using
\eqref{leftfold}, \eqref{midfold}, and \eqref{rightfold}, one can see that
also morphisms in these diagrams are the same.
Finally, suppose that $M$ is a symmetric folding system. Then for any $M$-algebras
$D$, $D'$ of type $A$, $A'$ respectively, there is a commutative diagram
$$
\xymatrix@!=.2em{
&&&&M(A\ox A')\ar[ddd]\\
\\
\\
&&&&M(A'\ox A)\\
&&D_0\ox M(A')\ar[uuuurr]\ar[ddddll]\ar[dr]
&&&&M(A)\ox D'_0\ar[uuuull]\ar[ddddrr]\ar[dl]\\
&&&M(A')\ox D_0\ar[uur]\ar[ddl]
&&D'_0\ox M(A)\ar[uul]\ar[ddr]\\
\\
&&D'_1\ox D_0
&&D'_1\ox D_1\ar[ll]\ar[rr]
&&D'_0\ox D_1\\
D_0\ox D'_1\ar[urr]
&&&&D_1\ox D'_1\ar[u]\ar[llll]\ar[rrrr]
&&&&D_1\ox D'_0\ar[ull]
}
$$
which induces a map from the colimit of the outer triangle to that of the
inner one, i.~e. by \eqref{altox} a map $(D\hat\ox D')_1\to(D'\hat\ox
D)_1$. It is then straightforward to check that this defines an interchange for
the monoidal structure.
\end{proof}
\begin{Examples}\label{foldex}
The bimodule system ${\mathbbm1}$ above clearly has the structure
of a folding system, with $\lambda$ and $\rho$ both identity maps. Also the
bimodule system ${\mathbbm1}\oplus\Sigma$ is a folding system via the obvious
isomorphisms
\begin{align}
\label{lambda+}
\lambda_{A,A'}&:A\ox(A'\oplus\Sigma
A')\cong A\!\ox\!A'\oplus A\!\ox\!\Sigma A'\xto{1\oplus\sigma}
A\!\ox\!A'\oplus\Sigma(A\!\ox\!A'),\\
\label{rho+}
\rho_{A,A'}&:(A\oplus\Sigma A)\ox A'\cong A\!\ox\!A'\oplus(\Sigma A)\!\ox\!A'
\cong A\!\ox\!A'\oplus\Sigma(A\!\ox\!A')
\end{align}
where in \eqref{lambda+}, the interchange \eqref{sigma} for $\Sigma$ is used.
\end{Examples}
\begin{Lemma}\label{sigmafold}
The isomorphisms \eqref{lambda+}, \eqref{rho+} give the bimodule system
${\mathbbm1}\oplus\Sigma$ with the structure of a symmetric folding system on any
category ${\mathbf A}$ of algebras closed under tensor products.
\end{Lemma}
\begin{proof}
It is obvious that ${\mathbbm1}$ with the identity maps is a folding system, and
that a direct sum of folding systems is a folding system again, so it
suffices to show that $\Sigma$ is a folding system.
The right diagram in \eqref{natfold} is trivially commutative, while
commutativity of the left one follows from
\begin{multline*}
\sigma_{A_1,M(A'_1)}(f(a)\ox\Sigma f'(a'))
=(-1)^{\deg(a)}\Sigma(f(a)\ox f'(a'))\\
=\Sigma(f\ox f')((-1)^{\deg(a)}\Sigma(a\ox a'))
=\Sigma(f\ox f')\sigma_{A,M(A')}(a\ox\Sigma a')
\end{multline*}
for any $a\in A$, $a'\in A'$, $f:A\to A_1$, $f':A'\to A'_1$. Next, the
diagrams \eqref{unitfold} commute since $k$ is concentrated in degree 0.
The diagrams \eqref{rightfold} commute trivially, as only right actions are
involved. Commutativity of \eqref{leftfold} follows from the obvious
equality
$$
(-1)^{\deg(a)}\Sigma(a\ox(-1)^{\deg(a')}a'\ox a'')=(-1)^{\deg(a\ox
a')}\Sigma(a\ox a'\ox a'')
$$
and that of \eqref{midfold} is also obvious from
$$
\xymatrix@M=1em@!C=3em{
&(-1)^{\deg(a)}\Sigma(a\ox a')\ox a''\ar@{|->}[dr]\\
a\ox\Sigma(a')\ox a''\ar@{|->}[ur]\ar@{|->}[dr]
&&(-1)^{\deg(a)}\Sigma(a\ox a'\ox a'')\\
&a\ox\Sigma(a'\ox a'')\ar@{|->}[ur]
}
$$
\end{proof}
Thus by \bref{monofold} the folding system ${\mathbbm1}\oplus\Sigma$ yields a
well-defined monoidal category ${\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}$ of
\emph{${\mathbbm1}\oplus\Sigma$-algebras} as in \bref{malg}. The initial object and
at the same time the unit for the monoidal structure of
${\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}$ is by \bref{initial} and \bref{monofold}
$$
I_{{\mathbbm1}\oplus\Sigma}=\left({\mathbb F}\oplus\Sigma{\mathbb F}\xto0{\mathbb F}\right).
$$
For ${\mathbf{Alg}}^\r_{\mathbbm1}$ it is
$$
I_{\mathbbm1}=\left({\mathbb F}\xto0{\mathbb F}\right).
$$
The projections $q:A\oplus\Sigma A\to A$ can be used to construct a
monoidal functor
\begin{equation}\label{k1}
q:{\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}\to{\mathbf{Alg}}^\r_{\mathbbm1}
\end{equation}
carrying an object $D$ in ${\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}$ to the pushout in
the following diagram
$$
\xymatrix{
A\oplus\Sigma A\ar@{ >->}[r]\ar[d]_q\ar@{}[dr]|{\textrm{push}}
&D_1\ar[d]\ar[r]
&D_0\ar@{=}[d]\ar@{->>}[r]
&A\ar@{=}[d]\\
A\ar@{ >->}[r]
&q(D)_1\ar[r]
&q(D)_0\ar@{->>}[r]
&A.
}
$$
Evidently $q(I_{{\mathbbm1}\oplus\Sigma})=I_{\mathbbm1}$.
\section{Unfolding systems}\label{unfold}
It is clear how to dualize the constructions from the previous section
along the lines of section \ref{coalg}. We will not give detailed definitions but
only briefly indicate the underlying structures.
We thus consider a category ${\mathbf C}$ of graded $k$-coalgebras, and define a
right comodule system $N$ on ${\mathbf C}$ as an assignment, to each coalgebra $C$
in ${\mathbf C}$, of a $C$-comodule $N(C)$, and to each homomorphism $f:C\to C'$ of
coalgebras of an $f$-equivariant homomorphism $f_*:N(C)\to N(C')$, i.~e.
the diagram
$$
\xymatrix{
N(C)\ar[r]^-{\textrm{coaction}}\ar[d]_{f_*}&N(C)\ox C\ar[d]^{f_*\ox f}\\
N(C')\ar[r]^-{\textrm{coaction}}&N(C')\ox C'
}
$$
is required to commute. Similarly one defines left comodule systems and
bicomodule systems. As before, we have a bicomodule system ${\mathbbm1}$ given by
${\mathbbm1}(C)=C$ and also $\Sigma$, ${\mathbbm1}\oplus\Sigma$ defined dually to
\bref{sysex}.
Then further for a right comodule system $N$ on ${\mathbf C}$ and for a coalgebra $C$
from ${\mathbf C}$ one defines an $N$-coalgebra of type $C$ by dualizing
\bref{malg}. It is thus a pair $D^*=(d:D^0\to D^1)$ where $D^0$ is a
coalgebra, $D^1$ is a right $D^0$-comodule and $d$ is a comodule
homomorphism. Moreover one must have $\pi^0(D^*)=C$, $\pi^1(D^*)=N(C)$, and
the $C$-comodule structure on $N(C)$ induced by this must be the one coming
from the comodule system $N$. With morphisms defined dually to \bref{malg},
the $N$-coalgebras form a category ${\mathbf{Coalg}}^\r_N$. Similarly one
defines categories ${\mathbf{Coalg}}^\l_N$ and
${\mathbf{Coalg}}^{\mathit{pair}}_N\subset{\mathbf{Coalg}}^\b_N$ for a left,
resp. bicomodule system $N$. These categories have the initial object
$0:0\to N(0)$ and the terminal object $0:k\to N(k)$.
Also dually to \bref{defold} one defines \emph{unfolding systems} as
comodule systems $N$ equipped with $C\ox C'$-comodule homomorphisms
\begin{align*}
l^{C,C'}&:N(C\ox C')\to C\ox N(C')\\
r^{C,C'}&:N(C\ox C')\to N(C)\ox C'
\end{align*}
for all $C,C'\in{\mathbf C}$ required to satisfy obvious duals to the diagrams
\eqref{natfold} -- \eqref{midfold}. Also there is an obvious notion of a
symmetric unfolding system.
Then for an unfolding system $N$ we can dualize \eqref{fopr} to obtain
definition of the \emph{unfolding product} $D\check\ox D'$ of $N$-coalgebras
via the upper row in the diagram
$$
\xymatrix{
0\ar[r]
&C\ox C'\ar[r]\ar@{=}[d]
&D^0\ox{D'}^0\ar[r]^{d^{\check\ox}}\ar@{=}[d]
&(D\check\ox D')^1\ar[d]\ar[r]\ar@{}[dr]|{\mathrm{pull}}
&N(C\ox C')\ar[d]^{\binom{l^{C,C'}}{r^{C,C'}}}\ar[r]
&0\\
0\ar[r]
&C\ox C'\ar[r]
&(D\dblb\ox D')^0\ar[r]^{d^{\dblb\ox}}
&(D\dblb\ox D')^1\ar[r]
&C\!\ox\!N(C')\oplus N(C)\!\ox\!C'\ar[r]
&0
}
$$
where now the rightmost square is required to be pullback and the lower row
is exact by the dual of \bref{tox}.
It is then straightforward to dualize \bref{monofold}, so we conclude that
for any unfolding system $N$ the unfolding product equips the category
${\mathbf{Coalg}}_N^?$ with the structure of a monoidal category, symmetric
if $N$ is symmetric. Here, ``?'' stands for ``r'', ``l'', ``b'' or
``pair'', according to the type of $N$. Obviously also the dual of
\bref{sigmafold} holds, so that the categories
${\mathbf{Coalg}}_{\mathbbm1}^{\mathit{pair}}$ and
${\mathbf{Coalg}}_{{\mathbbm1}\oplus\Sigma}^\r$ have monoidal
structures given by the unfolding product.
\section{The ${\mathbb G}$-relation pair algebra of the Steenrod
algebra}\label{grel}
Fix a prime $p$, and let ${\mathbb G}={\mathbb Z}/p^2{\mathbb Z}$ be the ring of integers mod $p^2$,
with the quotient map ${\mathbb G}\onto{\mathbb F}={\mathbb F}_p={\mathbb Z}/p{\mathbb Z}$. Let ${\mathscr A}$ be the mod $p$
Steenrod algebra and let
$$
{\mathrm E}_{\mathscr A}=
\begin{cases}
\set{\Sq^1,\Sq^2,...}&\textrm{for $p=2$},\\
\set{{\mathrm P}^1,{\mathrm P}^2,...}\cup\set{\beta,\beta{\mathrm P}^1,\beta{\mathrm P}^2,...}&\textrm{for odd
$p$}
\end{cases}
$$
be the set of generators of the algebra ${\mathscr A}$. We consider the following
algebras and homomorphisms
\begin{equation}\label{BF}
\alignbox{
\xymatrix{
**[l]q:{\mathscr B}_0\ar@{=}[d]\ar@{->>}[r]
&{\mathscr F}_0\ar@{=}[d]\ar@{->>}[r]^{q_{\mathscr F}}
&{\mathscr A}\\
T_{\mathbb G}({\mathrm E}_{\mathscr A})
&T_{\mathbb F}({\mathrm E}_{\mathscr A})&.
}
}
\end{equation}
For a commutative ring $k$, $T_k(S)$ denotes the free
associative $k$-algebra with unit generated by the set $S$, i.~e. the tensor
algebra of the free $k$-module on $S$. The map $q_{\mathscr F}$ is the algebra
homomorphism which is the identity on ${\mathrm E}_{\mathscr A}$. For $f\in{\mathscr F}_0$ we denote the
element $q_{\mathscr F}(f)\in{\mathscr A}$ by
$$
\qf f=q_{\mathscr F}(f).
$$
Let $R_{\mathscr B}$ denote the kernel of $q$, i.~e. there is a short exact sequence
$$
\xymatrix{
R_{\mathscr B}\ar@{ >->}[r]&{\mathscr B}_0\ar@{->>}[r]^q&{\mathscr A}.
}
$$
This short exact sequence gives rise to a long exact sequence
$$
\xymatrix{
\Tor(R_{\mathscr B},{\mathbb F})\ar@{ >->}[r]
&\Tor({\mathscr B}_0,{\mathbb F})\ar[r]
&\Tor({\mathscr A},{\mathbb F})\ar[r]^i
&R_{\mathscr B}\ox{\mathbb F}\ar[r]
&{\mathscr B}_0\ox{\mathbb F}\ar@{->>}[r]
&{\mathscr A}\ox{\mathbb F}.
}
$$
Here $A\ox{\mathbb F}\cong A/pA$ and $\Tor(A,{\mathbb F})$ is just the $p$-torsion part of $A$
for an abelian group $A$, so the connecting homomorphism $i$ sends
$a=q(b)$ to $pb+pR_{\mathscr B}$. It follows that the second homomorphism in
the above sequence is zero. Moreover clearly we can identify
${\mathscr B}_0\ox{\mathbb F}={\mathscr F}_0$ and $\Tor({\mathscr A},{\mathbb F})={\mathscr A}$, so that there is an exact
sequence
\begin{equation}\label{prel}
\alignbox{
\xymatrix{
{\mathscr A}\ar@{ >->}[r]^i
&{\mathscr R}^{\mathbb F}_1\ar[r]^\d\ar@{=}[d]
&{\mathscr R}^{\mathbb F}_0\ar@{->>}[r]\ar@{=}[d]
&{\mathscr A}\\
&R_{\mathscr B}\ox{\mathbb F}
&{\mathscr F}_0
}
}
\end{equation}
One has
\begin{Lemma}\label{parel}
The pair ${\mathscr R}^{\mathbb F}=(\d:{\mathscr R}^{\mathbb F}_1\to{\mathscr R}^{\mathbb F}_0)$ above has a pair algebra
structure compatible with the standard bimodule structure of ${\mathscr A}$ on
itself, so that ${\mathscr R}^{\mathbb F}$ yields an object in ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$,
see \bref{malg}.
\end{Lemma}
\begin{proof}
Clearly mod $p$ reduction of any pair algebra over ${\mathbb G}$ is a pair algebra
over ${\mathbb F}$. Then let ${\mathscr R}^{\mathbb F}$ be the mod $p$ reduction of the pair algebra
$R_{\mathscr B}\into{\mathscr B}_0$. Thus the ${\mathscr F}_0$-${\mathscr F}_0$-bimodule structure on
${\mathscr R}^{\mathbb F}_1=R_{\mathscr B}/pR_{\mathscr B}$ is just the mod $p$ reduction of the
${\mathscr B}_0$-${\mathscr B}_0$-bimodule structure on $R_{\mathscr B}$, i.~e. $b'+p{\mathscr B}_0\in{\mathscr R}^{\mathbb F}_0={\mathscr B}_0/p{\mathscr B}_0$
acts on $r+pR_{\mathscr B}\in{\mathscr R}^{\mathbb F}_1=R_{\mathscr B}/pR_{\mathscr B}$ via
$$
(b'+p{\mathscr B}_0)(r+pR_{\mathscr B})=b'r+pR_{\mathscr B}.
$$
Moreover the above inclusion ${\mathscr A}\into R_{\mathscr B}/pR_{\mathscr B}$ sends an element $q(b)$
to $pb+pR_{\mathscr B}$. Then the action of $a'=q(b')\in{\mathscr A}$ on $i(a)=pb+pR_{\mathscr B}\in
i({\mathscr A})=\ker\d$ induced by this pair algebra is given as follows:
$$
a'i(a)=(b'+p{\mathscr B}_0)(pb+pR_{\mathscr B})=pb'b+pR_{\mathscr B}=iq(b'b)=i(a'a)
$$
and similarly for the right action.
\end{proof}
We call the object ${\mathscr R}^{\mathbb F}$ of the category ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$
the \emph{${\mathbb G}$-relation pair algebra of ${\mathscr A}$}.
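For instance, for $p=2$ the relation $\Sq^1\Sq^1=0$ in ${\mathscr A}$ shows that the element $\Sq^1\Sq^1\in{\mathscr B}_0=T_{\mathbb G}({\mathrm E}_{\mathscr A})$ lies in $R_{\mathscr B}=\ker q$, so its reduction gives an element $[\Sq^1\Sq^1]\in{\mathscr R}^{\mathbb F}_1$ with
$$
\d[\Sq^1\Sq^1]=\Sq^1\Sq^1\in{\mathscr F}_0={\mathscr R}^{\mathbb F}_0;
$$
similarly $p\cdot1\in R_{\mathscr B}$, and the inclusion $i$ of \eqref{prel} sends $1\in{\mathscr A}$ to $p\cdot1+pR_{\mathscr B}$.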
\begin{Theorem}\label{relcom}
The ${\mathbbm1}$-pair algebra ${\mathscr R}^{\mathbb F}$ has a structure of a cocommutative comonoid
in the symmetric monoidal category ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$.
\end{Theorem}
\begin{proof}
For $n\ge0$, let $R_{\mathscr B}^{(n)}$ denote the kernel of the map $q^{\ox n}$, so
that there is a short exact sequence
$$
\xymatrix{
R_{\mathscr B}^{(n)}\ar@{ >->}[r]&{\mathscr B}_0^{\ox n}\ar@{->>}[r]^{q^{\ox n}}&{\mathscr A}^{\ox n}
}
$$
and similarly to \bref{parel} there is a pair algebra of the form
$$
\xymatrix{
{\mathscr A}^{\ox n}\ar@{ >->}[r]
&R_{\mathscr B}^{(n)}\ox{\mathbb F}\ar[r]
&{\mathscr F}_0^{\ox n}\ar@{->>}[r]^{q_{\mathscr F}^{\ox n}}
&{\mathscr A}^{\ox n}
}
$$
determining an object ${\mathscr R}^{(n)}$ in ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$. Then one
has the following lemma which yields natural examples of folding products
in ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$.
\begin{Lemma}\label{rn}
There is a canonical isomorphism ${\mathscr R}^{(n)}\cong({\mathscr R}^{\mathbb F})^{\hat\ox n}$ in
${\mathbf{Alg}}_{\mathbbm1}^{\mathit{pair}}$.
\end{Lemma}
\begin{proof}
Using induction, we will assume given an isomorphism
$\alpha_n:({\mathscr R}^{\mathbb F})^{\hat\ox n}\cong{\mathscr R}^{(n)}$ and construct $\alpha_{n+1}$
in a canonical way. To do this it clearly suffices to construct a
canonical isomorphism ${\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{(n)}\cong{\mathscr R}^{(n+1)}$ as then its
composite with ${\mathscr R}^{\mathbb F}\hat\ox\alpha_n$ will give $\alpha_{n+1}$.
By \eqref{altox}, constructing a map $({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{(n)})_1\to{\mathscr R}^{(n+1)}_1$ is
the same as finding three dashed arrows making the diagram
$$
\xymatrix@!=1em{
&&R_{\mathscr B}\ox R_{\mathscr B}^{(n)}\ox{\mathbb F}\ar[ddl]\ar[ddr]\\
\\
&{\mathscr F}_0\ox R_{\mathscr B}^{(n)}\ar@{-->}[dr]&&R_{\mathscr B}\ox{\mathscr F}_0^{\ox n}\ar@{-->}[dl]\\
&&R_{\mathscr B}^{(n+1)}\ox{\mathbb F}\\
{\mathscr F}_0\ox{\mathscr A}^{\ox n}\ar[rr]\ar[uur]&&{\mathscr A}^{\ox(n+1)}\ar@{-->}[u]&&{\mathscr A}\ox{\mathscr F}_0^{\ox
n}\ar[ll]\ar[uul]
}
$$
commute. For this we use the commutative diagram
$$
\xymatrix@!=1em{
&&R_{\mathscr B}\ox R_{\mathscr B}^{(n)}\ar[ddl]\ar[ddr]\\
\\
&{\mathscr B}_0\ox R_{\mathscr B}^{(n)}\ar[dr]&&R_{\mathscr B}\ox{\mathscr B}_0^{\ox n}\ar[dl]\\
&&R_{\mathscr B}^{(n+1)}\\
{\mathscr B}_0\ox{\mathscr A}^{\ox n}\ar[rr]\ar[uur]&&{\mathscr A}^{\ox(n+1)}\ar[u]&&{\mathscr A}\ox{\mathscr B}_0^{\ox
n};\ar[ll]\ar[uul]
}
$$
This diagram has a commutative subdiagram
$$
\xymatrix@!=1em{
&&p(R_{\mathscr B}\ox R_{\mathscr B}^{(n)})\ar[ddl]\ar[ddr]\\
\\
&p{\mathscr B}_0\ox R_{\mathscr B}^{(n)}\ar[dr]&&R_{\mathscr B}\ox p{\mathscr B}_0^{\ox n}\ar[dl]\\
&&pR_{\mathscr B}^{(n+1)}\\
p{\mathscr B}_0\ox{\mathscr A}^{\ox n}\ar[rr]\ar[uur]&&0\ar[u]&&{\mathscr A}\ox p{\mathscr B}_0^{\ox
n};\ar[ll]\ar[uul]
}
$$
It is obvious that taking the quotient by this subdiagram gives us a diagram of the
kind we need.
We thus obtain a map $({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{(n)})_1\to R_{\mathscr B}^{(n+1)}\ox{\mathbb F}$.
Moreover by its construction this map fits into the commutative diagram
$$
\xymatrix{
{\mathscr A}^{\ox(n+1)}\ar@{=}[d]\ar@{ >->}[r]
&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{(n)})_1\ar[d]\ar[r]
&{\mathscr F}_0^{\ox(n+1)}\ar@{=}[d]\ar@{->>}[r]
&{\mathscr A}^{\ox(n+1)}\ar@{=}[d]\\
{\mathscr A}^{\ox(n+1)}\ar@{ >->}[r]
&R_{\mathscr B}^{(n+1)}\ox{\mathbb F}\ar[r]
&{\mathscr F}_0^{\ox(n+1)}\ar@{->>}[r]
&{\mathscr A}^{\ox(n+1)}
}
$$
with exact rows, hence by the five lemma it is an isomorphism.
\end{proof}
Using the lemma, we next construct the diagonal of ${\mathscr R}^{\mathbb F}$ given by
$$
\xymatrix{
R_{\mathscr B}\ox{\mathbb F}\ar@{=}[r]\ar[d]_{\Delta^{\mathbb G}\ox1}
&{\mathscr R}^{\mathbb F}_1\ar[r]^\d\ar[d]_\Delta
&{\mathscr R}^{\mathbb F}_0\ar@{=}[r]\ar[d]_\Delta
&{\mathscr F}_0\ar[d]^\Delta\\
R_{\mathscr B}^{(2)}\ox{\mathbb F}\ar[r]^-\cong
&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1\ar[r]^{\d_{\hat\ox}}
&{\mathscr R}^{\mathbb F}_0\ox{\mathscr R}^{\mathbb F}_0\ar@{=}[r]
&{\mathscr F}_0\ox{\mathscr F}_0.
}
$$
Here $\Delta^{\mathbb G}$ is defined by the commutative diagram
\begin{equation}\label{deltag}
\alignbox{
\xymatrix{
R_{\mathscr B}\ar[d]_{\Delta^{\mathbb G}}\ar@{ (->}[r]
&{\mathscr B}_0\ar[d]_{\Delta^{\mathbb G}}\\
R_{\mathscr B}^{(2)}\ar@{ (->}[r]
&{\mathscr B}_0\ox{\mathscr B}_0,
}
}
\end{equation}
where the diagonal $\Delta^{\mathbb G}$ on ${\mathscr B}_0$ is defined on generators by
$$
\begin{aligned}
\Delta^{\mathbb G}(\Sq^n)=
\sum_{i=0}^n\Sq^i\ox\Sq^{n-i}\hskip4.5em &\textrm{for $p=2$,}\\
\left.
\begin{aligned}
\Delta^{\mathbb G}(\beta)&=\beta\ox1+1\ox\beta,\\
\Delta^{\mathbb G}({\mathrm P}^n)&=\sum_{i+j=n}{\mathrm P}^i\ox{\mathrm P}^j,\\
\Delta^{\mathbb G}({\mathrm P}^n_\beta)&=\sum_{i+j=n}({\mathrm P}^i_\beta\ox{\mathrm P}^j+{\mathrm P}^i\ox{\mathrm P}^j_\beta)
\end{aligned}
\right\}&\textrm{for odd $p$}
\end{aligned}
$$
(with $\Sq^0=1$, ${\mathrm P}^0=1$ as usual) and extended to the whole ${\mathscr B}_0$ as
the unique algebra homomorphism with respect to the algebra structure on
${\mathscr B}_0\ox{\mathscr B}_0$ given by the nonstandard interchange formula
$$
\xymatrix{
&{\mathscr B}_0\ox{\mathscr B}_0\ox{\mathscr B}_0\ox{\mathscr B}_0\ar[dr]^{\mu\ox\mu}\\
{\mathscr B}_0\ox{\mathscr B}_0\ox{\mathscr B}_0\ox{\mathscr B}_0\ar[ur]^{1\ox T^{\mathbb G}\ox1}\ar[rr]^{\mu_\ox}
&&{\mathscr B}_0\ox{\mathscr B}_0
}
$$
with
\begin{align*}
&T^{\mathbb G}:{\mathscr B}_0\ox{\mathscr B}_0\xto\cong{\mathscr B}_0\ox{\mathscr B}_0\\
&T^{\mathbb G}(x\ox y)=(-1)^{p\deg(x)\deg(y)}y\ox x.
\end{align*}
In particular, clearly for all $p$ one has $T^{\mathbb G}\Delta^{\mathbb G}=\Delta^{\mathbb G}$, i.~e. the
coalgebra structure on ${\mathscr B}_0$ is cocommutative.
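For instance, at $p=2$ the formula above gives
$$
\Delta^{\mathbb G}(\Sq^2)=\Sq^2\ox1+\Sq^1\ox\Sq^1+1\ox\Sq^2,
$$
and since for $p=2$ the sign $(-1)^{p\deg(x)\deg(y)}$ is always $+1$, the
operator $T^{\mathbb G}$ simply interchanges the tensor factors and visibly fixes
this symmetric expression.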
The counit for ${\mathscr R}^{\mathbb F}$ is given by the diagram
\begin{equation}\label{rcounit}
\alignbox{
\xymatrix{
{\mathscr A}\ar@{ >->}[r]\ar[d]^\epsilon
&R_{\mathscr B}\ox{\mathbb F}\ar[r]\ar@{-->}[d]
&{\mathscr F}_0\ar@{->>}[r]\ar[d]^\epsilon
&{\mathscr A}\ar[d]^\epsilon\\
{\mathbb F}\ar@{=}[r]
&{\mathbb F}\ar[r]^0
&{\mathbb F}\ar@{=}[r]
&{\mathbb F}
}
}
\end{equation}
where the map $R_{\mathscr B}\ox{\mathbb F}\to{\mathbb F}$ sends the generator $(p1_{{\mathscr B}_0})\ox1$ in degree
$0$ to 1 and all elements in higher degrees to zero. It is then clear from
the formula for $\Delta^{\mathbb G}$ that this indeed gives a counit for this
diagonal.
Finally, to prove coassociativity, by the lemma it suffices to consider the
diagram
$$
\xymatrix@!=2em{
&&R_{\mathscr B}\ar@{_(->}[d]\ar[ddll]_{\Delta^{\mathbb G}}\ar[ddrr]^{\Delta^{\mathbb G}}\\
&&{\mathscr B}_0\ar[dl]_{\Delta^{\mathbb G}}\ar[dr]^{\Delta^{\mathbb G}}\\
R^{(2)}_{\mathscr B}\ar@{ (->}[r]\ar[ddrr]&{\mathscr B}_0^{\ox2}\ar[dr]_{1\ox\Delta^{\mathbb G}}
&&{\mathscr B}_0^{\ox2}\ar[dl]^{\Delta^{\mathbb G}\ox1}&R^{(2)}_{\mathscr B}\ar[ddll]\ar@{_(->}[l]\\
&&{\mathscr B}_0^{\ox3}\\
&&R^{(3)}_{\mathscr B}\ar@{ (->}[u]
}
$$
which commutes by coassociativity of the comultiplication $\Delta^{\mathbb G}$ on
${\mathscr B}_0$.
\end{proof}
\section{The algebra of secondary cohomology operations}\label{sec}
Let us next consider a derivation of degree 0 of the form
$$
\k:{\mathscr A}\to\Sigma{\mathscr A},
$$
uniquely determined by
\begin{equation}\label{kappa}
\begin{aligned}
\k\Sq^n=\Sigma\Sq^{n-1}\hskip.7em &\textrm{for $p=2$,}\\
\left.
\begin{aligned}
\k\beta&=\Sigma1,\\
\k({\mathrm P}^i)&=0,i\ge0
\end{aligned}
\right\}&\textrm{for odd $p$.}
\end{aligned}
\end{equation}
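As a consistency check at $p=2$, recall the Adem relations $\Sq^1\Sq^2=\Sq^3$
and $\Sq^1\Sq^1=0$; since $\k$ is a derivation, \eqref{kappa} indeed gives
$$
\k(\Sq^1\Sq^2)=\k(\Sq^1)\Sq^2+\Sq^1\k(\Sq^2)
=\Sigma\Sq^2+\Sigma\Sq^1\Sq^1=\Sigma\Sq^2=\k(\Sq^3),
$$
signs being irrelevant mod 2.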
We will use $\k$ to define an ${\mathscr A}$-${\mathscr A}$-bimodule
$$
{\mathscr A}\oplus_\k\Sigma{\mathscr A}
$$
as follows. The right ${\mathscr A}$-module structure is the same as on
${\mathscr A}\oplus\Sigma{\mathscr A}$ above, i.~e. one has $(x,\Sigma y)a=(xa,\Sigma ya)$. As
for the left ${\mathscr A}$-module structure, it is given by
$$
a(x,\Sigma y)=(ax,(-1)^{\deg(a)}\Sigma ay+\k(a)x).
$$
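For example, at $p=2$ one has in ${\mathscr A}\oplus_\k\Sigma{\mathscr A}$
$$
\Sq^1(\Sq^1,0)=(\Sq^1\Sq^1,\k(\Sq^1)\Sq^1)=(0,\Sigma\Sq^1),
$$
using $\Sq^1\Sq^1=0$ and $\k(\Sq^1)=\Sigma1$; in the untwisted bimodule
${\mathscr A}\oplus\Sigma{\mathscr A}$ the same product vanishes, which illustrates the effect
of the twisting by $\k$.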
There is a short exact sequence of ${\mathscr A}$-${\mathscr A}$-bimodules
$$
0\to\Sigma{\mathscr A}\to{\mathscr A}\oplus_\k\Sigma{\mathscr A}\to{\mathscr A}\to0
$$
given by the standard inclusion and projection.
\begin{Remark}\label{hoch}
The above short exact sequence of bimodules and the derivation $\k$
correspond to each other under the well known description of the first
Hochschild cohomology group in terms of bimodule extensions and derivations,
respectively. Indeed, more generally recall that for a graded $k$-algebra
$A$ and an $A$-$A$-bimodule $M$, one of the possible definitions of the
Hochschild cohomology of $A$ with coefficients in $M$ is
$$
HH^n(A;M)=\Ext^n_{A\ox_kA^\circ}(A,M).
$$
On the other hand, $HH^1(A;M)$ can be also described in terms of
derivations. Recall that an $M$-valued derivation on $A$ is a $k$-linear
map $\k:A\to M$ of degree 0 satisfying
$$
\k(xy)=\k(x)y+(-1)^{\deg(x)}x\k(y)
$$
for any $x,y\in A$. Such derivations form a $k$-vector space $\Der(A;M)$.
A derivation $\k$ is called inner if there is an $m\in M$ such that
$$
\k(x)=mx-(-1)^{\deg(x)}xm=\i_m(x)
$$
for all $x\in A$. These form a subspace $\Ider(A;M)\subset\Der(A;M)$ and
one has an isomorphism $HH^1(A;M)\cong\Der(A;M)/\Ider(A;M)$. Moreover
there is an exact sequence
$$
0\to HH^0(A;M)\to M\xto{\i_{\_}}\Der(A;M)\to HH^1(A;M)\to0.
$$
Explicitly, the isomorphism
$$
\Der(A;M)/\Ider(A;M)\cong\Ext^1_{A\ox A^\circ}(A,M),
$$
can be described by assigning to a class of a derivation $\k:A\to M$ the class of the
extension
$$
0\to M\to A\oplus_\k M\to A\to0
$$
where as a vector space, $A\oplus_\k M=A\oplus M$, the maps are the canonical
inclusion and projection and the bimodule structure is given by
\begin{align*}
a(x,m)&=(ax,am+\k(a)x),\\
(x,m)a&=(xa,ma).
\end{align*}
Obviously ${\mathscr A}\oplus_\k\Sigma{\mathscr A}$ above is an example of this
construction.
\end{Remark}
\begin{Definition}\label{hpa}
A \emph{Hopf pair algebra} ${\mathscr V}$ (associated to ${\mathscr A}$) is a pair algebra
$\d:{\mathscr V}_1\to{\mathscr V}_0$ over ${\mathbb F}$ together with the following commutative diagram in the
category of ${\mathscr F}_0$-${\mathscr F}_0$-bimodules
\begin{equation}\label{hpad}
\alignbox{
\xymatrix{
\Sigma{\mathscr A}\ar@{=}[r]\ar@{ >->}[d]
&\Sigma{\mathscr A}\ar@{ >->}[d]\\
{\mathscr A}\oplus_\k\Sigma{\mathscr A}\ar@{ >->}[r]\ar@{->>}[d]_q
&{\mathscr V}_1\ar[r]^\d\ar@{->>}[d]_q
&{\mathscr V}_0\ar@{=}[d]\ar@{->>}[r]
&{\mathscr A}\ar@{=}[d]\\
{\mathscr A}\ar@{ >->}[r]
&{\mathscr R}^{\mathbb F}_1\ar[r]
&{\mathscr R}^{\mathbb F}_0\ar@{->>}[r]
&{\mathscr A}
}
}
\end{equation}
with exact rows and columns. The pair morphism $q:{\mathscr V}\to{\mathscr R}^{\mathbb F}$ will be
called the \emph{${\mathbb G}$-structure} of ${\mathscr V}$. Moreover ${\mathscr V}$ has a structure
of a comonoid in ${\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}$ and $q$ is compatible with
the ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$-comonoid structure on ${\mathscr R}^{\mathbb F}$ in
\bref{relcom}, in the sense that the diagrams
\begin{equation}\label{diacomp}
\alignbox{
\xymatrix{
{\mathscr V}_1\ar[r]^-{\Delta_{\mathscr V}}\ar[d]_q
&({\mathscr V}\hat\ox{\mathscr V})_1\ar[d]^{q\hat\ox q}\\
{\mathscr R}^{\mathbb F}_1\ar[r]^-{\Delta_{\mathscr R}}
&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1
}
}
\end{equation}
and
\begin{equation}\label{coucomp}
\alignbox{
\xymatrix{
{\mathscr V}_1\ar[d]_q\ar[r]^-{\epsilon_{\mathscr V}}
&{\mathbb F}\oplus\Sigma{\mathbb F}\ar[d]\\
{\mathscr R}^{\mathbb F}_1\ar[r]^-{\epsilon_{\mathscr R}}
&{\mathbb F}
}
}
\end{equation}
commute.
\end{Definition}
We next observe that the following diagrams commute:
$$
\xymatrix{
{\mathscr A}\ar[rr]^\k\ar[d]_\delta
&&\Sigma{\mathscr A}\ar[d]^{\Sigma\delta}\\
{\mathscr A}\ox{\mathscr A}\ar[r]^{\k\ox1}
&\Sigma{\mathscr A}\ox{\mathscr A}\ar@{=}[r]
&\Sigma({\mathscr A}\!\ox\!{\mathscr A}),
}\ \ \ \ \ \ \
\xymatrix{
{\mathscr A}\ar[rr]^\k\ar[d]_\delta
&&\Sigma{\mathscr A}\ar[d]^{\Sigma\delta}\\
{\mathscr A}\ox{\mathscr A}\ar[r]^{1\ox\k}
&{\mathscr A}\ox\Sigma{\mathscr A}\ar[r]^\sigma
&\Sigma({\mathscr A}\!\ox\!{\mathscr A})
}
$$
where $\sigma$ is the interchange for $\Sigma$ in \eqref{sigma}. Or, on
elements,
\begin{equation}\label{kapid}
\sum\k(a_\l)\ox a_\r=\sum\k(a)_\l\ox\k(a)_\r=\sum\sigma(a_\l\ox\k(a_\r)),
\end{equation}
where we use the Sweedler notation for the diagonal
$$
\delta(x)=\sum x_\l\ox x_\r.
$$
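For example, at $p=2$ one has $\delta(\Sq^2)=\Sq^2\ox1+\Sq^1\ox\Sq^1+1\ox\Sq^2$,
and since $\k(\Sq^2)=\Sigma\Sq^1$, $\k(\Sq^1)=\Sigma1$ and $\k(1)=0$, all three
expressions in \eqref{kapid} evaluate on $a=\Sq^2$ to
$$
\Sigma\Sq^1\ox1+\Sigma1\ox\Sq^1.
$$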
\begin{Remark}
The above identities have a simple explanation using dualization. We will
see in \bref{dkappa} below that the map dual to $\k$ is the map
$\Sigma{\mathscr A}_*\to{\mathscr A}_*$ given, for $p=2$, by multiplication with the degree 1 generator
$\zeta_1\in{\mathscr A}_*$ and for odd $p$ by the degree 1 generator $\tau_0$.
Then the duals of \eqref{kapid} are the obvious identities for any $x,y\in{\mathscr A}_*$
$$
(\zeta_1x)y=\zeta_1(xy)=x(\zeta_1y)
$$
for $p=2$ and
$$
(\tau_0x)y=\tau_0(xy)=(-1)^{\deg(x)}x(\tau_0y)
$$
for odd $p$ (recall that ${\mathscr A}_*$ is graded commutative).
\end{Remark}
Using \eqref{kapid} we prove:
\begin{Lemma}\label{vleft}
For a Hopf pair algebra ${\mathscr V}$ there is a unique left action of ${\mathscr F}_0$ on
$({\mathscr V}\hat\ox{\mathscr V})_1$ such that the quotient map
$$
({\mathscr V}\bar\ox{\mathscr V})_1\onto({\mathscr V}\hat\ox{\mathscr V})_1
$$
is ${\mathscr F}_0$-equivariant. Here we use the pair algebra structure on
${\mathscr V}\bar\ox{\mathscr V}$ to equip $({\mathscr V}\bar\ox{\mathscr V})_1$ with an ${\mathscr F}_0\ox{\mathscr F}_0$-bimodule
structure and then turn it into a left ${\mathscr F}_0$-module via restriction of
scalars along $\Delta:{\mathscr F}_0\to{\mathscr F}_0\ox{\mathscr F}_0$.
\end{Lemma}
\begin{proof}
Uniqueness is clear as the module structure on the quotient of any module
$M$ by a submodule is clearly uniquely determined by the module structure
on $M$.
For the existence, consider the diagram
\begin{equation}\label{vcolim}
\alignbox{
\xymatrix{
{\mathscr F}_0\ox({\mathscr A}\oplus_\k\Sigma{\mathscr A})\ar[d]\ar[dr]|\hole
&{\mathscr V}_1\ox{\mathscr V}_1\ar[dr]\ar[dl]
&({\mathscr A}\oplus_\k\Sigma{\mathscr A})\ox{\mathscr F}_0\ar[d]\ar[dl]|\hole\\
{\mathscr F}_0\ox{\mathscr V}_1
&{\mathscr A}\!\ox\!{\mathscr A}\oplus_{\k\ox\!1}\Sigma({\mathscr A}\!\ox\!{\mathscr A})
&{\mathscr V}_1\ox{\mathscr F}_0.
}
}
\end{equation}
whose colimit, by \eqref{altox}, is $({\mathscr V}\hat\ox{\mathscr V})_1$, with the right
${\mathscr F}_0\ox{\mathscr F}_0$-module structure coming from the category
${\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}$. It then suffices to show that all maps in
this diagram are also left ${\mathscr F}_0$-equivariant, if one uses the left
${\mathscr F}_0$-module structure by restricting scalars along the diagonal
${\mathscr F}_0\to{\mathscr F}_0\ox{\mathscr F}_0$.
This is trivial except possibly for two of the maps involved. For the map
$$
\Phi:{\mathscr F}_0\ox({\mathscr A}\oplus_\k\Sigma{\mathscr A})\to{\mathscr A}\!\ox\!{\mathscr A}\oplus_{\k\ox\!1}\Sigma({\mathscr A}\!\ox\!{\mathscr A})
$$
given by
$$
\Phi(f'\ox(x,\Sigma y))=(\qf{f'}\ox x,(-1)^{\deg(f')}\Sigma\qf{f'}\ox y),
$$
this amounts to checking that for any $f,f'\in{\mathscr F}_0$ and $x,y\in{\mathscr A}$ one has
$$
\sum(f_\l\ox f_\r)(\qf{f'}\ox x,(-1)^{\deg(f')}\Sigma\qf{f'}\ox y)
=\Phi((-1)^{\deg(f_\r)\deg(f')}\sum f_\l f'\ox(\qf{f_\r}x,(-1)^{\deg(f_\r)}\Sigma
\qf{f_\r}y+\k(\qf{f_\r})x)),
$$
where again the above Sweedler notation
$$
\Delta(f)=\sum f_\l\ox f_\r,
$$
is used for the diagonal of ${\mathscr F}_0$ too, and $\qf{f'}$ denotes $q_{\mathscr F}(f')$ by
the notation in \eqref{BF}.
The left hand side expression then expands as
\begin{multline*}
\sum((-1)^{\deg(f_\r)\deg(f')}\qf{f_\l}\qf{f'}\ox\qf{f_\r}x,\\
(-1)^{\deg(f_\r)\deg(f')}(-1)^{\deg(f)}(-1)^{\deg(f')}\Sigma\qf{f_\l}
\qf{f'}\ox\qf{f_\r}y
+(-1)^{\deg(f_\r)\deg(f')}\k(\qf{f_\l})\qf{f'}\ox\qf{f_\r}x)
\end{multline*}
and the right hand side expands as
$$
(-1)^{\deg(f_\r)\deg(f')}
\sum(\qf{f_\l}\qf{f'}\ox \qf{f_\r}x,
(-1)^{\deg(f_\l f')}((-1)^{\deg(f_\r)}\Sigma\qf{f_\l}\qf{f'}\ox\qf{f_\r}y
+\qf{f_\l}\qf{f'}\ox\k(\qf{f_\r})x)).
$$
Thus left equivariance of $\Phi$ is equivalent to the equality
$$
\sum\k(\qf{f_\l})\qf{f'}\ox\qf{f_\r}x=\sum(-1)^{\deg(f_\l f')}\qf{f_\l}
\qf{f'}\ox\k(\qf{f_\r})x.
$$
This is easily deduced from
$$
\sum\k(\qf{f_\l})\ox\qf{f_\r}=\sum(-1)^{\deg(f_\l)}\qf{f_\l}\ox\k(\qf{f_\r}),
$$
which is an instance of \eqref{kapid}.
For another map
$$
\Psi:({\mathscr A}\oplus_\k\Sigma{\mathscr A})\ox{\mathscr F}_0\to{\mathscr A}\!\ox\!{\mathscr A}\oplus_{\k\ox\!1}\Sigma({\mathscr A}\!\ox\!{\mathscr A})
$$
given by
$$
\Psi((x,\Sigma y)\ox f')=(x\ox\qf{f'},\Sigma y\ox\qf{f'})
$$
the equality to check is
\begin{multline*}
\sum(f_\l\ox f_\r)(x\ox\qf{f'},\Sigma y\ox\qf{f'})\\
=\Psi((-1)^{\deg(f_\r)\deg(x,\Sigma y)}\sum(\qf{f_\l}x,(-1)^{\deg(f_\l)}\Sigma
\qf{f_\l}y+\k(\qf{f_\l})x)\ox f_\r f').
\end{multline*}
Here the left hand side expands as
\begin{multline*}
\sum((-1)^{\deg(f_\r)\deg(x)}\qf{f_\l}x\ox\qf{f_\r}\qf{f'},\\
(-1)^{\deg(f_\r)\deg(\Sigma y)}(-1)^{\deg(f_\l)}\Sigma\qf{f_\l}y\ox\qf{f_\r}\qf{f'}
+(-1)^{\deg(f_\r)\deg(x)}\k(\qf{f_\l})x\ox\qf{f_\r}\qf{f'})
\end{multline*}
and the right hand side expands as
$$
(-1)^{\deg(f_\r)\deg(x,\Sigma y)}\sum(\qf{f_\l}x\ox\qf{f_\r}
\qf{f'},(-1)^{\deg(f_\l)}\Sigma\qf{f_\l}y\ox
\qf{f_\r}\qf{f'}+\k(\qf{f_\l})x\ox\qf{f_\r}\qf{f'});
$$
these two expressions are visibly the same.
\end{proof}
Given this left module structure on $({\mathscr V}\hat\ox{\mathscr V})_1$, one can measure the
deviation from left equivariance of the diagonal
$\Delta_{\mathscr V}:{\mathscr V}_1\to({\mathscr V}\hat\ox{\mathscr V})_1$. For that, consider the map $\hat
L:{\mathscr V}_0\ox{\mathscr V}_1\to({\mathscr V}\hat\ox{\mathscr V})_1$ given by
$$
\hat L(f\ox x):=\Delta_{\mathscr V}(fx)-f\cdot\Delta_{\mathscr V}(x),
$$
for any $f\in{\mathscr F}_0={\mathscr V}_0$, $x\in{\mathscr V}_1$, where $\cdot$ denotes the left
${\mathscr F}_0$-module action defined in \bref{vleft}. Since the diagonal
$\Delta_{\mathscr R}$ of ${\mathscr R}^{\mathbb F}$ is left equivariant, it follows from
\eqref{diacomp} that the image of $\hat L$ lies in the kernel of the map
$q\hat\ox q$, i.~e. in $\Sigma{\mathscr A}\ox{\mathscr A}$. Moreover if $f=\d v_1$ for some
$v_1\in{\mathscr V}_1$, then one has
$$
\Delta_{\mathscr V}(\d(v_1)x)=\Delta_{\mathscr V}(v_1\d x)=\Delta_{\mathscr V}(v_1)\Delta_{\mathscr F}(\d
x)=\Delta_{\mathscr V}(v_1)\d_{\hat\ox}\Delta_{\mathscr V}(x)=\d_{\hat\ox}\Delta_{\mathscr V}(v_1)\Delta_{\mathscr V}(x)
=\Delta_{\mathscr F}(\d v_1)\Delta_{\mathscr V}(x),
$$
so that the image of $\d\ox{\mathscr V}_1:{\mathscr V}_1\ox{\mathscr V}_1\to{\mathscr V}_0\ox{\mathscr V}_1$ lies in the
kernel of $\hat L$. Similarly commutativity of
\begin{equation}\label{diacomm}
\alignbox{
\xymatrix{
{\mathscr V}_1\ar[r]^-{\Delta_{\mathscr V}}\ar[d]_\d
&({\mathscr V}\hat\ox{\mathscr V})_1\ar[d]^{\d_{\hat\ox}}\\
{\mathscr V}_0\ar[r]^-{\Delta_{\mathscr F}}
&{\mathscr V}_0\ox{\mathscr V}_0
}
}
\end{equation}
implies that ${\mathscr V}_0\ox\ker\d$ is in the kernel of $\hat L$. It then follows that
$\hat L$ factors uniquely through a map
$$
{\mathscr A}\ox R_{\mathscr F}=({\mathscr V}_0/\im\d)\ox({\mathscr V}_1/\ker\d)\to\ker(q\hat\ox q)=\Sigma{\mathscr A}\ox{\mathscr A}.
$$
\begin{Definition}\label{lv}
The map
$$
L_{\mathscr V}:{\mathscr A}\ox R_{\mathscr F}\to\Sigma{\mathscr A}\ox{\mathscr A}
$$
given by the unique factorization of the map $\hat L$ above measures the
deviation of the diagonal $\Delta_{\mathscr V}$ of the Hopf pair algebra ${\mathscr V}$
from left equivariance. That is, one has
$$
\Delta_{\mathscr V}(fx)=f\cdot\Delta_{\mathscr V}(x)+L_{\mathscr V}(\qf f\ox\d x)
$$
for any $f\in{\mathscr F}_0={\mathscr V}_0$, $x\in{\mathscr V}_1$ and the action $\cdot$ from \bref{vleft}.
\end{Definition}
Similarly one can measure the deviation of $\Delta_{\mathscr V}:{\mathscr V}_1\to({\mathscr V}\hat\ox{\mathscr V})_1$
from cocommutativity by means of the map $\hat S:{\mathscr V}_1\to({\mathscr V}\hat\ox{\mathscr V})_1$ given
by
$$
\hat S(x):=\Delta_{\mathscr V}(x)-T\Delta_{\mathscr V}(x),
$$
where $T:({\mathscr V}\hat\ox{\mathscr V})_1\to({\mathscr V}\hat\ox{\mathscr V})_1$ is the interchange operator for
${\mathbf{Alg}}^\r_{{\mathbbm1}\oplus\Sigma}$ as constructed in \bref{monofold}. Then
similarly to $\hat L$ above, $\hat S$ admits a factorization in the following way.
First, by commutativity of \eqref{diacomp} one has
$$
(q\hat\ox q)T\Delta_{\mathscr V}=T(q\hat\ox q)\Delta_{\mathscr V}=T\Delta_{\mathscr R} q=\Delta_{\mathscr R}
q=(q\hat\ox q)\Delta_{\mathscr V},
$$
since the ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$-comonoid ${\mathscr R}^{\mathbb F}$ is cocommutative.
Thus the image of $\hat S$ is contained in $\ker(q\hat\ox q)=\Sigma{\mathscr A}\ox{\mathscr A}$.
Next, commutativity of \eqref{diacomm} implies that $\ker\d$ is contained
in the kernel of $\hat S$. Hence $\hat S$ factors uniquely as follows
$$
R_{\mathscr F}={\mathscr V}_1/\ker\d\to\ker(q\hat\ox q)=\Sigma{\mathscr A}\ox{\mathscr A}.
$$
\begin{Definition}\label{S}
The map
$$
S_{\mathscr V}:R_{\mathscr F}\to\Sigma{\mathscr A}\ox{\mathscr A}
$$
given by the unique factorization of the map $\hat S$ above measures the
deviation of the diagonal $\Delta_{\mathscr V}$ of the Hopf pair algebra ${\mathscr V}$
from cocommutativity. That is, one has
$$
T\Delta_{\mathscr V}(x)=\Delta_{\mathscr V}(x)+S_{\mathscr V}(\d x)
$$
for any $x\in{\mathscr V}_1$.
\end{Definition}
It is clear from these definitions that $L_{\mathscr V}$ and $S_{\mathscr V}$ are well defined
maps determined by the Hopf pair algebra ${\mathscr V}$. Below in \bref{lao} we define the left
action operator $L:{\mathscr A}\ox R_{\mathscr F}\to\Sigma{\mathscr A}\ox{\mathscr A}$ and the symmetry operator
$S:R_{\mathscr F}\to\Sigma{\mathscr A}\ox{\mathscr A}$ with $L=0$ and $S=0$ if $p$ is odd. For $p=2$
these operators are quite intricate but explicitly given. We also will
study the dualization of $S$ and $L$.
The next two results are essentially reformulations of the main results in
the book \cite{Baues}.
\begin{Theorem}[Existence]\label{exist}
There exists a Hopf pair algebra ${\mathscr V}$ with $L_{\mathscr V}=L$ and $S_{\mathscr V}=S$.
\end{Theorem}
\begin{Theorem}[Uniqueness]\label{unique}
The Hopf pair algebra ${\mathscr V}$ satisfying $L_{\mathscr V}=L$ and $S_{\mathscr V}=S$ is unique up
to an isomorphism over the ${\mathbb G}$-structure ${\mathscr V}\to{\mathscr R}^{\mathbb F}$ and under the
kernel ${\mathscr A}\oplus_\k\Sigma{\mathscr A}\into{\mathscr V}$.
\end{Theorem}
The Hopf pair algebra appearing in these theorems is the \emph{algebra of
secondary cohomology operations} over ${\mathbb F}$, denoted by
${\mathscr B}^{\mathbb F}=({\mathscr B}_1^{\mathbb F}\to{\mathscr B}_0^{\mathbb F})={\mathscr B}\ox{\mathbb F}$. The algebra ${\mathscr B}$ has been defined
over ${\mathbb G}$ in \cite{Baues}.
\begin{proof}[Proof of \bref{exist}]
Recall that in \cite{Baues}*{12.1.8} a folding product $\hat\ox$ is defined
for pair ${\mathbb G}$-algebras in such a way that ${\mathscr B}$ has a comonoid structure with
respect to it, i.~e. a \emph{secondary Hopf algebra} structure. Let
$$
\Delta_1:{\mathscr B}_1\to({\mathscr B}\hat\ox{\mathscr B})_1
$$
be the corresponding \emph{secondary diagonal} from
\cite{Baues}*{(12.2.2)}. It is proved in \cite{Baues}*{14.4} that the left
action operator $L$ satisfies
$$
\Delta_1(bx)=b\Delta_1(x)+L(q(b)\ox(\d x\ox1))
$$
for $b\in{\mathscr B}_0$, $x\in{\mathscr B}_1$, $\d x\ox1\in R_{\mathscr B}\ox{\mathbb F}={\mathscr R}^{\mathbb F}_1$.
Also in \cite{Baues}*{14.5} it is proved that the symmetry operator $S$
satisfies
$$
T\Delta_1(x)=\Delta_1(x)+S(\d x\ox1)
$$
for $x\in{\mathscr B}_1$. Moreover it is proved in \cite{Baues}*{15.3.13} that the
secondary Hopf algebra ${\mathscr B}$ is determined uniquely up to isomorphism by the
maps $\k$, $L$ and $S$.
Consider now the diagram
$$
\xymatrix{
\Sigma{\mathscr A}\ar@{=}[r]\ar@{ >->}[d]
&\Sigma{\mathscr A}\ar@{ >->}[d]\\
{\mathscr A}\oplus_\k\Sigma{\mathscr A}\ar@{ >->}[r]^{i_\k}\ar@{->>}[d]_q
&{\mathscr B}_1\ox{\mathbb F}\ar[r]^{\d\ox1}\ar@{->>}[d]_{q=\d\ox1}
&{\mathscr B}_0\ox{\mathbb F}\ar@{=}[d]\ar@{->>}[r]
&{\mathscr A}\ar@{=}[d]\\
{\mathscr A}\ar@{ >->}[r]
&R_{\mathscr B}\ox{\mathbb F}\ar[r]
&{\mathscr F}_0\ar@{->>}[r]
&{\mathscr A}.
}
$$
Here the inclusion $i_\k:{\mathscr A}\oplus_\k\Sigma{\mathscr A}\into{\mathscr B}_1\ox{\mathbb F}$ is given by
the inclusion $\Sigma{\mathscr A}\subset{\mathscr B}_1$ and by the map
$$
{\mathscr A}\to{\mathscr B}_1\ox{\mathbb F}
$$
which assigns to an element $q(b)\in{\mathscr A}$, for $b\in{\mathscr B}_0$, the element
$[p]\cdot b\ox1$. Then it is clear that $i_\k$ is a right ${\mathscr A}$-module
homomorphism. Moreover it is also a left ${\mathscr A}$-module homomorphism since for
$b\in{\mathscr B}_0$ the following identity holds in ${\mathscr B}_1$:
$$
b\cdot[p]-[p]\cdot b=\k(b).
$$
Compare \cite{Baues}*{A20 in the introduction}. Now one can check that the
properties of ${\mathscr B}$ established in \cite{Baues} yield the result.
\end{proof}
\begin{Remark}\label{tmp}
For elements $\alpha,\beta,\gamma\in{\mathscr A}$ with $\alpha\beta=0$ and
$\beta\gamma=0$ the \emph{triple Massey product}
$$
\brk{\alpha,\beta,\gamma}\in{\mathscr A}/(\alpha{\mathscr A}+{\mathscr A}\gamma)
$$
is defined. Here the degree of elements in $\brk{\alpha,\beta,\gamma}$ is
$\deg(\alpha)+\deg(\beta)+\deg(\gamma)-1$. We can compute
$\brk{\alpha,\beta,\gamma}$ by use of the Hopf pair algebra ${\mathscr B}^{\mathbb F}$ above
as follows. For this we consider the maps
$$
\xymatrix{
{\mathscr A}&{\mathscr B}_0\supset R_{\mathscr B}\ar@{->>}[l]_{q_{\mathscr B}}\ar@{->>}[r]^{q_R}&R_{\mathscr B}\ox{\mathbb F}.
}
$$
We choose elements $\bar\alpha,\bar\beta,\bar\gamma\in{\mathscr B}_0$ which $q_{\mathscr B}$
carries to $\alpha,\beta,\gamma$ respectively. Then we know that the
products $\bar\alpha\bar\beta$, $\bar\beta\bar\gamma$ are elements in
$R_{\mathscr B}$ for which we can choose elements $x,y\in{\mathscr B}_1\ox{\mathbb F}$ with
\begin{align*}
q(x)&=q_R(\bar\alpha\bar\beta),\\
q(y)&=q_R(\bar\beta\bar\gamma).
\end{align*}
Then the bimodule structure of ${\mathscr B}_1\ox{\mathbb F}$ yields the element $\bar\alpha
y-x\bar\gamma$ in the kernel $\Sigma{\mathscr A}$ of $q:{\mathscr B}_1\ox{\mathbb F}\to R_{\mathscr B}\ox{\mathbb F}$.
Now $\bar\alpha y-x\bar\gamma\in\Sigma{\mathscr A}$ represents
$\brk{\alpha,\beta,\gamma}$, see \cite{Baues}.
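For the simplest example at $p=2$ note that $\Sq^1\Sq^1=0$, so the triple
Massey product
$$
\brk{\Sq^1,\Sq^1,\Sq^1}\in{\mathscr A}/(\Sq^1{\mathscr A}+{\mathscr A}\Sq^1)
$$
is defined; it lies in degree $1+1+1-1=2$, where the indeterminacy
$\Sq^1{\mathscr A}+{\mathscr A}\Sq^1$ vanishes (the degree 1 part of ${\mathscr A}$ is spanned by
$\Sq^1$ and $\Sq^1\Sq^1=0$), so that here the Massey product is an actual
element of ${\mathscr A}$ of degree 2.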
\end{Remark}
\section{The dual of the ${\mathbb G}$-relation pair algebra}
We next turn to the dualization of the ${\mathbb G}$-relation pair algebra of the
Steenrod algebra from section \ref{grel}.
For this we just apply the duality functor $D$ to \eqref{prel}. There results an exact
sequence
$$
\xymatrix{
{\mathscr A}_*\ar@{ >->}[r]
&{\mathscr R}^0_{\mathbb F}\ar[r]^d
&{\mathscr R}^1_{\mathbb F}\ar@{->>}[r]
&{\mathscr A}_*,
}
$$
i.~e. the sequence
\begin{equation}\label{drel}
\alignbox{
\xymatrix{
{\mathscr A}_*\ar@{ >->}[r]
&D({\mathscr R}^{\mathbb F}_0)\ar@{=}[d]\ar[r]^{D(\d)}&D({\mathscr R}^{\mathbb F}_1)\ar@{=}[d]\ar@{->>}[r]
&{\mathscr A}_*\\
&\Hom({\mathscr F}_0,{\mathbb F})&\Hom(R_{\mathscr B},{\mathbb F}).
}
}
\end{equation}
In particular, by the dual of \bref{parel} one has
\begin{Lemma}\label{dparel}
The pair ${\mathscr R}_{\mathbb F}=(d:{\mathscr R}^0_{\mathbb F}\to{\mathscr R}^1_{\mathbb F})$ has a pair coalgebra structure
compatible with the standard bicomodule structure of ${\mathscr A}_*$ over itself, so
that ${\mathscr R}_{\mathbb F}$ yields an object in ${\mathbf{Coalg}}^{\mathit{pair}}_{\mathbbm1}$, see section \ref{unfold}.
\end{Lemma}
Moreover the dual of \bref{relcom} takes place, i.~e. one has
\begin{Theorem}\label{drelcom}
The pair coalgebra ${\mathscr R}_{\mathbb F}$ has a structure of a commutative monoid in the
category ${\mathbf{Coalg}}_{\mathbbm1}^{\mathit{pair}}$ with respect to the
unfolding product $\check\ox$.
\end{Theorem}\qed
The proof uses the duals of the pair algebras ${\mathscr R}^{(n)}$, $n\ge0$, from
\bref{relcom}. Namely, applying to the short exact sequence
$$
\xymatrix{
R_{\mathscr B}^{(n)}\ar@{ >->}[r]&{\mathscr B}_0^{\ox n}\ar@{->>}[r]^{q^{\ox n}}&{\mathscr A}^{\ox n}
}
$$
the functor $D=\Hom(\_,{\mathbb F})$ gives, similarly to \bref{dparel}, a pair
coalgebra
$$
{\mathscr R}_*^{(n)}=\left(
\xymatrix{
{\mathscr A}_*^{\ox n}\ar@{ >->}[r]&{\mathscr F}_*^{\ox
n}\ar[r]&{R^{(n)}_{\mathscr B}}_*\ar@{->>}[r]&{\mathscr A}_*^{\ox n}
}
\right)
$$
such that the following dual of \bref{rn} holds:
\begin{Lemma}
There is a canonical isomorphism ${\mathscr R}_*^{(n)}\cong({\mathscr R}_{\mathbb F})^{\check\ox n}$ in
${\mathbf{Coalg}}^{\mathit{pair}}_{\mathbbm1}$.
\end{Lemma}\qed
Using this lemma one constructs the $\check\ox$-monoid structure on
${\mathscr R}_{\mathbb F}$ by the diagram
$$
\xymatrix{
{\mathscr F}_*\ox{\mathscr F}_*\ar@{=}[r]\ar[d]^{\Delta_*}
&{\mathscr R}^0_{\mathbb F}\ox{\mathscr R}^0_{\mathbb F}\ar[r]^{d_{\check\ox}}\ar[d]^\mu
&({\mathscr R}_{\mathbb F}\check\ox{\mathscr R}_{\mathbb F})^1\ar[r]^-\cong\ar[d]^\mu
&{R_{\mathscr B}^{(2)}}_*\ar[d]^{\Delta^{\mathbb G}_*}\\
{\mathscr F}_*\ar@{=}[r]
&{\mathscr R}^0_{\mathbb F}\ar[r]^d
&{\mathscr R}_{\mathbb F}^1\ar@{=}[r]
&{R_{\mathscr B}}_*
}
$$
with $\Delta^{\mathbb G}$ as in \bref{deltag}.
Moreover the unit of ${\mathscr R}_{\mathbb F}$ is given by the dual of \bref{rcounit}, i.~e.
by the diagram
$$
\xymatrix{
{\mathbb F}\ar@{=}[r]\ar[d]^1
&{\mathbb F}\ar[r]^0\ar[d]^1
&{\mathbb F}\ar@{=}[r]\ar@{-->}[d]
&{\mathbb F}\ar[d]^1\\
{\mathscr A}_*\ar@{ >->}[r]
&{\mathscr F}_*\ar[r]
&{R_{\mathscr B}}_*\ar@{->>}[r]
&{\mathscr A}_*
}
$$
so that the unit element of ${R_{\mathscr B}}_*$ is the map $R_{\mathscr B}\to{\mathbb F}$ sending the
generator $p1_{{\mathscr B}_0}$ in degree 0 to $1$ and all elements in higher degrees
to zero.
\section{Hopf pair coalgebras}
We next turn to the dualization of the notion of a Hopf pair algebra from
\bref{hpa}, using the dual ${\mathscr R}_{\mathbb F}$ of ${\mathscr R}^{\mathbb F}$ from the previous section.
\begin{Definition}\label{hpc}
A \emph{Hopf pair coalgebra} ${\mathscr W}$ (associated to ${\mathscr A}_*$) is a pair
coalgebra $d:{\mathscr W}^0\to{\mathscr W}^1$ over ${\mathbb F}$ together with the following
commutative diagram in the category of ${\mathscr F}_*$-${\mathscr F}_*$-bicomodules
$$
\xymatrix{
{\mathscr A}_*\ar@{ >->}[r]\ar@{=}[d]
&{\mathscr R}^0_{\mathbb F}\ar[r]\ar@{=}[d]
&{\mathscr R}^1_{\mathbb F}\ar@{ >->}[d]^i\ar@{->>}[r]
&{\mathscr A}_*\ar@{ >->}[d]^i\\
{\mathscr A}_*\ar@{ >->}[r]
&{\mathscr W}^0\ar[r]^d
&{\mathscr W}^1\ar@{->>}[d]^{\pi_\Sigma}\ar@{->>}[r]^-{\binom\pi{\pi_\Sigma}}
&{\mathscr A}_*\oplus_{\k_*}\Sigma{\mathscr A}_*\ar@{->>}[d]\\
&&\Sigma{\mathscr A}_*\ar@{=}[r]
&\Sigma{\mathscr A}_*
}
$$
with exact rows and columns. The pair morphism $i:{\mathscr R}_{\mathbb F}\to{\mathscr W}$ will be
called the \emph{${\mathbb G}$-structure of ${\mathscr W}$}. Moreover ${\mathscr W}$ must be equipped
with a structure of a monoid $(m_{\mathscr W},1_{\mathscr W})$ in ${\mathbf{Coalg}}^\r_{{\mathbbm1}\oplus\Sigma}$ such
that $i$ is compatible with the ${\mathbf{Coalg}}^{\mathit{pair}}_{\mathbbm1}$-monoid
structure on ${\mathscr R}_{\mathbb F}$ from \bref{drelcom}, i.~e. diagrams dual to
\eqref{diacomp} and \eqref{coucomp}
$$
\xymatrix{
({\mathscr R}_{\mathbb F}\check\ox{\mathscr R}_{\mathbb F})^1\ar[r]^-{m_{\mathscr R}}\ar[d]_{i\check\ox i}
&{\mathscr R}_{\mathbb F}^1\ar[d]^i\\
({\mathscr W}\check\ox{\mathscr W})^1\ar[r]^-{m_{\mathscr W}}
&{\mathscr W}^1,
}\ \ \ \
\xymatrix{
{\mathbb F}\ar[r]^{1_{\mathscr R}}\ar[d]
&{\mathscr R}_{\mathbb F}^1\ar[d]^i\\
{\mathbb F}\oplus\Sigma{\mathbb F}\ar[r]^{1_{\mathscr W}}
&{\mathscr W}^1
}
$$
commute.
\end{Definition}
We next note that the dual of \bref{vleft} holds; more precisely, one has
\begin{Lemma}
For a Hopf pair coalgebra ${\mathscr W}$ the subspace
$$
({\mathscr W}\check\ox{\mathscr W})^1\subset({\mathscr W}\dblb\ox{\mathscr W})^1
$$
is closed under the left coaction of the coalgebra ${\mathscr F}_*$ on
$({\mathscr W}\dblb\ox{\mathscr W})^1$ given by the corestriction of scalars along the
multiplication $m_*:{\mathscr F}_*\ox{\mathscr F}_*\to{\mathscr F}_*$ of the left
${\mathscr F}_*\ox{\mathscr F}_*=({\mathscr W}\dblb\ox{\mathscr W})^0$-comodule structure given by the pair
coalgebra ${\mathscr W}\dblb\ox{\mathscr W}$. In other words, there is a unique map
$m^\l:({\mathscr W}\check\ox{\mathscr W})^1\to{\mathscr F}_*\ox({\mathscr W}\check\ox{\mathscr W})^1$ making the diagram
$$
\xymatrix{
({\mathscr W}\check\ox{\mathscr W})^1\ar@{ >->}[d]\ar@{-->}[rr]^{m^\l}
&&{\mathscr F}_*\ox({\mathscr W}\check\ox{\mathscr W})^1\ar@{ >->}[d]\\
({\mathscr W}\dblb\ox{\mathscr W})^1\ar[r]
&{\mathscr F}_*\ox{\mathscr F}_*\ox({\mathscr W}\dblb\ox{\mathscr W})^1\ar[r]^-{m_*\ox1}
&{\mathscr F}_*\ox({\mathscr W}\dblb\ox{\mathscr W})^1
}
$$
commute.
\end{Lemma}\qed
Given this left coaction, one can define the dual of the left action
operator in \bref{lv} by measuring deviation of the multiplication
$({\mathscr W}\check\ox{\mathscr W})^1\to{\mathscr W}^1$ from being a left comodule homomorphism. For
that, one first observes that the map $\check L:({\mathscr W}\check\ox{\mathscr W})^1\to{\mathscr F}_*\ox{\mathscr W}^1$
is given by the difference of two composites in the diagram
$$
\xymatrix{
({\mathscr W}\check\ox{\mathscr W})^1\ar[r]^-{m^\l}\ar[d]_{m_{\mathscr W}}
&{\mathscr F}_*\ox({\mathscr W}\check\ox{\mathscr W})^1\ar[d]^{1\ox m_{\mathscr W}}\\
{\mathscr W}^1\ar[r]^-{m^\l}
&{\mathscr F}_*\ox{\mathscr W}^1.
}
$$
Then by the argument dual to that before \bref{lv} one sees that the map
$\check L$ factors uniquely through $\coker(i\check\ox
i)=\left(({\mathscr W}\check\ox{\mathscr W})^1\onto\Sigma{\mathscr A}_*\ox{\mathscr A}_*\right)$ and into
$\ker(d)\ox\im(d)=\left({\mathscr A}_*\ox{R_{\mathscr F}}_*\into{\mathscr W}^0\ox{\mathscr W}^1\right)$ to yield a map
$\Sigma{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr A}_*\ox{R_{\mathscr F}}_*$. We thus can make, dually to \bref{lv},
the following
\begin{Definition}
The map
$$
L_{\mathscr W}:\Sigma{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr A}_*\ox{R_{\mathscr F}}_*
$$
given by the unique factorization of the map $\check L$ above is
characterized by the deviation of the multiplication $m_{\mathscr W}$ of the Hopf
pair coalgebra ${\mathscr W}$ from being a left ${\mathscr F}_*$-comodule homomorphism. That
is, for any $t\in({\mathscr W}\check\ox{\mathscr W})^1$ one has
$$
(1\ox m_{\mathscr W})m^\l(t)=m^\l
m_{\mathscr W}(t)+L_{\mathscr W}(\pi_\Sigma\check\ox\pi_\Sigma)(t).
$$
\end{Definition}
Next, we define a map $S_{\mathscr W}$ in a manner dual to \bref{S}, measuring
noncommutativity of the ${\mathbf{Coalg}}^\r_{{\mathbbm1}\oplus\Sigma}$-monoid structure on
${\mathscr W}$. For that, we first consider the map $\check
S:({\mathscr W}\check\ox{\mathscr W})^1\to{\mathscr W}^1$ given by
$$
\check S(t)=m_{\mathscr W} T(t)-m_{\mathscr W}(t)
$$
for $t\in({\mathscr W}\check\ox{\mathscr W})^1$ and then observe that, dually to \bref{S}, this
map factors uniquely through $\coker(i\check\ox
i)=\left(({\mathscr W}\check\ox{\mathscr W})^1\onto\Sigma{\mathscr A}_*\ox{\mathscr A}_*\right)$ and into
$\im(d)=\left({R_{\mathscr F}}_*\into{\mathscr W}^1\right)$ so we have
\begin{Definition}
The map
$$
S_{\mathscr W}:\Sigma{\mathscr A}_*\ox{\mathscr A}_*\to{R_{\mathscr F}}_*
$$
given by the unique factorization of the map $\check S$ above is
characterized by being the graded commutator map with respect to the
$\check\ox$-monoid structure on the Hopf pair coalgebra ${\mathscr W}$. That is, for
any $t\in({\mathscr W}\check\ox{\mathscr W})^1$ one has
$$
m_{\mathscr W} T(t)=m_{\mathscr W}(t)+S_{\mathscr W}(\pi_\Sigma\check\ox\pi_\Sigma)(t).
$$
\end{Definition}
We now dualize the left action operator \bref{lao} and the symmetry
operator \bref{so}.
\begin{Definition}
The \emph{left coaction operator}
$$
L_*:{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr A}_*\ox{R_{\mathscr F}}_*
$$
of degree $+1$ is the graded dual of the left action operator \bref{lao}.
\end{Definition}
\begin{Definition}\label{cosym}
The \emph{cosymmetry operator}
$$
S_*:{\mathscr A}_*\ox{\mathscr A}_*\to{R_{\mathscr F}}_*
$$
of degree $+1$ is the graded dual of the symmetry operator \bref{so}.
\end{Definition}
It is clear that the duals of \bref{exist} and \bref{unique} hold. Let us
state these explicitly.
\begin{Theorem}[Existence]
There exists a Hopf pair coalgebra ${\mathscr W}$ with $L_{\mathscr W}=L_*$ and $S_{\mathscr W}=S_*$.
\end{Theorem}
\begin{Theorem}[Uniqueness]
The Hopf pair coalgebra ${\mathscr W}$ satisfying $L_{\mathscr W}=L_*$ and $S_{\mathscr W}=S_*$ is unique
up to an isomorphism over ${\mathscr W}\onto{\mathscr A}_*\oplus_{\k_*}\Sigma{\mathscr A}_*$ and under ${\mathscr R}_{\mathbb F}\into{\mathscr W}$.
\end{Theorem}
The Hopf pair coalgebra appearing in these theorems will be denoted by
${\mathscr B}_{\mathbb F}=({\mathscr B}^0_{\mathbb F}\to{\mathscr B}^1_{\mathbb F})=D({\mathscr B}^{\mathbb F})$.
\endinput
\chapter{Generators of ${\mathscr B}_{\mathbb F}$ and dual generators of ${\mathscr B}^{\mathbb F}$}\label{gens}
In this chapter we describe polynomial generators in the dual Steenrod algebra
${\mathscr A}_*$ and in the dual of the free tensor algebra $T_{\mathbb F}(E_{\mathscr A})$ with the
Cartan diagonal. We use these results to obtain generators in the dual of the
relation module $R_{\mathscr F}$.
\section{The Milnor dual of the Steenrod algebra}
Here we recall the needed facts from \cite{Milnor}. The graded dual of the
Hopf algebra ${\mathscr A}$ is the Milnor Hopf algebra ${\mathscr A}_*=\Hom({\mathscr A},{\mathbb F})=D({\mathscr A})$. It is proved in
\cite{Milnor} that for odd $p$ as an algebra ${\mathscr A}_*$ is a graded polynomial
algebra, i.~e. it is isomorphic to a tensor product of an exterior algebra
on generators of odd degree and a polynomial algebra on generators of even
degree; for $p=2$ the algebra ${\mathscr A}_*$ is a polynomial algebra. Moreover, in
\cite{Milnor}, explicit generators are given in terms of the admissible
basis.
First recall that the admissible basis for ${\mathscr A}$ is given by the following
monomials: for odd $p$ they are of the form
$$
M=\beta^{\epsilon_0}{\mathrm P}^{s_1}\beta^{\epsilon_1}{\mathrm P}^{s_2}\cdots{\mathrm P}^{s_n}\beta^{\epsilon_n}
$$
where $\epsilon_k\in\{0,1\}$ and
$$
s_1\ge\epsilon_1+ps_2,
s_2\ge\epsilon_2+ps_3,\dots,s_{n-1}\ge\epsilon_{n-1}+ps_n,s_n\ge1.
$$
Then let $\xi_k\in{\mathscr A}_{2(p^k-1)}=\Hom({\mathscr A}^{2(p^k-1)},{\mathbb F})$, $k\ge1$ and
$\tau_k\in{\mathscr A}_{2p^k-1}=\Hom({\mathscr A}^{2p^k-1},{\mathbb F})$, $k\ge0$ be given on this
basis by
\begin{equation}
\xi_k(M)=
\begin{cases}
1,&M={\mathrm P}^{p^{k-1}}{\mathrm P}^{p^{k-2}}\cdots{\mathrm P}^p{\mathrm P}^1,\\
0&\textrm{otherwise}
\end{cases}
\end{equation}
and
\begin{equation}
\tau_k(M)=
\begin{cases}
1,&M={\mathrm P}^{p^{k-1}}{\mathrm P}^{p^{k-2}}\cdots{\mathrm P}^p{\mathrm P}^1\beta,\\
0&\textrm{otherwise}.
\end{cases}
\end{equation}
As proved in \cite{Milnor}, ${\mathscr A}_*$ is a graded polynomial algebra on these
elements, i.~e. it is generated by the elements $\xi_k$ and $\tau_k$ with
the defining relations
\begin{align*}
\xi_i\xi_j&=\xi_j\xi_i,\\
\xi_i\tau_j&=\tau_j\xi_i,\\
\tau_i\tau_j&=-\tau_j\tau_i
\end{align*}
only.
For $p=2$, the admissible basis for ${\mathscr A}$ is given by the monomials
$$
M=\Sq^{s_1}\Sq^{s_2}\cdots\Sq^{s_n}
$$
with
$$
s_1\ge2s_2,s_2\ge2s_3,\dots,s_{n-1}\ge2s_n,s_n\ge1
$$
and the polynomial generators of ${\mathscr A}_*$ are elements
$\zeta_k\in{\mathscr A}_{2^k-1}=\Hom({\mathscr A}^{2^k-1},{\mathbb F})$ given by
\begin{equation}\label{milgen}
\zeta_k(M)=
\begin{cases}
1,&M=\Sq^{2^{k-1}}\Sq^{2^{k-2}}\cdots\Sq^2\Sq^1,\\
0&\textrm{otherwise}.
\end{cases}
\end{equation}
In terms of these generators, likewise, the coalgebra structure
$m_*:{\mathscr A}_*\to{\mathscr A}_*\ox{\mathscr A}_*$ dual to the multiplication $m$ of ${\mathscr A}$
is determined in \cite{Milnor}. Namely, for odd $p$ one has
\begin{equation}
\alignbox{
m_*(\xi_k)&=\xi_k\ox1+\xi_{k-1}^p\ox\xi_1+\xi_{k-2}^{p^2}\ox\xi_2+\dots+\xi_1^{p^{k-1}}\ox\xi_{k-1}+1\ox\xi_k,\\
m_*(\tau_k)&=\xi_k\ox\tau_0+\xi_{k-1}^p\ox\tau_1+\xi_{k-2}^{p^2}\ox\tau_2+\dots+\xi_1^{p^{k-1}}\ox\tau_{k-1}+1\ox\tau_k+\tau_k\ox1.
}
\end{equation}
For $p=2$ one has
\begin{equation}\label{mildiag}
m_*(\zeta_k)=\zeta_k\ox1+\zeta_{k-1}^2\ox\zeta_1+\zeta_{k-2}^4\ox\zeta_2+\dots+\zeta_1^{2^{k-1}}\ox\zeta_{k-1}+1\ox\zeta_k.
\end{equation}
We will need an expression for the dual $\Sq^1_*:{\mathscr A}_*\to\Sigma{\mathscr A}_*$ to the map
$\Sq^1\cdot:\Sigma{\mathscr A}\to{\mathscr A}$ given by multiplication with $\Sq^1$ from the
left.
\begin{Lemma}\label{sqd}
The map $\Sq^1_*$ is equal to $\frac\partial{\partial\zeta_1}$. That is, on
the monomial basis it is given by
$$
\Sq^1_*(\zeta_1^{n_1}\zeta_2^{n_2}\cdots)=
\begin{cases}
\zeta_1^{n_1-1}\zeta_2^{n_2}\cdots,&n_1\equiv1\mod2\\
0,&n_1\equiv0\mod2.
\end{cases}
$$
\end{Lemma}
\begin{proof}
Note that $\Sq^1_*$ is a derivation, since $\Sq^1\cdot$ is a coderivation,
i.e. the diagram
$$
\xymatrix{
\Sigma{\mathscr A}\ar[rr]^{\Sq^1\cdot}\ar[d]_{\Sigma\delta}
&&{\mathscr A}\ar[d]^\delta\\
\Sigma{\mathscr A}\ox{\mathscr A}\ar[r]^-{\binom{\Sq^1\cdot\ox1}{1\ox\Sq^1\cdot}}
&{\mathscr A}\!\ox\!{\mathscr A}\x{\mathscr A}\!\ox\!{\mathscr A}\ar[r]^-{+}
&{\mathscr A}\ox{\mathscr A}
}
$$
commutes: indeed for any $x\in{\mathscr A}$ one has
$$
\delta(\Sq^1x)=\delta(\Sq^1)\delta(x)=(\Sq^1\ox1+1\ox\Sq^1)\delta(x)
=(\Sq^1\ox1)\delta(x)+(1\ox\Sq^1)\delta(x).
$$
On the other hand, the derivation on the Milnor generators $\Sq^1_*$ acts
as follows:
$$
\Sq^1_*(\zeta_n)(x)=\zeta_n(\Sq^1x)=
\begin{cases}
1,&\Sq^1x=\Sq^{2^{n-1}}\Sq^{2^{n-2}}\cdots\Sq^1,\\
0,&\Sq^1x\ne\Sq^{2^{n-1}}\Sq^{2^{n-2}}\cdots\Sq^1.
\end{cases}
$$
It follows that $\Sq^1_*(\zeta_1)=1$; on the other hand for $n>1$ the
equation $\Sq^1x=\Sq^{2^{n-1}}\Sq^{2^{n-2}}\cdots\Sq^1$ has no solutions,
since it would imply
$\Sq^1\Sq^{2^{n-1}}\Sq^{2^{n-2}}\cdots\Sq^1=\Sq^1\Sq^1x=0$, whereas
actually
$$
\Sq^1\Sq^{2^{n-1}}\Sq^{2^{n-2}}\cdots\Sq^1=\Sq^{1+2^{n-1}}\Sq^{2^{n-2}}\cdots\Sq^1\ne0.
$$
But $\frac\partial{\partial\zeta_1}$ is the unique derivation sending
$\zeta_1$ to 1 and all other $\zeta_n$'s to 0.
\end{proof}
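As an illustration of the lemma, one can model polynomials in the $\zeta_n$ over ${\mathbb F}_2$ and check both the formula for $\Sq^1_*$ on monomials and the derivation property. The encoding in the following Python sketch (a monomial $\zeta_1^{n_1}\zeta_2^{n_2}\cdots$ as its exponent tuple, a polynomial as the set of its monomials, symmetric difference playing the role of addition over ${\mathbb F}_2$) is our own bookkeeping, not notation from the text.

```python
import random

def norm(m):
    # Drop trailing zero exponents so equal monomials compare equal.
    while m and m[-1] == 0:
        m = m[:-1]
    return m

def mul(p, q):
    # Product of F_2-polynomials in the zeta_n: add exponent tuples,
    # symmetric difference acting as addition of coefficients mod 2.
    out = frozenset()
    for a in p:
        for b in q:
            n = max(len(a), len(b))
            a2, b2 = a + (0,) * (n - len(a)), b + (0,) * (n - len(b))
            out ^= {norm(tuple(x + y for x, y in zip(a2, b2)))}
    return out

def sq1(p):
    # Sq^1_* = d/d(zeta_1): kill monomials whose zeta_1-exponent is
    # even, lower that exponent by one otherwise.
    out = frozenset()
    for m in p:
        if m and m[0] % 2 == 1:
            out ^= {norm((m[0] - 1,) + m[1:])}
    return out

# Sq^1_*(zeta_1^3 zeta_2) = zeta_1^2 zeta_2
assert sq1(frozenset({(3, 1)})) == frozenset({(2, 1)})

# The derivation (Leibniz) property over F_2, on random polynomials.
random.seed(0)
def rand_poly():
    out = frozenset()
    for _ in range(random.randrange(1, 4)):
        out ^= {norm(tuple(random.randrange(3) for _ in range(3)))}
    return out

for _ in range(200):
    p, q = rand_poly(), rand_poly()
    assert sq1(mul(p, q)) == mul(sq1(p), q) ^ mul(p, sq1(q))
```

Over ${\mathbb F}_2$ the two Leibniz terms cancel exactly when both factors have odd $\zeta_1$-exponent, which is why the identity holds with no signs.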
We will also need an expression for the dual $\k_*$ of the
derivation $\k$ from \eqref{kappa} in terms of the above generators.
\begin{Lemma}\label{dkappa}
The map $\k_*:\Sigma{\mathscr A}_*\to{\mathscr A}_*$ is equal to the left multiplication by $\tau_0$
for odd $p$ and by $\zeta_1$ for $p=2$.
\end{Lemma}
\begin{proof}
For any linear map $\phi:{\mathscr A}^n\to{\mathbb F}$ the map
$\k_*(\phi):{\mathscr A}_{n+1}\to{\mathbb F}$ is the composite of $\phi$ with
$\k:{\mathscr A}_{n+1}\to{\mathscr A}_n$. Thus for $p$ odd one has
\begin{equation}\label{kodd}
\k_*(\phi)(\beta^{\epsilon_0}{\mathrm P}^{s_1}\beta^{\epsilon_1}{\mathrm P}^{s_2}\cdots{\mathrm P}^{s_n}\beta^{\epsilon_n})
=\sum_{\epsilon_k=1}(-1)^{\epsilon_0+\epsilon_1+\dots+\epsilon_{k-1}}
\phi(\beta^{\epsilon_0}{\mathrm P}^{s_1}\beta^{\epsilon_1}\cdots\beta^{\epsilon_{k-1}}{\mathrm P}^{s_k}{\mathrm P}^{s_{k+1}}\beta^{\epsilon_{k+1}}\cdots{\mathrm P}^{s_n}\beta^{\epsilon_n}).
\end{equation}
On the other hand, one has for $M$ as above
$$
(\tau_0\phi)(M)=\sum\tau_0(M_\l)\phi(M_\r)=\sum_{\substack{M_\l=c\beta\\0\ne
c\in{\mathbb F}}}c\phi(M_\r),
$$
if
$$
\delta(M)=\sum M_\l\ox M_\r.
$$
Moreover, one evidently has
$$
\delta(\beta^{\epsilon_0}{\mathrm P}^{s_1}\beta^{\epsilon_1}{\mathrm P}^{s_2}\cdots{\mathrm P}^{s_n}\beta^{\epsilon_n})
=\sum_{\substack{0\le\iota_0\le\epsilon_0\\0\le i_1\le
s_1\\0\le\iota_1\le\epsilon_1\\\cdots\\0\le i_n\le
s_n\\0\le\iota_n\le\epsilon_n}}
(-1)^{\sum_{0\le\mu<\nu\le n}(\epsilon_\mu-\iota_\mu)\iota_\nu}
\beta^{\iota_0}{\mathrm P}^{i_1}\beta^{\iota_1}\cdots{\mathrm P}^{i_n}\beta^{\iota_n}
\ox
\beta^{\epsilon_0-\iota_0}{\mathrm P}^{s_1-i_1}\beta^{\epsilon_1-\iota_1}\cdots{\mathrm P}^{s_n-i_n}\beta^{\epsilon_n-\iota_n}
$$
so that for
$M=\beta^{\epsilon_0}{\mathrm P}^{s_1}\beta^{\epsilon_1}\cdots{\mathrm P}^{s_n}\beta^{\epsilon_n}$
one has
\begin{multline*}
\sum_{\substack{M_\l=c\beta\\0\ne c\in{\mathbb F}}}c\phi(M_\r)
=\sum_{\epsilon_k=1}\sum_{\substack{\iota_0=0\\i_1=0\\\cdots\\i_k=0\\\iota_k=1\\i_{k+1}=0\\\cdots\\i_n=0\\\iota_n=0}}
(-1)^{\sum_{0\le\mu<\nu\le n}(\epsilon_\mu-\iota_\mu)\iota_\nu}
\phi(\beta^{\epsilon_0-\iota_0}{\mathrm P}^{s_1-i_1}\beta^{\epsilon_1-\iota_1}\cdots{\mathrm P}^{s_n-i_n}\beta^{\epsilon_n-\iota_n})\\
=\sum_{\epsilon_k=1}
(-1)^{\sum_{0\le\mu<k}\epsilon_\mu}
\phi(\beta^{\epsilon_0}{\mathrm P}^{s_1}\beta^{\epsilon_1}\cdots{\mathrm P}^{s_k}{\mathrm P}^{s_{k+1}}\beta^{\epsilon_{k+1}}\cdots{\mathrm P}^{s_n}\beta^{\epsilon_n})
\end{multline*}
which is the same as \eqref{kodd} above.
Similarly for $p=2$ the map $\k_*(\phi)$ is given by
\begin{equation}\label{k2}
\k_*(\phi)(\Sq^{s_1}\cdots\Sq^{s_n})=\phi(\k(\Sq^{s_1}\cdots\Sq^{s_n}))=\sum_{k=1}^n\phi(\Sq^{s_1}\cdots\Sq^{s_k-1}\cdots\Sq^{s_n})
\end{equation}
and the map $\zeta_1\phi$ is given by
$$
(\zeta_1\phi)(M)=\sum\zeta_1(M_\l)\phi(M_\r)=\sum_{M_\l=\Sq^1}\phi(M_\r).
$$
On the other hand one has
$$
\delta(\Sq^{s_1}\cdots\Sq^{s_n})=\sum_{\substack{0\le i_1\le
s_1\\\cdots\\0\le i_n\le
s_n}}\Sq^{i_1}\cdots\Sq^{i_n}\ox\Sq^{s_1-i_1}\cdots\Sq^{s_n-i_n},
$$
so that for $M=\Sq^{s_1}\cdots\Sq^{s_n}$ one has
$$
\sum_{M_\l=\Sq^1}\phi(M_\r)=\sum_{k=1}^n\sum_{\substack{i_1=0\\\cdots\\i_{k-1}=0\\i_k=1\\i_{k+1}=0\\\cdots\\i_n=0}}
\phi(\Sq^{s_1-i_1}\cdots\Sq^{s_n-i_n})
$$
which is equal to \eqref{k2}.
\end{proof}
It is clear that with respect to the coalgebra structure on ${\mathscr A}_*$ the map
$\k_*$ is a coderivation, i.~e. the diagram
$$
\xymatrix@!C{
\Sigma{\mathscr A}_*\ar[rr]^{\k_*}\ar[d]_{\Sigma m_*}
&&{\mathscr A}_*\ar[d]^{m_*}\\
\Sigma({\mathscr A}_*\!\ox\!{\mathscr A}_*)\ar[r]^-{\binom1{\sigma}}
&\Sigma{\mathscr A}_*\!\ox\!{\mathscr A}_*\oplus{\mathscr A}_*\!\ox\!\Sigma{\mathscr A}_*\ar[r]^-{(\k_*\ox1,1\ox\k_*)}
&{\mathscr A}_*\ox{\mathscr A}_*
}
$$
is commutative. Here $\sigma$ is the interchange of $\Sigma$ as in
\eqref{sigma}. Then using dual of the construction mentioned in \bref{hoch}
one may equip the vector space ${\mathscr A}_*\oplus\Sigma{\mathscr A}_*$ with a structure of
an ${\mathscr A}_*$-${\mathscr A}_*$-bicomodule, in such a way that one has a short exact
sequence of ${\mathscr A}_*$-${\mathscr A}_*$-bicomodules
\begin{equation}
0\to{\mathscr A}_*\to{\mathscr A}_*\oplus_{\k_*}\Sigma{\mathscr A}_*\to\Sigma{\mathscr A}_*\to0.
\end{equation}
Explicitly, one defines the right coaction of ${\mathscr A}_*$ on
${\mathscr A}_*\oplus_{\k_*}\Sigma{\mathscr A}_*$ as the direct sum of standard coactions on
${\mathscr A}_*$ and on $\Sigma{\mathscr A}_*$, whereas the left coaction is given by the
composite
$$
{\mathscr A}_*\oplus\Sigma{\mathscr A}_*\xto{m_*\oplus\Sigma m_*}
{\mathscr A}_*\!\ox\!{\mathscr A}_*\oplus\Sigma{\mathscr A}_*\!\ox\!{\mathscr A}_*
\xto{\left(\begin{smallmatrix}1&\k_*\ox1\\0&\sigma\end{smallmatrix}\right)}
{\mathscr A}_*\!\ox\!{\mathscr A}_*\oplus{\mathscr A}_*\!\ox\!\Sigma{\mathscr A}_*\cong{\mathscr A}_*\ox({\mathscr A}_*\oplus\Sigma{\mathscr A}_*).
$$
\section{The dual of the tensor algebra ${\mathscr F}_0=T_{\mathbb F}(E_{\mathscr A})$ for $p=2$}
We begin by recalling the constructions from \cite{Hazewinkel} relevant to our case.
The \emph{Leibniz-Hopf algebra} is the free graded associative ring with
unit $1=Z_0$
\begin{equation}
{\mathscr Z}=T_{\mathbb Z}\{Z_1,Z_2,...\}
\end{equation}
on generators $Z_n$, one for each degree $n\ge1$. Here we use notation as
in \eqref{BF}. ${\mathscr Z}$ is a cocommutative Hopf algebra with respect to the
diagonal
$$
\Delta(Z_n)=\sum_{i=0}^nZ_i\ox Z_{n-i}.
$$
Of course for $p=2$ we have ${\mathscr Z}\ox{\mathbb F}={\mathscr F}_0=T_{\mathbb F}(E_{\mathscr A})$ by identifying
$Z_i=\Sq^i$, and moreover the diagonal $\Delta$ corresponds to
$\Delta^{\mathbb G}\ox{\mathbb F}$ in \eqref{deltag}. The graded dual of ${\mathscr Z}$ over the
integers is denoted by ${\mathscr M}$; it is proved in \cite{Hazewinkel} that it is
a polynomial algebra. A certain set of elements of ${\mathscr M}$ is also given
there; it is still a conjecture (first formulated by Ditters) that these
elements form a set of polynomial generators for ${\mathscr M}$. If, however, one
localizes at any prime $p$, then there is another set of elements, defined
using the so-called \emph{$p$-elementary words}, which, as proved in
\cite{Hazewinkel}, is a set of polynomial generators for the localized
algebra ${\mathscr M}_{(p)}$. This in
particular gives a polynomial generating set for
${\mathscr F}_*=\Hom({\mathscr F}_0,{\mathbb F}_2)\cong{\mathscr M}/2{\mathscr M}$. Moreover it
turns out that the embedding ${\mathscr A}_*\into{\mathscr F}_*$ given by $\Hom({\mathscr A},{\mathbb F}_2)\into\Hom({\mathscr F}_0,{\mathbb F}_2)$ (dual
to the quotient map ${\mathscr F}_0\onto{\mathscr A}$) carries the Milnor generators of ${\mathscr A}_*$
to a subset of these generators.
Choose a basis in ${\mathscr M}$ which is dual to the (noncommutative) monomial
basis in ${\mathscr Z}$: for any sequence $\alpha=(d_1,...,d_n)$ of positive
integers, let $M_\alpha=M_{d_1,...,d_n}$ be the element of the free
abelian group ${\mathscr M}^{d_1+...+d_n}=\Hom({\mathscr Z}^{d_1+...+d_n},{\mathbb Z})$ determined by
$$
M_{d_1,...,d_n}(Z_{k_1}\cdots Z_{k_m})=
\begin{cases}
1,&(k_1,...,k_m)=(d_1,...,d_n),\\
0&\textrm{otherwise.}
\end{cases}
$$
Since ${\mathscr Z}$ is a free algebra, dually ${\mathscr M}$ is a cofree coalgebra, i.~e. the
diagonal is given by deconcatenation:
\begin{equation}\label{deconc}
\Delta(M_{d_1,...,d_n})=\sum_{i=0}^nM_{d_1,...,d_i}\ox M_{d_{i+1},...,d_n}.
\end{equation}
It is noted in \cite{Hazewinkel} (and easy to check) that in this basis the
multiplication in ${\mathscr M}$ is given by the so-called \emph{overlapping shuffle
product}. Rather than defining this rigorously, we will give some examples.
\begin{align*}
M_5M_{2,4,1,9}&=M_{5,2,4,1,9}+M_{7,4,1,9}+M_{2,5,4,1,9}+M_{2,9,1,9}+M_{2,4,5,1,9}+M_{2,4,6,9}\\
&+M_{2,4,1,5,9}+M_{2,4,1,14}+M_{2,4,1,9,5};\\
M_{8,5}M_{1,2}&=M_{8,5,1,2}+M_{8,6,2}+M_{8,1,5,2}+M_{9,5,2}+M_{8,1,7}+M_{9,7}+M_{1,8,5,2}\\
&+M_{1,8,7}+M_{1,8,2,5}+M_{9,2,5}+M_{1,2,8,5}+M_{1,10,5}+M_{8,1,2,5}
\end{align*}
Thus in general, whereas the ordinary shuffle product of the elements, say,
$M_{a_1,a_2,a_3}$ and $M_{b_1,b_2,b_3,b_4,b_5}$ contains all possible
summands like $M_{b_1,a_1,a_2,b_2,b_3,a_3,b_4,b_5}$, the overlapping
shuffle product contains, in addition to each such summand, also the
summands of the form
$M_{b_1+a_1,a_2,b_2,b_3,a_3,b_4,b_5}$,
$M_{b_1,a_1,a_2+b_2,b_3,a_3,b_4,b_5}$,
$M_{b_1,a_1,a_2,b_2,b_3+a_3,b_4,b_5}$,
$M_{b_1,a_1,a_2,b_2,b_3,a_3+b_4,b_5}$,
$M_{b_1+a_1,a_2+b_2,b_3,a_3,b_4,b_5}$, and so on, obtained by replacing
an $a_i$ and a $b_j$ standing next to each other with their sum, in all
possible positions.
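Since the overlapping shuffle product is introduced here only through examples, a short Python sketch may be useful; the function name osp and the encoding of a basis element $M_{d_1,\dots,d_n}$ as the tuple of its indices are our own conventions. The assertion reproduces the first displayed product.

```python
from collections import Counter

def osp(a, b):
    # Overlapping shuffle product of two index sequences, given as
    # tuples.  Each summand M_{c_1,...,c_k} corresponds to one branch
    # of the recursion: take the head of a, the head of b, or their
    # sum (an "overlap"), then continue with the remaining entries.
    if not a:
        yield b
        return
    if not b:
        yield a
        return
    for t in osp(a[1:], b):
        yield (a[0],) + t
    for t in osp(a, b[1:]):
        yield (b[0],) + t
    for t in osp(a[1:], b[1:]):
        yield (a[0] + b[0],) + t

# Reproduce the first displayed example, M_5 M_{2,4,1,9}.
product = Counter(osp((5,), (2, 4, 1, 9)))
expected = {(5, 2, 4, 1, 9), (7, 4, 1, 9), (2, 5, 4, 1, 9), (2, 9, 1, 9),
            (2, 4, 5, 1, 9), (2, 4, 6, 9), (2, 4, 1, 5, 9), (2, 4, 1, 14),
            (2, 4, 1, 9, 5)}
assert product == Counter(expected)
```

The second displayed product comes out the same way: the thirteen summands of $M_{8,5}M_{1,2}$ listed above appear each with coefficient $1$.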
Note that the algebra of ordinary shuffles is also a polynomial algebra,
but only over the rationals; it is \emph{not} a polynomial algebra as long
as at least one prime remains uninverted. On the other hand, over the
rationals ${\mathscr M}$ becomes isomorphic to the algebra of ordinary shuffles.
To define a polynomial generating set for ${\mathscr M}$, we need some definitions.
To conform with the admissible basis in the Steenrod algebra, which
consists of monomials with decreasing indices, we will reverse the order of
indices in the definitions from \cite{Hazewinkel}, where the indices go in the
increasing order. Thus in our case statements about some $M_{d_1,...,d_n}$
will be equivalent to the corresponding ones in \cite{Hazewinkel} about
$M_{d_n,...,d_1}$.
\begin{Definitions}
The \emph{lexicographic order} on the basis $M_{d_1,...,d_n}$ of ${\mathscr M}$ is
defined by declaring $M_{d_1,...,d_n}>M_{e_1,...,e_m}$ if either there is
an $i$ with $0\le i<\min(n,m)$ such that $d_n=e_m$, $d_{n-1}=e_{m-1}$, ...,
$d_{n-i+1}=e_{m-i+1}$ and $d_{n-i}>e_{m-i}$,
or $n>m$ and $d_{n-m+1}=e_1$, $d_{n-m+2}=e_2$, ..., $d_n=e_m$.
A basis element $M_{d_1,...,d_n}$ is \emph{Lyndon} if with respect to this
ordering one has $M_{d_1,...,d_n}<M_{d_1,...,d_i}$ for all $1\le i<n$. For
example, $M_{3,2,3,2,2}$ and $M_{2,2,1,2,1,2,1}$ are Lyndon but
$M_{3,2,2,3,2}$ and $M_{2,1,2,1,2,1}$ are not.
A basis element $M_{d_1,...,d_n}$ is \emph{${\mathbb Z}$-elementary} if no number
$>1$ divides all of the $d_i$, i.~e. $\gcd(d_1,...,d_n)=1$. The set
${\mathrm{ESL}}({\mathbb Z})$ is the set of elementary basis elements of the form
$M_{d_1,...,d_n,d_1,...,d_n,....,d_1,...,d_n}$ (i.~e. $d_1,...,d_n$
repeated any number of times), where $M_{d_1,...,d_n}$ is a
Lyndon element.
For a prime $p$, a basis element $M_{d_1,...,d_n}$ is called
\emph{$p$-elementary} if there is a $d_i$ not divisible by $p$, i.~e.
$p\nmid\gcd(d_1,...,d_n)$. The set ${\mathrm{ESL}}(p)$ is defined as the set of
$p$-elementary basis elements of the form
$$
M_{\underbrace{\scriptstyle{d_1,...,d_n,d_1,...,d_n,...,d_1,...,d_n}}_{p^r\textrm{
times}}}
$$
with $d_1,...,d_n$ repeated $p^r$ times for some $r$, where
$M_{d_1,...,d_n}$ is required to be Lyndon.
\end{Definitions}
For example, $M_{15,6,15,6,15,6,15,6}$ is in ${\mathrm{ESL}}(2)$ but not in ${\mathrm{ESL}}({\mathbb Z})$
or in ${\mathrm{ESL}}(p)$ for any other $p$, whereas $M_{30,6,6}$ is in ${\mathrm{ESL}}(p)$ for
any $p\ne2,3$ but not in ${\mathrm{ESL}}(2)$, not in ${\mathrm{ESL}}(3)$ and not in ${\mathrm{ESL}}({\mathbb Z})$.
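These definitions are easy to mechanize, and doing so confirms the examples just given. In the Python sketch below (all function names are ours), index sequences are tuples, and the order compares them from the right, matching the reversed convention adopted above.

```python
from functools import reduce
from math import gcd

def less(d, e):
    # d < e in the right-to-left lexicographic order defined above:
    # reversed tuples are compared entrywise, and when one reversed
    # sequence is a proper prefix of the other, the longer is greater.
    return tuple(reversed(d)) < tuple(reversed(e))

def is_lyndon(d):
    # Smaller than each proper prefix M_{d_1,...,d_i}, 1 <= i < n.
    return all(less(d, d[:i]) for i in range(1, len(d)))

def is_power(r, p):
    while r % p == 0:
        r //= p
    return r == 1

def in_esl(d, p=None):
    # Membership in ESL(Z) (p=None) or in ESL(p).
    g = reduce(gcd, d)
    if p is None and g != 1:
        return False            # not Z-elementary
    if p is not None and g % p == 0:
        return False            # not p-elementary
    n = len(d)
    # d must be a Lyndon block repeated r times; only the primitive
    # (shortest) period can be Lyndon, since periodic words are not.
    blen = next(k for k in range(1, n + 1)
                if n % k == 0 and d == d[:k] * (n // k))
    if not is_lyndon(d[:blen]):
        return False
    r = n // blen
    return True if p is None else is_power(r, p)

# The examples from the text:
assert is_lyndon((3, 2, 3, 2, 2)) and is_lyndon((2, 2, 1, 2, 1, 2, 1))
assert not is_lyndon((3, 2, 2, 3, 2)) and not is_lyndon((2, 1, 2, 1, 2, 1))

w = (15, 6) * 4
assert in_esl(w, 2)
assert not in_esl(w) and not in_esl(w, 3) and not in_esl(w, 5)

v = (30, 6, 6)
assert all(in_esl(v, p) for p in (5, 7, 11))
assert not in_esl(v, 2) and not in_esl(v, 3) and not in_esl(v)
```

Enumerating small degrees with in_esl also reproduces the generator tables below, e.~g. in degree $3$ one finds exactly $M_3,M_{2,1}$ for $p=2$ and $M_{2,1},M_{1,1,1}$ over ${\mathbb Z}$.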
One then has
\begin{Theorem}[\cite{Hazewinkel}]
The algebra ${\mathscr M}$ is a polynomial algebra.
\end{Theorem}
\begin{Conjecture}[Ditters, \cite{Hazewinkel}]
The set ${\mathrm{ESL}}({\mathbb Z})$ is the set of polynomial generators for ${\mathscr M}$.
\end{Conjecture}
\begin{Theorem}[\cite{Hazewinkel}]\label{eslp}
For each prime $p$, the set ${\mathrm{ESL}}(p)$ is a set of polynomial
generators for ${\mathscr M}_{(p)}={\mathscr M}\ox{\mathbb Z}_{(p)}$, i.~e. if one inverts all primes
except $p$.
\end{Theorem}
In particular, it follows that ${\mathrm{ESL}}(p)$ is a set of polynomial generators
for ${\mathscr M}/p^n$ over ${\mathbb Z}/p^n$ for all $n$.
Here are the polynomial generators in low degrees, over ${\mathbb Z}$ and over few
first primes. Note that the numbers of generators in each degree are the
same (as it should be since all these algebras become isomorphic over
${\mathbb Q}$).
$$
\begin{array}{c||c|c|c|c|c}
&1&2&3&4&5\\
\hline
\hline
&&&&&\\
{\mathbb Z}&M_1&M_{1,1}&M_{2,1},M_{1,1,1}&M_{3,1},M_{2,1,1},M_{1,1,1,1}&M_{4,1},M_{3,2},M_{3,1,1},
M_{2,2,1},M_{2,1,1,1},M_{1,1,1,1,1}\\
&&&&&\\
\hline
&&&&&\\
p=2&M_1&M_{1,1}&M_3,M_{2,1}&M_{3,1},M_{2,1,1},M_{1,1,1,1}&M_5,M_{4,1},M_{3,2},
M_{3,1,1},M_{2,2,1},M_{2,1,1,1}\\
&&&&&\\
\hline
&&&&&\\
p=3&M_1&M_2&M_{2,1},M_{1,1,1}&M_4,M_{3,1},M_{2,1,1}&M_5,M_{4,1},M_{3,2},
M_{3,1,1},M_{2,2,1},M_{2,1,1,1}\\
&&&&&\\
\hline
&&&&&\\
p=5&M_1&M_2&M_3,M_{2,1}&M_4,M_{3,1},M_{2,1,1}&M_{4,1},M_{3,2},M_{3,1,1},
M_{2,2,1},M_{2,1,1,1},M_{1,1,1,1,1}\\
&&&&&
\end{array}
$$
It is easy to calculate the numbers of polynomial generators in each
degree. Let these numbers be $m_1$, $m_2$, $\cdots$. Then the Poincar\'e
series for the algebra ${\mathscr M}$ (or ${\mathscr Z}$, or ${\mathscr F}_0$, or ${\mathscr F}_*$, it does not
matter) is
$$
\sum_n\dim({\mathscr M}_n)t^n=(1-t)^{-m_1}(1-t^2)^{-m_2}(1-t^3)^{-m_3}\cdots;
$$
on the other hand, we know that it is a tensor coalgebra with one generator
in each degree $n\ge1$; this implies that $\dim({\mathscr M}_n)=2^{n-1}$ for $n\ge1$
(and $\dim({\mathscr M}_0)=1$). Thus we have equality of power series
$$
\prod_{k=1}^\infty(1-t^k)^{-m_k}=1+t+2t^2+4t^3+8t^4+\cdots
=1+t(1+2t+(2t)^2+(2t)^3+\cdots)=1+t\frac1{1-2t}=\frac{1-t}{1-2t}.
$$
Then taking logarithmic derivatives one obtains
$$
\sum_{k=1}^\infty\frac{km_kt^k}{1-t^k}=\frac{2t}{1-2t}-\frac
t{1-t}=t+3t^2+7t^3+\cdots+(2^n-1)t^n+\cdots.
$$
It follows that for all $n$ one has
$$
\sum_{d|n}dm_d=2^n-1,
$$
which by the M\"obius inversion formula gives
$$
m_n=\frac1n\sum_{d|n}\mu(d)(2^{\frac nd}-1).
$$
The latter expression is well known in the literature on combinatorics; it
equals the number of aperiodic bicolored necklaces consisting of $n$
beads, and also the dimension of the $n$th homogeneous component of the
free Lie algebra on two generators. See e.~g. \cite{S}.
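This count is easy to check numerically. The Python sketch below (helper names ours) computes $m_n$ by the M\"obius inversion formula and verifies it against $\sum_{d\mid n}dm_d=2^n-1$ and against the numbers of generators per degree in the table above.

```python
def mobius(n):
    # Möbius function via trial factorization (fine for small n).
    mu, k = 1, 2
    while k * k <= n:
        if n % k == 0:
            n //= k
            if n % k == 0:
                return 0        # repeated prime factor
            mu = -mu
        k += 1
    return -mu if n > 1 else mu

def m(n):
    # Number of polynomial generators of M in degree n.
    return sum(mobius(d) * (2 ** (n // d) - 1)
               for d in range(1, n + 1) if n % d == 0) // n

# Degrees 1..5 match the table: 1, 1, 2, 3, 6 generators.
assert [m(n) for n in range(1, 6)] == [1, 1, 2, 3, 6]

# Möbius inversion really inverts: sum_{d|n} d m_d = 2^n - 1.
for n in range(1, 12):
    assert sum(d * m(d) for d in range(1, n + 1) if n % d == 0) == 2 ** n - 1
```

The sequence continues $9,18,30,\dots$, the familiar counts of aperiodic binary necklaces.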
\section{The dual of the relation module $R_{\mathscr F}$}
We now turn to the algebra ${\mathscr F}_*=\Hom({\mathscr F}_0,{\mathbb F}_2)\cong{\mathscr M}/2$. By the above,
we know that it, as well as ${\mathscr M}_{(2)}$, is a polynomial algebra on the set
of generators ${\mathrm{ESL}}(2)$. As an illustration, we will give some expressions of
the $M$-basis elements in terms of sums of overlapping shuffle products of
elements from ${\mathrm{ESL}}(2)$. We will give these in ${\mathscr M}_{(2)}$ and then their
images in ${\mathscr F}_*$.
\begin{align*}
M_2&=M_1^2-2M_{1,1}\\
&\equiv M_1^2\mod2\\
M_{1,2}&=M_1^3-M_3-M_{2,1}-2M_1M_{1,1}\\
&\equiv M_1^3+M_3+M_{2,1}\mod2\\
M_{1,1,1}&=M_1M_{1,1}-\frac13M_1^3+\frac13M_3\\
&\equiv M_1M_{1,1}+M_1^3+M_3\mod2\\
M_4&=\frac43M_1M_3-\frac13M_1^4+2M_{1,1}^2-4M_{1,1,1,1}\\
&\equiv M_1^4\mod2\\
M_{2,2}&=M_{1,1}^2-2M_1^2M_{1,1}-\frac23M_1M_3+\frac23M_1^4+2M_{1,1,1,1}\\
&\equiv M_{1,1}^2\mod2\\
M_{1,3}&=\frac13M_1^4-\frac13M_1M_3-2M_{1,1}^2-M_{3,1}+4M_{1,1,1,1}\\
&\equiv M_1^4+M_1M_3+M_{3,1}\mod2\\
M_{1,2,1}&=M_1M_{2,1}-M_{3,1}-M_{1,1}^2+2M_1^2M_{1,1}+\frac23M_1M_3-\frac23M_1^4-2M_{1,1,1,1}-2M_{2,1,1}\\
&\equiv M_1M_{2,1}+M_{3,1}+M_{1,1}^2\mod2\\
M_{1,1,2}&=M_{1,1}^2-M_1^2M_{1,1}-\frac13M_1M_3+\frac13M_1^4-2M_{1,1,1,1}+M_{3,1}-M_1M_{2,1}+M_{2,1,1}\\
&\equiv M_{1,1}^2+M_1^2M_{1,1}+M_1M_3+M_1^4+M_{3,1}+M_1M_{2,1}+M_{2,1,1}\mod2
\end{align*}
Moreover it is straightforward to calculate the diagonal in terms of these
generators. For example, in ${\mathscr F}_*$ one has
\begin{align*}
\Delta(M_1)&=1\ox M_1+M_1\ox1,\\
\Delta(M_{1,1})&=1\ox M_{1,1}+M_1\ox M_1+M_{1,1}\ox1,\\
\Delta(M_3)&=1\ox M_3+M_3\ox1\\
\Delta(M_{2,1})&=1\ox M_{2,1}+M_1^2\ox M_1+M_{2,1}\ox1\\
\Delta(M_{3,1})&=1\ox M_{3,1}+M_3\ox M_1+M_{3,1}\ox1\\
\Delta(M_{2,1,1})&=1\ox M_{2,1,1}+M_1^2\ox M_{1,1}+M_{2,1}\ox
M_1+M_{2,1,1}\ox1\\
\Delta(M_{1,1,1,1})&=1\ox M_{1,1,1,1}+M_1\ox M_1M_{1,1}+M_1\ox M_1^3+M_1\ox
M_3+M_{1,1}\ox M_{1,1}+M_1M_{1,1}\ox M_1\\
&+M_1^3\ox M_1+M_3\ox M_1+M_{1,1,1,1}\ox1\\
\Delta(M_{4,1})&=1\ox M_{4,1}+M_1^4\ox M_1+M_{4,1}\ox1\\
\Delta(M_{3,2})&=1\ox M_{3,2}+M_3\ox M_1^2+M_{3,2}\ox1\\
\Delta(M_{2,1,1,1})&=1\ox M_{2,1,1,1}+M_1^2\ox M_1M_{1,1}+M_1^2\ox
M_1^3+M_1^2\ox M_3+M_{2,1}\ox M_{1,1}+M_{2,1,1}\ox M_1\\
&+M_{2,1,1,1}\ox1\\
\Delta(M_5)&=1\ox M_5+M_5\ox1\\
\Delta(M_{3,1,1})&=1\ox M_{3,1,1}+M_3\ox M_{1,1}+M_{3,1}\ox
M_1+M_{3,1,1}\ox1\\
\Delta(M_{2,2,1})&=1\ox M_{2,2,1}+M_1^2\ox M_{2,1}+M_{1,1}^2\ox
M_1+M_{2,2,1}\ox1.
\end{align*}
Also it follows from the results in \cite{Hazewinkel} that one has
\begin{Lemma}\label{pthpow}
For any prime $p$, in ${\mathscr M}_{(p)}$ one has
$$
M_{pd_1,...,pd_n}\equiv M_{d_1,...,d_n}^p\mod p.
$$
\end{Lemma}
To identify the elements to which the Milnor generators $\zeta_k$ of ${\mathscr A}_*$
go under the isomorphism ${\mathscr F}_*\cong{\mathscr M}/2$, we first identify ${\mathscr A}_*$ with
the graded dual of ${\mathscr A}$; then $\zeta_k$ corresponds to a linear form
${\mathscr A}_{2^k-1}\to{\mathbb F}$ given by \eqref{milgen}.
\begin{Proposition}\label{imxi}
Under the embedding ${\mathscr A}_*\into{\mathscr M}/2$, the Milnor generator $\zeta_k$ maps to
the generator $M_{2^{k-1},2^{k-2},...,2,1}$. In particular, this generator is in
${\mathrm{ESL}}(2)$, i.~e. is one of the polynomial generators of ${\mathscr F}_*$.
\end{Proposition}
Note that this together with \eqref{deconc} and \bref{pthpow} implies the
Milnor formula \eqref{mildiag} for the diagonal in ${\mathscr A}_*$. Identifying
$\zeta_k$ with its image in ${\mathscr M}/2$ by \bref{imxi}, one obtains
\begin{equation}\label{dizeta}
\begin{aligned}
m_*(\zeta_k)=\Delta(M_{2^{k-1},2^{k-2},...,2,1})=\sum_{i=0}^kM_{2^{k-1},2^{k-2},...,2^i}\ox
M_{2^{i-1},...,2,1}&=\sum_{i=0}^kM_{2^{k-1-i},2^{k-2-i},...,2,1}^{2^i}\ox M_{2^{i-1},...,2,1}\\
&=\sum_{i=0}^k\zeta_{k-i}^{2^i}\ox\zeta_i.
\end{aligned}
\end{equation}
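The key step here, that each left deconcatenation factor $M_{2^{k-1},\dots,2^i}$ of $M_{2^{k-1},\dots,2,1}$ is the $2^i$-th power of the corresponding smaller generator (by $i$ applications of \bref{pthpow} with $p=2$, which scales every index by $2$), can be checked mechanically on index sequences. A small Python sketch, with our own helper name zeta:

```python
def zeta(k):
    # Index sequence of the generator M_{2^{k-1},...,2,1}, the image
    # of the Milnor generator zeta_k (zeta(0) is the empty sequence).
    return tuple(2 ** j for j in range(k - 1, -1, -1))

# Deconcatenation cuts zeta(k) into a left part of length k - i and a
# right part zeta(i); by pthpow the left part, whose indices are the
# 2^i-multiples of zeta(k - i), represents zeta_{k-i}^{2^i}.
for k in range(1, 10):
    for i in range(k + 1):
        left, right = zeta(k)[:k - i], zeta(k)[k - i:]
        assert right == zeta(i)
        assert left == tuple(2 ** i * d for d in zeta(k - i))
```

This is exactly the summand $\zeta_{k-i}^{2^i}\ox\zeta_i$ appearing in \eqref{dizeta}.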
Thus the set $\{\zeta_1,\zeta_2,...\}$ of polynomial generators for ${\mathscr A}_*$ can be
identified with the subset
$$
Q=\{M_1,M_{2,1},M_{4,2,1},M_{8,4,2,1},...\}
$$
of the set of polynomial generators ${\mathrm{ESL}}(2)$ for ${\mathscr M}/2\cong{\mathscr F}_*$. This in
particular gives an explicit basis for ${R_{\mathscr F}}_*$: it is in one-to-one
correspondence with those monomials in the generators $M_{d_1,...,d_n}$
from ${\mathrm{ESL}}(2)$ not all of whose variables belong to $Q$. For example, in the first
few dimensions this basis contains the following monomials:
\begin{align*}
&M_{1,1},\\
&M_1M_{1,1},M_3,\\
&M_1^2M_{1,1},M_1M_3,M_{1,1}^2,M_{3,1},M_{2,1,1},M_{1,1,1,1},\\
&M_1^3M_{1,1},M_1^2M_3,M_1M_{1,1}^2,M_1M_{3,1},M_1M_{2,1,1},
M_1M_{1,1,1,1},M_{1,1}M_3,M_{1,1}M_{2,1},M_5,M_{4,1},M_{3,2},M_{3,1,1},M_{2,2,1},\\
&\ \ \ \ \ M_{2,1,1,1}.
\end{align*}
We next note that obviously the embedding ${\mathscr A}_*\into{\mathscr F}_*$ identifies ${\mathscr F}_*$
with a polynomial algebra over ${\mathscr A}_*$, namely one has a canonical
isomorphism
\begin{equation}
{\mathscr F}_*\cong{\mathscr A}_*[{\mathrm{ESL}}(2)\setminus Q].
\end{equation}
In particular, as an ${\mathscr A}_*$-module ${\mathscr F}_*$ is free on the generating set
${\mathbb N}^{({\mathrm{ESL}}(2)\setminus Q)}$ (= the free commutative monoid on
${\mathrm{ESL}}(2)\setminus Q$). Then obviously the quotient module ${R_{\mathscr F}}_*$ is a
free ${\mathscr A}_*$-module with the generating set ${\mathbb N}^{({\mathrm{ESL}}(2)\setminus
Q)}\setminus\set{1}$.
We will need the dual ${\mathscr F}_*^{\le2}$ of the subspace ${\mathscr F}_0^{\le2}\subset{\mathscr F}_0$ spanned by
the monomials of length $\le2$ in the generators $\Sq^i$. Observe that
${\mathscr F}_0^{\le2}$ is a subcoalgebra of ${\mathscr F}_0$, so that dually
${\mathscr F}_*\onto{\mathscr F}_*^{\le2}$ is a quotient algebra. We have
\begin{Proposition}
The algebra ${\mathscr F}_*^{\le2}$ is a quotient of the polynomial algebra on three
generators $M_1$, $M_{1,1}$, $M_{2,1}$ by a single relation
$$
M_1M_{1,1}M_{2,1}+M_{1,1}^3+M_{2,1}^2=0.
$$
\end{Proposition}
\begin{proof}
First of all, it is straightforward to calculate in ${\mathscr F}_*$ the sum of the overlapping
shuffle products
\begin{multline*}
M_1M_{1,1}M_{2,1}+M_{1,1}^3+M_{2,1}^2=M_{1,4,1}+M_{2,2,2}+M_{2,3,1}+M_{3,1,2}+M_{3,2,1}+M_{2,1,2,1}\\
+M_{3,1,1,1}+M_{1,1,3,1}+M_{1,2,1,1,1}+M_{1,2,1,2}+M_{1,1,1,2,1}
\end{multline*}
so that indeed this gives zero in ${\mathscr F}^{\le2}_*$.
Let
$$
X={\mathbb F}[x_1,x_2,x_3]/(x_1x_2x_3+x_2^3+x_3^2)
$$
be the graded algebra with $\deg(x_i)=i$, $i=1,2,3$, so that there is a
homomorphism of algebras $f:X\to{\mathscr F}_*^{\le2}$ sending $x_1\mapsto M_1$,
$x_2\mapsto M_{1,1}$, $x_3\mapsto M_{2,1}$. It is straightforward
to calculate the Hilbert function of $X$, i.~e. the formal power
series
$$
\sum_n\dim(X_n)t^n;
$$
it is equal to
$$
\frac{1-t^6}{(1-t)(1-t^2)(1-t^3)}.
$$
On the other hand ${\mathscr F}_*^{\le2}$ is dual to ${\mathscr F}_0^{\le2}$ and it is
straightforward also to calculate dimensions of homogeneous components of
this space. One then simply checks that these dimensions coincide for $X$
and for ${\mathscr F}_*^{\le2}$. Thus it suffices to show that $f$ is surjective,
i.~e. that ${\mathscr F}_*^{\le2}$ is generated by (the images of) $M_1$, $M_{1,1}$
and $M_{2,1}$.
We will show by induction on degree that every $M_n$ and $M_{i,j}$ can be obtained
as a polynomial in these three elements. In degree 1, $M_1$ is the only
nonzero element. In degree 2, besides $M_{1,1}$ we have $M_2$ which is
equal to $M_1^2$ by \bref{pthpow}. In degree 3, we have
$$
M_1M_{1,1}=M_{1,2}+M_{2,1}+M_{1,1,1}\equiv M_{1,2}+M_{2,1}\mod{\mathscr F}^{>2}_*
$$
and
$$
M_1^3=M_3+M_{1,2}+M_{2,1},
$$
so that in ${\mathscr F}_*^{\le2}$ we may solve
$$
M_{1,2}\equiv M_1M_{1,1}+M_{2,1}
$$
and
$$
M_3\equiv M_1^3+M_1M_{1,1}.
$$
Given now any degree $n>3$, we can obtain any element $M_{i,j}$ with $i>1$,
$j>1$, $i+j=n$ from elements of lower degree since
$$
M_{i,j}\equiv M_{1,1}M_{i-1,j-1}.
$$
Next we also can obtain the element $M_{n-1,1}$ from
$$
M_{n-1,1}+M_{2,n-2}\equiv M_{2,1}M_{n-3}.
$$
Then we can obtain $M_{1,n-1}$ from
$$
M_{1,n-1}+M_{n-1,1}\equiv M_{1,1}M_{n-2},
$$
and finally we can obtain $M_n$ from
$$
M_n+M_{1,n-1}+M_{n-1,1}\equiv M_1M_{n-1}.
$$
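For example, in degree $4$ this procedure gives
\begin{align*}
M_{2,2}&\equiv M_{1,1}^2,\\
M_{3,1}&\equiv M_1M_{2,1}+M_{1,1}^2,\\
M_{1,3}&\equiv M_1^2M_{1,1}+M_{3,1},\\
M_4&\equiv M_1M_3+M_{1,3}+M_{3,1}\equiv M_1M_3+M_1^2M_{1,1}\equiv M_1^4,
\end{align*}
where the last congruence uses $M_3\equiv M_1^3+M_1M_{1,1}$ and agrees with
\bref{pthpow} since $M_4=M_2^2=M_1^4$.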
\end{proof}
Let us also identify the dual of the product map
$$
{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\to{\mathscr F}_0^{\le2}
$$
in terms of the above generators. By dualizing it is clear that this dual
is the unique factorization in the diagram
$$
\xymatrix{
{\mathscr F}_*\ar[r]^-{m_*}\ar@{->>}[d]
&{\mathscr F}_*\ox{\mathscr F}_*\ar@{->>}[d]\\
{\mathscr F}_*^{\le2}\ar@{-->}[r]
&{\mathscr F}_*^{\le1}\ox{\mathscr F}_*^{\le1}.
}
$$
In particular, it is an algebra homomorphism. Moreover the algebra
${\mathscr F}_*^{\le1}$ may be identified with the polynomial algebra on a single
generator $M_1=\zeta_1$, with the quotient map ${\mathscr F}_*\to{\mathscr F}_*^{\le1}$ given
by sending $M_1$ to itself and all other polynomial generators from
${\mathrm{ESL}}(2)$ to zero. From this it is straightforward to identify the map
${\mathscr F}_*^{\le2}\to{\mathscr F}_*^{\le1}\ox{\mathscr F}_*^{\le1}$ with the algebra homomorphism
$$
{\mathbb F}[x_1,x_2,x_3]/(x_1x_2x_3+x_2^3+x_3^2)\to{\mathbb F}[y_1,z_1]
$$
given by
\begin{equation}\label{mstar}
\begin{aligned}
x_1&\mapsto y_1+z_1\\
x_2&\mapsto y_1z_1\\
x_3&\mapsto y_1^2z_1.
\end{aligned}
\end{equation}
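As a consistency check, the defining relation is indeed sent to zero: one has
$$
x_1x_2x_3+x_2^3+x_3^2\mapsto(y_1+z_1)\cdot y_1z_1\cdot y_1^2z_1+(y_1z_1)^3+(y_1^2z_1)^2
=y_1^4z_1^2+y_1^3z_1^3+y_1^3z_1^3+y_1^4z_1^2=0,
$$
since we are in characteristic $2$.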
Let us identify in these terms the map ${\mathscr F}_*^{\le2}\onto{R_{\mathscr F}}^{\le2}_*$. One
clearly has
$$
R_{\mathscr F}^{\le2}=R_{\mathscr F}\cap{\mathscr F}_0^{\le2}
$$
in ${\mathscr F}_0$, so that dually one has that the diagram
$$
\xymatrix{
{\mathscr F}_*\ar@{->>}[r]\ar@{->>}[d]&{R_{\mathscr F}}_*\ar@{->>}[d]\\
{\mathscr F}_*^{\le2}\ar@{->>}[r]&{R_{\mathscr F}^{\le2}}_*
}
$$
is pushout. Thus ${R_{\mathscr F}^{\le2}}_*$ is isomorphic to the quotient of
${\mathscr F}^{\le2}_*$ by the image of the composite
${\mathscr A}_*\into{\mathscr F}_*\onto{\mathscr F}_*^{\le2}$. That image is clearly the subalgebra
generated by $M_1$ and $M_{2,1}$.
We can alternatively describe ${R_{\mathscr F}^{\le2}}_*$ in terms of linear forms on
$R_{\mathscr F}^{\le2}\subset{\mathscr F}_0^{\le2}$. It is clear that the latter subspace is
spanned by all Adem relations $[n,m]$, $n<2m$. The map
$\pi:{\mathscr F}^{\le2}_*\onto{R_{\mathscr F}^{\le2}}_*$ assigns to a linear form on
${\mathscr F}_0^{\le2}$ its restriction to $R_{\mathscr F}^{\le2}$. One then clearly has
\begin{equation}
\pi(M_1^k)=\pi(M_{2,1}^k)=0
\end{equation}
for all $k\ge0$; moreover $\pi(M_{1,1})$ is dual to $[1,1]$ in the basis
given by the elements $[n,m]$, i.~e. $M_{1,1}([1,1])=1$ and
$M_{1,1}([n,m])=0$ for all other $n$, $m$. Moreover for
$x,y\in{\mathscr F}^{\le2}_*$ we have
\begin{equation}
(xy)([n,m])=\sum x([n,m]_\l)y([n,m]_\r)
\end{equation}
in the Sweedler notation
$$
\Delta([n,m])=\sum[n,m]_\l\ox[n,m]_\r.
$$
For example, we have
$$
\Delta([1,2])=(1+T)(1\ox[1,2]+\Sq^1\ox[1,1])
$$
which implies that $M_1M_{1,1}$ is dual to $[1,2]$ in this basis, i.~e.
$(M_1M_{1,1})[1,2]=1$ and $(M_1M_{1,1})[n,m]=0$ for all other $n$, $m$.
Similarly
$$
\Delta([1,3])=(1+T)(1\ox[1,3]+\Sq^1\ox[1,2]+\Sq^2\ox[1,1])
$$
and
$$
\Delta([2,2])=(1+T)(1\ox[2,2]+\Sq^1\ox[1,2]+\Sq^2\ox[1,1])+[1,1]\ox[1,1]
$$
imply that $M_{1,1}^2$ is dual to $[2,2]$ whereas
$(M_1^2M_{1,1})[1,3]=(M_1^2M_{1,1})[2,2]=1$, so that dual to $[1,3]$ is
$M_1^2M_{1,1}+M_{1,1}^2$.
We will also need a description of the dual $\bar R_*$ of $\bar
R=R_{\mathscr F}/(R_{\mathscr F}\cdot R_{\mathscr F})$. For this first note that similarly to the above
${\mathscr F}_*\ox{\mathscr F}_*$ is a free ${\mathscr A}_*\ox{\mathscr A}_*$-module on
${\mathbb N}^{({\mathrm{ESL}}(2)\setminus Q)}\x{\mathbb N}^{({\mathrm{ESL}}(2)\setminus Q)}$ and ${R_{\mathscr F}}_*\ox{R_{\mathscr F}}_*$ is a
free ${\mathscr A}_*\ox{\mathscr A}_*$-module on
$\left({\mathbb N}^{({\mathrm{ESL}}(2)\setminus
Q)}\setminus\set{1}\right)\x\left({\mathbb N}^{({\mathrm{ESL}}(2)\setminus
Q)}\setminus\set{1}\right)$. Moreover the diagonal
$\Delta_{\mathscr F}:{\mathscr F}_*\to{\mathscr F}_*\ox{\mathscr F}_*$ and its factorization
$\Delta_R:{R_{\mathscr F}}_*\to{R_{\mathscr F}}_*\ox{R_{\mathscr F}}_*$ through the quotient maps
${\mathscr F}_*\onto{R_{\mathscr F}}_*$, ${\mathscr F}_*\ox{\mathscr F}_*\onto{R_{\mathscr F}}_*\ox{R_{\mathscr F}}_*$ are obviously both
equivariant with respect to the diagonal $\delta:{\mathscr A}_*\to{\mathscr A}_*\ox{\mathscr A}_*$, i.~e.
one has
\begin{equation}
\begin{aligned}
\Delta_{\mathscr F}(af)&=\delta(a)\Delta_{\mathscr F}(f),\\
\Delta_R(ar)&=\delta(a)\Delta_R(r)
\end{aligned}
\end{equation}
for any $a\in{\mathscr A}_*$, $f\in{\mathscr F}_*$, $r\in{R_{\mathscr F}}_*$.
\endinput
\chapter{The invariants $L$ and $S$ and the dual invariants $L_*$ and $S_*$ in
terms of generators}\label{LS}
As proved in \cite{Baues} there are invariants $L$ and $S$ of the
Steenrod algebra which determine the algebra ${\mathscr B}$ of secondary cohomology
operations up to isomorphism. Therefore $L$ and $S$ and the dual invariants
$L_*$ and $S_*$ also determine ${\mathscr B}^{\mathbb F}$ and ${\mathscr B}_{\mathbb F}$ respectively. In this
chapter we recall the definition of $L$ and $S$ and we discuss algebraic
properties of $L_*$ and $S_*$.
\section{The left action operator $L$ and its dual}\label{L*}
We recall constructions of the maps $L$ and $S$ from
\cite{Baues}*{14.4,14.5} of the same kind as the operators in \bref{lv}
and \bref{S} respectively. For that, we first introduce the following
notation:
\begin{equation}\label{rbar}
\bar R:=R_{\mathscr F}/(R_{\mathscr F}\cdot R_{\mathscr F}),
\end{equation}
with the quotient map $R_{\mathscr F}\onto\bar R$ denoted by $r\mapsto\bar r$. There is a
well-defined ${\mathscr A}$-${\mathscr A}$-bimodule structure on $\bar R$ given by
$$
\qf f\bar r=\overline{fr},\ \ \bar r\qf f=\overline{rf}
$$
for $f\in{\mathscr F}_0$, $r\in R_{\mathscr F}$. As we show below $\bar R$ is free both as a
left and as a right ${\mathscr A}$-module (but not as a bimodule). A basis for $\bar
R$ as a right ${\mathscr A}$-module can be found using the set ${\mathrm{PAR}}\subset R_{\mathscr F}$ of
\emph{preadmissible relations} as defined in \cite{Baues}*{16.5}. These
are the elements of $R_{\mathscr F}$ of the form
$$
\Sq^{n_1}\cdots\Sq^{n_k}[n,m]
$$
where $[n,m]$, $n<2m$, is an Adem relation, the monomial
$\Sq^{n_1}\cdots\Sq^{n_k}$ is admissible (i.~e. $n_1\ge2n_2$,
$n_2\ge2n_3$, ..., $n_{k-1}\ge2n_k$), and moreover $n_k\ge2n$. It is then
proved in \cite{Baues}*{16.5.2} that ${\mathrm{PAR}}$ is a basis of $R_{\mathscr F}$ as a free
right ${\mathscr F}_0$-module.
It is equally true that $R_{\mathscr F}$ is a free left ${\mathscr F}_0$-module. An explicit
basis ${\mathrm{PAR}}'$ of $R_{\mathscr F}$ as a left ${\mathscr F}_0$-module consists of \emph{left
preadmissible} relations --- elements of the form
$$
[n,m]\Sq^{m_1}\cdots\Sq^{m_k}
$$
where $[n,m]$, $n<2m$, is an Adem relation, the monomial
$\Sq^{m_1}\cdots\Sq^{m_k}$ is admissible, and moreover $m\ge2m_1$.
Using this, one also has
\begin{Lemma}\label{barr}
$\bar R$ is free both as a right ${\mathscr A}$-module and as a left ${\mathscr A}$-module.
Moreover, the images $\bar\rho$ of the preadmissible relations $\rho\in{\mathrm{PAR}}$ under the quotient map
$R_{\mathscr F}\onto\bar R$ form a basis of this free right ${\mathscr A}$-module, and the images
of left preadmissible relations form its basis as a left ${\mathscr A}$-module.
\end{Lemma}
\begin{proof}
This is clear from the obvious isomorphisms
$$
{\mathscr A}\ox_{{\mathscr F}_0}R_{\mathscr F}\cong\bar R\cong R_{\mathscr F}\ox_{{\mathscr F}_0}{\mathscr A}
$$
of left, resp. right ${\mathscr A}$-modules.
\end{proof}
In particular we see that every element of $R_{\mathscr F}$ can be written
uniquely in the form
\begin{equation}\label{norf}
\rho^{(2)}+\sum_i\alpha_i[n_i,m_i]\beta_i
\end{equation}
with $\rho^{(2)}\in R_{\mathscr F}\cdot R_{\mathscr F}$, $\alpha_i[n_i,m_i]\in{\mathrm{PAR}}$ and
$\beta_i$ an admissible monomial. Moreover it can also be uniquely written in
the form
\begin{equation}\label{norfl}
\ro^{(2)}+\sum_i\alpha'_i[n'_i,m'_i]\beta'_i
\end{equation}
with $\ro^{(2)}\in R_{\mathscr F}\cdot R_{\mathscr F}$, admissible monomials $\alpha'_i$ and
$[n'_i,m'_i]\beta'_i\in{\mathrm{PAR}}'$.
\begin{Definition}\label{lao}
The \emph{left action operator}
$$
L:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}
$$
of degree $-1$ is defined as follows. For odd $p$ let $L$ be the zero map. For $p=2$, let
first the additive map $L_{\mathscr F}:{\mathscr F}_0^{\le2}\to{\mathscr A}\ox{\mathscr A}$ be given by the formula
$$
L_{\mathscr F}(\Sq^n\Sq^m)=\sum_{\substack{n_1+n_2=n\\m_1+m_2=m\\\textrm{$m_1$, $n_2$ odd}}}
\Sq^{n_1}\Sq^{m_1}\ox\Sq^{n_2}\Sq^{m_2}
$$
($n,m\ge0$; remember that $\Sq^0=1$). Equivalently, using the algebra
structure on ${\mathscr A}\ox{\mathscr A}$ one may write
$$
L_{\mathscr F}(\Sq^n\Sq^m)=(1\ox\Sq^1)\delta(\Sq^{n-1})(\Sq^1\ox1)\delta(\Sq^{m-1}).
$$
Restricting this map to $R_{\mathscr F}^{\le2}\subset{\mathscr F}_0^{\le2}$ gives a map
$L_R:R_{\mathscr F}^{\le2}\to{\mathscr A}\ox{\mathscr A}$. It is thus an additive map given on the Adem
relations $[n,m]$, for $0<n<2m$, by
$$
L_R[n,m]=L_{\mathscr F}(\Sq^n\Sq^m)+\sum_{k=\max\{0,n-m+1\}}^{\min\{n/2,m-1\}}\binom{m-k-1}{n-2k}L_{\mathscr F}(\Sq^{n+m-k}\Sq^k).
$$
Next we define the map
$$
\bar L:{\mathscr A}\ox\bar R\to{\mathscr A}\ox{\mathscr A}
$$
as the right ${\mathscr A}$-module homomorphism which satisfies
\begin{equation}\label{lbar}
\bar L(a\ox\overline{\alpha[n,m]})=\delta(\k(a)\qf\alpha)L_R[n,m]
\end{equation}
with $\alpha[n,m]\in{\mathrm{PAR}}$; by \bref{barr} such a homomorphism exists
and is unique.
Finally, $\bar L$ yields a unique linear map $L:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}$ by
composing $\bar L$ with the quotient map ${\mathscr A}\ox R_{\mathscr F}\onto{\mathscr A}\ox\bar R$. Thus
one has
$$
L\left({\mathscr A}\ox\left(R_{\mathscr F}\cdot R_{\mathscr F}\right)\right)=0.
$$
\end{Definition}
The map $L$ is the left action operator in \cite{Baues}*{14.4} where the
following lemma is proved (see \cite{Baues}*{14.4.3}):
\begin{Lemma}\label{lprop}
The map $\bar L$ satisfies the equalities
\begin{align*}
\bar L(a\ox[n,m])&=\k(a)L_R[n,m]\\
\bar L(a\ox br)&=\bar L(ab\ox r)+\delta(a)\bar L(b\ox r)\\
\bar L(a\ox rb)&=\bar L(a\ox r)\delta(b)
\end{align*}
for any $a,b\in{\mathscr A}$, $r\in\bar R$.
\end{Lemma}\qed
We observe that $L$ can be alternatively constructed as follows. Let
$$
\tilde L:\bar R\to{\mathscr A}\ox{\mathscr A}
$$
be the map given by
$$
\tilde L(\bar r)=\bar L(\Sq^1\ox\bar r).
$$
Then one has
\begin{Proposition}\label{ltil}
For any $a\in{\mathscr A}$, $r\in R_{\mathscr F}$ one has
$$
L(a\ox r)=\delta(\k(a))\tilde L(\bar r);
$$
moreover $\tilde L$ is a homomorphism of ${\mathscr A}$-${\mathscr A}$-bimodules, hence
uniquely determined by its values on the Adem relations, which are
$$
\tilde L([n,m])=L_R[n,m].
$$
\end{Proposition}
\begin{proof}
For any $a\in{\mathscr A}$, $\alpha[n,m]\in{\mathrm{PAR}}$ and $\beta$ admissible we have
\begin{align*}
L(a\ox\alpha[n,m]\beta)
&=\bar L(a\ox\overline{\alpha[n,m]})\qf\beta\\
&=\delta(\k(a)\qf\alpha)L_R[n,m]\delta\qf\beta\\
&=\delta\k(a)\delta\qf\alpha L_R[n,m]\delta\qf\beta\\
&=\delta\k(a)\delta(\k(\Sq^1)\qf\alpha)L_R[n,m]\delta\qf\beta\\
&=\delta\k(a)L(\Sq^1\ox\alpha[n,m]\beta)\\
&=\delta\k(a)\tilde L(\alpha[n,m]\beta).
\end{align*}
Then using \eqref{norf} we see that the same identity holds for $L(a\ox r)$
with any $r\in R_{\mathscr F}$.
Next for any $a\in{\mathscr A}$, $r\in R_{\mathscr F}$ we have by \bref{lprop} and
$\k\Sq^1=\Sq^0=1$,
$$
\begin{aligned}
\tilde L(a\bar r)
&=\bar L(\Sq^1\ox a\bar r)\\
&=\bar L(\Sq^1a\ox\bar r)+\delta(\Sq^1)\bar L(a\ox\bar r)\\
&=\delta(\k(\Sq^1a))\tilde L(\bar r)+\delta(\Sq^1\k(a))\tilde L(\bar r)\\
&=\delta(\k(\Sq^1a)+\Sq^1\k(a))\tilde L(\bar r)\\
&=\delta(a)\tilde L(\bar r).
\end{aligned}
$$
Thus $\tilde L$ is a left ${\mathscr A}$-module homomorphism. It is also clearly a
right ${\mathscr A}$-module homomorphism since $\bar L$ is.
Finally by \eqref{lbar} we have
$$
L_R[n,m]=\delta(\k(\Sq^1))L_R[n,m]=\bar L(\Sq^1\ox\overline{[n,m]})=\tilde L([n,m]).
$$
\end{proof}
Explicit calculation of the left coaction operator $L_*$ is as follows.
For odd $p$ it is the zero map, and for $p=2$ we first define the additive
map ${L_R}_*:{\mathscr A}_*\ox{\mathscr A}_*\to{R_{\mathscr F}^{\le2}}_*$. It is dual to the composite map
$R_{\mathscr F}^{\le2}\to{\mathscr A}\ox{\mathscr A}$ in the diagram
\begin{equation}
\alignbox{
\xymatrix{
&R_{\mathscr F}^{\le2}\ar@{ >->}[r]\ar@{
>->}[d]\ar@{}[dr]|{\textrm{pull}}
&R_{\mathscr F}\ar@{ >->}[d]
\\
{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ar[d]_{\Delta\ox\Delta}\ar@{->>}[r]^m
&{\mathscr F}_0^{\le2}\ar@{ >->}[r]\ar@{-->}[ddd]^{L_{\mathscr F}}
&{\mathscr F}_0
\\
{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ar[d]_{1\ox\Phi\ox\Phi\ox1}
\\
{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ar[d]_{1\ox T\ox1}
\\
{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\ar@{->>}[r]^-{m\ox m}
&{\mathscr F}_0^{\le2}\ox{\mathscr F}_0^{\le2}\ar@{ >->}[r]
&{\mathscr F}_0\ox{\mathscr F}_0\ar@{->>}[d]
\\
&&{\mathscr A}\ox{\mathscr A}
}
}
\end{equation}
where $\Phi$ is the restriction ${\mathscr F}_0^{\le1}\to{\mathscr F}_0^{\le1}$ of the map ${\mathscr F}_0\to{\mathscr F}_0$ given
by
$$
\Phi(x)=\Sq^1\k(x),
$$
so that one has
$$
\Phi(\Sq^n)=
\begin{cases}
\Sq^n,&n\equiv1\mod2\\
0,&n\equiv0\mod2.
\end{cases}
$$
Indeed by \bref{lao} we have
$$
L_{\mathscr F}(\Sq^n\Sq^m)=(1\ox\Sq^1)\Delta(\Sq^{n-1})(\Sq^1\ox1)\Delta(\Sq^{m-1})
=(1\ox\Sq^1)\Delta\k(\Sq^n)(\Sq^1\ox1)\Delta\k(\Sq^m);
$$
on the other hand we saw in \eqref{kapid} that
$$
\Delta\k=(\k\ox1)\Delta=(1\ox\k)\Delta,
$$
so that we can write
$$
L_{\mathscr F}(\Sq^n\Sq^m)=(1\ox\Sq^1\k)\Delta(\Sq^n)(\Sq^1\k\ox1)\Delta(\Sq^m)
=(1\ox\Phi)\Delta(\Sq^n)(\Phi\ox1)\Delta(\Sq^m).
$$
Therefore the map dual to $\Phi$ is the map $\Phi_*:{\mathbb F}[\zeta_1]\to{\mathbb F}[\zeta_1]$
given by factorization through ${\mathscr A}_*\onto{\mathbb F}[\zeta_1]$ of the map
$\Phi_*:{\mathscr A}_*\to{\mathscr A}_*$ given on the monomial basis by
$$
\Phi_*(\zeta_1^{n_1}\zeta_2^{n_2}\cdots)=
\begin{cases}
\zeta_1^{n_1}\zeta_2^{n_2}\cdots,&n_1\equiv1\mod2\\
0,&n_1\equiv0\mod2.
\end{cases}
$$
Equivalently, by \eqref{sqd} and \eqref{dkappa}, $\Phi_*=\k_*\Sq^1_*$ is
the map $\zeta_1\frac\partial{\partial\zeta_1}$.
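For example,
$$
\zeta_1\frac\partial{\partial\zeta_1}(\zeta_1^3\zeta_2)=3\zeta_1^3\zeta_2=\zeta_1^3\zeta_2,
\ \ \ \zeta_1\frac\partial{\partial\zeta_1}(\zeta_1^2\zeta_3)=2\zeta_1^2\zeta_3=0,
$$
in accordance with the parity of the exponent $n_1$ in the monomial formula above.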
Thus the map ${L_R}_*$ is the composite ${\mathscr A}_*\ox{\mathscr A}_*\to{R_{\mathscr F}^{\le2}}_*$ in the
diagram
\begin{equation}
\alignbox{
\xymatrix{
{\mathscr A}_*\ox{\mathscr A}_*\ar@{ >->}[d]\\
{\mathscr F}_*\ox{\mathscr F}_*\ar@{->>}[r]
&{\mathscr F}^{\le2}_*\ox{\mathscr F}^{\le2}_*\ar@{ >->}[r]^-{m_*\ox
m_*}\ar@{-->}[ddd]_{{L_{\mathscr F}}_*}
&{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ar[d]^{1\ox T\ox1}\\
&&{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ar[d]^{1\ox\Phi_*\ox\Phi_*\ox1}\\
&&{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\ar[d]^{\Delta_*\ox\Delta_*}\\
{\mathscr F}_*\ar@{->>}[r]\ar@{->>}[d]\ar@{}[dr]|{\textrm{push}}
&{\mathscr F}^{\le2}_*\ar@{ >->}[r]^{m_*}\ar@{->>}[d]
&{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*\\
{R_{\mathscr F}}_*\ar@{->>}[r]
&{R_{\mathscr F}^{\le2}}_*
}
}
\end{equation}
Now by \bref{ltil} we know that $\tilde L$ is a bimodule homomorphism, and
moreover $\bar R$ is generated by $R^{\le2}_{\mathscr F}\cong\bar R^{\le2}\subset\bar
R$ as an ${\mathscr A}$-${\mathscr A}$-bimodule, so knowledge of $L_R$ (actually already of
$L_{\mathscr F}$ whose restriction it is) determines $\tilde L$ and, by \bref{ltil},
also $L$. Dually, one can reconstruct $\tilde L_*$ and then $L_*$ from
${L_{\mathscr F}}_*$ via the diagram
$$
\xymatrix{
{\mathscr A}_*\ox{\mathscr A}_*\ar@{-->}[r]^{\tilde L_*}\ar[d]_{\textrm{bicoaction}}
&\bar R_*\ar@{ >->}[d]\ar[r]^-{\textrm{bicoaction}}
&{\mathscr A}_*\ox\bar R_*\ox{\mathscr A}_*\ar@{ >->}[d]\\
{\mathscr A}_*\ox({\mathscr A}_*\ox{\mathscr A}_*)\ox{\mathscr A}_*\ar[r]^-{1\ox{L_R}_*\ox1}
&{\mathscr A}_*\ox{R_{\mathscr F}^{\le2}}_*\ox{\mathscr A}_*
&{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr A}_*.\ar@{->>}[l]
}
$$
Here the bicoaction ${\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr A}_*\ox({\mathscr A}_*\ox{\mathscr A}_*)\ox{\mathscr A}_*$ is the
composite
$$
\xymatrix{
{\mathscr A}_*\ox{\mathscr A}_*\ar[r]^-{m^{(2)}_*\ox m^{(2)}_*}
&({\mathscr A}_*\ox{\mathscr A}_*\ox{\mathscr A}_*)\ox({\mathscr A}_*\ox{\mathscr A}_*\ox{\mathscr A}_*)\ar[d]^{(142536)}\\
&({\mathscr A}_*\ox{\mathscr A}_*)\ox({\mathscr A}_*\ox{\mathscr A}_*)\ox({\mathscr A}_*\ox{\mathscr A}_*)\ar[r]^-{\delta_*\ox1\ox1\ox\delta_*}
&{\mathscr A}_*\ox({\mathscr A}_*\ox{\mathscr A}_*)\ox{\mathscr A}_*.
}
$$
We next note the following:
\begin{Lemma}\label{bider}
The map $\tilde L_*$ is a biderivation, i.~e.
\begin{align*}
\tilde L_*(x_1x_2,y)&=x_1\tilde L_*(x_2,y)+x_2\tilde L_*(x_1,y),\\
\tilde L_*(x,y_1y_2)&=y_1\tilde L_*(x,y_2)+y_2\tilde L_*(x,y_1)
\end{align*}
for any $x,x_1,x_2,y,y_1,y_2\in{\mathscr A}_*$.
\end{Lemma}
It thus follows that $\tilde L_*$ is fully determined by its values
$\tilde L_*(\zeta_n\ox\zeta_{n'})$ on the Milnor generators. To calculate
the bicoaction on these, first note that we have
$$
m^{(2)}_*(\zeta_n)=(1\ox m_*)m_*(\zeta_n)
=\sum_{i+i'=n}\zeta_i^{2^{i'}}\ox m_*(\zeta_{i'})
=\sum_{i+j+k=n}\zeta_i^{2^{j+k}}\ox\zeta_j^{2^k}\ox\zeta_k,
$$
where as always $\zeta_0=1$. For the coaction on $\zeta_n\ox\zeta_{n'}$
this then gives in succession
\begin{align*}
\zeta_n\ox\zeta_{n'}
&\mapsto\sum_{\substack{i+j+k=n\\i'+j'+k'=n'}}
\zeta_i^{2^{j+k}}\ox\zeta_j^{2^k}\ox\zeta_k\ox\zeta_{i'}^{2^{j'+k'}}\ox\zeta_{j'}^{2^{k'}}\ox\zeta_{k'}\\
&\mapsto\sum_{\substack{i+j+k=n\\i'+j'+k'=n'}}
\zeta_i^{2^{j+k}}\ox\zeta_{i'}^{2^{j'+k'}}\ox\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}}\ox\zeta_k\ox\zeta_{k'}\\
&\mapsto\sum_{\substack{i+j+k=n\\i'+j'+k'=n'}}
\zeta_i^{2^{j+k}}\zeta_{i'}^{2^{j'+k'}}\ox\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}}\ox\zeta_k\zeta_{k'},
\end{align*}
so that for the values of $\tilde L_*$ we have the equation
$$
\iota\tilde L_*(\zeta_n\ox\zeta_{n'})=\sum_{\substack{i+j+k=n\\i'+j'+k'=n'}}
\zeta_i^{2^{j+k}}\zeta_{i'}^{2^{j'+k'}}\ox{L_R}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})\ox\zeta_k\zeta_{k'}
$$
where $\iota$ is the above embedding $\bar
R_*\into{\mathscr A}_*\ox{R_{\mathscr F}^{\le2}}_*\ox{\mathscr A}_*$. Thus we only have to know the
values of ${L_{\mathscr F}}_*$ on the elements of the form
$\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}}$ for $j\ge0$, $k\ge0$. Obviously
these values are zero for $j>2$ or $j'>2$. They are also zero for $j=0$ or
$j'=0$ since $\Phi_*(1)=0$. There thus remain four cases $j=j'=1$,
$j=j'=2$, $j=1$, $j'=2$, and $j=2$, $j'=1$. We then have under ${L_{\mathscr F}}_*$
$$
\xymatrixrowsep{.5em}
\xymatrix@!C=3em{
**[l]\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}
\ar@{|->}[r]^{m_*\ox m_*}
&**[r](\zeta_1\ox1+1\ox\zeta_1)^{2^k}\ox(\zeta_1\ox1+1\ox\zeta_1)^{2^{k'}}=\\
**[l]{}&**[r]
\zeta_1^{2^k}\ox1\ox\zeta_1^{2^{k'}}\ox1
+\zeta_1^{2^k}\ox1\ox1\ox\zeta_1^{2^{k'}}
+1\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}\ox1
+1\ox\zeta_1^{2^k}\ox1\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox T\ox1}
&**[r]\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}\ox1\ox1
+\zeta_1^{2^k}\ox1\ox1\ox\zeta_1^{2^{k'}}
+1\ox\zeta_1^{2^{k'}}\ox\zeta_1^{2^k}\ox1
+1\ox1\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox\Phi_*\ox\Phi_*\ox1}
&**[r]0+0+1\ox\Phi_*\zeta_1^{2^{k'}}\ox\Phi_*\zeta_1^{2^k}\ox1+0\\
**[l]{}\ar@{|->}[r]^{\Delta_*\ox\Delta_*}
&**[r]\Phi_*\zeta_1^{2^{k'}}\ox\Phi_*\zeta_1^{2^k}.
}
$$
We thus have
$$
{L_{\mathscr F}}_*(\zeta_1^{2^k}\ox\zeta_1^{2^{k'}})=
\begin{cases}
M_{1,1},&k=k'=0\\
0&\textrm{otherwise.}
\end{cases}
$$
We next take $j=j'=2$; then
$$
\xymatrixrowsep{.5em}
\xymatrix@!C=3em{
**[l]\zeta_2^{2^k}\ox\zeta_2^{2^{k'}}
\ar@{|->}[r]^{m_*\ox m_*}
&**[r](\zeta_1^2\ox\zeta_1)^{2^k}\ox(\zeta_1^2\ox\zeta_1)^{2^{k'}}=
\zeta_1^{2^{k+1}}\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'+1}}\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox T\ox1}
&**[r]\zeta_1^{2^{k+1}}\ox\zeta_1^{2^{k'+1}}\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox\Phi_*\ox\Phi_*\ox1}
&**[r]\zeta_1^{2^{k+1}}\ox\Phi_*\zeta_1^{2^{k'+1}}\ox\Phi_*\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}=0\\
**[l]{}\ar@{|->}[r]^{\Delta_*\ox\Delta_*}
&**[r]0,
}
$$
so that
$$
{L_{\mathscr F}}_*(\zeta_2^{2^k}\ox\zeta_2^{2^{k'}})=0
$$
for all $k$ and $k'$. Next for $j=2$, $j'=1$ we have
$$
\xymatrixrowsep{.5em}
\xymatrix@!C=3em{
**[l]\zeta_2^{2^k}\ox\zeta_1^{2^{k'}}
\ar@{|->}[r]^{m_*\ox m_*}
&**[r](\zeta_1^2\ox\zeta_1)^{2^k}\ox(\zeta_1\ox1+1\ox\zeta_1)^{2^{k'}}=
\zeta_1^{2^{k+1}}\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}\ox1+\zeta_1^{2^{k+1}}\ox\zeta_1^{2^k}\ox1\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox T\ox1}
&**[r]\zeta_1^{2^{k+1}}\ox\zeta_1^{2^{k'}}\ox\zeta_1^{2^k}\ox1+\zeta_1^{2^{k+1}}\ox1\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox\Phi_*\ox\Phi_*\ox1}
&**[r]\zeta_1^{2^{k+1}}\ox\Phi_*\zeta_1^{2^{k'}}\ox\Phi_*\zeta_1^{2^k}\ox1+0\\
**[l]{}\ar@{|->}[r]^{\Delta_*\ox\Delta_*}
&**[r]\zeta_1^{2^{k+1}}\Phi_*\zeta_1^{2^{k'}}\ox\Phi_*\zeta_1^{2^k},
}
$$
hence
$$
{L_{\mathscr F}}_*(\zeta_2^{2^k}\ox\zeta_1^{2^{k'}})=
\begin{cases}
M_{1,1}^2+M_1M_{2,1},&k=k'=0\\
0&\textrm{otherwise.}
\end{cases}
$$
Finally for $j=1$, $j'=2$ we get
$$
\xymatrixrowsep{.5em}
\xymatrix@!C=3em{
**[l]\zeta_1^{2^k}\ox\zeta_2^{2^{k'}}
\ar@{|->}[r]^{m_*\ox m_*}
&**[r](\zeta_1\ox1+1\ox\zeta_1)^{2^k}\ox(\zeta_1^2\ox\zeta_1)^{2^{k'}}=
\zeta_1^{2^k}\ox1\ox\zeta_1^{2^{k'+1}}\ox\zeta_1^{2^{k'}}+1\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'+1}}\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox T\ox1}
&**[r]
\zeta_1^{2^k}\ox\zeta_1^{2^{k'+1}}\ox1\ox\zeta_1^{2^{k'}}+1\ox\zeta_1^{2^{k'+1}}\ox\zeta_1^{2^k}\ox\zeta_1^{2^{k'}}\\
**[l]{}\ar@{|->}[r]^{1\ox\Phi_*\ox\Phi_*\ox1}
&**[r]0+0\\
**[l]{}\ar@{|->}[r]^{\Delta_*\ox\Delta_*}
&**[r]0,
}
$$
so that
$$
{L_{\mathscr F}}_*(\zeta_1^{2^k}\ox\zeta_2^{2^{k'}})=0
$$
for all $k$ and $k'$.
Passing from these values to ${L_R}_*$ just means omitting all monomials
which do not contain $M_{1,1}$; we thus obtain
\begin{align*}
{L_R}_*(\zeta_1\ox\zeta_1)&=M_{1,1},\\
{L_R}_*(\zeta_2\ox\zeta_1)&=M_{1,1}^2,
\end{align*}
and ${L_R}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})=0$ in all other cases.
From this we easily obtain
\begin{Proposition}
$$
\iota\tilde L_*(\zeta_n\ox\zeta_{n'})=\zeta_{n-1}^2\zeta_{n'-1}^2\ox
M_{1,1}\ox1+\zeta_{n-2}^4\zeta_{n'-1}^2\ox M_{1,1}^2\ox1
$$
\end{Proposition}\qed
\noindent where now $\zeta_{n-2}=0$ for $n=1$ is understood. Solving $\tilde
L_*(\zeta_n,\zeta_{n'})$ from these equations is then straightforward. In
this way we obtain
\begin{align*}
\tilde L_*(\zeta_1,\zeta_1)&=M_{1,1}\\
\tilde L_*(\zeta_1,\zeta_2)&=M_{2,1,1}\\
\tilde L_*(\zeta_2,\zeta_1)&=M_{2,1,1}+M_{1,1}^2\\
\tilde L_*(\zeta_2,\zeta_2)&=M_{4,1,1}+M_{2,3,1}+M_{2,1,2,1}\\
\tilde L_*(\zeta_1,\zeta_3)&=M_{4,2,1,1}\\
\tilde L_*(\zeta_3,\zeta_1)&=M_{4,2,1,1}+M_{2,1,1}^2\\
\tilde L_*(\zeta_2,\zeta_3)&=M_{6,2,1,1}+M_{4,4,1,1}+M_{4,2,3,1}+M_{4,2,1,2,1}+M_{2,4,2,1,1}\\
\tilde L_*(\zeta_3,\zeta_2)&=M_{6,2,1,1}+M_{4,4,1,1}+M_{4,2,3,1}+M_{4,2,1,2,1}+M_{2,4,2,1,1}\\
&+M_5^2+M_{4,1}^2+M_{3,2}^2+M_{2,1,1,1}^2+M_1^2M_{2,1,1}^2+M_1^4M_3^2\\
\tilde L_*(\zeta_3,\zeta_3)&=M_{8,4,1,1}+M_{8,2,3,1}+M_{8,2,1,2,1}+M_{4,6,3,1}+M_{4,6,1,2,1}+M_{4,2,5,2,1}+M_{4,2,4,3,1}+M_{4,2,4,1,2,1}\\
&+M_{4,2,1,4,2,1}\\
\tilde L_*(\zeta_1,\zeta_4)&=M_{8,4,2,1,1}\\
\tilde L_*(\zeta_4,\zeta_1)&=M_{8,4,2,1,1}+M_{4,2,1,1}^2\\
\tilde L_*(\zeta_2,\zeta_4)&=M_{10,4,2,1,1}+M_{8,6,2,1,1}+M_{8,4,4,1,1}+M_{8,4,2,3,1}+M_{8,4,2,1,2,1}+M_{8,2,4,2,1,1}+M_{2,8,4,2,1,1}\\
\tilde L_*(\zeta_4,\zeta_2)&=M_{10,4,2,1,1}+M_{8,6,2,1,1}+M_{8,4,4,1,1}+M_{8,4,2,3,1}+M_{8,4,2,1,2,1}+M_{8,2,4,2,1,1}+M_{2,8,4,2,1,1}\\
&+M_9^2+M_{7,2}^2+M_{5,4}^2+M_{6,2,1}^2+M_{4,4,1}^2+M_{4,3,2}^2+M_{4,2,1,1,1}^2+M_{3,4,2}^2+M_{2,4,2,1}^2\\
&+M_1^2M_{4,2,1,1}^2+M_1^8M_5^2+M_{2,1}^4M_3^2\\
\tilde L_*(\zeta_3,\zeta_4)&=M_{12,6,2,1,1}+M_{12,4,4,1,1}+M_{12,4,2,3,1}+M_{12,4,2,1,2,1}+M_{12,2,4,2,1,1}+M_{8,8,4,1,1}+M_{8,8,2,3,1}\\
&+M_{8,8,2,1,2,1}+M_{8,4,6,3,1}+M_{8,4,6,1,2,1}+M_{8,4,2,5,2,1}+M_{8,4,2,4,3,1}+M_{8,4,2,4,1,2,1}+M_{8,4,2,1,4,2,1}\\
&+M_{4,10,4,2,1,1}+M_{4,8,6,2,1,1}+M_{4,8,4,4,1,1}+M_{4,8,4,2,3,1}+M_{4,8,4,2,1,2,1}+M_{4,8,2,4,2,1,1}+M_{4,2,8,4,2,1,1},
\end{align*}
etc.
Having $\tilde L_*$ we then can obtain $L_*$ by the dual of \bref{ltil} as
\begin{equation}\label{L}
L_*(x,y)=\sum\zeta_1x_\l y_{\l'}\ox\tilde L_*(x_\r,y_{\r'})
\end{equation}
for $x,y\in{\mathscr A}_*$, with
$$
m_*(x)=\sum x_\l\ox x_\r,\ m_*(y)=\sum y_{\l'}\ox y_{\r'}.
$$
\section{The symmetry operator $S$ and its dual}\label{S*}
\begin{Definition}\label{so}
The \emph{symmetry operator}
$$
S:R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}
$$
of degree $-1$ is defined as follows. For odd $p$, let $S$ be the zero map.
For $p=2$ let the elements $S_n\in{\mathscr A}\ox{\mathscr A}$, $n\ge0$, be given by
$$
S_n=\sum_{\substack{n_1+n_2=n-1\\\textrm{$n_1$, $n_2$
odd}}}\Sq^{n_1}\ox\Sq^{n_2}=(\Sq^1\ox\Sq^1)\delta(\Sq^{n-3}),
$$
i.~e.
\begin{align*}
S_{2k}&=0,\\
S_{2k+1}&=\sum_{0\le i<k}\Sq^{2i+1}\ox\Sq^{2(k-i)-1},
\end{align*}
$k\ge0$. Then let the linear map $S_{\mathscr F}:{\mathscr F}_0^{\le2}\to{\mathscr A}\ox{\mathscr A}$ be given by
\begin{align*}
S_{\mathscr F}(\Sq^n\Sq^m)&=S_n\delta(\Sq^m)+\delta(\Sq^n)S_m+\delta(\Sq^{n-1})S_{m+1}\\
&=(\Sq^1\ox\Sq^1)\delta(\Sq^{n-3}\Sq^m)+\delta(\Sq^n)(\Sq^1\ox\Sq^1)\delta(\Sq^{m-3})+\delta(\Sq^{n-1})(\Sq^1\ox\Sq^1)\delta(\Sq^{m-2}),
\end{align*}
$n,m\ge0$. Next define the map $S_R:R_{\mathscr F}^{\le2}\to{\mathscr A}\ox{\mathscr A}$ by restriction to
$R_{\mathscr F}^{\le2}\subset{\mathscr F}_0^{\le2}$. Thus on the Adem relations this map is
given by
\begin{equation}\label{sr}
S_R[n,m]=S_{\mathscr F}(\Sq^n\Sq^m)+\sum_{k=\max\{0,n-m+1\}}^{\min\{n/2,m-1\}}\binom{m-k-1}{n-2k}S_{\mathscr F}(\Sq^{n+m-k}\Sq^k).
\end{equation}
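For instance, for the elements $S_n$ above one has
$$
S_7=(\Sq^1\ox\Sq^1)\delta(\Sq^4)=\Sq^1\ox\Sq^5+\Sq^3\ox\Sq^3+\Sq^5\ox\Sq^1,
$$
since by the Adem relations $\Sq^1\Sq^i=0$ for odd $i$ and
$\Sq^1\Sq^i=\Sq^{i+1}$ for even $i$.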
Now let us define the map
$$
\bar S:\bar R\to{\mathscr A}\ox{\mathscr A}
$$
as a unique right ${\mathscr A}$-module homomorphism satisfying
$$
\bar S(\overline{\alpha[n,m]})=\delta(\alpha)S_R[n,m]+(1+T)\bar
L(\alpha\ox\overline{[n,m]})
$$
for $\alpha[n,m]\in{\mathrm{PAR}}$. Then finally this determines a unique linear map $S:R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}$ by
composing with the quotient map $R_{\mathscr F}\onto\bar R$.
\end{Definition}
The map $S$ is the symmetry operator in \cite{Baues}*{14.5.2} where the
following lemma is proved.
\begin{Lemma}
The map $\bar S$ satisfies the equations
\begin{align*}
\bar S([n,m])&=S_R[n,m]\\
\bar S(ar)&=\delta(a)\bar S(r)+(1+T)\bar L(a\ox r)\\
\bar S(ra)&=\bar S(r)\delta(a)
\end{align*}
for any $0<n<2m$, $a\in{\mathscr A}$ and $r\in\bar R$.
\end{Lemma}
We now turn to the dual $S_*:{\mathscr A}_*\ox{\mathscr A}_*\to{R_{\mathscr F}}_*$ of $S$ (dually to the
above, the image of this operator actually lies in $\bar
R_*\subset{R_{\mathscr F}}_*$ and so defines the operator $\bar
S_*:{\mathscr A}_*\ox{\mathscr A}_*\to\bar R_*$). Since we know that $S_*$ is a biderivation,
it suffices to compute the values $S_*(\zeta_n\ox\zeta_{n'})$. Now
dually to the equation
$$
S(a[n,m]b)
=\delta(a)S_R([n,m])\delta(b)+(1+T)L(a\ox[n,m]b)
=\delta(a)S_R([n,m])\delta(b)+(1+T)\left(\delta\k(a)L_R([n,m])\delta(b)\right)
$$
we have
\begin{multline*}
\iota S_*(\zeta_n\ox\zeta_{n'})=\\
\sum_{\substack{i+j+k=n\\i'+j'+k'=n'}}
\left(
\zeta_i^{2^{j+k}}\zeta_{i'}^{2^{j'+k'}}
\ox{S_R}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})\ox\zeta_k\zeta_{k'}
+\zeta_1\zeta_i^{2^{j+k}}\zeta_{i'}^{2^{j'+k'}}
\ox\left({L_R}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})
+{L_R}_*(\zeta_{j'}^{2^{k'}}\ox\zeta_j^{2^k})\right)\ox\zeta_k\zeta_{k'}\right)\\
=
\sum_{\substack{i+j+k=n\\i'+j'+k'=n'}}
\zeta_i^{2^{j+k}}\zeta_{i'}^{2^{j'+k'}}
\ox{S_R}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})\ox\zeta_k\zeta_{k'}
+
\zeta_1\zeta_{n-2}^4\zeta_{n'-1}^2
\ox M_{1,1}^2\ox1
+
\zeta_1\zeta_{n-1}^2\zeta_{n'-2}^4
\ox M_{1,1}^2\ox1,
\end{multline*}
with $\zeta_0=1$ and $\zeta_n=0$ for $n<0$, as before.
It thus remains to find the values
${S_R}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})$ --- which in turn are
images of the corresponding values of ${S_{\mathscr F}}_*$ under the map
${\mathscr F}_*\onto{R_{\mathscr F}}_*$. To find the latter, let us first define another
intermediate operator
$$
S^1:{\mathscr F}_0^{\le1}\to{\mathscr A}\ox{\mathscr A}
$$
by the equation
$$
S^1(\Sq^n)=S_{n+1}=(\Sq^1\ox\Sq^1)\delta\k\k(\Sq^n)=\sum_{\substack{n_1+n_2=n\\\textrm{$n_1$, $n_2$
odd}}}\Sq^{n_1}\ox\Sq^{n_2},
$$
so that we have
$$
S_{\mathscr F} m(\Sq^n\ox\Sq^m)=S_{\mathscr F}(\Sq^n\Sq^m)=S^1\k(\Sq^n)\delta(\Sq^m)+\delta(\Sq^n)S^1\k(\Sq^m)+\delta\k(\Sq^n)S^1(\Sq^m).
$$
We have the dual operator
$$
S^1_*:{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr F}_*^{\le1}
$$
such that the dual
$$
{S_{\mathscr F}}_*:{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr F}_*^{\le2}
$$
of $S_{\mathscr F}$ is given by
\begin{equation}\label{msf}
\alignbox{
&m_*{S_{\mathscr F}}_*(x\ox y)=\\
&\sum\left(\zeta_1S^1_*(x_\l\ox y_{\l'})\ox(x_\r y_{\r'})^{\le1}
+(x_\l y_{\l'})^{\le1}\ox\zeta_1S^1_*(x_\r\ox y_{\r'})
+(\zeta_1x_\l y_{\l'})^{\le1}\ox S^1_*(x_\r\ox y_{\r'})\right)
}
\end{equation}
where as before we use the Sweedler notation
$$
m_*(x)=\sum x_\l\ox x_\r,\ \ m_*(y)=\sum y_{\l'}\ox y_{\r'}
$$
and
$$
(\_)^{\le1}:{\mathscr A}_*\to{\mathscr F}_*^{\le1}
$$
sends $\zeta_1$ to $M_1$ and all other Milnor generators to 0. Thus we have
\begin{multline*}
m_*{S_{\mathscr F}}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})=\\
\sum_{\substack{\l+\r=j\\\l'+\r'=j'}}
\zeta_1S^1_*(\zeta_\l^{2^{\r+k}}\ox\zeta_{\l'}^{2^{\r'+k'}})\ox(\zeta_\r^{2^k}\zeta_{\r'}^{2^{k'}})^{\le1}
+(\zeta_\l^{2^{\r+k}}\zeta_{\l'}^{2^{\r'+k'}})^{\le1}\ox\zeta_1S^1_*(\zeta_\r^{2^k}\ox\zeta_{\r'}^{2^{k'}})
+(\zeta_1\zeta_\l^{2^{\r+k}}\zeta_{\l'}^{2^{\r'+k'}})^{\le1}\ox
S^1_*(\zeta_\r^{2^k}\ox\zeta_{\r'}^{2^{k'}}).
\end{multline*}
Now the operator $S^1_*$ is obviously given by
\begin{equation}\label{s1}
S^1_*(x\ox y)=
\begin{cases}
xy,&\textrm{$x=\zeta_1^{n_1}$, $y=\zeta_1^{n_2}$, $n_1$, $n_2$ odd},\\
0&\textrm{otherwise,}
\end{cases}
\end{equation}
so that ${S_{\mathscr F}}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})=0$ whenever $k>0$ or
$k'>0$. And among the remaining values ${S_{\mathscr F}}_*(\zeta_j\ox\zeta_{j'})$ the
only nonzero ones are given by
\begin{align*}
{S_{\mathscr F}}_*(\zeta_1\ox\zeta_1)&=M_3+M_{1,2}=M_1^3+M_{2,1},\\
{S_{\mathscr F}}_*(\zeta_1\ox\zeta_2)={S_{\mathscr F}}_*(\zeta_2\ox\zeta_1)&=M_{2,3}+M_{3,2}=M_1M_{1,1}^2,\\
{S_{\mathscr F}}_*(\zeta_2\ox\zeta_2)&=M_{5,2}+M_{4,3}=M_1M_{2,1}^2.
\end{align*}
Then further passing to ${S_R}_*$ means, as before, removing the monomials
not containing $M_{1,1}$, so that the only nonzero values of the form
${S_R}_*(\zeta_j^{2^k}\ox\zeta_{j'}^{2^{k'}})$ are
$$
{S_R}_*(\zeta_1\ox\zeta_2)={S_R}_*(\zeta_2\ox\zeta_1)=M_1M_{1,1}^2.
$$
Hence we obtain
\begin{Proposition}
$$
\iota S_*(\zeta_n\ox\zeta_{n'})=
\zeta_{n-1}^2\zeta_{n'-2}^4
\ox M_1M_{1,1}^2\ox1
+
\zeta_{n-2}^4\zeta_{n'-1}^2
\ox M_1M_{1,1}^2\ox1
+
\zeta_1\zeta_{n-2}^4\zeta_{n'-1}^2
\ox M_{1,1}^2\ox1
+
\zeta_1\zeta_{n-1}^2\zeta_{n'-2}^4
\ox M_{1,1}^2\ox1.
$$
\end{Proposition}\qed
As for $\tilde L_*$ above, we then solve these equations obtaining e.~g.
\begin{align*}
S_*(\zeta_1,\zeta_1)&=0,\\
S_*(\zeta_1,\zeta_2)=S_*(\zeta_2,\zeta_1)&=M_{2,2,1}+M_1M_{1,1}^2,\\
S_*(\zeta_2,\zeta_2)&=0,\\
S_*(\zeta_1,\zeta_3)=S_*(\zeta_3,\zeta_1)&=M_{4,2,2,1}+M_1M_{2,1,1}^2,\\
S_*(\zeta_2,\zeta_3)=S_*(\zeta_3,\zeta_2)&=M_{6,2,2,1}+M_{4,4,2,1}+M_{2,4,2,2,1}\\
&+M_1M_5^2+M_1M_{4,1}^2+M_1M_{3,2}^2+M_1M_{2,1,1,1}^2+M_1^3M_{2,1,1}^2+M_1^5M_3^2,\\
S_*(\zeta_3,\zeta_3)&=0,\\
S_*(\zeta_1,\zeta_4)=S_*(\zeta_4,\zeta_1)&=M_{8,4,2,2,1}+M_1M_{4,2,1,1}^2,\\
S_*(\zeta_2,\zeta_4)=S_*(\zeta_4,\zeta_2)&=M_{10,4,2,2,1}+M_{8,6,2,2,1}+M_{8,4,4,2,1}+M_{8,2,4,2,2,1}+M_{2,8,4,2,2,1}\\
&+M_1M_9^2+M_1M_{7,2}^2+M_1M_{6,2,1}^2+M_1M_{5,4}^2+M_1M_{4,4,1}^2+M_1M_{4,3,2}^2+M_1M_{4,2,1,1,1}^2\\
&+M_1M_{3,4,2}^2+M_1M_{2,4,2,1}^2+M_1M_{2,1}^2M_3^2+M_1^3M_{4,2,1,1}^2+M_1^9M_5^2,
\end{align*}
etc.
\endinput
\chapter{The extended Steenrod algebra and its cocycle}\label{xi}
We show that the dual invariant $S_*$ determines a singular extension of the
Hopf algebra structure of the Steenrod algebra. We also give a formula for a
cocycle representing the extension. Then we show that $S_*$ is related to a
formula which describes the main result of Kristensen on secondary cohomology
operations. A proof of this formula has not appeared in the literature yet.
\section{Singular extensions of Hopf algebras}
In this section we introduce a singular extension $\hat{\mathscr A}$ of the Steenrod
algebra ${\mathscr A}$ which is determined by the symmetry operator $S$.
\begin{Definition}
A \emph{singular extension} of a Hopf algebra $A$ is a direct sum diagram
$$
\xymatrix{
R\ar@{ >->}@<.5ex>[r]^-i
&\hat A\ar@{->>}@<.5ex>[r]^-p\ar@{->>}@<.5ex>[l]^-q
&A,\ar@{ >->}@<.5ex>[l]^-s
}
$$
i.~e. one has $ps=\id_A$, $qi=\id_R$ and $sp+iq=\id_{\hat A}$, such that
$\hat A$ is an algebra with multiplication $\mu:\hat A\ox\hat A\to\hat A$
and $\hat A$ is also a coalgebra with diagonal $\hat\delta:\hat A\to\hat
A\ox\hat A$. (Here we do not assume that $\hat\delta$ is a homomorphism of
algebras, or equivalently that $\mu$ is a homomorphism of coalgebras, so
that in general $\hat A$ is not a Hopf algebra). In addition $p$ is an algebra
homomorphism, and $s$ is a coalgebra homomorphism. Moreover $(i,p)$ must
be a singular extension of algebras and $(q,s)$ must be a singular
extension of coalgebras. This means that the ideal $R=\ker p$ of the
algebra $\hat A$ is a square zero ideal, i.~e. $xy=0$ for any $x,y\in
R$, and the coideal $R=\coker s$ of the coalgebra $\hat A$ is a
square zero coideal, i.~e. the composite
$$
\hat A\xto{\hat\delta}\hat A\ox\hat A\xto{q\ox q}R\ox R
$$
is zero.
\end{Definition}
It follows that the $\hat A$-$\hat A$-bimodule and $\hat A$-$\hat
A$-bicomodule structures on $R$ descend to $A$-$A$-bimodule and
$A$-$A$-bicomodule structures respectively.
Our basic example of a singular Hopf algebra extension is as follows. We
have seen that $\bar R$ from \eqref{rbar} has an ${\mathscr A}$-${\mathscr A}$-bimodule
structure. Now it also has an ${\mathscr A}$-${\mathscr A}$-bicomodule structure as follows. On
the one hand, there is a diagonal $\Delta_R:R_{\mathscr F}\to R^{(2)}_{\mathscr F}=\ker(q_{\mathscr F}\ox
q_{\mathscr F})$ induced in the commutative diagram
$$
\xymatrix{
R_{\mathscr F}\ar@{ >->}[r]\ar@{-->}[d]_\Delta
&{\mathscr F}_0\ar@{->>}[r]^{q_{\mathscr F}}\ar[d]^\Delta
&{\mathscr A}\ar[d]^\delta\\
R^{(2)}_{\mathscr F}\ar@{ >->}[r]
&{\mathscr F}_0\ox{\mathscr F}_0\ar@{->>}[r]^{q_{\mathscr F}\ox q_{\mathscr F}}
&{\mathscr A}\ox{\mathscr A}
}
$$
with short exact rows. Moreover there is a short exact sequence
$$
\xymatrix{
R_{\mathscr F}\ox R_{\mathscr F}\ar@{ >->}[r]^-{i^{(2)}=\binom{i_{\mathscr F}\ox1}{-1\ox i_{\mathscr F}}}
&{\mathscr F}_0\!\ox\!R_{\mathscr F}\oplus R_{\mathscr F}\!\ox\!{\mathscr F}_0\ar@{->>}[r]
&R_{\mathscr F}^{(2)},
}
$$
where $i_{\mathscr F}:R_{\mathscr F}\incl{\mathscr F}_0$ is the inclusion. Since the composite of the
quotient map
$$
{\mathscr F}_0\!\ox\!R_{\mathscr F}\oplus R_{\mathscr F}\!\ox\!{\mathscr F}_0\onto{\mathscr A}\!\ox\!\bar R\oplus\bar R\!\ox\!{\mathscr A}
$$
with $i^{(2)}$ is obviously zero, we get the induced map
$$
R_{\mathscr F}^{(2)}\to{\mathscr A}\!\ox\!\bar R\oplus\bar R\!\ox\!{\mathscr A}.
$$
Moreover the diagonal of ${\mathscr F}_0$ factors through this map as follows
\begin{equation}\label{bicomod}
\alignbox{
\xymatrix{
\bar R\ar@{-->}[d]^{\binom{\Delta_\l}{\Delta_\r}}
&R_{\mathscr F}\ar@{->>}[l]\ar@{ (->}[r]^{i_{\mathscr F}}\ar@{-->}[d]^\Delta
&{\mathscr F}_0\ar@{->>}[r]^{q_{\mathscr F}}\ar[d]^\Delta
&{\mathscr A}\ar[d]^\delta\\
{\mathscr A}\!\ox\!\bar R\oplus\bar R\!\ox\!{\mathscr A}
&R^{(2)}\ar@{->>}[l]\ar@{ (->}[r]
&{\mathscr F}_0\ox{\mathscr F}_0\ar@{->>}[r]^{q_{\mathscr F}\ox q_{\mathscr F}}
&{\mathscr A}\ox{\mathscr A}
}
}
\end{equation}
giving the left, resp. right coaction $\Delta_\l$, resp. $\Delta_\r$ of the
desired ${\mathscr A}$-${\mathscr A}$-bicomodule structure on $\bar R$.
Note that the above construction is actually precisely dual to the
standard procedure for equipping the kernel of a singular extension with a
structure of a bimodule over a base. In particular we could use the dual
diagram
\begin{equation}\label{bimod}
\alignbox{
\xymatrix{
{\mathscr A}\!\ox\!\bar R\oplus\bar R\!\ox\!{\mathscr A}\ar@{-->}[d]^{\binom{m_\l}{m_\r}}
&R^{(2)}\ar@{->>}[l]\ar@{ (->}[r]\ar@{-->}[d]^m
&{\mathscr F}_0\ox{\mathscr F}_0\ar@{->>}[r]^{q_{\mathscr F}\ox q_{\mathscr F}}\ar[d]^m
&{\mathscr A}\ox{\mathscr A}\ar[d]^m\\
\bar R
&R_{\mathscr F}\ar@{->>}[l]\ar@{ (->}[r]^{i_{\mathscr F}}
&{\mathscr F}_0\ar@{->>}[r]^{q_{\mathscr F}}
&{\mathscr A}
}
}
\end{equation}
to give $\bar R$ via $m_\l$ and $m_\r$ the structure of ${\mathscr A}$-${\mathscr A}$-bimodule.
\begin{Theorem}\label{usplit}
There is a unique singular extension of Hopf algebras
$$
\xymatrix{
\Sigma^{-1}\bar R\ar@{ >->}@<.5ex>[r]^-i
&\hat{\mathscr A}\ar@{->>}@<.5ex>[r]^-p\ar@{->>}@<.5ex>[l]^-q
&{\mathscr A},\ar@{ >->}@<.5ex>[l]^-s
}
$$
where $\hat{\mathscr A}$ is the split singular extension of algebras, that is, as an
algebra
$$
\hat{\mathscr A}={\mathscr A}\oplus\Sigma^{-1}\bar R
$$
is the semidirect product with multiplication
$$
(a,r)(a',r')=(aa',ar'+ra')
$$
and the following conditions are satisfied.
The induced ${\mathscr A}$-${\mathscr A}$-bimodule and ${\mathscr A}$-${\mathscr A}$-bicomodule structures on
$\Sigma^{-1}\bar R$ are given by the ones
indicated in \eqref{bicomod} above, and the diagonal $\hat\delta$ of
the coalgebra $\hat{\mathscr A}$ fits into the commutative diagram
\begin{equation}\label{coext}
\alignbox{
\xymatrix{
\hat{\mathscr A}\ar[rr]^{\hat\delta}\ar@{->>}[d]
&&\hat{\mathscr A}\ox\hat{\mathscr A}\ar[d]^{1+T}\\
\Sigma^{-1}\bar R\ar[r]^-S
&{\mathscr A}\ox{\mathscr A}\ar@{ (->}[r]
&\hat{\mathscr A}\ox\hat{\mathscr A}}
}
\end{equation}
where $S$ is the symmetry operator in \bref{so}.
\end{Theorem}
We will prove this theorem together with the dual statement. Note that
clearly the dual of a singular extension of any Hopf algebra $A$ is a
singular extension of the dual Hopf algebra $A_*$. Clearly then the above
theorem is equivalent to
\begin{Theorem}\label{dusplit}
There is a unique singular extension of Hopf algebras
$$
\xymatrix{
\Sigma^{-1}\bar R_*\ar@{ >->}@<.5ex>[r]^-{q_*}
&\hat{\mathscr A}_*\ar@{->>}@<.5ex>[r]^-{s_*}\ar@{->>}@<.5ex>[l]^-{i_*}
&{\mathscr A}_*,\ar@{ >->}@<.5ex>[l]^-{p_*}
}
$$
where $\hat{\mathscr A}_*$ is the split singular extension of coalgebras, that is, as
a coalgebra
$$
\hat{\mathscr A}_*={\mathscr A}_*\oplus\Sigma^{-1}\bar R_*
$$
with diagonal
$$
{\mathscr A}_*\oplus\Sigma^{-1}\bar R_*\xto{
\left(\begin{smallmatrix}
m_*&0\\
0&{m_\l}_*\\
0&{m_\r}_*\\
0&0
\end{smallmatrix}\right)
}
{\mathscr A}_*\!\ox\!{\mathscr A}_*\oplus
{\mathscr A}_*\!\ox\!\Sigma^{-1}\bar R_*\oplus
\Sigma^{-1}\bar R_*\!\ox\!{\mathscr A}_*\oplus
\Sigma^{-1}\bar R_*\!\ox\!\Sigma^{-1}\bar R_*
$$
where the diagonal $m_*$ is dual to the multiplication $m:{\mathscr A}\ox{\mathscr A}\to{\mathscr A}$
and ${m_\l}_*$, ${m_\r}_*$ are the ${\mathscr A}_*$-${\mathscr A}_*$-bicomodule structure maps
dual to the ${\mathscr A}$-${\mathscr A}$-bimodule structure maps $m_\l:{\mathscr A}\ox\Sigma^{-1}\bar
R\to\Sigma^{-1}\bar R$, $m_\r:\Sigma^{-1}\bar R\ox{\mathscr A}\to\Sigma^{-1}\bar R$
in \eqref{bimod}, where the induced ${\mathscr A}_*$-${\mathscr A}_*$-bimodule structure on
$\bar R_*$ is dual to the ${\mathscr A}$-${\mathscr A}$-bicomodule structure indicated in
\eqref{bicomod} above, and where the multiplication $\hat\delta_*$ of the
algebra $\hat{\mathscr A}_*$ satisfies the commutation rule
$$
p_*(y)p_*(x)=p_*(x)p_*(y)+S_*(x\ox y)
$$
for any $x,y\in{\mathscr A}_*$, where
$$
S_*:{\mathscr A}_*\ox{\mathscr A}_*\to\Sigma^{-1}\bar R_*
$$
is the cosymmetry operator from \bref{cosym}.
\end{Theorem}
\begin{proof}[Proof of \bref{usplit} and \bref{dusplit}]
The diagonal $\hat\delta$ can be written as follows
$$
{\mathscr A}\oplus\Sigma^{-1}\bar R\xto{
\left(\begin{smallmatrix}
\phi_{11}&\phi_{12}\\
\phi_{21}&\phi_{22}\\
\phi_{31}&\phi_{32}\\
\phi_{41}&\phi_{42}
\end{smallmatrix}\right)
}
{\mathscr A}\!\ox\!{\mathscr A}\oplus
{\mathscr A}\!\ox\!\Sigma^{-1}\bar R\oplus
\Sigma^{-1}\bar R\!\ox\!{\mathscr A}\oplus
\Sigma^{-1}\bar R\!\ox\!\Sigma^{-1}\bar R.
$$
Then the condition that $s:{\mathscr A}\into{\mathscr A}\oplus\Sigma^{-1}\bar R$ is a coalgebra
homomorphism implies $\phi_{11}=\delta$ and $\phi_{21}=0$, $\phi_{31}=0$,
$\phi_{41}=0$. Moreover the condition that the ${\mathscr A}$-${\mathscr A}$-bicomodule
structure induced on $\Sigma^{-1}\bar R$ coincides with the one given in
\eqref{bicomod} implies $\phi_{22}=\Delta_\l$, $\phi_{32}=\Delta_\r$. Next the
condition that $(s,q)$ is a singular extension of coalgebras, i.~e. the
coideal $\bar R$ has zero comultiplication, implies $\phi_{42}=0$. Finally,
let us look at the diagram \eqref{coext}. The lower composite in this
diagram sends $(a,r)\in{\mathscr A}\oplus\Sigma^{-1}\bar R$ to
$$
(S(r),0,0,0)\in{\mathscr A}\!\ox\!{\mathscr A}\oplus
{\mathscr A}\!\ox\!\Sigma^{-1}\bar R\oplus
\Sigma^{-1}\bar R\!\ox\!{\mathscr A}\oplus
\Sigma^{-1}\bar R\!\ox\!\Sigma^{-1}\bar R.
$$
The upper composite sends it to
\begin{align*}
(1+T)\hat\delta(a,r)&=(1+T)(\delta(a)+\phi_{12}(r),\Delta_\l(r),\Delta_\r(r),0)\\
&=((1+T)\delta(a)+(1+T)\phi_{12}(r),\Delta_\l(r)+T\Delta_\r(r),\Delta_\r(r)+T\Delta_\l(r),0).
\end{align*}
Since $\delta$ is cocommutative, one has $(1+T)\delta=0$. Moreover
cocommutativity of $\Delta:{\mathscr F}_0\to{\mathscr F}_0\ox{\mathscr F}_0$ implies $T\Delta_\l=\Delta_\r$,
$T\Delta_\r=\Delta_\l$. Thus commutativity of \eqref{coext} is equivalent to the
condition
\begin{equation}\label{xis}
(1+T)\phi_{12}=S:\Sigma^{-1}\bar R\to{\mathscr A}\ox{\mathscr A}.
\end{equation}
Equivalently, passing to the dual we see that the dual map
$\xi_*={\phi_{12}}_*:{\mathscr A}_*\ox{\mathscr A}_*\to\Sigma^{-1}\bar R_*$ must satisfy
$$
\xi_*(1+T)=S_*.
$$
Now it is easy to see that $\xi_*$ is in fact the algebra cocycle
determining the algebra extension
$$
\xymatrix{
\bar R_*\ar@{ >->}[r]^{q_*}&\hat{\mathscr A}_*\ar@{->>}[r]^{s_*}&{\mathscr A}_*,
}
$$
that is, in $\hat{\mathscr A}_*={\mathscr A}_*\oplus\Sigma^{-1}\bar R_*$ one has
$$
(\alpha,\beta)(\alpha',\beta')=(\alpha\alpha',\alpha\beta'+\beta\alpha'+\xi_*(\alpha\ox\alpha')).
$$
Hence by \eqref{xis} one has
$$
(\alpha,\beta)(\alpha',\beta')-(\alpha',\beta')(\alpha,\beta)=(0,S_*(\alpha\ox\alpha')).
$$
Now recall that ${\mathscr A}_*$ is actually a polynomial algebra. Using this fact it
has been shown in \cite{Baues}*{16.2} that the algebra structure of any of
its singular extensions such as $\hat{\mathscr A}_*$ above is completely determined
by its commutator map, i.~e. by $S_*$. Thus ${\phi_{12}}_*$ and hence the
whole $\phi_{ij}$ matrix is uniquely determined. It is then straightforward
to check that this matrix indeed yields a coalgebra structure on $\hat{\mathscr A}$
with the desired properties.
\end{proof}
It follows immediately from \bref{dusplit} (and actually this was also
deduced during its proof) that one has
\begin{Corollary}\label{cosxi}
For the cosymmetry operator $S_*$ from \bref{cosym} there exists a map
$$
\xi_*:{\mathscr A}_*\ox{\mathscr A}_*\to\Sigma^{-1}\bar R_*
$$
which is a 2-cocycle, i.~e. for any $x,y,z\in{\mathscr A}_*$ one has
$$
x\xi_*(y,z)+\xi_*(x,yz)=z\xi_*(x,y)+\xi_*(xy,z)
$$
and such that its symmetrization is equal to $S_*$, i.~e. for any
$x,y\in{\mathscr A}_*$ one has
$$
\xi_*(x,y)+\xi_*(y,x)=S_*(x,y).
$$
\end{Corollary}
\begin{proof}
This follows since any extension
$$
\xymatrix{
M\ar@{ >->}[r]^i&A'\ar@{->>}[r]^p&A
}
$$
of a commutative algebra $A$ by a symmetric $A$-module $M$ is determined by a 2-cocycle
$c:A\ox A\to M$ such that for any $x,y\in A'$ one has
$$
xy-yx=i(c(px,py)-c(py,px)),
$$
i.~e. the commutator map for $A'$ is given by the antisymmetrization of
$c$. Of course for $p=2$ there is no difference between symmetrization and
antisymmetrization.
\end{proof}
\begin{Remark}
The above corollary is easily seen to be exactly dual to
\cite{Baues}*{Theorem 16.1.5}.
\end{Remark}
Using the extended Steenrod algebra we can next compute the deviation of
the cocycle $\xi_*$ from being an ${\mathscr A}_*$-comodule homomorphism. Namely, let
$$
{\nabla_\xi}_*:{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr A}_*\ox\Sigma^{-1}\bar R_*
$$
be the difference between the upper and lower composites in the diagram
\begin{equation}\label{nabladef}
\begin{aligned}
\xymatrix{
{\mathscr A}_*\ox{\mathscr A}_*\ar[d]_{\xi_*}\ar[r]^-{\textrm{coaction}}&{\mathscr A}_*\ox{\mathscr A}_*\ox{\mathscr A}_*\ar[d]^{1\ox\xi_*}\\
\Sigma^{-1}\bar R_*\ar[r]^-{\textrm{coaction}}&{\mathscr A}_*\ox\Sigma^{-1}\bar R_*.
}
\end{aligned}
\end{equation}
Thus on elements we have
\begin{equation}\label{nablaelts}
{\nabla_\xi}_*(x,y)=\sum\xi_*(x,y)_{\mathscr A}\ox\xi_*(x,y)_R-\sum x_\l
y_{\l'}\ox\xi_*(x_\r,y_{\r'}),
\end{equation}
where again the Sweedler notation is used,
$$
m_*(x)=\sum x_\l\ox x_\r
$$
for the diagonal
$$
m_*:{\mathscr A}_*\to{\mathscr A}_*\ox{\mathscr A}_*
$$
and
$$
a_*(x)=\sum x_{\mathscr A}\ox x_C
$$
for the coaction
$$
a_*:C\to{\mathscr A}_*\ox C
$$
of a left ${\mathscr A}_*$-comodule $C$.
Let us also denote by ${\nabla_S}_*$ the analogous operator with $S_*$ in
place of $\xi_*$. That is, we define
$$
{\nabla_S}_*(x,y)=\sum S_*(x,y)_{\mathscr A}\ox S_*(x,y)_R-\sum x_\l
y_{\l'}\ox S_*(x_\r,y_{\r'}).
$$
We then obviously have
\begin{equation}\label{nablas}
{\nabla_\xi}_*(x,y)+{\nabla_\xi}_*(y,x)={\nabla_S}_*(x,y)
\end{equation}
for any $x,y\in{\mathscr A}_*$.
\begin{Lemma}
The map ${\nabla_\xi}_*$ above is a 2-cocycle, i.~e. for any $x,y,z\in{\mathscr A}_*$ one has
$$
m_*(x){\nabla_\xi}_*(y,z)+{\nabla_\xi}_*(x,yz)={\nabla_\xi}_*(x,y)m_*(z)+{\nabla_\xi}_*(xy,z).
$$
\end{Lemma}
\begin{proof}
First note that the diagram
$$
\xymatrix{
{\mathscr A}_*\ox\bar R_*\ar[r]^-{m_*\ox\textrm{coaction}}\ar[dd]_{\textrm{action}}
&{\mathscr A}_*\ox{\mathscr A}_*\ox{\mathscr A}_*\ox\bar R_*\ar[dr]^{1\ox T\ox1}\\
&&{\mathscr A}_*\ox{\mathscr A}_*\ox{\mathscr A}_*\ox\bar R_*\ar[d]^{\delta_*\ox\textrm{action}}\\
\bar R_*\ar[rr]^{\textrm{coaction}}&&{\mathscr A}_*\ox\bar R_*
}
$$
commutes --- this follows from the fact that the action and coaction of
${\mathscr A}_*$ on $\bar R_*$ are induced from the multiplication and
comultiplication in ${\mathscr F}_*$ which is a Hopf algebra.
We thus conclude that the coaction map
$$
\bar R_*\to{\mathscr A}_*\ox\bar R_*
$$
is a homomorphism of ${\mathscr A}_*$-modules, so that its composite with the cocycle
$\xi_*$ is a cocycle. It thus remains to show that the composite
$$
{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr A}_*\ox{\mathscr A}_*\ox{\mathscr A}_*\to{\mathscr A}_*\ox\bar R_*
$$
in the diagram \eqref{nabladef} is also a cocycle. Let us denote this
composite by $\phi$.
Observe that the Hopf algebra diagram for ${\mathscr A}_*$ expressing interchange of the
multiplication and diagonal can be written on elements as follows:
$$
\sum(xy)_\l\ox(xy)_\r=\sum x_\l y_{\l'}\ox x_\r y_{\r'}.
$$
Using this identity we then have for any $x,y,z\in{\mathscr A}_*$
\begin{align*}
m_*(x)\phi(y,z)&=\sum x_\l y_{\l'}z_{\l''}\ox x_\r\xi_*(y_{\r'},z_{\r''});\\
\phi(x,yz)&=\sum
x_\l(yz)_{\l'}\ox\xi_*(x_\r,(yz)_{\r'})
=(\delta_*\ox\xi_*)\left(\sum x_\l\ox(yz)_{\l'}\ox
x_\r\ox(yz)_{\r'}\right)\\
&=(\delta_*\ox\xi_*)\left(\sum x_\l\ox y_{\l'}z_{\l''}\ox x_\r\ox
y_{\r'}z_{\r''}\right)=\sum x_\l
y_{\l'}z_{\l''}\ox\xi_*(x_\r,y_{\r'}z_{\r''});\\
\phi(xy,z)&=\sum(xy)_\l
z_{\l'}\ox\xi_*((xy)_\r,z_{\r'})=(\delta_*\ox\xi_*)\left(\sum(xy)_\l\ox
z_{\l'}\ox(xy)_\r\ox z_{\r'}\right)\\
&=(\delta_*\ox\xi_*)\left(\sum x_\l y_{\l'}\ox z_{\l''}\ox x_\r y_{\r'}\ox
z_{\r''}\right)=\sum x_\l y_{\l'}z_{\l''}\ox\xi_*(x_\r y_{\r'},z_{\r''});\\
\phi(x,y)m_*(z)&=\sum x_\l y_{\l'}z_{\l''}\ox\xi_*(x_\r,y_{\r'})z_{\r''}.
\end{align*}
These identities readily imply that $\phi$ is a cocycle as required.
\end{proof}
We next use the fact that the cocycle ${\nabla_\xi}_*$ is defined on a polynomial
algebra and hence can be expressed by its values on generators and by its
(anti)symmetrization ${\nabla_S}_*$. Indeed the proof of \cite{Baues}*{16.2.3}
works in this generality, i.~e. one has
\begin{Proposition}
Let $P=k[\zeta_1,\zeta_2,...]$ be a polynomial algebra over a commutative ring $k$,
let $M$ be a $P$-module, let
$$
\gamma:P\ox P\to M
$$
be a Hochschild 2-cocycle, i.~e. one has
$$
x\gamma(y,z)-\gamma(xy,z)+\gamma(x,yz)-z\gamma(x,y)=0
$$
for all $x,y,z\in P$, and let $\sigma$ be the antisymmetrization of
$\gamma$, i.~e.
$$
\sigma(x,y)=\gamma(x,y)-\gamma(y,x).
$$
Then, up to coboundaries, $\gamma$ can be recovered from $\sigma$, i.~e.
there is a cocycle $\gamma_\sigma$ cohomologous to $\gamma$ which depends
only on $\sigma$.
\end{Proposition}
\begin{proof}
To $\gamma$ corresponds a singular extension of $k$-algebras
$$
\xymatrix{
M\ar@{ >->}[r]^i&E\ar@{->>}[r]^p&P
}
$$
whose isomorphism class uniquely determines the cohomology class of
$\gamma$. Let us choose for each polynomial generator $\zeta_n\in P$ an
element $s(\zeta_n)\in E$ with $ps(\zeta_n)=\zeta_n$. Furthermore let us
choose an ordering on the polynomial generators of $P$,
$\zeta_1<\zeta_2<...$; these data determine uniquely a $k$-linear
section of $p$, by the formula
$$
s(\zeta_{n_1}\zeta_{n_2}\cdots)=s(\zeta_{n_1})s(\zeta_{n_2})\cdots
$$
for any finite sequence $n_1\le n_2\le\cdots$ of positive integers. Then we
can use $s$ to construct a cocycle $\gamma_\sigma$ cohomologous to
$\gamma$ determined by
$$
s(xy)=s(x)s(y)+i\gamma_\sigma(x,y).
$$
But if $x$ and $y$ are monomials, then $s(xy)$ and $s(x)s(y)$ differ only
by the order of terms, so that $i\gamma_\sigma(x,y)$ is contained in the
ideal generated by commutators
$$
\gamma_\sigma(\zeta_i,\zeta_j)=s(\zeta_i)s(\zeta_j)-s(\zeta_j)s(\zeta_i)=\sigma(\zeta_i,\zeta_j)
$$
for $i>j$. So in fact one can express each $\gamma_\sigma(x,y)$ by a linear
combination of elements of $M$ of the form $z\sigma(\zeta_i,\zeta_j)$ for
$z\in P$.
\end{proof}
\begin{Remark}
Obviously the above proof actually contains an algorithm for expressing the
cocycle $\gamma_\sigma$ in terms of $\sigma$. For $x=\zeta_{n_1}\zeta_{n_2}\cdots\zeta_{n_k}$ and
$y=\zeta_{m_1}\zeta_{m_2}\cdots\zeta_{m_l}$, with $n_1\le n_2\le\cdots\le
n_k$, $m_1\le m_2\le\cdots\le m_l$, either one has $n_k\le m_1$, in which
case $\gamma_\sigma(x,y)=0$ since $s(x)s(y)=s(xy)$, or one has $n_k>m_1$,
in which case one can write
$$
s(x)s(y)=s(\zeta_{n_1})\cdots
s(\zeta_{n_{k-1}})s(\zeta_{m_1})s(\zeta_{n_k})s(\zeta_{m_2})\cdots
s(\zeta_{m_l})
+\zeta_{n_1}\cdots\zeta_{n_{k-1}}\zeta_{m_2}\cdots\zeta_{m_l}\sigma(\zeta_{m_1},\zeta_{n_k}).
$$
Applying the same procedure again several times one finally arrives at
$s(xy)+$(a sum of elements of the form $z\sigma(\zeta_i,\zeta_j)$). In fact
it is easy to see that one has
$$
\gamma_\sigma(\zeta_{n_1}\zeta_{n_2}\cdots\zeta_{n_k},\zeta_{m_1}\zeta_{m_2}\cdots\zeta_{m_l})=
\sum_{n_i>m_j}
\zeta_{n_1}\cdots\zeta_{n_{i-1}}\zeta_{n_{i+1}}\cdots\zeta_{n_k}
\zeta_{m_1}\cdots\zeta_{m_{j-1}}\zeta_{m_{j+1}}\cdots\zeta_{m_l}\sigma(\zeta_{m_j},\zeta_{n_i}).
$$
In the characteristic $p>0$ case further obvious simplifications occur.
In particular we can choose the cocycle $\xi_*$ in \bref{cosxi} in such a way that the formula
\begin{equation}\label{xifromS}
\begin{aligned}
&\xi_*(\zeta_1^{d_1}\zeta_2^{d_2}\cdots,\zeta_1^{e_1}\zeta_2^{e_2}\cdots)=\\
&\sum_{\substack{i<j\\\textrm{$e_i$, $d_j$ odd}}}
\zeta_1^{d_1+e_1}\cdots\zeta_{i-1}^{d_{i-1}+e_{i-1}}\zeta_i^{d_i+e_i-1}\zeta_{i+1}^{d_{i+1}+e_{i+1}}
\cdots\zeta_{j-1}^{d_{j-1}+e_{j-1}}\zeta_j^{d_j+e_j-1}\zeta_{j+1}^{d_{j+1}+e_{j+1}}\cdots
S_*(\zeta_i,\zeta_j)
\end{aligned}
\end{equation}
holds.
\end{Remark}
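As a small worked illustration of \eqref{xifromS}, take $x=\zeta_1\zeta_2$
and $y=\zeta_1$, i.~e. $d_1=d_2=1$, $e_1=1$ and all other exponents zero.
The only pair $(i,j)$ with $i<j$ and $e_i$, $d_j$ odd is $(i,j)=(1,2)$, so
$$
\xi_*(\zeta_1\zeta_2,\zeta_1)=\zeta_1^{1+1-1}\zeta_2^{1+0-1}S_*(\zeta_1,\zeta_2)
=\zeta_1S_*(\zeta_1,\zeta_2),
$$
whereas $\xi_*(\zeta_1,\zeta_1\zeta_2)=0$, since in that case no $d_j$ with
$j\ge2$ is odd.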
The operator ${\nabla_S}_*$ is readily computable. It is a symmetric
biderivation, with ${\nabla_S}_*(x,x)=0$ for all $x$, thus uniquely
determined by its values of the form ${\nabla_S}_*(\zeta_n,\zeta_m)$ for
$n<m$, which are expressed easily from the corresponding values of $S_*$.
For example, one has
\begin{align*}
{\nabla_S}_*(\zeta_1,\zeta_2)&
=\zeta_1\ox M_{1,1}^2,\\
{\nabla_S}_*(\zeta_1,\zeta_3)&
=\zeta_1^5\ox M_{1,1}^2+\zeta_1\ox M_{2,1,1}^2,\\
{\nabla_S}_*(\zeta_2,\zeta_3)&
=\left(\zeta_1^7+\zeta_1\zeta_2^2\right)\ox M_{1,1}^2
+\zeta_1^3\ox M_{2,1,1}^2
+\zeta_1\ox\left(M_1^2M_3+M_1M_{2,1,1}+M_5+M_{4,1}+M_{3,2}+M_{2,1,1,1}\right)^2,\\
{\nabla_S}_*(\zeta_1,\zeta_4)&
=\zeta_1\zeta_2^4\ox M_{1,1}^2
+\zeta_1^9\ox M_{2,1,1}^2
+\zeta_1\ox M_{4,2,1,1}^2,\\
{\nabla_S}_*(\zeta_2,\zeta_4)&
=\left(\zeta_1^3\zeta_2^4+\zeta_1\zeta_3^2\right)\ox M_{1,1}^2
+\zeta_1^{11}\ox M_{2,1,1}^2\\
&+\zeta_1^9\ox\left(M_1^2M_3+M_1M_{2,1,1}+M_5+M_{4,1}+M_{3,2}+M_{2,1,1,1}\right)^2
+\zeta_1^3\ox M_{4,2,1,1}^2\\
&+\zeta_1\ox\left(
M_1^4M_5
+M_{2,1}^2M_3
+M_1^2M_{4,2,1,1}
+M_9
+M_{7,2}
+M_{6,2,1}
+M_{5,4}
+M_{3,4,2}
+M_{4,3,2}
+M_{4,4,1}
\right.\\
&\phantom{+\zeta_1\ox\ }\left.
+M_{2,4,2,1}
+M_{4,2,1,1,1}
\right)^2,
\end{align*}
etc.
\section{The formula of Kristensen}
We will next use certain elements defined in \cite{Kristensen}*{Theorem 3.3} to
derive more explicit expressions for $\xi_*$, hence for $S_*$,
${\nabla_S}_*$ and ${\nabla_\xi}_*$. We recall that Kristensen defines
$$
A[a,b]=(\Sq^1\ox\Sq^{0,1})\delta\left(\Sq^{a-3}\Sq^{b-2}+\Sq^{a-2}\Sq^{b-3}
+\sum_j\binom{b-1-j}{a-2j}(\Sq^{a+b-j-3}\Sq^{j-2}+\Sq^{a+b-j-2}\Sq^{j-3})\right),
$$
for natural numbers $a,b$. Obviously one has
$$
A[a,b]=(\Sq^1\ox\Sq^{0,1})\delta k([a,b]),
$$
where $k$ is the operator determined by
$$
k(xy)=\k(\k\k(x)\k\k(y))
$$
for $x,y\in{\mathscr F}_0^{\le1}$. We then interpret the elements $A[a,b]$ as defining an
${\mathbb F}$-linear operator of the form
$$
K:{\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\to{\mathscr A}\ox{\mathscr A}
$$
given by
$$
K(x\ox y)=(\Sq^1\ox\Sq^{0,1})\delta\k(\k\k(x)\k\k(y))
$$
which factors through ${\mathscr F}_0^{\le1}\ox{\mathscr F}_0^{\le1}\onto{\mathscr F}_0^{\le2}$ and
is then restricted to $R_{\mathscr F}^{\le2}\into{\mathscr F}_0^{\le2}$. We can then dualize $K$
to get
\begin{Definition}
We define an ${\mathbb F}$-linear operator
$$
K_*:{\mathscr A}_*\ox{\mathscr A}_*\to{R^{\le2}_{\mathscr F}}_*
$$
as the composite of the dual of $K$ above (whose image lies in that of
$m_*:{\mathscr F}^{\le2}_*\into{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*$) with the quotient map
${\mathscr F}^{\le2}_*\onto{R^{\le2}_{\mathscr F}}_*$.
\end{Definition}
Thus explicitly, $K_*$ is the composite
$$
{\mathscr A}_*\ox{\mathscr A}_*\xto{\Sq^1\cdot_*\ox\Sq^{0,1}\cdot_*}{\mathscr A}_*\ox{\mathscr A}_*\xto{\delta_*}
{\mathscr A}_*\xto{\zeta_1}{\mathscr A}_*\into{\mathscr F}_*\xto{m_*}{\mathscr F}_*\ox{\mathscr F}_*\onto{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*
\xto{M_1^2\ox M_1^2}{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*
$$
landing in ${\mathscr F}^{\le2}_*\into{\mathscr F}^{\le1}_*\ox{\mathscr F}^{\le1}_*$ and precomposed with
${\mathscr F}^{\le2}_*\onto{R_{\mathscr F}^{\le2}}_*$. Or on elements,
$$
K_*(x\ox y)=(M_1^2\ox M_1^2)\left(m_*(\zeta_1\frac{\d x}{\d\zeta_1}\frac{\d
y}{\d\zeta_2})\right)^{\le1}=m_*(M_{1,1}^2M_1\frac{\d x}{\d\zeta_1}\frac{\d
y}{\d\zeta_2})^{\le1}.
$$
One thus has
\begin{equation}\label{kstar}
K_*(\zeta_1^{n_1}\zeta_2^{n_2}\cdots\ox\zeta_1^{m_1}\zeta_2^{m_2}\cdots)=
\begin{cases}
M_1^{n_1+m_1}M_{2,1}^{n_2+m_2-1}M_{1,1}^2,&\textrm{$n_1$, $m_2$ odd,
$n_i=m_i=0$ for $i>2$},\\
0&\textrm{otherwise.}
\end{cases}
\end{equation}
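For example, \eqref{kstar} gives
$$
K_*(\zeta_1\ox\zeta_2)=M_1M_{1,1}^2,
$$
taking $n_1=m_2=1$ and all other exponents zero, while
$K_*(\zeta_2\ox\zeta_1)=0$ since there $n_1=0$ is even. Hence
$K_*(1+T)(\zeta_1\ox\zeta_2)=M_1M_{1,1}^2$, in agreement with the value
${S_R}_*(\zeta_1\ox\zeta_2)=M_1M_{1,1}^2$ computed in the previous chapter.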
We have
\begin{Proposition}
The symmetrization of the operator $K_*$ is dual to the operator $S_R$ in
\eqref{sr}, i.~e. it is given by precomposing ${S_{\mathscr F}}_*$ given in \eqref{msf}
with the restriction map ${\mathscr F}_*^{\le2}\onto{R_{\mathscr F}}_*^{\le2}$.
\end{Proposition}
\begin{proof}
From the above formula \eqref{kstar}, for monomials $x=\zeta_1^{n_1}\zeta_2^{n_2}\zeta_3^{n_3}\cdots$ and
$y=\zeta_1^{m_1}\zeta_2^{m_2}\zeta_3^{m_3}\cdots$ we have
$$
K_*(x\ox y)+K_*(y\ox x)=
\begin{cases}
M_1^{n_1+m_1}M_{2,1}^{n_2+m_2-1}M_{1,1}^2,&\textrm{$n_1m_2+m_1n_2$ odd and
$n_i=m_i=0$ for $i>2$,}\\
0&\textrm{otherwise.}
\end{cases}
$$
On the other hand, using the explicit expression \eqref{msf} and the expression for the
operator $S^1_*$ in \eqref{s1} we can write
$$
m_*{S_{\mathscr F}}_*(x\ox y)=\sum_{\substack{x_\l=\zeta_1^{2n-1}\\y_{\l'}=\zeta_1^{2n'-1}}}
\zeta_1^{2(n+n')-1}\ox(x_\r y_{\r'})^{\le1}
+(\zeta_1\ox1+1\ox\zeta_1)\sum_{\substack{x_\r=\zeta_1^{2n-1}\\y_{\r'}=\zeta_1^{2n'-1}}}
(x_\l y_{\l'})^{\le1}\ox\zeta_1^{2(n+n'-1)}.
$$
From the expression \eqref{mildiag} for the Milnor diagonal we thus see
that for monomials $x=\zeta_1^{n_1}\zeta_2^{n_2}\zeta_3^{n_3}\cdots$ and
$y=\zeta_1^{m_1}\zeta_2^{m_2}\zeta_3^{m_3}\cdots$ one has ${S_{\mathscr F}}_*(x\ox
y)=0$ unless $n_i=m_i=0$ for $i>2$, whereas in the remaining cases one
has
\begin{multline*}
m_*{S_{\mathscr F}}_*(\zeta_1^{n_1}\zeta_2^{n_2}\ox\zeta_1^{m_1}\zeta_2^{m_2})=
\sum_{\substack{0\le i\le n_1\\0\le j\le m_1\\\textrm{$i$, $j$ odd}}}
\binom{n_1}i\binom{m_1}j
\zeta_1^{i+j+2(n_2+m_2)+1}\ox\zeta_1^{n_1+m_1-i-j+n_2+m_2}\\
+(\zeta_1\ox1+1\ox\zeta_1)
\sum_{\substack{0\le i\le n_1\\0\le j\le m_1\\\textrm{$n_1-i+n_2$, $m_1-j+m_2$ odd}}}
\binom{n_1}i\binom{m_1}j\zeta_1^{i+j+2(n_2+m_2)}\ox\zeta_1^{n_1+m_1-i-j+n_2+m_2}.
\end{multline*}
Let us now turn back to the symmetrization of $K_*$. We compute its image
under the map $m_*$; by \eqref{mstar} it sends $M_1$ to
$\zeta_1\ox1+1\ox\zeta_1$, $M_{1,1}$ to $\zeta_1\ox\zeta_1$ and $M_{2,1}$
to $\zeta_1^2\ox\zeta_1$. Thus the nonzero values of this image are, for
$n_1m_2+m_1n_2$ odd,
$$
m_*K_*(1+T)(\zeta_1^{n_1}\zeta_2^{n_2}\ox\zeta_1^{m_1}\zeta_2^{m_2})=
(\zeta_1\ox1+1\ox\zeta_1)^{n_1+m_1}(\zeta_1^2\ox\zeta_1)^{n_2+m_2-1}(\zeta_1^2\ox\zeta_1^2).
$$
Then expanding
$(\zeta_1\ox1+1\ox\zeta_1)^{n_1+m_1}=(\zeta_1\ox1+1\ox\zeta_1)^{n_1}(\zeta_1\ox1+1\ox\zeta_1)^{m_1}$
via binomials we obtain
$$
m_*K_*(1+T)(\zeta_1^{n_1}\zeta_2^{n_2}\ox\zeta_1^{m_1}\zeta_2^{m_2})=
\sum_{\substack{0\le i\le n_1\\0\le j\le
m_1}}\binom{n_1}i\binom{m_1}j\zeta_1^{i+j+2(n_2+m_2)}\ox\zeta_1^{n_1+m_1-i-j+n_2+m_2+1}.
$$
It follows that nonzero values of the difference $m_*({S_{\mathscr F}}_*-K_*(1+T))$
on monomials in Milnor generators are equal to
\begin{multline*}
\sum_{\substack{0\le i\le n_1\\0\le j\le m_1\\\textrm{$i$, $j$ odd}}}
\binom{n_1}i\binom{m_1}j
\zeta_1^{i+j+2(n_2+m_2)+1}\ox\zeta_1^{n_1+m_1-i-j+n_2+m_2}\\
+\sum_{\substack{0\le i\le n_1\\0\le j\le m_1\\\textrm{$n_1-i+n_2$, $m_1-j+m_2$ odd}}}
\binom{n_1}i\binom{m_1}j\zeta_1^{i+j+2(n_2+m_2)+1}\ox\zeta_1^{n_1+m_1-i-j+n_2+m_2}\\
+\sum_{\substack{0\le i\le n_1\\0\le j\le m_1\\\textrm{$n_1-i+n_2$, $m_1-j+m_2$ even}}}
\binom{n_1}i\binom{m_1}j\zeta_1^{i+j+2(n_2+m_2)}\ox\zeta_1^{n_1+m_1-i-j+n_2+m_2+1}
\end{multline*}
for $n_1m_2+m_1n_2$ odd and
$m_*{S_{\mathscr F}}_*(\zeta_1^{n_1}\zeta_2^{n_2}\ox\zeta_1^{m_1}\zeta_2^{m_2})$ for
$n_1m_2+m_1n_2$ even.
The first expression can be rewritten as
\begin{multline*}
(\zeta_1^2\ox\zeta_1)^{n_2+m_2}\sum_k\zeta_1^{k+1}\ox\zeta_1^{n_1+m_1-k}\\
\left(
\sum_{\substack{0\le i\le n_1\\0\le k-i\le m_1\\\textrm{$i$, $k-i$ odd}}}
\binom{n_1}i\binom{m_1}{k-i}
+\sum_{\substack{0\le i\le n_1\\0\le k-i\le m_1\\\textrm{$n_1-i+n_2$,
$m_1-k+i+m_2$ odd}}}\binom{n_1}i\binom{m_1}{k-i}
+\sum_{\substack{0\le i\le n_1\\0\le k+1-i\le m_1\\\textrm{$n_1-i+n_2$,
$m_1-k-1+i+m_2$
even}}}\binom{n_1}i\binom{m_1}{k+1-i}
\right)
\end{multline*}
and in the second case we may write
\begin{multline*}
m_*{S_{\mathscr F}}_*(\zeta_1^{n_1}\zeta_2^{n_2}\ox\zeta_1^{m_1}\zeta_2^{m_2})=(\zeta_1^2\ox\zeta_1)^{n_2+m_2}
\sum_k\zeta_1^{k+1}\ox\zeta_1^{n_1+m_1-k}\\\left(\sum_{\substack{0\le i\le n_1\\0\le k-i\le m_1\\\textrm{$i$, $k-i$ odd}}}
\binom{n_1}i\binom{m_1}{k-i}
+\sum_{\substack{0\le i\le n_1\\0\le k-i\le m_1\\\textrm{$n_1-i+n_2$,
$m_1-k+i+m_2$ odd}}}\binom{n_1}i\binom{m_1}{k-i}
+\sum_{\substack{0\le i\le n_1\\0\le k+1-i\le m_1\\\textrm{$n_1-i+n_2$,
$m_1-k-1+i+m_2$
odd}}}\binom{n_1}i\binom{m_1}{k+1-i}\right).
\end{multline*}
One then shows that these expressions lie in the subalgebra of
${\mathscr F}_*^{\le1}\ox{\mathscr F}_*^{\le1}$ generated by $\zeta_1^2\ox\zeta_1$ and
$\zeta_1\ox1+1\ox\zeta_1$, without involvement of $\zeta_1\ox\zeta_1$. This
means that the image of the difference ${S_{\mathscr F}}_*-K_*(1+T)$ under the
restriction map ${\mathscr F}_*^{\le2}\onto{R_{\mathscr F}}_*^{\le2}$ is zero.
\end{proof}
\endinput
\chapter{Computation of the algebra of secondary cohomology operations and its
dual}\label{A}
We first describe explicit splittings of the pair algebra ${\mathscr R}^{\mathbb F}$ of relations
in the Steenrod algebra and its dual ${\mathscr R}_{\mathbb F}$. Then we describe in terms of
these splittings $s$ the multiplication maps $A^s$ for the Hopf pair algebra
${\mathscr B}^{\mathbb F}$ of secondary cohomology operations and we describe the dual maps
$A_s$ determining the Hopf pair coalgebra ${\mathscr B}_{\mathbb F}$ dual to ${\mathscr B}^{\mathbb F}$. On the
basis of the main result in \cite{Baues} we describe systems of
equations which can be solved inductively by a computer and which yield the
multiplication maps $A^s$ and $A_s$ as a solution. It turns out that $A_s$ is
explicitly given by a formula in which only the values $A_s(\zeta_n)$,
$n\ge1$, have to be computed where $\zeta_n$ is the Milnor generator in the
dual Steenrod algebra ${\mathscr A}_*$.
\section{Computation of ${\mathscr R}^{\mathbb F}$ and ${\mathscr R}_{\mathbb F}$}\label{rcomp}
Let us fix a function $\chi:{\mathbb F}\to{\mathbb G}$ which splits the projection
${\mathbb G}\to{\mathbb F}$, namely, take
\begin{equation}\label{chieq}
\chi(k\!\!\mod p) = k\!\!\mod p^2, \ 0\le k<p.
\end{equation}
We will use $\chi$ to define splittings of
${\mathscr R}^{\mathbb F}=\left({\mathscr R}^{\mathbb F}_1\xto\d{\mathscr R}^{\mathbb F}_0\right)$. Here a \emph{splitting} $s$
of ${\mathscr R}^{\mathbb F}$ is an ${\mathbb F}$-linear map for which the diagram
\begin{equation}\label{s}
\alignbox{
\xymatrix{
&&{\mathscr R}^{\mathbb F}_1\ar[d]^\d\\
R_{\mathscr F}\ar@{ (->}[r]\ar[urr]^s
&{\mathscr F}_0\ar@{=}[r]
&{\mathscr R}^{\mathbb F}_0
}
}
\end{equation}
commutes with $R_{\mathscr F}=\im(\d)=\ker(q_{\mathscr F}:{\mathscr F}_0\to{\mathscr A})$. We only consider the
case $p=2$.
\begin{Definition}[The right equivariant splitting of ${\mathscr R}^{\mathbb F}$]\label{chi}
Using $\chi$, all \emph{Adem relations}
$$
[a,b]:=\Sq^a\Sq^b+\sum_{k=0}^{\left[\frac
a2\right]}\binom{b-k-1}{a-2k}\Sq^{a+b-k}\Sq^k
$$
for $a,b>0$, $a<2b$, can be lifted to elements $[a,b]_\chi\in R_{\mathscr B}$ by
applying $\chi$ to all coefficients, i.~e. by interpreting $[a,b]$ as an
element of ${\mathscr B}$. As shown in \cite{Baues}*{16.5.2}, $R_{\mathscr F}$ is a free right
${\mathscr F}_0$-module with a basis consisting of \emph{preadmissible relations}. For
$p=2$ these are elements of the form
$$
\Sq^{a_1}\cdots\Sq^{a_{k-1}}[a_k,a]\in R_{\mathscr F}
$$
satisfying $a_1\ge2a_2$, ..., $a_{k-2}\ge2a_{k-1}$, $a_{k-1}\ge2a_k$,
$a_k<2a$. Sending such an element to
$$
\Sq^{a_1}\cdots\Sq^{a_{k-1}}[a_k,a]_\chi\in R_{\mathscr B}
$$
then determines a unique right ${\mathscr F}_0$-equivariant splitting $\phi$ in the pair
algebra ${\mathscr R}^{\mathbb F}$; that is, we get a commutative diagram
$$
\xymatrix@!C=5em{
R_{\mathscr F}\ar[r]^-\phi\ar@{ (->}[d]
&**[l]R_{\mathscr B}\ox{\mathbb F}={\mathscr R}^{\mathbb F}_1\ar[d]^\d\\
{\mathscr F}_0\ar@{=}[r]
&{\mathscr R}^{\mathbb F}_0.
}
$$
\end{Definition}
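For instance, for $a=1$, $b=2$ the sum in the Adem relation consists of the
single term $k=0$ with coefficient $\binom{b-1}{a}=\binom11=1$, so that
$$
[1,2]=\Sq^1\Sq^2+\Sq^3\in R_{\mathscr F},
$$
expressing the classical relation $\Sq^1\Sq^2=\Sq^3$ in ${\mathscr A}$; since
$\chi(1)=1$, the corresponding lift is
$[1,2]_\chi=\Sq^1\Sq^2+\Sq^3\in R_{\mathscr B}$.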
For a splitting $s$ of ${\mathscr R}^{\mathbb F}$ the map $s\!\ox\!1\oplus1\!\ox\!s$
induces the map $s_\#$ in the diagram
\begin{equation}\label{rsplit}
\alignbox{
\xymatrix{
{\mathscr R}^{\mathbb F}_1\ar[r]^-\Delta\ar@{->>}[d]^\d
&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1\ar@{->>}[d]^{\d_{\hat\ox}}
&{\mathscr R}^{\mathbb F}_1\!\ox\!{\mathscr F}_0\oplus{\mathscr F}_0\!\ox\!{\mathscr R}^{\mathbb F}_1\ar[l]\\
R_{\mathscr F}\ar@/^/@{-->}[u]^s\ar@{ (->}[d]\ar[r]^{\Delta_R}
&R^{(2)}_{\mathscr F}\ar@/^/@{-->}[u]^{s_\#}\ar@{ (->}[d]
&R_{\mathscr F}\!\ox\!{\mathscr F}_0\oplus{\mathscr F}_0\!\ox\!R_{\mathscr F}\ar[l]\ar@{-->}[u]_{s\!\ox\!1\oplus1\!\ox\!s}\\
{\mathscr F}_0\ar[r]^-\Delta
&{\mathscr F}_0\ox{\mathscr F}_0.
}
}
\end{equation}
Then the difference
$U=s_\#\Delta_R-\Delta s:R_{\mathscr F}\to({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1$ satisfies $\d_{\hat\ox}U=0$ since
$$
\d_{\hat\ox} s_\#\Delta_R=\Delta_R=\Delta_R\d s=\d_{\hat\ox}\Delta s.
$$
Thus $U$ lifts to $\ker\d_{\hat\ox}\cong{\mathscr A}\ox{\mathscr A}$ and gives an ${\mathbb F}$-linear
map
\begin{equation}\label{ueq}
U^s:R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}.
\end{equation}
If we use the splitting $s$ to identify ${\mathscr R}^{\mathbb F}_1$ with the direct sum
${\mathscr A}\oplus R_{\mathscr F}$, then it is clear that knowledge of the map $U^s$ determines the
diagonal ${\mathscr R}^{\mathbb F}_1\to({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1$ completely. Indeed $s_\#$
yields the identification $({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1\cong{\mathscr A}\!\ox\!{\mathscr A}\oplus
R^{(2)}_{\mathscr F}$, and under these identifications
$\Delta:{\mathscr R}^{\mathbb F}_1\to({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1$ corresponds to a map which by
commutativity of \eqref{rsplit} must have the form
\begin{equation}\label{udia}
{\mathscr A}\oplus R_{\mathscr F}
\xto{\left(\begin{smallmatrix}\Delta_{\mathscr A}&U^s\\0&\Delta_R\end{smallmatrix}\right)}
{\mathscr A}\!\ox\!{\mathscr A}\oplus R^{(2)}_{\mathscr F}
\end{equation}
and is thus determined by $U^s$.
One readily checks that the map $U^s$ for $s=\phi$ in \bref{chi} coincides with the map $U$
defined in \cite{Baues}*{16.4.3} in terms of the algebra ${\mathscr B}$.
Given the splitting $s$ and the map $U^s$, the only piece of structure
remaining to determine the ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$-comonoid structure of
${\mathscr R}^{\mathbb F}$ completely is the ${\mathscr F}_0$-${\mathscr F}_0$-bimodule structure on
${\mathscr R}^{\mathbb F}_1\cong{\mathscr A}\oplus R_{\mathscr F}$. Consider for $f\in{\mathscr F}_0$, $r\in R_{\mathscr F}$ the
difference $ s(fr)-f s(r)$. It belongs to the kernel of $\d$ since
$$
\d s(fr)=fr=f\d s(r)=\d(f s(r)).
$$
Thus we obtain the \emph{left multiplication map}
\begin{equation}\label{aeq}
a^s:{\mathscr F}_0\ox R_{\mathscr F}\to{\mathscr A}.
\end{equation}
Similarly we obtain the \emph{right multiplication map}
$$
b^s:R_{\mathscr F}\ox{\mathscr F}_0\to{\mathscr A}
$$
by the difference $s(rf)-s(r)f$.
\begin{Lemma}\label{chisp}
For $s=\phi$ in \bref{chi} the right multiplication map $b^\phi$ is trivial,
that is $\phi$ is right equivariant, and the left multiplication map factors
through $q_{\mathscr F}\ox1$ inducing the map
$$
a^\phi:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}.
$$
\end{Lemma}
\begin{proof}
Right equivariance holds by definition. As for the factorization, the image of $R_{\mathscr F}\ox
R_{\mathscr F}\into{\mathscr F}_0\ox R_{\mathscr F}$ lies in the kernel of $a^\phi:{\mathscr F}_0\ox R_{\mathscr F}\to{\mathscr A}$, since
by right equivariance of $s$ and by the pair algebra property \eqref{paireq}
for ${\mathscr R}^{\mathbb F}$ one has for any $r,r'\in R_{\mathscr F}$
$$
s(rr')=s(r)r'=s(r)\d s(r')=(\d s(r))s(r')=rs(r').
$$
Hence factoring the above map through $({\mathscr F}_0\ox R_{\mathscr F})/(R_{\mathscr F}\ox
R_{\mathscr F})\cong{\mathscr A}\ox R_{\mathscr F}$ we obtain a map
$$
{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}.
$$
\end{proof}
Summarizing the above, we thus have proved
\begin{Proposition}\label{detrel}
Using the splitting $s=\phi$ of ${\mathscr R}^{\mathbb F}$ in \bref{chi}
the comonoid ${\mathscr R}^{\mathbb F}$ in the category ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$ described
in \bref{relcom} is completely determined by the maps
$$
U^\phi:R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}
$$
and
$$
a^\phi:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}
$$
given in \eqref{ueq} and \eqref{aeq} respectively.
\end{Proposition}\qed
We next introduce another splitting $s=\psi$ for which $U^s=0$. For this we use
the fact that ${\mathscr A}_*=\Hom({\mathscr A},{\mathbb F})$ and
\begin{equation}
{\mathscr B}_\#=\Hom({\mathscr B}_0,{\mathbb G})
\end{equation}
with ${\mathscr B}_0=T_{\mathbb G}(E_{\mathscr A})$ are both polynomial algebras, in such a way that
the generators of ${\mathscr A}_*$ are also (part of) the generators of ${\mathscr B}_\#$.
Using $\chi$ in \eqref{chieq} we obtain the function
\begin{equation}\label{psichi}
\psi_\chi:{\mathscr A}_*\to{\mathscr B}_\#
\end{equation}
(which is not ${\mathbb F}$-linear) as follows. Each element $x$ in ${\mathscr A}_*$ is
uniquely an ${\mathbb F}$-linear combination $x=\sum_\alpha n_\alpha\alpha$ where
$\alpha$ runs through the monomials in Milnor generators. Such a monomial
can also be considered as an element in ${\mathscr B}_\#$ by \bref{eslp}, so that we
can define
$$
\psi_\chi(x)=\sum_\alpha\chi(n_\alpha)\alpha\in{\mathscr B}_\#.
$$
\begin{Definition}[The comultiplicative splitting of ${\mathscr R}^{\mathbb F}$]\label{psi}
Consider the following commutative diagram with exact rows and columns
$$
\xymatrix{
{\mathscr A}_*\ar@{ >->}[r]\ar@{-->}[dr]_{\psi_\chi}&{\mathscr F}_*\ar[r]&\Hom(R_{\mathscr B},{\mathbb F})\\
&{\mathscr B}_\#\ar[r]^-q\ar@{->>}[u]&\Hom(R_{\mathscr B},{\mathbb G})\ar[u]\\
{\mathscr A}_*\ar@{ >->}[r]^{{q_{\mathscr F}}_*}&{\mathscr F}_*\ar@{ >->}[u]_j\ar[r]
&\Hom(R_{\mathscr B},{\mathbb F})\ar@{ >->}[u]_{j_R}\ar@{->>}[r]&{\mathscr A}_*\ar@{-->}[dl]^{q\psi_\chi}\\
&{\mathscr R}_{\mathbb F}^0\ar@{=}[u]&{\mathscr R}_{\mathbb F}^1\ar@{=}[u]
}
$$
with the columns induced by the short exact sequence ${\mathbb F}\into{\mathbb G}\onto{\mathbb F}$
and the rows induced by \eqref{drel}. In particular $q$ is induced by the
inclusion $R_{\mathscr B}\subset{\mathscr B}_0$. Now it is clear that $\psi_\chi$ yields a map
$q\psi_\chi$ which lifts to $\Hom(R_{\mathscr B},{\mathbb F})$ so that we get the map
$$
q\psi_\chi:{\mathscr A}_*\to{\mathscr R}_{\mathbb F}^1
$$
which splits the projection ${\mathscr R}_{\mathbb F}^1\onto{\mathscr A}_*$. Moreover $q\psi_\chi$ is
${\mathbb F}$-linear since for all $x,y\in{\mathscr A}_*$ the elements
$\psi_\chi(x)+\psi_\chi(y)-\psi_\chi(x+y)\in{\mathscr B}_\#$ are in the image of the
inclusion $j{q_{\mathscr F}}_*:{\mathscr A}_*\into{\mathscr B}_\#$ and thus go to zero under $q$.
The dual of $q\psi_\chi$ is thus a retraction $(q\psi_\chi)^*$ in the
short exact sequence
$$
\xymatrix{
R_{\mathscr F}\ar@{-->}[dr]_{(q\psi_\chi)^*_\perp}&R_{\mathscr B}\ox{\mathbb F}\ar@{=}[d]\ar@{->>}[l]_\pi&{\mathscr A}\ar@{
>->}[l]_-\iota\\
&{\mathscr R}^{\mathbb F}_1\ar@{-->}[ur]_{(q\psi_\chi)^*}
}
$$
which induces the splitting $\psi=(q\psi_\chi)^*_\perp$ of ${\mathscr R}^{\mathbb F}$
determined by
$$
\psi(\pi(x))=x-\iota((q\psi_\chi)^*(x)).
$$
\end{Definition}
\begin{Lemma}
For the splitting $s=\psi$ of ${\mathscr R}^{\mathbb F}$ we have $U^\psi=0$.
\end{Lemma}
\begin{proof}
We must show that the following diagram commutes:
$$
\xymatrix{
R_{\mathscr F}\ar[r]^{\Delta_R}\ar[d]_\psi&R_{\mathscr F}^{(2)}\ar[d]^{\psi_\#}\\
R_{\mathscr B}\ox{\mathbb F}\ar[r]^-\Delta&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1.
}
$$
Obviously this is equivalent to commutativity of the dual diagram
$$
\xymatrix{
({\mathscr R}_{\mathbb F}\check\ox{\mathscr R}_{\mathbb F})^1\ar[r]^-{\Delta_*}\ar[d]_{(\psi_\#)_*}&\Hom(R_{\mathscr B},{\mathbb F})\ar[d]^{\psi_*}\\
{R_{\mathscr F}^{(2)}}_*\ar[r]^{{\Delta_R}_*}&{R_{\mathscr F}}_*
}
$$
which in turn is equivalent to commutativity of
\begin{equation}\label{psicom}
\alignbox{
\xymatrix{
{\mathscr A}_*\ox{\mathscr A}_*\ar[d]_{(\psi_\#)_*^\perp}\ar[r]^{\delta_*}&{\mathscr A}_*\ar[d]^{q\psi_\chi}\\
({\mathscr R}_{\mathbb F}\check\ox{\mathscr R}_{\mathbb F})^1\ar[r]^-{\Delta_*}&\Hom(R_{\mathscr B},{\mathbb F}).
}
}
\end{equation}
On the other hand, the left hand vertical map in the latter diagram can be
included into another commutative diagram
$$
\xymatrix{
{\mathscr F}_*\!\ox\!{\mathscr A}_*\oplus{\mathscr A}_*\!\ox\!{\mathscr F}_*
\ar[d]_{1\!\ox\!q\psi_\chi\oplus q\psi_\chi\!\ox\!1}
&{\mathscr A}_*\ox{\mathscr A}_*
\ar@{ )->}[l]_-{\binom{i\ox1}{1\ox i}}\ar[d]^{(\psi_\#)_*^\perp}\\
{\mathscr F}_*\!\ox\!{\mathscr R}_{\mathbb F}^1\oplus{\mathscr R}_{\mathbb F}^1\!\ox\!{\mathscr F}_*&({\mathscr R}_{\mathbb F}\check\ox{\mathscr R}_{\mathbb F})^1\ar@{ )->}[l]
}
$$
It follows that on elements, commutativity of \eqref{psicom} means that the
equality
$$
q\psi_\chi(xy)=i(x)q\psi_\chi(y)+q\psi_\chi(x)i(y)
$$
holds for any $x,y\in{\mathscr A}_*$. By linearity, it is clearly enough to prove
this when $x$ and $y$ are monomials in Milnor generators.
For this observe that for any $x\in{\mathscr A}_*=\Hom({\mathscr A},{\mathbb F})$, the element
$q\psi_\chi(x)\in\Hom(R_{\mathscr B},{\mathbb F})$ is the unique ${\mathbb F}$-linear map making the
diagram
$$
\xymatrix{
R_{\mathscr B}\ar@{ >->}[r]\ar@{-->}[d]_{q\psi_\chi(x)}
&{\mathscr B}_0\ar[d]_{\psi_\chi(x)}\ar@{->>}[r]
&{\mathscr A}\ar[d]^x\\
{\mathbb F}\ar@{ >->}[r]
&{\mathbb G}\ar@{->>}[r]
&{\mathbb F}
}
$$
commute. This uniqueness implies the equality we need in view of the
following commutative diagram with exact columns:
$$
\xymatrix@!C=5em{
R_{\mathscr B}\ar[r]\ar@{ >->}[d]
&R_{\mathscr B}^{(2)}\ar@{ >->}[d]\ar@{-->}[r]
&{\mathbb F}\ox{\mathbb F}\ar[r]^\cong\ar@{ >->}[d]
&{\mathbb F}\ar@{ >->}[d]\\
{\mathscr B}_0\ar[r]^-\Delta\ar@{->>}[d]
&{\mathscr B}_0\ox{\mathscr B}_0\ar@{->>}[d]\ar[r]^{\psi_\chi(x)\ox\psi_\chi(y)}
&{\mathbb G}\ox{\mathbb G}\ar@{->>}[d]\ar[r]^\cong
&{\mathbb G}\ar@{->>}[d]\\
{\mathscr A}\ar[r]^-\delta\ar@/_3ex/[rrr]_{xy}
&{\mathscr A}\ox{\mathscr A}\ar[r]^{x\ox y}
&{\mathbb F}\ox{\mathbb F}\ar[r]^\cong
&{\mathbb F},
}
$$
since when $x$ and $y$ are monomials in Milnor generators, one has
$\psi_\chi(xy)=\psi_\chi(x)\psi_\chi(y)$.
\end{proof}
Therefore we call $\psi$ the comultiplicative splitting of ${\mathscr R}^{\mathbb F}$. We now
want to compute the left and right multiplication maps $a^\psi$ and
$b^\psi$ defined in \eqref{aeq}. The dual maps $a_\psi=(a^\psi)_*$ and
$b_\psi=(b^\psi)_*$ can be described by the diagrams
\begin{equation}\label{ml}
\alignbox{
\xymatrix{
&(R_{\mathscr B})_*\ar@{->>}[r]\ar[d]_{m_*^\l}&{\mathscr A}_*\ar@/_1em/@{-->}[l]_{q\psi_\chi}\ar[d]^{m_*}\\
{\mathscr F}_*\ox(R_{\mathscr F})_*\ar@{
(->}[r]&{\mathscr F}_*\ox(R_{\mathscr B})_*\ar@{->>}[r]
&{\mathscr A}_*\ox{\mathscr A}_*\ar@/^1em/@{-->}[l]^{i\ox q\psi_\chi}
}
}
\end{equation}
and
\begin{equation}\label{mr}
\alignbox{
\xymatrix{
&(R_{\mathscr B})_*\ar@{->>}[r]\ar[d]_{m_*^\r}&{\mathscr A}_*\ar@/_1em/@{-->}[l]_{q\psi_\chi}\ar[d]^{m_*}\\
(R_{\mathscr F})_*\ox{\mathscr F}_*\ar@{
(->}[r]&(R_{\mathscr B})_*\ox{\mathscr F}_*\ar@{->>}[r]
&{\mathscr A}_*\ox{\mathscr A}_*.\ar@/^1em/@{-->}[l]^{q\psi_\chi\ox i}
}
}
\end{equation}
Here $m_*$ is dual to the multiplication in ${\mathscr A}$ and $m_*^\l$ and $m_*^\r$
are induced by the ${\mathscr F}_0$-${\mathscr F}_0$-bimodule structure of $R_{\mathscr B}\ox{\mathbb F}$. One
readily checks
\begin{align*}
a_\psi&=m_*^\l q\psi_\chi-(i\ox q\psi_\chi)m_*\\
b_\psi&=m_*^\r q\psi_\chi-(q\psi_\chi\ox i)m_*.
\end{align*}
We now consider the diagram
$$
\xymatrix{
&{\mathscr B}_\#\ar[d]_{m_*^{\mathbb G}}&{\mathscr A}_*\ar[l]_{\psi_\chi}\ar[d]^{m_*}\\
{\mathscr F}_*\ox{\mathscr F}_*\ar@{ (->}[r]&{\mathscr B}_\#\ox{\mathscr B}_\#&{\mathscr A}_*\ox{\mathscr A}_*\ar[l]_-{\psi_\chi^\ox}
}
$$
Here $\psi_\chi^\ox$ is defined similarly as $\psi_\chi$ in \eqref{psichi}
by the formula
$$
\psi_\chi^\ox\left(\sum_{\alpha,\beta}n_{\alpha\beta}\alpha\ox\beta\right)=
\sum_{\alpha,\beta}\chi(n_{\alpha\beta})\alpha\ox\beta
$$
where $\alpha$, $\beta$ run through the monomials in Milnor generators.
Moreover $m_*^{\mathbb G}$ is the dual of the multiplication map $m^{\mathbb G}$ of
${\mathscr B}_0=T_{\mathbb G}(E_{\mathscr A})$.
\begin{Lemma}\label{mulstar}
The difference $m^{\mathbb G}_*\psi_\chi-\psi_\chi^\ox m_*$ lifts to an
${\mathbb F}$-linear map $\nabla_\chi:{\mathscr A}_*\to{\mathscr F}_*\ox{\mathscr F}_*$ such that one has
\begin{align*}
a_\psi&=(1\ox\pi)\nabla_\chi\\
b_\psi&=(\pi\ox1)\nabla_\chi.
\end{align*}
Here $\pi:{\mathscr F}_*\onto{R_{\mathscr F}}_*$ is induced by the inclusion $R_{\mathscr F}\subset{\mathscr F}_0$.
\end{Lemma}
\begin{proof}
We will only prove the first equality; the proof for the second one is
similar.
The following diagram
$$
\xymatrix@!C=3em{
&{R_{\mathscr B}}_*\ar@{ >->}[rrd]_{j_R}\ar[ddddd]_{m^\l_*}&&&&&&{\mathscr A}_*\ar[llllll]_{q\psi_\chi}\ar@{=}[ddl]\ar[ddddd]^{m_*}\\
&&&{R_{\mathscr B}}_\#\ar[ddd]_{m^\l_\#}\\
&&&&&{\mathscr B}_\#\ar[d]_{m_*^{\mathbb G}}\ar[ull]^\pi&{\mathscr A}_*\ar[l]_{\psi_\chi}\ar[d]^{m_*}\\
&&&&&{\mathscr B}_\#\ox{\mathscr B}_\#\ar[dll]_{1\ox\pi}&{\mathscr A}_*\ox{\mathscr A}_*\ar[l]_-{\psi_\chi^\ox}\\
&&&{\mathscr B}_\#\ox{R_{\mathscr B}}_\#&{\mathscr F}_*\ox{\mathscr F}_*\ar[lllldd]|\hole^>(.75){1\ox\pi}\ar@{ (->}[ur]\\
&{\mathscr F}_*\ox{R_{\mathscr B}}_*\ar@{ >->}[urr]^{j\ox j_R}&&&&&&{\mathscr A}_*\ox{\mathscr A}_*\ar[llllll]^{i\ox q\psi_\chi}\ar@{=}[uul]\\
{\mathscr F}_*\ox{R_{\mathscr F}}_*\ar@{ (->}[ur]
}
$$
commutes except for the innermost square, whose deviation from
commutativity is $\nabla_\chi$ and lies in the image of
${\mathscr F}_*\ox{\mathscr F}_*\incl{\mathscr B}_\#\ox{\mathscr B}_\#$, and the outermost square, whose deviation
from commutativity is $a_\psi$ and lies in the image of
${\mathscr F}_*\ox{R_{\mathscr F}}_*\incl{\mathscr F}_*\ox{R_{\mathscr B}}_*$. It follows that
$(1\ox\pi)\nabla_\chi$ and $a_\psi$ have the same image under $j\ox
j_R$, and since the latter map is injective we are done.
\end{proof}
Let us describe the map $\nabla_\chi$ more explicitly.
\begin{Lemma}
The map $\nabla_\chi$ factors as follows
$$
{\mathscr A}_*\xto{\bar\nabla_\chi}{\mathscr F}_*\ox{\mathscr A}_*\xto{1\ox i}{\mathscr F}_*\ox{\mathscr F}_*.
$$
\end{Lemma}
\begin{proof}
Let ${\mathscr A}_\#\subset{\mathscr B}_\#$ be the subring generated by the elements $M_1$,
$M_{21}$, $M_{421}$, $M_{8421}$, .... It is then clear that the image of
$\psi_\chi$ lies in ${\mathscr A}_\#$ and the reduction ${\mathscr B}_\#\onto{\mathscr F}_*$ carries
${\mathscr A}_\#$ to ${\mathscr A}_*$. Moreover the image of $\psi_\chi^\ox m_*$ obviously lies in
${\mathscr A}_\#$, hence it only remains to show the inclusion
$$
m_*^{\mathbb G}({\mathscr A}_\#)\subset{\mathscr B}_\#\ox{\mathscr A}_\#.
$$
Since $m_*^{\mathbb G}$ is a ring homomorphism, it suffices to check this on the
generators $M_1$, $M_{21}$, $M_{421}$, $M_{8421}$, .... But this is clear from
\eqref{dizeta}.
\end{proof}
\begin{Corollary}\label{calcab}
For the comultiplicative splitting $\psi$ one has
$$
a_\psi=0.
$$
Moreover the map $b_\psi$ factors as follows
$$
{\mathscr A}_*\xto{\bar b_\psi}{R_{\mathscr F}}_*\ox{\mathscr A}_*\xto{1\ox i}{R_{\mathscr F}}_*\ox{\mathscr F}_*.
$$
\end{Corollary}
\begin{proof}
The first statement follows as by definition $\pi({\mathscr A}_*)=0$; the second is
obvious.
\end{proof}
Using the splitting $\psi$ we get the following analogue of \bref{detrel}.
\begin{Proposition}\label{psicomon}
The comonoid ${\mathscr R}^{\mathbb F}$ in the category ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$ described in
\bref{relcom} is completely determined by the multiplication map
$$
\bar b^\psi: R_{\mathscr F}\ox{\mathscr A}\to{\mathscr A}
$$
dual to the map $\bar b_\psi$ from \ref{calcab}. In fact, the identification
$$
{\mathscr R}^{\mathbb F}_1={\mathscr A}\oplus R_{\mathscr F}
$$
induced by the splitting $s=\psi$ identifies the diagonal of ${\mathscr R}^{\mathbb F}$ with
$\Delta_{\mathscr A}\oplus\Delta_R$ (see \eqref{ueq}, \eqref{udia}), and the bimodule
structure of ${\mathscr R}_1^{\mathbb F}$ with
\begin{align*}
f(\alpha,r)&=(f\alpha,fr)\\
(\alpha,r)f&=(\alpha\qf f-\bar b^\psi(r,\qf f),rf)
\end{align*}
for $f\in{\mathscr F}_0$, $r\in R_{\mathscr F}$, $\alpha\in{\mathscr A}$.
\end{Proposition}
\section{Computation of the Hopf pair algebra ${\mathscr B}^{\mathbb F}$}\label{bcomp}
The Hopf pair algebra ${\mathscr V}={\mathscr B}^{\mathbb F}$ in \bref{unique}, given by the algebra of
secondary cohomology operations, satisfies the following crucial condition
which we deduce from \cite{Baues}*{16.1.5}.
\begin{Theorem}\label{splitr}
There exists a right ${\mathscr F}_0$-equivariant splitting
$$
u:{\mathscr R}^{\mathbb F}_1=R_{\mathscr B}\ox{\mathbb F}\to{\mathscr B}_1\ox{\mathbb F}={\mathscr B}^{\mathbb F}_1
$$
of the projection ${\mathscr B}^{\mathbb F}_1\to{\mathscr R}^{\mathbb F}_1$, see \eqref{hpad}, such that the
following holds. The diagram
$$
\xymatrix{
{\mathscr A}\oplus_\k\Sigma{\mathscr A}\ar@/_/@{-->}[d]_q\ar@{ >->}[r]
&{\mathscr B}^{\mathbb F}_1\ar@/_/@{-->}[d]_q\ar[r]
&{\mathscr B}^{\mathbb F}_0\ar@{=}[d]\ar@{->>}[r]
&{\mathscr A}\ar@{=}[d]\\
{\mathscr A}\ar[u]_{\bar u}\ar@{ >->}[r]
&{\mathscr R}^{\mathbb F}_1\ar[u]_u\ar[r]
&{\mathscr R}^{\mathbb F}_0\ar@{->>}[r]
&{\mathscr A}
}
$$
commutes, where $\bar u$ is the inclusion. Moreover in the diagram of
diagonals, see \eqref{diacomp},
$$
\xymatrix{
{\mathscr B}^{\mathbb F}_1\ar[r]^-{\Delta_{\mathscr B}}
&({\mathscr B}^{\mathbb F}\hat\ox{\mathscr B}^{\mathbb F})_1
&\Sigma{\mathscr A}\ox{\mathscr A}\ar@{ )->}[l]\\
{\mathscr R}^{\mathbb F}_1\ar[r]^-{\Delta_R}\ar[u]^u
&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1\ar[u]^{u\hat\ox u}
}
$$
the difference $\Delta_{\mathscr B} u-(u\hat\ox u)\Delta_R$ lifts to $\Sigma{\mathscr A}\ox{\mathscr A}$
and satisfies
$$
\xi\bar\pi=\Delta_{\mathscr B} u-(u\hat\ox
u)\Delta_R:\xymatrix@1{{\mathscr R}^{\mathbb F}_1\ar@{->>}[r]^{\bar\pi}&\bar
R\ar[r]^-\xi&\Sigma{\mathscr A}\ox{\mathscr A}}
$$
where $\xi$ is dual to $\xi_*$ in \bref{cosxi}. Here $\bar\pi$ is the
projection ${\mathscr R}^{\mathbb F}_1\onto R_{\mathscr F}\onto\bar R$. The cocycle $\xi$ is trivial if
$p$ is odd.
\end{Theorem}
\begin{Definition}\label{multop}
Using a splitting $u$ of ${\mathscr B}^{\mathbb F}$ as in \bref{splitr} we define a
\emph{multiplication operator}
$$
A:{\mathscr A}\ox R_{\mathscr B}\to\Sigma{\mathscr A}
$$
by the equation
$$
A(\bar\alpha\ox x)=u(\alpha x)-\alpha u(x)
$$
for $\alpha\in{\mathscr F}_0$, $x\in R_{\mathscr B}$. Thus $-A$ is a multiplication map as
studied in \cite{Baues}*{16.3.1}. Fixing a splitting $s$ of ${\mathscr R}^{\mathbb F}$ as in
\eqref{s} we define an \emph{$s$-multiplication operator} $A^s$ to be the
composite
$$
A^s:\xymatrix@1{{\mathscr A}\ox R_{\mathscr F}\ar[r]^-{1\ox s}&{\mathscr A}\ox R_{\mathscr B}\ar[r]^-A&\Sigma{\mathscr A}}.
$$
Such operators have the properties of the following $s$-multiplication
maps.
\end{Definition}
\begin{Definition}\label{mulmap}
Let $s$ be a splitting of ${\mathscr R}^{\mathbb F}$ as in \eqref{s} and let $U^s$, $a^s$,
$b^s$ be defined as in section \ref{rcomp}. An \emph{$s$-multiplication
map}
$$
A^s:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}
$$
is an ${\mathbb F}$-linear map of degree $-1$ satisfying the following conditions
with $\alpha,\alpha',\beta,\beta'\in{\mathscr F}_0$, $x,y\in R_{\mathscr F}$:
\begin{enumerate}
\item $A^s(\alpha,x\beta)=A^s(\alpha,x)\beta+\k(\alpha)b^s(x,\beta)$
\item
$A^s(\alpha\alpha',x)=A^s(\alpha,\alpha'x)+\k(\alpha)a^s(\alpha',x)+(-1)^{\deg(\alpha)}\alpha
A^s(\alpha',x)$
\item $\delta A^s(\alpha,x)=A^s_\ox(\alpha\ox\Delta
x)+L(\alpha,x)+\nabla_\xi(\alpha,x)+\delta\k(\alpha)U^s(x)$.
\end{enumerate}
Here $A^s_\ox:{\mathscr A}\ox R_{\mathscr F}^{(2)}\to{\mathscr A}\ox{\mathscr A}$ is defined by the equalities
\begin{align*}
A^s_\ox(\alpha\ox x\ox\beta')
&=\sum(-1)^{\deg(\alpha_\r)\deg(x)}A^s(\alpha_\l,x)\ox\alpha_\r\beta',\\
A^s_\ox(\alpha\ox\beta\ox y)
&=\sum(-1)^{\deg(\alpha_\r)\deg(\beta)+\deg(\alpha_\l)+\deg(\beta)}\alpha_\l\beta\ox
A^s(\alpha_\r,y),
\end{align*}
where as always
$$
\delta(\alpha)=\sum\alpha_\l\ox\alpha_\r\in{\mathscr A}\ox{\mathscr A}.
$$
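For orientation, recall the standard Cartan formula for the diagonal of the Steenrod algebra (a well-known fact, stated here only to fix the meaning of the $\alpha_\l\ox\alpha_\r$ notation):
$$
\delta(\Sq^n)=\sum_{i+j=n}\Sq^i\ox\Sq^j,
$$
so for $\alpha=\Sq^n$ the terms $\alpha_\l\ox\alpha_\r$ run through the pairs $\Sq^i\ox\Sq^{n-i}$ with $0\le i\le n$.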
Two $s$-multiplication maps $A^s$ and ${A^s}'$ are \emph{equivalent} if there exists an
${\mathbb F}$-linear map
$$
\gamma:R_{\mathscr F}\to{\mathscr A}
$$
of degree $-1$ such that the equality
$$
A^s(\alpha,x)-{A^s}'(\alpha,x)
=\gamma(\alpha x)-(-1)^{\deg(\alpha)}\alpha\gamma(x)
$$
holds for any $\alpha\in{\mathscr A}$, $x\in R_{\mathscr F}$ and moreover $\gamma$ is right
${\mathscr F}_0$-equivariant and the diagram
$$
\xymatrix{
{\mathscr A}\ar[r]^-\delta&{\mathscr A}\ox{\mathscr A}\\
R_{\mathscr F}\ar[u]_\gamma\ar[r]_\Delta&R_{\mathscr F}^{(2)}\ar[u]_{\gamma_\ox}
}
$$
commutes, with $\gamma_\ox$ given by
\begin{align*}
\gamma_\ox(x\ox\beta)&=\gamma(x)\ox\beta,\\
\gamma_\ox(\alpha\ox y)&=(-1)^{\deg(\alpha)}\alpha\ox\gamma(y)
\end{align*}
for $\alpha,\beta\in{\mathscr F}_0$, $x,y\in R_{\mathscr F}$.
\end{Definition}
\begin{Theorem}\label{exmul}
There exists an $s$-multiplication map $A^s$ and any two such
$s$-multiplication maps are equivalent. Moreover each $s$-multiplication
map is an $s$-multiplication operator as in \bref{multop} and vice versa.
\end{Theorem}
\begin{proof}
We apply \cite{Baues}*{16.3.3}. In fact, from $A^s$ we obtain the
multiplication operator
$$
A:{\mathscr A}\ox R_{\mathscr B}={\mathscr A}\!\ox\!{\mathscr A}\oplus{\mathscr A}\!\ox\!R_{\mathscr F}\to\Sigma{\mathscr A}
$$
with
\begin{equation}\label{exmulf}
A(\alpha\ox x)=A^s(\alpha\ox\bar x)+\k(\alpha)\xi
\end{equation}
where $(\bar x,\xi)\in R_{\mathscr F}\oplus{\mathscr A}=R_{\mathscr B}\ox{\mathbb F}$ corresponds to $x$, that is $s(\bar x)+\iota(\xi)=x$ for $\iota:{\mathscr A}\subset R_{\mathscr B}\ox{\mathbb F}$.
\end{proof}
\begin{Remark}
For the splitting $s=\phi$ of ${\mathscr R}^{\mathbb F}$ in \bref{chi} the maps
$$
A_{n,m}:{\mathscr A}\to{\mathscr A}
$$
are defined by $A_{n,m}(\alpha)=A^\phi(\alpha\ox[n,m])$, with $[n,m]$
the Adem relations in $R_{\mathscr F}$. Using formul\ae\ in \bref{mulmap} the maps
$A_{n,m}$ determine the $\phi$-multiplication map $A^\phi$ completely. The
maps $A_{n,m}$ coincide with the corresponding maps $A_{n,m}$ in
\cite{Baues}*{16.4.4}. In \cite{Baues}*{16.6} an algorithm for
determination of $A_{n,m}$ is described, leading to a list of values of
$A_{n,m}$ on the elements of the admissible basis of ${\mathscr A}$. The algorithm
for the computation of $A_{n,m}$ can be deduced from theorem \bref{exmul}
above.
\end{Remark}
\begin{Remark}
Triple Massey products $\brk{\alpha,\beta,\gamma}$ with
$\alpha,\beta,\gamma\in{\mathscr A}$, $\alpha\beta=0=\beta\gamma$, as in \bref{tmp}
can be computed by $A^s$ as follows. Let $\bar\beta\bar\gamma\in R_{\mathscr B}$ be
given as in \bref{tmp}. Then $\bar\beta\bar\gamma\ox1\in R_{\mathscr B}\ox{\mathbb F}$
satisfies
$$
\bar\beta\bar\gamma\ox1=s(\bar x)+\iota(\xi)
$$
with $\bar x\in R_{\mathscr F}$, $\xi\in{\mathscr A}$ and $\brk{\alpha,\beta,\gamma}$ satisfies
$$
A^s(\alpha\ox\bar x)+\k(\alpha)\xi\in\brk{\alpha,\beta,\gamma}.
$$
Compare \cite{Baues}*{16.3.4}.
\end{Remark}
Now it is clear how to introduce via $a^s$, $b^s$, $U^s$, $\xi$, $\k$, and $A^s$ a
Hopf pair algebra structure on
\begin{equation}\label{hopfin}
\alignbox{
\xymatrix{
{\mathscr A}\oplus\Sigma{\mathscr A}\oplus R_{\mathscr F}\ar[r]^-q\ar@{=}[d]&{\mathscr A}\oplus R_{\mathscr F}\ar@{=}[d]\\
{\mathscr B}^{\mathbb F}_1&{\mathscr R}^{\mathbb F}_1
}
}
\end{equation}
which is isomorphic to ${\mathscr B}^{\mathbb F}$, compare \bref{detrel}.
In the next section we describe an algorithm for the computation of a
$\psi$-multiplication map, where $\psi$ is the comultiplicative splitting
of ${\mathscr R}^{\mathbb F}$ in \bref{psi}. For this we compute the dual map $A_\psi$ of
$A^\psi$.
\section{Computation of the Hopf pair coalgebra ${\mathscr B}_{\mathbb F}$}\label{cobcomp}
For the comultiplicative splitting $s=\psi$ of ${\mathscr R}^{\mathbb F}$ in \bref{psi} we
introduce the following $\psi$-comultiplication maps which are dual to the
$\psi$-multiplication maps in \bref{mulmap}.
\begin{Definition}\label{apsi}
Let $\bar b_\psi$ be given as in \ref{calcab}. A
\emph{$\psi$-comultiplication map}
$$
A_\psi:{\mathscr A}_*\to{\mathscr A}_*\ox{R_{\mathscr F}}_*
$$
is an ${\mathbb F}$-linear map of degree $+1$ satisfying the following conditions.
\begin{enumerate}
\item\label{mreqs}
The maps in the diagram
$$
\xymatrix{
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[d]_{1\ox m^\r_*}&{\mathscr A}_*\ar[d]^{m_*}\ar[l]_-{A_\psi}\\
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr F}_*&{\mathscr A}_*\ox{\mathscr A}_*\ar[l]^-{A_\psi\ox i}
}
$$
satisfy
$$
(1\ox m^\r_*)A_\psi=(A_\psi\ox i)m_*+(\k_*\ox\bar b_\psi)m_*.
$$
Here $\k_*$ is computed in \bref{dkappa} and $m^\r_*$ is defined in
\eqref{mr}.
\item\label{mleqs}
The maps in the diagram
$$
\xymatrix{
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[d]^{1\ox m^\l_*}&&{\mathscr A}_*\ar[d]^{A_\psi}\ar[ll]_{A_\psi}\\
{\mathscr A}_*\ox{\mathscr F}_*\ox{R_{\mathscr F}}_*&{\mathscr A}_*\ox{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[l]^{1\ox i\ox1}
&{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[l]^-{m_*\ox1}
}
$$
satisfy
$$
(1\ox m^\l_*)A_\psi=(1\ox i\ox1)(m_*\ox1)A_\psi-(\tau\ox i\ox1)(1\ox A_\psi)m_*.
$$
Here $m^\l_*$ is as in \eqref{ml}, and $\tau:{\mathscr A}_*\to{\mathscr A}_*$ is given
by $\tau(\alpha)=(-1)^{\deg(\alpha)}\alpha$.
\item\label{mult}
For $x,y\in{\mathscr A}_*$ the product $xy$ in the algebra ${\mathscr A}_*$ satisfies the
formula
$$
A_\psi(xy)=A_\psi(x)m_*(y)+(-1)^{\deg(x)}m_*(x)A_\psi(y)+L_*(x,y)+{\nabla_\xi}_*(x,y).
$$
Here $L_*$ and ${\nabla_\xi}_*$ are given in \ref{L} and
\ref{nablaelts} respectively, with $L_*={\nabla_\xi}_*=0$ for $p$ odd.
\end{enumerate}
Two $\psi$-comultiplication maps $A_\psi$, $A_\psi'$ are
\emph{equivalent} if there is a derivation
$$
\gamma_*:{\mathscr A}_*\to{R_{\mathscr F}}_*
$$
of degree $+1$ satisfying the equality
$$
A_\psi-A_\psi'=m^\l_*\gamma_*-(\tau\ox\gamma_*)m_*.
$$
\end{Definition}
As a dual statement to \bref{exmul} we get
\begin{Theorem}
There exists a $\psi$-comultiplication map $A_\psi$ and any two such
$\psi$-comultiplication maps are equivalent. Moreover each
$\psi$-comultiplication map $A_\psi$ is the dual of a $\psi$-multiplication
map $A^\psi$ in \bref{exmul} with $A_\psi={A^\psi}_*$.
\end{Theorem}\qed
Now dually to \eqref{hopfin}, it is clear how to introduce via
$a_\psi$, $b_\psi$, $\xi_*$, $\k_*$, and $A_\psi$ a Hopf pair
coalgebra structure on
$$
\xymatrix{
{\mathscr A}_*\oplus\Sigma{\mathscr A}_*\oplus{R_{\mathscr F}}_*\ar@{=}[d]&{\mathscr A}_*\oplus{R_{\mathscr F}}_*\ar[l]_-i\ar@{=}[d]\\
{\mathscr B}_{\mathbb F}^1&{\mathscr R}_{\mathbb F}^1
}
$$
which is isomorphic to ${\mathscr B}_{\mathbb F}$, compare \bref{psicomon}.
We now embark on the simplification and solution of the equations
\ref{apsi}\eqref{mreqs} and \ref{apsi}\eqref{mleqs}. To begin with, note that
the equations \ref{apsi}\eqref{mreqs} imply that the image of the composite
map
$$
{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox{R_{\mathscr F}}_*\xto{1\ox m^\r_*}{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr F}_*
$$
actually lies in
$$
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr A}_*\subset{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr F}_*;
$$
similarly \ref{apsi}\eqref{mleqs} implies that the image of
$$
{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox{R_{\mathscr F}}_*\xto{1\ox m^\l_*}{\mathscr A}_*\ox{\mathscr F}_*\ox{R_{\mathscr F}}_*
$$
lies in
$$
{\mathscr A}_*\ox{\mathscr A}_*\ox{R_{\mathscr F}}_*\subset{\mathscr A}_*\ox{\mathscr F}_*\ox{R_{\mathscr F}}_*.
$$
\begin{Lemma}
The following conditions on an element $x\in{R_{\mathscr F}}_*=\Hom(R_{\mathscr F},{\mathbb F})$ are
equivalent:
\begin{itemize}
\item $m^\l_*(x)\in{\mathscr A}_*\ox{R_{\mathscr F}}_*\subset{\mathscr F}_*\ox{R_{\mathscr F}}_*$;
\item $m^\r_*(x)\in{R_{\mathscr F}}_*\ox{\mathscr A}_*\subset{R_{\mathscr F}}_*\ox{\mathscr F}_*$;
\item $x\in\bar R_*\subset{R_{\mathscr F}}_*$.
\end{itemize}
\end{Lemma}
\begin{proof}
Recall that $\bar R=R_{\mathscr F}/{R_{\mathscr F}}^2$, i.~e. $\bar R_*$ is the space of linear forms
on $R_{\mathscr F}$ which vanish on ${R_{\mathscr F}}^2$. Then the first condition means that
$x:R_{\mathscr F}\to{\mathbb F}$ has the property that the composite
$$
{\mathscr F}_0\ox R_{\mathscr F}\xto{m^\l} R_{\mathscr F}\xto x{\mathbb F}
$$
vanishes on $R_{\mathscr F}\ox R_{\mathscr F}\subset{\mathscr F}_0\ox R_{\mathscr F}$; but the image of $R_{\mathscr F}\ox R_{\mathscr F}$
under $m^\l$ is precisely ${R_{\mathscr F}}^2$. Similarly for the second condition.
\end{proof}
We thus conclude that the image of $A_\psi$ lies in ${\mathscr A}_*\ox\bar R_*$.
Next note that the condition \ref{apsi}\eqref{mult} implies
\begin{equation}\label{apsisq}
A_\psi(x^2)=L_*(x,x)+\nabla_{\xi_*}(x,x)
\end{equation}
for any $x\in{\mathscr A}_*$. Moreover the latter formula also implies
\begin{Proposition}\label{4=0}
For any $x\in{\mathscr A}_*$ one has
$$
A_\psi(x^4)=0.
$$
\end{Proposition}
\begin{proof}
Since the squaring map is an algebra endomorphism, by \ref{bider} one has
$$
L_*(x,y^2)=\sum\zeta_1x_\l y_{\l'}^2\ox\tilde L_*(x_\r,y_{\r'}^2),
$$
with
$$
m_*(x)=\sum x_\l\ox x_\r,\ \ m_*(y)=\sum y_{\l'}\ox y_{\r'}.
$$
But $\tilde L_*$ vanishes on squares since it is a biderivation, so $L_*$ also
vanishes on squares. Moreover by \eqref{nablaelts}
$$
\nabla_{\xi_*}(x^2,y^2)=\sum\xi_*(x^2,y^2)_{\mathscr A}\ox\xi_*(x^2,y^2)_R-\sum
x_\l^2y_{\l'}^2\ox\xi_*(x_\r^2,y_{\r'}^2);
$$
this is zero since $\xi_*(x^2,y^2)=0$ for any $x$ and $y$ by \eqref{xifromS}.
\end{proof}
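The explicit expansions below repeatedly use Milnor's formula for the diagonal of ${\mathscr A}_*$ at $p=2$ (a standard fact, recalled here for the reader's convenience):
$$
m_*(\zeta_n)=\sum_{i=0}^n\zeta_{n-i}^{2^i}\ox\zeta_i,
$$
with the convention $\zeta_0=1$; for example $m_*(\zeta_2)=\zeta_2\ox1+\zeta_1^2\ox\zeta_1+1\ox\zeta_2$.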
Taking the above into account, and identifying the image of $i:{\mathscr A}_*\into{\mathscr F}_*$
with ${\mathscr A}_*$, \ref{apsi}\eqref{mreqs} can be rewritten as follows:
$$
(1\ox m^\r_*)A_\psi(\zeta_n)
=A_\psi(\zeta_n)\ox1+\left(L_*(\zeta_{n-1},\zeta_{n-1})+\nabla_{\xi_*}(\zeta_{n-1},\zeta_{n-1})\right)\ox\zeta_1
+\sum_{i=0}^n\zeta_1\zeta_{n-i}^{2^i}\ox\bar b_\psi(\zeta_i),
$$
or
$$
(1\ox\tilde m^\r_*)A_\psi(\zeta_n)
=\left(L_*(\zeta_{n-1},\zeta_{n-1})+\nabla_{\xi_*}(\zeta_{n-1},\zeta_{n-1})\right)\ox\zeta_1
+\sum_{i=0}^n\zeta_1\zeta_{n-i}^{2^i}\ox\bar b_\psi(\zeta_i).
$$
Still more explicitly one has
$$
L_*(\zeta_k,\zeta_k)=\sum_{0\le i,j\le k}\zeta_1\zeta_{k-i}^{2^i}\zeta_{k-j}^{2^j}\ox\tilde
L_*(\zeta_i,\zeta_j)=\sum_{0\le i\le k}\zeta_1\zeta_{k-i}^{2^{i+1}}\ox\tilde
L_*(\zeta_i,\zeta_i)
+\sum_{0\le i<j\le k}\zeta_1\zeta_{k-i}^{2^i}\zeta_{k-j}^{2^j}\ox\tilde
L^S_*(\zeta_i,\zeta_j),
$$
where we have denoted
$$
\tilde L^S_*(\zeta_i,\zeta_j):=\tilde
L_*(\zeta_i,\zeta_j)+\tilde L_*(\zeta_j,\zeta_i);
$$
similarly
$$
\nabla_{\xi_*}(\zeta_k,\zeta_k)=
\sum_{0\le i<j\le k}\zeta_{k-i}^{2^i}\zeta_{k-j}^{2^j}\ox
S_*(\zeta_i,\zeta_j).
$$
As for $\bar b_\psi(\zeta_i)$, by \ref{mulstar} it can be calculated by the formula
\begin{equation}\label{bpsibar}
\bar b_\psi(\zeta_i)=\sum_{0<j<i}v_{i-j}^{2^{j-1}}\ox\zeta_j,
\end{equation}
where $v_k$ are determined by the equalities
$$
M_{2^k,2^{k-1},...,2}-M_{2^{k-1},2^{k-2},...,1}^2\equiv2v_k\mod4
$$
in ${\mathscr B}_\#$. For example,
\begin{align*}
v_1&=M_{11},\\
v_2&=M_{411}+M_{231}+M_{222}+M_{2121},\\
v_3&
=M_{8411}
+M_{8231}
+M_{8222}
+M_{82121}
+M_{4631}
+M_{4622}
+M_{46121}
+M_{4442}
+M_{42521}
+M_{42431}
+M_{42422}\\
&+M_{424121}
+M_{421421},
\end{align*}
etc.
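For instance, instantiating \eqref{bpsibar} in low degrees gives
$$
\bar b_\psi(\zeta_1)=0,\qquad
\bar b_\psi(\zeta_2)=v_1\ox\zeta_1,\qquad
\bar b_\psi(\zeta_3)=v_2\ox\zeta_1+v_1^2\ox\zeta_2,
$$
the first value being zero since the sum over $0<j<i$ is empty for $i=1$.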
Thus putting everything together we see
\begin{Lemma}\label{mrC}
The equation \ref{apsi}\eqref{mreqs} for the
value on $\zeta_n$ is equivalent to
$$
(1\ox\tilde m^\r_*)A_\psi(\zeta_n)=\sum_{0<k<n}C^{(n)}_{2^n-2^k+1}\ox\zeta_k
$$
where
\begin{multline*}
C^{(n)}_{2^n-1}=
\sum_{0<i<n}\zeta_1\zeta_{n-1-i}^{2^{i+1}}\ox\left(\tilde
L_*(\zeta_i,\zeta_i)+v_i\right)
+\sum_{0<i<j<n}\zeta_1\zeta_{n-1-i}^{2^i}\zeta_{n-1-j}^{2^j}\ox\tilde
L^S_*(\zeta_i,\zeta_j)
+\sum_{0<i<j<n}\zeta_{n-1-i}^{2^i}\zeta_{n-1-j}^{2^j}\ox
S_*(\zeta_i,\zeta_j)
\end{multline*}
and, for $1<k<n$,
$$
C^{(n)}_{2^n-2^k+1}=\sum_{0<i\le n-k}\zeta_1\zeta_{n-k-i}^{2^{k+i}}\ox
v_i^{2^{k-1}}.
$$
\end{Lemma}\qed
For low values of $n$ these equations look like
\begin{align*}
(1\ox\tilde m^\r_*)A_\psi(\zeta_2)&=0,\\
(1\ox\tilde m^\r_*)A_\psi(\zeta_3)&
=\zeta_1\ox(\pi(M_{222})\ox\zeta_1+\pi(M_{22})\ox\zeta_2)
+\zeta_1^2\ox\pi(M_{32}+M_{23}+M_{212}+M_{122})\ox\zeta_1\\
&+\zeta_1^3\ox\pi M_{22}\ox\zeta_1,\\
(1\ox\tilde m^\r_*)A_\psi(\zeta_4)&
=\zeta_1\ox\left(\pi(M_{8222}+M_{722}+M_{4622}+M_{4442}+M_{42422})\ox\zeta_1\right.\\
&\ \ \ \ \ \ \ \left.+\pi(M_{822}+M_{462}+M_{444}+M_{4242})\ox\zeta_2+\pi(M_{44})\ox\zeta_3\right)\\
&+\zeta_1^4\ox\pi(M_{632}+M_{623}+M_{6212}+M_{6122}+M_{542}+M_{452}+M_{443}+M_{4412}+M_{4142}+M_{3422}\\
&\ \ \ \ \ \ \ +M_{2522}+M_{2432}+M_{2423}+M_{24212}+M_{24122}+M_{21422}+M_{1622}+M_{1442}+M_{12422})\ox\zeta_1\\
&+\zeta_1^5\ox\pi(M_{622}+M_{442}+M_{2422})\ox\zeta_1\\
&+\zeta_2^2\ox\pi(M_{522}+M_{432}+M_{423}+M_{4212}+M_{4122}+M_{1422})\ox\zeta_1
+\zeta_1\zeta_2^2\ox\pi(M_{422})\ox\zeta_1\\
&+\zeta_1^9\ox\left(\pi(M_{222})\ox\zeta_1+\pi(M_{22})\ox\zeta_2\right)
+\zeta_1^4\zeta_2^2\ox\pi(M_{32}+M_{23}+M_{212}+M_{122})\ox\zeta_1\\
&+\zeta_1^5\zeta_2^2\ox\pi(M_{22})\ox\zeta_1,
\end{align*}
etc. (Note that $A_\psi(\zeta_1)=0$ by dimension considerations.)
As for the equations \ref{apsi}\eqref{mleqs}, they have the form
$$
(1\ox\tilde m^\l_*)A_\psi(\zeta_n)
=(\tilde m_*\ox1)A_\psi(\zeta_n)+\zeta_1^{2^{n-1}}\ox A_\psi(\zeta_{n-1})+\zeta_2^{2^{n-2}}\ox
A_\psi(\zeta_{n-2})+...+\zeta_{n-2}^4\ox A_\psi(\zeta_2)+\zeta_{n-1}^2\ox
A_\psi(\zeta_1).
$$
\begin{Lemma}
Suppose given a map $A_\psi$ satisfying \ref{apsi}\eqref{mult} and those
instances of \ref{apsi}\eqref{mreqs}, \ref{apsi}\eqref{mleqs} which involve
the values of ${\mathscr A}_\psi$ on the Milnor generators $i(\zeta_1)$,
$i(\zeta_2)$, ..., where $i:{\mathscr A}_*\to{\mathscr F}_*$ is the inclusion. Then ${\mathscr A}_\psi$
satisfies these equations for all other values too.
\end{Lemma}
Now recall that, as already mentioned in \ref{L*}, according to
\cite{Baues}*{16.5} $\bar R$ is a free right ${\mathscr A}$-module generated by the set
${\mathrm{PAR}}\subset\bar R$ of preadmissible relations. More explicitly, the composite
$$
R^{\mathrm{pre}}\ox{\mathscr A}\xto{\textrm{inclusion}\ox1}\bar R\ox{\mathscr A}\xto{m^\r}\bar R
$$
is an isomorphism of right ${\mathscr A}$-modules, where $R^{\mathrm{pre}}$ is the ${\mathbb F}$-vector
space spanned by the set ${\mathrm{PAR}}$ of preadmissible relations.
Dually it follows that the composite
$$
\Phi^\r_*:\bar R_*\xto{m^\r_*}\bar R_*\ox{\mathscr A}_*\xto{\ro\ox1}R_{\mathrm{pre}}\ox{\mathscr A}_*
$$
is an isomorphism of right ${\mathscr A}_*$-comodules. Here $\ro:\bar R_*\onto R_{\mathrm{pre}}$ denotes
the restriction homomorphism from the space $\bar R_*$ of ${\mathbb F}$-linear forms
on $\bar R$ to the space $R_{\mathrm{pre}}$ of linear forms on its subspace
$R^{\mathrm{pre}}\subset\bar R$ spanned by ${\mathrm{PAR}}$.
It thus follows that we will obtain equations equivalent to \ref{apsi}\eqref{mreqs} if
we compose both sides of these equations with the isomorphism
$1\ox\Phi^\r_*:{\mathscr A}_*\ox\bar R_*\to{\mathscr A}_*\ox R_{\mathrm{pre}}\ox{\mathscr A}_*$. Let us then denote
$$
(1\ox\Phi^\r_*)A_\psi(\zeta_n)=\sum_\mu\rho_{2^n-|\mu|}(\mu)\ox\mu
$$
with some unknown elements $\rho_{j}(\mu)\in({\mathscr A}_*\ox R_{\mathrm{pre}})_j$, where
$\mu$ runs through some basis of ${\mathscr A}_*$.
Now freedom of the right ${\mathscr A}_*$-comodule $\bar R_*$ on $R_{\mathrm{pre}}$ means that the
above isomorphism $\Phi^\r_*$ fits in the commutative diagram
$$
\xymatrix{
\bar R_*\ar[r]^-{\Phi^\r_*}\ar[d]^{m^\r_*}&R_{\mathrm{pre}}\ox{\mathscr A}_*\ar[d]^{1\ox m_*}\\
\bar R_*\ox{\mathscr A}_*\ar[r]^-{\Phi^\r_*\ox1}&R_{\mathrm{pre}}\ox{\mathscr A}_*\ox{\mathscr A}_*.
}
$$
It follows that we have
$$
(1\ox1\ox m_*)(1\ox\Phi^\r_*)A_\psi(\zeta_n)
=(1\ox\Phi^\r_*\ox1)(1\ox m^r_*)A_\psi(\zeta_n).
$$
Then taking into account \ref{mrC} this gives equations
$$
\sum_\mu\rho_{2^n-|\mu|}(\mu)\ox m_*(\mu)=
\sum_\mu\rho_{2^n-|\mu|}(\mu)\ox\mu\ox1+\sum_{0<k<n}(1\ox\Phi^\r_*)(C^{(n)}_{2^n-2^k+1})\ox\zeta_k,
$$
with the constants $C^{(n)}_j$ as in \ref{mrC}. This immediately determines
the elements $\rho_j(\mu)$ for $|\mu|>0$. Indeed, the above equation implies
that $(1\ox\Phi^\r_*)A_\psi(\zeta_n)$ actually lies in the subspace ${\mathscr A}_*\ox
R_{\mathrm{pre}}\ox\Pi\subset{\mathscr A}_*\ox R_{\mathrm{pre}}\ox{\mathscr A}_*$ where $\Pi\subset{\mathscr A}_*$ is the
following subspace:
$$
\Pi=\set{x\in{\mathscr A}_*\ \mid\ m_*(x)\in\bigoplus_{k\ge0}{\mathscr A}_*\ox{\mathbb F}\zeta_k}.
$$
It is easy to see that actually
$$
\Pi=\bigoplus_{k\ge0}{\mathbb F}\zeta_k,
$$
so we can write
$$
(1\ox\Phi^\r_*)A_\psi(\zeta_n)=\sum_{k\ge0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k
$$
where we necessarily have
$$
\rho_{2^n-2^k+1}(\zeta_k)\ox1+\rho_{2^n-2^{k+1}+1}(\zeta_{k+1})\ox\zeta_1^{2^k}+\rho_{2^n-2^{k+2}+1}(\zeta_{k+2})\ox\zeta_2^{2^k}+...
=(1\ox\Phi^\r_*)(C^{(n)}_{2^n-2^k+1})
$$
for all $k\ge1$. By dimension considerations, $\rho_{2^n-2^k+1}(\zeta_k)$ can only be nonzero
for $k<n$, so the number of unknowns in these equations
strictly decreases as $k$ grows. Thus moving ``backwards'' and using
successive elimination we determine all $\rho_{2^n-2^k+1}(\zeta_k)$ for $k>0$.
It is easy to compute values of the isomorphism $1\ox\Phi^\r_*$ on all elements
involved in the constants $C^{(n)}_j$. In particular, elements of the form
$\Phi^\r_*(v_j^{2^k})$ can be given by an explicit formula. One has
$$
\Phi^\r_*(v_k)=\sum_{0\le i<k}\left(\Sq^{2^k}\Sq^{2^{k-1}}\cdots\Sq^{2^{i+2}}[2^i,2^i]\right)_*\ox\zeta_i^2
$$
and
$$
\Phi^\r_*(v_k^{2^{j-1}})
=\sum_{0\le i<k}\left(\Sq^{2^{k+j-1}}\Sq^{2^{k+j-2}}\cdots\Sq^{2^{i+j+1}}[2^{i+j-1},2^{i+j-1}]\right)_*\ox\zeta_i^{2^j},
$$
so our ``upside-down'' solving gives
\begin{align*}
\rho_{2^{n-1}+1}(\zeta_{n-1})&=\zeta_1\ox[2^{n-2},2^{n-2}]_*,\\
\rho_{2^n-2^{n-2}+1}(\zeta_{n-2})&=\zeta_1^{1+2^{n-1}}\ox[2^{n-3},2^{n-3}]_*+\zeta_1\ox\left(\Sq^{2^{n-1}}[2^{n-3},2^{n-3}]\right)_*\\
\rho_{2^n-2^{n-3}+1}(\zeta_{n-3})&=\zeta_1\zeta_2^{2^{n-2}}\ox[2^{n-4},2^{n-4}]_*+\zeta_1^{1+2^{n-1}}\ox\left(\Sq^{2^{n-2}}[2^{n-4},2^{n-4}]\right)_*+\zeta_1\ox\left(\Sq^{2^{n-1}}\Sq^{2^{n-2}}[2^{n-4},2^{n-4}]\right)_*\\
\cdots\\
\rho_{2^n-2^{n-k}+1}(\zeta_{n-k})
&=\sum_{1\le i\le k}\zeta_1\zeta_{k-i}^{2^{n-k+i}}\ox\left(\Sq^{2^{n-k+i-1}}\Sq^{2^{n-k+i-2}}\cdots\Sq^{2^{n-k+1}}[2^{n-k-1},2^{n-k-1}]\right)_*
\end{align*}
for $k<n-1$.
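For instance, the first of these values can be recovered directly; the following display (a verification sketch, not part of the original derivation) spells out the case $k=n-1$, where all unknowns $\rho_{2^n-2^j+1}(\zeta_j)$ with $j\ge n$ vanish and $C^{(n)}_{2^{n-1}+1}=\zeta_1\ox v_1^{2^{n-2}}$ consists of the single summand with $i=1$:

```latex
$$
\rho_{2^{n-1}+1}(\zeta_{n-1})\ox1
=(1\ox\Phi^\r_*)\bigl(C^{(n)}_{2^{n-1}+1}\bigr)
=(1\ox\Phi^\r_*)\bigl(\zeta_1\ox v_1^{2^{n-2}}\bigr)
=\zeta_1\ox[2^{n-2},2^{n-2}]_*\ox1,
$$
```

since for $v_1^{2^{n-2}}$ the $\Sq$-chain in the formula for $\Phi^\r_*(v_k^{2^{j-1}})$ is empty and only the term $[2^{n-2},2^{n-2}]_*\ox\zeta_0^{2^{n-1}}=[2^{n-2},2^{n-2}]_*\ox1$ survives, in agreement with the first row above.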
As for $\rho_{2^n-1}(\zeta_1)$, here we do not have a general formula, but
nevertheless it is easy to compute this value explicitly. In this way we obtain, for example,
\begin{align*}
\rho_1(\zeta_1)&=0,\\
\rho_3(\zeta_1)&=0,\\
\rho_7(\zeta_1)
&
=\zeta_1^3\ox[2,2]_*
+\zeta_1^2\ox\left([3,2]_*+[2,3]_*\right),\\
\rho_{15}(\zeta_1)
&
=\zeta_1^5\zeta_2^2\ox[2,2]_*
+\zeta_1^4\zeta_2^2\ox\left([3,2]_*+[2,3]_*\right)
+\zeta_1\zeta_2^2\ox\left(\Sq^4[2,2]\right)_*
+\zeta_2^2\ox\left((\Sq^5[2,2])_*+(\Sq^4[2,3])_*\right)\\
&
+\zeta_1^5\ox\left(\Sq^6[2,2]\right)_*
+\zeta_1^4\ox\left((\Sq^7[2,2])_*+(\Sq^6[3,2])_*+(\Sq^6[2,3])_*\right),\\
\rho_{31}(\zeta_1)
&
=\zeta_1\zeta_2^4\zeta_3^2\ox[2,2]_*
+\zeta_2^4\zeta_3^2\ox\left([3,2]_*+[2,3]_*\right)
+\zeta_1^9\zeta_3^2\ox\left(\Sq^4[2,2]\right)_*\\
&
+\zeta_1^8\zeta_3^2\ox\left((\Sq^5[2,2])_*+(\Sq^4[2,3])_*\right)
+\zeta_1^9\zeta_2^4\ox\left(\Sq^6[2,2]\right)_*\\
&
+\zeta_1^8\zeta_2^4\ox\left((\Sq^7[2,2])_*+(\Sq^6[3,2])_*+(\Sq^6[2,3])_*\right)
+\zeta_1\zeta_3^2\ox\left(\Sq^8\Sq^4[2,2]\right)_*\\
&
+\zeta_3^2\ox\left((\Sq^9\Sq^4[2,2])_*+(\Sq^8\Sq^4[2,3])_*\right)
+\zeta_1\zeta_2^4\ox\left(\Sq^{10}\Sq^4[2,2]\right)_*\\
&
+\zeta_2^4\ox\left((\Sq^{11}\Sq^4[2,2])_*+(\Sq^{10}\Sq^5[2,2])_*+(\Sq^{10}\Sq^4[2,3])_*\right)
+\zeta_1^9\ox\left(\Sq^{12}\Sq^6[2,2]\right)_*\\
&
+\zeta_1^8\ox\left((\Sq^{13}\Sq^6[2,2])_*+(\Sq^{12}\Sq^6[3,2])_*+(\Sq^{12}\Sq^6[2,3])_*\right),
\end{align*}
etc.
To summarize, let us state
\begin{Proposition}\label{mrrho}
The general solution of \ref{apsi}\eqref{mreqs} for the value on $\zeta_n$ is
given by the formula
$$
A_\psi(\zeta_n)=(1\ox\Phi^\r_*)^{-1}\sum_{k\ge0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k,
$$
where the elements $\rho_j(\zeta_k)\in({\mathscr A}_*\ox R_{\mathrm{pre}})_j$ are the ones explicitly
given above for $k>0$ while $\rho_{2^n}(1)\in({\mathscr A}_*\ox R_{\mathrm{pre}})_{2^n}$ is
arbitrary.
\end{Proposition}\qed
\
Let us now treat the equations \ref{apsi}\eqref{mleqs} in a similar way, now using the
fact that $\bar R$ is a free \emph{left} ${\mathscr A}$-module on an explicit basis
${\mathrm{PAR}}'$ (see \ref{barr} again).
Then similarly to the above dualization it follows that the
composite
$$
\Phi^\l_*:\bar R_*\xto{m^\l_*}{\mathscr A}_*\ox\bar R_*\xto{1\ox\ro'}{\mathscr A}_*\ox R'_{\mathrm{pre}}
$$
is an isomorphism of left ${\mathscr A}_*$-comodules, where $\ro':\bar R_*\onto
R'_{\mathrm{pre}}$ denotes the restriction homomorphism from the space $\bar R_*$ of
${\mathbb F}$-linear forms on $\bar R$ to the space $R'_{\mathrm{pre}}$ of linear forms on the
subspace ${R^{\mathrm{pre}}}'$ of $\bar R$ spanned by ${\mathrm{PAR}}'$.
Thus similarly to the above the equations \ref{apsi}\eqref{mleqs} are
equivalent to ones obtained by composing them with the isomorphism
$1\ox\Phi^\l_*:{\mathscr A}_*\ox\bar R_*\to{\mathscr A}_*\ox{\mathscr A}_*\ox R'_{\mathrm{pre}}$. Let us then denote
$$
(1\ox\Phi^\l_*)A_\psi(\zeta_n)=\sum_{\pi\in{\mathrm{PAR}}'}\sigma_{2^n-|\pi|}(\pi)\ox\pi_*
$$
with some unknown elements $\sigma_j(\pi)\in({\mathscr A}_*\ox{\mathscr A}_*)_j$, where $\pi_*$
denotes the corresponding element of the dual basis, i.~e. the unique linear
form on ${R^{\mathrm{pre}}}'$ assigning 1 to $\pi$ and 0 to all other elements of ${\mathrm{PAR}}'$.
Now again as above, freedom of the left ${\mathscr A}_*$-comodule $\bar R_*$ on $R'_{\mathrm{pre}}$ means that the
above isomorphism $\Phi^\l_*$ fits in the commutative diagram
$$
\xymatrix{
\bar R_*\ar[r]^-{\Phi^\l_*}\ar[d]^{m^\l_*}&{\mathscr A}_*\ox R'_{\mathrm{pre}}\ar[d]^{m_*\ox1}\\
{\mathscr A}_*\ox\bar R_*\ar[r]^-{1\ox\Phi^\l_*}&{\mathscr A}_*\ox{\mathscr A}_*\ox R'_{\mathrm{pre}}.
}
$$
In particular one has
$$
(1\ox1\ox\Phi^\l_*)(1\ox m^\l_*)A_\psi(\zeta_n)
=(1\ox m_*\ox1)(1\ox\Phi^\l_*)A_\psi(\zeta_n).
$$
Using this, we obtain that the equations \ref{apsi}\eqref{mleqs} are
equivalent to the following system of equations
$$
(1\ox m_*-m_*\ox1)(\sigma_{2^n-|\pi|}(\pi))
=1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi),
$$
where we denote
$$
\varSigma_{2^n-|\pi|}(\pi)=\zeta_1^{2^{n-1}}\ox\sigma_{2^{n-1}-|\pi|}(\pi)+\zeta_2^{2^{n-2}}\ox\sigma_{2^{n-2}-|\pi|}(\pi)+...+\zeta_{n-2}^4\ox\sigma_{4-|\pi|}(\pi)+\zeta_{n-1}^2\ox\sigma_{2-|\pi|}(\pi).
$$
We next use the following standard fact:
\begin{Proposition}\label{contra}
For any coalgebra $C$ with the diagonal $m_*:C\to C\ox C$ and counit
$\eps:C\to{\mathbb F}$ there is a contractible cochain complex of the form
$$
\xymatrix{
C\ar[r]^{d_1}
&C^{\ox2}\ar@/^/[l]^{s_1}\ar[r]^{d_2}
&C^{\ox3}\ar@/^/[l]^{s_2}\ar[r]^{d_3}
&C^{\ox4}\ar@/^/[l]^{s_3}\ar[r]^{d_4}
&\cdots,\ar@/^/[l]^{s_4}
}
$$
i.~e. one has
$$
s_nd_n+d_{n-1}s_{n-1}=1_{C^{\ox n}}
$$
for all $n$. Here,
\begin{align*}
d_1&=m_*,\\
d_2&=1\ox m_*-m_*\ox1,\\
d_3&=1\ox1\ox m_*-1\ox m_*\ox1+m_*\ox1\ox1,\\
d_4&=1\ox1\ox1\ox m_*-1\ox1\ox m_*\ox1+1\ox m_*\ox1\ox1-m_*\ox1\ox1\ox1,
\end{align*}
etc., while $s_n$ can be taken to be equal to either
$$
s_n=\eps\ox 1_{C^{\ox n}}
$$
or
$$
s_n=1_{C^{\ox n}}\ox\eps.
$$
\end{Proposition}
\qed
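The contraction identity can be illustrated on a toy example, the group-like coalgebra on a two-letter basis over ${\mathbb F}_2$ (the coalgebra and all names below are our own illustration, unrelated to ${\mathscr A}_*$):

```python
# Toy check of s_n d_n + d_{n-1} s_{n-1} = 1 over F2, for the group-like
# coalgebra C on basis {x, y}: m_*(b) = b (x) b and eps(b) = 1 for each b.
# An element of C^{(x)n} is a set of n-tuples of basis letters (coeffs in F2).
from itertools import product

BASIS = "xy"

def apply_linear(f, p):
    """Extend f: basis tensor -> element linearly (XOR = addition in char 2)."""
    out = set()
    for t in p:
        out ^= f(t)
    return frozenset(out)

def d(n):
    """d_n : C^{(x)n} -> C^{(x)(n+1)}, the alternating sum (= plain sum
    over F2) of the coproduct applied in each tensor position."""
    def on_basis(t):
        out = set()
        for i in range(n):
            out ^= {t[:i] + (t[i], t[i]) + t[i + 1:]}  # m_* in slot i
        return frozenset(out)
    return lambda p: apply_linear(on_basis, p)

def s(n):
    """s_n : C^{(x)(n+1)} -> C^{(x)n}, the contraction eps (x) 1;
    for group-likes eps(b) = 1, so s_n just drops the first factor."""
    return lambda p: apply_linear(lambda t: frozenset({t[1:]}), p)

def contraction_holds(n):
    """Check s_n d_n + d_{n-1} s_{n-1} = id on every basis tensor."""
    for t in product(BASIS, repeat=n):
        p = frozenset({t})
        if s(n)(d(n)(p)) ^ d(n - 1)(s(n - 1)(p)) != p:
            return False
    return True
```

Here `contraction_holds(n)` confirms the identity of the proposition (with $s_n=\eps\ox1$) on this small coalgebra for the first few $n$.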
Now suppose given the elements
$\sigma_{2^k-|\pi|}(\pi)$, $k<n$, satisfying the equations; we must then find
$\sigma_{2^n-|\pi|}(\pi)$ with
$$
d_2\sigma_{2^n-|\pi|}(\pi)=1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi),
$$
with $\varSigma_{2^n-|\pi|}(\pi)$ as above. Then since $d_3d_2=0$, it follows that
$$
d_3(1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi))=0.
$$
Then
$$
1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi)
=(s_3d_3+d_2s_2)(1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi))
=d_2s_2(1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi)).
$$
Taking here $s_n$ from the second equality of \ref{contra}, we see that one
has
$$
1\ox\sigma_{2^n-|\pi|}(\pi)
=\varSigma_{2^n-|\pi|}(\pi)+d_2\left(1\ox(1\ox\eps)(\sigma_{2^n-|\pi|}(\pi))+(1\ox1\ox\eps)(\varSigma_{2^n-|\pi|}(\pi))\right).
$$
It follows that we can reconstruct the terms $\sigma_{2^n-|\pi|}(\pi)$ from
$(1\ox\eps)\sigma_{2^n-|\pi|}(\pi)$, i.~e. from their components that lie in
${\mathscr A}_*\ox{\mathbb F}\subset{\mathscr A}_*\ox{\mathscr A}_*$.
Then denoting
$$
\sigma_{2^n-|\pi|}(\pi)=x_{2^n-|\pi|}(\pi)\ox1+\sigma'_{2^n-|\pi|}(\pi),
$$
with
$$
\sigma'_{2^n-|\pi|}(\pi)\in{\mathscr A}_*\ox\tilde{\mathscr A}_*,
$$
the last equation gives
$$
1\ox x_{2^n-|\pi|}(\pi)\ox1+1\ox\sigma'_{2^n-|\pi|}(\pi)
=\varSigma_{2^n-|\pi|}(\pi)+(m_*\ox1+1\ox m_*)\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi).
$$
By collecting terms of the form $1\ox...$ on both sides, we conclude that any solution for $\sigma$ satisfies
$$
\sigma_{2^n-|\pi|}(\pi)
=m_*(x_{2^n-|\pi|}(\pi))+\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi).
$$
Thus the equation \ref{apsi}\eqref{mleqs} is equivalent to the system of
equations
$$
(1\ox m_*+m_*\ox1)\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox x_{2^{n-i}-|\pi|}(\pi)
=1\ox m_*(x_{2^n-|\pi|}(\pi))+\sum_{i\ge0}1\ox\zeta_i^{2^{n-i}}\ox x_{2^{n-i}-|\pi|}(\pi)
+\varSigma_{2^n-|\pi|}(\pi)
$$
on the elements $x_j(\pi)\in{\mathscr A}_j$. Substituting here back the value of
$\varSigma_{2^n-|\pi|}(\pi)$ we obtain the equations
\begin{multline*}
\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox m_*(x_{2^{n-i}-|\pi|}(\pi))
+\sum_{i\ge0}m_*(\zeta_i)^{2^{n-i}}\ox x_{2^{n-i}-|\pi|}(\pi)
=1\ox m_*(x_{2^n-|\pi|}(\pi))+\sum_{i\ge0}1\ox\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi)\\
+\sum_{i>0}\zeta_i^{2^{n-i}}\ox m_*(x_{2^{n-i}-|\pi|}(\pi))
+\sum_{i'>0,j\ge0}\zeta_{i'}^{2^{n-i'}}\ox\zeta_j^{2^{n-i'-j}}\ox x_{2^{n-i'-j}-|\pi|}(\pi).
\end{multline*}
These equations easily reduce to
$$
m_*(\zeta_i)^{2^{n-i}}=1\ox\zeta_i^{2^{n-i}}+\sum_{0\le
j<i}\zeta_{i-j}^{2^{n-(i-j)}}\ox\zeta_j^{2^{n-i}},
$$
which is identically true. We thus conclude
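Indeed, this is just the $2^{n-i}$-th power of the Milnor coproduct formula $m_*(\zeta_i)=\sum_{0\le j\le i}\zeta_{i-j}^{2^j}\ox\zeta_j$, since raising to a power of $2$ is additive in characteristic $2$. A quick mechanical check over ${\mathbb F}_2$ (the encoding is ours):

```python
# F2-polynomial check of m_*(zeta_i)^{2^(n-i)}
#   = 1 (x) zeta_i^{2^(n-i)} + sum_{0<=j<i} zeta_{i-j}^{2^(n-(i-j))} (x) zeta_j^{2^(n-i)}.
# Elements of A_* (x) A_* are sets of monomials (coefficients in F2); a
# monomial is a sorted tuple of ((side, index), exponent), side "a"/"b"
# marking the left/right tensor factor.

def mono(*pairs):
    return tuple(sorted((v, e) for v, e in pairs if e))

def mono_mul(m1, m2):
    d = {}
    for v, e in m1 + m2:
        d[v] = d.get(v, 0) + e
    return tuple(sorted(d.items()))

def poly_mul(p, q):
    out = set()
    for m1 in p:
        for m2 in q:
            out ^= {mono_mul(m1, m2)}   # char 2: toggle the monomial
    return frozenset(out)

def frobenius(p, e):
    """Raise p to the power 2**e by repeated squaring."""
    for _ in range(e):
        p = poly_mul(p, p)
    return p

def coproduct(i):
    """m_*(zeta_i) = sum_j zeta_{i-j}^{2^j} (x) zeta_j, with zeta_0 = 1."""
    terms = set()
    for j in range(i + 1):
        pairs = []
        if i - j:                       # zeta_0 = 1 contributes no variable
            pairs.append((("a", i - j), 2 ** j))
        if j:
            pairs.append((("b", j), 1))
        terms.add(mono(*pairs))
    return frozenset(terms)

def rhs(i, n):
    terms = {mono((("b", i), 2 ** (n - i)))}
    for j in range(i):
        pairs = [(("a", i - j), 2 ** (n - (i - j)))]
        if j:
            pairs.append((("b", j), 2 ** (n - i)))
        terms.add(mono(*pairs))
    return frozenset(terms)

identity_holds = all(
    frobenius(coproduct(i), n - i) == rhs(i, n)
    for i in range(1, 4) for n in range(i, i + 3))
```

For every tested pair $(i,n)$ with $n\ge i$ the two sides agree monomial by monomial.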
\begin{Proposition}\label{mlx}
The general solution $A_\psi(\zeta_n)$ of \ref{apsi}\eqref{mleqs} is determined by
$$
A_\psi(\zeta_n)=(1\ox\Phi^\l_*)^{-1}\sum_{\pi\in{\mathrm{PAR}}'}\left(x_{2^n-|\pi|}(\pi)\ox1+\tilde
m_*(x_{2^n-|\pi|}(\pi))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi)\right)\ox\pi_*,
$$
where $x_j(\pi)\in{\mathscr A}_j$ are arbitrary homogeneous elements.
\end{Proposition}\qed
Now to put together \ref{mrrho} and \ref{mlx} we must use the dual
$$
\Phi_*:R_{\mathrm{pre}}\ox{\mathscr A}_*\to{\mathscr A}_*\ox R'_{\mathrm{pre}}
$$
of the composite isomorphism
$$
\Phi:{\mathscr A}\ox{R^{\mathrm{pre}}}'\xto{{\Phi^\l}^{-1}}\bar R\xto{\Phi^\r}R^{\mathrm{pre}}\ox{\mathscr A}.
$$
We will need
\begin{Lemma}
There is an inclusion
$$
\Phi_*\left(R_{\mathrm{pre}}\ox\FF1\right)\subset{\mathscr A}_*\ox{R'_{\mathrm{pre}}}^{\le2},
$$
where
$$
{R'_{\mathrm{pre}}}^{\le2}\subset R'_{\mathrm{pre}}
$$
is the subspace of those linear forms on ${R^{\mathrm{pre}}}'$ which vanish on all left
preadmissible elements $[n,m]a\in{\mathrm{PAR}}'$ with $a\in\tilde{\mathscr A}$.
Similarly, there is an inclusion
$$
\Phi_*^{-1}\left(\FF1\ox R'_{\mathrm{pre}}\right)\subset{R_{\mathrm{pre}}}^{\le2}\ox{\mathscr A}_*,
$$
where
$$
{R_{\mathrm{pre}}}^{\le2}\subset R_{\mathrm{pre}}
$$
is the subspace of those linear forms on $R^{\mathrm{pre}}$ which vanish on all right
preadmissible elements $a[n,m]$ with $a\in\tilde{\mathscr A}$.
\end{Lemma}
\begin{proof}
Dualizing, for the first inclusion what we have to prove is that given any
admissible monomial $a\in{\mathscr A}$ and any $[n,m]b\in{\mathrm{PAR}}'$ with $b\in\tilde{\mathscr A}$, in
$\bar R$ one has the equality
$$
a[n,m]b=\sum_ia_i[n_i,m_i]b_i
$$
with $a_i[n_i,m_i]\in{\mathrm{PAR}}$ and admissible monomials $b_i\in\tilde{\mathscr A}$. Indeed,
considering $a$ as a monomial in ${\mathscr F}_0$ there is a unique way to write
$$
a[n,m]=\sum_ia_i[n_i,m_i]c_i
$$
in ${\mathscr F}_0$, with $a_i[n_i,m_i]\in{\mathrm{PAR}}$ and $c_i$ some (not necessarily
admissible or belonging to $\tilde{\mathscr F}_0$) monomials in the $\Sq^k$ generators
of ${\mathscr F}_0$. Thus in ${\mathscr F}_0$ we have
$$
a[n,m]b=\sum_ia_i[n_i,m_i]c_ib.
$$
In $\bar R$ we may replace each $c_ib$ with a sum of admissible monomials of
the same degree; obviously this degree is positive as $b\in\tilde{\mathscr A}$.
The proof of the second inclusion is entirely analogous.
\end{proof}
This lemma implies that for any simultaneous solution $A_\psi(\zeta_n)$ of
\ref{apsi}\eqref{mreqs} and \ref{apsi}\eqref{mleqs}, the elements in ${\mathscr A}_*\ox
R_{\mathrm{pre}}\ox{\mathscr A}_*$ and ${\mathscr A}_*\ox{\mathscr A}_*\ox R'_{\mathrm{pre}}$ corresponding to it according to,
respectively, \ref{mrrho} and \ref{mlx}, satisfy
\begin{multline*}
\sum_{\substack{a\in\tilde{\mathscr A}\\{}[k,l]a\in{\mathrm{PAR}}'}}\left(x_{2^n-k-l-|a|}([k,l]a)\ox1+\tilde
m_*(x_{2^n-k-l-|a|}([k,l]a))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-k-l-|a|}([k,l]a)\right)\ox([k,l]a)_*\\
=(1\ox1\ox\varrho^{>2})(1\ox\Phi_*)\left(\sum_{k>0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k\right),
\end{multline*}
where
$$
\varrho^{>2}:R_{\mathrm{pre}}'\onto{R_{\mathrm{pre}}'}^{>2}
$$
is the restriction of linear forms on ${R^{\mathrm{pre}}}'$ to the subspace spanned by
the subset of ${\mathrm{PAR}}'$ consisting of the left preadmissible relations of the
form $[k,l]a$ with $a\in\tilde{\mathscr A}$. Indeed the remaining part of the element
from \ref{mrrho} is
$$
\rho_{2^n}(1)\ox1,
$$
and according to the lemma its image under $1\ox\Phi_*$ goes to zero under the
map $\varrho^{>2}$.
Since the elements $\rho_{2^n-2^k+1}(\zeta_k)$ are explicitly given for all $k>0$, this
allows us to explicitly determine all elements $x_j([k,l]a)$ for
$[k,l]a\in{\mathrm{PAR}}'$ with $a\in\tilde{\mathscr A}$. For example, in low degrees we obtain
\begin{align*}
x_2([2,3]\Sq^1)=x_2([3,2]\Sq^1)&=\zeta_1^2,\\
x_3([2,2]\Sq^1)&=\zeta_1^3,\\
x_{10}([2,3]\Sq^1)=x_{10}([3,2]\Sq^1)&=\zeta_1^4\zeta_2^2,\\
x_{11}([2,2]\Sq^1)&=\zeta_1^5\zeta_2^2,\\
x_{26}([2,3]\Sq^1)=x_{26}([3,2]\Sq^1)&=\zeta_2^4\zeta_3^2,\\
x_{27}([2,2]\Sq^1)&=\zeta_1\zeta_2^4\zeta_3^2,
\end{align*}
with all other $x_j([k,l]a)=0$ for $j<32$ and $[k,l]a\in{\mathrm{PAR}}'$ with
$a\in\tilde{\mathscr A}$.
\begin{Remark}\label{conj>0}
Calculations can be performed for larger $j$ too. But in fact a pattern is
clearly apparent here. It suggests the conjecture that actually all
elements $x_j([k,l]a)$ for $[k,l]a\in{\mathrm{PAR}}'$ with $a\in\tilde{\mathscr A}$ can be chosen
to be
\begin{align*}
x_{2^n-6}([2,3]\Sq^1)=x_{2^n-6}([3,2]\Sq^1)&=\zeta_{n-3}^4\zeta_{n-2}^2,\\
x_{2^n-5}([2,2]\Sq^1)&=\zeta_1\zeta_{n-3}^4\zeta_{n-2}^2,
\end{align*}
for $n\ge3$, with all other $x_j([k,l]a)=0$.
\end{Remark}
It remains to deal with the elements $x_j([k,l])$. These must satisfy
\begin{multline*}
\sum_{k<2l}\left(x_{2^n-k-l}([k,l])\ox1+\tilde
m_*(x_{2^n-k-l}([k,l]))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-k-l}([k,l])\right)\ox[k,l]_*\\
=(1\ox\Phi_*)\left(\rho_{2^n}(1)\ox1\right)+(1\ox1\ox\varrho^{\le2})(1\ox\Phi_*)\left(\sum_{k>0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k\right),
\end{multline*}
where now
$$
\varrho^{\le2}:R_{\mathrm{pre}}'\onto{R_{\mathrm{pre}}'}^{\le2}
$$
is the restriction of linear forms on ${R^{\mathrm{pre}}}'$ to the subspace spanned by
the Adem relations. The last summand
$D_n=(1\ox1\ox\varrho^{\le2})(1\ox\Phi_*)\left(\sum_{k>0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k\right)$
is again explicitly given; for example, in low degrees it is equal to
\begin{align*}
D_1&=0,\\
D_2&=0,\\
D_3&=\left(\zeta_1\ox\zeta_1\right)^2\ox[2,2]_*,\\
D_4&=\left(\zeta_1^2\zeta_2\ox\zeta_1+\zeta_2\ox\zeta_2+\zeta_1^2\ox\zeta_1\zeta_2\right)^2\ox[2,2]_*,\\
D_5&=\left(
\zeta_2^2\zeta_3\ox\zeta_1
+\zeta_1^4\zeta_3\ox\zeta_2
+\zeta_1^4\zeta_2^2\ox\zeta_1\zeta_2
+\zeta_1^4\ox\zeta_2\zeta_3
+\zeta_3\ox\zeta_3
+\zeta_2^2\ox\zeta_1\zeta_3
\right)^2\ox[2,2]_*.
\end{align*}
Then finally the equations that remain to be solved can be equivalently
written as follows:
\begin{multline*}
(1\ox1\ox\tilde\eps)(1\ox\Phi_*)^{-1}\left(\sum_{k<2l}\left(x_{2^n-k-l}([k,l])\ox1+\tilde
m_*(x_{2^n-k-l}([k,l]))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-k-l}([k,l])\right)\ox[k,l]_*\right)\\
=(1\ox1\ox\tilde\eps)(1\ox\Phi_*)^{-1}(D_n),
\end{multline*}
where
$$
\tilde\eps:{\mathscr A}_*\onto\tilde{\mathscr A}_*
$$
is the projection to the positive degree part, i.~e. maps 1 to 0 and all
homogeneous positive degree elements to themselves. Again, the right hand
sides of these equations are explicitly given constants, for example, in low
degrees they are given by
\begin{tabular}{cl}
$0$,&$n=1$;\\
$0$,&$n=2$;\\
$\zeta_1^2\ox[2,2]_*\ox\zeta_1^2$,&$n=3$;\\
$\left(\zeta_1^4\zeta_2^2\ox[2,2]_*+\zeta_2^2\ox(\Sq^4[2,2])_*+\zeta_1^4\ox(\Sq^6[2,2])_*\right)\ox\zeta_1^2$,&$n=4$;\\
$\left(\zeta_2^4\zeta_3^2\ox[2,2]_*+\zeta_1^8\zeta_3^2\ox(\Sq^4[2,2])_*+\zeta_1^8\zeta_2^4\ox(\Sq^6[2,2])_*+\zeta_3^2\ox(\Sq^8\Sq^4[2,2])_*\right.$\\
$\left.+\zeta_2^4\ox(\Sq^{10}\Sq^4[2,2])_*+\zeta_1^8\ox(\Sq^{12}\Sq^6[2,2])_*\right)\ox\zeta_1^2$,&$n=5$.
\end{tabular}
One possible set of solutions for $\zeta_k$ with $k\le5$ is given by
\begin{align*}
x_5([1,2])&=\zeta_1^2\zeta_2,\\
x_4([1,3])&=\zeta_1^4,\\
x_{13}([1,2])&=\zeta_2^2\zeta_3,\\
x_{12}([1,3])&=\zeta_2^4,\\
x_{29}([1,2])&=\zeta_3^2\zeta_4,\\
x_{28}([1,3])&=\zeta_3^4
\end{align*}
and all remaining $x_j([k,l])=0$ for $j+k+l\le32$.
Or equivalently one might give the same solution ``on the other side of $\Phi$'' by
\begin{align*}
\rho_2(1)&=0,\\
\rho_4(1)&=0,\\
\rho_8(1)&
=\zeta_1^2\zeta_2\ox[1,2]_*
+\zeta_1^4\ox[1,3]_*
+\zeta_2\ox(\Sq^2[1,2])_*
+\zeta_1^2\ox(\Sq^3[1,2])_*,\\
\rho_{16}(1)&
=\zeta_2^2\zeta_3\ox[1,2]_*
+\zeta_2^4\ox[1,3]_*
+\zeta_1^4\zeta_3\ox\left(\Sq^2[1,2]\right)_*
+\zeta_1^4\zeta_2^2\ox\left(\Sq^3[1,2]\right)_*\\
&
+\zeta_3\ox\left(\Sq^4\Sq^2[1,2]\right)_*
+\zeta_2^2\ox\left(\Sq^5\Sq^2[1,2]\right)_*
+\zeta_1^4\ox\left(\Sq^6\Sq^3[1,2]\right)_*,\\
\rho_{32}(1)&
=\zeta_3^2\zeta_4\ox[1,2]_*
+\zeta_3^4\ox[1,3]_*
+\zeta_2^4\zeta_4\ox\left(\Sq^2[1,2]\right)_*
+\zeta_2^4\zeta_3^2\ox\left(\Sq^3[1,2]\right)_*\\
&
+\zeta_1^8\zeta_4\ox\left(\Sq^4\Sq^2[1,2]\right)_*
+\zeta_1^8\zeta_3^2\ox\left(\Sq^5\Sq^2[1,2]\right)_*
+\zeta_1^8\zeta_2^4\ox\left(\Sq^6\Sq^3[1,2]\right)_*\\
&
+\zeta_4\ox\left(\Sq^8\Sq^4\Sq^2[1,2]\right)_*
+\zeta_3^2\ox\left(\Sq^9\Sq^4\Sq^2[1,2]\right)_*
+\zeta_2^4\ox\left(\Sq^{10}\Sq^5\Sq^2[1,2]\right)_*
+\zeta_1^8\ox\left(\Sq^{12}\Sq^6\Sq^3[1,2]\right)_*
\end{align*}
\begin{Remark}\label{conj0}
As in \ref{conj>0}, here one also has a suggestive pattern which leads to a
conjecture that a simultaneous solution of \eqref{mreqs} and
\eqref{mleqs} is determined by putting
\begin{align*}
x_{2^n-3}([1,2])&=\zeta_{n-2}^2\zeta_{n-1},\\
x_{2^n-4}([1,3])&=\zeta_{n-2}^4
\end{align*}
for $n\ge3$, with all other $x_j([k,l])=0$.
\end{Remark}
This then gives the solution itself as follows:
\begin{align*}
A_\psi(\zeta_1)&=0,\\
\\
A_\psi(\zeta_2)&=0,\\
\\
A_\psi(\zeta_3)
=\zeta_1^2\zeta_2&\ox M_3\\
+\zeta_1^4&\ox\left(M_{31}+\zeta_1M_3\right)\\
+\zeta_1^3&\ox M_{221}\\
+\zeta_2&\ox\left(M_5+M_{41}+M_{32}+\zeta_1^2M_3\right)\\
+\zeta_1^2&\ox\left(M_{51}+M_{321}+M_{231}+M_{2121}+\zeta_1(M_{5}+M_{41}+M_{32}+M_{221})+\zeta_1^2M_{11}^2+(\zeta_1^3+\zeta_2)M_3\right)\\
+\zeta_1&\ox M_{2221},\\
\\
A_\psi(\zeta_4)
=\zeta_2^2\zeta_3&\ox M_3\\
+\zeta_2^4&\ox\left(M_{31}+\zeta_1M_3\right)\\
+\zeta_1^5\zeta_2^2&\ox M_{221}\\
+\zeta_1^4\zeta_3&\ox\left(M_5+M_{41}+M_{32}+\zeta_1^2M_3\right)\\
+\zeta_1^4\zeta_2^2&\ox\left(M_{51}+M_{321}+M_{231}+M_{2121}+\zeta_1(M_5+M_{41}+M_{32}+M_{221})+\zeta_1^2M_{11}^2+(\zeta_1^3+\zeta_2)M_3\right)\\
+\zeta_1^9&\ox M_{2221}\\
+\zeta_1\zeta_2^2&\ox M_{4221}\\
+\zeta_3&\ox\left(M_9+M_{72}+M_{621}+M_{54}+M_{441}+M_{432}+M_{342}+M_{2421}+\zeta_1^4M_5+\zeta_2^2M_3\right)\\
+\zeta_2^2&\ox\left(
M_{721}
+M_{451}
+M_{4321}
+M_{4231}
+M_{42121}
+M_{3421}
+(M_5
+M_{41}
+M_{32}
+M_{2111})^2\right.\\
&
\left.
+\zeta_1
(M_9
+M_{72}
+M_{621}
+M_{54}
+M_{441}
+M_{432}
+M_{4221}
+M_{342}
+M_{2421}
)
+\zeta_1^4M_3^2
+\zeta_1^5M_5
+(\zeta_1\zeta_2^2
+\zeta_3)M_3
\right)\\
+\zeta_1^5&\ox\left(M_{6221}+M_{4421}+M_{24221}\right)\\
+\zeta_1^4&\ox\left(
M_{831}
+M_{8121}
+M_{651}
+M_{6321}
+M_{6231}
+M_{62121}
+M_{4521}
+M_{4431}
+M_{44121}
+M_{41421}\right.\\
&
+M_{2721}
+M_{2451}
+M_{24321}
+M_{24231}
+M_{242121}
+M_{23421}\\
&
+\zeta_1(M_{6221}+M_{4421}+M_{24221})
+\zeta_1^2(M_5+M_{41}+M_{32}+M_{2111})^2\\
&
+\zeta_2(M_9+M_{72}+M_{621}+M_{54}+M_{441}+M_{432}+M_{342}+M_{2421})
+\zeta_1^4M_{211}^2
+\zeta_1^6M_3^2
+\zeta_3(M_5+M_{41}+M_{32})\\
&\left.
+\zeta_1^4\zeta_2M_5
+(\zeta_1^2\zeta_3+\zeta_2^3)M_3
\right)\\
+\zeta_1&\ox\left(M_{82221}+M_{44421}+M_{46221}+M_{424221}\right),
\end{align*}
\begin{align*}
A_\psi(\zeta_5)
=\zeta_3^2\zeta_4&\ox M_3\\
+\zeta_3^4&\ox\left(M_{31}+\zeta_1M_3\right)\\
+\zeta_1\zeta_2^4\zeta_3^2&\ox M_{221}\\
+\zeta_2^4\zeta_4&\ox\left(M_5+M_{41}+M_{32}+\zeta_1^2M_3\right)\\
+\zeta_2^4\zeta_3^2&\ox\left(M_{51}+M_{321}+M_{231}+M_{2121}+\zeta_1(M_5+M_{41}+M_{32}+M_{221})+\zeta_1^2M_{11}^2+(\zeta_1^3+\zeta_2)M_3\right)\\
+\zeta_1\zeta_2^8&\ox M_{2221}\\
+\zeta_1^9\zeta_3^2&\ox M_{4221}\\
+\zeta_1^8\zeta_4&\ox\left(M_9+M_{72}+M_{621}+M_{54}+M_{441}+M_{432}+M_{342}+M_{2421}+\zeta_1^4M_5+\zeta_2^2M_3\right)\\
+\zeta_1^8\zeta_3^2&\ox\left(
M_{721}
+M_{451}
+M_{4321}
+M_{4231}
+M_{42121}
+M_{3421}
+(M_5
+M_{41}
+M_{32}
+M_{2111})^2\right.\\
&
+\zeta_1
(M_9
+M_{72}
+M_{621}
+M_{54}
+M_{441}
+M_{432}
+M_{4221}
+M_{342}
+M_{2421}
)\\
&\left.
+\zeta_1^4M_3^2
+\zeta_1^5M_5
+(\zeta_1\zeta_2^2
+\zeta_3)M_3
\right)\\
+\zeta_1^9\zeta_2^4&\ox\left(M_{6221}+M_{4421}+M_{24221}\right)\\
+\zeta_1^8\zeta_2^4&\ox\left(
M_{831}
+M_{8121}
+M_{651}
+M_{6321}
+M_{6231}
+M_{62121}
+M_{4521}
+M_{4431}
+M_{44121}
+M_{41421}\right.\\
&
+M_{2721}
+M_{2451}
+M_{24321}
+M_{24231}
+M_{242121}
+M_{23421}\\
&
+\zeta_1(M_{6221}+M_{4421}+M_{24221})
+\zeta_1^2(M_5+M_{41}+M_{32}+M_{2111})^2\\
&
+\zeta_2(M_9+M_{72}+M_{621}+M_{54}+M_{441}+M_{432}+M_{342}+M_{2421})
+\zeta_1^4M_{211}^2
+\zeta_1^6M_3^2\\
&\left.
+\zeta_1^4\zeta_2M_5+\zeta_3(M_5+M_{41}+M_{32})
+(\zeta_1^2\zeta_3+\zeta_2^3)M_3
\right)\\
+\zeta_1^{17}&\ox\left(M_{82221}+M_{44421}+M_{46221}+M_{424221}\right)\\
+\zeta_4&\ox\left(
M_{\underline{17}}
+M_{\underline{13}4}
+M_{\underline{11}42}
+M_{\underline{10}421}
+M_{98}
+M_{872}
+M_{8621}
+M_{854}
+M_{8441}
+M_{8432}
+M_{8342}
+M_{82421}\right.\\
&
\left.+M_{584}
+M_{3842}
+M_{28421}
+\zeta_1^8M_9
+\zeta_2^4M_5
+\zeta_3^2M_3
\right)\\
+\zeta_1\zeta_3^2&\ox M_{84221}\\
+\zeta_3^2&\ox\left(
M_{\underline{11}421}
+M_{8721}
+M_{8451}
+M_{84321}
+M_{84231}
+M_{842121}
+M_{83421}
+M_{38421}\right.\\
&+
(M_9
+M_{72}
+M_{621}
+M_{54}
+M_{441}
+M_{432}
+M_{42111}
+M_{342}
+M_{2421}
)^2\\
&
+\zeta_1
(M_{\underline{17}}
+M_{\underline{13}4}
+M_{\underline{11}42}
+M_{\underline{10}421}
+M_{98}\\
&\quad
+M_{872}
+M_{8621}
+M_{854}
+M_{8441}
+M_{8432}
+M_{84221}
+M_{8342}
+M_{82421}
+M_{584}
+M_{3842}
+M_{28421})\\
&\left.
+\zeta_1^8M_5^2
+\zeta_1^9M_9
+\zeta_2^4M_3^2
+\zeta_1\zeta_2^4M_5
+(\zeta_1\zeta_3^2+\zeta_4)M_3
\right)\\
+\zeta_1\zeta_2^4&\ox\left(M_{\underline{10}4221}+M_{86221}+M_{84421}+M_{824221}+M_{284221}\right)\\
+\zeta_2^4&\ox\left(
M_{\underline{12}521}
+M_{\underline{12}431}
+M_{\underline{12}4121}
+M_{\underline{12}1421}
+M_{\underline{10}721}
+M_{\underline{10}451}
+M_{\underline{10}4321}
+M_{\underline{10}4231}
+M_{\underline{10}42121}
+M_{\underline{10}3421}\right.\\
&
+M_{8831}
+M_{88121}
+M_{8651}
+M_{86321}
+M_{86231}
+M_{862121}
+M_{84521}
+M_{84431}
+M_{844121}
+M_{841421}\\
&
+M_{82451}
+M_{82721}
+M_{823421}
+M_{824321}
+M_{824231}
+M_{8242121}\\
&
+M_{49421}
+M_{48521}
+M_{48431}
+M_{484121}
+M_{481421}
+M_{418421}\\
&
+M_{2\underline{11}421}
+M_{28721}
+M_{28451}
+M_{284321}
+M_{284231}
+M_{2842121}
+M_{283421}
+M_{238421}\\
&
+\zeta_1(M_{\underline{10}4221}+M_{86221}+M_{84421}+M_{824221}+M_{284221})\\
&
+\zeta_1^2(M_9+M_{72}+M_{621}+M_{54}+M_{441}+M_{432}+M_{42111}+M_{342}+M_{2421})^2\\
&
+\zeta_2
(M_{\underline{17}}
+M_{\underline{13}4}
+M_{\underline{11}42}
+M_{\underline{10}421}
+M_{98}
+M_{872}
+M_{8621}
+M_{854}
+M_{8441}
+M_{8432}
+M_{8342}
+M_{82421}\\
&\quad
+M_{584}
+M_{3842}
+M_{28421})\\
&\left.
+\zeta_1^4M_{4211}^2
+\zeta_1^{10}M_5^2
+\zeta_1^8\zeta_2M_9
+\zeta_1^2\zeta_2^4M_3^2
+\zeta_2^5M_5
+\zeta_4(M_5+M_{41}+M_{32})
+(\zeta_1^2\zeta_4+\zeta_2\zeta_3^2)M_3
\right)
\end{align*}
\begin{align*}
+\zeta_1^9&\ox\left(M_{\underline{12}6221}+M_{\underline{12}4421}+M_{\underline{12}24221}+M_{4\underline{10}4221}+M_{88421}+M_{486221}+M_{484421}+M_{4824221}+M_{4284221}\right)\\
+\zeta_1^8&\ox\left(
M_{\underline{14}631}
+M_{\underline{14}6121}
+M_{\underline{14}2521}
+M_{\underline{14}2431}
+M_{\underline{14}24121}
+M_{\underline{14}21421}\right.\\
&
+M_{\underline{12}831}
+M_{\underline{12}8121}
+M_{\underline{12}651}
+M_{\underline{12}6321}
+M_{\underline{12}6231}
+M_{\underline{12}62121}
+M_{\underline{12}4521}
+M_{\underline{12}4431}
+M_{\underline{12}44121}
+M_{\underline{12}41421}\\
&
+M_{\underline{12}2721}
+M_{\underline{12}2451}
+M_{\underline{12}24321}
+M_{\underline{12}24231}
+M_{\underline{12}242121}
+M_{\underline{12}23421}\\
&
+M_{86631}
+M_{866121}
+M_{862521}
+M_{862431}
+M_{8624121}
+M_{8621421}\\
&
+M_{844521}
+M_{844431}
+M_{8444121}
+M_{8441421}
+M_{842631}
+M_{8426121}
+M_{8423421}
+M_{84212421}\\
&
+M_{6\underline{10}521}
+M_{6\underline{10}431}
+M_{6\underline{10}4121}
+M_{6\underline{10}1421}
+M_{68631}
+M_{686121}
+M_{682521}
+M_{682431}
+M_{6824121}
+M_{6821421}\\
&
+M_{629421}
+M_{628521}
+M_{628431}
+M_{6284121}
+M_{6281421}
+M_{6218421}\\
&
+M_{4\underline{12}521}
+M_{4\underline{12}431}
+M_{4\underline{12}4121}
+M_{4\underline{12}1421}
+M_{4\underline{10}721}
+M_{4\underline{10}451}
+M_{4\underline{10}4321}
+M_{4\underline{10}4231}
+M_{4\underline{10}42121}
+M_{4\underline{10}3421}\\
&
+M_{48831}
+M_{488121}
+M_{48651}
+M_{486321}
+M_{486231}
+M_{4862121}
+M_{484521}
+M_{484431}
+M_{4844121}
+M_{4841421}\\
&
+M_{482721}
+M_{482451}
+M_{4824321}
+M_{4824231}
+M_{48242121}
+M_{4823421}\\
&
+M_{449421}
+M_{448521}
+M_{448431}
+M_{4484121}
+M_{4481421}
+M_{4418421}\\
&
+M_{42\underline{11}421}
+M_{428721}
+M_{428451}
+M_{4284321}
+M_{4284231}
+M_{42842121}
+M_{4283421}
+M_{4238421}\\
&
+(M_{831}
+M_{8121}
+M_{7311}
+M_{7221}
+M_{71211}
+M_{651}
+M_{6411}
+M_{6321}
+M_{63111}
+M_{62211}
+M_{612111}\\
&\quad
+M_{43311}
+M_{43221}
+M_{431211}
+M_{422211}
+M_{421311}
+M_{421221}
+M_{41421}\\
&\quad
+M_{35211}
+M_{34311}
+M_{34221}
+M_{341211}
+M_{314211}\\
&\quad
+M_{2721}
+M_{26211}
+M_{252111}
+M_{2451}
+M_{24411}
+M_{24321}
+M_{243111}
+M_{242211}
+M_{2412111}\\
&\quad
+M_{23421}
+M_{224211}
+M_{2142111})^2\\
&
+(M_{51}+M_{411}+M_{321})^4+M_3^8\\
&
+\zeta_1(M_{\underline{12}6221}+M_{\underline{12}4421}+M_{\underline{12}24221}+M_{88421}+M_{4\underline{10}4221}+M_{486221}+M_{484421}+M_{4824221}+M_{4284221})\\
&
+\zeta_1^4\left(M_5+M_{41}+M_{32}\right)^4\\
&
+\zeta_2^2\left(M_9+M_{72}+M_{621}+M_{54}+M_{441}+M_{432}+M_{342}+M_{2421}\right)^2\\
&
+\zeta_3(M_{\underline{17}}+M_{\underline{13}4}+M_{\underline{11}42}+M_{\underline{10}421}+M_{98}+M_{872}+M_{8621}+M_{854}+M_{8441}+M_{8432}+M_{8342}+M_{82421}\\
&\quad
+M_{584}+M_{3842}+M_{28421})\\
&
+\zeta_1^8\zeta_2^2M_5^2+\zeta_3^2(M_5+M_{41}+M_{32})^2+\zeta_1^8\zeta_3M_9\\
&
+\zeta_4(M_9+M_{72}+M_{621}+M_{54}+M_{441}+M_{432}+M_{342})\\
&
\left.+(\zeta_1^{12}+\zeta_2^4)M_3^4+(\zeta_1^4\zeta_3^2+\zeta_2^6)M_3^2+(\zeta_1^4\zeta_4+\zeta_2^4\zeta_3)M_5+(\zeta_2^2\zeta_4+\zeta_3^3)M_3
\right)\\
+\zeta_1&\ox\left(
M_{\underline{16}82221}
+M_{\underline{16}46221}
+M_{\underline{16}44421}
+M_{\underline{16}424221}\right.\\
&\left.
+M_{8\underline{12}4421}
+M_{8\underline{12}6221}
+M_{8\underline{12}24221}
+M_{888421}
+M_{84\underline{10}4221}
+M_{8486221}
+M_{8484421}
+M_{84824221}
+M_{84284221}
\right)
\end{align*}
The formul\ae\ above were obtained via computer calculations. They lead to the
general patterns in \ref{conj>0} and \ref{conj0} which would determine the map
$A_\psi$ completely.
\begin{comment}
This solution satisfies
$A_\psi(\zeta_k)=0$ for $k<3$,
\begin{align*}
(1\ox\Phi_\l)(A_\psi(\zeta_3))
&
=\left(\zeta_2\ox1+\zeta_1^2\ox\zeta_1\right)\ox[2,3]_*\\
&+\left(\zeta_1^3\ox1+\zeta_1^2\ox\zeta_1+\zeta_1\ox\zeta_1^2\right)\ox([2,2]\Sq^1)_*\\
&+\zeta_1^2\ox1\ox[3,3]_*\\
&+\zeta_1^2\ox1\ox\left(([3,2]\Sq^1)_*+([2,3]\Sq^1)_*\right),\\
\\
(1\ox\Phi_\r)(A_\psi(\zeta_3))
&
=\zeta_2\ox[2,3]_*\ox1\\
&
+\zeta_1^2\ox[3,3]_*\ox1\\
&
+\zeta_1^2\ox\left([2,3]_*+[3,2]_*\right)\ox\zeta_1\\
&
+\zeta_1\ox[2,2]_*\ox\zeta_2,
\end{align*}
and
\begin{align*}
(1\ox\Phi_\l)(A_\psi(\zeta_4))
&
=\left(
\zeta_1^4\zeta_3\ox1
+\zeta_1^4\zeta_2^2\ox\zeta_1
+\zeta_1^8\ox\zeta_2
+\zeta_2^2\ox\zeta_1^5
+\zeta_1^4\ox\zeta_1^4\zeta_2
+\zeta_1^4\ox\zeta_3
+\zeta_3\ox\zeta_1^4\right)\ox[2,3]_*\\
&
+\left(
\zeta_1^5\zeta_2^2\ox1
+\zeta_1^4\zeta_2^2\ox\zeta_1
+\zeta_1^9\ox\zeta_1^2
+\zeta_1^8\ox\zeta_1^3
+\zeta_1\zeta_2^2\ox\zeta_1^4
+\zeta_2^2\ox\zeta_1^5
+\zeta_1^5\ox\zeta_1^6
+\zeta_1^5\ox\zeta_2^2\right.\\
&
\left.\ \
+\zeta_1^4\ox\zeta_1^7
+\zeta_1^4\ox\zeta_1\zeta_2^2
+\zeta_1\ox\zeta_1^4\zeta_2^2\right)\ox([2,2]\Sq^1)_*\\
&
+\left(
\zeta_1^4\zeta_2^2\ox1
+\zeta_1^8\ox\zeta_1^2
+\zeta_1^4\ox\zeta_2^2
+\zeta_2^2\ox\zeta_1^4
+\zeta_1^4\ox\zeta_1^6\right)\ox[3,3]_*\\
&
+\left(
\zeta_1^4\zeta_2^2\ox1
+\zeta_1^8\ox\zeta_1^2
+\zeta_1^4\ox\zeta_2^2
+\zeta_2^2\ox\zeta_1^4
+\zeta_1^4\ox\zeta_1^6\right)\ox\left(([3,2]\Sq^1)_*+([2,3]\Sq^1)_*\right),\\
\\
(1\ox\Phi_\r)(A_\psi(\zeta_4))
&
=\left(\zeta_1^4\ox(\Sq^6[3,3])_*
+\zeta_2^2\ox(\Sq^5[2,3])_*
+\zeta_3\ox(\Sq^4[2,3])_*
+\zeta_1^4\zeta_2^2\ox[3,3]_*\right.\\
&
\ \ \left.+\zeta_1^4\zeta_3\ox[2,3]_*\right)\ox1\\
&
+\left(\zeta_1^4\ox(\Sq^7[2,2])_*
+\zeta_1^4\ox(\Sq^6[2,3])_*
+\zeta_1^4\ox(\Sq^6[3,2])_*
+\zeta_1^5\ox(\Sq^6[2,2])_*\right.\\
&
\ \ +\zeta_2^2\ox(\Sq^5[2,2])_*
+\zeta_2^2\ox(\Sq^4[2,3])_*
+\zeta_1\zeta_2^2\ox(\Sq^4[2,2])_*
+\zeta_1^4\zeta_2^2\ox[2,3]_*\\
&
\ \ \left.+\zeta_1^4\zeta_2^2\ox[3,2]_*
+\zeta_1^5\zeta_2^2\ox[2,2]_*\right)\ox\zeta_1\\
&
+\left(\zeta_1\ox(\Sq^8[2,2])_*
+\zeta_1^8\ox[3,2]_*
+\zeta_1^9\ox[2,2]_*\right)\ox\zeta_2\\
&
+\zeta_1^8\ox[2,3]_*\ox\zeta_1^3\\
&
+\zeta_1^8\ox[2,2]_*\ox\zeta_1\zeta_2\\
&
+\zeta_1\ox[4,4]_*\ox\zeta_3.
\end{align*}
The solutions themselves are readily obtained from these. For example, one has
\begin{align*}
A_\psi(\zeta_3)
&
=\zeta_1\ox\left(\zeta_1^3M_{11}^2
+\zeta_1^2M_{11}M_3
+\zeta_1M_{2211}
+\zeta_2M_{31}
+\zeta_2M_{11}^2
+M_{11}M_{32}
+\zeta_1^3M_{1111}\right.\\
&
+\zeta_2M_{1111}
+\zeta_1M_{321}
+\zeta_1\zeta_2M_3
+M_{2221}
+\zeta_1^2M_{311}
+\zeta_1M_{11}M_{31}
+M_{3121}\\
&\left.
+\zeta_1^2M_{221}
+\zeta_1M_{11}M_{211}
+\zeta_1M_{3111}\right)\\
&
+\zeta_1^2\ox\left(
M_{411}
+M_{231}
+M_{51}
+\zeta_1M_{41}
+\zeta_1M_{5}
+\zeta_1^2M_{31}
+M_{11}M_{211}
+M_{11}M_{31}+\zeta_1^3M_3
\right.\\
&\left.
+\zeta_2M_3+M_{321}+\zeta_1M_{32}
\right)\\
&
+\zeta_2\ox\left(M_{41}
+\zeta_1M_{11}^2
+M_{11}M_3
+\zeta_1M_{31}
+M_5\right)
\end{align*}
\section{Algorithms for machine computations}
Finally, we briefly indicate how to use the obtained expressions to perform
actual calculations on a computer. Notably, knowledge of either a
multiplication map such as $A^\phi$ or a comultiplication map such as
$A_\psi$ enables one to calculate all ordinary and matric triple Massey
products in the Steenrod algebra, as well as the $d_{(2)}$ differential of
the Adams spectral sequence, and hence its $E_3$ term --- see \cites{Baues,
Baues&JibladzeVI}.
We next utilize the quotient map $(\_)^{\le2}:\bar R_*\onto R_*^{\le2}$ given on the dual
monomial basis $M$ by sending all the elements $M_{n_1,...,n_k}$ with $k>2$ to
zero. Let us denote by $A_\psi^{\le2}$ the corresponding composite map
$$
{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox\bar R_*\onto{\mathscr A}_*\ox R_*^{\le2}
$$
Note that the composite
$$
\iota:\bar R_*\xto{(1\ox m^\r_*)m^\l_*=(m^\l_*\ox1)m^\r_*}{\mathscr A}_*\ox\bar
R_*\ox{\mathscr A}_*\onto{\mathscr A}_*\ox R_*^{\le2}\ox{\mathscr A}_*
$$
is injective --- a fact that we used in \ref{L*} and \ref{S*} to
calculate the dual left action operator $L_*$ and the dual symmetry operator
$S_*$ respectively.
The equations \ref{apsi}\eqref{mreqs} and \eqref{mleqs} in particular imply that the composite
$$
W:{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox\bar R_*\xto{1\ox\iota}{\mathscr A}_*\ox{\mathscr A}_*\ox R_*^{\le2}\ox{\mathscr A}_*
$$
is determined by $A_\psi^{\le2}$; namely, one has
\begin{equation}\label{W}
W=
(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*
+(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar b_\psi)m_*.
\end{equation}
We can now reformulate equations \ref{apsi}\eqref{mreqs} and \eqref{mleqs} in
terms of $A_\psi^{\le2}$. Indeed since the map $\iota$ is injective, these
equations are equivalent to the ones obtained by precomposing with the map
$1\ox\iota\ox1$, respectively $1\ox1\ox\iota$. Then using the equalities
$$
(\iota\ox 1)m^\r_*=(1\ox1\ox m_*)\iota
$$
for \eqref{mreqs} and
$$
(1\ox\iota)m^\l_*=(m_*\ox1\ox1)\iota
$$
for \eqref{mleqs}, we can prepend all occurrences of $A_\psi$ with
$1\ox\iota$, i.~e. switch from $A_\psi$ to $W$. In this way we arrive at the
equations
$$
(1\ox1\ox1\ox m_*)W=(W\ox1)m_*+(1\ox\iota\ox1)(\zeta_1\ox\bar b_\psi)m_*
$$
and
$$
(1\ox m_*\ox1\ox1)W=(m_*\ox1\ox1\ox1)W+(1\ox W)m_*,
$$
respectively. Next substituting here the values from \eqref{W} we obtain the
following equations on $A_\psi^{\le2}$:
\begin{multline*}
(1\ox1\ox1\ox m_*)(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(1\ox1\ox1\ox m_*)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(1\ox1\ox1\ox m_*)(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar
b_\psi)m_*\\
=(m_*\ox1\ox1\ox1)(A_\psi^{\le2}\ox1\ox1)(m_*\ox1)m_*
+(1\ox A_\psi^{\le2}\ox1\ox1)(1\ox m_*\ox1)(m_*\ox1)m_*\\
+(1\ox1\ox(\_)^{\le2}\ox1\ox1)(1\ox m^\l_*\ox1\ox1)(\zeta_1\ox\bar
b_\psi\ox1)(m_*\ox1)m_*
\end{multline*}
and
\begin{multline*}
(1\ox m_*\ox1\ox1)(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(1\ox m_*\ox1\ox1)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(1\ox m_*\ox1\ox1)(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar b_\psi)m_*\\
=
(m_*\ox1\ox1\ox1)(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(m_*\ox1\ox1\ox1)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(m_*\ox1\ox1\ox1)(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar b_\psi)m_*
+(1\ox m_*\ox1\ox1)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(1\ox1\ox A_\psi^{\le2}\ox1)(1\ox1\ox m_*)(1\ox m_*)m_*
+(1\ox1\ox1\ox(\_)^{\le2}\ox1)(1\ox1\ox m^\l_*\ox1)(1\ox\zeta_1\ox\bar
b_\psi)(1\ox m_*)m_*.
\end{multline*}
It is straightforward to check that these equations are identically satisfied
for any choice of values for $A_\psi^{\le2}$.
In particular, for the values on the Milnor generators one has
\begin{multline*}
(1\ox\iota)A_\psi(\zeta_n)
=\sum_{0\le i\le n}(m_*\ox1)A_\psi^{\le2}(\zeta_{n-i}^{2^i})\ox\zeta_i
+\sum_{0\le i+j\le n}\zeta_{n-i-j}^{2^{i+j}}\ox A_\psi^{\le2}(\zeta_i^{2^j})\ox\zeta_j\\
+\sum_{0\le i\le n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2}\ox1)(m^\l_*\ox1)(\bar b_\psi(\zeta_i)).
\end{multline*}
Taking into account \eqref{apsisq}, \ref{4=0} and \eqref{bpsibar}, we then have
\begin{equation}\label{ia}
\begin{aligned}
(1\ox\iota)A_\psi(\zeta_n)
=&(m_*\ox(\_)^{\le2})C_n\ox\zeta_1+(m_*\ox1)(A_\psi^{\le2}(\zeta_n))\ox1\\
&+\sum_{0<i\le n}\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})C_i\ox\zeta_1
+\sum_{0\le i\le n}\zeta_{n-i}^{2^i}\ox A_\psi^{\le2}(\zeta_i)\ox1\\
&+\sum_{\substack{0\le i\le n\\0<j<i}}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-j}^{2^{j-1}}))\ox\zeta_j,
\end{aligned}
\end{equation}
where we have denoted
$$
C_i=L_*(\zeta_{i-1},\zeta_{i-1})+{\nabla_\xi}_*(\zeta_{i-1},\zeta_{i-1})
\in({\mathscr A}_*\ox\bar R_*)_{2^i-1},\ i=1,2,3,...
$$
We see from these equations that the elements $A_\psi(\zeta_n)\in{\mathscr A}_*\ox\bar
R_*$ actually belong to a smaller subspace ${\mathscr A}_*\ox R_Q$, where
$R_Q\subset\bar R_*$ is the subspace defined by the equality
$$
R_Q=\iota^{-1}({\mathscr A}_*\ox R^{\le2}_*\ox Q_*),
$$
with $Q_*\subset{\mathscr A}_*$ denoting the linear subspace spanned by the polynomial
generators $1,\zeta_1,\zeta_2,\zeta_3,...$.
We also note that there is a commutative diagram
$$
\xymatrix{
&{\mathscr A}_*\ox R^{\le2}_*\ox{\mathscr A}_*\ar[dr]^{\eps\ox1\ox\eps}\\
\bar R_*\ar[ur]^\iota\ar[rr]^{(\_)^{\le2}}&&R^{\le2}_*,
}
$$
where $\eps:{\mathscr A}_*\to{\mathbb F}$ is the counit. Indeed the dual diagram just expresses
the trivial fact that applying the multiplication map $\alpha\ox
r\ox\beta\mapsto\alpha r\beta$ to the tensor $1\ox r\ox1\in{\mathscr A}\ox
R^{\le2}\ox{\mathscr A}$ gives the value of the inclusion $R^{\le2}\into\bar R$ on the
element $r$. Thus for any $r\in\bar R_*$ we have
$$
r^{\le2}=(\eps\ox1\ox\eps)\iota(r).
$$
It is convenient in these circumstances to pick the basis $B_*$ of the space
$\bar R_*$ dual to the preadmissible basis $B$ of $\bar R$ as described in
\ref{barr}. Recall that the latter basis consists of elements of the form
$\pi\alpha$ where $\alpha$ is an admissible monomial and $\pi$ is a
preadmissible relation, i.~e. has form $\Sq^{n_k}\cdots\Sq^{n_1}[n_0,n]$
where $[n_0,n]$ is an Adem relation, the monomial $\Sq^{n_k}\cdots\Sq^{n_1}$
is admissible and moreover $n_1\ge2n_0$. Thus the dual basis $B_*$ consists
of linear forms $(\pi\alpha)_*$ on $\bar R$ which take value $1$ on
$\pi\alpha$ and $0$ on all other elements of $B$. We will call $B_*$ the
\emph{dual preadmissible basis} of $\bar R_*$. It is convenient for us because
of the following
\begin{Lemma}
The dual preadmissible basis $B_*$ has the following properties:
\begin{itemize}
\item
the subset $B_Q$ of $B_*$ consisting of the elements
$(\pi1)_*$ and $(\pi\Sq^{2^k}\Sq^{2^{k-1}}\cdots\Sq^2\Sq^1)_*$ for all
preadmissible relations $\pi$ and all $k\ge0$ is a basis for the subspace
$R_Q$ of $\bar R_*$;
\item
the elements $(\pi1)_*$ for all preadmissible relations $\pi$ form a basis
$B_0$ of the subspace
$$
R_0=\iota^{-1}({\mathscr A}_*\ox R^{\le2}_*\ox\FF1)\subset R_Q\subset\bar R_*;
$$
\item
for each $k\ge1$ the elements $(\pi1)_*$ and
$(\pi\Sq^{2^j}\Sq^{2^{j-1}}\cdots\Sq^2\Sq^1)_*$ with $j<k$ form a basis $B_k$
of the subspace
$$
R_k=\iota^{-1}\left(\bigoplus_{0\le j\le k}{\mathscr A}_*\ox
R^{\le2}_*\ox{\mathbb F}\zeta_j\right)\subset R_Q\subset\bar R_*;
$$
\item
the map $(\_)^{\le2}:\bar R_*\to R^{\le2}_*$ sends the elements $(1[n,m]1)_*$
to $[n,m]_*$ and all other elements of $B_*$ to $0$.
\end{itemize}
\end{Lemma}
\begin{proof}
\end{proof}
Thus we have a filtration
$$
B_0\subset B_1\subset\cdots\subset B_Q\subset B_*;
$$
let us also single out the subset
$$
B_{\textrm{Adem}}\subset B_0
$$
consisting of the duals $[a,b]_*=(1[a,b]1)_*$ of the Adem relations, for
$0<a<2b$, $a,b\in{\mathbb N}$.
With respect to the basis $B_Q$, any solution $A_\psi(\zeta_n)$ of the above
equation can be written in the form
$$
A_\psi(\zeta_n)=\sum_{\beta\in B_Q}X_{2^n-|\beta|}(\beta)\ox\beta,
$$
for some uniquely determined elements
$X_{2^n-|\beta|}(\beta)\in{\mathscr A}_{2^n-|\beta|}$, where $|\beta|$ denotes degree of
the (homogeneous) element $\beta$. Moreover in these terms one has
$$
A_\psi^{\le2}(\zeta_n)=\sum_{a<2b}X_{2^n-a-b}([a,b]_*)\ox[a,b]_*.
$$
When substituting these expressions in \eqref{ia} it will be also
convenient to use the map
$$
\tilde\iota:\bar R_*\to(\tilde{\mathscr A}_*\ox\bar R_*\ox{\mathscr A}_*+{\mathscr A}_*\ox\bar
R_*\ox\tilde{\mathscr A}_*)\subset{\mathscr A}_*\ox\bar R_*\ox{\mathscr A}_*
$$
determined by the equality
$$
\iota(r)=1\ox r\ox1+\tilde\iota(r).
$$
Note that the elements $\tilde\iota([a,b]_*)\in\tilde{\mathscr A}_*\ox R^{\le2}_*\ox\FF1$ are
not necessarily zero. For example, one has
\begin{align*}
\tilde\iota([1,3]_*)&=\zeta_1\ox[1,2]_*\ox1,\\
\tilde\iota([3,2]_*)&=\zeta_1\ox[2,2]_*\ox1,\\
\tilde\iota([1,5]_*)&=\zeta_1\ox([1,4]_*+[2,3]_*)\ox1,\\
\tilde\iota([3,3]_*)&=\zeta_1\ox[2,3]_*\ox1+\zeta_1^2\ox[2,2]_*\ox1,
\end{align*}
etc.
Substituting this into \eqref{ia} one obtains
\begin{align*}
\sum_{\beta\in B_Q}X_{2^n-|\beta|}(\beta)\ox\iota\beta
&=\sum_{a<2b}
\left(
m_*(X_{2^n-a-b}([a,b]_*))+\sum_{0\le i\le n}\zeta_{n-i}^{2^i}\ox X_{2^i-a-b}([a,b]_*)\right)
\ox[a,b]_*\ox1\\
&+\left((m_*\ox(\_)^{\le2})C_n
+\sum_{0<i\le n}\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})C_i
+\sum_{1\le i\le n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-1}))
\right)\ox\zeta_1\\
&+\sum_{j\ge2}\left(\sum_{j<i\le
n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-j}^{2^{j-1}}))\right)\ox\zeta_j
\end{align*}
Moreover let us write for $\beta\in B_Q$
$$
\iota\beta=\sum_{k\ge0}c_{|\beta|-2^k+1}(\beta)\ox\zeta_k
$$
with $c_j(\beta)\in({\mathscr A}_*\ox R^{\le2}_*)_j$ the \emph{coordinates} of
$\beta$.
Thus we have
$$
\tilde\iota\beta=\tilde c_{|\beta|}(\beta)\ox1+\sum_{k\ge1}c_{|\beta|-2^k+1}(\beta)\ox\zeta_k,
$$
where
$$
c_{|\beta|}(\beta)=1\ox\beta+\tilde c_{|\beta|}(\beta),
$$
with $\tilde c_{|\beta|}(\beta)\in(\tilde{\mathscr A}_*\ox R^{\le2}_*)_{|\beta|}$.
Then collecting terms with respect to the last component, the above equation
becomes equivalent to the system
$$
\left\{
\begin{aligned}
\sum_{\beta\in B_Q}X_{2^n-|\beta|}(\beta)\ox c_{|\beta|}(\beta)
&=\sum_{a<2b}
\left(
m_*(X_{2^n-a-b}([a,b]_*))
+\sum_{0\le i\le n}\zeta_{n-i}^{2^i}\ox X_{2^i-a-b}([a,b]_*)
\right)
\ox[a,b]_*,\\
\sum_{\beta\in B_Q-B_0}X_{2^n-|\beta|}(\beta)\ox c_{|\beta|-1}(\beta)
&=(m_*\ox(\_)^{\le2})C_n
+\sum_{0<i\le n}\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})C_i\\
&+\sum_{1<i\le n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-1})),\\
\sum_{\beta\in B_Q-B_j}X_{2^n-|\beta|}(\beta)\ox c_{|\beta|-2^j+1}(\beta)
&=\sum_{j<i\le
n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-j}^{2^{j-1}})),&j\ge2.
\end{aligned}
\right.
$$
\end{comment}
\endinput
\chapter{Computation of the algebra of secondary cohomology operations and its
dual}\label{A}
We first describe explicit splittings of the pair algebra ${\mathscr R}^{\mathbb F}$ of relations
in the Steenrod algebra and its dual ${\mathscr R}_{\mathbb F}$. Then we describe in terms of
these splittings $s$ the multiplication maps $A^s$ for the Hopf pair algebra
${\mathscr B}^{\mathbb F}$ of secondary cohomology operations and we describe the dual maps
$A_s$ determining the Hopf pair coalgebra ${\mathscr B}_{\mathbb F}$ dual to ${\mathscr B}^{\mathbb F}$. On the
basis of the main result in the book \cite{Baues} we describe systems of
equations which can be solved inductively by a computer and which yield the
multiplication maps $A^s$ and $A_s$ as a solution. It turns out that $A_s$ is
explicitly given by a formula in which only the values $A_s(\zeta_n)$,
$n\ge1$, have to be computed where $\zeta_n$ is the Milnor generator in the
dual Steenrod algebra ${\mathscr A}_*$.
\section{Computation of ${\mathscr R}^{\mathbb F}$ and ${\mathscr R}_{\mathbb F}$}\label{rcomp}
Let us fix a function $\chi:{\mathbb F}\to{\mathbb G}$ which splits the projection
${\mathbb G}\to{\mathbb F}$, namely, take
\begin{equation}\label{chieq}
\chi(k\!\!\mod p) = k\!\!\mod p^2, \ 0\le k<p.
\end{equation}
We will use $\chi$ to define splittings of
${\mathscr R}^{\mathbb F}=\left({\mathscr R}^{\mathbb F}_1\xto\d{\mathscr R}^{\mathbb F}_0\right)$. Here a \emph{splitting} $s$
of ${\mathscr R}^{\mathbb F}$ is an ${\mathbb F}$-linear map for which the diagram
\begin{equation}\label{s}
\alignbox{
\xymatrix{
&&{\mathscr R}^{\mathbb F}_1\ar[d]^\d\\
R_{\mathscr F}\ar@{ (->}[r]\ar[urr]^s
&{\mathscr F}_0\ar@{=}[r]
&{\mathscr R}^{\mathbb F}_0
}
}
\end{equation}
commutes; here $R_{\mathscr F}=\im(\d)=\ker(q_{\mathscr F}:{\mathscr F}_0\to{\mathscr A})$. We only consider the
case $p=2$.
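For $p=2$ the function $\chi$ simply picks the representatives $0,1\in{\mathbb Z}/4$; it splits the projection ${\mathbb G}\to{\mathbb F}$ but is not additive, and this non-additivity underlies the correction maps considered below. A minimal computational sketch (the encoding of ${\mathbb F}$, ${\mathbb G}$ as integers mod $2$, mod $4$ is ours):

```python
p = 2

def chi(k):
    """chi: Z/p -> Z/p^2, sending the class of k (0 <= k < p) to k mod p^2."""
    assert 0 <= k < p
    return k % p**2  # for p = 2: 0 -> 0, 1 -> 1 inside Z/4

# chi splits the projection Z/p^2 ->> Z/p:
for k in range(p):
    assert chi(k) % p == k

# but chi is not additive: chi((1+1) mod 2) = chi(0) = 0,
# while chi(1) + chi(1) = 2 in Z/4
assert chi((1 + 1) % p) == 0
assert (chi(1) + chi(1)) % p**2 == 2
```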
\begin{Definition}[The right equivariant splitting of ${\mathscr R}^{\mathbb F}$]\label{chi}
Using $\chi$, all \emph{Adem relations}
$$
[a,b]:=\Sq^a\Sq^b+\sum_{k=0}^{\left[\frac
a2\right]}\binom{b-k-1}{a-2k}\Sq^{a+b-k}\Sq^k
$$
for $a,b>0$, $a<2b$, can be lifted to elements $[a,b]_\chi\in R_{\mathscr B}$ by
applying $\chi$ to all coefficients, i.~e. by interpreting $[a,b]$ as an
element of ${\mathscr B}$. As shown in \cite{Baues}*{16.5.2}, $R_{\mathscr F}$ is a free right
${\mathscr F}_0$-module with a basis consisting of \emph{preadmissible relations}. For
$p=2$ these are elements of the form
$$
\Sq^{a_1}\cdots\Sq^{a_{k-1}}[a_k,a]\in R_{\mathscr F}
$$
satisfying $a_1\ge2a_2$, ..., $a_{k-2}\ge2a_{k-1}$, $a_{k-1}\ge2a_k$,
$a_k<2a$. Sending such an element to
$$
\Sq^{a_1}\cdots\Sq^{a_{k-1}}[a_k,a]_\chi\in R_{\mathscr B}
$$
then determines a unique right ${\mathscr F}_0$-equivariant splitting $\phi$ of the pair
algebra ${\mathscr R}^{\mathbb F}$; that is, we get a commutative diagram
$$
\xymatrix@!C=5em{
R_{\mathscr F}\ar[r]^-\phi\ar@{ (->}[d]
&**[l]R_{\mathscr B}\ox{\mathbb F}={\mathscr R}^{\mathbb F}_1\ar[d]^\d\\
{\mathscr F}_0\ar@{=}[r]
&{\mathscr R}^{\mathbb F}_0.
}
$$
\end{Definition}
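To illustrate the lift $[a,b]_\chi$, the following sketch computes the mod $2$ coefficients of an Adem relation and rereads them in ${\mathbb Z}/4$ via $\chi$; the encoding of monomials $\Sq^i\Sq^j$ as pairs $(i,j)$, with $\Sq^0$ recorded as $j=0$, is our own:

```python
from math import comb

def adem(a, b):
    """Coefficients of the Adem relation
    [a,b] = Sq^a Sq^b + sum_k binom(b-k-1, a-2k) Sq^{a+b-k} Sq^k
    over F_2, as a dict (i, j) -> coefficient, assuming 0 < a < 2b."""
    assert 0 < a < 2 * b
    rel = {(a, b): 1}
    for k in range(a // 2 + 1):
        if comb(b - k - 1, a - 2 * k) % 2:
            rel[(a + b - k, k)] = rel.get((a + b - k, k), 0) ^ 1
    return {m: c for m, c in rel.items() if c}

def adem_chi(a, b):
    """The lift [a,b]_chi: apply chi coefficientwise, i.e. reread the
    same 0/1 coefficients in Z/4."""
    return {m: c % 4 for m, c in adem(a, b).items()}

# [1,2] encodes Sq^1 Sq^2 = Sq^3, and [2,2] encodes Sq^2 Sq^2 = Sq^3 Sq^1
assert adem(1, 2) == {(1, 2): 1, (3, 0): 1}
assert adem(2, 2) == {(2, 2): 1, (3, 1): 1}
```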
For a splitting $s$ of ${\mathscr R}^{\mathbb F}$ the map $s\!\ox\!1\oplus1\!\ox\!s$
induces the map $s_\#$ in the diagram
\begin{equation}\label{rsplit}
\alignbox{
\xymatrix{
{\mathscr R}^{\mathbb F}_1\ar[r]^-\Delta\ar@{->>}[d]^\d
&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1\ar@{->>}[d]^{\d_{\hat\ox}}
&{\mathscr R}^{\mathbb F}_1\!\ox\!{\mathscr F}_0\oplus{\mathscr F}_0\!\ox\!{\mathscr R}^{\mathbb F}_1\ar[l]\\
R_{\mathscr F}\ar@/^/@{-->}[u]^s\ar@{ (->}[d]\ar[r]^{\Delta_R}
&R^{(2)}_{\mathscr F}\ar@/^/@{-->}[u]^{s_\#}\ar@{ (->}[d]
&R_{\mathscr F}\!\ox\!{\mathscr F}_0\oplus{\mathscr F}_0\!\ox\!R_{\mathscr F}\ar[l]\ar@{-->}[u]_{s\ox\!1\oplus1\!\ox\!s}\\
{\mathscr F}_0\ar[r]^-\Delta
&{\mathscr F}_0\ox{\mathscr F}_0.
}
}
\end{equation}
Then the difference
$U=s_\#\Delta_R-\Delta s:R_{\mathscr F}\to({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1$ satisfies $\d_{\hat\ox}U=0$ since
$$
\d_{\hat\ox} s_\#\Delta_R=\Delta_R=\Delta_R\d s=\d_{\hat\ox}\Delta s.
$$
Thus $U$ lifts to $\ker\d_{\hat\ox}\cong{\mathscr A}\ox{\mathscr A}$ and gives an ${\mathbb F}$-linear
map
\begin{equation}\label{ueq}
U^s:R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}.
\end{equation}
If we use the splitting $s$ to identify ${\mathscr R}^{\mathbb F}_1$ with the direct sum
${\mathscr A}\oplus R_{\mathscr F}$, then it is clear that knowledge of the map $U^s$ determines the
diagonal ${\mathscr R}^{\mathbb F}_1\to({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1$ completely. Indeed $s_\#$
yields the identification $({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1\cong{\mathscr A}\!\ox\!{\mathscr A}\oplus
R^{(2)}_{\mathscr F}$, and under these identifications
$\Delta:{\mathscr R}^{\mathbb F}_1\to({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1$ corresponds to a map which by
commutativity of \eqref{rsplit} must have the form
\begin{equation}\label{udia}
{\mathscr A}\oplus R_{\mathscr F}
\xto{\left(\begin{smallmatrix}\Delta_{\mathscr A}&U^s\\0&\Delta_R\end{smallmatrix}\right)}
{\mathscr A}\!\ox\!{\mathscr A}\oplus R^{(2)}_{\mathscr F}
\end{equation}
and is thus determined by $U^s$.
One readily checks that the map $U^s$ for $s=\phi$ in \bref{chi} coincides with the map $U$
defined in \cite{Baues}*{16.4.3} in terms of the algebra ${\mathscr B}$.
Given the splitting $s$ and the map $U^s$, the only piece of structure
remaining to determine the ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$-comonoid structure of
${\mathscr R}^{\mathbb F}$ completely is the ${\mathscr F}_0$-${\mathscr F}_0$-bimodule structure on
${\mathscr R}^{\mathbb F}_1\cong{\mathscr A}\oplus R_{\mathscr F}$. Consider for $f\in{\mathscr F}_0$, $r\in R_{\mathscr F}$ the
difference $s(fr)-fs(r)$. It belongs to the kernel of $\d$ since
$$
\d s(fr)=fr=f\d s(r)=\d(f s(r)).
$$
Thus we obtain the \emph{left multiplication map}
\begin{equation}\label{aeq}
a^s:{\mathscr F}_0\ox R_{\mathscr F}\to{\mathscr A}.
\end{equation}
Similarly we obtain the \emph{right multiplication map}
$$
b^s:R_{\mathscr F}\ox{\mathscr F}_0\to{\mathscr A}
$$
by the difference $s(rf)-s(r)f$.
\begin{Lemma}\label{chisp}
For $s=\phi$ in \bref{chi} the right multiplication map $b^\phi$ is trivial,
that is $\phi$ is right equivariant, and the left multiplication map factors
through $q_{\mathscr F}\ox1$ inducing the map
$$
a^\phi:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}.
$$
\end{Lemma}
\begin{proof}
Right equivariance holds by definition. As for the factorization, the subspace
$R_{\mathscr F}\ox R_{\mathscr F}\subset{\mathscr F}_0\ox R_{\mathscr F}$ lies in the kernel of $a^\phi:{\mathscr F}_0\ox R_{\mathscr F}\to{\mathscr A}$, since
by right equivariance of $s$ and by the pair algebra property \eqref{paireq}
for ${\mathscr R}^{\mathbb F}$ one has for any $r,r'\in R_{\mathscr F}$
$$
s(rr')=s(r)r'=s(r)\d s(r')=(\d s(r))s(r')=rs(r').
$$
Hence factoring the above map through $({\mathscr F}_0\ox R_{\mathscr F})/(R_{\mathscr F}\ox
R_{\mathscr F})\cong{\mathscr A}\ox R_{\mathscr F}$ we obtain a map
$$
{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}.
$$
\end{proof}
Summarizing the above, we have proved
\begin{Proposition}\label{detrel}
Using the splitting $s=\phi$ of ${\mathscr R}^{\mathbb F}$ in \bref{chi}
the comonoid ${\mathscr R}^{\mathbb F}$ in the category ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$ described
in \bref{relcom} is completely determined by the maps
$$
U^\phi:R_{\mathscr F}\to{\mathscr A}\ox{\mathscr A}
$$
and
$$
a^\phi:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}
$$
given in \eqref{ueq} and \eqref{aeq} respectively.
\end{Proposition}\qed
We next introduce another splitting $s=\psi$ for which $U^s=0$. For this we use
the fact that ${\mathscr A}_*=\Hom({\mathscr A},{\mathbb F})$ and
\begin{equation}
{\mathscr B}_\#=\Hom({\mathscr B}_0,{\mathbb G})
\end{equation}
with ${\mathscr B}_0=T_{\mathbb G}(E_{\mathscr A})$, are both polynomial algebras, in such a way that the
generators of ${\mathscr A}_*$ are also (part of the) generators of ${\mathscr B}_\#$.
Using $\chi$ in \eqref{chieq} we obtain the function
\begin{equation}\label{psichi}
\psi_\chi:{\mathscr A}_*\to{\mathscr B}_\#
\end{equation}
(which is not ${\mathbb F}$-linear) as follows. Each element $x$ in ${\mathscr A}_*$ can be
written uniquely as an ${\mathbb F}$-linear combination $x=\sum_\alpha n_\alpha\alpha$ where
$\alpha$ runs through the monomials in Milnor generators. Such a monomial
can also be considered as an element in ${\mathscr B}_\#$ by \bref{eslp}, so that we
can define
$$
\psi_\chi(x)=\sum_\alpha\chi(n_\alpha)\alpha\in{\mathscr B}_\#.
$$
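Concretely, the defect of additivity of $\psi_\chi$ always lands in the image of ${\mathbb F}\into{\mathbb G}$, i.e. in $2{\mathbb Z}/4$ for $p=2$; this is what makes the induced map $q\psi_\chi$ below ${\mathbb F}$-linear. A sketch with our own dict encoding of ${\mathbb F}$-linear combinations of Milnor monomials:

```python
def psi_chi(x):
    """x: dict {Milnor monomial: coefficient mod 2}.  psi_chi rereads the
    same combination with coefficients chi(0)=0, chi(1)=1 in Z/4."""
    return {m: c % 2 for m, c in x.items() if c % 2}

def add_mod(u, v, n):
    """Sum of two coefficient dicts modulo n, dropping zero terms."""
    out = dict(u)
    for m, c in v.items():
        out[m] = (out.get(m, 0) + c) % n
    return {m: c for m, c in out.items() if c}

x = {'zeta1': 1, 'zeta2': 1}
y = {'zeta1': 1}

# defect = psi_chi(x) + psi_chi(y) - psi_chi(x + y), computed in Z/4
neg = {m: (-c) % 4 for m, c in psi_chi(add_mod(x, y, 2)).items()}
defect = add_mod(add_mod(psi_chi(x), psi_chi(y), 4), neg, 4)

assert defect == {'zeta1': 2}                      # nonzero in Z/4 ...
assert all(c % 2 == 0 for c in defect.values())    # ... but lies in im(F -> G)
```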
\begin{Definition}[The comultiplicative splitting of ${\mathscr R}^{\mathbb F}$]\label{psi}
Consider the following commutative diagram with exact rows and columns
$$
\xymatrix{
{\mathscr A}_*\ar@{ >->}[r]\ar@{-->}[dr]_{\psi_\chi}&{\mathscr F}_*\ar[r]&\Hom(R_{\mathscr B},{\mathbb F})\\
&{\mathscr B}_\#\ar[r]^-q\ar@{->>}[u]&\Hom(R_{\mathscr B},{\mathbb G})\ar[u]\\
{\mathscr A}_*\ar@{ >->}[r]^{{q_{\mathscr F}}_*}&{\mathscr F}_*\ar@{ >->}[u]_j\ar[r]
&\Hom(R_{\mathscr B},{\mathbb F})\ar@{ >->}[u]_{j_R}\ar@{->>}[r]&{\mathscr A}_*\ar@{-->}[dl]^{q\psi_\chi}\\
&{\mathscr R}_{\mathbb F}^0\ar@{=}[u]&{\mathscr R}_{\mathbb F}^1\ar@{=}[u]
}
$$
with the columns induced by the short exact sequence ${\mathbb F}\into{\mathbb G}\onto{\mathbb F}$
and the rows induced by \eqref{drel}. In particular $q$ is induced by the
inclusion $R_{\mathscr B}\subset{\mathscr B}_0$. Now it is clear that $\psi_\chi$ yields a map
$q\psi_\chi$ which lifts to $\Hom(R_{\mathscr B},{\mathbb F})$ so that we get the map
$$
q\psi_\chi:{\mathscr A}_*\to{\mathscr R}_{\mathbb F}^1
$$
which splits the projection ${\mathscr R}_{\mathbb F}^1\onto{\mathscr A}_*$. Moreover $q\psi_\chi$ is
${\mathbb F}$-linear since for all $x,y\in{\mathscr A}_*$ the elements
$\psi_\chi(x)+\psi_\chi(y)-\psi_\chi(x+y)\in{\mathscr B}_\#$ are in the image of the
inclusion $j{q_{\mathscr F}}_*:{\mathscr A}_*\into{\mathscr B}_\#$ and thus go to zero under $q$.
The dual of $q\psi_\chi$ is thus a retraction $(q\psi_\chi)^*$ in the
short exact sequence
$$
\xymatrix{
R_{\mathscr F}\ar@{-->}[dr]_{(q\psi_\chi)^*_\perp}&R_{\mathscr B}\ox{\mathbb F}\ar@{=}[d]\ar@{->>}[l]_\pi&{\mathscr A}\ar@{
>->}[l]_-\iota\\
&{\mathscr R}^{\mathbb F}_1\ar@{-->}[ur]_{(q\psi_\chi)^*}
}
$$
which induces the splitting $\psi=(q\psi_\chi)^*_\perp$ of ${\mathscr R}^{\mathbb F}$
determined by
$$
\psi(\pi(x))=x-\iota((q\psi_\chi)^*(x)).
$$
\end{Definition}
\begin{Lemma}
For the splitting $s=\psi$ of ${\mathscr R}^{\mathbb F}$ we have $U^\psi=0$.
\end{Lemma}
\begin{proof}
We must show that the following diagram commutes:
$$
\xymatrix{
R_{\mathscr F}\ar[r]^{\Delta_R}\ar[d]_\psi&R_{\mathscr F}^{(2)}\ar[d]^{\psi_\#}\\
R_{\mathscr B}\ox{\mathbb F}\ar[r]^-\Delta&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1.
}
$$
Obviously this is equivalent to commutativity of the dual diagram
$$
\xymatrix{
({\mathscr R}_{\mathbb F}\check\ox{\mathscr R}_{\mathbb F})^1\ar[r]^-{\Delta_*}\ar[d]_{(\psi_\#)_*}&\Hom(R_{\mathscr B},{\mathbb F})\ar[d]^{\psi_*}\\
{R_{\mathscr F}^{(2)}}_*\ar[r]^{{\Delta_R}_*}&{R_{\mathscr F}}_*
}
$$
which in turn is equivalent to commutativity of
\begin{equation}\label{psicom}
\alignbox{
\xymatrix{
{\mathscr A}_*\ox{\mathscr A}_*\ar[d]_{(\psi_\#)_*^\perp}\ar[r]^{\delta_*}&{\mathscr A}_*\ar[d]^{q\psi_\chi}\\
({\mathscr R}_{\mathbb F}\check\ox{\mathscr R}_{\mathbb F})^1\ar[r]^-{\Delta_*}&\Hom(R_{\mathscr B},{\mathbb F}).
}
}
\end{equation}
On the other hand, the left hand vertical map in the latter diagram can be
included into another commutative diagram
$$
\xymatrix{
{\mathscr F}_*\!\ox\!{\mathscr A}_*\oplus{\mathscr A}_*\!\ox\!{\mathscr F}_*
\ar[d]_{1\!\ox\!q\psi_\chi\oplus q\psi_\chi\!\ox\!1}
&{\mathscr A}_*\ox{\mathscr A}_*
\ar@{ )->}[l]_-{\binom{i\ox1}{1\ox i}}\ar[d]^{(\psi_\#)_*^\perp}\\
{\mathscr F}_*\!\ox\!{\mathscr R}_{\mathbb F}^1\oplus{\mathscr R}_{\mathbb F}^1\!\ox\!{\mathscr F}_*&({\mathscr R}_{\mathbb F}\ox{\mathscr R}_{\mathbb F})^1\ar@{ )->}[l]
}
$$
It follows that on elements, commutativity of \eqref{psicom} means that the
equality
$$
q\psi_\chi(xy)=i(x)q\psi_\chi(y)+q\psi_\chi(x)i(y)
$$
holds for any $x,y\in{\mathscr A}_*$. By linearity, it is clearly enough to prove
this when $x$ and $y$ are monomials in Milnor generators.
For this observe that for any $x\in{\mathscr A}_*=\Hom({\mathscr A},{\mathbb F})$, the element
$q\psi_\chi(x)\in\Hom(R_{\mathscr B},{\mathbb F})$ is the unique ${\mathbb F}$-linear map making the
diagram
$$
\xymatrix{
R_{\mathscr B}\ar@{ >->}[r]\ar@{-->}[d]_{q\psi_\chi(x)}
&{\mathscr B}_0\ar[d]_{\psi_\chi(x)}\ar@{->>}[r]
&{\mathscr A}\ar[d]^x\\
{\mathbb F}\ar@{ >->}[r]
&{\mathbb G}\ar@{->>}[r]
&{\mathbb F}
}
$$
commute. This uniqueness implies the equality we need in view of the
following commutative diagram with exact columns:
$$
\xymatrix@!C=5em{
R_{\mathscr B}\ar[r]\ar@{ >->}[d]
&R_{\mathscr B}^{(2)}\ar@{ >->}[d]\ar@{-->}[r]
&{\mathbb F}\ox{\mathbb F}\ar[r]^\cong\ar@{ >->}[d]
&{\mathbb F}\ar@{ >->}[d]\\
{\mathscr B}_0\ar[r]^-\Delta\ar@{->>}[d]
&{\mathscr B}_0\ox{\mathscr B}_0\ar@{->>}[d]\ar[r]^{\psi_\chi(x)\ox\psi_\chi(y)}
&{\mathbb G}\ox{\mathbb G}\ar@{->>}[d]\ar[r]^\cong
&{\mathbb G}\ar@{->>}[d]\\
{\mathscr A}\ar[r]^-\delta\ar@/_3ex/[rrr]_{xy}
&{\mathscr A}\ox{\mathscr A}\ar[r]^{x\ox y}
&{\mathbb F}\ox{\mathbb F}\ar[r]^\cong
&{\mathbb F},
}
$$
since when $x$ and $y$ are monomials in Milnor generators, one has
$\psi_\chi(xy)=\psi_\chi(x)\psi_\chi(y)$.
\end{proof}
Therefore we call $\psi$ the comultiplicative splitting of ${\mathscr R}^{\mathbb F}$. We now
want to compute the left and right multiplication maps $a^\psi$ and
$b^\psi$ defined in \eqref{aeq}. The dual maps $a_\psi=(a^\psi)_*$ and
$b_\psi=(b^\psi)_*$ can be described by the diagrams
\begin{equation}\label{ml}
\alignbox{
\xymatrix{
&(R_{\mathscr B})_*\ar@{->>}[r]\ar[d]_{m_*^\l}&{\mathscr A}_*\ar@/_1em/@{-->}[l]_{q\psi_\chi}\ar[d]^{m_*}\\
{\mathscr F}_*\ox(R_{\mathscr F})_*\ar@{
(->}[r]&{\mathscr F}_*\ox(R_{\mathscr B})_*\ar@{->>}[r]
&{\mathscr A}_*\ox{\mathscr A}_*\ar@/^1em/@{-->}[l]^{i\ox q\psi_\chi}
}
}
\end{equation}
and
\begin{equation}\label{mr}
\alignbox{
\xymatrix{
&(R_{\mathscr B})_*\ar@{->>}[r]\ar[d]_{m_*^\r}&{\mathscr A}_*\ar@/_1em/@{-->}[l]_{q\psi_\chi}\ar[d]^{m_*}\\
(R_{\mathscr F})_*\ox{\mathscr F}_*\ar@{
(->}[r]&(R_{\mathscr B})_*\ox{\mathscr F}_*\ar@{->>}[r]
&{\mathscr A}_*\ox{\mathscr A}_*.\ar@/^1em/@{-->}[l]^{q\psi_\chi\ox i}
}
}
\end{equation}
Here $m_*$ is dual to the multiplication in ${\mathscr A}$ and $m_*^\l$ and $m_*^\r$
are induced by the ${\mathscr F}_0$-${\mathscr F}_0$-bimodule structure of $R_{\mathscr B}\ox{\mathbb F}$. One
readily checks
\begin{align*}
a_\psi&=m_*^\l q\psi_\chi-(i\ox q\psi_\chi)m_*\\
b_\psi&=m_*^\r q\psi_\chi-(q\psi_\chi\ox i)m_*.
\end{align*}
We now consider the diagram
$$
\xymatrix{
&{\mathscr B}_\#\ar[d]_{m_*^{\mathbb G}}&{\mathscr A}_*\ar[l]_{\psi_\chi}\ar[d]^{m_*}\\
{\mathscr F}_*\ox{\mathscr F}_*\ar@{ (->}[r]&{\mathscr B}_\#\ox{\mathscr B}_\#&{\mathscr A}_*\ox{\mathscr A}_*\ar[l]_-{\psi_\chi^\ox}
}
$$
Here $\psi_\chi^\ox$ is defined similarly to $\psi_\chi$ in \eqref{psichi}
by the formula
$$
\psi_\chi^\ox\left(\sum_{\alpha,\beta}n_{\alpha\beta}\alpha\ox\beta\right)=
\sum_{\alpha,\beta}\chi(n_{\alpha\beta})\alpha\ox\beta
$$
where $\alpha$, $\beta$ run through the monomials in Milnor generators.
Moreover $m_*^{\mathbb G}$ is the dual of the multiplication map $m^{\mathbb G}$ of
${\mathscr B}_0=T_{\mathbb G}(E_{\mathscr A})$.
\begin{Lemma}\label{mulstar}
The difference $m^{\mathbb G}_*\psi_\chi-\psi_\chi^\ox m_*$ lifts to an
${\mathbb F}$-linear map $\nabla_\chi:{\mathscr A}_*\to{\mathscr F}_*\ox{\mathscr F}_*$ such that one has
\begin{align*}
a_\psi&=(1\ox\pi)\nabla_\chi\\
b_\psi&=(\pi\ox1)\nabla_\chi.
\end{align*}
Here $\pi:{\mathscr F}_*\onto{R_{\mathscr F}}_*$ is induced by the inclusion $R_{\mathscr F}\subset{\mathscr F}_0$.
\end{Lemma}
\begin{proof}
We will only prove the first equality; the proof for the second one is
completely similar.
The following diagram
$$
\xymatrix@!C=3em{
&{R_{\mathscr B}}_*\ar@{ >->}[rrd]_{j_R}\ar[ddddd]_{m^\l_*}&&&&&&{\mathscr A}_*\ar[llllll]_{q\psi_\chi}\ar@{=}[ddl]\ar[ddddd]^{m_*}\\
&&&{R_{\mathscr B}}_\#\ar[ddd]_{m^\l_\#}\\
&&&&&{\mathscr B}_\#\ar[d]_{m_*^{\mathbb G}}\ar[ull]^\pi&{\mathscr A}_*\ar[l]_{\psi_\chi}\ar[d]^{m_*}\\
&&&&&{\mathscr B}_\#\ox{\mathscr B}_\#\ar[dll]_{1\ox\pi}&{\mathscr A}_*\ox{\mathscr A}_*\ar[l]_-{\psi_\chi^\ox}\\
&&&{\mathscr B}_\#\ox{R_{\mathscr B}}_\#&{\mathscr F}_*\ox{\mathscr F}_*\ar[lllldd]|\hole^>(.75){1\ox\pi}\ar@{ (->}[ur]\\
&{\mathscr F}_*\ox{R_{\mathscr B}}_*\ar@{ >->}[urr]^{j\ox j_R}&&&&&&{\mathscr A}_*\ox{\mathscr A}_*\ar[llllll]^{i\ox q\psi_\chi}\ar@{=}[uul]\\
{\mathscr F}_*\ox{R_{\mathscr F}}_*\ar@{ (->}[ur]
}
$$
commutes except for the innermost square, whose deviation from
commutativity is $\nabla_\chi$ and lies in the image of
${\mathscr F}_*\ox{\mathscr F}_*\incl{\mathscr B}_\#\ox{\mathscr B}_\#$, and the outermost square, whose deviation
from commutativity is $a_\psi$ and lies in the image of
${\mathscr F}_*\ox{R_{\mathscr F}}_*\incl{\mathscr F}_*\ox{R_{\mathscr B}}_*$. It follows that
$(1\ox\pi)\nabla_\chi$ and $a_\psi$ have the same image under $j\ox
j_R$, and since the latter map is injective we are done.
\end{proof}
Let us describe the map $\nabla_\chi$ more explicitly.
\begin{Lemma}
The map $\nabla_\chi$ factors as follows
$$
{\mathscr A}_*\xto{\bar\nabla_\chi}{\mathscr F}_*\ox{\mathscr A}_*\xto{1\ox i}{\mathscr F}_*\ox{\mathscr F}_*.
$$
\end{Lemma}
\begin{proof}
Let ${\mathscr A}_\#\subset{\mathscr B}_\#$ be the subring generated by the elements $M_1$,
$M_{21}$, $M_{421}$, $M_{8421}$, .... It is then clear that the image of
$\psi_\chi$ lies in ${\mathscr A}_\#$ and the reduction ${\mathscr B}_\#\onto{\mathscr F}_*$ carries
${\mathscr A}_\#$ to ${\mathscr A}_*$. Moreover the image of $\psi_\chi^\ox m_*$ obviously lies in
${\mathscr A}_\#$, hence it only remains to show the inclusion
$$
m_*^{\mathbb G}({\mathscr A}_\#)\subset{\mathscr B}_\#\ox{\mathscr A}_\#.
$$
Since $m_*^{\mathbb G}$ is a ring homomorphism, it suffices to check this on the
generators $M_1$, $M_{21}$, $M_{421}$, $M_{8421}$, .... But this is clear from
\eqref{dizeta}.
\end{proof}
\begin{Corollary}\label{calcab}
For the comultiplicative splitting $\psi$ one has
$$
a_\psi=0.
$$
Moreover the map $b_\psi$ factors as follows
$$
{\mathscr A}_*\xto{\bar b_\psi}{R_{\mathscr F}}_*\ox{\mathscr A}_*\xto{1\ox i}{R_{\mathscr F}}_*\ox{\mathscr F}_*.
$$
\end{Corollary}
\begin{proof}
The first statement follows as by definition $\pi({\mathscr A}_*)=0$; the second is
obvious.
\end{proof}
Using the splitting $\psi$ we get the following analogue of \bref{detrel}.
\begin{Proposition}\label{psicomon}
The comonoid ${\mathscr R}^{\mathbb F}$ in the category ${\mathbf{Alg}}^{\mathit{pair}}_{\mathbbm1}$ described in
\bref{relcom} is completely determined by the multiplication map
$$
\bar b^\psi: R_{\mathscr F}\ox{\mathscr A}\to{\mathscr A}
$$
dual to the map $\bar b_\psi$ from \ref{calcab}. In fact, the identification
$$
{\mathscr R}^{\mathbb F}_1={\mathscr A}\oplus R_{\mathscr F}
$$
induced by the splitting $s=\psi$ identifies the diagonal of ${\mathscr R}^{\mathbb F}$ with
$\Delta_{\mathscr A}\oplus\Delta_R$ (see \eqref{ueq}, \eqref{udia}), and the bimodule
structure of ${\mathscr R}_1^{\mathbb F}$ with
\begin{align*}
f(\alpha,r)&=(f\alpha,fr)\\
(\alpha,r)f&=(\alpha\qf f-\bar b^\psi(r,\qf f),rf)
\end{align*}
for $f\in{\mathscr F}_0$, $r\in R_{\mathscr F}$, $\alpha\in{\mathscr A}$.
\end{Proposition}
\section{Computation of the Hopf pair algebra ${\mathscr B}^{\mathbb F}$}\label{bcomp}
The Hopf pair algebra ${\mathscr V}={\mathscr B}^{\mathbb F}$ in \bref{unique}, given by the algebra of
secondary cohomology operations, satisfies the following crucial condition
which we deduce from \cite{Baues}*{16.1.5}.
\begin{Theorem}\label{splitr}
There exists a right ${\mathscr F}_0$-equivariant splitting
$$
u:{\mathscr R}^{\mathbb F}_1=R_{\mathscr B}\ox{\mathbb F}\to{\mathscr B}_1\ox{\mathbb F}={\mathscr B}^{\mathbb F}_1
$$
of the projection ${\mathscr B}^{\mathbb F}_1\to{\mathscr R}^{\mathbb F}_1$, see \eqref{hpad}, such that the
following holds. The diagram
$$
\xymatrix{
{\mathscr A}\oplus_\k\Sigma{\mathscr A}\ar@/_/@{-->}[d]_q\ar@{ >->}[r]
&{\mathscr B}^{\mathbb F}_1\ar@/_/@{-->}[d]_q\ar[r]
&{\mathscr B}^{\mathbb F}_0\ar@{=}[d]\ar@{->>}[r]
&{\mathscr A}\ar@{=}[d]\\
{\mathscr A}\ar[u]_{\bar u}\ar@{ >->}[r]
&{\mathscr R}^{\mathbb F}_1\ar[u]_u\ar[r]
&{\mathscr R}^{\mathbb F}_0\ar@{->>}[r]
&{\mathscr A}
}
$$
commutes, where $\bar u$ is the inclusion. Moreover in the diagram of
diagonals, see \eqref{diacomp},
$$
\xymatrix{
{\mathscr B}^{\mathbb F}_1\ar[r]^-{\Delta_{\mathscr B}}
&({\mathscr B}^{\mathbb F}\hat\ox{\mathscr B}^{\mathbb F})_1
&\Sigma{\mathscr A}\ox{\mathscr A}\ar@{ )->}[l]\\
{\mathscr R}^{\mathbb F}_1\ar[r]^-{\Delta_R}\ar[u]^u
&({\mathscr R}^{\mathbb F}\hat\ox{\mathscr R}^{\mathbb F})_1\ar[u]^{u\hat\ox u}
}
$$
the difference $\Delta_{\mathscr B} u-(u\hat\ox u)\Delta_R$ lifts to $\Sigma{\mathscr A}\ox{\mathscr A}$
and satisfies
$$
\xi\bar\pi=\Delta_{\mathscr B} u-(u\hat\ox
u)\Delta_R:\xymatrix@1{{\mathscr R}_{\mathbb F}^1\ar@{->>}[r]^{\bar\pi}&\bar
R\ar[r]^-\xi&\Sigma{\mathscr A}\ox{\mathscr A}}
$$
where $\xi$ is dual to $\xi_*$ in \bref{cosxi}. Here $\bar\pi$ is the
projection ${\mathscr R}_{\mathbb F}\onto R_{\mathscr F}\onto\bar R$. The cocycle $\xi$ is trivial if
$p$ is odd.
\end{Theorem}
\begin{Definition}\label{multop}
Using a splitting $u$ of ${\mathscr B}^{\mathbb F}$ as in \bref{splitr} we define a
\emph{multiplication operator}
$$
A:{\mathscr A}\ox R_{\mathscr B}\to\Sigma{\mathscr A}
$$
by the equation
$$
A(\bar\alpha\ox x)=u(\alpha x)-\alpha u(x)
$$
for $\alpha\in{\mathscr F}_0$, $x\in R_{\mathscr B}$. Thus $-A$ is a multiplication map as
studied in \cite{Baues}*{16.3.1}. Fixing a splitting $s$ of ${\mathscr R}^{\mathbb F}$ as in
\eqref{s} we define an \emph{$s$-multiplication operator} $A^s$ to be the
composite
$$
A^s:\xymatrix@1{{\mathscr A}\ox R_{\mathscr F}\ar[r]^-{1\ox s}&{\mathscr A}\ox R_{\mathscr B}\ar[r]^-A&\Sigma{\mathscr A}}.
$$
Such operators have the properties of the following $s$-multiplication
maps.
\end{Definition}
\begin{Definition}\label{mulmap}
Let $s$ be a splitting of ${\mathscr R}^{\mathbb F}$ as in \eqref{s} and let $U^s$, $a^s$,
$b^s$ be defined as in section \ref{rcomp}. An \emph{$s$-multiplication
map}
$$
A^s:{\mathscr A}\ox R_{\mathscr F}\to{\mathscr A}
$$
is an ${\mathbb F}$-linear map of degree $-1$ satisfying the following conditions
with $\alpha,\alpha',\beta,\beta'\in{\mathscr F}_0$, $x,y\in R_{\mathscr F}$
\begin{enumerate}
\item $A^s(\alpha,x\beta)=A^s(\alpha,x)\beta+\k(\alpha)b^s(x,\beta)$
\item
$A^s(\alpha\alpha',x)=A^s(\alpha,\alpha'x)+\k(\alpha)a^s(\alpha',x)+(-1)^{\deg(\alpha)}\alpha
A^s(\alpha',x)$
\item $\delta A^s(\alpha,x)=A^s_\ox(\alpha\ox\Delta
x)+L(\alpha,x)+\nabla_\xi(\alpha,x)+\delta\k(\alpha)U^s(x)$.
\end{enumerate}
Here $A^s_\ox:{\mathscr A}\ox R_{\mathscr F}^{(2)}\to{\mathscr A}\ox{\mathscr A}$ is defined by the equalities
\begin{align*}
A^s_\ox(\alpha\ox x\ox\beta')
&=\sum(-1)^{\deg(\alpha_\r)\deg(x)}A^s(\alpha_\l,x)\ox\alpha_\r\beta',\\
A^s_\ox(\alpha\ox\beta\ox y)
&=\sum(-1)^{\deg(\alpha_\r)\deg(\beta)+\deg(\alpha_\l)+\deg(\beta)}\alpha_\l\beta\ox
A^s(\alpha_\r,y),
\end{align*}
where as always
$$
\delta(\alpha)=\sum\alpha_\l\ox\alpha_\r\in{\mathscr A}\ox{\mathscr A}.
$$
Two $s$-multiplication maps $A^s$ and ${A^s}'$ are \emph{equivalent} if there exists an
${\mathbb F}$-linear map
$$
\gamma:R_{\mathscr F}\to{\mathscr A}
$$
of degree $-1$ such that the equality
$$
A^s(\alpha,x)-{A^s}'(\alpha,x)
=\gamma(\alpha x)-(-1)^{\deg(\alpha)}\alpha\gamma(x)
$$
holds for any $\alpha\in{\mathscr A}$, $x\in R_{\mathscr F}$ and moreover $\gamma$ is right
${\mathscr F}_0$-equivariant and the diagram
$$
\xymatrix{
{\mathscr A}\ar[r]^-\delta&{\mathscr A}\ox{\mathscr A}\\
R_{\mathscr F}\ar[u]_\gamma\ar[r]_\Delta&R_{\mathscr F}^{(2)}\ar[u]_{\gamma_\ox}
}
$$
commutes, with $\gamma_\ox$ given by
\begin{align*}
\gamma_\ox(x\ox\beta)&=\gamma(x)\ox\beta,\\
\gamma_\ox(\alpha\ox y)&=(-1)^{\deg(\alpha)}\alpha\ox\gamma(y)
\end{align*}
for $\alpha,\beta\in{\mathscr F}_0$, $x,y\in R_{\mathscr F}$.
\end{Definition}
\begin{Theorem}\label{exmul}
There exists an $s$-multiplication map $A^s$ and any two such
$s$-multiplication maps are equivalent. Moreover each $s$-multiplication
map is an $s$-multiplication operator as in \bref{multop} and vice versa.
\end{Theorem}
\begin{proof}
We apply \cite{Baues}*{16.3.3}. In fact, from $A^s$ we obtain the
multiplication operator
$$
A:{\mathscr A}\ox R_{\mathscr B}={\mathscr A}\!\ox\!{\mathscr A}\oplus{\mathscr A}\!\ox\!R_{\mathscr F}\to\Sigma{\mathscr A}
$$
with $A(\alpha\ox x)=A^s(\alpha\ox\bar x)+\k(\alpha)\xi$ where $(\bar
x,\xi)\in R_{\mathscr F}\oplus{\mathscr A}=R_{\mathscr B}\ox{\mathbb F}$ corresponds to $x$, that is $s(\bar
x)+\iota(\xi)=x$ for $\iota:{\mathscr A}\subset R_{\mathscr B}\ox{\mathbb F}$.
\end{proof}
\begin{Remark}
For the splitting $s=\phi$ of ${\mathscr R}^{\mathbb F}$ in \bref{chi} the maps
$$
A_{n,m}:{\mathscr A}\to{\mathscr A}
$$
are defined by $A_{n,m}(\alpha)=A^\phi(\alpha\ox[n,m])$, with $[n,m]$
the Adem relations in $R_{\mathscr F}$. Using formul\ae\ in \bref{mulmap} the maps
$A_{n,m}$ determine the $\phi$-multiplication map $A^\phi$ completely. The
maps $A_{n,m}$ coincide with the corresponding maps $A_{n,m}$ in
\cite{Baues}*{16.4.4}. In \cite{Baues}*{16.6} an algorithm for
determination of $A_{n,m}$ is described, leading to a list of values of
$A_{n,m}$ on the elements of the admissible basis of ${\mathscr A}$. The algorithm
for the computation of $A_{n,m}$ can be deduced from theorem \bref{exmul}
above.
\end{Remark}
\begin{Remark}
Triple Massey products $\brk{\alpha,\beta,\gamma}$ with
$\alpha,\beta,\gamma\in{\mathscr A}$, $\alpha\beta=0=\beta\gamma$, as in \bref{tmp}
can be computed by $A^s$ as follows. Let $\bar\beta\bar\gamma\in R_{\mathscr B}$ be
given as in \bref{tmp}. Then $\bar\beta\bar\gamma\ox1\in R_{\mathscr B}\ox{\mathbb F}$
satisfies
$$
\bar\beta\bar\gamma\ox1=s(\bar x)+\iota(\xi)
$$
with $\bar x\in R_{\mathscr F}$, $\xi\in{\mathscr A}$ and $\brk{\alpha,\beta,\gamma}$ satisfies
$$
A^s(\alpha\ox\bar x)+\k(\alpha)\xi\in\brk{\alpha,\beta,\gamma}.
$$
Compare \cite{Baues}*{16.3.4}.
\end{Remark}
Now it is clear how to introduce via $a^s$, $b^s$, $U^s$, $\xi$, $\k$, and $A^s$ a
Hopf pair algebra structure on
\begin{equation}\label{hopfin}
\alignbox{
\xymatrix{
{\mathscr A}\oplus\Sigma{\mathscr A}\oplus R_{\mathscr F}\ar[r]^-q\ar@{=}[d]&{\mathscr A}\oplus R_{\mathscr F}\ar@{=}[d]\\
{\mathscr B}^{\mathbb F}_1&{\mathscr R}^{\mathbb F}_1
}
}
\end{equation}
which is isomorphic to ${\mathscr B}^{\mathbb F}$, compare \bref{detrel}.
In the next section we describe an algorithm for the computation of a
$\psi$-multiplication map, where $\psi$ is the comultiplicative splitting
of ${\mathscr R}^{\mathbb F}$ in \bref{psi}. For this we compute the dual map $A_\psi$ of
$A^\psi$.
\section{Computation of the Hopf pair coalgebra ${\mathscr B}_{\mathbb F}$}\label{cobcomp}
For the comultiplicative splitting $s=\psi$ of ${\mathscr R}^{\mathbb F}$ in \bref{psi} we
introduce the following $\psi$-comultiplication maps which are dual to the
$\psi$-multiplication maps in \bref{mulmap}.
\begin{Definition}\label{apsi}
Let $\bar b_\psi$ be given as in \ref{calcab}. A
\emph{$\psi$-comultiplication map}
$$
A_\psi:{\mathscr A}_*\to{\mathscr A}_*\ox{R_{\mathscr F}}_*
$$
is an ${\mathbb F}$-linear map of degree $+1$ satisfying the following conditions.
\begin{enumerate}
\item\label{mreqs}
The maps in the diagram
$$
\xymatrix{
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[d]_{1\ox m^\r_*}&{\mathscr A}_*\ar[d]^{m_*}\ar[l]_-{A_\psi}\\
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr F}_*&{\mathscr A}_*\ox{\mathscr A}_*\ar[l]^-{A_\psi\ox i}
}
$$
satisfy
$$
(1\ox m^\r_*)A_\psi=(A_\psi\ox i)m_*+(\k_*\ox\bar b_\psi)m_*.
$$
Here $\k_*$ is computed in \bref{dkappa} and $m^\r_*$ is defined in
\eqref{mr}.
\item\label{mleqs}
The maps in the diagram
$$
\xymatrix{
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[d]^{1\ox m^\l_*}&&{\mathscr A}_*\ar[d]^{A_\psi}\ar[ll]_{A_\psi}\\
{\mathscr A}_*\ox{\mathscr F}_*\ox{R_{\mathscr F}}_*&{\mathscr A}_*\ox{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[l]^{1\ox i\ox1}
&{\mathscr A}_*\ox{R_{\mathscr F}}_*\ar[l]^-{m_*\ox1}
}
$$
satisfy
$$
(1\ox m^\l_*)A_\psi=(1\ox i\ox1)(m_*\ox1)A_\psi-(\tau\ox i\ox1)(1\ox A_\psi)m_*.
$$
Here $m^\l_*$ is as in \eqref{ml}, and $\tau:{\mathscr A}_*\to{\mathscr A}_*$ is given
by $\tau(\alpha)=(-1)^{\deg(\alpha)}\alpha$.
\item\label{mult}
For $x,y\in{\mathscr A}_*$ the product $xy$ in the algebra ${\mathscr A}_*$ satisfies the
formula
$$
A_\psi(xy)=A_\psi(x)m_*(y)+(-1)^{\deg(x)}m_*(x)A_\psi(y)+L_*(x,y)+{\nabla_\xi}_*(x,y).
$$
Here $L_*$ and ${\nabla_\xi}_*$ are given in \ref{L} and
\ref{nablaelts} respectively, with $L_*={\nabla_\xi}_*=0$ for $p$ odd.
\end{enumerate}
Two $\psi$-comultiplication maps $A_\psi$, $A_\psi'$ are
\emph{equivalent} if there is a derivation
$$
\gamma_*:{\mathscr A}_*\to{R_{\mathscr F}}_*
$$
of degree $+1$ satisfying the equality
$$
A_\psi-A_\psi'=m^\l_*\gamma_*-(\tau\ox\gamma_*)m_*.
$$
\end{Definition}
As a dual statement to \bref{exmul} we get
\begin{Theorem}
There exists a $\psi$-comultiplication map $A_\psi$ and any two such
$\psi$-comultiplication maps are equivalent. Moreover each
$\psi$-comultiplication map $A_\psi$ is the dual of a $\psi$-multiplication
map $A^\psi$ in \bref{exmul} with $A_\psi={A^\psi}_*$.
\end{Theorem}\qed
Now dually to \eqref{hopfin}, it is clear how to introduce via
$a_\psi$, $b_\psi$, $\xi_*$, $\k_*$, and $A_\psi$ a Hopf pair
coalgebra structure on
$$
\xymatrix{
{\mathscr A}_*\oplus\Sigma{\mathscr A}_*\oplus{R_{\mathscr F}}_*\ar@{=}[d]&{\mathscr A}_*\oplus{R_{\mathscr F}}_*\ar[l]_-i\ar@{=}[d]\\
{\mathscr B}_{\mathbb F}^1&{\mathscr R}_{\mathbb F}^1
}
$$
which is isomorphic to ${\mathscr B}_{\mathbb F}$, compare \bref{psicomon}.
We now embark on the simplification and solution of the equations
\ref{apsi}\eqref{mreqs} and \ref{apsi}\eqref{mleqs}. To begin with, note that
the equations \ref{apsi}\eqref{mreqs} imply that the image of the composite
map
$$
{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox{R_{\mathscr F}}_*\xto{1\ox m^\r_*}{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr F}_*
$$
actually lies in
$$
{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr A}_*\subset{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox{\mathscr F}_*;
$$
similarly \ref{apsi}\eqref{mleqs} implies that the image of
$$
{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox{R_{\mathscr F}}_*\xto{1\ox m^\l_*}{\mathscr A}_*\ox{\mathscr F}_*\ox{R_{\mathscr F}}_*
$$
lies in
$$
{\mathscr A}_*\ox{\mathscr A}_*\ox{R_{\mathscr F}}_*\subset{\mathscr A}_*\ox{\mathscr F}_*\ox{R_{\mathscr F}}_*.
$$
Now one obviously has
\begin{Lemma}
The following conditions on an element $x\in{R_{\mathscr F}}_*=\Hom(R_{\mathscr F},{\mathbb F})$ are
equivalent:
\begin{itemize}
\item $m^\l_*(x)\in{\mathscr A}_*\ox{R_{\mathscr F}}_*\subset{\mathscr F}_*\ox{R_{\mathscr F}}_*$;
\item $m^\r_*(x)\in{R_{\mathscr F}}_*\ox{\mathscr A}_*\subset{R_{\mathscr F}}_*\ox{\mathscr F}_*$;
\item $x\in\bar R_*\subset{R_{\mathscr F}}_*$.
\end{itemize}
\end{Lemma}
\begin{proof}
Recall that $\bar R=R_{\mathscr F}/{R_{\mathscr F}}^2$, i.~e. $\bar R_*$ is the space of linear forms
on $R_{\mathscr F}$ which vanish on ${R_{\mathscr F}}^2$. Then the first condition means that
$x:R_{\mathscr F}\to{\mathbb F}$ has the property that the composite
$$
{\mathscr F}_0\ox R_{\mathscr F}\xto{m^\l} R_{\mathscr F}\xto x{\mathbb F}
$$
vanishes on $R_{\mathscr F}\ox R_{\mathscr F}\subset{\mathscr F}_0\ox R_{\mathscr F}$; but the image of $R_{\mathscr F}\ox R_{\mathscr F}$
under $m^\l$ is precisely ${R_{\mathscr F}}^2$. Similarly for the second condition.
\end{proof}
We thus conclude that the image of $A_\psi$ lies in ${\mathscr A}_*\ox\bar R_*$.
Next note that the condition \ref{apsi}\eqref{mult} implies
\begin{equation}\label{apsisq}
A_\psi(x^2)=L_*(x,x)+\nabla_{\xi_*}(x,x)
\end{equation}
for any $x\in{\mathscr A}_*$. Moreover the latter formula also implies
\begin{Proposition}\label{4=0}
For any $x\in{\mathscr A}_*$ one has
$$
A_\psi(x^4)=0.
$$
\end{Proposition}
\begin{proof}
Since the squaring map is an algebra endomorphism, by \ref{bider} one has
$$
L_*(x,y^2)=\sum\zeta_1x_\l y_{\l'}^2\ox\tilde L_*(x_r,y_{r'}^2),
$$
with
$$
m_*(x)=\sum x_\l\ox x_r,\ \ m_*(y)=\sum y_{\l'}\ox y_{r'}.
$$
But $\tilde L_*$ vanishes on squares since it is a biderivation, so $L_*$ also
vanishes on squares. Moreover by \eqref{nablaelts}
$$
\nabla_{\xi_*}(x^2,y^2)=\sum\xi_*(x^2,y^2)_{\mathscr A}\ox\xi_*(x^2,y^2)_R-\sum
x_\l^2y_{\l'}^2\ox\xi_*(x_r^2,y_{r'}^2);
$$
this is zero since $\xi_*(x^2,y^2)=0$ for any $x$ and $y$ by \eqref{xifromS}.
\end{proof}
Taking the above into account, and identifying the image of $i:{\mathscr A}_*\into{\mathscr F}_*$
with ${\mathscr A}_*$, \ref{apsi}\eqref{mreqs} can be rewritten as follows:
$$
(1\ox m_*^r)A_\psi(\zeta_n)
=A_\psi(\zeta_n)\ox1+\left(L_*(\zeta_{n-1},\zeta_{n-1})+\nabla_{\xi_*}(\zeta_{n-1},\zeta_{n-1})\right)\ox\zeta_1
+\sum_{i=0}^n\zeta_1\zeta_{n-i}^{2^i}\ox\bar b_\psi(\zeta_i),
$$
or
$$
(1\ox\tilde m_*^r)A_\psi(\zeta_n)
=\left(L_*(\zeta_{n-1},\zeta_{n-1})+\nabla_{\xi_*}(\zeta_{n-1},\zeta_{n-1})\right)\ox\zeta_1
+\sum_{i=0}^n\zeta_1\zeta_{n-i}^{2^i}\ox\bar b_\psi(\zeta_i).
$$
Still more explicitly one has
$$
L_*(\zeta_k,\zeta_k)=\sum_{0\le i,j\le k}\zeta_1\zeta_{k-i}^{2^i}\zeta_{k-j}^{2^j}\ox\tilde
L_*(\zeta_i,\zeta_j)=\sum_{0\le i\le k}\zeta_1\zeta_{k-i}^{2^{i+1}}\ox\tilde
L_*(\zeta_i,\zeta_i)
+\sum_{0\le i<j\le k}\zeta_1\zeta_{k-i}^{2^i}\zeta_{k-j}^{2^j}\ox\tilde
L^S_*(\zeta_i,\zeta_j),
$$
where we have denoted
$$
\tilde L^S_*(\zeta_i,\zeta_j):=\tilde
L_*(\zeta_i,\zeta_j)+\tilde L_*(\zeta_j,\zeta_i);
$$
similarly
$$
\nabla_{\xi_*}(\zeta_k,\zeta_k)=
\sum_{0\le i<j\le k}\zeta_{k-i}^{2^i}\zeta_{k-j}^{2^j}\ox
S_*(\zeta_i,\zeta_j).
$$
As for $\bar b_\psi(\zeta_i)$, by \ref{mulstar} it can be calculated by the formula
\begin{equation}\label{bpsibar}
\bar b_\psi(\zeta_i)=\sum_{0<j<i}v_{i-j}^{2^{j-1}}\ox\zeta_j,
\end{equation}
where $v_k$ are determined by the equalities
$$
M_{2^k,2^{k-1},...,2}-M_{2^{k-1},2^{k-2},...,1}^2\equiv2v_k\mod4
$$
in ${\mathscr B}_\#$. For example,
\begin{align*}
v_1&=M_{11},\\
v_2&=M_{411}+M_{231}+M_{222}+M_{2121},\\
v_3&
=M_{8411}
+M_{8231}
+M_{8222}
+M_{82121}
+M_{4631}
+M_{4622}
+M_{46121}
+M_{4442}
+M_{42521}
+M_{42431}
+M_{42422}\\
&+M_{424121}
+M_{421421},
\end{align*}
etc.
Thus putting everything together we see
\begin{Lemma}\label{mrC}
The equation \ref{apsi}\eqref{mreqs} for the
value on $\zeta_n$ is equivalent to
$$
(1\ox\tilde m^r_*)A_\psi(\zeta_n)=\sum_{0<k<n}C^{(n)}_{2^n-2^k+1}\ox\zeta_k
$$
where
\begin{multline*}
C^{(n)}_{2^n-1}=
\sum_{0<i<n}\zeta_1\zeta_{n-1-i}^{2^{i+1}}\ox\left(\tilde
L_*(\zeta_i,\zeta_i)+v_i\right)
+\sum_{0<i<j<n}\zeta_1\zeta_{n-1-i}^{2^i}\zeta_{n-1-j}^{2^j}\ox\tilde
L^S_*(\zeta_i,\zeta_j)
+\sum_{0<i<j<n}\zeta_{n-1-i}^{2^i}\zeta_{n-1-j}^{2^j}\ox
S_*(\zeta_i,\zeta_j)
\end{multline*}
and, for $1<k<n$,
$$
C^{(n)}_{2^n-2^k+1}=\sum_{0<i\le n-k}\zeta_1\zeta_{n-k-i}^{2^{k+i}}\ox
v_i^{2^{k-1}}.
$$
\end{Lemma}\qed
\begin{comment}
For low values of $n$ these equations look like
\begin{align*}
(1\ox\tilde m^r_*)A_\psi(\zeta_2)&=0,\\
(1\ox\tilde m^r_*)A_\psi(\zeta_3)&
=\zeta_1\ox(\pi(M_{222})\ox\zeta_1+\pi(M_{22})\ox\zeta_2)
+\zeta_1^2\ox\pi(M_{32}+M_{23}+M_{212}+M_{122})\ox\zeta_1\\
&+\zeta_1^3\ox\pi M_{22}\ox\zeta_1,\\
(1\ox\tilde m^r_*)A_\psi(\zeta_4)&
=\zeta_1\ox\left(\pi(M_{8222}+M_{722}+M_{4622}+M_{4442}+M_{42422})\ox\zeta_1\right.\\
&\ \ \ \ \ \ \ \left.+\pi(M_{822}+M_{462}+M_{444}+M_{4242})\ox\zeta_2+\pi(M_{44})\ox\zeta_3\right)\\
&+\zeta_1^4\ox\pi(M_{632}+M_{623}+M_{6212}+M_{6122}+M_{542}+M_{452}+M_{443}+M_{4412}+M_{4142}+M_{3422}\\
&\ \ \ \ \ \ \ +M_{2522}+M_{2432}+M_{2423}+M_{24212}+M_{24122}+M_{21422}+M_{1622}+M_{1442}+M_{12422})\ox\zeta_1\\
&+\zeta_1^5\ox\pi(M_{622}+M_{442}+M_{2422})\ox\zeta_1\\
&+\zeta_2^2\ox\pi(M_{522}+M_{432}+M_{423}+M_{4212}+M_{4122}+M_{1422})\ox\zeta_1
+\zeta_1\zeta_2^2\ox\pi(M_{422})\ox\zeta_1\\
&+\zeta_1^9\ox\left(\pi(M_{222})\ox\zeta_1+\pi(M_{22})\ox\zeta_2\right)
+\zeta_1^4\zeta_2^2\ox\pi(M_{32}+M_{23}+M_{212}+M_{122})\ox\zeta_1\\
&+\zeta_1^5\zeta_2^2\ox\pi(M_{22})\ox\zeta_1,
\end{align*}
etc. (Note that $A_\psi(\zeta_1)=0$ by dimension considerations.)
As for the equations \ref{apsi}\eqref{mleqs}, they have form
$$
(1\ox\tilde m^\l_*)A_\psi(\zeta_n)
=(\tilde m_*\ox1)A_\psi(\zeta_n)+\zeta_1^{2^{n-1}}\ox A_\psi(\zeta_{n-1})+\zeta_2^{2^{n-2}}\ox
A_\psi(\zeta_{n-2})+...+\zeta_{n-2}^4\ox A_\psi(\zeta_2)+\zeta_{n-1}^2\ox
A_\psi(\zeta_1).
$$
\begin{Lemma}
Suppose given a map $A_\psi$ satisfying \ref{apsi}\eqref{mult} and those
instances of \ref{apsi}\eqref{mreqs}, \ref{apsi}\eqref{mleqs} which involve
starting value of ${\mathscr A}_\psi$ on the Milnor generators $i(\zeta_1)$,
$i(\zeta_2)$, ..., where $i:{\mathscr A}_*\to{\mathscr F}_*$ is the inclusion. Then ${\mathscr A}_\psi$
satisfies these equations for all other values too.
\end{Lemma}
\begin{proof}
\end{proof}
\end{comment}
Now recall that, as already mentioned in \ref{L*}, according to
\cite{Baues}*{16.5} $\bar R$ is a free right ${\mathscr A}$-module generated by the set
${\mathrm{PAR}}\subset\bar R$ of preadmissible relations. More explicitly, the composite
$$
R^{\mathrm{pre}}\ox{\mathscr A}\xto{\textrm{inclusion}\ox1}\bar R\ox{\mathscr A}\xto{m^\r}\bar R
$$
is an isomorphism of right ${\mathscr A}$-modules, where $R^{\mathrm{pre}}$ is the ${\mathbb F}$-vector
space spanned by the set ${\mathrm{PAR}}$ of preadmissible relations.
Dually it follows that the composite
$$
\Phi^\r_*:\bar R_*\xto{m^\r_*}\bar R_*\ox{\mathscr A}_*\xto{\ro\ox1}R_{\mathrm{pre}}\ox{\mathscr A}_*
$$
is an isomorphism of right ${\mathscr A}_*$-comodules. Here $\ro:\bar R_*\onto R_{\mathrm{pre}}$ denotes
the restriction homomorphism from the space $\bar R_*$ of ${\mathbb F}$-linear forms
on $\bar R$ to the space $R_{\mathrm{pre}}$ of linear forms on its subspace
$R^{\mathrm{pre}}\subset\bar R$ spanned by ${\mathrm{PAR}}$.
It thus follows that we will obtain equations equivalent to \ref{apsi}\eqref{mreqs} if
we compose both sides of these equations with the isomorphism
$1\ox\Phi^\r_*:{\mathscr A}_*\ox\bar R_*\to{\mathscr A}_*\ox R_{\mathrm{pre}}\ox{\mathscr A}_*$. Let us then denote
$$
(1\ox\Phi^\r_*)A_\psi(\zeta_n)=\sum_\mu\rho_{2^n-|\mu|}(\mu)\ox\mu
$$
with some unknown elements $\rho_{j}(\mu)\in({\mathscr A}_*\ox R_{\mathrm{pre}})_j$, where
$\mu$ runs through some basis of ${\mathscr A}_*$.
Now the freeness of the right ${\mathscr A}_*$-comodule $\bar R_*$ on $R_{\mathrm{pre}}$ means that the
above isomorphism $\Phi^\r_*$ fits in the commutative diagram
$$
\xymatrix{
\bar R_*\ar[r]^-{\Phi^\r_*}\ar[d]^{m^\r_*}&R_{\mathrm{pre}}\ox{\mathscr A}_*\ar[d]^{1\ox m_*}\\
\bar R_*\ox{\mathscr A}_*\ar[r]^-{\Phi^\r_*\ox1}&R_{\mathrm{pre}}\ox{\mathscr A}_*\ox{\mathscr A}_*.
}
$$
It follows that we have
$$
(1\ox1\ox m_*)(1\ox\Phi^\r_*)A_\psi(\zeta_n)
=(1\ox\Phi^\r_*\ox1)(1\ox m^r_*)A_\psi(\zeta_n).
$$
Then taking into account \ref{mrC} this gives equations
$$
\sum_\mu\rho_{2^n-|\mu|}(\mu)\ox m_*(\mu)=
\sum_\mu\rho_{2^n-|\mu|}(\mu)\ox\mu\ox1+\sum_{0<k<n}(1\ox\Phi^\r_*)(C^{(n)}_{2^n-2^k+1})\ox\zeta_k,
$$
with the constants $C^{(j)}_n$ as in \ref{mrC}. This immediately determines
the elements $\rho_j(\mu)$ for $|\mu|>0$. Indeed, the above equation implies
that $(1\ox\Phi^\r_*)A_\psi(\zeta_n)$ actually lies in the subspace ${\mathscr A}_*\ox
R_{\mathrm{pre}}\ox\Pi\subset{\mathscr A}_*\ox R_{\mathrm{pre}}\ox{\mathscr A}_*$ where $\Pi\subset{\mathscr A}_*$ is the
following subspace:
$$
\Pi=\set{x\in{\mathscr A}_*\ \mid\ m_*(x)\in\bigoplus_{k\ge0}{\mathscr A}_*\ox{\mathbb F}\zeta_k}.
$$
It is easy to see that actually
$$
\Pi=\bigoplus_{k\ge0}{\mathbb F}\zeta_k,
$$
so we can write
$$
(1\ox\Phi^\r_*)A_\psi(\zeta_n)=\sum_{k\ge0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k
$$
where we necessarily have
$$
\rho_{2^n-2^k+1}(\zeta_k)\ox1+\rho_{2^n-2^{k+1}+1}(\zeta_{k+1})\ox\zeta_1^{2^k}+\rho_{2^n-2^{k+2}+1}(\zeta_{k+2})\ox\zeta_2^{2^k}+...
=(1\ox\Phi^\r_*)(C^{(n)}_{2^n-2^k+1})
$$
for all $k\ge1$. By dimension considerations, $\rho_{2^n-2^k+1}(\zeta_k)$ can only be nonzero
for $k<n$, so the number of unknowns in these equations
strictly decreases as $k$ grows. Thus moving ``backwards'' and using
successive elimination we determine all $\rho_{2^n-2^k+1}(\zeta_k)$ for $k>0$.
It is easy to compute the values of the isomorphism $1\ox\Phi^\r_*$ on all elements
involved in the constants $C^{(n)}_j$. In particular, elements of the form
$\Phi^\r_*(v_j^{2^k})$ can be given by an explicit formula. One has
$$
\Phi^\r_*(v_k)=\sum_{0\le i<k}\left(\Sq^{2^k}\Sq^{2^{k-1}}\cdots\Sq^{2^{i+2}}[2^i,2^i]\right)_*\ox\zeta_i^2
$$
and
$$
\Phi^\r_*(v_k^{2^{j-1}})
=\sum_{0\le i<k}\left(\Sq^{2^{k+j-1}}\Sq^{2^{k+j-2}}\cdots\Sq^{2^{i+j+1}}[2^{i+j-1},2^{i+j-1}]\right)_*\ox\zeta_i^{2^j},
$$
so our ``backwards'' elimination gives
\begin{align*}
\rho_{2^{n-1}+1}(\zeta_{n-1})&=\zeta_1\ox[2^{n-2},2^{n-2}]_*,\\
\rho_{2^n-2^{n-2}+1}(\zeta_{n-2})&=\zeta_1^{1+2^{n-1}}\ox[2^{n-3},2^{n-3}]_*+\zeta_1\ox\left(\Sq^{2^{n-1}}[2^{n-3},2^{n-3}]\right)_*\\
\rho_{2^n-2^{n-3}+1}(\zeta_{n-3})&=\zeta_1\zeta_2^{2^{n-2}}\ox[2^{n-4},2^{n-4}]_*+\zeta_1^{1+2^{n-1}}\ox\left(\Sq^{2^{n-2}}[2^{n-4},2^{n-4}]\right)_*+\zeta_1\ox\left(\Sq^{2^{n-1}}\Sq^{2^{n-2}}[2^{n-4},2^{n-4}]\right)_*\\
\cdots\\
\rho_{2^n-2^{n-k}+1}(\zeta_{n-k})
&=\sum_{1\le i\le k}\zeta_1\zeta_{k-i}^{2^{n-k+i}}\ox\left(\Sq^{2^{n-k+i-1}}\Sq^{2^{n-k+i-2}}\cdots\Sq^{2^{n-k+1}}[2^{n-k-1},2^{n-k-1}]\right)_*
\end{align*}
for $k<n-1$.
As for $\rho_{2^n-1}(\zeta_1)$, here we do not have a general formula, but
nevertheless it is easy to compute this value explicitly. In this way we obtain, for example,
\begin{align*}
\rho_1(\zeta_1)&=0,\\
\rho_3(\zeta_1)&=0,\\
\rho_7(\zeta_1)
&
=\zeta_1^3\ox[2,2]_*
+\zeta_1^2\ox\left([3,2]_*+[2,3]_*\right),\\
\rho_{15}(\zeta_1)
&
=\zeta_1^5\zeta_2^2\ox[2,2]_*
+\zeta_1^4\zeta_2^2\ox\left([3,2]_*+[2,3]_*\right)
+\zeta_1\zeta_2^2\ox\left(\Sq^4[2,2]\right)_*
+\zeta_2^2\ox\left((\Sq^5[2,2])_*+(\Sq^4[2,3])_*\right)\\
&
+\zeta_1^5\ox\left(\Sq^6[2,2]\right)_*
+\zeta_1^4\ox\left((\Sq^7[2,2])_*+(\Sq^6[3,2])_*+(\Sq^6[2,3])_*\right),\\
\rho_{31}(\zeta_1)
&
=\zeta_1\zeta_2^4\zeta_3^2\ox[2,2]_*
+\zeta_2^4\zeta_3^2\ox\left([3,2]_*+[2,3]_*\right)
+\zeta_1^9\zeta_3^2\ox\left(\Sq^4[2,2]\right)_*\\
&
+\zeta_1^8\zeta_3^2\ox\left((\Sq^5[2,2])_*+(\Sq^4[2,3])_*\right)
+\zeta_1^9\zeta_2^4\ox\left(\Sq^6[2,2]\right)_*\\
&
+\zeta_1^8\zeta_2^4\ox\left((\Sq^7[2,2])_*+(\Sq^6[3,2])_*+(\Sq^6[2,3])_*\right)
+\zeta_1\zeta_3^2\ox\left(\Sq^8\Sq^4[2,2]\right)_*\\
&
+\zeta_3^2\ox\left((\Sq^9\Sq^4[2,2])_*+(\Sq^8\Sq^4[2,3])_*\right)
+\zeta_1\zeta_2^4\ox\left(\Sq^{10}\Sq^4[2,2]\right)_*\\
&
+\zeta_2^4\ox\left((\Sq^{11}\Sq^4[2,2])_*+(\Sq^{10}\Sq^5[2,2])_*+(\Sq^{10}\Sq^4[2,3])_*\right)
+\zeta_1^9\ox\left(\Sq^{12}\Sq^6[2,2]\right)_*\\
&
+\zeta_1^8\ox\left((\Sq^{13}\Sq^6[2,2])_*+(\Sq^{12}\Sq^6[3,2])_*+(\Sq^{12}\Sq^6[2,3])_*\right),
\end{align*}
etc.
To summarize, let us state
\begin{Proposition}\label{mrrho}
The general solution of \ref{apsi}\eqref{mreqs} for the value on $\zeta_n$ is
given by the formula
$$
A_\psi(\zeta_n)=(1\ox\Phi^\r_*)^{-1}\sum_{k\ge0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k,
$$
where the elements $\rho_j(\zeta_k)\in({\mathscr A}_*\ox R_{\mathrm{pre}})_j$ are the ones explicitly
given above for $k>0$ while $\rho_{2^n}(1)\in({\mathscr A}_*\ox R_{\mathrm{pre}})_{2^n}$ is
arbitrary.
\end{Proposition}\qed
Let us now treat the equations \ref{apsi}\eqref{mleqs} in a similar way, using the
fact that $\bar R$ is a free \emph{left} ${\mathscr A}$-module on an explicit basis
${\mathrm{PAR}}'$ (see \ref{barr} again).
Then similarly to the above dualization it follows that the
composite
$$
\Phi^\l_*:\bar R_*\xto{m^\l_*}{\mathscr A}_*\ox\bar R_*\xto{1\ox\ro'}{\mathscr A}_*\ox R'_{\mathrm{pre}}
$$
is an isomorphism of left ${\mathscr A}_*$-comodules, where $\ro':\bar R_*\onto
R'_{\mathrm{pre}}$ denotes the restriction homomorphism from the space $\bar R_*$ of
${\mathbb F}$-linear forms on $\bar R$ to the space $R'_{\mathrm{pre}}$ of linear forms on the
subspace ${R^{\mathrm{pre}}}'$ of $\bar R$ spanned by ${\mathrm{PAR}}'$.
Thus similarly to the above the equations \ref{apsi}\eqref{mleqs} are
equivalent to ones obtained by composing them with the isomorphism
$1\ox\Phi^\l_*:{\mathscr A}_*\ox\bar R_*\to{\mathscr A}_*\ox{\mathscr A}_*\ox R'_{\mathrm{pre}}$. Let us then denote
$$
(1\ox\Phi^\l_*)A_\psi(\zeta_n)=\sum_{\pi\in{\mathrm{PAR}}'}\sigma_{2^n-|\pi|}(\pi)\ox\pi_*
$$
with some unknown elements $\sigma_j(\pi)\in({\mathscr A}_*\ox{\mathscr A}_*)_j$, where $\pi_*$
denotes the corresponding element of the dual basis, i.~e. the unique linear form on
$R'_{\mathrm{pre}}$ assigning 1 to $\pi$ and 0 to all other elements of ${\mathrm{PAR}}'$.
Now again as above, the freeness of the left ${\mathscr A}_*$-comodule $\bar R_*$ on $R'_{\mathrm{pre}}$ means that the
above isomorphism $\Phi^\l_*$ fits in the commutative diagram
$$
\xymatrix{
\bar R_*\ar[r]^-{\Phi^\l_*}\ar[d]^{m^\l_*}&{\mathscr A}_*\ox R'_{\mathrm{pre}}\ar[d]^{m_*\ox1}\\
{\mathscr A}_*\ox\bar R_*\ar[r]^-{1\ox\Phi^\l_*}&{\mathscr A}_*\ox{\mathscr A}_*\ox R'_{\mathrm{pre}}.
}
$$
In particular one has
$$
(1\ox1\ox\Phi^\l_*)(1\ox m^\l_*)A_\psi(\zeta_n)
=(1\ox m_*\ox1)(1\ox\Phi^\l_*)A_\psi(\zeta_n).
$$
Using this, we obtain that the equations \ref{apsi}\eqref{mleqs} are
equivalent to the following system of equations
$$
(1\ox m_*-m_*\ox1)(\sigma_{2^n-|\pi|}(\pi))
=1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi),
$$
where we denote
$$
\varSigma_{2^n-|\pi|}(\pi)=\zeta_1^{2^{n-1}}\ox\sigma_{2^{n-1}-|\pi|}(\pi)+\zeta_2^{2^{n-2}}\ox\sigma_{2^{n-2}-|\pi|}(\pi)+...+\zeta_{n-2}^4\ox\sigma_{4-|\pi|}(\pi)+\zeta_{n-1}^2\ox\sigma_{2-|\pi|}(\pi).
$$
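The formulas above repeatedly use the diagonal $m_*(\zeta_n)=\sum_i\zeta_{n-i}^{2^i}\ox\zeta_i$ of ${\mathscr A}_*$, extended multiplicatively. As an illustrative sanity check (a Python sketch of ours, not part of the argument; monomials in the $\zeta_k$ are represented by exponent tuples, with mod-2 coefficients handled by set parity), one can verify mechanically that this diagonal is coassociative on sample monomials:

```python
from itertools import product

N = 3  # work with the generators zeta_1, zeta_2, zeta_3

def zeta(k, e=1):
    """Exponent tuple of the monomial zeta_k^e; zeta_0 is the unit."""
    m = [0] * N
    if k > 0:
        m[k - 1] = e
    return tuple(m)

def mono_mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def tensor_mul(p, q, width):
    """Multiply two F2-linear combinations of width-fold tensor monomials."""
    out = set()
    for u, v in product(p, q):
        term = tuple(mono_mul(u[i], v[i]) for i in range(width))
        out ^= {term}          # F2 coefficients: toggle parity
    return out

def diag_gen(k):
    """m_*(zeta_k) = sum over i of zeta_{k-i}^(2^i) tensor zeta_i."""
    return {(zeta(k - i, 2 ** i), zeta(i)) for i in range(k + 1)}

def diag(mono):
    """Extend m_* multiplicatively (it is a ring map) to any monomial."""
    out = {(zeta(0), zeta(0))}
    for k, e in enumerate(mono, start=1):
        for _ in range(e):
            out = tensor_mul(out, diag_gen(k), 2)
    return out

def apply_diag(tpoly, slot):
    """Apply m_* in one factor of a 2-fold tensor polynomial."""
    out = set()
    for t in tpoly:
        for l, r in diag(t[slot]):
            out ^= {t[:slot] + (l, r) + t[slot + 1:]}
    return out

# coassociativity: (m_* tensor 1)m_* = (1 tensor m_*)m_*
for mono in [zeta(3), mono_mul(zeta(1), zeta(2)), zeta(2, 3)]:
    dm = diag(mono)
    assert apply_diag(dm, 0) == apply_diag(dm, 1)
```

Coassociativity for arbitrary monomials follows formally since both iterated diagonals are ring maps agreeing on generators; the check above merely confirms the bookkeeping.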
We next use the following standard fact:
\begin{Proposition}\label{contra}
For any coalgebra $C$ with the diagonal $m_*:C\to C\ox C$ and counit
$\eps:C\to{\mathbb F}$ there is a contractible cochain complex of the form
$$
\xymatrix{
C\ar[r]^{d_1}
&C^{\ox2}\ar@/^/[l]^{s_1}\ar[r]^{d_2}
&C^{\ox3}\ar@/^/[l]^{s_2}\ar[r]^{d_3}
&C^{\ox4}\ar@/^/[l]^{s_3}\ar[r]^{d_4}
&\cdots,\ar@/^/[l]^{s_4}
}
$$
i.~e. one has
$$
s_nd_n+d_{n-1}s_{n-1}=1_{C^{\ox n}}
$$
for all $n$. Here,
\begin{align*}
d_1&=m_*,\\
d_2&=1\ox m_*-m_*\ox1,\\
d_3&=1\ox1\ox m_*-1\ox m_*\ox1+m_*\ox1\ox1,\\
d_4&=1\ox1\ox1\ox m_*-1\ox1\ox m_*\ox1+1\ox m_*\ox1\ox1-m_*\ox1\ox1\ox1,
\end{align*}
etc., while $s_n$ can be taken to be equal to either
$$
s_n=\eps\ox 1_{C^{\ox n}}
$$
or
$$
s_n=1_{C^{\ox n}}\ox\eps.
$$
\end{Proposition}
\qed
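The contraction identity $s_nd_n+d_{n-1}s_{n-1}=1_{C^{\ox n}}$ with $s_n=\eps\ox1_{C^{\ox n}}$ can be verified mechanically mod $2$. The following Python sketch (ours, not part of the argument) does so for a two-dimensional toy coalgebra over ${\mathbb F}_2$ spanned by a group-like element $1$ and a primitive element $c$, for $n=2,3$:

```python
from itertools import product

BASIS = ('1', 'c')  # toy F2-coalgebra: Delta(1)=1x1, c primitive

def delta(b):
    return {('1', '1')} if b == '1' else {('c', '1'), ('1', 'c')}

def counit(b):
    return 1 if b == '1' else 0

def d(n, tpoly):
    """d_n: alternating sum of Delta in each slot; signs vanish mod 2."""
    out = set()
    for t in tpoly:
        for slot in range(n):
            for pair in delta(t[slot]):
                out ^= {t[:slot] + pair + t[slot + 1:]}   # F2 parity
    return out

def s(tpoly):
    """s_n = counit tensor identity, dropping the first tensor factor."""
    out = set()
    for t in tpoly:
        if counit(t[0]):
            out ^= {t[1:]}
    return out

# s_n d_n + d_{n-1} s_{n-1} = id on C^(tensor n), on all basis tensors
for n in (2, 3):
    for t in product(BASIS, repeat=n):
        x = {t}
        assert s(d(n, x)) ^ d(n - 1, s(x)) == x
```

The verification mirrors the standard proof: $s_n$ composed with the diagonal in slot $0$ is the identity by the counit axiom, while the diagonal in slot $i\ge1$ commutes past $\eps$ in slot $0$, producing exactly $d_{n-1}s_{n-1}$.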
Now suppose given the elements
$\sigma_{2^k-|\pi|}(\pi)$, $k<n$, satisfying the equations; we must then find
$\sigma_{2^n-|\pi|}(\pi)$ with
$$
d_2\sigma_{2^n-|\pi|}(\pi)=1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi),
$$
with $\varSigma_{2^n-|\pi|}(\pi)$ as above. Then since $d_3d_2=0$, it follows that
$$
d_3(1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi))=0.
$$
Then
$$
1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi)
=(s_3d_3+d_2s_2)(1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi))
=d_2s_2(1\ox\sigma_{2^n-|\pi|}(\pi)+\varSigma_{2^n-|\pi|}(\pi)).
$$
Taking here $s_n$ given by the second formula of \ref{contra}, we see that one
has
$$
1\ox\sigma_{2^n-|\pi|}(\pi)
=\varSigma_{2^n-|\pi|}(\pi)+d_2\left(1\ox(1\ox\eps)(\sigma_{2^n-|\pi|}(\pi))+(1\ox1\ox\eps)(\varSigma_{2^n-|\pi|}(\pi))\right).
$$
It follows that we can reconstruct the terms $\sigma_{2^n-|\pi|}(\pi)$ from
$(1\ox\eps)\sigma_{2^n-|\pi|}(\pi)$, i.~e. from their components that lie in
${\mathscr A}_*\ox{\mathbb F}\subset{\mathscr A}_*\ox{\mathscr A}_*$.
Then denoting
$$
\sigma_{2^n-|\pi|}(\pi)=x_{2^n-|\pi|}(\pi)\ox1+\sigma'_{2^n-|\pi|}(\pi),
$$
with
$$
\sigma'_{2^n-|\pi|}(\pi)\in{\mathscr A}_*\ox\tilde{\mathscr A}_*,
$$
the last equation gives
$$
1\ox x_{2^n-|\pi|}(\pi)\ox1+1\ox\sigma'_{2^n-|\pi|}(\pi)
=\varSigma_{2^n-|\pi|}(\pi)+(m_*\ox1+1\ox m_*)\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi).
$$
By collecting terms of the form $1\ox...$ on both sides, we conclude that any solution for $\sigma$ satisfies
$$
\sigma_{2^n-|\pi|}(\pi)
=m_*(x_{2^n-|\pi|}(\pi))+\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi).
$$
Thus the equations \ref{apsi}\eqref{mleqs} are equivalent to the system of
equations
$$
(1\ox m_*+m_*\ox1)\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox x_{2^{n-i}-|\pi|}(\pi)
=1\ox m_*(x_{2^n-|\pi|}(\pi))+\sum_{i\ge0}1\ox\zeta_i^{2^{n-i}}\ox x_{2^{n-i}-|\pi|}(\pi)
+\varSigma_{2^n-|\pi|}(\pi)
$$
on the elements $x_j(\pi)\in{\mathscr A}_j$. Substituting back the value of
$\varSigma_{2^n-|\pi|}(\pi)$ we obtain the equations
\begin{multline*}
\sum_{i\ge0}\zeta_i^{2^{n-i}}\ox m_*(x_{2^{n-i}-|\pi|}(\pi))
+\sum_{i\ge0}m_*(\zeta_i)^{2^{n-i}}\ox x_{2^{n-i}-|\pi|}(\pi)
=1\ox m_*(x_{2^n-|\pi|}(\pi))+\sum_{i\ge0}1\ox\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi)\\
+\sum_{i>0}\zeta_i^{2^{n-i}}\ox m_*(x_{2^{n-i}-|\pi|}(\pi))
+\sum_{i'>0,j\ge0}\zeta_{i'}^{2^{n-i'}}\ox\zeta_j^{2^{n-i'-j}}\ox x_{2^{n-i'-j}-|\pi|}(\pi).
\end{multline*}
These equations easily reduce to
$$
m_*(\zeta_i)^{2^{n-i}}=1\ox\zeta_i^{2^{n-i}}+\sum_{0\le
j<i}\zeta_{i-j}^{2^{n-(i-j)}}\ox\zeta_j^{2^{n-i}},
$$
which is identically true. We thus conclude
\begin{Proposition}\label{mlx}
The general solution $A_\psi(\zeta_n)$ of \ref{apsi}\eqref{mleqs} is determined by
$$
A_\psi(\zeta_n)=(1\ox\Phi^\l_*)^{-1}\sum_{\pi\in{\mathrm{PAR}}'}\left(x_{2^n-|\pi|}(\pi)\ox1+\tilde
m_*(x_{2^n-|\pi|}(\pi))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-|\pi|}(\pi)\right)\ox\pi_*,
$$
where $x_j(\pi)\in{\mathscr A}_j$ are arbitrary homogeneous elements.
\end{Proposition}\qed
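The reduction above turned on the identity
$m_*(\zeta_i)^{2^{n-i}}=1\ox\zeta_i^{2^{n-i}}+\sum_{0\le j<i}\zeta_{i-j}^{2^{n-(i-j)}}\ox\zeta_j^{2^{n-i}}$,
which holds because $m_*$ is a ring map and squaring is the Frobenius in characteristic $2$. A short Python sketch (ours, purely illustrative; monomials are exponent tuples, mod-2 coefficients are set parity) confirms it for small $n$:

```python
N = 4  # enough generators zeta_1 .. zeta_4 for the checks below

def zeta(k, e=1):
    """Exponent tuple of zeta_k^e; zeta_0 is the unit 1."""
    m = [0] * N
    if k > 0:
        m[k - 1] = e
    return tuple(m)

def diag_gen(i):
    """Milnor diagonal m_*(zeta_i) = sum over j of zeta_{i-j}^(2^j) tensor zeta_j."""
    return {(zeta(i - j, 2 ** j), zeta(j)) for j in range(i + 1)}

def frob(tpoly):
    """Squaring in characteristic 2: double every exponent (cross terms vanish)."""
    return {tuple(tuple(2 * e for e in m) for m in t) for t in tpoly}

for n in range(2, 5):
    for i in range(1, n + 1):
        lhs = diag_gen(i)
        for _ in range(n - i):        # m_*(zeta_i)^(2^(n-i)) via iterated Frobenius
            lhs = frob(lhs)
        rhs = {(zeta(0), zeta(i, 2 ** (n - i)))} | \
              {(zeta(i - j, 2 ** (n - (i - j))), zeta(j, 2 ** (n - i)))
               for j in range(i)}
        assert lhs == rhs
```

Here iterating the Frobenius on the two-term tensors implements the fact that $(a\ox b)^2=a^2\ox b^2$ mod $2$, so no general tensor multiplication is needed.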
Now to put together \ref{mrrho} and \ref{mlx} we must use the dual
$$
\Phi_*:R_{\mathrm{pre}}\ox{\mathscr A}_*\to{\mathscr A}_*\ox R'_{\mathrm{pre}}
$$
of the composite isomorphism
$$
\Phi:{\mathscr A}\ox{R^{\mathrm{pre}}}'\xto{{\Phi^\l}^{-1}}\bar R\xto{\Phi^\r}R^{\mathrm{pre}}\ox{\mathscr A}.
$$
We will need
\begin{Lemma}
There is an inclusion
$$
\Phi_*\left(R_{\mathrm{pre}}\ox\FF1\right)\subset{\mathscr A}_*\ox{R'_{\mathrm{pre}}}^{\le2},
$$
where
$$
{R'_{\mathrm{pre}}}^{\le2}\subset R'_{\mathrm{pre}}
$$
is the subspace of those linear forms on ${R^{\mathrm{pre}}}'$ which vanish on all left
preadmissible elements $[n,m]a\in{\mathrm{PAR}}'$ with $a\in\tilde{\mathscr A}$.
Similarly, there is an inclusion
$$
\Phi_*^{-1}\left(\FF1\ox R'_{\mathrm{pre}}\right)\subset{R_{\mathrm{pre}}}^{\le2}\ox{\mathscr A}_*,
$$
where
$$
{R_{\mathrm{pre}}}^{\le2}\subset R_{\mathrm{pre}}
$$
is the subspace of those linear forms on $R^{\mathrm{pre}}$ which vanish on all right
preadmissible elements $a[n,m]$ with $a\in\tilde{\mathscr A}$.
\end{Lemma}
\begin{proof}
Dualizing, what we have to prove for the first inclusion is that given any
admissible monomial $a\in{\mathscr A}$ and any $[n,m]b\in{\mathrm{PAR}}'$ with $b\in\tilde{\mathscr A}$, in
$\bar R$ one has the equality
$$
a[n,m]b=\sum_ia_i[n_i,m_i]b_i
$$
with $a_i[n_i,m_i]\in{\mathrm{PAR}}$ and admissible monomials $b_i\in\tilde{\mathscr A}$. Indeed,
considering $a$ as a monomial in ${\mathscr F}_0$ there is a unique way to write
$$
a[n,m]=\sum_ia_i[n_i,m_i]c_i
$$
in ${\mathscr F}_0$, with $a_i[n_i,m_i]\in{\mathrm{PAR}}$ and $c_i$ some (not necessarily
admissible or belonging to $\tilde{\mathscr F}_0$) monomials in the $\Sq^k$ generators
of ${\mathscr F}_0$. Thus in ${\mathscr F}_0$ we have
$$
a[n,m]b=\sum_ia_i[n_i,m_i]c_ib.
$$
In $\bar R$ we may replace each $c_ib$ with a sum of admissible monomials of
the same degree; obviously this degree is positive as $b\in\tilde{\mathscr A}$.
The proof of the second inclusion is entirely analogous.
\end{proof}
This lemma implies that for any simultaneous solution $A_\psi(\zeta_n)$ of
\ref{apsi}\eqref{mreqs} and \ref{apsi}\eqref{mleqs}, the elements of ${\mathscr A}_*\ox
R_{\mathrm{pre}}\ox{\mathscr A}_*$ and ${\mathscr A}_*\ox{\mathscr A}_*\ox R'_{\mathrm{pre}}$ corresponding to it via
\ref{mrrho} and \ref{mlx}, respectively, satisfy
\begin{multline*}
\sum_{\substack{a\in\tilde{\mathscr A}\\{}[k,l]a\in{\mathrm{PAR}}'}}\left(x_{2^n-k-l-|a|}([k,l]a)\ox1+\tilde
m_*(x_{2^n-k-l-|a|}([k,l]a))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-k-l-|a|}([k,l]a)\right)\ox([k,l]a)_*\\
=(1\ox1\ox\varrho^{>2})(1\ox\Phi_*)\left(\sum_{k>0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k\right),
\end{multline*}
where
$$
\varrho^{>2}:R_{\mathrm{pre}}'\onto{R_{\mathrm{pre}}'}^{>2}
$$
is the restriction of linear forms on ${R^{\mathrm{pre}}}'$ to the subspace spanned by
the subset of ${\mathrm{PAR}}'$ consisting of the left preadmissible relations of the
form $[k,l]a$ with $a\in\tilde{\mathscr A}$. Indeed the remaining part of the element
from \ref{mrrho} is
$$
\rho_{2^n}(1)\ox1,
$$
and according to the lemma its image under $1\ox\Phi_*$ goes to zero under the
map $\varrho^{>2}$.
Since the elements $\rho_{2^n-2^k+1}(\zeta_k)$ are explicitly given for all $k>0$, this
allows us to explicitly determine all elements $x_j([k,l]a)$ for
$[k,l]a\in{\mathrm{PAR}}'$ with $a\in\tilde{\mathscr A}$. For example, in low degrees we obtain
\begin{align*}
x_2([2,3]\Sq^1)=x_2([3,2]\Sq^1)&=\zeta_1^2,\\
x_3([2,2]\Sq^1)&=\zeta_1^3,\\
x_{10}([2,3]\Sq^1)=x_{10}([3,2]\Sq^1)&=\zeta_1^4\zeta_2^2,\\
x_{11}([2,2]\Sq^1)&=\zeta_1^5\zeta_2^2,\\
x_{26}([2,3]\Sq^1)=x_{26}([3,2]\Sq^1)&=\zeta_2^4\zeta_3^2,\\
x_{27}([2,2]\Sq^1)&=\zeta_1\zeta_2^4\zeta_3^2,
\end{align*}
with all other $x_j([k,l]a)=0$ for $j<32$ and $[k,l]a\in{\mathrm{PAR}}'$ with
$a\in\tilde{\mathscr A}$.
It remains to deal with the elements $x_j([k,l])$. These must satisfy
\begin{multline*}
\sum_{k<2l}\left(x_{2^n-k-l}([k,l])\ox1+\tilde
m_*(x_{2^n-k-l}([k,l]))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-k-l}([k,l])\right)\ox[k,l]_*\\
=(1\ox\Phi_*)\left(\rho_{2^n}(1)\ox1\right)+(1\ox1\ox\varrho^{\le2})(1\ox\Phi_*)\left(\sum_{k>0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k\right),
\end{multline*}
where now
$$
\varrho^{\le2}:R_{\mathrm{pre}}'\onto{R_{\mathrm{pre}}'}^{\le2}
$$
is the restriction of linear forms on ${R^{\mathrm{pre}}}'$ to the subspace spanned by
the Adem relations. The last summand
$D_n=(1\ox1\ox\varrho^{\le2})(1\ox\Phi_*)\left(\sum_{k>0}\rho_{2^n-2^k+1}(\zeta_k)\ox\zeta_k\right)$
is again explicitly given; for example, in low degrees it is equal to
\begin{align*}
D_1&=0,\\
D_2&=0,\\
D_3&=\left(\zeta_1\ox\zeta_1\right)^2\ox[2,2]_*,\\
D_4&=\left(\zeta_1^2\zeta_2\ox\zeta_1+\zeta_2\ox\zeta_2+\zeta_1^2\ox\zeta_1\zeta_2\right)^2\ox[2,2]_*,\\
D_5&=\left(
\zeta_2^2\zeta_3\ox\zeta_1
+\zeta_1^4\zeta_3\ox\zeta_2
+\zeta_1^4\zeta_2^2\ox\zeta_1\zeta_2
+\zeta_1^4\ox\zeta_2\zeta_3
+\zeta_3\ox\zeta_3
+\zeta_2^2\ox\zeta_1\zeta_3
\right)^2\ox[2,2]_*.
\end{align*}
Then finally the equations that remain to be solved can be equivalently
written as follows:
\begin{multline*}
(1\ox1\ox\tilde\eps)(1\ox\Phi_*)^{-1}\left(\sum_{k<2l}\left(x_{2^n-k-l}([k,l])\ox1+\tilde
m_*(x_{2^n-k-l}([k,l]))+\sum_{i>0}\zeta_i^{2^{n-i}}\ox
x_{2^{n-i}-k-l}([k,l])\right)\ox[k,l]_*\right)\\
=(1\ox1\ox\tilde\eps)(1\ox\Phi_*)^{-1}(D_n),
\end{multline*}
where
$$
\tilde\eps:{\mathscr A}_*\onto\tilde{\mathscr A}_*
$$
is the projection to the positive degree part, i.~e. maps 1 to 0 and all
homogeneous positive degree elements to themselves. Again, the right hand
sides of these equations are explicitly given constants, for example, in low
degrees they are given by
\begin{tabular}{cl}
$0$,&$n=1$;\\
$0$,&$n=2$;\\
$\zeta_1^2\ox[2,2]_*\ox\zeta_1^2$,&$n=3$;\\
$\left(\zeta_1^4\zeta_2^2\ox[2,2]_*+\zeta_2^2\ox(\Sq^4[2,2])_*+\zeta_1^4\ox(\Sq^6[2,2])_*\right)\ox\zeta_1^2$,&$n=4$;\\
$\left(\zeta_2^4\zeta_3^2\ox[2,2]_*+\zeta_1^8\zeta_3^2\ox(\Sq^4[2,2])_*+\zeta_1^8\zeta_2^4\ox(\Sq^6[2,2])_*+\zeta_3^2\ox(\Sq^8\Sq^4[2,2])_*\right.$\\
$\left.+\zeta_2^4\ox(\Sq^{10}\Sq^4[2,2])_*+\zeta_1^8\ox(\Sq^{12}\Sq^6[2,2])_*\right)\ox\zeta_1^2$,&$n=5$.
\end{tabular}
\
\
\
One possible set of solutions for $\zeta_k$ with $k\le4$ is given by
\begin{align*}
x_6([1,1])&=\zeta_1^6+\zeta_2^2,\\
x_8([2,6])&=\zeta_1^8,\\
x_{12}([1,3])&=\zeta_2^4,\\
x_{13}([1,2])&=\zeta_2^2\zeta_3,\\
x_{24}([2,6])&=\zeta_2^8,\\
x_{28}([1,3])&=\zeta_3^4,\\
x_{29}([1,2])&=\zeta_3^2\zeta_4
\end{align*}
and all remaining $x_j([k,l])=0$ for $j+k+l\le32$.
\
Or equivalently one might give the same solution ``on the other side of $\Phi$'' by
\begin{align*}
\rho_2(1)&=0,\\
\rho_4(1)&=0,\\
\rho_8(1)&=\zeta_1^2\ox(\Sq^4[1,1])_*+\left(\zeta_1^6+\zeta_2^2\right)\ox[1,1]_*,\\
\rho_{16}(1)&
=\zeta_1^4\ox\left(\Sq^6\Sq^3[1,2]\right)_*
+\zeta_2^2\ox\left(\Sq^5\Sq^2[1,2]\right)_*
+\zeta_3\ox\left(\Sq^4\Sq^2[1,2]\right)_*\\
&
+\zeta_1^8\ox\left((\Sq^6[1,1])_*+(\Sq^4[1,3])_*+[2,6]_*\right)
+\zeta_1^4\zeta_2^2\ox\left(\Sq^3[1,2]\right)_*
+\zeta_1^4\zeta_3\ox\left(\Sq^2[1,2]\right)_*\\
&
+\zeta_2^4\ox[1,3]_*
+\zeta_2^2\zeta_3\ox[1,2]_*,\\
\rho_{32}(1)&
=\zeta_1^8\ox\left(\Sq^{12}\Sq^6\Sq^3[1,2]\right)_*
+\zeta_2^4\ox\left(\Sq^{10}\Sq^5\Sq^2[1,2]\right)_*
+\zeta_3^2\ox\left(\Sq^9\Sq^4\Sq^2[1,2]\right)_*\\
&
+\zeta_4\ox\left(\Sq^8\Sq^4\Sq^2[1,2]\right)_*
+\zeta_1^8\zeta_2^4\ox\left(\Sq^6\Sq^3[1,2]\right)_*
+\zeta_1^8\zeta_3^2\ox\left(\Sq^5\Sq^2[1,2]\right)_*
+\zeta_1^8\zeta_4\ox\left(\Sq^4\Sq^2[1,2]\right)_*\\
&
+\zeta_2^8\ox\left((\Sq^4[1,3])_*+(\Sq^6[1,1])_*+[2,6]_*\right)
+\zeta_2^4\zeta_3^2\ox\left(\Sq^3[1,2]\right)_*
+\zeta_2^4\zeta_4\ox\left(\Sq^2[1,2]\right)_*\\
&
+\zeta_3^4\ox[1,3]_*
+\zeta_3^2\zeta_4\ox[1,2]_*.
\end{align*}
This then gives the solution itself as follows:
\
\begin{align*}
A_\psi(\zeta_1)&=0,\\
\\
A_\psi(\zeta_2)&=0,\\
\\
A_\psi(\zeta_3)
&
=\zeta_1\ox M_{2221}
+\zeta_1^2\ox\left(M_{411}+M_{231}+M_{2121}+\zeta_1M_{221}+\zeta_1^2M_{11}^2+M_3^2\right)
+\zeta_1^3\ox M_{221}\\
&
+\left(\zeta_1^6+\zeta_2^2\right)\ox M_{11},\\
\\
A_\psi(\zeta_4)
=\zeta_1&\ox\left(M_{82221}+M_{44421}+M_{424221}+M_{46221}\right)\\
+\zeta_1^4&\ox\left(\zeta_1^4\zeta_2M_5+\zeta_1^4M_{211}^2+\zeta_1^2M_{2111}^2+M_{4521}+M_{23421}+M_{62121}+M_{6231}+M_{2721}+M_{41421}+M_{44121}\right.\\
&
+M_{651}+M_{4431}+M_{24231}+M_{242121}+M_{2451}+M_{831}+M_{8121}+\zeta_1^2\zeta_3M_3+\zeta_2^3M_3+\zeta_1^6M_3^2+\zeta_1^2M_5^2\\
&
+\zeta_2M_{2421}+\zeta_1^2M_{41}^2+\zeta_2M_{54}+\zeta_3M_{32}+\zeta_2M_{621}+\zeta_2M_9+\zeta_2M_{441}+\zeta_2M_{342}+\zeta_3M_{41}\\
&
\left.+\zeta_2M_{72}+\zeta_3M_5+\zeta_2M_{432}+\zeta_1^2M_{32}^2+M_{6321}+\zeta_1M_{6221}+\zeta_1M_{4421}+\zeta_1M_{24221}+M_{24321}\right)\\
+\zeta_1^5&\ox\left(M_{24221}+M_{6221}+M_{4421}\right)\\
+\zeta_2^2&\ox\left(M_{4231}+M_{42121}+M_{41}^2+M_5^2+M_{2111}^2+M_{32}^2+M_{721}+M_{3421}+M_{451}+\zeta_1\zeta_2^2M_3+\zeta_1M_{54}+\zeta_1^5M_5\right.\\
&
\left.+\zeta_1^4M_3^2+\zeta_1M_{72}+\zeta_1M_{2421}+\zeta_3M_3+\zeta_1M_{441}+\zeta_1M_9+\zeta_1M_{621}+\zeta_1M_{342}+\zeta_1M_{432}+\zeta_1M_{4221}+M_{4321}\right)\\
+\zeta_3&\ox\left(M_{54}+M_{2421}+M_{342}+M_{441}+M_{72}+M_{621}+M_9+M_{432}+\zeta_2^2M_3+\zeta_1^4M_5\right)\\
+\zeta_1\zeta_2^2&\ox M_{4221}\\
+\zeta_1^8&\ox\left(M_{211}^2+\zeta_2M_5+M_{4121}+M_{31}^2+M_{2411}+M_{251}+M_{611}+\zeta_2M_{32}+\zeta_1^2\zeta_2M_3+\zeta_1^2M_3^2+M_{2321}+\zeta_2M_{41}\right)\\
+\zeta_1^9&\ox M_{2221}\\
+\zeta_1^4\zeta_2^2&\ox\left(M_{231}+M_{2121}+M_{51}+\zeta_1^2M_{11}^2+\zeta_1M_{221}+\zeta_1M_{41}+\zeta_1M_5+\zeta_1^3M_3+M_{321}+\zeta_1M_{32}+\zeta_2M_3\right)\\
+\zeta_1^5\zeta_2^2&\ox M_{221}\\
+\zeta_1^4\zeta_3&\ox\left(M_5+M_{41}+M_{32}+\zeta_1^2M_3\right)\\
+\zeta_2^4&\ox\left(M_{31}+\zeta_1M_3\right)\\
+\zeta_2^2\zeta_3&\ox M_3,
\end{align*}
\begin{align*}
A_\psi(\zeta_5)
=\zeta_1&\ox\left(M_{8484421}+M_{888421}+M_{\underline{16}82221}+M_{\underline{16}424221}+M_{\underline{16}44421}+M_{8\underline{12}4421}+M_{8\underline{12}6221}\right.\\
&
\left.+M_{8\underline{12}24221}+M_{\underline{16}46221}+M_{84\underline{10}4221}+M_{8486221}+M_{84824221}+M_{84284221}\right)\\
+\zeta_1^8&\ox\left(M_{431211}^2+M_{422211}^2+M_{2721}^2+M_{421221}^2+M_{34221}^2+M_{7221}^2+M_{41421}^2+M_{43221}^2\right.\\
&
+M_{\underline{12}24321}+M_{32}^2\zeta_3^2+M_{54}\zeta_4+\zeta_1M_{88421}+M_{4284321}+\zeta_2^2M_{2421}^2\\
&
+\zeta_1M_{\underline{12}24221}+M_{3842}\zeta_3+M_{342}\zeta_4+M_{8426121}+M_{8441}\zeta_3+M_{4\underline{10}4321}+M_{\underline{12}6321}\\
&
+\zeta_1M_{\underline{12}4421}+M_{41}^4\zeta_1^4+M_{41}^2\zeta_3^2+\zeta_1M_{484421}+M_{243111}^2+\zeta_2^2M_{432}^2+\zeta_3M_{\underline{11}42}\\
&
+\zeta_2^2M_{342}^2+M_{441}\zeta_4+M_3^8+\zeta_2^4M_3^4+M_{321}^4+M_{2451}^2+M_{35211}^2+M_{51}^4+M_{411}^4+\zeta_3M_{584}\\
&
+M_{831}^2+M_{8121}^2+M_{621}\zeta_4+M_{651}^2+M_{23421}^2+\zeta_2^2M_{621}^2+\zeta_3M_{98}+\zeta_3M_{8342}+M_{242211}^2\\
&
+M_{24411}^2+M_{7311}^2+M_{486321}+M_{2412111}^2+M_{34311}^2+M_{4824321}+M_{252111}^2+M_{86631}\\
&
+M_{8624121}+M_{8444121}+M_{844431}+M_{844521}+M_{48831}+M_{862431}+M_{866121}\\
&
+M_{8441421}+M_{8423421}+M_{84212421}+M_{4862121}+M_{4823421}+M_{4284231}\\
&
+M_{4824231}+M_{48242121}+M_{\underline{12}62121}+M_{\underline{12}4431}+M_{\underline{12}4521}+M_{\underline{12}44121}+M_{\underline{12}41421}+M_{4844121}\\
&
+M_{4841421}+M_{48651}+M_{\underline{12}6231}+M_{\underline{12}2451}+M_{4\underline{10}451}+M_{\underline{12}2721}+M_{4\underline{10}42121}+M_{\underline{12}242121}\\
&
+M_{484521}+M_{4\underline{10}721}+M_{\underline{12}651}+M_{486231}+M_{482451}+M_{482721}+M_{\underline{12}24231}+M_{4\underline{10}4231}+M_{428451}\\
&
+M_{428721}+M_{\underline{12}23421}+M_{4\underline{10}3421}+M_{42\underline{11}421}+M_{42842121}+M_{\underline{14}2521}+M_{6\underline{10}521}+M_{\underline{14}24121}\\
&
+M_{6\underline{10}4121}+M_{628521}+M_{\underline{14}21421}+M_{629421}+M_{6\underline{10}1421}+M_{68631}+M_{682431}+M_{686121}+M_{682521}\\
&
+M_{628431}+M_{6824121}+M_{6821421}+M_{6284121}+M_{6281421}+M_{6218421}+M_{4\underline{12}431}+M_{\underline{12}8121}+M_{4\underline{12}521}\\
&
+M_{\underline{12}831}+M_{448431}+M_{4\underline{12}4121}+M_{448521}+M_{4\underline{12}1421}+M_{449421}+M_{4484121}+M_{4481421}+M_{4418421}\\
&
+M_{\underline{14}2431}+M_{6\underline{10}431}+M_{\underline{14}6121}+M_{\underline{14}631}+M_{341211}^2+M_{8621}\zeta_3+M_{6321}^2+\zeta_1^{12}M_3^4+\zeta_3^3M_3\\
&
+\zeta_3M_{\underline{13}4}+M_{72}\zeta_4+\zeta_2^2M_{72}^2+M_{842631}+M_{421311}^2+M_{314211}^2+M_{26211}^2+M_{6411}^2+\zeta_3M_{82421}\\
&
+\zeta_1^4M_5^4+M_{224211}^2+M_9\zeta_4+M_{43311}^2+M_{24321}^2+M_{62211}^2+M_{612111}^2+\zeta_1^4M_{32}^4+M_{63111}^2\\
&
+\zeta_3M_{854}+\zeta_3M_{8432}+\zeta_3M_{\underline{10}421}+\zeta_2^2M_{441}^2+\zeta_3M_{28421}+\zeta_2^4\zeta_3M_5+\zeta_1^8\zeta_2^2M_5^2+M_{484431}\\
&
+\zeta_1^4\zeta_4M_5+\zeta_1^4M_3^2\zeta_3^2+M_3^2\zeta_2^6+M_3\zeta_4\zeta_2^2+M_9\zeta_3\zeta_1^8+\zeta_3^2M_5^2+M_{71211}^2+M_{4238421}\\
&
+\zeta_1M_{4284221}+\zeta_1M_{4\underline{10}4221}+\zeta_1M_{486221}+\zeta_2^2M_9^2+\zeta_4M_{432}+\zeta_2^2M_{54}^2+\zeta_3M_{872}+M_{4283421}\\
&
\left.+M_{862521}+M_{488121}+M_{8621421}+\zeta_3M_{\underline{17}}+M_{2142111}^2+\zeta_1M_{\underline{12}6221}+\zeta_1M_{4824221}\right)\\
+\zeta_1^9&\ox\left(M_{4\underline{10}4221}+M_{4824221}+M_{88421}+M_{\underline{12}6221}+M_{4284221}+M_{484421}+M_{\underline{12}4421}+M_{486221}+M_{\underline{12}24221}\right)\\
+\zeta_2^4&\ox\left(M_{283421}+M_{2\underline{11}421}+M_{\underline{12}521}+M_{8831}+M_{88121}+M_{\underline{10}42121}+M_{862121}+M_{82451}+M_{82721}+M_{823421}\right.\\
&
+M_{284231}+M_{86231}+M_{824231}+M_{844121}+M_{841421}+M_{28451}+M_{\underline{10}4231}+M_{8651}+M_{\underline{10}451}+M_{84521}\\
&
+M_{84431}+M_{2842121}+M_{8242121}+M_{\underline{12}431}+M_{484121}+M_{418421}+M_{\underline{12}4121}+M_{48431}+M_{28721}+\zeta_2M_{98}\\
&
+M_{\underline{10}3421}+M_{\underline{10}721}+\zeta_1^8\zeta_2M_9+\zeta_1^2\zeta_2^4M_3^2+\zeta_2\zeta_3^2M_3+\zeta_1^2\zeta_4M_3+M_{238421}+M_{48521}+M_{49421}+M_{\underline{12}1421}\\
&
+M_{481421}+\zeta_2M_{8342}+M_{86321}+\zeta_1M_{284221}+\zeta_1M_{86221}+\zeta_1M_{84421}+\zeta_1M_{824221}+\zeta_1M_{\underline{10}4221}+M_{824321}\\
&
+\zeta_2M_{8432}+\zeta_2M_{\underline{11}42}+\zeta_2M_{872}+\zeta_2M_{28421}+\zeta_2M_{8441}+\zeta_2M_{3842}+\zeta_1^2M_{72}^2+\zeta_1^2M_{621}^2+\zeta_4M_{41}+\zeta_2M_{584}\\
&
+M_{2421}^2\zeta_1^2+\zeta_1^2M_{432}^2+M_{32}\zeta_4+M_{54}^2\zeta_1^2+\zeta_2M_{\underline{10}421}+\zeta_1^2M_{342}^2+\zeta_1^2M_{441}^2+\zeta_2M_{\underline{13}4}+\zeta_2M_{82421}+\zeta_2M_{854}\\
&
\left.+\zeta_2M_{8621}+\zeta_2^5M_5+\zeta_1^{10}M_5^2+\zeta_2M_{\underline{17}}+M_5\zeta_4+M_9^2\zeta_1^2+\zeta_1^2M_{42111}^2+M_{\underline{10}4321}+M_{4211}^2\zeta_1^4+M_{284321}\right)\\
+\zeta_1\zeta_2^4&\ox\left(M_{86221}+M_{284221}+M_{\underline{10}4221}+M_{84421}+M_{824221}\right)\\
+\zeta_3^2&\ox\left(M_{84231}+M_{2421}^2+M_{342}^2+M_{441}^2+M_{72}^2+M_{54}^2+M_{621}^2+M_9^2+M_{432}^2+M_{42111}^2+M_{842121}+M_{8451}\right.\\
&
+M_{\underline{11}421}+M_{8721}+M_{38421}+M_{83421}+M_{84321}+M_5^2\zeta_1^8+M_3\zeta_4+M_3^2\zeta_2^4+\zeta_1^9M_9+\zeta_1M_{\underline{17}}+\zeta_1M_{98}\\
&
+\zeta_1M_{\underline{13}4}+\zeta_1M_{854}+\zeta_1M_{584}+\zeta_1M_{8441}+\zeta_1M_{\underline{11}42}+\zeta_1M_{872}+\zeta_1M_{8342}+\zeta_1M_{8432}+\zeta_1M_{3842}\\
&
\left.+\zeta_1M_{28421}+\zeta_1M_{\underline{10}421}+\zeta_1M_{82421}+\zeta_1M_{8621}+\zeta_1M_{84221}+\zeta_1M_3\zeta_3^2+\zeta_1M_5\zeta_2^4\right)\\
+\zeta_1\zeta_3^2&\ox M_{84221}\\
+\zeta_4&\ox\left(M_{584}+M_{854}+M_{\underline{13}4}+M_{872}+M_{8432}+M_{28421}+M_{98}+M_{\underline{17}}+M_{8342}+M_{\underline{11}42}+M_{8441}+M_{8621}\right.\\
&
\left.+M_{\underline{10}421}+M_{3842}+M_{82421}+\zeta_1^8M_9+\zeta_2^4M_5+\zeta_3^2M_3\right)\\
+\zeta_1^{17}&\ox\left(M_{82221}+M_{44421}+M_{424221}+M_{46221}\right)
\end{align*}
\begin{align*}
+\zeta_1^8\zeta_2^4&\ox\left(\zeta_1^4\zeta_2M_5+\zeta_1^4M_{211}^2+\zeta_1^2M_{2111}^2+M_{4521}+M_{23421}+M_{62121}+M_{6231}+M_{2721}+M_{41421}+M_{44121}+M_{651}\right.\\
&
+M_{4431}+M_{24231}+M_{242121}+M_{2451}+M_{831}+M_{8121}+M_3\zeta_3\zeta_1^2+\zeta_2^3M_3+\zeta_1^6M_3^2+\zeta_1^2M_5^2+\zeta_2M_{2421}+\zeta_1^2M_{41}^2\\
&
+\zeta_2M_{54}+\zeta_3M_{32}+\zeta_2M_{621}+\zeta_2M_9+\zeta_2M_{441}+\zeta_2M_{342}+\zeta_3M_{41}+\zeta_2M_{72}+\zeta_3M_5+\zeta_2M_{432}+\zeta_1^2M_{32}^2+M_{6321}\\
&
\left.+\zeta_1M_{6221}+\zeta_1M_{4421}+\zeta_1M_{24221}+M_{24321}\right)\\
+\zeta_1^9\zeta_2^4&\ox\left(M_{24221}+M_{6221}+M_{4421}\right)\\
+\zeta_1^8\zeta_3^2&\ox\left(M_{4231}+M_{42121}+M_{41}^2+M_5^2+M_{2111}^2+M_{32}^2+M_{721}+M_{3421}+M_{451}+\zeta_1\zeta_2^2M_3+\zeta_1M_{54}+\zeta_1^5M_5+\zeta_1^4M_3^2\right.\\
&
\left.+\zeta_1M_{72}+M_{2421}\zeta_1+M_3\zeta_3+\zeta_1M_{441}+\zeta_1M_9+\zeta_1M_{621}+\zeta_1M_{342}+\zeta_1M_{432}+\zeta_1M_{4221}+M_{4321}\right)\\
+\zeta_1^8\zeta_4&\ox\left(M_{54}+M_{2421}+M_{342}+M_{441}+M_{72}+M_{621}+M_9+M_{432}+M_3\zeta_2^2+M_5\zeta_1^4\right)\\
+\zeta_1^9\zeta_3^2&\ox M_{4221}\\
+\zeta_2^8&\ox\left(M_{211}^2+\zeta_2M_5+M_{4121}+M_{31}^2+M_{2411}+M_{251}+M_{611}+\zeta_2M_{32}+\zeta_2M_3\zeta_1^2+\zeta_1^2M_3^2+M_{2321}+\zeta_2M_{41}\right)\\
+\zeta_1\zeta_2^8&\ox M_{2221}\\
+\zeta_2^4\zeta_3^2&\ox\left(M_{231}+M_{2121}+M_{51}+\zeta_1^2M_{11}^2+\zeta_1M_{221}+\zeta_1M_{41}+\zeta_1M_5+\zeta_1^3M_3+M_{321}+\zeta_1M_{32}+\zeta_2M_3\right)\\
+\zeta_2^4\zeta_4&\ox\left(M_5+M_{41}+M_{32}+\zeta_1^2M_3\right)\\
+\zeta_1\zeta_2^4\zeta_3^2&\ox M_{221}\\
+\zeta_3^4&\ox\left(M_{31}+\zeta_1M_3\right)\\
+\zeta_3^2\zeta_4&\ox M_3
\end{align*}
\chapter*{}
This solution satisfies
$A_\psi(\zeta_k)=0$ for $k<3$,
\begin{align*}
(1\ox\Phi_\l)(A_\psi(\zeta_3))
&
=\left(\zeta_2\ox1+\zeta_1^2\ox\zeta_1\right)\ox[2,3]_*\\
&+\left(\zeta_1^3\ox1+\zeta_1^2\ox\zeta_1+\zeta_1\ox\zeta_1^2\right)\ox([2,2]\Sq^1)_*\\
&+\zeta_1^2\ox1\ox[3,3]_*\\
&+\zeta_1^2\ox1\ox\left(([3,2]\Sq^1)_*+([2,3]\Sq^1)_*\right),\\
\\
(1\ox\Phi_\r)(A_\psi(\zeta_3))
&
=\zeta_2\ox[2,3]_*\ox1\\
&
+\zeta_1^2\ox[3,3]_*\ox1\\
&
+\zeta_1^2\ox\left([2,3]_*+[3,2]_*\right)\ox\zeta_1\\
&
+\zeta_1\ox[2,2]_*\ox\zeta_2,
\end{align*}
and
\begin{align*}
(1\ox\Phi_\l)(A_\psi(\zeta_4))
&
=\left(
\zeta_1^4\zeta_3\ox1
+\zeta_1^4\zeta_2^2\ox\zeta_1
+\zeta_1^8\ox\zeta_2
+\zeta_2^2\ox\zeta_1^5
+\zeta_1^4\ox\zeta_1^4\zeta_2
+\zeta_1^4\ox\zeta_3
+\zeta_3\ox\zeta_1^4\right)\ox[2,3]_*\\
&
+\left(
\zeta_1^5\zeta_2^2\ox1
+\zeta_1^4\zeta_2^2\ox\zeta_1
+\zeta_1^9\ox\zeta_1^2
+\zeta_1^8\ox\zeta_1^3
+\zeta_1\zeta_2^2\ox\zeta_1^4
+\zeta_2^2\ox\zeta_1^5
+\zeta_1^5\ox\zeta_1^6
+\zeta_1^5\ox\zeta_2^2\right.\\
&
\left.\ \
+\zeta_1^4\ox\zeta_1^7
+\zeta_1^4\ox\zeta_1\zeta_2^2
+\zeta_1\ox\zeta_1^4\zeta_2^2\right)\ox([2,2]\Sq^1)_*\\
&
+\left(
\zeta_1^4\zeta_2^2\ox1
+\zeta_1^8\ox\zeta_1^2
+\zeta_1^4\ox\zeta_2^2
+\zeta_2^2\ox\zeta_1^4
+\zeta_1^4\ox\zeta_1^6\right)\ox[3,3]_*\\
&
+\left(
\zeta_1^4\zeta_2^2\ox1
+\zeta_1^8\ox\zeta_1^2
+\zeta_1^4\ox\zeta_2^2
+\zeta_2^2\ox\zeta_1^4
+\zeta_1^4\ox\zeta_1^6\right)\ox\left(([3,2]\Sq^1)_*+([2,3]\Sq^1)_*\right),\\
\\
(1\ox\Phi_\r)(A_\psi(\zeta_4))
&
=\left(\zeta_1^4\ox(\Sq^6[3,3])_*
+\zeta_2^2\ox(\Sq^5[2,3])_*
+\zeta_3\ox(\Sq^4[2,3])_*
+\zeta_1^4\zeta_2^2\ox[3,3]_*\right.\\
&
\ \ \left.+\zeta_1^4\zeta_3\ox[2,3]_*\right)\ox1\\
&
+\left(\zeta_1^4\ox(\Sq^7[2,2])_*
+\zeta_1^4\ox(\Sq^6[2,3])_*
+\zeta_1^4\ox(\Sq^6[3,2])_*
+\zeta_1^5\ox(\Sq^6[2,2])_*\right.\\
&
\ \ +\zeta_2^2\ox(\Sq^5[2,2])_*
+\zeta_2^2\ox(\Sq^4[2,3])_*
+\zeta_1\zeta_2^2\ox(\Sq^4[2,2])_*
+\zeta_1^4\zeta_2^2\ox[2,3]_*\\
&
\ \ \left.+\zeta_1^4\zeta_2^2\ox[3,2]_*
+\zeta_1^5\zeta_2^2\ox[2,2]_*\right)\ox\zeta_1\\
&
+\left(\zeta_1\ox(\Sq^8[2,2])_*
+\zeta_1^8\ox[3,2]_*
+\zeta_1^9\ox[2,2]_*\right)\ox\zeta_2\\
&
+\zeta_1^8\ox[2,3]_*\ox\zeta_1^3\\
&
+\zeta_1^8\ox[2,2]_*\ox\zeta_1\zeta_2\\
&
+\zeta_1\ox[4,4]_*\ox\zeta_3.
\end{align*}
The solutions themselves are readily obtained from these. For example, one has
\begin{align*}
A_\psi(\zeta_3)
&
=\zeta_1\ox\left(\zeta_1^3M_{11}^2
+\zeta_1^2M_{11}M_3
+\zeta_1M_{2211}
+\zeta_2M_{31}
+\zeta_2M_{11}^2
+M_{11}M_{32}
+\zeta_1^3M_{1111}\right.\\
&
+\zeta_2M_{1111}
+\zeta_1M_{321}
+\zeta_1\zeta_2M_3
+M_{2221}
+\zeta_1^2M_{311}
+\zeta_1M_{11}M_{31}
+M_{3121}\\
&\left.
+\zeta_1^2M_{221}
+\zeta_1M_{11}M_{211}
+\zeta_1M_{3111}\right)\\
&
+\zeta_1^2\ox\left(
M_{411}
+M_{231}
+M_{51}
+\zeta_1M_{41}
+\zeta_1M_{5}
+\zeta_1^2M_{31}
+M_{11}M_{211}
+M_{11}M_{31}+\zeta_1^3M_3
\right.\\
&\left.
+\zeta_2M_3+M_{321}+\zeta_1M_{32}
\right)\\
&
+\zeta_2\ox\left(M_{41}
+\zeta_1M_{11}^2
+M_{11}M_3
+\zeta_1M_{31}
+M_5\right)
\end{align*}
\chapter*{}
\section{Algorithms for machine computations}
We finally briefly indicate how to use the obtained expressions to perform
actual calculations on a computer. Notably, knowledge of either a
multiplication map such as $A^\phi$ or of a comultiplication map such as
$A_\psi$ enables one to calculate all ordinary and matric triple Massey
products in the Steenrod algebra, as well as the $d_{(2)}$ differential of
the Adams spectral sequence, and hence its $E_3$ term --- see \cites{Baues,
Baues&JibladzeVI}.
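To give an idea of what such machine computations involve, the arithmetic underlying the above formulas can be encoded quite directly. The following is only an illustrative sketch in Python (not the code actually used for the calculations in this text): it represents monomials in the Milnor generators $\zeta_n$ of ${\mathscr A}_*$ and evaluates the diagonal $m_*(\zeta_n)=\sum_{0\le i\le n}\zeta_{n-i}^{2^i}\ox\zeta_i$, which appears throughout this chapter.

```python
def zeta(n, e=1):
    """Monomial zeta_n^e in A_* = F_2[zeta_1, zeta_2, ...], stored as a
    frozenset of (generator index, exponent) pairs; zeta_0 = 1 is the
    empty monomial."""
    return frozenset({(n, e)}) if n > 0 else frozenset()

def m_star(n):
    """Milnor diagonal on the generator zeta_n, returned as a set of
    tensors (left monomial, right monomial); a set encodes a mod-2 sum,
    since all coefficients are in F_2."""
    terms = set()
    for i in range(n + 1):
        # term zeta_{n-i}^{2^i} (x) zeta_i of the diagonal
        terms.add((zeta(n - i, 2 ** i), zeta(i)))
    return terms
```

For example, `m_star(2)` yields the three tensors $\zeta_2\ox1$, $\zeta_1^2\ox\zeta_1$ and $1\ox\zeta_2$.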
\chapter*{}
We next utilize the quotient map $(\_)^{\le2}:\bar R_*\onto R_*^{\le2}$ given on the dual
monomial basis $M$ by sending all the elements $M_{n_1,...,n_k}$ with $k>2$ to
zero. Let us denote by $A_\psi^{\le2}$ the corresponding composite map
$$
{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox\bar R_*\onto{\mathscr A}_*\ox R_*^{\le2}
$$
Note that the composite
$$
\iota:\bar R_*\xto{(1\ox m^\r_*)m^\l_*=(m^\l_*\ox1)m^\r_*}{\mathscr A}_*\ox\bar
R_*\ox{\mathscr A}_*\onto{\mathscr A}_*\ox R_*^{\le2}\ox{\mathscr A}_*
$$
is injective --- a fact that we used in \ref{L*} and \ref{S*} to
calculate the dual left action operator $L_*$ and the dual symmetry operator
$S_*$, respectively.
The equations \ref{apsi}\eqref{mreqs} and \eqref{mleqs} in particular imply that the composite
$$
W:{\mathscr A}_*\xto{A_\psi}{\mathscr A}_*\ox\bar R_*\xto{1\ox\iota}{\mathscr A}_*\ox{\mathscr A}_*\ox R_*^{\le2}\ox{\mathscr A}_*
$$
is determined by $A_\psi^{\le2}$; namely, one has
\begin{equation}\label{W}
W=
(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*
+(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar b_\psi)m_*.
\end{equation}
We can now reformulate equations \ref{apsi}\eqref{mreqs} and \eqref{mleqs} in
terms of $A_\psi^{\le2}$. Indeed since the map $\iota$ is injective, these
equations are equivalent to the ones obtained by precomposing with the map
$1\ox\iota\ox1$, respectively $1\ox1\ox\iota$. Then using the equalities
$$
(\iota\ox 1)m^\r_*=(1\ox1\ox m_*)\iota
$$
for \eqref{mreqs} and
$$
(1\ox\iota)m^\l_*=(m_*\ox1\ox1)\iota
$$
for \eqref{mleqs}, we can replace every occurrence of $A_\psi$ with
$(1\ox\iota)A_\psi$, i.~e. switch from $A_\psi$ to $W$. In this way we arrive at the
equations
$$
(1\ox1\ox1\ox m_*)W=(W\ox1)m_*+(1\ox\iota\ox1)(\zeta_1\ox\bar b_\psi)m_*
$$
and
$$
(1\ox m_*\ox1\ox1)W=(m_*\ox1\ox1\ox1)W+(1\ox W)m_*,
$$
respectively. Next substituting here the values from \eqref{W} we obtain the
following equations on $A_\psi^{\le2}$:
\begin{multline*}
(1\ox1\ox1\ox m_*)(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(1\ox1\ox1\ox m_*)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(1\ox1\ox1\ox m_*)(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar
b_\psi)m_*\\
=(m_*\ox1\ox1\ox1)(A_\psi^{\le2}\ox1\ox1)(m_*\ox1)m_*
+(1\ox A_\psi^{\le2}\ox1\ox1)(1\ox m_*\ox1)(m_*\ox1)m_*\\
+(1\ox1\ox(\_)^{\le2}\ox1\ox1)(1\ox m^\l_*\ox1\ox1)(\zeta_1\ox\bar
b_\psi\ox1)(m_*\ox1)m_*
\end{multline*}
and
\begin{multline*}
(1\ox m_*\ox1\ox1)(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(1\ox m_*\ox1\ox1)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(1\ox m_*\ox1\ox1)(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar b_\psi)m_*\\
=
(m_*\ox1\ox1\ox1)(m_*\ox1\ox1)(A_\psi^{\le2}\ox1)m_*
+(m_*\ox1\ox1\ox1)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(m_*\ox1\ox1\ox1)(1\ox1\ox(\_)^{\le2}\ox1)(1\ox m^\l_*\ox1)(\zeta_1\ox\bar b_\psi)m_*
+(1\ox m_*\ox1\ox1)(1\ox A_\psi^{\le2}\ox1)(1\ox m_*)m_*\\
+(1\ox1\ox A_\psi^{\le2}\ox1)(1\ox1\ox m_*)(1\ox m_*)m_*
+(1\ox1\ox1\ox(\_)^{\le2}\ox1)(1\ox1\ox m^\l_*\ox1)(1\ox\zeta_1\ox\bar
b_\psi)(1\ox m_*)m_*.
\end{multline*}
It is straightforward to check that these equations are identically satisfied
for any choice of values for $A_\psi^{\le2}$.
In particular, for the values on the Milnor generators one has
\begin{multline*}
(1\ox\iota)A_\psi(\zeta_n)
=\sum_{0\le i\le n}(m_*\ox1)A_\psi^{\le2}(\zeta_{n-i}^{2^i})\ox\zeta_i
+\sum_{0\le i+j\le n}\zeta_{n-i-j}^{2^{i+j}}\ox A_\psi^{\le2}(\zeta_i^{2^j})\ox\zeta_j\\
+\sum_{0\le i\le n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2}\ox1)(m^\l_*\ox1)(\bar b_\psi(\zeta_i)).
\end{multline*}
Taking into account \eqref{apsisq}, \ref{4=0} and \eqref{bpsibar}, we then have
\begin{equation}\label{ia}
\begin{aligned}
(1\ox\iota)A_\psi(\zeta_n)
=&(m_*\ox(\_)^{\le2})C_n\ox\zeta_1+(m_*\ox1)(A_\psi^{\le2}(\zeta_n))\ox1\\
&+\sum_{0<i\le n}\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})C_i\ox\zeta_1
+\sum_{0\le i\le n}\zeta_{n-i}^{2^i}\ox A_\psi^{\le2}(\zeta_i)\ox1\\
&+\sum_{\substack{0\le i\le n\\0<j<i}}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-j}^{2^{j-1}}))\ox\zeta_j,
\end{aligned}
\end{equation}
where we have denoted
$$
C_i=L_*(\zeta_{i-1},\zeta_{i-1})+{\nabla_\xi}_*(\zeta_{i-1},\zeta_{i-1})
\in({\mathscr A}_*\ox\bar R_*)_{2^i-1},\ i=1,2,3,...
$$
We see from these equations that the elements $A_\psi(\zeta_n)\in{\mathscr A}_*\ox\bar
R_*$ actually belong to a smaller subspace ${\mathscr A}_*\ox R_Q$, where
$R_Q\subset\bar R_*$ is the subspace defined by the equality
$$
R_Q=\iota^{-1}({\mathscr A}_*\ox R^{\le2}_*\ox Q_*),
$$
with $Q_*\subset{\mathscr A}_*$ denoting the linear subspace spanned by the polynomial
generators $1,\zeta_1,\zeta_2,\zeta_3,...$.
We also note that there is a commutative diagram
$$
\xymatrix{
&{\mathscr A}_*\ox R^{\le2}_*\ox{\mathscr A}_*\ar[dr]^{\eps\ox1\ox\eps}\\
\bar R_*\ar[ur]^\iota\ar[rr]^{(\_)^{\le2}}&&R^{\le2}_*,
}
$$
where $\eps:{\mathscr A}_*\to{\mathbb F}$ is the counit. Indeed the dual diagram just expresses
the trivial fact that applying the multiplication map $\alpha\ox
r\ox\beta\mapsto\alpha r\beta$ to the tensor $1\ox r\ox1\in{\mathscr A}\ox
R^{\le2}\ox{\mathscr A}$ gives the value of the inclusion $R^{\le2}\into\bar R$ on the
element $r$. Thus for any $r\in\bar R_*$ we have
$$
r^{\le2}=(\eps\ox1\ox\eps)\iota(r).
$$
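In computations this identity is straightforward to exploit. Here is a minimal sketch, under the assumption (ours, not the text's) that a tensor in ${\mathscr A}_*\ox R^{\le2}_*\ox{\mathscr A}_*$ is stored as a set of triples of monomials, with the empty tuple standing for the unit $1$:

```python
def le2(iota_r):
    """Apply (eps (x) 1 (x) eps) to a mod-2 sum of triples (alpha, r, beta).
    The counit eps kills positive-degree monomials, so only triples whose
    outer factors are both the unit () contribute their middle factor."""
    out = set()
    for alpha, r, beta in iota_r:
        if alpha == () and beta == ():
            out ^= {r}  # mod-2 addition: toggle membership
    return out
```

Thus `le2` computes $r^{\le2}$ from a stored expansion of $\iota(r)$.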
It is convenient in these circumstances to pick the basis $B_*$ of the space
$\bar R_*$ dual to the preadmissible basis $B$ of $\bar R$ as described in
\ref{barr}. Recall that the latter basis consists of elements of the form
$\pi\alpha$ where $\alpha$ is an admissible monomial and $\pi$ is a
preadmissible relation, i.~e. has the form $\Sq^{n_k}\cdots\Sq^{n_1}[n_0,n]$
where $[n_0,n]$ is an Adem relation, the monomial $\Sq^{n_k}\cdots\Sq^{n_1}$
is admissible and moreover $n_1\ge2n_0$. Thus the dual basis $B_*$ consists
of linear forms $(\pi\alpha)_*$ on $\bar R$ which take value $1$ on
$\pi\alpha$ and $0$ on all other elements of $B$. We will call $B_*$ the
\emph{dual preadmissible basis} of $\bar R_*$. It is convenient for us because
of the following
\begin{Lemma}
The dual preadmissible basis $B_*$ has the following properties:
\begin{itemize}
\item
the subset $B_Q$ of $B_*$ consisting of the elements
$(\pi1)_*$ and $(\pi\Sq^{2^k}\Sq^{2^{k-1}}\cdots\Sq^2\Sq^1)_*$ for all
preadmissible relations $\pi$ and all $k\ge0$ is a basis for the subspace
$R_Q$ of $\bar R_*$;
\item
the elements $(\pi1)_*$ for all preadmissible relations $\pi$ form a basis
$B_0$ of the subspace
$$
R_0=\iota^{-1}({\mathscr A}_*\ox R^{\le2}_*\ox\FF1)\subset R_Q\subset\bar R_*;
$$
\item
for each $k\ge1$ the elements $(\pi1)_*$ and
$(\pi\Sq^{2^j}\Sq^{2^{j-1}}\cdots\Sq^2\Sq^1)_*$ with $j<k$ form a basis $B_k$
of the subspace
$$
R_k=\iota^{-1}\left(\bigoplus_{0\le j\le k}{\mathscr A}_*\ox
R^{\le2}_*\ox{\mathbb F}\zeta_j\right)\subset R_Q\subset\bar R_*;
$$
\item
the map $(\_)^{\le2}:\bar R_*\to R^{\le2}_*$ sends the elements $(1[n,m]1)_*$
to $[n,m]_*$ and all other elements of $B_*$ to $0$.
\end{itemize}
\end{Lemma}
\begin{proof}
All four statements follow by direct inspection of the preadmissible basis $B$
and of the definition of $\iota$.
\end{proof}
Thus we have a filtration
$$
B_0\subset B_1\subset\cdots\subset B_Q\subset B_*;
$$
let us also single out the subset
$$
B_{\textrm{Adem}}\subset B_0
$$
consisting of the duals $[a,b]_*=(1[a,b]1)_*$ of the Adem relations, for
$0<a<2b$, $a,b\in{\mathbb N}$.
With respect to the basis $B_Q$, any solution $A_\psi(\zeta_n)$ of the above
equation can be written in the form
$$
A_\psi(\zeta_n)=\sum_{\beta\in B_Q}X_{2^n-|\beta|}(\beta)\ox\beta,
$$
for some uniquely determined elements
$X_{2^n-|\beta|}(\beta)\in{\mathscr A}_{2^n-|\beta|}$, where $|\beta|$ denotes the degree
of the (homogeneous) element $\beta$. Moreover in these terms one has
$$
A_\psi^{\le2}(\zeta_n)=\sum_{a<2b}X_{2^n-a-b}([a,b]_*)\ox[a,b]_*.
$$
\begin{comment}
When substituting these expressions in \eqref{ia} it will be also convenient
to use the map
$$
\tilde\iota:\bar R_*\to(\tilde{\mathscr A}_*\ox\bar R_*\ox{\mathscr A}_*+{\mathscr A}_*\ox\bar
R_*\ox\tilde{\mathscr A}_*)\subset{\mathscr A}_*\ox\bar R_*\ox{\mathscr A}_*
$$
determined by the equality
$$
\iota(r)=1\ox r\ox1+\tilde\iota(r).
$$
Note that the elements $\tilde\iota([a,b]_*)\in\tilde{\mathscr A}_*\ox R^{\le2}_*\ox\FF1$ are
not necessarily zero. For example, one has
\begin{align*}
\tilde\iota([1,3]_*)&=\zeta_1\ox[1,2]_*\ox1,\\
\tilde\iota([3,2]_*)&=\zeta_1\ox[2,2]_*\ox1,\\
\tilde\iota([1,5]_*)&=\zeta_1\ox([1,4]_*+[2,3]_*)\ox1,\\
\tilde\iota([3,3]_*)&=\zeta_1\ox[2,3]_*\ox1+\zeta_1^2\ox[2,2]_*\ox1,
\end{align*}
etc.
\end{comment}
Substituting this into \eqref{ia} one obtains
\begin{align*}
\sum_{\beta\in B_Q}X_{2^n-|\beta|}(\beta)\ox\iota\beta
&=\sum_{a<2b}
\left(
m_*(X_{2^n-a-b}([a,b]_*))+\sum_{0\le i\le n}\zeta_{n-i}^{2^i}\ox X_{2^i-a-b}([a,b]_*)\right)
\ox[a,b]_*\ox1\\
&+\left((m_*\ox(\_)^{\le2})C_n
+\sum_{0<i\le n}\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})C_i
+\sum_{1\le i\le n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-1}))
\right)\ox\zeta_1\\
&+\sum_{j\ge2}\left(\sum_{j<i\le
n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-j}^{2^{j-1}}))\right)\ox\zeta_j
\end{align*}
Moreover, for $\beta\in B_Q$, let us write
$$
\iota\beta=\sum_{k\ge0}c_{|\beta|-2^k+1}(\beta)\ox\zeta_k
$$
with $c_j(\beta)\in({\mathscr A}_*\ox R^{\le2}_*)_j$ the \emph{coordinates} of $\beta$.
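Extracting the coordinates $c_j(\beta)$ from such an expansion, and more generally collecting a sum of tensors by its last factor $\zeta_k$ as done below, is simple bookkeeping; the following hedged sketch (the pair representation of terms is our assumption) illustrates it:

```python
def coordinates(terms):
    """Collect a mod-2 sum of tensors payload (x) zeta_k by the index k.
    Payloads stand for elements of A_* (x) R_*^{<=2}; equal payloads with
    the same k cancel mod 2. Returns {k: set of surviving payloads}."""
    coords = {}
    for payload, k in terms:
        bucket = coords.setdefault(k, set())
        bucket.symmetric_difference_update({payload})  # toggle: mod-2 sum
    return {k: v for k, v in coords.items() if v}      # drop zero coordinates
```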
\begin{comment}
Thus we have
$$
\tilde\iota\beta=\tilde c_{|\beta|}(\beta)\ox1+\sum_{k\ge1}c_{|\beta|-2^k+1}(\beta)\ox\zeta_k,
$$
where
$$
c_{|\beta|}(\beta)=1\ox\beta+\tilde c_{|\beta|}(\beta),
$$
with $\tilde c_{|\beta|}(\beta)\in(\tilde{\mathscr A}_*\ox R^{\le2}_*)_{|\beta|}$.
\end{comment}
Then collecting terms with respect to the last component, the above equation
becomes equivalent to the system
$$
\left\{
\begin{aligned}
\sum_{\beta\in B_Q}X_{2^n-|\beta|}(\beta)\ox c_{|\beta|}(\beta)
&=\sum_{a<2b}
\left(
m_*(X_{2^n-a-b}([a,b]_*))
+\sum_{0\le i\le n}\zeta_{n-i}^{2^i}\ox X_{2^i-a-b}([a,b]_*)
\right)
\ox[a,b]_*,\\
\sum_{\beta\in B_Q-B_0}X_{2^n-|\beta|}(\beta)\ox c_{|\beta|-1}(\beta)
&=(m_*\ox(\_)^{\le2})C_n
+\sum_{0<i\le n}\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})C_i\\
&+\sum_{1<i\le n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-1})),\\
\sum_{\beta\in B_Q-B_j}X_{2^n-|\beta|}(\beta)\ox c_{|\beta|-2^j+1}(\beta)
&=\sum_{j<i\le
n}\zeta_1\zeta_{n-i}^{2^i}\ox(1\ox(\_)^{\le2})(m^\l_*(v_{i-j}^{2^{j-1}})),&j\ge2.
\end{aligned}
\right.
$$
\chapter{The dual $d_{(2)}$ differential}\label{diff}
In this chapter we will compute the $d_{(2)}$ differential on the ${\mathrm E}_2$ term
$$
{\mathrm E}_2^{p,q}=\Cotor^p_{{\mathscr A}_*}({\mathbb F},{\mathbb F})^q\cong\Ext^p_{{\mathscr A}}({\mathbb F},{\mathbb F})^q
$$
of the Adams spectral sequence. For this we will first set up the algebraic formalism necessary to carry out an analog of the computations in Chapter \ref{E3} in the dual setting. Let us begin by recalling how the above isomorphism is obtained.
\section{Secondary coresolution}
One starts with a projective resolution of the ${\mathscr A}$-module ${\mathbb F}$, e.~g. with the minimal resolution as in \eqref{minireso}. Its graded ${\mathbb F}$-linear dual
\begin{equation}\label{miniresod}
{\mathbb F}\to{\mathscr A}_*^\set{g_0^0}\to\bigoplus_{n\ge0}{\mathscr A}_*^\set{g_1^{2^n}}\to\bigoplus_{|i-j|\ne1}{\mathscr A}_*^\set{g_2^{2^i+2^j}}\to...
\end{equation}
is then an injective resolution of ${\mathbb F}$ in the category of right ${\mathscr A}_*$-comodules. (This is not entirely trivial since we take \emph{graded} duals. However, all (co)modules that we encounter will be degreewise finite, i.~e. will have generating sets with a finite number of elements in each degree. Then graded duality is obviously a contravariant equivalence between the categories of such (co)modules.)
There are isomorphisms
$$
\Hom_{\mathscr A}(M,N)\cong M_*\cote_{{\mathscr A}_*}N
$$
for any left ${\mathscr A}$-modules $M$ and $N$ of the above kind (i.~e. of graded finite type), where on the right the graded dual $M_*$ is considered as a right ${\mathscr A}_*$-comodule and $N$ as a left ${\mathscr A}_*$-comodule in the standard way. It follows that applying $\Hom_{\mathscr A}(-,{\mathbb F})$ to \eqref{minireso} and applying $-\cote_{{\mathscr A}_*}{\mathbb F}$ to \eqref{miniresod} gives isomorphic cochain complexes (of ${\mathbb F}$-vector spaces). But by definition cohomology of the latter complex is given by
$$
H^p(\eqref{miniresod}\cote_{{\mathscr A}_*}{\mathbb F})^q=\Cotor^p_{{\mathscr A}_*}({\mathbb F},{\mathbb F})^q.
$$
It then follows from \eqref{d2gen} that in these terms the secondary differential
$$
d_{(2)}^{pq}:\Cotor^p_{{\mathscr A}_*}({\mathbb F},{\mathbb F})^q\to\Cotor^{p+2}_{{\mathscr A}_*}({\mathbb F},{\mathbb F})^{q+1}
$$
is given by
\begin{equation}\label{d2delta}
d_{(2)}^{pq}(\hat g_p^q)=\sum_{\textrm{$g_p^q$ appears in
$\delta(g_{p+2}^{q+1})$}}\hat g_{p+2}^{q+1}=\delta_*(\hat g_p^q)^0.
\end{equation}
Here,
$$
\delta_*:\bigoplus_q\Sigma{\mathscr A}_*^{\set{g_p^q}}\to\bigoplus_q{\mathscr A}_*^{\set{g_{p+2}^q}}
$$
is the dual of the map
$$
\delta:{\mathscr A}\brk{g_{p+2}^*}\to\Sigma{\mathscr A}\brk{g_p^*}
$$
determined in \ref{delta}, whereas $\hat g_*^*$ denotes the dual basis of $g_*^*$, i.~e. $\hat g_p^q\in{\mathscr A}_*^\set{g_*^*}$ is the vector with the $g_p^q$-th coordinate equal to 1 and all other coordinates equal to zero. Moreover by $\delta_*(\hat g_p^q)^0$ is denoted the zero degree component of $\delta_*(\hat g_p^q)$, i.~e. the result of applying to the element
$$
\delta_*(\hat g_p^q)\in\bigoplus_{j\ge0}{\mathscr A}_j^\set{g_{p+2}^{q+j+1}}
$$
the projection to the $(j=0)$-th component
$$
\bigoplus_{j\ge0}{\mathscr A}_j^\set{g_{p+2}^{q+j+1}}\to{\mathscr A}_0^\set{g_{p+2}^{q+1}}.
$$
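To illustrate \eqref{d2delta} on a purely hypothetical example (the following value of $\delta$ is not claimed to occur in the actual minimal resolution): suppose
$$
\delta(g_{p+2}^{q+1})=g_p^q+\mathrm{Sq}^2g_p^{q-2}.
$$
Only the first summand has its coefficient in ${\mathscr A}_0$, so the projection to the $(j=0)$-th component picks out exactly this term; thus $\hat g_{p+2}^{q+1}$ appears as a summand in $d_{(2)}^{pq}(\hat g_p^q)$, while the summand $\mathrm{Sq}^2g_p^{q-2}$ contributes nothing to $d_{(2)}^{p,q-2}(\hat g_p^{q-2})$.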
Instead of directly dualizing the map $\delta$, it is more convenient from the computational point of view to dualize the conditions of \ref{delta} using \eqref{diadelta} and determine $\delta_*$ directly from these dualized conditions. In fact using \ref{exmulf} we can refine the diagram \eqref{diadelta} further in the following way:
\begin{equation}\label{diadeltad}
\alignbox{
\xymatrix{
&\Sigma{\mathscr A}\ox V_{p+1}\ar[r]^-{1\ox d}&\Sigma{\mathscr A}\ox{\mathscr A}\ox V_p\ar[dr]^{m\ox1}\\
V_{p+3}\ar[ur]^{\delta^{\mathscr A}_{p+1}}\ar[dr]_d&{\mathscr A}\ox{\mathscr A}\ox V_p\ar[ur]^{\k\ox1\ox1}&{\mathscr A}\ox R_{\mathscr F}\ox V_p\ar[r]^{1\ox A^s}&\Sigma{\mathscr A}\ox V_p\\
&{\mathscr A}\ox V_{p+2}\ar[u]_{1\ox\ph^{{\mathscr A},s}}\ar[ur]_{1\ox\ph^{R,s}}\ar[r]_-{1\ox\delta^{\mathscr A}_p}&{\mathscr A}\ox\Sigma{\mathscr A}\ox V_p\ar[ur]_{m\ox1}
}
}
\end{equation}
where $A^s$ is the multiplication map corresponding to a splitting $s$ of the
${\mathbb G}$-relation pair algebra used, as in \ref{rcomp}, to identify $R_{\mathscr B}$ with
${\mathscr A}\oplus R_{\mathscr F}$, and $(\ph^{{\mathscr A},s},\ph^{R,s})$ are the components of the corresponding composite map
$$
V_{p+2}\xto\ph R_{\mathscr B}\ox V_p={\mathscr A}\!\ox\!V_p\oplus R_{\mathscr F}\!\ox\!V_p,
$$
with $\ph$ as defined in \eqref{dd}.
Moreover just as the map $\delta$ is completely determined by its restriction to $V_{p+2}$, its dual $\delta_*$ is determined by the composite $\delta_0$ as in
$$
\Hom(V_p,\Sigma{\mathscr A}_*)\xto{\delta_*}\Hom(V_{p+2},{\mathscr A}_*)\xto{\Hom(V_{p+2},\eps)}\Hom(V_{p+2},{\mathbb F}),
$$
where graded $\Hom$ is meant, and $\eps$ is the augmentation of ${\mathscr A}_*$. In fact we only need this composite map $\delta_0$ as by \eqref{d2delta} above we have
\begin{equation}\label{d2delta0}
d_{(2)}^{pq}(\hat g_p^q)=\delta_0(\hat g_p^q).
\end{equation}
Now the dual to diagram \eqref{diadeltad} is easy to identify; it is
\begin{equation}\label{diadeltadu}
\alignbox{
\xymatrix{
&\Sigma{\mathscr A}_*\ox\hat V_{p+1}\ar[dl]_{\delta_0}&\Sigma{\mathscr A}_*\ox{\mathscr A}_*\ox\hat V_p\ar[l]_-{1\ox d_*}\ar[dl]_{\zeta_1\ox1\ox1}\\
\hat V_{p+3}&{\mathscr A}_*\ox{\mathscr A}_*\ox\hat V_p\ar[d]_{1\ox\ph_*^{{\mathscr A},s}}&{\mathscr A}_*\ox{R_{\mathscr F}}_*\ox\hat V_p\ar[dl]_{1\ox\ph_*^{R,s}}&\Sigma{\mathscr A}_*\ox\hat V_p\ar[ul]_{m_*\ox1}\ar[l]_-{A_s\ox1}\ar[dl]^{m_*\ox1}\\
&{\mathscr A}_*\ox\hat V_{p+2}\ar[ul]^{d_*}&{\mathscr A}_*\ox\Sigma{\mathscr A}_*\ox\hat V_p\ar[l]^-{1\ox\delta_0}
}
}
\end{equation}
where $\hat V_p$ are the graded dual spaces of $V_p$.
It is straightforward to reformulate the above in terms of elements: the
values of the map $\delta_0$ on arbitrary elements $a\ox g\in\Sigma{\mathscr A}_*\ox\hat
V_p$ must satisfy
\begin{equation}\label{deltamain}
\begin{aligned}
\delta_0(\sum a_\l\ox d_*(a_\r\ox g))
&=d_*(\sum a_\l\ox\delta_0(a_\r\ox g))\\
&+d_*(\sum\zeta_1a_\l\ox\ph_*^{{\mathscr A},s}(a_\r\ox g))
+d_*(\sum a_{\mathscr A}\ox\ph_*^{R,s}(a_R\ox g)),
\end{aligned}
\end{equation}
where we have denoted by
$$
\Delta(a)=\sum a_\l\ox a_\r
$$
the value of the diagonal $\Delta:{\mathscr A}_*\to{\mathscr A}_*\ox{\mathscr A}_*$ and by
$$
A_s(a)=\sum a_{\mathscr A}\ox a_R
$$
the value of the comultiplication map $A_s:\Sigma{\mathscr A}_*\to{\mathscr A}_*\ox{R_{\mathscr F}}_*$ on
$a\in{\mathscr A}_*$.
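As a concrete instance of this notation, recall Milnor's classical formula for the diagonal of the dual Steenrod algebra at the prime $2$: on the polynomial generators $\xi_n$ of ${\mathscr A}_*$ one has
$$
\Delta(\xi_n)=\sum_{0\le i\le n}\xi_{n-i}^{2^i}\ox\xi_i,
$$
so that here $\sum(\xi_n)_\l\ox(\xi_n)_\r$ is the sum $\sum_i\xi_{n-i}^{2^i}\ox\xi_i$; for the conjugate generators the tensor factors appear in the opposite order.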
We thus obtain
\begin{Proposition}
\label{proposi}
The $d_{(2)}$ differential of the Adams spectral sequence is given on the
cohomology classes represented by the generators $\hat g$ in the minimal resolution by the formula
$$
d_{(2)}(\hat g)=\delta_0(\Sigma1\ox\hat g),
$$
where
$$
\delta_0:\Sigma{\mathscr A}_*\ox\hat V_s\to\hat V_{s+2}
$$
are any maps satisfying the equations \eqref{deltamain}.
\end{Proposition}\qed
At this point the cooperation of the authors ended, since the time of Jibladze's visit to the
MPIM was over. Our goal of carrying out computer calculations on the basis of \ref{proposi}
is therefore left to the interested reader.
\endinput
\chapter*{Introduction}
Spheres are the most elementary compact spaces, but the simple question of
counting essential maps between spheres turned out to be a landmark
problem. In fact, progress in algebraic topology might be measured by its
impact on this question. Topologists have worked on the problem of describing the
homotopy groups of spheres for around 80 years and there is still no
satisfactory solution in sight. Many approaches have been developed: a
distinguished one is the Adams spectral sequence
$$
{\mathrm E}_2,{\mathrm E}_3,{\mathrm E}_4,...
$$
converging to homotopy groups of spheres. Adams computed the ${\mathrm E}_2$-term and
showed that
$$
{\mathrm E}_2=\Ext_{\mathscr A}({\mathbb F},{\mathbb F})
$$
is algebraically determined by $\Ext$-groups associated to the Steenrod
algebra ${\mathscr A}$. Hence ${\mathrm E}_2$ is an upper bound for homotopy groups of spheres
and is given by an algebraic resolution of the prime field ${\mathbb F}={\mathbb F}_p$ over
the algebra ${\mathscr A}$. The Steenrod algebra ${\mathscr A}$ is in fact a Hopf algebra with
wonderful algebraic properties. Milnor showed that the dual algebra
$$
{\mathscr A}_*=\Hom({\mathscr A},{\mathbb F})
$$
is a polynomial algebra. Topologically the Steenrod algebra is the algebra of
primary cohomology operations. Adams' formula on $E_2$ shows a
fundamental connection between homotopy groups of spheres and primary
cohomology operations. Much work in the literature is exploiting
this connection. However, since $E_2$ is only an upper bound, one cannot
expect the Steenrod algebra to be sufficient to determine homotopy groups of
spheres. In fact, for this the ``algebra of all higher cohomology operations''
is needed. The structure of this total algebra is largely unknown; it is not even
clear what kind of algebra is needed to describe the additive properties of
higher cohomology operations. The structure of the Adams spectral sequence
$E_2, E_3, \ldots$ shows that the total algebra can be approximated by
constructing inductively primary, secondary, tertiary \ldots operations. In doing
so one might be able to grasp the total algebra. This is the program of computing
homotopy groups of spheres via higher cohomology operations. The first step
beyond Adams' result is understanding the algebra of secondary
cohomology operations which, surprisingly, turned out to be a differential
algebra, namely a pair algebra.
In the book \cite{Baues} the pair algebra ${\mathscr B}$ of secondary cohomology
operations is computed and this enriches the known algebraic structure of the
Steenrod algebra considerably. The pair algebra ${\mathscr B}$ is given by an exact
sequence
$$
\begin{aligned}
\xymatrix@1{
\Sigma{\mathscr A}\ar@{ >->}[r]&{\mathscr B}_1\ar[r]^\d&{\mathscr B}_0\ar@{->>}[r]^q&{\mathscr A}.
}
\end{aligned}
\eqno{(*)}
$$
Here ${\mathscr B}_0$ is the free associative algebra over ${\mathbb G}={\mathbb Z}/p^2{\mathbb Z}$ generated
by the Steenrod operations which also generate ${\mathscr A}$ and $q$ is the identity on
generators. Moreover there is a multiplication map
$$
m:{\mathscr B}_0\!\ox\!{\mathscr B}_1\oplus{\mathscr B}_1\!\ox\!{\mathscr B}_0\to{\mathscr B}_1
$$
and a diagonal map
$$
\Delta:{\mathscr B}_1\to({\mathscr B}_0\!\ox\!{\mathscr B}_1\oplus{\mathscr B}_1\!\ox\!{\mathscr B}_0)/\sim
$$
such that ${\mathscr B}=({\mathscr B},m,\Delta)$ is a ``secondary Hopf algebra'', see
\cite{Baues}, inducing the Hopf algebra structure of the Steenrod algebra
${\mathscr A}$. It is proven in \cite{Baues} that the structure of ${\mathscr B}$ as a secondary
Hopf algebra together with the explicit invariants $L$ and $S$ determines ${\mathscr B}$
up to isomorphism. The nature of secondary homotopy operations leads
forcibly to this kind of new algebraic object which has wonderful properties
shedding light on the structure of the Steenrod algebra ${\mathscr A}$ as a Hopf algebra.
By a striking result of Milnor, the dual ${\mathscr A}_{\ast}$ of the Hopf algebra ${\mathscr A}$
is a polynomial algebra with a nice diagonal which, for many purposes,
is easier to deal with than the algebra ${\mathscr A}$ itself which is given by generators,
the Steenrod squares, and Adem relations. Thus this paper also describes
the dualization ${\mathscr B}_{\ast}$ of the secondary Hopf algebra ${\mathscr B}$. We compute the
invariants dual to $L$ and $S$ by explicit and easy formul\ae. Therefore computations
in terms of ${\mathscr B}$ can equivalently be carried out in terms of the dual ${\mathscr B}_{\ast}$ and
often the dual formul{\ae} are easier to handle. In this paper we use the secondary
Hopf algebra ${\mathscr B}$ and its dual ${\mathscr B}_{\ast}$ for computing a
secondary resolution which determines the differential $d_{(2)}$ on $E_2$ and
hence $E_3$.
Adams
computed
those special values of the differentials $d_{(2)}$ in ${\mathrm E}_2$ which are related
to the Hopf invariant 1 problem. In the book of Ravenel \cite{Ravenel} one
finds a list of all differentials up to degree 60 which, however, is only
tentative in degrees $\ge46$. Corrections of published differentials in low
degrees were made by Bruner \cite{Brunercorr}. An explicit method for
computing the differential $d_{(2)}$ in general, however, has not been achieved in the
literature; it is achieved in the present paper. Our result thus establishes the
globally computable nature of the $E_3$--term of the Adams spectral sequence.
According to Ravenel's observer, ``who looks to the far distant homotopy groups
of spheres through a telescope,'' such a global result on $E_3$ seemed
impossible for a long time.
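A classical special case of such differentials, which any general method must in particular reproduce, is the well-known formula (see e.~g. \cite{Ravenel})
$$
d_{(2)}(h_i)=h_0h_{i-1}^2
$$
for the classes $h_i\in\Ext^{1,2^i}_{\mathscr A}({\mathbb F},{\mathbb F})$ at the prime $2$; the right hand side is nontrivial for $i\ge4$, reflecting the nonexistence of elements of Hopf invariant one in these dimensions.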
We show that the differential $d_{(2)}$ and the ${\mathrm E}_3$-term can be completely
computed by the formula
$$
{\mathrm E}_3=\Ext_{\mathscr B}({\mathbb G}^\Sigma,{\mathbb G}^\Sigma)
$$
where the secondary $\Ext$-groups $\Ext_{\mathscr B}$ are given by an algebraic
secondary resolution associated to the pair algebra ${\mathscr B}$. The computation of
${\mathrm E}_3$ yields a new algebraic upper bound of homotopy groups of spheres
improving the Adams bound given by ${\mathrm E}_2$.
In order to do explicit computations of the new bound ${\mathrm E}_3$ one has to carry
out two tasks. On the one hand one has to describe the algebraic structure of
the secondary Hopf algebra ${\mathscr B}$ explicitly by equations which a computer can
deal with in an easy way. On the other hand one has to choose a secondary
resolution associated to ${\mathscr B}$, by solving inductively a system of explicit
equations determined by ${\mathscr B}$.
In the first part (chapters \ref{sext}, \ref{secsteen}, \ref{E3}) of this
paper we describe the algebra which yields the secondary resolution associated
to ${\mathscr B}$ and which determines the differential $d_{(2)}$ on ${\mathrm E}_2$ by the
resolution. In the second part (chapters \ref{Hpa}, \ref{gens}, \ref{LS},
\ref{xi}, \ref{A}) we study the algebraic properties of ${\mathscr B}$ and of the
dualization of ${\mathscr B}$. In particular we show that the results of Milnor on the
dual Steenrod algebra ${\mathscr A}_*$ have secondary analogues. For the dualization of
${\mathscr B}$ we proceed as follows. The projection $q:{\mathscr B}_0\onto{\mathscr A}$ in $(*)$ above
admits a factorization
$$
q:{\mathscr B}_0\onto{\mathscr F}_0\onto{\mathscr A}
$$
where ${\mathscr F}_0={\mathscr B}_0\ox{\mathbb F}$ is the free associative algebra over ${\mathbb F}={\mathbb Z}/p{\mathbb Z}$
generated by the Steenrod operations. Now let
\begin{align*}
R_{\mathscr B}&=\textrm{kernel}({\mathscr B}_0\to{\mathscr A})\\
R_{\mathscr F}&=\textrm{kernel}({\mathscr F}_0\to{\mathscr A}).
\end{align*}
Then one has an exact sequence of ${\mathbb F}$-vector spaces
$$
{\mathscr A}\into R_{\mathscr B}\ox{\mathbb F}\onto R_{\mathscr F}
$$
which can be dualized by applying the functor $\Hom(-,{\mathbb F})$. Moreover the
exact sequence of ${\mathbb F}$-vector spaces
$$
\Sigma{\mathscr A}\into{\mathscr B}_1\ox{\mathbb F}\onto R_{\mathscr B}\ox{\mathbb F}
$$
can be dualized by $\Hom(-,{\mathbb F})$. The main results of this work describe in detail the multiplication in ${\mathscr B}$ and the diagonal in ${\mathscr B}$ on the level of ${\mathscr B}_1\ox{\mathbb F}$ and on the dual $\Hom({\mathscr B}_1,{\mathbb F})$. In this way we obtain explicit formul\ae\ describing the algebraic structure of ${\mathscr B}$ and of the dual of ${\mathscr B}$. Of course the dual of ${\mathscr B}$ determines ${\mathscr B}$ and vice versa.
We use these formul\ae\ for computer calculations of the secondary resolution associated to ${\mathscr B}$ and we derive in this way the differentials $d_{(2)}$ on ${\mathrm E}_2$. In section \ref{d2} we do such computations up to degree 40 in order to confirm the algebraic equations achieved in the book \cite{Baues}. The goal is to compute ${\mathrm E}_3$ up to degree 210 as this was done for ${\mathrm E}_2$ by Nassau \cite{Nassau}. A more effective computer implementation of ${\mathrm E}_3$, which is left to the interested reader, relies on the computation of the dual of ${\mathscr B}$, see the formul{\ae} in section \ref{cobcomp} below. The functions needed for the implementation are described in the paper by tables of values in low degrees. These tables should be helpful to control the implementation.
\section{Introduction}
The study of supersymmetric field theories in physics literature naturally leads to the notion of a (Lie) superalgebra, cf. \cite{med,med2,of12,of121,of122}, which is defined as follows: Let $\g=\g_0 \oplus \g_1$ be a $\mathbb{Z}_2-$graded $\mathbb{K}-$vector space. For a homogeneous element $X \in \g$, we let $|X|:=i$ if $X \in \g_i$. $\g$ together with a bilinear map $[\cdot, \cdot ] : \g \times \g \rightarrow \g$ is called a $(\mathbb{K}-)$superalgebra if
\begin{enumerate}
\item $[\cdot, \cdot ] : \g_i \times \g_j \rightarrow \g_{i+j}$,
\item For homogeneous elements $X,Y \in \g$ it holds that $[X,Y]=-(-1)^{|X||Y|}[Y,X]$.
\end{enumerate}
If moreover the Jacobi identity
\begin{align}
[X,[Y,Z]]=[[X,Y],Z]+(-1)^{|X||Y|}[Y,[X,Z]] \label{grj}
\end{align}
holds for all homogeneous elements, we call $\g$ a Lie superalgebra. Classification results for simple Lie superalgebras can be found in \cite{nahm}. It has been found in \cite{jgeo,suads,med,med2,med3,of121} that some superalgebras naturally appear geometrically. To this end, let $(M,g)$ be a smooth, oriented and time-oriented Lorentzian spin manifold with spinor bundle $S^g$ admitting distinguished spinor fields, e.g. parallel spinors, geometric Killing spinors or spinors being parallel wrt. a connection that depends on more bosonic data of the background in question. By a well-known squaring map, cf. \cite{lei,bl}, each spinor gives rise to a vector field and the spinor field equation translates into natural properties of the associated vector, i.e. being parallel or Killing, for instance. Moreover, vector fields act naturally on spinors by the spinorial Lie derivative as considered in \cite{kos,ha96}. In this way, one obtains a superalgebra naturally associated to $(M,g)$ whose even and odd part consist of distinguished vector- and spinor fields.
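As a first consistency check of the sign conventions, set $X=Y=Z$ with $X$ odd in (\ref{grj}): this yields $[X,[X,X]]=[[X,X],X]-[X,[X,X]]$, while graded antisymmetry applied to the even element $[X,X]$ gives $[[X,X],X]=-[X,[X,X]]$. Combining both identities we obtain
\begin{align*}
3[X,[X,X]]=0,
\end{align*}
so that $[X,[X,X]]=0$ for every odd element $X$ whenever $\mathrm{char}\,\mathbb{K}\neq3$.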
The algebraic structure of these infinitesimal symmetries also becomes important within the classification of background geometries on curved space which support some (rigid) supersymmetry, as has been initiated in physics literature in recent years, cf. \cite{pes,ym,CCKS1,CCKS2,CCKS3,fes1,cas}.\\
\newline
There is a conformal analogue of this superalgebra construction which has first been studied in \cite{ha96} and recently has been refined in \cite{raj,med,med2,med3}.
To this end note that besides the Dirac operator $D^g$ on a Lorentzian spin manifold, there is a complementary conformally covariant differential operator acting on spinors, called the Penrose or twistor operator, and elements of its kernel are equivalently characterized as solutions of the twistor equation
\begin{align*}
\nabla^{S^g}_X \ph + \frac{1}{n} X \cdot D^g \ph = 0 \text{ for }X \in TM.
\end{align*}
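Note that the twistor equation is consistent with the definition of the Dirac operator: with the conventions used below ($X\cdot Y+Y\cdot X=-2\langle X,Y\rangle_g$ and $D^g\ph=\sum_i\epsilon_i s_i\cdot\nabla^{S^g}_{s_i}\ph$ for a local pseudo-orthonormal frame $(s_i)$ with $\langle s_i,s_i\rangle_g=\epsilon_i$), inserting $\nabla^{S^g}_{s_i}\ph=-\frac{1}{n}s_i\cdot D^g\ph$ gives
\begin{align*}
D^g\ph=-\frac{1}{n}\sum_{i=1}^n\epsilon_i\,s_i\cdot s_i\cdot D^g\ph=\frac{1}{n}\sum_{i=1}^n\epsilon_i^2\,D^g\ph=D^g\ph,
\end{align*}
reflecting the fact that the twistor operator is the projection of $\nabla^{S^g}\ph$ onto the kernel of Clifford multiplication, i.e. no pointwise condition on $D^g\ph$ is imposed.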
There are many geometric classification results for manifolds admitting twistor spinors, cf. \cite{bl,leihabil,bfkg}, and recently they also appeared in Fefferman constructions in parabolic geometry, cf. \cite{hs,hs1}, and in the construction of conformal superalgebras in physics literature, cf. \cite{raj,med,med3,CCKS1,CCKS2,CCKS3}.\\
Under further orientability assumptions on $(M,g)$ every twistor spinor $\ph$ defines an associated Dirac current $V_{\ph} \in \mathfrak{X}(M)$ which turns out to be causal and conformal, see \cite{lei,bl}, and at least in the Lorentzian case their zero sets coincide, i.e. $Z_{\ph} = Z_{V_{\ph}}$.
On the space $\mathfrak{X}^{nc}(M)\oplus \text{ker }P^g$ of \textit{normal} conformal vector fields and twistor spinors, brackets are introduced in \cite{ha96,raj} by setting:
\begin{equation}\label{alr}
\begin{aligned}
\left[V,W\right] &:=[V,W]_{\mathfrak{X}(M)},\\
[V,\ph]&:= V \circ \ph, \\
[\ph,V]&:= -V \circ \ph, \\
[\ph_1,\ph_2]&:=V_{\ph_1,\ph_2},
\end{aligned}
\end{equation}
where $V,W \in \mathfrak{X}^{nc}(M), \ph \in \text{ker }P^g$. $V \circ \ph$ is the spinorial Lie derivative (cf. \cite{kos}). It is proved in \cite{ha96,raj} that $\g:=\mathfrak{X}^{nc}(M)\oplus \text{ker }P^g$ together with these brackets is a superalgebra which is in general not a Lie superalgebra. It was observed earlier in \cite{ha96} that also the space ${\g}^{ec}:=\mathfrak{X}^{c}(M)\oplus \text{ker }P^g$ of conformal vector fields and twistor spinors equipped with the same brackets turns out to be a superalgebra which in general is not a Lie superalgebra. We will discuss later why we choose only normal conformal vector fields in the even part.\\
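For orientation we recall the explicit shape of the spinorial Lie derivative in the simplest case: for a Killing vector field $V$ of a fixed metric $g \in c$ one has, cf. \cite{kos} (with signs depending on the chosen Clifford convention),
\begin{align*}
V \circ \ph = \nabla^{S^g}_V \ph + \frac{1}{4}\, dV^{\flat} \cdot \ph,
\end{align*}
where the $2$-form $dV^{\flat}$ acts by Clifford multiplication; for conformal vector fields an additional term proportional to $\mathrm{div}(V)\cdot\ph$ appears, whose coefficient depends on the conformal weight convention for spinors.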
In \cite{med,med3} the superalgebra defined by (\ref{alr}) is related to the (local) classification of Lorentzian conformal structures admitting twistor spinors from \cite{leihabil}. One finds that (\ref{alr}) does not define a Lie superalgebra in case there is a Fefferman metric in the conformal class, a Lorentzian Einstein Sasaki metric, or a local splitting into a special Einstein product in the sense of \cite{al2}. In these cases the odd-odd-odd Jacobi identity fails to hold, but the situation can be remedied by the inclusion of a nontrivial R-symmetry in the construction of the algebra.\\
\newline
The mentioned constructions of conformal superalgebras involving twistor spinors all fix a metric in the conformal class. In contrast to this, our aim is the construction of a superalgebra canonically associated to a conformal spin structure by making use of conformal tractor calculus as developed in \cite{cs,baju,leihabil,feh}, for instance.
To this end, we use the well-known description of a pseudo-Riemannian conformal structure $(M,c=[g])$ of signature $(p,q)$, where $n=p+q \geq 3$, as a parabolic Cartan geometry $(\mathcal{P}^1, \omega^{nc})$ of type $(G=O(p+1,q+1),P)$, where $P \subset G$ is the stabilizer of some isotropic ray, in the sense of \cite{sharp,cs,baju}. It leads to a well-defined algebraic conformal invariant, being the conformal holonomy group $Hol(M,c)$. As no canonical connection for $(M,c)$ can be defined on a reduction of the frame bundle of $M$, the Cartan geometry in question arises via a procedure called the first prolongation of a conformal structure, which naturally identifies $Hol(M,c) \cong Hol(\omega^{nc})$ with a (class of conjugated) subgroup of $O(p+1,q+1)$. Conformal holonomy groups turn out to be interesting objects in their own right, cf. \cite{arm,leihabil,baju,ln,lst,alt}, for instance. Normal conformal vector fields are in this language equivalently characterized as sections of the bundle $\mathcal{P}^1 \times_P \Lambda^2 \R^{p+1,q+1}$ that are parallel wrt. the connection induced by $\omega^{nc}$.
Furthermore, \cite{lei,baju,leihabil} shows that the twistor equation admits a conformally invariant reinterpretation in terms of conformal Cartan geometries. In fact, there is a naturally associated vector bundle $\mathcal{S}$ for a conformal spin manifold $(M,c)$ of signature $(p,q)$ with fibre $\Delta_{p+1,q+1}$, the spinor module in signature $(p+1,q+1)$. On $\mathcal{S}$, a natural lift of the conformal Cartan connection $\omega^{nc}$ induces a covariant derivative such that parallel sections of $\mathcal{S}$ correspond to twistor spinors via a fixed metric $g \in c$. In other words, $(M,c)$ admits a twistor spinor for one - and hence for all - $g \in c$ iff the lift of $Hol(M,c)$ to the spin group $Spin^+(p+1,q+1)$ which double covers $SO(p+1,q+1)$ stabilizes a nonzero spinor. Using these Cartan techniques has lead to a complete local classification of Lorentzian conformal structures admitting twistor spinors in \cite{leihabil}.\\
\newline
We present in this language a manifestly conformally invariant construction of a superalgebra $\g=\g_0 \oplus \g_1$, consisting of parallel tractor 2-forms and parallel tractor spinors,
\begin{equation*}
\boxed{
\begin{aligned}
&(M^{1,n-1},c) && & \rightarrow & \g= \g_0 \oplus \g_1 \text{(real) Superalgebra,}& \\
&(M^{1,n-1},c) &\text{ with \textit{special} holonomy}& & \rightarrow & \g=\g_0 \oplus \g_1 \text{ \textit{Lie} Superalgebra.}&
\end{aligned}
}
\end{equation*}
defined on the level of tractors only. Here, the various brackets are given in purely algebraic terms by the obvious bracket on skew-symmetric endomorphisms, the natural Clifford-action of 2-forms on spinors and the squaring of spinors to 2-forms in signature $(2,n)$, cf. \cite{leihabil}. One should compare this to the construction of a (Lie) superalgebra for Riemannian manifolds admitting geometric Killing spinors via algebraic operations on the metric cone as done in \cite{suads}. We verify in section \ref{cts} that the so constructed superalgebra satisfies all Jacobi identities except the odd-odd-odd-one which has to be checked in a case-by-case analysis.\\
As we shall see in section \ref{dem}, this approach reproduces the superalgebra (\ref{alr}) from \cite{raj,ha96,med} when we fix a metric in the conformal class, which identifies parallel sections with conformal vector fields and twistor spinors, and thus it yields an equivalent description of the conformal symmetry superalgebra. However, we prove in Theorem \ref{hola} that the tractor approach as presented here has the advantage of giving purely algebraic conditions in terms of conformal holonomy exhibiting when the construction actually leads to a Lie superalgebra. \\
Furthermore, we present in section \ref{rsymme} the construction of a Lie superalgebra naturally associated to a Fefferman spin space via the inclusion of nontrivial R-symmetries on the tractor level. Again, the construction is purely algebraic and reproduces results of \cite{med} for a fixed metric in the conformal class.\\
\newline
Consequently, the tractor approach to conformal superalgebras induced by twistor spinors is manifestly conformally invariant, yields direct relations to conformal holonomy and shows that the failure of being a Lie superalgebra is due to purely algebraic identities on the level of spin tractors.\\
Furthermore, we see in section \ref{highersgn} that the tractor approach can also be used to generalize the whole construction to non-Lorentzian signatures. Doing this, one faces an immediate problem: In general, the map $\chi \mapsto \alpha^2_{\chi}$ mapping a spinor to the associated 2-form is nontrivial only in case $p+1=2$, i.e. Lorentzian signature. In arbitrary signature, a nontrivial map can be obtained by forming $\alpha^{p+1}_{\chi}$. However, there is no obvious natural generalization of the Lie bracket on $\Lambda^k_{p+1,q+1}$ for $k>2$. Nevertheless, we introduce a natural superalgebra structure on the space of parallel forms and twistor spinors formulated in a purely algebraic way, check Jacobi identities and describe the algebra wrt. a given metric in the conformal class. Surprisingly, one finds that in arbitrary signature only 2 of the 4 Jacobi identities need to be satisfied. This is illustrated by considering generic twistor spinors in signature $(3,2)$ (cf. \cite{hs1}) as an example. As a second example, we specialize the construction to special Killing forms and geometric Killing spinors on pseudo-Riemannian manifolds.
In all these cases one obtains by fixing a metric in the conformal class interesting new formulas in Propositions \ref{prr} and \ref{na} which produce new twistor spinors and conformal Killing forms out of existing ones and which can be viewed as generalizations of the spinorial Lie derivative.\\
Finally, we relate the dimension of the odd part $\g_1$ to special geometric structures in the conformal class:
In physics, one is often not only interested in the existence of solutions of certain spinor field equations, but wants to relate the existence of a certain number of maximally linearly independent solutions to local geometric structures, cf. \cite{of12,jhom,of122}. From a more mathematical perspective, \cite{cortes} studies the relation between the existence of a certain number of parallel-, Killing- and twistor spinors and underlying local geometries. We present conformal analogues of some of these results in section \ref{podi}. For instance, we show in Proposition \ref{stuv} that a pseudo-Riemannian manifold admitting more than $\frac{3}{4}$ of the maximal number of linearly independent twistor spinors is already conformally flat.\\
\newline
This article is organized as follows: In section \ref{srw} we provide the necessary ingredients from conformal spin geometry and its conformally invariant reformulation in terms of tractors. Section \ref{cts} introduces the conformal symmetry superalgebra in terms of tractors for Lorentzian manifolds and studies elementary properties whereas section \ref{dem} relates this construction to previous results from \cite{med,raj}. Section \ref{rsymme} elaborates on the construction of a tractor conformal superalgebra for Fefferman spaces via the inclusion of an R-symmetry whereas section \ref{66} applies the results obtained so far in low dimensions. In section \ref{highersgn} we leave the Lorentzian setting and show how the purely algebraic tractor-formulas generalize to arbitrary signatures. We conclude with some relations between the algebraic structure of the conformal symmetry algebra and local geometries in the conformal class in section \ref{podi}.
\section{Preliminaries from conformal spin geometry} \label{srw}
\subsection*{Relevant spinor algebra}
We consider $\R^{p,q}$, that is, $\R^n$, where $n=p+q$, equipped with the scalar product $\langle \cdot, \cdot \rangle_{p,q}$ of index $p$, given by $\langle e_i, e_j \rangle_{p,q} = \epsilon_i \delta_{ij}$, where $(e_1,...,e_n)$ denotes the standard basis of $\R^n$ and $\epsilon_{i \leq p} = -1 = -\epsilon_{i>p}$. Let $e_i^{\flat}:=\langle e_i, \cdot \rangle_{p,q} \in \left(\R^{p,q}\right)^*$. We denote by $Cl_{p,q}$ the Clifford algebra of $(\R^{n},- \langle \cdot, \cdot \rangle_{p,q})$ and by $Cl_{p,q}^{\C}$ its complexification. It is the associative real or complex algebra with unit multiplicatively generated by $(e_1,...,e_n)$ with the relations $e_ie_j+e_je_i=-2 \langle e_i,e_j \rangle_{p,q}$.\\
Let $Spin(p,q) \subset Cl_{p,q}$ denote the spin group and $Spin^+(p,q)$ its identity component. There is a natural double covering $\lambda:Spin(p,q) \rightarrow SO(p,q)$ of the pseudo-orthogonal group. Restricting irreducible representations of $Cl_{p,q}$ or $Cl^{\C}_{p,q}$ (cf. \cite{har,lm}) leads to the real or complex spinor module $\Delta_{p,q}^{\R}$ resp. $\Delta_{p,q}^{\C}$, cf. \cite{ba81,har,lm}. Further, $Cl_{p,q}^{(\C)}$ acts on $\De$ and as $\R^n \subset Cl_{p,q} \subset Cl^{\C}_{p,q}$, this defines the Clifford multiplication $\cdot$ of a vector by a spinor, which naturally extends to a multiplication by $k$-forms: Letting
$\omega = \sum_{1 \leq i_1 <...< i_k \leq n} \omega_{i_1...i_k} e^{\flat}_{i_1} \wedge...\wedge e^{\flat}_{i_k} \in \Lambda^k_{p,q}:=\Lambda^k \left(\R^{p,q}\right)^*$ and $\ph \in \De$, we set
\begin{align} \omega \cdot \ph := \sum_{1 \leq i_1 <...< i_k \leq n} \omega_{i_1...i_k} e_{i_1} \cdot...\cdot e_{i_k} \cdot \ph \in \De. \label{clform} \end{align}
$\Delta_{p,q}$ admits a $Spin^+(p,q)$ nondegenerate invariant inner product $\langle \cdot, \cdot \rangle_{\De}$ such that \begin{align} \langle X \cdot u, v \rangle_{\De} + (-1)^p \langle u, X \cdot v \rangle_{\De} = 0. \label{fg}\end{align} for all $u,v \in \De$ and $X \in \R^n$.
In the complex case, it is Hermitian, whereas in the real case it is symmetric if $p=0,1$ mod $4$ with neutral signature ($p \neq0$ and $q \neq 0$) or it is definite ($p=0$ or $q=0$). In case $p=2,3$ mod $4$, the pair $(\De^{\R},\langle \cdot , \cdot \rangle_{\De^{\R}})$ is a symplectic vector space.\\
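Iterating (\ref{fg}) over the $k$ factors of a form $\omega \in \Lambda^k_{p,q}$ and using that reversing $k$ pairwise anticommuting basis vectors produces the sign $(-1)^{k(k-1)/2}$, one obtains the adjoint of the multiplication (\ref{clform}):
\begin{align*}
\langle \omega \cdot u, v \rangle_{\De} = (-1)^{(p+1)k+\frac{k(k-1)}{2}} \langle u, \omega \cdot v \rangle_{\De}, \qquad u,v \in \De,
\end{align*}
which for $k=1$ reduces to (\ref{fg}).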
\newline
There is an important decomposition of $\Delta_{p+1,q+1}$ into $Spin(p,q)-$modules. Let $(e_0,...,e_{n+1})$ denote the standard basis of $\R^{p+1,q+1}$. We introduce lightlike directions $e_{\pm} := \frac{1}{\sqrt{2}}(e_{n+1} \pm e_0)$. One then has a decomposition
\begin{align}
\R^{p+1,q+1} = \R e_- \oplus \R^{p,q}\oplus \R e_+ \label{sp1}
\end{align}
into $O(p,q)-$modules. We define the annihilation spaces $Ann(e_{\pm}):=\{ v \in \Delta_{p+1,q+1} \mid e_{\pm}\cdot v = 0 \}$. It follows that for every $v \in \Delta_{p+1,q+1}$ there is a unique $w \in \Delta_{p+1,q+1}$ such that $v=e_+ w + e_- w$, leading to a decomposition
\begin{align}
\Delta_{p+1,q+1} = Ann(e_+) \oplus Ann(e_-). \label{fs}
\end{align}
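Existence and uniqueness of $w$ follow from a short Clifford computation: since $e_\pm$ are lightlike with $\langle e_+,e_-\rangle_{p+1,q+1}=1$, the Clifford relations give $e_\pm\cdot e_\pm=0$ and $e_+e_-+e_-e_+=-2$, hence $(e_++e_-)^2=-2$. Thus $w:=-\frac{1}{2}(e_++e_-)\cdot v$ satisfies
\begin{align*}
e_+w+e_-w=(e_++e_-)\cdot w=-\tfrac{1}{2}(e_++e_-)^2\cdot v=v,
\end{align*}
and $w$ is unique since $e_++e_-$ is invertible in the Clifford algebra.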
$Ann(e_{\pm})$ is acted on by $Spin(p,q) \hookrightarrow Spin(p+1,q+1)$ and there is an isomorphism $\chi: Ann(e_-) \rightarrow \Delta_{p,q}$ of $Spin(p,q)$-modules leading to the identification
\begin{equation}
\begin{aligned} \label{deco}
\Pi: {\Delta_{p+1,q+1}}_{|Spin(p,q)} & \rightarrow \Delta_{p,q} \oplus \Delta_{p,q}, \\
v=e_+w+e_-w & \mapsto (\chi(e_-e_+w),\chi(e_-w))
\end{aligned}
\end{equation}
Spinors are related to forms by squaring, cf. \cite{cor,nc}: For $n=r+s$ we\footnote{For the moment we change the notation from $(p,q)$ to $(r,s)$ because we will later apply these results in conformal geometry, where $(r,s)=(p,q)$ and $(r,s)=(p+1,q+1)$.} define
\begin{equation}\label{6}
\begin{aligned}
\Gamma^k: \Delta_{r,s} \times \Delta_{r,s} \rightarrow \Lambda^k_{r,s}\text{, }(\chi_1,\chi_2) \mapsto \alpha_{\chi_1,\chi_2}^k,\text{ where} \\
\langle \alpha_{\chi_1,\chi_2}^k,\alpha \rangle_{r,s} := d_{k,r} \left( \langle \alpha \cdot \chi_1, \chi_2 \rangle_{\Delta_{r,s}}\right) \text{ } \forall \alpha \in \Lambda^k_{r,s}.
\end{aligned}
\end{equation}
The map $d_{k,r}: \mathbb{K} \rightarrow \mathbb{K}$ is the identity for $\mathbb{K}=\R$, whereas for $\mathbb{K}=\C$ it is defined as follows: One finds for complex spinors $\chi \in \De^{\C}$ that $\langle \alpha \cdot \chi, \chi \rangle_{\Delta_{r,s}^{\C}}$ is either real or purely imaginary. This depends on $(r,s)$ and $k$ as well as the chosen representation and admissible scalar product, but not on $\chi$. One then chooses $d_{k,r} \in \{Re, Im\}$ so that $\alpha_{\chi}:=\alpha_{\chi,\chi}^k$ is indeed a real form and, if possible, nontrivial. It is obvious that the algebraic Dirac form $\alpha_{\chi}^k$ is explicitly given by the formula
\begin{align}
\alpha_{\chi}^k = \sum_{1 \leq i_1 < i_2 <...<i_k \leq n} \epsilon_{i_1}...\epsilon_{i_k} \cdot d_{k,r} \left(\langle e_{i_1}\cdot...\cdot e_{i_k}\cdot \chi, \chi \rangle_{\Delta_{r,s}}\right) e^{\flat}_{i_1} \wedge...\wedge e^{\flat}_{i_k}. \label{bla}
\end{align}
For $k=1$ the vector $V_{\chi}:=\left(\alpha^1_{\chi}\right)^{\flat}$ is the Dirac current. The construction is nontrivial at least for $k=r$ since $\alpha^r_{\chi}=0 \Leftrightarrow \chi = 0$.
\subsection*{The twistor equation on spinors}
Let $(M,g)$ be a space- and time oriented, connected pseudo-Riemannian spin manifold of index $p$ and dimension $n=p+q \geq 3$. By $\Pe^g$ we denote the $SO^+(p,q)$-principal bundle of all space-and time-oriented pseudo-orthonormal frames. A spin structure of $(M,g)$ is then given by a $\lambda-$reduction $(\mathcal{Q}^g,f^g)$ of $\Pe^g$ to $Spin^+(p,q)$. The associated bundle $S^g:=\mathcal{Q}^g \times_{Spin(p,q)} \De$ is called the real or complex spinor bundle. Its elements are classes $[u,v]$. Fibrewise application of spinor algebra defines Clifford multiplication $\mu: T^*M \otimes S^g \rightarrow S^g$ and the Levi-Civita connection on $(M,g)$ lifts via $df^g$ and $\lambda^*$ to a connection $\widetilde{\omega}^g \in \Omega^1(\mathcal{Q}^g,\mathfrak{spin}(p,q))$ which in turn induces a covariant derivative $\nabla^{S^g}$ on $S^g$, locally given by the formula
\begin{align*}
\nabla^{S^g}_X \ph = X(\ph) + \frac{1}{2} \sum_{1 \leq k < l \leq n} \epsilon_k \epsilon_l g(\nabla^g_X s_k,s_l) s_ks_l \cdot \ph,
\end{align*}
for $\ph \in \Gamma(S^g)$ and $X \in \mathfrak{X}(M)$, where $s=(s_1,...,s_n)$ is any local pseudo-orthonormal frame. The composition of $\nabla^{S^g}$ with Clifford multiplication defines the Dirac operator $D^g : \Gamma(S^g) \rightarrow \Gamma(S^g)$, whereas performing $\nabla^{S^g}$ followed by orthogonal projection onto the kernel of Clifford multiplication gives rise to the twistor operator
\[ P^g : \Gamma(S^g) \stackrel{\nabla^{S^g}}{\rightarrow} \Gamma(T^*M \otimes S^g ) \stackrel{g}{\cong} \Gamma(TM \otimes S^g) \stackrel{\text{proj}_{\text{ker}\mu}}{\rightarrow} \Gamma(\text{ker } \mu). \]
Spinor fields $\ph \in \text{ker }P^g$ are called twistor spinors and they are equivalently characterized as solutions of the twistor equation
\[\nabla^{S^g}_X \ph + \frac{1}{n} X \cdot D^g \ph = 0 \text{ for all } X \in \mathfrak{X}(M). \]
$P^g$ is conformally covariant: Letting $\widetilde{g}=e^{2 \sigma}g$ be a conformal change of the metric, it holds (cf. \cite{bfkg}) that $P^{\widetilde{g}} \widetilde{\varphi} = e^{-\frac{\sigma}{2}} \left( P^g(e^{-\frac{\sigma}{2}} \varphi) \right) \widetilde{}$. In particular, $\varphi \in \Gamma(S^g)$ is a twistor spinor with respect to $g$ if and only if the rescaled spinor $e^\frac{\sigma}{2} \widetilde{\varphi} \in \Gamma(S^{\widetilde{g}})$ is a twistor spinor with respect to $\widetilde{g}$, where $\widetilde{}:S^g \rightarrow S^{\widetilde{g}}$ denotes the natural identification of the spinor bundles, see \cite{ba81}.
\subsection*{Conformally invariant formulation in terms of tractors}
As twistor spinors are in fact objects of conformal geometry, \cite{baju,leihabil,cs,feh} have developed a framework describing twistor spinors when one is only given a conformal class $c=[g]$ instead of a single metric $g \in c$. As a preparation for this, recall that for $G$ an arbitrary Lie group with closed subgroup $P$, a Cartan geometry of type $(G,P)$ on a smooth manifold $M$ of dimension dim$(G/P)$ is specified by the data $(\mathcal{G} \rightarrow M, \omega)$, where $\mathcal{G}$ is a $P-$principal bundle over $M$ and $\omega \in \Omega^1(\mathcal{G},\mathfrak{g})$, called the Cartan connection, is $Ad$-equivariant wrt. the $P-$action, reproduces the generators of fundamental vector fields and gives a pointwise linear isomorphism $T_u \mathcal{G} \cong \mathfrak{g}$. The $P-$bundle $G \rightarrow G/P$ together with the Maurer-Cartan form of $G$ serves as the flat and homogeneous model. As a Cartan connection does not single out a connection in the sense of a right-invariant horizontal distribution in $\mathcal{G}$, it is convenient to pass to the enlarged principal $G-$bundle $\overline{\mathcal{G}}:= \mathcal{G} \times_P G$, on which $\omega$ induces a principal bundle connection $\overline{\omega}$, uniquely determined by $\iota^* \overline{\omega} = \omega$, where $\iota: \mathcal{G} \hookrightarrow \overline{\mathcal{G}}$ is the canonical inclusion. For a detailed introduction to Cartan geometries, we refer to \cite{sharp,cs}. \\
Applied to our setting, let $(M,c)$ be a connected, space- and time-oriented conformal manifold of signature $(p,q)$ and dimension $n=p+q \geq 3$. It is well known that $c$ is equivalently, in the sense of \cite{cs}, encoded in a Cartan geometry $(\mathcal{P}^1 \rightarrow M, \omega^{nc})$ naturally associated to it via a construction called the first prolongation of a conformal structure, cf. \cite{baju,cs}. In this case, the group $G$ is given by $G=SO^+(p+1,q+1)$ and the parabolic subgroup $P=Stab_{\R^+e_-}G$ is realized as the stabilizer of the lightlike ray $\R^+e_-$ under the natural $G-$action on $\R^{p+1,q+1}$. The homogeneous model is then given by $G/P \cong S^p \times S^q$ equipped with the obvious signature $(p,q)-$conformal structure. $\omega^{nc} \in \Omega^1(\mathcal{P}^1,\mathfrak{g})$ is called the normal conformal Cartan connection, and given $\mathcal{P}^1$, it is uniquely determined by the normalization condition $\partial^* \Omega^{nc}=0$ on its curvature $\Omega^{nc}:\mathcal{P}^1 \rightarrow Hom(\Lambda^2 \R^n, \mathfrak{so}(p+1,q+1))$, where $\partial^*$ denotes the Kostant codifferential, cf. \cite{cs}.\\
Given the standard $G-$action on $\R^{p+1,q+1}$, we obtain the associated standard tractor bundle $\mathcal{T}(M):=\mathcal{P}^1 \times_P \R^{p+1,q+1} = \overline{\mathcal{P}}^1 \times_G \R^{p+1,q+1}$, on which $\overline{\omega}^{nc}$ induces a covariant derivative $\nabla^{nc}$. This derivative is metric wrt. the bundle metric $\langle \cdot, \cdot \rangle_{\mathcal{T}}$ on $\mathcal{T}(M)$ induced by the standard inner product on $\R^{p+1,q+1}$, and $\nabla^{nc}$ is therefore viewed as the conformal analogue of the Levi-Civita connection, making it reasonable to define the conformal holonomy of $(M,c)$ for $x \in M$ to be
\[ Hol_x(M,c):=Hol_x(\nabla^{nc}) \subset SO^+(\mathcal{T}_x(M), \langle \cdot , \cdot \rangle_{\mathcal{T}}) \cong SO^+(p+1,q+1). \]
By means of a metric in the conformal class, the conformally invariant objects introduced so far admit a more concrete description. Concretely, any fixed $g \in c$ induces a so-called Weyl-structure in the sense of \cite{cs} and leads to a $SO^+(p,q) \hookrightarrow G$-reduction $\sigma^g: \mathcal{P}^g \rightarrow \mathcal{P}^1$. Here, $\mathcal{P}^g$ denotes the orthonormal frame bundle for $(M,g)$. It follows with the decomposition (\ref{sp1}) that there is a $g-$metric splitting of the tractor bundle
\begin{align}
\mathcal{T}(M) \stackrel{\Phi^g}{\cong} \underline{\R} \oplus TM \oplus \underline{\R}, \label{phig}
\end{align}
under which tractors correspond to elements $(\alpha,X,\beta)$ and the tractor metric takes the form
\begin{align} \langle (\alpha_1, Y_1, \beta_1), (\alpha_2, Y_2, \beta_2) \rangle_{\mathcal{T}} = \alpha_1 \beta_2 + \alpha_2 \beta_1 + g(Y_1,Y_2). \label{bum} \end{align}
The metric description of the tractor connection $\nabla^{nc}$, i.e. $\Phi^g \circ \nabla^{nc} \circ (\Phi^g)^{-1}$ is (cf. \cite{baju})
\begin{align} \nabla_X^{nc} \begin{pmatrix} \alpha \\ Y \\ \beta \end{pmatrix} = \begin{pmatrix} X(\alpha) + K^g(X,Y) \\ \nabla_X^g Y + \alpha X - \beta K^g(X)^{\sharp} \\ X(\beta) - g(X,Y) \end{pmatrix}, \label{trad} \end{align}
where $K^g := \frac{1}{n-2} \cdot \left( \frac{scal^g}{2(n-1)} \cdot g - Ric^g \right)$ is the Schouten tensor.\\
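For instance, if $g \in c$ happens to be Einstein, i.e. $Ric^g = \frac{scal^g}{n} \cdot g$, the Schouten tensor reduces to the multiple
\begin{align*}
K^g = \frac{1}{n-2} \left( \frac{scal^g}{2(n-1)} - \frac{scal^g}{n} \right) \cdot g = -\frac{scal^g}{2n(n-1)} \cdot g
\end{align*}
of the metric, which considerably simplifies the expression (\ref{trad}) for $\nabla^{nc}$.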
\newline
Conformal Cartan geometry allows a conformally invariant construction of the twistor operator $P^g$. To this end, suppose that $(M,c)$ is additionally spin for one - and hence for all - $g \in c$. Then the above construction admits a lift to a conformal spin Cartan geometry $(\mathcal{Q}^1, \widetilde{\omega}^{nc})$ of type $(\widetilde{G}:=Spin^+(p+1,q+1),\widetilde{P}:=\lambda^{-1}(P))$ with associated spin tractor bundle
\[ \mathcal{S}=\mathcal{S}(M):= \mathcal{Q}^1 \times_{\widetilde{P}} \Delta_{p+1,q+1}^{\R}, \]
on which $\mathcal{T}(M)$ acts by fibrewise Clifford multiplication and $\widetilde{\omega}^{nc}$ induces a covariant derivative $\nabla^{\mathcal{S}}$ on $\mathcal{S}$.
Fixing a metric $g \in c$ leads to a $Spin^+(p,q) \hookrightarrow Spin^+(p+1,q+1)$-reduction $\widetilde{\sigma}^g: \mathcal{Q}^g \rightarrow \mathcal{Q}^1$ which covers $\sigma^g$. We let $\overline{\mathcal{Q}^1}$ denote the enlarged $Spin^+(p+1,q+1)$-principal bundle and use $g$ to identify $\mathcal{S}(M) \cong \mathcal{Q}^g \times_{Spin^+(p,q)} \Delta_{p+1,q+1}$.
Together with the isomorphism (\ref{deco}), this leads to the $g-$metric identification
\begin{equation} \label{gdg}
\begin{aligned}
\widetilde{\Phi}^g: \mathcal{S}(M) &\rightarrow S^g(M) \oplus S^g(M), \\
[\widetilde{\sigma}^g(\widetilde{s}^g),v] &\mapsto [\widetilde{s}^g,\Pi(v)]
\end{aligned}
\end{equation}
with projections $proj^g_{\pm}$ to the annihilation spaces. One calculates that under (\ref{gdg}), $\nabla^{nc}$ is given by the expression (cf. \cite{baju})
\begin{align*}
\nabla^{nc}_X \begin{pmatrix} \ph \\ \phi \end{pmatrix} = \begin{pmatrix} \nabla_X^{S^g} & -X \cdot \\ \frac{1}{2}K^g(X) \cdot & \nabla^{S^g}_X \end{pmatrix} \begin{pmatrix} \ph \\ \phi \end{pmatrix}.
\end{align*}
As every twistor spinor $\ph \in \text{ker }P^g$ satisfies $\nabla^{S^g}_X D^g \ph = \frac{n}{2}K^g(X) \cdot \ph$, cf. \cite{bfkg}, this yields a reinterpretation of twistor spinors in terms of conformal Cartan geometry.
Namely for any $g \in c$, the vector spaces ker $P^g$ and parallel sections in $\mathcal{S}(M)$ wrt. $\nabla^{nc}$ are naturally isomorphic via
\begin{align*}
\text{ker }P^g \rightarrow \Gamma(S^g(M) \oplus S^g(M)) \stackrel{\left(\widetilde{\Phi}^g\right)^{-1}}{\cong } \Gamma(\mathcal{S}(M))\text{, }
\ph \mapsto \begin{pmatrix} \ph \\ -\frac{1}{n}D^g \ph \end{pmatrix} \stackrel{\left(\widetilde{\Phi}^g\right)^{-1}}{\mapsto} \psi \in Par(\mathcal{S}_{\mathcal{T}}(M), \nabla^{nc}),
\end{align*}
i.e. a spin tractor $\psi \in \Gamma(\mathcal{S}(M))$ is parallel iff for one (and hence for all $g \in c$), it holds that $\ph:= \widetilde{\Phi}^g({proj}_+^g \psi) \in \text{ker }P^g$ and in this case $D^g \ph = -n \cdot \widetilde{\Phi}^g({proj}_-^g \psi)$.\\
In terms of conformal holonomy, the space of twistor spinors is thus in bijective correspondence to the space of spinors fixed by the lift of the conformal holonomy representation to $Spin^+(p+1,q+1)$, i.e. in the simply-connected case we have for $x \in M$ that
\begin{align} \label{sytr}
\text{ker }P^g \cong \{v \in \mathcal{S}_x \cong \Delta_{p+1,q+1} \mid \lambda_*^{-1}(\mathfrak{hol}_x(M,[g])) \cdot v = 0 \}.
\end{align}
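As an extreme case of (\ref{sytr}): if $(M,c)$ is simply-connected and conformally flat, the normal conformal Cartan connection is flat, $\mathfrak{hol}_x(M,[g])$ is trivial, and (\ref{sytr}) shows that
\begin{align*}
\text{dim ker }P^g = \text{dim } \Delta_{p+1,q+1},
\end{align*}
i.e. the space of twistor spinors attains its maximal possible dimension.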
\subsection*{The twistor equation on forms}
There is a canonical way of associating other parallel tractors to a twistor spinor. To this end, we introduce the tractor $(k+1)$-form bundle $\Lambda^{k+1}_{\mathcal{T}}(M):= \mathcal{P}^1 \times_P \Lambda^{k+1}_{p+1,q+1}$ on which again $\omega^{nc}$ induces a covariant derivative $\nabla^{nc} : \Gamma(\Lambda^{k+1}_{\mathcal{T}} (M)) \rightarrow \Gamma(T^{*}M \otimes \Lambda^{k+1}_{\mathcal{T}} (M))$. Fixing $g \in c$ allows us to describe tractor forms in terms of usual differential forms with the help of the following algebraic construction, using the decomposition $\R^{p+1,q+1} \cong \R e_- \oplus \R^{p,q} \oplus \R e_+$. Clearly, every form $\alpha \in \Lambda_{p+1,q+1}^{k+1}$ decomposes into
\begin{align}
\alpha = e_+^{\flat} \wedge \alpha_+ + \alpha_0 + e^{\flat}_- \wedge e_+^{\flat} \wedge \alpha_{\mp} + e_-^{\flat} \wedge \alpha_- \label{mind}
\end{align}
for uniquely determined forms $\alpha_-,\alpha_+ \in \Lambda^k_{p,q}, \alpha_0 \in \Lambda^{k+1}_{p,q}$ and $\alpha_{\mp } \in \Lambda^{k-1}_{p,q}$. Using this decomposition, the restriction of the standard action $O(p+1,q+1) \rightarrow GL\left(\Lambda^{k+1}_{p+1,q+1}\right)$ to $O(p,q) {\hookrightarrow} O(p+1,q+1)$ defines an isomorphism of $O(p,q)$-modules,
\begin{align*} \Lambda^{k+1}_{p+1,q+1} \cong \Lambda^k_{p,q} \oplus \Lambda^{k+1}_{p,q} \oplus \Lambda^{k-1}_{p,q} \oplus \Lambda^k_{p,q}.
\end{align*}
This gives the $g$-metric representation of the tractor $(k+1)$-form bundle:
\begin{align*}
\Phi_{\Lambda}^g: \Lambda^{k+1}_{\mathcal{T}} (M) \stackrel{g}{\rightarrow} \Lambda^{k}(M) \oplus \Lambda^{k+1}(M) \oplus \Lambda^{k-1}(M) \oplus \Lambda^{k}(M).
\end{align*}
Applying this pointwise yields that each tractor $(k+1)$-form $\alpha \in \Omega^{k+1}_{\mathcal{T}}(M):=\Gamma\left(\Lambda^{k+1}_{\mathcal{T}}(M)\right)$ uniquely corresponds via $g \in c$ to a set of differential forms,
\begin{align} \Phi_{\Lambda}^g\left( \alpha \right) = (\alpha_+ , \alpha_0 , \alpha_{\mp}, \alpha_-) \in \Omega^k(M) \oplus \Omega^{k+1}(M) \oplus \Omega^{k-1}(M) \oplus \Omega^{k}(M). \label{dr}
\end{align}
We further introduce the $g$-dependent projections
\begin{align*}
proj^g_{\Lambda,+}: \Omega_{\mathcal{T}}^{k+1}(M) &\rightarrow \Omega^k(M) \\
\alpha & \mapsto \alpha_+\text{, where } \Phi_{\Lambda}^g\left( \alpha \right) = (\alpha_+ , \alpha_0 , \alpha_{\mp}, \alpha_-)
\end{align*}
The operator $ \Phi_{\Lambda}^g \circ \nabla^{nc} \circ \left( \Phi_{\Lambda}^g \right)^{-1}$ satisfies
\begin{align} \label{stu}
\nabla_X^{nc} \alpha \stackrel{g}{=} \begin{pmatrix} \nabla^{g}_X & - X \invneg & -X^{\flat} \wedge & 0 \\ -K^g(X) \wedge & \nabla^{g}_X & 0 & X^{\flat} \wedge \\ - \left(K^g(X)\right)^{\sharp} \invneg & 0 & \nabla^{g}_X & -X \invneg \\ 0 & \left(K^g(X)\right)^{\sharp} \invneg & -K^g(X) \wedge & \nabla^{g}_X \end{pmatrix} \begin{pmatrix} \alpha_+ \\ \alpha_0 \\ \alpha_{\mp} \\ \alpha_- \end{pmatrix}.
\end{align}
Finally, let $\alpha \in \Omega^{k+1}_{\mathcal{T}}(M)$ be a tractor $(k+1)-$form on $(M,c)$. Fix $g \in c$ and $\widetilde{g}= e^{2 \sigma} g \in c$ and let $\alpha_+={proj}^g_{\Lambda,+} \alpha$, $\widetilde{\alpha}_+={proj}^{\widetilde{g}}_{\Lambda,+} \alpha \in \Omega^k(M)$. These forms are related by (cf. \cite{nc})
\begin{align}
\widetilde{ \alpha}_+ = e^{(k+1) \sigma} \alpha_+. \label{wit}
\end{align}
The link between the above tractor forms and twistor spinors is given as follows: First, let $\ph_{1,2} \in \Gamma(S^g)$ and $\psi_{1,2} \in \Gamma(\mathcal{S}(M))$ be arbitrary spinor fields. The algebraic construction (\ref{6}) can be made global by defining the following forms $\alpha_{\psi_1,\psi_2}^k \in \Omega^{k}_{\mathcal{T}}(M)$ and $\alpha^k_{\ph_1,\ph_2} \in \Omega^k(M)$ for every $k \in \mathbb{N}$:
\begin{equation} \label{3245}
\begin{aligned}
\begin{array}{llllllll}
\langle \alpha_{\psi_1,\psi_2}^k , \alpha \rangle_{\mathcal{T}} &:=& d_{k,p+1}& \left(\langle \alpha \cdot \psi_1 , \psi_2 \rangle_{\mathcal{S}} \right)& \forall & \alpha & \in & \Omega_{\mathcal{T}}^k(M), \\
g ( \alpha_{\ph_1,\ph_2}^k , \alpha ) &:=& d_{k,p}& \left(\langle \alpha \cdot \ph_1 , \ph_2 \rangle_{S^g} \right)& \forall & \alpha & \in & \Omega^k(M).
\end{array}
\end{aligned}
\end{equation}
It is straightforward to check that for $\psi_{1,2} \in Par (\mathcal{S}(M), \nabla^{nc})$ and $k \in \mathbb{N}$, the tractor $k-$form $\alpha^{k}_{\psi_1,\psi_2}$ is parallel wrt. $\nabla^{nc}$.\\
\newline
More generally, we call every parallel tractor $(k+1)$-form $\alpha \in Par(\Lambda^{k+1}_{\mathcal{T}}(M),\nabla^{nc}) \subset \Omega^{k+1}_{\mathcal{T}}(M)$, i.e. $\nabla^{nc} \alpha = 0$, a twistor-$(k+1)$-form.
Using (\ref{stu}), \cite{nc} calculates that $\nabla^{nc} \alpha = 0$ implies for one - and hence for all - $g \in c$ the conformally covariant condition
\begin{align} \label{soga}
\Phi^g_{\Lambda}(\alpha) = \left(\alpha_+,\frac{1}{k+1}d\alpha_+,-\frac{1}{n-k+1}d^*\alpha_+,\Box_k \alpha_+ \right),
\end{align}
whereby we have set
\begin{align*} \Box_k :=\begin{cases} \frac{1}{n-2k} \left( - \frac{scal^g}{2(n-1)} + \nabla^* \nabla \right), & n \neq 2k, \\
\frac{1}{n} \left(\frac{1}{k+1} (d^*d + dd^*)+\sum_{i=1}^n \epsilon_i \left(s_i \invneg (K^g(s_i)^{\flat} \wedge \cdot ) - s_i^{\flat} \wedge (K^g(s_i) \invneg \cdot )\right)\right), & n = 2k. \end{cases}
\end{align*}
Here, $s=(s_1,...,s_n)$ is a local section of $\Pe^g$ and $\nabla^*$ denotes the formal adjoint of $\nabla = \nabla^g$. \\
That is, $\nabla^{nc}\alpha = 0$ translates via (\ref{soga}) into a differential system for $\alpha_+$ only, and we call $\alpha_+ \in \Omega^k (M)$ arising in this way a normal conformal Killing $k$-form (or nc-Killing form, for short). We denote the set of these forms by $\Omega_{nc,g}^k(M)$. Considering only the first equation in (\ref{stu}) leads to conformal Killing forms (cf. \cite{sem}). A conformal Killing form which is coclosed for some metric $g \in c$ is called a Killing form for $(M,g)$. In summary, each $g \in c$ leads to a natural isomorphism
\begin{align*}
Par\left(\Lambda^{k+1}_{\mathcal{T}}(M),\nabla^{nc}\right) \ni \alpha \mapsto {proj}_{\Lambda,+}^g (\alpha) \in \Omega^k_{nc,g}(M),
\end{align*}
where the inverse is given by $\alpha_+ \mapsto \left(\Phi^g_{\Lambda}\right)^{-1} \left(\alpha_+,\frac{1}{k+1}d\alpha_+,-\frac{1}{n-k+1}d^*\alpha_+,\Box_k \alpha_+ \right)$.\\
\newline
We turn again to twistor spinors. Let $\psi \in Par\left(\mathcal{S}(M),\nabla^{nc}\right)$, $g \in c$ and $\ph:=\widetilde{\Phi}^g \left( {proj}^g_+ \psi \right) \in \text{ker }P^g$. It has been shown in \cite{ns} that there are constants $c^i_{k,p} \neq 0$ for $i=1,2$ such that
\begin{align}
{proj}_{\Lambda,+}^g \left(\alpha^{k+1}_{\psi} \right) = c^1_{k,p} \cdot \alpha^k_{\ph} \text{ and }{proj}_{\Lambda,-}^g \left(\alpha^{k+1}_{\psi} \right)=c^2_{k,p} \cdot \alpha^k_{D^g\ph}. \label{tuffi}
\end{align}
In particular, (\ref{tuffi}) reveals that for every twistor spinor $\ph \in \text{ker }P^g$, the forms $\alpha^k_{\ph}$ are nc-Killing forms. Together with the conformal transformation behaviour of $\ph$ and $\alpha_+$ under a change $\widetilde{g}=e^{2 \sigma}g$, this may be visualized in the following commutative diagram:
\begin{center}
\boxed{
\begin{aligned}
\begin{xy}
\xymatrix{
\ph \in \text{ker }P^g \ar[r]^{\left(\widetilde{\Phi}^g \circ {proj}^g_+ \right)^{-1}} \ar[dd]^{\text{nc-Killing}} & \psi \in Par(\mathcal{S}(M),\nabla^{nc}) \ar[r]^{\widetilde{\Phi}^{\widetilde{g}} \circ {proj}^{\widetilde{g}}_+} \ar[dd]^{\text{twistor form}} & e^{\frac{\sigma}{2}}\widetilde{\ph} \in \text{ker }P^{\widetilde{g}} \ar[dd]^{\text{nc-Killing}}\\
& & \\
c^1_{k,p} \cdot\alpha^k_{\ph} \in \Omega_{nc,g}^k(M) \ar[r]^{ \left({proj}^g_{\Lambda,+} \right)^{-1} } & \alpha^{k+1}_{\psi} \in \Omega^{k+1}_{\mathcal{T}}(M)\ar[r]^{{proj}^{\widetilde{g}}_{\Lambda,+}} & c^1_{k,p}\cdot e^{(k+1)\sigma}\alpha_{\ph}^k \in \Omega_{nc,\widetilde{g}}^k(M) \\
}
\end{xy}
\end{aligned}}
\end{center}
For the special case of a twistor 2-form $\alpha \in \Omega^2_{\mathcal{T}}(M)$, the vector field $V_{\alpha}:=V_{\alpha_+}:=\left({proj}_{\Lambda,+}^g \alpha \right)^{\sharp}$, which is independent of $g \in c$, is a normal conformal vector field, i.e. the dual of a nc-Killing 1-form. We denote the space of all normal conformal vector fields on $(M,g)$ by $\mathfrak{X}^{nc}(M)$. \cite{raj} shows that a vector field $V$ is normal conformal if and only if it is conformal, $V \in \mathfrak{X}^c(M)$, i.e. $L_Vg = \lambda \cdot g$, and in addition satisfies
\begin{equation}
\begin{aligned}
V \invneg W^g = 0,\text{ }V \invneg C^g = 0,
\end{aligned}
\end{equation}
where $W^g$ and $C^g$ are the Weyl- and Cotton-York tensor, respectively.
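In particular, if $(M,g)$ is conformally flat, then $W^g = 0$ and $C^g = 0$, so the above conditions are vacuous and every conformal vector field is normal conformal:
\begin{align*}
\mathfrak{X}^{nc}(M) = \mathfrak{X}^{c}(M).
\end{align*}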
\section{The general construction of tractor conformal superalgebras} \label{cts}
Let $(M^{1,n-1},c)$ be a connected, oriented and time-oriented Lorentzian conformal spin manifold. Here, when dealing with spinor- and spin tractor bundles, we always mean the \textit{complex ones}, i.e. $\mathcal{S}(M)$ or $S^g(M)$ are obtained as associated vector bundles to $\mathcal{Q}^1$ or $\mathcal{Q}^g$ using $\Delta_{2,n}^{\C}$ or $\Delta^{\C}_{1,n-1}$, respectively.
The previous section revealed that distinguished classical and spinorial conformal symmetries of $(M,c)$ can be described in a conformally invariant way in terms of parallel tractors. For the construction of a superalgebra canonically associated to $(M,c)$, we therefore set
\begin{align*}
\g_0 &:= {Par}\left(\Lambda_{\mathcal{T}}^2(M), \nabla^{nc}\right) \subset \Omega^2_{\mathcal{T}}(M).\\
\g_1 &:= {Par}\left(\mathcal{S}(M), \nabla^{nc}\right) \subset \Gamma\left(\mathcal{S}(M)\right).
\end{align*}
By means of $g \in c$, $\g_0$ and $\g_1$ correspond to normal conformal vector fields and twistor spinors, respectively. We now introduce natural brackets which make $\g=\g_0 \oplus \g_1$ become a superalgebra:\\
\newline
For the even-even bracket, we first globalize the isomorphism $\mathfrak{so}(2,n) \cong \Lambda^2_{2,n}$ to obtain
\begin{equation}\label{suz}
\begin{aligned}
\tau : \Omega^2_{\mathcal{T}}(M) &\rightarrow \mathfrak{so}(\mathcal{T}(M),\langle \cdot, \cdot \rangle_{\mathcal{T}}),\\
\alpha & \mapsto \alpha_E \text {, }\alpha_E(X):=(X \invneg \alpha)^{\sharp}.
\end{aligned}
\end{equation}
$ \mathfrak{so}(\mathcal{T}(M),\langle \cdot, \cdot \rangle_{\mathcal{T}})$ carries the pointwise defined usual Lie bracket of endomorphisms. We set for $\alpha, \beta \in \g_0$
\begin{align*}
[\alpha, \beta]:= \tau^{-1}\left( \alpha_E \circ \beta_E - \beta_E \circ \alpha_E \right).
\end{align*}
Moreover, $\nabla^{nc}$ induces a covariant derivative $\nabla^{nc}$ on $ \mathfrak{so}(\mathcal{T}(M),\langle \cdot, \cdot \rangle_{\mathcal{T}})$ in a natural way.
\begin{Proposition}
For $\alpha,\beta \in \g_0$ we have that also $[\alpha,\beta] \in \g_0$.
\end{Proposition}
\textit{Proof. }We first show that $\alpha \in {Par}\left(\Lambda_{\mathcal{T}}^2(M),\nabla^{nc} \right) \Leftrightarrow \alpha_E \in {Par}\left(\mathfrak{so}(\mathcal{T}(M),\langle \cdot, \cdot \rangle_{\mathcal{T}}),\nabla^{nc} \right)$: Let $X \in \mathfrak{X}(M)$, $x \in M$ and let $(v_0,...,v_{n+1})$ be a local frame in $\mathcal{T}M$ which is parallel in $x$ wrt. $\nabla^{nc}$. We have for $i \in \{0,...,n+1 \}$ at $x$:
\begin{align*}
\left(\nabla^{nc}_X \alpha_E \right)(v_i) & = \nabla^{nc}_X \left( \alpha_E (v_i) \right)= \nabla^{nc}_X {\left( v_i \invneg \alpha \right)}^{\sharp} = \left( \nabla^{nc}_X \left( v_i \invneg \alpha \right)\right)^{\sharp} = (v_i \invneg \nabla^{nc}_X \alpha)^{\sharp},
\end{align*}
which proves this claim. Thus, it suffices to check that for $\alpha, \beta \in \g_0$ also $[\alpha_E, \beta_E]_{\mathfrak{so}} \in {Par}\left(\mathfrak{so}(\mathcal{T}(M),\langle \cdot, \cdot \rangle_{\mathcal{T}}),\nabla^{nc} \right)$. We compute with the same notations as above at $x$:
\begin{align*}
\left(\nabla_X^{nc} \left([\alpha_E, \beta_E]_{\mathfrak{so}} \right) \right)(v_i) &= \nabla_X^{nc} \left([\alpha_E, \beta_E]_{\mathfrak{so}}(v_i) \right) - [\alpha_E, \beta_E]_{\mathfrak{so}} \left( \nabla_X^{nc} v_i \right) \\
&=\nabla_X^{nc} \left( \alpha_E (\beta_E (v_i))-\beta_E (\alpha_E(v_i)) \right) = \alpha_E (\beta_E (\nabla_X^{nc}v_i))-\beta_E (\alpha_E (\nabla_X^{nc}v_i))\\
&=0
\end{align*}
This proves the Proposition.
$\hfill \Box$\\
Clearly, $\g_0$ now becomes a Lie algebra in the usual sense. We shall show in the next section that the chosen bracket is \textit{the right one} in the sense that if $\alpha, \beta$ are considered as normal conformal vector fields for some fixed $g \in c$ by means of $\left( {proj}^g_{\Lambda,+} \alpha \right)^{\sharp}$, then $[ \cdot, \cdot ]$ translates into the usual Lie bracket of vector fields. \\
\newline
As a next step we define the odd-odd bracket, which by definition has to be a symmetric bilinear map $\g_1 \times \g_1 \rightarrow \g_0$. A nontrivial way to obtain a parallel tractor 2-form from two parallel spin tractors is given by the parallel tractor form (\ref{3245}), i.e.
\begin{align*}
[ \cdot , \cdot ] : \g_1 \times \g_1 \rightarrow \g_0 \text{ , } (\psi_1,\psi_2) \mapsto \alpha_{\psi_1,\psi_2}^2.
\end{align*}
In signature $(2,n)$, the form $\alpha_{\psi_1,\psi_2}^2$ is given as follows: One observes that $\langle \alpha \cdot \psi, \psi \rangle_{\Delta_{2,n}^{\C}} \in i\R$ for $\psi \in \Delta_{2,n}^{\C}, \alpha \in \Lambda^2_{2,n}$. (\ref{3245}) thus yields that
\begin{align}
\langle \alpha_{\psi_1,\psi_2}^2, \alpha \rangle_{\mathcal{T}} = \text{Im }\langle \alpha \cdot \psi_1, \psi_2 \rangle_{\mathcal{S}}\text{, }\alpha \in \Omega^2_{\mathcal{T}}(M). \label{impa}
\end{align}
$\alpha_{\psi_1,\psi_2}^2$ is then symmetric in $\psi_1$ and $\psi_2$.\\
\newline
It remains to introduce an even-odd-bracket. We set
\begin{align*}
[ \cdot , \cdot ] : \g_0 \times \g_1 \rightarrow \g_1 \text{ , } (\alpha ,\psi) \mapsto \frac{1}{2} \alpha \cdot \psi.
\end{align*}
The meaning of the factor $\frac{1}{2}$ will become clear in a moment. It follows directly from $\nabla_X^{nc} (\alpha \cdot \psi) = (\nabla_X^{nc}\alpha) \cdot \psi + \alpha \cdot \nabla_X^{nc} \psi$ that this map is well-defined, i.e. the image lies again in $\g_1$. Moreover, in order to obtain the right symmetry relations, we must set
\begin{align*}
[ \cdot , \cdot ] : \g_1 \times \g_0 \rightarrow \g_1 \text{ , } (\psi ,\alpha) \mapsto -\frac{1}{2} \alpha \cdot \psi.
\end{align*}
With these choices of $\g_0, \g_1$ and definitions of the brackets, we have associated a nontrivial (real) conformal superalgebra to the conformal structure (where $\g_1$ is considered as a \textit{real} vector space).
\begin{definition}
The (real) superalgebra $\g=\g_0 \oplus \g_1$ associated to $(M^{1,n-1},c)$ is called the tractor conformal superalgebra (associated to $(M,c)$).
\end{definition}
It is natural to ask under which circumstances the construction produces a \textit{Lie} superalgebra, i.e. we have to check the four Jacobi identities from (\ref{grj}). As $\g_0$ is a Lie algebra in its own right, the even-even-even Jacobi identity is always satisfied.
\begin{Proposition}
The tractor conformal superalgebra associated to a Lorentzian conformal spin manifold satisfies the even-even-odd and the even-odd-odd Jacobi identity.
\end{Proposition}
\textit{Proof. }By (\ref{grj}) we have to check that
\begin{align*}
[\alpha,[\beta,\psi]]\stackrel{!}{=}[[\alpha,\beta],\psi]+[\beta,[\alpha,\psi]] \text{ }\forall \alpha,\beta \in \g_0, \psi \in \g_1,
\end{align*}
which by definition of the brackets is equivalent to showing that
\begin{align*}
2 \cdot [\alpha,\beta] \cdot \psi \stackrel{!}{=} \alpha \cdot \beta \cdot \psi - \beta \cdot \alpha \cdot \psi,
\end{align*}
which is a purely algebraic identity at each point. Hence, for the proof we may assume that $\alpha, \beta \in \Lambda^2_{2,n}$ and $\psi \in \Delta_{2,n}^{\C}$. With respect to the standard basis of $\R^{2,n}$ we express
\begin{align*}
\alpha = \sum_{i<j} \epsilon_i \epsilon_j \alpha_{ij} e^{\flat}_i \wedge e^{\flat}_j \Rightarrow \alpha_E = \sum_{i<j} \epsilon_i \epsilon_j \alpha_{ij} E_{ij} \text{ and }
\beta = \sum_{k<l} \epsilon_k \epsilon_l \beta_{kl} e^{\flat}_k \wedge e^{\flat}_l \Rightarrow \beta_E = \sum_{k<l} \epsilon_k \epsilon_l \beta_{kl} E_{kl}.
\end{align*}
Here, $E_{kl}:= \epsilon_k D_{lk} - \epsilon_l D_{kl}$ with $k<l$ form a basis of the Lie algebra $\mathfrak{so}(2,n)$, where $D_{kl}$ denotes the matrix in $M(n+2,\R)=\mathfrak{gl}(n+2,\R)$ whose $(k,l)$ entry is 1 and all other entries are 0. The Lie algebra relations read
\begin{align}
[E_{ij},E_{kl}]_{\mathfrak{so}(2,n)}=\begin{cases} 0 & i=k, j=l\text{ or }i,j,k,l \text{ pairwise distinct,} \\ \epsilon_i E_{jl} & i=k, j \neq l, \\ -\epsilon_j E_{il} & j=k, i \neq l, \\
\end{cases} \label{so}
\end{align}
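For concreteness, such a commutator can be verified directly for $(i,j,k,l)=(1,2,2,3)$: using $D_{ab}D_{cd}=\delta_{bc}D_{ad}$ one computes $E_{12}E_{23}=\epsilon_2\epsilon_3 D_{13}$ and $E_{23}E_{12}=\epsilon_1\epsilon_2 D_{31}$, hence
\begin{align*}
[E_{12},E_{23}]_{\mathfrak{so}(2,n)} = \epsilon_2\epsilon_3 D_{13} - \epsilon_1\epsilon_2 D_{31} = -\epsilon_2 \left( \epsilon_1 D_{31} - \epsilon_3 D_{13} \right) = -\epsilon_2 E_{13}.
\end{align*}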
This shows that
{\allowdisplaybreaks
\begin{align*}
2 \cdot [\alpha, \beta] \cdot \psi & = 2 \cdot \tau^{-1}\left([\alpha_E,\beta_E]_{\mathfrak{so}(2,n)}\right) \cdot \psi = \sum_{i<j}\sum_{k<l} \epsilon_i \epsilon_j \epsilon_k \epsilon_l \alpha_{ij} \beta_{kl} \tau^{-1}\left(2 \cdot [E_{ij},E_{kl}]_{\mathfrak{so}(2,n)}\right) \cdot \psi \\
& = \sum_{i<j}\sum_{k<l} \epsilon_i \epsilon_j \epsilon_k \epsilon_l \alpha_{ij} \beta_{kl} [e_ie_j,e_ke_l]_{\mathfrak{spin}(2,n)} \cdot \psi = (\alpha \cdot \beta - \beta \cdot \alpha) \cdot \psi.
\end{align*}}
The even-odd-odd Jacobi identity is by polarization equivalent to $[\alpha,[\psi,\psi]]=[[\alpha,\psi],\psi]+[\psi,[\alpha,\psi]]$ for all $\alpha \in \g_0$ and $\psi \in \g_1$. By definition of the brackets, we have to show that
\begin{align}
\left[\alpha_E,\left(\alpha_{\psi}^2\right)_E\right]_{\mathfrak{so}(\mathcal{T}(M))}\stackrel{!}{=}\left(\frac{1}{2}\alpha^2_{\alpha \cdot \psi, \psi} + \frac{1}{2}\alpha^2_{\psi,\alpha \cdot \psi}\right)_E = \left(\alpha^2_{\alpha \cdot \psi, \psi}\right)_E. \label{67}
\end{align}
Again, this is pointwise a purely algebraic identity. Hence, it suffices to prove it for $\alpha \in \Lambda^2_{2,n}$ and $\psi \in \Delta^{\C}_{2,n}$. With respect to the standard basis of $\R^{2,n}$, we write $\alpha$ and $\alpha_E$ as above. Inserting the definition of $\alpha_{\psi}^2$ leads to
\begin{align} \label{w1}
\left[\alpha_E,\left(\alpha_{\psi}^2\right)_E \right]= \sum_{i<j}\sum_{k<l} \epsilon_i \epsilon_j \epsilon_k \epsilon_l \alpha_{ij} \cdot \text{Im }(\langle e_k \cdot e_l \cdot \psi, \psi \rangle_{\Delta_{2,n}^{\C}}) \cdot [E_{ij},E_{kl}],
\end{align}
whereas the right-hand side of (\ref{67}) is by definition given by
\begin{align}
\left(\alpha^2_{\alpha \cdot \psi, \psi}\right)_E & = \sum_{k < l} \epsilon_k \epsilon_l \text{Im }(\langle e_k \cdot e_l \cdot \alpha \cdot \psi, \psi \rangle_{\Delta_{2,n}^{\C}}) \cdot E_{kl} \notag \\
& = \sum_{i < j} \sum_{k < l} \epsilon_i \epsilon_j \epsilon_k \epsilon_l \alpha_{ij} \cdot \text{Im }(\langle e_k \cdot e_l \cdot e_i \cdot e_j \cdot \psi, \psi \rangle_{\Delta_{2,n}^{\C}}) \cdot E_{kl}. \label{w2}
\end{align}
Using the algebra relations for $\mathfrak{so}(2,n)$ from (\ref{so}), it is not difficult to show that every summand in (\ref{w1}) shows up also in (\ref{w2}) and vice versa:
\begin{itemize}
\item Consider summands with $i,j,k,l$ pairwise distinct. Clearly, they vanish in (\ref{w1}). On the other hand, $\langle e_k \cdot e_l \cdot e_i \cdot e_j \cdot \psi, \psi \rangle_{\Delta_{2,n}^{\C}} \in \R$, i.e. the summands also vanish in (\ref{w2}).
\item Consider summands with $i=k,j=l$. Again, they vanish in (\ref{w1}). In (\ref{w2}), these summands are proportional to $\langle \psi,\psi \rangle_{\Delta^{\C}_{2,n}} \in \R$, so the imaginary part vanishes.
\item Consider summands in (\ref{w2}) with $i=k$ and $j \neq l$. They lead to the expression $ -\epsilon_i \epsilon_j \epsilon_l \alpha_{ij} \text{Im }(\langle e_j \cdot e_l \cdot \psi, \psi \rangle_{\Delta^{\C}_{2,n}}) E_{il}$. In (\ref{w1}), these summands can be found by choosing $j=k$ and $i\neq l$, for which we get $[E_{ij},E_{kl}]=-\epsilon_j E_{il}$, and thus the summand $ -\epsilon_i \epsilon_j \epsilon_l \alpha_{ij} \text{Im }(\langle e_j \cdot e_l \cdot \psi, \psi \rangle_{\Delta^{\C}_{2,n}}) E_{il}$ also shows up in (\ref{w1}). The remaining cases are equivalent to this one after permuting the indices.
\end{itemize}
Consequently, the two sums are identical and (\ref{67}) holds.
$\hfill \Box$\\
In contrast to that, the remaining Jacobi identity does not hold in general as we shall later see for concrete examples. Under certain restrictions on the conformal holonomy representation, we can however show that all Jacobi identities hold.
\begin{satz} \label{hola}
Suppose that the conformal holonomy representation of $Hol_x(M,c)$ on $\mathcal{T}_x(M)$ for $x \in M$ satisfies the following: There exists \textbf{no} (possibly trivial) $m-$dimensional Euclidean subspace $E \subset \mathcal{T}_x(M) \cong \R^{2,n}$ such that both
\begin{enumerate}
\item The action of $Hol_x(M,c)$ fixes $E$ (and therefore also $E^{\bot}$).
\item $E^{\bot}$ is even-dimensional and on $E^{\bot}$, $Hol_x(M,c)_{E^{\bot}}:=\{A_{|E^{\bot}} \mid A \in Hol_x(M,c) \} \subset SO^+(E^{\bot})\cong SO^+(2,n-m)$ is conjugate to a subgroup of $SU(1,\frac{n-m}{2})\subset SO(2,n-m)$.
\end{enumerate}
Then the tractor conformal superalgebra $\g$ satisfies the odd-odd-odd Jacobi identity, and thus carries the structure of a Lie superalgebra.
\end{satz}
\textit{Proof. }As a first step, we show that under the assumptions,
\[\psi \in \g_1 \Rightarrow \text{ker }\psi:= \{ v \in \mathcal{T}(M) \mid v \cdot \psi = 0 \} \neq \{0 \}.\]
To this end, note that all possible algebraic Dirac forms $\alpha_{\chi}^2$ for $0 \neq \chi \in \Delta^{\C}_{2,n-2}$ have been classified in \cite{leihabil}. Precisely one of the following cases occurs:
\begin{enumerate}
\item $\alpha_{\chi}^2 = l_1^{\flat} \wedge l_2^{\flat}$, where $l_1,l_2$ span a totally lightlike plane in $\R^{2,n-2}$.
\item $\alpha_{\chi}^2 = l^{\flat} \wedge t^{\flat}$ where $l$ is lightlike and $t$ is an orthogonal timelike vector.
\item $\alpha_{\chi}^2 =\omega$ (up to conjugation in $SO(2,n-2)$), where $n=2m$ is even and $\omega$ is equivalent to the standard K\"ahler form $\omega_0$ \footnote{By this we mean that there are nonzero constants $\mu_i \in \R$ such that $\omega = \sum_{i=1}^m \mu_i e_{2i-1}^{\flat} \wedge e_{2i}^{\flat}$. One obtains $\omega_0$, the standard pseudo-K\"ahler form for $\mu_i=1$ for all $i$.} on $\R^{2,n-2}$. In this case, $Stab_{\alpha_{\chi}^2} O(2,n-2) \subset U(1,m-1)$.
\item There is a nontrivial Euclidean subspace $E \subset \R^{2,n-2}$ such that ${\alpha_{\chi}^2}_{|E} = 0$ and $\alpha_{\chi}^2$ is equivalent to the standard K\"ahler form on the orthogonal complement $E^{\bot}$ of signature $(2,2m)$ (again, this is up to conjugation in $SO(2,n-2)$). In this case $Stab_{\alpha_{\chi}^2} O(2,n-2) \subset U(1,m) \times O(n-2(m+1))$.
\end{enumerate}
Moreover, one easily calculates that the first case occurs iff ker $ \chi$ is 2-dimensional (and in this case it is spanned by $l_1,l_2$). The second case occurs iff this kernel is one-dimensional (and spanned by $l$), whereas the last two cases can only occur if the kernel under Clifford multiplication is trivial. \\
For $\psi \in \g_1$, the parallel tractor 2-form $\alpha_{\psi}^2$, whose $SO^+(2,n)$-orbit type is constant over $M$, must up to conjugation be one of the four generic types from the above list. Types 3. and 4. obviously contradict our assumptions. Hence, $\alpha_{\psi}^2$ is of type 1. or 2., yielding that $\text{dim ker }\psi \in \{1,2\}$.\\
By a standard polarization argument, the odd-odd-odd Jacobi identity is equivalent to showing that $[\psi,[\psi,\psi]]=0$ for all $\psi \in \g_1$. By definition of the brackets, this precisely says that
\begin{align*}
\alpha_{\psi}^2 \cdot \psi \stackrel{!}{=} 0.
\end{align*}
However, as ker $\psi \neq \{0 \}$, the above discussion yields that $\alpha_{\psi}^2 = l^{\flat} \wedge r^{\flat}$, where $l \in \text{ker }\psi$ and $r$ is orthogonal to $l$. It follows that $\alpha_{\psi}^2 \cdot \psi = - r \cdot l \cdot \psi = 0$. This proves the remaining Jacobi identity and the Theorem.
$\hfill \Box$ \\
\newline
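The polarization step used in the above proof can be made explicit. The Jacobiator $J(\psi_1,\psi_2,\psi_3) := [\psi_1,[\psi_2,\psi_3]] + [\psi_2,[\psi_3,\psi_1]] + [\psi_3,[\psi_1,\psi_2]]$ is symmetric in its arguments since the odd-odd bracket is symmetric, and a symmetric trilinear map is determined by its values on the diagonal via the standard polarization identity (valid over $\R$ or $\C$):
\begin{align*}
6\, J(\psi_1,\psi_2,\psi_3) =\; & J(\psi_1+\psi_2+\psi_3) - J(\psi_1+\psi_2) - J(\psi_2+\psi_3) - J(\psi_1+\psi_3) \\
& + J(\psi_1) + J(\psi_2) + J(\psi_3),
\end{align*}
where we abbreviate $J(\psi):=J(\psi,\psi,\psi) = 3 [\psi,[\psi,\psi]]$. Consequently, $[\psi,[\psi,\psi]]=0$ for all $\psi \in \g_1$ indeed implies the full odd-odd-odd Jacobi identity.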
The requirements from Theorem \ref{hola} translate into more down-to-earth geometric statements using the classification of Lorentzian manifolds admitting twistor spinors by F. Leitner:
\begin{satz}[\cite{leihabil}; Thm.10] \label{bg}
Let $\varphi=\widetilde{\Phi}^g ( {proj}^g_+ \psi) \in \Gamma(S^g)$ be a complex twistor spinor on a Lorentzian spin manifold $(M^{1,n-1},g)$ of dimension $n\geq 3$. Then one of the following holds on an open and dense subset $\widetilde{M} \subset M$:
\begin{enumerate}
\item $\alpha_{\psi}^2=l_1^{\flat} \wedge l_2^{\flat}$ for standard tractors $l_1,l_2$ which span a totally lightlike plane.\\
In this case, $\ph$ is locally conformally equivalent to a parallel spinor with lightlike Dirac current $V_{\ph}$ on a Brinkmann space.
\item $\alpha_{\psi}^2=l^{\flat} \wedge t^{\flat}$ where $l$ is a lightlike, $t$ is an orthogonal, timelike standard tractor.\\
$(M,g)$ is locally conformally equivalent to $(\mathbb{R},-dt^2) \times (N_1, h_1) \times \cdots \times (N_r, h_r)$, where the $(N_i,h_i)$ are Ricci-flat K\"ahler, hyper-K\"ahler, $G_2$- or $Spin(7)$-manifolds.
\item $\alpha_{\psi}^2$ is of K\"ahler-type (cf. cases 3. and 4. in the above list)\\
The following cases can occur:
\begin{enumerate}
\item The dimension $n$ is odd and the space is locally equivalent to a Lorentzian Einstein-Sasaki manifold on which the spinor is a sum of Killing spinors.
\item $n$ is even and $(M,g)$ is locally conformally equivalent to a Fefferman space.
\item
There exists locally a product metric $g_1 \times g_2 \in [g]$ on $M$, where $g_1$ is a Lorentzian Einstein-Sasaki metric on a space $M_1$ of dimension $n_1=2 \cdot rk(\alpha_1(\ph))+1 \geq 3$ admitting a Killing spinor and $g_2$ is a Riemannian Einstein metric with Killing spinor on a space $M_2$ of positive scalar curvature $scal^{g_2} = \frac{(n-n_1)(n-n_1-1)}{n_1 (n_1 - 1)}scal^{g_1}$.
\end{enumerate}
\end{enumerate}
\end{satz}
Again, ker ${\psi} \neq \{0 \}$ occurs exactly in the first two cases of Theorem \ref{bg}, whereas in the third case of Theorem \ref{bg} it holds that ker ${\psi}=\{0\}$.
Consequently, geometries admitting twistor spinors and which do \textbf{not} satisfy the conditions from Theorem \ref{hola} correspond to the cases $3.(a)-3.(c)$ mentioned in Theorem \ref{bg}, being Fefferman metrics, Lorentzian Einstein Sasaki manifolds or local splittings $g_1 \times g_2 \in [g]$ where $g_1$ is a Lorentzian Einstein Sasaki metric and $g_2$ is a Riemannian Einstein metric of positive scalar curvature. Thus, Theorem \ref{hola} can be rephrased in more geometric terms by saying that if none of these three special geometries lies in the conformal class of the metric, one obtains a conformal tractor Lie superalgebra. \\
This is in accordance with other observations in the literature (cf. \cite{med}). Namely it is known that for the mentioned special geometries one has to include further symmetries in the algebra in order to obtain a conformal Lie superalgebra to which we will come back later.
\begin{bemerkung}
The construction of a \textit{real} tractor conformal superalgebra can be carried out completely analogously with \textit{real} spinors. One then has to make the obvious modifications, i.e. define $\alpha^2_{\psi_1,\psi_2}$ without the imaginary part from (\ref{impa}). Note that $\langle \psi, \psi \rangle_{\Delta_{2,n}^{\R}} = 0 \text{ }\forall \psi \in \Delta_{2,n}^{\R}$. One obtains the same results, i.e. all Jacobi identities except the odd-odd-odd one are always satisfied. However, as we are later dealing with tractor conformal superalgebras for twistor spinors on Fefferman spaces, cf. \cite{bafe}, it seems more appropriate to work with complex quantities in this chapter.
\end{bemerkung}
\begin{bemerkung} \label{remak}
We defined the even part of the tractor conformal superalgebra to be (isomorphic to) the space of \textit{normal} conformal vector fields. It is possible to include all conformal vector fields $\mathfrak{X}^c(M)$ in the even part using tractor calculus as follows: Let $\alpha \in \Omega^2_{\mathcal{T}}(M)$ be a tractor 2-form on $(M,c)$ and let $V_{\alpha}=\left( {proj}^g_{\Lambda,+}(\alpha) \right)^{\sharp} \in \mathfrak{X}(M)$ be the associated vector field. As proved in \cite{cov1,cov2}, we have that $V_{\alpha} \in \mathfrak{X}^c(M)$, the space of conformal vector fields, if and only if
\begin{align}
\nabla_X^{nc} \alpha = \tau^{-1} \left({R}^{\nabla^{nc},\mathcal{T}(M)}(V_{\alpha},X) \right) \text{ }\forall X \in \mathfrak{X}(M), \label{star}
\end{align}
where we identify the skew-symmetric curvature endomorphism with a tractor 2-form by means of the isomorphism $\tau$ from (\ref{suz}). We now consider the extended tractor superalgebra
\begin{align*}
\mathfrak{g}_0^{ec}:= \{ \alpha \in \Omega^2_{\mathcal{T}}(M) \mid \alpha \text{ satisfies } (\ref{star}) \} \text{ and } \mathfrak{g}^{ec}:=\mathfrak{g}_0^{ec} \oplus \g_1,
\end{align*}
where $\g_1 = Par(\mathcal{S},\nabla^{nc})$ is as before. On this space, we may define the same brackets as defined on $\mathfrak{g}$ above and observe that they are still well-defined: For $\alpha,\beta \in \mathfrak{g}_0^{ec}$, we have that also $[\alpha,\beta] \in \mathfrak{g}_0^{ec}$ as by Proposition \ref{cof} $V_{[\alpha,\beta]} = -[V_{\alpha},V_{\beta}]_{\mathfrak{X}(M)}$, which is a conformal vector field. Next, let $\alpha \in \mathfrak{g}_0^{ec}$ and $\psi \in \g_1$. Then we have that
\begin{align*}
\nabla^{nc}_X (\alpha \cdot \psi) &= (\nabla_X^{nc} \alpha ) \cdot \psi = \tau^{-1} \left({R}^{\nabla^{nc},\mathcal{T}(M)}(V_{\alpha},X) \right) \cdot \psi \\
&= 2 \cdot {R}^{\nabla^{nc},\mathcal{S}}(V_{\alpha},X)\psi \stackrel{\psi \in \mathfrak{g}_1}{=} 0,
\end{align*}
i.e. $\alpha \cdot \psi \in \mathfrak{g}_1.$ This shows that $\mathfrak{g}^{ec}$ together with the defined brackets is a conformal superalgebra which naturally extends $\mathfrak{g}$. Moreover, the subsequent Propositions \ref{cof} and \ref{sld} and (\ref{765}) still hold in this situation and describe $\mathfrak{g}^{ec}$ wrt. a metric $g \in c$ as their proofs only involve the conformal Killing equation for vector fields and not the normalisation conditions. \\
However, we will only consider the superalgebra $\mathfrak{g}$ and not its extension $\mathfrak{g}^{ec}$ in the sequel because in the case of twistor spinors there are always normal conformal vector fields, and it seems to us that the structure of the subalgebra $\mathfrak{g}$ and the existence of distinguished normal conformal vector fields is more directly related to special geometric structures (cf. \cite{nc}) on $(M,c)$ than the structure of $\mathfrak{g}^{ec}$ as we will see in the next sections.
\end{bemerkung}
\section{Metric description and examples} \label{dem}
Fixing a metric $g \in c$ leads to canonical isomorphisms
\begin{align*}
i_0 &: \g_0 = {Par}\left(\Lambda_{\mathcal{T}}^2(M), \nabla^{nc}\right) \rightarrow \mathfrak{X}^{nc}(M), &\alpha &\mapsto V_{\alpha}:=\left({proj}^g_{\Lambda,+}(\alpha)\right)^{\sharp}, \\
i_1 &: \g_1 = {Par}\left(\mathcal{S}(M), \nabla^{nc}\right) \rightarrow \text{ker }P^g, &\psi &\mapsto \ph:=\widetilde{\Phi}^g({proj}^g_+(\psi)).
\end{align*}
The aim of this section is to compute the behaviour of the tractor conformal superalgebra structure under these isomorphisms. As it turns out, the maps $i_0$ and $i_1$ allow us to identify our tractor conformal superalgebra with conformal superalgebras constructed for Lorentzian conformal spin manifolds in \cite{raj} and \cite{ha96}.
\begin{Proposition} \label{cof}
For fixed $g \in c$ it holds for all $\alpha, \beta \in \g_0$ that
\begin{align*}
i_0\left([\alpha, \beta ]_{\g_0}\right) = \left[V_{\beta},V_{\alpha}\right]_{\mathfrak{X}(M)} = \left[i_0(\beta),i_0(\alpha)\right]_{\mathfrak{X}(M)}
\end{align*}
\end{Proposition}
\textit{Proof. }
We start with some algebraic computations: Assume that $\alpha, \beta \in \Lambda^2_{2,n}$. Wrt. the decomposition (\ref{deco}) we may write
$\alpha = e_+^{\flat} \wedge \alpha_+ + \alpha_0 + \alpha_{\mp} \cdot e_-^{\flat} \wedge e_+^{\flat} + e_-^{\flat} \wedge \alpha_-$ with $\alpha_+ = \sum_{i=1}^{n}\epsilon_i \alpha_i^+ \cdot e_i^{\flat}$, $\alpha_-=\sum_{i=1}^{n}\epsilon_i \alpha_i^- \cdot e_i^{\flat}$, $\alpha_{0} = \sum_{i<j} \epsilon_i \epsilon_j \alpha_{ij}^0 \cdot e_i^{\flat} \wedge e_j^{\flat}$ for real coefficients $\alpha_i^+$ etc. For the standard basis of the Lie algebra $\mathfrak{so}(2,n)$, cf. \ref{so}, we let $E_{\pm, i} := \frac{1}{\sqrt{2}}(E_{n+1 i}\pm E_{0i})$. Then the endomorphism $\alpha_E = \tau(\alpha) \in \mathfrak{so}(2,n)$ is given by
\begin{align*}
\alpha_E = \sum^n_{i=1} \epsilon_i \alpha_i^+ E_{+i} + \sum^n_{i=1} \epsilon_i \alpha_i^- E_{-i} + \alpha_{\mp} E_{n+1 0} + \sum_{i<j}^n \epsilon_i \epsilon_j \alpha_{ij}^0 E_{ij}.
\end{align*}
An analogous expression holds for $\beta_E$. Using the algebra relations (\ref{so}), it is straightforward to compute the following commutators for $i,j=1,...,n$:
\begin{align*}
[E_{\pm,i},E_{\pm,j}]&=0, \\
[E_{-,i},E_{+,j}]&=E_{ij}-\epsilon_i \delta_{ij} E_{0n+1}+\epsilon_j \delta_{ij} E_{0n+1}, \\
[E_{\pm,i},E_{n+1 0}]&=\mp E_{\pm,i}, \\
[E_{ij},E_{\pm,k}]&=\epsilon_i \delta_{ik} E_{\pm,j} - \epsilon_j \delta_{jk} E_{\pm,i}.
\end{align*}
With these formulas, we compute
\begin{align*}
\left[\alpha_E,\beta_E \right]_{\mathfrak{so}(2,n)} =&+ \sum_{i=1}^n \epsilon_i (\beta_i^+ \alpha_{\mp} - \alpha_i^+ \beta_{\mp} ) E_{+,i} + \sum_{i<j} \epsilon_i \epsilon_j (\alpha_{ij}^0 \beta_i^+ - \beta_{ij}^0 \alpha_i^+) E_{+,j} \\
&- \sum_{j<i} \epsilon_i \epsilon_j (\alpha_{ji}^0 \beta_j^+ - \beta_{ji}^0 \alpha_j^+) E_{+,i} + \text{Terms not involving }E_{+,i}.
\end{align*}
A global version of this formula yields that for $\alpha,\beta \in \g_0$ one has wrt. $g \in c$
\begin{align*}
{proj}_{\Lambda,+}^g\left([\alpha,\beta]_{\g_0}\right) = \alpha_{\mp} \cdot \beta_+ - \beta_{\mp} \alpha_+ + {\sum_{i < j} \epsilon_i \epsilon_j (\alpha_{ij}^0 \beta_i^+ - \beta_{ij}^0 \alpha_i^+) s_j^{\flat}}- \sum_{j<i} \epsilon_i \epsilon_j (\alpha_{ji}^0 \beta_j^+ - \beta_{ji}^0 \alpha_j^+) s_i^{\flat},
\end{align*}
where $(s_1,...,s_n)$ is a local $g-$pseudo-orthonormal frame in $TM$, with coefficients of $\alpha$ taken with respect to this frame. This can be rewritten as
\begin{align}
i_0\left([\alpha,\beta]_{\g_0}\right) =\left({proj}_{\Lambda,+}^g\left([\alpha,\beta]_{\g_0}\right)\right)^{\sharp} = \alpha_{\mp} \cdot V_{\beta} - \beta_{\mp} V_{\alpha} + \left(V_{\beta} \invneg \alpha_0 - V_{\alpha} \invneg \beta_0 \right)^{\sharp} \label{e1}
\end{align}
We now compare this expression to the Lie bracket $[V_{\alpha},V_{\beta}]$. Dualizing the first nc-Killing equation (cf. \ref{stu}) for $\alpha_+$ yields that
\begin{align*}
\nabla^g_X V_{\alpha} = (X \invneg \alpha_0)^{\sharp} + \alpha_{\mp} \cdot X \text{ }\forall X \in \mathfrak{X}(M).
\end{align*}
Consequently,
\begin{align}
[V_{\beta},V_{\alpha}]=\nabla^g_{V_{\beta}} V_{\alpha}-\nabla^g_{V_{\alpha}}V_{\beta} = \left(\alpha_{\mp}V_{\beta} - \beta_{\mp} V_{\alpha}\right) + \left(V_{\beta} \invneg \alpha_0 - V_{\alpha} \invneg \beta_0 \right)^{\sharp}. \label{e2}
\end{align}
Comparing the two expressions (\ref{e1}) and (\ref{e2}) immediately yields the claim.
$\hfill \Box$\\
The next Proposition will be proved in a more general setting in Proposition \ref{lsd}:
\begin{Proposition} \label{sld}
For $\alpha \in \g_0$, $\psi \in \g_1$, and $g \in c$ such that $\ph = \widetilde{\Phi}^g\left({proj}^g_+ \psi \right)=i_1(\psi)$, and $V_{\alpha_+} = i_0(\alpha)$, we have that
\begin{align*}
i_1\left(\left[\alpha,\psi \right]_{\g_1}\right) = \frac{1}{2}(\widetilde{\Phi}^g \circ {proj}_+^g) \left(\alpha \cdot \psi \right) = - \underbrace{\left(\nabla_{V_{\alpha}} \ph + \frac{1}{4} \tau \left(\nabla V_{\alpha} \right) \cdot \ph \right) }_{=:V_{\alpha} \circ \ph},
\end{align*}
where $\tau \left(\nabla V_{\alpha} \right) := \sum_{j=1}^n \epsilon_j \left( \nabla_{s_j} V_{\alpha} \right) \cdot s_j + (n-2) \cdot \lambda_{\alpha_+}$ and $L_{V_{\alpha_+}}g = 2 \lambda_{\alpha_+}g$.
\end{Proposition}
\begin{bemerkung}
The above term $V_{\alpha} \circ \ph$ is the spinorial Lie derivative used in \cite{kos,ha96,raj} for the construction of a conformal Killing superalgebra.
\end{bemerkung}
Finally, we give the metric expression of the odd-odd bracket. Let $\psi \in \g_1$ and $\ph=\widetilde{\Phi}^g \left({proj}^g_+(\psi)\right)=i_1(\psi)$:
\begin{align}
i_1 \left( [\psi,\psi] \right) &= \left( {proj}^g_{\Lambda,+} \left( \alpha_{\psi}^2 \right) \right)^{\sharp} \stackrel{(\ref{tuffi})}{=} c_{1,1}^1 \cdot \left( \alpha^1_{\ph} \right)^{\sharp} = c_{1,1}^1 \cdot V_{\ph}, \label{765}
\end{align}
where the nonzero constant $c_{1,1}^1 \in \R$ from (\ref{tuffi}) depends only on the choice of an admissible scalar product (in the sense of \cite{cor}) on $\Delta_{2,n}^{\C}$. These computations directly prove the following statement:
\begin{satz}
Given a Lorentzian conformal spin manifold $(M^{1,n-1},c)$, the associated tractor conformal superalgebra $\g=\g_0 \oplus \g_1 = {Par}\left(\Lambda_{\mathcal{T}}^2(M), \nabla^{nc}\right) \oplus {Par}\left(\mathcal{S}(M), \nabla^{nc}\right)$ is via a fixed $g \in c$ isomorphic to the conformal superalgebra (\ref{alr}) on $\mathfrak{X}^{nc}(M)\oplus \text{ker }P^g$ (as considered in \cite{raj}). Up to prefactors, the $g-$dependent maps $i_0$ and $i_1$ are superalgebra (anti-)isomorphisms.
\end{satz}
\begin{bemerkung}
If for some fixed $g \in c$ the manifold admits geometric Killing spinors, for instance if there is an Einstein metric in the conformal class, the restrictions of the brackets (\ref{alr}) to $\mathfrak{X}^k(M) \oplus \mathcal{K}(M)$, the space of Killing vector fields and Killing spinors as even and odd parts, are well-defined (cf. \cite{suads}), and thus give a subalgebra of the superalgebra $\mathfrak{g}^{ec}$.\\
More generally, the construction of Killing superalgebras for Riemannian or Lorentzian manifolds using the cone construction where the even part consists of Killing vector fields and the odd part of geometric Killing spinors is discussed in \cite{suads}. In case of an Einstein metric in the conformal class this is equivalent to our tractor construction as in this case all conformal holonomy computations restrict to considerations on the metric cone, see \cite{leihabil,baju}.
\end{bemerkung}
Let us consider some examples of tractor conformal superalgebras:
\subsection*{Tractor conformal superalgebras with one twistor spinor}
Consider the case that dim $\g_1=1$, i.e. there is only one linearly independent complex twistor spinor on $(M,c)$. Such examples are easy to generate, as one might for example take a generic Lorentzian metric admitting a parallel spinor as classified in \cite{br} for low dimensions.
\begin{Proposition} \label{stuff4}
Suppose that the tractor conformal superalgebra of a simply-connected Lorentzian conformal spin manifold $(M^{1,n-1},c)$ satisfies dim $\g_1=1$. Then $\g=\g_0 \oplus \g_1$ is a Lie superalgebra.
\end{Proposition}
\textit{Proof. }
We fix a nontrivial twistor spinor $\psi \in \g_1$ which is unique up to multiplication in $\C^*$ and assume that $\text{ker }\psi = \{0\}$. By Theorem \ref{bg}, this implies that
\begin{align*}
Hol(M,c) \subset \begin{cases} SU \left(1,\frac{n}{2}\right) & (1) \\ O(r) \times SU(1,\frac{n-r}{2}) & (2)\end{cases}
\end{align*}
However, the action of $SU(1,\frac{n}{2})$ on $\Delta^{\C}_{2,n}$ fixes two spinors (cf. \cite{kath}), which excludes $(1)$. In case $(2)$ we have that the representation $\rho$ of $Hol(\overline{\mathcal{Q}}^1_+,\overline{\widetilde{\omega}}^{nc}) \subset Spin^+(2,n)$ on $\Delta^{\C}_{2,n}$ splits into a product of representations $\rho \cong \rho_1 \otimes \rho_2$ on $Spin^+(0,r) \times Spin^+(2,n-r)$. Furthermore,
\[ \Delta_{2,n}^{\C} \cong \Delta^{\C}_{0,r} \otimes \Delta^{\C}_{2,n-r}, \]
considered as $Spin^+(0,r) \times Spin^+(2,n-r)$-representations. As there exists a $Hol(\overline{\mathcal{Q}}^1_+,\overline{\widetilde{\omega}}^{nc})$-invariant spinor in $\Delta^{\C}_{2,n}$, we conclude (cf. \cite{ldr}) that each of the factors $\rho_1$ and $\rho_2$ admits an invariant spinor. However, $\left(\rho_2 \right)_*$ is the action of a subalgebra of $\mathfrak{su}(1,\frac{n-r}{2})$ on $\Delta^{\C}_{2,n-r}$ which annihilates at least two linearly independent complex spinors. Consequently, the representation $\rho_2$ fixes at least two linearly independent complex spinors and $\rho_1$ fixes at least one nontrivial complex spinor, so that $\rho$ fixes at least two linearly independent complex spinors. This means that dim $\g_1 > 1$, in contradiction to our assumption.
Consequently, we have that ker $\psi \neq \{0\}$ for every $\psi \in \g_1$. The second part of the proof of Theorem \ref{hola} then shows that $\g$ is a Lie superalgebra.
$\hfill \Box$
\begin{kor}
If the tractor conformal superalgebra associated to a simply-connected Lorentzian conformal spin manifold $(M,c)$ is not a Lie superalgebra, then there exist at least two linearly independent complex twistor spinors on $(M,c)$.
\end{kor}
\subsection*{The tractor conformal superalgebra of flat Minkowski space}
We describe the even part of the conformal algebra of flat Minkowski space $\R^{1,n-1}$ in terms of conformal tractor calculus and discuss extensions to a superalgebra.
In physics notation (cf. \cite{msl,raj}), the conformal algebra of Minkowski space $\R^{1,n-1}$ with coordinates $x^i$ and the standard flat metric $g_{ij}$ is generated by $P_i, M_{ij}, D$ and $K_i$ - corresponding to translations, rotations, the dilatation and the special conformal transformations:
\begin{align*}
P_i &= \partial_i, \\
M_{ij} &=x_i \partial_j - x_j \partial_i, \\
D &= x^i \partial_i, \\
K_i &= 2x_i x^j \partial_j -g(x,x) \partial_i.
\end{align*}
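As a consistency check, one verifies directly that the $K_i$ are conformal vector fields: writing $(K_i)_{\nu} = 2x_i x_{\nu} - g(x,x)\, g_{i\nu}$ and using $\partial_{\mu} x_{\nu} = g_{\mu\nu}$, one computes
\begin{align*}
\partial_{\mu} (K_i)_{\nu} + \partial_{\nu} (K_i)_{\mu} &= \left(2 g_{i\mu} x_{\nu} + 2 x_i g_{\mu\nu} - 2 x_{\mu} g_{i\nu}\right) + \left(2 g_{i\nu} x_{\mu} + 2 x_i g_{\mu\nu} - 2 x_{\nu} g_{i\mu}\right) \\
&= 4 x_i \cdot g_{\mu\nu},
\end{align*}
i.e. $L_{K_i} g = 4 x_i \cdot g$, so that $K_i$ is indeed a conformal vector field.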
The Lie brackets can be found in \cite{raj}. As $\R^{1,n-1}$ is conformally flat, all conformal vector fields are automatically normal conformal, and thus the above vector fields generate the algebra $\mathfrak{X}^{nc}(\R^{1,n-1})=\mathfrak{X}^{c}(\R^{1,n-1})$.
We now consider the following natural isomorphism:
\begin{align}
\tau_0 : \mathfrak{X}^{nc}(\R^{1,n-1}) \stackrel{g}{\cong} Par\left(\Lambda^2_{\mathcal{T}}\left(\R^{1,n-1}\right),\nabla^{nc}\right) \stackrel{\alpha \mapsto \alpha(0)}{\cong} \mathfrak{so}(2,n), \label{idid}
\end{align}
yielding that for flat Minkowski space $\g_0 \cong \mathfrak{so}(2,n)$ on the tractor level. Solving the twistor equation on $\R^{1,n-1}$ is straightforward (cf. \cite{bfkg}): For a twistor spinor $\ph \in \Gamma(\R^{1,n-1},S^g_{\C}) \cong C^{\infty}\left(\R^{1,n-1},\Delta_{1,n-1}^{\C}\right)$ we have, using $K^g=0$, that $\nabla D^g \ph = 0$. Consequently, $D^g \ph =: \ph_1 $ is a constant spinor. Integrating the twistor equation along the line $\{ s \cdot x \mid 0 \leq s \leq 1 \}$ yields that $\ph(x) - \ph(0)= - \frac{1}{n} x \cdot \ph_1$. Thus, $\ph$ is of the form $\ph (x)= \ph_0 - \frac{1}{n} x \cdot \ph_1$. Clearly, this establishes an isomorphism
\begin{align*}
\begin{array}{cccccccc}
\tau_1 : & \text{ker }P^g & \rightarrow & \Delta^{\C}_{1,n-1} \oplus \Delta^{\C}_{1,n-1} & \cong & {Par}\left( \mathcal{S}(\R^{1,n-1}),\nabla^{nc}\right) & \cong & \Delta_{2,n}^{\C}, \\
& \ph & \mapsto & \left(\ph_0, - \frac{1}{n} \ph_1 \right) & \mapsto & \psi:= \left(\widetilde{\Phi}^g \right)^{-1} \left(\ph_0, - \frac{1}{n} \ph_1 \right) & \mapsto & \psi(0).
\end{array}
\end{align*}
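One also checks directly that every spinor field of the form $\ph(x) = \ph_0 - \frac{1}{n} x \cdot \ph_1$ is a twistor spinor (here we assume the Clifford convention $X \cdot X = -g(X,X)$): from $\nabla_X \ph = -\frac{1}{n} X \cdot \ph_1$ one finds for a $g$-pseudo-orthonormal basis $(s_1,...,s_n)$ that
\begin{align*}
D^g \ph = \sum_{j=1}^n \epsilon_j\, s_j \cdot \nabla_{s_j} \ph = -\frac{1}{n} \sum_{j=1}^n \epsilon_j\, s_j \cdot s_j \cdot \ph_1 = \frac{1}{n} \sum_{j=1}^n \epsilon_j^2\, \ph_1 = \ph_1,
\end{align*}
so that $\nabla_X \ph + \frac{1}{n} X \cdot D^g \ph = 0$ for all $X \in \mathfrak{X}(\R^{1,n-1})$.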
Consequently, the tractor conformal superalgebra of $\R^{1,n-1}$ is nothing but $\Lambda^2_{2,n} \oplus \Delta_{2,n}^{\C}$ with brackets as introduced in section \ref{cts}. By means of $\tau_0$ and $\tau_1$ we have an identification
\begin{align}
\g \cong \Lambda^2_{2,n} \oplus \Delta_{2,n}^{\C} \stackrel{\tau_0,\tau_1} \cong \mathfrak{X}^{nc} (\R^{1,n-1}) \oplus \text{ker }P^g, \label{sss}
\end{align}
and the right hand side of (\ref{sss}) is precisely the conformal superalgebra of Minkowski space wrt. the fixed standard metric as considered in \cite{raj,med}, for example, emphasising that the tractor approach to conformal superalgebras is equivalent to the classical approaches. Using an explicit Clifford representation, one directly calculates that $\g$ is not a Lie superalgebra if $n > 3$, as also follows from Theorem \ref{hola}. In case $n=3$, and considering Minkowski space $\R^{2,1}$, there is a real structure on $\Delta^{\C}_{3,2}$ (cf. \cite{lm,br}), and restricting ourselves to real twistor spinors leads to the Lie superalgebra\footnote{The odd-odd-odd Jacobi identity holds in this case as every nonzero spinor $v \in \Delta_{3,2}^{\R}$ is pure, from which $\alpha^2_{v,v} \cdot v = 0$ follows. Note that there is no real structure on $\Delta_{2,3}^{\R}$.}
\[ \mathfrak{X}^{nc} (\R^{2,1}) \oplus \text{ker }P_{\R}^g \cong \Lambda^2_{3,2} \oplus \Delta_{3,2}^{\R} \subset \Lambda^2_{3,2} \oplus \Delta_{3,2}^{\C} = \mathfrak{X}^{nc} (\R^{2,1}) \oplus \text{ker }P_{\C}^g. \]
\section{A tractor conformal superalgebra with R-symmetries for Fefferman spaces} \label{rsymme}
Our aim is to reproduce the construction of conformal Lie superalgebras with R-symmetries for Fefferman spaces as known from \cite{med} in the framework of the conformal tractor calculus. Let $(M,c)$ be a simply-connected, even-dimensional Lorentzian conformal spin manifold and $g \in c$. For the definition and construction of Fefferman spin spaces as total spaces of $S^1-$bundles over strictly pseudoconvex manifolds we refer to \cite{bafe,bl,baju}. The following is a standard fact:
\begin{Proposition}[\cite{lei,bafe}] \label{tuefo}
On a Lorentzian Fefferman spin space $(M^{1,n-1},g)$ there are distinguished, linearly independent complex twistor spinors $\ph_{\epsilon}$ for $\epsilon = \pm 1$ such that
\begin{enumerate}
\item The Dirac current $V_{\ph_{\epsilon}}$ is a regular lightlike Killing vector field.
\item $\nabla_{V_{\ph_{\epsilon}}} \ph_{\epsilon} = i c \ph_{\epsilon}$ for some $c \in \R \backslash \{0 \}$.
\end{enumerate}
\end{Proposition}
In fact, a Fefferman spin space can also be equivalently described by the existence of twistor spinors $\ph_{\epsilon}$ with Dirac current $V_{\ph_{\epsilon}}$ satisfying 1. and 2. In terms of conformal holonomy, a Fefferman metric in the conformal class is characterized by $Hol(M,c) \subset SU(1,\frac{n}{2})$, cf. \cite{baju,leihabil}.
We now restrict ourselves to \textit{generic} Fefferman spin spaces, i.e. our overall assumption in this section in terms of conformal data is
\begin{align*}
Hol(M^{1,n-1},c) \subset SU\left(1,\frac{n}{2}\right) \text{ and } \text{dim}_{\C}\text{ ker }P^g = 2.
\end{align*}
In case $Hol(M^{1,n-1},c=[g]) = SU\left(1,\frac{n}{2}\right)$, the second requirement follows automatically.
\begin{Proposition} \label{pry}
For Lorentzian conformal structures with $Hol(M,c)=SU\left(1,\frac{n}{2}\right)$ one has that $\text{dim}_{\C}\g_1=2$ and the tractor conformal superalgebra is \textbf{not} a Lie superalgebra.
\end{Proposition}
\textit{Proof. }We start with the observation that by (\ref{sytr}) complex parallel spin tractors on $M$ correspond (after fixing a basepoint) to spinors in $\Delta_{2,n}^{\C}$ which are annihilated by the action of $\lambda_*^{-1}\left(\mathfrak{su}\left(1,\frac{n}{2}\right)\right)$. Let us call the space of these spinors $V_{\mathfrak{su}}$. We fix the following \textit{complex} representation of the complex Clifford algebra $Cl^{\C}_{2,n}$ with $n+2=:2m$ on $\C^{2^m}$ (cf. \cite{ba81}):
Let $E,D,U$ and $V$ denote the $2 \times 2$ matrices
\begin{align*}
E = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \text{ , } D = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \text{ , } U = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix} \text{ , } V = \begin{pmatrix} 0 & i \\ i & 0 \end{pmatrix}.
\end{align*}
Furthermore, let $\tau_j =\begin{cases} 1 & \epsilon_j = 1, \\ i & \epsilon_j = -1. \end{cases}$ Then ${Cl}^{\C}(p,q) \cong M_{2^m}(\C)$ as complex algebras, and an explicit realisation of this isomorphism is given by
\begin{align*}
\Phi_{p,q} (e_{2j-1})&= \tau_{2j-1} \cdot E \otimes...\otimes E \otimes U \otimes \underbrace{D \otimes...\otimes D}_{(j-1) \times},\\
\Phi_{p,q} (e_{2j}) &= \tau_{2j} \cdot E \otimes...\otimes E \otimes V \otimes \underbrace{D \otimes...\otimes D}_{(j-1) \times}.
\end{align*}
We set $\widetilde{u}(\epsilon):=\frac{1}{\sqrt{2}}\cdot \begin{pmatrix} 1 \\ -i \epsilon \end{pmatrix}$ for $\epsilon = \pm 1$ and introduce the spinors
$\widetilde{u}(\epsilon_m,....,\epsilon_1):=\widetilde{u}(\epsilon_m) \otimes...\otimes \widetilde{u}(\epsilon_1)$ which form a basis of $\Delta_{2,n}^{\C}$.
We work with the $Spin^+(2,n)$-invariant scalar product $\langle u, v \rangle_{\Delta^{\C}_{2,n}}:=i (e_1\cdot e_2 \cdot u,v )_{\C^{2^m}}$. One calculates that
\begin{align} \label{625}
\langle \widetilde{u}(\epsilon_m,...,\epsilon_1), \widetilde{u}(\delta_m,...,\delta_1) \rangle = \begin{cases} 0 & (\epsilon_m,...,\epsilon_1) \neq (\delta_m,...,\delta_1) \\
\epsilon_1 & (\epsilon_m,...,\epsilon_1) = (\delta_m,...,\delta_1) \end{cases}
\end{align}
It is now straightforward to compute (cf. \cite{kath}) that
\begin{align} \label{asint}
V_{\mathfrak{su}}:= \{ v \in \Delta_{2,n}^{\C} \mid \lambda_*^{-1}\left(\mathfrak{su}\left(1,\frac{n}{2}\right)\right) \cdot v = 0 \} = \text{span}_{\C} \{u_+:=\widetilde{u}(1,...,1), u_-:=\widetilde{u}(-1,...,-1) \}.
\end{align}
Another straightforward computation involving (\ref{bla}) and (\ref{625}) yields that \begin{align}
\alpha_{u_{\pm}}^2= \sum_{i=1}^{\frac{n}{2}+1}\epsilon_{2i} \cdot e_{2i-1}^{\flat} \wedge e_{2i}^{\flat}, \label{asfo}
\end{align}
from which follows that
\begin{align}
\alpha^2_{u_+} \cdot u_+ = i\cdot \left(\frac{n}{2}-1\right) u_+ \neq 0. \label{tgg}
\end{align}
If we turn to geometry, a global version of the previous observations shows that for simply-connected conformal structures with irreducible holonomy $SU(1,\frac{n}{2})$ the complex space of twistor spinors is two-dimensional, and (\ref{tgg}) yields that the tractor conformal superalgebra is not a Lie superalgebra as the odd-odd-odd Jacobi identity is not satisfied.
$\hfill \Box$
\subsection*{Algebraic preparation}
We want to investigate the space of parallel spin tractors on $(M,c)$ more closely. To this end, we use the complex spinor representation on $\Delta_{2,n}^{\C}$ from the proof of Proposition \ref{pry} with distinguished spinors $u_{\pm}$. Let $W:=\text{span}_{\C} \{u_+,u_-\}$. We have already computed $\omega_0:= \alpha_{u_{\pm},u_{\pm}}^2$ in (\ref{asfo}). A straightforward, purely algebraic calculation reveals the following:
\begin{Proposition} \label{strata0}
The pseudo-K\"ahler form $\omega_0$ on $\R^{2,n}$ is distinguished by the following properties:
\begin{enumerate}
\item For every $w \in W$ there exists a constant $c_w \geq 0$ such that $\alpha^2_{w,w} = c_w \cdot \omega_0$.
\item $||\omega_0||_{2,n}^2 = \frac{n}{2}+1$
\end{enumerate}
\end{Proposition}
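Property 2. can be read off directly from the explicit form (\ref{asfo}). Assuming the timelike directions are $e_1,e_2$ (i.e. $\epsilon_1=\epsilon_2=-1$ and $\epsilon_j = 1$ otherwise, as suggested by the scalar product fixed in the proof of Proposition \ref{pry}) and the norm convention $||\omega||^2_{2,n} = \sum_{k<l} \epsilon_k \epsilon_l\, \omega_{kl}^2$ for $\omega = \sum_{k<l}\omega_{kl}\, e_k^{\flat}\wedge e_l^{\flat}$, one finds
\begin{align*}
||\omega_0||_{2,n}^2 = \sum_{i=1}^{\frac{n}{2}+1} \epsilon_{2i-1}\, \epsilon_{2i} \cdot \left(\epsilon_{2i}\right)^2 = \underbrace{\epsilon_1 \epsilon_2}_{=1} + \sum_{i=2}^{\frac{n}{2}+1} 1 = \frac{n}{2}+1.
\end{align*}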
Moreover, one calculates that for all $a,b \in \C$ we have $\frac{1}{i}\omega_0 \cdot (a u_+ + b u_-) = \left(\frac{n}{2}-1\right)\cdot (au_+ - b u_-)$, whence
\begin{align}
\text{span}_{\C} \{ u_{\pm} \} = \text{Eig}_{\C} \left(\frac{1}{i}\omega_0,\pm \left(\frac{n}{2}-1\right)\right). \label{ev}
\end{align}
\begin{Lemma} \label{stuckd}
Consider $u_+ \in W$ and let $\alpha \in \Lambda^2_{2,n}$ be a 2-form. If $\alpha \cdot u_+ \in W$, then $\alpha$ can be written as $\alpha = \sum_{i=1}^{\frac{n+2}{2}} a_i \cdot e_{2i-1}^{\flat} \wedge e_{2i}^{\flat}$ for $a_i \in \R$. We denote the space of all these forms by $V$.
\end{Lemma}
\textit{Proof. }
We write a generic 2-form as $\alpha = \sum_{i <j} a_{ij} e_{i}^{\flat} \wedge e_j^{\flat}$. It follows that $\alpha \cdot u_{\pm} = \sum_{i<j} a_{ij} e_i \cdot e_j \cdot u_{\pm}$. Using our concrete realisation of Clifford multiplication, one calculates that whenever $(i,j)$ is not of the form $(2l-1,2l)$ for some $l$, we have $e_i \cdot e_j \cdot u_+ \propto \widetilde{u}(1,...,1,-1,1...,1,-1...,1)$, where $-1$ occurs at the (distinct) positions $\left\lfloor \frac{i+1}{2} \right\rfloor$ and $\left\lfloor \frac{j+1}{2} \right\rfloor$. As $\alpha \cdot u_+ \in W$, it follows that $a_{ij}=0$ for all these choices of $i$ and $j$.
$\hfill \Box$\\
Another purely algebraic computation reveals the following:
\begin{Lemma} \label{strata}
On $W$ there exists a $\C$-linear map $\iota : W \rightarrow W$, unique up to sign, such that $\iota^2 = 1$ and $\iota$ is an anti-isometry of $(W,\langle \cdot, \cdot \rangle_{\Delta_{2,n}^{\C}})$, i.e. $\langle \iota(u),\iota(v) \rangle_{\Delta_{2,n}^{\C}} = - \langle u,v \rangle_{\Delta_{2,n}^{\C}}$.
\end{Lemma}
Moreover, (\ref{ev}) shows that setting
\begin{align}
\frac{1}{i}\omega_0 \cdot u =: \left(\frac{n}{2}-1\right) \cdot l(u), \text{ for }u \in W
\end{align}
defines a unique $\C$-linear map $l: W \rightarrow W$. The map $l$ is an isometry wrt. $\langle \cdot, \cdot \rangle_{\Delta_{2,n}^{\C}}$ and satisfies $l^2 = 1$. We note that wrt. the basis $(u_+,u_-)$ of $W$, $\iota$ and $l$ are given by $\iota = \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix}$ and $l = \begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix}$. One easily calculates that for all $\alpha \in V$ and $u \in W$ we have
\begin{equation} \label{calcd}
\begin{aligned}
\alpha \cdot \iota(u) &= - \iota (\alpha \cdot u) \text{, } \alpha \cdot l(u) = l (\alpha \cdot u),\\ \iota(l(u)) &= -l(\iota(u)).
\end{aligned}
\end{equation}
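For illustration, the identities (\ref{calcd}) can be verified directly on the basis $(u_+,u_-)$: By Lemma \ref{stuckd}, every $\alpha \in V$ acts diagonally on $W$, say $\frac{1}{i} \alpha \cdot u_{\pm} = \pm d \cdot u_{\pm}$ for some $d \in \R$, and therefore
\begin{align*}
\alpha \cdot \iota(u_+) &= \alpha \cdot u_- = -id \cdot u_- = -\iota\left(id \cdot u_+\right) = -\iota(\alpha \cdot u_+), \\
\alpha \cdot l(u_+) &= \alpha \cdot u_+ = l(\alpha \cdot u_+), \qquad \iota(l(u_+)) = u_- = -l(u_-) = -l(\iota(u_+)),
\end{align*}
and analogously for $u_-$.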
\subsection*{Geometric construction}
We now turn to geometry again: Let $(M,c)$ be a simply-connected Lorentzian conformal spin manifold with special unitary conformal holonomy and suppose that $\text{dim}_{\C} W = 2$, where now $W={Par}(\mathcal{S}_{\C}(M), \nabla^{nc})$. Global versions of our previous algebraic observations show: There exists a unique parallel tractor 2-form $\omega_0 \in {Par}\left(\Lambda^2_{\mathcal{T}}(M),\nabla^{nc} \right)$ distinguished by the properties of Proposition \ref{strata0}. Furthermore, Clifford multiplication with $\frac{1}{i}\omega_0$ is an automorphism of $W$ with eigenvalues $\pm (\frac{n}{2}-1)$. We now fix $\psi_{\pm} \in \text{Eig} \left( \frac{1}{i}\omega_0,\pm \left(\frac{n}{2}-1 \right)\right) \cap W$ with $\langle \psi_{\pm},\psi_{\pm} \rangle_{\Delta_{2,n}^{\C}} = \pm 1$ and $\langle \psi_{\pm},\psi_{\mp} \rangle_{\Delta_{2,n}^{\C}} =0$. With these requirements, $\psi_{\pm}$ are unique up to multiplications with elements of $S^1 \subset \C$. In fact, if one fixes a Fefferman metric $g$ in the conformal class, then $\widetilde{\Phi}^g(proj^g_+ \psi_{\pm}) = \ph_{\pm} \in \text{ker }P^g$ (up to constant multiples), where $\ph_{\pm}$ were introduced in Proposition \ref{tuefo}. We further require that $\iota(\psi_+)=\psi_-$, which reduces the ambiguity in choosing $\psi_{\pm}$ to only one complex phase. We set
\begin{align*}
\g_1 :=W=\text{span}_{\C} \{ \psi_+,\psi_-\} \subset \mathcal{S}_{\C}(M).
\end{align*}
On $\g_1$ there are natural maps $\iota : \g_1 \rightarrow \g_1$ and $l: \g_1 \rightarrow \g_1$ with the same properties as the corresponding maps from the algebraic preparations. $\g_1$ defines the odd part of the tractor conformal superalgebra we are about to construct. For the construction of the even part, we first set as in section \ref{cts} $\g_0 :=Par \left( \Lambda^2_{\mathcal{T}}(M),\nabla^{nc}\right)$ and equip it with the bracket of endomorphisms.
\begin{Proposition}
For $Hol(M,c) \subset SU\left( 1, \frac{n}{2} \right)$ and dim $\g_1 = 2$, we have that $\g_0$ is abelian and dim $\g_0 \leq \frac{n}{2}+1$.
\end{Proposition}
\textit{Proof. }
For $\alpha \in \g_0$ and $\psi=\psi_+ \in \g_1$, the derivation property of $\nabla^{nc}$ wrt. Clifford multiplication forces $\alpha \cdot \psi$ to be a parallel spin tractor as well, i.e. $\alpha \cdot \psi \stackrel{!}{\in} \g_1$. Lemma \ref{stuckd} then determines all possible forms of $\alpha$, and from this the statement is immediate.
$\hfill \Box$\\
We now set $\widetilde{\g}_0:=\g_0 \oplus \R$, where the sum is a direct sum of abelian Lie algebras and thus $\widetilde{\g}_0$ is abelian too. We introduce further brackets on $\g:= \widetilde{\g}_0\oplus \g_1$:
\begin{equation} \label{rbracket}
\begin{aligned}
\left[\cdot, \cdot \right] : \widetilde{\g}_0 \otimes \g_1 & \rightarrow \g_1,\\
\left((\alpha,a),\psi \right) & \mapsto \frac{1}{i} \cdot \left(\alpha \cdot (\iota(\psi))\right) + a \cdot \iota (l(\psi)),\\
[ \cdot, \cdot] : {\g_1} \otimes \g_1 & \rightarrow \widetilde{\g}_0,\\
(\psi_1,\psi_2) & \mapsto \left( \alpha^2_{\psi_1,\psi_2},\left(\frac{n}{2}-1\right) \cdot \text{Re} \left(\langle \psi_1, l(\psi_2) \rangle_{\mathcal{S}} \right) \right).
\end{aligned}
\end{equation}
Clearly, these brackets have the right symmetry properties to turn $\widetilde{\g}_0 \oplus \g_1$ into a superalgebra.
\begin{satz}
The superalgebra $\widetilde{\g}_0 \oplus \g_1$, associated to $(M,c)$ canonically up to sign\footnote{In fact, defining the above brackets via the abstract maps $\iota$ and $l$ rather than using the basis $\psi_{\pm}$ reveals that the construction involves no further choices once $\g_0$ and $\g_1$ are determined.}, is a Lie superalgebra.
\end{satz}
\textit{Proof. }All we have to do is check the Jacobi identities: By polarization, the odd-odd-odd Jacobi identity is equivalent to $[[\psi,\psi],\psi]=0$ for all $\psi \in \g_1$. By definition, we have for $\psi=a \psi_+ + b \psi_-$ with $a,b \in \C$ that
\begin{align*}
[[\psi,\psi],\psi] &=\left[\left(\alpha^2_{\psi,\psi},\left(\frac{n}{2}-1\right) \cdot \text{Re} \left(\langle \psi, l(\psi) \rangle_{\mathcal{S}}\right) \right),\psi\right] \\
&=\frac{1}{i} \cdot \alpha_{\psi,\psi}^2 \cdot \iota(\psi) + \left(\frac{n}{2}-1\right) \cdot \text{Re} \left(\langle \psi, l(\psi) \rangle_{\mathcal{S}}\right) \cdot \iota (l (\psi))\\
&=\frac{1}{i} \left(|a|^2 \omega_0 + |b|^2 \omega_0 \right) \cdot (a \psi_- + b \psi_+) + \left(\frac{n}{2}-1\right) (|a|^2+|b|^2) \cdot (a \psi_- - b \psi_+) \\
&=(|a|^2 + |b|^2) \cdot \left(\frac{n}{2}-1 \right) \cdot (-a \psi_- + b \psi_+) + \left(\frac{n}{2}-1\right) (|a|^2 + |b|^2) (a \psi_- - b \psi_+) \\
&=0.
\end{align*}
As $\widetilde{\g}_0$ is abelian, the even-odd-odd identity is by polarization equivalent to $\left[\left[(\alpha, \gamma),\psi \right], \psi \right] = 0$ for all $\alpha \in \g_0$, $\gamma \in \R$ and $\psi \in \g_1$. By definition of the brackets involved, this is the case iff
\begin{align} \label{g0part}
\left( \alpha^2_{\frac{1}{i} \cdot \left(\alpha \cdot \iota(\psi) \right) + \gamma \cdot (\iota(l(\psi))), \psi}, \left(\frac{n}{2}-1\right) \cdot \text{Re} \left(\langle \frac{1}{i} \cdot \left(\alpha \cdot \iota(\psi) \right) + \gamma \cdot (\iota(l(\psi))) , l(\psi) \rangle_{\mathcal{S}} \right) \right) \stackrel{!}{=} 0 \in \g_0 \oplus \R.
\end{align}
We again write $\psi = a \psi_+ + b \psi_-$ for complex constants $a$ and $b$. Lemma \ref{stuckd} implies that $\frac{1}{i}\alpha \cdot {\psi}_+ = d \cdot \psi_+$ for some real constant $d$ and $\frac{1}{i} \alpha \cdot \psi_- = -d \cdot \psi_-$. Then the $\g_0$-part of (\ref{g0part}) is given by
\begin{align*}
\alpha^2_{-ad \psi_- + bd \psi_+ + \gamma(a \psi_- - b \psi_+), a\psi_+ + b \psi_-} = 0,
\end{align*}
where we used that $\alpha^2_{\psi_+,\psi_-}=0$, and the $\R$-part of (\ref{g0part}) is proportional to
\begin{align*}
& \text{Re}\left(\langle \frac{1}{i} \cdot \left(\alpha \cdot \iota(\psi) \right) + \gamma \cdot (\iota(l(\psi))) , l(\psi) \rangle_{\mathcal{S}}\right) \\
= & \text{Re}\left(\langle d \cdot \left(-a \cdot \psi_- + b \cdot \psi_+ \right) + \gamma \cdot (a \psi_- - b \psi_+), a\psi_+ - b \psi_- \rangle_{\mathcal{S}}\right) \\
=&0.
\end{align*}
Finally, since $\widetilde{\g}_0$ is abelian, the even-even-odd identity is equivalent to
\begin{align*}
\left[(\alpha,a),\frac{1}{i} \beta \cdot (\iota (\psi)) + b \cdot \iota(l(\psi)) \right] \stackrel{!}{=} \left[(\beta,b),\frac{1}{i} \alpha \cdot (\iota (\psi)) + a \cdot \iota(l(\psi)) \right] \in \g_1,
\end{align*}
where $(\alpha,a), (\beta,b) \in \widetilde{\g}_0$ and $\psi \in \g_1$. Unwinding the definitions and using (\ref{calcd}), we find that the left hand side is given by
\begin{align*}
&\frac{1}{i}\left(\frac{1}{i} \alpha \cdot \iota (\beta \cdot \iota (\psi)) + {a} \cdot \iota (l(\beta \cdot \iota (\psi))) + b \cdot \alpha \cdot l(\psi) \right)+ ab \cdot \iota(l(\iota(l(\psi)))) \\
{=}& \alpha \cdot \beta \cdot \psi + \frac{a}{i} \beta \cdot l(\psi) + \frac{b}{i} \cdot \alpha \cdot l(\psi) - ab \cdot \psi \stackrel{[\alpha, \beta]=0}{=} \beta \cdot \alpha \cdot \psi + \frac{a}{i} \beta \cdot l(\psi) + \frac{b}{i} \cdot \alpha \cdot l(\psi) - ab \cdot \psi \\
=& \left[(\beta,b),\frac{1}{i} \alpha \cdot (\iota (\psi)) + a \cdot \iota(l(\psi)) \right].
\end{align*}
These calculations prove the Theorem.
$\hfill \Box$
\begin{bemerkung}
Let $g \in c$ be a Fefferman metric on $M$. By means of $g$ we identify the parallel spin tractors $\psi_{\epsilon}$ with the distinguished twistor spinors $\ph_{\epsilon}$ from Proposition \ref{tuefo} for $\epsilon = \pm 1$ and parallel 2-form tractors with normal conformal vector fields. Calculations completely analogous to those in section \ref{dem} reveal that the even-odd bracket (\ref{rbracket}) is under this $g$-metric identification given by the extension of
\begin{align*}
(\mathfrak{X}^{nc}(M) \oplus \R) \times \text{ker }P^g & \rightarrow \text{ker }P^g, \\
((V,a),\ph_{\epsilon}) & \mapsto L_{V} \ph_{-\epsilon} + \epsilon \cdot a \cdot \ph_{-\epsilon},
\end{align*}
and in this picture the odd-odd-odd Jacobi identity for $\widetilde{\g}_0 \oplus \g_1$ is equivalent to the existence of a constant $\rho$ such that $L_{V_{\ph_\epsilon}} \ph_{\epsilon} + \epsilon \cdot \rho \cdot \ph_{\epsilon} = 0$, as proved independently in \cite{med}.
\end{bemerkung}
\begin{bemerkung} \label{oda} There is an odd-dimensional analogue of this construction: Namely, consider the case of a simply-connected, Lorentzian Einstein-Sasaki manifold $(M^{1,n-1},g)$ of negative scalar curvature (cf. \cite{lei,boh}), which can be equivalently characterized in terms of special unitary holonomy of the cone over $(M,g)$. It follows that $(M,g)$ is spin and there again exist two distinguished conformal Killing spinors (cf. \cite{bl,leihabil}). Let us assume that the complex span of these twistor spinors is already $\text{ker }P^g=: \g_1$. As $(M,g)$ is Einstein with $\text{scal}^g <0$, there exists in this case (cf. \cite{leihabil, baju}) a \textit{distinguished} spacelike, parallel standard tractor $\tau$, defining a holonomy reduction $Hol(M,[g]) \subset SU\left(1,\frac{n-1}{2}\right) \subset SO(2,n-1) \subset SO(2,n)$ and a splitting $\mathcal{T}(M) = \langle \tau \rangle^{\bot} \oplus \langle \tau \rangle$. Furthermore, $\Delta_{2,n-1} \cong \Delta_{2,n}$ as $Spin(2,n-1)$-representations. Setting $\g_0 := {Par}\left(\Lambda^2_{\mathcal{T}}(M),\nabla^{nc} \right) \cap \{ \alpha \in \Omega^2_{\mathcal{T}}(M) \mid \alpha(\tau,\cdot) = 0 \}$, we can then proceed completely analogously to the even-dimensional case just discussed, i.e. we perform in the tractor setting the same purely algebraic construction on the orthogonal complement of $\tau$ in $\mathcal{T}(M)$. This turns $(\g_0 \oplus \R) \oplus \g_1$ into a Lie superalgebra with R-symmetries. Again, the overall construction is canonical. For a construction which uses a fixed metric in the conformal class, we refer to \cite{med}.
\end{bemerkung}
\section{Summary and application in small dimensions} \label{66}
We want to summarize the various possibilities and obstructions one faces in the attempt to construct a conformal Lie superalgebra via the tractor approach in Lorentzian signature. To this end, recall that twistor spinors on Lorentzian manifolds can be categorized into three types according to Theorem \ref{bg}. We have shown:
\begin{itemize}
\item If all twistor spinors are of type 1. or 2., the tractor conformal superalgebra is a Lie superalgebra (cf. Theorem \ref{hola}). Moreover, if $\g$ is a Lie superalgebra, there is a Brinkmann metric in the conformal class or a local splitting $[g]=[-dt^2+h]$, where $h$ is Riemannian Ricci-flat K\"ahler.
\item The previous situation always occurs if the space of twistor spinors is 1-dimensional.
\item If there are exactly two linearly independent twistor spinors of type $3.a$ or $3.b$ (depending on the parity of the dimension), one can construct a Lie superalgebra under the inclusion of an R-symmetry. Depending on the dimension, one has a Fefferman metric or a Lorentzian Einstein-Sasaki metric in the conformal class.
\end{itemize}
\begin{bemerkung}
We have not yet discussed the case when the twistor spinor is of type $3.c$ in Theorem \ref{bg}, i.e. when there is, at least locally, a splitting $(M,g) \cong (M_1,g_1) \times (M_2,g_2)$ into a product of Einstein spaces. By \cite{leihabil,baju} we have that $Hol(M,[g]) \cong Hol(M_1,[g_1]) \times Hol(M_2,[g_2])$. In this situation, it is an algebraic fact (cf. \cite{ldr}) that every spinor $v \in \Delta_{2,n}$ which is fixed by $Hol(M,[g])$ is of the form $v=v_1 \otimes v_2$, where $Hol(M_i,[g_i])v_i = v_i$. As also the converse is trivially true, we see that on the level of tractor conformal superalgebras, the product case manifests itself in a splitting of the odd part of $\mathfrak{g}$, i.e. $\g_1 = \g_1^1 \otimes \g_1^2$, where $\g_1^i$ are the odd parts of the tractor conformal Lie superalgebras $\g^i = \g_0^i \oplus \g_1^i$ of $(M_i,[g_i])$ for $i=1,2$. Moreover, note that we never have a splitting in the even part, $\g_0 \neq \g_0^1 \oplus \g_0^2$. This is because as $(M_i,g_i)$ are Einstein manifolds, there are parallel standard tractors $t_i \in \mathcal{T}(M_i)$, and it follows that $t_1 \wedge t_2 \in \mathfrak{g}_0$, but $t_1 \wedge t_2 \notin \g_0^1 \oplus \g_0^2$. It is moreover clear from the structure of $\alpha_{\psi}^2$ from Theorem \ref{bg} in this situation that $\g$ is not a Lie superalgebra in this case. \cite{med} presents a way of extending $\g$ to a Lie superalgebra under the inclusion of R-symmetries.
\end{bemerkung}
We have now studied the construction of a tractor conformal superalgebra for every (local) geometry admitting twistor spinors and summarize our results:
\begin{satz} \label{satan}
Let $(M^{1,n-1},c)$ be a Lorentzian conformal spin manifold admitting twistor spinors. Assume further that all twistor spinors on $(M,c)$ are of the same type according to Theorem \ref{bg}. Then there are the following relations between special Lorentzian geometries in the conformal class $c$ and properties of the tractor conformal superalgebra $\g = \g_0 \oplus \g_1$ of $(M,c)$:
\begin{center}
\begin{tabular}[h]{|p{3cm}|p{4.5cm}|p{5.5cm}|}
\hline
Twistor spinor type (Thm. \ref{bg}) & Special geometry in c & Structure of $\g = \g_0 \oplus \g_1$ \\ \hline\hline
1. & Brinkmann space & Lie superalgebra \\ \hline
2. & Splitting $(\R,-dt^2) \times $ Riem. Ricci-flat & Lie superalgebra \\ \hline
3.a & Lorentzian Einstein Sasaki (n odd) & No Lie superalgebra, becomes Lie superalgebra under inclusion of nontrivial R-symmetry \\ \hline
3.b & Fefferman space (n even) & No Lie superalgebra, becomes Lie superalgebra under inclusion of nontrivial R-symmetry \\ \hline
3.c & Splitting $M_1 \times M_2$ into Einstein spaces & No Lie superalgebra, odd part splits as $\g_1 = \g_1^1 \otimes \g_1^2$, but $\g_0 \neq \g_0^1 \oplus \g_0^2$ \\ \hline
\end{tabular}
\end{center}
\end{satz}
Let us apply this statement to tractor conformal superalgebras $\g$ of non-conformally flat Lorentzian conformal manifolds $(M^{1,n-1},[g])$ admitting twistor spinors in small dimensions, which have been studied in \cite{lei, bl}:\\
\newline
\textit{Let n=3. }It is known that dim ker $P^g \leq 1$ in this situation. Consequently, by Proposition \ref{stuff4}, $\g$ is a tractor conformal Lie superalgebra. Every twistor spinor is, off a singular set, locally equivalent to a parallel spinor on a $pp$-wave.\\
\textit{Let n=4. }Here, dim ker $P^g \leq 2$. Exactly one of the following cases occurs: Either, there is a Fefferman metric in the conformal class with two linearly independent twistor spinors. In this case we can construct a tractor superalgebra with R-symmetries. Otherwise, all twistor spinors are locally equivalent to parallel spinors on pp-waves. In this case the ordinary construction of a tractor conformal \textit{Lie} superalgebra $\g$ works.\\
\textit{Let n=5. }This case is already more involved, but the possibility of constructing a tractor conformal Lie superalgebra can be completely described: One again has that
dim ker $P^g \leq 2$. Exactly one of the following cases occurs:
\begin{enumerate}
\item There is a Lorentzian Einstein Sasaki metric in the conformal class. In this case, dim ker $P^g = 2$ and one can construct a tractor conformal Lie superalgebra with R-symmetries as indicated in Remark \ref{oda}.
\item $(M,g)$ is (at least locally) conformally equivalent to a product $\R^{1,0} \times (N^4,h)$, where the last factor is Riemannian Ricci-flat K\"ahler and admits two linearly independent parallel spinors. This corresponds to type 2. twistor spinors from Theorem \ref{bg}, and thus one can construct a tractor conformal Lie superalgebra.
\item All twistor spinors are equivalent to parallel spinors on $pp$-waves. Again, the construction yields a Lie superalgebra.
\end{enumerate}
\textit{Let }$n \geq 6$. Now \textit{mixtures} can occur, i.e. it is possible that some twistor spinors are of type 1. or 2., and some twistor spinors are of type 3. according to Theorem \ref{bg}. In this case, Theorem \ref{satan} does not apply.
\section{Extension to higher signatures} \label{highersgn}
Let $(M^{p,q},c)$ be a space- and time-oriented conformal spin manifold of arbitrary signature $(p,q)$ with $p+q=n$ and complex space of parallel spin tractors $\g_1 = Par(\mathcal{S}_{\C}(M),\nabla^{nc})$. We want to associate to $(M,c)$ a tractor conformal superalgebra in a natural way. However, our construction from the previous sections depends crucially on Lorentzian signature. More precisely, the bracket $\g_1 \times \g_1 \rightarrow \g_0$ may become trivial in other signatures, and it therefore has to be modified: Every parallel spin tractor on $M$ naturally gives rise to a series of parallel tractor $k$-forms, which are nontrivial at least for $k=p+1$. We include all these conformal symmetries in the algebra and use them to construct the odd-odd bracket. Thus we also have to modify $\g_0$ and would like to set
\begin{align}
\g_0 := Par\left(\Lambda^*_{\mathcal{T}}(M),\nabla^{nc} \right) \subset \Omega^*_{\mathcal{T}}(M). \label{atartup}
\end{align}
\subsection*{Algebraic preparation}
Let us for a moment change our notation to $\R^{r,s}$ and $m=r+s$, as the following results will later be applied to $\R^{p,q}$ and $\R^{p+1,q+1}$. In order to introduce a bracket on $\Lambda^k_{r,s}$, we recall the following formulas for the action of a vector $X \in \R^{r,s}$ and a $k$-form $\omega \in \Lambda^k_{r,s}$ on a spinor $\ph \in \Delta^{\C}_{r,s}$ (cf. \cite{bfkg}):
\begin{equation} \label{ext1}
\begin{aligned}
X \cdot (\omega \cdot \ph) &= (X^{\flat} \wedge \omega) \cdot \ph - (X \invneg \omega) \cdot \ph, \\
\omega \cdot (X \cdot \ph) &= (-1)^k \left((X^{\flat} \wedge \omega) \cdot \ph + (X \invneg \omega) \cdot \ph \right).
\end{aligned}
\end{equation}
This motivates us to set $X \cdot \omega := X^{\flat} \wedge \omega - X \invneg \omega \in \Lambda^{k-1} \oplus \Lambda^{k+1}$ for $X \in \R^{r,s}$ and $\omega \in \Lambda^k_{r,s}$. We use this to set inductively for $e_{I}^{\flat}:=e^{\flat}_{i_1} \wedge ...\wedge e^{\flat}_{i_j} \in \Lambda^j_{r,s}$, where $1 \leq i_1 < i_2<...<i_j \leq m$:
\begin{align}
e_{I}^{\flat} \cdot \omega := e_{i_1} \cdot (e^{\flat}_{I \backslash \{i_1 \}} \cdot \omega ). \label{chapp}
\end{align}
By multilinear extension, this defines $\eta \cdot \omega \in \Lambda^*_{r,s}$ for all $\eta, \omega \in \Lambda^*_{r,s}$. One checks that this product is associative and $O(r,s)$-equivariant, i.e.
\begin{align} \label{ete}
(A\eta) \cdot (A\omega) = A(\eta \cdot \omega) \text{ }\forall A \in O(r,s).
\end{align}
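As a consistency check of this definition, note that it reproduces the Clifford relation already on the level of forms: for $X \in \R^{r,s}$ and $\omega \in \Lambda^k_{r,s}$, using $X^{\flat} \wedge X^{\flat} \wedge \omega = 0$, $X \invneg (X \invneg \omega) = 0$ and the antiderivation rule $X \invneg (X^{\flat} \wedge \omega) = \langle X,X \rangle_{r,s} \cdot \omega - X^{\flat} \wedge (X \invneg \omega)$, one computes
\begin{align*}
X \cdot (X \cdot \omega) = X^{\flat} \wedge \left( X^{\flat} \wedge \omega - X \invneg \omega \right) - X \invneg \left( X^{\flat} \wedge \omega - X \invneg \omega \right) = - \langle X,X \rangle_{r,s} \cdot \omega,
\end{align*}
in accordance with the relation $X \cdot X = -\langle X,X \rangle_{r,s}$ in $Cl_{r,s}$ underlying (\ref{ext1}).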
\begin{bemerkung}
The above definition of $\cdot$ is useful for concrete calculations. However, there is an equivalent way of introducing the inner product $\cdot$ on the space of forms which shows that this construction is very natural. To this end, consider the multilinear maps
\[ f_k: \underbrace{\R^{r,s} \times...\times \R^{r,s}}_{k \text{ times}} \rightarrow Cl_{r,s}, \text{ }(v_1,...v_k) \mapsto \frac{1}{k!} \sum_{\sigma \in S_k} \text{sgn}(\sigma) \cdot v_{\sigma_1}\cdot...\cdot v_{\sigma_k}. \]
The maps $f_k$ induce a canonical vector space isomorphism (cf. \cite{lm})
\begin{align*}
\widetilde{f}: \Lambda^*_{r,s} \rightarrow Cl_{r,s},
\end{align*}
for which $\widetilde{f}(v_1^{\flat} \wedge...\wedge v_k^{\flat}) = f_k(v_1,...,v_k)$ holds. It is now straightforward to calculate that our inner product (\ref{chapp}) on $\Lambda^*_{r,s}$ is just the algebra structure which makes $\widetilde{f}$ become an algebra isomorphism, i.e. one has for $\eta, \omega \in \Lambda^*_{r,s}$ that
\begin{align}
\eta \cdot \omega = \widetilde{f}^{-1}\left(\widetilde{f}(\eta) \cdot \widetilde{f}(\omega)\right). \label{prodo}
\end{align}
\end{bemerkung}
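For pairwise distinct indices, all signed summands in $f_k$ coincide, so that $\widetilde{f}$ simply sends wedge products of orthogonal basis covectors to the corresponding Clifford products, e.g.
\begin{align*}
\widetilde{f}\left(e_1^{\flat} \wedge e_2^{\flat}\right) = \frac{1}{2}\left(e_1 \cdot e_2 - e_2 \cdot e_1\right) = e_1 \cdot e_2, \qquad \widetilde{f}\left(e_1^{\flat} \wedge e_2^{\flat} \wedge e_3^{\flat}\right) = e_1 \cdot e_2 \cdot e_3.
\end{align*}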
With these definitions, the space $\Lambda^*_{r,s}$ together with the map
\begin{align}
[\cdot, \cdot]_{\Lambda}:\Lambda^*_{r,s} \otimes \Lambda^*_{r,s} \rightarrow \Lambda^*_{r,s}\text{, }[\eta,\omega]_{\Lambda}:=\eta \cdot \omega - \omega \cdot \eta \label{dota}
\end{align}
becomes a Lie algebra in a natural way due to associativity of Clifford multiplication.
\begin{bemerkung}
We index this bracket with the subscript $\Lambda$ because on 2-forms there are now two brackets: $[\cdot, \cdot]_{\Lambda}$ and the endomorphism bracket $[\cdot, \cdot]_{\mathfrak{so}}$ from the previous sections. However, it is straightforward to calculate that $[\cdot, \cdot]_{\Lambda}=2\cdot [\cdot,\cdot]_{\mathfrak{so}}$, whence these two Lie algebra structures are equivalent. Note that $[\Lambda^k_{r,s},\Lambda^l_{r,s}]_{\Lambda}$ is in general of mixed degree for $k,l \neq 2$.
\end{bemerkung}
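To illustrate this on 2-forms, take $\eta = e_1^{\flat} \wedge e_2^{\flat}$ and $\omega = e_2^{\flat} \wedge e_3^{\flat}$ and set $\varepsilon_2 := \langle e_2, e_2 \rangle_{r,s}$. Using (\ref{prodo}) together with $e_2 \cdot e_2 = -\varepsilon_2$ in $Cl_{r,s}$, one computes
\begin{align*}
\eta \cdot \omega = e_1 e_2 e_2 e_3 = -\varepsilon_2 \cdot e_1 e_3, \qquad \omega \cdot \eta = e_2 e_3 e_1 e_2 = \varepsilon_2 \cdot e_1 e_3,
\end{align*}
whence $[\eta,\omega]_{\Lambda} = -2 \varepsilon_2 \cdot e_1^{\flat} \wedge e_3^{\flat}$, which lies again in $\Lambda^2_{r,s}$ and exhibits the factor $2$ relative to $[\cdot,\cdot]_{\mathfrak{so}}$. For disjoint index pairs, such as $\eta = e_1^{\flat} \wedge e_2^{\flat}$ and $\omega = e_3^{\flat} \wedge e_4^{\flat}$, the two Clifford products coincide and the bracket vanishes.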
\subsection*{Conformally invariant definition of $\mathfrak{g}$}
Turning to geometry again, let $(M^{p,q},c)$ be a conformal spin manifold of signature $(p,q)$. Given $\alpha, \beta \in \Omega^*_{\mathcal{T}}(M)$ and $x \in M$, we may write $\alpha(x)=[s,\widehat{\alpha}]$ and $\beta(x)=[s,\widehat{\beta}]$ for some $s \in \overline{\mathcal{P}}^1_x$ and $\widehat{\alpha}, \widehat{\beta} \in \Lambda^*_{p+1,q+1}$. We then introduce a bracket on tractor forms by setting
\begin{align} \label{pwd}
\left(\alpha \cdot \beta \right)(x):=[s,\widehat{\alpha} \cdot \widehat{\beta}].
\end{align}
The equivariance property (\ref{ete}) shows that (\ref{pwd}) is well-defined. We furthermore define the bracket $[\alpha,\beta]_{\mathcal{T}}$ on $ \Omega^*_{\mathcal{T}}(M)$ by pointwise application of (\ref{dota}). Clearly, this defines a Lie algebra structure on $\Omega^*_{\mathcal{T}}(M)$.
\begin{Lemma}
The normal conformal Cartan connection $\nabla^{nc}$ on $\Omega^*_{\mathcal{T}}(M)$ is a derivation wrt. the product $\cdot$, i.e.
\[ \nabla^{nc}_X (\alpha \cdot \beta) = \left(\nabla^{nc}_X \alpha\right)\cdot \beta + \alpha \cdot \left(\nabla^{nc}_X \beta\right) \text{ }\forall \alpha, \beta \in \Omega^*_{\mathcal{T}}(M) \text{ and }X \in \mathfrak{X}(M). \]
\end{Lemma}
\textit{Proof. }
Suppose first that $\alpha = Y^{\flat}$ for some standard tractor $Y \in \Gamma(\mathcal{T}(M))$. We calculate:
\begin{align*}
\nabla_{X}^{nc}(\alpha \cdot \beta) &= \nabla_X^{nc} (Y^{\flat} \wedge \beta - Y \invneg \beta )\\
&=\left(\nabla_X^{nc} Y\right)^{\flat} \wedge \beta + Y^{\flat} \wedge \left(\nabla_X^{nc} \beta \right) - \left(\nabla_X^{nc} Y\right) \invneg \beta - Y \invneg \left(\nabla_X^{nc} \beta \right) \\
&= \left(\nabla_X^{nc} \alpha \right) \cdot \beta + \alpha \cdot \left(\nabla_X^{nc} \beta \right).
\end{align*}
As a next step, let $\alpha \in \Omega^*_{\mathcal{T}}(M)$ be arbitrary. We fix $x \in M$ and a local pseudo-orthonormal frame $(s_0,...,s_{n+1})$ (wrt. $\langle \cdot , \cdot \rangle_{\mathcal{T}}$) on $\mathcal{T}(M)$ around $x$ such that $\nabla^{nc} s_i = 0$ for $i=0,...,n+1$ at $x$. Wrt. this frame we write $\alpha = \sum_{I} \alpha_I s_I^{\flat}$ locally around $x$ for smooth functions $\alpha_I$. We apply the above result inductively for $Y = s_i$ to obtain at $x$
\begin{align*}
\nabla^{nc}_X(\alpha \cdot \beta)& = \sum_I \nabla_X^{nc}\left(\alpha_I \cdot s_I^{\flat} \cdot \beta \right) = \sum_I X(\alpha_I) \cdot s_I^{\flat} \cdot \beta + \sum_I \alpha_I s_I^{\flat} \cdot \nabla^{nc}_X \beta \\
&=\left(\nabla_X^{nc}\alpha \right) \cdot \beta + \alpha \cdot \left(\nabla_X^{nc}\beta \right),
\end{align*}
which gives the desired formula.
$\hfill \Box$
\begin{kor}
$\alpha, \beta \in Par\left(\Lambda^*_{\mathcal{T}}(M),\nabla^{nc}\right)$ implies that $[\alpha, \beta]_{\mathcal{T}} \in Par\left(\Lambda^*_{\mathcal{T}}(M),\nabla^{nc}\right)$. Thus the space $Par\left(\Lambda^*_{\mathcal{T}}(M),\nabla^{nc}\right)$ together with the bracket induced by $[\cdot, \cdot]_{\mathcal{T}}$ is a Lie subalgebra of $\left(\Omega^*_{\mathcal{T}}(M),[\cdot, \cdot]_{\mathcal{T}}\right)$.
\end{kor}
As a next step we extend the Lie algebra $\g_0$ (cf. (\ref{atartup})) of (higher order) conformal symmetries
together with the bracket defined above to a tractor conformal superalgebra $\g=\g_0 \oplus \g_1$ in a natural way by setting as before
\[ \g_1 := Par(\mathcal{S}_{\C}(M),\nabla^{nc}), \]
and introducing the brackets
\begin{equation} \label{brain}
\begin{aligned}
\g_0 \times \g_0 \rightarrow \g_0\text{, } & (\alpha, \beta) \mapsto [\alpha,\beta]_{\mathcal{T}}, \\
\g_0 \times \g_1 \rightarrow \g_1\text{, } & (\alpha, \psi) \mapsto \alpha \cdot \psi, \\
\g_1 \times \g_0 \rightarrow \g_1\text{, } & (\psi, \alpha) \mapsto - \alpha \cdot \psi, \\
\g_1 \times \g_1 \rightarrow \g_0\text{, } & (\psi_1, \psi_2) \mapsto \sum_{l \in L_p} \alpha^l_{\psi_1, \psi_2}.
\end{aligned}
\end{equation}
Here, $L_p:=\{ l \in \mathbb{N} \mid \psi \mapsto \alpha^l_{\psi,\psi}\text{ not identically }0,\text{ }\alpha^l_{\psi_1, \psi_2} \text{ symm. in } \psi_1,\psi_2\}$, and for given $p$, one always has $p+1 \in L_p$, and thus the brackets have the right symmetry properties.
\begin{bemerkung}
If $p=1$ and we allow only $l=2$ in the last bracket, we recover a tractor conformal superalgebra which is naturally isomorphic to the one constructed in the previous chapter. Thus, the above construction may be viewed as a reasonable extension to arbitrary signatures.
\end{bemerkung}
It is of course natural to ask, as done in the Lorentzian case, under which conditions the tractor conformal superalgebra actually is a \textit{Lie} superalgebra, i.e. we have to check the Jacobi identities:
\begin{itemize}
\item As $\g_0$ is a Lie algebra, the even-even-even identity is trivial.
\item It holds by construction of the bracket $[\cdot, \cdot]_{\mathcal{T}}$ as extension of (\ref{ext1}) that
\[[\alpha, \beta]_{\mathcal{T}} \cdot \psi =\alpha \cdot \beta \cdot \psi - \beta \cdot \alpha \cdot \psi. \]
But this is precisely the even-even-odd Jacobi identity.
\item The Jacobi identity for the odd-odd-odd component again leads to \[ \alpha_{\psi,\psi}^l \cdot \psi \stackrel{!}{=}0.\]However, there is no known way of expressing this condition in terms of $Hol(M,c)$ due to the fact that a classification of possible parallel tractor forms induced by twistor spinors is only available for the Lorentzian case.
\item The even-odd-odd Jacobi identity is by polarization equivalent to \begin{align}
[\alpha,\alpha^l_{\psi,\psi}]_{\mathcal{T}}\stackrel{!}{=}2 \cdot \alpha^l_{\alpha \cdot \psi, \psi} \text{ for }l\in L_p. \label{gt}
\end{align} However, this identity fails to hold in general. From an algebraic point of view this is due to the fact that $[\Lambda^k_{p+1,q+1},\Lambda^k_{p+1,q+1}]_{\Lambda} \subset \Lambda^k_{p+1,q+1}$ only if $k=2$. This was precisely the situation we had in the Lorentzian setting. For other values of $k$ and $p$ the definition of $[ \cdot, \cdot ]_{\mathcal{T}}$ leads to additional terms on the left hand side of (\ref{gt}).
\end{itemize}
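A minimal example for this failure of closure in a fixed degree: already for 1-forms and orthogonal basis vectors $e_1, e_2$ one has
\begin{align*}
[e_1^{\flat}, e_2^{\flat}]_{\Lambda} = e_1^{\flat} \cdot e_2^{\flat} - e_2^{\flat} \cdot e_1^{\flat} = 2 \cdot e_1^{\flat} \wedge e_2^{\flat} \in \Lambda^2_{p+1,q+1},
\end{align*}
so $[\Lambda^1_{p+1,q+1},\Lambda^1_{p+1,q+1}]_{\Lambda} \not\subset \Lambda^1_{p+1,q+1}$; only for $k=2$ does the bracket preserve the degree.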
\subsection*{Example: Generic twistor spinors in signature (3,2)}
Consider a conformal spin manifold $(M,c)$ in signature $(3,2)$ admitting a generic real twistor spinor, cf. \cite{hs1}. This means that there exists a twistor spinor $\ph \in \Gamma(S^g_{\R})$ such that $\langle \ph, D^g \ph \rangle_{S^g} \neq 0$ (which is independent of $g \in c$). Under further generic assumptions on the conformal structure, one has that $Hol(M,c)=G_{2,2} \subset SO^+(4,3)$\footnote{The existence of a generic real twistor spinor always implies $Hol(M,c)\subset G_{2,2}$, cf. \cite{hs1}. The exact conditions for full holonomy $G_{2,2}$ are given in \cite{leinur} in terms of an explicit ambient metric construction whose metric holonomy coincides with $Hol(M,c)$}, where $G_{2,2}$ can also be defined as the stabilizer of a generic 3-form $\omega_0 \in \Lambda^3_{4,3}$ under the $SO^+(4,3)$-action, see \cite{kath}.\\
Under these conditions, there is up to constant multiples exactly one linearly independent real pure spin tractor $\psi \in \Gamma(M,\mathcal{S}_{\R}(M))$, additionally satisfying dim ker $\psi = 0$. All parallel tractor forms on $(M,c)$ are given by the span of $\alpha_{\psi}^3 \in \Omega^3_{\mathcal{T}}(M)$, which is pointwise of type $\omega_0$, and $\ast \alpha_{\psi}^3 \in \Omega^4_{\mathcal{T}}(M)$, which is pointwise of type $\ast \omega_0$. Thus, the tractor conformal superalgebra of $(M,c)$ is given by
\begin{align*}
\g=\g_0 \oplus \g_1 = \text{span}\{\alpha_{\psi}^3, \ast \alpha_{\psi}^3 \} \oplus \text{span}\{ \psi \}.
\end{align*}
Pure linear algebra in $\R^{4,3}$ reveals that\footnote{See also \cite{kath} for explicit formulas of $\omega_0$ and pointwise orbit representatives of $\psi$.}
\begin{align*}
\alpha_{\psi}^3 \cdot (\ast \alpha_{\psi}^3) &= (\ast \alpha_{\psi}^3) \cdot \alpha_{\psi}^3, \\
\alpha_{\psi}^3 \cdot \psi &= \text{const.}_1 \cdot \psi, \\
(\ast \alpha_{\psi}^3) \cdot \psi &= \text{const.}_2 \cdot \psi,
\end{align*}
where the $\psi$-dependent constants are proportional to $\langle \psi, \psi \rangle_{\mathcal{S}}$ and zero iff $\psi = 0$. These observations directly translate into the following properties of the superalgebra $\g$ with brackets as introduced in (\ref{brain}).
\begin{Proposition}
The tractor conformal superalgebra $\g$ associated to a conformal spin manifold $(M,c)$ in signature $(3,2)$ with $Hol(M,c)=G_{2,2}$ does not satisfy the odd-odd-odd and the even-odd-odd Jacobi identities. Moreover, the even part $\g_0$ is abelian.
\end{Proposition}
The example underlines that, in contrast to the Lorentzian case, tractor conformal superalgebras need not satisfy even 3 of the 4 Jacobi identities.
\subsection*{Metric description}
As done in the Lorentzian case, we want to compute the brackets of a general tractor conformal superalgebra $\g=\g_0 \oplus \g_1$ wrt. a metric in the conformal class. To this end, let $\alpha \in \Lambda^{k+1}_{p+1,q+1}, \beta \in \Lambda^{l+1}_{p+1,q+1}$. As in (\ref{mind}) we decompose $\alpha = e_+^{\flat} \wedge \alpha_+ + \alpha_0 + e^{\flat}_- \wedge e_+^{\flat} \wedge \alpha_{\mp} + e_-^{\flat} \wedge \alpha_-$. We want to compute $\left([\alpha,\beta]_{\Lambda}\right)_+$, i.e. the $+$-component of $[\alpha,\beta]_{\Lambda}$ wrt. the decomposition (\ref{mind}). As a preparation, we calculate for $\omega \in \Lambda^{r}_{p,q}, \eta \in \Lambda^s_{p,q}$ the products
{\allowdisplaybreaks \begin{align*}
(e^{\flat}_{\pm} \wedge \omega) \cdot \eta &= e^{\flat}_{\pm} \wedge (\omega \cdot \eta), \\
(e^{\flat}_{\pm} \wedge \omega) \cdot (e_{\pm}^{\flat} \wedge \eta) &= 0, \\
(e^{\flat}_{\pm} \wedge \omega) \cdot (e_{\mp}^{\flat} \wedge \eta) &=(-1)^r \left(e_{\pm}^{\flat} \wedge e_{\mp}^{\flat} \wedge (\omega \cdot \eta) - \eta \cdot \omega \right),\\
(e^{\flat}_{\pm} \wedge \omega) \cdot (e_-^{\flat} \wedge e_+^{\flat} \wedge \eta) &= \mp e_{\pm}^{\flat} \wedge (\omega \cdot \eta), \\
\omega \cdot (e_{\pm}^{\flat} \wedge \eta) &= (-1)^r e_{\pm}^{\flat} \wedge (\omega \cdot \eta), \\
\omega \cdot (e_-^{\flat} \wedge e_+^{\flat} \wedge \eta) &=e_-^{\flat} \wedge e_+^{\flat} \wedge (\omega \cdot \eta), \\
(e_-^{\flat} \wedge e_+^{\flat} \wedge \omega) \cdot \eta &= e_-^{\flat} \wedge e_+^{\flat} \wedge (\omega \cdot \eta), \\
(e_-^{\flat} \wedge e_+^{\flat} \wedge \omega) \cdot (e_{\pm}^{\flat} \wedge \eta) &= \pm (-1)^r e_{\pm}^{\flat} \wedge (\omega \cdot \eta),\\
(e_-^{\flat} \wedge e_+^{\flat} \wedge \omega) \cdot (e_-^{\flat} \wedge e_+^{\flat} \wedge \eta) &= \omega \cdot \eta.
\end{align*}}
With these formulas, it is straightforward to compute that for $\alpha, \beta$ as above one has
\begin{align}
\left(\alpha \cdot \beta \right)_+ = \alpha_+ \cdot \beta_0 - \alpha_+ \cdot \beta_{\mp} +(-1)^{k+1} \alpha_0 \cdot \beta_+ + (-1)^{k+1} \alpha_{\mp} \cdot \beta_+, \label{76}
\end{align}
and therefore,
\begin{align}
\left([\alpha,\beta]_{\Lambda}\right)_+ =& \alpha_+ \cdot \beta_0 - (-1)^{l+1} \beta_0 \cdot \alpha_+ - {\alpha_+} \cdot \beta_{\mp} - (-1)^{l+1}\beta_{\mp} \cdot \alpha_+ +(-1)^{k+1} \alpha_0 \cdot \beta_+ - \beta_+ \cdot \alpha_0 \notag \\
&+ (-1)^{k+1} \alpha_{\mp} \cdot \beta_+ + \beta_+ \cdot \alpha_{\mp}. \label{77}
\end{align}
This directly leads to the following global version:
\begin{Proposition}\label{cne}
Let $g \in c$ and let $\alpha, \beta \in \g_0$ be of degree $k+1$ and $l+1$ respectively. Further, let $\alpha_+ = {proj}^g_{\Lambda,+} \alpha \in \Omega^k_{nc,g}(M)$ and $\beta_+ = {proj}^g_{\Lambda,+} \beta \in \Omega^l_{nc,g}(M)$ denote the associated nc-Killing forms. As $\alpha \cdot \beta \in \g_0$ and $[\alpha,\beta]_{\mathcal{T}} \in \g_0$ are again parallel, the forms $\left(\alpha \cdot \beta\right)_+= {proj}^g_{\Lambda,+} (\alpha \cdot \beta)\in \Omega^*_{nc,g}(M)$ and $\left([\alpha,\beta]_{\mathcal{T}}\right)_+ ={proj}^g_{\Lambda,+} ([\alpha,\beta]_{\mathcal{T}})\in \Omega^*_{nc,g}(M)$ are again nc-Killing forms wrt. $g$. They are explicitly given by
\begin{equation}\label{fo1}
\begin{aligned}
\alpha_+ \circ \beta_+ := \left(\alpha \cdot \beta\right)_+ =& \frac{1}{l+1}\cdot \alpha_+ \cdot d\beta_+ + \frac{1}{n-l+1} \alpha_+ \cdot d^* \beta_+ \\
&+ (-1)^{k+1} \frac{1}{k+1} d\alpha_+ \cdot \beta_+ + (-1)^k \cdot \frac{1}{n-k+1} d^* \alpha_+ \cdot \beta_+,
\end{aligned}
\end{equation}
\begin{align*}
\left([\alpha,\beta]_{\mathcal{T}}\right)_+ =& \frac{1}{l+1}\cdot \alpha_+ \cdot d\beta_+ + (-1)^l \frac{1}{l+1} d\beta_+ \cdot \alpha_+ + \frac{1}{n-l+1} \alpha_+ \cdot d^* \beta_+ +(-1)^{l+1} \frac{1}{n-l+1} d^*\beta_+ \cdot \alpha_+ \\
&+(-1)^{k+1} \frac{1}{k+1} d\alpha_+ \cdot \beta_+ - \frac{1}{k+1} \beta_+ \cdot d\alpha_+ + (-1)^k \cdot \frac{1}{n-k+1} d^* \alpha_+ \cdot \beta_+ - \frac{1}{n-k+1} \beta_+ \cdot d^* \alpha_+.
\end{align*}
\end{Proposition}
\textit{Proof. }
This follows directly from the explicit form of the isomorphism ${proj}^g_{\Lambda,+}$ from (\ref{soga}), i.e. one has to insert $(\alpha_+,\alpha_0,\alpha_{\mp},\alpha_-)=(\alpha_+, \frac{1}{k+1}d\alpha_+, -\frac{1}{n-k+1}d^* \alpha_+, \Box_k \alpha_+)$ into the formulas (\ref{76}), (\ref{77}).
$\hfill \Box$\\
We study some interesting consequences and applications. First, note that Proposition \ref{cne} opens a way to construct new nc-Killing forms out of existing ones, i.e. $\circ$ defines a map
\begin{align}
\circ: \Omega^k_{nc,g}(M) \times \Omega^l_{nc,g}(M) \rightarrow \Omega^*_{nc,g}(M). \label{map}
\end{align}
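For instance, specializing formula (\ref{fo1}) to the case of two nc-Killing 1-forms $\alpha_+, \beta_+ \in \Omega^1_{nc,g}(M)$, i.e. $k=l=1$, yields
\begin{align*}
\alpha_+ \circ \beta_+ = \frac{1}{2} \cdot \alpha_+ \cdot d\beta_+ + \frac{1}{n} \cdot \alpha_+ \cdot d^*\beta_+ + \frac{1}{2} \cdot d\alpha_+ \cdot \beta_+ - \frac{1}{n} \cdot d^*\alpha_+ \cdot \beta_+,
\end{align*}
which is in general of mixed degree.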
In general, the resulting product is of mixed degree. We have already shown in Proposition \ref{cof} that for nc-Killing 1-forms the bracket $[ \cdot, \cdot ]_{\mathcal{T}}$ corresponds via a fixed $g \in c$ to the Lie bracket of vector fields (up to a factor). For deg $\alpha = 2$ one can simplify the expression from Proposition \ref{cne} as follows:
\begin{Proposition} \label{na}
Let $\alpha \in Par(\Lambda^2(M),\nabla^{nc})$, $\beta \in Par(\Lambda^{k+1}(M),\nabla^{nc})$ and $g \in c$. Then it holds for the nc-Killing form $\left([\beta, \alpha]_{\mathcal{T}}\right)_+ \in \Omega_{nc,g}(M)$ that
\begin{align}
\frac{1}{2} \left([\beta, \alpha]_{\mathcal{T}}\right)_+ = L_{V_{\alpha}} \beta_+ - (k+1) \lambda_{\alpha} \cdot \beta_+ \in \Omega^k_{nc,g}(M). \label{equ}
\end{align}
Here, $L$ denotes the Lie derivative, $V_{\alpha}$ is the conformal vector field canonically associated to $\alpha$, and $\lambda_{\alpha} \in C^{\infty}(M)$ is defined via $L_{V_{\alpha}}g=2 \lambda_{\alpha} \cdot g$. In particular, the right hand side of (\ref{equ}) is again a nc-Killing $k$-form.
\end{Proposition}
\begin{bemerkung}
Proposition \ref{na} yields a natural action which gives the space of nc-Killing $k$-forms the structure of a module for the Lie algebra of normal conformal vector fields. In this context, we remark that it has already been shown in \cite{sem} that for a \textit{conformal} vector field $V$ and \textit{conformal} Killing $k-$form $\beta_+$, the form $L_{V} \beta_+ - (k+1) \lambda_{V} \cdot \beta_+$ is again a \textit{conformal} Killing $k-$form.
\end{bemerkung}
\textit{Proof. }
Dualizing the first nc-Killing equation (cf. \ref{stu}) for $\alpha_+$ yields
\begin{align}
\nabla^g_X V_{\alpha} = (X \invneg \alpha_0)^{\sharp} + \alpha_{\mp} X. \label{k45}
\end{align}
We have that $(L_{V_{\alpha}}g)(X,Y) = g(\nabla^g_X V_{\alpha},Y) + g(\nabla^g_Y V_{\alpha},X) = 2 \lambda_{\alpha} g(X,Y)$. Inserting (\ref{k45}) shows that $\alpha_{\mp}=\lambda_{\alpha} \in C^{\infty}(M)$. We fix $x \in M$ and let $(s_1,...,s_n)$ be a local $g-$pseudo-orthonormal frame around $x$. Cartan's formula for the Lie derivative $L$ yields that around $x$ we have
\begin{align*}
L_{V_{\alpha}}\beta_+ &= d\left(V_{\alpha} \invneg \beta_+ \right) + V_{\alpha} \invneg d \beta_+ \\
&=\sum_{i=1}^n \epsilon_i s_i^{\flat} \wedge \underbrace{\nabla^g_{s_i} \left(V_{\alpha} \invneg \beta_+\right)}_{=\left(\nabla^g_{s_i} V_{\alpha}\right) \invneg \beta_+ + V_{\alpha} \invneg \nabla^g_{s_i} \beta_+} + V_{\alpha} \invneg d \beta_+ \\
&= \sum_{i=1}^n \epsilon_i \left(s_i^{\flat} \wedge \left( \left(\nabla^g_{s_i} V_{\alpha}\right) \invneg \beta_+ \right) - V_{\alpha} \invneg \left( s_i^{\flat} \wedge \nabla^g_{s_i} \beta_+ \right) + g(s_i,V_{\alpha}) \cdot \nabla^g_{s_i} \beta_+ \right) + V_{\alpha} \invneg d \beta_+ \\
&= \underbrace{\sum_{i=1}^n \epsilon_i s_i^{\flat} \wedge \left( \left(\nabla^g_{s_i} V_{\alpha}\right) \invneg \beta_+ \right)}_{\text{I}} + \underbrace{\nabla^g_{V_{\alpha}} \beta_+}_{\text{II}}
\end{align*}
Using the nc-Killing equations for $\alpha_+$ and $\beta_+$, we rewrite the two summands as follows:
\begin{align*}
\text{I} = \underbrace{\sum_{i=1}^n \epsilon_i s_i^{\flat} \wedge \left((s_i \invneg \alpha_0)^{\sharp} \invneg \beta_+ \right)}_{\text{I}a} + \underbrace{\alpha_{\mp} \cdot \sum_{i=1}^n \epsilon_i s_i^{\flat} \wedge \left(s_i \invneg \beta_+ \right)}_{\text{I}b}
\end{align*}
Clearly, I$b = k \cdot \alpha_{\mp} \cdot \beta_+$. In order to express I$a$ nicely, we introduce functions $a_{ij}$ such that $(s_i \invneg \alpha_0 )^{\sharp} = \sum_j \epsilon_j a_{ij} \cdot s_j$. Clearly, $a_{ij}=-a_{ji}$ and $\alpha_0 = \sum_{i < j} \epsilon_i \epsilon_j a_{ij} s_i^{\flat} \wedge s_j^{\flat}$. Inserting this into I$a$ yields that
\begin{align*}
\text{I}a = \sum_{i <j} \epsilon_i \epsilon_j a_{ij} \cdot \left(s_i^{\flat} \wedge \left(s_j \invneg \beta_+ \right) - s_j^{\flat} \wedge \left(s_i \invneg \beta_+ \right)\right).
\end{align*}
In order to simplify this expression, we proceed as follows: Let $s^{\flat}_{J}:=s_{j_1}^{\flat} \wedge...\wedge s_{j_{k+1}}^{\flat}$ for $1 \leq j_1 < ... < j_{k+1} \leq n$. We compute for $i<j$:
\begin{align*}
s^{\flat}_J \cdot \left( s_i^{\flat} \wedge s_j^{\flat} \right) &= \left(s^{\flat}_J \cdot s_{i} \right) \cdot s_j = (-1)^{k+1}\left(s_i^{\flat} \wedge s_{J}^{\flat} + s_i \invneg s_J^{\flat} \right) \cdot s_j \\
&=s^{\flat}_i \wedge s^{\flat}_j \wedge s^{\flat}_J + s^{\flat}_i \wedge (s_j \invneg s^{\flat}_J ) - s^{\flat}_j \wedge (s_i \invneg s^{\flat}_J ) + s_i \invneg s_j \invneg s^{\flat}_J.
\end{align*}
Similarly, one obtains
\begin{align*}
\left( s_i^{\flat} \wedge s_j^{\flat} \right) \cdot s_J^{\flat} = s^{\flat}_i \wedge s^{\flat}_j \wedge s^{\flat}_J - s^{\flat}_i \wedge (s_j \invneg s^{\flat}_J ) + s^{\flat}_j \wedge (s_i \invneg s^{\flat}_J ) + s_i \invneg s_j \invneg s^{\flat}_J.
\end{align*}
Consequently, $\frac{1}{2} \cdot \left( s^{\flat}_J \cdot \left( s_i^{\flat} \wedge s_j^{\flat} \right) - \left( s_i^{\flat} \wedge s_j^{\flat} \right) \cdot s_J^{\flat} \right) = s_i^{\flat} \wedge \left(s_j \invneg s^{\flat}_J \right) - s_j^{\flat} \wedge \left(s_i \invneg s^{\flat}_J \right)$, and multilinear extension immediately yields that
\[\text{I}a =\frac{1}{2}(\beta_+ \cdot \alpha_0 - \alpha_0 \cdot \beta_+). \]
Furthermore, using the nc-Killing equation for $\beta_+$, the summand II can be rewritten as
\begin{align*}
\nabla^g_{V_{\alpha}}\beta_+ &= V_{\alpha} \invneg \beta_0 + \alpha_+ \wedge \beta_{\mp} \\
&=\frac{1}{2} \cdot \left((-1)^{k+1} \beta_0 \cdot \alpha_+ - \alpha_+ \cdot \beta_0 \right) + \frac{1}{2} \cdot \left( (-1)^{k+1} \beta_{\mp} \cdot \alpha_+ + \alpha_+ \cdot \beta_{\mp} \right).
\end{align*}
Putting all these formulas together again yields that
\begin{align*}
L_{V_{\alpha}} \beta_+ - (k+1) \lambda_{\alpha} \beta_+ =& \frac{1}{2} \left((-1)^{k+1}\beta_0 \cdot \alpha_+ - \alpha_+ \cdot \beta_0 +(-1)^{k+1} \beta_{\mp} \cdot \alpha_+ + \alpha_+ \cdot \beta_{\mp} + \beta_+ \cdot \alpha_0 - \alpha_0 \cdot \beta_+ \right) \\
&+ k \cdot \alpha_{\mp} \beta_+ -(k+1) \cdot \alpha_{\mp} \beta_+.
\end{align*}
Comparing this expression to (\ref{77}) immediately yields the Proposition.
$\hfill \Box$\\
As a second application of Proposition \ref{cne} we consider the case of $g$ being an Einstein metric in the conformal class.
\begin{Proposition}
If $\beta \in \Omega_{nc,g}^k(M)$ is a nc-Killing $k$-form wrt. an Einstein metric $g$ on $M$, then both $\beta_0 = \frac{1}{k+1} \cdot d\beta_+$ and $\beta_{\mp} = -\frac{1}{n-k+1} \cdot d^*\beta_+$ are nc-Killing forms for $g$ as well.
\end{Proposition}
\textit{Proof. }As elaborated in \cite{nc}, on an Einstein manifold $(M,g)$, the tractor 1-form $\alpha = \left( 1,0,0,- \frac{\text{scal}^g}{2(n-1)n} \right)$ is parallel. Inserting this expression for $\alpha$ into the formula (\ref{fo1}) in Proposition \ref{cne} shows that $\frac{1}{k+1}d\beta_+ + \frac{1}{n-k+1}d^*\beta_+$ is a nc-Killing form. Inserting $\alpha$ likewise into the formula for $\left([\alpha,\beta]_{\mathcal{T}}\right)_+$ yields, depending on the parity of $k$, that $d\beta_+$ or $d^*\beta_+$ alone is a nc-Killing form; combining both statements proves the claim for each summand individually.
$\hfill \Box$
\begin{bemerkung}
The last statement has a well-known spinorial analogue: Consider a twistor spinor $\ph \in \Gamma(S^g)$ on an Einstein manifold. As in this case $\nabla_X D^g \ph = \frac{n}{2} K^g(X) \cdot \ph = X \cdot \left(\frac{\text{scal}^g}{4n(n-1)}\cdot \ph \right)$, the spinor $D^g \ph$ turns out to be a twistor spinor on $(M,g)$ as well.
\end{bemerkung}
We compute the expression of the even-odd bracket wrt. a metric in the conformal class:
\begin{Proposition} \label{prr}
Let $\alpha \in \Omega^{k+1}_{\mathcal{T}}(M)$ be a parallel tractor $(k+1)$-form, $\psi \in \Gamma(\mathcal{S}(M))$ a parallel spin tractor. For given $g \in c$ let $\Phi^g_{\Lambda}(\alpha) = (\alpha_+,\alpha_0,\alpha_{\mp},\alpha_-)$ with $\alpha_+ \in \Omega^k_{nc,g}(M)$ and $\widetilde{\Phi}^g(\psi) = (\ph, -\frac{1}{n}D^g\ph)$ with $\ph \in \text{ker }P^g$. Then the twistor spinor corresponding to the parallel spin tractor $[\alpha,\psi]=\alpha \cdot \psi \in \g_1$ via $g$ is given by
\begin{align*}
\alpha_+ \circ \ph:=\widetilde{\Phi}^g ( proj_+^g \left(\alpha \cdot \psi \right)) &= \frac{2}{n} \alpha_+ \cdot D^g \ph + (-1)^{k+1} \alpha_{\mp} \cdot \ph + (-1)^{k+1} \alpha_{0} \cdot \ph \\
& = \frac{2}{n} \alpha_+ \cdot D^g \ph + \frac{(-1)^{k}}{n-k+1} d^*\alpha_{+} \cdot \ph + \frac{(-1)^{k+1}}{k+1} d\alpha_{+} \cdot \ph \in \text{ker }P^g.
\end{align*}
\end{Proposition}
\textit{Proof. }
For given $x \in M$ we consider the reductions $\sigma^g:\mathcal{P}^g \rightarrow \overline{\mathcal{P}^1}$ and $\widetilde{\sigma}^g:\mathcal{Q}^g \rightarrow \overline{\mathcal{Q}^1}$ as introduced in chapter 2 with $\sigma^g \circ f^g = \overline{f}^1 \circ \widetilde{\sigma}^g$, and on some open neighbourhood $U$ of $x$ in $M$ we have
\begin{align*}
\psi &= [\widetilde{\sigma}^g(\widetilde{u}), e_- \cdot w + e_+ \cdot w], \\
\alpha &= [\sigma^g(u),\underbrace{e_+^{\flat} \wedge \widetilde{\alpha}_+ + e_-^{\flat} \wedge e_+^{\flat} \wedge \widetilde{\alpha}_{\mp} + \widetilde{\alpha}_0 + e_-^{\flat} \wedge \widetilde{\alpha}_-}_{ =: \widetilde{\alpha}}]
\end{align*}
for sections $\widetilde{u}:U \rightarrow \mathcal{Q}^g$, $u=f^g(\widetilde{u}):U \rightarrow \Pe^g$ and smooth functions $w: U \rightarrow \Delta_{p+1,q+1}$, $\widetilde{\alpha}_+,\widetilde{\alpha}_- : U \rightarrow \Lambda^k_{p,q}$, $\widetilde{\alpha}_0 : U \rightarrow \Lambda^{k+1}_{p,q}$ and $\widetilde{\alpha}_{\mp}: U \rightarrow \Lambda^{k-1}_{p,q}$. It follows by definition that on $U$
\begin{align*}
{\alpha} \cdot \psi = \left[\widetilde{\sigma}^g(\widetilde{u}), {\widetilde{\alpha}} \cdot (e_- \cdot w + e_+ \cdot w) \right].
\end{align*}
Consequently, we get for the corresponding twistor spinor wrt. $g$ that
\begin{align}
\widetilde{\Phi}^g ( {proj}_+^g \left({\alpha} \cdot \psi \right)) = \left[\widetilde{u}, \chi \left(e_- \cdot {proj}_{Ann(e_+)}\left({\widetilde{\alpha}} \cdot (e_- \cdot w + e_+ \cdot w)\right)\right) \right] \label{me}
\end{align}
Here, we identify the $Spin(p,q)-$modules $\Delta_{p,q}^{\C} \cong Ann(e_-)$ (cf. (\ref{fs})) by means of some fixed isomorphism $\chi$. One thus has to compute ${\widetilde{\alpha}} \cdot (e_- \cdot w + e_+ \cdot w)$. With the formulas for the action of $\Lambda_{p+1,q+1}^*$ on $\Delta_{p+1,q+1}$, it is straightforward to calculate that this product is given by
\begin{align*}
{\widetilde{\alpha}} \cdot (e_- \cdot w + e_+ \cdot w) =&(-1)^k \widetilde{\alpha}_+ \cdot e_+ \cdot e_- \cdot w + (-1)^k \widetilde{\alpha}_- \cdot e_- \cdot e_+ \cdot w + \widetilde{\alpha}_0 \cdot (e_- \cdot w + e_+ \cdot w) \\
&+ \widetilde{\alpha}_{\mp} \cdot (e_+ \cdot w - e_- \cdot w) \\
=& (e_- + e_+) \cdot ((-1)^k e_+ \cdot \widetilde{\alpha}_- \cdot w + (-1)^k e_- \cdot \widetilde{\alpha}_+ \cdot w + (-1)^{k+1} \widetilde{\alpha}_0 \cdot w \\
&+ (-1)^{k+1} \widetilde{\alpha}_{\mp} \cdot (e_+\cdot e_- \cdot w + w) ) \\
=:& (e_- + e_+) \cdot \widetilde{w}.
\end{align*}
Thus, one has by definition
\begin{align*}
\chi \left(e_- \cdot {proj}_{Ann(e_+)}\left({\widetilde{\alpha}} \cdot (e_- \cdot w + e_+ \cdot w)\right) \right) =& \chi \left(e_- \cdot e_+ \cdot \widetilde{w} \right) \\
=&-2 \widetilde{\alpha}_+ \cdot \chi(e_- \cdot w ) + (-1)^{k+1} \cdot \widetilde{\alpha}_0 \cdot \chi(e_- \cdot e_+ \cdot w )\\
&+ (-1)^{k+1} \cdot \widetilde{\alpha}_{\mp} \cdot \chi(e_- \cdot e_+ \cdot w).
\end{align*}
Inserting this into (\ref{me}) yields that
\begin{align*}
\widetilde{\Phi}^g ( {proj}_+^g \left({\alpha} \cdot \psi \right)) =& -2\cdot[u,\widetilde{\alpha}_+] \cdot \underbrace{\left[\widetilde{u},\chi(e_- \cdot w)\right]}_{=\widetilde{\Phi}^g ({proj}_-^g \psi)}+ (-1)^{k+1} [u,\widetilde{\alpha}_{\mp}] \cdot \underbrace{[\widetilde{u},\chi(e_- \cdot e_+ \cdot w)]}_{=\widetilde{\Phi}^g ({proj}_+^g \psi)} \\
&+ (-1)^{k+1} [u,\widetilde{\alpha}_{0}] \cdot \underbrace{[\widetilde{u},\chi(e_- \cdot e_+ \cdot w)]}_{=\widetilde{\Phi}^g ({proj}_+^g \psi)} \\
=& \frac{2}{n} {\alpha}_+ \cdot D^g \ph + (-1)^{k+1}{\alpha}_{\mp} \cdot \ph + (-1)^{k+1} {\alpha}_{0} \cdot \ph.
\end{align*}
$\hfill \Box$
\begin{bemerkung}
In particular, Proposition \ref{prr} describes a principle of constructing new twistor spinors from a given twistor spinor and a nc-Killing form in an arbitrary pseudo-Riemannian setting. One can also show independently and more directly, i.e. without using tractor calculus, that for a given nc-Killing form $\alpha_+ \in \Omega^k_{nc,g}(M)$ and $\ph \in$ ker $P^g$, the spinor
\begin{align}
\alpha_+ \circ \ph := \frac{2}{n} \alpha_+ \cdot D^g \ph + \frac{(-1)^{k}}{n-k+1} d^*\alpha_{+} \cdot \ph + \frac{(-1)^{k+1}}{k+1} d\alpha_{+} \cdot \ph \in \Gamma(S^g) \label{fo2}
\end{align}
is again a twistor spinor on $(M,g)$.
To this end, we compute $\nabla^{S^g}_X (\alpha_+ \circ \ph)$ for $X \in \mathfrak{X}(M)$ using the nc-Killing formulas (cf. (\ref{stu})):
\begin{align*}
\nabla^{S^g}_X \left( \alpha_+ \cdot D^g \ph \right) & = \left( \nabla^g_X \alpha_+ \right) \cdot D^g \ph + \alpha_+ \cdot \nabla^{S^g}_X D^g \ph \\
&= (X \invneg \alpha_0) \cdot D^g \ph + (X^{\flat} \wedge \alpha_{\mp}) \cdot D^g \ph + \alpha_+ \cdot \left(\frac{n}{2} \cdot K^g(X) \cdot \ph \right), \\
\nabla^{S^g}_X \left( \alpha_0 \cdot \ph \right) & = \left( \nabla^g_X \alpha_0 \right) \cdot \ph + \alpha_0 \cdot \nabla^{S^g}_X \ph \\
&= (K^g(X) \wedge \alpha_+) \cdot \ph - (X^{\flat} \wedge \alpha_-) \cdot \ph - \frac{1}{n} \cdot \alpha_0 \cdot X \cdot D^g \ph, \\
\nabla^{S^g}_X \left( \alpha_{\mp} \cdot \ph \right) & = \left( \nabla^g_X \alpha_{\mp} \right) \cdot \ph + \alpha_{\mp} \cdot \nabla^{S^g}_X \ph \\
&= (K^g(X) \invneg \alpha_+) \cdot \ph + (X \invneg \alpha_-) \cdot \ph - \frac{1}{n} \cdot \alpha_{\mp} \cdot X \cdot D^g \ph. \\
\end{align*}
We deduce using the formulas (\ref{ext1}) that $\nabla^{S^g}_X (\alpha_+ \circ \ph) = X \cdot \xi$ for all $X \in \mathfrak{X}(M)$, where $\xi := \left(\frac{1}{n} \alpha_0 \cdot D^g \ph + \frac{1}{n} \alpha_{\mp} \cdot D^g \ph + (-1)^{k+1} \alpha_- \cdot \ph \right)$, showing that $\alpha_+ \circ \ph$ satisfies the twistor equation with $D^g(\alpha_+ \circ \ph)=-n \cdot \xi$.
\end{bemerkung}
Finally, we discuss the case of $\alpha_+$ being a nc-Killing 1-form and $V_{\alpha}$ the dual normal conformal vector field\footnote{The proof of the following statement is then also the postponed proof of Proposition \ref{sld}.}.
\begin{Proposition} \label{lsd}
In the setting of Proposition \ref{prr}, if $k=1$ we have
\begin{align}
\widetilde{\Phi}^g \left( proj_+^g \left(\alpha \cdot \psi \right)\right) = -2 \cdot \underbrace{\left(\nabla_{V_{\alpha}} \ph + \frac{1}{4} \tau \left(\nabla V_{\alpha} \right) \cdot \ph \right)}_{=:V_{\alpha} \circ \ph}, \label{sld2}
\end{align}
where $\tau \left(\nabla V_{\alpha} \right) = \sum_{j=1}^n \epsilon_j \left( \nabla_{s_j} V_{\alpha} \right) \cdot s_j + (n-2) \cdot \lambda_{\alpha}$ for any local $g-$pseudo-orthonormal frame $(s_1,...,s_n)$, and $L_{V_{\alpha}}g = 2 \lambda_{\alpha}g$.
\end{Proposition}
\textit{Proof. }Wrt. $g$ it holds that $\Phi^g_{\Lambda}(\alpha) = (\alpha_+,\alpha_0,\alpha_{\mp},\alpha_-)$. As in the proof of Proposition \ref{na} it follows that $\alpha_{\mp} = \lambda_{\alpha}$. Let $(s_1,...,s_n)$ be a local $g-$pseudo-orthonormal frame. The first nc-Killing equation for $\alpha_+$ yields that $\nabla^g_{s_j} V_{\alpha} = \left(s_j \invneg \alpha_0\right)^{\sharp} + \alpha_{\mp} \cdot s_j$. Right-multiplication by $s_j$ gives $\left( \nabla^g_{s_j} V_{\alpha} \right) \cdot s_j = - (s_j^{\flat} \wedge (s_j \invneg \alpha_0)) - \epsilon_j \cdot \alpha_{\mp}$. Summing over $j$ and inserting $\lambda_{\alpha} = \alpha_{\mp}$ thus reveals that $\tau \left(\nabla V_{\alpha} \right) = -2 \cdot \alpha_0 - 2 \cdot \alpha_{\mp}$, and together with the twistor equation we conclude that the right-hand side of (\ref{sld2}) is given by
\begin{align*}
-2 \cdot \left( -\frac{1}{n} V_{\alpha} \cdot D^g \ph - \frac{1}{2} \alpha_0 \cdot \ph - \frac{1}{2} \alpha_{\mp} \cdot \ph \right).
\end{align*}
Comparing this to the result of Proposition \ref{prr} immediately yields (\ref{sld2}).
$\hfill \Box$
\begin{bemerkung}
The term $V_{\alpha} \circ \ph$ in (\ref{sld2}) has become standard in the literature as \textsf{spinorial Lie derivative} as introduced in \cite{kos,ha96,raj}. Thus, the metric description of the even-odd bracket in Proposition \ref{prr} can be viewed as a \textsf{generalization of the spinorial Lie derivative} to higher order nc-Killing forms, and we see that the brackets in the tractor conformal superalgebra reproduce the spinorial Lie derivative when a metric is fixed. For the case $k=1$, \cite{ha96} shows that $X \circ \ph$ is a twistor spinor for every twistor spinor $\ph$ and every \textit{conformal} vector field $X$, i.e. $X$ need not be normal conformal.
\end{bemerkung}
\begin{bemerkung}
As in the Lorentzian setting, it is also possible in arbitrary signatures to include all conformal Killing forms, i.e. not only nc-Killing forms, in the even part of the algebra in terms of distinguished tractors. However, the generalization of (\ref{star}) to arbitrary signatures, which can be found in \cite{cov2}, is technically very demanding.
\end{bemerkung}
\subsection*{Analogous construction for special Killing forms and Killing spinors}
We specialize the principle for constructing new nc-Killing forms out of existing ones using the $\circ-$operations from Proposition \ref{cne}. In this context, we make some more general definitions and remarks:
\begin{definition}
Let $(M^{p,q},g)$ be a pseudo-Riemannian manifold of constant scalar curvature $\text{scal}^g$. A $k-$form $\alpha \in \Omega^k(M)$ is called a special Killing k-form to the Killing constant $-\frac{(k+1)\text{scal}^g}{n(n-1)}$ if
\begin{equation} \label{fgt}
\begin{aligned}
\nabla_X^g \alpha &= \frac{1}{k+1} X \invneg d \alpha, \\
\nabla^g_X d\alpha &= - \frac{(k+1) \text{scal}^g}{n(n-1)} \cdot X^{\flat} \wedge \alpha.
\end{aligned}
\end{equation}
We let $\Omega^k_{sk,g}(M)$ denote the space of all special Killing $k-$forms on $(M,g)$.
\end{definition}
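To illustrate the normalization of the Killing constant, consider the following standard example:
\begin{beispiel}
On the round sphere $S^n$ with its standard metric of scalar curvature $\text{scal}^g = n(n-1)$, the Killing constant equals $-(k+1)$, and the defining equations (\ref{fgt}) specialize to
\begin{align*}
\nabla^g_X \alpha = \frac{1}{k+1}\, X \invneg d\alpha, \qquad \nabla^g_X d\alpha = -(k+1) \cdot X^{\flat} \wedge \alpha.
\end{align*}
\end{beispiel}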
Examples and classification results for special Killing forms are discussed in \cite{sem}. For instance, the dual of every Killing vector field defining a Sasakian structure and the Dirac currents of real Killing spinors on Riemannian manifolds are special Killing 1-forms. Note that every special Killing form is a conformal Killing form and coclosed, i.e. $d^* \alpha = 0$.\\
\newline
Let us from now on assume that $\text{scal}^g \neq 0$. Under this assumption, spaces carrying special Killing forms can be classified using an analogue of B\"ar's cone construction for Killing spinors, see \cite{baer}, for differential forms. More precisely, consider the cone $C(M)=\R^+ \times M$ with cone metric $\widehat{g}_b := bdt^2 +t^2g$, where $b \neq 0$ is a constant scaling, of signature $(p,q+1)$ or $(p+1,q)$.
\begin{Proposition}{\cite{sem}} \label{pse}
Let $b=\frac{(n-1)n}{\text{scal}^g}$. Then special Killing $k-$forms to the Killing constant $-\frac{(k+1)\text{scal}^g}{n(n-1)}$ are in 1-to-1 correspondence to parallel $(k+1)$-forms on the cone $(C(M), \widehat{g}_b)$, given by
\begin{align}
\Omega_{sk,g}^k(M) \ni \alpha \leftrightarrow \widehat{\alpha}:= t^k dt \wedge \alpha + \frac{\text{sgn}(b) t^{k+1}}{k+1} d \alpha \in \Omega^{k+1}(C(M)) \label{sug}
\end{align}
\end{Proposition}
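For instance, for a special Killing 1-form $\alpha$, e.g. the dual of a Killing vector field defining a Sasakian structure, the correspondence (\ref{sug}) reads
\begin{align*}
\widehat{\alpha} = t\, dt \wedge \alpha + \frac{\text{sgn}(b)\, t^{2}}{2} \cdot d\alpha \in \Omega^{2}(C(M)).
\end{align*}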
Using this, one classifies compact, simply-connected Riemannian manifolds carrying special Killing forms, see \cite{sem}. We come back to this list in the last section of this thesis.
\begin{bemerkung}
One can now derive analogous formulas to (\ref{fo1}),(\ref{fo2}) for special Killing forms and Killing spinors on pseudo-Riemannian manifolds using the cone construction, i.e. proceed as follows:
\begin{enumerate}
\item We let $\alpha \in \Omega^k(M), \beta \in \Omega^l(M)$ be special Killing forms to the same Killing constant and $\ph \in \Gamma(S^g)$ a Killing spinor on $(M,g)$.
\item Using B\"ar's construction and Proposition \ref{pse}, we view these objects as parallel tensors $\widehat{\alpha} \in \Omega^{k+1}(C(M))$, $\widehat{\beta} \in \Omega^{l+1}(C(M))$ and $\widehat{\ph} \in \Gamma(C(M),S^{\widehat{g}_b})$.
\item We compute $\widehat{\alpha} \cdot \widehat{\beta}$ (with (\ref{prodo}) applied pointwise) and $\widehat{\alpha} \cdot \widehat{\ph}$ which again turn out to be parallel forms resp. spinors on the cone.
\item Via (\ref{sug}), one expresses these products as special Killing forms resp. Killing spinors on the base $(M,g)$ using the original data $\alpha, \beta, \ph$ and $d \alpha$, $d\beta$ only. Let us call these objects $\alpha \circ \beta \in \Omega^*_{sk,g}(M)$ and $\alpha \circ \ph \in \mathcal{K}(M)$.
\end{enumerate}
Carrying these steps out is straightforward. One obtains the same formulas (\ref{fo1}) and (\ref{fo2}), which of course simplify since $d^*\alpha = 0, D^g \ph = -\lambda \cdot n \cdot \ph$ for some $\lambda \in i\R \cup \R$ with $\ph \in \mathcal{K}_{\lambda}(M)$. In other words, one obtains a map
\begin{equation} \label{fo21}
\begin{aligned}
\circ: \Omega^k_{sk,g}(M) \times \Omega^l_{sk,g}(M) & \rightarrow \Omega^*_{sk,g}(M), \\
(\alpha, \beta) & \mapsto \alpha \circ \beta = \frac{1}{l+1}\cdot \alpha \cdot d\beta + (-1)^{k+1} \frac{1}{k+1} d\alpha \cdot \beta,
\end{aligned}
\end{equation}
and an action of special Killing forms on Killing spinors, given by
\begin{equation} \label{fo22}
\begin{aligned}
\circ: \Omega^k_{sk,g}(M) \times \mathcal{K}_{\lambda}(M) &\rightarrow \mathcal{K}_{\lambda} \oplus \mathcal{K}_{- \lambda}(M), \\
(\alpha, \ph) & \mapsto \alpha \circ \ph = -2\lambda \cdot \alpha \cdot \ph + \frac{(-1)^{k+1}}{k+1} d\alpha \cdot \ph.
\end{aligned}
\end{equation}
In particular, (\ref{fo21}) allows one to construct new special Killing forms out of existing special Killing forms.
\end{bemerkung}
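For instance, for two special Killing 1-forms $\alpha, \beta \in \Omega^1_{sk,g}(M)$ to the same Killing constant, the map (\ref{fo21}) specializes to
\begin{align*}
\alpha \circ \beta = \frac{1}{2} \cdot \left( \alpha \cdot d\beta + d\alpha \cdot \beta \right) \in \Omega^*_{sk,g}(M).
\end{align*}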
However, for pseudo-Riemannian Einstein spaces which are not Ricci-flat, special Killing forms are more directly related to normal conformal Killing forms and there is an equivalent way of deriving (\ref{fo21}) and (\ref{fo22}):\\
In \cite{leihabil,lst} it is shown that for every pseudo-Riemannian Einstein space $(M,g)$, the conformal holonomy coincides with the holonomy of an ambient space which is the cone trivially extended by a parallel direction, i.e. $Hol(M,[g])=Hol(C(M),\widehat{g}_b)$. Using this, it is easy to deduce that there is a natural and bijective correspondence between parallel tractor forms on $M$, i.e. normal conformal Killing forms for $(M,g)$, and parallel forms on the cone, i.e. special Killing forms for $(M,g)$. More precisely, one shows:
\begin{Proposition}[\cite{leihabil}] \label{sumsibum}
On a pseudo-Riemannian Einstein space of nonvanishing scalar curvature, every nc-Killing form is the sum of a special Killing form and a closed Killing form.
\end{Proposition}
In particular, the coclosed nc-Killing forms on Einstein spaces are precisely the special Killing forms. This also follows from a direct inspection of the nc-Killing equations. Note that the well-known spinorial analogue of Proposition \ref{sumsibum} is the fact that on an Einstein space every twistor spinor decomposes into the sum of two Killing spinors. Thus, for Einstein spaces one obtains the maps (\ref{fo21}) and (\ref{fo22}) by restriction of (\ref{map}) and (\ref{fo2}) to special Killing forms and Killing spinors.
\section{The possible dimensions of the space of twistor spinors} \label{podi}
We have already discussed for Lorentzian signatures to what extent algebraic structures of the tractor conformal superalgebra $\g = \g_0 \oplus \g_1$, in particular whether it is a \textit{Lie} superalgebra, are related to (local) geometric structures in the conformal class. It is natural to investigate this question further in arbitrary signatures, and we ask how the possible dimensions of the odd \textit{supersymmetric} part $\g_1$ are related to underlying geometries. The main ingredient is the following algebraic Lemma:
\begin{Lemma} \label{est}
For integers $r$ and $s$ consider the bilinear map
\begin{align*}
V: \Delta_{r,s}^{\R} \otimes \Delta_{r,s}^{\R} \rightarrow \R^{r,s}\text{, }(\psi_1,\psi_2) \mapsto V_{\psi_1,\psi_2}
\end{align*}
mapping a pair of spinors to the associated vector. Let $S_0 \subset \Delta^{\R}_{r,s}$ be a linear subspace and set $V_{S_0}:=V_{|S_0 \otimes S_0 }$. We have:
\begin{enumerate}
\item If dim $S_0 > \frac{3}{4}\cdot\text{dim } \Delta_{r,s}^{\R}$, then $V_{S_0}$ is surjective.
\item If dim $S_0 > \frac{1}{2}\cdot\text{dim } \Delta_{r,s}^{\R}$, then $V_{S_0}$ is not the zero map.
\end{enumerate}
\end{Lemma}
\textit{Proof. }The first part is proved in \cite{cortes}. For the second part, assume that $V_{S_0}(\psi_1,\psi_2)=0$ for all $\psi_1,\psi_2 \in S_0$. By definition, this is equivalent to $\langle v \cdot \psi_1 , \psi_2 \rangle_{\Delta_{r,s}^{\R}} = 0$ for all $\psi_1,\psi_2 \in S_0$ and $v \in \R^{r,s}$, i.e. $\forall v \in \R^{r,s}: cl(v): S_0 \rightarrow S_0^{\bot}$. As dim $S_0 > \frac{1}{2} \cdot \text{dim }\Delta_{r,s}^{\R}$, it follows that dim $S_0^{\bot} < \frac{1}{2} \cdot \text{dim }\Delta_{r,s}^{\R}$. Thus the map $cl(v)$ has a kernel for every $v \in \R^{r,s}$, i.e. there is $\psi_v \in \Delta^{\R}_{r,s} \backslash \{0 \}$ with $v \cdot \psi_v = 0$. This implies that $\langle v,v \rangle_{r,s} = 0$ for every $v \in \R^{r,s}$, which is a contradiction.
$\hfill \Box$
\begin{bemerkung}
The second statement in Lemma \ref{est} cannot be improved in general. Namely, taking $r=s=2$ and $S_0 := \Delta_{2,2}^{\R,\pm} \subset \Delta_{2,2}^{\R}$ provides an example for dim $S_0 = \frac{1}{2} \cdot \text{dim } \Delta_{r,s}^{\R}$ and $V_{S_0} = 0$.
\end{bemerkung}
Applications of Lemma \ref{est} have already been studied in the literature:
\begin{Proposition}{\cite{cortes}} \label{copo}
Let $(M^{p,q},g)$ be a pseudo-Riemannian spin manifold of dimension $n$ with real spinor bundle $S^g = S^g_{\R}(M)$ of rank $N$.
\begin{enumerate}
\item If $(M,g)$ admits $k > \frac{3}{4}N$ twistor spinors which are linearly independent at $x \in M$, then $(M,g)$ admits $n$ conformal vector fields, which are linearly independent at $x \in M$.
\item If $(M,g)$ admits $k > \frac{3}{4}N$ parallel spinors, then $(M,g)$ is flat.
\end{enumerate}
\end{Proposition}
We now apply Lemma \ref{est} in the tractor setting yielding a conformal analogue of the second part of Proposition \ref{copo}. Let $(M^{p,q},c)$ be a conformal spin structure with real spin tractor bundle $\mathcal{S}(M)$ and space of \textit{real} twistor spinors $\g_1$. Let $N_c:= 2 \cdot \text{dim } \Delta_{p,q}^{\R}$ denote the rank of $\mathcal{S}(M)$, which is the maximal number of linearly independent real twistor spinors on $(M,c)$.
\begin{Proposition} \label{stuv}
In the above notation, we have:
\begin{enumerate}
\item If dim $\g_1 > \frac{3}{4} \cdot N_c$, then $(M,c)$ is conformally flat.
\item If dim $\g_1 > \frac{1}{2} \cdot N_c$, then there exists an Einstein metric in $c$ (at least on an open and dense subset).
\end{enumerate}
\end{Proposition}
\textit{Proof. }We apply Lemma \ref{est} to the case that $r=p+1$, $s=q+1$ and $S_0\subset \Delta_{p+1,q+1}^{\R}$ being the subspace of $Hol(M,c)$-invariant spinors (for some fixed base point) which as we know correspond to twistor spinors. Surjectivity of $V_{S_0}$ yields a basis of $\R^{p+1,q+1}$ consisting of $Hol(M,c)$-invariant vectors, whence the conformal holonomy is trivial. This proves the first part.\\
For the second part, it follows analogously by nontriviality of $V_{S_0}$ that there exists at least one nontrivial holonomy-invariant vector. By \cite{lst} this yields an Einstein scale in the conformal class (on an open, dense subset).
$\hfill \Box$
\begin{bemerkung}
As a simply-connected, conformally flat manifold always admits the maximal number of twistor spinors, the previous Proposition implies that either dim $\g_1 \leq \frac{3}{4} \cdot N_c$ or dim $\g_1 = N_c$ is maximal, i.e. the dimension of $\g_1$ cannot be arbitrary in the simply-connected case.
\end{bemerkung}
In the second case of Proposition \ref{stuv} one can say more: To this end, let $(M^n,c=[g])$ be a simply-connected pseudo-Riemannian conformal spin manifold where $g$ is a Ricci-flat metric. Let $k$ denote the number of linearly independent parallel vector fields on $(M,g)$. By \cite{lst} we have for $x \in M$ that in the Ricci-flat case
\begin{align*}
\mathfrak{hol}_x(M,[g]) = \mathfrak{hol}_x(M,g) \ltimes {\R}^{n-k} = \left\{ \begin{pmatrix} 0 & v^{\flat} & 0 \\ 0 & A & -v \\ 0 & 0 & 0 \end{pmatrix} \mid A \in \mathfrak{hol}_x(M,g), v \in \R^{n-k} \right\},
\end{align*}
where the matrix is written wrt. the basis $(s_+,s_1,...,s_n,s_-)$ of $\mathcal{T}_x(M)$ for some pseudo-orthonormal basis $(s_1,...,s_n)$ of $T_xM$. Assume now that $k < n$, i.e. $(M,g)$ is Ricci-flat but non-flat, and let $\psi \in \g_1$ be a parallel spin tractor on $(M,[g])$ with twistor spinor $\ph$. It follows by the holonomy principle that
\begin{align}
\lambda_*^{-1} \left( \begin{pmatrix} 0 & v^{\flat} & 0 \\ 0 & 0 & -v \\ 0 & 0 & 0 \end{pmatrix} \right) \cdot \psi(x) = 0, \label{disco}
\end{align}
for all $v \in \R^{n-k} \subset \R^n$, i.e. $s_+ \cdot v \cdot \psi(x) = 0$ (cf. \cite{mat} for formulas for $\lambda_*^{-1}$ in this situation). Choosing $v$ to be non-lightlike yields that $s_+ \cdot \psi(x) = 0$ for all $x \in M$, which is equivalent to $D^g \ph = -n \cdot \widetilde{\Phi}^g({proj}^g_- \psi) = 0$. Thus, $\ph$ is a parallel spinor on $(M,g)$, ker $\psi \neq \{0 \}$, and we have proved:
\begin{Proposition} \label{let}
Let $(M,g)$ be a simply-connected Ricci-flat spin manifold. Then either every twistor spinor on $(M,g)$ is parallel or $(M,g)$ is flat. In particular, if for a conformal structure $(M,c)$ there is a Ricci-flat metric in the conformal class and dim $\mathfrak{g}_1$ is not maximal, then $\mathfrak{g}$ is a Lie superalgebra.
\end{Proposition}
We now come back to the second case of Proposition \ref{stuv}: It follows now directly from Proposition \ref{let} that in case of dim $\g_1 > \frac{1}{2} \cdot {\left( 2 \cdot \text{dim} \Delta_{p,q}^{\R}\right)}$ there exists an Einstein metric in $c$ with nonzero scalar curvature or the conformal structure is conformally flat, provided that $M$ is simply-connected.
\begin{beispiel}
We consider a special class of conformally Ricci-flat Lorentzian metrics admitting twistor spinors, namely plane waves $(M,h)$, which are equivalently characterized by the existence of local coordinates $(x,y_1,...,y_n,z)$ such that
\begin{align*}
h = 2dx dz + \left(\sum_{i,j=1}^n a_{ij}y_i y_j \right) dz^2 + \sum_{i=1}^n dy_i^2,
\end{align*}
where the $a_{ij}$ are functions of $z$ only. One has $Ric^h = \left(\sum_{i=1}^n a_{ii}\right) dz^2$, and the isotropic vector field $\frac{\partial}{\partial x}$ is parallel. Let us assume that $(M,h)$ is indecomposable. Then it is known from \cite{lcc} that for $x \in M$
\begin{align*}
\mathfrak{hol}_x(M,[h]) = \R^{2n+1} = \left\{ \begin{pmatrix} 0 & 0 & u^T & c & 0 \\ 0 & 0 & v^T & 0 & -c \\ 0 & 0 & 0 & -v & -u \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \mid u,v \in \R^n, c \in \R \right\}
\end{align*}
This explicit description makes it straightforward to calculate all spinors annihilated by $\lambda_*^{-1}\left(\mathfrak{hol}_x(M,[h])\right)$, yielding that dim ker $P^g = \frac{1}{2}\cdot $ dim $\Delta^{\R}_{1,n+1} = \frac{1}{4} \cdot N_c$.
\end{beispiel}
\textbf{Acknowledgement} The author gladly acknowledges support from the DFG (SFB 647 - Space Time Matter at Humboldt University Berlin) and the DAAD (Deutscher Akademischer Austauschdienst / German Academic Exchange Service). Furthermore, it is a pleasure to thank Jos\'e Figueroa-O'Farrill for various discussions about mathematical physics.
\small
\bibliographystyle{plain}
\def\coloredtitle#1{\title{\textcolor{TITLECOL}{#1}}}
\def\coloredauthor#1{\author{\textcolor{CITECOL}{#1}}}
\def\textcolor{SECOL}{Tables}{\textcolor{CONTENTSCOL}{Contents}}
\definecolor{URLCOL}{rgb}{0,0.17,0.43}
\definecolor{LINKCOL}{rgb}{0.05,0.4,0}
\definecolor{CITECOL}{rgb}{0.35,0,0.48}
\def\scriptscriptstyle\rm{\scriptscriptstyle\rm}
\def\begin{eqnarray}{\begin{eqnarray}}
\def\end{eqnarray}{\end{eqnarray}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{enumerate}{\begin{enumerate}}
\def\end{enumerate}{\end{enumerate}}
\def\begin{itemize}{\begin{itemize}}
\def\end{itemize}{\end{itemize}}
\def\begin{itemize}{\begin{itemize}}
\def\end{itemize}{\end{itemize}}
\def\begin{enumerate}{\begin{enumerate}}
\def\end{enumerate}{\end{enumerate}}
\def{\hat T}{{\hat T}}
\def{\hat V}{{\hat V}}
\def{\hat H}{{\hat H}}
\def{\bf r}{{\bf r}}
\def\frac{1}{2}{\frac{1}{2}}
\def_{\sss X}{_{\scriptscriptstyle\rm X}}
\def_{\sss C}{_{\scriptscriptstyle\rm C}}
\def_{\sss S}{_{\scriptscriptstyle\rm S}}
\def_{{\sss S},j}{_{{\scriptscriptstyle\rm S},j}}
\def_{{\sss S},0}{_{{\scriptscriptstyle\rm S},0}}
\def_{{\sss S},1}{_{{\scriptscriptstyle\rm S},1}}
\def_{\sss F}{_{\scriptscriptstyle\rm F}}
\def_{\sss XC}{_{\scriptscriptstyle\rm XC}}
\def_{\sss HX}{_{\scriptscriptstyle\rm HX}}
\def_{\sss HXC}{_{\scriptscriptstyle\rm HXC}}
\def_{\sss H}{_{\scriptscriptstyle\rm H}}
\def_{\rm ee}{_{\rm ee}}
\def^{\rm LDA}{^{\rm LDA}}
\def_\text{CF}{_\text{CF}}
\def^{\rm TF}{^{\rm TF}}
\def^{\rm LSD}{^{\rm LSD}}
\def^{\rm unif}{^{\rm unif}}
\def_\uparrow{_\uparrow}
\def_\downarrow{_\downarrow}
\def^{\rm HF}{^{\rm HF}}
\def^{\rm QC}{^{\rm QC}}
\def\int d^3r\,{\int d^3r\,}
\def\int d^3r'\,{\int d^3r'\,}
\defn{n}
\def\Tabref#1{Table \ref{#1}}
\def\Eqsref#1{Eqs.\ \eqref{#1}}
\def\Eqref#1{Eq.\ \eqref{#1}}
\def\Secref#1{Section \ref{#1}}
\def\Figref#1{Fig.\ \ref{#1}}
\def\Ref#1{Ref.\ \cite{#1}}
\def\mnote#1{\marginpar{\footnotesize\raggedright \textcolor{red}{#1}}}
\def\oldnew#1#2{%
\textcolor{blue}{#2}\mnote{old: \textcolor{blue}{#1}
}
}
\def\revision#1{\textcolor{red}{#1}}
\def\toremove#1{\textcolor{blue}{#1}}
\def\sec#1{\section{\textcolor{SECOL}{#1}}}
\def\ssec#1{\subsection{\textcolor{SSECOL}{#1}}}
\def\sssec#1{\subsubsection{\textcolor{SSSECOL}{#1}}}
\def_\text{e}{_\text{e}}
\def^\text{apx}{^\text{apx}}
\def^\text{apx,0}{^\text{apx,0}}
\begin{document}
\coloredtitle{
Testing and using the Lewin-Lieb bounds in density functional theory
}
\coloredauthor{David V. Feinblum}
\affiliation{Department of Chemistry, University of California, Irvine, CA 92697}
\coloredauthor{John Kenison}
\affiliation{Department of Physics and Astronomy, University of California, Irvine, CA 92697}
\coloredauthor{Kieron Burke}
\affiliation{Department of Chemistry, University of California, Irvine, CA 92697}
\affiliation{Department of Physics and Astronomy, University of California, Irvine, CA 92697}
\date{\today}
\begin{abstract}
Lewin and Lieb have
recently proven several new bounds on the exchange-correlation energy that complement the
Lieb-Oxford bound. We test these bounds for atoms, for slowly-varying gases, and
for Hooke's atom, finding them usually less strict than
the Lieb-Oxford bound. However, we also show that,
if a generalized gradient approximation (GGA) is to guarantee satisfaction
of the new bounds for all densities, new restrictions on the
exchange-correlation enhancement factor are implied.
\end{abstract}
\maketitle
\section{Introduction}
The Lieb-Oxford (LO) bound \cite{LO81} is a cornerstone of
exact conditions in modern density functional theory.\cite{FNM03}
Rigorously proven for non-relativistic quantum systems, the LO bound provides a strict upper bound on the magnitude of the
exchange-correlation energy, $E_{\sss XC}$, of any system
relative to a simple integral over its density.
The constant in the LO bound was built into
the construction of the PBE generalized gradient
approximation\cite{PBE96}, one of the most popular
approximations in use in density functional theory (DFT) today.\cite{B14}
The use of the bound to construct approximations
remains somewhat controversial, as most systems' $E_{\sss XC}$ does not
come close to reaching this bound.\cite{OC07}
Recently, Lewin and Lieb \cite{LL14} have
proven several alternative forms for the bound
that are distinct from the original LO bound.
In each, some fraction of the density integral is traded for an integral
over a density gradient. Thus, the new bounds are tighter for uniform
and slowly varying gases, and could be hoped to be tighter for real systems.
If so, they would be more useful than the LO bound
in construction and testing of approximate density functionals.
We show below that they are not tighter for atoms or for Hooke's atom
(two electrons in a parabolic well). However, they do lead to new restrictions
on the enhancement factor in GGAs that are constructed to guarantee
satisfaction of the bounds for all possible densities.
To begin, the LO bound can be written as
\begin{equation}
E_{\sss XC} \ge - C_{LO} \int d^3r\, n^{4/3}({\bf r})
\label{LO}
\end{equation}
where $C_{LO}$ is a constant that Lieb and Oxford\cite{LO81} showed is no larger than 1.68
(Chan and Handy showed it to be no larger than 1.6358).\cite{CH99}
For simplicity we define the following density integrals:
\begin{equation}
I_0= \int d^3r\, n^{4/3}({\bf r}),
\label{I0}
\end{equation}
\begin{equation}
I_1 = \int d^3r\, | \nabla n({\bf r})|,
\label{I1int}
\end{equation}
\begin{equation}
I_2 = \int d^3r\, | \nabla n^{1/3}({\bf r})|^2.
\label{I2int}
\end{equation}
Two new families of bounds are derived by Lewin and Lieb\cite{LL14}:
\begin{equation}
U_{\sss XC} \geq -C_{LL}\, I_0 -\alpha\, I_0 - c_p\, I_p /\alpha^{k-1}, ~~~p=1,2
\label{LLind}
\end{equation}
where $C_{LL}=3(9\pi/2)^{1/3}/5 \approx 1.4508$,
$c_1=1.206\times 10^{-3}$, $c_2=0.2097$, $k=5-p$, and $\alpha$ is any positive number.
Here $U_{\sss XC}$ is the potential energy contribution to the exchange-correlation energy.
We can convert these to a family of conditions on $E_{\sss XC}$ with several simple steps.
We utilize the adiabatic connection formula\cite{LP75,GL76} in terms of the scaled density:
\begin{equation}
E_{\sss XC} = \int_0^1 d\lambda \lambda\, U_{\sss XC}[n_{1/\lambda}],
\end{equation}
where $n_\gamma ({\bf r}) = \gamma^3\, n(\gamma{\bf r})$ is the density
scaled uniformly\cite{LP85,L91}
by a positive constant $\gamma$. Examining each of the integrals in the LL bounds, we
find
\begin{equation}
I_p [n_\gamma] = \gamma\, I_p[n],
\end{equation}
so that applying the bounds to every value of $\lambda$ between 0 and 1 yields
a bound on the DFT exchange-correlation energy directly:
\begin{equation}
E_{\sss XC} \geq -C_{LL}\, I_0 -\alpha\, I_0 - c_p\, I_p /\alpha^{k-1}, ~~~p=1,2.
\label{LL}
\end{equation}
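The scaling relation used above can be checked numerically for any well-behaved density; here is a small sketch (with an illustrative Gaussian density of our choosing) verifying $I_p[n_\gamma]=\gamma\, I_p[n]$ on a radial grid:

```python
import numpy as np

# Numerical check of I_p[n_gamma] = gamma * I_p[n] for the uniform
# scaling n_gamma(r) = gamma^3 n(gamma r), using an illustrative
# Gaussian density n(r) = exp(-r^2).
def integrals(density, r):
    dr = r[1] - r[0]
    n = density(r)
    dn = np.gradient(n, dr)
    w = 4.0 * np.pi * r**2
    tz = lambda f: dr * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    I0 = tz(w * n**(4.0/3.0))
    I1 = tz(w * np.abs(dn))
    I2 = tz(w * ((1.0/3.0) * n**(-2.0/3.0) * dn)**2)
    return np.array([I0, I1, I2])

r = np.linspace(1e-6, 10.0, 100001)
gamma = 2.0
base = integrals(lambda x: np.exp(-x**2), r)
scaled = integrals(lambda x: gamma**3 * np.exp(-(gamma * x)**2), r)
print(scaled / base)   # ~[2, 2, 2]: each I_p picks up one factor of gamma
```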
There is a Faustian dilemma when it comes to the value of $\alpha$: A very small value
makes the first term smaller in magnitude than that of the LO bound, but
increases the magnitude of the gradient additions. A very large value will
make those additions negligible, but also make the first term larger than that
of the LO bound.
Choosing $\alpha$ in each case to maximize the right-hand-side,
Lewin and Lieb find
\begin{equation}
E_{\sss XC} \geq -C_{LL}\, I_0 - \tilde c_p\, I_p^{1/k}\, I_0^{1-1/k} ~~~p=1,2
\label{LLopt}
\end{equation}
where
\begin{equation}
\tilde c_p = \frac{k}{k-1}\, ((k-1) c_p)^{1/k}.
\label{LL123}
\end{equation}
This yields $\tilde c_1=4 (3 c_1)^{1/4}/3 \approx 0.3270$ and
$\tilde c_2=3 (2 c_2)^{1/3}/2 \approx 1.1227$.
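These prefactors follow directly from Eq. (\ref{LL123}); a quick numerical evaluation with the quoted constants is:

```python
import math

# Evaluate C_LL = 3 (9 pi/2)^(1/3) / 5 and the optimized prefactors
# c~_p = k/(k-1) * ((k-1) c_p)^(1/k), with k = 5 - p, using the
# constants c_1 and c_2 quoted in the text.
C_LL = 3.0 * (9.0 * math.pi / 2.0)**(1.0/3.0) / 5.0
c = {1: 1.206e-3, 2: 0.2097}

def c_tilde(p):
    k = 5 - p
    return k / (k - 1) * ((k - 1) * c[p])**(1.0 / k)

print(C_LL)          # ~1.4508
print(c_tilde(1))    # ~0.3270 (from the quoted c_1)
print(c_tilde(2))    # ~1.123
```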
Lewin and Lieb report a third bound by combining the $p=2$ case with
the Schwarz inequality:
\begin{equation}
E_{\sss XC} \geq -C_{LL}\, I_0 - \tilde c_3\, I_2^{1/8}\, I_0^{7/8}
\label{LL3}
\end{equation}
where $\tilde c_3 = ((3^{1/4} c_{1})^{2/5})(c_{2}^{3/5}) \approx 0.7650$.
We refer to these as the optimized LL bounds: LL1 and LL2 are given by Eq.\ (9) with $p=1$ and $p=2$, respectively, and LL3 by Eq.\ (11).
\section{Lewin-Lieb Bounds for Spherically Symmetric Atoms}
To test each of these bounds, we performed calculations
using the non-relativistic atomic OEP code of Engel.\cite{ED99}
We used the PBE functional\cite{PBE96} to find self-consistent atomic
densities, and evaluated all $I_p$.
We did this for a simple subset of atoms
for which highly accurate correlation energies are available.\cite{MT11}
The results are fully converged with respect to the radial grid.
We use accurate $E_{\sss XC}$ from Ref. \onlinecite{BCGP14}.
In Table \ref{atoms}, we list the results. We see immediately that, unfortunately,
the new bounds are less restrictive than the current LO bound.
Atoms have gradients that are sufficiently large as to
make the corrections larger than the density-integral term.
\begin{table}[htb]
\begin{tabular}{|c|c|cccc|ccc|}
\hline
$Z$ & $E_{XC}$ & LO & LL1 & LL2 & LL3 & $-C_{LL}I_{0}$ & $I_{1}$ & $I_{2}$ \\ \hline
1 & -0.3125 & -1.200 & -1.395 & -3.131 & -2.159 & -1.036 & 3.978 & 12.73 \\
2 & -1.069 & -1.991 & -2.318 & -4.514 & -3.302 & -1.719 & 6.739 & 10.99 \\
4 & -2.758 & -5.255 & -6.095 & -11.19 & -8.401 & -4.538 & 16.82 & 21.26 \\
10 & -12.51 & -25.01 & -28.56 & -43.09 & -35.35 & -21.60 & 62.18 & 31.64 \\
12 & -16.47 & -33.20 & -37.83 & -56.93 & -46.80 & -28.67 & 79.85 & 40.82 \\
18 & -30.97 & -63.36 & -71.82 & -101.2 & -85.64 & -54.72 & 139.5 & 49.76 \\
20 & -36.11 & -74.16 & -83.97 & -118.6 & -100.3 & -64.04 & 160.4 & 58.77 \\
30 & -71.22 & -149.1 & -167.6 & -220.0 & -192.3 & -128.8 & 284.1 & 68.21 \\
36 & -95.79 & -201.5 & -225.9 & -298.8 & -256.0 & -174.0 & 365.7 & 76.26 \\
38 & -104.0 & -219.2 & -245.5 & -316.2 & -278.9 & -189.3 & 393.2 & 84.90 \\
48 & -151.7 & -321.9 & -359.3 & -446.9 & -400.2 & -278.0 & 542.4 & 92.74 \\
54 & -182.2 & -388.0 & -432.4 & -531.7 & -478.7 & -335.1 & 635.9 & 100.6 \\
56 & -192.4 & -410.1 & -456.8 & -563.8 & -506.9 & -354.2 & 667.1 & 109.3 \\
70 & -281.1 & -603.6 & -669.6 & -799.6 & -729.4 & -521.3 & 911.6 & 117.9 \\
80 & -350.5 & -755.4 & -836.1 & -981.7 & -902.0 & -652.3 & 1096 & 124.9 \\
86 & -393.0 & -848.5 & -938.1 & -1096 & -1009 & -732.7 & 1210 & 132.5 \\
88 & -405.2 & -879.3 & -972.0 & -1139 & -1048 & -759.3 & 1246 & 141.0 \\
\hline
\end{tabular}
\caption{Exchange-correlation energies for neutral spherical atoms, bounds, and integrals.}
\label{atoms}
\end{table}
\begin{figure}[htb]
\includegraphics[width=0.4\textwidth]{i1.pdf}
\caption{$I_1$ (see text) for noble gas and alkaline earth atoms. The red dot indicates the limiting value as $Z \to \infty$.}
\label{I1}
\end{figure}
\begin{figure}[htb]
\includegraphics[width=0.4\textwidth]{i2i1.pdf}
\caption{$I_2/I_1$ ratio (see text) for noble gas and alkaline earth atoms.}
\label{I2}
\end{figure}
To be sure that {\em no} atom behaves differently, we examine the large-$Z$ limit, where
Thomas-Fermi theory applies.\cite{LS73,LCPB09} It has recently been shown\cite{BCGP14}
that
\begin{equation}
E_{\sss XC} \to -C_{\sss X} Z^{5/3} - A\, Z\, \ln Z + B_{\sss XC}\, Z+...
\end{equation}
for atoms, where $C_{\sss X}=0.2201$, $A=0.020..$, and $B_{\sss XC} \approx 0.039$.
The dominant term, which is an exchange contribution, was proven by Schwinger\cite{S81},
and can be easily calculated by inserting the TF density\cite{LCPB09} into the local approximation for
$E_{\sss X}$. Since
\begin{equation}
E_{\sss X}^{\rm LDA} = - A_{\sss X}\, I_0
\end{equation}
where $A_{\sss X}=\frac{3}{4}\left(\frac{3}{\pi}\right)^{1/3} \approx 0.738$, this easily satisfies all bounds, including any LL bound with the gradient
terms ignored. In Fig. \ref{I1}, we plot $I_1$ as a function of $Z^{-1/3}$,
to show that it approaches its large $Z$ limit, which can be extracted from the TF density:
\begin{equation}
I_1[n^{\rm TF}] = d_1^{\rm TF}\, Z^{4/3}
\end{equation}
where we find $d_1=3.58749$. In Fig. \ref{I2}, we plot the ratio $I_2/I_1$, showing that, although $I_2$
diverges in TF theory (as noted by Lewin and Lieb), it appears to vanish relative to $I_1$ in this limit.
Thus all the additions in the LL bounds become relatively small in this limit, and no
change in behavior occurs. As $Z \to \infty$, the LL1 bound eventually becomes more restrictive than the LO bound, but only at unrealistically large values of $Z$.\footnote{$Z_{c} = \left[\frac{d_{1}}{C_{x}} \left(\frac{C_{1}}{C_{LO}-C_{LL}}\right)^{4}\right]^{3} \approx 91493$}
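As a rough numerical check (ours; constants as quoted above), the truncated large-$Z$ expansion can be compared with the accurate value for $Z=88$ from Table \ref{atoms}:

```python
import math

# Truncated large-Z expansion of Exc for neutral atoms, using the
# constants quoted in the text: C_X = 0.2201, A = 0.020, B_XC = 0.039.
C_X, A, B_XC = 0.2201, 0.020, 0.039

def exc_large_Z(Z):
    return -C_X * Z**(5.0/3.0) - A * Z * math.log(Z) + B_XC * Z

approx = exc_large_Z(88)
accurate = -405.2          # accurate value for Z = 88 from Table I
print(approx)              # ~ -387.6, within about 5% of the accurate value
```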
\section{Hooke's Atom and The Slowly-Varying Electron Gas}
We also performed calculations on the model system of two electrons in a harmonic potential,
the Hooke's atom.\cite{T93} One might imagine that, for higher or lower densities,
the bounds might tighten, or their order reverse, given the
different external potential.
We report three distinct results. For $k\to\infty$, where $k$ is the spring
constant, the density becomes large, and $E_{\sss XC}\to E_{\sss X}$. All energies and
integrals scale as $\omega^{1/2}$, where $\omega={\sqrt{k}}$. The first
line of Table \ref{hooke} shows the results, which are analogous to those
of the two-electron ions (with different constants). The order of the bounds
remains the same as in Table \ref{atoms}. In the next line, we report actual
energies for the largest value of $k$ for which there exists an analytic solution, $k=1/4$. Again we see the same behavior.
The most interesting case is the low-density limit, $k\to 0$. In this limit, the
kinetic energy becomes negligible, and the electrons arrange themselves to minimize
the potential energy, on opposite sides of the center. This regime provides a system where correlation energy becomes comparable to exchange energy.
The third line of Table \ref{hooke} shows that none of the bounds is
tight in this limit (the XC energy vanishes relative to any of them)
and that the LL bounds diverge relative to the LO bound.
\def{k^{1/36}}{{k^{1/36}}}
\begin{table}[htb]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$k$ & $scale$ & $E_{xc}$ & $LO$ & $LL1$ & $LL2$ & $LL3$ \\ \hline
$\infty$& $k^{1/4}$ & -1.37 & -1.5513 &-1.7879 & -3.1804 & -2.4255 \\
$\frac{1}{4}$ & 1 & -.554 & -1.0031 &-1.1558 & -2.0682 & -1.5740\\
$0$ & $k^{11/36}$ & -.0042 $k^{1/36}$
& -1.85
& -1.6-$\frac{.44}{k^{1/24}}$
& -1.6-$\frac{3.34}{k^{1/6}} $
&-1.6-$\frac{1.8}{k^{19/144}} $\\
\hline
\end{tabular}
\caption{Hooke's atom (two electrons in a harmonic potential) ranging over
all values of the spring constant, $k$.}
\label{hooke}
\end{table}
\section{Effects on $F_{\sss XC}$}
In the rest of this paper, we show how the LL bounds can be used to
derive interesting and new restrictions on the enhancement factor of generalized
gradient approximations (GGAs).
Begin with the definition of the enhancement factor for a GGA for spin unpolarized
systems:
\begin{equation}
E_{\sss XC}^{GGA} = \int d^3r\, e_{\sss X}^{\rm unif}(n({\bf r}))\, F_{\sss XC}(r_{\sss S}({\bf r}),s({\bf r})),
\end{equation}
where $e_{\sss X}^{\rm unif}(n)= -A_{\sss X} n^{4/3}$ is the exchange energy density of a
spin-unpolarized uniform gas, $r_{\sss S}= (3/(4\pi n))^{1/3}$ is the local
Wigner-Seitz radius, and $s= |\nabla n|/(2 k_F n)$ is the (exchange) dimensionless
measure of the gradient, where $k_F = (3 \pi^2 n)^{1/3}$ is the local
Fermi wavevector.
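For orientation, these local variables can be evaluated at a single point; a small sketch (using the illustrative hydrogen-like density $n(r)=e^{-2r}/\pi$, our choice) gives their values at the nucleus:

```python
import math

# Local GGA variables (as defined above) at the nucleus of the
# hydrogen-like density n(r) = exp(-2r)/pi. Illustrative values only.
def local_variables(n, grad_n):
    kF = (3.0 * math.pi**2 * n)**(1.0/3.0)        # local Fermi wavevector
    rs = (3.0 / (4.0 * math.pi * n))**(1.0/3.0)   # Wigner-Seitz radius
    s = abs(grad_n) / (2.0 * kF * n)              # reduced density gradient
    return rs, s

n0 = 1.0 / math.pi                     # n(0) for n(r) = exp(-2r)/pi
rs0, s0 = local_variables(n0, -2.0 * n0)   # dn/dr = -2n for this density
print(rs0, s0)   # ~0.909, ~0.473
```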
Most famously, the PBE approximation was constructed to ensure it satisfies the
LO bound for any density. A sufficient condition to guarantee this is
\begin{equation}
F_{\sss X} (s) \leq 1.804.
\label{FxLO}
\end{equation}
Here we digress slightly, to correct a popular misconception in the literature.\cite{VRLM12}
The LO bound applies to the XC energy. There is no unique choice of XC energy density,
and more than one
choice was used in the derivation of PBE.\cite{BPW97} Thus the enhancement
factor in PBE should not (and does not) correspond to {\em any} choice of
energy density. No bound has ever been defined, much less proven, for a specific energy
density.
As others\cite{OC07} and Table \ref{atoms} have shown, real systems do not come close to saturating
the LO bound. In fact, the B88 exchange functional does not satisfy Eq. (\ref{FxLO})
due to the logarithmic dependence on $s$. But
for the present purposes, B88 gives exchange energies almost identical to
PBE and very close to exact exchange energies for atoms.
While one can design densities that cause B88 to violate the LO bound,
they look nothing like densities of real systems.\cite{PRSB14}
Now we apply the logic of PBE to the LL bounds. We wish to find
conditions on the enhancement factor that guarantee
satisfaction of those bounds for all possible densities. In this context, the
optimized bounds are not useful, since they mix powers of different integrals and so cannot be written in terms of a single local integrand.
Dividing each term of Eq. (8) by $e_{\sss X}^{\rm unif}$, we find that
\begin{equation}
F_{\sss XC} \leq {\tilde C}_{LL} + A_{\sss X}^{-1}(\alpha + c'_p\, s^p/\alpha^{k-1})
\end{equation}
is a sufficient condition to ensure satisfaction of the LL bounds for any density, with $\tilde C_{LL}=C_{LL}/A_{\sss X}= 6 (2\pi/3)^{2/3}/5=1.9643...$, and
\begin{equation}
c'_p = 2\, (4-p)\, \left( \frac{\pi}{3} \right)^{2p/3} c_{p} .
\end{equation}
We now find the most restrictive value of $\alpha$ for each value of $s$, to yield
\begin{equation}
F_{\sss XC} \leq {\tilde C}_{LL} + \tilde d_p\, s^{p/(5-p)},
\end{equation}
where
\begin{equation}
\tilde d_p = \frac{k}{k-1} \left( (k-1)\, c'_p \right)^{1/k}.
\end{equation}
Writing these out explicitly yields
\begin{equation}
F_{\sss XC} \leq 1.9643 + 2.76755\, s^{1/4},~~~~~~~~~(\text{LL1})
\end{equation}
and
\begin{equation}
F_{\sss XC} \leq 1.9643 + 3.06212\, s^{2/3},~~~~~~~~~(\text{LL2})
\end{equation}
in contrast to the LO bound
\begin{equation}
F_{\sss XC} \leq 2.273.~~~~~~~~~~(\text{LO})
\end{equation}
\begin{center}
\begin{figure}[htb]
\includegraphics[width=0.47\textwidth]{Fxc2.pdf}
\caption{$F_{\sss XC}$ for LL and LO compared with PBE density for small $s$.}
\label{Fxc}
\end{figure}
\end{center}
In Fig.\ \ref{Fxc}, we plot all three bounds and see that for small values of $s$, the LL2 bound is tighter than the LO bound. We therefore define the new Lieb-Oxford-Lewin (LOL) bound to be the LL2 bound for small $s$ and the old LO bound otherwise.
This new bound is more restrictive than the LO bound when $s$ is less than 0.032. The existence of a tighter bound on the enhancement factor for all $s$ has been suggested empirically.\cite{RPCP09} We also plot the PBE enhancement factor, showing that it satisfies the LOL bound and comes much closer to it at small $s$ than to the old LO bound.
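The quoted crossover follows from equating the two restrictions on $F_{\sss XC}$; a one-line check:

```python
# Reduced gradient below which the LL2 restriction on the enhancement
# factor, 1.9643 + 3.06212 s^(2/3), lies below the LO value 2.273.
s_c = ((2.273 - 1.9643) / 3.06212) ** 1.5
print(s_c)   # ~0.032
```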
Spin-polarization is handled differently for exchange than for correlation.
For exchange-correlation together, it does not raise $F_{\sss XC}$ beyond its maximum
for unpolarized systems, as that is achieved in the low-density limit, which
is independent of spin. But in the opposite, high-density limit, exchange
dominates, and its spin-dependence is determined by the exact spin-scaling
relation for exchange:
\begin{equation}
E_{\sss X} [n_\uparrow,n_\downarrow] = \frac{1}{2}( E_{\sss X}[2n_\uparrow,0]+E_{\sss X}[0,2n_\downarrow] )
\end{equation}
which implies
\begin{equation}
F_{\sss X}^{pol} (s) = 2^{1/3}\, F_{\sss X}^{unpol} (s)
\end{equation}
Thus $F_{\sss X}^{unpol} < \tilde C_{LO}/2^{1/3}$, where $\tilde C_{LO}=C_{LO}/A_{\sss X}=2.273$, ensures that $F_{\sss X}^{pol}$ satisfies the LO
bound, which is the origin of Eq. (\ref{FxLO}).
\section{Conclusion}
In summary, we have tested the optimum LL bounds (which have already been applied to DFT by other groups\cite{CT14}) for a variety of simple systems,
finding they are less restrictive for those systems than the LO bound.
However, the LL bounds are clearly more restrictive for a uniform gas, and the family
of bounds from which the optimized LL bounds derive can be used to place limits on the
enhancement factor of GGAs. With this in mind, we constructed the combined
LOL bound, and we recommend that the LOL bound be used whenever relevant in future
functional development and testing of GGAs.
\section{Acknowledgements}
We thank Mathieu Lewin and Elliott Lieb for bringing their new bounds to our
attention, and
Eberhard Engel for developing the OPMKS atom code.
This work was supported by NSF under grant CHE-1112442.
\label{page:end}
\section{Introduction}
Metamaterials allow engineering of material parameters from their basic constituents. This property helps one to alter various phenomena associated with the propagation of electromagnetic waves. In particular, dispersion in metamaterials can be tailored by stacking a number of highly dispersive sheets of metamaterial \cite{dis}. The desired nonlinear properties can then be obtained either by introducing nonlinear insertions \cite{inser}, i.e., elements showing a nonlinear response, into the resonant meta-atoms, or by embedding the metamaterial in a nonlinear dielectric medium \cite{10}.
Higher-order harmonics can be selectively generated by introducing varactor diodes into the split-ring resonator (SRR) circuit \cite{ilya}. The self-steepening (SS) effect experienced by a light pulse depends on the size of the SRR element. The strength of birefringence can be tuned by adjusting the geometrical parameters of the constituent elements \cite{imhof}. The band gap in a nonlinear metamaterial can be tuned by a variable capacitance insertion \cite{gorkunov}. A photometamaterial with meta-atoms containing both photodiodes and light-emitting diodes can show breaking of inversion symmetry at a particular incident wave intensity, which results in the emergence of second harmonic generation \cite{Maxim}. Tunable chirality in photonic metamaterials can be obtained with meta-molecules containing a nonlinear nano-Au:polycrystalline indium-tin-oxide layer sandwiched between two L-shaped nano-antennas \cite{yu}. Low-threshold optical bistability at an ultralow excitation power can be observed in metamaterials with ultrathin holey metallic plates filled with nonlinear materials \cite{shi}.
\par
Soliton propagation is one of the most striking and fascinating nonlinear phenomena \cite{ML} and has been investigated in engineered metamaterials in recent years. In metamaterials, researchers have identified the existence of different kinds of solitons as a consequence of this engineering freedom. For example, the existence of subwavelength discrete solitons due to the balance between tunneling of surface plasmon modes and nonlinear self-trapping has been identified \cite{yong}. Dispersive magnetic permeability provides controllability of the Raman soliton self-frequency shift \cite{yuan}. Bright-bright, dark-dark and dark-bright vector solitons can exist in nonlinear isotropic chiral metamaterials \cite{Tsit}. Anomalous collision of elastic vector solitons has been identified in mechanical metamaterials as a result of the large-amplitude characteristics of the solitons \cite{deng}. Nonlinearity-induced wave transmission and generation of spatiotemporal solitons through an opaque slab with negative refraction have been studied \cite{nina}. The existence of low-power gigahertz solitons has been identified in metamaterials through plasmon-induced transparency \cite{Bai}. Metamaterials admit chirped bright, kink, and anti-kink quasi-solitons in the presence of the SS effect \cite{anjan}.
\par
On the other hand, a dissipative soliton is a stable localized structure formed by a double balance: between nonlinearity and dispersion, and between the gain and loss that change the pulse energy. It can be observed in a variety of fields such as optics, cosmology, biology, condensed matter physics and medicine \cite{akh}. In fiber-laser cavities, third-order dispersion can form stable and oscillatory bound states of dissipative solitons \cite{malo1}. Stability of discrete dissipative localized modes in metamaterials composed of weakly coupled SRRs has been studied \cite{rosa}. The existence of knotted solitons, which are stable self-localized dissipative structures in the form of closed knotted chains, has been identified in magnetic metamaterials \cite{rosa2}. The delicate balance between input power and intrinsic losses results in the formation of stable dissipative breathers in superconducting quantum interference device metamaterials \cite{laza}.
\par
In contrast to conventional positive-index materials, negative-index metamaterials can exhibit not only a positive SS effect but also a negative one, the sign being determined by the size of the SRR contained in the meta-atom. As a result, solitons propagating inside the metamaterial shift toward either the leading or the trailing edge, depending upon the sign of the SS coefficient \cite{wen1}. Under the influence of a negative SS effect the center of the pulse shifts toward the leading side, and hence part of the pulse energy shifts toward the blue side; in contrast, for a positive SS effect the center of the pulse shifts toward the trailing edge (red side). Also, the positive SS effect suppresses the self-frequency shift of the Raman soliton, whereas the negative SS effect enhances it \cite{yuan}. For many applications related to optical communication, stable dynamics of the propagating light is necessary. Hence, in this paper we investigate the impact of higher-order effects such as third-order dispersion (TOD), fourth-order dispersion (FOD), quintic nonlinearity (QN) and second-order nonlinear dispersion (SOND) on the SS-effect-induced shift of the dissipative solitons (DSs). We consider each higher-order effect as a perturbation to the system and, following the Lagrangian variational method, we examine the possibility of stable propagation of the DSs in metamaterials as a result of the interplay between the above higher-order effects.
\par
The paper is organized as follows. Following this self-contained introduction, in Section II the theoretical model of the problem and the variational analysis leading to the evolution equations of the pulse parameters are presented. In Section III, the impact of the higher-order effects on the dissipative soliton is investigated in detail, followed by a short summary and conclusion in Section IV.
\section{Theoretical Analysis}
The modified nonlinear Schr\"{o}dinger equation model which describes the propagation of electromagnetic waves in metamaterials is given by the following normalized equation \cite{wen}:
\begin{eqnarray}
\label{model}
\frac{\partial A}{\partial z } -i\sum_{m=2}^4 \frac{i^m \beta_m}{m!} \frac{\partial^m A}{\partial t^m }- i \gamma (|A|^{2}A) + i \gamma \xi (|A|^{4}A)+\gamma \sigma_1 \frac{\partial(|A|^{2}A)}{\partial t } \nonumber\\+i \gamma \sigma_2 \frac{\partial^2(|A|^{2}A)}{\partial t^2 } -g_l A =0,
\end{eqnarray}
where $z$ and $t$ are the normalized propagation distance and time in a comoving frame of reference, respectively. In Eq. (\ref{model}), $A(z,t)$ stands for the normalized envelope of the slowly varying electric field. $\beta_m$ and $\gamma$ are the $m^{th}$-order dispersion and nonlinearity coefficients, respectively. $\xi$ represents the quintic nonlinear coefficient. $\sigma_1$ and $\sigma_2$ are the normalized first- and second-order nonlinear dispersion coefficients, respectively. Also $g_l=g-\alpha$, where $g$ and $\alpha$ are the normalized gain and loss coefficients, respectively.
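To experiment with Eq. (\ref{model}) numerically, one can start from a minimal split-step Fourier sketch of the reduced equation, with all higher-order coefficients and $g_l$ set to zero (our illustration, not the variational method used below). With $\beta_2=-1$ and $\gamma=1$ the fundamental soliton $A=\mathrm{sech}(t)$ should propagate unchanged:

```python
import numpy as np

# Minimal split-step Fourier integrator for the reduced model
#   dA/dz = -i (beta2/2) A_tt + i gamma |A|^2 A,
# i.e. the model equation with beta3 = beta4 = xi = sigma1 = sigma2 = g_l = 0.
# With beta2 = -1, gamma = 1 the fundamental soliton sech(t) is preserved.
def propagate(A, t, z_end, dz, beta2=-1.0, gamma=1.0):
    dt = t[1] - t[0]
    w = 2.0 * np.pi * np.fft.fftfreq(t.size, dt)   # angular frequencies
    lin = np.exp(0.5j * beta2 * w**2 * dz)         # dispersion propagator
    z = 0.0
    while z < z_end - 1e-12:
        A = np.fft.ifft(lin * np.fft.fft(A))            # linear step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # Kerr step
        z += dz
    return A

t = np.linspace(-20.0, 20.0, 512, endpoint=False)
A = 1.0 / np.cosh(t)
A_out = propagate(A, t, z_end=1.0, dz=1e-3)
print(np.max(np.abs(A_out)))   # ~1.0: the soliton peak is preserved
```

The remaining terms of the model could be added to the nonlinear step in the same fashion.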
\par
Now we discuss the realization of the properties presented in Eq. (\ref{model}) based on the Drude-Lorentz model.
\begin{figure}[!ht]
\label{block}
\begin{center}
\subfigure[Array of metallic wires with radius of a wire 'r' and the distance between two wires 'a'.]{\label{wire}\includegraphics[height=4 cm, width=4 cm]{wire.eps}}~~~~
\subfigure[Split ring resonator with nonlinear insertions. The spacing between two rings and the radius are 'd' and 'R' respectively.]{\label{ring}\includegraphics[height=4 cm, width=4 cm]{withdiode.eps}}
\caption{Basic building blocks of a NIM}
\end{center}
\end{figure}
According to the Drude-Lorentz model \cite{Wooten}, the permittivity of a metal can be expressed as
\begin{equation}
\epsilon(\omega)=1-\frac{\omega_{pe}^2-\omega_0^2}{\omega^2-\omega_0^2+i \omega \tau_c},
\end{equation}
where $\omega_0$ is the resonance frequency, $\omega_{pe}$ is the electron plasma frequency and $\tau_c$ is the damping frequency. For the particular case $\omega_0 < \omega < \omega_{pe}$, the electric permittivity is negative. In order to realize a structure exhibiting negative permittivity, Pendry \emph{et al.} \cite{Pendry1} proposed an array of thin metallic wires of radius $r$ and separation $a$, as shown in Fig. \ref{wire}, in which the value of the plasma frequency can be controlled by the geometric parameters. The plasma frequency of such an array of thin metallic wires is given by
\begin{equation}
\omega_{pe}^2=\frac{2\pi c_0^2}{a^2 \ln(a/r)},
\label{epl}
\end{equation}
where $c_0$ is the velocity of light. From Eq. (\ref{epl}) one can see that the plasma frequency, and hence the electric permittivity, depends on the geometrical parameters, namely the radius $r$ and spacing $a$ of the wires, and thus on the structure of the lattice. In order to realize a negative magnetic response, Pendry \emph{et al.} \cite{Pendry3} suggested the split-ring resonator structure, which consists of loops or rings made of good conductors with a split or gap, as shown in Fig. \ref{ring}. The effective magnetic permeability of the structure is given by
\begin{equation}
\mu(\omega)=1-\frac{F \omega^2}{\omega^2-\omega_0^2+i \tau \omega},
\label{permi}
\end{equation}
where $\omega_0=\frac{3 L c_0^2}{\pi R^3 \ln(2 \omega/d)}$ is the resonant frequency, $F=\frac{\pi R^2}{L^2}$ is the volume filling factor and $\tau$ is the damping coefficient. Here again the resonant frequency, damping factor, filling factor and hence the effective permeability are functions of the geometrical parameters of the structure. Thus, the thin wire lattice and split ring resonator are the two basic building blocks of a NIM
medium where the former gives the negative electric responses and the latter gives the negative magnetic responses. Now various coefficients of Eq. (\ref{model}) can be represented as \cite{wen, R22, R21},
$\beta_2=\frac{1}{c_0 \omega_0 n}(1+\frac{3\omega_{pm}^2 \omega_{pe}^2}{\omega_0^4})-\frac{1}{c_0 \omega_0 n^3}(1-\frac{\omega_{pm}^2 \omega_{pe}^2}{\omega_0^4})^2$, $\beta_3=-\frac{12 (\omega_{pm}/\omega_{pe})^2}{n c \omega_{pe}^2 (\omega/\omega_{pe})^6}-\frac{3\beta_2((\omega /\omega_{pe})^4 - (\omega_{pm} /\omega_{pe})^2)}{3\omega_{pe}n^2(\omega/\omega_{pe})^5}$, $\beta_4=\frac{60 n^2 \omega_{pe}^2 \omega_{pm}^2}{k_0 \omega^6 -\frac{3\beta_2^2}{k_0}}$, $\xi=\frac{1}{2n \omega^2 T_p^2} |\frac{1}{n}(1+\frac{3\omega_{pe}^2 \omega_{pm}^2}{\omega_0^4})-\frac{(1-\frac{\omega_{pe}^2 \omega_{pm}^2}{\omega_0^4})^2}{n^3}|$, $\sigma_1=\frac{1}{\omega T_p}(1+\frac{\omega_{pm}^2 \omega_{pe}^2-\omega_0^4}{n^2 \omega_0^4}-\frac{\omega_{pm}^2+\omega_0^2}{\omega_{pm}^2-\omega_0^2})$ and $\sigma_2=\frac{1}{\omega^2 T_p^2}(\frac{\omega_0^2}{\omega_0^2-\omega_{pm}^2}-\frac{1}{4n^2}(1+\frac{3\omega_{pm}^2\omega_{pe}^2}{\omega_0^4}+\frac{1}{4n^4} (1-\frac{\omega_{pe}^2\omega_{pm}^2}{\omega_0^4})^2))$, where $n=\sqrt{\epsilon(\omega) \mu(\omega)}$ is the index of refraction. $\omega_{pm}$, $k_0$ and $T_p$ are the magnetic plasma frequency, propagation constant and initial pulse width, respectively. Note that the choice of the above parameters depends upon the selection of $\omega_{pm}$ and $\omega_{pe}$ that will determine the engineered size of the constituents of the NIM structures.
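To make these expressions concrete, the sketch below evaluates $\epsilon(\omega)$ and $\mu(\omega)$ from the Drude-Lorentz and SRR formulas above, together with the thin-wire plasma frequency of Eq. (\ref{epl}), at an illustrative frequency inside the band where both responses are negative. All numerical parameter values here are assumptions in normalized units, chosen for illustration only, not values taken from the text.

```python
import numpy as np

def eps_drude_lorentz(omega, omega_pe, omega_0, tau_c):
    """Permittivity of the metal, cf. the Drude-Lorentz expression above."""
    return 1.0 - (omega_pe**2 - omega_0**2) / (omega**2 - omega_0**2 + 1j * omega * tau_c)

def mu_srr(omega, F, omega_0m, tau):
    """Effective permeability of the SRR array (the mu(omega) expression above)."""
    return 1.0 - F * omega**2 / (omega**2 - omega_0m**2 + 1j * tau * omega)

def omega_pe_wires(a, r, c0=1.0):
    """Plasma frequency of the thin-wire array: omega_pe^2 = 2 pi c0^2 / (a^2 ln(a/r))."""
    return np.sqrt(2.0 * np.pi * c0**2 / (a**2 * np.log(a / r)))

# Illustrative (assumed) normalized parameters inside the overlap band
omega = 0.6
eps = eps_drude_lorentz(omega, omega_pe=1.0, omega_0=0.4, tau_c=0.01)
mu = mu_srr(omega, F=0.5, omega_0m=0.5, tau=0.01)
print(eps.real < 0 and mu.real < 0)   # True: both responses are negative here
```

In a frequency window where both real parts are negative, the refractive index branch with $\mathrm{Re}(n)<0$ must be selected; this simultaneous negativity is the defining feature of a NIM.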
\par
Now, let us rewrite Eq. (\ref{model}) in the form of a perturbed nonlinear Schr\"{o}dinger equation,
\begin{eqnarray}
\label{per}
i \frac{\partial A}{\partial z } -\frac{\beta_2}{2}\frac{\partial^2 A}{\partial t^2}+ \gamma (|A|^{2}A)= i \vartheta(A),
\end{eqnarray}
where $\vartheta(A)$ is defined as,
\begin{eqnarray}
\label{per1}
\vartheta(A)= \frac{\beta_3}{6}\frac{\partial^3 A}{\partial t^3}+ i \frac{\beta_4}{24}\frac{\partial^4 A}{\partial t^4}- i \gamma \xi (|A|^{4}A)-\gamma \sigma_1 \frac{\partial(|A|^{2}A)}{\partial t }\nonumber\\ -i \gamma \sigma_2 \frac{\partial^2(|A|^{2}A)}{\partial t^2 }+g_l A.
\end{eqnarray}
Now, we follow the Lagrangian variational method developed by Anderson to study the characteristics of soliton evolution \cite{an1,an2}. The Lagrangian density corresponding to Eq. (\ref{per}) can be written as follows,
\begin{eqnarray}
L= \frac{i}{2}\,(A\frac{\partial A^*}{\partial z }- A^* \frac{\partial A}{\partial z })-\, \frac{\beta_2}{2} |\frac{\partial A}{\partial t }|^2 -\frac{\gamma}{2}\,|A|^{4}+i(\vartheta A^* -\vartheta^* A).
\end{eqnarray}
As the problem deals with DSs let us now choose the Pereira-Stenflo solution \cite{pss} as ansatz for the solitary pulse,
\begin{eqnarray}
\label{ans}
A(z,t)=A_0(z)[\mathrm{sech}(\rho(z)(t-t_p(z)))]^{(1+i w(z))} e^{i (\varphi (z)- \delta(z)(t-t_p(z)))},
\end{eqnarray}
where the parameters $A_0$, $\rho$, $t_p$, $w$, $\varphi$ and $\delta$ are assumed to be functions of the propagation distance $z$. $A_0$ and $\rho$ define the amplitude and half-width of the soliton, respectively, whereas $t_p$, $w$, $\varphi$ and $\delta$ are the temporal position, frequency
chirp, phase and frequency shift, respectively. The variations of these parameters during the evolution of soliton can be found using Lagrangian variational analysis. The reduced Lagrangian of the system can be calculated by using the following relation,
\begin{equation}
\label{leg1}
\langle L\rangle=\int_{-\infty}^\infty L dt.
\end{equation}
After substituting $A(z,t)$ and its various derivatives into the integral on the right-hand side of Eq. (\ref{leg1}) and performing the necessary integrations, one obtains the reduced Lagrangian as follows,
\begin{eqnarray}
\label{eg}
\langle L\rangle=2\frac{A_0^2}{\rho}(\frac{\partial \varphi}{\partial z }+\delta \frac{\partial t_p}{\partial z})-w\frac{A_0^2}{\rho^2} \frac{\partial \rho}{\partial z}+N \frac{A_0^2}{\rho}\frac{\partial w}{\partial z}+\rho \frac{A_0^2}{3}(1+w^2)\\ \nonumber+\frac{A_0^2}{\rho}(\delta^2-\frac{2}{3}A_0^2)+i \int_{-\infty}^\infty (\vartheta A^* -\vartheta^* A)dt,
\end{eqnarray}
where $N=\ln(2)-1$, $\beta_2=-1$ and $\gamma=1$. Varying the above effective Lagrangian with respect to the variational parameters given in Eq. (\ref{ans}) and performing the integration after substituting $\vartheta$ and $\vartheta^*$ from Eq. (\ref{per1}), we get the following system of six coupled nonlinear evolution equations corresponding to the six variational parameters,
\begin{subequations}
\label{evolu}
\begin{eqnarray}
\label{evolu1}
\frac{d t_p}{d z }= 2 t_p(g_l+\frac{4}{15}\gamma \sigma_2 A_0^2 w \rho^2)- \delta,
\end{eqnarray}
\begin{eqnarray}
\label{evolu2}
\frac{d \rho}{d z }= \rho (\frac{2}{15} w \rho^2 (5 -A_0^2(8 \gamma \sigma_2 +4 N \gamma \sigma_2 ))-2N g_l ),
\end{eqnarray}
\begin{eqnarray}
\label{evolu3}
\frac{d A_0}{d z }= w \rho^2(\frac{1}{3} A_0 -\frac{4}{5}A_0^3\gamma \sigma_2(1 -\frac{N}{3}))-\ln(2)A_0 g_l,
\end{eqnarray}
\begin{eqnarray}
\label{evolu4}
\frac{d \delta}{d z }= \frac{4}{15}(2 \gamma \sigma_1 A_0^2 w \rho^2-15 \ln(2)\delta g_l+w \rho^2 \delta(5-\gamma \sigma_2 A_0^2 (16-4 N ))),
\end{eqnarray}
\begin{eqnarray}
\label{evolu5}
\frac{d\varphi}{d z }= \frac{1}{4 f N_1}(\frac{4f A_0^2}{3}(2-\acute{N}-\frac{8\gamma \xi A_0^2}{5})+4 f w g_l(2+N-\acute{N}-N \acute{N}-2\ln(2)N_1) \nonumber \\-\frac{2}{3}A_0 \rho(1 +\acute{N}- w^2+3 \acute{N} w^2)+\frac{8}{5} A_0^3 \rho \gamma(\sigma_2+\sigma_2 w^2(\frac{2}{3} N - 1 +2 \acute{N}-\frac{2 N \acute{N}}{3})-\frac{4 \acute{N} \sigma_1 t_p w }{3})\nonumber\\
+\beta_4 A_0 \rho^3(\frac{7}{90} +\frac{1}{9} w^2 +\frac{1}{30} w^4)+\frac{1}{3}A_0 \rho (1+w^2) (2\beta_3 \delta+ \beta_4 \delta^2)+f(\frac{2 \beta_3 \delta^3}{3}+\frac{\beta_4 \delta^4}{6})\nonumber\\
-8 t_p f \delta g_l N_1 -\frac{32}{15} \gamma \sigma_2 t_p A_0^3 w \rho \delta N_1 +2 f \delta^2 N_1
+\frac{8}{3} \gamma f A_0^2 (\sigma_2+\sigma_1 \delta )),
\end{eqnarray}
\begin{eqnarray}
\label{evolu6}
\frac{d w}{d z }= 2 \ln(2)g_l N_1 w-\frac{2}{3} N_1 A_0^2+16 t_p \delta g_l N_1+\frac{1}{4(N-1)N_1}( \frac{8A_0^2}{3}(\acute{N}-\frac{8}{3} A_0^2 \gamma\rho-2)\nonumber\\ +8g_l w ((\acute{N}-N)(1-2\ln(2))-2+N\acute{N}) +\rho^2(\frac{4}{3}(1+\acute{N}-w^2(1-3\acute{N}))+\frac{16A_0^2\gamma}{5}(\acute{N} \sigma_1 t_p w \nonumber\\ +\sigma_2 w^2 (1-\frac{2}{3}N^2-\frac{2}{3}N+\frac{2}{3}N \acute{N} )-1)) -\frac{\beta_4 \rho^4}{3}(\frac{7}{15}+\frac{2w^2}{3}+\frac{w^4}{5})\nonumber\\ -\frac{16}{3}\sigma_1 A_0^2\delta-\frac{2}{3}\rho^2 \delta(1+w^2 \frac{\delta^2}{\rho^2})(\delta \beta_4+2 \beta_3) +\frac{64}{15} \sigma_2 \gamma t_p A_0^2 w \rho^2 \delta (1+N+ \frac{4\delta}{5t_p \rho^2 w}) +4\delta^2 N_1)\nonumber\\-g_l \frac{4 t_p \delta}{N-1}+\frac{\rho^2}{N-1}(3+w^2+8\sigma_2\gamma A_0^2 w^2 \rho^2(N-\frac{1}{5}+ \frac{2t_p \delta}{15 w}+ \frac{2\sigma_1 t_p}{15\sigma_2 w})+ \frac{\delta^2}{\rho^2}),
\end{eqnarray}
\end{subequations}
where $f=\frac{A_0}{\rho}$, $\acute{N}=\frac{N}{N-1}$ and $N_1=1-\acute{N}$. The above set of six coupled nonlinear differential equations, Eq. (\ref{evolu}), describes the variation of the parameters of Eq. (\ref{ans}) during the evolution of the dissipative soliton under the influence of the perturbations defined in Eq. (\ref{per1}). We now solve the system of equations in Eq. (\ref{evolu}) numerically to determine the behavior of the soliton parameters during the evolution. We take the input soliton to be an unchirped, unit-amplitude pulse with zero initial phase, temporal position and frequency shift. We also assume that the metamaterial in which the soliton propagates is a nonlinear, dissipative medium with an effective gain that compensates the loss \cite{gan1, gan2}. Hence we choose $A_0(0)=1$, $\rho(0)=1$, $t_p(0)=0$, $w(0)=0$, $\varphi(0)=0$, $\delta(0)=0$, $g=0.15$, $\alpha=0.1$ and $\gamma=1$ to solve the system of equations.
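As an illustration of how such a system can be integrated, the sketch below solves only the four compact equations (\ref{evolu1})--(\ref{evolu4}) with the chirp $w$ frozen at a small illustrative value; in the actual analysis $w$ evolves through Eq. (\ref{evolu6}). The values chosen for $g_l$, $\sigma_1$, $\sigma_2$ and $w$ are assumptions for demonstration only, not the paper's tuned parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constants: N = ln(2) - 1 as in the text; gamma, sigma_1, sigma_2, g_l and
# the frozen chirp w are illustrative assumptions, not the paper's values.
N = np.log(2.0) - 1.0
gamma, s1, s2, g_l, w = 1.0, 0.5, 0.005, 0.05, 0.1

def rhs(z, u):
    """Right-hand sides of Eqs. (evolu1)-(evolu4) for (t_p, rho, A_0, delta)."""
    tp, rho, A0, delta = u
    dtp = 2.0 * tp * (g_l + (4.0 / 15.0) * gamma * s2 * A0**2 * w * rho**2) - delta
    drho = rho * ((2.0 / 15.0) * w * rho**2
                  * (5.0 - A0**2 * (8.0 * gamma * s2 + 4.0 * N * gamma * s2))
                  - 2.0 * N * g_l)
    dA0 = (w * rho**2 * (A0 / 3.0 - 0.8 * A0**3 * gamma * s2 * (1.0 - N / 3.0))
           - np.log(2.0) * A0 * g_l)
    ddelta = (4.0 / 15.0) * (2.0 * gamma * s1 * A0**2 * w * rho**2
                             - 15.0 * np.log(2.0) * delta * g_l
                             + w * rho**2 * delta * (5.0 - gamma * s2 * A0**2 * (16.0 - 4.0 * N)))
    return [dtp, drho, dA0, ddelta]

# Initial conditions from the text: t_p(0)=0, rho(0)=1, A_0(0)=1, delta(0)=0
sol = solve_ivp(rhs, (0.0, 5.0), [0.0, 1.0, 1.0, 0.0], rtol=1e-8, atol=1e-10)
tp_end, rho_end, A0_end, delta_end = sol.y[:, -1]
```

With these assumed values the width parameter $\rho$ grows along $z$, so the integration window must be kept short; the full six-equation system with evolving $w$ is what produces the bounded trajectories shown in the figures.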
\section{Results and Discussion}
Here we present the outcome of the variational analysis based on the coupled nonlinear differential equations given in Eq. (\ref{evolu}) and the numerical results obtained by solving Eq. (\ref{model}). We adopt the standard split-step Fourier method for our numerical simulation. The parameter values used are $g=0.15$, $\alpha=0.1$, $\gamma=1$, $\beta_2=-1$ and $A_0(0)=1$. We first study the influence of the SS effect on DS propagation. We then examine the impact of the other linear and nonlinear higher-order effects on the SS-induced temporal shift. Finally, we demonstrate stable dynamics of the DS as a result of the interplay between the various higher-order effects.
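The split-step idea itself can be sketched in a few lines. The snippet below propagates the unperturbed cubic core of Eq. (\ref{per}) (with $\beta_2=-1$, $\gamma=1$ and the perturbation $\vartheta$ switched off) using symmetric (Strang) splitting. This is a generic sketch of the standard method, not the authors' exact implementation, and the grid sizes and step are illustrative choices.

```python
import numpy as np

# Grid and fundamental-soliton input (beta_2 = -1, gamma = 1 as in the text;
# grid sizes and step are illustrative choices)
Nt, T = 256, 40.0
t = np.linspace(-T / 2.0, T / 2.0, Nt, endpoint=False)
dt = t[1] - t[0]
omega = 2.0 * np.pi * np.fft.fftfreq(Nt, d=dt)
A = 1.0 / np.cosh(t)                      # sech input: A_0(0)=1, rho(0)=1

dz, nsteps = 1e-3, 1000                   # propagate to z = 1
half_disp = np.exp(-0.5j * omega**2 * (dz / 2.0))    # half dispersion step
for _ in range(nsteps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # linear half step
    A = A * np.exp(1j * np.abs(A)**2 * dz)           # nonlinear full step
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # linear half step
```

With a fundamental $\mathrm{sech}$ input, the pulse should emerge essentially unchanged, which provides a convenient sanity check of the propagation code before the perturbation terms are switched on.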
\subsection{Impact of self steepening}
\begin{figure*}[htb]
\subfigure[]{\label{1a}\includegraphics[height=5 cm, width=7 cm]{amp1.eps}}
\subfigure[]{\label{1b}\includegraphics[height=5 cm, width=7 cm]{rho1.eps}}
\subfigure[]{\label{1c}\includegraphics[height=5 cm, width=7 cm]{tp1.eps}}
\subfigure[]{\label{1d}\includegraphics[height=5 cm, width=7 cm]{delta1.eps}}
\subfigure[]{\label{1e}\includegraphics[height=5 cm, width=7 cm]{w1.eps}}
\subfigure[]{\label{1f}\includegraphics[height=5 cm, width=7 cm]{phi1.eps}}
\caption{(Color online.) Changes in different variational parameters under the influence of positive SS effect ($\sigma_1=0.5$) during the propagation of soliton.}
\label{fig1}
\end{figure*}
Now we discuss the influence of the SS effect on the propagation of the DS in metamaterials in detail. In contrast to conventional positive index materials, negative index metamaterials can exhibit not only a positive SS effect but also a negative one, the sign being determined by the size of the SRR circuit contained in the meta-atom. We first discuss the variational analysis results based on Eq. (\ref{evolu}). Fig. \ref{fig1} depicts the changes in the different variational parameters under the influence of a positive SS effect ($\sigma_1=0.5$) during the propagation of the soliton. The pulse energy, $E=\int_{-\infty}^\infty |A|^{2} dt$, should be a conserved quantity during the evolution of the soliton. From Eq. (\ref{eg}), the cyclic nature of the variational parameter $\varphi(z)$ gives,
\begin{eqnarray}
\label{sd}
E=2\frac{A_0^2}{\rho}.
\end{eqnarray}
Figs. \ref{1a} and \ref{1b} show the changes in the parameters $A_0$ and $\rho$. It is clear from the figures that the parameters vary in such a way to keep the total pulse energy given by Eq. (\ref{sd}) a constant during the evolution. Variations of the temporal position $t_p$ and the frequency shift $\delta$ are depicted in Figs. \ref{1c} and \ref{1d}, respectively. Fig. \ref{1c} shows that as the soliton evolves the value of the temporal position increases. Hence the soliton peak shifts toward the trailing side of the pulse. Consequently $\delta$ decreases (Fig. \ref{1d}) and as a result the soliton spectrum shifts toward the low energy red side. Hence one can conclude that due to the influence of positive SS effect the peak of the DS shifts toward the trailing side and hence the spectrum shifts toward the red side when it propagates in the metamaterials. Also Figs. \ref{1e} and \ref{1f} depict the changes in the frequency chirp and the phase, respectively, during the evolution of the soliton.
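The identity $E=2A_0^2/\rho$ follows from $\int_{-\infty}^{\infty}\mathrm{sech}^2(\rho t)\,dt = 2/\rho$, since the imaginary part of the exponent $1+iw$ in the ansatz (\ref{ans}) does not affect $|A|$ for a real argument. A quick numerical check, with arbitrary illustrative parameter values:

```python
import numpy as np
from scipy.integrate import quad

A0, rho, w = 1.3, 0.7, 2.0   # arbitrary illustrative parameter values
# |sech(x)^(1+i w)| = sech(x) for real x, so |A|^2 = A0^2 sech^2(rho t)
intensity = lambda t: np.abs(A0 * np.cosh(rho * t) ** (-(1.0 + 1j * w))) ** 2
E, _ = quad(intensity, -20.0, 20.0)
print(abs(E - 2.0 * A0**2 / rho) < 1e-6)   # True: E = 2 A0^2 / rho
```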
\begin{figure*}[htb]
\subfigure[]{\label{2a}\includegraphics[height=5 cm, width=7 cm]{tp2.eps}}
\subfigure[]{\label{2b}\includegraphics[height=5 cm, width=7 cm]{delta2.eps}}
\caption{(Color online.) Impact of negative SS effect ($\sigma_1=-0.5$) on temporal position and frequency shift during the propagation of soliton.}
\label{threecore_normal}
\end{figure*}
On the other hand, when the SS effect is negative the evolution of the DS is quite different from the positive SS case. Figs. \ref{2a} and \ref{2b} show the variations of the temporal position and frequency shift, respectively, when the SS effect is negative. As compared to the case of the positive SS effect, the changes here are opposite in nature: the temporal position decreases and the frequency shift increases during the evolution. Hence the peak of the DS shifts toward the leading side of the pulse and the spectrum shifts toward the high energy blue side when the SS effect is negative.
\begin{center}
\begin{figure*}[htb]
\subfigure[]{\label{3a}\includegraphics[height=6 cm, width=6 cm]{soliton1.eps}}
\subfigure[]{\label{3b}\includegraphics[height=6 cm, width=6 cm]{soliton2.eps}}
\subfigure[]{\label{3c}\includegraphics[height=6 cm, width=4.5 cm]{blues.eps}}~~~~~~
\subfigure[]{\label{3d}\includegraphics[height=6 cm, width=4.5 cm]{reds.eps}}~~~~~
\subfigure {\label{3e}\includegraphics[height=4 cm, width=1 cm]{leg.eps}}
\caption{(Color online.) Evolution of the dissipative soliton under the influence of the SS effect: (a) negative SS effect ($\sigma_1$=-0.8) and (b) positive SS effect ($\sigma_1$=0.8). Spectral evolutions corresponding to (a) and (b) are shown in (c) and (d), respectively.}
\label{fig3}
\end{figure*}
\end{center}
\begin{center}
\begin{figure*}[htb]
\subfigure[Input soliton]{\label{4a}\includegraphics[height=3.5 cm, width=4 cm]{input.eps}}
\subfigure[$\sigma_1$=-0.8]{\label{4b}\includegraphics[height=3.5 cm, width=4 cm]{sol1.eps}}
\subfigure[$\sigma_1$=0.8]{\label{4c}\includegraphics[height=3.5 cm, width=4 cm]{sol2.eps}}
\caption{(Color online.) Temporal shifts of the soliton under the influence of negative and positive SS effects at z=250.}
\label{d4}
\end{figure*}
\end{center}
\par
In order to confirm the results of the variational analysis obtained here, we now solve Eq. (\ref{model}) numerically and present the results in the following. Fig. \ref{fig3} depicts the numerical outcome, showing the propagation of the DS up to z=250. Figs. \ref{3a} and \ref{3b} correspond to negative ($\sigma_1=-0.8$) and positive ($\sigma_1=0.8$) SS effects, respectively. Fig. \ref{d4} shows the input and output intensity profiles of the DS under the influence of the SS effects. It is clear from the figures that the SS effect induces a temporal shift during the evolution of the soliton. This temporal shift is toward the leading edge (Figs. \ref{3a} and \ref{4b}) when the SS effect is negative, in contrast to the positive SS effect, where the shift occurs toward the trailing edge of the pulse (Figs. \ref{3b} and \ref{4c}). Figs. \ref{3c} and \ref{3d} are the spectral evolutions of the soliton corresponding to Figs. \ref{3a} and \ref{3b}, respectively. In Fig. \ref{3c} the spectrum shifts toward the high energy blue side, whereas the spectral shift is toward the red side in Fig. \ref{3d}.
In Figs. \ref{4a} and \ref{4b} we also compare the soliton profiles from the numerical simulation and the variational prediction, respectively, at z=250. We observe that the variational predictions agree well with the numerical simulations. The shape of the pulse remains unchanged under the influence of the SS effect.
\par
In the following section we discuss the impacts of other perturbations considered in Eq. (\ref{per1}) on the self steepening induced temporal shifts and we demonstrate stable dynamics of the DS as a result of the interplay between the higher-order effects.
\subsection{Influence of linear dispersion}
In this section we study the impacts of the higher order linear dispersion such as third and fourth order dispersions on SS induced temporal shift. Initially we discuss the results obtained from the variational analysis.
\begin{figure*}[htb]
\subfigure[$\sigma_1=0.5$]{\label{sigm}\includegraphics[height=5 cm, width=7 cm]{sigma1p8.eps}}
\subfigure[$\sigma_1=-0.5$]{\label{sig}\includegraphics[height=5 cm, width=7 cm]{sigma1mp8.eps}}
\caption{(Color online.) Impact of TOD on temporal shift $t_p$.}
\label{bet3}
\end{figure*}
\begin{figure*}[htb]
\subfigure[$\sigma_1=-0.5$]{\label{bet1}\includegraphics[height=5 cm, width=7 cm]{beta4p.eps}}
\subfigure[$\sigma_1=0.5$]{\label{bet2}\includegraphics[height=5 cm, width=7 cm]{beta4n.eps}}
\caption{(Color online.) Impact of the FOD on the temporal shift $t_p$.}
\label{thr}
\end{figure*}
Fig. \ref{bet3} depicts the impact of TOD on the temporal shift $t_p$. Figs. \ref{sigm} and \ref{sig} correspond to positive and negative SS effects, respectively. It is clear from Fig. \ref{sigm} that when both $\sigma_1$ and $\beta_3$ are positive, the peak shift of the soliton towards the trailing edge is enhanced. On the other hand, positive $\sigma_1$ and negative $\beta_3$ suppress the temporal shift. Similarly, in the negative regime of $\sigma_1$, enhancement and suppression of the peak shift towards the leading edge are observed when $\beta_3$ is negative and positive, respectively, as shown in Fig. \ref{sig}. However, FOD always enhances the SS effect induced temporal shift toward the trailing or leading edge of the pulse, as is clear from Fig. \ref{thr}.
\begin{figure*}[htb]
\subfigure[$\beta_3=0.2$]{\label{bet3num1}\includegraphics[height=3.5 cm, width=4 cm]{bet3p2.eps}}
\subfigure[$\beta_3=0.0$]{\label{bet3num2}\includegraphics[height=3.5 cm, width=4 cm]{bet30.eps}}
\subfigure[$\beta_3=-0.4$]{\label{bet3num3}\includegraphics[height=3.5 cm, width=4 cm]{bet3mp4.eps}}
\caption{(Color online.) Intensity profiles of the DS in the positive regime of SS effect ($\sigma_1$=0.8) at z=250 under the influence of different values of $\beta_3$.}
\label{bet3num}
\end{figure*}
\begin{figure*}[htb]
\subfigure[$\beta_4=0.0$]{\label{bet4num1}\includegraphics[height=3.5 cm, width=4 cm]{bet40.eps}}
\subfigure[$\beta_4=-0.05$]{\label{bet4num2}\includegraphics[height=3.5 cm, width=4 cm]{beta4mp05.eps}}
\subfigure[$\beta_4=-0.1$]{\label{Pbet4num3}\includegraphics[height=3.5 cm, width=4 cm]{beta4mp1.eps}}
\caption{(Color online.) Intensity profiles of the DS in the positive regime of SS effect ($\sigma_1$=0.8) at z=250 under the influence of different values of $\beta_4$.}
\label{bet4num}
\end{figure*}
In order to confirm the variational results presented in Figs. \ref{bet3} and \ref{thr}, we perform a numerical analysis of Eq. (\ref{model}). Fig. \ref{bet3num} depicts the intensity profiles of the DS at z=250 under the influence of different values of $\beta_3$ when $\sigma_1=0.8$ (a positive value). Fig. \ref{bet3num2} corresponds to the case $\beta_3=0$, which shows the SS effect induced temporal shift toward the trailing edge of the pulse. The action of positive $\beta_3$ increases the temporal shift along with a reduction of the intensity of the profile (Fig. \ref{bet3num1}) \cite{ag}. On the other hand, negative $\beta_3$ tends to bring the soliton peak back towards zero temporal shift (Fig. \ref{bet3num3}). As the magnitude of the FOD increases, the intensity of the profile decreases, and FOD always enhances the SS effect induced temporal shift, as is clear from Fig. \ref{bet4num}.
\subsection{Influence of higher-order nonlinearities}
\begin{figure*}[htb]
\subfigure[$\sigma_1=-0.5$]{\label{quintic1}\includegraphics[height=5 cm, width=7 cm]{signegqui.eps}}
\subfigure[$\sigma_1=0.5$]{\label{quintic2}\includegraphics[height=5 cm, width=7 cm]{sigposqui.eps}}
\caption{(Color online.) Variational analysis results showing the changes in the SS effect induced temporal shift due to the presence of quintic nonlinearity.}
\label{quintic1q}
\end{figure*}
Next we discuss the impact of higher-order nonlinear perturbations such as quintic nonlinearity and second order nonlinear dispersion.
\begin{figure*}[htb]
\subfigure[$\sigma_1=-0.8$]{\label{quintic1n}\includegraphics[height=5 cm, width=7 cm]{quintic_sigma_N.eps}}
\subfigure[$\sigma_1=0.8$]{\label{quintic2p}\includegraphics[height=5 cm, width=7 cm]{quintic_sigma_P.eps}}
\caption{(Color online.) Impact of QN on SS effect induced temporal shift of soliton.}
\label{quintic}
\end{figure*}
Figs. \ref{quintic1q} and \ref{quintic} show the variational and numerical results, respectively, which reveal the impact of QN on the SS induced temporal shift. A positive QN coefficient always increases the SS effect induced temporal shift, whereas a negative QN coefficient decreases it. The influence of QN is independent of the sign of the SS effect. On the other hand, the impact of the SOND depends on the sign of the SS coefficient. When the SS effect is positive, SOND enhances the temporal shift towards the trailing edge, contrary to the case of the negative SS effect, where it enhances the shift towards the leading edge of the pulse, as depicted in Figs. \ref{snd} and \ref{thre}. Both QN and SOND considerably influence the intensity of the propagating dissipative soliton. Negative QN increases the intensity of the profile whereas positive QN decreases it. On the other hand, SOND reduces the intensity of the profile, as is clear from Fig. \ref{thre}.
\begin{figure*}[htb]
\subfigure[$\sigma_1=-0.5$]{\label{sond1}\includegraphics[height=5 cm, width=7 cm]{sig1neg.eps}}
\subfigure[$\sigma_1=0.5$]{\label{sond2}\includegraphics[height=5 cm, width=7 cm]{sig1pov.eps}}
\caption{(Color online.) Variational analysis results which show the influence of SOND on SS effect induced temporal shift.}
\label{snd}
\end{figure*}
\begin{figure*}[htb]
\subfigure[$\sigma_2=0.0$]{\label{Nonlinear_normal}\includegraphics[height=3.5 cm, width=4 cm]{ndp0.eps}}
\subfigure[$\sigma_2=0.005$]{\label{PIM_normal}\includegraphics[height=3.5 cm, width=4 cm]{ndmp005.eps}}
\subfigure[$\sigma_2=0.01$]{\label{PIM_normal2}\includegraphics[height=3.5 cm, width=4 cm]{ndmp01.eps}}
\caption{(Color online.) Impact of the SOND on the SS effect induced temporal shift of soliton when $\sigma_1$=0.8.}
\label{thre}
\end{figure*}
\subsection{Perturbations balanced soliton propagation}
\begin{figure*}[htb]
\subfigure[Unperturbed soliton]{\label{sol1}\includegraphics[height=6 cm, width=6 cm]{np.eps}}
\subfigure[Perturbations balanced soliton]{\label{sol2}\includegraphics[height=6 cm, width=6 cm]{p.eps}}
\caption{(Color online.) Unperturbed and perturbation balanced DS propagation in the metamaterials.}
\label{solipro}
\end{figure*}
It is quite clear from the study of the impact of the different perturbations given in Eq. (\ref{per1}) that they considerably influence the dynamics of the DS. We now examine the possibility of stable propagation of the DS in metamaterials as a result of a balancing of the perturbations. A proper combination of higher order effects such as the SS effect and TOD can remove instabilities and provide stable propagation of the soliton \cite{sofia, Liz, Lu, malo2}: counteracting perturbations can support stable dynamics of the soliton. The stability analysis of a propagating wave in metamaterials based on the propagation model given in Eq. (\ref{model}) has been carried out in reference \cite{anse}, which ensures the stability of such waves for an appropriate choice of parameters. Fig. \ref{sol1} depicts the DS propagation without the action of any perturbation; the DS propagates without any temporal shift or change in shape. The inclusion of any perturbation defined in Eq. (\ref{per1}) may induce a shape change, a temporal shift and hence a frequency shift for propagating solitons. By properly choosing the different perturbations, they can be made to counteract one another and form a stable DS. The propagation of such a perturbation-balanced DS is depicted in Fig. \ref{sol2}. The coefficients of the different perturbations used in Fig. \ref{sol2} are $\sigma_1=0.8$, $\sigma_2=0.005$, $\xi=-0.0008$, $\beta_3=-2.0$ and $\beta_4=-0.05$. The perturbations balance each other for this choice of parameters, and stable dynamics of the DS in the metamaterial is observed.
\section{Conclusion}
In this theoretical investigation, we have studied the influence of higher-order effects such as TOD, FOD, QN, the SS effect and SOND on the dynamics of the DS in metamaterials. It is found that the SS effect induces a temporal shift, and this shift is toward the leading edge when the SS effect is negative, in contrast to the positive SS effect, where the shift occurs toward the trailing edge of the pulse. TOD suppresses the SS effect induced temporal shift when the sign of the TOD coefficient is opposite to the sign of the SS coefficient. On the other hand, FOD always enhances the SS effect induced temporal shift. A positive QN coefficient always increases the SS effect induced temporal shift whereas a negative QN coefficient decreases it. When the SS effect is positive, SOND enhances the temporal shift towards the trailing edge, contrary to the case of the negative SS effect, where it enhances the shift towards the leading edge of the pulse. We have also found that stable propagation of the DS in metamaterials can be achieved as a result of a balancing of the different perturbations for a suitable choice of parameters. This study is helpful for forming tunable dissipative solitons with potential applications in all-optical communication systems. We believe that our theoretical results will help to stimulate new experiments with metamaterials.
\section{Acknowledgement}
The work of A.K.S. is supported by the University Grants Commission (UGC), Government of India, through a D. S. Kothari Post Doctoral Fellowship in Sciences. M.L. is supported by DST-SERB through a Distinguished Fellowship (Grant No. SB/DF/04/2017).
\bibliographystyle{amsplain}
\label{sec:introduction}
Outliers are a fundamental problem in statistical data analysis.
Roughly speaking, an outlier is an observation point that differs from the data's ``global picture''~\cite{hawkins1980identification}.
A rule of thumb is that a typical dataset may contain between 1\% and 10\% of outliers~\cite{hampel2011robust}, or much more than that in specific applications such as web data, because of the inherent complex nature and highly uncertain pattern of users' web browsing~\cite{Gupta2016}.
This outliers problem was already considered in the early 50's~\cite{dixon1950analysis, grubbs1969procedures} and it motivated in the 70's the development of a new field called robust statistics~\cite{huber19721972, huber1981wiley}.
In this paper, we consider the problem of linear regression in the presence of outliers.
In this setting, classical estimators, such as the least-squares, are known to fail~\cite{huber19721972}.
In order to conduct regression analysis in the presence of outliers, roughly two approaches are well-known.
The first is based on detection and removal of the outliers to fit least-squares on ``clean'' data~\cite{weisberg2005applied}.
Popular methods rely on leave-one-out methods (sometimes called case-deletion), first described in~\cite{cook} with the use of residuals in linear regression.
The main issue about these methods is that they are theoretically well-designed for the situations where only one given observation is an outlier.
Repeating the process across all locations can lead to well-known masking and swamping effects~\cite{masking}.
An interesting recent method that does not rely on a leave-one-out technique is the so-called IPOD~\cite{ipod}, a penalized least squares method with the choice of tuning parameter relying on a BIC criterion.
A second approach is based on robust regression, that considers loss functions that are less sensitive to outliers~\cite{huber1981wiley}.
This relies on the $M$-estimation framework, that leads to good estimators of regression coefficients in the presence of outliers, thanks to the introduction of robust losses replacing the least-squares.
However, the computation of $M$-estimates is substantially more involved than that of the least-squares estimates, which to some extent counter-balances the apparent computational gain over previous methods.
Moreover, robust regression focuses only on the estimation of the regression coefficients, and does not allow directly to localize the outliers, see also for instance~\cite{robust} for a recent review.
Alternative approaches have been proposed to perform simultaneously outliers detection and robust regression.
Such methods involve median of squares~\cite{siegel1982robust}, S-estimation~\cite{rousseeuw1984robust} and more recently robust weighted least-squares~\cite{gervini2002class}, among many others, see also~\cite{Hadi} for a recent review on such methods.
The development of robust methods intersected with the development of sparse inference techniques recently.
Such inference techniques, in particular applied to high-dimensional linear regression, are of importance in statistics, and have been an area of major developments over the past two decades, with deep results in the field of compressed sensing, and more generally convex relaxation techniques~\cite{tibshirani1996regression, candes2006robust, candes2006stable, chen2001atomic,chandrasekaran2012convex}.
These led to powerful inference algorithms working under a sparsity assumption, thanks to fast and scalable convex optimization algorithms~\cite{bach2012optimization}.
The most popular method allowing to deal with sparsity and variable selection is the LASSO~\cite{lasso}, which is $\ell_1$-penalized least-squares, with improvements such as the Adaptive LASSO~\cite{alasso}, among a large set of other sparsity-inducing penalizations~\cite{buhlmann2011statistics,bach2012structured}.
Within the past few years, a large amount of theoretical results have been established to understand regularization methods for the sparse linear regression model, using so-called oracle inequalities for the prediction and estimation errors~\cite{BRT, CandesPlan,koltchinskii2008saint}, see also~\cite{buhlmann2011statistics,giraud} for nice surveys on this topic.
Another line of works focuses on variable selection, trying to recover the support of the regression coefficients with a high probability~\cite{Wainwright,CandesPlan,chretien}.
Other types of loss functions~\cite{LAD} or penalizations~\cite{SCAD,slope} have also been considered.
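Most of these $\ell_1$-based estimators are computed with proximal algorithms built on the soft-thresholding operator. As an illustration (toy data with a hand-picked $\lambda$ and step size, chosen here purely for demonstration), the sketch below implements soft-thresholding together with a few ISTA iterations for the LASSO objective $\frac12\|y-Xb\|_2^2+\lambda\|b\|_1$:

```python
import numpy as np

def soft_threshold(b, lam):
    """Proximal operator of lam * ||.||_1: coordinate-wise shrinkage toward zero."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

# A few ISTA iterations for the LASSO, 0.5*||y - X b||^2 + lam*||b||_1,
# on toy data (all sizes and tuning values are illustrative)
rng = np.random.default_rng(1)
X = rng.standard_normal((50, 10))
beta_true = np.zeros(10)
beta_true[:2] = [4.0, -3.0]               # sparse ground truth
y = X @ beta_true + 0.1 * rng.standard_normal(50)

lam = 1.0
eta = 1.0 / np.linalg.norm(X, 2) ** 2     # step size from the spectral norm
b = np.zeros(10)
for _ in range(500):
    b = soft_threshold(b + eta * X.T @ (y - X @ b), eta * lam)
```

On this toy problem the iterates recover the two active coordinates while shrinking the remaining ones toward zero.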
Very recently, the sorted-$\ell_1$ norm penalization has been
introduced~\cite{slope,slope1,slopeminimax} and very strong statistical properties have been shown.
In particular, when covariates are orthogonal, SLOPE allows to recover the support of the regression coefficients with a control on the False Discovery Rate~\cite{slope}.
For i.i.d covariates with a multivariate Gaussian distribution, oracle inequalities with optimal minimax rates have been shown, together with a control on a quantity which is very close to the FDR~\cite{slopeminimax}.
For more general covariate distributions, oracle inequalities with an optimal convergence rate are obtained in~\cite{Tsyb}.
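For concreteness, the sorted-$\ell_1$ norm used by SLOPE can be written in a couple of lines; the following is a direct transcription of its definition, evaluated on a toy input:

```python
import numpy as np

def sorted_l1_norm(b, lam):
    """J_lam(b) = sum_i lam_i |b|_(i), with lam_1 >= ... >= lam_p >= 0 and
    |b|_(1) >= |b|_(2) >= ... the decreasing rearrangement of |b|."""
    return float(np.sort(np.abs(b))[::-1] @ np.sort(np.asarray(lam))[::-1])

print(sorted_l1_norm([3.0, -1.0, 2.0], [3.0, 2.0, 1.0]))   # 3*3 + 2*2 + 1*1 = 14.0
```

When all the $\lambda_i$ are equal, this reduces to an ordinary (scaled) $\ell_1$ norm, which is why SLOPE can be viewed as a generalization of the LASSO penalty.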
However, many high-dimensional datasets, with hundreds or thousands of covariates, do suffer from the presence of outliers.
Robust regression and detection of outliers in a high-dimensional setting is therefore important.
Increased dimensionality and complexity of the data may amplify the chances of an observation being an outlier, and this can have a strong negative impact on the statistical analysis.
In such settings, many of the aforementioned outlier detection methods do not work well.
A new technique for outliers detection in a high-dimensional setting is proposed in~\cite{aggarwal2001outlier}, which tries to find the outliers by studying the behavior of projections from the data set.
A small set of other attempts to deal with this problem have been proposed in literature~\cite{loo,ro2015outlier,elasso,ipod,pwls}, and are described below with more details.
\section{Contributions of the paper}
Our focus is on possibly high dimensional linear regression where observations can be contaminated by gross errors.
This so-called mean-shifted outliers model can be described as follows:
\begin{equation}
y_i = x_i^\top \beta^\star + \mu_i^\star + \varepsilon_i
\label{model}
\end{equation}
for $i=1, \ldots, n$, where $n$ is the sample size.
A non-zero $\mu_i^\star$ means that observation $i$ is an outlier, and $\beta^\star \in \mathds{R}^p$, $x_i \in \mathds{R}^p$, $y_i \in \mathds{R}$ and $\varepsilon_i \in \mathds{R}$ respectively stand for the linear regression coefficients, vector of covariates, label and noise of sample $i$.
For the sake of simplicity we assume throughout the paper that the noise is i.i.d centered Gaussian with known variance $\sigma^2$.
\subsection{Related works}
\label{sub:related_works}
We already said much about the low-dimensional problem so we focus in this part on the high-dimensional one.
The leave-one-out technique has been extended in~\cite{loo} to high-dimensional and general regression cases, but the masking and swamping problems remain.
In other models, outliers detection in high-dimension also includes distance-based approaches~\cite{ro2015outlier} where the idea is to find the center of the data and then apply some thresholding rule.
The model~\eqref{model} considered here has been recently studied with LASSO penalizations~\cite{elasso} and
hard-thresholding~\cite{ipod}.
LASSO was used also in~\cite{pwls}, but here outliers are modeled in the variance of the noise.
In~\cite{elasso, ipod}, which are closer to our approach, the penalization is applied differently: in~\cite{elasso}, the procedure named Extended-LASSO uses two different $\ell_1$ penalties for $\beta$ and $\mu$, with tuning parameters fixed according to theoretical results, while the IPOD procedure from~\cite{ipod} applies the same penalization to both vectors, with a regularization parameter tuned with a modified BIC criterion.
In~\cite{elasso}, error bounds and a signed support recovery result are obtained for both the regression and intercepts coefficients.
However, these results require that the magnitude of the coefficients is very large, which is one of the issues that we want to overcome with this paper.
It is worth mentioning that model~\eqref{model} can be written in a concatenated form $y=Z\gamma^\star + \varepsilon$, with $Z$ being the concatenation of the covariates matrix $X$ (with lines given by the $x_i$'s) and the identity matrix $I_n$ in $\mathds{R}^n$, and $\gamma^\star$ being the concatenation of $\beta^\star$ and $\mu^\star$.
This leads to a regression problem with a very high dimension $n + p$ for the vector $\gamma^\star$.
Working with this formulation, and trying to estimate $\gamma^\star$ directly is actually a bad idea. This point is illustrated experimentally in~\cite{elasso}, where it is shown that applying two different LASSO penalizations on $\beta$ and $\mu$ leads to a procedure that outperforms the LASSO on the concatenated vector. The separate penalization is even more important in case of SLOPE, whose aim is FDR control for the support recovery of $\mu^\star$. Using SLOPE directly on $\gamma^\star$ would mix the entries of $\mu$ and $\beta$ together, which would make FDR control practically impossible due to the correlations between covariates in the $X$ matrix.
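As a quick sanity check of this reformulation (an illustrative sketch, not part of the paper's code), the concatenated parametrization gives exactly the same residuals as the original one:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 5
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
mu = rng.standard_normal(n)
y = rng.standard_normal(n)

Z = np.hstack([X, np.eye(n)])        # Z = [X  I_n], shape (n, n + p)
gamma = np.concatenate([beta, mu])   # gamma = [beta; mu]

# the residuals of the two parametrizations coincide
assert np.allclose(y - Z @ gamma, y - X @ beta - mu)
```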
\subsection{Main contributions}
Given a vector $\lambda = [\lambda_1 \cdots \lambda_m] \in \mathds{R}_+^m$ with non-negative and non-increasing entries, we define the sorted-$\ell_1$ norm of a vector $x \in \mathds{R}^m$ as
\begin{equation}
\label{sl1}
\forall x \in \mathds{R}^m, \; J_\lambda (x) = \sum_{j=1}^m \lambda_j \abs{x}_{(j)},
\end{equation}
where $\abs{x}_{(1)} \geq \abs{x}_{(2)} \geq \dots \geq \abs{x}_{(m)}$. In \cite{slope} and \cite{slope1}, the sorted-$\ell_1$ norm was used as a penalty in the Sorted L-One Penalized Estimator (SLOPE) of the coefficients in multiple regression.
Degenerate cases of SLOPE are $\ell_1$-penalization whenever $\lambda_j$ are all equal to a positive constant, and null-penalization if this constant is zero.
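In code, evaluating~\eqref{sl1} and checking the degenerate $\ell_1$ case takes a few lines (an illustrative sketch):

```python
import numpy as np

def sorted_l1_norm(x, lam):
    """J_lam(x) = sum_j lam_j * |x|_(j), where |x|_(1) >= ... >= |x|_(m)
    and lam has non-negative, non-increasing entries."""
    return np.sum(lam * np.sort(np.abs(x))[::-1])

x = np.array([0.5, -3.0, 1.5])
lam = np.array([2.0, 1.0, 0.5])
# |x| sorted decreasingly: [3.0, 1.5, 0.5] -> 2*3.0 + 1*1.5 + 0.5*0.5 = 7.75
assert sorted_l1_norm(x, lam) == 7.75

# degenerate case: constant weights recover a rescaled l1 norm
const = np.full(3, 1.3)
assert np.isclose(sorted_l1_norm(x, const), 1.3 * np.abs(x).sum())
```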
We apply two different SLOPE penalizations on $\beta$ and~$\mu$, by considering the following optimization problem:
\begin{equation}
\label{pen}
\min_{\beta \in \mathds{R}^p, \mu \in \mathds{R}^n} \bigg\{\Vert y - X \beta - \mu \Vert_2 ^2 + 2\rho_1 J_{\tilde{\lambda}}(\beta) + 2\rho_2 J_\lambda (\mu) \bigg\}
\end{equation}
where $\rho_1$ and $\rho_2$ are positive parameters, $X$ is the $n \times p$ covariates matrix with rows $x_1, \ldots, x_n$, $y = [y_1 \cdots y_n]^T$, $\mu = [\mu_1 \cdots \mu_n]^T$, $\norm{u}_2$ is the Euclidean norm of a vector $u$ and $\lambda = [\lambda_1 \cdots \lambda_p]$ and $\tilde \lambda = [\tilde \lambda_1 \cdots \tilde \lambda_n]$ are two vectors with non-increasing and non-negative entries.
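Problem~\eqref{pen} is convex and is typically solved by proximal gradient methods; the key ingredient is the proximal operator of $J_\lambda$, which can be computed after sorting by a stack-based pool-adjacent-violators pass. The sketch below follows the classical SLOPE prox algorithm and is not the authors' implementation; with constant weights it must reduce to soft-thresholding, which gives a convenient check.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """argmin_x 0.5*||x - v||_2^2 + sum_j lam_j |x|_(j), lam non-increasing >= 0."""
    sign = np.sign(v)
    a = np.abs(v)
    order = np.argsort(a)[::-1]      # sort |v| in decreasing order
    z = a[order] - lam               # tentative solution, must be made non-increasing

    def block_avg(b):                # b = [start, end, sum of z over the block]
        return b[2] / (b[1] - b[0] + 1)

    blocks = []
    for i, val in enumerate(z):      # pool adjacent violators
        blocks.append([i, i, val])
        while len(blocks) > 1 and block_avg(blocks[-1]) >= block_avg(blocks[-2]):
            _, e, t = blocks.pop()   # merge the two last blocks by averaging
            blocks[-1][1], blocks[-1][2] = e, blocks[-1][2] + t

    x_sorted = np.empty_like(z)
    for b in blocks:
        x_sorted[b[0]:b[1] + 1] = max(block_avg(b), 0.0)  # clip at zero
    x = np.empty_like(x_sorted)
    x[order] = x_sorted              # undo the sorting, restore signs
    return sign * x

# with constant weights the prox reduces to soft-thresholding
v = np.array([3.0, -1.0, 0.2, -2.5])
soft = np.sign(v) * np.maximum(np.abs(v) - 0.5, 0.0)
assert np.allclose(prox_sorted_l1(v, np.full(4, 0.5)), soft)
```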
In this article we provide sequences $\lambda$ and $\tilde \lambda$ that yield better error bounds for the estimation of $\mu^\star$ and $\beta^\star$ than the previously
known ones~\cite{elasso}; see Section~\ref{SEC:BOUNDS} below.
Moreover, in Section~\ref{sec:fdr} we provide specific sequences which, under some asymptotic regime, lead to a control of the FDR for the support selection of~$\mu^\star$, and such that the power of the procedure (\ref{pen}) converges to one.
Procedure~\eqref{pen} is therefore, to the best of our knowledge, the first proposed in literature to \emph{robustly estimate $\beta^\star$, estimate and detect outliers at the same time, with a control on the FDR for the multi-test problem of support selection of $\mu^\star$, and power consistency.}
We compare in Section~\ref{sec:simu} our procedure to the recent alternatives for this problem, that is the IPOD procedure~\cite{ipod} and the Extended-Lasso~\cite{elasso}.
The numerical experiments given in Section~\ref{sec:simu} confirm the theoretical findings from Sections~\ref{SEC:BOUNDS} and~\ref{sec:fdr}.
As shown in our numerical experiments, the other procedures fail to guarantee FDR control or exhibit a lack of power when outliers are difficult to detect, namely when their magnitude is not far enough from the noise-level.
It is particularly noticeable that our procedure overcomes this issue.
The theoretical results proposed in this paper are based on two popular assumptions in compressed sensing and other sparsity problems, similar to the ones from~\cite{elasso}: first, a Restricted Eigenvalues (RE) condition~\cite{BRT} on $X$; then, a mutual incoherence assumption~\cite{DonohoHuo} between $X$ and $I_n$, which is natural since it excludes settings where the column spaces of $X$ and $I_n$ are impossible to distinguish.
Proofs of results stated in Sections~\ref{SEC:BOUNDS} and~\ref{sec:fdr} are given in Section~\ref{Appen:p1} and~\ref{Appen:p2}, while preliminary results are given in Sections~\ref{appen:prel1} and~\ref{appen:prel2}.
Section~\ref{appensim} contains supplementary numerical results.
\section{Upper bounds for the estimation of $\beta^\star$ and $\mu^\star$}
\label{SEC:BOUNDS}
Throughout the paper, $n$ is the sample size whereas $p$ is the number of covariables, so that $X\in\mathds{R}^{n\times p}$.
For any vector $u$, $\abs{u}_0$, $\norm{u}_1$ and $\norm{u}_2$ denote respectively the number of non-zero
coordinates of $u$ (also called sparsity), the $\ell_1$-norm and the Euclidean norm.
We denote respectively by $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ the smallest and largest eigenvalue of a symmetric matrix $A$.
We work under the following assumption.
\begin{ass}
\label{ass:sparsity_and_normalized_features}
We assume the following sparsity assumption\textup:
\begin{equation}
\abs{\beta^\star}_0 \leq k \quad \text{ and } \quad \abs{\mu^\star}_0 \leq s
\end{equation}
for some positive integers $k$ and $s$,
and we assume that the columns of $X$ are normalized, namely $\norm{Xe_i}_2 = 1$ for $i=1, \ldots, p$,
where $e_i$ stands for the $i$-th element of the canonical basis.
\end{ass}
For the results of this Section, we consider procedure~\eqref{pen} with the following choice of $\lambda$:
\begin{equation}
\label{eq:slope_weights}
\lambda_i = \sigma \sqrt{\log \Big(\frac{2n}{i}\Big)},
\end{equation}
for $i=1,\dots,n$, and we consider three possibilities for $\tilde{\lambda}$, corresponding to no
penalization, $\ell_1$ penalization and SLOPE penalization on $\beta$.
Table~\ref{tab:rates} below gives a quick view of the convergence rates of the squared $\ell_2$ estimation errors of $\beta^\star$ and $\mu^\star$ obtained in Theorems~\ref{thm:upper_bound_nobeta},~\ref{thm:upper_bound_beta_l1} and~\ref{thm:upper_bound_beta_sl1}.
We also give the convergence rate obtained in~\cite{elasso} for $\ell_1$ penalization applied to $\beta$ and $\mu$.
In particular, we see that using two SLOPE penalizations leads to a better convergence rate than the use of $\ell_1$ penalizations.
\begin{table}[htbp]
\centering
\caption{Convergence rates, up to constants, associated to several penalization techniques.
NO means no-penalization, L1 stands for $\ell_1$ penalization, while SL1 stands for SLOPE.
We observe that SL1 + SL1 leads to a better convergence rate than L1 + L1.}
\label{tab:rates}
\begin{tabular}{@{}llll@{}}
\hline
\multicolumn{1}{l}{\begin{tabular}[l]{@{}l@{}}Penalization \\
($\beta$ / $\mu$)\end{tabular}} & \multicolumn{1}{l}{Convergence rates} & Reference \\
\hline
\addlinespace[0.5em]
NO/SL1 & $p\vee s \log(n/s)$ & Theorem~\ref{thm:upper_bound_nobeta} \\
\addlinespace[0.5em]
L1/L1 & $k\log p \vee s\log n$ & \cite{elasso} \\
\addlinespace[0.5em]
L1/SL1 & $k\log p \vee s\log(n/s)$ & Theorem~\ref{thm:upper_bound_beta_l1} \\
\addlinespace[0.5em]
SL1/SL1 & $k\log (p/k) \vee s\log (n/s)$ & Theorem~\ref{thm:upper_bound_beta_sl1} \\
\addlinespace[0.5em]
\hline
\end{tabular}
\end{table}
Condition~\ref{Hyp} below is a Restricted Eigenvalue (RE) type of condition which is adapted to our problem.
Such an assumption is known to be mandatory in order to derive fast rates of convergence for penalizations based on the convex-relaxation
principle~\cite{zhang2014lower}.
\begin{condition}
\label{Hyp}
Consider two vectors $\lambda = (\lambda_i)_{i=1,\dots,n}$ and $\tilde \lambda = (\tilde{\lambda}_i)_{i=1,\dots,p}$ with non-increasing and positive entries, and consider positive integers $k, s$ and $c_0 > 0$.
We define the cone $\mathcal{C}(k,s,c_0)$ of all vectors $[\beta^\top, \mu^\top]^\top \in \mathds{R}^{p + n}$ satisfying
\begin{equation}
\label{Cewre}
\sum_{j=1}^p \frac{\tilde{\lambda}_j}{\tilde{\lambda}_p}\abs{\beta}_{(j)}
+ \sum_{j=1}^n \frac{\lambda_j}{\lambda_n}\abs{\mu}_{(j)} \leq (1+c_0) \big( \sqrt{k}\norm{\beta}_2 + \sqrt{s}\norm{\mu}_2 \big).
\end{equation}
We also define the cone $\mathcal{C}^p (s,c_0)$ of all vectors $[\beta^\top, \mu^\top]^\top \in \mathds{R}^{p + n}$ satisfying
\begin{equation}
\label{Cewre_nobeta}
\sum_{j=1}^n \frac{\lambda_j}{\lambda_n}\abs{\mu}_{(j)} \leq (1+c_0) \big( \sqrt{p}\norm{\beta}_2 + \sqrt{s}\norm{\mu}_2 \big).
\end{equation}
We assume that there are constants $\kappa_1, \kappa_2 > 0$ with $\kappa_1 > 2\kappa_2$ such that $X$ satisfies the following, either for all
$[\beta^\top, \mu^\top]^\top \in \mathcal{C}(k,s,c_0)$ or for all
$[\beta^\top, \mu^\top]^\top \in \mathcal{C}^p(s,c_0)$\textup:
\begin{align}
\label{eq:hyp_eq_1}
\norm{X\beta}_2^2 + \norm{\mu}_2^2 &\geq \kappa_1 \big(\norm{\beta}_2^2 + \norm{\mu}_2^2\big) \\
\label{eq:hyp_eq_2}
\vert\langle X\beta, \mu \rangle\vert &\leq \kappa_2\big(\norm{\beta}_2^2 + \norm{\mu}_2^2\big).
\end{align}
\end{condition}
Equation~\eqref{Cewre_nobeta} corresponds to the particular case where
we do not penalize the regression coefficient $\beta$, namely $\tilde \lambda_i=0$ for all $i$.
Note also that Condition~\ref{Hyp} entails
\begin{equation*}
\norm{X \beta + \mu}_2 \geq \sqrt{\kappa_1 - 2 \kappa_2} \sqrt{\norm{\beta}_2^2 + \norm{\mu}_2^2},
\end{equation*}
which actually corresponds to a RE condition on $[X^\top I_n]^\top$ and that Equation~\eqref{eq:hyp_eq_1} is satisfied if $X$ satisfies a RE condition with constant $\kappa < 1$.
Finally, note that Equation~\eqref{eq:hyp_eq_2}, called mutual incoherence in the literature of compressed sensing, requires in this context that, for all $\beta$ and $\mu$ from the respective cones, the potential regression predictor $X \beta$ is sufficiently far from being aligned with the potential outliers $\mu$.
An extreme case of violation of this assumption occurs when $X = I_n$, where we cannot separate the regression coefficients from the outliers.
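To build some intuition, the two quantities involved in~\eqref{eq:hyp_eq_1} and~\eqref{eq:hyp_eq_2} can be evaluated on a random Gaussian design with sparse $\beta$ and $\mu$ (an informal numerical illustration for one draw, not a verification of Condition~\ref{Hyp} over the whole cone):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k, s = 500, 30, 5, 10
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)           # normalized columns

beta = np.zeros(p); beta[:k] = rng.standard_normal(k)   # k-sparse
mu = np.zeros(n);  mu[:s] = rng.standard_normal(s)      # s-sparse

lhs_re = np.linalg.norm(X @ beta)**2 + np.linalg.norm(mu)**2  # left side of the RE bound
cross  = abs(np.dot(X @ beta, mu))                            # incoherence term
energy = np.linalg.norm(beta)**2 + np.linalg.norm(mu)**2

# for this draw the design is well conditioned and the cross term is small
assert lhs_re > 0.5 * energy
assert cross < 0.5 * energy
```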
Condition~\ref{Hyp} is rather mild and holds, e.g., for a wide range of random designs. Specifically, Theorem~\ref{thm:RE} below shows that it holds with large probability whenever $X$ has i.i.d $\mathcal N(0,\Sigma)$ rows with $\lambda_{min}(\Sigma)>0$, and the vectors $\beta$ and $\mu$ are sufficiently sparse.
\begin{thm}
\label{thm:RE}
Let $X' \in \mathds{R}^{n\times p}$ be a random matrix with i.i.d $\mathcal{N}(0, \Sigma)$ rows and $\lambda_{min}(\Sigma)>0$.
Let $X$ be the corresponding matrix with normalized columns.
Given positive integers $k, s$ and $c_0 > 0$, define $r = s $ $\vee$ $k (1+c_0)^2$.
If
\begin{equation*}
\sqrt{n} \geq C\sqrt{r} \quad \text{ and } \quad \sqrt{n}\geq C'\sqrt{r\log (p \vee n)}
\end{equation*}
with
\begin{equation*}
C \geq 30 \frac{\sqrt{\lambda_{\max}(\Sigma)}}{\min_j \Sigma_{jj}}\Big(\frac{256\times 5 \max_j \Sigma_{jj}}{\lambda_{\min}(\Sigma)} \vee 16\Big)
\quad \text{ and } \quad C' \geq 72\sqrt{10}\frac{(\max_j \Sigma_{jj})^{3/2}}{\min_j \Sigma_{jj}\sqrt{\lambda_{\text{min}}(\Sigma)}},
\end{equation*}
then there are $c, c'> 0$ such that for any $[\beta^\top, \mu^\top]^\top \in \mathcal{C}(k,s,c_0)$, we have
\begin{align*}
\norm{X\beta}_2^2 + \norm{\mu}_2^2 &\geq \min\Big\{\frac{\lambda_{\min}(\Sigma)}{128\times 5 (\max_j \Sigma_{jj})^2},
\frac{1}{8}\Big\} \big(\norm{\beta}_2^2 + \norm{\mu}_2^2\big) \\
2\vert\langle X\beta, \mu \rangle\vert &\leq \min\Big\{\frac{\lambda_{\min}(\Sigma)}{256\times 5 (\max_j \Sigma_{jj})^2}, \frac{1}{16}\Big\} \big(\norm{\beta}_2^2 + \norm{\mu}_2^2\big)
\end{align*}
with a probability greater than $1 - cn\exp(-c'n)$.
These inequalities also hold for any $[\beta^\top, \mu^\top]^\top \in \mathcal{C}^p(s,c_0)$ when $k$ is replaced by $p$ in the above conditions.
\end{thm}
The proof of Theorem~\ref{thm:RE} is given in Appendix~\ref{Appen:proof_RE}. It is based on recent bounds for Gaussian random matrices \cite{REgauss}.
The numerical constants in Theorem~\ref{thm:RE} are far from optimal and chosen for simplicity, so that $\kappa_1 > 2 \kappa_2$ as required in Condition~\ref{Hyp}.
A typical example for $\Sigma$ is the Toeplitz matrix $[a^{\abs{i-j}}]_{i,j}$ with $a \in [0,1)$, for which
$\lambda_{\min}(\Sigma)$ is equal to $1-a$ \cite{REgauss}.
The required lower bound on $n$ is non-restrictive, since $k$ and $s$ correspond to the sparsity of
$\beta^\star$ and $\mu^\star$, that are typically much smaller than $n$. Note also that $\mathcal{C}^p(s,c_0)$ will only be used in low dimension, and in this case $p$ is again much smaller than $n$.
For the whole section, let us define $\kappa = \sqrt{\kappa_1 - 2\kappa_2}$,
with $\kappa_1$ and $\kappa_2$ defined
in Condition~\ref{Hyp}.
The three theorems below and their proofs are very similar in nature, but differ in some details; they are therefore stated and proved separately. We emphasize that the proofs give slightly more general versions of the theorems, allowing the same result with $\hat{\mu}$ having any given support containing $\mathrm{Supp}(\mu^\star)$. This is of great theoretical interest and is a key point for the support detection of $\mu^\star$ investigated in Section~\ref{sec:fdr}. The proofs use a very recent bound on the inner product between a white Gaussian noise and any vector, involving the sorted $\ell_1$ norm \cite{Tsyb}.
Our first result deals with linear regression with outliers and no sparsity assumption on $\beta^\star$.
We consider procedure~\eqref{pen} with no penalization on $\beta$, namely
\begin{equation}
\label{eq:procedure_no_beta}
(\hat \beta, \hat \mu) \in \argmin_{\beta \in \mathds{R}^p, \mu \in \mathds{R}^n}
\Big\{ \Vert y - X \beta - \mu \Vert_2 ^2 + 2\rho J_\lambda (\mu) \Big\},
\end{equation}
with $J_\lambda$ given by~\eqref{sl1} and weights $\lambda$ given by~\eqref{eq:slope_weights}, and with $\rho\geq 2(4+\sqrt{2})$.
Theorem~\ref{thm:upper_bound_nobeta}, below, shows that a convergence rate for procedure~\eqref{eq:procedure_no_beta} is indeed $p\vee s\log(n/s)$, as reported in Table~\ref{tab:rates} above.
\begin{thm}
\label{thm:upper_bound_nobeta}
Suppose that Assumption~\ref{ass:sparsity_and_normalized_features} is met with $k = p$, and that $X$
satisfies Condition~\ref{Hyp} on the cone $\mathcal C(k_1,s_1,4)$
with $k_1 = p/\log 2 $ and $s_1 =s\log\left(2en/s\right)/\log 2$.
Then, the estimators $(\hat \beta, \hat \mu)$ given by~\eqref{eq:procedure_no_beta} satisfy
\begin{equation*}
\Vert\hat{\beta} - \beta^\star \Vert_2^2 + \norm{\hat{\mu}-\mu^\star}_2^2 \leq
\frac{4\rho^2}{\kappa^4}\sum_{j=1}^s \lambda_j^2+ \frac{5\sigma^2 }{\kappa^4} p
\leq \frac{\sigma^2}{\kappa^4}\left(4\rho^2 s \log\left(\frac{2en}{s}\right)+5p\right),
\end{equation*}
with a probability larger than $1-\left(s/2n\right)^{s} / 2 - e^{-p}$.
\end{thm}
The proof of Theorem~\ref{thm:upper_bound_nobeta} is given in Appendix~\ref{proof:upper_bound_nobeta}.
The second result involves a sparsity assumption on $\beta^\star$ and
considers $\ell_1$ penalization for $\beta$.
We consider this time
\begin{equation}
\label{eq:2}
(\hat \beta, \hat \mu) \in \argmin_{\beta, \mu} \Big\{ \Vert y - X\beta - \mu \Vert_2 ^2 +
2 \nu \norm{\beta}_1 + 2\rho J_\lambda (\mu) \Big\},
\end{equation}
where $\nu = 4\sigma\sqrt{\log p}$ is the regularization level for $\ell_1$ penalization, $\rho \geq 2(4+\sqrt{2})$ and $J_\lambda$ is given by~\eqref{sl1}. Theorem \ref{thm:upper_bound_beta_l1}, below, shows that a convergence rate for procedure~\eqref{eq:2} is indeed $k \log p\vee s\log(n/s)$, as reported in Table~\ref{tab:rates} above.
\begin{thm}
Suppose that Assumption~\ref{ass:sparsity_and_normalized_features} is met and that $X$
satisfies Condition~\ref{Hyp} on the cone $\mathcal C(k_1,s_1,4)$ with
$k_1 = 16k\log p/\log 2 $ and $s_1 = s\log (2en/s)/\log 2$. Suppose also that $\sqrt{\log p}\geq \rho\log 2 /4$.
Then, the estimators $(\hat \beta, \hat \mu)$ given by~\eqref{eq:2} satisfy
\begin{equation*}
\Vert\hat{\beta}-\beta^\ast\Vert_2^2 + \norm{\hat{\mu}-\mu^\ast}_2^2 \leq \frac{36}{\kappa^4}\sigma^2 k\log p + \frac{4\rho^2}{\kappa^4}\sum_{j=1}^s \lambda_j^2 \leq \frac{4 \sigma^2}{\kappa^4}\left(9k\log p+\rho^2 s \log \left(\frac{2en}{s}\right)\right),
\end{equation*}
with a probability larger than $1-\left(s/2n\right)^{s} / 2 - 1/p$.
\label{thm:upper_bound_beta_l1}
\end{thm}
The proof of Theorem~\ref{thm:upper_bound_beta_l1} is given in Appendix~\ref{proof:beta_sl1}.
The third result is obtained using SLOPE both on $\beta$ and $\mu$, namely
\begin{equation}
\label{eq:procedure_with_two_slopes}
(\hat \beta, \hat \mu) \in \argmin_{\beta, \mu} \Big\{ \Vert y - X \beta - \mu \Vert_2 ^2
+ 2\rho J_{\tilde{\lambda}}(\beta) + 2\rho J_\lambda (\mu) \Big\}
\end{equation}
where $\rho \geq 2(4+\sqrt{2})$, $J_\lambda$ is given by~\eqref{sl1}, and where
\begin{equation*}
\tilde{\lambda}_j = \sigma\sqrt{\log \Big(\frac{2p}{j} \Big)}
\end{equation*}
for $j=1,\dots, p$.
Theorem \ref{thm:upper_bound_beta_sl1}, below, shows that the rate of convergence of estimators provided by (\ref{eq:procedure_with_two_slopes}) is indeed $k\log(p/k)\vee s\log(n/s)$, as presented in Table~\ref{tab:rates}.
\begin{thm}
Suppose that Assumption~\ref{ass:sparsity_and_normalized_features} is met and that $X$
satisfies Condition~\ref{Hyp} on the cone $\mathcal C(k_1,s_1,4)$ with
$k_1 = k \log (2ep / k) / \log 2$ and $s_1 =s \log(2en / s) / \log 2$.
Then, the estimators $(\hat \beta, \hat \mu)$ given by~\eqref{eq:procedure_with_two_slopes} satisfy
\begin{eqnarray}
\Vert\hat{\beta}-\beta^\ast\Vert_2^2 + \norm{\hat{\mu}-\mu^\ast}_2^2 &\leq&\frac{C'}{\kappa^4}
\Big( \sum_{j=1}^k \tilde{\lambda}_j^2 + \sum_{j=1}^s \lambda_j^2 \Big) \nonumber \\
&\leq& \frac{C'\sigma^2}{\kappa^4} \left(k \log\left(\frac{2ep}{k}\right)+s\log \left(\frac{2en}{s}\right)\right),
\end{eqnarray}
with a probability greater than $1 - (s / 2n)^{s} / 2 - (k/2p)^{k} / 2$,
where $C' = 4\rho^2 \vee (3+C)^2 / 2$.
\label{thm:upper_bound_beta_sl1}
\end{thm}
The proof of Theorem~\ref{thm:upper_bound_beta_sl1} is given in Appendix~\ref{proof:beta_sl1}.
Note that according to Theorem~\ref{thm:RE}, the assumptions of Theorem~\ref{thm:upper_bound_beta_sl1} are satisfied with probability converging to one when the rows of $X$ are i.i.d from a multivariate Gaussian distribution with a positive definite covariance matrix, and when the signal is sparse, in the sense that $(k \vee s) \log(n \vee p)=o(n)$.
\section{Asymptotic FDR control and power for the selection of the support of $\mu^\star$}
\label{sec:fdr}
We consider the multi-test problem with null-hypotheses
\begin{equation*}
H_i \;: \; \mu^\ast_i = 0
\end{equation*}
for $i=1, \ldots, n$, and we consider the multi-test that rejects $H_i$ whenever
$\hat{\mu}_i \neq 0$, where $\hat \mu$ (and $\hat \beta$) are given either
by~\eqref{eq:procedure_no_beta}, \eqref{eq:2} or~\eqref{eq:procedure_with_two_slopes}.
When $H_i$ is rejected, or ``discovered'', we consider that sample $i$ is an outlier.
Note however that in this case, the value of $\hat \mu_i$ gives extra information on how
outlying sample $i$ is.
We use the FDR as a standard Type~I error for this multi-test problem~\cite{benjamini1995controlling}.
The FDR is the expectation of the proportion of false discoveries among all discoveries.
Letting $V$ (resp. $R$) be the number of false rejections (resp. the number of rejections), the FDR is defined as
\begin{equation}
\label{eq:fdr}
\mathrm{FDR}(\hat{\mu}) = \mathds{E} \left[\frac{V}{R\vee 1}\right] =
\mathds{E}\left[\frac{\#\{ i: \mu_i^\star = 0, \hat{\mu}_i\neq 0\}}{\#\{i: \hat{\mu}_i\neq 0\}}\right].
\end{equation}
We use the Power to measure the Type~II error for this multi-test problem. The Power is the expectation of the proportion of true discoveries. It is defined as
\begin{equation}
\label{eq:power}
\mathrm{\Pi}(\hat{\mu}) = \mathds{E}\left[\frac{\#\{ i: \mu_i^\star \neq 0, \hat{\mu}_i\neq 0\}}{\#\{i: \mu_i^\star \neq 0\}}\right],
\end{equation}
the Type~II error is then given by $1-\Pi (\hat{\mu})$.
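For a single fit, the false discovery proportion and the true discovery proportion are immediate to compute; averaging them over replications approximates the expectations in~\eqref{eq:fdr} and~\eqref{eq:power} (illustrative sketch; \texttt{fdp\_and\_power} is a hypothetical helper name):

```python
import numpy as np

def fdp_and_power(mu_star, mu_hat):
    """False discovery proportion and true discovery proportion of one fit."""
    discovered = mu_hat != 0
    true_out = mu_star != 0
    R = discovered.sum()                      # number of rejections
    V = (discovered & ~true_out).sum()        # false discoveries
    TD = (discovered & true_out).sum()        # true discoveries
    return V / max(R, 1), TD / max(true_out.sum(), 1)

mu_star = np.array([0., 0., 5., 5., 5., 0.])
mu_hat  = np.array([0., 1., 4., 6., 0., 0.])  # one false discovery, one missed outlier
fdp, power = fdp_and_power(mu_star, mu_hat)
assert np.isclose(fdp, 1 / 3)     # V = 1, R = 3
assert np.isclose(power, 2 / 3)   # 2 of the 3 outliers detected
```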
For the linear regression model without outliers, a multi-test for the support selection of $\beta^\star$ with controlled FDR based on SLOPE is given in~\cite{slope} and \cite{slope1}.
Specifically, it is shown that SLOPE with weights
\begin{equation}
\label{eq:slope_bh_weights}
\lambda_i ^{\mathrm{BH}}= \sigma\Phi^{-1}\Big( 1 - \frac{iq}{2n} \Big)
\end{equation}
for $i=1,\dots, n$, where $\Phi$ is the cumulative distribution function of $\mathcal N(0, 1)$ and $q \in (0, 1)$, controls the FDR at level $q$ in the multiple regression problem with an orthogonal design matrix $X^T X=I$. It is also observed that when the columns of $X$ are not orthogonal but independent, the weights have to be substantially increased to guarantee FDR control. This effect results from the random correlations between columns of $X$ and the shrinkage of true nonzero coefficients, and, in the context of LASSO, has been thoroughly discussed in \cite{Su_2017}.
In this paper we substantially extend current results on FDR controlling properties of SLOPE.
Specifically, Theorem~\ref{THM:FDR} below gives asymptotic controls of $\mathrm{FDR}(\hat{\mu})$ and $\mathrm{\Pi}(\hat{\mu})$ for the procedures~\eqref{eq:procedure_no_beta},~\eqref{eq:2} and~\eqref{eq:procedure_with_two_slopes}, namely different penalizations on $\beta$ and SLOPE applied on $\mu$, with slightly increased weights
\begin{equation}
\label{eq:increased_lambda}
\lambda = (1+\epsilon)\lambda^{\mathrm{BH}},
\end{equation}
where $\epsilon > 0$, see also~\cite{slopeminimax}.
This choice of $\lambda$ also yields optimal convergence rates, however considering it in Section~\ref{SEC:BOUNDS} would lead to some extra technical difficulties.
Under appropriate assumptions on $p, n$, the signal sparsity and the magnitude of outliers, Theorem~\ref{THM:FDR} not only gives FDR control, but also proves that the Power is actually going to $1$.
Note that all asymptotics considered here are with respect to the sample size $n$, namely the statement $d \rightarrow +\infty$ means that $d = d_n \rightarrow +\infty$ with $n \rightarrow +\infty$.
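The weights~\eqref{eq:slope_bh_weights} and their inflated version~\eqref{eq:increased_lambda} only require the standard Gaussian quantile function, which is available in the Python standard library (illustrative sketch; \texttt{lambda\_bh} is a hypothetical helper name):

```python
from statistics import NormalDist
import math

def lambda_bh(n, q, sigma=1.0, eps=0.0):
    """lambda_i^BH = sigma * Phi^{-1}(1 - i*q/(2n)), optionally inflated by (1 + eps)."""
    phi_inv = NormalDist().inv_cdf
    return [(1 + eps) * sigma * phi_inv(1 - i * q / (2 * n)) for i in range(1, n + 1)]

lam = lambda_bh(n=100, q=0.05, eps=0.1)
assert all(a >= b for a, b in zip(lam, lam[1:]))   # non-increasing in i
assert math.isclose(lam[0] / 1.1, NormalDist().inv_cdf(1 - 0.05 / 200))
```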
\begin{thm}
\label{THM:FDR}
Suppose that there is a constant $M$ such that the entries of $X$ satisfy $\abs{x_{i,j}}\sqrt{n}\leq M$
for all $i \in \{1,\dots, n\}$ and $j \in \{1,\dots, p\}$, and suppose that
\begin{equation*}
|\mu_i^\star| \geq (1+\rho(1+2\epsilon))2\sigma\sqrt{\log n}
\end{equation*}
for any $i=1, \ldots, n$ such that $\mu_i^\star \neq 0$.
Suppose also that $s \rightarrow +\infty$.
Then, consider $(\hat \beta, \hat \mu)$ given either by~\eqref{eq:procedure_no_beta},~\eqref{eq:2} or~\eqref{eq:procedure_with_two_slopes}, with $\lambda$ given by~\eqref{eq:increased_lambda}.
For Procedure~\eqref{eq:procedure_no_beta}, assume the same as in~Theorem~\ref{thm:upper_bound_nobeta}, and that
\begin{equation*}
\frac {p (s\log (n/s) \vee p)}{n} \rightarrow 0.
\end{equation*}
For Procedure~\eqref{eq:2}, assume the same as in~Theorem~\ref{thm:upper_bound_beta_l1}, and that
\begin{equation*}
\frac{(s\log (n/s) \vee k\log p )^2}{n} \rightarrow 0.
\end{equation*}
For Procedure~\eqref{eq:procedure_with_two_slopes}, assume the same as in~Theorem~\ref{thm:upper_bound_beta_sl1}, and that
\begin{equation*}
\frac{\left(s\log (n/s) \vee k\log (p/k)\right)^2}{n} \rightarrow 0.
\end{equation*}
Then, the following properties hold:
\begin{equation}
\Pi (\hat{\mu}) \rightarrow 1, \qquad \limsup \mathrm{FDR}(\hat{\mu}) \leq q.
\end{equation}
\end{thm}
The proof of Theorem~\ref{THM:FDR} is given in Appendix~\ref{Appen:p2}.
It relies on a careful look at the KKT conditions, also known as the dual-certificate method~\cite{Wainwright} or resolvent solution~\cite{slopeminimax}.
The assumptions of Theorem~\ref{THM:FDR} are natural.
The boundedness assumption on the entries of $X$ is typically satisfied with a large probability when $X$ is Gaussian for instance.
When $n \rightarrow +\infty$, it is also natural to assume that $s \rightarrow +\infty$ (let us recall that $s$ stands for the sparsity of the sample outliers $\mu^\star \in \mathds{R}^n$).
The asymptotic assumptions roughly ask for the rates in Table~\ref{tab:rates} to converge to zero.
Finally, the assumption on the magnitude of the non-zero entries of $\mu^\ast$ is somewhat unavoidable, since it is what
allows outliers to be distinguished from the Gaussian noise.
We emphasize that good numerical performances are actually obtained with lower magnitudes, as illustrated in Section~\ref{subsec:simu}.
\section{Numerical experiments}
\label{sec:simu}
In this section, we illustrate the performance of procedure~\eqref{eq:procedure_no_beta} and procedure~\eqref{eq:procedure_with_two_slopes} on both simulated and real-world datasets, and compare them to several state-of-the-art baselines described below. Experiments are done using the open-source \texttt{tick} library, available at \url{https://x-datainitiative.github.io/tick/}; notebooks allowing our experiments to be reproduced are available from the authors on demand.
\subsection{Considered procedures}
\label{par:baselines_procedures}
We consider the following baselines, featuring the best methods available in the literature for the joint problem of outlier detection and estimation of the regression coefficients, together with the methods introduced in this paper.
\paragraph{E-SLOPE}
\label{par:experiments_e_slope}
This is procedure~\eqref{eq:procedure_with_two_slopes}.
The weights used in both SLOPE penalizations are given by~\eqref{eq:slope_bh_weights}, with $q = 5\%$ (target
FDR), except in the low-dimensional setting where we do not apply any penalization on $\beta$. Similar results for $q=10\%$ and $q=20\%$ are provided in Appendix~\ref{appensim}.
\paragraph{E-LASSO}
\label{par:experiments_lasso}
This is Extended LASSO from~\cite{elasso}, that uses two dedicated $\ell_1$-penalizations for $\beta$ and $\mu$ with respective tuning parameters $\lambda_{\beta} = 2\sigma \sqrt{\log p}$ and $\lambda_{\mu} = 2\sigma\sqrt{\log n}$.
\paragraph{IPOD}
\label{par:experiments_IPOD}
This is (soft-)IPOD from~\cite{ipod}.
It relies on a nice trick, based on a $QR$ decomposition of $X$.
Indeed, write $X = QR$ and let $P$ be formed by $n-p$ column vectors completing the column
vectors of $Q$ into an orthonormal basis, and introduce $\tilde{y} = P^\top y \in\mathds{R}^{n-p}$.
Model~\eqref{model} can be rewritten as $\tilde{y} = P^\top \mu^\ast + \varepsilon$, a new high-dimensional
linear model, where only $\mu^\ast$ is to be estimated.
The IPOD then considers the Lasso procedure applied to this linear model, and a BIC criterion is used to choose the tuning parameter of $\ell_1$-penalization.
Note that this procedure, which involves a QR decomposition of $X$, makes sense only for $p$ significantly smaller than $n$, so that we do not report the performances of IPOD on simulations with a large $p$.
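The QR trick can be reproduced in a few lines (a sketch assuming $p < n$ and $X$ of full column rank): the columns of $P$ span the orthogonal complement of the column space of $X$, so $P^\top X = 0$ and $\beta$ vanishes from the transformed model.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 60, 8
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
mu = np.zeros(n); mu[:4] = 10.0               # 4 gross outliers
y = X @ beta + mu + 0.1 * rng.standard_normal(n)

Q_full, _ = np.linalg.qr(X, mode='complete')  # Q_full is an n x n orthogonal matrix
P = Q_full[:, p:]                             # last n - p columns complete Q

assert np.allclose(P.T @ X, 0.0, atol=1e-10)  # P'X = 0: beta disappears
y_tilde = P.T @ y                             # y_tilde = P'mu + P'eps, dimension n - p
assert y_tilde.shape == (n - p,)
```

One can then run a Lasso on the high-dimensional model $\tilde{y} = P^\top \mu + \tilde{\varepsilon}$ to estimate $\mu^\ast$ alone.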
\paragraph{LassoCV}
\label{par:experiments_LassoCV}
Same as IPOD but with tuning parameter for the penalization of individual intercepts chosen by cross-validation.
As explained above, cross-validation is doomed to fail in the considered model, but results are shown for the sake of completeness.
\paragraph{SLOPE}
\label{par:slope}
This is SLOPE applied to the concatenated problem, namely $y = Z \gamma^\star + \varepsilon$, where $Z$ is the concatenation of $X$ and $I_n$ and $\gamma^\star$ is the concatenation of $\beta^\star$ and $\mu^\star$.
We use a single SLOPE penalization on $\gamma$, with weights equal to~\eqref{eq:slope_bh_weights}.
We report the performances of this procedure only in high-dimensional experiments, since it always penalizes $\beta$.
This is considered mostly to illustrate the fact that working on the concatenated problem is indeed a bad idea, and that two distinct penalizations must be used on $\beta$ and $\mu$.
\medskip
Note that the difference between IPOD and E-LASSO is that, as explained in~\cite{elasso}, the weights used for
E-LASSO to penalize $\mu$ (and $\beta$ in high-dimension) are fixed, while the weights in IPOD are data-dependent.
Another difference is that IPOD does not extend well to a high-dimensional setting, since its natural extension
(considered in \cite{ipod}) is a thresholding rule on the concatenated problem, which, as explained before and as illustrated in our numerical experiments, performs poorly.
Another problem is that there is no clear extension of the modified BIC criterion proposed in~\cite{ipod} to high-dimensional problems.
The tuning of the SLOPE or $\ell_1$ penalizations in the procedures described above requires knowledge of the noise level.
We overcome this simply by plugging into~\eqref{eq:slope_bh_weights}, or wherever it is needed, a robust estimate of the variance: we first fit a Huber regression model, and then apply a robust estimate of the variance to its residuals.
All procedures considered in our experiments use this same variance estimate.
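A minimal, dependency-free version of this plug-in can be sketched as follows (the actual experiments use the \texttt{tick} library; \texttt{huber\_irls} and the constants below are our own illustrative choices): fit Huber regression by iteratively reweighted least squares, then estimate $\sigma$ by the normalized median absolute deviation of the residuals.

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber regression via iteratively reweighted least squares."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        a = np.abs(r)
        w = np.where(a <= delta, 1.0, delta / np.maximum(a, 1e-12))
        Xw = X * w[:, None]                      # weighted design
        beta = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta

def robust_sigma(residuals):
    """MAD-based scale, consistent for the Gaussian standard deviation."""
    med = np.median(residuals)
    return 1.4826 * np.median(np.abs(residuals - med))

rng = np.random.default_rng(4)
n, p = 1000, 5
X = rng.standard_normal((n, p))
beta_star = rng.standard_normal(p)
y = X @ beta_star + rng.standard_normal(n)       # true sigma = 1
y[:20] += 15.0                                   # a few gross outliers

beta_hat = huber_irls(X, y)
sigma_hat = robust_sigma(y - X @ beta_hat)
assert abs(sigma_hat - 1.0) < 0.25               # close to the true noise level
```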
\begin{rem}
The noise level can be estimated directly by the Huber regression since in our simulations $p<n$. When $p$ is comparable or larger than $n$ and the signal
\textup(both $\beta^{\star}$ and $\mu^{\star}$\textup) is sufficiently sparse one can jointly estimate the noise level and other model parameters in the spirit of {\it scaled} LASSO \cite{sun2012scaled}. The corresponding iterative procedure for SLOPE was proposed and investigated
in \cite{slope} in the context of high-dimensional regression with independent regressors. However, we feel that the problem of selection of the optimal estimator of $\sigma$ in ultra-high dimensional settings still requires a separate investigation and we postpone it for a further research.
\end{rem}
\subsection{Simulation settings}
\label{subsec:simu}
The matrix $X$ is simulated as a matrix with i.i.d rows distributed as $\mathcal N(0, \Sigma)$, with Toeplitz
covariance $\Sigma_{i, j} = \rho^{|i-j|}$ for $1 \leq i, j \leq p$, with moderate correlation $\rho = 0.4$.
Some results with higher correlation $\rho = 0.8$ are given in Appendix~\ref{appensim}.
The columns of $X$ are normalized to $1$.
We simulate $n=5000$ observations according to model~\eqref{model} with $\sigma=1$
and $\beta^\ast_i = \sqrt{2\log p}$. Two levels of magnitude are considered for $\mu^\star$: \emph{low-magnitude}, where $\mu^\star_i = \sqrt{2\log n}$ and \emph{large-magnitude}, where $\mu^\star_i = 5\sqrt{2\log n}$.
In all reported results based on simulated datasets, the sparsity of $\mu^\star$ varies from $1\%$
to $50\%$, and we display the averages of FDR, MSE and power over 100 replications.
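The simulation design above can be reproduced with a short script (a sketch; the function and variable names are ours):

```python
import numpy as np

def simulate(n=5000, p=20, rho=0.4, frac_out=0.1, factor=1.0, seed=0):
    """Simulate y = X beta + mu + noise as in the settings above."""
    rng = np.random.default_rng(seed)
    # Toeplitz covariance Sigma_{i,j} = rho^{|i-j|}
    Sigma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    X /= np.linalg.norm(X, axis=0)                  # columns normalized to 1
    beta = np.full(p, np.sqrt(2 * np.log(p)))
    mu = np.zeros(n)
    outliers = rng.choice(n, size=int(frac_out * n), replace=False)
    mu[outliers] = factor * np.sqrt(2 * np.log(n))  # factor=1 (low) or 5 (large)
    y = X @ beta + mu + rng.normal(size=n)          # sigma = 1
    return X, y, beta, mu
```

Setting~1 corresponds to `p=20` and Setting~2 to `p=1000` with a sparse `beta` (the sparse-$\beta$ variant is not shown in this sketch).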
\paragraph{Setting~1 (low-dimension)}
This is the setting described above with $p=20$. Here $\beta^{\star}_1=\ldots=\beta^{\star}_{20}=\sqrt{2\log 20}$.
\paragraph{Setting~2 (high-dimension)}
This is the setting described above with $p=1000$ and a sparse $\beta^\star$ with sparsity $k=50$, with non-zero entries chosen uniformly at random.
\subsection{Metrics}
\label{sub:metrics}
In our experiments, we report the ``MSE coefficients'', namely $\| \hat \beta - \beta^\star \|_2^2$ and the ``MSE intercepts'', namely $\| \hat \mu - \mu^\star \|_2^2$.
We report also the FDR~\eqref{eq:fdr} and the Power~\eqref{eq:power} to assess the procedures for the problem of outliers detection, where the expectations are approximated by averages over 100 simulations.
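Given an estimate $\hat\mu$, the false discovery proportion and empirical power for outlier detection are computed as follows (a minimal sketch; a discovery is an index $i$ with $\hat\mu_i\neq 0$):

```python
import numpy as np

def fdp_and_power(mu_hat, mu_star):
    """FDP and power of the discoveries {i : mu_hat_i != 0}."""
    discovered = mu_hat != 0
    true_out = mu_star != 0
    fdp = (discovered & ~true_out).sum() / max(discovered.sum(), 1)
    power = (discovered & true_out).sum() / max(true_out.sum(), 1)
    return fdp, power
```

Averaging these quantities over the 100 replications approximates the FDR and the expected power.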
\subsection{Results and conclusions on simulated datasets}
\label{sub:results_and_conclusions_on_simulated_datasets_}
We now comment on the results displayed in Figures~\ref{fig:lowdim1},~\ref{fig:lowdim2} and~\ref{fig:highdim1} below.
For Simulation Setting~2 we only display results for the low-magnitude case, since it is the most challenging one.
\begin{itemize}
\item In Figures~\ref{fig:lowdim1} and~\ref{fig:lowdim2}, LassoCV is very unstable, which is expected since cross-validation cannot work in the considered setting: data splits are highly non-stationary, because of the significant amount of outliers.
\item In the low-dimensional setting, our procedure E-SLOPE allows for almost perfect FDR control and its MSE is the smallest among all considered methods. Note that in this setting the MSE is computed after debiasing the estimators, by performing ordinary least squares on the selected support.
\item In the sparse (in $\beta$) high-dimensional setting with correlated regressors, E-SLOPE keeps the FDR below the nominal level even when outliers constitute 50\% of the total data points. It also maintains a small MSE and high power.
The only procedure that improves on E-SLOPE in terms of MSE for $\mu$ is SLOPE in Figure~\ref{fig:highdim1}, at the cost of worse FDR control.
\item E-SLOPE provides a massive gain of power compared to previous state-of-the-art procedures (power is increased by more than $30\%$) in settings where outliers are difficult to detect.
\end{itemize}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{V2lowdim_midcor_fdr0,05_strong.pdf}
\caption{Results for Simulation Setting~1 with high-magnitude outliers. First row gives the FDR (left) and power (right) of each considered procedure for outliers discoveries.
Second row gives the MSE for regressors (left) and intercepts (right).
E-SLOPE gives perfect power, is the only one to respect the required FDR, and provides the best MSEs.}
\label{fig:lowdim1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{V2lowdim_midcor_fdr0,05_weak.pdf}
\caption{Results for Simulation Setting~1 with low-magnitude outliers. First row gives the FDR (left) and power (right) of each considered procedure for outliers discoveries.
Second row gives the MSE for regressors (left) and intercepts (right).
Once again E-SLOPE nearly gives the best power, but is the only one to respect the required FDR, and provides the best MSEs.}
\label{fig:lowdim2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{V3highdim_midcor_fdr0,05_weak.pdf}
\caption{Results for Simulation Setting~2 with low-magnitude outliers. First row gives the
FDR (left) and power (right) of each considered procedure for outliers discoveries.
Second row gives the MSE for regressors (left) and intercepts (right).
Once again E-SLOPE nearly gives the best power, but is the only one to respect the required FDR.
It gives the best MSE for outliers estimation, and is competitive for regressors estimation.
All procedures have a poor MSE when the number of outliers is large, since the simulation setting
considered in this experiment is hard: low-magnitude outliers and high-dimension.}
\label{fig:highdim1}
\end{figure}
\subsection{PGA/LPGA dataset}
This dataset contains Distance and Accuracy of shots, for PGA and LPGA players in 2008. This will allow us to visually compare the performance of IPOD, E-LASSO and E-SLOPE. Our data contain 197 points corresponding to PGA (men) players, to which we add~8 points corresponding to LPGA (women) players, injecting outliers.
We apply SLOPE and LASSO on $\mu$ with several levels of penalization.
This leads to the ``regularization paths'' given in the top plots of Figure~\ref{fig:golf}, which show the value of the 205 sample intercepts $\hat \mu$ as a function of the penalization level used in SLOPE and LASSO. Vertical lines indicate the choice of the parameter according to the corresponding method (E-SLOPE, E-LASSO, IPOD).
We observe that E-SLOPE correctly discovers the confirmed outliers (the women data), together with two men observations that can be considered outliers in view of the scatter plot. The IPOD procedure does quite well, with no false discovery, but misses some real outliers (women data) as well as the suspicious point detected by E-SLOPE. E-LASSO does not make any false discovery either, but clearly suffers from a lack of power, with only one discovery.
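Since the mean-shift part of the model has an identity design on $\mu$, one SLOPE step along the regularization path reduces to the proximal operator of the sorted $\ell_1$ norm applied to the residuals. This prox can be computed exactly by a pool-adjacent-violators pass, sketched below (when all weights are equal it reduces to soft thresholding):

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Prox of the sorted l1 norm at v:
    argmin_x 0.5 * ||x - v||^2 + sum_j lam_j * |x|_(j),
    where `lam` is a nonincreasing, nonnegative sequence."""
    sgn, u = np.sign(v), np.abs(v)
    order = np.argsort(-u)          # sort |v| in decreasing order
    z = u[order] - lam
    # project z onto the nonincreasing cone (pool adjacent violators)
    vals, sizes = [], []
    for zi in z:
        vals.append(zi); sizes.append(1)
        while len(vals) > 1 and vals[-1] >= vals[-2]:
            v2, s2 = vals.pop(), sizes.pop()
            vals[-1] = (sizes[-1] * vals[-1] + s2 * v2) / (sizes[-1] + s2)
            sizes[-1] += s2
    x = np.zeros_like(u)
    i = 0
    for val, s in zip(vals, sizes):
        x[order[i:i + s]] = max(val, 0.0)   # clip at zero
        i += s
    return sgn * x
```

Scaling the weight sequence up or down and recording the nonzero coordinates of the prox output traces out a regularization path like the ones plotted in Figure~\ref{fig:golf}.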
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{golfV2.pdf}
\caption{PGA/LPGA dataset: top plots show the regularization paths for both types of penalization, bottom-left plot is a scatter plot of the data, with colored points corresponding to the discoveries made by E-SLOPE, bottom-right plot show the original data and the true outliers.}
\label{fig:golf}
\end{figure}
\subsection{Retail Sales Data}
This dataset is from the U.S. Census Bureau, for the year 1992.
It contains the per capita retail sales of 845 US counties (in \$1000s), together with covariates: the per capita number of retail establishments, the per capita income (in \$1000s), the per capita federal expenditures (in \$1000s), and the number of males per 100 females.
No outliers are known, so we artificially create outliers by adding a small shift (magnitude $8$, random sign) to the retail sales of counties chosen uniformly at random. We consider various scenarios (from $1\%$ to $20\%$ of outliers) and compute the false discovery proportion and the power. Figure~\ref{fig:sales} below summarizes the results for the three procedures.
The results are in line with the fact that E-SLOPE is able to discover more outliers than its competitors. E-SLOPE has the highest power, and the FDP remains under the target level.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{salesV4.pdf}
\caption{\emph{Left}: False Discovery proportion, E-SLOPE remains under the target level; \emph{Right}: power, E-SLOPE performs better than the competitors.}
\label{fig:sales}
\end{figure}
\subsection{Colorectal Cancer Data}
We consider whole exome sequencing data for 47 primary colorectal cancer tumors, characterized by a global genomic instability affecting repetitive DNA sequences (also known as microsatellite instable tumors, see \cite{bio1}).
In what follows, we restrict ourselves to repetitive sequences whose base motif is the single nucleotide A and which lie in regulatory regions (following the coding regions) that influence gene expression (UTR3). The same analysis could be run with different base motifs and different regions (exonic, intronic).
It has been shown in recent publications (see \cite{bio2}) that the probability of mutation of a sequence depends on the length of the repeat.
So we fit, after a rescaled probit transformation, our mean-shift model with an intercept and the length of the repeat as covariates.
The aim of the analysis is to find two categories of sequences: survivors (microsatellites that mutated less than expected) and transformators (microsatellites that mutated more than expected), with the idea that those sequences must play a key role in the cancer development.
We fix the FDR level $\alpha=5\%$; results are shown in Figure~\ref{fig:data_res}: blue dots are the observed mutation frequencies of each gene among the 47 tumors, plotted as a function of the repeat length of the corresponding gene, and our discoveries are highlighted in red.
We made 37 discoveries, and it is of particular interest that our procedure selects both ``obvious'' outliers and more ``challenging'' observations that are debatable to the unaided eye. We also emphasize that the IPOD procedure and the LASSO procedure described in the previous paragraph made respectively 32 and 22 discoveries, meaning that even at this stringent FDR level, our procedure allows us to make about $16\%$ more discoveries than IPOD.
\begin{figure}[H]
\centering
\includegraphics[width=0.8\textwidth]{data_results_A_UTR3.pdf}
\caption{Colorectal Cancer data: 37 discoveries made by E-SLOPE, namely sequences considered by our procedure as survivors or transformators (see main text for details). The procedure selects both ``obvious'' and more ``challenging'' observations for the eye. The IPOD and LASSO procedures led to only 32 and 22 discoveries respectively, despite the fact that we restricted E-SLOPE to the stringent FDR level of 5\%.}
\label{fig:data_res}
\end{figure}
\section{Conclusion}
This paper introduces a novel approach for simultaneous robust estimation and outliers detection in the linear regression model.
Three main results are provided: optimal bounds for the estimation problem in Section~\ref{SEC:BOUNDS}, that improve in particular previous results obtained with LASSO penalization~\cite{elasso}, and asymptotic FDR control and power consistency for the outlier detection problem in Section~\ref{sec:fdr}.
To the best of our knowledge, this is the first result involving FDR control in this context.
Our theoretical findings are confirmed by extensive experiments on both real and synthetic datasets, showing that our procedure outperforms existing procedures in terms of power while maintaining control of the FDR, even in challenging situations such as low-magnitude outliers, a high-dimensional setting and highly correlated features.
Finally, this work extends the understanding of the deep connection between the SLOPE penalty and FDR control, previously studied in linear regression with orthogonal~\cite{slope} or i.i.d.\ Gaussian \cite{slopeminimax} features, which distinguishes SLOPE from other popular convex penalization methods.
\begin{appendix}
\section{Technical inequalities}
\label{appen:prel1}
The following technical inequalities are borrowed from \cite{Tsyb}, where proofs can be found.
Let $m,n,p$ be positive integers.
In the following lemma, an inequality for the sorted $\ell_1$-norm $J_\lambda$ (defined in Equation~\eqref{sl1}) is stated.
\begin{lem}
For any two $x,y\in\mathds{R}^m$ and any $s\in\{1,\dots,m\}$ such that $\abs{x}_0\leq s$, we have
\begin{equation*}
J_\lambda(x) - J_\lambda(y) \leq \Lambda (s) \norm{x-y}_2 - \sum_{j=s+1}^m \lambda_j \abs{x-y}_{(j)},
\end{equation*}
where
\begin{equation*}
\Lambda (s) = \sqrt{\sum_{j=1}^s \lambda_j^2}.
\end{equation*}
\label{lem:1}
\end{lem}
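The inequality of Lemma~\ref{lem:1} can be checked numerically on random instances (a sanity check, not a proof; the weight sequence below is one admissible nonincreasing choice):

```python
import numpy as np

def J(x, lam):
    """Sorted l1 norm with nonincreasing weights lam."""
    return np.sum(lam * np.sort(np.abs(x))[::-1])

def check_lemma1(m=20, s=4, n_trials=1000, seed=1):
    rng = np.random.default_rng(seed)
    lam = np.sqrt(np.log(2 * m / np.arange(1, m + 1)))  # decreasing weights
    Lam_s = np.sqrt(np.sum(lam[:s] ** 2))               # Lambda(s)
    for _ in range(n_trials):
        x = np.zeros(m); x[:s] = rng.normal(size=s)     # s-sparse x
        y = rng.normal(size=m)
        d = np.sort(np.abs(x - y))[::-1]                # |x-y|_(j), decreasing
        rhs = Lam_s * np.linalg.norm(x - y) - np.sum(lam[s:] * d[s:])
        if J(x, lam) - J(y, lam) > rhs + 1e-10:
            return False
    return True
```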
The following lemma gives a preliminary bound for the prediction error in our context, which is the starting point of our proofs.
\begin{lem}
Let $h:\mathds{R}^p \rightarrow \mathds{R}$ be a convex function. Consider a $n\times p$ design matrix $X$, a vector $\varepsilon\in\mathds{R}^n$ and define $y=X\beta^\star+\varepsilon$ where $\beta^\star\in\mathds{R}^p$. If $\hat{\beta}$ is a solution of the minimization problem $\min_{\beta\in\mathds{R}^p} (\norm{y - X\beta}_2^2 + 2h(\beta))$, then $\hat{\beta}$ satisfies:
\begin{equation*}
\norm{X\hat{\beta} - X\beta^\star}_2^2 \leq \varepsilon^\top X(\hat{\beta} - \beta^\star) + h(\beta^\star) - h(\hat{\beta}).
\end{equation*}
\label{lem:2}
\end{lem}
\begin{proof}
Because the proof in \cite{Tsyb} is more general, we give a proof adapted to our context.
Optimality of $\hat{\beta}$ allows us to choose $v$ in the subdifferential of $h$ such that
\begin{equation*}
0 = X^\top (X\hat{\beta} - y) + v = X^\top (X\hat{\beta} - X\beta^\star - \varepsilon) + v.
\end{equation*}
Therefore,
\begin{align*}
\norm{X\hat{\beta} - X\beta^\star}_2^2 &= (\hat{\beta} - \beta^\star)^\top X^\top X(\hat{\beta} - \beta^\star) \\
&= (\hat{\beta} - \beta^\star)^\top (X^\top\varepsilon - v)\\
&= \varepsilon^\top X(\hat{\beta} - \beta^\star) + \langle v, \beta^\star - \hat{\beta} \rangle.
\end{align*}
Now, by definition of subdifferential, $h(\beta^\star) \geq h(\hat{\beta}) + \langle v, \beta^\star - \hat{\beta} \rangle$. Combining this inequality with the previous equality leads to the conclusion.
\end{proof}
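Lemma~\ref{lem:2} can be illustrated with $h(\beta)=\nu\Vert\beta\Vert_1$, by solving the minimization with a simple coordinate descent and checking the stated inequality on the (approximate) solution. This is an illustration only; the coordinate-descent solver is ours:

```python
import numpy as np

def lasso_cd(X, y, nu, n_iter=300):
    """Coordinate descent for min ||y - X beta||^2 + 2 * nu * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            z = X[:, j] @ r
            beta[j] = np.sign(z) * max(abs(z) - nu, 0.0) / col_sq[j]
    return beta

def check_lemma2(seed=0, nu=2.0):
    rng = np.random.default_rng(seed)
    n, p = 50, 5
    X = rng.normal(size=(n, p))
    beta_star = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
    eps = rng.normal(size=n)
    y = X @ beta_star + eps
    b = lasso_cd(X, y, nu)
    # Lemma: ||X(b - beta*)||^2 <= eps^T X (b - beta*) + h(beta*) - h(b)
    lhs = np.sum((X @ (b - beta_star)) ** 2)
    rhs = eps @ X @ (b - beta_star) + nu * (np.abs(beta_star).sum() - np.abs(b).sum())
    return lhs <= rhs + 1e-6
```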
The following lemma bounds the inner product between white Gaussian noise and an arbitrary vector; the resulting bound involves the sorted $\ell_1$ norm.
\begin{lem}
Let $\delta_0\in (0,1)$ and let $X\in\mathds{R}^{n\times p}$ with columns normalized to~$1$. If $\varepsilon$ is $\mathcal{N}(0,I_n)$ distributed, then the event
\begin{equation*}
\left\{\forall u \in\mathds{R}^p,\varepsilon^\top Xu \leq \max\left(H(u), G(u)\right) \right\}
\end{equation*}
is of probability at least $1-\delta_0/2$, where
\begin{equation*}
H(u) = (4+ \sqrt{2})\sum_{j=1}^p \abs{u}_{(j)}\sigma \sqrt{\log(2p/j)}
\end{equation*}
and
\begin{equation*}
G(u) = (4+\sqrt{2})\sigma \sqrt{\log(1/\delta_0)}\norm{u}_2.
\end{equation*}
\label{lem:inner_product_bound}
\end{lem}
\section{Results related to Gaussian matrices}
\label{appen:prel2}
Inequalities for Gaussian random matrices are needed in what follows. They are stated here for the sake of clarity and we refer the reader to \cite{giraud} for proofs (except bounds \eqref{chi1} and \eqref{chi2}, which are taken from Lemma 1 in \cite{Massart}).
Again, $n$ and $p$ denote positive integers.
\begin{lem}
Let $X\in\mathds{R}^{n\times p}$ with i.i.d $\mathcal{N}(0,I_p)$ rows. Denote by $\sigma_{max}$ the largest singular value of $X$. Then, for all $\tau \geq 0$,
\begin{equation}
\Proba \left( \frac{\sigma_{max}}{\sqrt{n}} \geq 1+ \sqrt{\frac{p}{n}} + \tau \right) \leq \exp\left(-\frac{n\tau^2}{2}\right).
\label{eq:sing}
\end{equation}
\label{lm:Gauss1}
\end{lem}
\begin{lem}
Concentration inequalities:
\begin{itemize}
\item Let $Z$ be $\mathcal{N}(0,1)$ distributed. Then for all $q\geq 0$:
\begin{equation}
\Proba\left( \abs{Z} \geq q \right) \leq \exp\left(-\frac{q^2}{2} \right).
\label{eq:gaus}
\end{equation}
\item Let $Z_1,Z_2,\dots,Z_p$ be independent and $ \mathcal{N}(0,\sigma^2)$ distributed. Then for all $L > 0$:
\begin{equation}
\Proba\left( \max_{i=1,\dots,p}\abs{Z_i} > \sigma\sqrt{2\log p +2L} \right) \leq e^{-L}.
\label{eq:maxgaus}
\end{equation}
\item Let $X$ be $\chi_2(n)$ distributed. Then, for all $x>0$:
\begin{equation}
\Proba\left( X-n \geq 2\sqrt{nx} +2x\right) \leq \exp(-x).
\label{chi1}
\end{equation}
\begin{equation}
\Proba\left( n-X \geq 2\sqrt{nx} \right) \leq \exp(-x).
\label{chi2}
\end{equation}
\end{itemize}
\label{lm:Gauss2}
\end{lem}
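These tail bounds can be verified numerically against exact distribution functions (a sanity check over a grid of values; requires SciPy):

```python
import numpy as np
from scipy.stats import chi2, norm

def check_tails():
    # Gaussian bound (eq:gaus): P(|Z| >= q) <= exp(-q^2 / 2)
    for q in (0.25, 0.5, 1.0, 2.0, 4.0):
        if 2 * norm.sf(q) > np.exp(-q ** 2 / 2):
            return False
    # chi-square bounds (chi1) and (chi2)
    for n in (10, 100, 1000):
        for x in (0.5, 1.0, 5.0, 20.0):
            if chi2.sf(n + 2 * np.sqrt(n * x) + 2 * x, df=n) > np.exp(-x):
                return False
            if chi2.cdf(n - 2 * np.sqrt(n * x), df=n) > np.exp(-x):
                return False
    return True
```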
The following recent result (\cite{REgauss}, Theorem~1) will also be useful.
\begin{lem}
Let $X\in\mathds{R}^{n\times p}$ with i.i.d $\mathcal{N}(0,\Sigma)$ rows. There exists positive constants $c$ and $c'$ such that with probability greater than $1-c'\exp(-cn)$, we have for all $z\in\mathds{R}^p$:
\begin{equation}
\frac{\norm{Xz}_2}{\sqrt{n}} \geq \frac{1}{4}\sqrt{\lambda_{min}(\Sigma)}\norm{z}_2 - 9\sqrt{\max_j \Sigma_{jj} \frac{\log p}{n}}\norm{z}_1,
\label{eq:re_gauss}
\end{equation}
where $\lambda_{min}(\Sigma)$ is the lowest eigenvalue of $\Sigma$.
\label{lm:re_gauss}
\end{lem}
\section{Proof of Section \ref{SEC:BOUNDS}}
\label{Appen:p1}
This section is devoted to the proof of our main results, stated in Section~\ref{SEC:BOUNDS}.
\subsection{Proof of Theorem \ref{thm:RE}}
\label{Appen:proof_RE}
Define $D$ the diagonal matrix such that $X=X'D$ ($D$ is the diagonal matrix formed by the inverses of the norms of the columns of $X'$). Applying Lemma~\ref{lm:re_gauss} to $X'$ and $Dz$ we obtain for all $z\in\mathds{R}^p$
\begin{align*}
\norm{Xz}_2 &\geq \frac{1}{4}\sqrt{\lambda_{min}(\Sigma)} \norm{\sqrt{n}Dz}_2 - 9\sqrt{\max_j \Sigma_{jj}\frac{\log p}{n}}\norm{\sqrt{n}Dz}_1 \\
&\geq \frac{\sqrt{n}\sqrt{\lambda_{min}(\Sigma)}}{4M}\norm{z}_2 - 9\frac{\sqrt{\max_j \Sigma_{jj}\log p}}{m}\norm{z}_1,
\end{align*}
with probability greater than $1-c'\exp(-cn)$,
where $M$ and $m$ denote respectively the maximum and minimum of the norms of the columns of $X'$. Note that for all $1\leq i \leq p$, the squared norm of the $i^{th}$ column of $X'$ is $\sigma_i^2\chi_2 (n)$ distributed, so using the bounds \eqref{chi1} and \eqref{chi2} of Lemma \ref{lm:Gauss2} (respectively with $x=n$ and $x=n/16$), together with a union bound, we obtain that with probability greater than $1-ne^{-n} - ne^{-n/16}$
\begin{equation*}
M\leq (\max_j \Sigma_{jj})\sqrt{5n}, \qquad m\geq (\min_j \Sigma_{jj})\sqrt{\frac{n}{2}},
\end{equation*}
and we eventually obtain
\begin{equation}
\norm{Xz}_2 \geq \frac{\sqrt{\lambda_{min}(\Sigma)}}{4\sqrt{5}\max_j \Sigma_{jj}}\norm{z}_2 - \frac{9}{\min_j \Sigma_{jj}}\sqrt{\frac{2\log p\max_j \Sigma_{jj}}{n}}\norm{z}_1.
\label{eq:Xn}
\end{equation}
Let $v = [\beta^\top, \mu^\top]^\top \in\mathcal{C}(k,s, c_0) \subset \mathds{R}^{p + n}$ (see Definition \ref{Cewre}). Then,
\begin{equation}
\norm{\beta}_1 \leq \sum_{j=1}^p \frac{\tilde{\lambda}_j}{\tilde{\lambda}_p}\abs{\beta}_{(j)}\leq (1+c_0)\left(\sqrt{k}\norm{\beta}_2 + \sqrt{s}\norm{\mu}_2\right).
\label{eq:D2}
\end{equation}
Injecting \eqref{eq:D2} in \eqref{eq:Xn} applied to the vector $\beta$ now leads to
\begin{multline}
\norm{X\beta}_2 + \norm{\mu}_2 \geq \norm{\beta}_2 \Big( \frac{\sqrt{\lambda_{min}(\Sigma)}}{4\sqrt{5}\max_j \Sigma_{jj}} - \frac{9}{\min_j \Sigma_{jj}}(1+c_0) \sqrt{\frac{2k(\log p)\max_j \Sigma_{jj}}{n}} \Big) \\
+ \norm{\mu}_2\Big(1- \frac{9}{\min_j \Sigma_{jj}}(1+c_0) \sqrt{\frac{2s(\log p)\max_j \Sigma_{jj}}{n}} \Big).
\label{eq:D3}
\end{multline}
For $n$ large enough as explicited in the assumption of Theorem~\ref{thm:RE}, Equation~\eqref{eq:D3} turns to
\begin{equation*}
\norm{X\beta}_2 + \norm{\mu}_2 \geq \frac{\sqrt{\lambda_{min}(\Sigma)}}{8\sqrt{5}\max_j \Sigma_{jj}}\norm{\beta}_2 + \frac{1}{2} \norm{\mu}_2,
\end{equation*}
and thus, using the fact that $2(a^2+b^2)\geq (a+b)^2$,
\begin{equation}
\norm{X\beta}_2^2 + \norm{\mu}_2^2 \geq \min\left\{\frac{\lambda_{min}(\Sigma)}{128\times 5(\max_j \Sigma_{jj})^2}, \frac{1}{8}\right\} \norm{v}_2^2.
\end{equation}
Now if $v = [\beta^\top, \mu^\top]^\top \in\mathcal{C}^p(s, c_0) \subset \mathds{R}^{p + n}$, Equation~\eqref{eq:Xn} together with the inequality $\norm{\beta}_1 \leq \sqrt{p}\norm{\beta}_2 $ leads to
\begin{equation*}
\norm{X\beta}_2 + \norm{\mu}_2 \geq \norm{\beta}_2 \Big( \frac{\sqrt{\lambda_{min}(\Sigma)}}{4\sqrt{5}\max_j \Sigma_{jj}} - \frac{9}{\min_j \Sigma_{jj}} \sqrt{\frac{2p\log p \max_j \Sigma_{jj}}{n}} \Big)
+ \norm{\mu}_2,
\end{equation*}
and we conclude as above.
Thus the first part of the theorem is satisfied.
Now, we must upper bound the magnitude of the scalar product $\langle X\beta, \mu \rangle $.\\ Divide $\{1,\dots,p\} = T_1\cup T_2\cup\dots \cup T_t$ with $T_i$ ($1\leq i \leq t-1$) of cardinality~$k'$ containing the support of the $k'$ largest absolute values of $\beta_{\left(\bigcup_{j=1}^{i-1} T_j\right)^c}$ and~$T_t$ of cardinality $k''\leq k'$ the support of the remaining values. Divide in the same way $\{1,\dots,n\} = S_1\cup S_2\cup\dots \cup S_q$ (of cardinalities $s', \dots, s', s''\leq s'$) with respect to the largest absolute values of $\mu$ ($k'$ and $s'$ to be chosen later).
We use this to bound the scalar product:
\begin{equation*}
\vert\langle X\beta, \mu \rangle\vert = \vert\langle X'D\beta, \mu \rangle\vert \leq \sum_{i=1}^q\sum_{j=1}^t \vert \langle X'_{S_i,T_j} (D\beta)_{T_j}, \mu_{S_i} \rangle \vert ,
\end{equation*}
so
\begin{equation}
\abs{\langle X\beta, \mu \rangle} \leq \max_{i,j} \nor{X'_{S_i,T_j}}_2 \frac{1}{m}\sum_{j=1}^t \nor{\beta_{T_j}}_2 \sum_{i=1}^q\nor{\mu_{S_i}}_2 ,
\label{eq:sp}
\end{equation}
where we recall that $m$ is the minimal value of the column norms of $X'$.
According to Lemma~\ref{lm:Gauss1}, conditionally on $S_i$ and $T_j$, we have with probability greater than $1-\exp(-n\tau^2 /2)$,
\begin{equation*}
\nor{X'_{S_i,T_j}}_2 \leq \Vert\Sigma^{1/2}_{T_j,T_j}\Vert\big(\sqrt{k'} + \sqrt{s'} + \sqrt{s'}\tau \big) \leq \sqrt{\lambda_{max}(\Sigma)}\big(\sqrt{k'} + \sqrt{s'} + \sqrt{s'}\tau \big).
\end{equation*}
Considering all possibilities for $S_i$ and $T_j$, we have with probability greater than $1-\dbinom{p}{k'}\dbinom{n}{s'}e^{-n\tau ^2/2}$,
\begin{equation}
\max_{i,j}\nor{X'_{S_i, T_j}}_2 \leq \sqrt{\lambda_{max}(\Sigma)}\big(\sqrt{k'} + (1+\tau)\sqrt{s'} \big).
\label{eq:D7}
\end{equation}
Moreover, thanks to the decreasing values along the subsets $T_j$, we can use the trick of \cite{CandesPlan}, writing for all $j\in\{1,\dots,t-1\}$ and all $x\in\{1,\dots,\abs{T_{j+1}}\}$:
\begin{equation*}
\abs{\big(\beta_{T_{j+1}}\big)_x}\leq \frac{\nor{\beta_{T_j}}_1}{\abs{T_j}}.
\end{equation*}
Squaring this inequality and summing over $x$ gives:
\begin{equation*}
\norm{\beta_{T_{j+1}}}_2^2 \leq \frac{\norm{\beta_{T_j}}_1^2}{\abs{T_j}}\frac{\abs{T_{j+1}}}{\abs{T_j}} \leq \frac{\norm{\beta_{T_j}}_1^2}{\abs{T_j}} = \frac{\norm{\beta_{T_j}}_1^2}{k'}.
\end{equation*}
Then,
\begin{equation*}
\sum_{j=1}^t \norm{\beta_{T_j}}_2 \leq \norm{\beta}_2 + \sum_{j=2}^t \norm{\beta_{T_j}}_2 \leq \norm{\beta}_2 + \frac{1}{\sqrt{k'}}\sum_{j=1}^{t-1} \norm{\beta_{T_j}}_1 \leq \norm{\beta}_2 + \frac{1}{\sqrt{k'}}\norm{\beta}_1,
\end{equation*}
and so
\begin{equation}
\sum_{j=1}^t \norm{\beta_{T_j}}_2 \leq \norm{\beta}_2 + \frac{1}{\sqrt{k'}}\sum_{j=1}^p \frac{\tilde{\lambda}_j}{\tilde{\lambda}_p}\abs{\beta}_{(j)}.
\label{eq:D8}
\end{equation}
In the same way we obtain:
\begin{equation}
\sum_{i=1}^q\norm{\mu_{S_i}}_2 \leq \norm{\mu}_2 + \frac{1}{\sqrt{s'}}\sum_{j=1}^n \frac{\lambda_j}{\lambda_n}\abs{\mu}_{(j)}.
\label{eq:D9}
\end{equation}
Now if $v = [\beta^\top, \mu^\top]^\top \in\mathcal{C}(k,s, c_0) \subset \mathds{R}^{p + n}$,
\begin{align*}
\sum_{j=1}^t \norm{\beta_{T_j}}_2 \sum_{i=1}^q\norm{\mu_{S_i}}_2 &\leq \big( \norm{\beta}_2 +\frac{1}{\sqrt{k'}} (1+c_0)(\sqrt{k}\norm{\beta}_2 + \sqrt{s}\norm{\mu}_2) \big)\\
&\quad \times \big( \norm{\mu}_2 +\frac{1}{\sqrt{s'}} (1+c_0)(\sqrt{k}\norm{\beta}_2 + \sqrt{s}\norm{\mu}_2) \big)\\
&\leq \left(2\norm{\beta}_2 + \norm{\mu}_2\right)\left(2\norm{\mu}_2 + \norm{\beta}_2\right)\\
&\leq 2\norm{v}_2^2 + 5\norm{\mu}_2\norm{\beta}_2\\
&\leq 5\norm{v}_2^2,
\end{align*}
where we chose $k' = s' = (1+c_0)^2(k \vee s)$.
Combining this last inequality with Equations \eqref{eq:sp} and \eqref{eq:D7}, and using again that $m\geq \min_j \Sigma_{jj}\sqrt{n/2}$ with probability greater than $1-ne^{-n/16}$, lead to
\begin{equation}
\abs{\langle X\beta, \mu \rangle} \leq \frac{\sqrt{\lambda_{max}(\Sigma)}}{\min_j \Sigma_{jj}}(2+\tau)\sqrt{\frac{2s'}{n}} 5\norm{v}_2^2.
\label{eq:D10}
\end{equation}
Note that with this choice of $s'$ and $k'$, the assumptions on $n$ and the constant~$C'$ defined in the theorem lead to $\dbinom{p}{k'}\leq (ep/k')^{k'} \leq \exp(n/C')$ and~$\dbinom{n}{s'}\leq (en/s')^{s'} \leq \exp(n/C')$, so we have Equation \eqref{eq:D7} with probability greater than $1-\exp\left(-n\left(\tau^2 /2 - 2C'^{-1}\right) \right)$.
With the specific assumption on $n$ in the statement of the theorem, the term in the right part of Equation \eqref{eq:D10} is small enough to obtain:
\begin{equation}
2\abs{\langle X\beta, \mu \rangle} \leq \min\left\{\frac{\lambda_{min}(\Sigma)}{256\times 5 \max_j \Sigma_{jj}}, \; \frac{1}{16} \right\} \norm{v}_2^2.
\label{eq:D11}
\end{equation}
Eventually, if $v = [\beta^\top, \mu^\top]^\top \in\mathcal{C}^p(s, c_0) \subset \mathds{R}^{p + n}$, Equation~\eqref{eq:D9} still holds, and combining it with Equation~\eqref{eq:D7} and Equation~\eqref{eq:sp} with $t=1$ leads to:
\begin{multline}
\abs{\langle X\beta, \mu \rangle} \leq \frac{\sqrt{\lambda_{max}(\Sigma)}}{\min_j \Sigma_{jj}}(\sqrt{p} + (1+\tau)\sqrt{s'})\sqrt{\frac{2}{n}} \norm{\beta}_2 \\
\times \big( \norm{\mu}_2 +\frac{1}{\sqrt{s'}} (1+c_0)\left(\sqrt{p}\norm{\beta}_2 + \sqrt{s}\norm{\mu}_2\right) \big).
\label{eq:D12}
\end{multline}
Choosing $s' = (1+c_0)^2(p \vee s)$,
\begin{align*}
\abs{\langle X\beta, \mu \rangle} &\leq \frac{\sqrt{\lambda_{max}(\Sigma)}}{\min_j \Sigma_{jj}}(2+\tau)\sqrt{\frac{2s'}{n}} \norm{\beta}_2 \left( 2\norm{\mu}_2 + \norm{\beta}_2 \right)\\
& \leq \frac{\sqrt{\lambda_{max}(\Sigma)}}{\min_j \Sigma_{jj}}(2+\tau)\sqrt{\frac{2s'}{n}}2\norm{v}_2^2.
\end{align*}
We conclude as above, thus leading to the second part of the theorem.
\subsection{Proof of Theorem \ref{thm:upper_bound_nobeta}}
\label{proof:upper_bound_nobeta}
We will actually show a slightly more general result. Let $R$ be any subset of cardinality $r$ containing the support of the true parameter $\mu^\star$ and $I_R$ be the matrix obtained by extracting columns with indices in $R$ from the identity matrix. We consider the following minimization:
$$
\hat{\beta}, \hat{\mu} = \argmin_{\beta, \mu} \Vert y - X\beta - I_R \mu \Vert_2 ^2 + 2\rho J_{\lambda^{[r]}} (\mu),
$$
where $\lambda^{[r]}$ contains the first $r$ terms of the sequence of weights defined in Section \ref{SEC:BOUNDS}.
Obviously, the theorem will result from the case $R=\{1,\dots,n\}$. Note that $\hat{\mu}$ belongs to $\mathds{R}^r$.
Defining $b=\hat{\beta}-\beta^\star$ and $u=I_R (\hat{\mu}-\mu_R^\star)$ where $\mu_R^\star$ denotes the vector extracted from $\mu^\star$ selecting coordinates corresponding to indices in $R$ (note that the eliminated coordinates are zeros), we can apply Lemma \ref{lem:2} to obtain
\begin{align*}
\norm{Xb+ u}_2^2 &\leq \varepsilon^\top (Xb + u) + \rho J_{\lambda^{[r]}}(\mu^\star) - \rho J_{\lambda^{[r]}}(\hat{\mu}) \\&= \varepsilon^\top (Xb + u) + \rho J_\lambda(I_R\mu_R^\star) - \rho J_\lambda(I_R\hat{\mu}).
\end{align*}
Note that it is crucial to have $\supp(\mu^\star)\subset R$ in order to write $\mu^\star = I_R \mu_R^\star$.
Applying now Lemma \ref{lem:1} we obtain:
\begin{equation}
\norm{Xb + u}_2^2 \leq \varepsilon^\top (Xb + u) + \rho\big(\Lambda(s)\norm{u}_2 - \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \big),
\label{eq:C1}
\end{equation}
where $\Lambda(s)$ is defined as $\sqrt{\sum_{j=1}^s \lambda_j^2}$.
Hence, using the Cauchy--Schwarz inequality we get:
\begin{equation*}
\norm{Xb+ u}_2^2 \leq \nor{X^\top \varepsilon}_2 \norm{b}_2 + \varepsilon^\top u + \rho\big(\Lambda(s)\norm{u}_2 - \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \big).
\end{equation*}
Then, by Lemma~\ref{lem:inner_product_bound}, with probability greater than $1-\delta_0/2$ we have (the last inequality is used for the sake of simplicity):
\begin{equation*}
\varepsilon^\top u \leq \max (H(u), G(u)) \leq H(u) + G(u),
\end{equation*}
with $H(u)$ and $G(u)$ defined in Lemma~\ref{lem:inner_product_bound}. Additionally, $\frac{1}{\sigma^2}\norm{X^\top \varepsilon}_2^2$ follows a $\chi^2$ law with $p$ degrees of freedom, so by the third point in Lemma~\ref{lm:Gauss2} with $x=L p$ this provides, choosing $\delta_0 = (s/2n)^s$, that with probability greater than $1-\frac{1}{2}(s/2n)^s- \exp(-L p)$:
\begin{align*}
\norm{Xb+ u}_2^2 &\leq c_L\sigma\sqrt{p} \norm{b}_2 + H(u) + G(u) + \rho\big(\Lambda(s)\norm{u}_2 - \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \big) \\
&\leq c_L \sigma \sqrt{p}\norm{b}_2 + \frac{\rho}{2}\sum_{j=1}^n \lambda_j \abs{u}_{(j)} \\
&\quad + \frac{\rho}{2}\sqrt{s\log(2n/s)}\norm{u}_2 + \rho\big(\Lambda(s)\norm{u}_2 - \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \big) \\
&\leq c_L \sigma \sqrt{p}\norm{b}_2 + \big(2\rho\Lambda(s)\norm{u}_2 - \frac{\rho}{2} \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \big),
\end{align*}
where $c_L = \sqrt{1+2L + 2\sqrt{L}}$ and where we used Equation~\eqref{eq:sum_log_inequality} to obtain the last inequality.
The fact that the left part of the last inequality is positive gives:
$$
\sum_{j=1}^n \lambda_j \abs{u}_{(j)} \leq \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} + \Lambda(s)\norm{u}_2 \leq \frac{2}{\rho}c_L \sigma\sqrt{p}\norm{b}_2 + 5\Lambda(s)\norm{u}_2,
$$
where the left part of the inequality is obtained using Cauchy-Schwarz inequality.
Hence,
\begin{equation}
\sum_{j=1}^n \frac{\lambda_j}{\lambda_n} \abs{u}_{(j)} \leq \frac{2c_L}{\rho} \sqrt{\frac{p}{\log 2}}\norm{b}_2 + 5\sqrt{\frac{s\log\left(2en/s \right)}{\log 2}} \norm{u}_2,
\end{equation}
where we used the right part of the inequality~\eqref{eq:sum_log_inequality}.
Choosing $L=1$ leads to $c_L = \sqrt{5}$, and recalling that $\rho \geq 2(4+\sqrt{2})$ we conclude that $ [b^\top, u^\top]^\top \in \mathcal{C}^p(s_1,4)$ (see Definition~\ref{Cewre}) with $s_1 = \frac{s\log\left(2en/s\right)}{\log 2}$.
Therefore, by Condition~\ref{Hyp} and the definition of $\kappa$ therein:
\begin{align*}
2\norm{Xb+ u}_2^2 &\leq 2\sqrt{5}\sigma\sqrt{p}\norm{b}_2 + 4\rho
\Lambda(s)\norm{u}_2 \\
&\leq \frac{5\sigma^2}{\kappa^2}p + \kappa^2\norm{b}_2^2 +
\frac{4\rho^2 \Lambda(s)^2}{\kappa^2} + \kappa^2\norm{u}_2^2\\
&\leq \frac{4\rho^2 }{\kappa^2}\Lambda(s)^2 + \frac{5\sigma^2}{\kappa^2}p
+ \kappa^2 \norm{v}_2^2\\
& \leq \frac{4\rho^2 }{\kappa^2}\Lambda(s)^2 + \frac{5\sigma^2}{\kappa^2}p
+ \norm{Xb+ u}_2^2 .
\end{align*}
Thus,
\begin{equation*}
\norm{Xb+u}_2^2 \leq \frac{4\rho^2 }{\kappa^2}\Lambda(s)^2
+ \frac{5\sigma^2}{\kappa^2}p,
\end{equation*}
and
\begin{equation*}
\norm{b}_2^2 + \norm{u}_2^2 \leq \frac{4\rho^2}{\kappa^4}\Lambda(s)^2 + \frac{5\sigma^2 }{\kappa^4}p.
\end{equation*}
The proof of Theorem \ref{thm:upper_bound_nobeta} is completed by the classical inequalities~\cite{Tsyb}
\begin{equation}
\label{eq:sum_log_inequality}
s \log\Big(\frac{2n}{s} \Big) \leq \sum_{j=1}^s \log \Big(\frac{2n}{j} \Big)
= s\log(2n) - \log(s!) \leq s\log\Big(\frac{2en}{s}\Big).
\end{equation}
\subsection{Proof of Theorem \ref{thm:upper_bound_beta_l1}}
As in the previous proof, a slightly more general version holds: in the same way as we obtained \eqref{eq:C1}, with the same definitions of $b$ and $u$, we now have:
\begin{equation*}
\norm{Xb+ u}_2^2 \leq \varepsilon^\top (Xb + u) + \nu (\norm{\beta^\star}_1 - \Vert\hat{\beta}\Vert_1) + \rho (\Lambda(s)\norm{u}_2 - \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)}).
\end{equation*}
With $T$ being the support of the true regression vector $\beta^\star$ we have, using the triangle inequality:
$$
\norm{\beta^\star}_1 - \Vert\hat{\beta}\Vert_1 = \norm{\beta^\star_T}_1 - \Vert b+\beta^\star\Vert_1 = \norm{\beta^\star_T}_1 - \norm{b_T + \beta^\star_T}_1 - \norm{b_{T^c}}_1 \leq \norm{b_T}_1 - \norm{b_{T^c}}_1.
$$
Hence we can write:
\begin{align*}
\norm{Xb+ u}_2^2 & \leq \nor{X^\top \varepsilon}_\infty \norm{b}_1 + \nu \left(\norm{b_T}_1 - \norm{b_{T^c}}_1 \right) + \varepsilon^\top u \\
&\quad + \rho\Lambda(s)\norm{u}_2 - \rho\sum_{j=s+1}^n \lambda_j \abs{u}_{(j)}\\
& \leq \norm{b_T}_1(\nu + \nor{X^\top \varepsilon}_\infty) - \norm{b_{T^c}}_1( \nu - \nor{X^\top \varepsilon}_\infty ) + \varepsilon^\top u \\
&\quad + \rho\Lambda(s)\norm{u}_2 - \rho\sum_{j=s+1}^n \lambda_j \abs{u}_{(j)}.
\end{align*}
With the choice $\nu=4\sigma\sqrt{\log p}$ we have $\nor{X^\top \varepsilon}_\infty \leq \nu/2$ according to Lemma~\ref{lm:Gauss2}, with probability greater than $1-\frac{1}{p}$. Using again Lemma \ref{lem:inner_product_bound} to bound $\varepsilon^\top u $, we obtain that with probability greater than $1-\frac{1}{2}\left(\frac{s}{2n}\right)^s - \frac{1}{p}$:
\begin{equation}
\norm{Xb+ u}_2^2 \leq \norm{b_T}_1 (6\sigma\sqrt{\log p}) - \norm{b_{T^c}}_1 (2\sigma\sqrt{\log p} ) + 2\rho\Lambda(s)\norm{u}_2 - \frac{\rho}{2}\sum_{j=s+1}^n \lambda_j \abs{u}_{(j)}.
\label{eq:C3}
\end{equation}
Since the left-hand side of the inequality is nonnegative, this gives:
$$
\frac{4}{\rho}\sigma\sqrt{\log p}\norm{b_{T^c}}_1 + \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \leq \frac{12}{\rho}\sigma\sqrt{\log p}\norm{b_T}_1 + 4\Lambda(s)\norm{u}_2,
$$
and using the Cauchy--Schwarz inequality, this leads to:
$$
\frac{4}{\rho}\sigma\sqrt{\log p}\norm{b}_1 + \sum_{j=1}^n \lambda_j \abs{u}_{(j)} \leq \frac{16}{\rho}\sigma\sqrt{k\log p}\norm{b}_2 + 5\Lambda(s)\norm{u}_2
$$
Finally, since $\lambda_n = \sigma\sqrt{\log 2}$ and $\sqrt{\log p} \geq \frac{\rho \sqrt{\log 2}}{4}$, we obtain:
\begin{equation}
\label{eq:cone_lasso_slope}
\norm{b}_1 + \sum_{j=1}^n \frac{\lambda_j}{\lambda_n} \abs{u}_{(j)} \leq \frac{4\sigma\sqrt{\log p}}{\rho\lambda_n}\norm{b}_1 + \sum_{j=1}^n \frac{\lambda_j}{\lambda_n} \abs{u}_{(j)}\leq \frac{16\sigma\sqrt{k\log p}}{\rho\lambda_n}\norm{b}_2 + \frac{5\Lambda(s)}{\lambda_n}\norm{u}_2
\end{equation}
and the concatenated vector of $b$ and $u$ is therefore in the cone $\mathcal{C}(k_1,s_1,4)$ with $k_1 = 16k\log p/\log 2$ and $s_1 = s\log (2en/s)/\log 2$.
Starting from \eqref{eq:C3}, we obtain, using again $\kappa$ as the capacity constant in Assumption \ref{Hyp}:
\begin{align*}
2\norm{Xb+u}_2^2 &\leq \norm{b_T}_1 12\sigma\sqrt{\log p} + 4\rho\Lambda(s)\norm{u}_2\\
& \leq 12\sigma\sqrt{k\log p}\norm{b}_2 + 4\rho\Lambda(s)\norm{u}_2\\
& \leq \frac{36}{\kappa^2}\sigma^2 k\log p + \kappa^2\norm{b}_2^2 + \frac{4\rho^2}{\kappa^2}\Lambda(s)^2 + \kappa^2 \norm{u}_2^2 \\
&\leq \frac{36}{\kappa^2}\sigma^2 k\log p + \frac{4\rho^2}{\kappa^2}\Lambda(s)^2 + \kappa^2 \left(\norm{b}_2^2 + \norm{u}_2^2\right)\\
& \leq \frac{36}{\kappa^2}\sigma^2 k\log p + \frac{4\rho^2}{\kappa^2}\Lambda(s)^2 + \norm{Xb+ u}_2^2.
\end{align*}
Thus,
$$
\norm{Xb+ u}_2^2 \leq \frac{36}{\kappa^2}\sigma^2 k\log p + \frac{4\rho^2}{\kappa^2}\Lambda(s)^2
$$
and using again Assumption \ref{Hyp} and the remark after:
$$
\norm{b}_2^2 + \norm{u}_2^2 \leq \frac{36}{\kappa^4}\sigma^2 k\log p + \frac{4\rho^2}{\kappa^4}\Lambda(s)^2 .
$$
\subsection{Proof of Theorem \ref{thm:upper_bound_beta_sl1}}
\label{proof:beta_sl1}
In the same way we obtained \eqref{eq:C1}, we now have:
$$
\norm{Xb+ u}_2^2 \leq \varepsilon^\top (Xb + u) + \rho \big(\tilde{\Lambda}(k) \norm{b}_2 - \sum_{j=k+1}^p \tilde{\lambda}_j \abs{b}_{(j)} \big) + \rho \big(\Lambda(s)\norm{u}_2 - \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \big)
$$
We use Lemma \ref{lem:inner_product_bound} twice to bound $\varepsilon^\top Xb$ and $\varepsilon^\top u$, with $(k/2p)^k$ and~$(s/2n)^s$ as the respective choices of $\delta_0$, so that with probability $1- \frac{1}{2}\left(\frac{s}{2n}\right)^{s} - \frac{1}{2}\left(\frac{k}{2p}\right)^{k} $:
\begin{align*}
\norm{Xb+ u}_2^2 &\leq H(b) + G(b) + H(u) + G(u) \\
&\quad + \rho\big(\tilde{\Lambda}(k) \norm{b}_2 - \sum_{j=k+1}^p \tilde{\lambda}_j \abs{b}_{(j)}\big) + \rho\big(\Lambda(s)\norm{u}_2 - \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)}\big)\\
&\leq \frac{\rho}{2}\sum_{j=1}^p \tilde{\lambda}_j \abs{b}_{(j)} + \frac{\rho}{2}\sqrt{k\log(2p/k)}\norm{b}_2 + \rho\big(\tilde{\Lambda}(k)\norm{b}_2 - \sum_{j=k+1}^p \tilde{\lambda}_j \abs{b}_{(j)} \big) \\
& \quad +2\rho\Lambda(s)\norm{u}_2 - \frac{\rho}{2} \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)}\\
&\leq \frac{\rho}{2}4\tilde{\Lambda}(k)\norm{b}_2 - \frac{\rho}{2}\sum_{j=k+1}^p \tilde{\lambda}_j \abs{b}_{(j)} + 2\rho\Lambda(s)\norm{u}_2 - \frac{\rho}{2} \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)},
\end{align*}
where we use Equation~\eqref{eq:sum_log_inequality} to obtain the last inequality.
The left-hand side of the inequality is nonnegative, so
\begin{equation}
\sum_{j=k+1}^p \tilde{\lambda}_j \abs{b}_{(j)} + \sum_{j=s+1}^n \lambda_j \abs{u}_{(j)} \leq 4\tilde{\Lambda}(k) \norm{b}_2 + 4\Lambda(s)\norm{u}_2,
\label{eq:C4}
\end{equation}
and
\begin{equation}
2\norm{Xb+ u}_2^2 \leq 4\rho\tilde{\Lambda}(k) \norm{b}_2 + 4\rho\Lambda(s)\norm{u}_2.
\label{eq:C5}
\end{equation}
Equation~\eqref{eq:C4} together with the Cauchy--Schwarz inequality leads to
\begin{equation}
\label{eq:cone_with_two_slope}
\sum_{j=1}^p \tilde{\lambda}_j \abs{b}_{(j)} + \sum_{j=1}^n \lambda_j \abs{u}_{(j)} \leq 5\tilde{\Lambda}(k) \norm{b}_2 + 5\Lambda(s)\norm{u}_2.
\end{equation}
Combining the equation above with Equation~\eqref{eq:sum_log_inequality} shows that the concatenated estimator is in $\mathcal{C}(k_1,s_1,4)$ with $s_1$ and $k_1$ as in the statement of the theorem (note that~$\tilde{\lambda}_n =~\lambda_n =~\sigma\sqrt{\log 2}$); denoting by $\kappa$ the capacity constant of Assumption~\ref{Hyp}, Equation~\eqref{eq:C5} then leads to:
\begin{align*}
2\norm{Xb+ u}_2^2 &\leq (3+C)^2\frac{\tilde{\Lambda}(k)^2}{2\kappa^2} + \kappa^2 \norm{b}_2^2 + 4\rho^2\frac{\Lambda(s)^2}{\kappa^2} + \kappa^2 \norm{u}_2^2\\
&\leq \frac{C'}{\kappa^2}\left( \tilde{\Lambda}(k)^2 + \Lambda(s)^2 \right) + \norm{Xb+ u}_2^2,
\end{align*}
where $C' = 4\rho^2 \vee (3+C)^2 / 2$.
Finally:
$$
\norm{Xb+ u}_2^2 \leq \frac{C'}{\kappa^2}\left( \tilde{\Lambda}(k)^2 + \Lambda(s)^2 \right),
$$
and
$$
\norm{b}_2^2 + \norm{u}_2^2 \leq \frac{C'}{\kappa^4}\left( \tilde{\Lambda}(k)^2 + \Lambda(s)^2 \right).
$$
\section{Proof of Theorem \ref{THM:FDR}}
\label{Appen:p2}
In this section, we give the proof of the asymptotic FDR control presented in Theorem \ref{THM:FDR}.
In the following, for a given matrix $A$ and a given subset $T$, $A_T$ denotes the extracted matrix formed by the columns of $A$ with indices in $T$, whereas~$A_{T,\cdot}$ denotes the extracted matrix formed by the rows of $A$ with indices in~$T$. For vectors, there is no ambiguity. Moreover, $S$ (of cardinality $s$) denotes the support of the true parameter $\mu^\star$.
We first recall some properties on the dual of the sorted $\ell_1$ norm, and also a lemma taken from \cite{slopeminimax} and stated here without proof:
\begin{defn}[\cite{slopeminimax}]
A vector $a\in\mathds{R}^n$ is said to majorize $b\in\mathds{R}^n$ (denoted $b \preccurlyeq a$) if they satisfy for all $i\in\{1,\dots,n\}$:
$$
\abs{a}_{(1)} + \cdots + \abs{a}_{(i)} \geq \abs{b}_{(1)} + \cdots + \abs{b}_{(i)}.
$$
\end{defn}
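For concreteness (an illustration, not part of the original text), the majorization relation $b \preccurlyeq a$ can be implemented directly from this definition:

```python
def majorizes(a, b):
    """True iff a majorizes b: the partial sums of |a| sorted in decreasing
    order dominate the corresponding partial sums for b (same length n)."""
    sa = sorted((abs(x) for x in a), reverse=True)
    sb = sorted((abs(x) for x in b), reverse=True)
    ca = cb = 0.0
    for x, y in zip(sa, sb):
        ca, cb = ca + x, cb + y
        if ca < cb:
            return False
    return True

assert majorizes([3, -2, 1], [2, 2, 1])      # 3>=2, 5>=4, 6>=5
assert not majorizes([2, 2, 1], [3, -2, 1])  # fails at the first partial sum
```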
\begin{prop}[\cite{slope}]
Let $J_\lambda$ be the sorted $\ell_1$ norm for a certain non-increasing sequence $\lambda$ of length $n$. The unit ball of the dual norm is:
$$\mathcal{C_\lambda} = \{v\in\mathds{R}^n: v \preccurlyeq \lambda \}.$$
\end{prop}
\begin{lem}[\cite{slopeminimax}, Lemma A.9]
\label{lem:A9}
Given any constant $\alpha > 1/(1-q)$, suppose $\max\{\alpha s, s + d\} \leq s^\star < n $ for any (deterministic) sequence $d$ that diverges to $\infty$. Let $\zeta_1, \dots, \zeta_{n-s}$ be i.i.d $\mathcal{N}(0,1)$. Then
$$
(\abs{\zeta}_{(s^\star - s +1)}, \abs{\zeta}_{(s^\star - s +2)}, \dots , \abs{\zeta}_{(n - s)}) \preccurlyeq (\lambda_{s^\star+1}^{\mathrm{BH}}, \lambda_{s^\star+2}^{\mathrm{BH}}, \dots, \lambda_{n}^{\mathrm{BH}})
$$
with probability approaching one.
\end{lem}
We adapt from \cite{slopeminimax} the definition of a resolvent set below, useful to determine the true support of the mean-shift parameters.
\begin{defn}
\label{def:res}
Let $s^\star$ be an integer obeying $s<s^\star <n$. The set~$S^\star (S,s^\star)$ is said to be a resolvent set if it is the union of $S$ and the $s^\star - s$ indices corresponding to the largest entries of the error term~$\varepsilon$ restricted to $\bar{S}$.
\end{defn}
Let $c$ be any positive constant and fix $s^\star \geq s(1+c) / (1-q)$ ($q$ being the target FDR level), so that assumptions of Lemma \ref{lem:A9} are satisfied. For clarity, we denote $S^\star = S^\star(S, s^\star)$.
For a resolvent set $S^\star$ of cardinality $s^\star$, define the reduced minimization as:
\begin{equation}
\label{reduced}
\beta^{S^\star}, \mu^{S^\star} = \argmin_{\beta\in\mathds{R}^p, \mu\in\mathds{R}^{s^\star}} \left\{\norm{y - X\beta - I_{S^\star}\mu}_2^2 + 2\rho J_{\tilde{\lambda}}(\beta) + 2\rho J_{\lambda^{[s^\star]}}(\mu) \right\},
\end{equation}
where $\lambda^{[s^\star]}$ is the beginning (the first $s^\star $ terms) of the sequence of weights in the global problem.
Note that a resolvent set contains the support of the true parameter $\mu^\star$, so the generalized versions of the main results in Section~\ref{SEC:BOUNDS}, considered in the proof in Appendix~\ref{Appen:p1}, hold.
We want to show that the estimator of the unreduced problem, $\hat{\mu}$, vanishes on the coordinates whose indices are not in $S^\star$. Precisely, we will show that $\hat{\mu}= I_{S^\star}\mu^{S^\star}$.
The first-order conditions for the global and reduced minimization problems above are, respectively:
\begin{empheq}[left=\empheqlbrace]{align}
X^\top (y-X\hat{\beta}-\hat{\mu}) &\in \rho\partial J_{\tilde{\lambda}} (\hat{\beta})\label{eq:glob1} \\
y-X\hat{\beta}-\hat{\mu} &\in \rho\partial J_\lambda (\hat{\mu}) \label{eq:glob2}
\end{empheq}
and
\begin{empheq}[left=\empheqlbrace]{align}
X^\top (y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}) & \in \rho\partial J_{\tilde{\lambda}} (\beta^{S^\star}) \label{eq:red1}\\
I_{S^\star}^\top (y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}) &\in \rho\partial J_{\lambda^{[s^\star]}} (\mu^{S^\star}) \label{eq:red2}
\end{empheq}
Clearly, Equation~\eqref{eq:red1} yields Equation~\eqref{eq:glob1} upon taking $\hat{\beta} = \beta^{S^\star}$ and $\hat{\mu} =~I_{S^\star}\mu^{S^\star}.$ We must now show that this choice of $\hat{\beta}$ and $\hat{\mu}$ satisfies Equation~\eqref{eq:glob2}.
First, $y-X\hat{\beta}-\hat{\mu}$ must be in the unit ball of the dual norm, that is~${y-X\hat{\beta}-\hat{\mu} \preccurlyeq \rho\lambda }$. Because $y-X\hat{\beta}-\hat{\mu}\in\mathds{R}^n$ is the concatenation of~${I_{S^\star}^\top (y-X\hat{\beta}-\hat{\mu})}$ and $I_{\bar{S^\star}}^\top (y-X\hat{\beta}-\hat{\mu})$, we must check that $S^\star$ satisfies:
$$
I_{\overline{S^\star}}^\top (y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}) \preccurlyeq \rho\lambda^{-[s^\star]},
$$
where $\lambda^{-[s^\star]}$ is the end of the sequence in the global problem (omitting the first $s^\star $ terms).
If so, then noting that $a_1 \preccurlyeq b_1$ and $a_2 \preccurlyeq b_2$ imply $a\preccurlyeq b$ (with $a$ and $b$ being the respective concatenations of $a_1,a_2$ and $b_1,b_2$), and combining this with Equation~\eqref{eq:red2}, we obtain membership in the unit ball of the dual norm.
Equivalently, we must check that
$$
y_{\overline{S^\star}} - X_{\overline{S^\star}, \cdot}\beta^{S^\star} \preccurlyeq \rho\lambda^{-[s^\star]},
$$
that is,
\begin{equation}
\label{dual}
X_{\overline{S^\star}, \cdot}(\beta^\star - \beta^{S^\star}) + I_{\overline{S^\star}, \cdot} \varepsilon \preccurlyeq \rho\lambda^{-[s^\star]}
\end{equation}
Lemma \ref{lem:A9}, together with the definition of the resolvent set $S^\star$ given in Definition \ref{def:res}, allows us to handle the second term to obtain, with probability tending to one:
$$
I_{\overline{S^\star}, \cdot} \varepsilon \preccurlyeq (\lambda^{\mathrm{BH}})^{-[s^\star]} \preccurlyeq \rho (\lambda^{\mathrm{BH}})^{-[s^\star]}.
$$
It remains to control the term $X_{\overline{S^\star}, \cdot}(\beta^\star - \beta^{S^\star}) $. For our purpose, it is sufficient to show that $\Vert X_{\overline{S^\star}, \cdot}(\beta^\star - \beta^{S^\star})\Vert_{\infty}$ tends to zero as $n$ goes to infinity, because in this case we would have $X_{\overline{S^\star}, \cdot}(\beta^\star - \beta^{S^\star}) \preccurlyeq \rho\epsilon (\lambda^{\mathrm{BH}})^{-[s^\star]} $ for $n$ large enough. Thus, let $i\in\{1,\dots, n\}$ and let $x_i$ denote the $i$th row of $X$; then we have:
$$
\vert\langle x_i, \beta^\star - \beta^{S^\star} \rangle \vert \leq \sum_{j=1}^p \vert x_{i,j}\vert \vert \beta^\star - \beta^{S^\star}\vert_{j} \leq \frac{M}{\sqrt{n}} \Vert\beta^\star - \beta^{S^\star}\Vert_1.
$$
Now we distinguish the three cases. For Procedure~\ref{eq:procedure_no_beta}, we do not assume sparsity on $\beta$, so we rely on the Cauchy--Schwarz inequality to obtain
\begin{equation*}
\vert\langle x_i, \beta^\star - \beta^{S^\star} \rangle\vert \leq \frac{M}{\sqrt{n}}\sqrt{p} \Vert\beta^\star - \beta^{S^\star}\Vert_2.
\end{equation*}
For Procedure~\ref{eq:2}, Equation~\eqref{eq:cone_lasso_slope} leads to the bound
\begin{equation*}
\vert\langle x_i, \beta^\star - \beta^{S^\star} \rangle\vert \leq \frac{M}{\sqrt{n}}C\big(k\log p \vee s\log(2en/s)\big) \big(\Vert\beta^\star - \beta^{S^\star}\Vert_2 \vee \Vert\mu^\star - \mu^{S^\star}\Vert_2\big),
\end{equation*}
with $C$ being some positive constant.
For Procedure~\ref{eq:procedure_with_two_slopes}, Equation~\eqref{eq:cone_with_two_slope} leads to the bound
\begin{equation*}
\vert\langle x_i, \beta^\star - \beta^{S^\star} \rangle\vert \leq \frac{M}{\sqrt{n}}C'\big(k\log (2ep/k) \vee s\log(2en/s)\big) \big(\Vert\beta^\star - \beta^{S^\star}\Vert_2 \vee \Vert\mu^\star - \mu^{S^\star}\Vert_2\big),
\end{equation*}
with $C'$ being some positive constant.
Therefore, thanks to the upper bounds obtained in Section~\ref{SEC:BOUNDS}, the coordinates are uniformly bounded by a quantity tending to zero in each of the three cases of the theorem. It then suffices to choose $n$ such that $\abs{\langle x_i, \beta^\star - \beta^{S^\star} \rangle} \leq \rho\epsilon\lambda^{\mathrm{BH}}_n$ (note that the right-hand side does not depend on $n$: it equals $\rho\epsilon\Phi^{-1}(1-q/2)$) to finally obtain Equation~\eqref{dual}.
Note that Equation~\eqref{dual} is the necessary condition for $y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}$ to be feasible (meaning in the unit ball $\mathcal{C}_\lambda$ of the dual norm of $J_\lambda$), but it is also sufficient for membership in the subdifferential because
$$
\partial J_\lambda (x) = \{ \omega\in\mathcal{C}_\lambda \;: \; \langle \omega, x \rangle = J_\lambda (x) \},
$$
and since, by Equation~\eqref{eq:red2}, we have:
$$
\langle I_{S^\star}^\top (y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}), \mu^{S^\star} \rangle = J_{\lambda^{[s^\star]}}(\mu^{S^\star}),
$$
then:
$$
\langle y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}, I_{S^\star}\mu^{S^\star} \rangle = J_{\lambda}(I_{S^\star} \mu^{S^\star}).
$$
Therefore, with probability tending to one, $\hat{\mu} = I_{S^\star}\mu^{S^\star}$ and in particular
\begin{equation}
\label{subset_fdr}
\supp(\hat{\mu}) \subset S^\star.
\end{equation}
We now show that the support of $\hat{\mu}$ contains the support of $\mu^\star$.
Considering Equation~\eqref{eq:red2}, we have in particular membership in the unit ball of the dual norm, that is:
$$
I_{S^\star}^\top (y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}) \preccurlyeq \rho\lambda^{[s^\star]}.
$$
In particular we have
$$
\Vert I_{S^\star}^\top (y-X\beta^{S^\star}-I_{S^\star}\mu^{S^\star}) \Vert_\infty \leq \rho\lambda_1.
$$
Since $y = X\beta^\star + \mu^\star + \varepsilon = X\beta^\star + I_{S^\star}(\mu^\star)_{S^\star} + \varepsilon$, the inequality above can be rewritten as:
$$
\Vert X_{S^\star,\cdot}(\beta^\star - \beta^{S^\star}) + \mu_{S^\star}^\star - \mu^{S^\star} + I_{S^\star}^\top \varepsilon \Vert_\infty \leq \rho\lambda_1.
$$
By the triangle inequality, we obtain:
$$
\Vert \mu_{S^\star}^\star - \mu^{S^\star} \Vert_\infty \leq \rho\lambda_1 + \Vert X_{S^\star,\cdot}(\beta^\star - \beta^{S^\star}) + I_{S^\star}^\top \varepsilon \Vert_\infty \leq \rho\lambda_1 + \Vert X_{S^\star,\cdot}(\beta^\star - \beta^{S^\star}) \Vert_\infty + \Vert I_{S^\star}^\top \varepsilon \Vert_\infty
$$
Now, we have already shown that $\Vert X_{S^\star,\cdot}(\beta^\star - \beta^{S^\star}) \Vert_\infty \leq \rho\epsilon\lambda^{\mathrm{BH}}_n \leq \rho\epsilon\lambda^{\mathrm{BH}}_1 $, and using the standard bound on the norm of a Gaussian noise (see Lemma~\ref{lm:Gauss2}), we also have, with probability at least $1-\frac{1}{n}$ (hence tending to one):
$$
\Vert I_{S^\star}^\top \varepsilon \Vert_\infty \leq \Vert \varepsilon \Vert_\infty\leq 2\sigma \sqrt{\log n}.
$$
Combining the previous inequalities leads to:
$$
\Vert \mu_{S^\star}^\star - \mu^{S^\star} \Vert_\infty \leq \rho\lambda_1 + \rho\epsilon\lambda^{\mathrm{BH}}_1 + 2\sigma\sqrt{\log n} = \rho(1+2\epsilon)\lambda^{\mathrm{BH}}_1 + 2\sigma\sqrt{\log n}.
$$
A standard bound for the Gaussian quantile function gives $\lambda^{\mathrm{BH}}_1 \leq \sigma \sqrt{2\log (2n/q)}$, so with $q\geq 2/n$ (a mild condition, since $q$ is typically at least $0.01$) we obtain:
$$
\Vert (\mu^\star)_{S^\star} - \mu^{S^\star} \Vert_\infty \leq (1+\rho(1+2\epsilon))2\sigma\sqrt{\log n}.
$$
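This quantile bound can be checked numerically. The sketch below (illustrative only) assumes the standard SLOPE/BH weight $\lambda^{\mathrm{BH}}_1=\sigma\,\Phi^{-1}(1-q/(2n))$, an assumption consistent with the surrounding text, and verifies $\lambda^{\mathrm{BH}}_1 \leq \sigma\sqrt{2\log(2n/q)}$ using only the Python standard library:

```python
import math
from statistics import NormalDist

def lambda_bh_1(n, q, sigma=1.0):
    """First BH weight: lambda_1^BH = sigma * Phi^{-1}(1 - q/(2n))
    (assumed form of the weight; Phi is the standard normal CDF)."""
    return sigma * NormalDist().inv_cdf(1 - q / (2 * n))

# the bound Phi^{-1}(1 - a) <= sqrt(2 log(1/a)) holds for a <= 1/2
for n in (10, 100, 10_000, 1_000_000):
    for q in (0.01, 0.05, 0.2):
        assert lambda_bh_1(n, q) <= math.sqrt(2 * math.log(2 * n / q))
```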
Therefore, because the nonzero entries of $\mu^\star$ have absolute values greater than the right-hand side of the above inequality, we obtain:
$$
S \subset \supp ((\mu^\star)_{S^\star}) \subset \supp (\mu^{S^\star}) \subset \supp (\hat{\mu}),
$$
and so the power tends to one.
It remains to show the FDR control, using Equation~\eqref{subset_fdr}.
Define the False Discovery Proportion (FDP) as $V/(R\vee 1)$, where $R$ and $V$ are defined in Equation~\eqref{eq:fdr}. Because of the inclusion $S \subset \supp (\hat{\mu})$, the FDP is~${(R-s)/R = 1-s/R}$ with probability tending to one. According to Equation~\eqref{subset_fdr} and the assumption on $s^\star$,
$$
\mathrm{FDP} = 1-\frac{s}{R} \leq 1-\frac{s}{s^\star} \leq 1-\frac{1-q}{1+c} = \frac{q+c}{1+c} \leq q+c,
$$
with probability tending to one.
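The algebra in the display above can be checked mechanically; the following sketch (an illustration using exact rational arithmetic, not part of the proof) verifies the identity $1-\frac{1-q}{1+c}=\frac{q+c}{1+c}$ and the final bound over a grid of admissible $(q,c)$:

```python
from fractions import Fraction

def fdp_bound_chain(q, c):
    """Check 1 - (1-q)/(1+c) == (q+c)/(1+c) <= q + c for q in [0,1), c >= 0."""
    lhs = 1 - (1 - q) / (1 + c)
    mid = (q + c) / (1 + c)
    return lhs == mid and mid <= q + c

assert all(fdp_bound_chain(Fraction(qn, 100), Fraction(cn, 100))
           for qn in range(0, 100, 7)
           for cn in range(0, 200, 13))
```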
Taking expectations and letting $n$ tend to infinity, we obtain:
$$
\limsup_{n\rightarrow +\infty}\mathrm{FDR}(\hat{\mu}) \leq q+c,
$$
and since $c$ can be taken arbitrarily close to zero, the conclusion follows.
\section{Supplementary simulations}
\label{appensim}
We gather here some additional simulations in low dimension to complete those of Section~\ref{subsec:simu}, with a higher FDR level and/or a higher correlation level for the design matrix. Since they are of particular interest to us, we focus on experiments with outliers of weak magnitude.
Figure~\ref{fig:lowdim_appen0} below is the same as in Section~\ref{subsec:simu} for Setting~1, except that the correlation of the design matrix is now higher ($0.8$).
Results are similar to those obtained in Section \ref{subsec:simu}: even with a higher correlation, E-SLOPE is able to make many more discoveries while keeping the FDR under control.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{V2lowdim_highcor_fdr0,05_weak.pdf}
\caption{Results for simulation Setting~1 with low-magnitude outliers and correlation $\rho = 0.8$. First row gives the FDR (left) and power (right) of each considered procedure for outliers discoveries.
Second row gives the MSE for regressors (left) and intercepts (right).
E-SLOPE gives perfect power, is the only one to respect the required FDR, and provides the best MSEs.}
\label{fig:lowdim_appen0}
\end{figure}
Figures~\ref{fig:lowdim_appen1} and~\ref{fig:lowdim_appen2} below gather the results of simulations in Setting~1 of Section~\ref{sec:simu} with other target FDR levels (here $10\%$ and $20\%$) and for both moderate and high correlation (respectively $0.4$ and $0.8$). E-LASSO and IPOD do not depend on the target FDR, so they are not plotted again. The results confirm that E-SLOPE provides high power together with FDR control for a wide range of target FDR levels.
The left (resp. right) columns contain the results for $\alpha = 10\%$ (resp. $\alpha = 20\%$) as indicated by the straight lines.
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{extrasim_lowdim_midcor.pdf}
\caption{Results for simulation Setting~1 with low-magnitude outliers, correlation $\rho = 0.4$. Left column gives the FDR (top) and power (bottom) for E-SLOPE with target FDR $\alpha = 10\%$.
Right column gives the FDR (top) and power (bottom) for E-SLOPE with target FDR $\alpha = 20\%$.}
\label{fig:lowdim_appen1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\linewidth]{extrasim_lowdim_highcor.pdf}
\caption{Results for simulation Setting~1 with low-magnitude outliers, correlation $\rho = 0.8$. Left column gives the FDR (top) and power (bottom) for E-SLOPE with target FDR $\alpha = 10\%$.
Right column gives the FDR (top) and power (bottom) for E-SLOPE with target FDR $\alpha = 20\%$.}
\label{fig:lowdim_appen2}
\end{figure}
\end{appendix}
\bibliographystyle{abbrv}
\indent
The Dirac equation utilizes a matrix algebra to construct a linear
relationship between the quantum operators for energy and momentum
in the equations of motion.
The properties of evolution dynamics described using such linear operations on quantum states are straightforward,
and have direct interpretations.
In particular, the cluster decomposability properties necessary for classical correspondence of relativistic quantum
systems are most directly realized using linear quantum operations\cite{JLFQG}\cite{LMNP}\cite{AKLN}.
It is therefore advantageous to extend the Dirac formulation to include operators
whose matrix elements reduce to the Dirac matrices for spin $1 \over 2$
systems, but generally require that the form $\hat{\Gamma}^\mu \: \hat{P}_\mu$
be a Lorentz scalar operation.
The finite dimensional representations
of the resulting extended Poincare group of transformations can be
constructed using the little group of operations ${\mathit{D}^{\lambda'}}_\lambda$ on the
standard state vectors, defining general transformations of the form
\beq{transformation}
\hat{U}(\underline{b}) \, \left | \psi_\lambda \: \vec{a} \right \rangle \: = \:
\sum_{\lambda'}^{} \, \left | \psi_{\lambda'} \: \vec{z}(\underline{b}; \vec{a}) \right \rangle
\, {\mathit{D}^{\lambda'}}_\lambda (\underline{b}; \vec{a} ) .
\ee
A spinor field equation will be demonstrated for configuration space
eigenstates of the operator $\hat{\Gamma}^\mu \: \hat{P}_\mu$ of the form
\be
\mathbf{\Gamma}^\beta \cdot {\hbar \over i} { \partial \over \partial x^\beta} \,
\hat{\mathbf{\Psi}}_{(\gamma)}^{(\Gamma)}
(\vec{x}) = -(\gamma) m c \, \hat{\mathbf{\Psi}}_{(\gamma)}^{(\Gamma)}(\vec{x}) .
\label{LinearConfigurationSpinorFieldEqn}
\ee
For the $\Gamma=\half$ representation, the matrix representations of the
operators $\mathbf{\Gamma}^\beta$ are just one half of the Dirac matrices,
and the particle type label takes values $\gamma = \pm \half$.
\section{An Extension of the Lorentz Group}
\indent
The finite dimensional representations of an extension of the Lorentz group will be
constructed by developing a spinor representation of the algebra.
The group elements $\underline{b}$ will include
3 parameters representing angles, 3 boost parameters, and 4
group parameters $\vec{\omega}$ associated with four operators $\hat{\Gamma}^\mu$.
\subsection{Extended Lorentz Group Commutation Relations}
For the extended Lorentz group, the commutation relations for angular momentum
and boost generators remain unchanged from those of the standard Lorentz group.
The additional extended group commutation relations will be chosen to be consistent
with the Dirac matrices as follows:
\bea
\left [ \Gamma^0 \, , \, \Gamma^k \right] \: = \: i \, K_k ,\\
\left [ \Gamma^0 \, , \, J_k \right] \: = \: 0 ,\\
\left [ \Gamma^0 \, , \, K_k \right] \: = \: -i \, \Gamma^k ,\\
\left [ \Gamma^j \, , \, \Gamma^k \right] \: = \: -i \, \epsilon_{j k m} \, J_m ,\\
\left [ \Gamma^j \, , \, J_k \right] \: = \: i \, \epsilon_{j k m} \, \Gamma^m ,\\
\left [ \Gamma^j \, , \, K_k \right] \: = \: -i \, \delta_{j k} \, \Gamma^0 .
\label{ExtLorentzGroupEqns}
\eea
A Casimir operator can be constructed for the extended Lorentz (EL) group
in the form
\be
C \: = \: \underline{J} \cdot \underline{J} \,-\, \underline{K} \cdot \underline{K}
\,+\, \Gamma^0 \, \Gamma^0 \,-\, \underline{\Gamma} \cdot \underline{\Gamma} .
\ee
This operator can directly be verified to commute with all generators of the group.
The operators $C$, $\Gamma^0$, and $J_z$ will be chosen as the set of mutually
commuting operators for the construction of the finite dimensional representations.
\subsection{Group metric of the extended Lorentz group\label{subsec:LorentzGrMetric}}
The group metric for the algebra represented by
\be
\left [ \hat{G}_r \, , \, \hat{G}_s \right ] \: = \: -i \, \left ( c_s \right ) _r ^m \, \hat{G}_m
\ee
can be developed from the adjoint representation in terms of the structure constants:
\be
\eta_{a b} \: \equiv \: \left ( c_a \right )_r ^s \, \left ( c_b \right )_s ^r .
\label{GroupMetricEqn}
\ee
The non-vanishing components of the extended Lorentz group metric are
given by
\bea
\eta^{(EL)} _{J_m \, J_n} \: = \: -6 \, \delta_{mn} \quad , \quad &
\eta^{(EL)} _{K_m \, K_n} \: = \: +6 \, \delta_{mn} \quad , \quad &
\eta^{(EL)} _{\Gamma^\mu \, \Gamma^\nu} \: = \: +6 \, \eta_{\mu \nu} \, .
\eea
It is important to note that the group structure of the extended Lorentz
group generates the Minkowski metric $\eta_{\mu \nu}$ as a direct
consequence of the group structure. Neither the group structure of
the usual Lorentz group nor that of the Poincare group can \emph{generate}
the Minkowski metric in this way, owing to the abelian nature of the
generators of infinitesimal space-time translations.
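One way to check the quoted metric components is to extract the structure constants from a faithful matrix model of the algebra (the $\Gamma=\half$ spinor representation written out later in the text) and evaluate (\ref{GroupMetricEqn}) directly. The sketch below is an illustration of that approach; the $\Gamma^\mu$ block comes out as $6\,\mathrm{diag}(-1,1,1,1)$, i.e.\ proportional to the Minkowski metric in the mostly-plus signature convention:

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

def blk(a, b, c, d):
    return np.block([[a, b], [c, d]])

# Gamma = 1/2 spinor representation, used as a faithful model of the algebra
J = [0.5 * blk(s, Z2, Z2, s) for s in (s1, s2, s3)]
K = [-0.5j * blk(Z2, s, s, Z2) for s in (s1, s2, s3)]
G0 = 0.5 * blk(I2, Z2, Z2, -I2)
G = [0.5 * blk(Z2, s, -s, Z2) for s in (s1, s2, s3)]
gens = J + K + [G0] + G          # ordering: J1..J3, K1..K3, G^0, G^1..G^3

def comm(a, b):
    return a @ b - b @ a

# Structure constants from [G_a, G_b] = -i f[a,b,m] G_m, using the fact
# that these ten matrices are trace-orthogonal with tr(G_m G_m) = +-1.
t = [np.trace(x @ x).real for x in gens]
n = len(gens)
f = np.zeros((n, n, n))
for a in range(n):
    for b in range(n):
        cab = comm(gens[a], gens[b])
        for m in range(n):
            f[a, b, m] = (1j * np.trace(cab @ gens[m]) / t[m]).real
        # check the decomposition closes on the ten generators
        assert np.allclose(cab, -1j * np.einsum('m,mij->ij', f[a, b], np.array(gens)))

# Group metric eta_ab = (c_a)_r^s (c_b)_s^r with (c_a)_r^m = f[r, a, m]
eta = np.einsum('ras,sbr->ab', f, f)
expected = np.diag([-6, -6, -6, 6, 6, 6, -6, 6, 6, 6])
assert np.allclose(eta, expected)
```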
\subsection{Symmetry Behavior of Spinor Forms}
The substitution of the
angular momenta $J_k$ and $\Gamma^0$ with barred operators
identified as
\bea
\underline{J} \: \leftrightarrow \: \underline{\bar{J}} \\
\Gamma^0 \: \leftrightarrow \: -\bar{\Gamma}^0
\eea
will preserve the commutation relations (\ref{ExtLorentzGroupEqns}).
For the Dirac case, this is seen to
represent a ``particle-antiparticle'' symmetry of the system, and it
represents a general symmetry under negation of the eigenvalues of
the operator $\Gamma^0$.
\subsection{Finite dimensional spinor representations}
Spinor representations of this extension of the Lorentz group will next be constructed.
\subsubsection{Number of States}
The order of the spinor polynomial of the finite dimensional state
of the $\Gamma=J_{max}$ representation
can be determined by examining the minimal state from which other
states can be constructed using the raising operators and orthonormality. This
minimal state takes the form
\be
\psi_{-\Gamma , -\Gamma} ^{(\Gamma)} \: = \: A^{(\Gamma J)} \chi_- ^{(-) 2 \Gamma} ,
\ee
where the $\chi_\pm ^{(\pm)}$ represent the four spinor states.
The lower index $\pm$ labels the angular momentum basis, while the upper
index labels eigenvalues of $\Gamma^0$.
The general state involves spinor products of the type
\be
\chi_+ ^{(+) a} \chi_- ^{(+) b} \chi_+ ^{(-) c} \chi_- ^{(-) d}.
\ee
A complete basis of states then requires that $a+b+c+d=2 \Gamma$. By direct counting
this yields the number of states for a complete basis:
\be
N_\Gamma \: = \: {1 \over 3} (\Gamma + 1) (2 \Gamma + 1) (2 \Gamma + 3).
\ee
For instance, $N_0 = 1, N_{1 \over 2} = 4, N_1 = 10, N_{3 \over 2} = 20$, and so on.
A single J basis with $(2 J + 1)^2$ states does not cover this space of spinors. However,
one can directly verify that
\be
N_{J_{max}} \: = \: \sum_{J=J_{min}}^{J_{max}} (2 J + 1)^2 ,
\ee
where $J_{min}$ is zero for integral systems and $1 \over 2$ for half integral systems.
Thus one can conclude that $\Gamma$ represents the maximal angular momentum state of the system:
\be
J \: \le \: \Gamma = J_{max} .
\ee
Higher order representations will include states of differing angular momenta
with common quantum statistics.
\subsubsection{Spinor metrics}
Invariant amplitudes are usually defined using dual spinors so that
the inner product is a scalar under the
transformation rule for the spinors $\mathbf{D}$:
\be
<\bar{\psi} | \phi > \: = \: <\bar{\psi'} | \phi'> \textnormal{, or } ~
\psi_a ^\dagger g_{ab} \phi_b \: = \:
\left( D_{ca} \psi_a \right) ^\dagger g_{cd}
\left( D_{db} \psi_b \right) .
\ee
This means the \emph{spinor} metric $\mathbf{g}$ should satisfy
\be
\mathbf{g} \: = \: \mathbf{D^\dagger g D} .
\ee
The Dirac conjugate spinor $\bar{\psi}\equiv \psi^\dagger \mathbf{g}$ includes this spinor metric.
The eigenvalues of the hermitian angular momentum operators $\underline{J}$ and $\Gamma^0$
are given by real numbers
\be
\underline{J} ^\dagger=\underline{J} \quad , \quad
\Gamma^{0 \, \dagger}=\Gamma^0 ,
\ee
which requires that the finite dimensional representations satisfy
\be
\mathbf{g \, \Gamma^0} \, = \, \mathbf{\Gamma^0 \, g} \quad , \quad
\mathbf{g \, \underline{J} } \, = \, \mathbf{\underline{J} \, g} ~ .
\ee
The spinor metric therefore takes the general form
\be
g_{a \, a'} ^{(\Gamma J)} \: = \: (-)^{\Gamma-\gamma}
\delta_{\gamma \gamma'} \delta_{s_z,s_z'},
\ee
using the quantum number shorthand $a:\{ \gamma, \, s_z \}$.
One can show that the general form of this spinor metric
anti-commutes with the boost generator and spatial components
of the $\underline{\Gamma}$ matrices:
\be
\mathbf{g \, \underline{\Gamma} } \, = \,- \mathbf{\underline{\Gamma} \, g} \quad , \quad
\mathbf{g \, \underline{K} } \, = \, -\mathbf{\underline{K} \, g} .
\ee
For the Dirac representation, the spinor metric takes the form of the
Dirac matrix $\mathbf{\gamma}^0$. However, the spinor metric is
not related to $\Gamma^0$ for higher spin representations.
\subsubsection{Representation of $\Gamma={1 \over 2}$ systems}
The forms of the matrices corresponding to $\Gamma={1 \over 2}$ are
expected to have dimensionality $N_{1 \over 2}=4$, and
can be expressed in terms of the Pauli spin matrices $\sigma_j$ as shown below:
\be
\begin{array}{ll}
\mathbf{\Gamma^0} \,=\, {1 \over 2} \left( \begin{array}{cc}
\mathbf{1} & \mathbf{0} \\ \mathbf{0} & -\mathbf{1} \end{array} \right)
\,=\, {1 \over 2} \mathbf{g} \quad \quad &
\mathbf{J}_j \,=\, {1 \over 2} \left( \begin{array}{cc}
\sigma_j & \mathbf{0} \\ \mathbf{0} & \sigma_j
\end{array} \right) \\ \\
\mathbf{\Gamma}^j \,=\, {1 \over 2} \left( \begin{array}{cc}
\mathbf{0} & \sigma_j \\
-\sigma_j & \mathbf{0} \end{array} \right) &
\mathbf{K}_j \,=\, -{i \over 2} \left( \begin{array}{cc}
\mathbf{0} & \sigma_j \\
\sigma_j & \mathbf{0} \end{array} \right)
\end{array}
\label{4x4RepresentationEqn}
\ee
The $\Gamma^\mu$ matrices can directly be seen to be proportional
to a representation of the Dirac matrices\cite{Dirac}\cite{BjDrell}.
A representation for $\Gamma=1$ can be found in reference \cite{JLFQG}.
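As a sanity check (an illustration, not part of the original derivation), the $4\times 4$ matrices above can be verified numerically to satisfy the commutation relations (\ref{ExtLorentzGroupEqns}), the spinor-metric (anti)commutation relations, and the proportionality of the Casimir operator to the identity:

```python
import numpy as np

# Pauli matrices and 2x2 blocks
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

def blk(a, b, c, d):
    return np.block([[a, b], [c, d]])

# the Gamma = 1/2 representation displayed above
G0 = 0.5 * blk(I2, Z2, Z2, -I2)
J = [0.5 * blk(s, Z2, Z2, s) for s in (s1, s2, s3)]
G = [0.5 * blk(Z2, s, -s, Z2) for s in (s1, s2, s3)]
K = [-0.5j * blk(Z2, s, s, Z2) for s in (s1, s2, s3)]
g = blk(I2, Z2, Z2, -I2)                 # spinor metric; here g = 2 * G0

def comm(a, b):
    return a @ b - b @ a

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
    eps[i, j, k], eps[i, k, j] = 1, -1

for j in range(3):
    assert np.allclose(comm(G0, G[j]), 1j * K[j])        # [G^0, G^k] = i K_k
    assert np.allclose(comm(G0, J[j]), 0)                # [G^0, J_k] = 0
    assert np.allclose(comm(G0, K[j]), -1j * G[j])       # [G^0, K_k] = -i G^k
    # spinor-metric relations: g commutes with J, anticommutes with G^j, K
    assert np.allclose(g @ J[j], J[j] @ g)
    assert np.allclose(g @ G[j], -G[j] @ g)
    assert np.allclose(g @ K[j], -K[j] @ g)
    for k in range(3):
        Jm = sum(eps[j, k, m] * J[m] for m in range(3))
        Gm = sum(eps[j, k, m] * G[m] for m in range(3))
        assert np.allclose(comm(G[j], G[k]), -1j * Jm)   # [G^j, G^k] = -i e_{jkm} J_m
        assert np.allclose(comm(G[j], J[k]), 1j * Gm)    # [G^j, J_k] = i e_{jkm} G^m
        assert np.allclose(comm(G[j], K[k]), -1j * (j == k) * G0)

# Casimir C = J.J - K.K + G0 G0 - G.G is proportional to the identity
C = sum(a @ a for a in J) - sum(a @ a for a in K) + G0 @ G0 - sum(a @ a for a in G)
assert np.allclose(C, C[0, 0] * np.eye(4))
```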
\section{An Extension of the Poincare Group}
\indent
Once space-time translations are included in the group algebra, these additional commutation
relations must result in a self-consistent set of generators\cite{JLFQG}.
The extended Lorentz group structure can be minimally expanded to include
space-time translations as long as all operators continue to
satisfy the algebraic Jacobi identities,
\be
[\hat{A},[\hat{B},\hat{C}]] ~+~[\hat{C},[\hat{A},\hat{B}]]~+~
[\hat{B},[\hat{C},\hat{A}]]~=~0.
\ee
An attempt to only include the 4-momentum operators
in addition to the extended Lorentz group operators
does not produce a closed group structure, due to Jacobi relations of the
type $[\hat{P}_j , [\hat{\Gamma} ^ 0 , \hat{\Gamma} ^k] ]$. The non-vanishing of
this commutator in the Jacobi identity implies a non-vanishing commutator
between operators $\Gamma ^\mu$ and $P_\nu$, and that this commutator must connect
to an operator which then has a commutation relation with $\Gamma ^\mu$ that yields a
4-momentum operator $P_\beta$.
Since the 4-momentum operators self-commute,
at least one additional operator, which will be referred to as $\mathcal{M}_T$,
must be introduced.
The additional non-vanishing commutators involving Jacobi consistent operators $\hat{P}_\mu$ and
$\hat{\mathcal{M}}_T$ are given by
\bea
\left [ J_j \, , \, P_k \right] \: = \: i \hbar \, \epsilon_{j k m} \, P_m ,
\label{JPeqn} \\
\left [ K_j \, , \, P_0 \right] \: = \: -i \hbar \, P_j ,
\label{KP0eqn} \\
\left [ K_j \, , \, P_k \right] \: = \: -i \hbar \, \delta_{j k} \, P_0 ,
\label{KPeqn} \\
\left [ \Gamma^\mu \, , \, P_\nu \right] \: = \: i \, \delta_\nu ^\mu \, \mathcal{M}_T c,
\label{GamPeqn} \\
\left [ \Gamma^\mu \, , \, \mathcal{M}_T \right] \: = \: {i \over c} \, \eta^{\mu \nu} \, P_\nu .
\label{GamGeqn}
\eea
The first three of these relations are identical to those of the Poincare group. The final two
relations consistently incorporate the additional operator $\hat{\mathcal{M}}_T$
needed to close the algebra.
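As a consistency check, the Jacobi identity for the triple
$(\hat{\Gamma}^\mu, \hat{P}_\nu, \hat{\mathcal{M}}_T)$ is satisfied term by term,
using only the commutators above and the mutual commutativity of
$\hat{P}_\nu$ and $\hat{\mathcal{M}}_T$:
\be
[\hat{\Gamma}^\mu,[\hat{P}_\nu,\hat{\mathcal{M}}_T]] \: = \: 0 , \quad
[\hat{\mathcal{M}}_T,[\hat{\Gamma}^\mu,\hat{P}_\nu]] \: = \:
[\hat{\mathcal{M}}_T, \, i \, \delta_\nu^\mu \, \hat{\mathcal{M}}_T c] \: = \: 0 , \quad
[\hat{P}_\nu,[\hat{\mathcal{M}}_T,\hat{\Gamma}^\mu]] \: = \:
[\hat{P}_\nu, \, -{i \over c} \, \eta^{\mu \beta} \hat{P}_\beta] \: = \: 0 .
\ee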
\subsection{Invariants and group metric}
As demonstrated in section \ref{subsec:LorentzGrMetric},
given the structure constants defining the commutation relationships of the generators,
a metric for the complete group $\eta_{s \, n}$ can be developed using (\ref{GroupMetricEqn}).
The non-vanishing group metric elements
generated by the structure constants of this extended Poincare group are
given by
\bea
\eta^{(EP)} _{J_m \, J_n} \: = \: -8 \, \delta_{m,n} &
\eta^{(EP)} _{K_m \, K_n} \: = \: +8 \, \delta_{m,n}
\eea
\be
\eta^{(EP)} _{\Gamma^\mu \, \Gamma^\nu} \: = \: 8 \, \eta_{\mu \, \nu}
\ee
where $\eta_{\mu \, \nu}$ is the usual Minkowski metric of the Lorentz group.
The Minkowski metric is non-trivially generated by the extended Lorentz group algebra.
This \emph{group theoretic} metric can be used to develop Lorentz invariants using the
operators $\Gamma^\mu$. Since $\Gamma^\mu P_\mu$ is also Lorentz invariant,
the group transformation properties of the generators $P_\mu$, as well as their
canonically conjugate translations $x^\mu$, are direct consequences of the group properties
of the extended Poincare group. The standard Poincare group has no non-commuting operators
that can be used to connect the group structure to the metric properties of space-time
translations.
\subsection{Unitary quantum states}
A Casimir operator for the complete group can be constructed
using the Lorentz invariants given by
\be
\mathcal{C}_m \: \equiv \: \mathcal{M}_T^2 c^2 \,-\, \eta^{\beta \nu} P_\beta P_\nu .
\label{Casimir_mu}
\ee
The label $m$ in the Casimir operator $\mathcal{C}_m$ parameterizes the eigenstates
that can be developed to construct a finite dimensional representation.
The form of this group invariant suggests that the
hermitian operator $\mathcal{M}_T$ is a
\emph{transverse mass} parameter of the state, which can have a non-vanishing value
for massless states $\eta^{\beta \nu} P_\beta P_\nu =0$.
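For instance, using (\ref{GamPeqn}) and (\ref{GamGeqn}) one verifies directly
that $\hat{\mathcal{C}}_m$ commutes with the generators $\hat{\Gamma}^\mu$:
\be
[\hat{\Gamma}^\mu, \hat{\mathcal{M}}_T^2 c^2 ] \: = \:
i c \, \eta^{\mu \nu} \left ( \hat{\mathcal{M}}_T \hat{P}_\nu +
\hat{P}_\nu \hat{\mathcal{M}}_T \right ) \: = \:
[\hat{\Gamma}^\mu, \eta^{\beta \nu} \hat{P}_\beta \hat{P}_\nu ] ,
\ee
so that $[\hat{\Gamma}^\mu, \hat{\mathcal{C}}_m] = 0$.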
A set of quantum state vectors labeled by mutually commuting operators
is given by
\be
\begin{array}{l}
\hat{\mathcal{C}}_m \, \left | m, \Gamma, \gamma, J, s_z \right > ~=~
m^2 c^2 \, \left | m, \Gamma, \gamma, J, s_z \right > , \\
\hat{C}_\Gamma \,\left | m, \Gamma, \gamma, J, s_z \right > ~=~
2 \Gamma (\Gamma + 2) \, \left | m, \Gamma, \gamma, J, s_z \right > , \\
\hat{\Gamma}^0 \, \left | m, \Gamma, \gamma, J, s_z \right > ~=~
\gamma \, \left | m, \Gamma, \gamma, J, s_z \right >, \\
\hat{J}^2 \, \left | m, \Gamma, \gamma, J, s_z \right > ~=~
J(J+1) \hbar^2 \,\left | m, \Gamma, \gamma, J, s_z \right >, \\
\hat{J}_z \, \left | m, \Gamma, \gamma, J, s_z \right > ~=~
s_z \hbar \, \left | m, \Gamma, \gamma, J, s_z \right >,
\end{array}
\label{StateVectorEqn}
\ee
where $m^2$ is generally a continuous real parameter, and
all other parameters are discrete. $\Gamma$ is an integral or
half-integral label of the representation of the extended Lorentz
group, $J$ labels the internal angular momentum representation of the state,
and $J$ has the same integral signature as $\Gamma$.
Eigenvalues of the group Casimir $\hat{\mathcal{C}}_m$ and $\hat{J}^2$
will be used to label an arbitrary standard state vector.
An additional invariant can be constructed from the pseudo-vector
\be
\begin{array}{l}
\hat{W}_\alpha ~\equiv~ i \epsilon_{\alpha \beta \mu \nu}~
\hat{\Gamma}^\beta ~\hat{\Gamma}^\mu \, \eta^{\nu \lambda} \hat{P}_\lambda , \\
\hat{W}_0 ~=~ \underline{\hat{J}} \cdot \underline{\hat{P}} , \\
\underline{\hat{W}}~=~ \underline{\hat{K}} \times \underline{\hat{P}} ~+~
\underline{\hat{J}} \hat{P}_0 ,
\end{array}
\label{SpinOperator}
\ee
where the antisymmetric tensor $\epsilon_{\alpha \beta \mu \nu}$ is
defined by
\be
\epsilon_{\alpha \beta \mu \nu} ~\equiv ~\left \{
\begin{array}{ll}
+1 & \textnormal{for } (\alpha \beta \mu \nu) \textnormal{ an even permutation
of (0,1,2,3)}, \\
-1 & \textnormal{for } (\alpha \beta \mu \nu) \textnormal{ an odd permutation
of (0,1,2,3)}, \\
~~ 0 & \textnormal{for any two indexes equal} .
\end{array}
\right .
\ee
The Lorentz invariant $\hat{W}^2 \equiv \hat{W}_\alpha \eta^{\alpha \beta}\hat{W}_\beta$
commutes with $\hat{P}_\beta$ and $\hat{\mathcal{M}}_T$, since
$[\hat{W}_\beta, \hat{P}_\mu]=0=[\hat{W}_\beta, \hat{\mathcal{M}}_T]$.
The covariant 4-vector $\hat{W}_\alpha$ is orthogonal to the 4-momentum operator,
$\hat{P}_\mu \eta^{\mu \nu} \hat{W}_\nu=0$,
due to the antisymmetric form defining $\hat{W}_\alpha$.
When acting upon a massive particle state at rest (which has a time-like
4-momentum), the 4-vector $W_\alpha$
is seen to be a space-like vector whose invariant length is related to the particle's
spin times its mass.
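Explicitly, for a massive particle at rest with covariant 4-momentum components
$(-mc,0,0,0)$, the components in (\ref{SpinOperator}) reduce to
\be
\hat{W}_0 \: \rightarrow \: 0 \quad , \quad
\underline{\hat{W}} \: \rightarrow \: -mc \, \underline{\hat{J}} \quad , \quad
\hat{W}^2 \: \rightarrow \: m^2 c^2 \, \hat{J}^2 \: = \: m^2 c^2 \hbar^2 \, J(J+1) ,
\ee
exhibiting this relationship between invariant length, spin, and mass.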
Unitary representations of general momentum states are obtained by
boosting standard states satisfying (\ref{StateVectorEqn}). Standard massive states
have covariant 4-momentum components $\vec{p}^{(s)}=(-m c,0,0,0)$ and vanishing
eigenvalue of transverse mass $ \hat{\mathcal{M}}_T$ labeled $m_T=0$.
Standard massless states have covariant 4-momentum components $\vec{p}^{(s)}=(-1,0,0,1)$
with an eigenvalue of transverse mass operator given by $m_T$ that need not vanish.
General Lorentz transformations on the 4-momentum eigenstates satisfy
\be
U (\mathbf{\Lambda}^{(L)}) \, | \vec{p}, (m_T), m, J, \kappa \rangle ~=~
\sum_{\kappa'} | \mathbf{\Lambda}^{(L)}\vec{p}, (m_T), m, J, \kappa' \rangle~
Q_{\kappa' \kappa}^{(s)} (\mathbf{\Lambda}^{(L)}, \vec{p}) ,
\label{QLambdaP}
\ee
where the matrices $Q_{\kappa' \kappa}^{(s)}$ with
discrete indices $\kappa$ describe the finite dimensional, unitary transformations on the
standard momentum state of the system. The Casimir label $m$ satisfies
$m^2c^2=m_T^2c^2 - \vec{p} \cdot \vec{p}$ for a state with
4-momentum $\vec{p}$.
\subsection{Linear Wave Equation for Single Particle States}
Eigenstates of the operator $\Gamma^\mu P_\mu$ will give linear operator dispersion relations
for energy and momenta in a spinor wave equation. The commutators of the various group generators with
this Lorentz invariant operator are given by
\be
\left [ J_k, \Gamma^\mu P_\mu \right ] \: = \: 0
\label{JDirac}
\ee
\be
\left [ K_k, \Gamma^\mu P_\mu \right ] \: = \: 0
\label{KDirac}
\ee
\be
\left [ P_\beta, \Gamma^\mu P_\mu \right ] \: = \:
-i \mathcal{M}_T P_\beta
\label{PDirac}
\ee
\be
\left [ \mathcal{M}_T, \Gamma^\mu P_\mu \right ] \: = \:
-i \eta^{\beta \nu} P_\beta P_\nu
\label{MTDirac}
\ee
One should note that, from (\ref{MTDirac}), the transverse mass
only commutes with
$\Gamma^\mu P_\mu$ for massless particles.
Similarly, from (\ref{PDirac}) the 4-momentum operator
only commutes with
$\Gamma^\mu P_\mu$ if the transverse mass vanishes.
Therefore, only massless states can have non-vanishing transverse mass values
$m_T \neq 0$.
The transverse mass operator $\hat{\mathcal{M}}_T$ is the generator for translations of the affine parameter labeling the
trajectory of a massless particle.
Spinor forms of the quantum state vectors and matrix representations of the operators
can be developed in the usual manner:
\be
\langle \chi_a | \hat{\Gamma}^\mu | \chi_b \rangle \equiv (\mathbf{\Gamma}^\mu )_{a b}
~,~
\mathbf{\Psi}_a^{(\Gamma)}(\vec{p}, J, \kappa) \equiv
\langle \chi_a | \vec{p}, m, J, \kappa \rangle .
\ee
This results in a momentum-space form of the spinor equation given by
\be
\mathbf{\Gamma}^\mu \hat{P}_\mu \mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{p}, J, \kappa) =
(\gamma) m c \, \mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{p}, J, \kappa) ,
\label{LinSpinMomentumSpEqn}
\ee
where $(\gamma)$ is the eigenvalue of $\hat{\Gamma}^0$ for massive particle states.
It is worth noting that this linear spinor formulation does not have negative energy
solutions. Rather, a sign is associated with the particle type eigenvalue $\gamma$.
Therefore, straightforward interpretations of the energetics of particles can be made without
introducing any filled Dirac sea of fermions to prevent transitions from positive energy
states. There is no need to introduce any additional degrees
of freedom to stabilize the ground state once radiative coupling is included.
The configuration-space representation of (\ref{LinSpinMomentumSpEqn}) takes the form
\be
\mathbf{\Gamma}^\mu {\hbar \over i} {\partial \over \partial x^\mu}
\mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{x}) =
(\gamma) m c \, \mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{x}) .
\label{LinSpinSpaceTimeEqn}
\ee
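In particular, substituting a plane-wave form recovers the momentum-space
equation (\ref{LinSpinMomentumSpEqn}):
\be
\mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{x}) \: = \:
\mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{p}) \, e^{{i \over \hbar} p_\mu x^\mu}
\quad \Rightarrow \quad
\mathbf{\Gamma}^\mu p_\mu \, \mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{p}) \: = \:
(\gamma) m c \, \mathbf{\Psi}_{(\gamma)}^{(\Gamma)}(\vec{p}) ,
\ee
since ${\hbar \over i} {\partial \over \partial x^\mu}$ acting on the phase
produces $p_\mu$.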
The spinor fields satisfy microscopic causality as long as their quantum statistics is
fermionic for $\Gamma$ half-integral, and bosonic for $\Gamma$ integral.
The transformation properties of the fields under the improper Lorentz transformations
of parity and time reversal, as well as under charge conjugation, can be developed in
a straightforward manner\cite{JLFQG}.
A form of a Lagrangian for gravitating linear spinor fields with local gauge symmetries
will next be displayed.
One can define spinor-valued geometric matrices of the form
$\mathbf{U}^\beta (x) \equiv \mathbf{\Gamma}^{\hat{\mu}} { \partial x^\beta
\over \partial \xi^{\hat{\mu}} }$, where the $\xi^{\hat{\mu}}$ represent
locally flat coordinates.
A gauge covariant Lagrangian density for a particle with Casimir label $m$ is given by
\be
\mathcal{L}_{m} = {1 \over 2 \Gamma} {\hbar c \over i} \left [
\bar{\mathbf{\Psi}}_{(\gamma)}^{(\Gamma)} \mathbf{U}^\beta \left (
\partial_\beta - {q \over \hbar c} A_\beta^r \, i \mathbf{G}_r
\right ) \mathbf{\Psi}_{(\gamma)}^{(\Gamma)} -
(\textnormal{c. c.})
\right ] + {(\gamma) \over \Gamma} m c^2 \,
\bar{\mathbf{\Psi}}_{(\gamma)}^{(\Gamma)} \mathbf{\Psi}_{(\gamma)}^{(\Gamma)} ,
\label{LinearSpinorGaugeFieldGravLagrangian}
\ee
where (c. c.) specifies the complex conjugate of the previous expression, and the
matrices $\mathbf{G}_r$ are hermitian generators of the local gauge group of symmetries for
the linear spinor field $ \mathbf{\Psi}_{(\gamma)}^{(\Gamma)}$. Such a Lagrangian form has
straightforward cluster decomposition properties when systems of mixed entanglements
are being described.
\subsection{Spinor Lie transformation algebra and the principle of equivalence}
Linear spinor fields are useful for describing
the micro-physics of gravitating systems for several reasons. One useful property is their cluster
decomposition properties that allow straightforward combinations of systems
with arbitrary degrees of quantum entanglements at varying times.
Another property is that the \emph{group} metric generated by the
operators $\hat{\Gamma}^{\tilde{\mu}}$ constructs the Minkowski metric. Because
operators like $\hat{\Gamma}^{\tilde{\mu}}\hat{P}_{\tilde{\mu}}$ are
invariant under the Lorentz subgroup of transformations, the
components of the momenta likewise transform in a manner
consistent with this metric defining \emph{subgroup} invariants. There is no analogous
Lorentz subgroup metric for the Poincare group, since there are no non-abelian
operators in that group to generate this metric.
Since the 4-momentum operators $\hat{P}_{\tilde{\mu}}$ transform like basis vectors,
the invariance of $\hat{P}_{\tilde{\mu}} \, \eta^{\tilde{\mu} \tilde{\nu}}\, \hat{P}_{\tilde{\nu}}$
will have significance defining the \emph{space-time} metric.
The relevant group properties of the extended Poincare group will be further explored in
this section.
The general extended Poincare group transformation can be characterized by
the 15 parameters
$\mathcal{P}_X \equiv \{ \vec{\omega}, \mathbf{U},\mathbf{\Theta}, \vec{a}, \alpha \}$
conjugate to generators $\{ \hat{\Gamma}^{\mu}, \hat{K}_{j}, \hat{J}_{k}, \hat{P}_{\nu},
\hat{\mathcal{M}}_T \}$ representing ``Dirac boosts'', Lorentz boosts, rotations,
space-time translations, and lightlike translations.
For brevity, the 5 mutually commuting extended translations will be
labeled by a barred parameter
$\bar{a}\equiv \{ \alpha, \vec{a} \}$, while the set of 10
extended Lorentz group parameters will be underlined $\underline{a}=
\{ \mathbf{\Theta}, \mathbf{U}, \vec{\omega} \}$. The overall group structure
will be examined for the product
transformation of pure translations with pure extended Lorentz group transformation
defined by the convention $\hat{U}(\mathcal{P}_X)
\equiv \hat{X}(\bar{a}) \hat{W} (\underline{a})$. A reversal
of this order results in a representation that is a similarity transformation on the
elements.
The individual subgroups have well-defined group operations within each subgroup, while
the overall group will have group operations based upon the convention:
\be
\begin{array}{c}
\hat{X}(\bar{b}) \, \hat{X}(\bar{a}) ~\equiv~ \hat{X}(\bar{\phi}_x (\bar{b}; \bar{a})) , \\
\hat{W}(\underline{b}) \, \hat{W}(\underline{a}) ~\equiv~ \hat{W}(\underline{\phi}_w (\underline{b}; \underline{a})) , \\
\hat{U}({\mathcal{P}_X} ') \, \hat{U}(\mathcal{P}_X) ~\equiv~ \hat{U}(\Phi ({\mathcal{P}_X} '; \mathcal{P}_X )) .
\end{array}
\ee
Using group properties, one can generally show that the translation group operation is independent of the initial
extended Lorentz group parameter $\underline{a}$,
$\bar{\Phi}_x (\bar{b}, \underline{b} ; \bar{a}) =
\bar{\phi}_x (\bar{b} ; \bar{\Phi}_x (\bar{I},\underline{b};\bar{a}))$, where
$I$ is the identity element. Likewise,
the extended Lorentz group operation is independent of the final translation $\bar{b}$ using this convention,
$\underline{\Phi}_w (\underline{b}; \bar{a},\underline{a}) =
\underline{\phi}_w (\underline{\Phi}_w (\underline{b}; \bar{a}, \underline{b}^{-1});
\underline{\phi}_w (\underline{b};\underline{a}))$.
The inverse group element to $\mathcal{P}_X=\{ \underline{a}, \bar{a} \}$, which
can be calculated using
$\hat{W}^{-1} \hat{X}^{-1} = \hat{W}^{-1} \hat{X}^{-1} \hat{W} \hat{W}^{-1}$, is given by
$\mathcal{P}_X^{-1}= \{ \underline{\Phi}_w (\underline{a}^{-1}; \bar{a}^{-1}, \underline{I}) ,
\bar{\Phi}_x (\bar{I}, \underline{a}^{-1}; \bar{a}^{-1}) \}$.
Group associativity for operations within the subgroups is expressed in the relationships
\bea
\bar{\Phi}_x (\bar{c},\underline{c} ; \bar{\Phi}_x (\bar{b},\underline{b};\bar{a}))=
\bar{\Phi}_x (\bar{\Phi}_x(\bar{c},\underline{c};\bar{b}),
\underline{\Phi}_w (\underline{c}; \bar{b},\underline{b}); \bar{a}) \quad ,
\label{xTranslAssocConditionEqn}\\
\underline{\Phi}_w (\underline{c}; \bar{\Phi}_x (\bar{b}, \underline{b} ; \bar{a}) ,
\underline{\Phi}_w (\underline{b}; \bar{a}, \underline{a}) ) =
\underline{\Phi}_w (\underline{\Phi}_w (\underline{c}; \bar{b}, \underline{b});
\bar{a},\underline{a}) ~.
\label{xLorentzAssocConditionEqn}
\eea
In particular,
the translationally independent group of transformations
\begin{displaymath}
\bar{\Phi}_x (\bar{I},\underline{c} ; \bar{\Phi}_x (\bar{I},\underline{b};\bar{x}))=
\bar{\Phi}_x (\bar{I} ,
\underline{\Phi}_w (\underline{c}; \bar{I},\underline{b}); \bar{x})
\end{displaymath}
forms a Lie transformation group
$\bar{x}' \equiv \bar{f}(\bar{x};\underline{b})=
\bar{\Phi}_x (\bar{I}, \underline{b}; \bar{x})$.
However, one should note that the full group of transformations is generally \emph{beyond}
those of a traditional Lie transformation group.
The transformations in gauge symmetries are typically represented by
traditional Lie transformation groups.
Given the group operation, the complete set of group parameters (like
structure constants, Lie structure matrices, transformation matrices, etc.)
can be constructed.
The matrices that define how the group transformations mix the generators of the
group $U(\mathcal{P}^{-1}) \, G_r \, U(\mathcal{P}) =
\oplus_r{}^s (\mathcal{P}) \, G_s$ are given by\cite{JLFQG}
\be
\oplus_r{}^s (\mathcal{P}) = \left . {\partial \over \partial \mathcal{P}^r{} '}
\Phi^s (\mathcal{P}^{-1} ; \Phi (\mathcal{P}' ; \mathcal{P}))
\right |_{\mathcal{P}' \rightarrow I} =
\left . {\partial \Phi^m (\mathcal{P}' ; \mathcal{P}) \over \partial \mathcal{P}^r{}'} \right |_{\mathcal{P}' =I}
\left . {\partial \Phi^s (\mathcal{P}^{-1} ; \mathcal{P}') \over \partial \mathcal{P}^m{} '}
\right |_{\mathcal{P}' =\mathcal{P}},
\ee
where it is convenient to define the Lie transformation matrices
$\Oline_{m}{}^{s} (\mathcal{P}) \equiv
\left . {\partial \Phi^s (\mathcal{P}^{-1} ; \mathcal{P}') \over \partial \mathcal{P}^m{} '}
\right |_{\mathcal{P}' =\mathcal{P}}$,
and $\Theta_r{}^m (\mathcal{P}) \equiv
\left . {\partial \Phi^m (\mathcal{P}' ; \mathcal{P}) \over \partial \mathcal{P}^r{}'} \right |_{\mathcal{P}' =I}$.
With these definitions, the transformation matrices for the generators satisfy
$\oplus_r{}^s (\mathcal{P})=\Theta_r{}^m (\mathcal{P}) \Oline_{m}{}^{s} (\mathcal{P}) $.
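These definitions can be checked concretely on any group whose composition
function is known in closed form. The sketch below uses a hypothetical
one-dimensional affine group $x \rightarrow a x + b$ (purely for illustration;
it is not the extended Poincare group) and verifies the chain-rule identity
$\oplus_r{}^s = \Theta_r{}^m \, \Oline_m{}^s$ by finite differences:

```python
import numpy as np

# Toy group: x -> a*x + b, elements P = (a, b), identity I = (1, 0).
def comp(P2, P1):
    """Group operation Phi(P2; P1): act first with P1, then with P2."""
    a2, b2 = P2
    a1, b1 = P1
    return np.array([a2 * a1, a2 * b1 + b2])

def inv(P):
    a, b = P
    return np.array([1.0 / a, -b / a])

I = np.array([1.0, 0.0])

def jac(f, P0, eps=1e-6):
    """Finite-difference Jacobian J[r, s] = d f^s / d P^r at P0."""
    J = np.zeros((2, 2))
    for r in range(2):
        dP = np.zeros(2)
        dP[r] = eps
        J[r] = (f(P0 + dP) - f(P0 - dP)) / (2 * eps)
    return J

P = np.array([2.0, 0.7])  # arbitrary group element

# oplus_r^s(P), Theta_r^m(P) and Obar_m^s(P) as defined in the text
oplus = jac(lambda Pp: comp(inv(P), comp(Pp, P)), I)
Theta = jac(lambda Pp: comp(Pp, P), I)
Obar = jac(lambda Pp: comp(inv(P), Pp), P)

print(np.allclose(oplus, Theta @ Obar, atol=1e-5))  # True
```

The identity holds here by the chain rule applied to the nested group
operation, independently of the particular group chosen.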
The transformations $\bar{\phi}_x (\bar{b}; \bar{a})=\bar{\Phi}_x (\bar{b}, \underline{I}; \bar{a})$
form an abelian subgroup of translations.
The extended translations all commute with each other, but they are mixed amongst each other
by the extended Lorentz transformations. This means that a general operation of the form
$F(\xi^\Lambda \hat{P}_\Lambda)$ will transform under extended Lorentz transformations
according to
\be
\hat{U}(\{\underline{a},\bar{I}\}) \, F(\xi^\Lambda \hat{P}_\Lambda) \,
\hat{U}^{-1} (\{\underline{a},\bar{I}\}) =
F(\xi^\Lambda \, \oplus_\Lambda{}^\Delta (\underline{a}^{-1}) \hat{P}_\Delta) ,
\ee
where the capital Greek indices sum over the five parameters including the space-time coordinates and the affine
coordinate conjugate to the transverse mass.
Thus, the $\oplus_\Lambda{}^\Delta (\underline{a})$ define the extended Lorentz transformation
matrices on the momenta.
The fact that the translations are abelian allows a very useful choice of the
translation group parameters associated with space-time coordinates.
If one utilizes the factor
$\Oline_{\Delta}{}^{\Lambda} (\bar{x}) \equiv {\partial \over
\partial x^\Delta} \phi_x^{\Lambda} (\bar{x}^{-1};\bar{x})$,
the associativity condition (\ref{xTranslAssocConditionEqn})
implies that
\bea
\Oline_{\Delta}{}^{\Lambda} (\bar{x}) \oplus_\Lambda{}^\Upsilon (\bar{a})=
{\partial \phi_x^\Lambda (\bar{x}; \bar{a}) \over \partial x^\Delta}
\Oline_{\Lambda}{}^{\Upsilon} (\bar{\phi}_x (\bar{x}; \bar{a})) \Rightarrow \qquad \nonumber \\
dx^\Delta \Oline_{\Delta}{}^{\Lambda} (\bar{x}) \oplus_\Lambda{}^\Upsilon (\bar{a}) =
d \phi_x^\Lambda (\bar{x}; \bar{a}) \Oline_{\Lambda}{}^{\Upsilon} (\bar{\phi}_x(\bar{x}; \bar{a}) ) \, .
\eea
Therefore, if one defines the special set of coordinates $\xi^{\tilde{\Upsilon}}$ by
\be
{\partial \xi^{\tilde{\Upsilon}} \over \partial x^\Delta} \equiv \Oline_{\Delta}{}^{\tilde{\Upsilon}} (\bar{x}) ~,
\ee
then these coordinates have the property that
\bea
\xi^{\tilde{\Upsilon}} (\bar{a}) = \int_{\bar{I}}^{\bar{a}} dx^\Delta \Oline_{\Delta}{}^{\Upsilon} (\bar{x})
~ , \qquad \qquad \\
\xi^{\tilde{\Delta}} (\bar{b}) \oplus_{\tilde{\Delta}}{}^{\tilde{\Upsilon}} (\bar{a}) =
\int_{\bar{a}}^{\bar{\phi}_x (\bar{b};\bar{a})} d \phi_x^\Delta \, \Oline_{\Delta}{}^{\tilde{\Upsilon}} (\bar{\phi}_x).
\eea
This means that these coordinates satisfy
$\xi^{\tilde{\Upsilon}} (\bar{\phi}_x (\bar{b};\bar{a}))=
\xi^{\tilde{\Delta}} (\bar{b}) \oplus_{\tilde{\Delta}}{}^{\tilde{\Upsilon}} (\bar{a}) +\xi^{\tilde{\Upsilon}} (\bar{a}) $.
The coordinate transformation is related to the group operation via
\be
{\partial \xi^{\tilde{\Upsilon}}(\bar{x}) \over \partial x^\Delta} =\mathcal{V}^{\tilde{\Upsilon}}{}_\Delta (\bar{x}) =
\left . {\partial \Phi_x^{\tilde{\Upsilon}} (\bar{x}^{-1},\underline{I};\bar{x}') \over \partial x'{} ^\Delta}
\right |_{\bar{x}'=\bar{x}} \, .
\ee
This equation directly relates the tetrads $\mathcal{V}^{\tilde{\mu}}{}_\beta$
to the extended Poincare group operation.
More generally, the special coordinates satisfy
\be
\xi^{\tilde{\Upsilon}} ( \bar{\phi}_x (\bar{x}_2, \underline{a}_2 ; \bar{x}_1)) =
\xi^{\tilde{\Upsilon}} (\bar{x}_2) + \xi^{\tilde{\Lambda}} (\bar{x}_1) \,
\oplus_{\tilde{\Lambda}}{}^{\tilde{\Upsilon}} (\underline{a}_2^{-1}) \, ,
\ee
or, in a more suggestive form
\be
\bar{\phi}_x (\bar{x}_2, \underline{a}_2 ; \bar{x}_1) =
\bar{x}_3 (\overline{\xi (\bar{x}_2)} + \overline{\xi (\bar{x}_1) \oplus (\underline{a}_2^{-1})} ) \, .
\label{CurvilinearFromGroupEqn}
\ee
The expression (\ref{CurvilinearFromGroupEqn}) demonstrates a direct mapping of
locally flat coordinates into curvilinear coordinates, consistent with the principle of equivalence.
\section{Dynamic mixing of massless states}
\indent
The transverse mass operator $\hat{\mathcal{M}}_T$
of a massless particle is the generator for affine parameter translations $\Delta \lambda$
along the particle's light cone trajectory, and its non-vanishing eigenvalue propagates a stationary particle
with the usual quantum phase $e^{-{i \over \hbar} m_j c \, \Delta \lambda}$. Since all
massless particles share the same phase for space-time propagation $e^{{i \over \hbar} \vec{p} \cdot \vec{x}}$,
the usual manner for differentiating particles for dynamic mixing requires the introduction of
small masses for the particles\cite{PDG}. However, massless particles of differing transverse mass can
be mixed in a straightforward manner. In particular, a mechanism for mixing massless neutrinos of fixed helicity
$\pm {1 \over 2} \hbar$ will be developed.
Suppose that massless neutrinos of finite transverse mass mix to form
flavor eigenstates in a manner consistent with the single helicity states
giving V-A couplings in weak interactions, and
analogous to the quark mixing that suppresses
neutral, strangeness-changing currents. The transverse mass eigenstates
will be labeled by $| m_j \rangle$, while the eigenstates of flavor that define generation
$a$ will be labeled $| f_a \rangle$. Any mixing due to the dynamics is expected
to relate the states in a manner that preserves unitarity,
\be
| f_a \rangle ~=~ \sum_{j} | m_j \rangle \, U_{j \, a} ,
\ee
requiring that the components $U_{j \, a}$ define a unitary matrix.
Finite translations for massless
particles take the form of a simple exponential
in terms of the affine parameter along the trajectory $\lambda$, and the generator of infinitesimal
affine translations $\hat{\mathcal{M}}_T$, i.e. $T_\lambda = e^{-{i \over \hbar} \lambda \, \hat{\mathcal{M}}_T c}$.
This means that the transition amplitude for mixing massless flavor eigenstates
$f_a \rightarrow f_b$ is of the form
\be
A(f_b \leftarrow f_a) ~=~ \sum_ j U_{j \, b}^* ~
e^{-{i \over \hbar} m_j c \, \Delta \lambda} ~ U_{j \, a}.
\ee
The scale of the affine parameter is given by the spatial/temporal distance
of the null particle trajectory $\Delta \lambda = L=c T$.
The transition probability for the mixing $\mathcal{P}(f_b \leftarrow f_a)=
|A(f_b \leftarrow f_a)|^2$ satisfies
\bea
\mathcal{P}(f_b \leftarrow f_a)~=~\delta_{b \, a} \quad + \qquad \qquad \qquad \qquad \qquad \qquad
\qquad \qquad \qquad \qquad \qquad \qquad \nonumber \\
-2 \sum_{j<k}\left \{ 2 Re[ \Upsilon_{j \, k}(b , a)] \sin^2 \left ( {\delta m_{j k}\, c\, L \over 2 \hbar} \right ) +
Im[ \Upsilon_{j \, k}(b , a)] \sin \left ( {\delta m_{j k}\, c\, L \over \hbar} \right )
\right \} , ~
\eea
where $\Upsilon_{j \, k}( b, a) \equiv U_{j \, b} \, U_{j \, a}^* \, U_{k \, b}^* \, U_{k \, a}$.
Thus, massless particles with differing transverse
mass eigenvalues $\delta m_{j k}=m_j - m_k$ can indeed allow dynamical mixing of flavor eigenstates.
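A minimal numerical illustration of this mixing is sketched below for a
hypothetical two-flavor system; the transverse masses, mixing angle, and path
length are arbitrary values chosen only for the sketch:

```python
import numpy as np

hbar = c = 1.0  # units chosen for the sketch

# Hypothetical two-flavor example: transverse-mass eigenvalues m_j and a
# rotation-matrix mixing U[j, a], so that |f_a> = sum_j |m_j> U_ja.
m = np.array([1.0, 1.5])
theta = 0.6
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def amplitude(b, a, L):
    """A(f_b <- f_a) = sum_j U*_jb exp(-i m_j c L / hbar) U_ja."""
    phases = np.exp(-1j * m * c * L / hbar)
    return np.sum(np.conj(U[:, b]) * phases * U[:, a])

L_path = 3.0
P = np.array([[abs(amplitude(b, a, L_path)) ** 2 for a in range(2)]
              for b in range(2)])

# Unitarity of U conserves total probability: each column of P sums to 1.
print(np.allclose(P.sum(axis=0), 1.0))  # True

# The transition probability matches the familiar two-flavor closed form
# sin^2(2 theta) sin^2(dm c L / (2 hbar)).
dm = m[0] - m[1]
P_closed = np.sin(2 * theta) ** 2 * np.sin(dm * c * L_path / (2 * hbar)) ** 2
print(np.isclose(P[1, 0], P_closed))  # True
```

The affine-parameter phase $e^{-{i \over \hbar} m_j c \, \Delta \lambda}$ thus
plays the role ordinarily played by small mass differences in standard
oscillation treatments.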
\section{Additional Hermitian generators}
\indent
The fundamental representation of the extended Lorentz group can be developed in terms
of $4 \times 4$ matrices, with a particular representation given in
(\ref{4x4RepresentationEqn}). The three angular momentum generators, along with $\Gamma^0$,
make up the 4 Hermitian generators of this group. It is of interest to examine the other Hermitian
generators in the group GL(4).
There are 16 Hermitian generators whose representations are $4 \times 4$ matrices. This
means that there are 12 additional $4 \times 4$ Hermitian generators beyond those of
the extended Lorentz group. One of these generators is proportional to the identity
matrix, and thus commutes with all other generators. Thus, this generator forms
a U(1) internal symmetry group that defines a conserved hypercharge on the algebra.
Three additional generators $\tau_j$ form a closed representation of SU(2) on the lower components
of a spinor:
\be
\tau_j ={1 \over 2} \left (
\begin{array}{cc}
\mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{\sigma}_j
\end{array}
\right ) .
\ee
These generators transform as components of a 3-vector under the little group of transformations for
a massive particle.
In a Lagrangian of the form (\ref{LinearSpinorGaugeFieldGravLagrangian}), transformations involving
these generators vanish on upper component standard state vectors.
Six additional generators are given by the Hermitian forms of the anti-Hermitian generators
$\mathbf{\Gamma}^j$ and $\mathbf{K}_j$ given by
$\mathbf{T}_j = i \, \mathbf{\Gamma}^j$ and
$\mathbf{T}_{j+3} = i \, \mathbf{K}_j$. The final two generators are given by
\be
\mathbf{T}_{7} ={i \over 2} \left (
\begin{array}{cc}
\mathbf{0} & \mathbf{1} \\
-\mathbf{1} & \mathbf{0}
\end{array}
\right )
\quad , \quad
\mathbf{T}_{8} ={1 \over 2} \left (
\begin{array}{cc}
\mathbf{0} & \mathbf{1} \\
\mathbf{1} & \mathbf{0}
\end{array}
\right ) \, .
\ee
The set of 8 Hermitian generators $\mathbf{T}_s$ does not form a closed algebra independent of the other Hermitian
generators. The spinor metric transforms these generators according to
$\mathbf{g}\mathbf{T}_s \mathbf{g}=-\mathbf{T}_s$.
Transformations involving at most 5 re-combinations of these generators
will vanish on upper component standard state vectors for a Lagrangian
of the form (\ref{LinearSpinorGaugeFieldGravLagrangian}). However, the remaining three combinations
necessarily mix particle states defined by the $\Gamma^\mu P_\mu$ form of the Lagrangian.
\section{Conclusion}
\indent
Physical models that are unitary, maintain quantum linearity, have positive definite energies, and have
straightforward cluster decomposition properties, can be constructed in
a straightforward manner using linear spinor fields.
The piece of the group algebra that connects the group structure to metric gravitation
necessitates the inclusion of an additional group operator that generates affine parameter
translations for massless particles. This allows dynamic mixing of massless particles
in a manner not allowed by the standard formulations of Dirac or Majorana.
Just as classical mechanics emerges from the expectation values of quantum processes,
space-time geometry can be assumed to emerge from expectation-value measurements of quantum energies
and momenta via Einstein's equation. Using this interpretation, classical geometrodynamics is emergent from
the behaviors of ensembles of mixed quantum states as they independently decohere.
Linear spinor fields then maintain their linearity by describing coherence using
coordinate descriptions transformed from the proper coordinates of the gravitating fields.
Conserved particle type can be shown to be a consequence of the spinor field equation.
Internal gauge symmetries can be incorporated into the formulation in the usual manner.
In addition,
the fundamental representation of the linear spinor fields unifies a set of micro-physical interactions
involving quanta exchanging 12 Hermitian degrees of freedom, with the geometrodynamics
of general relativity through a single unified group of transformations. At least some of these interactions
necessarily mix fundamental standard states of representations. It is intriguing that
the number of additional Hermitian degrees of freedom beyond those defining the extended Lorentz
group is the same as the number of generators in the standard model of fundamental interactions.
\section{Acknowledgements}
The author wishes to acknowledge the support of Elnora Herod and
Penelope Brown during the intermediate periods prior to and after
his Peace Corps service (1984-1988), during which time the bulk of this work was
accomplished. In addition, the author wishes to recognize the
hospitality of the Department of Physics at the University of Dar
Es Salaam during the three years from 1985-1987 in which a substantial portion of
this work was done. Finally, the author wishes to express his appreciation of the
hospitality of the Stanford Institute for Theoretical Physics during his year of
sabbatical leave.
Many studies in network science are concerned with community detection, proposing various methods and algorithms both for the classification of nodes into clusters, and for the evaluation of such classifications \citep{Dan05,Fort10,Fort16,New06}. Particularly in social networks, the evaluation of a community detection method is concerned with the nature of the clusters into which nodes are placed, for example the social groups, professions or even special interests that the nodes in each cluster share \citep{Ahn10,Bar02,Eva09,Eva10,Gle03,Gold15}. Recent advances in the literature suggest evaluating a clustering in terms of a ground truth, which is based on metadata, characteristics that nodes possess which are external to the structure and topology of the network \citep{Hric14,Hric16,Peel17,Yang13}.
In this work we aim to build on the existing framework for community detection with metadata and propose a new area where this methodology can be applied, by forming a network of biographical connections between Western art painters in a timespan ranging from the 14$^{th}$ to the 20$^{th}$ centuries. We perform community detection with the aim of matching identified clusters to artistic movements. Furthermore, we use the community structure(s) of the network to re-define standard centrality and brokerage measures in order to highlight painters, whose links to artistic movements beyond their own, can classify them as being influential.
This paper is organised as follows. In \secref{sdata} we introduce the empirical dataset we will be using in later analysis. In \secref{scommunity} we discuss community detection and introduce our two alternative measures for assessing a community partition given metadata information. We then test these measures in three cases: the classic example of Zachary's Karate Club, a synthetic network we are producing, and our empirical network of painters. In \secref{scentrality} we introduce variations on standard centrality measures taking into account an underlying community structure. We see how the standard partitions and the partitions motivated by our measures help us highlight nodes which have a bridging role across communities and present examples from our painter network.
\section{Network Definition and Properties}\label{sdata}
\subsection{Data sources and network definition}
The context of our network comes from Art History, as we build a network of Western art painters. In this section we introduce the network and some of its main properties.
We collect data from the \href{http://www.wga.hu}{Web Gallery of Art}, an online repository of more than 40,000 artworks, which is freely accessible for education and research, for two main purposes: firstly to specify a concrete list of which painters to include in our network and secondly to collect metadata for those painters. Wikipedia also contains metadata for some artists (Wikidata), but we only select that information from the WGA because it is a complete set of metadata for all artists in our database.
From the list of artists, we find the corresponding Wikipedia page for each artist using the Wikipedia Python API. As we are focusing on painter collaboration, we create a network of the painters present in the database, whom we want to link according to the encounters they may have had with each other. The dataset can be found online \citep{Kit17}.
The nodes in our network are individual painters and edges between nodes are drawn according to biographical connections between artists, which may correspond to influence or other social and contextual links. To quantify that, we draw an edge when the Wikipedia page of one artist links to another artist in the database. In doing so we construct a network of $N = 2474$ nodes and $E = 9568$ edges. This is a simple, unweighted and undirected network.
We choose an undirected network because of the way we draw the edges: a Wikipedia page of an artist may refer both to the artists who influenced a painter and to the ones who were influenced by them, the ones with whom they shared a workshop, or even painters whose works they may have collected and owned. Weighted or multiple edges might be more appropriate, as some artists are certainly more closely connected than others or might have a greater influence; however, this is not straightforward to determine with our data extraction process.
Some further manual cleaning-up of the data is required, as some nodes are duplicate pages of artists, or may correspond to other kinds of artists (e.g. sculptors) but not painters, due to the structure of our data sources. After the cleaning-up and the further removal of smaller isolated components (typically singletons, or two nodes only connected with each other, which were not relevant for our analysis), we are left with a graph of $N = 2113$ nodes and $E = 9417$ edges.
\subsection{Basic network description}
We begin our analysis with a short statistical description of the painter network. It has average degree $\langle k \rangle = 8.9$, clustering coefficient $\langle C \rangle = 0.28$, average shortest path length $\langle l \rangle = 4.07$ and diameter $d=14$. Figure \ref{img1} is a plot of the complementary cumulative degree distribution, which exhibits a truncated power-law behaviour (more in the next section). Degree-degree correlations present no significant features beyond a weak positive correlation $\rho = 0.26$.
It is interesting to note that the most highly connected artists in the network are largely well known names, see Table \ref{deg}. The best connected artist turns out to be \href{https://en.wikipedia.org/wiki/Peter_Paul_Rubens}{Rubens}, a master of the Baroque age. We also see many masters of Renaissance art featuring on this list such as \href{https://en.wikipedia.org/wiki/Raphael}{Raphael}, \href{https://en.wikipedia.org/wiki/Titian}{Titian} and \href{https://en.wikipedia.org/wiki/Leonardo_da_Vinci}{Leonardo da Vinci}.
The fact that the best linked artists are very well known and influential is to be expected. Many painters' Wikipedia pages will have references to some of the masters they were inspired by or whose workshops they might have worked in, and such connections generate many links to these more recognisable names. The only surprising occurrence in this list is the lesser known painter \href{https://en.wikipedia.org/wiki/Karel_van_Mander}{Karel van Mander}, who does however have many connections to other painters due to the fact that he was mainly an art historian and their biographer. This observation is perhaps a good illustration of a characteristic feature that one might expect from our data collecting process.
\begin{table}
\centering
\caption{\textbf{Most connected painters in the network ranked by degree.}}
\begin{tabular}{cc}
\hline
\textbf{Painter} & \textbf{Degree} \\
\hline
Peter Paul Rubens & 154 \\
Rembrandt & 146 \\
Raphael & 123 \\
Caravaggio & 118 \\
Titian & 101 \\
Giorgio Vasari & 96 \\
Diego Velazquez & 94 \\
Karel van Mander & 85 \\
Michelangelo & 83 \\
Leonardo Da Vinci & 74 \\
\hline
\end{tabular}
\label{deg}
\end{table}
\subsubsection{The degree distribution}
The degree distribution of the painter network (Figure \ref{img1}) displays an interesting, and slightly uncommon, ``knee''-like shape. A distribution like this, which is close to a truncated power law with two exponents (data fitting shows a good fit of $\gamma\approx 2.4$ for $k\leq 60$), has been observed in a few other artificial systems, most notably in some transportation systems \citep{Ama00,Gui05,Hu09}, where it corresponds to phenomena related to capacity constraints.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{fig1.pdf}
\caption{Complementary cumulative degree distribution for the painter network.}
\label{img1}
\end{figure}
In the context of our painter network this behaviour can be explained partly by the way that the Wikipedia articles in our database are written (both the article-length and outgoing-link distributions show a behaviour similar to the degree distribution).
\subsubsection{Centrality measures}
Apart from the degree, one can also look at other centrality measures to see how the painters' connectivity ranks in the network. Table \ref{betcent} shows the ranks of some of the most popular centrality measures.
\begin{table}
\centering
\caption{\textbf{Centrality measures in the painter hyperlink network.}}
\begin{tabular}{ p{10mm} p{30mm} p{30mm} p{30mm} p{30mm} }
\hline
\textbf{Rank} & \textbf{Betweenness} & \textbf{Closeness} & \textbf{Eigenvector} & \textbf{Pagerank} \\
\hline
1& Rembrandt & Rubens & Caravaggio & Rembrandt \\
2 & Rubens & Rembrandt & Simon Vouet & Raphael \\
3 & Raphael & Raphael & Artemisia Gentileschi & Rubens \\
4 & Titian & Titian & Jusepe de Ribera & Vasari \\
5 & Caravaggio & Caravaggio & Cecco del Caravaggio & Titian \\
\hline
\end{tabular}
\label{betcent}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{fig2.pdf}
\caption{Betweenness centrality (left) and closeness centrality (right) for the top ranked painters in each measure, in randomly perturbed networks with 5\% rewiring. The plots are typical box-and-whisker plots, with the width of the box corresponding to the first and third quartiles of the randomised network centrality values.}
\label{centralities1}
\end{figure}
Figure \ref{centralities1} shows some tests of the robustness of these centrality measures, after some randomised perturbation of the network (5\% edge rewiring) as done in the work by \cite{Che17}.
The centrality measures in this case reveal similar results as the degree, effectively highlighting the same, highly recognisable painters with some minor reordering. The most interesting result comes from the eigenvector centrality measure, which highlights the painter \href{https://en.wikipedia.org/wiki/Caravaggio}{Caravaggio} and many artists close to him (in fact he developed his own sub-movement, Caravaggism).
In order to understand more subtle kinds of artistic influence, we propose generalising centrality measures taking community structure into account, in \secref{scentrality}.
\section{Community detection with node attributes}\label{scommunity}
\subsection{Definition of Communities}
In this paper we will form communities of painters such that every painter is in one and only one community. However we will also ask that communities are more tightly knit than average, so a typical community has more edges within the community than edges linking it to nodes outside the community.
The first part of our definition is what is formally known as a partition of the set of nodes, the set of painters in our case. Let $\mathcal{N}$ be the set of nodes (the painters) in our network. Then each individual community is a non-empty subset $\mathcal{C}_\alpha$ of painters, $\mathcal{C}_\alpha \subseteq \mathcal{N}$, such that no painter is in two communities and every painter is in exactly one community; formally $\mathcal{C}_\alpha\neq\emptyset$, $\mathcal{C}_\alpha \cap \mathcal{C}_\beta = \emptyset$ for $\alpha\neq\beta$, and $\mathcal{N} =\bigcup_\alpha \mathcal{C}_\alpha$.
The second aspect, that communities represent tight-knit clusters within the network, is less precise and it is not surprising that there are many formal definitions and methods available to capture it. Here we will focus on one widely used method, the Louvain method \citep{Blo08}, whose results are shown in the Supplementary Information and visualised in Figure \ref{communities}. This method seeks to assign each node to a community in such a way that it gives an approximate maximum value for the modularity function $Q(A)$,
\begin{equation}
Q(A) = \frac{1}{2m} \sum_C \sum_{i,j\in C} \left(A_{ij}-\frac{k_ik_j}{2m}\right)
\label{mod}
\end{equation}
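For reference, the modularity function above can be evaluated directly from the adjacency matrix. The following sketch does so for a small toy graph (two triangles joined by a bridge edge), not for the painter network itself.

```python
# Direct NumPy implementation of the modularity Q(A) defined above,
# checked on a toy graph: two triangles {0,1,2} and {3,4,5} joined by edge (2,3).
import numpy as np

edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1

def modularity(A, communities):
    k = A.sum(axis=1)      # node degrees
    two_m = A.sum()        # 2m = sum of all entries of A
    Q = 0.0
    for c in communities:
        idx = np.array(sorted(c))
        sub = np.ix_(idx, idx)
        # sum over i, j in the same community of (A_ij - k_i k_j / 2m)
        Q += (A[sub] - np.outer(k[idx], k[idx]) / two_m).sum()
    return Q / two_m

Q = modularity(A, [{0, 1, 2}, {3, 4, 5}])
print(Q)   # 5/14 ≈ 0.357 for the natural two-triangle split
```

The Louvain method searches for the partition that (approximately) maximises this quantity.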
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{fig3}
\caption{Communities in the painter network; node size corresponds to the degree and colour to the community in which it is placed under the standard implementation of the Louvain method.}
\label{communities}
\end{figure}
We choose the Louvain method because it has been widely and successfully used while there are many fast numerical implementations available. In order to understand the nature of the clusters we look at the Wikipedia links that appear within each cluster and at the attributes gathered from the metadata; we expect, similarly to the work in \cite{Gle03} that the clusters will correspond to artistic movements. Some clusters do indeed show a clear correspondence to an artistic movement but not all of them do.
In practice what we find from our data is that our communities correspond to a mix of artistic movements and the locations where the artists were primarily active. The results can be found in the Supplementary Information, but we primarily observe that the algorithm does reveal considerably sized clusters for some of the most recognisable movements in Western art: the Renaissance, Baroque and Impressionism.
However, we also observe that some of the other identified communities are predominantly centred around a location, for example France or Italy. Motivated by this observation, we wish to force the Louvain method to look for communities at a finer scale. In the next section we formalise our approach.
\subsection{Nodes equipped with a set of attributes}
Using metadata together with a partition into communities is an idea which has been used in both earlier studies \citep{Dan05,New06,Ben01} and in more recent ones \citep{Peel17}. A common theme among the more recent advances in studies of communities with metadata is that metadata should not be used solely as an external ``ground truth'' which only assesses the quality of a partition, but instead as information that can be used in conjunction with the network structure to detect more meaningful communities.
Traditional community detection methods only use the topological information in a network, the relations between nodes represented by edges. However, in the real world we usually have additional information: metadata. Often this is in the form of additional attributes, which can be either used post-hoc to test a method or, as we attempt to do here, used together with the network structure to detect communities.
To make this more precise and apply this rationale, we propose the following theoretical setup. We consider the situation where in a network of $N$ nodes, each node is equipped with a set of attributes $X_i = \left\lbrace x_1^{(i)},\ldots, x_m^{(i)}\right\rbrace$, where $x_a$ can take $k_a$ distinct values. This gives us a possibility of $\prod_{a=1}^m k_a$ combinations for different attribute configurations.
To assess the quality of a partition into communities $\mathcal{C} = \left\lbrace c_{\alpha},\ldots, c_{\nu}\right\rbrace$, we propose two alternative measures. An optimal partition in this context should isolate each specific configuration $\xi = \left(\xi_1,\ldots, \xi_m\right)$ in a single and unique cluster, and $\nu = \prod_{a=1}^m k_a$.
We therefore define the \textit{cluster homogeneity}
\begin{equation}
h_{\mathcal{C}}(c) = \frac{2}{|c|(|c|-1)}\sum_{i,j\in c} S(X_i,X_j)
\end{equation}
where the pre-factor is the inverse of the number of possible pairs of nodes in the cluster $c$, the sum runs over distinct pairs of nodes in $c$, and $S(x,y)$ is a suitable similarity measure. We also define the \textit{configuration entropy}
\begin{equation}
e_{\mathcal{C}}(\xi) = -\sum_{c_{\alpha}\in \mathcal{C}} p_{\alpha}(\xi)\log p_{\alpha}(\xi)
\end{equation}
where $p_{\alpha}(\xi)$ is the probability of finding the configuration $\xi$ in community $c_{\alpha}$, given by
\begin{equation}
p_{\alpha}(\xi) = \frac{1}{|c_{\alpha}|}\sum_{i\in c_{\alpha}}\delta_{X_i,\xi}
\end{equation}
Informally, homogeneity is measured on a cluster of a partition and measures the similarity between the attributes of the nodes in that cluster. Entropy accepts as argument a specific configuration, and measures the fragmentation of this configuration into the various communities.
In an optimal partition (where each cluster has nodes of one attribute configuration and conversely each configuration belongs to a single cluster alone) homogeneity is equal to 1 and entropy is equal to 0. We further note the extreme values that these two measures can attain: when the entire network is one community, $e = 0$ but $h\rightarrow 0$, whereas when every node is in its own community, $e=O(N)$ and $h = 1$.
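The two measures can be sketched in a few lines of code. The toy data below is illustrative, and we take the normalised Hamming similarity $S(x,y) = 1 - d_H(x,y)/m$ as one simple choice of similarity measure.

```python
# Sketch of cluster homogeneity h and configuration entropy e, as defined above.
from itertools import combinations
from math import log

def similarity(x, y):
    # one simple choice: normalised Hamming similarity
    m = len(x)
    return 1 - sum(a != b for a, b in zip(x, y)) / m

def homogeneity(cluster, X):
    # average pairwise similarity over distinct pairs of nodes in the cluster
    pairs = list(combinations(cluster, 2))
    return sum(similarity(X[i], X[j]) for i, j in pairs) / len(pairs)

def entropy(xi, partition, X):
    # fragmentation of configuration xi across the communities
    e = 0.0
    for c in partition:
        p = sum(X[i] == xi for i in c) / len(c)
        if p > 0:
            e -= p * log(p)
    return e

# Toy example: a perfect partition isolates each configuration.
X = {0: (0, 0), 1: (0, 0), 2: (1, 1), 3: (1, 1)}
partition = [{0, 1}, {2, 3}]
print([homogeneity(c, X) for c in partition])   # [1.0, 1.0]
print(entropy((0, 0), partition, X))            # 0.0
```

As expected, the perfect partition attains $h = 1$ on each cluster and $e = 0$ for each configuration.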
\subsection{Implementation of quality measures}
We now test our theoretical measures in two examples as well as in our empirical network, and illustrate how they indicate whether a partition is too fine or too coarse.
\subsubsection{The Karate Club}
One of the most standard networks for testing methods in community detection is the Karate Club network studied by Zachary, which has been investigated by numerous approaches in network science. In this case each node in the network has one hidden attribute, $X_i = 0$ or $1$, depending on whether the individual $i$ belongs to the faction of the administrator or the instructor respectively.
The similarity function is $S(x,y) = 1 - d_H(x,y)$, where $d_H(x,y)$ is the Hamming distance between $x$ and $y$; in this case $S$ is simply 1 if the two nodes belong to the same faction and 0 if they belong to different factions. The other parameters defined in Section 2.2 are in this case $m = 1$, $k_1 = 2$ and $\nu = 2$; i.e. an optimal partition should have two communities, one for each faction.
We see that the partition which maximises modularity (and detects four communities instead of the known two) fails to perform well on the measures of homogeneity and entropy. This is a case of \textit{overdetecting}, where, as expected, the entropy value is quite high.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{fig4}
\caption{Karate Club partition with modularity maximisation detects four communities (overdetection) and scores poorly on entropy.}
\end{figure}
To overcome this issue we can merge identified communities, thus creating a coarser partition. After trying the possible combinations of grouping the four clusters into fewer, we obtain the optimal way of splitting the network into only two communities (Figure \ref{kc2}). We note that this partition is also not perfect, but it is an improvement over the standard implementations of modularity maximisation (including those that can be obtained by tweaking the resolution parameter).
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{fig5}
\caption{Karate Club partition with merged clusters leaves the homogeneity unchanged but significantly reduces the entropy of the partition.}
\label{kc2}
\end{figure}
\subsubsection{Synthetic Network}
As a second, artificial example, we consider a network generated by the Stochastic Block Model \citep{Hol83}. Here we equip each node with a vector of two hidden attributes $X_i = (x_1^{(i)},x_2^{(i)})$, and each $x_j$ can take the value of $0$ or $1$ with equal probability; this means that the possible configurations are $\xi_1 = (0,0)$, $\xi_2 = (0,1)$, $\xi_3 = (1,0)$ and $\xi_4 = (1,1)$. More specifically here $m = 2$, $k_1,k_2 = 2$ and $\nu = 4$.
Two nodes are linked depending on the common attributes they share, i.e. they are linked with probability 1 if they have both attributes matching, with probability 1/2 if one attribute only is matching and are disconnected otherwise.
\begin{equation*}
P_{ij} = \left(
\begin{matrix}
1 & 1/2 & 1/2 & 0 \\
1/2 & 1 & 0 & 1/2 \\
1/2 & 0 & 1 & 1/2 \\
0 & 1/2 & 1/2 & 1
\end{matrix}
\right)
\end{equation*}
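Sampling from this model is straightforward; the sketch below uses four blocks of 25 nodes each (the block sizes are illustrative, not a parameter fixed in the text), with the connection matrix $P$ above.

```python
# Sampling a network from the stochastic block model with the connection
# matrix P above: intra-block probability 1, one-attribute mismatch 1/2,
# two-attribute mismatch 0. Block sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[1.0, 0.5, 0.5, 0.0],
              [0.5, 1.0, 0.0, 0.5],
              [0.5, 0.0, 1.0, 0.5],
              [0.0, 0.5, 0.5, 1.0]])
sizes = [25, 25, 25, 25]
labels = np.repeat(np.arange(4), sizes)   # block label of each node
n = labels.size

A = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < P[labels[i], labels[j]]:
            A[i, j] = A[j, i] = 1
```

By construction every within-block pair is connected, and no edge exists between blocks whose configurations differ in both attributes.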
An optimal partition should uncover four communities in this case; however the standard implementation of modularity maximisation only yields two. In this case the average homogeneity is 0.75, as each community contains two kinds of nodes.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{fig6}
\caption{Synthetic network partition with modularity maximisation; two communities detected (underdetection).}
\end{figure}
This is the opposite scenario from the Karate Club network, as we are \textit{underdetecting} communities; since $h<1$ and $e=0$, our clusters contain more than one type of node. By running Louvain modularity maximisation again in each community (treated as a separate network) we are able to unfold the partition into four communities, as each original community splits into two (Figure \ref{sbm2}).
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{fig7}
\caption{Running modularity maximisation at the communities level uncovers the deeper level clusters and scores perfectly on homogeneity, while entropy remains at the optimal level.}
\label{sbm2}
\end{figure}
\subsubsection{The Painter Network}
We now implement our analysis on our empirical network of painters, looking for the partition at the fine level which optimises our two measures of homogeneity and entropy. Motivated by the discussion in Section 2.1 we assign to each painter-node two attributes: their artistic movement and the country where they were working. These tags are also sourced from the WGA, and can be found in detail in the Supplementary Information.
In this case we have $m=2$ attributes, $k_1 = 11$ movements and $k_2 = 25$ locations. However, as there is some overlap between some of the locations and, most importantly, some movement-location combinations are not realistic, we should expect a considerably smaller number than $\nu = 275$ communities in an optimal partition.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig8.pdf}
\caption{Original Louvain partition (left) and finer partition (right) for the painter network.}
\label{paintres}
\end{figure}
We observe in Figure \ref{paintres} that the standard implementation of the Louvain method (producing 14 communities) strikes a good balance between homogeneity and entropy; however, as the number of communities is too small (underdetection), we wish to perform some community detection within the clusters to obtain a finer partition.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{fig9}
\caption{Homogeneity and entropy for varying partition detection thresholds.}
\label{threshold}
\end{figure}
As some of the clusters in the partition score highly in homogeneity, we do not need to look into the structure of all of them; we set a critical homogeneity threshold, below which a cluster with that value of homogeneity will be split into sub-clusters. In Figure \ref{threshold} we see that an appropriate value without compromising entropy is around 0.5; this partition has 72 communities. The finer partition is also shown in Figure \ref{paintres}, and we will use both community structures in the following section to identify node influence by considering centrality measures.
\section{Identifying influential nodes}\label{scentrality}
One of our main objectives in this work is to identify influential painters or, conversely, those whose work is influenced from a large number of sources. While this can be answered plainly using a wide range of centrality measures, we propose ways of identifying influential nodes by taking an underlying community structure into account.
We define some preliminary notation. Given a community partition $\mathcal{C} = \left\lbrace c_{\alpha},\ldots, c_{\nu}\right\rbrace$, we denote by $c(i)$ the cluster to which the $i$-th node belongs. Then we can define the Kronecker delta $\delta_{c(i),c(j)}$, which is 1 if nodes $i$ and $j$ are in the same community, and 0 otherwise.
\subsection{Mixing parameter}
Splitting a node's degree, given a community structure, into links within the community and links outside the community, respectively $k_i^{in}$ and $k_i^{out}$, occurs commonly in the literature. We first look at the ratio of the outward connections, which is also known in the literature as the \textit{mixing parameter} \citep{Fort16}, given by
\begin{equation}
\mu_{\mathcal{C}}(i) = \frac{k_i^{out}}{k_i}
\end{equation}
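The computation is elementary; the following sketch evaluates the mixing parameter on a toy graph (two triangles joined by a bridge), not on the painter network.

```python
# Computing the mixing parameter mu(i) = k_i^out / k_i for every node,
# on a toy two-triangle graph with the obvious two-community partition.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
community = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}   # c(i) for each node

def mixing_parameter(G, community):
    mu = {}
    for i in G:
        k_out = sum(1 for j in G[i] if community[j] != community[i])
        mu[i] = k_out / G.degree(i)
    return mu

mu = mixing_parameter(G, community)
print(mu[0], mu[2])   # 0.0 for an internal node, 1/3 for a bridge node
```

Nodes with large $\mu_{\mathcal{C}}(i)$ are those whose connections point mostly outside their own community.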
Figure \ref{fig7} shows the correlation of this measure with standard centrality measures; the correlation is relatively weak, which illustrates that this measure can indeed have a significant contribution in highlighting nodes which the other measures may not identify.
\begin{figure}
\includegraphics[width=\textwidth]{fig10}
\caption{Correlation of $\mu_{\mathcal{C}}(i)$ with centrality measures (clockwise from top left: degree, betweenness, eigenvector and closeness centrality).}
\label{fig7}
\end{figure}
\subsection{Community-based betweenness centrality}
In order to generalise betweenness centrality, we define the \textit{community-based betweenness centrality} (CBBC), where we wish to only take into account paths that start and finish in different communities.
\begin{equation}
bc_{\mathcal{C}}(i) = \sum_{k,l: \delta_{c(k),c(l)}=0} \frac{\sigma_{kl}(i)}{\sigma_{kl}}
\end{equation}
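A brute-force sketch of this definition, summing over unordered pairs of endpoints lying in different communities, is given below on a toy graph; an efficient implementation would adapt Brandes-style accumulation instead.

```python
# Brute-force community-based betweenness centrality: only shortest paths
# whose endpoints lie in different communities contribute to a node's score.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
community = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

def cbbc(G, community):
    score = {i: 0.0 for i in G}
    nodes = list(G)
    for a in range(len(nodes)):
        for b in range(a + 1, len(nodes)):
            k, l = nodes[a], nodes[b]
            if community[k] == community[l]:
                continue                      # same community: pair is skipped
            paths = list(nx.all_shortest_paths(G, k, l))
            for i in G:
                if i in (k, l):
                    continue                  # endpoints do not count
                sigma_i = sum(i in p for p in paths)
                score[i] += sigma_i / len(paths)
    return score

scores = cbbc(G, community)
print(scores)   # the bridge nodes 2 and 3 dominate
```

On this toy graph the two bridge nodes carry every cross-community shortest path, while all other nodes score zero.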
A visualisation of this definition can be seen in Figure \ref{cbbc}. The intuition behind using this measure is that an influential painter, according to our understanding, is one who promotes the flow of ideas to different communities. As a result we are interested in their position along shortest paths starting and finishing in different clusters.
\begin{figure}
\includegraphics[width=\textwidth]{fig11}
\caption{Visualisation of the paths taken into account for the CBBC of the purple node. Green path (left) contributes to CBBC, red path (right) does not.}
\label{cbbc}
\end{figure}
The correlations between the standard and modified betweenness centrality are quite high for both of our partitions (almost 1). However, the ranks of the nodes exhibit smaller correlation values (around 0.88), allowing us to identify certain nodes who score poorly in the standard betweenness centrality and better in our community-based modification (Figure \ref{cbbc}).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig12}
\caption{Correlations between standard and community-based betweenness centrality, in the original (left) and fine (right) partitions.}
\end{figure}
\subsection{Community-based closeness centrality}
Very similar in concept to the community-based betweenness centrality, we define the \textit{community-based closeness centrality} (CBCC) of a node, considering only the shortest-path distances to nodes in communities other than the node's own.
\begin{equation}
cc_{\mathcal{C}}(i) = \frac{1}{\sum_{j\notin c(i)}d_{ij}}
\end{equation}
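This measure is also direct to compute; the sketch below again uses the toy two-triangle graph for illustration.

```python
# Community-based closeness centrality: only distances to nodes outside a
# node's own community enter the sum in the denominator.
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
community = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}

def cbcc(G, community):
    score = {}
    for i in G:
        dist = nx.single_source_shortest_path_length(G, i)
        score[i] = 1 / sum(d for j, d in dist.items()
                           if community[j] != community[i])
    return score

scores = cbcc(G, community)
print(scores)   # the bridge nodes sit closest to the opposite community
```

For the bridge node 2 the distances to the other community are $1+2+2=5$, giving a score of $0.2$, against $0.125$ for the internal node 0.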
A visualisation of this generalised centrality measure is shown in Figure \ref{cbcc}. Correlations are again quite high between the standard and modified centrality measures, though for the finer partition the correlation value is smaller (0.86 against 0.92 for the original partition), which enables us to highlight more painters scoring highly in the modified measures who wouldn't be highlighted in the standard ones.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{fig13}
\caption{Visualisation of the CBCC; only the yellow nodes' distances from the purple node are taken into account to define its score.}
\label{cbcc}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{fig14}
\caption{Correlations between standard and community-based closeness centrality, in the original (left) and fine (right) partitions.}
\end{figure}
\section{Conclusion}
In this work we have introduced a new context where the theory and methodology of networks based on contextual and biographical links can be applied, by constructing and analysing the network of Western art painters. Our overall aim was to use the properties of the network to highlight painter-nodes who can be classified as influential in art history.
It became clear through our analysis that clusters, as generated by a modularity maximisation algorithm, correspond broadly to artistic movements and areas where painters were active. Motivated by this observation we proposed two measures, using metadata for artistic movement and location, in order to assess the resolution of a community partition. We have applied this approach to two stylised networks as well as to the analysis of the painter network.
In order to illustrate the nodes with influence and connections beyond their artistic movement and region, we have proposed looking at centrality measures which take community structure into account, and looking into the outward links that a node may have. The redefined centrality measures in terms of the community structure are then used to highlight influential nodes who might have been missed, as they don't necessarily rank highly in the standard measures. Some examples can be found in the Supplementary Information.
\bibliographystyle{apalike}
\section*{Supplementary information for:\\
Stochastic transitions: Paths over higher energy barriers can dominate in the early stages}
\renewcommand{\thefigure}{A\arabic{figure}}
\setcounter{figure}{0}
\begin{center}
S.~P. Fitzgerald, A. Bailey Hass, G. D{\'\i}az Leines and A.~J. Archer
\end{center}
\vspace{1cm}
\section{1D Model potential}
The 1D potential that we consider (displayed in Fig.~1 of the main text) is:
\begin{equation}
\beta\phi(x)=\left(\frac{x}{3}\right)^{10}+e^{-2(x+1)^2}+3e^{-12(x-2)^2}-\frac{13}{10}e^{-12(x-2.5)^2}-\frac{3}{10}e^{-8(x-1)^2}-\frac{23}{10}e^{-8(x+2)^2}.
\nonumber
\end{equation}
Precise locations of the 3 minima are:\\
$x=x_B=-2.0117264,$ (the global minimum)\\
$x=x_A=1.0004288,$ (a local minimum and our start point)\\
$x=x_C=2.5229797$ (a local minimum)\\
and the two local maxima are at:\\
$x=x_D=-0.99708709$\\
$x=x_E=1.9912978.$
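The quoted minima can be verified numerically; a sketch using SciPy (not part of the paper's own code) minimises $\beta\phi$ within a bracket around each well:

```python
# Numerical check of the quoted minima of the 1D potential, by bounded
# minimisation of beta*phi(x) in a bracket around each well.
import numpy as np
from scipy.optimize import minimize_scalar

def phi(x):
    return ((x / 3)**10 + np.exp(-2 * (x + 1)**2) + 3 * np.exp(-12 * (x - 2)**2)
            - 1.3 * np.exp(-12 * (x - 2.5)**2) - 0.3 * np.exp(-8 * (x - 1)**2)
            - 2.3 * np.exp(-8 * (x + 2)**2))

x_B = minimize_scalar(phi, bounds=(-2.5, -1.5), method="bounded").x  # global minimum
x_A = minimize_scalar(phi, bounds=(0.5, 1.5), method="bounded").x    # start point
x_C = minimize_scalar(phi, bounds=(2.2, 3.0), method="bounded").x
print(x_B, x_A, x_C)   # ≈ -2.0117, 1.0004, 2.5230
```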
\section{2D Model potential}
The 2D potential that we consider (displayed in Fig.~2 of the main text and in Fig.~\ref{fig:A1} below) is:
\begin{eqnarray}
\beta\phi(x,y)=4(x^2+4y^2-1)^2-\frac{1}{2}x-2e^{-4(x-\frac{1}{2})^2-4(y-\frac{1}{2})^2}
-e^{-4(x-\frac{1}{2})^2-4(y+\frac{1}{2})^2}+3e^{-4(x-1)^2-4y^2}.
\nonumber
\end{eqnarray}
\begin{figure}[b!]
\centering
\includegraphics[width=0.6\textwidth]{supp_pot.pdf}
\caption{The 2D model potential}
\label{fig:A1}
\end{figure}
This potential has two minima at:\\
${\bf x}_A=(x_A,y_A)=(0.37610659,-0.47094557)$, (a local minimum and our start point)\\
${\bf x}_B=(x_B,y_B)=(0.41564013,0.46836780)$, (the global minimum).
The 2D potential has a local maximum near the origin at:\\
${\bf x}=(-0.097396665,-0.0053727757)$\\
and there are two saddle points at:\\
${\bf x}_D=(x_D,y_D)=(-0.98392863,-0.00010774153)$\\
${\bf x}_E=(x_E,y_E)=(0.66721235,-0.022398035)$.
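As in the 1D case, the two minima can be verified numerically; a sketch using SciPy (not part of the paper's own code) minimises from starting points near each well:

```python
# Numerical check of the two minima of the 2D potential, minimising with
# Nelder-Mead from starting points near each well.
import numpy as np
from scipy.optimize import minimize

def phi(p):
    x, y = p
    return (4 * (x**2 + 4 * y**2 - 1)**2 - 0.5 * x
            - 2 * np.exp(-4 * (x - 0.5)**2 - 4 * (y - 0.5)**2)
            - np.exp(-4 * (x - 0.5)**2 - 4 * (y + 0.5)**2)
            + 3 * np.exp(-4 * (x - 1)**2 - 4 * y**2))

opts = {"xatol": 1e-9, "fatol": 1e-12}
xA = minimize(phi, [0.4, -0.5], method="Nelder-Mead", options=opts).x  # start point
xB = minimize(phi, [0.4, 0.5], method="Nelder-Mead", options=opts).x   # global minimum
print(xA, xB)   # ≈ (0.3761, -0.4709) and (0.4156, 0.4684)
```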
\vspace{1cm}
In Table~\ref{table:HtSv} below we give the times $t$ corresponding to various values of the path power $H$. Some of these paths are displayed in Fig.~2 of the main text.
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$H$ & 0 & 0.01 & 0.02 & 0.03 & 0.1 & 2.5 & 5.0 & 10.0 & 25.0 & 50.0 & 100.0 & 195.0 & 200.0 & 205.0 & 212.0 & 235.0 & 245.0 & 250.0 & 300.0 & 400.0 & 500.0 & 1000.0 \\
\hline
$t$ & $\infty$ & 7.84 & 6.85 & 6.28 & 4.70 & 1.61 & 1.21 & 0.888 & 0.575 & 0.405 & 0.277 & 0.153 & 0.165 & 0.0478 & 0.0463 & 0.0419 & 0.0402 & 0.0413 & 0.0338 & 0.0255 & 0.0221 & 0.0168 \\
\hline
\end{tabular}
\caption{Time $t$ for various $H$ values.}
\label{table:HtSv}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{H=0convergence.png}
\caption{Convergence of the extended gMAM algorithm to find the MLP for $H=0$.}
\label{fig:conv}
\end{figure}
Fig.~\ref{fig:conv} shows the algorithm converging to the minimum action path (the MLP) for $H=0$ from the straight line initial guess.
\end{document}
\section{Introduction}
This is our second paper in a series on the computational aspects of Poisson geometry.
In the first paper we showed how fundamental concepts from Poisson geometry could be operationalized into symbolic code \cite{CompuPoisson}.
We also provided an associated Python module with our implementations, for ease of execution\footnote{Available via \url{https://github.com/appliedgeometry/poissongeometry}}.
A \emph{Poisson manifold} \cite{Poisson, WePG, Dufour, Kosmann, Camille} is a smooth manifold $M$ equipped with a contravariant skew--symmetric $2$--tensor field $\Pi$, \emph{called a Poisson bivector field}, satisfying the equation
\begin{equation}\label{EcJacobiPi}
\cSch{\Pi,\Pi} = 0,
\end{equation}
with respect to the Schouten--Nijenhuis bracket $\cSch{,}$ for multivector fields \cite{Michor-08,Dufour}. Let \ec{m=\dim{M}}, and \ec{x = (U; x^{1}, \ldots, x^{m})} be local coordinates on $M$, then $\Pi$ has the following representation \cite{Lich-77,WeLocal}:
\begin{equation}\label{EcPiCoord}
\Pi = \tfrac{1}{2}\Pi^{ij}\frac{\partial}{\partial{x^{i}}} \wedge \frac{\partial}{\partial{x^{j}}}
\ = \sum_{1 \leq i < j \leq m} \Pi^{ij}\frac{\partial}{\partial{x^{i}}} \wedge \frac{\partial}{\partial{x^{j}}}
\end{equation}
The functions \ec{\Pi^{ij}=\Pi^{ij}(x) \in \Cinf{U}} are called the coefficients of $\Pi$, and \ec{\{\partial/\partial{x^{i}}\}} is the canonical basis for vector fields on \ec{U \subseteq M}.
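For illustration (this sketch is independent of the module's actual API), the coefficient matrix $[\Pi^{ij}]$ can be evaluated numerically at a point; we use the classical Lie--Poisson bivector on $\R{3} \simeq \mathfrak{so}(3)^{\ast}$, $\Pi = x^{3}\,\partial_{1}\wedge\partial_{2} - x^{2}\,\partial_{1}\wedge\partial_{3} + x^{1}\,\partial_{2}\wedge\partial_{3}$:

```python
# Illustrative sketch (not the module's API): the coefficient matrix [Pi^{ij}]
# of the Lie-Poisson bivector on R^3 ~ so(3)*, evaluated at a point.
import numpy as np

def bivector_matrix(point):
    x1, x2, x3 = point
    return np.array([[0.0,  x3, -x2],
                     [-x3, 0.0,  x1],
                     [ x2, -x1, 0.0]])

M = bivector_matrix([1.0, 2.0, 3.0])
print(M)   # skew-symmetric by construction
```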
Poisson manifolds are generalizations of symplectic manifolds. A Poisson manifold can be thought of, informally, as a space that is foliated by symplectic leaves. It is then possible to define Hamiltonian dynamics relative to the symplectic form on each leaf, via the Poisson bracket. Comprehensive treatments are available for interested readers \cite{Dufour, Camille}.
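Concretely, the Hamiltonian vector field of a function $H$ is given in coordinates by $(X_H)^{i} = \Pi^{ij}\,\partial H/\partial x^{j}$. The following sketch (again illustrative, not the module's API) evaluates it numerically on $\mathfrak{so}(3)^{\ast}$ with a central-difference gradient; note that the Casimir $H = |x|^{2}/2$ yields the zero field:

```python
# Sketch of a numerical Hamiltonian vector field: (X_H)^i = Pi^{ij} dH/dx^j,
# with the gradient approximated by central differences. Uses the so(3)*
# Lie-Poisson bivector matrix defined pointwise below.
import numpy as np

def bivector_matrix(p):
    x1, x2, x3 = p
    return np.array([[0.0, x3, -x2], [-x3, 0.0, x1], [x2, -x1, 0.0]])

def num_gradient(H, p, h=1e-6):
    p = np.asarray(p, dtype=float)
    g = np.zeros_like(p)
    for j in range(p.size):
        e = np.zeros_like(p)
        e[j] = h
        g[j] = (H(p + e) - H(p - e)) / (2 * h)   # central difference
    return g

def hamiltonian_vf(H, p):
    return bivector_matrix(p) @ num_gradient(H, p)

p = np.array([1.0, 2.0, 3.0])
X1 = hamiltonian_vf(lambda q: q[0], p)         # H = x1: rotation about the x1-axis
X2 = hamiltonian_vf(lambda q: 0.5 * q @ q, p)  # Casimir H = |x|^2/2: vanishes
print(X1, X2)
```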
This geometric context provides a route that describes Hamiltonian dynamics rigorously. Applications of Hamiltonian dynamics are almost ubiquitous in every scientific domain. More specifically, uses of symplectic forms provide a formalism for diverse phenomena. This theory was introduced by Poisson himself to describe celestial mechanics \cite{Poisson}. It was then used by Dirac to investigate quantum mechanics \cite{Dirac}, and notably Kontsevich showed that Poisson manifolds admit deformation quantizations \cite{Kont}.
In this paper, we present twelve methods that allow for numerical computations of concepts from Poisson geometry. These are listed in Table \ref{table:Funs-Algos-Exes} below, along with their respective algorithms and a non-exhaustive list of relevant references for each one. We have indicated with an asterisk (*) the seven methods that depend explicitly on our module for symbolic computation, \textsf{PoissonGeometry} \cite{CompuPoisson}.
\begin{table}[H]
\centering
\caption{Our numerical methods, with their corresponding algorithms, and examples where they are used. The right column is an informal summary of the algorithmic complexities, computed and presented in detail in Section \ref{sec:ComplexityPerformance}.} \label{table:Funs-Algos-Exes}
\begin{tabular}{|l|c|l|c|}
\hline
\multicolumn{1}{|c|} {\textbf{Method}} & \textbf{Algorithm} & \multicolumn{1}{c|}{\textbf{Examples}} & \textbf{Complexity} \\
\hline
\hline
\hyperref[AlgNumBivector]{\textsf{num\_bivector\_field}}
& \ref{AlgNumBivector} & \cite{Dufour, Bullo, Kosmann, Camille} & O(2)\\
\hline
\hyperref[AlgNumMatrixBivector]{\textsf{num\_bivector\_to\_matrix}}\,*
& \ref{AlgNumMatrixBivector} & \cite{Dufour,Bullo,Kosmann,Camille} & O(2) \\
\hline
\hyperref[AlgNumHamVF]{\textsf{num\_hamiltonian\_vf}}
& \ref{AlgNumHamVF} & \cite{Koz-95, Bullo, TV-19, Newton, HNN1} & O(2) \\
\hline
\hyperref[AlgNumPoissonBracket]{\textsf{num\_poisson\_bracket}}
& \ref{AlgNumPoissonBracket} & \cite{JacobiMath, Dufour, Kosmann, Camille} & O(2) \\
\hline
\hyperref[AlgNumSharp]{\textsf{num\_sharp\_morphism}}
& \ref{AlgNumSharp} & \cite{Dufour, Kosmann, Camille} & O(2) \\
\hline
\hyperref[AlgNumCoboundary]{\textsf{num\_coboundary\_operator}}\,*
& \ref{AlgNumCoboundary} & \cite{Naka, Dufour, Kosmann, Poncin, MarcutSl2} & O($2^m$) \\
\hline
\hyperref[AlgNumModularVF]{\textsf{num\_modular\_vf}}\,*
& \ref{AlgNumModularVF} & \cite{Reeb2, Dufour, Kosmann, Miranda, Camille, MVallYu, Pedroza} & O($2^m$) \\
\hline
\hyperref[AlgNumCurl]{\textsf{num\_curl\_operator}}\,*
& \ref{AlgNumCurl} & \cite{GrabowskiFR, Damianou, Dufour, Camille} & O($2^m$) \\
\hline
\hyperref[AlgNumOneFormsB]{\textsf{num\_one\_forms\_bracket}}\,*
& \ref{AlgNumOneFormsB} & \cite{Dufour, Kosmann, Camille, Grabowski} & O(2) \\
\hline
\hyperref[AlgNumGauge]{\textsf{num\_gauge\_transformation}}
& \ref{AlgNumGauge} & \cite{GaugeBursz, GaugeNaranjo, GaugeClass} & O(7) \\
\hline
\hyperref[AlgNumNormal]{\textsf{num\_linear\_normal\_form\_R3}}\,*
& \ref{AlgNumNormal} & \cite{LiuXU-92, Naka, Ginzburg, Dufour, Sheng, Bullo, Camille, GaugeClass, MarcutSl2, Obook} & O(1) \\
\hline
\hyperref[AlgNumFRatiu]{\textsf{num\_flaschka\_ratiu\_bivector}}\,*
& \ref{AlgNumFRatiu} & \cite{GrabowskiFR, Damianou, PabLef, PabloWrinFib, PabBott} & O(6) \\
\hline
\hline
\end{tabular}
\end{table}
The following diagram illustrates the internal functional dependencies of the methods available in \textsf{NumPoissonGeometry}\footnote{Available via \url{https://github.com/appliedgeometry/NumericalPoissonGeometry}}.
\begin{center}
\begin{tikzpicture}[
font=\rmfamily\footnotesize,
every matrix/.style={ampersand replacement=\&, column sep=2cm, row sep=.23cm},
source/.style={draw, thick, rounded corners, inner sep=.2cm},
to/.style={->, >=stealth', shorten >=0.5pt, semithick},
every node/.style={align=center}]
\matrix{
{}; \& \node[source] (gauge) {\hyperref[AlgNumGauge]{\textsf{num\_gauge\_transformation}}}; \\
\node[source] (matrix) {\hyperref[AlgNumMatrixBivector]{\textsf{num\_bivector\_to\_matrix}}};
\& \node[source] (hamilton) {\hyperref[AlgNumHamVF]{\textsf{num\_hamiltonian\_vf}}}; \\
\node[source] (sharp) {\hyperref[AlgNumSharp]{\textsf{num\_sharp\_morphism}}};
\& \node[source] (bracket) {\hyperref[AlgNumPoissonBracket]{\textsf{num\_poisson\_bracket}}}; \\
\node[source] (formsbracket) {\hyperref[AlgNumOneFormsB]{\textsf{num\_one\_forms\_bracket}}}; \& {}; \\
{} \& \node[source] (fratiu) {\hyperref[AlgNumFRatiu]{\textsf{num\_flaschka\_ratiu\_bivector}}}; \\
\node[source] (numbivector) {\hyperref[AlgNumBivector]{\textsf{num\_bivector}}}; \& \node[source] (normal) {\hyperref[AlgNumNormal]{\textsf{num\_linear\_normal\_form\_R3}}}; \\
};
\draw[to] (matrix.east) -- (hamilton.west);
\draw[to] (matrix) -- (sharp);
\draw[to] (hamilton) -- (bracket);
\draw[to] (sharp) -- (formsbracket);
\draw[to] (matrix.east) --++(0:10mm)to[out=0, in=180] (gauge.west);
\draw[to] (numbivector.east) --++(0:11mm)to[out=0, in=180] (fratiu.west);
\draw[to] (numbivector.east) --++(0:9mm)to[out=0, in=180] (normal.west);
\end{tikzpicture}\label{diagram}
\end{center}
The methods presented here have classical applications to Mechanics---see for example \cite{Bullo}---and also to recent advances in computer-aided techniques for determining normal forms for Hamiltonian systems \cite{CaraLoca20}. Furthermore, there has been a recent surge of interest in understanding Hamiltonian dynamics as uses of this theory start to appear in the data analysis and machine learning communities \cite{Pozh20}. Without attempting to be exhaustive, recent domains of application include: the development of Hamiltonian Monte Carlo techniques \cite{Hoffman}, applications of symplectic integration to optimization \cite{JordanPNAS}, inference of symbolic models from data \cite{Cranmer}, and the development of Hamiltonian Neural Networks \cite{HNN1, HNN2}.
Our work has been specifically designed to be compatible with popular machine learning frameworks, as our code can be integrated into NumPy, Pytorch, or TensorFlow environments. Moreover, as we rely on lattice meshes for our evaluations, our results will also be of interest to researchers that use finite element methods.
We hope to contribute an additional dimension to the understanding of Poisson geometry, enabling everyone to carry out numerical experiments with our freely available open-source code, motivating the expansion of techniques that have so far been incorporated. To the best of our knowledge this is the first comprehensive implementation of these methods.
Our numerical techniques can inform and complement researchers' intuition and provide further insights. For example, if a certain vector field is not trivial it can imply that a given Poisson structure is not unimodular, and this can be verified numerically. Moreover, our module only needs the algebraic expression for the Poisson bivector to carry out this verification (and not the complete, explicit vector field).
We have included specific examples for each of the methods above where numerical computations would be desirable, or are relevant in published work. We also strongly believe that our algorithms in this paper can be useful in the following related fields.
Our methods could aid in the development of deep learning systems for flows on tori and spheres \cite{Cranmer2020}, in particular for the groups $U(1)\cong\mathbb{S}^{1}$ and $SU(2)\cong\mathbb{S}^{3}$. In a related direction, normalized flows on Lie groups have been investigated with respect to distributions of data points \cite{Falorci2019}.
We expect our methods to be useful for evaluating inference problems related to the $n$-body problem, which has been approached recently \cite{Battaglia2016}.
Hamiltonian dynamics have been used to learn systems of simulations \cite{Battaglia2019}. This innovative work pioneers deep learning techniques that combine Hamiltonian dynamics and integrators of Ordinary Differential Equations. With our methods, such an overarching program could be extended to include more diverse equations, such as the ones that define a Poisson structure, or that verify conservation of specific quantities, for instance, a unimodular flow.
Normalized flows and techniques from symplectic geometry have been used to find canonical Darboux coordinates \cite{Li2020}. This highlights that the use of symplectic, and more generally, Poisson techniques is computationally efficient because these structures preserve volume.
Models that learn to respect conservative laws, through the use of Hamiltonian dynamics have been recently developed \cite{HNN1}. These could also be extended to more ample classes using our methods.
Hamiltonian functions themselves, relative to a symplectic form, have been modelled using recurrent neural networks \cite{chen2019}. We expect extensions of these works to be possible using our methods as well, where now the functions may be Hamiltonian relative to a Poisson structure. Moreover, systems that learn the dynamics from observed orbit trajectory data could be potentially designed.
There are two more applications that are relevant to the computation of statistical quantities on Lie groups. First, Hamiltonian flows that are equivariant with respect to an action can be used to learn densities that are invariant under the action of a Lie algebra. This strategy has been realized already for very specific algebras \cite{Jimenez2019}, and could be extended to more with our methods. Second, these ideas could also lead to a general method for finding reparametrizable densities on arbitrary Lie groups, using their associated Lie-Poisson structure \cite{Tamayo2015}.
\medskip
This paper is organized in the following way. In section 2, we describe our algorithms and point to domains of application for each one. In section 3, detailed analyses of the complexities of our methods are presented, as well as experimental run-times. A summary with explicit upper bounds is found in Table \ref{table:complexity}.
\section{Numerical Methods} \label{sec:keyf}
In this section we describe the implementation of all functions in \textsf{NumPoissonGeometry} and present numerical examples with classical/relevant Poisson bivector fields.
\subsection{\textsf{NumPoissonGeometry}: Evaluation} \label{subsec:mesh}
Our methods work with regular and \emph{irregular} meshes that must be entered as \textsf{NumPy} arrays: a \ec{(k,m)} \textsf{NumPy} array for a mesh with $k$ points in $\RR{m}$, for \ec{k \in \mathbb{N}}.
Irregular meshes can be used to implement probabilistic/statistical methods in Poisson geometry. In several of the subsections below we generate irregular meshes by means of random samples drawn from a uniform distribution over the interval \ec{[0,1)}. Sometimes for simplicity we use the `corners' of the unit cube on $\RR{m}$,
\begin{equation}\label{EcCorners}
Q^{m} := \{0,1\}\ \underset{m\text{\ times}}{\underbrace{\times \cdots \times}}\ \{0,1\},
\end{equation}
preloaded in Python into a \textsf{NumPy} array called \textsf{Qmesh}.
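For concreteness, both kinds of meshes can be produced directly with \textsf{NumPy}. The following sketch (with illustrative names such as \textsf{corner\_mesh}, which are not part of our module) builds the corner mesh $Q^{m}$ of (\ref{EcCorners}) for $m=3$ and an irregular random mesh:

```python
import itertools
import numpy as np

def corner_mesh(m):
    """The 2**m 'corners' of the unit cube in R^m as a (2**m, m) NumPy array."""
    return np.array(list(itertools.product([0, 1], repeat=m)), dtype=float)

# Analogous to the preloaded Qmesh, for m = 3: the 8 corners of the unit cube
Qmesh3 = corner_mesh(3)

# An irregular mesh: k = 10 points sampled uniformly from [0, 1)^3
rng = np.random.default_rng(seed=42)
irregular = rng.uniform(0.0, 1.0, size=(10, 3))
```

Any such \ec{(k,m)} array can then be passed as the \textsf{mesh} argument of the methods below.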
\subsection{\textsf{NumPoissonGeometry}: Syntax} \label{subsec:class}
The instantiation and syntax of \textsf{NumPoissonGeometry} are the same as those of \textsf{PoissonGeometry}. We recall these here for the reader's convenience. \\
\noindent\textbf{Coordinates.} By default, to emulate canonical coordinates on $\RR{m}$, we use symbolic variables formed by juxtaposing the symbol \textsf{x} with an index from the set \ec{\{1,\ldots,m\}}: (\text{\textsf{x1}}, \ldots, \text{\textsf{xm}}). \\
\noindent\textbf{Scalar Functions.} A scalar function is written using \emph{string literal expressions}. For example, the function \ec{f = a(x^1)^2 + b(x^2)^2 + c(x^3)^2} should be written exactly as follows: \textsf{`a*x1**2 + b*x2**2 + c*x3**2'}. All characters that are not coordinates are treated as (symbolic) parameters: \textsf{a}, \textsf{b} and \textsf{c} for the previous example. \\
\noindent\textbf{Multivector Fields and Differential forms.} Both multivector fields and differential forms are written using \emph{dictionaries} with \emph{tuples of integers} as \textsl{keys} and \emph{string} type \textsl{values}. If the coordinate expression of an $a$--multivector field $A$, with \ec{a \in \mathbb{N}} is given by
\begin{equation*}
A = \sum_{1 \leq i_1 < i_2 < \cdots < i_a \leq m} A^{i_1 i_2 \cdots i_a}\frac{\partial}{\partial{x^{i_1}}} \wedge \frac{\partial}{\partial{x^{i_2}}} \wedge \cdots \wedge \frac{\partial}{\partial{x^{i_a}}}, \quad A^{i_1 \cdots i_a} = A^{i_1 \cdots i_a}(x),
\end{equation*}
then $A$ should be written using a dictionary, as follows:
\begin{equation}\label{EcMultivectorDic}
\Big\{(1,...,a): \mathscr{A}^{1 \cdots a}, \,...,\, (i_1,...,i_a): \mathscr{A}^{i_1 \cdots i_a}, \,...,\, (m-a+1,...,m): \mathscr{A}^{m-a+1\cdots m}\Big\}
\end{equation}
Here, each key \ec{(i_1,\ldots,i_a)} is a tuple containing ordered indices \ec{1 \leq i_1 < \cdots < i_a \leq m} and the corresponding value \ec{\mathscr{A}^{i_1 \cdots i_a}} is the string expression of the scalar function (coefficient) \ec{A^{i_1 \cdots i_a}} of $A$.
The syntax for differential forms is the same. It is important to remark that only the keys and values of \emph{non--zero coefficients} need to be written. See the documentation for more details.
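As an illustration (in plain Python, with a hypothetical variable name), the bivector field \ec{x_3\,\partial/\partial{x_1} \wedge \partial/\partial{x_2} - x_2\,\partial/\partial{x_1} \wedge \partial/\partial{x_3} + x_1\,\partial/\partial{x_2} \wedge \partial/\partial{x_3}} on $\RR{3}$ is encoded as:

```python
# Dictionary encoding, following the syntax described above:
# keys are ordered index tuples (i1, i2) with i1 < i2; values are the
# string literal coefficients; zero coefficients are simply omitted.
P = {(1, 2): 'x3', (1, 3): '-x2', (2, 3): 'x1'}

# Sanity check: every key is a strictly increasing index tuple
assert all(i < j for (i, j) in P)
```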
\subsection{\textsf{NumPoissonGeometry}: Python Implementation} \label{sec:implementation}
First, let us briefly describe the implementation of the module \textsf{NumPoissonGeometry}. The (symbolic) inputs of the twelve methods in Table \ref{table:Funs-Algos-Exes} have to be entered as string literal expressions.
The \textsf{sympify} method converts such string expressions into symbolic variables, and the \textsf{lambdify} method transforms the symbolic expressions into functions that allow a (fast) numerical evaluation.
The output of our methods can be chosen to be a \textsf{NumPy} array (by default) or a \textsf{PyTorch}/\textsf{TensorFlow} tensor containing the evaluation of the input data at each point in a mesh on $\RR{m}$.
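The \textsf{sympify}/\textsf{lambdify} pipeline can be sketched as follows; this is a simplified illustration of the mechanism, not the module's actual code:

```python
import numpy as np
import sympy as sym

# Canonical coordinates x1, x2, x3 on R^3 as symbolic variables
coords = sym.symbols('x1 x2 x3')

# string literal -> symbolic expression -> fast numerical function
h_str = 'x1**2 + x2**2 + x3**2'
h_sym = sym.sympify(h_str)                     # sympify step
h_num = sym.lambdify(coords, h_sym, 'numpy')   # lambdify step

# Vectorized evaluation over a (k, 3) mesh
mesh = np.array([[0., 0., 0.], [1., 1., 1.], [1., 2., 3.]])
values = h_num(mesh[:, 0], mesh[:, 1], mesh[:, 2])
```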
Next we explain each of our numerical methods, describe how it relates to the theoretical object being implemented, present the corresponding algorithm, and show how it may be used with helpful examples.
\subsection{Bivector Fields} \label{subsec:bivector}
The evaluation of a (Poisson) bivector field $\Pi$ at a point in $M$ is defined by
\begin{equation*}
\Pi_{p} := \tfrac{1}{2}\Pi^{ij}(p)\left.\frac{\partial}{\partial{x^{i}}}\right|_{p} \wedge \left.\frac{\partial}{\partial{x^{j}}}\right|_{p}, \quad p \in M.
\end{equation*}
Observe that the coefficients of $\Pi$ (\ref{EcPiCoord}) at $p$ determine the evaluation above.
The function \textsf{num\_bivector} evaluates a (Poisson) bivector field on a mesh in $\RR{m}$.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_bivector}(\emph{bivector, mesh})} \label{AlgNumBivector}
\rule{\textwidth}{0.4pt}
\Input{a (Poisson) bivector field and a mesh}
\Output{evaluation of the bivector field at each point of the mesh}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the (Poisson) bivector field
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \textsc{Transform} each \bluecolor{bivector} item to a function that allows a numerical evaluation
\State \rreturn{an array containing the evaluation of \bluecolor{bivector} at each $m$--array of \bluecolor{mesh}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
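A condensed, purely illustrative version of Algorithm \ref{AlgNumBivector} can be written in plain Python as follows; note that the actual module transforms each coefficient with \textsf{lambdify} rather than with \textsf{eval}:

```python
import numpy as np

def num_bivector_sketch(bivector, mesh):
    """Evaluate each string coefficient of a bivector dictionary at every
    point of a (k, m) mesh, returning one dictionary per point."""
    results = []
    for point in mesh:
        # bind the canonical coordinates x1, ..., xm to the point's entries
        env = {f'x{i + 1}': float(c) for i, c in enumerate(point)}
        results.append({key: float(eval(expr, {}, env))
                        for key, expr in bivector.items()})
    return results

P_so3 = {(1, 2): 'x3', (1, 3): '-x2', (2, 3): 'x1'}
vals = num_bivector_sketch(P_so3, np.array([[0., 0., 0.], [1., 1., 1.]]))
```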
\begin{example}\label{example:so3}
Consider the Lie--Poisson bivector field on \ec{\RR{3}_{x}}
\begin{equation}\label{EcPiSO3}
\Pi_{\mathfrak{so}(3)} =
x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -
x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} +
x_1 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},
\end{equation}
associated to the 3--dimensional Lie algebra $\mathfrak{so}(3)$, which is used in studies of the 3--body problem \cite{Newton}, and more generally in geometric mechanics \cite{Bullo}. To evaluate \ec{\Pi_{\mathfrak{so}(3)}} at points of \ec{Q^{3}} (\ref{EcCorners}) we compute:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3 = NumPoissonGeometry(3)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P\_so3 = \big\{(1, 2): `x3', (1, 3): `-x2', (2, 3): `x1'\big\}}
\hspace*{\fill} \CommentCode{dictionary for \ec{\Pi_{\mathfrak{so}(3)}} in (\ref{EcPiSO3}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{npg3.num\_bivector(P\_so3, Qmesh)}
\hspace*{\fill} \CommentCode{run \textsf{num\_bivector} function}
\tcblower
\textsf{[\parbox[t]{\linewidth}{\{(1, 2): 0.0, (1, 3): \phantom{-}0.0, (2, 3): 0.0\}, \
\{(1, 2): 1.0, (1, 3): \phantom{-}0.0, (2, 3): 0.0\}, \newline
\{(1, 2): 0.0, (1, 3): -1.0, (2, 3): 0.0\}, \
\{(1, 2): 1.0, (1, 3): -1.0, (2, 3): 0.0\}, \newline
\{(1, 2): 0.0, (1, 3): \phantom{-}0.0, (2, 3): 1.0\}, \
\{(1, 2): 1.0, (1, 3): \phantom{-}0.0, (2, 3): 1.0\}, \newline
\{(1, 2): 0.0, (1, 3): -1.0, (2, 3): 1.0\}, \
\{(1, 2): 1.0, (1, 3): -1.0, (2, 3): 1.0\}]}}
\end{tcolorbox}
Note that the output preserves the \textsf{PoissonGeometry} syntax (\ref{EcMultivectorDic}). For example, to produce a \textsf{PyTorch} tensor encoding this information we use the \textsf{pt\_output} flag:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3.num\_bivector(P\_so3, Qmesh, pt\_output=True)}
\hspace*{\fill} \CommentCode{run \textsf{num\_bivector} function with \textsf{pt\_output} flag}
\tcblower
\resizebox{\textwidth}{!}{
\textsf{tensor([\,\parbox[t]{\linewidth}{[[0., 0., \phantom{-}0.], [0., 0., 0.], [0., \phantom{-}0., 0.]], \
[[0., 1., \phantom{-}0.], [-1., 0., 0.], [0., \phantom{-}0., 0.]], \newline
[[0., 0., -1.], [0., 0., 0.], [1., \phantom{-}0., 0.]], \
[[0., 1., -1.], [-1., 0., 0.], [1., \phantom{-}0., 0.]], \newline
[[0., 0., \phantom{-}0.], [0., 0., 1.], [0., -1., 0.]], \
[[0., 1., \phantom{-}0.], [-1., 0., 1.], [0., -1., 0.]], \newline
[[0., 0., -1.], [0., 0., 1.], [1., -1., 0.]], \
[[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]]\,], \newline
dtype=torch.float64)}}
}
\end{tcolorbox}
\end{example}
\begin{figure}[H]
\centering
\begin{subfigure}{0.35\textwidth}
\includegraphics[width=\textwidth]{FSO3.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{NOMODSO3.png}
\end{subfigure}
\caption{\emph{Left}: Symplectic foliation of \ec{\Pi_{\mathfrak{so}(3)}} in (\ref{EcPiSO3}). \emph{Right}: Modular vector field of $\Pi$ in (\ref{EcPiNoModSO3}) relative to the Euclidean volume form on \ec{\RR{3}_{x}}. The color scale indicates the magnitude of the vectors.}
\label{fig:SO3-NoUnimod}
\end{figure}
\subsection{Matrix of a Bivector Field} \label{subsec:matrix}
The value of the matrix \ec{[\Pi^{ij}]} of a (Poisson) bivector field $\Pi$ at a point in $M$ is given by
\begin{equation}\label{EcPiMatrix}
\big[\Pi^{ij}\big]_{p} = \big[\Pi^{ij}(p)\big], \quad p \in M.
\end{equation}
Hence, we just need to know the value of the coefficients of $\Pi$ (\ref{EcPiCoord}) at $p$.
The function \textsf{num\_bivector\_to\_matrix} evaluates the matrix of a (Poisson) bivector field on a mesh in $\RR{m}$.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_bivector\_to\_matrix}(\emph{bivector, mesh})} \label{AlgNumMatrixBivector}
\rule{\textwidth}{0.4pt}
\Input{a (Poisson) bivector field and a mesh}
\Output{evaluation of the matrix of the (Poisson) bivector field at each point of the mesh \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the (Poisson) bivector field
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ \textsf{bivector\_to\_matrix}(\bluecolor{bivector})
\CommentNew{\textsf{bivector\_to\_matrix}: function of \textsf{PoissonGeometry}}
\algstore{bkbreak}\end{algorithmic}\end{algorithm}\begin{algorithm}[H]\vspace{0.1cm}\begin{algorithmic}[1]\algrestore{bkbreak}
\State \textsc{Transform} each \bluecolor{variable\_1} item to a function that allows a numerical evaluation
\State \rreturn{an array containing the evaluation of \bluecolor{variable\_1} at each $m$--array of \bluecolor{mesh}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
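The core of Algorithm \ref{AlgNumMatrixBivector} is the assembly of the skew-symmetric matrix (\ref{EcPiMatrix}). A minimal sketch at a single point, with the coefficients already evaluated (the names below are illustrative, not part of the module):

```python
import numpy as np

def bivector_matrix_at_point(bivector_values, m):
    """Assemble the skew-symmetric matrix [Pi^{ij}(p)] from the values of
    the non-zero coefficients Pi^{ij}(p) at a fixed point p.

    bivector_values: dict mapping ordered index tuples (i, j) -> Pi^{ij}(p)
    """
    matrix = np.zeros((m, m))
    for (i, j), value in bivector_values.items():
        matrix[i - 1, j - 1] = value    # Pi^{ij}
        matrix[j - 1, i - 1] = -value   # Pi^{ji} = -Pi^{ij}
    return matrix

# Pi_so3 at the point p = (0, 0, 1): only Pi^{12} = x3 = 1 is non-zero
M = bivector_matrix_at_point({(1, 2): 1.0}, 3)
```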
\begin{example}
Consider the Lie--Poisson bivector field on \ec{\RR{3}_{x}}
\begin{equation}\label{EcPiSL2}
\Pi_{\mathfrak{sl}(2)} =
-x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -
x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} +
x_1 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},
\end{equation}
associated to the 3--dimensional Lie algebra $\mathfrak{sl}(2)$, used in the classification of rigid motions \cite{GaugeClass}, and in other mechanical systems \cite{Naka, Bullo}. To evaluate the matrix of \ec{\Pi_{\mathfrak{sl}(2)}} at points of \ec{Q^{3}} (\ref{EcCorners}) we compute:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3 = NumPoissonGeometry(3)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P\_sl2 = \big\{(1, 2): `-x3', (1, 3): `-x2', (2, 3): `x1'\big\}}
\hspace*{\fill} \CommentCode{dictionary for \ec{\Pi_{\mathfrak{sl}(2)}} in (\ref{EcPiSL2}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{npg3.num\_bivector\_to\_matrix(P\_sl2, Qmesh, pt\_output=True)} \newline
\hspace*{\fill} \CommentCode{run \textsf{num\_bivector\_to\_matrix} function with \textsf{pt\_output} flag}
\tcblower
\resizebox{\textwidth}{!}{
\textsf{tensor([\,\parbox[t]{\linewidth}{[[0., 0., 0.], [0., 0., 0.], [\phantom{-}0., \phantom{-}0., 0.]], \
[[0., 1., 0.], [-1., 0., 0.], [\phantom{-}0., \phantom{-}0., 0.]], \newline
[[0., 0., 1.], [0., 0., 0.], [-1., \phantom{-}0., 0.]], \
[[0., 1., 1.], [-1., 0., 0.], [-1., \phantom{-}0., 0.]], \newline
[[0., 0., 0.], [0., 0., 1.], [\phantom{-}0., -1., 0.]], \
[[0., 1., 0.], [-1., 0., 1.], [\phantom{-}0., -1., 0.]], \newline
[[0., 0., 1.], [0., 0., 1.], [-1., -1., 0.]], \
[[0., 1., 1.], [-1., 0., 1.], [-1., -1., 0.]]\,], \newline
dtype=torch.float64)}}
}
\end{tcolorbox}
\end{example}
\subsection{Hamiltonian Vector Fields} \label{subsec:hamiltonian}
The Hamiltonian vector field \ec{X_{h} := \mathbf{i}_{\mathrm{d}{h}}\Pi} of a scalar function $h$, relative to a Poisson bivector field $\Pi$ on $M$, can be evaluated at a point of $M$ by the following (coordinate) formula:
\begin{equation}\label{EcPiHam}
X_{h} |_{p} \simeq -\left[\Pi^{ij}(p)\right]\left[\tfrac{\partial h}{\ \partial x^{k}}(p)\right], \quad p \in M; \quad k=1, \ldots, m
\end{equation}
Here, \ec{[\Pi^{ij}]} is the matrix of $\Pi$ (\ref{EcPiMatrix}), and \ec{[{\partial h}/{\partial x^{k}}]} is the gradient vector (field) of $h$.
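At each mesh point, formula (\ref{EcPiHam}) reduces to a matrix--vector product. A minimal \textsf{NumPy} sketch with the matrix of $\Pi$ and the gradient of $h$ already evaluated at $p$ (illustrative names only, following the sign convention of the formula above):

```python
import numpy as np

def hamiltonian_vf_at_point(Pi_matrix, grad_h):
    """Evaluate X_h|_p = -[Pi^{ij}(p)] [dh/dx^k(p)] at a single point p."""
    return -Pi_matrix @ grad_h

# Canonical Pi = d/dx1 ^ d/dx2 on R^2, h = (x1**2 + x2**2)/2, at p = (1, 0):
Pi_p = np.array([[0., 1.], [-1., 0.]])   # matrix of Pi (constant in x)
grad_h_p = np.array([1., 0.])            # gradient of h at p
X_h_p = hamiltonian_vf_at_point(Pi_p, grad_h_p)
```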
The function \textsf{num\_hamiltonian\_vf} evaluates a Hamiltonian vector field on a mesh in $\RR{m}$.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_hamiltonian\_vf}(\emph{bivector, ham\_function, mesh})} \label{AlgNumHamVF}
\rule{\textwidth}{0.4pt}
\Input{a Poisson bivector field $\Pi$, a scalar function $h$ and a mesh}
\Output{evaluation of the Hamiltonian vector field of $h$ with respect to $\Pi$ at each point of the mesh \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the Poisson bivector field $\Pi$
\State \bluecolor{ham\_function} $\gets$ a (string) expression representing the scalar function $h$
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \textsc{Convert} \bluecolor{ham\_function} to a symbolic variable
\State \bluecolor{variable\_1} $\gets$ the (symbolic) gradient vector field of \bluecolor{ham\_function}
\algstore{bkbreak}\end{algorithmic}\end{algorithm}\begin{algorithm}[H]\vspace{0.1cm}\begin{algorithmic}[1]\algrestore{bkbreak}
\State \textsc{Transform} each \bluecolor{variable\_1} item to a function that allows a numerical evaluation
\State \bluecolor{variable\_2} $\gets$ an array containing the evaluation of \bluecolor{variable\_1} at each $m$--array of \bluecolor{mesh}
\State \bluecolor{variable\_3} $\gets$ \textsf{num\_bivector\_to\_matrix}(\bluecolor{bivector}, \bluecolor{mesh})
\CommentNew{see Algorithm \ref{AlgNumMatrixBivector}}
\State \bluecolor{variable\_4} $\gets$ an empty container
\For {$0 \leq i < k$}
\State \bluecolor{variable\_4}[$i$] $\gets$ $(-1)$ $*$ \bluecolor{variable\_3}[$i$] $*$ \bluecolor{variable\_2}[$i$]
\CommentNew{matrix--vector product}
\EndFor
\State \rreturn{\bluecolor{variable\_4}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
Consider the Hamiltonian vector field on \ec{\RR{6}_{x}}, that arises in a particular case of the three body problem \cite{Newton},
\begin{equation*}
\begin{split}
X_{h} =& - x_4\frac{\partial}{\partial{x_1}} - x_5\frac{\partial}{\partial{x_2}} - x_6\frac{\partial}{\partial{x_3}}
+ \left[ \frac{1}{(x_1-x_2)|x_1-x_2|} + \frac{1}{(x_1-x_3)|x_1-x_3|} \right]\frac{\partial}{\partial{x_4}} \nonumber \\
& + \left[ \frac{1}{(x_1-x_2)|x_1-x_2|} + \frac{1}{(x_2-x_3)|x_2-x_3|} \right]\frac{\partial}{\partial{x_5}} \\
& - \left[ \frac{1}{(x_1-x_3)|x_1-x_3|} + \frac{1}{(x_2-x_3)|x_2-x_3|} \right]\frac{\partial}{\partial{x_6}},
\end{split}
\end{equation*}
with Hamiltonian function
\begin{equation}\label{EcHamR6}
h = \frac{1}{x_{1}-x_{2}} + \frac{1}{x_{1}-x_{3}} + \frac{1}{x_{2}-x_{3}} + \frac{x_{4}^{2} + x_{5}^{2} + x_{6}^{2}}{2},
\end{equation}
and relative to the canonical Poisson bivector field on \ec{\RR{6}_{x}}
\begin{equation}\label{EcPiCanR6}
\Pi = \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4} + \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_5} + \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_6}.
\end{equation}
To evaluate \ec{X_{h}} while avoiding singularities, we use the \textsf{mesh}, with \ec{Q^{3}} as in (\ref{EcCorners}),
\begin{equation*}
\{-2, -1\} \times \{0, 1\} \times \{2, 3\} \times Q^{3}, \qquad \RR{6} \simeq \RR{3}_{\scriptscriptstyle (x_1,x_2,x_3)} \times \RR{3}_{\scriptscriptstyle (x_4,x_5,x_6)}:
\end{equation*}
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg6 = NumPoissonGeometry(6)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P = \{(1, 4): 1, (2, 5): 1, (3, 6): 1\}}
\hspace*{\fill} \CommentCode{dictionary for $\Pi$ in (\ref{EcPiCanR6}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{h = \parbox[t]{0.90\linewidth}{`1/(x1 - x2) + 1/(x1 - x3) + 1/(x2 - x3) + (x4**2 + x5**2 + x6**2)/2'}} \hspace*{\fill} \CommentCode{string expression for $h$ in (\ref{EcHamR6})} \\
$>>>$ \textsf{npg6.num\_hamiltonian\_vf(P, h, mesh, pt\_output=True)} \newpage
\hspace*{\fill} \CommentCode{run \textsf{num\_hamiltonian\_vf} function with \textsf{pt\_output} flag}
\tcblower
\textsf{tensor([\,\parbox[t]{\linewidth}{[[\phantom{-}0.0000], [\phantom{-}0.0000], [\phantom{-}0.0000], [-0.3125], [0.0000], [0.3125]], \newline
$\ldots$, \newline
[[-1.0000], [-1.0000], [\phantom{-}0.0000], [-0.3125], [0.0000], [0.3125]], \newline
[[-1.0000], [-1.0000], [-1.0000], [-0.3125], [0.0000], [0.3125]]\,], \newline
dtype=torch.float64)}}
\end{tcolorbox}
\end{example}
\subsection{Poisson Brackets} \label{subsec:bracket}
By definition and (\ref{EcPiHam}), the Poisson bracket \ec{\{f,g\}_{\Pi}} of two scalar functions $f$ and $g$, induced by a Poisson bivector field $\Pi$ on $M$, at a point in $M$ can be calculated in coordinates as follows:
\begin{equation*}
\{f,g\}_{\Pi}(p) = -\big[\tfrac{\partial g}{\partial x^{k}}(p)\big]^{\top}\left[\Pi^{ij}(p)\right]\big[\tfrac{\partial f}{\partial x^{l}}(p)\big], \quad p \in M; \quad k,l = 1, \ldots, m
\end{equation*}
Here, \ec{[\Pi^{ij}]} is the matrix of $\Pi$ (\ref{EcPiMatrix}), \ec{[{\partial f}/{\partial x^{l}}]} and \ec{[{\partial g}/{\partial x^{k}}]} are the gradient vector (fields) of $f$ and $g$, in that order.
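At a fixed point this is a vector--matrix--vector product. A minimal \textsf{NumPy} sketch, with both gradients already evaluated at $p$ (illustrative names only, following the sign convention of the formula above):

```python
import numpy as np

def poisson_bracket_at_point(Pi_matrix, grad_f, grad_g):
    """Evaluate {f, g}_Pi(p) = -[dg(p)]^T [Pi^{ij}(p)] [df(p)]."""
    return float(-grad_g @ Pi_matrix @ grad_f)

# Canonical Pi = d/dx1 ^ d/dx2 on R^2 with f = x1, g = x2: the gradients
# are constant, so the bracket has the same value at every point.
Pi_p = np.array([[0., 1.], [-1., 0.]])
bracket = poisson_bracket_at_point(Pi_p, np.array([1., 0.]), np.array([0., 1.]))
```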
The function \textsf{num\_poisson\_bracket} evaluates the Poisson bracket of two scalar functions on a mesh in $\RR{m}$.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_poisson\_bracket}(\emph{bivector, function\_1, function\_2, mesh})} \label{AlgNumPoissonBracket}
\rule{\textwidth}{0.4pt}
\Input{a Poisson bivector field $\Pi$, two scalar functions $f,g$ and a mesh}
\Output{evaluation of the Poisson bracket of $f$ and $g$ induced by $\Pi$ at each point of the mesh \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the Poisson bivector field $\Pi$
\State \bluecolor{function\_1}, \bluecolor{function\_2} $\gets$ string expressions representing the scalar functions $f$ and $g$
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\If {\bluecolor{function\_1} $==$ \bluecolor{function\_2}}
\State \textbf{return} 0
\CommentNew{if $f=g$, then their Poisson bracket is zero}
\Else
\State \textsc{Convert} \bluecolor{function\_2} to a symbolic variable
\State \bluecolor{variable\_1} $\gets$ the (symbolic) gradient vector field of \bluecolor{function\_2}
\State \textsc{Transform} each \bluecolor{variable\_1} item to a function that allows a numerical evaluation
\State \bluecolor{variable\_2} $\gets$ an array containing the evaluation of \bluecolor{variable\_1} at each $m$--array of \bluecolor{mesh}
\State \bluecolor{variable\_3} $\gets$ \textsf{num\_hamiltonian\_vf}(\bluecolor{bivector}, \bluecolor{function\_1}, \bluecolor{mesh})
\Statex \CommentNew{see Algorithm \ref{AlgNumHamVF}}
\State \bluecolor{variable\_4} $\gets$ an empty container
\For {$0 \leq i < k$}
\State \bluecolor{variable\_4}[$i$] $\gets$ \bluecolor{variable\_2}[$i$] $*$ \bluecolor{variable\_3}[$i$]
\CommentNew{dot product of vectors}
\EndFor
\State \rreturn{\bluecolor{variable\_4}}
\EndIf
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
Consider the Poisson bivector field on \ec{\RR{6}_{x}}, obtained as a deformation of an almost Poisson structure analyzed in relation to plasma \cite{Plasma} (see, also \cite{SeveraTwist}),
\begin{equation}\label{EcTwist}
\Pi = \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4} + \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_5} + \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_6} + x_{2}^{2}\frac{\partial}{\partial x_5}\wedge \frac{\partial}{\partial x_6}.
\end{equation}
Observe that the Poisson bracket \ec{\{x_{6}, x_{5}\}_{\Pi}} equals $-1$ at points \ec{x \in \RR{6}} such that \ec{|x_{2}|=1}. We can check this fact using random \textsf{meshes} of the form
\begin{equation*}
\{a_{1}, b_{1}\} \times \{1\} \times \{a_{2}, b_{2}\} \times \cdots \times \{a_{5}, b_{5}\}, \qquad a_{i},b_{i} \in \operatorname{random}{[0,1)}.
\end{equation*}
Here, random samples are taken from a uniform distribution over the interval \ec{[0,1)}:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg6 = NumPoissonGeometry(6)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P = \{(1, 4): 1, (2, 5): 1, (3, 6): 1, (5, 6): `x2**2'\}}
\hspace*{\fill} \CommentCode{dictionary for $\Pi$ in (\ref{EcTwist}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{f, g = `x6', `x5'}
\hspace*{\fill} \CommentCode{string expressions for canonical coordinates $x_{6}$ and $x_{5}$, in that order} \\
$>>>$ \textsf{npg6.num\_poisson\_bracket(P, f, g, meshes, pt\_output=True)} \newline
\hspace*{\fill} \CommentCode{run \textsf{num\_poisson\_bracket} function with \textsf{pt\_output} flag}
\tcblower
\resizebox{\textwidth}{!}{
\textsf{tensor(\parbox[t]{\linewidth}{-1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, -1.0, dtype=torch.float64)}}
}
\end{tcolorbox}
\end{example}
Observe that we use a probabilistic method to verify a particular property of a Poisson bracket. These methods can be used to determine other characteristics, for example, singular points.
\subsection{Sharp Morphism} \label{subsec:sharp}
In the context of Lie algebroids, the vector bundle morphism \ec{\Pi^{\natural}:\alpha \mapsto \mathbf{i}_{\alpha}\Pi}, induced by a Poisson bivector field $\Pi$ on $M$, is the anchor map of the Poisson Lie algebroid corresponding to $\Pi$ \cite{FuchAlgbd}.
Similarly to (\ref{EcPiHam}), we can evaluate the image \ec{\Pi^{\natural}(\alpha)} of a differential 1--form $\alpha$ at a point in $M$ as follows:
\begin{equation}\label{EcPiSharp}
\Pi^{\natural}(\alpha)\big|_{p} \simeq -\left[\Pi^{ij}(p)\right]\big[\alpha_{k}(p)\big], \quad p \in M; \quad k = 1, \ldots, m
\end{equation}
Here, \ec{[\Pi^{ij}]} is the matrix of $\Pi$ (\ref{EcPiMatrix}), and \ec{[\alpha_{k}]} is the coefficient vector of \ec{\alpha = \alpha_{k}\mathrm{d}{x^{k}}}.
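As with (\ref{EcPiHam}), formula (\ref{EcPiSharp}) is a matrix--vector product at each point. A minimal \textsf{NumPy} sketch with everything already evaluated at $p$ (illustrative names only, following the sign convention of the formula above):

```python
import numpy as np

def sharp_at_point(Pi_matrix, alpha_coeffs):
    """Evaluate Pi^natural(alpha)|_p = -[Pi^{ij}(p)] [alpha_k(p)]."""
    return -Pi_matrix @ alpha_coeffs

# Matrix of Pi_so3 at p = (1, 1, 1), and the coefficient vector of
# alpha = x1 dx1 + x2 dx2 + x3 dx3 at the same point:
Pi_p = np.array([[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]])
alpha_p = np.array([1., 1., 1.])
X_p = sharp_at_point(Pi_p, alpha_p)   # alpha lies in the kernel of Pi at p
```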
The function \textsf{num\_sharp\_morphism} evaluates the vector field \ec{\Pi^{\natural}(\alpha)} on a mesh in $\RR{m}$.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_sharp\_morphism}(\emph{bivector, one\_form, mesh})} \label{AlgNumSharp}
\rule{\textwidth}{0.4pt}
\Input{a Poisson bivector field $\Pi$, a differential 1--form $\alpha$ and a mesh}
\Output{evaluation of the vector field $\Pi^{\natural}(\alpha)$ at each point of the mesh}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the Poisson bivector field $\Pi$
\State \bluecolor{one\_form} $\gets$ a variable encoding the differential 1--form \ec{\alpha = \alpha_{1}\mathrm{d}{x^{1}} + \cdots + \alpha_{m}\mathrm{d}{x^{m}}}
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ a container with items $(\alpha_{1},\ldots,\alpha_{m})$
\State \textsc{Transform} each \bluecolor{variable\_1} item to a function that allows a numerical evaluation
\State \bluecolor{variable\_2} $\gets$ an array containing the evaluation of \bluecolor{variable\_1} at each $m$--array of
\bluecolor{mesh}
\algstore{bkbreak}\end{algorithmic}\end{algorithm}\begin{algorithm}[H]\vspace{0.1cm}\begin{algorithmic}[1]\algrestore{bkbreak}
\State \bluecolor{variable\_3} $\gets$ \textsf{num\_bivector\_to\_matrix}(\bluecolor{bivector}, \bluecolor{mesh})
\CommentNew{see Algorithm \ref{AlgNumMatrixBivector}}
\State \bluecolor{variable\_4} $\gets$ an empty container
\For {$0 \leq i < k$}
\State \bluecolor{variable\_4}[$i$] $\gets$ $(-1)$ $*$ \bluecolor{variable\_3}[$i$] $*$ \bluecolor{variable\_2}[$i$]
\CommentNew{matrix--vector product}
\EndFor
\State \rreturn{\bluecolor{variable\_4}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
Consider the Casimir function \ec{K = {1}/{2}(x_{1}^{2} + x_{2}^{2} + x_{3}^{2})} of the Poisson bivector field \ec{\Pi_{\mathfrak{so}(3)}} in (\ref{EcPiSO3}) \cite{Damianou,Newton}. By definition, the exterior derivative of $K$,
\begin{equation}\label{EcDKsharp}
\mathrm{d}{K} = x_{1}\mathrm{d}{x_{1}} + x_{2}\mathrm{d}{x_{2}} + x_{3}\mathrm{d}{x_{3}},
\end{equation}
belongs to the kernel of \ec{\Pi_{\mathfrak{so}(3)}^{\natural}}, that is, \ec{\Pi_{\mathfrak{so}(3)}^{\natural}\mathrm{d}{K} = 0}. We can check this fact by using random meshes:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3 = NumPoissonGeometry(3)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P\_so3 = \big\{(1, 2): `x3', (1, 3): `-x2', (2, 3): `x1'\big\}}
\hspace*{\fill} \CommentCode{dictionary for $\Pi_{\mathfrak{so}(3)}$ in (\ref{EcPiSO3}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{dK = \{(1,): `x1', (2,): `x2', (3,): `x3'\}}
\hspace*{\fill} \CommentCode{dictionary for $\mathrm{d}{K}$ in (\ref{EcDKsharp}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{mesh = numpy.random.rand(10**6, 3)} \\
\hspace*{\fill} \CommentCode{\# $(10^{6}, 3)$ \textsf{numpy} array with random samples from a uniform distribution over [0,1)} \\
$>>>$ \textsf{npg3.num\_sharp\_morphism(P\_so3, dK, mesh, pt\_output=True)} \newline
\hspace*{\fill} \CommentCode{run \textsf{num\_sharp\_morphism} function with \textsf{pt\_output} flag}
\tcblower
\textsf{tensor([\,\parbox[t]{\linewidth}{[[0.], [0.], [0.]], \ [[0.], [0.], [0.]], \ [[0.], [0.], [0.]], \ $\ldots$, \newline
[[0.], [0.], [0.]], \ [[0.], [0.], [0.]], \ [[0.], [0.], [0.]]\,], dtype=torch.float64)}}
\end{tcolorbox}
\end{example}
\subsection{Coboundary Operator} \label{subsec:coboundary}
The coboundary operator $\delta_{\Pi}:\Gamma\wedge^{\bullet}T{M} \rightarrow \Gamma\wedge^{\bullet + 1}T{M}$ induced by a Poisson bivector field $\Pi$ on $M$ \cite{Lich-77} is defined by
\begin{equation*}
\delta_{\Pi}(A) := \cSch{\Pi,A}, \quad A \in \Gamma\wedgeT{M}.
\end{equation*}
Here, \ec{\Gamma\wedgeT{M}} denotes the module of multivector fields on $M$.
The function \textsf{num\_coboundary\_operator} evaluates the image under \ec{\delta_{\Pi}} of an arbitrary multivector field on a mesh in $\RR{m}$.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_coboundary\_operator}(\emph{bivector, multivector, mesh})} \label{AlgNumCoboundary}
\rule{\textwidth}{0.4pt}
\Input{a Poisson bivector field $\Pi$, a multivector field $A$ and a mesh}
\Output{evaluation of the multivector field $\cSch{\Pi, A}$ at each point of the mesh}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\algstore{bkbreak}\end{algorithmic}\end{algorithm}\begin{algorithm}[H]\vspace{0.1cm}\begin{algorithmic}[1]\algrestore{bkbreak}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the Poisson bivector field $\Pi$
\State \bluecolor{multivector} $\gets$ a variable encoding the multivector field $A$, or a (string) expression if \ec{\operatorname{degree}{A} = 0}
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ \textsf{lichnerowicz\_poisson\_operator}(\bluecolor{bivector}, \bluecolor{multivector})
\Statex \CommentNew{\textsf{lichnerowicz\_poisson\_operator}: function of \textsf{PoissonGeometry}}
\State \textsc{Transform} each \bluecolor{variable\_1} item to a function that allows a numerical evaluation
\State \rreturn{an array containing the evaluation of \bluecolor{variable\_1} at each $m$--array of \bluecolor{mesh}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
The vector field on \ec{\RR{3}_{x} \setminus \{x_{3}\text{--axis}\}}
\begin{equation}\label{EcWCobound}
W = \frac{x_1x_{3}e^{{-1}/{(x_1^2 + x_2^2 - x_3^2)^2}}}{x_1^2 + x_2^2}\frac{\partial}{\partial{x_{1}}} +
\frac{x_2x_{3}e^{{-1}/{(x_1^2 + x_2^2 - x_3^2)^2}}}{x_1^2 + x_2^2}\frac{\partial}{\partial{x_{2}}} +
e^{{-1}/{(x_1^2 + x_2^2 - x_3^2)^2}}\frac{\partial}{\partial{x_{3}}}
\end{equation}
arises in the study of the first Poisson cohomology group of the Poisson bivector field \ec{\Pi_{\mathfrak{sl}(2)}} in (\ref{EcPiSL2}), where it provides a 1--cocycle that is not Hamiltonian \cite{Naka, MarcutSl2}. To check the cocycle property of $W$ under the coboundary operator induced by \ec{\Pi_{\mathfrak{sl}(2)}}, we evaluate the (image) bivector field \ec{\cSch{\Pi_{\mathfrak{sl}(2)},W}} on random meshes:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3 = NumPoissonGeometry(3)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P\_sl2 = \big\{(1, 2): `-x3', (1, 3): `-x2', (2, 3): `x1'\big\}}
\hspace*{\fill} \CommentCode{dictionary for $\Pi_{\mathfrak{sl}(2)}$ in (\ref{EcPiSL2}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{W = \{\parbox[t]{0.95\linewidth}{(1,): `x1 * x3 * exp(-1/(x1**2 + x2**2 - x3**2)**2) / (x1**2 + x2**2)', \newline
(2,): `x2 * x3 * exp(-1/(x1**2 + x2**2 - x3**2)**2) / (x1**2 + x2**2)', \newline
(3,): `exp(-1/(x1**2 + x2**2 - x3**2)**2)'\}}}
\hspace*{\fill} \CommentCode{dictionary for $W$ in (\ref{EcWCobound}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{mesh = numpy.random.rand(10**6, 3)} \\
\hspace*{\fill} \CommentCode{\# $(10^{6}, 3)$ \textsf{numpy} array with random samples from a uniform distribution over [0,1)} \\
$>>>$ \textsf{npg3.num\_coboundary\_operator(P\_sl2, W, mesh, pt\_output=True)} \newline
\hspace*{\fill} \CommentCode{run \textsf{num\_coboundary\_operator} function with \textsf{pt\_output} flag}
\tcblower
\textsf{tensor([\,\parbox[t]{\linewidth}{[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], \
[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], \newline
$\ldots$, \newline
[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], \
[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], \newline
[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]], \
[[0., 0., 0.], [0., 0., 0.], [0., 0., 0.]]\,], \newline
dtype=torch.float64)}}
\end{tcolorbox}
\end{example}
The characteristic foliation of \ec{\Pi_{\mathfrak{sl}(2)}} can be described using the Casimir function \ec{2K=x_{1}^{2} + x_{2}^{2} - x_{3}^{2}} (see Figure \ref{fig:SL2-TangentVF}). Observe that $W$ is orthogonal to the gradient vector field of $K$, \ec{x_{1}\partial/\partial{x_{1}} + x_{2}\partial/\partial{x_{2}} - x_{3}\partial/\partial{x_{3}}}. Hence, $W$ is tangent to the symplectic foliation of \ec{\Pi_{\mathfrak{sl}(2)}}. However, it is known that $W$ is not a Hamiltonian vector field \cite{Naka}. Therefore, the first Poisson cohomology group of \ec{\Pi_{\mathfrak{sl}(2)}} is non--trivial.
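The vanishing of \ec{\cSch{\Pi_{\mathfrak{sl}(2)},W}} can also be cross--checked outside the library: for a vector field $W$, the Schouten bracket with a bivector field coincides, up to sign, with the Lie derivative \ec{L_{W}\Pi}, whose components are \ec{(L_{W}\Pi)^{ij} = W^{k}\partial_{k}\Pi^{ij} - \Pi^{kj}\partial_{k}W^{i} - \Pi^{ik}\partial_{k}W^{j}}. The following \textsf{sympy} sketch (an illustration, not the library implementation) evaluates these components at a sample point away from the $x_{3}$--axis:

```python
import sympy as sp

# Symbolic cross-check that W is a 1-cocycle of Pi_sl(2); an illustrative
# sketch, not the library implementation.
x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)

# Matrix [Pi^{ij}] of Pi_sl(2): Pi^{12} = -x3, Pi^{13} = -x2, Pi^{23} = x1.
Pi = sp.Matrix([[0, -x3, -x2],
                [x3,  0,  x1],
                [x2, -x1,  0]])

g = sp.exp(-1/(x1**2 + x2**2 - x3**2)**2)
W = [x1*x3*g/(x1**2 + x2**2), x2*x3*g/(x1**2 + x2**2), g]

def lie_derivative(i, j):
    """Component (L_W Pi)^{ij} = W^k d_k Pi^{ij} - Pi^{kj} d_k W^i - Pi^{ik} d_k W^j."""
    return sum(W[k]*sp.diff(Pi[i, j], X[k])
               - Pi[k, j]*sp.diff(W[i], X[k])
               - Pi[i, k]*sp.diff(W[j], X[k]) for k in range(3))

# Evaluate at a sample point away from the x3-axis; all nine components
# vanish, consistent with the all-zero tensor output above.
pt = {x1: sp.Rational(1, 2), x2: sp.Rational(3, 10), x3: sp.Rational(1, 5)}
for i in range(3):
    for j in range(3):
        assert abs(sp.N(lie_derivative(i, j).subs(pt))) < 1e-12
```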
\begin{figure}[H]
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{FSL2.png}
\end{subfigure}
\hfill
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=\textwidth]{WSL2.png}
\end{subfigure}
\caption{\emph{Left}: Symplectic foliation of \ec{\Pi_{\mathfrak{sl}(2)}} in (\ref{EcPiSL2}). \emph{Right}: Vector field W in (\ref{EcWCobound}), tangent to the symplectic foliation of \ec{\Pi_{\mathfrak{sl}(2)}} in (\ref{EcPiSL2}). The color scale indicates the magnitude of the vectors.}
\label{fig:SL2-TangentVF}
\end{figure}
\subsection{Modular Vector Field} \label{subsec:modular}
The modular vector field of an orientable Poisson manifold \ec{(M,\Pi)} is an infinitesimal automorphism of $\Pi$ determined by the choice of a volume form $\Omega$ \cite{WeModular}. It is defined by the linear map
\begin{equation*}
Z_{\Pi, \Omega}: h \longmapsto \mathrm{div}_{\Omega}X_{h}, \quad h \in \Cinf{M}.
\end{equation*}
Here, \ec{\mathrm{div}_{\Omega}X_{h} \in \Cinf{M}} denotes the divergence of the Hamiltonian vector field \ec{X_{h}} (\ref{EcPiHam}) with respect to $\Omega$.
The modular vector field measures how far Hamiltonian flows are from preserving a given volume form \cite{WeModular}. In the regular case, the existence of a volume form that remains invariant under every Hamiltonian vector field only depends on the characteristic (symplectic) foliation of the Poisson manifold, rather than the leaf--wise symplectic form \cite{Reeb2}. In particular, for regular codimension--one symplectic foliations, a characteristic class controls the existence of such a volume form \cite{Miranda}.
The function \textsf{num\_modular\_vf} evaluates the modular vector field of a Poisson bivector field on a mesh in $\RR{m}$.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_modular\_vf}(\emph{bivector, function, mesh})} \label{AlgNumModularVF}
\rule{\textwidth}{0.4pt}
\Input{a Poisson bivector field $\Pi$, a non--zero scalar function $f_{0}$ and a mesh}
\Output{evaluation of the modular vector field of $\Pi$ relative to the volume form $f_{0}\Omega_{0}$ at each point of the mesh, where $\Omega_{0}$ is the Euclidean volume form on $\RR{m}$ \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\algstore{bkbreak}\end{algorithmic}\end{algorithm}\begin{algorithm}[H]\vspace{0.1cm}\begin{algorithmic}[1]\algrestore{bkbreak}
\State \bluecolor{bivector} $\gets$ a variable encoding the Poisson bivector field $\Pi$
\State \bluecolor{function} $\gets$ a (string) expression representing the scalar function $f_{0}$
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ \textsf{modular\_vf}(\bluecolor{bivector}, \bluecolor{function})
\CommentNew{\textsf{modular\_vf}: function of \textsf{PoissonGeometry}}
\State \textsc{Transform} each \bluecolor{variable\_1} item to a function that allows a numerical evaluation
\State \rreturn{an array containing the evaluation of \bluecolor{variable\_1} at each $m$--array of \bluecolor{mesh}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}\label{example:NoModSO3}
The characteristic (symplectic) foliation of the following homogeneous Poisson bivector field on \ec{\RR{3}_{x}} coincides with that of \ec{\Pi_{\mathfrak{so}(3)}} in (\ref{EcPiSO3}):
\begin{equation}\label{EcPiNoModSO3}
\begin{split}
\Pi =&
\tfrac{1}{4}x_3\big(x_{1}^{4} + x_{2}^{4} + x_{3}^{4}\big)\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -
\tfrac{1}{4}x_2\big(x_{1}^{4} + x_{2}^{4} + x_{3}^{4}\big)\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} \\
& + \tfrac{1}{4}x_1\big(x_{1}^{4} + x_{2}^{4} + x_{3}^{4}\big)\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}
\end{split}
\end{equation}
However, \ec{(\RR{3},\Pi_{\mathfrak{so}(3)})} admits a Hamiltonian--invariant volume form, while $\Pi$ does not \cite{Pedroza}. Consequently, the following vector field,
\begin{equation*}
Z_{\Pi} = \big(x_{2}x_{3}^{3} - x_{2}^{3}x_{3}\big)\frac{\partial}{\partial{x_{1}}} +
\big(x_{3}x_{1}^{3} - x_{3}^{3}x_{1}\big)\frac{\partial}{\partial{x_{2}}} +
\big(x_{1}x_{2}^{3} - x_{1}^{3}x_{2}\big)\frac{\partial}{\partial{x_{3}}},
\end{equation*}
which is the modular vector field of $\Pi$ with respect to the Euclidean volume form on \ec{\RR{3}_{x}}, cannot be a Hamiltonian vector field.
We can now use random meshes to check numerically that there are points for which the modular vector field of $\Pi$ with respect to the Euclidean volume form is not zero, which implies that $\Pi$ is not unimodular, as follows:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3 = NumPoissonGeometry(3)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \newline
$>>>$ \textsf{P = \{\parbox[t]{0.85\linewidth}{(1, 2): `1/4*x3*(x1**4 + x2**4 + x3**4)', (1, 3): `- 1/4*x2* (x1**4 + x2**4 + x3**4)', (2, 3): `1/4*x1*(x1**4 + x2**4 + x3**4)'\}}} \hspace*{\fill} \CommentCode{dictionary for $\Pi$ in (\ref{EcPiNoModSO3}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{mesh = numpy.random.rand(10**6, 3)} \\
\hspace*{\fill} \CommentCode{\# $(10^{6}, 3)$ \textsf{numpy} array with random samples from a uniform distribution over [0,1)} \\
$>>>$ \textsf{npg3.num\_modular\_vf(P, 1, mesh, pt\_output=True)}
\hspace*{\fill} \CommentCode{run \textsf{num\_modular\_vf} function with \textsf{pt\_output} flag}
\tcblower
\textsf{tensor([\,\parbox[t]{\linewidth}{[[ 0.0538], [ 0.0545], [-0.1005]], \ [[ 0.0031], [\phantom{-}0.3838], [-0.0149]], \newline
[[-0.2559], [-0.0204], [ 0.0575]], \ [[-0.0910], [\phantom{-}0.0093], [ 0.1988]], \newline
$\ldots$, \newline
[[-0.0013], [-0.0095], [ 0.0107]], \ [[-0.0467], [-0.0012], [0.0296]]\,], \newline
dtype=torch.float64)}}
\end{tcolorbox}
\end{example}
Observe that \ec{Z_{\Pi}} is orthogonal to the radial vector field \ec{x_{1}\partial/\partial{x_{1}} + x_{2}\partial/\partial{x_{2}} + x_{3}\partial/\partial{x_{3}}} on \ec{\RR{3}_{x}}. As the characteristic foliation of $\Pi$ consists of concentric spheres (see Figure \ref{fig:SO3-NoUnimod}), \ec{Z_{\Pi}} is tangent to the symplectic leaves of $\Pi$.
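The computation in Example \ref{example:NoModSO3} can be cross--checked symbolically: relative to the Euclidean volume form, the modular vector field has components \ec{Z^{j} = \partial{\Pi^{ij}}/\partial{x^{i}}} (summing over $i$, in the sign convention that reproduces the stated \ec{Z_{\Pi}}). A \textsf{sympy} sketch, independent of the library:

```python
import sympy as sp

# Symbolic cross-check of Example (EcPiNoModSO3); an illustrative sketch,
# not the library implementation.
x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
s = x1**4 + x2**4 + x3**4

# Matrix [Pi^{ij}] of the homogeneous bivector field Pi above.
Pi = sp.Matrix([[0,        x3*s/4, -x2*s/4],
                [-x3*s/4,  0,       x1*s/4],
                [ x2*s/4, -x1*s/4,  0]])

# Components Z^j = d_i Pi^{ij} of the modular vector field relative to the
# Euclidean volume form (sign convention matching the example above).
Z = [sum(sp.diff(Pi[i, j], X[i]) for i in range(3)) for j in range(3)]

# Agreement with the closed-form Z_Pi stated in the text:
expected = [x2*x3**3 - x2**3*x3, x3*x1**3 - x3**3*x1, x1*x2**3 - x1**3*x2]
assert all(sp.simplify(Zj - Ej) == 0 for Zj, Ej in zip(Z, expected))
```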
\subsection{Curl Operator} \label{subsec:curl}
On an oriented manifold $M$ with volume form $\Omega$, the \emph{divergence} of an $a$--multivector field $A$ on $M$, relative to $\Omega$ \cite{Kozul}, is the unique $(a-1)$--multivector field \ec{\mathscr{D}_{\Omega}(A)} on $M$ such that
\begin{equation}\label{EcTrazaDef}
\mathbf{i}_{\mathscr{D}_{\Omega}(A)}\Omega = \mathrm{d}{\mathbf{i}_{A}}\Omega.
\end{equation}
The function \textsf{num\_curl\_operator} evaluates the divergence of a multivector field on a mesh in $\RR{m}$. Let \ec{\Omega_{0}} denote the standard volume form on \ec{\RR{m}}.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_curl\_operator}(\emph{multivector, function, mesh})} \label{AlgNumCurl}
\rule{\textwidth}{0.4pt}
\Input{a multivector field $A$, a non--zero scalar function $f_{0}$ and a mesh}
\Output{evaluation of the divergence of $A$ with respect to the volume form $f_{0}\Omega_{0}$ at each point of the mesh \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{multivector} $\gets$ a variable encoding the multivector field $A$, or a (string) expression if \ec{\operatorname{degree}{A} = 0}
\State \bluecolor{function} $\gets$ a (string) expression representing the scalar function $f_{0}$
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ \textsf{curl\_operator}(\bluecolor{multivector}, \bluecolor{function})
\Statex \CommentNew{\textsf{curl\_operator}: function of \textsf{PoissonGeometry}}
\State \textsc{Transform} each \bluecolor{variable\_1} item to a function that allows a numerical evaluation
\State \rreturn{an array containing the evaluation of \bluecolor{variable\_1} at each $m$--array of \bluecolor{mesh}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
The following Poisson bivector field on \ec{\RR{4}_{x}} has been applied to analyze the orbital stability of the Pais--Uhlenbeck oscillator \cite{MVallYu}:
\begin{equation}\label{EcPiPaisU}
\begin{split}
\Pi =& 2x_{4}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3}
+ 2x_{3} \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_4}
- 2x_{4} \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}
+ 2x_{3} \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_4} \\
& + (x_{1}-x_{2}) \frac{\partial}{\partial x_3}\wedge \frac{\partial}{\partial x_4}.
\end{split}
\end{equation}
It is unimodular on (the whole of) \ec{\RR{4}_{x}}, and its modular vector field with respect to the Euclidean volume form is trivial. For a fixed volume form, the divergence of a Poisson bivector field coincides with its modular vector field. Hence, the unimodularity of $\Pi$ (\ref{EcPiPaisU}) can be verified numerically, at least on a sample, using random meshes. In this example the output is a PyTorch tensor:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg4 = NumPoissonGeometry(4)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P = \big\{\parbox[t]{0.9\linewidth}{(1, 3): `2*x4', (1, 4): `2*x3', (2, 3): `-2*x4', (2, 4): `2*x3', \newline (3, 4): `x1 - x2'\}}} \hspace*{\fill} \CommentCode{dictionary for $\Pi$ in (\ref{EcPiPaisU}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{mesh = numpy.random.rand(10**6, 4)} \\
\hspace*{\fill} \CommentCode{\# $(10^{6}, 4)$ \textsf{numpy} array with random samples from a uniform distribution over [0, 1)} \\
$>>>$ \textsf{npg4.num\_curl\_operator(P, 1, mesh, pt\_output=True)} \newline
\hspace*{\fill} \CommentCode{run \textsf{num\_curl\_operator} function with \textsf{pt\_output} flag}
\tcblower
\textsf{tensor([\,\parbox[t]{\linewidth}{[[0.], [0.], [0.], [0.]], \ [[0.], [0.], [0.], [0.]], \ [[0.], [0.], [0.], [0.]], \ $\ldots$, \newline
[[0.], [0.], [0.], [0.]], \ [[0.], [0.], [0.], [0.]], \ [[0.], [0.], [0.], [0.]]\,], \newline
dtype=torch.float64)}}
\end{tcolorbox}
\end{example}
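Since, as noted above, the divergence of a bivector field relative to the Euclidean volume form agrees with its modular vector field, the unimodularity of $\Pi$ in (\ref{EcPiPaisU}) can also be confirmed symbolically. A \textsf{sympy} sketch (an illustration, not the library implementation), using the componentwise divergence \ec{\mathscr{D}_{\Omega_{0}}(\Pi)^{j} = \partial{\Pi^{ij}}/\partial{x^{i}}} in the same sign convention as before:

```python
import sympy as sp

# Illustrative sketch (not the library code) verifying that the
# Pais-Uhlenbeck bivector field (EcPiPaisU) has zero divergence.
x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
X = (x1, x2, x3, x4)

# Matrix [Pi^{ij}] built from the nonzero coefficients Pi^{ij}, i < j.
Pi = sp.zeros(4, 4)
for (i, j), entry in {(0, 2): 2*x4, (0, 3): 2*x3, (1, 2): -2*x4,
                      (1, 3): 2*x3, (2, 3): x1 - x2}.items():
    Pi[i, j], Pi[j, i] = entry, -entry

# Divergence components D^j = d_i Pi^{ij} relative to the Euclidean volume
# form; they all vanish identically, so Pi is unimodular on R^4.
div = [sum(sp.diff(Pi[i, j], X[i]) for i in range(4)) for j in range(4)]
assert all(sp.simplify(c) == 0 for c in div)
```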
\subsection{Differential 1--Forms Bracket} \label{subsec:onefbracket}
In the context of Lie algebroids, the Koszul bracket of 1--forms is the Lie bracket on the space of sections of a Poisson Lie algebroid \cite{Kozul, Grabowski}.
By definition and (\ref{EcPiSharp}), the Lie bracket of two differential $1$--forms $\alpha$ and $\beta$, induced by a Poisson bivector field $\Pi$ on $M$, at a point in $M$ can be determined by the following (coordinate) formula:
\begin{equation*}
\begin{split}
\{\alpha,\beta\}_{\Pi}(p) \simeq& \big[(\mathrm{d}{\beta})_{kl}(p)\big]\left[\Pi^{ij}(p)\right]\big[\alpha_{r}(p)\big] - \big[(\mathrm{d}{\alpha})_{kl}(p)\big]\left[\Pi^{ij}(p)\right]\big[\beta_{r}(p)\big] \\
& - \big[\tfrac{\partial}{\partial x^{r}}\big(\left[\beta_{k}\right]^{\top}\left[\Pi^{ij}\right]\left[\alpha_{l}\right]\big)(p)\big], \qquad\qquad p \in M; \quad k,l,r = 1, \ldots, m
\end{split}
\end{equation*}
Here, \ec{[\Pi^{ij}]} is the matrix of $\Pi$ (\ref{EcPiMatrix}), \ec{[(\mathrm{d}{\alpha})_{kl}]} and \ec{[(\mathrm{d}{\beta})_{kl}]} are the matrix of the differential 2--forms \ec{\mathrm{d}{\alpha} = 1/2(\mathrm{d}{\alpha})_{kl}\mathrm{d}{x^{k}} \wedge \mathrm{d}{x^{l}}} and \ec{\mathrm{d}{\beta} = 1/2(\mathrm{d}{\beta})_{kl}\mathrm{d}{x^{k}} \wedge \mathrm{d}{x^{l}}}, \ec{[\alpha_{r}]} and \ec{[\beta_{r}]} are the coefficient vectors of \ec{\alpha = \alpha_{k}\mathrm{d}{x^{k}}} and \ec{\beta = \beta_{k}\mathrm{d}{x^{k}}}, and \ec{[{\partial}/{\partial x^{r}}(\cdot)]} is the gradient vector (field) operator.
The function \textsf{num\_one\_forms\_bracket} evaluates the differential 1--form \ec{\{\alpha,\beta\}_{\Pi}} on a mesh in \ec{\RR{m}}.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_one\_forms\_bracket}(\emph{bivector, one\_form\_1, one\_form\_2, mesh})} \label{AlgNumOneFormsB}
\rule{\textwidth}{0.4pt}
\Input{a Poisson bivector field $\Pi$, two differential 1--forms $\alpha, \beta$ and a mesh}
\Output{evaluation of the Lie bracket of $\alpha$ and $\beta$ induced by $\Pi$ at each point of the mesh \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the Poisson bivector field $\Pi$
\State \bluecolor{one\_form\_1} $\gets$ a variable encoding the differential 1--form \ec{\alpha = \alpha_{1}\mathrm{d}{x^{1}} + \cdots + \alpha_{m}\mathrm{d}{x^{m}}}
\State \bluecolor{one\_form\_2} $\gets$ a variable encoding the differential 1--form \ec{\beta = \beta_{1}\mathrm{d}{x^{1}} + \cdots + \beta_{m}\mathrm{d}{x^{m}}}
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ a container with items $(\alpha_{1},\ldots,\alpha_{m})$
\State \bluecolor{variable\_2} $\gets$ a container with items $(\beta_{1},\ldots,\beta_{m})$
\For {$i \in \{1,2\}$}
\State \textsc{Convert} each item in \bluecolor{variable\_}$i$ to a symbolic variable
\State \bluecolor{variable\_3\_}$i$ $\gets$ the (symbolic) Jacobian matrix of \bluecolor{variable\_}$i$
\Statex \CommentNew{\bluecolor{variable\_}$i$ thought as a vector field}
\State \textsc{Transform} each \bluecolor{variable\_3\_i} item to a function that allows a numerical evaluation
\State \bluecolor{variable\_4\_}$i$ $\gets$ an array containing the evaluation of \bluecolor{variable\_3\_}$i$ at each $m$--array of \bluecolor{mesh}
\algstore{bkbreak}\end{algorithmic}\end{algorithm}\begin{algorithm}[H]\vspace{0.1cm}\begin{algorithmic}[1]\algrestore{bkbreak}
\State \bluecolor{variable\_5\_}$j$ $\gets$ \textsf{num\_sharp\_morphism}(\bluecolor{bivector}, \bluecolor{one\_form\_}$j$, \bluecolor{mesh})
\Statex \CommentNew{\ec{i \neq j = 1,2}. See Algorithm \ref{AlgNumSharp}}
\EndFor
\State \bluecolor{variable\_6}, \bluecolor{variable\_7} $\gets$ empty containers
\For {$0 \leq i < k$}
\State \bluecolor{variable\_6}[$i$] $\gets$ \big(\bluecolor{variable\_4\_1} $-$ \textsf{transpose}(\bluecolor{variable\_4\_1})\big)[$i$] $*$ \bluecolor{variable\_5\_2}[$i$]
\State \bluecolor{variable\_7}[$i$] $\gets$ \big(\bluecolor{variable\_4\_2} $-$ \textsf{transpose}(\bluecolor{variable\_4\_2})\big)[$i$] $*$ \bluecolor{variable\_5\_1}[$i$]
\Statex \CommentNew{matrix--vector products}
\EndFor
\State \bluecolor{variable\_8} $\gets$ \textsf{sharp\_morphism}(\bluecolor{bivector}, \bluecolor{one\_form\_1})
\Statex \CommentNew{\textsf{sharp\_morphism}: function of \textsf{PoissonGeometry}}
\State \bluecolor{variable\_9} $\gets$ a container with all the \emph{values} available in the dictionary \bluecolor{variable\_8}
\State \bluecolor{variable\_10} $\gets$ \bluecolor{variable\_9} $*$ \bluecolor{variable\_2}
\CommentNew{scalar vector product}
\State \bluecolor{variable\_11} $\gets$ the (symbolic) gradient vector field of \bluecolor{variable\_10}
\State \textsc{Transform} each \bluecolor{variable\_11} item to a function that allows a numerical evaluation
\State \bluecolor{variable\_12} $\gets$ an array containing the evaluation of \bluecolor{variable\_11} at each $m$--array of \bluecolor{mesh}
\State \bluecolor{variable\_13} $\gets$ an empty container
\For {$0 \leq i < k$}
\State \bluecolor{variable\_13}[$i$] $\gets$ \bluecolor{variable\_7}[$i$] $-$ \bluecolor{variable\_6}[$i$] $+$ \bluecolor{variable\_12}[$i$]
\CommentNew{vector sum}
\EndFor
\State \rreturn{\bluecolor{variable\_13}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
By definition, the Lie bracket of the (basic) differential 1--forms \ec{\mathrm{d}{x_{5}}} and \ec{\mathrm{d}{x_{6}}}, induced by the Poisson bivector field $\Pi$ in (\ref{EcTwist}) \cite{Plasma} (see also \cite{SeveraTwist}), is given by
\begin{equation*}
\{\mathrm{d}{x_{5}}, \mathrm{d}{x_{6}}\}_{\Pi} = 2x_{2}\mathrm{d}{x_{2}}.
\end{equation*}
Hence, \ec{\{\mathrm{d}{x_{5}}, \mathrm{d}{x_{6}}\}_{\Pi} = 2\mathrm{d}{x_{2}}} at points \ec{x \in \RR{6}} such that \ec{x_{2}=1}. This coincides with the following computation involving random \textsf{mesh}es,
\begin{equation*}
\{a_{1}, b_{1}\} \times \{1\} \times \{a_{2}, b_{2}\} \times \cdots \times \{a_{5}, b_{5}\}, \qquad a_{i},b_{i} \in \operatorname{random}{[0,1]}:
\end{equation*}
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg6 = NumPoissonGeometry(6)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P = \{(1, 4): 1, (2, 5): 1, (3, 6): 1, (5, 6): `x2**2'\}}
\hspace*{\fill} \CommentCode{dictionary for $\Pi$ in (\ref{EcTwist}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{alpha = \{(5,): 1\}}
\hspace*{\fill} \CommentCode{dictionary for $\mathrm{d}{x_{5}}$ according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{beta = \{(6,): 1\}}
\hspace*{\fill} \CommentCode{dictionary for $\mathrm{d}{x_{6}}$ according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{npg6.num\_one\_forms\_bracket(P, alpha, beta, mesh, pt\_output=True)} \newline
\hspace*{\fill} \CommentCode{run \textsf{num\_one\_forms\_bracket} function with \textsf{pt\_output} flag}
\tcblower
\textsf{tensor([\,\parbox[t]{\linewidth}{[[0.], [2.], [0.], [0.], [0.], [0.]], \ [[0.], [2.], [0.], [0.], [0.], [0.]], \newline
$\ldots$, \newline
[[0.], [2.], [0.], [0.], [0.], [0.]], \ [[0.], [2.], [0.], [0.], [0.], [0.]], \newline
[[0.], [2.], [0.], [0.], [0.], [0.]], \ [[0.], [2.], [0.], [0.], [0.], [0.]]\,], \newline
dtype=torch.float64)}}
\end{tcolorbox}
\end{example}
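The bracket computed in this example can also be reproduced symbolically by implementing the coordinate formula above with \textsf{sympy}. The sketch below is an illustration, not the library code; for exact forms the bracket satisfies \ec{\{\mathrm{d}{f},\mathrm{d}{g}\}_{\Pi} = \mathrm{d}{\{f,g\}_{\Pi}}}, and for constant--coefficient forms only the gradient term of the formula contributes:

```python
import sympy as sp

# Illustrative sketch (not the library code) of the coordinate formula for
# {alpha, beta}_Pi, recovering {dx5, dx6}_Pi = 2 x2 dx2 for Pi in (EcTwist).
x = sp.symbols('x1:7')  # x1, ..., x6

# Matrix [Pi^{ij}]: Pi^{14} = Pi^{25} = Pi^{36} = 1, Pi^{56} = x2^2.
Pi = sp.zeros(6, 6)
for (i, j), entry in {(0, 3): 1, (1, 4): 1, (2, 5): 1, (4, 5): x[1]**2}.items():
    Pi[i, j], Pi[j, i] = entry, -entry

def d_matrix(alpha):
    """Matrix [(d alpha)_{kl}] of the exterior derivative of alpha = alpha_k dx^k."""
    return sp.Matrix(6, 6, lambda k, l: sp.diff(alpha[l], x[k]) - sp.diff(alpha[k], x[l]))

def one_forms_bracket(alpha, beta):
    """Coefficient vector of {alpha, beta}_Pi via the coordinate formula above."""
    a, b = sp.Matrix(alpha), sp.Matrix(beta)
    pairing = (b.T * Pi * a)[0, 0]
    grad = sp.Matrix([sp.diff(pairing, xr) for xr in x])
    return d_matrix(beta) * Pi * a - d_matrix(alpha) * Pi * b - grad

# alpha = dx5 and beta = dx6 have constant coefficients, so only the
# gradient term contributes: {dx5, dx6}_Pi = d{x5, x6}_Pi = d(x2^2).
bracket = one_forms_bracket([0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 1])
assert list(bracket) == [0, 2*x[1], 0, 0, 0, 0]
```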
\subsection{Gauge Transformations} \label{subsec:gauge}
Gauge transformations are used to simplify dynamical equations, and they aid in reduction methods for dynamical systems. They are also used for the Hamiltonization of nonholonomic systems \cite{GaugeNaranjo}. Recall that the {\em Hamiltonization problem} consists in determining conditions under which a dynamical system can be represented in Hamiltonian form.
Given a differential 2--form $\lambda$ on $M$, if the morphism \ec{\mathrm{id}_{T^{\ast}{M}} - \lambda^{\flat} \circ \Pi^{\natural}} is invertible, the $\lambda$--gauge transformation of a bivector field $\Pi$ on $M$ is the bivector field $\overline{\Pi}$ determined by the vector bundle morphism \ec{{\overline{\Pi}}^{\natural} = \Pi^{\natural} \circ (\mathrm{id}_{T^{\ast}{M}} - \lambda^{\flat} \circ \Pi^{\natural})^{-1}} \cite{SeveraTwist}. This morphism can be evaluated at a point in $M$ as follows:
\begin{equation*}
\overline{\Pi}^{\natural}_{p} \simeq -\big[\Pi^{ij}(p)\big]
\left(\mathrm{Id} - \big[\lambda_{kl}(p)\big]\big[\Pi^{ij}(p)\big]\right)^{-1}, \qquad p \in M; \quad k,l = 1,\ldots,m
\end{equation*}
Here, \ec{[\Pi^{ij}]} is the matrix of $\Pi$ (\ref{EcPiMatrix}), \ec{[\lambda_{kl}]} is the matrix of \ec{\lambda = 1/2\lambda_{kl}\mathrm{d}{x^{k}} \wedge \mathrm{d}{x^{l}}} and $\mathrm{Id}$ denotes the \ec{m \times m} identity matrix. The morphism \ec{\lambda^{\flat}:T{M} \rightarrow T^{\ast}{M}} above is given by \ec{X \mapsto \mathbf{i}_{X}\lambda}.
The function \textsf{num\_gauge\_transformation} evaluates the gauge transformation of a bivector field on a mesh in \ec{\RR{m}}.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_gauge\_transformation}(\emph{bivector, two\_form, mesh})} \label{AlgNumGauge}
\rule{\textwidth}{0.4pt}
\Input{a (Poisson) bivector field $\Pi$, a differential 2--form $\lambda$ and a mesh}
\Output{evaluation of the gauge transformation of $\Pi$ induced by $\lambda$ at each point of the mesh \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{bivector} $\gets$ a variable encoding the (Poisson) bivector field $\Pi$
\State \bluecolor{two\_form} $\gets$ a variable encoding the differential 2--form $\lambda$
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ \textsf{num\_bivector\_to\_matrix}(\bluecolor{bivector}, \bluecolor{mesh})
\CommentNew{see Algorithm \ref{AlgNumMatrixBivector}}
\State \bluecolor{variable\_2} $\gets$ \textsf{num\_bivector\_to\_matrix}(\bluecolor{two\_form}, \bluecolor{mesh})
\CommentNew{see Algorithm \ref{AlgNumMatrixBivector}}
\State \bluecolor{variable\_3} $\gets$ the identity \ec{m \times m} matrix
\State \bluecolor{variable\_4} $\gets$ an empty container
\For {$0 \leq i < k$}
\State \bluecolor{variable\_4}[$i$] $\gets$ \bluecolor{variable\_3} $-$ \bluecolor{variable\_2}[$i$] $*$ \bluecolor{variable\_1}[$i$]
\Statex \CommentNew{matrix sum/multiplication}
\EndFor
\State \bluecolor{variable\_5} $\gets$ an empty container
\algstore{bkbreak}\end{algorithmic}\end{algorithm}\begin{algorithm}[H]\vspace{0.1cm}\begin{algorithmic}[1]\algrestore{bkbreak}
\For{$0 \leq i < k$}
\If{\textsf{determinant}(\bluecolor{variable\_4}[$i$]) $\neq 0$}
\State \bluecolor{variable\_5}[$i$] $\gets$ \bluecolor{variable\_1}[$i$] $*$ \textsf{inverse}(\bluecolor{variable\_4}[$i$])
\CommentNew{matrix multiplication}
\Else
\State \bluecolor{variable\_5}[$i$] $\gets$ \textsf{False}
\EndIf
\EndFor
\State \rreturn{\bluecolor{variable\_5}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
For this example we will use the following result that we proved in \cite{CompuPoisson}:
\begin{proposition}\label{Prop:gauge}
Let $\Pi$ be a bivector field on a 3--dimensional smooth manifold $M$. Then, given a differential 2--form $\lambda$ on $M$, the $\lambda$--gauge transformation $\overline{\Pi}$ of $\Pi$ is well defined on the open subset,
\begin{equation}\label{EcFdetGauge}
\big\{F := 1 + \big\langle \lambda,\Pi \big\rangle \neq 0 \big\} \subseteq M.
\end{equation}
Moreover, $\overline{\Pi}$ is given by
\begin{equation*}\label{EcPiGauge3}
\overline{\Pi} = \tfrac{1}{F}\Pi.
\end{equation*}
In consequence, if $\Pi$ is Poisson, then $\overline{\Pi}$ is also Poisson.
\end{proposition}
Observe that if \ec{\langle \lambda,\Pi \rangle = 0} in (\ref{EcFdetGauge}), then $\Pi$ remains unchanged under the gauge transformation induced by $\lambda$. This holds for the Poisson bivector field \ec{\Pi_{\mathfrak{so}(3)}} in (\ref{EcPiSO3}) \cite{Newton} and the differential 2--form on $\RR{3}$ given by
\begin{equation}\label{EcLambGague}
\lambda = (x_1 - x_2)\mathrm{d}{x_1}\wedge \mathrm{d}{x_2} + (x_1 - x_3)\mathrm{d}{x_1}\wedge \mathrm{d}{x_3} + (x_2 - x_3)\mathrm{d}{x_2}\wedge \mathrm{d}{x_3}.
\end{equation}
Then, with \ec{Q^{3}} as in (\ref{EcCorners}), we can check the invariance of \ec{\Pi_{\mathfrak{so}(3)}} under the $\lambda$--gauge transformation as follows:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3 = NumPoissonGeometry(3)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P\_so3 = \big\{(1, 2): `x3', (1, 3): `-x2', (2, 3): `x1'\big\}}
\hspace*{\fill} \CommentCode{dictionary for $\Pi_{\mathfrak{so}(3)}$ in (\ref{EcPiSO3}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{lambda = \big\{(1, 2): `x1 - x2', (1, 3): `x1 - x3', (2, 3): `x2 - x3'\big\}} \newline
\hspace*{\fill} \CommentCode{dictionary for $\lambda$ in (\ref{EcLambGague}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{npg3.num\_gauge\_transformation(P\_so3, lambda, Qmesh, pt\_output=True)}
\hspace*{\fill} \CommentCode{run \textsf{num\_gauge\_transformation} function with \textsf{pt\_output} flag}
\tcblower
\resizebox{\textwidth}{!}{
\textsf{tensor([\,\parbox[t]{\linewidth}{[[0., 0., \phantom{-}0.], [0., 0., 0.], [0., \phantom{-}0., 0.]], \
[[0., 1., \phantom{-}0.], [-1., 0., 0.], [0., \phantom{-}0., 0.]], \newline
[[0., 0., -1.], [0., 0., 0.], [1., \phantom{-}0., 0.]], \
[[0., 1., -1.], [-1., 0., 0.], [1., \phantom{-}0., 0.]], \newline
[[0., 0., \phantom{-}0.], [0., 0., 1.], [0., -1., 0.]], \
[[0., 1., \phantom{-}0.], [-1., 0., 1.], [0., -1., 0.]], \newline
[[0., 0., -1.], [0., 0., 1.], [1., -1., 0.]], \
[[0., 1., -1.], [-1., 0., 1.], [1., -1., 0.]]\,], \newline
dtype=torch.float64)}}
}
\end{tcolorbox}
Notice that the output is the same as the second one in Example \ref{example:so3}, which encodes the evaluation of the bivector field $\Pi_{\mathfrak{so}(3)}$ at points of \ec{Q^{3}}.
\end{example}
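For reference, the mesh \textsf{Qmesh} used in these examples stores the points of \ec{Q^{m}}; assuming \ec{Q^{m}} in (\ref{EcCorners}) denotes the \ec{2^{m}} corners of the unit cube \ec{[0,1]^{m}} (consistent with the eight points evaluated above), such a mesh can be generated as follows:

```python
from itertools import product

def corners_mesh(m):
    """The 2**m corners of the unit cube [0, 1]^m, in lexicographic order
    (assumed here to be the mesh Q^m of the examples)."""
    return [list(pt) for pt in product((0.0, 1.0), repeat=m)]

Qmesh = corners_mesh(3)
print(len(Qmesh))  # 8 points for Q^3
print(Qmesh[1])    # [0.0, 0.0, 1.0]
```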
\subsection{Lie--Poisson Normal Forms on $\RR{3}$} \label{subsec:normal}
Two Poisson bivector fields $\Pi$ and $\widetilde{\Pi}$ on $M$ are said to be equivalent (or isomorphic) if there exists a diffeomorphism \ec{F:M \rightarrow M} such that \ec{\widetilde{\Pi} = F^{\ast}\Pi}. Under this equivalence relation there exist 9 non--trivial normal forms of Lie--Poisson bivector fields on \ec{\RR{3}} \cite{LiuXU-92, Sheng}.
The function \textsf{num\_linear\_normal\_form\_R3} evaluates a normal form of a given Lie--Poisson bivector field on a mesh in \ec{\RR{3}}.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_linear\_normal\_form\_R3}(\emph{linear\_bivector, mesh})} \label{AlgNumNormal}
\rule{\textwidth}{0.4pt}
\Input{a Lie--Poisson bivector field $\Pi$ on $\RR{3}$ and a mesh}
\Output{evaluation of a normal form of $\Pi$ at each point of the mesh}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{linear\_bivector} $\gets$ a variable that encodes the Lie--Poisson bivector field $\Pi$
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ \textsf{linear\_normal\_form\_R3}(\bluecolor{linear\_bivector})
\Statex \CommentNew{\textsf{linear\_normal\_form\_R3}: function of \textsf{PoissonGeometry}}
\State \rreturn{\textsf{num\_bivector}(\bluecolor{variable\_1}, \bluecolor{mesh})}
\CommentNew{see Algorithm \ref{AlgNumBivector}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}
Using the function \textsf{linear\_normal\_form\_R3} of \textsf{PoissonGeometry} we can verify that the Lie--Poisson bivector field on \ec{\RR{3}_{x}}
\begin{equation}\label{EcPiNormal}
\Pi =
2(x_{2} + x_{3})\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} +
(x_{1} - x_{2})\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} +
(x_{1} + x_{2} + 2x_{3})\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},
\end{equation}
admits the following Poisson bivector field as a normal form:
\begin{equation}\label{EcPiNormal2}
\Pi_{N} =
(x_{1} - 4ax_{2})\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} +
(4ax_{1} + x_{2})\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3}, \qquad a>0.
\end{equation}
To evaluate this normal form of $\Pi$ at points of \ec{Q^{3}} (\ref{EcCorners}) we compute:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg3 = NumPoissonGeometry(3)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{P = \big\{(1, 2): `2*(x2 + x3)', (1, 3): `x1 - x2', (2, 3): `x1 + x2 +2*x3'\big\}} \newline
\hspace*{\fill} \CommentCode{dictionary for $\Pi$ in (\ref{EcPiNormal}) according to (\ref{EcMultivectorDic})} \\
$>>>$ \textsf{npg3.num\_linear\_normal\_form\_R3(P, Qmesh)}
\hspace*{\fill} \CommentCode{run \textsf{num\_linear\_normal\_form} function}
\tcblower
\textsf{[\parbox[t]{\linewidth}{\{(1,3): \phantom{-}0.0, (2,3): 0.0\}, \ \{(1,3): \phantom{-}0.0, (2,3): 0.0\}, \
\{(1,3): -4.0*a, (2,3): 1.0\}, \newline
\{(1,3): -4.0*a, (2,3): 1.0\}, \{(1,3): 1.0, (2,3): 4.0*a\}, \{(1,3): 1.0, (2,3): 4.0*a\}, \newline
\{(1,3): 1.0-4.0*a, (2,3): 4.0*a+1.0\}, \ \{(1,3): 1.0-4.0*a, (2,3): 4.0*a+1.0\}]}}
\end{tcolorbox}
\end{example}
The equivalence between $\Pi$ (\ref{EcPiNormal}) and \ec{\Pi_{N}} (\ref{EcPiNormal2}) implies that the characteristic foliation of $\Pi$ is an open book foliation \cite{Obook}, as shown in Figure \ref{fig:OpenBook} below. In particular, $\Pi$ does not admit global non--constant Casimir functions.
\begin{figure}[H]
\centering
\includegraphics[width=0.30\textwidth]{FOBook.png}
\caption{Symplectic (open book) foliation of $\Pi$ (\ref{EcPiNormal}).} \label{fig:OpenBook}
\end{figure}
\subsection{Flaschka--Ratiu Poisson Bivector Field} \label{subsec:fratiu}
If $M$ is an oriented manifold with volume form $\Omega$, the Poisson bivector field $\Pi$ determined by \ec{m-2} prescribed Casimir functions \ec{K_{1},\ldots,K_{m-2}} on $M$, and defined by the formula
\begin{equation*}
\mathbf{i}_{\Pi}\Omega = \mathrm{d}{K_{1}} \wedge \cdots \wedge \mathrm{d}{K_{m-2}},
\end{equation*}
is called a Flaschka--Ratiu bivector field \cite{GrabowskiFR, Damianou}. Observe that $\Pi$ is non--trivial on the open subset of $M$ where \ec{K_{1},\ldots,K_{m-2}} are (functionally) independent.
The function \textsf{num\_flaschka\_ratiu\_bivector} evaluates a Flaschka--Ratiu bivector field on a mesh in \ec{\RR{m}}.
\begin{algorithm}[H]
\captionsetup{justification=centering}
\caption{\ \textsf{num\_flaschka\_ratiu\_bivector}(\emph{casimirs\_list, mesh})} \label{AlgNumFRatiu}
\rule{\textwidth}{0.4pt}
\Input{a set of scalar functions and a mesh}
\Output{evaluation of the Flaschka--Ratiu bivector field induced by these functions at each point of the mesh \vspace{0.25cm}}
\rule{\textwidth}{0.4pt}
\begin{algorithmic}[1]
\Procedure{}{}
\State $m$ $\gets$ dimension of the manifold
\State \bluecolor{casimirs\_list} $\gets$ a container with $m-2$ (string) expressions representing the set of scalar functions
\CommentNew{each string expression represents a scalar function}
\State \bluecolor{mesh} $\gets$ a $(k,m)$ array encoding the mesh
\CommentNew{$k$: number of points in the mesh}
\State \bluecolor{variable\_1} $\gets$ \textsf{flaschka\_ratiu\_bivector}(\bluecolor{casimirs\_list})
\Statex \CommentNew{\textsf{flaschka\_ratiu\_bivector}: function of \textsf{PoissonGeometry}}
\State \rreturn{\textsf{num\_bivector}(\bluecolor{variable\_1}, \bluecolor{mesh})}
\CommentNew{see Algorithm \ref{AlgNumBivector}}
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{example}\label{example:so3FR}
Consider the following (Poisson) Flaschka--Ratiu bivector field on \ec{\RR{4}_{x}}
\begin{equation*}
\Pi =
x_{3}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -
x_{2}\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} -
x_{1}\frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},
\end{equation*}
that appears as a local model around singularities of a broken Lefschetz fibration on smooth 4--manifolds \cite{PabLef} (see also \cite{PabloWrinFib, PabBott}). It is induced by the functions
\begin{equation}\label{EcK1K2}
K_{1} = \tfrac{1}{2}x_{4}, \quad K_{2} = - x_{1}^{2} + x_{2}^{2} + x_{3}^{2}.
\end{equation}
To evaluate $\Pi$ at points of \ec{Q^{4}} (\ref{EcCorners}) we compute:
\begin{tcolorbox}[arc=0mm, boxsep=0mm, skin=bicolor, colback=pink!15, colframe=blue!20, colbacklower=blue!0, breakable, halign=left]
$>>>$ \textsf{npg4 = NumPoissonGeometry(4)}
\hspace*{\fill} \CommentCode{\textsf{NumPoissonGeometry} instance} \\
$>>>$ \textsf{functions = [`1/2*x4', `-x1**2 + x2**2 + x3**2']} \newline
\hspace*{\fill} \CommentCode{list containing string expressions for $K_{1}$ and $K_{2}$ in (\ref{EcK1K2}), in that order} \\
$>>>$ \textsf{npg4.num\_flaschka\_ratiu\_bivector(functions, Qmesh)}
\hspace*{\fill} \CommentCode{run \textsf{num\_flaschka\_ratiu\_bivector} function}
\tcblower
\resizebox{\textwidth}{!}{
\textsf{[\parbox[t]{\linewidth}{\{(1, 2): 0.0, (1, 3): \phantom{-}0.0, (2, 3): \phantom{-}0.0\}, \
\{(1, 2): 0.0, (1, 3): \phantom{-}0.0, (2, 3): \phantom{-}0.0\}, \newline
\{(1, 2): 1.0, (1, 3): \phantom{-}0.0, (2, 3): \phantom{-}0.0\}, \
\{(1, 2): 1.0, (1, 3): \phantom{-}0.0, (2, 3): \phantom{-}0.0\}, \newline
\{(1, 2): 0.0, (1, 3): -1.0, (2, 3): \phantom{-}0.0\}, \
\{(1, 2): 0.0, (1, 3): -1.0, (2, 3): \phantom{-}0.0\}, \newline
\{(1, 2): 1.0, (1, 3): -1.0, (2, 3): \phantom{-}0.0\}, \
\{(1, 2): 1.0, (1, 3): -1.0, (2, 3): \phantom{-}0.0\}, \newline
\{(1, 2): 0.0, (1, 3): \phantom{-}0.0, (2, 3): -1.0\}, \
\{(1, 2): 0.0, (1, 3): \phantom{-}0.0, (2, 3): -1.0\}, \newline
\{(1, 2): 1.0, (1, 3): \phantom{-}0.0, (2, 3): -1.0\}, \
\{(1, 2): 1.0, (1, 3): \phantom{-}0.0, (2, 3): -1.0\}, \newline
\{(1, 2): 0.0, (1, 3): -1.0, (2, 3): -1.0\}, \
\{(1, 2): 0.0, (1, 3): -1.0, (2, 3): -1.0\}, \newline
\{(1, 2): 1.0, (1, 3): -1.0, (2, 3): -1.0\}, \
\{(1, 2): 1.0, (1, 3): -1.0, (2, 3): -1.0\}]}}
}
\end{tcolorbox}
\end{example}
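The defining contraction can also be sketched by hand for this example. Assuming the identification \ec{\Pi^{ij} = -\sum_{k,l}\epsilon_{ijkl}\,\partial_{k}K_{1}\,\partial_{l}K_{2}} on \ec{\RR{4}} (the overall sign depends on the orientation convention for $\Omega$; it is chosen here to match the coefficients above), the following sketch reproduces $\Pi$:

```python
def levi_civita(idx):
    """Sign of the permutation idx of (0, 1, 2, 3); 0 if an index repeats."""
    if len(set(idx)) != len(idx):
        return 0
    sign = 1
    for i in range(len(idx)):
        for j in range(i + 1, len(idx)):
            if idx[i] > idx[j]:
                sign = -sign
    return sign

def flaschka_ratiu_4d(grad_K1, grad_K2):
    """Coefficients Pi^{ij} (i < j) of the bivector with i_Pi Omega = dK1 ^ dK2
    on R^4, via contraction with the Levi-Civita symbol (sign convention
    chosen to match the example above)."""
    return {(i + 1, j + 1): -sum(levi_civita((i, j, k, l)) * grad_K1[k] * grad_K2[l]
                                 for k in range(4) for l in range(4))
            for i in range(4) for j in range(i + 1, 4)}

def example_bivector(x):
    # Gradients of K1 = x4/2 and K2 = -x1^2 + x2^2 + x3^2 at x:
    x1, x2, x3, x4 = x
    return flaschka_ratiu_4d((0.0, 0.0, 0.0, 0.5),
                             (-2.0 * x1, 2.0 * x2, 2.0 * x3, 0.0))

# At the corner (1, 1, 1, 0) of Q^4: Pi^{12} = x3 = 1, Pi^{13} = -x2 = -1,
# Pi^{23} = -x1 = -1, and the remaining coefficients vanish.
Pi = example_bivector((1.0, 1.0, 1.0, 0.0))
assert (Pi[(1, 2)], Pi[(1, 3)], Pi[(2, 3)]) == (1.0, -1.0, -1.0)
```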
\section{Algorithmic Complexity and Performance} \label{sec:ComplexityPerformance}
In this section we will present an \emph{approximation} to the (worst--case) time complexity of the twelve algorithms in Table \ref{table:Funs-Algos-Exes}, as well as a time performance analysis of our Python implementation of the functions in the module \textsf{NumPoissonGeometry}.
\subsection{Complexity}
The time complexity of certain algorithms depends on the nature and structure of the input data. In our methods, two important processes in the implementation depend on the \emph{length} of the input data items: converting string expressions to symbolic variables, and transforming these symbolic expressions into functions that allow numerical evaluation. Therefore, for the analysis of our algorithms we define,
\begin{equation*}
|C| := \underset{\text{for} \,\,x \,\,\text{in}\,\, C}{\mathrm{max}}\{\mathrm{len}(x)\},
\end{equation*}
for a container $C$ with (string expression) items that encode the coefficients (scalar functions) of the coordinate expression of a multivector field or a differential form. For example, as illustrated in (\ref{EcMultivectorDic}), we use dictionaries in our Python implementation. In this case, $C$ is a tuple (or list) that contains all the values of such a dictionary.
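In Python terms, $|C|$ is simply the maximum string length over the container's items; a minimal illustration, using the $\mathfrak{so}(3)$ bivector dictionary from the examples above:

```python
def container_norm(C):
    """|C|: the maximum length of the (string expression) items of C."""
    return max(len(x) for x in C)

# Values of the dictionary encoding Pi_so(3), as in the examples above:
P_so3 = {(1, 2): 'x3', (1, 3): '-x2', (2, 3): 'x1'}
print(container_norm(P_so3.values()))  # -> 3, attained by the item '-x2'
```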
In the following Table \ref{table:complexity} we record the approximate worst--case time complexities for the twelve methods of our \textsf{NumPoissonGeometry} module:
\begin{table}[H]
\centering
\caption{Worst--case time complexity of \textsf{NumPoissonGeometry} methods. In the second column: $m$ denotes the dimension of $\RR{m}$, $k$ is the number of points in a mesh on $\RR{m}$, and $[\cdot]$ is the integer part function.} \label{table:complexity}
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l|}
\hline
\multicolumn{1}{|c|}{\textbf{Method}} & \multicolumn{1}{c|}{\textbf{Time Complexity}} \\
\hline
\hline
\hyperref[AlgNumBivector]{\phantom{0}1. \textsf{num\_bivector\_field}}
& \ec{\mathscr{O}(m^2k|bivector|)} \\
\hline
\hyperref[AlgNumMatrixBivector]{\phantom{0}2. \textsf{num\_bivector\_to\_matrix}}
& \ec{\mathscr{O}(m^{2}k|bivector|)} \\
\hline
\hyperref[AlgNumHamVF]{\phantom{0}3. \textsf{num\_hamiltonian\_vf}}
& \ec{\mathscr{O}(mk(m|bivector| + \mathrm{len}(ham\_function)))} \\
\hline
\hyperref[AlgNumPoissonBracket]{\phantom{0}4. \textsf{num\_poisson\_bracket}}
& \ec{\mathscr{O}(mk(m|bivector| + \mathrm{len}(function\_1) + \mathrm{len}(function\_2)))} \\
\hline
\hyperref[AlgNumSharp]{\phantom{0}5. \textsf{num\_sharp\_morphism}}
& \ec{\mathscr{O}(m(mk|bivector| + |one\_form|))} \\
\hline
\hyperref[AlgNumCoboundary]{\phantom{0}6. \textsf{num\_coboundary\_operator}}
& \ec{\mathscr{O}(\mathrm{comb}(m,[m/2])|bivector||multivector|(m^{5} + k))} \\
\hline
\hyperref[AlgNumModularVF]{\phantom{0}7. \textsf{num\_modular\_vf}}
& \ec{\mathscr{O}(\mathrm{comb}(m,[m/2])|bivector|\mathrm{len}(function)(m+k))} \\
\hline
\hyperref[AlgNumCurl]{\phantom{0}8. \textsf{num\_curl\_operator}}
& \ec{\mathscr{O}(\mathrm{comb}(m,[m/2])|multivector|\mathrm{len}(function)(m+k))} \\
\hline
\hyperref[AlgNumOneFormsB]{\phantom{0}9. \textsf{num\_one\_forms\_bracket}}
& \ec{\mathscr{O}(m^2k|bivector||one\_form\_1||one\_form\_2|)} \\
\hline
\hyperref[AlgNumGauge]{10. \textsf{num\_gauge\_transformation}}
& \ec{\mathscr{O}\big(m^2k(m^5 + |bivector| + |two\_form|)\big)} \\
\hline
\hyperref[AlgNumNormal]{11. \textsf{num\_linear\_normal\_form\_R3}}
& \ec{\mathscr{O}(k|bivector|)} \\
\hline
\hyperref[AlgNumFRatiu]{12. \textsf{num\_flaschka\_ratiu\_bivector}}
& \ec{\mathscr{O}(m^{6}k|casimirs\_list|)} \\
\hline
\hline
\end{tabular}
}
\end{table}
\begin{remark}
Observe that the time complexities in Table \ref{table:complexity} depend linearly on the number of points in the mesh ($k$).
\end{remark}
Recall that the time complexity of independent processes equals the sum of the respective time complexities of each process. With this in mind, we now deduce the time complexities listed in Table \ref{table:complexity}.
\subsubsection{Polynomial Complexity}
The following methods have polynomial time complexities.
\begin{lemma}
The time complexity of the \hyperref[AlgNumBivector]{\textsf{num\_bivector\_field}} method is approximately \ec{\mathscr{O}(m^2k|bivector|)}.
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumBivector} with \emph{bivector} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 5} $\rightarrow$ \ec{\mathscr{O}(m^{2} |bivector|)}: we iterate over \emph{bivector}, and the transformation of each \emph{bivector} item depends on its length.
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(km^{2} |bivector|)}: we iterate over \emph{mesh}, and the evaluation of \emph{bivector} depends on its length and on the length of its items.
\end{enumerate}
Hence, the time complexity of \textsf{num\_bivector\_field} is \ec{\mathscr{O}(m^2k|bivector|)}.
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumMatrixBivector]{\textsf{num\_bivector\_to\_matrix}} method is approximately \ec{\mathscr{O}(m^{2}k|bivector|)}.
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumMatrixBivector} with \emph{bivector} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 5} $\rightarrow$ \ec{\mathscr{O}(m^{2}|bivector|)}: the \textsf{bivector\_to\_matrix} method has time complexity $\mathscr{O}(m^{2}|bivector|)$.
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(m^{2}|bivector|)}: we iterate over the \ec{m \times m} matrix \emph{variable\_1}, and the transformation of each \emph{bivector} item depends on its length.
\item \textsf{Line 7} $\rightarrow$ \ec{\mathscr{O}(km^{2}|bivector|)}: we iterate over \emph{mesh}, and the evaluation of \emph{variable\_1} depends on its dimension and on the length of the \emph{bivector} items.
\end{enumerate}
Hence, the time complexity of \textsf{num\_bivector\_to\_matrix} is \ec{\mathscr{O}(m^{2}k|bivector|)}.
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumHamVF]{\textsf{num\_hamiltonian\_vf}} method is approximately $\mathscr{O}(mk(m|bivector| + \mathrm{len}(ham\_function)))$.
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumHamVF} with \emph{bivector}, \emph{ham\_function} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(\mathrm{len}(ham\_function))}: converting \emph{ham\_function} to a symbolic expression depends on its length.
\item \textsf{Line 7} $\rightarrow$ \ec{\mathscr{O}(m\mathrm{len}(ham\_function))}: calculating the gradient of \emph{ham\_function} depends on the length of this symbolic expression and on the dimension of $\RR{m}$.
\item \textsf{Line 8} $\rightarrow$ \ec{\mathscr{O}(m\mathrm{len}(ham\_function))}: we iterate over the \ec{m \times 1} matrix \emph{variable\_1}, and the transformation of each of its items depends on the length of \emph{ham\_function}.
\item \textsf{Line 9} $\rightarrow$ \ec{\mathscr{O}(km\mathrm{len}(ham\_function))}: we iterate over \emph{mesh}, and the evaluation of \emph{variable\_1} depends on its dimension and on the length of \emph{ham\_function}.
\item \textsf{Line 10} $\rightarrow$ \ec{\mathscr{O}(m^{2}k|bivector|)}: the \hyperref[AlgNumMatrixBivector]{\textsf{num\_bivector\_to\_matrix}} method has time complexity \ec{\mathscr{O}(m^{2}k|bivector|)}.
\item \textsf{Lines 12-14} $\rightarrow$ \ec{\mathscr{O}(km^{2})}: we iterate over the set of indices \ec{\{0, \ldots, k-1\}}, and the product of the \emph{variable\_2} and \emph{variable\_3} items has time complexity \ec{\mathscr{O}(m^{2})}.
\end{enumerate}
Hence, the time complexity of \textsf{num\_hamiltonian\_vf} is
\[ \mathscr{O}(m k (m |bivector| + \mathrm{len} (ham\_function))).
\]
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumPoissonBracket]{\textsf{num\_poisson\_bracket}} method is approximately $\mathscr{O}(mk(m|bivector| + \mathrm{len}(function\_1) + \mathrm{len}(function\_2)))$.
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumPoissonBracket} with \emph{bivector}, \emph{function\_1}, \emph{function\_2} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 9} $\rightarrow$ \ec{\mathscr{O}(\mathrm{len}(function\_2))}: converting \emph{function\_2} to a symbolic expression depends on its length.
\item \textsf{Line 10} $\rightarrow$ \ec{\mathscr{O}(m\mathrm{len}(function\_2))}: calculating the gradient of \emph{function\_2} depends on the length of this symbolic expression and on the dimension of $\RR{m}$.
\item \textsf{Line 11} $\rightarrow$ \ec{\mathscr{O}(m\mathrm{len}(function\_2))}: we iterate over the \ec{m \times 1} matrix \emph{variable\_1}, and the transformation of each of its items depends on the length of \emph{function\_2}.
\item \textsf{Line 12} $\rightarrow$ \ec{\mathscr{O}(km\mathrm{len}(function\_2))}: we iterate over \emph{mesh}, and the evaluation of \emph{variable\_1} depends on its dimension and on the length of \emph{function\_2}.
\item \textsf{Line 13} $\rightarrow$ \ec{\mathscr{O}(mk(m|bivector| + \mathrm{len}(function\_1)))}: as \hyperref[AlgNumHamVF]{\textsf{num\_hamiltonian\_vf}} has time complexity \ec{\mathscr{O}(mk(m|bivector| + \mathrm{len}(ham\_function)))}.
\item \textsf{Lines 15-17} $\rightarrow$ \ec{\mathscr{O}(km^{2})}: we iterate over the set of indices \ec{\{0, \ldots, k-1\}}, and the product of the \emph{variable\_2} and \emph{variable\_3} items has time complexity \ec{\mathscr{O}(m^{2})}.
\end{enumerate}
Hence, the time complexity of \textsf{num\_poisson\_bracket} is
\[
\ec{\mathscr{O}(mk(m|bivector| + \mathrm{len}(function\_1) + \mathrm{len}(function\_2)))}.
\]
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumSharp]{\textsf{num\_sharp\_morphism}} method is approximately
\[
\mathscr{O}(m(mk|bivector| + |one\_form|)).
\]
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumSharp} with \emph{bivector}, \emph{one\_form} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 7} $\rightarrow$ \ec{\mathscr{O}(m|one\_form|)}: we iterate over the $(m,1)$ container \emph{variable\_1}, and the transformation of each of its items depends on the length of \emph{one\_form} items.
\item \textsf{Line 9} $\rightarrow$ \ec{\mathscr{O}(m^{2}k|bivector|)}: the \hyperref[AlgNumMatrixBivector]{\textsf{num\_bivector\_to\_matrix}} method has time complexity \ec{\mathscr{O}(m^{2}k|bivector|)}.
\item \textsf{Lines 11-13} $\rightarrow$ \ec{\mathscr{O}(km^{2})}: we iterate over the set of indices \ec{\{0, \ldots, k-1\}}, and the product of the \emph{variable\_2} and \emph{variable\_3} items has time complexity \ec{\mathscr{O}(m^{2})}.
\end{enumerate}
Hence, the time complexity of \textsf{num\_sharp\_morphism} is
\[
\mathscr{O}(m (m k |bivector| + |one\_form|)).
\]
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumOneFormsB]{\textsf{num\_one\_forms\_bracket}} method is approximately
\[
\mathscr{O}(m^2k|bivector||one\_form\_1||one\_form\_2|).
\]
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumOneFormsB} with \emph{bivector}, \emph{one\_form\_1}, \emph{one\_form\_2} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 10} $\rightarrow$ \ec{\mathscr{O}(\mathrm{len}(one\_form\_1) + \mathrm{len}(one\_form\_2))}: converting \emph{variable\_i} to a symbolic expression depends on the length of the \emph{one\_form\_1} and \emph{one\_form\_2} items.
\item \textsf{Line 11} $\rightarrow$ \ec{\mathscr{O}(m^2(|one\_form\_1| + |one\_form\_2|))}: calculating the Jacobian matrix of \emph{variable\_i} depends on the length of the \emph{one\_form\_1} and \emph{one\_form\_2} items and on the dimension of $\RR{m}$.
\item \textsf{Line 12} $\rightarrow$ \ec{\mathscr{O}(m^2(|one\_form\_1| + |one\_form\_2|))}: we iterate over the \ec{m \times m} matrix \emph{variable\_3\_i}, and the transformation of each of its items depends on the length of \emph{one\_form\_1} and \emph{one\_form\_2} items.
\item \textsf{Line 14} $\rightarrow$ \ec{\mathscr{O}(m(mk|bivector| + |one\_form\_1| + |one\_form\_2|))}: the \hyperref[AlgNumSharp]{\textsf{num\_sharp\_morphism}} method has time complexity \ec{\mathscr{O}(m(mk|bivector| + |one\_form|))}.
\item \textsf{Lines 17-19} $\rightarrow$ \ec{\mathscr{O}(km^{2})}: we iterate over the set of indices \ec{\{0, \ldots, k-1\}}, and the product of the \emph{variable\_4\_i} and \emph{variable\_5\_j} items has time complexity \ec{\mathscr{O}(m^{2})}, for \ec{i,j = 1,2}.
\item \textsf{Line 21} $\rightarrow$ \ec{\mathscr{O}(m^2|bivector||one\_form\_1|)}: the \textsf{sharp\_morphism} method has time complexity $\mathscr{O}(m^2|bivector||one\_form|)$.
\item \textsf{Line 24} $\rightarrow$ \ec{\mathscr{O}(m|one\_form\_1||one\_form\_2|)}: calculating the gradient of \emph{variable\_10} depends on the length of the \emph{one\_form\_1} and \emph{one\_form\_2} items, and on the dimension of $\RR{m}$.
\item \textsf{Line 25} $\rightarrow$ \ec{\mathscr{O}(m(|one\_form\_1||one\_form\_2|))}: we iterate over the \ec{m \times 1} matrix \emph{variable\_11}, and the transformation of each of its items depends on the length of \emph{one\_form\_1} and \emph{one\_form\_2} items.
\item \textsf{Lines 28-30} $\rightarrow$ \ec{\mathscr{O}(km)}: we iterate over the set of indices \ec{\{0, \ldots, k-1\}}, and the sum of the \emph{variable\_6}, \emph{variable\_7} and \emph{variable\_12} items has time complexity $\mathscr{O}(m)$.
\end{enumerate}
Hence, the time complexity of \textsf{num\_one\_forms\_bracket} is
\[
\mathscr{O}( m^2 k |bivector| |one\_form\_1| |one\_form\_2|).
\]
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumGauge]{\textsf{num\_gauge\_transformation}} method is approximately
\[
\mathscr{O}(m^2k(m^5 + |bivector| + |two\_form|)).
\]
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumGauge} with \emph{bivector}, \emph{two\_form} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(m^{2}k|bivector|)}: the \hyperref[AlgNumMatrixBivector]{\textsf{num\_bivector\_to\_matrix}} method has time complexity \ec{\mathscr{O}(m^{2}k|bivector|)}.
\item \textsf{Line 7} $\rightarrow$ \ec{\mathscr{O}(m^{2}k|two\_form|)}: the \hyperref[AlgNumMatrixBivector]{\textsf{num\_bivector\_to\_matrix}} method, applied here to \emph{two\_form}, has time complexity \ec{\mathscr{O}(m^{2}k|two\_form|)}.
\item \textsf{Lines 10-12} $\rightarrow$ \ec{\mathscr{O}(km^{2})}: we iterate over the set of indices \ec{\{0, \ldots, k-1\}}, and the matrix operations between the \emph{variable\_1}, \emph{variable\_2} and \emph{variable\_3} items have time complexity $\mathscr{O}(m^{2})$.
\item \textsf{Lines 14-20} $\rightarrow$ $\mathscr{O}(km^{10})$: we iterate over the set of indices \ec{\{0, \ldots, k-1\}}; computing the determinant of each item of \emph{variable\_4} has time complexity $\mathscr{O}(m^4)$, computing the inverse has time complexity $\mathscr{O}(m^3)$, and the matrix product in \textsf{line 16} has time complexity $\mathscr{O}(m^3)$.
\end{enumerate}
Hence, the time complexity of \textsf{num\_gauge\_transformation} is $\mathscr{O}(m^2 k (m^5 + |bivector| + |two\_form|))$.
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumNormal]{\textsf{num\_linear\_normal\_form\_R3}} method is approximately \ec{\mathscr{O}(k|bivector|)}.
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumNormal} with \emph{linear\_bivector} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 5} $\rightarrow$ \ec{\mathscr{O}(|bivector|)}: the \textsf{linear\_normal\_form\_R3} method has time complexity $\mathscr{O}(|bivector|)$.
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(k|bivector|)}: the \hyperref[AlgNumBivector]{\textsf{num\_bivector\_field}} method has time complexity \ec{\mathscr{O}(m^{2}k|bivector|)}, which reduces to \ec{\mathscr{O}(k|bivector|)} since $m = 3$ is fixed.
\end{enumerate}
Hence, the time complexity of \textsf{num\_linear\_normal\_form\_R3} is \ec{\mathscr{O}(k|bivector|)}.
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumFRatiu]{\textsf{num\_flaschka\_ratiu\_bivector}} method is approximately \[ \mathscr{O}(m^{6}k|casimirs\_list|). \]
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumFRatiu} with \emph{casimirs\_list} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 5} $\rightarrow$ \ec{\mathscr{O}(m^{6}|casimirs\_list|)}: the \textsf{flaschka\_ratiu\_bivector} method has time complexity $\mathscr{O}(m^{6}|casimirs\_list|)$.
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(m^{2}k|casimirs\_list|)}: the \hyperref[AlgNumBivector]{\textsf{num\_bivector\_field}} method has time complexity \ec{\mathscr{O}(m^{2}k|bivector|)}.
\end{enumerate}
Hence, the time complexity of \textsf{num\_flaschka\_ratiu\_bivector} is \ec{\mathscr{O}(m^{6}k|casimirs\_list|)}.
\end{proof}
\subsubsection{Exponential Complexity}
Due to the nature of the concatenated loops in our methods, and because we need to calculate ordered index permutations, the following methods have exponential time complexities.
\begin{lemma}
The time complexity of the \hyperref[AlgNumCoboundary]{\textsf{num\_coboundary\_operator}} method is approximately \[ \mathscr{O}(\mathrm{comb}(m,[m/2])|bivector||multivector|(m^{5} + k)). \]
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumCoboundary} with \emph{bivector}, \emph{multivector} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 6} $\rightarrow$ $\mathscr{O}(m^{5}\mathrm{comb}(m,[m/2])|bivector||multivector|)$: the \textsf{lichnerowicz\_poisson\_operator} method has time complexity
\begin{equation*}
\mathscr{O}(m^{5} \mathrm{comb} (m, [m/2]) |bivector||multivector|).
\end{equation*}
\item \textsf{Line 7} $\rightarrow$ $\mathscr{O}(\mathrm{comb}(m,[m/2])|bivector||multivector|)$: we iterate over \emph{variable\_1}, and the transformation of each of its items depends on the length of the \emph{bivector} and \emph{multivector} items.
\item \textsf{Line 8} $\rightarrow$ $\mathscr{O}(k\mathrm{comb}(m,[m/2])|bivector||multivector|)$: we iterate over \emph{mesh}, and the evaluation of \emph{variable\_1} depends on its length and on the length of the \emph{bivector} and \emph{multivector} items.
\end{enumerate}
Hence, the time complexity of \textsf{num\_coboundary\_operator} is \[ \ec{\mathscr{O}(\mathrm{comb}(m,[m/2])|bivector||multivector|(m^{5} + k))}. \]
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumModularVF]{\textsf{num\_modular\_vf}} method is approximately \[ \mathscr{O}(\mathrm{comb}(m,[m/2]) |bivector|\mathrm{len}(function)(m+k)). \]
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumModularVF} with \emph{bivector}, \emph{function} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(m\,\mathrm{comb}(m,[m/2])|bivector| \mathrm{len}(function))}: the \textsf{modular\_vf} method has time complexity \[ \mathscr{O}(m\,\mathrm{comb}(m,[m/2])|bivector| \mathrm{len}(function)). \]
\item \textsf{Line 7} $\rightarrow$ \ec{\mathscr{O}(\mathrm{comb}(m,[m/2])|bivector|\mathrm{len}(function))}: we iterate over \emph{variable\_1}, and the transformation of each of its items depends on the length of the \emph{bivector} items and the \emph{function} string expression.
\item \textsf{Line 8} $\rightarrow$ \ec{\mathscr{O}(k\,\mathrm{comb}(m,[m/2])|bivector|\mathrm{len}(function))}: we iterate over \emph{mesh}. The evaluation of \emph{variable\_1} depends on its length, and on the length of the \emph{bivector} items and the \emph{function} string expression.
\end{enumerate}
Hence, the time complexity of \textsf{num\_modular\_vf} is
\[
\mathscr{O}(\mathrm{comb} (m, [m/2]) |bivector| \mathrm{len}(function)(m + k)).
\]
\end{proof}
\begin{lemma}
The time complexity of the \hyperref[AlgNumCurl]{\textsf{num\_curl\_operator}} method is approximately \[ \mathscr{O}(\mathrm{comb}(m,[m/2]) |multivector|\mathrm{len}(function)(m+k)). \]
\end{lemma}
\begin{proof}
Consider the Algorithm \ref{AlgNumCurl} with \emph{multivector}, \emph{function} and \emph{mesh} inputs. The time complexity of our implementation depends on:
\begin{enumerate}[label=\roman*.]
\item \textsf{Line 6} $\rightarrow$ \ec{\mathscr{O}(m\,\mathrm{comb}(m,[m/2])|multivector| \mathrm{len}(function))}: the \textsf{curl\_operator} method has time complexity \ec{\mathscr{O}(m\,\mathrm{comb}(m,[m/2])|multivector| \mathrm{len}(function))}.
\item \textsf{Line 7} $\rightarrow$ \ec{\mathscr{O}(\mathrm{comb}(m,[m/2])|multivector|\mathrm{len}(function))}: we iterate over \emph{variable\_1}, and the transformation of each of its items depends on the length of the \emph{multivector} items and the \emph{function} string expression.
\item \textsf{Line 8} $\rightarrow$ \ec{\mathscr{O}(k\,\mathrm{comb}(m,[m/2])|multivector|\mathrm{len}(function))}: we iterate over \emph{mesh}. The evaluation of \emph{variable\_1} depends on its length, and on the length of the \emph{multivector} items and the \emph{function} string expression.
\end{enumerate}
Hence, the time complexity of \textsf{num\_curl\_operator} is \[ \ec{\mathscr{O}(\mathrm{comb}(m,[m/2])|multivector|\mathrm{len}(function)(m+k))}.\]
\end{proof}
\subsection{Performance}
The time performance of each function in \textsf{NumPoissonGeometry} was experimentally measured by evaluating concrete examples on $\RR{2}$, and $\RR{3}$, on precalculated (irregular) meshes with $10^{\kappa}$ points, for \ec{\kappa = 3,\ldots,7}. These meshes were generated by means of random samples extracted from a uniform distribution in the interval $[0,1)$.
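A mesh of this kind can be produced, for instance, as follows (an illustrative sketch; the actual benchmark meshes were precalculated):

```python
import random

def random_mesh(kappa, m, seed=None):
    """Irregular mesh with 10**kappa points in R^m, with coordinates drawn
    from the uniform distribution on [0, 1)."""
    rng = random.Random(seed)
    return [[rng.random() for _ in range(m)] for _ in range(10 ** kappa)]

mesh = random_mesh(3, 3, seed=0)
print(len(mesh), len(mesh[0]))  # 1000 3
assert all(0.0 <= c < 1.0 for p in mesh for c in p)
```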
All the numerical experiments were performed on a workstation equipped with 48 GB of main memory (3 $\times$ 16 GB DDR4 modules) and a four--core Intel(R) Core(TM) i7-6700 CPU running at 3.4 GHz.
\begin{comment}
\begin{table}[H]
\caption{Workstation specifications.} \label{table:ColabMemory}
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{| l l l l |}
\hline
\hline
Architecture: x86\_64 & Core(s) per socket: 4 & Model name: Intel(R) Core(TM) & Virtualization: VT-x \\
CPU op-mode(s): 32-bit, 64-bit & Socket(s): 1 & Stepping: 3 & L1d cache: 32K \\
Byte Order: Little Endian & NUMA node(s): 1 & CPU MHz: 799.780 & L1i cache: 32K \\
CPU(s): 8 & Vendor ID: GenuineIntel & CPU max MHz: 4000.0000 & L2 cache: 256K \\
On-line CPU(s) list: 0-7 & CPU family: 6 & CPU min MHz: 800.0000 & L3 cache: 8192K \\
Thread(s) per core: 2 & Model: 94 & BogoMIPS: 6816.00 & NUMA node0 CPU(s): 0-7 \\
\hline
\hline
\end{tabular}
}
\end{table}
\end{comment}
\subsubsection{Two Dimensional Case}
For the performance tests of our methods in dimension two, we have set the non--degenerate Poisson bivector field
\begin{equation*}
\Pi_{0} = \frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2},
\end{equation*}
induced by the standard symplectic structure \ec{\mathrm{d}{x_{1}} \wedge \mathrm{d}{x_{2}}} on $\RR{2}_{x}$.
\begin{table}[H]
\centering
\caption{Input data used for the time performance tests of functions 1-10 in Table \ref{table:performanceR2}.}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|l|c|l|}
\hline
\multicolumn{1}{|c|} {\textbf{Function}} & \multicolumn{1}{c|}{\textbf{Input}} & \multicolumn{1}{|c|} {\textbf{Function}} & \multicolumn{1}{c|}{\textbf{Input}} \\
\hline
\hline
1 & $\Pi_{0}$ &
6 & $\Pi_{0}$, \ec{W = x_{2} \frac{\partial}{\partial x_{1}} - x_{1} \frac{\partial}{\partial x_{2}}} \\
\hline
2 & $\Pi_{0}$ &
7 & $\Pi_{0}$, \ec{f=1} \\
\hline
3 & $\Pi_{0}$, \ec{h = x_{1}^{2} + x_{2}^{2}} &
8 & $\Pi_{0}$, \ec{f=1} \\
\hline
4 & $\Pi_{0}$, \ec{f = x_{1}^{2} + x_{2}^{2}}, \ec{g=x_{1} + x_{2}} &
9 & $\Pi_{0}$, \ec{\alpha = x_{1}\mathrm{d}{x_{1}} + x_{2}\mathrm{d}{x_{2}}}, \ec{\beta = \mathrm{d}{x_{1}} + \mathrm{d}{x_{2}}} \\
\hline
5 & $\Pi_{0}$, \ec{\alpha = x_{1}\mathrm{d}{x_{1}} + x_{2}\mathrm{d}{x_{2}}} &
10 & $\Pi_{0}$, \ec{\lambda = \mathrm{d}{x_{1}} \wedge \mathrm{d}{x_{2}}} \\
\hline
\hline
\end{tabular}
}
\end{table}
Table \ref{table:performanceR2} lists the mean time in seconds (with standard deviation) it takes to evaluate the first ten functions in \textsf{NumPoissonGeometry} (see Table \ref{table:Funs-Algos-Exes}) on an irregular mesh on $\RR{2}_{x}$ with $10^{\kappa}$ points, computed by taking twenty-five samples, for \ec{\kappa = 3,\ldots,7}.
\begin{table}[H]
\centering
\caption{Summary of the time performance of \textsf{NumPoissonGeometry} functions, in dimension two.} \label{table:performanceR2}
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l l l l l|}
\hline
\hline
\multirow{2}{*}{\hspace{1.25cm}\textbf{Function}} & \multicolumn{5}{c|}{\textbf{Points in mesh/Processing time (in seconds)}} \\
& \hspace{0.85cm}$10^{3}$ & \hspace{0.85cm}$10^{4}$ & \hspace{0.85cm}$10^{5}$ & \hspace{0.85cm}$10^{6}$ & \hspace{0.85cm}$10^{7}$ \\
\hline
\hyperref[AlgNumBivector]{\phantom{0}1. \textsf{num\_bivector\_field}}
& 0.004 \textcolor{gray}{$\pm$ 0.689}
& 0.038 \textcolor{gray}{$\pm$ 0.009}
& 0.356 \textcolor{gray}{$\pm$ 0.002}
& \phantom{0}3.545 \textcolor{gray}{$\pm$ 0.026}
& \phantom{0}35.711 \textcolor{gray}{$\pm$ 0.164} \\
\hline
\hyperref[AlgNumMatrixBivector]{\phantom{0}2. \textsf{num\_bivector\_to\_matrix}}
& 0.006 \textcolor{gray}{$\pm$ 3.633}
& 0.046 \textcolor{gray}{$\pm$ 0.001}
& 0.438 \textcolor{gray}{$\pm$ 0.004}
& \phantom{0}4.442 \textcolor{gray}{$\pm$ 0.037}
& \phantom{0}45.155 \textcolor{gray}{$\pm$ 1.466} \\
\hline
\hyperref[AlgNumHamVF]{\phantom{0}3. \textsf{num\_hamiltonian\_vf}}
& 0.014 \textcolor{gray}{$\pm$ 0.001}
& 0.112 \textcolor{gray}{$\pm$ 0.006}
& 1.096 \textcolor{gray}{$\pm$ 0.021}
& 10.867 \textcolor{gray}{$\pm$ 0.044}
& 108.460 \textcolor{gray}{$\pm$ 0.726} \\
\hline
\hyperref[AlgNumPoissonBracket]{\phantom{0}4. \textsf{num\_poisson\_bracket}}
& 0.021 \textcolor{gray}{$\pm$ 0.006}
& 0.169 \textcolor{gray}{$\pm$ 0.001}
& 1.652 \textcolor{gray}{$\pm$ 0.008}
& 16.721 \textcolor{gray}{$\pm$ 0.049}
& 168.110 \textcolor{gray}{$\pm$ 1.637} \\
\hline
\hyperref[AlgNumSharp]{\phantom{0}5. \textsf{num\_sharp\_morphism}}
& 0.014 \textcolor{gray}{$\pm$ 0.658}
& 0.111 \textcolor{gray}{$\pm$ 0.001}
& 1.068 \textcolor{gray}{$\pm$ 0.007}
& 10.725 \textcolor{gray}{$\pm$ 0.142}
& 107.275 \textcolor{gray}{$\pm$ 0.667} \\
\hline
\hyperref[AlgNumCoboundary]{\phantom{0}6. \textsf{num\_coboundary\_operator}}
& 0.001 \textcolor{gray}{$\pm$ 0.087}
& 0.008 \textcolor{gray}{$\pm$ 0.001}
& 0.084 \textcolor{gray}{$\pm$ 0.006}
& \phantom{0}0.848 \textcolor{gray}{$\pm$ 0.011}
& \phantom{0}8.638 \textcolor{gray}{$\pm$ 0.045} \\
\hline
\hyperref[AlgNumModularVF]{\phantom{0}7. \textsf{num\_modular\_vf}}
& 0.004 \textcolor{gray}{$\pm$ 0.754}
& 0.030 \textcolor{gray}{$\pm$ 0.009}
& 0.280 \textcolor{gray}{$\pm$ 0.001}
& \phantom{0}2.805 \textcolor{gray}{$\pm$ 0.016}
& \phantom{0}28.057 \textcolor{gray}{$\pm$ 0.107} \\
\hline
\hyperref[AlgNumCurl]{\phantom{0}8. \textsf{num\_curl\_operator}}
& 0.022 \textcolor{gray}{$\pm$ 0.009}
& 0.196 \textcolor{gray}{$\pm$ 0.024}
& 1.923 \textcolor{gray}{$\pm$ 0.004}
& 18.487 \textcolor{gray}{$\pm$ 0.136}
& 182.774 \textcolor{gray}{$\pm$ 1.260} \\
\hline
\hyperref[AlgNumOneFormsB]{\phantom{0}9. \textsf{num\_one\_forms\_bracket}}
& 0.058 \textcolor{gray}{$\pm$ 0.006}
& 0.420 \textcolor{gray}{$\pm$ 0.007}
& 4.278 \textcolor{gray}{$\pm$ 0.027}
& 43.257 \textcolor{gray}{$\pm$ 0.071}
& 434.450 \textcolor{gray}{$\pm$ 0.589} \\
\hline
\hyperref[AlgNumGauge]{10. \textsf{num\_gauge\_transformation}}
& 0.051 \textcolor{gray}{$\pm$ 0.001}
& 0.446 \textcolor{gray}{$\pm$ 0.010}
& 4.380 \textcolor{gray}{$\pm$ 0.016}
& 43.606 \textcolor{gray}{$\pm$ 0.212}
& 434.704 \textcolor{gray}{$\pm$ 1.234} \\
\hline
\hline
\end{tabular}
}
\end{table}
To illustrate how fast the \textsf{NumPoissonGeometry} functions can be executed, we use the data in Table \ref{table:performanceR2} to plot the time versus the number of points in each $10^{\kappa}$-point (irregular) mesh on base $10$ \emph{log-log graphs}:
\begin{figure}[H]
\centering
\caption{Log-log graphs of the execution time in seconds versus the number of points in $10^{\kappa}$-point (irregular) meshes of the \textsf{NumPoissonGeometry} functions 1--10 in Table \ref{table:performanceR2}, for \ec{\kappa=3,\ldots,7}. In red, the fitted linear model used to predict the asymptotic behavior of the runtime for each function, with the corresponding determination coefficient (R-squared) indicated in each legend. We include a zoom-graph in each plot due to the accumulation of runtime values.} \label{fig:timeperformanceR2}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccc}
\includegraphics[width=\textwidth]{num_bivector_field_D2.png} &
\includegraphics[width=\textwidth]{num_bivector_to_matrix_D2.png} &
\includegraphics[width=\textwidth]{num_hamiltonian_vf_D2.png} &
\includegraphics[width=\textwidth]{num_poisson_bracket_D2.png}
\end{tabular}
}
\end{figure}
\resizebox{\textwidth}{!}{
\begin{tabular}{cccc}
\includegraphics[width=\textwidth]{num_sharp_morphism_D2.png} &
\includegraphics[width=\textwidth]{num_coboundary_operator_D2.png} &
\includegraphics[width=\textwidth]{num_modular_vf_D2.png} &
\includegraphics[width=\textwidth]{num_curl_operator_D2.png}
\end{tabular}
}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.25\textwidth]{num_one_forms_bracket_D2.png} &
\includegraphics[width=0.25\textwidth]{num_gauge_transformation_D2.png}
\end{tabular}
\end{center}
We deduce from the graphs in Figure \ref{fig:timeperformanceR2} that, for the input data in Table \ref{table:InputR2}, all of our methods in Table \ref{table:performanceR2} were executed experimentally in polynomial time. Power-law relationships appear as straight lines in a log-log graph. Therefore the degrees of the polynomial complexities are deduced by fitting a linear model and estimating its coefficient, which we carry out to an accuracy (R-squared) of 0.99.
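As an illustration of this fit, here is a minimal NumPy sketch (written for this note, not part of the package) applied to the \textsf{num\_bivector\_field} row of Table \ref{table:performanceR2}:

```python
import numpy as np

def fit_loglog_slope(points, runtimes):
    """Fit runtime ~ C * points**p in log-log space; return (p, R-squared)."""
    x = np.log10(np.asarray(points, dtype=float))
    y = np.log10(np.asarray(runtimes, dtype=float))
    p, log_c = np.polyfit(x, y, 1)          # slope p estimates the degree
    residuals = y - (p * x + log_c)
    r_squared = 1.0 - residuals.var() / y.var()
    return p, r_squared

# Mean runtimes of num_bivector_field on 10**3 ... 10**7 mesh points (table above)
mesh_sizes = [10**k for k in range(3, 8)]
runtimes = [0.004, 0.038, 0.356, 3.545, 35.711]
slope, r2 = fit_loglog_slope(mesh_sizes, runtimes)
# slope close to 1: runtime roughly linear in the mesh size, with R-squared above 0.99
```

The estimated slope is the exponent of the power law, i.e.\ the experimentally observed polynomial degree in the number of mesh points.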
\subsubsection{Three Dimensional Case}
For the performance tests of our methods in dimension three, we have set the Lie--Poisson bivector field $\Pi_{\mathfrak{sl}(2)}$ in (\ref{EcPiSL2}),
\begin{equation*}
\Pi_{\mathfrak{sl}(2)} =
-x_3\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_2} -
x_2\frac{\partial}{\partial x_1}\wedge \frac{\partial}{\partial x_3} +
x_1 \frac{\partial}{\partial x_2}\wedge \frac{\partial}{\partial x_3},
\end{equation*}
associated to the 3--dimensional Lie algebra $\mathfrak{sl}(2)$.
\begin{table}[H]
\centering
\caption{Input data used for the time performance tests of functions 1-11 in Table \ref{table:performance}.} \label{table:InputR2}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|l|}
\hline
\multicolumn{1}{|c|} {\textbf{Function}} & \multicolumn{1}{c|}{\textbf{Input}} \\
\hline
\hline
1 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$ \\
\hline
2 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$ \\
\hline
3 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, \ec{h = x_{1}^{2} + x_{2}^{2} - x_{3}^{2}} \\
\hline
4 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, \ec{f = x_{1}^{2} + x_{2}^{2} - x_{3}^{2}}, \ec{g=x_{1} + x_{2} + x_{3}} \\
\hline
5 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, \ec{\alpha = x_{1}\mathrm{d}{x_{1}} + x_{2}\mathrm{d}{x_{2}} - x_{3}\mathrm{d}{x_{3}}} \\
\hline
6 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, \ec{W = e^{{-1}/{(x_1^2 + x_2^2 - x_3^2)^2}} \big[ {x_1x_{3}}/(x_1^2 + x_2^2)\frac{\partial}{\partial{x_{1}}} + {x_2x_{3}}/(x_1^2 + x_2^2)\frac{\partial}{\partial{x_{2}}} + \frac{\partial}{\partial{x_{3}}} \big]} \\
\hline
7 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, $f=1$ \\
\hline
8 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, $f=1$ \\
\hline
9 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, \ec{\alpha = x_{1}\mathrm{d}{x_{1}} + x_{2}\mathrm{d}{x_{2}} - x_{3}\mathrm{d}{x_{3}}}, \ec{\beta = \mathrm{d}{x_{1}} + \mathrm{d}{x_{2}} + \mathrm{d}{x_{3}}} \\
\hline
10 & $\phantom{-}\Pi_{\mathfrak{sl}(2)}$, \ec{\lambda = (x_{2}-x_{1})\mathrm{d}{x_{1}} \wedge \mathrm{d}{x_{2}} + (x_{3}-x_{1})\mathrm{d}{x_{1}} \wedge \mathrm{d}{x_{3}} + (x_{2}-x_{3})\mathrm{d}{x_{2}} \wedge \mathrm{d}{x_{3}}} \\
\hline
11 & $-\Pi_{\mathfrak{sl}(2)}$ \\
\hline
\hline
\end{tabular}
}
\end{table}
Table \ref{table:performance} lists the mean time in seconds (with standard deviation) it takes to evaluate the first eleven functions in \textsf{NumPoissonGeometry} (see Table \ref{table:Funs-Algos-Exes}) on an irregular mesh on $\RR{3}_{x}$ with $10^{\kappa}$ points, computed by taking twenty-five samples, for \ec{\kappa = 3,\ldots,7}.
\begin{table}[H]
\centering
\caption{Summary of the time performance of \textsf{NumPoissonGeometry} functions in dimension 3.} \label{table:performance}
\resizebox{\textwidth}{!}{
\begin{tabular}{| l | l l l l l |}
\hline
\multirow{2}{*}{\hspace{1.25cm}\textbf{Function}} & \multicolumn{5}{c|}{\textbf{Points in mesh/Processing time (in seconds)}} \\
& \hspace{0.85cm}$10^{3}$ & \hspace{0.85cm}$10^{4}$ & \hspace{0.85cm}$10^{5}$ & \hspace{0.85cm}$10^{6}$ & \hspace{0.85cm}$10^{7}$ \\
\hline
\hline
\hyperref[AlgNumBivector]{\phantom{0}1. \textsf{num\_bivector\_field}}
& 0.009 \textcolor{gray}{$\pm$ 0.009}
& 0.051 \textcolor{gray}{$\pm$ 0.004}
& 0.496 \textcolor{gray}{$\pm$ 0.002}
& \phantom{0}4.984 \textcolor{gray}{$\pm$ 0.023}
& \phantom{0}49.565 \textcolor{gray}{$\pm$ 0.222} \\
\hline
\hyperref[AlgNumMatrixBivector]{\phantom{0}2. \textsf{num\_bivector\_to\_matrix}}
& 0.008 \textcolor{gray}{$\pm$ 3.164}
& 0.057 \textcolor{gray}{$\pm$ 0.002}
& 0.553 \textcolor{gray}{$\pm$ 0.019}
& \phantom{0}5.442 \textcolor{gray}{$\pm$ 0.023}
& \phantom{0}55.249 \textcolor{gray}{$\pm$ 1.690} \\
\hline
\hyperref[AlgNumHamVF]{\phantom{0}3. \textsf{num\_hamiltonian\_vf}}
& 0.017 \textcolor{gray}{$\pm$ 0.002}
& 0.129 \textcolor{gray}{$\pm$ 0.001}
& 1.263 \textcolor{gray}{$\pm$ 0.022}
& 12.518 \textcolor{gray}{$\pm$ 0.064}
& 126.091 \textcolor{gray}{$\pm$ 0.583} \\
\hline
\hyperref[AlgNumPoissonBracket]{\phantom{0}4. \textsf{num\_poisson\_bracket}}
& 0.036 \textcolor{gray}{$\pm$ 0.001}
& 0.299 \textcolor{gray}{$\pm$ 0.010}
& 2.936 \textcolor{gray}{$\pm$ 0.067}
& 29.600 \textcolor{gray}{$\pm$ 0.933}
& 292.625 \textcolor{gray}{$\pm$ 6.094} \\
\hline
\hyperref[AlgNumSharp]{\phantom{0}5. \textsf{num\_sharp\_morphism}}
& 0.017 \textcolor{gray}{$\pm$ 0.006}
& 0.128 \textcolor{gray}{$\pm$ 0.005}
& 1.252 \textcolor{gray}{$\pm$ 0.005}
& 12.384 \textcolor{gray}{$\pm$ 0.038}
& 124.851 \textcolor{gray}{$\pm$ 1.809} \\
\hline
\hyperref[AlgNumCoboundary]{\phantom{0}6. \textsf{num\_coboundary\_operator}}
& 1.589 \textcolor{gray}{$\pm$ 0.016}
& 1.705 \textcolor{gray}{$\pm$ 0.029}
& 2.815 \textcolor{gray}{$\pm$ 0.032}
& 12.972 \textcolor{gray}{$\pm$ 0.166}
& 111.034 \textcolor{gray}{$\pm$ 1.365} \\
\hline
\hyperref[AlgNumModularVF]{\phantom{0}7. \textsf{num\_modular\_vf}}
& 0.050 \textcolor{gray}{$\pm$ 0.001}
& 0.103 \textcolor{gray}{$\pm$ 0.004}
& 0.645 \textcolor{gray}{$\pm$ 0.006}
& \phantom{0}6.025 \textcolor{gray}{$\pm$ 0.013}
& \phantom{0}59.652 \textcolor{gray}{$\pm$ 0.146} \\
\hline
\hyperref[AlgNumCurl]{\phantom{0}8. \textsf{num\_curl\_operator}}
& 0.019 \textcolor{gray}{$\pm$ 0.010}
& 0.129 \textcolor{gray}{$\pm$ 0.027}
& 1.199 \textcolor{gray}{$\pm$ 0.032}
& 10.911 \textcolor{gray}{$\pm$ 0.181}
& 105.841 \textcolor{gray}{$\pm$ 1.230} \\
\hline
\hyperref[AlgNumOneFormsB]{\phantom{0}9. \textsf{num\_one\_forms\_bracket}}
& 0.093 \textcolor{gray}{$\pm$ 0.001}
& 0.738 \textcolor{gray}{$\pm$ 0.007}
& 7.285 \textcolor{gray}{$\pm$ 0.159}
& 72.802 \textcolor{gray}{$\pm$ 1.474}
& 724.514 \textcolor{gray}{$\pm$ 13.594} \\
\hline
\hyperref[AlgNumGauge]{10. \textsf{num\_gauge\_transformation}}
& 0.051 \textcolor{gray}{$\pm$ 0.001}
& 0.445 \textcolor{gray}{$\pm$ 0.010}
& 4.395 \textcolor{gray}{$\pm$ 0.013}
& 43.794 \textcolor{gray}{$\pm$ 0.173}
& 437.326 \textcolor{gray}{$\pm$ 0.824} \\
\hline
\hyperref[AlgNumNormal]{11. \textsf{num\_linear\_normal\_form\_R3}}
& 0.016 \textcolor{gray}{$\pm$ 0.438}
& 0.061 \textcolor{gray}{$\pm$ 0.002}
& 0.504 \textcolor{gray}{$\pm$ 0.012}
& \phantom{0}4.903 \textcolor{gray}{$\pm$ 0.017}
& \phantom{0}48.786 \textcolor{gray}{$\pm$ 0.219} \\
\hline
\hline
\end{tabular}
}
\end{table}
Now, to illustrate how fast the \textsf{NumPoissonGeometry} functions can be performed, we use the data in Table \ref{table:performance} to plot the time versus the number of points in each $10^{\kappa}$-point (irregular) mesh on base $10$ \emph{log-log graphs}. These (execution time) plots are presented in Figures \ref{fig:timeperformance} and \ref{fig:timeperformance2}.
\subsubsection{Polynomial Time}
The log-log graphs presented in Figure \ref{fig:timeperformance} correspond to \textsf{NumPoissonGeometry} methods that are executed in polynomial time of some degree. As power-law relationships appear as straight lines in a log-log graph, the complexities are deduced by fitting a linear model and estimating its coefficient to an accuracy (R-squared) of 0.99.
\begin{figure}[H]
\centering
\caption{Log-log graphs of the execution time in seconds versus the number of points in $10^{\kappa}$-point (irregular) meshes of the \textsf{NumPoissonGeometry} functions 1--5 and 8--11 in Table \ref{table:performance}, for \ec{\kappa=3,\ldots,7}. In red, the fitted linear model used to predict the asymptotic behavior of the runtime for each function, with the corresponding determination coefficient (R-squared) indicated in each legend. We include a zoom-graph in each plot due to the accumulation of runtime values.}
\label{fig:timeperformance}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccc}
\includegraphics[width=\textwidth]{num_bivector_field.png} &
\includegraphics[width=\textwidth]{num_bivector_to_matrix.png} &
\includegraphics[width=\textwidth]{num_hamiltonian_vf.png}
\end{tabular}
}
\end{figure}
\resizebox{\textwidth}{!}{
\begin{tabular}{ccc}
\includegraphics[width=\textwidth]{num_poisson_bracket.png} &
\includegraphics[width=\textwidth]{num_sharp_morphism.png} &
\includegraphics[width=\textwidth]{num_one_forms_bracket.png} \\ [0.5cm]
\includegraphics[width=\textwidth]{num_gauge_transformation.png} &
\includegraphics[width=\textwidth]{num_linear_normal_form_R3.png} &
\includegraphics[width=\textwidth]{num_curl_operator.png}
\end{tabular}
}
Observe that the experimentally deduced time complexities of the \textsf{NumPoissonGeometry} functions in Figure \ref{fig:timeperformance} coincide with their theoretical time complexities in the following sense: the time complexities of methods 1--5 and 9--11 in Table \ref{table:complexity} are polynomially dependent on variables $m$ and $k$.
\begin{remark}
In our experiments, \textsf{num\_curl\_operator} approached a polynomial time complexity, although its theoretical time complexity is exponential (see, Table \ref{table:complexity}). This is not {\em a fortiori} a contradiction, because in Table \ref{table:complexity} we present an approximation of the \emph{worst--case} time complexity of the \textsf{NumPoissonGeometry} methods. In fact, it is an example that the execution time of our algorithms depends (naturally) on their inputs and that in some cases they run faster than expected.
\end{remark}
\subsubsection{Exponential Time}
The log-log graphs presented in Figure \ref{fig:timeperformance2} correspond to \textsf{NumPoissonGeometry} methods that are executed in exponential time. As exponential relationships trace polynomial curves in a log-log graph, the complexities are deduced by fitting a (non-linear) polynomial regression and estimating its coefficients to an accuracy (R-squared) of 0.99.
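A minimal sketch of such a fit in log-log coordinates, applied to the \textsf{num\_coboundary\_operator} row of Table \ref{table:performance}; the quadratic degree is our illustrative choice here:

```python
import numpy as np

# Mean runtimes of num_coboundary_operator on 10**3 ... 10**7 mesh points
log_points = np.arange(3, 8, dtype=float)   # log10 of the mesh sizes
log_times = np.log10([1.589, 1.705, 2.815, 12.972, 111.034])

coeffs = np.polyfit(log_points, log_times, 2)   # quadratic model in log-log space
fitted = np.polyval(coeffs, log_points)
ss_res = ((log_times - fitted) ** 2).sum()
ss_tot = ((log_times - log_times.mean()) ** 2).sum()
r_squared = 1.0 - ss_res / ss_tot
# a positive leading coefficient: the curve bends upward, i.e. super-polynomial growth
```

A straight line would fit these points poorly, while the upward-bending quadratic reaches the 0.99 determination coefficient, which is how the exponential-time methods are distinguished from the polynomial-time ones.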
\begin{figure}[H]
\centering
\caption{Log-log graphs of the execution time in seconds versus the number of points in $10^{\kappa}$-point (irregular) meshes of the \textsf{NumPoissonGeometry} functions 6 and 7 in Table \ref{table:performance}, for \ec{\kappa=3,\ldots,7}. In red, the fitted model used to predict the asymptotic behavior of the runtime for each function, with the corresponding determination coefficient (R-squared) indicated in each legend. We include a zoom-graph in each plot due to the accumulation of runtime values.}
\label{fig:timeperformance2}
\begin{tabular}{cc}
\includegraphics[width=0.335\textwidth]{num_coboundary_operator.png} &
\includegraphics[width=0.35\textwidth]{num_modular_vf.png}
\end{tabular}
\end{figure}
Observe that although the \textsf{NumPoissonGeometry} functions presented in Figure \ref{fig:timeperformance2} are executed in exponential time, they are relatively fast since their evaluation on a 10 million point (irregular) mesh takes at most 2 minutes in the experiments on our desktop workstation. Furthermore, the execution times of the \textsf{num\_modular\_vf} function are fitted to a linear model in the interval [4,7], as we illustrate in the following figure:
\begin{figure}[H]
\centering
\caption{Log-log graph of the execution time in seconds versus the number of points in a $10^{\kappa}$-point (irregular) mesh of the \textsf{NumPoissonGeometry} function \textsf{num\_modular\_vf}, for \ec{\kappa=4,\ldots,7}. In red, the fitted linear model used to predict the asymptotic behavior of the runtime, with the corresponding determination coefficient (R-squared) indicated in the legend.}
\includegraphics[width=0.35\textwidth]{num_modular_vf2.png}
\end{figure}
\subsubsection{Flaschka--Ratiu Bivector Fields}
For the performance tests of the method \textsf{num\_flaschka\_ratiu\_bivector}, we have used as inputs the following scalar functions on $\RR{4}_{x}$ (see Example \ref{example:so3FR}):
\begin{equation*}
K_{1} = \tfrac{1}{2}x_{4}, \qquad K_{2} = -x_{1}^{2} + x_{2}^{2} + x_{3}^{2}.
\end{equation*}
\begin{table}[H]
\centering
\caption{Mean time in seconds (with standard deviation) it takes to evaluate the \textsf{num\_flaschka\_ratiu\_bivector} method on an irregular mesh on $\RR{4}$ with $10^{\kappa}$ points, computed by taking twenty-five samples, for \ec{\kappa = 3,\ldots,7}.} \label{table:performanceFR}
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l l l l l|}
\hline
\multirow{2}{*}{\hspace{1.25cm}\textbf{Function}} & \multicolumn{5}{c|}{\textbf{Points in mesh/Processing time (in seconds)}} \\
& \hspace{0.85cm}$10^{3}$ & \hspace{0.85cm}$10^{4}$ & \hspace{0.85cm}$10^{5}$ & \hspace{0.85cm}$10^{6}$ & \hspace{0.85cm}$10^{7}$ \\
\hline
\hline
\hyperref[AlgNumFRatiu]{\textsf{num\_flaschka\_ratiu\_bivector}}
& 0.0158 \textcolor{gray}{$\pm$ 0.105}
& 0.057 \textcolor{gray}{$\pm$ 0.003}
& 0.505 \textcolor{gray}{$\pm$ 0.003}
& \phantom{0}4.993 \textcolor{gray}{$\pm$ 0.029}
& \phantom{0}49.563 \textcolor{gray}{$\pm$ 0.207} \\
\hline
\hline
\end{tabular}
}
\end{table}
Figure \ref{fig:timeperformanceFR} below illustrates that the \textsf{num\_flaschka\_ratiu\_bivector} method is executed in polynomial time of some degree, which coincides with its theoretical complexity in Table \ref{table:complexity} in the sense that it is polynomially dependent on the variables $m$ and $k$. As power-law relationships appear as straight lines in a log-log graph, the complexity is deduced by fitting a linear model and estimating its coefficient to an accuracy (R-squared) of 0.99.
\begin{figure}[H]
\centering
\caption{Log-log graph of the execution time in seconds versus the number of points in a $10^{\kappa}$-point (irregular) mesh of the \textsf{num\_flaschka\_ratiu\_bivector} method, for \ec{\kappa=3,\ldots,7}. In red, the fitted linear model used to predict the asymptotic behavior of the runtime, with the corresponding determination coefficient (R-squared) indicated in the legend. We include a zoom-graph due to the accumulation of runtime values.}
\label{fig:timeperformanceFR}
\includegraphics[width=0.35\textwidth]{num_flaschka_ratiu_bivector.png}
\end{figure}
\subsection*{Acknowledgements}
This research was partially supported by CONACyT and UNAM-DGAPA-PAPIIT-IN104819. JCRP wishes to also thank CONACyT for a postdoctoral fellowship held during the production of this work.
\section{Topological Orders}
In modern condensed matter physics,
the concept of symmetry breaking is of fundamental importance.
At sufficiently low temperatures,
most classical systems show some ordered structure,
which implies that the symmetry
present at high temperature is spontaneously lost or reduced.
This is spontaneous symmetry breaking, which
is usually characterized, through
a {\em local} order parameter, by the existence of long-range order.
States of matter in classical systems are mostly characterized by
this order parameter and the broken symmetry.
Even in quantum systems, the local order parameter and
the symmetry breaking play similar roles,
and they form a foundation of our physical understanding.
Typical examples are ferromagnetic and Neel orders in spin systems.
Studies over recent decades have revealed that
this symmetry breaking may not always be enough to
characterize some important quantum states\cite{wen89,Hatsugai04e}.
Low dimensionality of the system
and/or geometrical frustration arising from strong correlation
can prevent the formation of a local order.
Especially in the presence of quantum fluctuations,
a quantum ground state without
any explicit symmetry breaking may be realized
even at zero temperature.
Such a state is classified as a quantum liquid,
which typically (though not always) has an energy gap.
Typical examples of such quantum liquids
are the Haldane spin chain and the valence bond solid (VBS)
states\cite{Haldane83-c,Affleck87-AKLT}.
Some frustrated spin systems and
spin-Peierls systems can also belong to this
class\cite{Rokhasar88,Read91-LN,Sondi01}.
To characterize these quantum liquids,
the concept of a topological order is useful\cite{wen89,Hatsugai04e}.
It was originally proposed to characterize quantum Hall states,
which are typical quantum liquids with energy gaps.
There are many clearly different quantum Hall states, but
they do not have any
local order parameter associated with symmetry breaking.
Instead, topological quantities such as the number of degenerate
ground states and the Chern numbers, realized as the Hall conductance,
are used to characterize the quantum liquids.
We generalize this idea and use topological quantities
such as the Chern numbers for the characterization
of generic quantum liquids\cite{Hatsugai04e}.
This is a global characterization.
When we apply it to spin systems with
time-reversal (TR) symmetry, however, the Chern number
vanishes in most cases.
Recently we proposed an alternative for systems
with TR invariance, based on quantized Berry phases\cite{Hatsugai06a}.
Although the Berry phases
can generically take any values,
the TR invariance of the ground state
guarantees a quantization of the Berry phases,
which enables us to use them as local topological order
parameters.
In the present article, we apply this scheme to
several frustrated spin systems and verify its validity.
Although geometrical frustration substantially affects the
standard local orders,
it does not bring any fundamental difficulty
to the topological characterization,
as shown below.
This makes the scheme quite useful
for the characterization of general quantum liquids\cite{Hatsugai06a}.
Finally, we comment on the energy spectra of
systems with classical or topological orders,
between which there are interesting differences.
As for the energy spectra, two situations arise when a
symmetry is spontaneously broken.
If the spontaneously broken symmetry is continuous,
there exists
a gapless excitation as a Nambu-Goldstone mode.
If, on the other hand, the broken symmetry is discrete, the ground
states are degenerate, and above these degenerate states
there is a finite energy gap.
Note that when
the system is finite (with periodic boundary conditions),
the degeneracy is lifted by a small energy gap of order $e^{-L^d/\xi}$,
where $L$, $d$ and $\xi$ are the linear dimension of the finite system,
the dimensionality, and a typical correlation length.
For topologically ordered states with energy gaps,
we may expect a degeneracy of the ground states depending on the
geometry of the system (topological degeneracy).
When the system is finite, we generically expect edge
states\cite{Hatsugai93b}, which implies that the topological degeneracy is lifted by
energy gaps of order $e^{-L/\xi}$.
\section{Local Order Parameters of Quantum Liquids}
After the discovery of the
fractional quantum Hall states,
quantum liquids have been recognized to exist
quite universally in the quantum world, where
quantum effects
cannot be treated as corrections to the classical description
and the quantum law itself takes the wheel in determining
the ground state.
The resonating valence bond (RVB) state,
which was proposed as a basic platform for
high-$T_C$ superconductivity,
is a typical example\cite{Anderson87}.
The RVB state of Anderson can be understood as
a quantum mechanical collection of {\em local} spin singlets.
When it becomes mobile under doping,
the state is expected to show superconductivity.
The original ideas of the RVB go back to Pauling's
description of benzene compounds, where
the quantum mechanical ground state is composed of
{\em local bonding states (covalent bonds)}:
the basic variables describing the state
are not electrons localized at sites but
the bonding states on links\cite{Pauling}.
This is quite instructive. That is,
in both
Anderson's RVB and Pauling's RVB,
the basic objects describing the quantum liquids are
quantum mechanical objects, a {\em singlet pair}
and a {\em covalent bond}\cite{Hatsugai06a}.
``Classical'' objects such as small magnets (localized spins)
and electrons at sites never play major roles.
The constituents of the liquids themselves
have no classical analogue and are
purely quantum mechanical objects.
Based on this viewpoint,
it is natural to characterize these quantum objects,
the singlet pairs and the covalent bonds, as working variables of
{\em local} quantum order parameters.
This is to be compared with a conventional order parameter
(a magnetic order parameter is defined with a local spin as the working variable).
From these observations,
we proposed to use quantized Berry phases
to define local topological order parameters\cite{Hatsugai06a}.
(Here we treat only the singlet pairs as topological order parameters; for
the local topological description by covalent bonds, see ref.~[1].)
For example,
there can be many kinds of quantum dimer states for frustrated
Heisenberg models, such as
column dimers, plaquette dimers, etc.
Clearly, one cannot find any classical local order parameters
to characterize them.
However, our topological order parameters can distinguish
them as different phases, not merely as a crossover.
\section{Quantized Berry Phases for the Topological Order Parameters
of Frustrated Heisenberg Spins}
Frustration among spins prevents the formation of a magnetic order,
and the corresponding quantum ground states tend to be quantum liquids
without any symmetry breaking.
Since they do not have any local order parameters,
even if they show apparently different physical behaviors,
it is difficult to
make a clear
distinction between them as phases
rather than just as a crossover.
We apply the general scheme of reference [1]
to classify these frustrated spin systems.
By the quantized Berry phases, which take the values $0$ or $\pi$,
the spin liquids are characterized locally, reflecting their topological order.
We can distinguish many topological phases which are separated by
local quantum phase transitions (local gap closings).
We consider the following spin-$1/2$ Heisenberg models with general exchange couplings,
$
H
= \sum_{ij}
{J }_{ij}{\mb{S} _i} \cdot
\mb{S} _j
$.
{\em We allow frustrations among spins. }
We assume the ground state is {\em unique and gapped}.
To define a local topological order parameter
at a specific link $ \langle ij \rangle $,
we modify the exchange
by making a local $SU(2)$ twist $\theta $ only at the link as
\begin{eqnarray*}
J_{ij}\mb{S} _i \cdot \mb{S} _j
&\to &
J_{ij}
\big(
\frac {1}{2}
(
e^{- i\theta }
S_{i+} S_{j-}
+
e^{ i \theta}
S_{i-}S_{j+}
)
+ S_{iz}S_{jz}
\big).
\end{eqnarray*}
Writing $x=e^{i\theta}$, we define a
parameter dependent Hamiltonian $H(x)$ and its normalized
ground state $|\psi(x) \rangle $ as
$H(x) |\psi(x) \rangle =E(x) | \psi(x) \rangle $,
$\langle {\psi} | {\psi} \rangle= 1$.
Note that this Hamiltonian is invariant under the time-reversal (TR) $\Theta_T$,
$
\Theta_{ T} ^{-1}
H(x)
\Theta_{ T}
= H(x)
$\cite{tri}.
Also note that by
changing $\theta:0\to 2\pi$,
we define a closed loop $C$ in the parameter space of $x$.
Now we define the Berry connection as
$
{A}_\psi = \langle {\psi} | d {\psi} \rangle =
\langle {\psi} | \frac {d }{d x} \psi\rangle dx
$. Then the Berry phase along the loop $C$ is defined
as $
i{\gamma } _C ({A}_\psi )= \int_C {A}_\psi
$\cite{berry84}.
Besides assuming that the system is gapped at $\theta=0$,
we further assume {\em the excitation gap is always finite} (for $^\forall x$),
to ensure the regularity of the ground state\cite{Hatsugai04e}.
This may not always be true, since the gap can collapse under the local perturbation
through an appearance of localized states (edge states)\cite{Hatsugai93b}.
Note that by changing a phase of the ground state as
$| {\psi}(x) \rangle =| {\psi}^\prime(x) \rangle
e^{i\Omega(x)} $,
the Berry connection gets modified as
$A_\psi= {A}_\psi^\prime + i d {\Omega} $
\cite{berry84,Hatsugai04e}.
It is a gauge transformation.
Then the Berry phase $\gamma _C$ also changes.
It implies that the Berry phase
is not well defined without specifying the phase of
the ground state (the gauge fixing).
It can be
fixed by taking a single-valued reference state $|\phi \rangle $ and
a gauge invariant projection into the ground state
$
P =
| \psi \rangle \langle \psi |=
| \psi^\prime \rangle \langle \psi^\prime|
$ as
$
|{\psi}_\phi \rangle = {P} |{\phi} \rangle /\sqrt{N_\phi }$,
$N_\phi = \|{P} |\phi \rangle \|^2
= |\eta_\phi| ^2$,
$\eta_\phi= \langle \psi | \phi \rangle $\cite{Hatsugai04e,Hatsugai06a}.
We here require the normalization $N_\phi$
to be nonvanishing.
When we use another reference state $| \phi^\prime \rangle $ to fix the gauge,
we have
$
|\psi_\phi \rangle
=
| \psi_{\phi^\prime} \rangle
e^{i \Omega },\quad
{\Omega} =
{\rm arg}\,{\eta}_\phi - {\rm arg}\,{\eta}_{\phi'}
$.
Due to this gauge transformation,
the Berry phase gets modified as
$ {\gamma } _{C} ({A}_{\psi_\phi} ) =
{\gamma } _{C} ({A}_{\psi_{\phi^\prime}} )+ \Delta, \quad
\Delta_{}= \int_C
d {\Omega} $.
Since the reference states $|\phi \rangle $ and $|\phi' \rangle $ are
single-valued on $C$,
the phase difference $\Omega $
can only wind around the loop by
$\Delta=2\pi M_C $ with some integer $M_C$.
Generically, this implies that the Berry phase
has a gauge-invariant meaning only modulo $2\pi$:
\begin{eqnarray*}
\gamma _C & \equiv &
-i \int_C\,{A},\quad {\rm mod}\, 2\pi
\end{eqnarray*}
By the TR invariance,
the Berry connection satisfies
$
A_\psi = \sum_J C_J^* d C_J=-\sum_J C_J d C_J^*=
-A_{\Theta\psi}
$ since $\sum_J|C_J|^2=1$, and hence
$\gamma _C (A_\psi) = -\gamma _C(A_{\Theta\psi})$\cite{Hatsugai06a}.
Therefore to be compatible with
the gauge ambiguity,
the Berry phase of the unique TR-invariant ground state,
$|\psi \rangle \propto \Theta| \psi \rangle $,
satisfies $\gamma _C (A_\psi) \equiv -\gamma _C(A_{\psi})\ ( {\rm mod}\, 2\pi)$.
Then it is required to
{\em be quantized}
as
\begin{eqnarray*}
\gamma _C(A_\psi) &=& 0, \pi \ ({\rm mod}\ 2\pi ).
\end{eqnarray*}
These quantized Berry phases have a topological stability,
since small perturbations cannot modify them
unless the gauge becomes singular.
Here we note that
the Berry phase of the singlet pair for the two site problem is
$\pi$\cite{Hatsugai06a}.
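As an illustrative numerical check (not part of the original computation), the quantized value $\pi$ can be reproduced with a discretized Wilson loop, assuming the loop $C$ is generated by a local twist of the exchange, $H(\theta)=J[(e^{i\theta}S_1^+S_2^- + {\rm h.c.})/2 + S_1^zS_2^z]$; this parametrization is one natural choice and is stated here as an assumption.

```python
import numpy as np

def two_site_hamiltonian(theta, J=1.0):
    # Basis ordering: |uu>, |ud>, |du>, |dd>.
    # Twisted exchange: J[ (e^{i theta} S1+ S2- + h.c.)/2 + S1z S2z ].
    H = np.zeros((4, 4), dtype=complex)
    H[0, 0] = H[3, 3] = 0.25 * J        # S1z S2z on aligned spins
    H[1, 1] = H[2, 2] = -0.25 * J       # S1z S2z on antialigned spins
    H[1, 2] = 0.5 * J * np.exp(1j * theta)  # S1+ S2- maps |du> -> |ud>
    H[2, 1] = np.conj(H[1, 2])
    return H

def berry_phase(n_steps=400):
    # Gauge-invariant discretized Wilson loop:
    #   gamma = -arg prod_k <psi_k | psi_{k+1}>  (indices mod n_steps).
    thetas = np.linspace(0.0, 2.0 * np.pi, n_steps, endpoint=False)
    states = []
    for th in thetas:
        w, v = np.linalg.eigh(two_site_hamiltonian(th))
        states.append(v[:, 0])          # nondegenerate singlet-like ground state
    prod = 1.0 + 0.0j
    for k in range(n_steps):
        prod *= np.vdot(states[k], states[(k + 1) % n_steps])
    return -np.angle(prod)

gamma = berry_phase() % (2.0 * np.pi)   # approximately pi
```

Because each state enters the product once as a bra and once as a ket, the arbitrary phases returned by the diagonalization cancel, so no explicit gauge fixing is needed.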
Now let us take any dimer covering of all sites
${\cal D}=\{\langle ij \rangle \}$
($\#{\cal D}=N/2$, where $N$ is the total number of sites) and
assume that the interaction is nonzero only on these dimer links;
then the Berry phases of value $\pi$
pick up the dimer pattern $\cal D$.
Now imagine an adiabatic process that switches on interactions across
the dimers.
Due to the topological stability of the quantized Berry phases,
they cannot be modified unless the dimer gap collapses.
This dimer limit presents
a non-trivial pattern of quantized Berry phases
and demonstrates their usefulness
as {\em local order parameters
of singlet pairs}.
To demonstrate the validity of the quantized Berry phases,
we have diagonalized the Heisenberg Hamiltonians numerically by
the Lanczos algorithm and calculated the quantized Berry phases
explicitly.
The first numerical examples are the Heisenberg chains with alternating exchanges.
When the exchanges are both antiferromagnetic, $J_A>0$ and $J_{A'}>0$,
the model is a spin-Peierls (dimerized) chain.
In this case, the Berry phases are $\pi$ on the links with the strong exchange
couplings and $0$ on the ones with the weak couplings (Fig.\ref{f:1D}).
This is expected from the adiabatic principle and the quantization.
When one of the exchanges is ferromagnetic, $J_A>0$ and $J_{F}<0$,
the calculated Berry phases are $\pi$ for the antiferromagnetic links
and $0$ for the ferromagnetic ones, independent of the
ratio $J_A/J_F$.
Since the strong ferromagnetic limit is equivalent to the spin-$1$ chain,
this is consistent with the topologically nontrivial structure
of the Haldane phase. Further analysis of $S=1$ systems will be published
elsewhere.
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth,clip]{1D.eps}%
\end{center}
\caption{
One dimensional Heisenberg models with
alternating exchange interactions
with periodic boundary condition (left).
Numerically evaluated distribution of the quantized Berry phases (right).
$J_A, J_{A'}>0$ and $J_F<0$.
The results are independent of the system size.
(We have checked the consistency of the results for various possible system sizes.)
\label{f:1D}
}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.0\linewidth,clip]{triangle12-1and2.eps}%
\end{center}
\caption{
One dimensional Heisenberg models with NN and NNN exchanges (left)
with periodic boundary condition.
Numerically evaluated distribution of the quantized Berry phases (right).
(a), (b) and (c): three different exchange configurations of $J=1$ and $ J'=2$.
\label{f:tri}
}\vskip -0.6cm
\end{figure}
The next numerical examples are spin chains with nearest-neighbor (NN)
and next-nearest-neighbor (NNN) exchanges,
forming a ladder of triangles (Fig.\ref{f:tri}).
These are typical frustrated systems.
Configurations (a) and (b) are two different but specific cases
in which one may adiabatically connect the system to
different dimer coverings through the strong-coupling bonds.
In these cases, the quantized Berry phases are $\pi$
on the strong-coupling links and $0$ on the remaining links.
This is consistent with the adiabatic principle.
We note that
it is difficult to distinguish qualitatively
between the two quantum liquids by conventional methods.
However, we have
made a clear distinction between them as two different
topological phases.
The present scheme is not only valid for these simple situations but also
useful in generic ones.
For example, for the system in Fig.\ref{f:tri}(c),
the adiabatic principle cannot be applied straightforwardly.
Nevertheless, the quantized Berry phases show nontrivial behavior and
make clear
that phase (c) is topologically different from those in
(a) and (b),
as an independent phase rather than a crossover.
A local quantum phase transition, marked by a gap closing, separates them.
As is now clear, the present scheme is quite powerful for the local
characterization of
topological quantum insulators.
Part of the work was supported by
Grant-in-Aids for Scientific Research
(Grant No. 17540347) and
on Priority Areas
(Grant No. 18043007) from MEXT,
and the Sumitomo Foundation.
\section{Introduction}
Magnetic fields and supersonic turbulence are two mechanisms
that are commonly invoked for the regulation of
star formation in our Galaxy to the observationally
estimated rate of $\sim 3-5\, M_{\odot}$ yr$^{-1}$ \citep[see][]{mck89}.
This is at least one hundred
times less than the rate implied by the gravitational fragmentation timescale
of the molecular gas in the Galaxy calculated from its mean density.
Put another way, the global Galactic star formation efficiency (SFE) is about
1\% of molecular gas per free-fall time. Interestingly, the same SFE
also applies to nearby individual star-forming regions such as the
Taurus molecular cloud \citep{gol08}.
The relatively low Galactic SFE is one fundamental
constraint on the global properties of star formation. The
existence of a broad-tailed core mass function (CMF), a lognormal with
a possible power-law high-mass tail, is another. In fact, the
observed form of the CMF \citep[e.g.][]{mot98} is similar to that of
the stellar initial mass function, the IMF.
Other important star formation constraints specifically applying to
cores include the generally subsonic relative infall motions
\citep{taf98,wil99,lee01,cas02}, the low
(subsonic or transonic) systematic core speeds
\citep{and07,kir07}, and the somewhat non-circular
projected shapes \citep{mye91,jon01}. The relatively low speeds are an important property
since the cores are embedded in molecular clouds whose overall
internal random motions are highly supersonic.
Since most stars form in clusters or loose groups,
it seems clear that some sort of
fragmentation process is at work in those regions.
There are several qualitatively distinct
modes of fragmentation to consider. The simplest process is gravitational
fragmentation, which can be
divided into cases with mass-to-flux ratios that are
supercritical (gravity-dominated)
and subcritical (ambipolar-diffusion-driven).
An alternate and distinct star formation mechanism is
turbulent fragmentation, dominated by nonlinear flows,
which can also occur in clouds with supercritical and subcritical
mass-to-flux ratios.
The limit of highly supercritical clouds also corresponds
to super-Alfv\'enic\ turbulence in the case that turbulent and gravitational
energies have comparable magnitude. This limit has been advocated by
\citet{pad02}. However, we favor the transcritical and trans-Alfv\'enic\
cases on theoretical and observational grounds, as discussed
in Section 4.
For a more complete understanding of star formation, a study of the
interplay of magnetic fields, turbulence, and ambipolar diffusion is
therefore of great importance.
In this paper, we carry out an extensive parameter survey of
magnetic field strengths and initial nonlinear perturbations,
facilitated by our use of the thin-sheet approximation, and discuss the
consequences of our results. Such broad parameter studies remain out of
reach for fully three-dimensional non-ideal MHD simulations \citep{kud08,nak08}.
In a previous paper (\citealt{bas09}, hereafter BCW; see also \citealt{bas04}),
we studied the nonlinear
evolution of gravitational fragmentation initiated by small-amplitude
perturbations, including the effects of magnetic fields and
ambipolar diffusion. An extensive parameter study was performed,
encompassing the supercritical,
transcritical, and subcritical cases, and also accounting for varying
levels of cloud ionization and external pressure. Some main findings
of that paper were that fragment spacings in the nonlinear phase agree
with the predictions of linear theory \citep{cio06}, and that the time to
runaway collapse
from small-amplitude white-noise initial perturbations is up to ten times
the growth time calculated from linear theory.
\citet{cio06} showed that transcritical
gravitational fragmentation can lead to significantly larger size and mass scales than
either the supercritical or subcritical limits.
BCW found that CMFs for regions with
a single uniform initial mass-to-flux ratio are sharply peaked, but that
the sum of results from simulations with a variety of initial mass-to-flux
ratios near the critical value can produce a broad distribution.
This represents a way to get a broad CMF, of the type observed,
without the need for nonlinear forcing.
Importantly, only a {\it narrow} initial distribution of initial mass-to-flux
ratio is needed to create a relatively {\it broad} CMF.
Additionally, BCW showed that different ambient mass-to-flux ratios in
different regions lead to observationally distinguishable values of
infall motions, core shapes, and magnetic field line curvature.
\citet{kud07} have recently performed three-dimensional
simulations of gravitational fragmentation with magnetic fields and
ambipolar diffusion, and verified some of the main findings of the thin-sheet
models. Particularly, they also found the dichotomy between subsonic maximum infall
speeds in subcritical clouds and somewhat supersonic speeds in
supercritical clouds.
The inclusion of nonlinear (hereafter, ``turbulent'') initial conditions
to fragmentation models including ambipolar diffusion was introduced
by \citet{li04} and \citet{nak05}, employing the thin-sheet approximation.
They found that the timescale of
star formation was reduced significantly by the initial motions with
power spectrum $v_k^2 \propto k^{-4}$ \citep{li04}, to become
$\sim 10^6$ yr for an initially somewhat subcritical cloud rather than
$\sim 10^7$ yr. By continuing to integrate past the collapse of the
first core through the use of an artificially stiff equation of state
for surface densities 10 times greater than the initial value,
they found that magnetic fields nevertheless prevented most material
from collapsing to form stars.
\citet{kud08} have verified that the mode
of turbulence-accelerated magnetically-regulated star formation also
occurs in a fully three-dimensional simulation.
While three-dimensional simulations are resource-limited,
and large parameter studies cannot yet be performed,
\citet{kud08} showed that this mode of star formation proceeds
though an initial phase of enhanced ambipolar diffusion created by the
small length scale generated by the large-scale compression associated with
the initial perturbation. If this is not sufficient to raise the
maximum mass-to-flux ratio above the critical value, there is a
rebound to lower densities. However, the highest density regions
remain well above the initial mean density, and here the ambipolar
diffusion proceeds in a quasistatic manner ($\propto \rho_{\rm n}^{-1/2}$ assuming
force balance between gravity and magnetic forces - see \citealt{mou99})
but at an enhanced rate due to the raised density.
In this paper, we study the effect of large-amplitude nonlinear initial perturbations
on the evolution of a thin sheet whose evolution is regulated by
magnetic fields and ambipolar diffusion. We focus on
the early stages of prestellar core formation and evolution, and
do not integrate past the runaway collapse of the first core.
Therefore, the effects of protostellar feedback through
outflows do not need to be added. These simplifications allow us to
run a large number of simulations. We perform an
extensive parameter study and also study many realizations of models
with a single set of parameters, since the initial turbulent state is
inherently random.
Some important questions that we can address and which have not been
answered in previous papers are as follows. In which parameter space do
nonlinear velocity fields
lead to prompt collapse, and in which cases can magnetic fields cause a
rebound from the first compressions?
How sensitively does the time until runaway collapse
depend upon the values of different parameters? What is the effect of
different power spectra of perturbations? Is there a qualitative
difference between Alfv\'enic\ and super-Alfv\'enic\ perturbations?
How do velocity profiles in the vicinity of cores vary in the
different scenarios? What are the systematic speeds of cores?
Our paper is organized in the following manner.
The model is described in Section 2, results are given in Section 3,
and a discussion of results, including speculation and implications for
global star formation in a molecular cloud are given in Section 4.
We summarize our results in Section 5.
\section{Physical and Numerical Model}
We employ the magnetic thin-sheet approximation, as laid out in detail
in several previous papers \citep{cio93,cio06,bas09}.
Physically, we are modeling the dense mid-layer of a molecular cloud, and
ignoring the more rarefied envelope of the cloud.
We assume isothermality at all times.
The basic equations
governing the evolution of a model cloud (conservation of
mass and momentum, Maxwell's equations, etc.) are integrated
along the vertical axis from $z=-Z(x,y)$ to $z=+Z(x,y)$. In doing so,
a ``one-zone approximation'' is used, in which all quantities
are taken to be independent of height within the sheet.
The volume density of neutrals $\rho_{\rm n}$ is calculated from the vertical pressure balance
equation
\begin{equation}
\rho_{\rm n} c_{\rm s}^2 = \frac{\pi}{2}G \sigma_{\rm n}^2 + P_{\rm ext} + \frac{B_x^2+B_y^2}{8\pi},
\end{equation}
where $c_{\rm s}$ is the isothermal sound speed, $\sigma_{\rm n}(x,y) = \int_{-Z}^{+Z}\rho_{\rm n}(x,y)~dz$ is
the column density of neutrals, $P_{\rm ext}$ is the external pressure on the sheet,
and $B_x$ and $B_y$ represent values of magnetic field components
at the top surface of the sheet, $z=+Z$.
We solve normalized versions of the magnetic thin-sheet
equations. The unit of velocity is taken
to be $c_{\rm s}$, the column density unit is $\sigma_{\rm n,0}$,
and the unit of acceleration is $2 \pi G \sigma_{\rm n,0}$, equal to the
magnitude of vertical acceleration above the sheet. Therefore,
the time unit is $t_0 = c_{\rm s}/2\pi G \sigma_{\rm n,0}$, and the length unit is
$L_0= c_{\rm s}^2/2 \pi G \sigma_{\rm n,0}$. From this system we can also construct
a unit of magnetic field strength, $B_0 = 2 \pi G^{1/2} \sigma_{\rm n,0}$.
The unit of mass is $M_0 = c_{\rm s}^4/(4\pi^2G^2\,\sigma_{\rm n,0})$.
Here, $\sigma_{\rm n,0}$ is the uniform
neutral column density of the background state.
Typical values of our units are given in the Appendix.
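For concreteness, the units can be evaluated with a short script; the fiducial inputs below ($c_{\rm s}=0.2~{\rm km\,s^{-1}}$ and an H$_2$ column density of $10^{22}~{\rm cm^{-2}}$) are illustrative assumptions for demonstration only, and should not be confused with the Appendix values.

```python
import math

# Physical constants (cgs units)
G = 6.674e-8             # gravitational constant [cm^3 g^-1 s^-2]
m_H2 = 3.35e-24          # mass of an H2 molecule [g]
yr = 3.156e7             # seconds per year

# Illustrative (assumed) background-state values
c_s = 2.0e4              # isothermal sound speed, 0.2 km/s [cm/s]
N_n0 = 1.0e22            # assumed background H2 column density [cm^-2]
sigma_n0 = m_H2 * N_n0   # mass column density [g cm^-2]

# Units as defined in the text
t_0 = c_s / (2.0 * math.pi * G * sigma_n0)            # time unit [s]
L_0 = c_s**2 / (2.0 * math.pi * G * sigma_n0)         # length unit [cm]
B_0 = 2.0 * math.pi * math.sqrt(G) * sigma_n0         # field unit [G]
M_0 = c_s**4 / (4.0 * math.pi**2 * G**2 * sigma_n0)   # mass unit [g]

t_0_yr = t_0 / yr        # time unit in years
```

Note the built-in consistency relations $L_0 = c_{\rm s} t_0$ and $M_0 = \sigma_{\rm n,0} L_0^2$, which provide a quick sanity check of the definitions.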
With these normalizations, the equations used to determine the
evolution of a model cloud are
\begin{eqnarray}
\label{cont}
\frac{{\partial \sigma_{\rm n} }}{{\partial t}} & = & - \nabla_p \cdot \left( \sigma_{\rm n} \, \bl{v}_{\rm n} \right), \\
\label{mom}
\frac{\partial}{\partial t}(\sigma_{\rm n} \bl{v}_{\rm n}) & = & - \nabla_p \cdot (\sigma_{\rm n} \bl{v}_{\rm n} \bl{v}_{\rm n}) + \bl{F}_{\rm T}
+ \bl{F}_{\rm M} + \sigma_{\rm n} \bl{g}_p, \\
\label{induct}
\frac{{\partial B_{z,\rm eq} }}{{\partial t}} & = & - \nabla_p \cdot \left( B_{z,\rm eq} \, \bl{v}_{\rm i} \right) , \\
\label{Ftherm}
\bl{F}_{\rm T} & = & - \tilde{C}_{\rm eff}^2 \nabla_p \sigma_{\rm n} ,\\
\label{Fmag}
\bl{F}_{\rm M} & = & B_{z,\rm eq} \, ( \bl{B}_p - Z\, \nabla_p B_{z,\rm eq} ) + {\cal O}(\nabla_p Z), \\
\label{vieq}
\bl{v}_{\rm i} & = & \bl{v}_{\rm n} + \frac{\tilde{\tau}_{\rm ni,0}}{\sigma_{\rm n}}\left(\frac{\rho_{\rm n,0}}{\rho_{\rm n}}\right)^{k_{\rm i}} \bl{F}_{\rm M} , \\
\label{ceffeq}
\tilde{C}_{\rm eff}^2 & = & \sigma_{\rm n}^2 \frac{(3 \tilde{P}_{\rm ext} + \sigma_{\rm n}^2)}{(\tilde{P}_{\rm ext} + \sigma_{\rm n}^2)^2} , \\
\label{rhon}
\rho_{\rm n} & = & \frac{1}{4} \left( \sigma_{\rm n}^2 + \tilde{P}_{\rm ext} + \bl{B}_p^2 \right),\\
\label{Zeq}
Z & = & \frac{\sigma_{\rm n}}{2 \rho_{\rm n}}, \\
\bl{g}_p & = & -\nabla_p \psi ,\\
\label{gravpot}
\psi & = & {\cal F}^{-1} \left[ - {\cal F}(\sigma_{\rm n})/k_z \right] ,\\
\bl{B}_p & = & -\nabla_p \Psi ,\\
\label{magpot}
\Psi & = & {\cal F}^{-1} \left[ {\cal F}(B_{z,\rm eq} - B_{\rm ref})/k_z \right] \, .
\end{eqnarray}
In the above equations,
$\bl{B}_p(x,y) = B_x(x,y)\hat{\bl{x}} + B_y(x,y)\hat{\bl{y}}$ is the
planar magnetic field at the top surface of the sheet,
$\bl{v}_{\rm n}(x,y) = v_x(x,y)\hat{\bl{x}} + v_y(x,y)\hat{\bl{y}}$ is the
velocity of the neutrals in the plane, and
$\bl{v}_{\rm i}(x,y) = v_{{\rm i},x}(x,y)\hat{\bl{x}} + v_{{\rm i},y}(x,y)\hat{\bl{y}}$
is the corresponding velocity of the ions.
The operator $\nabla_p = \hat{\bl{x}} \, \partial/\partial x +
\hat{\bl{y}} \, \partial/\partial y$ is the gradient in the planar
directions within the sheet.
The quantities $\psi(x,y)$ and $\Psi(x,y)$ are the
scalar gravitational and magnetic potentials in the plane
of the sheet, derived in the limit that the sheet is infinitesimally
thin. The vertical wavenumber $k_z = (k_x^2+k_y^2)^{1/2}$ is
a function of wavenumbers $k_x$ and $k_y$ in the plane of the sheet,
and the operators ${\cal F}$ and ${\cal F}^{-1}$ represent the forward
and inverse Fourier transforms, respectively, which we calculate
numerically using an FFT technique. Terms of order
${\cal O}(\nabla_p Z)$ in $\bl{F}_{\rm M}$, the magnetic force per
unit area, are not
written down for the sake of brevity, but are included in the numerical
code; their exact form is given in Sections 2.2 and 2.3 of \citet{cio06}.
All terms proportional to $\nabla_p Z$ are generally very small.
Eq. (\ref{vieq}) above makes use of relations for the
neutral-ion collision time $\tau_{\rm ni}$ and the ion density $n_{\rm i}$,
as described in BCW:
\begin{eqnarray}
\label{tni}
\tau_{\rm ni} & = & 1.4 \,\frac{m_{\rm i} + m_{{}_{{\rm H}_{2}}}}{m_{\rm i}} \frac{1}{n_{\rm i} \langle \sigma w \rangle_{\rm{i\Htwo}}}\;, \\
\label{nion}
n_{\rm i} &= & {\cal K} n_{\rm n}^{k_{\rm i}}.
\end{eqnarray}
Furthermore, Eq. (\ref{vieq}) uses
the normalized initial mass density (in units of $\sigma_{\rm n,0}/L_0$)
$\rho_{\rm n,0} = \frac{1}{4}(1+\tilde{P}_{\rm ext})$, where $\tilde{P}_{\rm ext}$ is defined below.
Our basic equations contain three dimensionless free parameters:
$\mu_0 \equiv 2 \pi G^{1/2} \sigma_{\rm n,0}/B_{\rm ref}$ is the dimensionless
(in units of the critical value for gravitational collapse)
mass-to-flux ratio of the reference state;
$\tilde{\tau}_{\rm ni,0} \equiv \tau_{\rm ni,0}/t_0$ is the dimensionless neutral-ion collision
time of the reference state, and expresses the effect of ambipolar
diffusion;
$\tilde{P}_{\rm ext} \equiv 2 P_{\rm ext}/\pi G\sigma_{\rm n,0}^{2}$ is the ratio of the external
pressure acting on the sheet to the vertical self-gravitational stress
of the reference state.
Each numerical model is carried out
within a square computational domain spanning the
region $-L/2 \leq x \leq L/2$ and $-L/2 \leq y \leq L/2$.
Periodic boundary conditions are imposed in the $x$ and $y$ directions.
We choose a domain size $L=16\pi\,L_0$ for all models presented
in this paper, so that
it is four times wider than the wavelength
of maximum growth rate for an isothermal nonmagnetic and unpressured
sheet, $\lambda_{\rm T,m} = 4\pi\,L_0$
(see \citealt{cio06}; BCW).
The computational domain has $N$ zones in each direction, with
$N=128$ unless stated otherwise. Some results utilizing greater $N$
are presented in Section \ref{s:tdiss}.
Gradients in our simulation box are approximated using
second-order accurate central differencing.
Advection of mass and magnetic flux is prescribed by using the monotonic
upwind scheme of \citet{van77}. These forms of spatial discretization
convert the system of partial differential equations to a system of
ordinary differential equations (ODE's), with one ODE for each
physical variable at each cell. Time-integration of this large coupled
system of ODE's is performed by using an Adams-Bashforth-Moulton
predictor-corrector subroutine \citep{sha94}. Numerical solution of
Fourier transforms and inverse transforms, necessary to calculate the
gravitational and magnetic potentials $\psi$ and $\Psi$ at each time
step (see Eqs. [\ref{gravpot}] and [\ref{magpot}]), is done by using
fast Fourier transform techniques.
Further details of our numerical technique are given by BCW.
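For illustration, the FFT solution of the thin-sheet gravitational potential, $\psi = {\cal F}^{-1}[-{\cal F}(\sigma_{\rm n})/k_z]$, can be sketched as below. This is a minimal reimplementation, not the code of BCW; it assumes that the $k_z=0$ (uniform background) mode is set aside, since only deviations from the mean source the planar potential.

```python
import numpy as np

def thin_sheet_potential(sigma, L):
    """Gravitational potential psi(x,y) of an infinitesimally thin sheet,
    psi = F^{-1}[ -F(sigma) / k_z ],  k_z = sqrt(kx^2 + ky^2),
    on a periodic L x L domain (normalized units; k=0 mode excluded)."""
    N = sigma.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    kz = np.sqrt(kx**2 + ky**2)
    kz[0, 0] = 1.0                 # avoid division by zero at k=0
    S = np.fft.fft2(sigma)
    S[0, 0] = 0.0                  # uniform background does not contribute
    return np.real(np.fft.ifft2(-S / kz))
```

As a check, a single Fourier mode $\sigma_{\rm n} = \cos(kx)$ yields $\psi = -\cos(kx)/k$, as expected from the transform pair.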
The initial conditions of our model include a ``turbulent'' velocity field
added to our background state of uniform column density $\sigma_{\rm n,0}$
and vertical magnetic field strength $B_{\rm ref}$. The velocity field is
generated in Fourier space using the usual practice adopted by e.g.
\citet{sto98} for three-dimensional models and \citet{li04} for thin-sheet
models. For a physical grid consisting of $N$ zones in each $(x,y)$
direction, the wavenumbers $k_x$ and $k_y$ in Fourier space consist of the
discrete values $k_j = 2\pi j/L$, where $j=-N/2,...,N/2$.
For each pair of $k_x$ and $k_y$, we assign a
Fourier velocity amplitude chosen from a Gaussian distribution and
scaled by the power spectrum $v_k^2 \propto k^{n}$, where
$k = (k_x^2+k_y^2)^{1/2}$. The phase is chosen from a uniform random
distribution in the interval $[0,2\pi]$. The resulting Fourier velocity
field is then transformed back into physical space. The distributions
of each of $v_x$ and $v_y$ are chosen independently in this manner,
and each is rescaled so that the rms amplitude of the field is equal to $v_a$.
For $n=-4$, the resulting velocity field has most of its power in a large-scale
flow component. We have experimented
with various values of $n$ and find that the results are
generally similar as long as $n < 0$.
Distinct results are found in the case of flat spectrum perturbations ($n=0$),
which we present for comparison.
Finally, we note that velocity fields generated in the
above manner are compressive. Hence, we have also studied one
model with $n=-4$ but the Fourier space amplitudes chosen so that
$v_x$ and $v_y$ satisfy $\nabla_p \cdot \bl{v}_{\rm n} =0$.
Therefore, our turbulent initial conditions introduce the additional
dimensionless free parameters $v_a/c_{\rm s}$ and $n$, while our simulation box
introduces the parameters $L/\lambda_{\rm T,m}$ and grid size $N$.
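A minimal sketch of the Fourier-space generation of the initial velocity field is given below. Implementation details not specified in the text (e.g. the handling of the $k=0$ mode and the exact amplitude distribution) are choices made here for illustration.

```python
import numpy as np

def turbulent_velocity(N, L, n=-4.0, v_a=2.0, seed=0):
    """One velocity component on an N x N periodic grid of size L,
    with power spectrum v_k^2 ~ k^n, Gaussian amplitudes and uniform
    random phases, rescaled so that its rms equals v_a."""
    rng = np.random.default_rng(seed)
    k1 = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
    kx, ky = np.meshgrid(k1, k1, indexing="ij")
    k = np.sqrt(kx**2 + ky**2)
    k[0, 0] = 1.0                                    # avoid 0**n at k=0
    amp = np.abs(rng.normal(size=(N, N))) * k**(n / 2.0)
    amp[0, 0] = 0.0                                  # no mean flow
    phase = rng.uniform(0.0, 2.0 * np.pi, size=(N, N))
    v = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    return v * (v_a / np.sqrt(np.mean(v**2)))        # rescale rms to v_a

# vx and vy are drawn independently, as in the text
vx = turbulent_velocity(128, 16.0 * np.pi, seed=1)
vy = turbulent_velocity(128, 16.0 * np.pi, seed=2)
```

Taking the real part of the inverse transform enforces a real field; the subsequent rescaling restores the requested rms amplitude.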
\section{Results}
\begin{table}
\begin{center}\caption{Models and Parameters}\end{center}
\begin{tabular}{crrrcccr}
\hline \hline
Model &\hspace{2em}$\mu_0$ &\hspace{2em}$\tilde{\tau}_{\rm ni,0}$ &\hspace{2em}$\tilde{P}_{\rm ext}$ &\hspace{2em}Spectrum &\hspace{1em}$v_{\rm a}/c_{\rm s}$ &\hspace{2em}$V_{\rm MS,0}/c_{\rm s}$ &\hspace{2em}$t_{\rm run}/t_0$ \\
\hline
1 &0.5 &0.0 &0.1 &$k^{-4}$ & 2.0 &2.9 &$>5000$\\
2 &0.5 &0.2 &0.1 &$k^{-4}$ &4.0 &2.9 &0.8\\
3 &0.5 &0.2 &0.1 &$k^{-4}$ &3.0 &2.9 &30\\
4 &0.5 &0.2 &0.1 &$k^{-4}$ &2.0 &2.9 &31\\
5 &0.5 &0.2 &0.1 &$k^{-4}$ &1.0 &2.9 &50\\
6 &0.5 &0.2 &0.1 &$k^{-4}$ &0.5 &2.9 &58\\
7 &0.5 &0.2 &10.0 &$k^{-4}$ &2.0 &1.0 &8.5\\
8 &0.5 &0.2 &0.1 &$k^{-4}$ (div0)&2.0 &2.9 &23\\
9 &0.5 &0.2 &0.1 &$k^0$ &2.0 &2.9 &56\\
10 &0.5 &0.1 &0.1 &$k^{-4}$ &2.0 &2.9 &92\\
11 &0.5 &0.4 &0.1 &$k^{-4}$ &2.0 &2.9 &8.1\\
12 &0.5 &1.0 &0.1 &$k^{-4}$ &2.0 &2.9 &1.8\\
13 &1.0 &0.2 &0.1 &$k^{-4}$ &2.0 &1.7 &1.8\\
14 &1.0 &0.2 &0.1 &$k^{-4}$ &1.0 &1.7 &4.3\\
15 &1.0 &0.2 &0.1 &$k^{-4}$ &0.5 &1.7 &12\\
16 &1.0 &0.2 &0.1 &$k^0$ &2.0 &1.7 &33\\
17 &2.0 &0.2 &0.1 &$k^{-4}$ &2.0 &1.2 &1.3\\
\hline
\end{tabular}
\end{table}
Table 1 contains a summary of models in our parameter study.
For each model, we list the values of the free parameters
$\mu_0, \tilde{\tau}_{\rm ni,0}, \tilde{P}_{\rm ext}$, the
form of the turbulent power spectrum, the turbulent velocity amplitude
$v_a$, the magnetosound speed $V_{\rm MS,0}$ in the initial state of the model,
and the calculated duration of the simulation, $t_{\rm run}$.
The initial magnetosound speed is calculated from the other parameters
of the model, and its relation to $v_a$ can act as a useful diagnostic.
Following \citet{cio06} we use
\begin{equation}
V_{\rm MS,0} = \left( \tilde{V}_{\rm A,0}^2 + \tilde{C}_{\rm eff,0}^2 \right)^{1/2}\,c_{\rm s},
\end{equation}
where $\tilde{V}_{\rm A,0}^2 = 2 \mu_0^{-2}/(1+\tilde{P}_{\rm ext})$ is the
square of the normalized (to $c_{\rm s}$) initial Alfv\'en\ speed, and
$\tilde{C}_{\rm eff,0}^2 = (1+3 \tilde{P}_{\rm ext})/(1 + \tilde{P}_{\rm ext})^2$ is the square of a
normalized initial effective sound speed that includes the effect of external
pressure, and follows from Eq. (\ref{ceffeq}).
Model 7 has $\tilde{C}_{\rm eff,0}^2 < 1$ because for $\tilde{P}_{\rm ext}=10$, the large external
pressure contributes significant opposition to the restoring force of
internal pressure in the initial state \citep[see discussion in][]{cio06}.
Our simulations are terminated as soon as $\sigma_{\rm n,max} \geq 10\, \sigma_{\rm n,0}$, corresponding
to runaway gravitational collapse of the first core.
For models with $\tilde{P}_{\rm ext} = 0.1$, this also corresponds to a volume
density enhancement by a factor $\approx 100$. We have verified with
high resolution runs to greater values of $\sigma_{\rm n,max}/\sigma_{\rm n,0}$ that the collapse
does indeed continue. Collapsing regions are also invariably
gravity-dominated, having locally
supercritical mass-to-flux ratios and net accelerations that point toward
the density peak. This includes cases of
prompt collapse, i.e. when collapse occurs in localized regions
due to strong shocks associated with the turbulent flow field, in a time
$t_{\rm run}$ less than $2\,t_0$.
The values of $t_{\rm run}$ do
vary somewhat from one realization of the initial state to another,
and in many cases represent an average value from many simulations.
We have run each model at least five separate times, and some
have been run many more times, as described below.
Model 1, which evolves under flux-frozen conditions ($\tilde{\tau}_{\rm ni,0}=0$), cannot be
terminated in the usual manner.
Since the initial mass-to-flux ratio is also
subcritical for this model ($\mu_0=0.5$), gravitational runaway collapse
is not possible unless there is significant numerical magnetic
diffusion. That simulation ran past $t= 5000\,t_0$ without runaway
collapse or any notable artificial flux dissipation, thus providing
an excellent verification of the accuracy of our code.
While some models undergo many oscillations before
eventual runaway collapse of density peaks, others undergo prompt collapse.
Although the models that undergo
prompt collapse may be considered to be artificially forced into collapse
by large-scale flows in the initial conditions, we nevertheless
present them here as interesting limiting cases.
Models 4, 13, and 17 constitute a special set of models with our standard
neutral-ion coupling parameter $\tilde{\tau}_{\rm ni,0}=0.2$, external pressure
$\tilde{P}_{\rm ext}=0.1$,
turbulent velocity amplitude $v_a=2.0\,c_{\rm s}$, but varying initial mass-to-flux
ratio parameter $\mu_0=0.5,1.0,2.0$, respectively.
Model 4 is run 15 times, and models 13 and 17 are run 25 times
(with unique random realizations
of the initial velocity field), in order to compile statistics on the
core mass distributions using the techniques described by BCW.
Model 4 can in some sense be considered our ``standard'' model since we are
most interested in the acceleration of collapse in subcritical clouds
due to nonlinear supersonic velocity perturbations.
Model 8 is similar to model 4 but has the divergence-free initial velocity
field. Model 3 is on the brink of either prompt collapse or
a longer-term evolution leading to runaway collapse, and can sometimes
go into prompt collapse (see Section 4). For this reason, we ran this
model more than 15 times,
in order to obtain $t_{\rm run} = 30\,t_0$ as a characteristic value for all
but the few runs that do go into prompt collapse.
\subsection{Global Properties}
\begin{figure}
\centering
\begin{tabular}{cc}
\epsfig{file=f1a.eps,width=0.5\linewidth,clip=} &
\epsfig{file=f1b.eps,width=0.5\linewidth,clip=} \\
\epsfig{file=f1c.eps,width=0.5\linewidth,clip=} &
\end{tabular}
\caption{
Image and contours of column density $\sigma_{\rm n}(x,y)/\sigma_{\rm n,0}$,
and velocity vectors of neutrals, for three different models at the time
that $\sigma_{\rm n,max}/\sigma_{\rm n,0} = 10$. All models have $\tilde{\tau}_{\rm ni,0}=0.2$,
$\tilde{P}_{\rm ext}=0.1$, and $v_a=2.0\,c_{\rm s}$. Top left: model 4 ($\mu_0=0.5$).
Top right: model 13 ($\mu_0=1.0$).
Bottom left: model 17 ($\mu_0=2.0$).
The color table
is applied to the logarithm of the column density and the contour lines
represent values of $\sigma_{\rm n}/\sigma_{\rm n,0}$ spaced in
multiplicative increments of $2^{1/2}$, having the values
[0.7,1.0,1.4,2,2.8,4.0,...].
The horizontal or vertical distance between the footpoints of velocity
vectors corresponds to a speed $1.0 \, c_{\rm s}$ for the $\mu_0=0.5$ model,
$2.0 \, c_{\rm s}$ for the $\mu_0=1.0$ model, and $3.0 \, c_{\rm s}$ for the $\mu_0=2.0$ model.
We use the normalized spatial coordinates
$x' = x/\lambda_{\rm T,m}$ and $y' = y/\lambda_{\rm T,m}$, where $\lambda_{\rm T,m}=4\pi\,L_0$ is the wavelength of
maximum growth rate from linear perturbation theory, in the
nonmagnetic limit with $P_{\rm ext}=0$.
}
\label{densimgs}
\end{figure}
Fig.~\ref{densimgs} shows images of column density overlaid with
column density
contours and velocity vectors, for realizations of models 4 (top left),
13 (top right), and 17 (bottom left). Each snapshot is at the end of the
simulation, when $\sigma_{\rm n,max}/\sigma_{\rm n,0} = 10$, but occurs at a different time
$t_{\rm run}$, as indicated in Table 1.
The maximum speeds in the simulation region are quite different at the
end of the three simulations even though all three start with perturbations
characterized by $v_a = 2.0\,c_{\rm s}$. Therefore, the
velocity vector plots each have a different normalization, with the
horizontal or vertical distance between footpoints of vectors corresponding to
$1.0\,c_{\rm s}$, $2.0\,c_{\rm s}$, and $3.0\,c_{\rm s}$ for the three models with $\mu_0=0.5,1.0$,
and $2.0$ respectively. To understand the difference in maximum speeds, it is
important to understand the different course of evolution in each model.
Model 4, with $\mu_0=0.5$, has a strong enough magnetic field that the
initial compression driven by the large-scale flow of the nonlinear velocity
field does not lead to prompt collapse in any region. The magnetic
field causes a rebound after the initial compression. The densest regions
never reexpand fully to the initial background density, and instead undergo
oscillations in density until continuing ambipolar diffusion
leads to the creation of regions of supercritical mass-to-flux ratio. These
regions then undergo runaway collapse. For model 4, this occurs at
a representative $t= 31\,t_0$, meaning
that there is sufficient time for the initial velocity field to have damped
significantly, since we do not replenish turbulent energy in these simulations.
This is why the velocity amplitude is much smaller at the end of the simulation
than in the other two runs. However, the maximum value is still supersonic
($3.2\,c_{\rm s}$), and
there are strong systematic flow fields in the simulation. In contrast,
when starting with small-amplitude initial perturbations (BCW),
the maximum infall speeds are subsonic. In that case, runaway collapse also
occurred much later, at $t=204\,t_0$.
The color table and column density contours for model 4, when compared to
those for models 13 and 17, reveal that the gas is not as compressed and
filamentary as in those cases, due to the rebound from the initial extreme
compressions.
The images, contours, and velocity vectors for models 13 ($\mu_0=1.0$)
and 17 ($\mu_0=2.0$) reveal that the initial strong compression leads to
immediate runaway collapse within highly compressed filaments. The
velocity field has highly ordered supersonic compressive infall motions.
The runaway collapse is occurring at times $t=1.8\,t_0$ and $t=1.3\,t_0$,
respectively, essentially as soon as the initial flow creates a large-scale
compression. Although kinetic energy is efficiently dissipated behind the
shock fronts, there is hardly enough time for a large reduction of
the global kinetic energy.
Therefore, the maximum speeds at the end of the simulation
($6.0\,c_{\rm s}$ for model 13 and $7.2\,c_{\rm s}$ for model 17) are quite
similar to the initial maximum values.
\begin{figure}
\centering
\begin{tabular}{cc}
\epsfig{file=f2a.eps,width=0.5\linewidth,clip=} &
\epsfig{file=f2b.eps,width=0.5\linewidth,clip=}
\end{tabular}
\caption{
Image and contours of $\mu(x,y)$, the mass-to-flux ratio in units of the
critical value for collapse. Regions with $\mu >1$ are displayed with a color table,
while regions with $\mu <1$ are black. The contour lines are spaced in additive
increments of 0.1. Left: Final snapshot of model 4 ($\mu_0=0.5$).
Right: Final snapshot of model 13 ($\mu_0=1.0$).
}
\label{mtoflx}
\end{figure}
Fig.~\ref{mtoflx} shows images of the mass-to-flux ratio at the end
of the simulations of models 4 and 13.
The end states have a combination of subcritical and supercritical regions.
Subcritical regions are shown in black, and a color table is applied to the
supercritical regions on both panels. The left panel illustrates that the
supercritical regions of the initially significantly subcritical ($\mu_0=0.5$)
cloud are created within the filamentary regions generated by the large-scale
compressions. In contrast, the initially critical ($\mu_0=1.0$) model has
widespread supercritical regions generated by the small-scale modes of
turbulence, as well as the most supercritical regions in the compressed layers.
The former effect of widespread patches of mildly supercritical gas is possible
due to the marginal nature of the critical ($\mu_0=1.0$) initial state.
Physically, we might expect
the cloud with $\mu_0=1.0$ to lead to a cluster of stars soon after the
runaway collapse of the first cores. On the other hand, the
significantly subcritical cloud with $\mu_0=0.5$ would show only isolated star
formation in the compressed layers and have to wait a much longer time before
ambipolar diffusion leads to clustered star formation in the remainder of the cloud.
\begin{figure}
\centering
\begin{tabular}{cc}
\epsfig{file=f3a.eps,width=0.5\linewidth,clip=} &
\epsfig{file=f3b.eps,width=0.5\linewidth,clip=} \\
\epsfig{file=f3c.eps,width=0.5\linewidth,clip=} &
\end{tabular}
\caption{
Image of gas column density $\sigma_{\rm n}(x,y)/\sigma_{\rm n,0}$
and superposed magnetic field lines for realizations of
models 4, 13, and 17, with $\mu_0=0.5$ (top left), $\mu_0=1.0$ (top right),
and $\mu_0=2.0$ (bottom left). All models have initial velocity
amplitude $v_a = 2.0\,c_{\rm s}$. These are two-dimensional projections
of three-dimensional images containing
a sheet with a column density image and magnetic field lines extending
above the sheet to a distance about half the box width. The image is seen
from a viewing angle of about 10$^{\circ}$ relative to the sheet normal
direction.
Animations of the
evolution of the column density are available online. The field lines
appear in the last frame of the animation.
}
\label{movieimgs}
\end{figure}
Fig.~\ref{movieimgs} shows the end states of models 4, 13, and 17 in
different realizations (i.e. starting with a different but statistically
equivalent initial velocity field) than shown in Fig.~\ref{densimgs}.
A color table shows the column density of the final state, and magnetic
field lines above the sheet are also illustrated. These lines are generated
in the manner described in BCW. The image of sheet surface and
field lines above are viewed from an angle of $10^{\circ}$ from the sheet
normal direction. Animations of the evolution of the sheet surface density, with
field lines appearing in the last frame, are available online.
Clearly, the models which suffer prompt collapse ($\mu_0=1.0$ and $\mu_0=2.0$)
show the most curvature of field lines since the field is dragged inward
by the strong initial compression wave. In contrast, the cloud with
$\mu_0=0.5$ (top left) undergoes a rebound and several oscillations before
runaway collapse can occur. This allows the magnetic field to
straighten out again. The ultimate collapse of the first core is due
to ambipolar drift of neutrals past field lines, so the field is not
significantly distorted by this process. However, a legacy of the
initial compression is that the mass-to-flux ratio is no longer
spatially uniform, and significant column density structure exists
even if the magnetic field is not very distorted.
The relative amounts of field line curvature in the cloud and within dense
cores are quantified by $\theta = \tan^{-1} (|B_p|/B_{z,\rm eq})$,
where $|B_p| = (B_x^2+B_y^2)^{1/2}$ is the magnitude of the planar magnetic
field at any location on the sheet-like cloud. Hence, $\theta$ is the angle that
a field line makes with the vertical direction at any location at the top
or bottom surface of the sheet. To quantify the differences in field line
bending from subcritical to transcritical to supercritical clouds,
we note that representative realizations of models 4, 13, and 17,
with $\mu_0=(0.5,1.0,2.0)$, have average
values $\theta_{\rm av} = (4.2^{\circ}, 18^{\circ}, 23^{\circ})$, and maximum
values (probing the most evolved core in each simulation)
$\theta_{\rm max} = (25^{\circ}, 65^{\circ}, 65^{\circ})$.
Of these, only the model with $\mu_0=0.5$ shows similar values of $\theta$
as a corresponding model with initial small-amplitude perturbations. For
models with small-amplitude initial perturbations, BCW found that
$\mu_0=(0.5,1.0,2.0)$ yields representative values
$\theta_{\rm av} = (1.7^{\circ}, 8.3^{\circ}, 18^{\circ})$, and
$\theta_{\rm max} = (20^{\circ}, 30^{\circ}, 46^{\circ})$.
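The bending angle defined above is a simple function of the local field components. A minimal Python sketch (the function name and sample values are illustrative, not taken from the simulation code):

```python
import math

def bending_angle_deg(bx, by, bz_eq):
    """theta = arctan(|B_p| / B_z,eq) in degrees, where
    |B_p| = sqrt(Bx^2 + By^2) is the planar field magnitude."""
    b_planar = math.hypot(bx, by)
    return math.degrees(math.atan2(b_planar, bz_eq))

# A planar field equal in strength to the vertical component
# bends the line 45 degrees away from the sheet normal.
print(bending_angle_deg(1.0, 0.0, 1.0))  # -> 45.0
print(bending_angle_deg(0.0, 0.0, 1.0))  # -> 0.0 (undistorted field)
```

Averaging this quantity over the sheet gives $\theta_{\rm av}$, while evaluating it at the most evolved core gives $\theta_{\rm max}$.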
\begin{figure}
\centering
\begin{tabular}{cc}
\epsfig{file=f4a.eps,width=0.5\linewidth,clip=} &
\epsfig{file=f4b.eps,width=0.5\linewidth,clip=} \\
\epsfig{file=f4c.eps,width=0.5\linewidth,clip=} &
\epsfig{file=f4d.eps,width=0.5\linewidth,clip=}
\end{tabular}
\caption{
Top: Column density and velocity vectors as in Fig. \ref{densimgs}, but for
models 9 and 16, which have initial nonlinear velocity field with $v_a=2.0\,c_{\rm s}$
and flat power spectrum ($k^0$).
The horizontal or vertical distance between the origins of velocity
vectors corresponds to a speed $0.5 \, c_{\rm s}$.
Bottom:
Images of mass-to-flux ratio, as in Fig.~\ref{mtoflx} but for
models 9 and 16.
}
\label{k0densmtfimg}
\end{figure}
The evolution of models with flat spectrum ($v_k^2 \propto k^0$) initial
perturbations is distinct from the cases with negative exponent,
so we present the results from models 9 and 16 in Fig.~\ref{k0densmtfimg}.
In these models, $v_a=2.0\,c_{\rm s}$ as in model 4, but the large-scale flow does
not dominate the initial
condition. Therefore, the small-scale modes contribute more
significantly to enhancing
ambipolar diffusion. This enhancement of ambipolar diffusion
due to small-scale irregularities is similar to the mechanism
studied analytically by \citet{fat02} and \citet{zwe02}.
We study only models with $\mu_0=0.5$
and $\mu_0=1.0$ in order to focus on the enhanced ambipolar
diffusion. The values of $t_{\rm run}$ for these models are
$56\,t_0$ and $33\,t_0$ respectively. These are significantly
longer time scales
than in the corresponding models with $v_k^2 \propto k^{-4}$.
However, they are much shorter than in models with the same background state
and linear initial perturbations (studied by BCW), in which case
$t_{\rm run}$ is $204\,t_0$ and $121\,t_0$ for $\mu_0=0.5$ and
$\mu_0=1.0$, respectively.
Fig.~\ref{k0densmtfimg} shows the column density and mass-to-flux ratio
at the end of simulations of model 9 and 16.
The input turbulence acts to increase the rate of ambipolar diffusion,
but the turbulence also decays away. By the time of runaway collapse, the
cloud structure and kinematics more closely resemble the case of
small-amplitude initial perturbations than the
case of nonlinear-flow-induced fragmentation (models 4 and 13).
Representative values of the maximum speed $v_{\rm max}$ of neutrals
at the time $t=t_{\rm run}$ of runaway collapse are
$0.7\,c_{\rm s}$ for model 9 and $0.8\,c_{\rm s}$ for model 16. Both values are closer to
the values $0.4\,c_{\rm s}$ and $0.7\,c_{\rm s}$ when starting with small-amplitude
initial perturbations (BCW)
than for the cases of nonlinear-flow-induced fragmentation,
in which case $v_{\rm max} = 3.2\,c_{\rm s}$ and $6.0\,c_{\rm s}$, respectively.
The bottom panels show the mass-to-flux ratio at the end of
simulations of model 9 and 16. The subcritical model 9 has only
isolated pockets of supercritical cores, as well as emerging cores
which still have subcritical but enhanced mass-to-flux ratio. The
image is similar to the corresponding image when starting with
small-amplitude perturbations (Fig. 9 of BCW), but the
cores are not circular in shape. The fragmentation scale also
seems related to that of the small-amplitude perturbation model, although
many fragments are just beginning to emerge and may
take a much longer time to develop fully.
The corresponding image for model 16 shows that the initially
critical ($\mu_0=1.0$) state leads to many regions of supercritical
mass-to-flux ratio. The emerging fragment pattern
has less resemblance to the corresponding case that starts from small amplitude
perturbations (Fig. 9 of BCW) than for $\mu_0=0.5$. Nevertheless,
a fragmentation scale is more apparent than for the case of
nonlinear-flow-induced fragmentation shown in Fig.~\ref{mtoflx}.
The relatively low amount of
remaining turbulence at the end of both simulations means that new cores will develop
on the non-accelerated ambipolar diffusion time scale, leading to a
significant age spread of star formation. In the nonlinear
initial condition models, with $k^{-4}$ or $k^0$ spectrum, we may
identify the initial cores with an early phase of star formation that is
induced by turbulence, and the emerging cores with a later phase
of star formation that can grow from lingering small-amplitude perturbations.
In this way, our model could be in qualitative agreement with the
empirical scenario advocated by \citet{pal00}.
\subsection{Time Evolution}
\begin{figure}
\centering
\epsfig{file=f5.eps,width=1.0\linewidth,clip=}
\caption{
Time evolution of maximum values of surface density
and mass-to-flux ratio for various models with $\mu_0=0.5$.
The solid lines show the
evolution of the maximum value of surface density in the simulation,
$\sigma_{\rm n,max}/\sigma_{\rm n,0}$, versus time $t/t_0$.
The dashed lines
show the evolution of the maximum mass-to-flux ratio in the simulation,
$\mu_{\rm max}$.
This is shown for models
4, and 9, which have
$\mu_0= 0.5$ and same values of $\tilde{\tau}_{\rm ni,0}$ and $\tilde{P}_{\rm ext}$,
but different power spectra of turbulent initial perturbations, $k^{-4}$
and $k^0$ respectively.
Two other models are also shown for comparison. One has the same parameters but
linear initial perturbations, corresponding to model 1 of BCW.
Furthermore, the dash-dotted line shows the evolution (up to time
$t \approx 200\,t_0$ only)
of $\sigma_{\rm n,max}/\sigma_{\rm n,0}$ for the flux-frozen ($\tilde{\tau}_{\rm ni,0}=0$) model 1, which never undergoes
runaway collapse.
}
\label{timevol1}
\end{figure}
\begin{figure}
\epsfig{file=f6.eps,width=1.0\linewidth,clip=}
\caption{
Time evolution of maximum values of surface density
and mass-to-flux ratio for various models with $\mu_0=1.0$.
The solid and dashed lines have the same meaning as in Fig.~\ref{timevol1}.
Shown are results from models 13 and 16, which have differing
power spectra of turbulent initial perturbations. Also shown is
a model with the same parameters but linear initial perturbations,
corresponding to model 3 of BCW.
}
\label{timevol2}
\end{figure}
\begin{figure}
\epsfig{file=f7.eps,width=1.0\linewidth,clip=}
\caption{
Effect of differing levels of magnetic coupling on time evolution of
the maximum value of surface density. The solid lines show results for
models 4, 10, and 11. All models have
$\mu_0=0.5$, $\tilde{P}_{\rm ext}=0.1$, and turbulent initial perturbations with
power spectrum $\propto k^{-4}$. The dashed line shows the evolution of
the flux-frozen model 1 ($\tilde{\tau}_{\rm ni,0}=0$), which does not undergo runaway
collapse.
}
\label{timevol3}
\end{figure}
Fig.~\ref{timevol1} shows the time evolution of
maximum column density for four models and of the maximum
mass-to-flux ratio for three models, all having $\mu_0=0.5$.
It clearly shows that for subcritical
clouds: (1) fragmentation by runaway collapse does not occur under
conditions of flux-freezing (dash-dotted line; the simulation actually
runs past $t=5000\,t_0$ without runaway collapse); (2) small-amplitude
(linear) perturbations result in the classical quasistatic evolution
requiring a time $t_{\rm run} \approx 200\,t_0$; (3) flat spectrum ($k^0$)
nonlinear perturbations result in a collapse time that is shorter
by a factor $\approx 4$; (4) power-law ($k^{-4}$) spectrum
nonlinear perturbations result in a rapid collapse
that is shorter than in the linear case by a factor $\approx 7$.
Of course, the
exact values of $t_{\rm run}$ will depend on $v_a$ and other parameters
such as $\tilde{\tau}_{\rm ni,0}$ and $\tilde{P}_{\rm ext}$. For the fourth case above, it may depend
on the box size $L$ as well, since that sets the scale of the largest
mode in the simulation.
This figure corresponds to Fig. 1 of \citet{kud08}, which
shows results for some three-dimensional models. In their figure,
the volume density $\rho_{\rm n}$ and plasma $\beta$ (counterparts to
$\sigma_{\rm n}$ and $\mu$ in our thin-sheet model) undergo some variations
due to vertical oscillations of the cloud, before eventually increasing rapidly.
Our model follows the integrated quantities through the layer and
therefore does not include the effect of vertical oscillations.
Nevertheless, the timescale of
evolution and eventual runaway collapse are in good agreement
where comparisons can be made. Our thin-sheet model allows a
broader parameter study than
currently possible using three-dimensional simulations.
Fig.~\ref{timevol2} shows the time evolution of maximum column density and
maximum mass-to-flux ratio for three models with $\mu_0=1.0$.
The model with small amplitude (linear) initial perturbations
corresponds to model 3 of BCW. The other ones are
model 13 and model 16 of this paper.
The same three qualitatively distinct evolutionary modes occur
as for the case of $\mu_0=0.5$. A notable difference is that
collapse occurs immediately during the first compression
in the case of nonlinear-flow-induced fragmentation, at
$t_{\rm run}=1.8\,t_0$. There is no rebound from the first compression
as occurs when $\mu_0=0.5$.
Fig.~\ref{timevol3} illustrates the effect of varying levels of ionization
on the fate of nonlinear-flow-induced fragmentation. This is
represented by differing values of $\tilde{\tau}_{\rm ni,0}$, with $\tilde{\tau}_{\rm ni,0}=0$
corresponding to flux-freezing and differing values of
$\tilde{\tau}_{\rm ni,0}$ corresponding to different initial ionization fractions
$x_{\rm i,0} \propto \tilde{\tau}_{\rm ni,0}^{-1}$
(see Appendix).
The standard model with $\tilde{\tau}_{\rm ni,0}=0.2$ corresponds to
a canonical ionization fraction $x_{\rm i,0} \simeq 10^{-7}(n_{\rm n,0}/10^4~{\rm cm}^{-3})^{-1/2}$
\citep{tie05}.
Our results show that $t_{\rm run}$ may indeed vary significantly
due to variations in $x_{\rm i,0}$, easily spanning the range of
$10^6$ yr to $10^7$ yr for typical values of units and input parameters.
Clearly, a definitive understanding of the influence of magnetic
fields awaits further insight into ionization levels in molecular clouds.
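The scaling quoted above can be made concrete in a few lines of Python. The normalization at $\tilde{\tau}_{\rm ni,0}=0.2$ follows the canonical relation in the text; the function name and the simple $1/\tilde{\tau}_{\rm ni,0}$ rescaling are illustrative assumptions:

```python
def x_i0(n_n0, tau_ni0=0.2):
    """Initial ionization fraction implied by the canonical relation
    x_i,0 ~ 1e-7 (n_n,0 / 1e4 cm^-3)^(-1/2) at tau_ni,0 = 0.2,
    rescaled by x_i,0 proportional to 1/tau_ni,0 (sketch only;
    the normalization point is taken from the text)."""
    canonical = 1.0e-7 * (n_n0 / 1.0e4) ** -0.5
    return canonical * (0.2 / tau_ni0)

print(x_i0(1.0e4))        # canonical case: 1e-07
print(x_i0(1.0e4, 0.1))   # better neutral-ion coupling: 2e-07
```

Lower ionization fractions (larger $\tilde{\tau}_{\rm ni,0}$) weaken the neutral-ion coupling and shorten $t_{\rm run}$ accordingly.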
\subsection{Core Properties}
\begin{figure}
\epsfig{file=f8.eps,width=1.0\linewidth,clip=}
\caption{
Velocity component of neutrals $v_x$, normalized to $c_{\rm s}$,
along a line parallel to the $x$-axis
that passes through the center of a core, for models with
various values of $\mu_0$ and differing initial perturbations.
The horizontal coordinate $x' = (x - x_{\rm c})/L_0$, where $x_{\rm c}$
is the location of the core center in each case.
The dashed lines show the profile of $v_x$ for cores generated in models
4, 13, and 17, from left to right panels. They are characterized by
initial nonlinear perturbations (NLP). The dotted lines show
for comparison the profiles through cores in models 1, 3, and
5 of BCW, i.e. models with the same parameters
but initial linear perturbations (LP).
The horizontal solid lines denote the systematic speeds of the
cores in the $x-$direction. Note the largest systematic speed
in the model with $\mu_0=0.5$ and initial NLP.
}
\label{velcuts}
\end{figure}
Two important observed properties of dense cores are the
kinematics of infall motions, and the
distribution of core masses. The latter suffers from some
ambiguity due to the different possible definitions of
a ``core'', or more specifically, how to define a core
``boundary''. Here we describe the most basic features of
our simulated cores.
Fig.~\ref{velcuts} shows the velocity profiles (using dashed lines) in the
vicinity of cores that are obtained in simulations
of models 4, 13, and 17, with $\mu_0=0.5,1.0$, and 2.0,
respectively. They are measured along
a line in the $x$-direction that passes through
the column density peak. Also shown (in dotted lines) for
each value of $\mu_0$ is a profile of $x$-velocity through a core
that is formed in a simulation with small-amplitude (linear)
perturbations (BCW), but otherwise the same parameters as the other
model shown in the same panel.
The horizontal solid lines in each panel pass through the
``midplane'' of the velocity profiles, and allow one to read off the
systematic $x$-velocity of the core.
For the cases of initial linear perturbations, the increasing
sequence of $\mu_0$ leads to ever increasing maximum infall speeds,
from about half the sound speed up to mildly supersonic values.
There is also evidence of infall speeds increasing
towards the core centers, due to gravitational acceleration.
For the cases of initial nonlinear perturbations (these models all
have a $k^{-4}$ spectrum), the sequence of increasing $\mu_0$ leads to
greater relative infall speeds onto the cores. These motions are
supersonic in all cases, and constitute an important observationally-testable
consequence of nonlinear-flow-induced fragmentation.
The models with greater $\mu_0$ have greater infall speeds because they
undergo collapse during the first compression, with most of
the initial input turbulent energy still intact, i.e. there has not
been much time for turbulent decay. Also note that there is no
evidence for an accelerating flow in these cases, which would be
a signature of gravitationally-driven motions.
Thus, these models demonstrate flow-driven core formation, rather than
gravitationally-driven core formation.
For the model with
$\mu_0=2.0$ there is essentially no systematic core speed, since the
collapse occurs very quickly at the intersection of two
colliding flows. At the other limit of a significantly subcritical
cloud ($\mu_0=0.5$), the initial compression is followed by a rebound
due to the strong magnetic restoring forces. The core forms later
within the region of high density that is undergoing oscillatory motions.
The systematic speed of the core relative to the simulation box
is supersonic (about twice the sound speed in this model), although
the relative speed of infall onto the core is subsonic or transonic.
The large systematic core speeds for subcritical clouds constitute another
observationally-testable
consequence of nonlinear-flow-induced fragmentation.
The case of $\mu_0=1.0$ is intermediate in features between the
$\mu_0=0.5$ and $\mu_0=2.0$ models, but is actually closer to the $\mu_0=2.0$
model since the collapse occurs very quickly, with $t_{\rm run}=1.8\,t_0$.
This is very close to the value $t_{\rm run}=1.3\,t_0$ for the $\mu_0=2.0$ model.
\begin{figure}
\epsfig{file=f9.eps,width=1.0\linewidth,clip=}
\caption{
Histograms of masses contained within regions with $\sigma_{\rm n}/\sigma_{\rm n,0} \geq 2$,
measured at the end of simulations with parameters of
models 4, 13, and 17. Specifically, they are distinguished by
values of $\mu_0 = 0.5,1.0,2.0$ as labeled.
Each figure is the result of a compilation of results of many
simulations. The bin width is 0.1.
}
\label{masshist}
\end{figure}
Fig.~\ref{masshist} shows the histograms of core masses, defined as masses
enclosed within regions that have $\sigma_{\rm n}/\sigma_{\rm n,0} \geq 2$ surrounding a
column density peak. These are measured at the end of each simulation,
for 15 separate realizations of model 4, and 25 each of models 13 and 17.
Simulations of model 4 ($\mu_0=0.5$) and model 13 ($\mu_0=1.0$)
produce an average of five identifiable cores per simulation,
while model 17 ($\mu_0=2.0$) produces an average of ten cores per simulation.
For details
about our thresholding technique used to obtain core masses, see BCW.
The histograms reveal that for any fixed value of $\mu_0$, the distribution
of core masses is much broader than the corresponding histogram of masses
for fixed $\mu_0$ and initial small-amplitude (linear) perturbations.
See Fig.~8 of BCW for the latter, which shows a very sharp descent
beyond the preferred mass scale. There are many more high-mass cores that
are formed in these models. However, an examination of Figs. \ref{densimgs}
and \ref{movieimgs} reveals that many of the cores have very elongated
and irregular shapes, so that they may yet break up into multiple fragments.
In BCW, we proposed that a broad CMF may be caused by a distribution of
initial mass-to-flux ratio values
within a cloud. That remains an alternative scenario to that of
``turbulent fragmentation'' explored here and in several previous
publications \citep{pad97,kle01,gam03,til07}.
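The thresholding idea behind these histograms can be sketched in a few lines of Python. This is a toy 4-connected flood fill over a tiny grid, not the actual technique (see BCW for that); all names and sample values are illustrative:

```python
from collections import deque

def core_masses(sigma, sigma0=1.0, thresh=2.0, cell_area=1.0):
    """Group 4-connected cells with sigma_n/sigma_n,0 >= thresh
    around each peak and sum sigma * cell_area over each group."""
    ny, nx = len(sigma), len(sigma[0])
    seen = [[False] * nx for _ in range(ny)]
    masses = []
    for j in range(ny):
        for i in range(nx):
            if seen[j][i] or sigma[j][i] / sigma0 < thresh:
                continue
            # flood fill one connected over-dense region
            mass, queue = 0.0, deque([(j, i)])
            seen[j][i] = True
            while queue:
                cj, ci = queue.popleft()
                mass += sigma[cj][ci] * cell_area
                for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nj, ni = cj + dj, ci + di
                    if (0 <= nj < ny and 0 <= ni < nx and not seen[nj][ni]
                            and sigma[nj][ni] / sigma0 >= thresh):
                        seen[nj][ni] = True
                        queue.append((nj, ni))
            masses.append(mass)
    return masses

# Two separated over-dense patches -> two "cores"
grid = [[1.0, 1.0, 1.0, 1.0],
        [1.0, 3.0, 1.0, 2.5],
        [1.0, 2.2, 1.0, 1.0]]
print(sorted(core_masses(grid)))  # [2.5, 5.2]
```

Binning the resulting masses over many realizations yields histograms like those in Fig.~\ref{masshist}.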
\subsection{Turbulent Dissipation}
\label{s:tdiss}
\begin{figure}
\epsfig{file=f10.eps,width=1.0\linewidth,clip=}
\caption{
Kinetic energy versus time for models with various values
of $\mu_0$ and/or $v_a$ (normalized to $c_{\rm s}$).
Each of these models is run with $N=256$.
}
\label{ekin1}
\end{figure}
The rate of dissipation of turbulent energy has been studied
extensively in a series of three-dimensional simulations
\citep[e.g.][]{sto98,mac98,mac99,ost01}. See also the reviews by
\citet{mac04}, \citet{elm04}, and \citet{mck07}.
In this Section, our goal is to briefly present some
information about the turbulent decay in our simulations.
These may be of interest because our simulations differ in their
use of the thin-sheet approximation and the use of high-order adaptive
time-stepping that is part of our implementation of the method of lines
technique (see BCW).
Nevertheless, we do also obtain relatively rapid turbulent
dissipation in most models, as presented in the various figures in this Section.
We present results for the decay of kinetic energy in our simulation
box. However, the total energy in the simulation box
is not conserved, due to radiative losses implied by our isothermal
assumption, and also due to work done by the external pressure and
magnetic forces associated
with the field components $B_x$ and $B_y$ at the
cloud's top and bottom surfaces. Nevertheless, the amount of turbulent
kinetic energy present in a cloud has important observational
implications, so we illustrate its evolution here in several figures.
We also use the kinetic energy evolution as a means of exploring the
effect of numerical resolution in our simulations.
Fig.~\ref{ekin1} shows the evolution of kinetic energy $E_{\rm kin}$
(defined as the sum of $\frac{1}{2} \sigma_{\rm n}(v_x^2+v_y^2)$ over all
cells of a simulation), normalized to
initial values $E_{\rm kin,0}$, for models 4, 6, and 13.
Note that the values of $E_{\rm kin,0}$ differ from model to model.
The model with $\mu_0=1.0$ does not have much chance to lose kinetic energy
because collapse occurs right away, during the first
turbulent compression. For models with $\mu_0=0.5$, there is a rebound from
the initial compression, and this is indicated by the
oscillations of $E_{\rm kin}$. Furthermore, there is an overall systematic
decay of $E_{\rm kin}$ so that it is significantly reduced in one sound crossing
time of the initial half-thickness of the cloud,
$t_{\rm c} = Z_0/c_{\rm s} \simeq 2 L_0/c_{\rm s} = 2\,t_0$, where we have used
Eq. (30) of BCW to relate $Z_0$ to $L_0$. The decaying oscillations
of $E_{\rm kin}$ are consistent with the qualitative picture obtained from the
animation of model 4 that accompanies Fig.~\ref{movieimgs}. Some of our
realizations do show an increase in $E_{\rm kin}$ during the last stage of evolution,
due to the conversion of gravitational energy into kinetic energy
of systematic infall onto one or more cores.
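The diagnostic plotted in these figures is just the cell-wise sum defined above; a minimal sketch on nested lists (toy values, in code units):

```python
def kinetic_energy(sigma, vx, vy):
    """E_kin = sum over cells of (1/2) * sigma_n * (vx^2 + vy^2)."""
    total = 0.0
    for srow, urow, vrow in zip(sigma, vx, vy):
        for s, u, v in zip(srow, urow, vrow):
            total += 0.5 * s * (u * u + v * v)
    return total

# 2x2 toy grid: one cell moving at 2 c_s, another at 1 c_s
sigma = [[1.0, 1.0], [1.0, 1.0]]
vx = [[2.0, 0.0], [0.0, 0.0]]
vy = [[0.0, 1.0], [0.0, 0.0]]
print(kinetic_energy(sigma, vx, vy))  # 0.5*4 + 0.5*1 = 2.5
```

Normalizing each snapshot by the initial value $E_{\rm kin,0}$ gives the decay curves shown in the figures.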
Fig.~\ref{ekin2} shows the effect of resolution on the decay of kinetic energy.
Our standard simulations have $N=128$, and numerical experiments with
$N=256$ and $N=512$ demonstrate that while some more kinetic energy is
retained in those cases, the overall pattern of decay and oscillations
of $E_{\rm kin}$ is maintained.
We note that each simulation has a unique random but statistically
equivalent initial state.
Fig.~\ref{ekin3} reveals the additional effect on the kinetic energy evolution
of two interesting limits.
In one case, we perform the numerical experiment of starting with
a divergence-free (non-compressive) initial velocity field,
even though turbulence in the interstellar medium is thought to
be highly compressive \citep{mck07}.
For this case, a plot of $E_{\rm kin}$ for the evolution of
model 8 (dashed line) reveals that the
kinetic energy still decays, but that the cloud does not undergo
large-scale oscillations during the process. These oscillations occur in the
compressible case due to the restoring force of the magnetic field when
compressed into filamentary structures. This does not occur in the
incompressible model in a globally coherent manner, although locally
compressive motions are generated during the evolution and turbulent decay
does occur rapidly, as in the other models.
The dash-dotted line reveals the interesting evolution in the case of
flux-freezing (model 1; $\tilde{\tau}_{\rm ni,0}=0$). Here the initially compressive
velocity field leads to an initial rapid decay of turbulence through
shocks, but large-scale oscillatory modes
remain in the simulation box for indefinite periods of time.
These modes have a root mean squared velocity amplitude $\approx 2\,c_{\rm s}$
and contain roughly half the initial input energy.
Why do these modes not decay away? The lack of ambipolar diffusion
means there is no dissipation of modes in which the restoring force is
due to the magnetic field. The $k^{-4}$ spectrum means that the largest
modes dominate, and these also suffer negligible numerical dissipation
in our scheme. The restoring force that drives the waves is provided largely
by the magnetic tension associated with the magnetic field external to the
sheet. While the case of a thin sheet may not be generalizable to
three-dimensional molecular clouds, we feel this result is an
important pointer to processes that may in fact be occurring in real
clouds. That is, the external magnetic field, anchored in the Galactic
interstellar medium, may allow the outer parts of clouds
(effectively flux-frozen due to UV ionization; see \citealt{cio95})
to maintain long-lived oscillations
that are then identified observationally as ``turbulence''.
This idea has long been advocated by \citet{mou75,mou87}.
We note that this result could not be obtained in periodic box simulations
that contain no effect of an external medium, and leave a
more thorough assessment of this effect to a forthcoming paper.
\begin{figure}
\epsfig{file=f11.eps,width=1.0\linewidth,clip=}
\caption{
Kinetic energy versus time for model 4 parameters but varying resolution.
}
\label{ekin2}
\end{figure}
\begin{figure}
\epsfig{file=f12.eps,width=1.0\linewidth,clip=}
\caption{
Kinetic energy versus time for models 1 (dash-dotted line), 4 (solid line),
and 8 (dashed line).
Each of these models has $\mu_0=0.5$ and $v_a = 2\,c_{\rm s}$ and
is run with $N=128$. They differ in their values of
$\tilde{\tau}_{\rm ni,0}$ and in whether or not the initial velocity
field is divergence-free.
}
\label{ekin3}
\end{figure}
\section{Discussion}
We have performed a parameter study of fragmentation of a dense sheet
aided by the presence of initial nonlinear velocity perturbations.
In most models, the power spectrum of fluctuations is $\propto k^{-4}$,
so that the initial conditions impose primarily a large-scale flow
to the system. We have also studied the case of nonlinear perturbations
with power spectrum $\propto k^0$, in which the small-scale fluctuations
play a bigger role. Of the two modes, the latter is more similar to
gravitational fragmentation arising from small-amplitude perturbations,
as studied extensively in our previous paper (BCW). The main
difference is an accelerated time scale for core formation. This is
particularly apparent for the cases with subcritical initial
mass-to-flux ratio, in which case the nonlinear fluctuations enhance
ambipolar diffusion \citep[see][]{fat02,zwe02}. For the
case of nonlinear-flow-induced fragmentation, originally studied
by \citet{li04} and \citet{nak05}, we find that the induced structures
are highly filamentary and go into direct collapse for supercritical
clouds. For critical, and more so for subcritical clouds, the
initial compression may be followed by a rebound and oscillations
which eventually lead to runaway collapse in dense pockets where
enhanced ambipolar diffusion has created supercritical conditions.
What determines the outcome? For any given field strength, there is
a threshold initial velocity amplitude $v_a$ above which prompt collapse
will take place.
An examination of model outcomes in Table 1 reveals that, for a fixed
standard initial ionization fraction defined by $\tilde{\tau}_{\rm ni,0}=0.2$,
prompt collapse takes place when $v_a > V_{\rm MS,0}$
(see models 2, 13, and 17). Indeed, model 3, which has
$v_a \approx V_{\rm MS,0}$,
is actually prone to go into prompt collapse ($t_{\rm run} \approx t_0$)
in some realizations,
but undergoes several oscillations before runaway collapse in
most cases (with representative value $t_{\rm run}=30\,t_0$).
We can say that significantly super-Alfv\'enic\ perturbations are associated
with prompt collapse, for both subcritical and supercritical model clouds.
This criterion does not apply to models with initial power
spectrum $\propto k^0$,
since the kinetic energy does not get channeled toward a large-scale
compression wave. It also does not apply to model 7 ($\tilde{P}_{\rm ext} = 10$),
since its low value of $V_{\rm MS,0}$ is very specific to the
external-pressure-dominated initial state, but not representative of
the signal speed in the high-density regions that are subsequently generated.
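The empirical criterion can be written as a one-line predicate. This is a hedged sketch: the function, its arguments, and the restriction to the $k^{-4}$ spectrum simply encode the caveats stated above, and are not from the paper's code:

```python
def prompt_collapse_expected(v_a, v_ms0, spectrum="k-4"):
    """Empirical rule from the parameter study: prompt collapse is
    expected when the initial amplitude is significantly
    super-magnetosonic, v_a > V_MS,0, for a k^-4 spectrum. The rule
    does not apply to k^0 spectra (no large-scale compression wave)
    or to external-pressure-dominated initial states."""
    return spectrum == "k-4" and v_a > v_ms0

print(prompt_collapse_expected(2.0, 1.5))         # True
print(prompt_collapse_expected(2.0, 1.5, "k0"))   # False (criterion not applicable)
```

For marginal cases with $v_a \approx V_{\rm MS,0}$, the outcome varies between realizations, as noted for model 3.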
The highly filamentary structure of clouds in which prompt collapse
takes place is a source of concern when comparing with maps of observed
molecular clouds. This was noted by \citet{li04}, who commented that
clouds with weak magnetic field and supersonic turbulence as modeled
(i.e. $k^{-4}$ power spectrum, having most power on the largest scales)
would appear too filamentary in comparison with observations.
Our study extends this concern also to models with strong magnetic field
if the turbulence is highly super-Alfv\'enic, since prompt collapse occurs
in highly compressed filaments without a chance for them to rebound.
Since the weak magnetic field cases also have by design a velocity
amplitude that is super-Alfv\'enic, we can say that super-Alfv\'enic\ turbulence
in all cases may have problems with excessive filamentarity. There is another problem
with large amounts of turbulent forcing: the relative infall motions onto
the cores are highly supersonic (see Fig.~\ref{velcuts}), and at odds with
observed core infall motions \citep{taf98,wil99,lee01,cas02}, which are
subsonic or at best transonic.
Of course, both of these problems are set up artificially in our
simulations through nonlinear forcing associated with the initial conditions.
In other simulations of driven super-Alfv\'enic\ turbulence \citep{pad02}, such
forcing continues at all times and the above features are always present.
If the highly turbulent and/or super-Alfv\'enic\ models pose difficulties for
dense core formation, then
how does one account for the highly supersonic motions observed in molecular clouds
\citep[e.g.][]{sol87}?
The answer is likely that they exist in the lowest density envelopes of the
molecular clouds and therefore should not be input into local models of dense
subregions, as we do in some cases here. Our super-Alfv\'enic\ models in a periodic
simulation box demonstrate a limiting case, and help establish that such models
cannot be applied directly to explain observed star-forming regions.
In a global scenario, the dense regions
that form cores will be less turbulent than the larger low-density envelopes.
The low-density regions can support highly turbulent motions while denser regions
have lower velocity dispersion, as demonstrated in 1.5 dimensional global
models of molecular cloud turbulence \citep{kud03,kud06}.
Our Fig.~\ref{masshist} shows that the core mass distribution is relatively
broad for any given value of $\mu_0$ for nonlinear-flow-induced
fragmentation ($k^{-4}$ spectrum of velocity fluctuations).
This repeats the qualitative findings of many earlier studies
in three-dimensions \citep{pad97,kle01,gam03,til07}.
This scenario of turbulent fragmentation is a plausible mechanism to
generate broad CMFs of the type observed. It remains an open question whether
this kind of CMF is related to the IMF since the cores are often highly irregular
in shape, and it is not clear that they will collapse monolithically.
Alternative methods to generate broad IMFs or CMFs are the global effect of
competitive accretion \citep{bon03,bat03}, a temporal spread of core
accretion lifetimes \citep{mye00,basj04}, or a distribution of initial
mass-to-flux ratios in a cloud (BCW).
Future work by the astrophysical community may clarify the relative
roles of these processes.
\section{Summary}
We have studied the
effect of initial nonlinear velocity perturbations on the
formation of dense cores in isothermal sheet-like layers that
may be embedded within larger molecular cloud envelopes.
Our simulation box is periodic in the lateral ($x,y$) directions and
typically spans four nonmagnetic (Jeans) fragmentation scales in each of these
directions. The initial input turbulent energy is allowed to decay freely.
The simulations reveal a wide range of outcomes.
We emphasize the following main results of the paper:
\begin{enumerate}
\item{\it Time Evolution to Runaway.}
Subcritical model clouds can undergo accelerated ambipolar
diffusion in two different ways. For nonlinear
initial velocity perturbations in which small-scale modes contain
a large portion of the energy, the onset of runaway collapse
occurs sooner by a factor $\approx 4$ in our typical models.
For nonlinear perturbations with most energy on the largest scales
(hereafter, nonlinear flows),
the runaway collapse can be sped up
by a greater factor, $\approx 7$ for our typical models.
Supercritical clouds undergo prompt collapse
whenever nonlinear flows are present.
Subcritical model clouds may also be pushed into prompt
collapse by nonlinear flows that are significantly super-Alfv\'enic.
\item{\it Morphology of Clouds.}
Supercritical model clouds whose evolution is initiated by nonlinear
flows have a highly filamentary structure. Subcritical clouds with
initial nonlinear but trans-Alfv\'enic\ or sub-Alfv\'enic\ flows have a markedly
less filamentary structure. In these cases, magnetic fields
cause a rebound from the initial compression, and several oscillations
occur before the runaway collapse of the first cores.
Subcritical clouds with initially super-Alfv\'enic\ nonlinear flows promptly develop
highly filamentary structure with embedded collapsing cores.
\item{\it Velocity Profiles.}
Supercritical and transcritical model clouds which are driven into prompt collapse
have highly supersonic infall speeds at the core boundaries,
while subcritical model clouds typically have transonic or subsonic
infall speeds (relative to the velocity centroid) onto cores.
In the subcritical cases, the cores can have larger systematic motions than in
supercritical models, because the cores form within regions undergoing oscillatory
motions.
We believe that the large infall motions in the models with super-Alfv\'enic\
nonlinear flows may disqualify them as viable models for core formation,
given current observational results.
\item{\it Core Mass Distributions.}
Core formation initiated by nonlinear flows leads to broader core mass functions
than found in earlier studies of fragmentation initiated by small-amplitude
perturbations. This applies to models of any fixed initial mass-to-flux
ratio $\mu_0$.
However, the ultimate relation of such a core mass function to the
stellar initial mass function is not settled due to the irregular shape of
the fragments created by nonlinear flows.
These fragments may in turn break up into multiple objects at a later stage.
\item{\it Turbulent Decay.}
Supersonic initial velocity perturbations lead to an initially rapid
decay of kinetic energy in all models, on a time scale similar to the sound crossing
time across the half-thickness of the sheet. This rapid decay of turbulence
is in agreement with a wide variety of previous results in the literature.
However, subcritical model clouds can undergo oscillations that
reduce the decay rate of kinetic energy at later times. Furthermore, in the
limit of excellent neutral-ion coupling (flux-freezing), as may be present
in UV-ionized molecular cloud envelopes, large-scale wave modes may survive
for very long times.
\end{enumerate}
\section*{Acknowledgements}
We thank the anonymous referee for comments which improved
the discussion of results.
We also thank Stephanie Keating for creating color images and animations
using the IFRIT package developed by Nick Gnedin.
SB was supported by a grant from the Natural Sciences and Engineering
Research Council (NSERC) of Canada.
JW was supported by an NSERC Undergraduate Summer Research Award.
\section{General introduction}
The experimental breakthroughs of 1995 having led to the first realization
of a Bose-Einstein condensate in an atomic vapor
\cite{Cornell_bec,Hulet_bec,Ketterle_bec}
have opened the era of experimental studies of ultracold gases with
non-negligible or even strong interactions, in dimension lower than or equal to three~\cite{RevueBlochDalibard,RevueTrentoFermions, HouchesBEC,HouchesLowD,Varenna}.
In these systems, the thermal de Broglie wavelength and the typical distance
between atoms are much larger than the range of the interaction potential.
This so-called {\sl zero-range} regime has interesting
universal properties: Several quantities such as the thermodynamic
functions of the gas depend on the interaction potential only through
the scattering length $a$, a length that can be defined in any dimension
and that characterizes the low-energy scattering amplitude of two atoms.
This universality property holds for the weakly repulsive
Bose gas in three dimensions \cite{LHY} up to the order of expansion
in $(n a^3)^{1/2}$ corresponding to Bogoliubov theory
\cite{Wu1959,SeiringerLHY}, $n$ being the gas density.
It also holds for the weakly repulsive
Bose gas in two dimensions \cite{Schick,Popov,Lieb2D,MoraCastin}, even at the next order
beyond Bogoliubov theory \cite{MoraCastin2D}. For $a$ much larger than the range of the interaction potential, the ground state of $N$
bosons in two dimensions is
a universal $N$-body bound state~\cite{BruchTjon3bosons2D,Fedorov3bosons2D,Platter4bosons2D,HammerSon,Lee2D}.
In one dimension, the universality holds for
any scattering length, as exemplified by the fact that
the Bose gas with zero-range interaction is exactly solvable
by the Bethe ansatz both in the repulsive case \cite{LiebLiniger}
and in the attractive case \cite{McGuire,Herzog,Caux}.
For spin 1/2 fermions, the universality properties are expected to be even stronger.
The weakly interacting regimes in $3D$
\cite{LeeYang,HuangYang,Abrikosov,Galitski,
LiebSeiringerSolovej_FermionsT=0,
Seiringer_fermions}
and in $2D$ \cite{Bloom} are universal,
as well as for any scattering length in the Bethe-ansatz-solvable
$1D$ case
\cite{GaudinArticle,GaudinLivre}.
Universality is also expected to hold for an arbitrary scattering length
even in $3D$,
as was recently tested by experimental studies on the BEC-BCS crossover
using a Feshbach resonance, see~\cite{Varenna} and references therein, and e.g.~\cite{HuletClosedChannel, HuletPolarized,ThomasVirielExp,thomas_entropie_PRL,thomas_entropie_JLTP,JinPhotoemission,KetterleGap,GrimmModesTfinie, ZwierleinPolaron,SylPolaron,SylEOS,MukaiyamaEOS,
NirEOS,ZwierleinEOS,JinPseudogap,AustraliensC,AustraliensT,JinC},
and in agreement with unbiased Quantum Monte Carlo calculations \cite{bulgacQMC,zhenyaPRL,Juillet,BulgacCrossover,zhenyas_crossover,ChangAFMC,VanHouckePrepa};
and in $2D$,
a similar universal crossover from BEC to BCS is expected when the parameter $\ln (k_F a)$
[where $k_F$ is the Fermi momentum] varies from $-\infty$ to $+\infty$~\cite{Petrov2D,Miyake2D,Randeria2D,Zwerger2D,Leyronas4corps,Giorgini2D,Kohl2Dgap,Kohl2D}.
Mathematically, results on universality were obtained for the $N$-body problem in $2D$ \cite{Teta}.
In $3D$, mathematical results were obtained for the $3$-body problem~(see, e.g.,~\cite{MinlosFaddeev1,MinlosFaddeev2,VugalterZhislin,Shermatov,Albeverio3corps}).
The universality for the fermionic equal-mass $N$-body problem in $3D$ remains mathematically unproven~\footnote{The proof given in~\cite{Teta} that, for a sufficiently large number of equal-mass fermions,
the energy is unbounded from below, is actually incorrect, since the fermionic antisymmetry was not properly taken into account.
A theorem was published without proof in \cite{MinlosProceeding} implying that
the spectrum of the Hamiltonian of $N_\uparrow$ same-spin-state fermions of mass $m_\uparrow$ interacting with a distinguishable particle of mass $m_\downarrow$
is unbounded below,
not only
for $m_\uparrow=m_\downarrow$ and large enough $N_\uparrow$,
but
also for $N_\uparrow=3$ and $m_\uparrow/m_\downarrow$ larger than the critical mass ratio $5.29\dots$.
No proof was found yet for this theorem; it was only proven that no Efimov effect occurs for $N_\uparrow=3$, $N_\downarrow=1$ provided $m_\uparrow/m_\downarrow$ is sufficiently small~\cite{Minlos2011}.
It was recently shown that a four-body Efimov effect occurs in this $3+1$ body problem
(for an angular momentum $l=1$ and not for any other $l\le10$) and makes the spectrum unbounded below,
however for a widely different critical mass ratio $m_\uparrow/m_\downarrow\simeq 13.384$~\cite{CMP},
which sheds some doubts on \cite{MinlosProceeding}.
}.
Universality is also expected for mixtures
in $2D$~\cite{PetrovCristal,Leyronas4corps,LudoPedri}, and in $3D$ for Fermi-Fermi mixtures below a critical mass ratio~\cite{Efimov73,PetrovCristal,BaranovLoboShlyapMassesDiff,CMP}.
Above a critical mass ratio, the Efimov effect takes place, as it also takes place for bosons
\cite{Efimov,RevueBraaten}. In this case, the three-body problem depends
on a single additional parameter, the three-body parameter. The Efimov physics is presently under active experimental
investigation \cite{GrimmEfimov,FlorenceEfimov,KhaykovichEfimov,NatureEfimov,HuletEfimov,KhaykovichEfimov2,JochimEfimovRF}. It is not the subject of this paper (see \cite{50pages}).
In the zero-range regime, it is intuitive that the short-range or high-momenta properties of the gas are dominated by two-body physics.
For example the pair distribution function $g^{(2)}(\mathbf{r}_{12})$
of particles at distances $r_{12}$
much smaller than the de Broglie wavelength is expected
to be proportional to the modulus squared of the zero-energy two-body
scattering-state wavefunction $\phi(\mathbf{r}_{12})$, with a proportionality factor
$\Lambda_g$ depending on the many-body state of the gas.
Similarly the tail of the momentum distribution
$n(\mathbf{k})$, at
wavevectors much larger than the inverse de Broglie wavelength,
is expected to be proportional to the modulus squared of
the Fourier component $\tilde{\phi}(\mathbf{k})$ of the zero-energy scattering-state wavefunction,
with a proportionality factor $\Lambda_n$ depending on the many-body
state of the gas:
Whereas two colliding atoms in the gas
have a center of mass wavevector of the order of the inverse de Broglie
wavelength, their relative wavevector can access much larger values,
up to the inverse of the interaction range,
simply because the interaction potential has a width in the space
of relative momenta of the order of the inverse of its range in real space.
For these intuitive reasons, and with the notable exception of one-dimensional
systems, one expects that the mean interaction energy $E_{\rm int}$
of the gas, being
sensitive to the shape of $g^{(2)}$ at distances of the order
of the interaction range, is not universal, but diverges
in the zero-range limit; one also expects
that, apart from the $1D$ case,
the mean kinetic energy, being dominated by the tail
of the momentum distribution, is not universal and diverges
in the zero-range limit, a well-known fact
in the context of Bogoliubov theory for Bose gases
and of BCS theory for Fermi gases.
Since the total energy of the gas is universal, and $E_{\rm int}$
is proportional to $\Lambda_g$ while $E_{\rm kin}$ is proportional
to $\Lambda_n$, one expects that there exists a simple relation
between $\Lambda_g$ and $\Lambda_n$.
The precise link between the pair distribution function, the tail
of the momentum distribution and the energy of the gas
was first established for one-dimensional systems.
In \cite{LiebLiniger} the value of the
pair distribution function for $r_{12}=0$
was expressed in terms of the derivative of the gas energy with respect
to the one-dimensional scattering length, thanks to the Hellmann-Feynman
theorem. In \cite{Olshanii_nk} the tail
of $n(k)$ was also related to this derivative of the energy,
by using a simple and general property of the Fourier transform of a function
having discontinuous derivatives in isolated points.
In three dimensions, results in these directions were first obtained
for weakly interacting gases. For the weakly interacting Bose gas,
Bogoliubov theory contains the expected properties, in particular
on the short distance behavior of the pair distribution function
\cite{Huang_article,Holzmann,Glauber}
and the fact that the momentum distribution
has a slowly decreasing tail.
For the weakly interacting spin-1/2 Fermi gas, it was shown that
the BCS anomalous average (or pairing field)
$\langle \hat{\psi}_\uparrow(\mathbf{r}_1)
\hat{\psi}_\downarrow(\mathbf{r}_2)\rangle$ behaves at short
distances as the zero-energy two-body scattering wavefunction $\phi(\mathbf{r}_{12})$
\cite{Bruun},
resulting in a $g^{(2)}$ function indeed proportional
to $|\phi(\mathbf{r}_{12})|^2$ at short distances. It was however understood
later that the corresponding proportionality factor $\Lambda_g$
predicted by BCS theory is incorrect \cite{CarusottoCastin},
e.g.\ at zero temperature the BCS prediction drops exponentially with $1/a$
in the non-interacting limit $a\to 0^-$, whereas the correct result
drops as a power law in $a$.
More recently, in a series of two articles \cite{TanEnergetics,TanLargeMomentum},
explicit expressions for the proportionality factors
$\Lambda_g$ and $\Lambda_n$ were obtained in terms of the derivative
of the gas energy with respect to the inverse scattering length, for a spin-1/2 interacting Fermi
gas in three dimensions, for an arbitrary value of the scattering length,
that is, not restricting to the weakly interacting limit.
Later on, these results were rederived in \cite{Braaten,BraatenLong,ZhangLeggettUniv},
and also in \cite{WernerTarruellCastin} with very elementary methods
building on the aforementioned intuition that $g^{(2)}(\mathbf{r}_{12})\propto |\phi(\mathbf{r}_{12})|^2$
at short distances and $n(\mathbf{k})\propto |\tilde{\phi}(\mathbf{k})|^2$
at large momenta. These relations were tested by
numerical four-body calculations \cite{BlumeRelations}.
An explicit relation between $\Lambda_g$ and the interaction energy was derived in \cite{ZhangLeggettUniv}. Another fundamental relation, discovered in \cite{TanEnergetics} and recently generalized in \cite{TanSimple,CombescotC} to fermions in $2D$,
expresses the total energy as a functional of the momentum distribution and the spatial density.
\section{Contents}
Here we derive generalizations of the relations
of \cite{LiebLiniger,Olshanii_nk,TanEnergetics,TanLargeMomentum,ZhangLeggettUniv,TanSimple,CombescotC} to two dimensional gases,
and to the case of a small but non-zero interaction range (both on a lattice and in continuous space).
We also find entirely new results for
the first-order derivative of the energy with respect to the effective range,
as well as for the second-order derivative with respect to the scattering length.
We shall also include rederivations of known relations using our elementary methods.
We treat in detail
the case of spin-1/2 fermions, with equal masses in the two spin states, both in three dimensions and
in two dimensions.
The discussion of spinless bosons and arbitrary mixtures is deferred to another article, as it may involve
the Efimov effect in three dimensions~\cite{CompanionBosons}.
This article is organized as follows.
Models, notations and some basic properties are introduced in Section~\ref{subsec:models}.
Relations for zero-range interactions are
summarized in Table~\ref{tab:fermions} and derived
for pure states
in Section \ref{sec:ZR}. We then consider lattice models (Tab.~\ref{tab:latt} and Sec.~\ref{sec:latt}) and finite-range models in continuous space (Tab.~\ref{tab:V(r)} and Sec.~\ref{sec:V(r)}). In Section~\ref{sec:re} we derive a model-independent expression for the correction to the energy due to a finite range or a finite effective range of the interaction, and we relate this energy correction
to the subleading short distance behavior of the pair distribution function and to the coefficient of
the $1/k^6$ subleading tail of the momentum distribution (see Tab.~\ref{tab:re}).
The case of general statistical mixtures of pure states or of stationary states is discussed in Sec.~\ref{sec:stat_mix},
and the case of thermodynamic equilibrium states in Sec.~\ref{subsec:finiteT}.
Finally we present applications of the general relations: For two particles and three particles in harmonic traps we compute corrections to exactly solvable cases (Sec. \ref{subsec:appli_deux_corps} and Sec. \ref{subsec:appl_3body}).
For the unitary gas trapped in an isotropic harmonic potential, we determine how the equidistance between levels
within a given energy ladder (resulting from the $SO(2,1)$ dynamical symmetry) is affected by finite $1/a$ and finite range
corrections, which leads to a frequency shift and a collapse of the breathing mode of the zero-temperature
gas~(Sec.~\ref{subsec:contactN}).
For the bulk unitary gas, we check that general relations are satisfied by existing fixed-node Monte Carlo
data~\cite{Giorgini,Giorgini_nk,LoboGiorgini_g2}
for correlation functions of the unitary gas~(Sec.~\ref{sec:FNMC}).
We quantify the finite range corrections to the unitary gas energy in typical experiments,
which is required for precise measurements of its equation of state~(Sec.~\ref{subsec:app_manips}).
We conclude in Section \ref{sec:conclusion}.
\begin{table*
\begin{tabular}{|cc|cc|}
\hline
Three dimensions && Two dimensions& \\
\hline
\vspace{-4mm}
& & & \\
$\displaystyle \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\underset{r_{ij}\to0}{=}
\left( \frac{1}{r_{ij}}-\frac{1}{a} \right) \, A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right)
+O(r_{ij})\ $
&
(1a)
&
$\displaystyle \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\underset{r_{ij}\to0}{=}
\ln( r_{ij}/a ) \, A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right)
+O(r_{ij})\ $
&
(1b)
\vspace{-4mm}
\\
&&& \\
\hline
\multicolumn{4}{|c|}{\vspace{-4mm}} \\
\multicolumn{3}{|c}{
$\displaystyle
( A^{(1)},A^{(2)} )\equiv \sum_{i<j} \int \Big(\prod_{k\neq i,j} d^d\! r_k \Big) d^d\! R_{ij}
A^{(1)*}_{ij}(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j})
A^{(2)}_{ij}(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j})$ }
& (2)
\\
\hline
\multicolumn{4}{|c|}{\vspace{-4mm}} \\
\multicolumn{3}{|c}{
$\displaystyle
( A^{(1)},\mathcal{H} A^{(2)} )\equiv \sum_{i<j} \int \Big(\prod_{k\neq i,j} d^d\! r_k \Big) d^d\! R_{ij}
A^{(1)*}_{ij}(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}) \mathcal{H}_{ij}
A^{(2)}_{ij}(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j})$ }
& (3) \vspace{-4mm}
\\
\multicolumn{4}{|c|}{} \\
\hline
\end{tabular}
\caption{Notation for the regular part $A$ of the $N$-body wavefunction appearing in the contact conditions (first line,
with $\mathbf{R}_{ij}=(\mathbf{r}_i+\mathbf{r}_j)/2$ fixed),
for the scalar product between such regular parts (second line) and for corresponding matrix elements of operators $\mathcal{H}_{ij}$
acting on $\mathbf{R}_{ij}$ and on the $\mathbf{r}_k$, $k\neq i,j$
(third line).
\label{tab:notations}}
\end{table*}
\section{Models, notations, and basic properties} \label{subsec:models}
We now introduce the three models used in this work to account for interparticle interactions and associated notations,
together with some basic properties to be used in some of the derivations.
For a fixed number $N_\sigma$ of fermions in each spin state $\sigma=\uparrow,\downarrow$, one can consider that particles $1,\ldots,N_\uparrow$ have a spin~$\uparrow$ and particles $N_\uparrow+1,\ldots,N_\uparrow+N_\downarrow=N$ have a spin~$\downarrow$, so that the wavefunction $\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)$
(normalized to unity) changes sign when one exchanges the positions of two particles having the same spin~\footnote{ The corresponding state vector is $|\Psi\rangle=
[N!/(N_\uparrow!N_\downarrow!)]^{1/2} \hat{A} \left( |\uparrow,\ldots,\uparrow,\downarrow,\ldots,\downarrow\rangle \otimes |\psi\rangle \right)$ where there are $N_\uparrow$ spins $\uparrow$ and $N_\downarrow$ spins $\downarrow$,
and the operator $\hat{A}$ antisymmetrizes with respect to all particles. The wavefunction $\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)$ is then proportional to $\left( \langle \uparrow,\ldots,\uparrow,\downarrow,\ldots,\downarrow | \otimes \langle \mathbf{r}_1,\ldots,\mathbf{r}_N | \right) |\Psi\rangle$, with the proportionality factor
$(N!/N_\uparrow!N_\downarrow !)^{1/2}$.}.
\subsection{Zero-range model}
In this well-known model (see e.g. \cite{AlbeverioLivre,YvanHouchesBEC,PetrovJPhysB,Efimov, MaximLudo2D,RevueBraaten,YvanVarenna,WernerThese} and refs. therein)
the interaction potential is replaced by boundary conditions on the $N$-body wavefunction: For any pair of particles $i\neq j$, there exists a function $A_{ij}$, hereafter called regular part of $\psi$, such that
[Tab. I, Eq.~(1a)] holds in the $3D$ case
and
[Tab. I, Eq.~(1b)] holds in the $2D$ case,
where the limit of vanishing distance $r_{ij}$ between particles $i$ and $j$ is taken for a fixed position
of their center of mass $\mathbf{R}_{ij}=(\mathbf{r}_i+\mathbf{r}_j)/2$
and fixed positions of the remaining particles $(\mathbf{r}_k)_{k\neq i,j}$ different from
$\mathbf{R}_{ij}$.
Fermionic symmetry of course imposes $A_{ij}=0$ if particles $i$ and $j$ have the same spin.
When none of the $\mathbf{r}_i$'s coincide, there is no interaction potential and Schr\"odinger's equation reads
$H\,\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)=E\,\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)$
with
$\displaystyle H=-\frac{\hbar^2}{2 m}\sum_{i=1}^N
\Delta_{\mathbf{r}_i} + H_{\rm trap}$,
where $m$ is the atomic mass
and the trapping potential energy is
\begin{equation}
H_{\rm trap}\equiv\sum_{i=1}^N U(\mathbf{r}_i),
\label{eq:Htrap}
\end{equation}
$U$ being an external trapping potential.
The crucial difference between the Hamiltonian $H$ and the non-interacting Hamiltonian is the boundary condition [Tab. I, Eqs. (1a,1b)].
\subsection{Lattice models} \label{sec:models:lattice}
These models are used for quantum Monte Carlo calculations \cite{bulgacQMC,zhenyaPRL,zhenyaNJP,Juillet,ChangAFMC,BulgacCrossover}. They can also be convenient for analytics, as used in \cite{MoraCastin,LudoYvanBoite, WernerTarruellCastin,MoraCastin2D} and in this work.
Particles live on a lattice, i.e., the coordinates are integer multiples of the lattice spacing $b$. The Hamiltonian is
\begin{equation}
H=H_{\rm kin}+
H_{\rm int}+H_{\rm trap}
\label{eq:Hlatt}
\end{equation}
with, in first quantization, the kinetic energy
\begin{equation}
H_{\rm kin}=-\frac{\hbar^2}{2 m}\sum_{i=1}^N
\Delta_{\mathbf{r}_i},
\label{eq:def_H0}
\end{equation}
the interaction energy
\begin{equation}
H_{\rm int}=g_0 \sum_{i<j} \delta_{\mathbf{r}_i,\mathbf{r}_j} b^{-d},
\label{eq:def_W}
\end{equation}
and the trapping potential energy defined by (\ref{eq:Htrap});
i.e., in second quantization,
\begin{eqnarray}
H_{\rm kin}&=&\sum_\sigma\int_D \frac{d^dk}{(2\pi)^d}\,\epsilon_{\mathbf{k}} c_\sigma^\dagger(\mathbf{k})c_\sigma(\mathbf{k})
\label{eq:Hkin_second_quant}
\\
H_{\rm int}&=&g_0\sum_\mathbf{r} b^d (\psi^\dagger_\uparrow\psi^\dagger_\downarrow\psi_\downarrow\psi_\uparrow)(\mathbf{r})
\label{eq:defW_2e_quant}
\\
H_{\rm trap}&=&\sum_{\mathbf{r},\sigma} b^d U(\mathbf{r}) (\psi_\sigma^\dagger \psi_\sigma)(\mathbf{r}).
\end{eqnarray}
Here $d$ is the space dimension,
$\epsilon_\mathbf{k}$ is the dispersion relation, and the field operator $\hat{\psi}$ obeys the discrete anticommutation relations
$\{\hat{\psi}_\sigma(\mathbf{r}), \hat{\psi}^\dagger_{\sigma'}(\mathbf{r}')\}=b^{-d} \delta_{\mathbf{r} \mathbf{r}'} \delta_{\sigma\sigma'}$.
The operator $ c_\sigma^\dagger(\mathbf{k})$ creates a particle in the plane wave state $|\mathbf{k}\rangle$ defined by
$\langle \mathbf{r} | \mathbf{k} \rangle = e^{i \mathbf{k} \cdot \mathbf{r}}$ for any $\mathbf{k}$ belonging to the first Brillouin zone
$D=\left(-\frac{\pi}{b},\frac{\pi}{b}\right]^d$. The corresponding anticommutation relations
are $\{c_\sigma(\mathbf{k}),c^\dagger_{\sigma'}(\mathbf{k}')\}= (2\pi)^d \delta_{\sigma\sigma'} \delta(\mathbf{k}-\mathbf{k}')$
if $\mathbf{k}$ and $\mathbf{k}'$ are both in the first Brillouin zone \footnote{Otherwise
$\delta(\mathbf{k}-\mathbf{k}')$ has to be replaced by the periodic version
$\sum_{\mathbf{K}\in (2\pi/b)\mathbb{Z}^d} \delta(\mathbf{k}-\mathbf{k}'-\mathbf{K})$.}.
The operator $\Delta$ in (\ref{eq:def_H0}) is the lattice version of the Laplacian defined by
$-\frac{\hbar^2}{2m}\langle \mathbf{r} | \Delta_\mathbf{r} | \mathbf{k} \rangle \equiv \epsilon_\mathbf{k} \langle \mathbf{r} | \mathbf{k} \rangle$.
The simplest choice for the dispersion relation is $\displaystyle\epsilon_{\mathbf{k}}=\frac{\hbar^2k^2}{2m}$~\cite{MoraCastin,Juillet,LudoYvanBoite, MoraCastin2D,ChangAFMC}. Another choice, used in~\cite{zhenyaPRL,zhenyaNJP}, is the dispersion relation of the Hubbard model:
$\displaystyle\epsilon_{\mathbf{k}}=\frac{\hbar^2}{m b^2}\sum_{i=1}^d\left[1-\cos(k_i b)\right]$. More generally, what follows applies to any $\epsilon_{\mathbf{k}}$ such that $\displaystyle\epsilon_{\mathbf{k}}\underset{b\to0}{\rightarrow}\frac{\hbar^2k^2}{2m}$ sufficiently rapidly and $\epsilon_{-\mathbf{k}}=\epsilon_{\mathbf{k}}$.
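As a quick consistency check (a sketch added here, not part of the original derivation), the Hubbard dispersion relation satisfies both requirements: it is even in $\mathbf{k}$, and a small-$b$ expansion at fixed $\mathbf{k}$ gives
\begin{multline}
\epsilon_{\mathbf{k}}
= \frac{\hbar^2}{m b^2}\sum_{i=1}^d\left[\frac{(k_i b)^2}{2}-\frac{(k_i b)^4}{24}+O(b^6)\right]
\\
= \frac{\hbar^2 k^2}{2m}-\frac{\hbar^2 b^2}{24\,m}\sum_{i=1}^d k_i^4+O(b^4),
\end{multline}
so that $\epsilon_{\mathbf{k}}\to\hbar^2k^2/(2m)$ as $b\to0$, with a leading correction of relative order $(kb)^2$.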
A key quantity is the zero-energy scattering state $\phi(\mathbf{r})$, defined by the two-body Schr\"odinger equation (with the center of mass at rest)
\begin{equation}
\left(-\frac{\hbar^2}{m}\,\Delta_\mathbf{r} +g_0 \frac{\delta_{\mathbf{r},\mathbf{0}}}{b^d}\right) \phi(\mathbf{r}) = 0
\end{equation}
and by the normalization conditions
\begin{eqnarray}
\phi(\mathbf{r})&\underset{r\gg b}{\simeq}& \frac{1}{r}-\frac{1}{a} \ \ \ {\rm in}\ 3D
\label{eq:normalisation_phi_tilde_3D}
\\
\phi(\mathbf{r})&\underset{r\gg b}{\simeq}& \ln(r/a) \ \ \ {\rm in}\ 2D.
\label{eq:normalisation_phi_tilde_2D}
\end{eqnarray}
A two-body analysis, detailed in Appendix~\ref{app:2body}, yields the relation between the scattering length and the bare coupling constant $g_0$,
in three and two dimensions:
\begin{eqnarray}
\!\!\!\!\!\!\!\!\!\!\!\! &&\frac{1}{g_0} \stackrel{3D}{=}\frac{m}{4\pi\hbar^2 a}\!-\!\! \int_D \!\!\frac{d^3 k}{(2\pi)^3} \frac{1}{2\epsilon_{\mathbf{k}}}
\label{eq:g0_3D}
\\
\!\!\!\!\!\!\!\!\!\!\!\! &&\frac{1}{g_0} \stackrel{2D}{=} \lim_{q\to 0} \left[
-\frac{m}{2\pi\hbar^2}\ln(\frac{a q e^\gamma}{2}) \!
+\!\!\int_D\!\! \frac{d^2 k}{(2\pi)^2} \mathcal{P} \frac{1}{2(\epsilon_{\mathbf{q}} - \epsilon_{\mathbf{k}})} \right]
\label{eq:g0_2D}
\end{eqnarray}
where $\gamma=0.577216\ldots$ is Euler's constant and $\mathcal{P}$ is the principal value.
This implies that (for constant $b$):
\begin{eqnarray}
\frac{d(1/g_0)}{d(1/a)}=\frac{m}{4\pi\hbar^2 } & & \ \ \ {\rm in}\ 3D
\label{eq:dg0_da_3D}
\\
\frac{d(1/g_0)}{d(\ln a)}=
-\frac{m}{2\pi\hbar^2}
& & \ \ \ {\rm in}\ 2D.
\label{eq:dg0_da_2D}
\end{eqnarray}
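These derivatives follow directly from Eqs.~(\ref{eq:g0_3D},\ref{eq:g0_2D}) at fixed $b$, since the integrals over the Brillouin zone do not depend on $a$; e.g. in $3D$ (a one-line sketch),
\begin{equation}
\frac{d(1/g_0)}{d(1/a)}
= \frac{d}{d(1/a)}\left[\frac{m}{4\pi\hbar^2 a}
- \int_D \frac{d^3k}{(2\pi)^3}\,\frac{1}{2\epsilon_{\mathbf{k}}}\right]
= \frac{m}{4\pi\hbar^2},
\end{equation}
and in $2D$ only the $\ln a$ term of Eq.~(\ref{eq:g0_2D}) depends on $a$, giving $-m/(2\pi\hbar^2)$.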
Another useful property derived in Appendix~\ref{app:2body} is
\begin{eqnarray}
\phi(\mathbf{0})&=&-\frac{4\pi\hbar^2}{m g_0}\ \ \ {\rm in}\ 3D
\label{eq:phi0_vs_g0}
\\
\phi(\mathbf{0})&=&\frac{2\pi\hbar^2}{m g_0}\ \ \ {\rm in}\ 2D,
\label{eq:phi0_vs_g0_2D}
\end{eqnarray}
which, together with (\ref{eq:dg0_da_3D},\ref{eq:dg0_da_2D}), gives
\begin{eqnarray}
|\phi(\mathbf{0})|^2&=&\frac{4\pi\hbar^2}{m}\frac{d(-1/a)}{dg_0}\ \ \ {\rm in}\ 3D
\label{eq:phi_tilde_3D}
\\
|\phi(\mathbf{0})|^2&=&\frac{2\pi\hbar^2}{m}\frac{d(\ln a)}{dg_0}\ \ \ {\rm in}\ 2D.
\label{eq:phi_tilde_2D}
\end{eqnarray}
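The last step is a simple chain rule; e.g. in $3D$, combining Eqs.~(\ref{eq:dg0_da_3D}) and (\ref{eq:phi0_vs_g0}) (sketch),
\begin{equation}
\frac{d(-1/a)}{dg_0}
= -\frac{d(1/a)}{d(1/g_0)}\,\frac{d(1/g_0)}{dg_0}
= \frac{4\pi\hbar^2}{m}\,\frac{1}{g_0^2}
= \frac{m}{4\pi\hbar^2}\,|\phi(\mathbf{0})|^2,
\end{equation}
which reproduces Eq.~(\ref{eq:phi_tilde_3D}); the $2D$ relation (\ref{eq:phi_tilde_2D}) follows in the same way.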
In the zero-range limit ($b\to 0$ with $g_0$ adjusted in such a way that $a$ remains constant), it is expected
that the spectrum of the lattice model converges to the one of the zero-range model,
as explicitly checked for three particles in~\cite{LudoYvanBoite},
and that any eigenfunction $\psi(\mathbf{r}_1,\dots,\mathbf{r}_N)$ of the lattice model tends to the corresponding eigenfunction of the zero-range model
provided all interparticle distances remain much larger than $b$.
For any stationary state, let us denote by $1/k_{\rm typ}$ the typical length-scale on which the zero-range model's wavefunction varies:
e.g. for the lowest eigenstates,
this
is on the order of the mean interparticle distance, or
on the order of $a$ in the regime where $a$ is small and positive and dimers are formed.
The zero-range limit is then reached if $k_{\rm typ} b\ll1$.
This
notion of
typical wavevector $k_{\rm typ}$
can also be applied to
the case of a thermal equilibrium state, since most significantly populated eigenstates then have a $k_{\rm typ}$ on the same order;
it is then expected
that the thermodynamic potentials converge to the ones of the zero-range model when $b\to0$,
and that this limit is reached provided $k_{\rm typ} b \ll 1$.
For the homogeneous gas,
defining a thermal wavevector $k_T$ by $\hbar^2 k_T^2/(2m)=k_B\,T$,
we have $k_{\rm typ}\sim\max(k_F,k_T)$ for $a<0$ and
$k_{\rm typ}\sim\max(k_F,k_T,1/a)$
for $a>0$.
For lattice models, it will prove convenient to define the regular part $A$ by
\begin{multline}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_i=\mathbf{R}_{ij},\ldots,\mathbf{r}_j=\mathbf{R}_{ij},\ldots,\mathbf{r}_N)=\phi(\mathbf{0}) \\
\times A_{ij}(\mathbf{R}_{ij},(\mathbf{r}_k)_{k\neq i,j}).
\label{eq:def_A_reseau}
\end{multline}
In the zero-range regime $k_{\rm typ} b\ll1$, when the distance $r_{ij}$ between two particles of opposite spin is $\ll 1/k_{\rm typ}$ while all the other interparticle distances are much larger than $b$ and than $r_{ij}$, the many-body wavefunction is proportional to $\phi(\mathbf{r}_j-\mathbf{r}_i)$, with a proportionality constant given by~(\ref{eq:def_A_reseau}):
\begin{equation}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\simeq\phi(\mathbf{r}_j-\mathbf{r}_i)\,A_{ij}(\mathbf{R}_{ij},(\mathbf{r}_k)_{k\neq i,j})
\label{eq:psi_courte_dist}
\end{equation}
where $\mathbf{R}_{ij}=(\mathbf{r}_i+\mathbf{r}_j)/2$.
If moreover $r_{ij}\gg b$, $\phi$ can be replaced by its asymptotic form (\ref{eq:normalisation_phi_tilde_3D},\ref{eq:normalisation_phi_tilde_2D}); since
the contact conditions [Tab. I, Eqs. (1a,1b)] of the zero-range model must be recovered, we see that the lattice model's regular part tends to the zero-range model's regular part in the zero-range limit.
\subsection{Finite-range continuous-space models}
Such models are used in numerical few-body
correlated Gaussian and many-body fixed-node Monte Carlo calculations (see e.\ g.\ \cite{Pandha,Giorgini,BlumeUnivPRL,StecherLong,BlumeRelations, RevueTrentoFermions, Giorgini2D} and refs. therein).
They are also relevant to neutron matter \cite{GezerlisCarlson}.
The Hamiltonian reads
\begin{equation}
H=H_0+\sum_{i=1}^{N_\uparrow}\sum_{j=N_\uparrow+1}^N\,V(r_{ij}),
\end{equation}
$H_0$ being defined by (\ref{eq:def_H0}) where $\Delta_{\mathbf{r}_i}$ now stands for the usual Laplacian,
and $V(r)$ is an
interaction potential between particles of opposite spin, which vanishes for $r>b$ or at least decays quickly enough for $r\gg b$.
The two-body zero-energy scattering state $\phi(r)$ is again defined by the Schr\"odinger equation $-(\hbar^2/m)\Delta_\mathbf{r}\phi+V(r)\phi=0$ and the boundary condition (\ref{eq:normalisation_phi_tilde_3D}) or (\ref{eq:normalisation_phi_tilde_2D}).
The
zero-range regime
is again reached for $k_{\rm typ} b\ll1$ with $k_{\rm typ}$ the typical relative wavevector~\footnote{
For purely attractive interaction potentials such as the square-well potential, above a critical particle number,
the ground state is a collapsed state and
the zero-range regime can only be reached for certain excited states (see e.g. \cite{LeChapitre} and refs. therein).}. Equation (\ref{eq:psi_courte_dist}) again holds in the zero-range regime, where $A$ now simply stands for the zero-range model's regular part.
\section{Relations in the zero-range limit}
\label{sec:ZR}
\begin{table*}
\begin{tabular}{|cc|cc|}
\hline
Three dimensions & & Two dimensions & \\
\hline
\multicolumn{4}{|c|}{\vspace{-4mm}}
\\
\multicolumn{3}{|c}{$C\equiv {\displaystyle \lim_{k\to +\infty}}
k^4 n_\sigma(\mathbf{k})$}
& (1) \vspace{-4mm}
\\
\multicolumn{4}{|c|}{} \\
\hline
\vspace{-4mm}
& & & \\
\vspace{-4mm}
$\displaystyle C =
(4\pi)^2\ (A,A)
$
&
(2a)
&
$\displaystyle C =
(2\pi)^2\,(A,A)
$
&
(2b)
\\
& & & \\
\hline
& & &
\vspace{-4mm}
\\
$\displaystyle
\int d^3R \,
g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\underset{r\to0}{\sim}
\frac{C}{(4\pi)^2}
\frac{1}{r^2}
$
&(3a) \vspace{-4mm}
&
$\displaystyle
\int d^2R \,
g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\underset{r\to0}{\sim}
\frac{C}{(2\pi)^2}
\ln^2 r$
&
(3b)
\\
& & &
\\
\hline & & &
\vspace{-4mm}
\\
$\displaystyle\frac{dE}{d(-1/a)} = \frac{\hbar^2 C}{4\pi m} $ &
(4a) &
$\displaystyle\frac{dE}{d(\ln a)} = \frac{\hbar^2 C}{2\pi m} $
& (4b) \vspace{-4mm}
\\
&& & \\
\hline & & & \vspace{-4mm} \\
$\displaystyle E - E_{\rm trap} = \frac{\hbar^2 C}{4\pi m a} $
& &
$\displaystyle E - E_{\rm trap} = \lim_{\Lambda\to\infty}\left[-\frac{\hbar^2 C}{2\pi m} \ln \left(\frac{a \Lambda e^\gamma}{2}\right) \right.
$
&
\vspace{-3mm}
\\
& & & \\
$\displaystyle +\sum_{\sigma} \int \frac{d^3\!k}{(2\pi)^3} \frac{\hbar^2 k^2}{2m}
\left[n_\sigma(\mathbf{k}) - \frac{C}{k^4}\right]$
&(5a)
&
$\displaystyle +\sum_{\sigma} \left. \int_{k<\Lambda} \frac{d^2\!k}{(2\pi)^2} \frac{\hbar^2 k^2}{2m}
n_\sigma(\mathbf{k}) \right]$
& (5b) \vspace{-4mm}
\\&
&&\\
\hline &&& \vspace{-4mm} \\
$\displaystyle \int d^3R \,
g_{\sigma \sigma}^{(1)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\underset{r\to0}{=}
N_\sigma -\frac{C}{8\pi}\, r + O(r^2)\ \ \ $
&
(6a) &
$\displaystyle \int d^2R \,
g_{\sigma \sigma}^{(1)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\underset{r\to0}{=}
N_\sigma +\frac{C}{4\pi}\, r^2\ln r + O(r^2)\ \ \ $
&
(6b)
\vspace{-4mm}
\\ &&& \\
\hline
&&
&
\vspace{-4mm}
\\
$\displaystyle \frac{1}{3} \sum_{i=1}^3 \sum_\sigma \int d^3R \,
g_{\sigma \sigma}^{(1)} \left(\mathbf{R}+\frac{r {\bf u_i}}{2},
\mathbf{R}-\frac{r {\bf u_i}}{2}\right)
\underset{r\to 0}{=}
N$
&
&
$\displaystyle \frac{1}{2} \sum_{i=1}^2 \sum_\sigma
\int d^2R \,
g_{\sigma \sigma}^{(1)} \left(\mathbf{R}+\frac{r {\bf u_i}}{2},
\mathbf{R}-\frac{r {\bf u_i}}{2}\right)
\underset{r\to 0}{=}
N
$&
\\
$\displaystyle -\frac{C}{4\pi}r -\frac{m}{3\hbar^2}\left(
E-E_{\rm trap} - \frac{\hbar^2 C}{4\pi m a}
\right) r^2 + o(r^2)$
&(7a)&
$\displaystyle +\frac{C}{4\pi}r^2
\left[\ln\left(\frac{r}{a}\right) -1\right]
-\frac{m}{2\hbar^2}\left(
E-E_{\rm trap}
\right) r^2 + o(r^2)$
&(7b) \vspace{-4mm}
\\&& & \\
\hline
&& & \vspace{-4mm}
\\$\displaystyle\frac{1}{2} \frac{d^2E_n}{d(-1/a)^2}
= \left(\frac{4\pi\hbar^2}{m}\right)^2 \sum_{n',E_{n'}\neq E_n}
\frac{|(A^{(n')},A^{(n)})|^2}{E_n-E_{n'}}$
&(8a) &
$\displaystyle\frac{1}{2} \frac{d^2E_n}{d(\ln a)^2}
= \left(\frac{2\pi\hbar^2}{m}\right)^2 \sum_{n',E_{n'}\neq E_n}
\frac{|(A^{(n')},A^{(n)})|^2}{E_n-E_{n'}} $
&
(8b) \vspace{-4mm}
\\&&
& \\
\hline & & & \vspace{-4mm} \\
$\displaystyle\left(\frac{d\bar{E}}{d(-1/a)}\right)_{\!S} =\left(\frac{dF}{d(-1/a)}\right)_{T} = \frac{\hbar^2 C}{4\pi m} $ &
(9a) &
$\displaystyle\left(\frac{d\bar{E}}{d(\ln a)}\right)_{\!S} =\left(\frac{dF}{d(\ln a)}\right)_{T} = \frac{\hbar^2 C}{2\pi m}$
&
(9b)\vspace{-4mm} \\
&& &
\\ \hline
&&
&\vspace{-4mm} \\
$\displaystyle\left(\frac{d^2F}{d(-1/a)^2}\right)_T < 0 $
&(10a)
&
$\displaystyle\left(\frac{d^2F}{d(\ln a)^2}\right)_T < 0 $
&
(10b) \vspace{-4mm}
\\
&& &\\
\hline
&&& \vspace{-4mm} \\
$\displaystyle\left(\frac{d^2\bar{E}}{d(-1/a)^2}\right)_{\!S} < 0 $
&(11a)&
$\displaystyle\left(\frac{d^2\bar{E}}{d(\ln a)^2}\right)_{\!S} < 0 $
&(11b)\vspace{-4mm}\\
&&&\\
\hline
& && \vspace{-4mm} \\
$\displaystyle\frac{dE}{dt} = \frac{\hbar^2 C}{4\pi m} \frac{d(-1/a)}{dt}+
\Big<\frac{dH_{\rm trap}}{dt}\Big>$
&(12a)
&
$\displaystyle\frac{dE}{dt} = \frac{\hbar^2 C}{2\pi m} \frac{d(\ln a)}{dt}+
\Big<\frac{dH_{\rm trap}}{dt}\Big>$
&
(12b)\vspace{-4mm}\\
&& & \\
\hline
\end{tabular}
\caption{Relations for spin-1/2 fermions with zero-range interactions.
The definition (1) of $C$, as well as the relations in lines 3, 5, 6 and 7,
concern any (non-pathological) statistical mixture
of states which satisfy the contact conditions~[Tab.~I, line~1] (with real wavefunctions for line 7).
Line~2
holds for any pure state; here $A$ is the regular part of the wavefunction appearing in the contact condition,
and $(A,A)$ is its squared norm (defined in Tab.~I).
Lines 4 and 8
hold for any stationary state.
Lines 9-11 hold at thermal equilibrium in the canonical ensemble.
Line 12 holds for any time-dependence of scattering length and trapping potential, and any corresponding time-dependent statistical mixture.
Many of the $3D$ relations were originally obtained in~\cite{TanEnergetics,TanLargeMomentum} (see text), while the $2D$ relation~(5b)
was obtained in~\cite{CombescotC} for the homogeneous system and in \cite{TanSimple} (in a different form) for the general case.
\label{tab:fermions}}
\end{table*}
We now derive relations for the zero-range model. For some of the derivations we will use a lattice model and then take the zero-range limit.
We recall that we derive all relations for pure states in this section, the generalization to statistical mixtures and the discussion of thermal equilibrium being deferred to Sections~\ref{sec:stat_mix} and \ref{subsec:finiteT}.
\subsection{Tail of the momentum distribution}
\label{sec:C_nk}
In this subsection as well as in the following subsections~\ref{sec:g2}, \ref{sec:energy_thm},
\ref{sec:g1}, \ref{subsec:tdote},
we consider a many-body pure state whose wavefunction $\psi$ satisfies the contact condition~[Tab. I, Eqs. (1a,1b)].
We now show that
the momentum distribution
$n_\sigma(\mathbf{k})$ has a $\sigma$-independent
tail proportional to $1/k^4$, with a coefficient denoted by $C$~[Tab. II, Eq. (1)].
$C$ is usually referred to as the ``contact''.
We shall also show that
$C$ is related by [Tab.~II, Eqs.~(2a,2b)]
to the norm of the regular part $A$ of the wavefunction (defined in Tab.~I).
In $3D$ these results were obtained in~\cite{TanLargeMomentum} \footnote{The existence of the $1/k^4$ tail had already been observed within a self-consistent approximate theory~\cite{Haussmann_PRB}.}.
Here the momentum distribution is defined in second quantization by
$\displaystyle n_\sigma(\mathbf{k}) =\langle\hat{n}_\sigma(\mathbf{k})\rangle= \langle {c}^\dagger_\sigma(\mathbf{k}) {c}_\sigma(\mathbf{k}) \rangle$
where ${c}_\sigma(\mathbf{k})$ annihilates a particle of spin $\sigma$ in the plane-wave state $|\mathbf{k}\rangle$ defined by $\langle\mathbf{r}|\mathbf{k}\rangle=e^{i \mathbf{k}\cdot\mathbf{r}}$;
this corresponds to the normalization
\begin{equation}
\int \frac{d^d k}{(2\pi)^d}\,n_\sigma(\mathbf{k}) = N_\sigma.
\label{eq:def_nk_fermions}
\end{equation}
In first quantization,
\begin{equation}
n_\sigma(\mathbf{k})=\sum_{i:\sigma} \int \Big( \prod_{l\neq i} d^d r_l \Big)
\left| \int d^d r_i e^{-i \mathbf{k}\cdot\mathbf{r}_i} \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\right|^2
\label{eq:nk_1e_quant}
\end{equation}
where the sum is taken over all particles of spin $\sigma$: $i$ runs from $1$ to $N_\uparrow$ for $\sigma=\uparrow$,
and from $N_\uparrow+1$ to $N$ for $\sigma=\downarrow$.
\noindent{\underline{\it Three dimensions:}}
\\
The key point is that in the large-$k$ limit, the Fourier transform with respect to $\mathbf{r}_i$ is dominated by the contribution of the short-distance divergence coming from the contact condition~[Tab. I, Eq. (1a)]:
\begin{multline}
\int d^3 r_i\, e^{-i \mathbf{k}\cdot\mathbf{r}_i} \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)
\underset{k\to\infty}{\simeq}\int d^3 r_i\, e^{-i \mathbf{k}\cdot\mathbf{r}_i} \\
\times \sum_{j,j\neq i} \frac{1}{r_{ij}} A_{ij}(\mathbf{r}_j,(\mathbf{r}_l)_{l\neq i,j}).
\label{eq:FT_sing_3D}
\end{multline}
A similar link between the short-distance singularity of the wavefunction and the tail of its Fourier transform was used to derive exact relations in $1D$ in~\cite{Olshanii_nk}.
From $\Delta(1/r)=-4\pi\delta(\mathbf{r})$, we have
$\displaystyle \int d^3 r \,e^{-i \mathbf{k}\cdot \mathbf{r}}\frac{1}{r}=\frac{4\pi}{k^2}$, so that
\begin{multline}
\int d^3 r_i \,e^{-i \mathbf{k}\cdot\mathbf{r}_i} \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)
\underset{k\to\infty}{\simeq} \frac{4\pi}{k^2}
\sum_{j,j\neq i} e^{-i \mathbf{k}\cdot\mathbf{r}_j} \\
\times A_{ij}(\mathbf{r}_j,(\mathbf{r}_l)_{l\neq i,j}).
\end{multline}
One inserts this into (\ref{eq:nk_1e_quant}) and expands the modulus squared.
After spatial integration over all the $\mathbf{r}_l$, $l\neq i$,
the crossed terms rapidly vanish in the large-$k$ limit, as they are the product
of $e^{i\mathbf{k}\cdot (\mathbf{r}_j-\mathbf{r}_{j'})}$ and of regular functions of $\mathbf{r}_j$ and $\mathbf{r}_{j'}$
\footnote{E.g.\ for $n_\downarrow(\mathbf{k})$ in the trapped three-body case,
with particles $1$ and $2$ in state $\uparrow$ and particle $3$ in state $\downarrow$, one has $i=3$ and
$j,j'=1$ or $2$. Then the crossed term $A_{31}(\mathbf{r}_1,\mathbf{r}_2)A_{32}(\mathbf{r}_2,\mathbf{r}_1)$ has to all orders finite derivatives with respect to $\mathbf{r}_1$ and $\mathbf{r}_2$,
except if $\mathbf{r}_1=\mathbf{r}_2$ where it vanishes as $|\mathbf{r}_1-\mathbf{r}_2|^{2s-2}$, $s>0$ not integer, see e.g.\ Eq.~(\ref{eq:separa_Aij}) and below that equation.
By a power counting argument, its Fourier transform with respect to $\mathbf{r}_1-\mathbf{r}_2$ contributes to the momentum distribution tail
as $1/k^{2s+5}=o(1/k^4)$; one recovers the ``three-close-particle" contribution mentioned
in a note of \cite{TanLargeMomentum}.}.
This yields
$n_\sigma(\mathbf{k})\underset{k\to\infty}{\sim}C/k^4$,
with the expression~[Tab. II, Eq. (2a)] of $C$ in terms of the norm $(A,A)$ defined in~[Tab. I, Eq. (2)].
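The Fourier-transform identity $\int d^3r\, e^{-i\mathbf{k}\cdot\mathbf{r}}/r = 4\pi/k^2$ used above can also be recovered symbolically. A sketch (Python with sympy): after angular integration the transform reduces to a radial integral, regularized here by $e^{-\epsilon r}$ with $\epsilon\to0$ at the end.

```python
import sympy as sp

r, k, eps = sp.symbols('r k epsilon', positive=True)

# Angular average of exp(-i k.r) over the sphere is sin(kr)/(kr), so the 3D
# Fourier transform of 1/r reduces to (4*pi/k) int_0^oo sin(k r) dr,
# regularized by exp(-eps*r) with eps -> 0 at the end.
radial = sp.integrate(sp.sin(k*r)*sp.exp(-eps*r), (r, 0, sp.oo))  # = k/(k^2+eps^2)
ft_3d = sp.limit(4*sp.pi/k * radial, eps, 0)
```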
\noindent{\underline{\it Two dimensions:}}
\\
The $2D$ contact condition~[Tab.~I, Eq.~(1b)] now gives
\begin{multline}
\int d^2 r_i \, e^{-i \mathbf{k}\cdot\mathbf{r}_i} \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)
\underset{k\to\infty}{\simeq}\int d^2 r_i \, e^{-i \mathbf{k}\cdot\mathbf{r}_i} \\
\times \sum_{j,j\neq i} \ln (r_{ij}) A_{ij}(\mathbf{r}_j,(\mathbf{r}_l)_{l\neq i,j}).
\label{eq:FT_sing_2D}
\end{multline}
From $\Delta(\ln r)=2\pi\delta(\mathbf{r})$, one has
$\!\!\displaystyle\int\!\! d^2 r \,e^{-i \mathbf{k}\cdot \mathbf{r}}\ln r=-\frac{2\pi}{k^2}$
and
\begin{multline}
\int d^2 r_i \,e^{-i \mathbf{k}\cdot\mathbf{r}_i} \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)
\underset{k\to\infty}{\simeq} -\frac{2\pi}{k^2}
\sum_{j,j\neq i} e^{-i \mathbf{k}\cdot\mathbf{r}_j} \\
\times A_{ij}(\mathbf{r}_j,(\mathbf{r}_l)_{l\neq i,j}).
\end{multline}
As in $3D$ this leads to
[Tab. II, Eq. (2b)].
\subsection{Pair distribution function at short distances}
\label{sec:g2}
The pair distribution function gives the probability density of finding a spin-$\uparrow$ particle at
$\mathbf{r}_\uparrow$ and a spin-$\downarrow$ particle at $\mathbf{r}_\downarrow$:
$\displaystyle g_{\uparrow\downarrow}^{(2)}\left(\mathbf{r}_\uparrow,\mathbf{r}_\downarrow\right)=\langle
(\hat{\psi}^\dagger_\uparrow\hat{\psi}_\uparrow)(\mathbf{r}_\uparrow)
(\hat{\psi}^\dagger_\downarrow
\hat{\psi}_\downarrow)(\mathbf{r}_\downarrow)\rangle=\int (\prod_{k=1}^{N} d^d r_k)
|\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)|^2
\sum_{i=1}^{N_\uparrow} \sum_{j=N_\uparrow+1}^{N}
\!\!\!\!\!\!\delta\left(\mathbf{r}_\uparrow\!-\!\mathbf{r}_i\right)\delta\left(\mathbf{r}_\downarrow\!-\!\mathbf{r}_j\right)$.
We set $\mathbf{r}_{\uparrow,\downarrow}=\mathbf{R}\pm \mathbf{r}/2$ and we integrate over $\mathbf{r}_i$ and
$\mathbf{r}_j$:
\begin{multline}
g_{\uparrow\downarrow}^{(2)}\left(\mathbf{R}+\frac{\mathbf{r}}{2},\mathbf{R}-\frac{\mathbf{r}}{2}\right)
= \sum_{i=1}^{N_\uparrow}
\sum_{j=N_\uparrow+1}^{N}
\int \Big( \prod_{k\neq i,j} d^d r_k \Big) \\
\left| \psi\left(\mathbf{r}_1,\ldots,\mathbf{r}_i=\mathbf{R}+\frac{\mathbf{r}}{2},\ldots,\mathbf{r}_j=\mathbf{R}-\frac{\mathbf{r}}{2},\ldots,\mathbf{r}_N\right) \right|^2
\label{eq:def_g2_psi}
\end{multline}
Let us define the spatially integrated pair distribution function~\footnote{For simplicity, we refrain here from expressing $C$ as the integral of a ``contact density'' $\mathcal{C}(\mathbf{R})$
related to the small-$r$ behavior of the local pair distribution function
$g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\mathbf{r}/2,
\mathbf{R}-\mathbf{r}/2\right)$
as was done for the $3D$ case in~\cite{TanEnergetics,TanLargeMomentum,Braaten}; this $\mathcal{C}(\mathbf{R})$ is then also related to the large-$k$ tail of the Wigner distribution [i.e. the Fourier transform with respect to $\mathbf{r}$ of the one-body density matrix $\langle\psi^\dagger_\sigma(\mathbf{R}-\mathbf{r}/2)\psi_\sigma(\mathbf{R}+\mathbf{r}/2)\rangle$], see Eq.~(30) of~\cite{TanEnergetics}.}
\begin{equation}
G^{(2)}_{\uparrow\downarrow}(\mathbf{r})\equiv
\int d^dR \
g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right),
\end{equation}
whose small-$r$ singular behavior we will show to be related to $C$ {\rm via}~[Tab.~II, Eqs.~(3a,3b)].
\noindent{\underline{\it Three dimensions:}}
\\
Replacing the wavefunction in~(\ref{eq:def_g2_psi}) by its asymptotic behavior given by the contact condition~[Tab. I, Eq. (1a)] immediately yields
\begin{equation}
G^{(2)}_{\uparrow\downarrow}(\mathbf{r})
\underset{r\to 0}{\sim}
\frac{(A,A)}{r^2}.
\end{equation}
Expressing $(A,A)$ in terms of $C$ through~[Tab. II, Eq. (2a)] finally gives
[Tab.~II, Eq.~(3a)].
In a measurement of all particle positions, the mean
total number of pairs of particles of opposite spin which are separated by a distance smaller than $s$ is
$N_{\rm pair}(s)=\int_{r<s} d^d r\, G^{(2)}_{\uparrow\downarrow}(\mathbf{r})$,
so that from [Tab.~II, Eq.~(3a)]
\begin{equation}
N_{\rm pair}(s)\underset{s\to 0}{\sim} \frac{C}{4\pi}
s,
\label{eq:Npair_3D}
\end{equation}
as obtained in~\cite{TanEnergetics,TanLargeMomentum}.
\noindent{\underline{\it Two dimensions:}}
\\
The contact condition~[Tab.~I, Eq.~(1b)] similarly leads to
[Tab.~II, Eq.~(3b)].
After integration over the region $r<s$ this gives
\begin{equation}
N_{\rm pair}(s)\underset{s\to 0}{\sim} \frac{C}{4\pi}
s^2
\ln^2 s.
\label{eq:Npair_2D}
\end{equation}
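Both small-$s$ behaviors of $N_{\rm pair}$ follow from elementary radial integrals over [Tab.~II, Eqs.~(3a,3b)]; a quick symbolic check (Python with sympy, keeping only the leading order in the $2D$ case):

```python
import sympy as sp

r, s, C = sp.symbols('r s C', positive=True)

# 3D: G2 ~ C/((4 pi)^2 r^2), integrated over the ball r < s with measure 4 pi r^2 dr.
npair_3d = sp.integrate(C/(4*sp.pi)**2 / r**2 * 4*sp.pi*r**2, (r, 0, s))

# 2D: G2 ~ C/(2 pi)^2 ln^2(r), integrated over the disk r < s with measure 2 pi r dr;
# the leading small-s term should be C s^2 ln^2(s)/(4 pi).
npair_2d = sp.integrate(C/(2*sp.pi)**2 * sp.log(r)**2 * 2*sp.pi*r, (r, 0, s))
ratio_2d = sp.limit(npair_2d / (C*s**2*sp.log(s)**2/(4*sp.pi)), s, 0, '+')
```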
\subsection{First order derivative of the energy with respect to the scattering length}
\label{sec:dEda}
The relations [Tab. II, Eqs.~(4a,4b)] can be derived straightforwardly using the lattice model, see Sec.~\ref{sec:dE_latt}.
Here we derive them by directly using the zero-range model, which is more involved but also instructive.
\noindent{\underline{\it Three dimensions:}}
\\
Let us consider a wavefunction $\psi_1$ satisfying the contact condition~[Tab. I, Eq. (1a)] for a scattering length $a_1$. We denote by $A^{(1)}_{ij}$ the regular part of $\psi_1$ appearing in the contact condition~[Tab. I, Eq. (1a)]. Similarly, $\psi_2$ satisfies the contact condition for a scattering length $a_2$ and a regular part $A^{(2)}_{ij}$.
Then, as shown in Appendix~\ref{app:lemme} using the divergence theorem, the following lemma holds:
\begin{equation}
\langle \psi_1, H \psi_2 \rangle - \langle H \psi_1, \psi_2 \rangle =
\frac{4\pi\hbar^2}{m}\left(\frac{1}{a_1}-\frac{1}{a_2}\right) \ ( A^{(1)},A^{(2)} )
\label{eq:lemme_3D}
\end{equation}
where the scalar product between regular parts is defined by~[Tab. I, Eq. (2)].
We then apply (\ref{eq:lemme_3D}) to the case where $\psi_1$ and $\psi_2$ are $N$-body stationary states of energy $E_1$ and $E_2$. The left hand side of (\ref{eq:lemme_3D}) then reduces to $(E_2-E_1) \langle \psi_1 | \psi_2 \rangle$. Taking the limit $a_2\to a_1$ gives
\begin{equation}
\frac{dE}{d(-1/a)} = \frac{4\pi\hbar^2}{m} (A,A)
\label{eq:thm_dE_3D}
\end{equation}
for any stationary state.
Expressing $(A,A)$ in terms of $C$ thanks to~[Tab.~II, Eq.~(2a)] finally yields~[Tab.~II, Eq.~(4a)]. This result, as well as~(\ref{eq:thm_dE_3D}), is contained in Refs.~\cite{TanEnergetics,TanLargeMomentum}\footnote{Our derivation is similar to the one given in the two-body case and sketched in the many-body case in Section~3 of~\cite{TanLargeMomentum}.}.
We recall that here and in what follows, the wavefunction is normalized: $\langle\psi|\psi\rangle=1$.
\noindent{\underline{\it Two dimensions:}}
\\
The $2D$ version of the lemma~(\ref{eq:lemme_3D}) is
\begin{equation}
\langle \psi_1, H \psi_2 \rangle - \langle H \psi_1, \psi_2 \rangle =
\frac{2\pi\hbar^2}{m}\ln\left(a_2/a_1\right) \ ( A^{(1)},A^{(2)} ),
\label{eq:lemme_2D}
\end{equation}
as shown in Appendix~\ref{app:lemme}.
As in $3D$, we deduce that
\begin{equation}
\frac{dE}{d(\ln a)} = \frac{2\pi\hbar^2}{m} (A,A),
\label{eq:thm_dE_2D}
\end{equation}
which gives the desired~[Tab.~II, Eq.~(4b)] by using~[Tab.~II, Eq.~(2b)].
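As an elementary consistency test (not part of the derivation), [Tab.~II, Eq.~(4b)] can be checked on the $2D$ dimer in free space, using the values $E=-\hbar^2\kappa^2/m$, $\kappa=2/(ae^\gamma)$ and $C/(4\pi)=\kappa^2$ quoted in the footnote of Sec.~\ref{sec:g1}; a sympy sketch:

```python
import sympy as sp

hbar, m, a = sp.symbols('hbar m a', positive=True)

# 2D dimer: E = -hbar^2 kappa^2/m with kappa = 2/(a e^gamma), contact C = 4 pi kappa^2.
kappa = 2/(a*sp.exp(sp.EulerGamma))
E = -hbar**2*kappa**2/m
C = 4*sp.pi*kappa**2

dE_dlna = a*sp.diff(E, a)          # dE/d(ln a) = a dE/da
ok_dimer = sp.simplify(dE_dlna - hbar**2*C/(2*sp.pi*m)) == 0
```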
\subsection{Expression of the energy in terms of the momentum distribution}
\label{sec:energy_thm}
\noindent{\underline{\it Three dimensions:}}
\\
As shown in~\cite{TanEnergetics}, the mean total energy $E$
minus the mean trapping-potential energy
$E_{\rm trap}\equiv \left< H_{\rm trap}\right>$,
has the simple expression in terms of the momentum distribution given in~[Tab.~II, Eq.~(5a)],
for any pure state $|\psi\rangle$ satisfying the contact condition~[Tab. I, Eq. (1a)].
We give a simple rederivation of this result by using the lattice model (defined in Sec.~\ref{sec:models:lattice}).
We first treat the case where $|\psi\rangle$ is an eigenstate of the zero-range model.
Let $|\psi_b\rangle$ be the eigenstate of the lattice model that tends to $|\psi\rangle$ for $b\to0$.
We first note that
$C_b\equiv\langle\psi_b|\hat{C}|\psi_b\rangle$, where $\hat{C}$ is defined by [Tab.~III, Eqs.~(1a,1b)],
tends to
the contact $C$ of the state $\psi$
[defined in~Tab.~II, Eq.~(1)]
when $b\to0$,
as shown in Appendix~\ref{app:C_b}.
\begin{comment}
Let $A_b$ denote the regular part of $\psi_b$.
For $b\to0$
we have $\displaystyle A_b\to A$
[see~(\ref{eq:def_A_reseau}) and the discussion thereafter];
thus, the quantity $C_b$, which according to~[Tab.~III, Eq.~(4a)] is equal to $(4\pi)^2\,(A_b,A_b)$, tends to $(4\pi)^2\,(A,A)$, which according to [Tab.~II,Eq.~(2a)] is equal to $C$.
\end{comment}
Then, the key step is to use~[Tab.~III, Eq.~(3a)], which, after taking the expectation value in the state $|\psi_b\rangle$,
yields the desired~[Tab.~II, Eq.~(5a)] in the zero-range limit since $\displaystyle D\rightarrow\mathbb{R}^3$ and $\epsilon_\mathbf{k}\to\hbar^2 k^2/(2m)$ for $b\to0$.
To generalize~[Tab.~II, Eq.~(5a)] to any pure state $|\psi\rangle$
satisfying the contact condition~[Tab. I, Eq. (1a)], we use
the state $|\psi_b\rangle$ defined in~Appendix~\ref{app:C_b_2}.
As shown in that appendix,
the expectation value of $\hat{C}$
taken in this state $|\psi_b\rangle$
tends to the contact $C$ of $|\psi\rangle$~[defined in Tab.~II, Eq.~(1)].
Moreover the expectation values of $H-H_{\rm trap}$ and of $\hat{n}_\sigma(\mathbf{k})$,
taken in this state $|\psi_b\rangle$,
should tend to the corresponding expectation values taken in the state $|\psi\rangle$.
This yields the desired relation.
Finally we mention the equivalent form of relation~[Tab.~II, Eq.~(5a)]:
\begin{multline}
E - E_{\rm trap} = \lim_{\Lambda\to\infty}\Bigg[
\frac{\hbar^2 C}{4\pi m}\left(\frac{1}{a}-\frac{2\Lambda}{\pi}\right)
\\ +\sum_\sigma\int_{k<\Lambda} \frac{d^3k}{(2\pi)^3} \frac{\hbar^2 k^2}{2m}
n_\sigma(\mathbf{k}) \Bigg].
\label{eq:energy_thm_3D_Lambda}
\end{multline}
\noindent{\underline{\it Two dimensions:}}
\\
The $2D$ version of (\ref{eq:energy_thm_3D_Lambda}) is
[Tab.~II, Eq.~(5b)].
This was shown for a homogeneous system in \cite{CombescotC} and in the general case in \cite{TanSimple}
\footnote{This relation was written in \cite{TanSimple} in a form containing a generalised function $\eta(\mathbf{k})$ (i.e. a distribution). We have
checked that this form is equivalent to our Eq.~(\ref{eq:energy_thm_2D_heaviside}),
using Eq.~(16b) of \cite{TanSimple},
$n_\sigma(\mathbf{k})-(C/k^4)\theta(k-q)=O(1/k^5)$ at large $k$, and
$\int d^2k\, \eta(\mathbf{k}) f(\mathbf{k})=\int d^2k \,f(\mathbf{k})$
for any $f(\mathbf{k})=O(1/k^3)$.
This last property is implied in Eq.~(16a) in \cite{TanSimple}.
}.
This can easily be rewritten in the following forms, which
resemble~[Tab.~II, Eq.~(5a)]:
\begin{multline}
E - E_{\rm trap} = -\frac{\hbar^2 C}{2\pi m } \ln\left(\frac{a q e^\gamma}{2}\right)
+\sum_{\sigma} \int \frac{d^2\!k}{(2\pi)^2} \frac{\hbar^2 k^2}{2m} \\
\times
\left[n_\sigma(\mathbf{k}) - \frac{C}{k^4}\theta(k-q)\right]\ \ {\rm for\ any}\ q>0,
\label{eq:energy_thm_2D_heaviside}
\end{multline}
where the Heaviside function $\theta$ ensures that the integral converges at small $k$,
or equivalently
\begin{multline}
E - E_{\rm trap} = -\frac{\hbar^2 C}{2\pi m} \ln \left(\frac{a q e^\gamma}{2}\right)
+\sum_{\sigma} \int \frac{d^2\!k}{(2\pi)^2} \frac{\hbar^2 k^2}{2m} \\
\times \left[n_\sigma(\mathbf{k}) - \frac{C}{k^2(k^2+q^2)}\right]\ \ {\rm for\ any}\ q>0.
\label{eq:energy_thm_2D_yvan}
\end{multline}
To derive this we again use the lattice model.
We note that, if the limit $q\to0$ is replaced by the limit $b\to0$ taken for fixed $a$, Eq.~(\ref{eq:g0_2D}) remains true (see Appendix~\ref{app:2body}); repeating the reasoning of Section~\ref{sec:energy_thm_latt} then shows that [Tab.~III, Eq.~(3b)] remains true; as in $3D$ we finally get in the limit $b\to0$
\begin{multline}
E - E_{\rm trap} = -\frac{\hbar^2 C}{2\pi m} \ln \left(\frac{a q e^\gamma}{2}\right)
+\sum_{\sigma} \int \frac{d^2\!k}{(2\pi)^2} \frac{\hbar^2 k^2}{2m} \\
\times \left[n_\sigma(\mathbf{k}) - \frac{C}{k^2}\mathcal{P}\frac{1}{k^2-q^2}\right]
\label{eq:energy_thm_2D_PP}
\end{multline}
for any $q>0$;
this is easily rewritten as [Tab.~II, Eq.~(5b)].
\subsection{One-body density matrix at short-distances}
\label{sec:g1}
The one-body density matrix is defined as
$\displaystyle
g_{\sigma \sigma}^{(1)} \left(\mathbf{r},
\mathbf{r}'\right)=\langle \hat{\psi}_\sigma^\dagger
\left(\mathbf{r}\right)
\hat{\psi}_\sigma\left(\mathbf{r}'\right) \rangle
$
where $\hat{\psi}_\sigma(\mathbf{r})$ annihilates a particle of spin $\sigma$ at point $\mathbf{r}$.
Its spatially integrated version
\begin{equation}
G^{(1)}_{\sigma\si}(\mathbf{r})\equiv\int d^dR \,
g_{\sigma \sigma}^{(1)} \left(\mathbf{R}-\frac{\mathbf{r}}{2},
\mathbf{R}+\frac{\mathbf{r}}{2}\right)
\end{equation}
is a Fourier transform of the momentum distribution:
\begin{equation} G^{(1)}_{\sigma\si}(\mathbf{r}) =
\int \frac{d^d k}{(2\pi)^d}\,e^{i\mathbf{k}\cdot\mathbf{r}} n_\sigma(\mathbf{k}).
\label{eq:G1_vs_nk}
\end{equation}
The expansion of $G^{(1)}_{\sigma\si}(\mathbf{r})$ up to first order in $r$ is given by
[Tab.~II, Eq.~(6a)] in $3D$, as first obtained in~\cite{TanEnergetics},
and by~[Tab.~II, Eq.~(6b)] in $2D$.
The expansion can be pushed to second order if one sums over spin and averages over
$d$ orthogonal directions of $\mathbf{r}$, see [Tab.~II, Eqs.~(7a,7b)]
where the ${\bf u_i}$'s are an orthonormal
basis~\footnote{These last relations also hold if one averages over all directions of $\mathbf{r}$ uniformly on the unit sphere or unit circle.}.
Such a second order expansion was first obtained in $1D$ in~\cite{Olshanii_nk}; the following derivations however differ from the $1D$ case~\footnote{Our result does not follow from the well-known fact that, for a finite-range interaction potential in continuous space, $-\frac{\hbar^2}{2m}\sum_\sigma \Delta G^{(1)}_{\sigma\si}(\mathbf{r}=\mathbf{0})$ equals the kinetic energy; indeed, the Laplacian does not commute with the zero-range limit in that case [cf.~also the comment below Eq.~(\ref{eq:g1_pour_MC})].}.
\noindent \underline{{\it Three dimensions:}}
\\
To derive [Tab.~II, Eqs.~(6a,7a)] we rewrite (\ref{eq:G1_vs_nk}) as
\begin{multline}
G^{(1)}_{\sigma\si}(\mathbf{r}) = N_\sigma +
\int \frac{d^3 k}{(2\pi)^3}\,\left(e^{i\mathbf{k}\cdot\mathbf{r}} -1\right)\frac{C}{k^4}
\\ + \int \frac{d^3 k}{(2\pi)^3}\, \left(e^{i\mathbf{k}\cdot\mathbf{r}} -1\right)
\left(n_\sigma(\mathbf{k})-\frac{C}{k^4}\right).
\end{multline}
The first integral equals $-(C/8\pi) r$. In the second integral, we use
\begin{equation}
e^{i\mathbf{k}\cdot\mathbf{r}}-1\underset{r\to 0}{=}i\mathbf{k}\cdot\mathbf{r}-\frac{(\mathbf{k}\cdot\mathbf{r})^2}{2}+o(r^2).
\label{eq:expand_exp}
\end{equation}
The first term of this expansion gives a contribution to the integral proportional to the total momentum of the gas, which vanishes since the eigenfunctions are real.
The second term is $O(r^2)$, which gives~[Tab.~II, Eq.~(6a)].
Equation~(7a) of Tab.~II follows from the fact that the contribution of the second term, after averaging over the directions of $\mathbf{r}$,
is given by the integral of $k^2 [n_\sigma(\mathbf{k})-C/k^4]$, which (after summation over spin)
is related to the total energy by~[Tab.~II, Eq.~(5a)].
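The stated value $-(C/8\pi)r$ of the first integral reduces, after angular averaging and the substitution $x=kr$, to $(Cr/2\pi^2)\int_0^\infty dx\,(\sin x/x-1)/x^2$, so it relies on this one-dimensional integral being $-\pi/4$; a numerical sanity check (Python with scipy, with the tail beyond a cutoff $X$ replaced by its dominant $-1/x^2$ contribution):

```python
import numpy as np
from scipy.integrate import quad

# Check int_0^oo (sin(x)/x - 1)/x^2 dx = -pi/4.
def integrand(x):
    return (np.sin(x)/x - 1.0)/x**2

X = 50.0
head, _ = quad(integrand, 1e-6, X, limit=400)  # [0,1e-6] piece is O(1e-7), negligible
tail = -1.0/X                 # int_X^oo (-1/x^2) dx; the sin(x)/x^3 part is much smaller
value = head + tail
err_sinc = abs(value - (-np.pi/4.0))
```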
\noindent \underline{{\it Two dimensions:}}
\\
To derive~[Tab.~II, Eqs.~(6b,7b)] we rewrite (\ref{eq:G1_vs_nk}) as
$G^{(1)}_{\sigma\si}(\mathbf{r}) = N_\sigma + I(\mathbf{r})+J(\mathbf{r})$
with
\begin{eqnarray}
I(\mathbf{r})=
\int \frac{d^2 k}{(2\pi)^2}\,\left(e^{i\mathbf{k}\cdot\mathbf{r}} -1\right)\frac{C}{k^4}\theta(k-q) && \\
J(\mathbf{r})= \int \frac{d^2 k}{(2\pi)^2}\,\left(e^{i\mathbf{k}\cdot\mathbf{r}} -1\right)\left(n_\sigma(\mathbf{k})-\frac{C}{k^4}\theta(k-q)\right) &&
\end{eqnarray}
where $q>0$ is arbitrary and the Heaviside function $\theta$ ensures that the integrals converge.
To evaluate $I(\mathbf{r})$ we use standard manipulations to
rewrite it as $I(\mathbf{r})=Cr^2/(2\pi)\int_{qr}^{+\infty} dx [J_0(x)-1]/x^3$,
$J_0$ being a Bessel function. Expressing this integral with Mathematica in terms of a
hypergeometric function and a logarithm leads for $r\to 0$ to
$I(\mathbf{r})=C r^2/(8\pi)[\gamma-1-\ln 2 +\ln (qr)]+O(r^4)$.
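The small-$r$ expansion of this Bessel integral can also be verified numerically (Python with scipy), without invoking the hypergeometric representation; with $s=qr$, it amounts to $\int_s^\infty dx\,[J_0(x)-1]/x^3=\frac{1}{4}[\gamma-1-\ln 2+\ln s]+O(s^2)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# F(s) = int_s^oo [J0(x)-1]/x^3 dx, with the tail beyond X replaced by
# int_X^oo (-1/x^3) dx = -1/(2 X^2); the residual J0 tail is O(X^{-7/2}).
def F(s, X=30.0):
    head, _ = quad(lambda x: (j0(x) - 1.0)/x**3, s, X, limit=400)
    return head - 1.0/(2.0*X**2)

s = 1e-3
asym = 0.25*(np.euler_gamma - 1.0 - np.log(2.0) + np.log(s))
err_bessel = abs(F(s) - asym)
```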
To evaluate $J(\mathbf{r})$ we use the same procedure as in $3D$:
expanding the exponential [see~(\ref{eq:expand_exp})] yields an integral which can be related to the total energy
thanks to~(\ref{eq:energy_thm_2D_heaviside})
\footnote{As suggested by a referee, [Tab.~II, Eq.~(7b)] can be tested for the dimer wavefunction $\psi(\mathbf{r}_1,\mathbf{r}_2)=
\phi_{\rm dim}(r_{12})=-\kappa K_0(\kappa r)/\pi^{1/2}$ \cite{MaximLudo2D}, which has the energy $E=-\hbar^2 \kappa^2/m$
and the momentum distribution $n_{\sigma}(\mathbf{k})= 4\pi \kappa^2/(k^2+\kappa^2)^2$,
where $\kappa=2/(ae^\gamma)$ and $K_0$ is a Bessel function. From Eq.~(\ref{eq:G1_vs_nk}) we
find $G_{\sigma\sigma}^{(1)}(\mathbf{r})=\kappa r K_1(\kappa r)$. From $C/(4\pi)=-mE/\hbar^2=\kappa^2$ and the known
expansion of $K_1$ around zero, we get the same low-$r$ expansion as in [Tab.~II, Eq.~(7b)]. To calculate $G_{\sigma\sigma}^{(1)}(\mathbf{r})$,
we used the fact that $K_0(\kappa r)$ is the $2D$ Fourier transform of $2\pi/(k^2+\kappa^2)$: it remains to take the derivative with respect
to $\kappa$ and to realize that $K_0'=-K_1$.}.
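The dimer test suggested in the footnote above can also be run numerically: a scipy sketch (with the arbitrary choice $\kappa=r=1$) checking that the Hankel transform of $n_\sigma(\mathbf{k})=4\pi\kappa^2/(k^2+\kappa^2)^2$ reproduces $G^{(1)}_{\sigma\sigma}(\mathbf{r})=\kappa r K_1(\kappa r)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, kv

# 2D Fourier transform reduces to a Hankel transform:
# G1(r) = (1/2pi) int_0^oo dk k J0(k r) n(k), with n(k) = 4 pi kappa^2/(k^2+kappa^2)^2.
kappa, r = 1.0, 1.0
integrand = lambda k: k * j0(k*r) * 4.0*np.pi*kappa**2/(k**2 + kappa**2)**2
g1_numeric, _ = quad(integrand, 0.0, 200.0, limit=400)  # tail beyond 200 is O(1e-6)
g1_numeric /= 2.0*np.pi
g1_exact = kappa*r*kv(1, kappa*r)                       # kappa r K1(kappa r)
err_g1 = abs(g1_numeric - g1_exact)
```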
\subsection{Second order derivative of the energy with respect to the scattering length}
We denote by $|\psi_n\rangle$ an orthonormal basis of $N$-body stationary states that vary smoothly with $1/a$, and by $E_n$ the corresponding eigenenergies.
We will derive
[Tab. II, Eqs. (8a,8b)],
where the sum is taken on all values of $n'$ such that $E_{n'}\neq E_n$.
This implies that for the ground state energy $E_0$,
\begin{eqnarray}
\frac{d^2E_0}{d(-1/a)^2} &<& 0\ \ \ {\rm in}\ 3D
\label{eq:d^2E0_3D}
\\
\frac{d^2E_0}{d(\ln a)^2} &<& 0\ \ \ {\rm in}\ 2D.
\label{eq:d^2E0_2D}
\end{eqnarray}
Eq.~(\ref{eq:d^2E0_3D}) was intuitively expected~\cite{LeticiaSoutenance}: Eq.~(\ref{eq:Npair_3D}) shows that $dE_0/d(-1/a)$ is proportional to the probability of finding two particles very close to each other, and it is natural that this probability decreases when one goes from the BEC limit ($-1/a\to-\infty$) to the BCS limit ($-1/a\to+\infty$), i.e. when the interactions become less attractive~\footnote{In the lattice model in $3D$, the coupling constant $g_0$ is always negative in the zero-range limit $|a|\gg b$, and is an increasing function of $-1/a$, as seen from (\ref{eq:g0_3D}).}.
Eq.~(\ref{eq:d^2E0_2D})
also agrees with intuition~\footnote{Eq.~(\ref{eq:Npair_2D}) shows that $dE_0/d(\ln a)$ is proportional to the probability of finding two particles very close to each other, and it is natural that this probability decreases when one goes from the BEC limit ($\ln a\to-\infty$) to the BCS limit ($\ln a\to+\infty$), i.e. when the interactions become less attractive [in the lattice model in $2D$, the coupling constant $g_0$ is always negative in the zero-range limit $a\gg b$, and is an increasing function of $\ln a$, as can be seen from (\ref{eq:g0_2D})].}.
For the derivation, it is convenient to use the lattice model
(defined in Sec.~\ref{sec:models:lattice}): as shown in Sec.~\ref{sec:d^2E_reseau}, one easily obtains (\ref{eq:d^2E/dg0^2}) and [Tab.~III, Eq.~(6)], from which the result is deduced as follows.
$|\phi(\mathbf{0})|^2$ is eliminated using (\ref{eq:phi_tilde_3D},\ref{eq:phi_tilde_2D}). Then,
in $3D$, one uses
\begin{equation}
\frac{d^2E_n}{d(-1/a)^2}=\frac{d^2E_n}{dg_0^{\phantom{0}2}} \left(\frac{dg_0}{d(-1/a)}\right)^2+\frac{dE_n}{dg_0}\frac{d^2 g_0}{d(-1/a)^2}
\label{eq:deriv_2_fois}
\end{equation}
where the second term equals $2g_0\,dE_n/d(-1/a)\,m/(4\pi\hbar^2)$ and thus vanishes in the zero-range limit.
In $2D$, similarly, one uses the fact that
$d^2E_n/d(\ln a)^2$ is the zero-range limit of $(d^2E_n/dg_0^{\phantom{0}2}) \cdot (dg_0/d(\ln a))^2$.
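The vanishing of the second term of (\ref{eq:deriv_2_fois}) can be checked explicitly: from (\ref{eq:g0_3D}), $d(1/g_0)/d(-1/a)=-m/(4\pi\hbar^2)$, so that
\begin{equation}
\frac{dg_0}{d(-1/a)}=g_0^2\,\frac{m}{4\pi\hbar^2},
\qquad
\frac{d^2g_0}{d(-1/a)^2}=2g_0\,\frac{m}{4\pi\hbar^2}\,\frac{dg_0}{d(-1/a)}.
\end{equation}
The second term of (\ref{eq:deriv_2_fois}) thus equals $2g_0\,[m/(4\pi\hbar^2)]\,dE_n/d(-1/a)$, which indeed tends to zero in the zero-range limit, where $g_0\to0$ while $dE_n/d(-1/a)$ remains finite.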
\subsection{Time derivative of the energy}
\label{subsec:tdote}
We now consider the case where the scattering length $a(t)$ and the trapping potential $U(\mathbf{r},t)$ are varied with time. The time-dependent version of the zero-range model (see e.g.~\cite{CRAS}) is given by Schr\"odinger's equation
\begin{equation}
i\hbar \frac{\partial}{\partial t} \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N;t) =
H(t)\, \psi(\mathbf{r}_1,\ldots,\mathbf{r}_N;t)
\end{equation}
when all particle positions are distinct, with
\begin{equation}
H(t)=
\sum_{i=1}^N \left[
-\frac{\hbar^2}{2 m}\Delta_{\mathbf{r}_i} + U(\mathbf{r}_i,t)
\right],
\end{equation}
and by the contact condition~[Tab. I, Eq. (1a)] in~$3D$ or~[Tab.~I, Eq.~(1b)] in~$2D$ for the scattering length $a=a(t)$.
One then has the relations
[Tab. II, Eqs.~(12a,12b)],
where $E(t)=\langle \psi(t) | H(t)|\psi(t)\rangle$ is the total energy and
$H_{\rm trap}(t)=\sum_{i=1}^N U(\mathbf{r}_i,t)$ is the trapping potential part of the Hamiltonian.
In $3D$, this relation was first obtained in~\cite{TanLargeMomentum}.
A very simple derivation of these relations using the lattice model is given in Section \ref{sec:dEdt_reseau}. Here we give a derivation within the zero-range model.
\noindent{\underline{\it Three dimensions:}}
\\
We first note that the
generalization of the
lemma (\ref{eq:lemme_3D}) to the case of two Hamiltonians $H_1$ and $H_2$ with corresponding trapping potentials $U_1(\mathbf{r})$ and $U_2(\mathbf{r})$ reads:
\begin{multline}
\langle \psi_1, H_2 \psi_2 \rangle - \langle H_1 \psi_1, \psi_2 \rangle =
\frac{4\pi\hbar^2}{m}\left(\frac{1}{a_1}-\frac{1}{a_2}\right) \ ( A^{(1)},A^{(2)}) \\+
\langle \psi_1 | \sum_{i=1}^N \left[U_2(\mathbf{r}_i,t)-U_1(\mathbf{r}_i,t)\right] |\psi_2 \rangle.
\label{eq:lemme_modif_3D}
\end{multline}
Applying this relation for $|\psi_1\rangle=|\psi(t)\rangle$ and $|\psi_2\rangle=|\psi(t+\delta t)\rangle$ [and correspondingly $a_1=a(t)$,
$a_2=a(t+\delta t)$ and $H_1=H(t)$, $H_2=H(t+\delta t)$] gives:
\begin{multline}
\langle \psi(t), H(t+\delta t) \psi(t+\delta t)\rangle -
\langle H(t)\psi(t),\psi(t+\delta t)\rangle = \\ \frac{4\pi\hbar^2}{m}
\left(\frac{1}{a(t)}-\frac{1}{a(t+\delta t)}\right) (A(t),A(t+\delta t)) \\
+\langle\psi(t)| \sum_{i=1}^N \left[U(\mathbf{r}_i,t+\delta t)-U(\mathbf{r}_i,t)\right]
|\psi(t+\delta t)\rangle.
\label{eq:interm_dt}
\end{multline}
Dividing by $\delta t$, taking the limit $\delta t\to0$,
and using the expression~[Tab. II, Eq. (1a)] of $(A,A)$ in terms of $C$,
the right-hand side of (\ref{eq:interm_dt}) reduces to the right-hand side of [Tab. II, Eq.~(12a)].
Using Schr\"odinger's equation twice, one rewrites the left-hand side of (\ref{eq:interm_dt})
as $i\hbar \frac{d}{dt} \langle\psi(t)|\psi(t+\delta t)\rangle$ and
one Taylor expands this last expression to obtain [Tab. II, Eq.~(12a)].
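In more detail, this Taylor expansion reads as follows (a brief sketch, using the conservation of the norm $\langle\psi(t)|\psi(t)\rangle=1$):
\begin{equation}
\langle\psi(t)|\psi(t+\delta t)\rangle = 1+\delta t\,\langle\psi(t)|\partial_t\psi(t)\rangle+O(\delta t^2),
\end{equation}
so that
\begin{equation}
i\hbar\frac{d}{dt}\langle\psi(t)|\psi(t+\delta t)\rangle
=\delta t\,\frac{d}{dt}\langle\psi|i\hbar\,\partial_t\psi\rangle+O(\delta t^2)
=\delta t\,\frac{dE}{dt}+O(\delta t^2),
\end{equation}
since $\langle\psi|i\hbar\,\partial_t\psi\rangle=\langle\psi|H(t)|\psi\rangle=E(t)$; dividing by $\delta t$ then gives [Tab. II, Eq.~(12a)].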
\noindent{\underline{\it Two dimensions:}}
\\
Relation~[Tab.~II, Eq.~(12b)] is derived similarly from the lemma
\begin{multline}
\langle \psi_1, H_2 \psi_2 \rangle - \langle H_1 \psi_1, \psi_2 \rangle
=
\frac{2\pi\hbar^2}{m}\ln(a_2/a_1) ( A^{(1)},A^{(2)}) \\ +
\langle \psi_1 | \sum_{i=1}^N \left[U_2(\mathbf{r}_i,t)-U_1(\mathbf{r}_i,t)\right] |\psi_2 \rangle.
\label{eq:lemme_modif_2D}
\end{multline}
\section{Relations for lattice models}\label{sec:latt}
In this Section,
it will prove convenient to introduce an {\it operator} $\hat{C}$ by
[Tab. III, Eqs.~(1a,1b)] and to {\it define} $C$ by its expectation value
in the state of the system,
\begin{equation}
C = \langle \hat{C}\rangle.
\label{eq:defCreseau}
\end{equation}
In the zero-range limit, this new definition of $C$ coincides with the definition [Tab. II, Eq. (1)]
of Section~\ref{sec:ZR}, as shown in Appendix~\ref{app:C_b}.
\begin{table*}[t!]
\begin{tabular}{|cc|cc|}
\hline
Three dimensions & & Two dimensions & \\
\hline
\vspace{-4mm}
& && \\
$\displaystyle \hat{C}\equiv\frac{4\pi m}{\hbar^2}\frac{dH}{d(-1/a)}$
&(1a)& $\displaystyle \hat{C}\equiv\frac{2\pi m}{\hbar^2}\frac{dH}{d(\ln a)}$
&(1b)
\vspace{-4mm}
\\
& & & \\
\hline
\multicolumn{4}{|c|}{} \vspace{-4mm}\\
\multicolumn{3}{|c}{\vspace{-4mm} $\displaystyle H_{\rm int}=\frac{\hbar^4}{m^2} \frac{\hat{C}}{g_0}$}
&(2)
\\
\multicolumn{4}{|c|}{} \\
\hline
& & & \vspace{-4mm}\\
$\displaystyle H - H_{\rm trap} = \frac{\hbar^2 \hat{C}}{4\pi m a}$
&
&
$\displaystyle H - H_{\rm trap} = \lim_{q\to0}\Bigg\{-\frac{\hbar^2 \hat{C}}{2\pi m} \ln \left(\frac{a q e^\gamma}{2}\right)$
&
\\
$\displaystyle +\sum_{\sigma} \int_D \frac{d^3\!k}{(2\pi)^3} \epsilon_{\mathbf{k}}
\left[\hat{n}_\sigma(\mathbf{k}) - \hat{C}\left(\frac{\hbar^2}{2m\epsilon_{\mathbf{k}}}\right)^2\right]$ &
(3a)&
$\displaystyle +\sum_{\sigma} \int_D \frac{d^2\!k}{(2\pi)^2} \epsilon_{\mathbf{k}}
\left[\hat{n}_\sigma(\mathbf{k}) - \hat{C}\frac{\hbar^2}{2m\epsilon_{\mathbf{k}}}\mathcal{P}\frac{\hbar^2}{2m(\epsilon_{\mathbf{k}}-\epsilon_{\mathbf{q}})}\right]\Bigg\}$
&
(3b) \vspace{-4mm}
\\
& & &\\
\hline
& & & \vspace{-4mm}\\
$\displaystyle C=(4\pi)^2\ (A,A)$
&(4a)&
$\displaystyle C=(2\pi)^2\ (A,A)$
&(4b) \vspace{-4mm}
\\
& && \\
\hline
& && \vspace{-4mm} \\
$\displaystyle \frac{dE}{d(-1/a)}=\frac{\hbar^2 C}{4\pi m}$
&(5a)&
$\displaystyle \frac{dE}{d(\ln a)}=\frac{\hbar^2 C}{2\pi m}$
&(5b) \vspace{-4mm}
\\
& & &\\
\hline
\multicolumn{4}{|c|}{\vspace{-4mm}} \\
\multicolumn{3}{|c}{$\displaystyle\frac{1}{2} \frac{d^2E_n}{dg_0^2}
= |\phi(\mathbf{0})|^4 \sum_{n',E_{n'}\neq E_n}
\frac{|(A^{(n')},A^{(n)})|^2}{E_n-E_{n'}}$}
&(6) \vspace{-4mm}
\\
\multicolumn{4}{|c|}{} \\
\hline
\multicolumn{4}{|c|}{\vspace{-4mm}} \\
\multicolumn{3}{|c}{$\displaystyle \left(\frac{d^2F}{dg_0^2}\right)_T <0$,\ \ \
$\displaystyle \left(\frac{d^2E}{dg_0^2}\right)_S <0$}
&(7) \vspace{-4mm}
\\
\multicolumn{4}{|c|}{} \\
\hline
&& & \vspace{-4mm}\\
$\displaystyle\sum_\mathbf{R} b^3 (\psi^\dagger_\uparrow\psi^\dagger_\downarrow\psi_\downarrow\psi_\uparrow)(\mathbf{R})=\frac{\hat{C}}{(4\pi)^2}|\phi(\mathbf{0})|^2$
&(8a)&
$\displaystyle\sum_\mathbf{R} b^2 (\psi^\dagger_\uparrow\psi^\dagger_\downarrow\psi_\downarrow\psi_\uparrow)(\mathbf{R})=\frac{\hat{C}}{(2\pi)^2}|\phi(\mathbf{0})|^2$
&(8b) \vspace{-4mm}
\\
& & &\\
\hline
\multicolumn{4}{|c|}{In the zero-range regime $k_{\rm typ} b\ll1$}
\\ \hline
& & & \vspace{-4mm}\\
$\displaystyle\sum_{\mathbf{R}} b^3
g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\simeq
\frac{C}{(4\pi)^2}|\phi(\mathbf{r})|^2$,\ for $r\ll k_{\rm typ}^{-1}\ \ $
&(9a)&
$\displaystyle\sum_{\mathbf{R}} b^2
g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\simeq
\frac{C}{(2\pi)^2}|\phi(\mathbf{r})|^2$,\ for $r\ll k_{\rm typ}^{-1}\ \ $
&(9b) \vspace{-4mm}
\\
& &&
\\ \hline
\multicolumn{4}{|c|}{\vspace{-4mm}} \\
\multicolumn{3}{|c}{$\displaystyle n_\sigma(\mathbf{k})\simeq C \left(\frac{\hbar^2}{2m\epsilon_{\mathbf{k}}}\right)^2,\ $ for $k\ggk_{\rm typ}$}
&(10) \vspace{-4mm}
\\
\multicolumn{4}{|c|}{}
\\
\hline
\end{tabular}
\caption{Relations for spin-1/2 fermions for lattice models. $\hat{C}$ is defined in line 1 and
$C=\langle\hat{C}\rangle$.
Lines 2, 3 and 8 are relations between operators.
Line 4 holds for any pure state [the regular part $A$ being defined in Eq.~(\ref{eq:def_A_reseau}) in the text].
Lines 5-6 hold for any stationary state.
Line 7 holds at thermal equilibrium in the canonical ensemble.
Lines 9-10 are expected to hold in the zero-range regime $k_{\rm typ} b \ll 1$, where $k_{\rm typ}$ is the typical wavevector, for any stationary state or at thermal equilibrium.
\label{tab:latt}}
\end{table*}
\subsection{Interaction energy and $\hat{C}$}
The interaction part $H_{\rm int}$ of the lattice model's Hamiltonian is obviously equal to $\displaystyle g_0\frac{dH}{dg_0}$
[see Eqs. (\ref{eq:Hlatt},\ref{eq:def_H0},\ref{eq:def_W})].
Rewriting this as $\displaystyle \frac{1}{g_0}\,\frac{dH}{d(-1/g_0)}$,
and using the simple expressions (\ref{eq:dg0_da_3D},\ref{eq:dg0_da_2D}) for $d(1/g_0)$, we get the relation [Tab. III, Eq. (2)] between $H_{\rm int}$ and $\hat{C}$, both in $3D$ and in $2D$.
\subsection{Total energy minus trapping potential energy in terms of momentum distribution and $\hat{C}$}\label{sec:energy_thm_latt}
Here we derive [Tab. III, Eqs. (3a,3b)].
We start from the expression [Tab. III, Eq. (2)] of the interaction energy and eliminate $1/g_0$ thanks to (\ref{eq:g0_3D},\ref{eq:g0_2D}).
The desired expression of $H-H_{\rm trap}=H_{\rm int}+H_{\rm kin}$ then simply follows from the expression (\ref{eq:Hkin_second_quant}) of the kinetic energy.
\subsection{Interaction energy and regular part}
In the forthcoming subsections \ref{C_vs_AA_latt}, \ref{sec:dE_latt} and \ref{sec:d^2E_reseau},
we will use the following lemma:
For any wavefunctions $\psi$ and $\psi'$,
\begin{equation}
\langle\psi'|H_{\rm int}|\psi\rangle =g_0 |\phi({\bf 0})|^2\ ( A',A)
\label{eq:lemme_W}
\end{equation}
where $A$ and $A'$ are the regular parts related to $\psi$ and $\psi'$ through (\ref{eq:def_A_reseau}), and the scalar product between regular parts is naturally defined as the discrete version of~[Tab. I, Eq. (2)]:
\begin{multline}
( A',A )\equiv \sum_{i<j}
\sum_{(\mathbf{r}_k)_{k\neq i,j}} \sum_{\mathbf{R}_{ij}} b^{(N-1)d}
A'^*_{ij}(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}) \\
\times A_{ij}(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}).
\end{multline}
The lemma simply follows from
\begin{multline}
\langle\psi'|H_{\rm int}|\psi\rangle=g_0\sum_{i<j}
\sum_{(\mathbf{r}_k)_{k\neq i,j}} b^{(N-2)d} \sum_{\mathbf{r}_j} b^d\\
\times
(\psi'^*\psi)(\mathbf{r}_1,\ldots,\mathbf{r}_i=\mathbf{r}_j,\ldots,\mathbf{r}_j,\ldots,\mathbf{r}_N).
\label{eq:note_W}
\end{multline}
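Indeed, writing, as in the definition (\ref{eq:def_A_reseau}) of the regular part, the lattice wavefunction at coinciding points as
\begin{equation}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_i=\mathbf{r}_j,\ldots,\mathbf{r}_j,\ldots,\mathbf{r}_N)=\phi(\mathbf{0})\,A_{ij}\left(\mathbf{r}_j,(\mathbf{r}_k)_{k\neq i,j}\right),
\end{equation}
and similarly for $\psi'$, the product $\psi'^*\psi$ in (\ref{eq:note_W}) becomes $|\phi(\mathbf{0})|^2\,A'^*_{ij}A_{ij}$, and the remaining sums reproduce the discrete scalar product $(A',A)$.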
\subsection{Relation between $\hat{C}$ and $(A,A)$}\label{C_vs_AA_latt}
Lemma (\ref{eq:lemme_W}) with $\psi'=\psi$ reads
\begin{equation}
\langle \psi | H_{\rm int} | \psi \rangle
= g_0 |\phi({\bf 0})|^2\ ( A,A).
\label{eq:W_AA}
\end{equation}
Expressing $\langle \psi | H_{\rm int} | \psi \rangle$ in terms of $C= \langle\psi| \hat{C}|\psi \rangle$ thanks to [Tab. III, Eq. (2)],
and using
the expressions (\ref{eq:phi_tilde_3D},\ref{eq:phi_tilde_2D}) of $|\phi(\mathbf{0})|^2$, we get [Tab. III, Eqs. (4a,4b)].
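Explicitly, combining (\ref{eq:W_AA}) with [Tab. III, Eq. (2)] gives
\begin{equation}
C=\frac{m^2 g_0}{\hbar^4}\,\langle\psi|H_{\rm int}|\psi\rangle
=\frac{m^2 g_0^2\,|\phi(\mathbf{0})|^2}{\hbar^4}\,(A,A),
\end{equation}
and the prefactor equals $(4\pi)^2$ in $3D$ and $(2\pi)^2$ in $2D$ according to (\ref{eq:phi_tilde_3D},\ref{eq:phi_tilde_2D}).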
\subsection{First order derivative of an eigenenergy with respect to the coupling constant}
\label{sec:dE_latt}
For any stationary state,
the Hellmann-Feynman theorem, together with
the definition [Tab. III, Eqs. (1a,1b)] of $\hat{C}$
and the relation [Tab. III, Eqs. (4a,4b)] between $C$ and $(A,A)$, immediately yields [Tab.~III, Eqs.~(5a,5b)].
\subsection{Second order derivative of an eigenenergy with respect to the coupling constant}\label{sec:d^2E_reseau}
We denote by $|\psi_n\rangle$ an orthonormal basis of $N$-body stationary states which vary smoothly with $g_0$, and by $E_n$ the corresponding eigenenergies.
We apply second order perturbation theory to determine how an eigenenergy varies for an infinitesimal change of $g_0$. This gives:
\begin{equation}
\frac{1}{2}\frac{d^2 E_n}{dg_0^{\phantom{0}2}}=\sum_{n', E_{n'}\neq E_n} \frac{\left|\langle\psi_{n'}|H_{\rm int}/g_0|\psi_n\rangle\right|^2}{E_n-E_{n'}},
\label{eq:d^2E/dg0^2}
\end{equation}
where the sum is taken over all values of $n'$ such that $E_{n'}\neq E_n$.
Lemma (\ref{eq:lemme_W}) then yields~[Tab.~III, Eq.~(6)].
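Indeed, dividing the lemma (\ref{eq:lemme_W}) by $g_0$ gives
\begin{equation}
\langle\psi_{n'}|H_{\rm int}/g_0|\psi_n\rangle=|\phi(\mathbf{0})|^2\,(A^{(n')},A^{(n)}),
\end{equation}
whose modulus squared, inserted in (\ref{eq:d^2E/dg0^2}), produces the factor $|\phi(\mathbf{0})|^4$ of~[Tab.~III, Eq.~(6)].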
\subsection{Time derivative of the energy}\label{sec:dEdt_reseau}
The relations [Tab. II, Eqs. (12a,12b)] remain exact for the lattice model. Indeed, $dE/dt$ equals $\langle dH/dt\rangle$ from
the Hellmann-Feynman theorem.
In $3D$, we can rewrite this quantity as $\langle dH_{\rm trap}/dt\rangle + d(-1/a)/dt\,\langle dH/d(-1/a)\rangle$, and the desired result follows from the definition [Tab. III, Eq. (1a)] of $\hat{C}$. The derivation of the $2D$ relation [Tab. II, Eq. (12b)] is analogous.
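For the reader's convenience, the $3D$ chain of equalities just described reads, using the definition [Tab. III, Eq. (1a)] of $\hat{C}$,
\begin{equation}
\frac{dE}{dt}=\left\langle\frac{\partial H_{\rm trap}}{\partial t}\right\rangle
+\frac{d(-1/a)}{dt}\left\langle\frac{dH}{d(-1/a)}\right\rangle
=\left\langle\frac{\partial H_{\rm trap}}{\partial t}\right\rangle
+\frac{\hbar^2 C}{4\pi m}\,\frac{d(-1/a)}{dt}.
\end{equation}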
\subsection{On-site pair distribution operator}
Let us define a spatially integrated pair distribution operator
\begin{equation}
\hat{G}^{(2)}_{\uparrow\downarrow}(\mathbf{r})\equiv
\sum_{\mathbf{R}} b^d
(\psi^\dagger_\uparrow{\psi_\uparrow})\left(\mathbf{R}+\frac{\mathbf{r}}{2}\right)
(\psi_\downarrow^{\dagger}{\psi_\downarrow})\left(\mathbf{R}-\frac{\mathbf{r}}{2}\right).
\end{equation}
Using the relation [Tab.~III, Eq.~(2)] between $\hat{C}$ and $H_{\rm int}$, expressing $H_{\rm int}$ in terms of $\hat{G}^{(2)}_{\uparrow\downarrow}(\mathbf{0})$ thanks to the second-quantized form (\ref{eq:defW_2e_quant}), and expressing $g_0$ in terms of $\phi(\mathbf{0})$ thanks to~(\ref{eq:phi0_vs_g0},\ref{eq:phi0_vs_g0_2D}), we immediately get:
\begin{eqnarray}
\hat{G}^{(2)}_{\uparrow\downarrow}(\mathbf{0})
&=&\frac{\hat{C}}{(4\pi)^2}|\phi(\mathbf{0})|^2
\label{eq:g2_latt}
\ \ \ {\rm in}\ 3D
\\
\hat{G}^{(2)}_{\uparrow\downarrow}(\mathbf{0})
&=&\frac{\hat{C}}{(2\pi)^2}|\phi(\mathbf{0})|^2
\ \ {\rm in}\ 2D.
\end{eqnarray}
[Here, $|\phi(\mathbf{0})|^2$ may of course be eliminated using~(\ref{eq:phi0_vs_g0},\ref{eq:phi0_vs_g0_2D}).]
These relations are analogous to the one obtained previously within a different field-theoretical model, see Eq.~(12) in~\cite{Braaten}.
\subsection{Pair distribution function at short distances}
\label{subsec:G2_short_dist_latt}
The last result can be generalized to finite but small $r$,
see [Tab.~III, Eqs.~(9a,9b)];
the zero-range regime $k_{\rm typ} b\ll1$ was introduced at the end of Sec.~\ref{sec:models:lattice}.
Here we justify this for the case where the expectation values
$g^{(2)}_{\uparrow\downarrow}\left(\mathbf{R}+\frac{\mathbf{r}}{2},\mathbf{R}-\frac{\mathbf{r}}{2}\right)=\langle(\psi^\dagger_\uparrow\psi_\uparrow)\left(\mathbf{R}+\frac{\mathbf{r}}{2}\right)
(\psi^{\dagger}_\downarrow\psi_{\downarrow})\left(\mathbf{R}-\frac{\mathbf{r}}{2}\right)\rangle$ and $C=\langle\hat{C}\rangle$ are taken in an arbitrary stationary state $\psi$ in the zero-range regime;
this implies that the same result holds for a thermal equilibrium state in the zero-range regime, see Section~\ref{subsec:finiteT}.
We first note that the expression (\ref{eq:def_g2_psi}) of $g^{(2)}_{\uparrow\downarrow}$
in terms of the wavefunction is valid for the lattice model with the obvious replacement of the integrals by sums, so that
\begin{multline}
G^{(2)}_{\uparrow\downarrow}(\mathbf{r})\equiv\left<\hat{G}^{(2)}_{\uparrow\downarrow}(\mathbf{r})\right>=\sum_\mathbf{R} b^d \sum_{i=1}^{N_\uparrow}\sum_{j=N_\uparrow+1}^N
\sum_{(\mathbf{r}_k)_{k\neq i,j}} \!\!\! b^{(N-2)d} \\
\times \left| \psi\left(\mathbf{r}_1,\ldots,\mathbf{r}_i=\mathbf{R}+\frac{\mathbf{r}}{2},\ldots,\mathbf{r}_j=\mathbf{R}-\frac{\mathbf{r}}{2},\ldots,\mathbf{r}_N\right) \right|^2.
\end{multline}
For $r\ll1/k_{\rm typ}$, we can replace $\psi$ by the short-distance expression (\ref{eq:psi_courte_dist}),
assuming that the multiple sum is dominated by the configurations where all the distances $|\mathbf{r}_k-\mathbf{R}|$ and $r_{k k'}$
are much larger than $b$ and $r$:
\begin{equation}
G^{(2)}_{\uparrow\downarrow}(\mathbf{r})\simeq (A,A)\ |\phi(\mathbf{r})|^2.
\label{eq:G2_AA}
\end{equation}
Expressing $(A,A)$ in terms of $C$ thanks to [Tab.~III, Eqs.~(4a,4b)] gives the desired~[Tab.~III, Eqs.~(9a,9b)].
\subsection{Momentum distribution at large momenta}\label{subsec:nk_latt}
Assuming again that we are in the zero-range regime $k_{\rm typ} b\ll1$, we will justify
[Tab.~III, Eq.~(10)] both in $3D$ and in $2D$. We start from
\begin{equation}
n_\sigma(\mathbf{k})=\sum_{i:\sigma} \sum_{(\mathbf{r}_l)_{l\neq i}} b^{d(N-1)}
\left|\sum_{\mathbf{r}_i} b^d e^{-i\mathbf{k}\cdot\mathbf{r}_i}\psi(\mathbf{r}_1,\dots,\mathbf{r}_N)
\right|^2.
\label{eq:nk_psi_latt}
\end{equation}
We are interested in the limit $k\gg k_{\rm typ}$.
Since $\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)$ is a function of $\mathbf{r}_i$ which varies on the scale of $1/k_{\rm typ}$, except when $\mathbf{r}_i$ is close to another particle $\mathbf{r}_j$ where it varies on the scale of $b$,
we can replace $\psi$ by its short-distance form (\ref{eq:psi_courte_dist}):
\begin{multline}
\sum_{\mathbf{r}_i} b^d e^{-i\mathbf{k}\cdot\mathbf{r}_i}\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\simeq
\tilde{\phi}(\mathbf{k}) \\ \times \sum_{j,j\neq i} e^{-i\mathbf{k}\cdot\mathbf{r}_j}A_{ij}(\mathbf{r}_j,(\mathbf{r}_l)_{l\neq i,j}),
\label{eq:TF_approx}
\end{multline}
where $\tilde{\phi}(\mathbf{k})=\langle\mathbf{k}|\phi\rangle=\sum_\mathbf{r} b^d e^{-i\mathbf{k}\cdot\mathbf{r}}\phi(\mathbf{r})$.
Here we excluded the configurations where more than two particles are at distances $\lesssim b$, which are expected to have a negligible contribution to (\ref{eq:nk_psi_latt}).
Inserting (\ref{eq:TF_approx}) into (\ref{eq:nk_psi_latt}),
expanding the modulus squared, and neglecting the cross-product terms in the limit $k\gg k_{\rm typ}$, we obtain
\begin{equation}
n_\sigma(\mathbf{k})\simeq|\tilde{\phi}(\mathbf{k})|^2 (A,A).
\label{eq:nk_AA_latt}
\end{equation}
Finally, $\tilde{\phi}(\mathbf{k})$ is easily computed for the lattice model: for $k\neq0$, the two-body Schr\"odinger equation (\ref{eq:schro_2corps}) directly gives
$\tilde{\phi}(\mathbf{k})=-g_0\phi(\mathbf{0})/(2\epsilon_{\mathbf{k}})$, and $\phi(\mathbf{0})$ is given by (\ref{eq:phi0_vs_g0},\ref{eq:phi0_vs_g0_2D}), which yields [Tab.~III, Eq.~(10)].
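Explicitly, in $3D$, inserting $\tilde{\phi}(\mathbf{k})=-g_0\phi(\mathbf{0})/(2\epsilon_{\mathbf{k}})$ into (\ref{eq:nk_AA_latt}) and using [Tab.~III, Eq.~(4a)] gives
\begin{equation}
n_\sigma(\mathbf{k})\simeq\frac{C}{(4\pi)^2}\,\frac{g_0^2|\phi(\mathbf{0})|^2}{(2\epsilon_{\mathbf{k}})^2}
=C\left(\frac{\hbar^2}{2m\epsilon_{\mathbf{k}}}\right)^2,
\end{equation}
the last equality using the value $g_0^2|\phi(\mathbf{0})|^2=(4\pi\hbar^2/m)^2$ resulting from (\ref{eq:phi0_vs_g0}); the $2D$ case is identical with $(4\pi)^2\to(2\pi)^2$ and (\ref{eq:phi0_vs_g0_2D}).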
\section{Relations for a finite-range interaction in continuous space}
\label{sec:V(r)}
In this Section~\ref{sec:V(r)}, we restrict ourselves for simplicity to the case of a stationary state. It is then convenient to define $C$ by~[Tab. IV, Eqs.~(1a,1b)].
\begin{table*}
\begin{tabular}{|cc|cc|}
\hline
Three dimensions & & Two dimensions & \\
\hline
& & & \vspace{-4mm} \\
$\displaystyle C\equiv\frac{4\pi m}{\hbar^2}\frac{dE}{d(-1/a)}$
& (1a)
& $\displaystyle C\equiv\frac{2\pi m}{\hbar^2}\frac{dE}{d(\ln a)}$
& (1b) \vspace{-4mm}
\\
& & & \\
\hline
& & & \vspace{-4mm} \\
$\displaystyle E_{\rm int}=\frac{C}{(4\pi)^2}\int d^3r\,V(r) |\phi(r)|^2$
& (2a)
& $\displaystyle E_{\rm int}=\frac{C}{(2\pi)^2}\int d^2r\,V(r) |\phi(r)|^2$
& (2b) \vspace{-4mm}
\\
& & & \\
\hline
& & & \vspace{-4mm} \\
$\displaystyle E-E_{\rm trap}=\frac{\hbar^2 C}{4\pi m a}$
&
& $\displaystyle E-E_{\rm trap}=\lim_{R\to\infty}\Bigg\{\frac{\hbar^2 C}{2\pi m}\ln\left(\frac{R}{a}\right)$
&
\\
$\displaystyle +\sum_{\sigma}\int \frac{d^3 k}{(2\pi)^3}\,\frac{\hbar^2 k^2}{2m}\left[n_\sigma(\mathbf{k})-\frac{C}{(4\pi)^2}|\tilde{\phi}'(k)|^2\right]$
& (3a)
& $\displaystyle +\sum_{\sigma}\int \frac{d^2 k}{(2\pi)^2}\,\frac{\hbar^2 k^2}{2m}\left[n_\sigma(\mathbf{k})-\frac{C}{(2\pi)^2}|\tilde{\phi}'_R(k)|^2\right]\Bigg\}$
& (3b) \vspace{-4mm}
\\
& & & \\
\hline
\multicolumn{4}{|c|}{In the zero-range regime $k_{\rm typ} b\ll1$}
\\ \hline
&&& \vspace{-4mm} \\
$\displaystyle\int d^3R\,
g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\simeq
\frac{C}{(4\pi)^2}|\phi(\mathbf{r})|^2$\ \ \ for $r\ll k_{\rm typ}^{-1}$
& (4a)
& $\displaystyle\int d^2R\,
g_{\uparrow \downarrow}^{(2)} \left(\mathbf{R}+\frac{\mathbf{r}}{2},
\mathbf{R}-\frac{\mathbf{r}}{2}\right)
\simeq
\frac{C}{(2\pi)^2}|\phi(\mathbf{r})|^2$\ \ \ for $r\ll k_{\rm typ}^{-1}$
& (4b) \vspace{-4mm}
\\
& &&
\\ \hline
& && \vspace{-4mm}\\
$\displaystyle n_\sigma(\mathbf{k})\simeq\frac{C}{(4\pi)^2}|\tilde{\phi}(\mathbf{k})|^2$\ \ \ for $k\gg k_{\rm typ}$
&
(5a) &
$\displaystyle n_\sigma(\mathbf{k})\simeq\frac{C}{(2\pi)^2}|\tilde{\phi}(\mathbf{k})|^2$\ \ \ for $k\gg k_{\rm typ}$
& (5b) \vspace{-4mm}
\\
& &&
\\
\hline
\end{tabular}
\caption{Relations for spin-1/2 fermions with a finite-range interaction potential $V(r)$ in continuous space, for any stationary state. $C$ is defined in line 1. All relations
remain valid at thermal equilibrium in the canonical ensemble; the derivatives of the energy in line~1 then have to be taken at constant entropy.
Equations~(1a,2a,4a) are contained in~\cite{ZhangLeggettUniv} (for $k_{\rm typ} b\ll1$).
The functions $\phi'(r)$ and $\phi'_R(r)$ are given by Eqs.~(\ref{eq:defphip3d},\ref{eq:defphip2d})
and $\tilde{\phi}'(k)$, $\tilde{\phi}'_R(k)$ are their Fourier transforms.
\label{tab:V(r)}}
\end{table*}
\subsection{Interaction energy}
As for the lattice model, we find that the interaction energy is proportional to $C$,
see [Tab.~IV, Eqs.~(2a,2b)].
It was shown in~\cite{ZhangLeggettUniv} that the $3D$ relation is asymptotically
valid in the zero-range limit.
Here we show that it remains exact for any finite value of the range and we generalize it to $2D$.
For the derivation, we set
\begin{equation}
V(r)=g_0 W(r)
\end{equation}
where $g_0$ is a dimensionless coupling constant which allows one to tune $a$.
The Hellmann-Feynman theorem then gives $E_{\rm int}=g_0 dE/dg_0$. The result then follows by writing $dE/dg_0=dE/d(-1/a)\cdot d(-1/a)/dg_0$ in $3D$ and
$dE/dg_0=dE/d(\ln a)\cdot d(\ln a)/dg_0$ in $2D$,
and by using the definition~[Tab. IV, Eqs.~(1a,1b)] of $C$
as well as the following lemmas:
\begin{eqnarray}
g_0\frac{d(-1/a)}{d g_0}&=& \frac{m}{4\pi\hbar^2}\int d^3 r\, V(r) |\phi(r)|^2
\ \ \ {\rm in}\ 3D
\label{eq:lemme_g0_vs_a_3D}
\\
g_0\frac{d(\ln a)}{d g_0}&=& \frac{m}{2\pi\hbar^2}\int d^2 r\, V(r) |\phi(r)|^2
\ \ \ {\rm in}\ 2D.
\label{eq:lemme_g0_vs_a_2D}
\end{eqnarray}
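Putting everything together in $3D$,
\begin{equation}
E_{\rm int}=g_0\frac{dE}{dg_0}
=\frac{dE}{d(-1/a)}\;g_0\frac{d(-1/a)}{dg_0}
=\frac{\hbar^2 C}{4\pi m}\,\frac{m}{4\pi\hbar^2}\int d^3 r\, V(r)|\phi(r)|^2,
\end{equation}
which is [Tab. IV, Eq. (2a)]; the $2D$ relation [Tab. IV, Eq. (2b)] follows in the same way from (\ref{eq:lemme_g0_vs_a_2D}).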
To derive these lemmas, we
consider two values of the scattering length $a_i,\ i=1,2$, and the corresponding scattering states $\phi_i$ and coupling constants $g_{0,i}$. The corresponding two-particle relative-motion Hamiltonians are $H_i=-(\hbar^2/m)\,\Delta_\mathbf{r} + g_{0,i} W(r)$. Since $H_i \phi_i=0$, we have
\begin{equation}
\lim_{R\to\infty} \int_{r<R} d^d r \left( \phi_1 H_2 \phi_2
- \phi_2 H_1 \phi_1 \right) = 0.
\end{equation}
The contribution of the kinetic energies can be computed from the divergence theorem and the large-distance form of $\phi$~\footnote{We assume, to facilitate the derivation, that $V(r)=0$ for $r>b$, but the result is expected to hold for any $V(r)$ which vanishes quickly enough at infinity.}.
\setcounter{fnnumberter}{\thefootnote}
The contribution of the potential energies is proportional to $g_{0,2}-g_{0,1}$. Taking the limit $a_2\to a_1$ gives the results (\ref{eq:lemme_g0_vs_a_3D},\ref{eq:lemme_g0_vs_a_2D}).
Lemma (\ref{eq:lemme_g0_vs_a_3D}) was also used in~\cite{ZhangLeggettUniv} and the above derivation is essentially identical to the one of~\cite{ZhangLeggettUniv}.
For this $3D$ lemma, there also exists an alternative derivation based on the two-body problem in a large box~\footnote{We consider two particles of opposite spin in a cubic box of side $L$ with periodic boundary conditions, and we work in the limit where $L$ is much larger than $|a|$ and $b$. In this limit, there exists a ``weakly interacting'' stationary state $\psi$ whose energy is given by the ``mean-field'' shift
$E=g/L^3$ with $g=4\pi\hbar^2 a/m$. The Hellmann-Feynman theorem gives $g_0\,dE/dg_0=E_{\rm int}[\psi]$.
But the wavefunction $\psi(\mathbf{r}_1,\mathbf{r}_2)\simeq\Phi(r_{12})/L^3$ where $\Phi$ is the zero-energy scattering state normalized by $\Phi\to1$ at infinity. Thus $E_{\rm int}=\int d^3 r\,V(r) |\Phi(r)|^2/L^3$. The desired Eq.~(\ref{eq:lemme_g0_vs_a_3D}) then follows, since $\Phi=-a \phi$.}.
\subsection{Relation between energy and momentum distribution}
\noindent
\underline{\it Three dimensions:}
The natural counterpart, for a finite-range interaction potential, of the zero-range-model expression of the energy as a functional of the momentum distribution [Tab.~II, Eq.~(5a)] is given
by [Tab.~IV, Eq.~(3a)],
where $\tilde{\phi}'(k)$ is the zero-energy scattering state in momentum space with
the incident wave contribution $\propto \delta(\mathbf{k})$ subtracted out:
$\tilde{\phi}'(k)=\tilde{\phi}(k)+a^{-1}(2\pi)^3\delta(\mathbf{k})=\int d^3r\,e^{-i\mathbf{k}\cdot\mathbf{r}}\phi'(r)$ with
\begin{equation}
\phi'(r)=\phi(r)+\frac{1}{a}.
\label{eq:defphip3d}
\end{equation}
This is simply obtained by adding the kinetic energy to [Tab.~IV, Eq.~(2a)]
and by using the lemma:
\begin{equation}
\int d^3 r\, V(r) |\phi(r)|^2 = \frac{4\pi\hbar^2}{m a}-\int \frac{d^3 k}{(2\pi)^3}\frac{\hbar^2k^2}{m}|\tilde{\phi}'(k)|^2.
\label{eq:lemme_phi'(k)_3D}
\end{equation}
To derive this lemma, we start from Schr\"odinger's equation $-(\hbar^2/m)\Delta\phi+V(r)\phi=0$, which implies
\begin{equation}
\int d^3 r\, V(r) |\phi(r)|^2=\frac{\hbar^2}{m}\int d^3 r \, \phi\Delta\phi.
\label{eq:phiDeltaphi}
\end{equation}
Applying the divergence theorem
over the sphere of radius $R$, using the asymptotic expression (\ref{eq:normalisation_phi_tilde_3D}) of $\phi$
and taking the limit $R\to\infty$ then yields
\begin{equation}
\int d^3 r \, \phi\Delta\phi = \frac{4\pi}{a}-\int d^3 r\, (\mathbf{\nabla} \phi)^2.
\end{equation}
We then replace $\nabla \phi$ by $\nabla\phi'$, which is allowed since $\phi$ and $\phi'$ differ only by the constant $1/a$. Applying the Parseval-Plancherel relation to $\partial_i \phi'$,
and using the fact that $\phi'(r)$ vanishes at infinity, we get:
\begin{equation}
\int d^3 r\,(\nabla\phi')^2 = \int \frac{d^3 k}{(2\pi)^3}\,k^2 |\tilde{\phi}'(k)|^2.
\end{equation}
The desired result (\ref{eq:lemme_phi'(k)_3D}) follows.
\noindent
\underline{\it Two dimensions:}
An additional regularisation procedure for small momenta is required in $2D$, as was the case for the zero-range
model~[Tab. II, Eq.~(5b)] and for the lattice model~[Tab. III, Eq.~(3b)]. One obtains
[Tab.~IV, Eq.~(3b)],
where $\tilde{\phi}_R'(k)=\int d^{2}r\,e^{-i\mathbf{k}\cdot\mathbf{r}}\phi'_R(r)$ with
\begin{equation}
\phi_R'(r)=\left[\phi(r)-\ln(R/a)\right]\,\theta(R-r).
\label{eq:defphip2d}
\end{equation}
This follows from [Tab.~IV, Eq.~(2b)] and from the lemma:
\begin{multline}
\int d^2r\,V(r)|\phi(r)|^2=\lim_{R\to\infty}\left\{\frac{2\pi\hbar^2}{m}\ln\left(\frac{R}{a}\right)
\right. \\ \left. -\int\frac{d^2k}{(2\pi)^2}\,\frac{\hbar^2k^2}{m}|\tilde{\phi}'_R(k)|^2\right\}.
\label{eq:lemme_phi'(k)_2D}
\end{multline}
The derivation of this lemma again starts with the 2D version of (\ref{eq:phiDeltaphi}). The divergence theorem then
gives~[\thefnnumberter]
\begin{equation}
\int d^2r\,\phi\Delta\phi=\lim_{R\to\infty}\left\{2\pi\ln\left(\frac{R}{a}\right)-\int_{r<R} d^2r\,(\mathbf{\nabla}\phi)^2\right\}.
\end{equation}
We can then replace $\int_{r<R}d^2r\,(\nabla \phi)^2$ by $\int d^2r\,(\nabla\phi'_R)^2$, since $\phi'_R(r)$ is continuous at $r=R$~[\thefnnumberter] so that $\nabla\phi'_R$ does not contain any delta distribution. The Parseval-Plancherel relation can be applied to $\partial_i \phi'_R$, since this function is square-integrable. Then, using the fact that $\phi'_R(r)$ vanishes at infinity, we get
\begin{equation}
\int d^2r\,(\nabla\phi'_R)^2 = \int \frac{d^2k}{(2\pi)^2}\,k^2|\tilde{\phi}'_R(k)|^2,
\end{equation}
and the lemma (\ref{eq:lemme_phi'(k)_2D}) follows.
\subsection{Pair distribution function at short distances}
\label{subsec:g2_V(r)}
In the zero-range regime $k_{\rm typ} b \ll 1$, the short-distance behavior of the pair distribution function is given by the same expressions [Tab.~III, Eqs.~(9a,9b)] as for the lattice model.
Indeed, Eq.~(\ref{eq:G2_AA}) is derived in the same way as for the lattice model;
one can then use the
zero-range model's expressions~[Tab.~II, Eqs.~(2a,2b)]
of $(A,A)$ in terms of $C$, since the finite range model's quantities $C$ and $A$ tend to the zero-range model's ones in the zero-range limit.
In $3D$, the result [Tab.~III, Eq.~(9a)] is contained in~\cite{ZhangLeggettUniv}.
\subsection{Momentum distribution at large momenta}\label{subsec:nk_V(r)}
In the zero-range regime $k_{\rm typ} b\ll1$, the momentum distribution at large momenta $k\gg k_{\rm typ}$ is given by
\begin{eqnarray}
n_\sigma(\mathbf{k})&\simeq&\frac{C}{(4\pi)^2}|\tilde{\phi}(\mathbf{k})|^2
\ \ \ {\rm in}\ 3D
\label{eq:nk_V(r)_3D}
\\
n_\sigma(\mathbf{k})&\simeq&\frac{C}{(2\pi)^2}|\tilde{\phi}(\mathbf{k})|^2
\ \ \ {\rm in}\ 2D.
\end{eqnarray}
Indeed, Eq.~(\ref{eq:nk_AA_latt}) is derived as for the lattice model,
and $(A,A)$ can be expressed in terms of $C$ as in the previous subsection \ref{subsec:g2_V(r)}.
\section{Derivative of the energy with respect to the effective range}
\label{sec:re}
Assuming that the zero-range model is solved, we first show
that the first correction to the energy due to a finite range
of the interaction potential $V(r)$
can be explicitly obtained and only depends
on the $s$-wave effective range of the interaction.
We then enrich the discussion using the many-body
diagrammatic point of view,
where the central object is the full two-body $T$-matrix, to recall
in particular that the situation is more subtle for lattice models
\cite{zhenyaNJP}.
Finally, we relate $\partial E/\partial r_e$ to a subleading term of the short-distance
behavior of the pair distribution function in Sec.~\ref{subsec:re_dans_g2}, and to
the coefficient of the $1/k^6$ subleading tail of $n_{\sigma}(\mathbf{k})$ in Sec.~\ref{subsec:unsurk6}.
\subsection{Derivation of the explicit formulas}
\label{subsec:dotef}
\begin{table*}[t!]
\begin{tabular}{|cc|cc|}
\hline
Three dimensions & & Two dimensions & \\
\hline
&&& \vspace{-4mm} \\
\ \ \ \ $\displaystyle \left(\frac{\partial E}{\partial r_e}\right)_{\!a}= 2\pi (A, (E-\mathcal{H}) A)$ \ \ \ \ & (1a) &
\ \ \ \ $\displaystyle \left(\frac{\partial E}{\partial(r_e^2)}\right)_a = \pi(A, (E-\mathcal{H}) A)$ \ \ \ \ & (1b) \vspace{-4mm} \\
&&& \\
\hline
\multicolumn{4}{|c|}{\vspace{-4mm}}
\\
\multicolumn{3}{|c}{
$\displaystyle \mathcal{H}_{ij} \equiv -\frac{\hbar^2}{4m}\Delta_{\mathbf{R}_{ij}} -\frac{\hbar^2}{2m}\sum_{k\neq i,j}\Delta_{\mathbf{r}_k}
+2 U(\mathbf{R}_{ij}) + \sum_{k\neq i,j} U(\mathbf{r}_k)$} & (2) \vspace{-4mm} \\
\multicolumn{4}{|c|}{} \\
\hline
&&& \vspace{-4mm} \\
$\displaystyle\bar{G}^{(2)}_{\uparrow\downarrow}(\mathbf{r})\underset{r\to 0}{=}\frac{C}{(4\pi)^2} \left(\frac{1}{r}-\frac{1}{a}\right)^2-\frac{m}{2\pi\hbar^2} \frac{\partial E}{\partial r_e}
+O(r)$ & (3a) &
$\displaystyle\bar{G}^{(2)}_{\uparrow\downarrow}(\mathbf{r})\underset{r\to 0}{=}\frac{C}{(2\pi)^2} \ln^2(r/a)-\frac{m}{2\pi\hbar^2} \frac{\partial E}{\partial (r_e^2)}
r^2\ln^2 r +O(r^2\ln r)$
& (3b) \vspace{-4mm} \\
&&& \\
\hline
&&& \vspace{-4mm} \\
$\displaystyle \bar{n}_\sigma(k) - \frac{C}{k^4} \underset{k\to\infty}{\sim} \frac{1}{k^6} \left[\frac{16\pi m}{\hbar^2}
\frac{\partial E}{\partial r_e} -8\pi^2 (A,\Delta_{\mathbf{R}} A) \right]$ & (4a) &
$\displaystyle \bar{n}_\sigma(k) - \frac{C}{k^4} \underset{k\to\infty}{\sim} \frac{1}{k^6} \left[\frac{8\pi m}{\hbar^2}
\frac{\partial E}{\partial (r_e^2)} -4\pi^2 (A,\Delta_{\mathbf{R}} A) \right]$
& (4b) \vspace{-4mm} \\
&&& \\
\hline
\end{tabular}
\caption{For spin-1/2 fermions, derivative of the energy with respect to the effective range $r_e$, or to its square in $2D$,
taken at $r_e=0$ for a fixed value of the scattering length.
The functions $A$ (assumed to be real) are the ones of the zero-range regime. The compact notations for the scalar products and the matrix elements
are defined in Tab.~I. $\bar{n}_\sigma(k)$ is the average of $n_\sigma(\mathbf{k})$ over the direction of $\mathbf{k}$. $\bar{G}^{(2)}_{\uparrow\downarrow}(\mathbf{r})$
is the pair distribution function integrated over the center of mass of the pair and averaged
over the direction of $\mathbf{r}$.
\label{tab:re}}
\end{table*}
\noindent{\underline{\it Three dimensions:}}
\\
In $3D$, the leading order finite-range correction to the zero-range model's spectrum
depends on the interaction potential $V(r)$ only {\it via} its effective range $r_e$,
and is given
by the expression [Tab.~V, Eq.~(1a)],
where the derivative is taken at $r_e=0$ for a fixed value of the scattering length; the function $A$ is assumed to be real without loss
of generality.
As a first way
to obtain this result we use a modified version of the zero-range model,
where the boundary condition [Tab. I, Eq. (1a)] is replaced by
\begin{multline}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\underset{r_{ij}\to0}{=}
\left( \frac{1}{r_{ij}}-\frac{1}{a}+\frac{m}{2\hbar^2}\mathcal{E} r_e
\right) \\ \times \, A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right)
+O(r_{ij}),
\label{eq:CL_re}
\end{multline}
where
\begin{multline}
\mathcal{E}=E-2 U(\mathbf{R}_{ij})-\left(\sum_{k\neq i,j} U(\mathbf{r}_k)\right)
+\frac{1}{A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right)} \\ \times
\left[
\frac{\hbar^2}{4m}\Delta_{\mathbf{R}_{ij}}+\frac{\hbar^2}{2m}\sum_{k\neq i,j}\Delta_{\mathbf{r}_k}\right]
A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right).
\label{eq:E=}
\end{multline}
Equations (\ref{eq:CL_re},\ref{eq:E=}) generalize the ones already used for 3 bosons in free space in \cite{Efimov93,PetrovBosons}
(the predictions of \cite{Efimov93} and \cite{PetrovBosons} have been confirmed using different approaches, see \cite{PlatterRangeCorrections} and Refs. therein, and \cite{MoraBosons,LudoMattia} respectively; moreover, a derivation of these equations was given in
\cite{Efimov93}).
Such a model was also used in the two-body case, see e.g. \cite{Greene,Tiesinga,Naidon}, and the modified scalar product that makes it hermitian
was constructed in \cite{LudovicOndeL}.
For the derivation of [Tab.~V, Eq.~(1a)], we consider a stationary state $\psi_1$ of the zero-range model, satisfying the boundary condition [Tab. I, Eq. (1a)] with a scattering length $a$ and a regular part $A^{(1)}$, and the corresponding finite-range stationary state $\psi_2$ satisfying (\ref{eq:CL_re},\ref{eq:E=}) with the same scattering length $a$ and a regular part $A^{(2)}$.
As in Appendix~\ref{app:lemme} we get (\ref{eq:ostro}), as well as
(\ref{eq:int_surface_3D}) with $1/a_1-1/a_2$ replaced by $m\mathcal{E}r_e/(2\hbar^2)$.
This yields [Tab.~V, Eq.~(1a)].
A deeper physical understanding and a more self-contained derivation may be achieved
by going back to the actual
finite range model $V(r;b)$ for the interaction potential, such that the scattering
length remains fixed when the range $b$ tends to zero. The Hellmann-Feynman theorem gives
\begin{equation}
\frac{dE}{db}\! =\!\! \sum_{i=1}^{N_\uparrow} \sum_{j=N_\uparrow+1}^{N} \!\int\! d^3r_1\ldots d^3r_N |\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)|^2 \partial_b V(r_{ij};b).
\label{eq:HellFeyn}
\end{equation}
We need to evaluate $|\psi|^2$ for a typical configuration with two atoms $i$ and $j$ within the potential range $b$; in the limit $b\to 0$ one may then
assume that the other atoms are separated by much more than $b$ and are at distances from $\mathbf{R}_{ij}=(\mathbf{r}_i+\mathbf{r}_j)/2$ much larger than $b$. This motivates
the factorized ansatz
\begin{equation}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N) \simeq \chi(r_{ij}) A_{ij}(\mathbf{R}_{ij},(\mathbf{r}_k)_{k\neq i,j}).
\label{eq:fact_ansatz}
\end{equation}
We take a rotationally invariant $\chi$, because we assume the absence
of scattering resonance in the partial waves other than $s$-wave
\footnote{More precisely, one first takes a general, non-rotationally invariant function $\chi(\mathbf{r})$, that
one then expands in partial waves of angular momentum $l$, that is in spherical harmonics.
Performing the reasoning to come for each $l$, one finds at the end that the $l=0$ channel finite range correction
dominates for small $b$, in the absence of $l$-wave resonance for $l\neq 0$.}: The $p$-wave scattering amplitude, which vanishes quadratically with the relative wavenumber $k$,
is then $O(b^3 k^2)$, resulting
in an energy contribution $O(b^3)$ negligible at the present order.
Inserting the ansatz (\ref{eq:fact_ansatz}) into Schr\"odinger's equation $H\psi = E \psi$, and
neglecting the trapping potential within the interaction range
$r_{ij}\leq b$, as justified in the Appendix~\ref{app:piege_dans_interaction},
gives \footnote{Since $\mathcal{E}$ depends on $\mathbf{R}_{ij}$ and
the $(\mathbf{r}_k)_{k\neq i,j}$, $\chi$ actually depends on these variables and not only on $r_{ij}$. This dependence however rapidly vanishes in the limit $b\to 0$,
if one restricts to the distances $r_{ij} \lesssim b$, for the normalization (\ref{eq:norma_chi}): $\partial_{\mathcal{E}}\chi/\chi = O(mb^2/\hbar^2)$.}
\begin{equation}
\mathcal{E} \chi(r_{ij}) \simeq [-\frac{\hbar^2}{m} \Delta_{\mathbf{r}_{ij}}+ V(r_{ij};b)] \chi(r_{ij}),
\label{eq:espc}
\end{equation}
where $\mathcal{E}$ is given by (\ref{eq:E=}).
For $\mathcal{E}>0$, we set $\mathcal{E}=\hbar^2 k^2/m$ with $k>0$, and
$\chi$ is a finite energy scattering state; to match the normalization of the zero energy scattering state
$\phi$ used in this article, see (\ref{eq:normalisation_phi_tilde_3D}), we take for $r$ out of the interaction potential
\begin{equation}
\chi(r) \underset{r\to\infty}{=} \frac{1}{f_k} \frac{\sin(kr)}{kr} + \frac{e^{ikr}}{r},
\label{eq:norma_chi}
\end{equation}
where $f_k$ is the scattering amplitude. The optical theorem, implying that
\begin{equation}
f_k=-\frac{1}{ik+u(k)},
\label{eq:introu}
\end{equation}
where $u(k)\in \mathbb{R}$, ensures that $\chi$ is real
\footnote{$u(k)$ is related to the $s$-wave collisional phase shift $\delta_0(k)$
by $u(k)=-k/\tan \delta_0(k)$.}.
Inserting the ansatz (\ref{eq:fact_ansatz}) into the Hellmann-Feynman expression
(\ref{eq:HellFeyn}) gives
\begin{multline}
\frac{dE}{db} \simeq \sum_{i<j} \int d^3R_{ij} \int (\prod_{k\neq i,j} d^3r_k) \ A^2_{ij}(\mathbf{R}_{ij},(\mathbf{r}_k)_{k\neq i,j})
\\
\times \int d^3r_{ij} \ \chi^2(r_{ij}) \partial_b V(r_{ij};b)
\end{multline}
To evaluate the integral of $\chi^2 \partial_b V$, we use
the following lemma (whose derivation is given in the next paragraph):
\begin{equation}
\frac{4\pi\hbar^2}{m} [u_2(k)-u_1(k)]= \int_{\mathbb{R}^3}\!\!\! d^3r\, \chi_1(r) \chi_2(r) [V(r;b_1)-V(r;b_2)]
\label{eq:lemme_utile}
\end{equation}
where $\chi_1$ and $\chi_2$ are the same energy $\mathcal{E}$ scattering states for two different values $b_1$ and $b_2$ of the potential range.
Dividing this expression by $b_1-b_2$, taking the limit $b_1\to b_2$, and then the limit $b_2\to 0$, in which the low-$k$ expansion holds:
\begin{equation}
u(k) = \frac{1}{a} - \frac{1}{2} r_e k^2 +O(b^3 k^4)
\label{eq:devuk_3D}
\end{equation}
$r_e$ being the effective range of the interaction potential of range $b$,
we obtain [Tab.~V, Eq.~(1a)] \footnote{In general, when $N_\uparrow\geq 2$ and $N_\downarrow\geq 2$,
the functions $A_{ij}$ have $1/r_{kl}$ divergences when $r_{kl}\to 0$.
This is apparent in the dimer-dimer scattering problem \cite{PetrovPRL}. As a consequence, in the integral
of [Tab.~V, Eq.~(1a)], one has to exclude the manifold where at least two particles are at the same
location. The same exclusion has to be performed in $2D$}.
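The consistency between the low-$k$ expansion (\ref{eq:devuk_3D}) and the Smorodinski representation of $r_e$ quoted below in Eq.~(\ref{eq:smorodinski}) can be checked numerically on a square-well potential. A minimal sketch, not part of the derivation (units $\hbar=m=b=1$; the depth $k_0^2$ and the wavenumbers used in the fit are arbitrary choices):

```python
import math

# Minimal check (units hbar = m = b = 1) on a square well V(r) = -k0^2 for r < 1,
# with k0 b = 1.3 < pi/2 so that there is no two-body bound state.
k0 = 1.3
a = 1.0 - math.tan(k0) / k0          # s-wave scattering length of the square well

def u_of_k(k):
    """u(k) = -k / tan(delta_0(k)) from the standard square-well phase shift."""
    K = math.sqrt(k * k + k0 * k0)   # wavenumber inside the well
    delta0 = math.atan(k * math.tan(K) / K) - k
    return -k / math.tan(delta0)

# Effective range from the expansion u(k) = 1/a - r_e k^2 / 2 + O(k^4)
k1, k2 = 0.01, 0.005
re_fit = 2.0 * (u_of_k(k2) - u_of_k(k1)) / (k1 ** 2 - k2 ** 2)

# Effective range from the Smorodinski formula r_e = 2 int_0^inf [(1-r/a)^2 - u0^2] dr,
# with the zero-energy state u0(r) = C sin(k0 r) inside, 1 - r/a outside (integrand
# vanishes identically for r > b, so we integrate over [0, 1] only).
C = (1.0 - 1.0 / a) / math.sin(k0)
n = 200000
h = 1.0 / n
re_smor = 2.0 * sum(
    ((1.0 - (i + 0.5) * h / a) ** 2 - (C * math.sin(k0 * (i + 0.5) * h)) ** 2) * h
    for i in range(n)
)

print(re_fit, re_smor)   # the two determinations of r_e agree
```
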
As a side result of this physical approach, the modified contact conditions (\ref{eq:CL_re}) may be rederived.
One performs an analytical continuation of the out-of-potential wavefunction (\ref{eq:norma_chi})
to the interval $r\leq b$ \cite{CombescotC} and one takes the zero-$r$ limit of that continuation
\footnote{The wavefunction is not an analytic function of $r$ for a compact support interaction potential, since
a non-zero compact support function is not analytic.}. In simple words, this amounts
to expanding (\ref{eq:norma_chi}) in powers of $r$:
\begin{equation}
\chi(r) = \frac{1}{r}-\frac{1}{a} + \frac{1}{2} k^2 r_e + O(r).
\end{equation}
Inserting this expansion in (\ref{eq:fact_ansatz}) and using $k^2=m\mathcal{E}/\hbar^2$ gives (\ref{eq:CL_re}).
The lemma (\ref{eq:lemme_utile}) is obtained by multiplying Schr\"odinger's equations for
$\chi_1$ (respectively for $\chi_2$) by $\chi_2$ (respectively by $\chi_1$),
taking the difference of the two resulting equations, integrating this difference over the sphere $r<R$ and using the divergence theorem to convert
the volume integral of $\chi_2\Delta_\mathbf{r} \chi_1-\chi_1\Delta_\mathbf{r} \chi_2$ into a surface integral, where the asymptotic forms
(\ref{eq:norma_chi}) for $r=R\to +\infty$ may be used.
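Explicitly, writing $w_i(r)=r\chi_i(r)$ (a notation introduced here for convenience), the surface integral is a Wronskian that can be read off the asymptotic form (\ref{eq:norma_chi}); the following sketch makes the last step of this derivation fully explicit:

```latex
\begin{align}
\int_{r<R} d^3r\, \left(\chi_2\Delta_{\mathbf{r}}\chi_1-\chi_1\Delta_{\mathbf{r}}\chi_2\right)
&= 4\pi R^2 \left(\chi_2\partial_r\chi_1-\chi_1\partial_r\chi_2\right) \nonumber \\
&= 4\pi \left(w_2 w_1' - w_1 w_2'\right)_{r=R}.
\end{align}
For $R\to+\infty$, Eq.~(\ref{eq:norma_chi}) gives $w_i(r)=\sin(kr)/(k f_i)+e^{ikr}$, so that
\begin{equation}
w_2 w_1' - w_1 w_2' = \frac{1}{k}\left(\frac{1}{f_2}-\frac{1}{f_1}\right)
\left[\sin(kr)\,(e^{ikr})'-k\cos(kr)\,e^{ikr}\right]
=\frac{1}{f_1}-\frac{1}{f_2}=u_2(k)-u_1(k),
\end{equation}
since the bracket equals $-k$ and, from Eq.~(\ref{eq:introu}), $1/f_i=-[ik+u_i(k)]$.
Schr\"odinger's equation (\ref{eq:espc}) gives
$\Delta_{\mathbf{r}}\chi_i=(m/\hbar^2)\left[V(r;b_i)-\mathcal{E}\right]\chi_i$,
so that the volume integral equals
$(m/\hbar^2)\int\chi_1\chi_2\left[V(r;b_1)-V(r;b_2)\right]$,
and Eq.~(\ref{eq:lemme_utile}) follows.
```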
When $\mathcal{E}<0$, we set $\mathcal{E}=-\hbar^2\kappa^2/m$ with $\kappa>0$ and we perform analytic continuation of the $\mathcal{E}>0$ case
by replacing $k$ with $i\kappa$. From (\ref{eq:norma_chi}) it appears that $\chi(r)$ now diverges exponentially at large distances, as $e^{\kappa r}/r$,
if $1/f(i\kappa)\ne 0$.
If the interaction potential is a compact support potential, or simply tends to zero more rapidly than $\exp(-2\kappa r)$, the lemma
and the final conclusion
[Tab.~V, Eq.~(1a)] still hold; the functions $u_1(i\kappa)$ and $u_2(i\kappa)$ remain real, since the series expansion of $u(k)$ has only
even powers of $k$.
\\
\noindent{\underline{\it Two dimensions:}}
\\
The above physical reasoning may be directly generalized to $2D$ \footnote{We consider here a truly $2D$ gas. In experiments,
quasi-$2D$ gases are produced by freezing the $z$ motion in a harmonic oscillator ground state of size $a_z=[\hbar/(m\omega_z)]^{1/2}$:
At zero temperature, a $2D$ character appears for $\hbar^2 k_F^2/(2m)\ll \hbar\omega_z$. From the quasi-$2D$ scattering amplitude given
in \cite{LudoScatteringLowD} (see also \cite{PetrovShlyapCollisions2D}) we find the effective range squared,
$r_e^2 = -(\ln 2)\, a_z^2$. Anticipating on subsection \ref{subsec:wwlftdpov}
we also find $\rho_e=R_1=0$. It would be interesting to
see if the finite range energy corrections dominate over the corrections due to the $3D$ nature of the gas,
both effects being controlled by the same small parameter $(k_F r_e)^2$.},
giving [Tab.~V, Eq.~(1b)],
where the derivative is taken for a fixed scattering length at $r_e=0$.
The main difference with the $3D$ case [Tab.~V, Eq.~(1a)]
is that the energy $E$ now varies quadratically with the effective range $r_e$,
as already observed numerically for three-boson-bound states in \cite{HelfrichHammer}.
In the derivation, the first significant difference with the $3D$ case occurs in the normalization of the two-body scattering state: (\ref{eq:norma_chi})
is replaced with
\begin{equation}
\chi(r) \underset{r\to\infty}{=} \frac{\pi}{2i} \left[\frac{1}{f_k} J_0(kr) + H_0^{(1)}(kr)\right]
\label{eq:asympt2d}
\end{equation}
where $H_0^{(1)}=J_0+i N_0$ is a Hankel function,
$J_0$ and $N_0$ are Bessel functions of the first and second kinds. The optical theorem implies $|f_k|^2+\mbox{Re}\, f_k=0$
so that
\begin{equation}
f_k = \frac{-1}{1+i u(k)} \ \ \ \ \ \mbox{with}\ \ \ \ \ u(k)\in \mathbb{R},
\label{eq:def_fk_a_2D}
\end{equation}
and $\chi$ is real. The low-$k$ expansion for a potential of range $b$
takes the form \cite{Verhaar2D,Khuri}
\begin{equation}
u(k) = \frac{2}{\pi} \left[\ln\left(e^{\gamma} ka/2\right)+
\frac{1}{2} (k r_e)^2+\ldots\right],
\label{eq:devuk_2D}
\end{equation}
where $\gamma=0.577216\ldots$ is Euler's constant,
the logarithmic term being obtained in the zero-range Bethe-Peierls model and the $k^2$ term corresponding to finite effective range
corrections (with the sign convention of \cite{Verhaar2D} such that $r_e^2>0$ for a hard disk potential).
The subsequent calculations are similar to the $3D$ case, also for the negative energy case where analytic continuation
gives rise to the special functions $I_0(\kappa r)$ and $K_0(\kappa r)$.
For example, at positive energy, the lemma (\ref{eq:lemme_utile})
takes in $2D$ the form
\begin{equation}
\frac{\pi^2\hbar^2}{m} [u_1(k)-u_2(k)]= \int_{\mathbb{R}^2}\!\!\!
d^2r\, \chi_1(r) \chi_2(r) [V(r;b_1)-V(r;b_2)]
\label{eq:lemme_utile_2D}
\end{equation}
The fact that one can neglect the trapping potential within
the interaction range is again
justified in Appendix~\ref{app:piege_dans_interaction}.
Finally, we note that the expansion of the asymptotic form (\ref{eq:asympt2d}) for $r\to 0$, and for $k\to 0$,
\begin{equation}
\chi(r) =\ln(r/a) - \frac{1}{2} (k r_e)^2 + O(r^2\ln r)
\end{equation}
allows one to determine the $2D$ version of the modified zero-range model (\ref{eq:CL_re}),
\begin{multline}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\underset{r_{ij}\to0}{=}
\left( \ln(r_{ij}/a)-\frac{m}{2\hbar^2}\mathcal{E} r_e^2
\right) \\ \times \, A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right)
+O(r_{ij})
\end{multline}
where $\mathcal{E}$ is defined as in $3D$ by (\ref{eq:E=}).
To complete this $2D$ derivation, one has to check that the $p$-wave interaction
brings a negligible contribution to the energy. The $p$-wave scattering amplitude at low relative wavenumber $k$
vanishes as $k^2 R_1^2$ where $R_1^2$ is the $p$-wave scattering surface \cite{AdhikariGibson}. One might believe
that $r_e\approx R_1 \approx b$ and conclude that the $p$-wave
contribution to the energy, scaling as $R_1^2$, cannot be neglected as compared to the $s$-wave finite-range
correction, scaling as $r_e^2$. Fortunately, as shown in subsection \ref{subsec:wwlftdpov}, this expectation is too naive, and
[Tab.~V, Eq.~(1b)] is saved by a logarithm, $r_e$ being larger than $R_1$ by a factor $\ln(a/b)\gg 1$ in the zero range limit
\footnote{As in $3D$ one may also be worried by the dependence of $\chi$ with $\mathbf{R}_{ij}$ and
the $(\mathbf{r}_k)_{k\neq i,j}$ {\sl via} its dependence with the energy $\mathcal{E}$. We reach the estimate $\partial_{\mathcal{E}}\chi(b)/\chi(b)
\approx m r_e^2/[\hbar^2\ln(a/b)]$ that vanishes more rapidly than $r_e^2$ in the zero-range limit.}.
\subsection{What we learn from diagrammatic formalism}
\label{subsec:wwlftdpov}
In the many-body diagrammatic formalism \cite{PitaevskiiLifchitz,FetterWalecka},
the equation of state of the homogeneous gas (in the thermodynamic limit) is accessed from the single particle Green's function,
which can be expanded in powers of the interaction potential, each term of the expansion
being represented by a Feynman diagram. The internal momenta of the diagrams can however be as large as
$\hbar/b$, where $b$ is the interaction range. A standard approach to
improve the convergence of the perturbative series for strong interaction potentials is to perform the so-called
{\sl ladder} resummation. The resulting Feynman diagrams then involve the two-body $T$-matrix of the interaction, rather than the bare
interaction potential $V$. For the spin-$1/2$ Fermi gas, where there is {\sl a priori} no Efimov effect, one then expects that
the internal momenta of the Feynman diagrams are on the order of $\hbar k_{\rm typ}$ only, where the typical wavenumber $k_{\rm typ}$
was defined in subsection \ref{sec:models:lattice}. As put forward in \cite{zhenyaNJP}, the interaction parameters
controlling the first deviation of the gas energy from its zero-range limit are then the ones appearing in the first deviations
of the two-body $T$-matrix element $\langle \mathbf{k}_1,\mathbf{k}_2|T(E+i0^+)|\mathbf{k}_3,\mathbf{k}_4\rangle$ from its zero-range limit, where all the $\mathbf{k}_i$
are on the order of $k_{\rm typ}$ and $E$ is on the order of $\hbar^2 k_{\rm typ}^2/m$.
The single particle Green's function
is indeed a sum of integrals of products of $T$-matrix elements and of ideal-gas Green's functions.
We explore this idea in this subsection.
For an interaction potential $V(r)$, we confirm the results of subsection \ref{subsec:dotef}.
In addition to the effective range $r_e$ characterizing the on-shell $T$-matrix elements (that is the scattering amplitude),
the diagrammatic point of view introduces a length $\rho_e$ characterizing the $s$-wave low-energy {\sl off-shell} $T$-matrix elements,
and a length $R_1$ characterizing the $p$-wave on-shell scattering; we will show that the contributions
of $\rho_e$ and $R_1$ are negligible as compared to the one of the effective range $r_e$.
Moreover, in the case of lattice models, a length $R_e$ characterizing the breaking
of the Galilean invariance appears \cite{zhenyaNJP}. Its contribution is in general of the same order
as the one of $r_e$. Both contributions can be zeroed for appropriately tuned matterwave dispersion relations
on the lattice.
Finally, in the case of a continuous space model with a delta interaction potential
plus a spherical cut-off
in momentum space, and in the case of a lattice model with a spherical momentum cut-off,
we show that the breaking of Galilean invariance does not disappear
in the infinite cut-off limit.
\subsubsection{For the continuous space interaction $V(r)$}
When each pair of particles $i$ and $j$ interacts in continuous space {\sl via} the potential $V(r_{ij})$, one can use
Galilean invariance to restrict the $T$-matrix to the center of mass frame, where $\mathbf{k}'\equiv \mathbf{k}_1=-\mathbf{k}_2$ and $\mathbf{k}\equiv \mathbf{k}_3= -\mathbf{k}_4$.
Further using rotational invariance, one can restrict this internal $T$-matrix to fixed total angular momentum $l$, with
matrix elements characterized by the function $t_l(k',k;E)$ whose low-energy behavior was extensively studied \cite{AdhikariGibson,Gibson3D}.
This function is said to be {\sl on-shell} iff $k=k'=(mE)^{1/2}/\hbar$, in which case it is simply denoted by $t_l(E)$; otherwise it is said to
be {\sl off-shell}.
\noindent{\underline{\it Three dimensions:}}
\\
We assume that the interaction potential, of compact support of range $b$, is everywhere
non-positive (or infinite). We recall that we are here in the {\sl resonant}
regime, with a $s$ wave scattering length $a$ such that $|a|\gg b$.
The potential is assumed to have the {\sl minimal} depth leading to the desired value of $a$, so as to exclude deeply bound dimers.
In particular, at resonance ($1/a=0$), there is no two-body bound state.
To invalidate the usual variational argument \cite{PandhaPRA,BlattWeisskopf,BaymCollapse,WernerThese} (that shows, for a non-positive interaction potential,
that the spin-$1/2$ fermions have deep $N$-body bound states in the large $N$ limit),
we allow $V(r)$ to have a hard core of range $b_{\rm hard}< b$. We directly restrict to the $s$-wave case ($l=0$), since the non-resonant $p$-wave interaction brings
a negligible $O(b^3)$ contribution, as already discussed in subsection \ref{subsec:dotef}.
The first deviation of the on-shell $s$-wave $T$-matrix from its zero-range limit is characterized by the effective range
$r_e$, already introduced in Eq.~(\ref{eq:devuk_3D}). The effective range is given by the well-known Smorodinski formula \cite{Khuri}:
\begin{equation}
\label{eq:smorodinski}
\frac{1}{2} r_e = \int_0^{+\infty} dr \left[(1-r/a)^2 - u_0^2(r)\right]
\end{equation}
in terms of the zero-energy scattering state $\phi(r)$, where $u_0(r)=r\phi(r)$ and $\phi$ is normalized as in Eq.~(\ref{eq:normalisation_phi_tilde_3D}).
Note that $u_0(r)$ is zero for $r\leq b_{\rm hard}$.
As $r_e$ deviates from its resonant ($|a|\to \infty$) value by terms $O(b^2/a)$, the discussion of its $1/a=0$ value is sufficient here.
The function $u_0$ then solves
\begin{equation}
0 = -\frac{\hbar^2}{m} u_0''(r) + V(r) u_0(r)
\label{eq:esaen}
\end{equation}
with the boundary conditions $u_0(b_{\rm hard})=0$ and $u_0(r)=1$ for $r>b$.
Due to the absence of two-body bound states, $u_0$ is the ground two-body state
and it has a constant sign, $u_0(r)\geq 0$ for all $r$. Since $V\leq 0$, Eq.~(\ref{eq:esaen})
implies that $u_0''\leq 0$, the function $u_0$ is concave. Combined with the boundary conditions, this leads to
$0 \leq u_0(r) \leq 1,$ for all $r$. Then from Eq.~(\ref{eq:smorodinski}):
\begin{equation}
2 b_{\rm hard} \le r_e \le 2 b
\end{equation}
For the considered model, this proves that $k_{\rm typ} r_e\to 0$ in the zero-range limit $b\to 0$, which is a key property
for the present work. Note that the absence of two-body bound states at resonance is the crucial hypothesis
ensuring that $r_e\geq 0$; it was not explicitly stated in the solution of problem 1 in Sec.~131 of \cite{LandauLifschitzMecaQ}.
Without this hypothesis, $r_e$ at resonance can be arbitrarily large and negative even for $V(r)\leq 0$ for all $r$,
see an explicit example in \cite{LeChapitre}.
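To illustrate the bound $2 b_{\rm hard} \le r_e \le 2b$, one can take the resonant ($1/a=0$) minimal-depth square well with a hard core, for which $u_0(r)$ vanishes for $r<b_{\rm hard}$, equals $\sin[k_0(r-b_{\rm hard})]$ with $k_0(b-b_{\rm hard})=\pi/2$ inside the well, and equals $1$ beyond; Eq.~(\ref{eq:smorodinski}) then gives $r_e=b+b_{\rm hard}$, which interpolates between the two bounds. A minimal numerical sketch (the units and the value of $b_{\rm hard}$ are our choice):

```python
import math

# Resonant (1/a = 0) minimal-depth square well with a hard core (units b = 1).
b, b_hard = 1.0, 0.3
k0 = (math.pi / 2) / (b - b_hard)   # minimal depth at resonance: k0 (b - b_hard) = pi/2

def u0(r):
    """Zero-energy scattering state at resonance: u0 -> 1 outside the well."""
    if r < b_hard:
        return 0.0
    if r < b:
        return math.sin(k0 * (r - b_hard))
    return 1.0

# Smorodinski formula at resonance: r_e = 2 int_0^inf [1 - u0(r)^2] dr
# (the integrand vanishes for r > b, so we integrate over [0, b] only).
n = 200000
h = b / n
re = 2.0 * sum((1.0 - u0((i + 0.5) * h) ** 2) * h for i in range(n))

print(re)   # close to b + b_hard = 1.3, within the bounds 2*b_hard <= r_e <= 2*b
```
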
In the $s$-wave channel, the first deviations of the {\sl off-shell} $T$-matrix from its zero-range value introduce, in addition to $r_e$,
another length that we call $\rho_e$, such that \cite{Gibson3D} \footnote{We have checked that the hypothesis of a non-resonant
interaction in \cite{Gibson3D} is actually not necessary to obtain (C16) and (C18) of that reference, that lead to (\ref{eq:rhoe_3D}).}
\begin{multline}
\frac{t_0(k,k';E)}{t_0(E)} -1 \underset{k,k',E\to 0}{\sim} \left(\frac{2mE}{\hbar^2} -k^2 -k'^{2}\right) \frac{1}{2} \rho_e^2 \\
\mbox{with} \ \ \ \frac{1}{2} \rho_e^2 = \int_0^{+\infty} dr\, r [(1-r/a)-u_0(r)].
\label{eq:rhoe_3D}
\end{multline}
For our minimal-depth model at resonance, we conclude that $0\leq \rho_e^2 \leq b^2$, so it appears, in the finite-range correction
to the energy, at a higher order than $r_e$, and it cannot contribute to [Tab.~V, Eq.~(1a)].
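As an illustration of this bound, for the resonant minimal-depth square well without hard core one has $u_0(r)=\sin(\pi r/2b)$ for $r\le b$ and $u_0=1$ beyond, and Eq.~(\ref{eq:rhoe_3D}) gives $\rho_e^2=(1-8/\pi^2)\,b^2\simeq 0.19\, b^2$. A minimal numerical sketch (units $b=1$, $1/a=0$):

```python
import math

# rho_e^2 = 2 int_0^inf r [(1 - r/a) - u0(r)] dr at resonance (1/a = 0), units b = 1;
# the integrand vanishes for r > b, so we integrate over [0, 1] only.
n = 200000
h = 1.0 / n
rho2 = 2.0 * sum(
    (i + 0.5) * h * (1.0 - math.sin(math.pi * (i + 0.5) * h / 2.0)) * h
    for i in range(n)
)

print(rho2)   # close to 1 - 8/pi**2 ~ 0.189, within the bound 0 <= rho_e^2 <= b^2
```
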
\noindent{\underline{\it Two dimensions:}}
\\
The specific feature of the $2D$ case is that the minimal-depth attractive potential ensuring the desired scattering length
$a$ only weakly dephases the matter-wave over its range, when $\ln(a/b)\gg 1$. This is apparent e.g.\ if $V(r)$ is a square-well potential of range $b$,
$V(r)=-\frac{\hbar^2 k_0^2}{m} \theta(b-r)$: One has $-k_0 b J_0'(k_0b)/J_0(k_0b)=1/\ln(a/b)$, where $J_0$ is a Bessel function,
which shows that, for the minimal-depth solution, the matter-wave phase shift $k_0 b$ vanishes as $[2/\ln(a/b)]^{1/2}$ in the zero-range limit.
This property allows one to treat the potential perturbatively.
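The quoted relation can be checked directly: setting $x=k_0b$ and $L=\ln(a/b)$, and using $J_0'=-J_1$, the condition reads $x J_1(x)/J_0(x)=1/L$, whose minimal-depth solution approaches $(2/L)^{1/2}$ for $L\gg1$. A minimal numerical sketch, with the Bessel functions evaluated from their power series (the value $L=20$ is an arbitrary choice):

```python
import math

def J0(x):
    return sum((-1) ** m * (x * x / 4.0) ** m / math.factorial(m) ** 2
               for m in range(15))

def J1(x):
    return (x / 2.0) * sum((-1) ** m * (x * x / 4.0) ** m
                           / (math.factorial(m) * math.factorial(m + 1))
                           for m in range(15))

L = 20.0                        # L = ln(a/b), assumed large
f = lambda x: x * J1(x) / J0(x) - 1.0 / L

# bisection for the minimal-depth solution x = k0 b in (0, 1)
lo, hi = 1e-9, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x = 0.5 * (lo + hi)

print(x, math.sqrt(2.0 / L))    # x is close to [2/ln(a/b)]^(1/2), as stated in the text
```
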
There are three relevant parameters describing the low-energy behavior of the $T$-matrix beyond the zero-range limit.
The first one is the effective range $r_e$ for the $s$-wave on-shell $T$-matrix, see Eq.~(\ref{eq:devuk_2D}).
It is given by the bidimensional Smorodinski formula \cite{Verhaar2D,Khuri}:
\begin{equation}
\frac{1}{2} r_e^2 = \int_0^{+\infty} dr\, r [\ln^2(r/a)-\phi^2(r)]
\label{eq:smorodinski2d}
\end{equation}
where the zero-energy scattering state $\phi(r)$ is normalized as in Eq.~(\ref{eq:normalisation_phi_tilde_2D}).
The second parameter is the length $\rho_e$ associated to the $s$-wave off-shell $T$-matrix: The $2D$ equivalent of
Eq.~(\ref{eq:rhoe_3D}) is \cite{AdhikariGibson}:
\begin{multline}
\frac{t_0(k,k';E)}{t_0(E)} -1 \underset{k,k',E\to 0}{\sim} \left(\frac{2mE}{\hbar^2} -k^2 -k'^{2}\right) \frac{1}{2} \rho_e^2 \\
\mbox{with} \ \ \ \frac{1}{2} \rho_e^2 = \int_0^{+\infty} dr\, r [\phi(r)-\ln(r/a)].
\label{eq:rhoe_2D}
\end{multline}
The third parameter is the length $R_1$ characterizing the low-energy $p$-wave scattering. For the $l$-wave scattering state
of energy $E=\hbar^2 k^2/m$, $k>0$, we generalize Eq.~(\ref{eq:asympt2d}) as
\begin{equation}
\chi^{(l)}(r) \underset{r\to\infty}{=} \frac{\pi}{2i} k^l\left[\frac{1}{f_k^{(l)}} J_l(kr) + H_l^{(1)}(kr)\right].
\end{equation}
The $l$-wave scattering amplitude then vanishes as
\begin{equation}
f_k^{(l)} \underset{k\to 0}{\sim} i\frac{\pi}{2} k^{2l} R_l^{2l}
\end{equation}
and the leading behavior of the off-shell $l$-wave $T$-matrix is characterized by the same length $R_l$ as the on-shell one
\cite{AdhikariGibson}.
The situation thus looks critical in $2D$: Three lengths squared characterize the low-energy $T$-matrix, and one may naively
expect that they are of the same order $\approx b^2$ and that all three contribute to the finite-range correction to the gas
energy at the same level, whereas [Tab.~V, Eq.~(1b)] singles out the effective range $r_e$. By a perturbative
treatment of the minimal-depth finite-range potential $V(r)$ of fixed scattering length $a$, we however obtain in the zero-range
limit the following hierarchy, see Appendix~\ref{app:param2d}:
\begin{eqnarray}
\label{eq:hier1}
r_e^2 &\underset{b\to 0}{\sim}& 2\rho_e^2 \ln(a/b) \\
\label{eq:hier2}
\rho_e^2 &\underset{b\to 0}{=}& \frac{1}{2} \frac{\int_{\mathbb{R}^2} d^2r \, r^2 V(r)}{\int_{\mathbb{R}^2} d^2r \, V(r)}
\left[1+O\left(\frac{1}{\ln(a/b)}\right)\right] \\
\label{eq:hier3}
R_1^2 &\underset{b\to 0}{\sim}& \frac{\rho_e^2}{2\ln(a/b)}
\end{eqnarray}
This validates [Tab.~V, Eq.~(1b)] when $\ln(a/b)\gg 1$.
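As an illustration (not a derivation), for a square-well $V(r)$ of range $b$ the moment ratio in (\ref{eq:hier2}) equals $b^2/2$, so that $\rho_e^2\simeq b^2/4$, $r_e^2\simeq (b^2/2)\ln(a/b)$ and $R_1^2\simeq b^2/[8\ln(a/b)]$: the three lengths squared are indeed well separated. A minimal numerical sketch (units $b=1$; the value of $\ln(a/b)$ is an arbitrary choice):

```python
# 2D square well V(r) = -V0 for r < b, units b = 1. The ratio in Eq. (hier2) is
# independent of V0: int r^2 V d^2r / int V d^2r = (b^4/4) / (b^2/2) = b^2/2.
moment_ratio = (1.0 / 4.0) / (1.0 / 2.0)      # = b^2/2
rho_e2 = 0.5 * moment_ratio                   # Eq. (hier2), leading order: b^2/4

L = 15.0                                      # L = ln(a/b), assumed large
r_e2 = 2.0 * rho_e2 * L                       # Eq. (hier1)
R_1sq = rho_e2 / (2.0 * L)                    # Eq. (hier3)

print(r_e2, rho_e2, R_1sq)   # hierarchy r_e^2 >> rho_e^2 >> R_1^2
```
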
\subsubsection{Lattice models}
We restrict here for simplicity to the $3D$ case.
To obtain a non-zero $T$-matrix element $\langle \mathbf{k}_1,\mathbf{k}_2|T(E+i0^+)|\mathbf{k}_3,\mathbf{k}_4\rangle$, due to the conservation of the total quasi-momentum,
we have to restrict to $\mathbf{k}_1+\mathbf{k}_2=\mathbf{k}_3+\mathbf{k}_4\equiv \mathbf{K}$ (modulo a vector of the reciprocal lattice).
As the interactions in the lattice model are purely on-site,
the matrix element only depends on the total quasi-momentum $\mathbf{K}$
and the energy $E$, and is denoted by $t(\mathbf{K},E)$ in what follows. We recall that the bare coupling constant $g_0$ is adjusted to have
a fixed scattering length $a$ on the lattice, see Eq.~(\ref{eq:g0_3D}), which leads to
\begin{equation}
g_0 = \frac{4\pi\hbar^2a/m}{1- K_3\, a/b}
\end{equation}
where the numerical constant $K_3$ depends on the lattice dispersion relation $\epsilon_\mathbf{k}$.
One then gets \cite{zhenyaNJP}
\begin{multline}
\frac{1}{t(\mathbf{K},E)} = \frac{m}{4\pi\hbar^2 a} -\int_D \frac{d^3q}{(2\pi)^3}
\Big( \frac{1}{2\epsilon_\mathbf{q}} \\+
\frac{1}{E+i0^+-\epsilon_{\frac{1}{2}\mathbf{K}+\mathbf{q}}-\epsilon_{\frac{1}{2}\mathbf{K}-\mathbf{q}}} \Big)
\label{eq:mattmsr}
\end{multline}
where $a$ is the $s$-wave scattering length and the dispersion relation $\epsilon_\mathbf{q}$ is extended by periodicity from the first Brillouin zone
$D$ to the whole space. The low-$\mathbf{K}$ and low-energy limit of that expression was worked out
in \cite{zhenyaNJP}; it involves the effective range $r_e$ and an extra length $R_e$ quantifying the breaking of Galilean invariance:
\begin{equation}
\frac{1}{t(\mathbf{K},E)}= \frac{m}{4\pi\hbar^2} \left(\frac{1}{a}+ i k -\frac{1}{2} r_e k^2 -\frac{1}{2} R_e K^2\right) + \ldots
\end{equation}
where the relative wavenumber $k$ such that $E-\frac{\hbar^2 K^2}{4m}=\frac{\hbar^2 k^2}{m}$ is either real non-negative
or purely imaginary with a positive imaginary part. The two lengths are given by
\begin{eqnarray}
\label{eq:re_reseau}
r_e &=&\!\! \int_{\mathbb{R}^3\setminus D} \frac{d^3q}{\pi^2 q^4} + \!\! \int_{D} \frac{d^3q}{\pi^2} \left[\frac{1}{q^4}-
\left(\frac{\hbar^2}{2m\epsilon_\mathbf{q}}\right)^2\right]\\
R_e &=& - \int_{\stackrel{\circ}{D}} \frac{d^3q}{4\pi^2} \left(\frac{\hbar^2}{2m\epsilon_\mathbf{q}}\right)^2
\left[1-\frac{m}{\hbar^2} \frac{\partial^2\epsilon_{\mathbf{q}}}{\partial q_x^2}\right] \nonumber \\
&-&
\int_{-\frac{\pi}{b}}^{\frac{\pi}{b}} \int_{-\frac{\pi}{b}}^{\frac{\pi}{b}}\frac{dq_y dq_z}{8\pi^2} \frac{\hbar^2}{m\epsilon^2_{(\frac{\pi}{b},q_y,q_z)}}
\frac{\partial \epsilon_{(\frac{\pi}{b},q_y,q_z)}}{\partial q_x}
\label{eq:Re}
\end{eqnarray}
where the dispersion relation $\epsilon_\mathbf{k}$ is assumed to be twice differentiable on the interior $\stackrel{\circ}{D}$ of the first Brillouin zone
and to be invariant under permutation of the coordinate axes.
As compared to \cite{zhenyaNJP} we have added the second term (a surface term) in Eq.~(\ref{eq:Re}) to include the case where the dispersion
relation has cusps at the border of the first Brillouin zone \footnote{This term is obtained by distinguishing three integration
zones before taking the limit $K_x\to 0$, so as to fold back the vectors $\mathbf{q}\pm \frac{1}{2} \mathbf{K}$ inside the first Brillouin zone:
the left zone $-\frac{\pi}{b}< q_x < -\frac{\pi}{b} +\frac{1}{2}K_x$ where
$\epsilon_{\mathbf{q}-\frac{1}{2}\mathbf{K}}$ is written as $\epsilon_{\mathbf{q}+\frac{2\pi}{b}\mathbf{e}_x-\frac{1}{2}\mathbf{K}}$, the right
zone $\frac{\pi}{b}-\frac{1}{2}K_x < q_x < \frac{\pi}{b}$ where $\epsilon_{\mathbf{q}+\frac{1}{2}\mathbf{K}}$
is written as $\epsilon_{\mathbf{q}-\frac{2\pi}{b}\mathbf{e}_x+\frac{1}{2}\mathbf{K}}$,
and the central zone. The surface term can also be obtained by interpreting
$\partial_{q_x}^2$ in the sense of distributions, after having shifted the integration domain $D$ by $\frac{\pi}{b} \mathbf{e}_x$
for mathematical convenience. The second order derivative in the first term of Eq.~(\ref{eq:Re}) is of course
taken in the sense of functions.}.
\setcounter{notedecoupage}{\thefootnote}
As mentioned in the introduction of the present section, we then expect that, in the lattice model, the first deviation of any many-body eigenenergy $E$ from the zero-range limit
is a linear function of the {\sl two} parameters $r_e$ and $R_e$ with model-independent coefficients:
\begin{equation}
E(b) \underset{b\to 0}{=} E(0) + \frac{\partial E}{\partial r_e} r_e + \frac{\partial E}{\partial R_e} R_e + o(b)
\label{eq:dEdRe}
\end{equation}
This feature was overlooked in the early version \cite{50pages} of this work.
It invalidates the discussion of $\partial T_c/\partial r_e$ given in \cite{50pages}.
We illustrate this discussion with a few relevant examples.
For a parabolic dispersion relation $\epsilon_\mathbf{k}=\frac{\hbar^2 k^2}{2m}$,
the constant $K_3=2.442\, 749\, 607\, 806\, 335\ldots$ \cite{MoraCastin,boite}
and the effective range \cite{YvanVarenna,LeChapitre} were already calculated, first numerically then analytically;
in the quantity $R_e$, the first term vanishes but there is still breaking of Galilean invariance due to the non-zero surface term
that can be deduced from Eq.~(\ref{eq:une_integrale}):
\begin{equation}
r_e=b\,\frac{12\sqrt{2}}{\pi^3} \arcsin\frac{1}{\sqrt{3}}\simeq 0.337 b \ \ \mbox{and} \ \ R_e=-\frac{1}{12} r_e
\end{equation}
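The quoted numerical value can be checked in one line; a minimal sketch:

```python
import math

# r_e / b = 12*sqrt(2)/pi^3 * arcsin(1/sqrt(3)) for the parabolic dispersion relation,
# and R_e = -r_e/12, as quoted in the text.
re_over_b = 12.0 * math.sqrt(2.0) / math.pi ** 3 * math.asin(1.0 / math.sqrt(3.0))
Re_over_b = -re_over_b / 12.0

print(re_over_b, Re_over_b)   # ~ 0.337 and ~ -0.028
```
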
A popular model for Quantum Monte Carlo simulations is the Hubbard model, which leads to the dispersion
relation $\epsilon_\mathbf{k}^{\rm Hub}=\frac{\hbar^2}{mb^2}[3-\cos(k_xb)-\cos(k_yb)-\cos(k_zb)]$ (as already
mentioned in subsection \ref{sec:models:lattice}). This leads to $K_3\simeq 3.175\, 911\, 6$.
Again, both $r_e$ and $R_e$ differ from zero:
\begin{equation}
r_e \simeq -0.305\, 718 b \ \ \mbox{and}\ \ R_e \simeq -0.264\, 659 b
\end{equation}
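The value of $K_3$ can be recovered from the zero-energy integral $\int_D \frac{d^3q}{(2\pi)^3}\,\frac{1}{2\epsilon_\mathbf{q}}=\frac{mK_3}{4\pi\hbar^2 b}$ (our reading of the definition behind Eq.~(\ref{eq:g0_3D})), which for the Hubbard dispersion relation reduces to $K_3=(2/\pi^2)\int_{[0,\pi]^3} d^3k/(3-\cos k_x-\cos k_y-\cos k_z)$, a Watson-type integral. A crude midpoint-rule sketch (the $1/k^2$ singularity at the origin is integrable, so a moderate grid already gives a few digits):

```python
import math

# K3 for the Hubbard dispersion: K3 = (2/pi^2) * int_{[0,pi]^3} d^3k / (3 - sum cos k_i),
# evaluated with a simple midpoint rule (10^6 points).
n = 100
h = math.pi / n
c = [math.cos((i + 0.5) * h) for i in range(n)]

s = 0.0
for cx in c:
    for cy in c:
        t = 3.0 - cx - cy
        for cz in c:
            s += 1.0 / (t - cz)
K3 = (2.0 / math.pi ** 2) * s * h ** 3

print(K3)   # close to the value 3.1759116 quoted in the text
```
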
In an attempt to reduce the dependence of the Monte Carlo results on the grid spacing $b$, a zero-effective-range dispersion relation was
constructed \cite{LeChapitre,CarlsonAFQMC},
\begin{equation}
\epsilon_\mathbf{k} =\frac{\hbar^2 k^2}{2m} [1-C(k b/\pi)^2],
\label{eq:disperrez}
\end{equation}
with $C\simeq 0.257\, 022$, and used
in real simulations \cite{CarlsonAFQMC}. The corresponding $K_3 \simeq 2.899\, 952$. Unfortunately this leads to a sizeable $R_e$:
\begin{equation}
R_e \simeq-0.168 b.
\end{equation}
As envisioned in \cite{zhenyaNJP} one may look for dispersion relations with $r_e=R_e=0$. We have found an example of
such a {\sl magic} dispersion relation:
\begin{equation}
\epsilon_\mathbf{k} = \epsilon_\mathbf{k}^{\rm Hub} [1+\alpha X + \beta X^2] \ \ \mbox{with} \ \ X=\frac{\epsilon_\mathbf{k}^{\rm Hub}}{6\hbar^2/mb^2}.
\label{eq:magic}
\end{equation}
Two sets of parameters are possible. The first choice is
\begin{equation}
\alpha \simeq 1.470\, 885 \ \mbox{and}\ \beta\simeq -2.450\, 725,
\label{eq:magic1}
\end{equation}
which leads to $K_3\simeq 3.137 \, 788$. The second choice is
\begin{equation}
\alpha \simeq -1.728\, 219 \ \mbox{and}\ \beta\simeq 12.838\, 540,
\label{eq:magic2}
\end{equation}
which leads to $K_3\simeq 1.949\, 671$.
Other examples of magic dispersion relations can be found \cite{Juillet_juillet}.
\subsubsection{The single-particle momentum cut-off model}
\label{subsubsec:tspmcom}
A continuous space model used in particular in \cite{zhenyas_crossover} takes a Dirac delta interaction potential
$g_0 \delta(\mathbf{r}_i-\mathbf{r}_j)$ between particles $i$ and $j$, and regularizes the theory by introducing a
cut-off $\Lambda$ on all the single-particle wavevectors. Due to the conservation of momentum one needs to evaluate
the $T$-matrix only between states with the same total momentum $\hbar\mathbf{K}$. Due to the contact interaction
the resulting matrix element depends only on $\mathbf{K}$ and on $E$, and is denoted $t(\mathbf{K},E)$. Expressing $g_0$ in terms of the
$s$-wave scattering length as in \cite{zhenyas_crossover} one gets
\begin{multline}
\frac{1}{t(\mathbf{K},E)} = \frac{m}{4\pi\hbar^2 a} -\int_{\mathbb{R}^3} \frac{d^3q}{(2\pi)^3}
\Bigg[ \frac{\theta(\Lambda-q)}{2\epsilon_\mathbf{q}} \\+
\frac{\theta(\Lambda-|\frac{1}{2}\mathbf{K}+\mathbf{q}|)\, \theta(\Lambda-|\frac{1}{2}\mathbf{K}-\mathbf{q}|)}{E+i0^+-\epsilon_{\frac{1}{2}\mathbf{K}+\mathbf{q}}-\epsilon_{\frac{1}{2}\mathbf{K}-\mathbf{q}}} \Bigg]
\end{multline}
where $\epsilon_\mathbf{q}=\hbar^2 q^2/(2m)$ for all $\mathbf{q}$.
Introducing the relative wavenumber $k$ such that $E-\frac{\hbar^2 K^2}{4m}=\frac{\hbar^2 k^2}{m}$, $k\in \mathbb{R}^+$ or $k\in i\mathbb{R}^+$,
we obtain the low-wavenumber expansion
\begin{equation}
\frac{1}{t(\mathbf{K},E)} = \frac{m}{4\pi\hbar^2} \left(\frac{1}{a} + ik- \frac{K}{2\pi} -\frac{1}{2} r_e k^2 -\frac{1}{2} R_e K^2 \right)
+\ldots
\label{eq:tamhbe}
\end{equation}
The effective range is given by $r_e=4/(\pi \Lambda)$ and the length $R_e=r_e/12$
\footnote{The integration can be performed in spherical coordinates of polar axis the direction of $\mathbf{K}$.}.
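The value $r_e=4/(\pi\Lambda)$ can be checked numerically. At $\mathbf{K}=\mathbf{0}$ and $E=-\hbar^2\kappa^2/m$ (with $\kappa>0$, i.e.\ $k=i\kappa$), the $q$-integral above is radial and one finds $1/t=\frac{m}{4\pi\hbar^2}\left[1/a-S(\kappa)\right]$ with $S(\kappa)=\frac{2}{\pi}\kappa\arctan(\Lambda/\kappa)=\kappa-\frac{r_e}{2}\kappa^2+O(\kappa^4)$. A minimal sketch (our units $\hbar=m=1$; the cut-off value and helper names are illustrative):

```python
import math

# Units hbar = m = 1. At K = 0 and E = -kappa^2, the q-integral reduces to
# the radial form (1/2 pi^2) * kappa * atan(Lambda/kappa), so that
#   1/t = (1/4 pi) [1/a - S(kappa)],  S(kappa) = (2/pi) kappa atan(Lambda/kappa).
LAMBDA = 50.0  # single-particle momentum cut-off (illustrative value)

def S(kappa):
    return (2.0/math.pi)*kappa*math.atan(LAMBDA/kappa)

def effective_range_estimate(kappa):
    # Match S(kappa) = kappa - (r_e/2) kappa^2 + O(kappa^4) at small kappa.
    return 2.0*(kappa - S(kappa))/kappa**2
```

For $\Lambda=50$ and $\kappa=0.5$ this reproduces $4/(\pi\Lambda)\simeq 0.025465$ to a few parts in $10^5$, with the agreement improving as $\kappa\to 0$.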
The unfortunate feature of this model is the occurrence of a term linear in $K$, which does not disappear even for $\Lambda\to +\infty$:
The model thus does {\sl not} reproduce the universal zero-range model in the large cut-off limit, as soon as pairs of
particles have a non-zero total momentum.
Note that here one cannot exchange the order of the integration over $\mathbf{q}$ and the $\Lambda\to\infty$ limit.
As a concrete illustration of the breaking of the Galilean invariance, for $a>0$ and in the limit $\Lambda\to +\infty$,
it is found (e.g.\ by calculating the pole of the $T$-matrix) that the total energy of a free-space dimer of total momentum $\hbar\mathbf{K}$ is
\begin{equation}
E_{\rm dim}^{\rm model} (\mathbf{K})= \frac{\hbar^2 K^2}{4m} -\frac{\hbar^2}{m} \left(\frac{1}{a}-\frac{K}{2\pi}\right)^2
\end{equation}
and that this dimer state exists only for $K<2\pi/a$ \footnote{This problem does not show up in recent studies of the fermionic
polaron problem \cite{CGL2009,ZwergerPunk} since the momentum cut-off is introduced only for the majority atoms and not for the impurity, see
\cite{TheseGiraud}.}.
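To make the breaking of Galilean invariance concrete: the internal (binding) part $E_{\rm dim}^{\rm model}(\mathbf{K})-\hbar^2K^2/4m=-\frac{\hbar^2}{m}(1/a-K/2\pi)^2$ depends on $K$, whereas a Galilean-invariant theory would give the constant $-\hbar^2/(ma^2)$. A minimal sketch (our units $\hbar=m=1$ and the value $a=1$ are illustrative choices):

```python
import math

A = 1.0  # s-wave scattering length (illustrative value), units hbar = m = 1

def dimer_energy(K):
    """Model dimer energy for total momentum hbar*K; returns None for
    K >= 2*pi/a, beyond which the dimer state ceases to exist."""
    if K >= 2.0*math.pi/A:
        return None
    return K*K/4.0 - (1.0/A - K/(2.0*math.pi))**2

def binding_energy(K):
    # Internal energy: total minus the center-of-mass kinetic energy K^2/4.
    # Galilean invariance would make this independent of K; here it is not.
    return dimer_energy(K) - K*K/4.0
```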
\subsubsection{The single-particle momentum cut-off lattice model}
\label{subsubsec:tspmcolm}
A spherical momentum cut-off was also introduced for a lattice model in \cite{bulgacQMC,BulgacCrossover,BulgacPG,BulgacPG2}.
Our understanding is that this amounts to taking the following dispersion relation inside the first Brillouin
zone: $\epsilon_{\mathbf{k}}=\hbar^2 k^2/(2m)$ for $k < \pi/b$, $\epsilon_\mathbf{k}=+\infty$ otherwise.
The $T$-matrix is then given by Eq.~(\ref{eq:mattmsr}), where for $\mathbf{K}\neq\mathbf{0}$ one extends $\epsilon_\mathbf{k}$ by
periodicity out of the first Brillouin zone. By distinguishing three zones within the integration domain
for $\mathbf{q}$, similarly to the note [\thenotedecoupage],
and restricting for simplicity to $E=\hbar^2 K^2/(4m)$, we find the same undesired term $-K/(2\pi)$ as in Eq.~(\ref{eq:tamhbe}),
implying that the model does not reproduce the unitary gas even for $b\to 0$.
\subsection{The Juillet effect for lattice models}
\label{subsec:juillet}
\begin{figure*}[t]
\includegraphics[width=0.4\linewidth,clip=]{fig1a.eps}
\includegraphics[width=0.4\linewidth,clip=]{fig1b.eps}
\caption{(Color online) Illustration of the Juillet effect for the lattice model: In the cubic box $[0,L]^3$ with periodic
boundary conditions, ground state energy of two opposite spin fermions as a function of the grid spacing $b$,
for an infinite scattering length ($1/a=0$), for a total momentum equal to $\mathbf{0}$ in (a) and equal to
$\frac{2\pi\hbar}{L}\mathbf{e}_z$ in (b). Three dispersion relations $\epsilon_\mathbf{k}$ are considered,
the quartic one of Eq.~(\ref{eq:disperrez}) with zero effective range $r_e=0$ (in blue, lower set), and the magic one
(\ref{eq:magic}) with $r_e=R_e=0$
with the parameters of Eq.~(\ref{eq:magic1}) (in black, upper set) and of Eq.~(\ref{eq:magic2}) (in red, middle set).
The fact that the energy varies linearly in $b$ for the $r_e=0$ quartic dispersion relation at zero total
momentum is the Juillet effect explained in Sec.~\ref{subsec:juillet}, and the corresponding dashed
line is the analytical result (\ref{eq:analyE1}). At non-zero total momentum
the quartic dispersion relation leads to an energy variation linear in $b$
as expected e.g.\ from the fact that it has a non-zero $R_e$ [the dotted line is a linear fit for $b/L\leq 0.01$].
The magic dispersion relations
lead to a $O(b^2)$ variation of the energy both at zero and non-zero total momentum [the dotted lines are purely quadratic fits
performed for $b/L\leq 0.02$].
\label{fig:juillet}
}
\end{figure*}
With the lattice dispersion relation $\epsilon_\mathbf{k}$ of
(\ref{eq:disperrez}), adjusted to have a
zero effective range $r_e=0$, Olivier Juillet numerically
observed, for two particles in the cubic box $[0,L]^3$ with
periodic boundary conditions and zero total momentum,
that the first energy correction to
the zero-range limit $b\to 0$ is linear in $b$
\cite{Juillet_juillet},
which seems to contradict [Tab.~V, Eq.~(1a)]. This is illustrated in Fig.~\ref{fig:juillet}.
This cannot be explained by a non-zero $R_e$ [defined in Eq.~(\ref{eq:Re})]
because the two opposite-spin fermions have here a zero total momentum.
This Juillet effect, as we shall see, is due to the fact that
the {\sl integral} of $1/\epsilon_\mathbf{k}$ over $\mathbf{k}$ in the first Brillouin
zone and the corresponding {\sl discrete sum} for the finite size quantization
box differ for $b/L\to 0$ not only by a constant term but also by a term linear in $b$, when the dispersion relation
has a cusp at the surface of the first Brillouin zone, such
as Eq.~(\ref{eq:disperrez}).
The Juillet effect thus disappears in the thermodynamic limit.
This explains why it does not show up in the diagrammatic
point of view of Sec.~\ref{subsec:wwlftdpov}, which was considered
in the thermodynamic limit, so that only momentum integrals appeared.
This also shows that the Juillet effect does not invalidate [Tab.~V, Eq.~(1a)]
since it was derived for an interaction that is smooth in momentum space.
In \cite{boite} it was shown that the lattice model spectrally
reproduces the zero-range model when the grid spacing $b\to 0$.
We now simply extend the reasoning of \cite{boite} for two particles to first order in $b$
included. For an eigenenergy $E$ which does not belong to the non-interacting
spectrum, the exact implicit equation is
\begin{equation}
\frac{1}{g_0} + \frac{1}{L^3} \sum_{\mathbf{k}\in D} \frac{1}{2\epsilon_{\mathbf{k}}-E}=0
\end{equation}
where the notation with a discrete sum over $\mathbf{k}$ implicitly
restricts $\mathbf{k}$ to $\frac{2\pi}{L} \mathbb{Z}^3$.
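For a given $g_0$ and dispersion relation, this implicit equation is easily solved numerically: below the lowest non-interacting level the left-hand side is a monotonically increasing function of $E$, so plain bisection suffices for the ground state. The sketch below is purely illustrative (our units $\hbar=m=1$, a parabolic $\epsilon_\mathbf{k}$ on the grid, and an arbitrary attractive $g_0$ rather than the renormalized value of Eq.~(\ref{eq:g0_3D})); it only demonstrates the root-finding.

```python
import math
from itertools import product

# Illustrative parameters (units hbar = m = 1); g0 is treated as a given
# bare coupling here, not computed from the renormalization condition.
L, N = 1.0, 8    # box size, grid points per direction (b = L/N)
G0 = -0.5        # bare coupling constant (attractive, illustrative)

def eps(kx, ky, kz):
    return 0.5*(kx*kx + ky*ky + kz*kz)  # parabolic dispersion, for the sketch

# Wavevectors k in (2*pi/L) Z^3 sampling the first Brillouin zone
KGRID = [tuple(2.0*math.pi*n/L for n in ns)
         for ns in product(range(-N//2, N//2), repeat=3)]

def f(E):
    # Left-hand side: 1/g0 + (1/L^3) sum_k 1/(2 eps_k - E)
    return 1.0/G0 + sum(1.0/(2.0*eps(*k) - E) for k in KGRID)/L**3

def ground_state_energy(lo=-1e6, hi=-1e-9, n_iter=200):
    # Below the lowest non-interacting level 2*eps_0 = 0, f is increasing in E,
    # with f(lo) < 0 and f(hi) > 0 for an attractive g0: bisection converges.
    for _ in range(n_iter):
        mid = 0.5*(lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)
```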
By adding and subtracting terms, and using the expressions (\ref{eq:g0_3D}) and (\ref{eq:re_reseau}) for the
bare coupling constant $g_0$ and the effective range $r_e$, one obtains
the useful form:
\begin{multline}
\frac{1}{g} -\frac{m^2 E r_e}{8\pi \hbar^4}
+\frac{1}{L^3} \Bigg[-\frac{1}{E} +
\!\!\sum_{\mathbf{k}\in D^*} F(\epsilon_\mathbf{k})
+\!\! \sum_{\mathbf{k}\in{\mathbb{R}^{3}}^*} \frac{E}{(\hbar^2 k^2/m)^2}
\Bigg] \\
=
R_1 + E R_2 - E R_3
\label{eq:juillet_utile}
\end{multline}
with $g=4\pi\hbar^2 a/m$ and $F(\epsilon)=(2\epsilon-E)^{-1} -(2\epsilon)^{-1}
-E/(2\epsilon)^2$. We have defined
\begin{equation}
R_1 \equiv \int_D \frac{d^3k}{(2\pi)^3} \frac{1}{2\epsilon_\mathbf{k}}
-\frac{1}{L^3} \sum_{\mathbf{k}\in D^*} \frac{1}{2\epsilon_\mathbf{k}},
\label{eq:R1}
\end{equation}
proportional to the function $C(b)$ introduced in \cite{boite}.
The quantities $R_2$ and $R_3$ have the same structure:
$R_2$ is obtained by replacing in $R_1$ the function
$1/(2\epsilon_\mathbf{k})$ by $1/(2\epsilon_\mathbf{k})^2-
1/(\hbar^2 k^2/m)^2$, in the integral and in the sum;
$R_3$ is obtained by replacing in $R_1$ the function
$1/(2\epsilon_\mathbf{k})$ by $1/(\hbar^2 k^2/m)^2$
and the set $D$ by $\mathbb{R}^3\setminus D$, both for the integration
and for the summation.
We now take $b\to 0$ in Eq.~(\ref{eq:juillet_utile}), keeping terms
up to $O(b)$ included. Since $F(\epsilon)=O(1/\epsilon^3)$ at large
$\epsilon$, we can replace $F(\epsilon_{\mathbf{k}})$ by its $b\to 0$ limit
$F(\hbar^2 k^2/2m)$, and the summation set $D^*$ by its $b\to 0$ limit
\footnote{One has $\epsilon_\mathbf{k}= \frac{\hbar^2 k^2}{2m} [1+O(k^2 b^2)]$.
For the finite number of low-energy terms, we directly use this fact.
For the other terms, such that $\epsilon_\mathbf{k} \gg |E|$ and
$\gg (2\pi\hbar)^2/(mL^2)$, we use
$F(\epsilon_\mathbf{k})-F(\frac{\hbar^2 k^2}{2m})\simeq
(\epsilon_\mathbf{k}-\frac{\hbar^2 k^2}{2m})
F'(\frac{\hbar^2 k^2}{2m})=O(b^2/k^4)$ which is integrable at
large $k$ in $3D$ and leads to a total error $O(b^2)$.}:
\begin{equation}
\sum_{\mathbf{k}\in D^*} F(\epsilon_\mathbf{k}) \underset{b\to 0}{=}
\sum_{\mathbf{k}\in {\mathbb{R}^3}^*} F\left(\frac{\hbar^2 k^2}{2m}\right)
+O(b^2)
\end{equation}
In the quantities $R_i$, we perform the change of variables
$\mathbf{k}=2\pi \mathbf{q}/b$, and we write the dispersion relation as
\begin{equation}
\epsilon_{\mathbf{k}} = \frac{(2\pi \hbar)^2}{m b^2} \eta_{\mathbf{k} b/(2\pi)}
\label{eq:gdr}
\end{equation}
where the dimensionless $\eta_{\mathbf{q}}$ does not depend on the lattice spacing $b$.
We then find that $b R_1$, $R_2/b$ and $R_3/b$ are
differences between a converging integral
and a three-dimensional Riemann sum with a vanishing cell volume $(b/L)^3$.
As these differences vanish as $O(b)$, we conclude that $R_2=O(b^2)$
and $R_3=O(b^2)$ can be neglected in Eq.~(\ref{eq:juillet_utile}).
This however leads only to $R_1=O(1)$, so that more mathematical work
needs to be done, as detailed in the Appendix~\ref{app:R1}, to obtain
\begin{equation}
\frac{\hbar^2}{m} L R_1\underset{b\to 0}{=} \frac{\mathcal{C}}{4\pi^2}
+\frac{\pi R_e^{\rm surf}}{2 L}
+O(b/L)^2.
\label{eq:devR1}
\end{equation}
The numerical constant $\mathcal{C}\simeq 8.91363$
was calculated and called $C(0)$ in \cite{boite}.
Remarkably, $R_e^{\rm surf}$ is the surface contribution to the quantity
$R_e$ in Eq.~(\ref{eq:Re}); it scales as $b$.
It is non-zero only when the dispersion
relation has a cusp at the surface of the first Brillouin zone.
In this case, $R_1$ varies to first order in $b$, which comes in addition
to the expected linear contribution of the $E r_e$ term in
Eq.~(\ref{eq:juillet_utile}): This leads to the Juillet effect.
More quantitatively, the first deviation of the eigenenergy from its zero-range limit $E^0$,
shown as a dashed line in Fig.~\ref{fig:juillet}a,
is \footnote{The contribution proportional to $r_e$ in
Eq.~(\ref{eq:analyE1}) can also be obtained from [Tab.~V, Eq.~(1a)]
and from the fact that $\sum_{\mathbf{k}\neq\mathbf{0}} e^{i\mathbf{k}\cdot\mathbf{r}}/k^2 \sim L^3/(4\pi r)$
for $r\to 0$.}:
\begin{equation}
E-E^0 \underset{b\to 0}{\sim} \frac{\frac{m^2 E^0 r_e}{8\pi\hbar^4}+\frac{m\pi R_e^{\rm surf}}{2\hbar^2 L^2}}
{\frac{1}{L^3}\sum_{\mathbf{k}\in \mathbb{R}^3}\frac{1}{\left(\frac{\hbar^2 k^2}{m}-E^0\right)^{2}}}
\label{eq:analyE1}
\end{equation}
\subsection{Link between $\partial E/\partial r_e$ and the subleading short distance behavior
of the pair distribution function}
\label{subsec:re_dans_g2}
As shown by [Tab.~II, Eqs.~(3a,3b)] the short distance behavior of the pair distribution function
(averaged over the center of mass position of the pair) diverges as $1/r^2$ in $3D$ and as $\ln^2 r$ in $2D$,
with a coefficient proportional to $C$, that is related to the derivative of the energy
with respect to the scattering length $a$. Here we show that a subleading term in this short distance
behavior is related to the derivative of the energy with respect to the effective range $r_e$.
To this end, we explicitly write the next order term in the contact conditions [Tab.~I, Eqs.~(1a,1b)].
\noindent \underline{\sl Three dimensions:}
Including the next order term in [Tab.~I, Eq.~(1a)] gives
\begin{multline}
\label{eq:psios}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\underset{r_{ij}\to0}{=}
\left( \frac{1}{r_{ij}}-\frac{1}{a} \right) \, A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right) \\
+ r_{ij} \ B_{ij} \left(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}\right)
+ \sum_{\alpha=1}^{3} r_{ij,\alpha} L_{ij}^{(\alpha)}\left(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}\right) \\
+O(r_{ij}^2)
\end{multline}
where we have distinguished between a {\sl singular} part linear in the interparticle distance
$r_{ij}$ and a {\sl regular} part linear in the relative coordinates of $i$ and $j$
($r_{ij,\alpha}$ is the component along axis $\alpha$ of the vector $\mathbf{r}_{ij}$).
Injecting this form into Schr\"odinger's equation, keeping the resulting $\propto 1/r_{ij}$ terms and
using notation [Tab.~V, Eq.~(2)] gives
\begin{equation}
B_{ij}(\mathbf{R}_{ij},(\mathbf{r}_k)_{k\neq i,j})= -\frac{m}{2\hbar^2} (E-\mathcal{H}_{ij}) A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}\right)
\end{equation}
[Tab.~V, Eq.~(1a)] thus becomes
\begin{equation}
\frac{\partial E}{\partial r_e} = -\frac{4\pi\hbar^2}{m} (A,B)
\end{equation}
We square (\ref{eq:psios}) and as in Sec.~\ref{sec:g2} we integrate over $\mathbf{R}_{ij}$,
the $\mathbf{r}_k$'s and we sum over $i<j$. We further average $G^{(2)}_{\uparrow\downarrow}(\mathbf{r})$
over the direction of $\mathbf{r}$ to eliminate the contribution of the regular term $L_{ij}$,
defining $\bar{G}^{(2)}_{\uparrow\downarrow}(\mathbf{r})=[G^{(2)}_{\uparrow\downarrow}(\mathbf{r})+G^{(2)}_{\uparrow\downarrow}(-\mathbf{r})]/2$.
We obtain [Tab.~V, Eq.~(3a)].
\noindent \underline{\sl Two dimensions:} Including next order terms in [Tab.~I, Eq.~(1b)] gives
\footnote{ From Schr\"odinger's equation, $\Delta_{\mathbf{r}_{ij}}\psi$ diverges at most as $\psi$ itself,
that is as $\ln r_{ij}$, for $r_{ij}\to 0$.
The particular solution $f(r)=\frac{1}{4}r^2 (\ln r-1)$ of
$\Delta_{\mathbf{r}} f(r)=\ln r$ fixes the form of the subleading term in $\psi$.}:
\begin{multline}
\label{eq:psios2d}
\psi(\mathbf{r}_1,\ldots,\mathbf{r}_N)\underset{r_{ij}\to0}{=}
\ln(r_{ij}/a) \, A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}
\right) \\
+ r^2_{ij} \ln r_{ij} \ B_{ij} \left(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}\right)
+ \sum_{\alpha=1}^{2} r_{ij,\alpha} L_{ij}^{(\alpha)}\left(\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}\right) \\
+O(r_{ij}^2)
\end{multline}
Proceeding as in $3D$ we obtain
\begin{equation}
B_{ij}(\mathbf{R}_{ij},(\mathbf{r}_k)_{k\neq i,j})= -\frac{m}{4\hbar^2} (E-\mathcal{H}_{ij}) A_{ij}\left(
\mathbf{R}_{ij}, (\mathbf{r}_k)_{k\neq i,j}\right)
\end{equation}
[Tab.~V, Eq.~(1b)] thus becomes
\begin{equation}
\frac{\partial E}{\partial (r_e^2)} = -\frac{4\pi\hbar^2}{m} (A,B)
\end{equation}
These equations finally lead to [Tab.~V, Eq.~(3b)].
\subsection{Link between $\partial E/\partial r_e$ and the $1/k^6$ subleading tail of
the momentum distribution}
\label{subsec:unsurk6}
A general idea given in \cite{Olshanii_nk} is that singular terms in the dependence of $\psi$ on
the interparticle distance $r_{ij}$ (at short distances) reflect into power-law tails in the momentum
distribution $n_{\sigma}(\mathbf{k})$ given by Eq.~(\ref{eq:nk_1e_quant}). In Sec.~\ref{sec:C_nk}, we restricted to the leading order.
Here we include the subleading term and we perform the same reasoning as in Sec.~\ref{sec:C_nk} to obtain
\footnote{In $3D$ we used the identity $\int d^3 r e^{i\mathbf{k}\cdot\mathbf{r}}/r = 4\pi/k^2$ and its derivatives with respect
to $k_\alpha$; e.g.\ taking the Laplacian with respect to $\mathbf{k}$ gives $\int d^3 r\, e^{i\mathbf{k}\cdot\mathbf{r}} r = -8\pi/k^4$.
Equivalently, one can use the relation $\int d^3r e^{i\mathbf{k}\cdot\mathbf{r}} \frac{u(r)}{r} = \frac{4\pi}{k^2} u(0)
-\frac{4\pi}{k^4} u^{(2)}(0)+ O(1/k^6)$ and its derivatives with respect to $k_\alpha$;
this relation holds for any $u(r)$ that has a series expansion at $r=0$ and
rapidly decreases at $\infty$. In $2D$ for $k>0$ we used the identity
$\int d^2 r e^{i\mathbf{k}\cdot\mathbf{r}} \ln r = -2\pi/k^2$ and its derivatives with respect
to $k_\alpha$. The regular terms involving $L_{ij}^{(\alpha)}$
have (as expected) a negligible contribution to the tail of $n_\sigma(\mathbf{k})$.}
\footnote{The configurations with three close particles contribute to the tail of $n_{\sigma}(\mathbf{k})$
as $1/k^{5+2s}$, see a note of \cite{TanLargeMomentum}, with $s$ defined in Sec.~\ref{subsec:appl_3body}, which is negligible
for $s>1/2$.}
\begin{equation}
\bar{n}_\sigma(k) \underset{k\to\infty}{=} \frac{C}{k^4} + \frac{D}{k^6}+ \ldots
\end{equation}
where $\bar{n}_\sigma(k)=\frac{1}{d}\sum_{i=1}^{d} n_\sigma(k\mathbf{u}_i)$
and $D$ is the linear combination of $\partial E/\partial r_e$ and $(A, \Delta_\mathbf{R} A)$ given in
[Tab.~V, Eqs.~(4a,4b)].
Physically, the extra term $(A, \Delta_\mathbf{R} A)$ results from the fact that the wavevector $\mathbf{k}_1$ of a particle in an $\uparrow\downarrow$ colliding pair is a linear
combination of the relative wavevector $\mathbf{k}_{\rm rel}$ and of the center-of-mass wavevector $\mathbf{K}$ of the pair, so that, even
if the probability distribution of $\mathbf{k}_{\rm rel}$ was exactly scaling as $1/k_{\rm rel}^4$, a non-zero $\mathbf{K}$ would generate
a subleading $1/k_1^6$ contribution in the single particle momentum distribution.
This is apparent for the simple case of a free space dimer:
When the dimer is at rest, $\psi(\mathbf{r}_1,\mathbf{r}_2)=\phi_{\rm dim}(r_{12})$, $A_{12}(\mathbf{R}_{12})$ is uniform and the extra term vanishes.
When it has a momentum $\mathbf{K}$, $\psi(\mathbf{r}_1,\mathbf{r}_2)=e^{i\mathbf{K}\cdot \mathbf{R}_{12}} \phi_{\rm dim}(r_{12})$,
which shifts the single particle momentum distribution, $n_\uparrow^{\rm mov}(\mathbf{k})=n_\uparrow^{\rm rest}(\mathbf{k}-\mathbf{K}/2)$. Applying this shift
to the momentum tail $C/k^4$ gives, after continuous average over the direction of $\mathbf{k}$, a subleading $\delta D^{\rm mov}/k^6$ contribution,
with $\delta D^{\rm mov}=C K^2/2$ in $3D$ and $\delta D^{\rm mov}=C K^2$ in $2D$. Remarkably, the ratio of the extra term to
$C$ is proportional to the pair-center-of-mass kinetic energy.
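The coefficient $\delta D^{\rm mov}$ quoted above can be verified directly: in $3D$ the average of $1/|k\mathbf{u}-\mathbf{K}/2|^4$ over the unit vector $\mathbf{u}$ equals $1/(k^2-K^2/4)^2$, whose large-$k$ expansion is $1/k^4+(K^2/2)/k^6$, that is $\delta D^{\rm mov}=CK^2/2$ for a $C/k^4$ tail. A small quadrature sketch (the function name and parameter values are ours):

```python
import math

def shifted_tail_average(k, K, n=20001):
    # Directional average of 1/|k*u - K/2|^4 over unit vectors u, computed
    # by composite Simpson quadrature in c = cos(theta) (n odd).
    q = 0.5*K
    h = 2.0/(n - 1)
    total = 0.0
    for j in range(n):
        c = -1.0 + j*h
        w = 1.0 if j in (0, n - 1) else (4.0 if j % 2 == 1 else 2.0)
        total += w/(k*k - 2.0*k*q*c + q*q)**2
    return 0.5*total*h/3.0
```

For $k=10$, $K=1$ the quadrature reproduces the closed form $1/(k^2-K^2/4)^2$ to machine accuracy, and the asymptotic form $1/k^4+(K^2/2)/k^6$ to the expected $O(K^4/k^8)$.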
In the $N$-body case, one can generalize this property by defining the mean center-of-mass kinetic
energy of a $\uparrow\downarrow$ pair at {\sl vanishing} pair diameter, which is allowed in quantum mechanics
since the center-of-mass operators and the relative-particle operators commute
\footnote{Similarly, a ``contact current'' was recently introduced in \cite{Tan2011},
whose spatial integral is proportional to $(A, \mathbf{\nabla}_\mathbf{R} A)$.}.
By a direct generalization of the pair distribution function
of Sec.~\ref{sec:g2}, one has for the opposite-spin pair density operator
$\langle \mathbf{r}_\uparrow,\mathbf{r}_\downarrow|\hat{\rho}_{\uparrow\downarrow}^{(2)}|\mathbf{r}_\uparrow',\mathbf{r}_\downarrow'\rangle=
\langle\hat{\psi}_\uparrow^\dagger(\mathbf{r}_\uparrow')\hat{\psi}^\dagger_\downarrow(\mathbf{r}_\downarrow')\hat{\psi}_\downarrow(\mathbf{r}_\downarrow)\hat{\psi}_\uparrow
(\mathbf{r}_\uparrow)\rangle$. Whereas the usual pair-center-of-mass density operator is obtained by taking the trace over the
relative coordinates $\mathbf{r}=\mathbf{r}_\uparrow-\mathbf{r}_\downarrow$, we rather define it here by taking the limit of vanishing relative coordinates,
\begin{equation}
\langle \mathbf{R}|\hat{\rho}_{\rm CoM}^{(2)}|\mathbf{R}'\rangle = \mathcal{N} \lim_{r\to 0}
\frac{\langle \mathbf{R}+\frac{\mathbf{r}}{2}, \mathbf{R}-\frac{\mathbf{r}}{2} | \hat{\rho}_{\uparrow\downarrow}^{(2)}
|\mathbf{R}'+\frac{\mathbf{r}}{2},\mathbf{R}'-\frac{\mathbf{r}}{2}\rangle}{\phi^2(\mathbf{r})}
\end{equation}
where the factor $\mathcal{N}$ is such that $\hat{\rho}_{\rm CoM}^{(2)}$ has a unit trace and $\phi(\mathbf{r})$ is the zero-energy
scattering state of Eqs.~(\ref{eq:normalisation_phi_tilde_3D},\ref{eq:normalisation_phi_tilde_2D}).
Proceeding as in Sec.~\ref{sec:g2} we obtain
\begin{multline}
\langle \mathbf{R}|\hat{\rho}_{\rm CoM}^{(2)}|\mathbf{R}'\rangle = \mathcal{N} \sum_{i<j} \int (\prod_{k\neq i,j}d^dr_k) A_{ij}^*(\mathbf{R}',
(\mathbf{r}_k)_{k\neq i,j}) \\ \times A_{ij}(\mathbf{R},(\mathbf{r}_k)_{k\neq i,j})
\end{multline}
By taking the expectation value of $-(\hbar^2/4m)\Delta_{\mathbf{R}}$ within $\hat{\rho}_{\rm CoM}^{(2)}$, we finally obtain for the mean pair-center-of-mass kinetic energy
at vanishing diameter:
\begin{equation}
E_{\rm kin\ pair-CoM}^{r_{\uparrow\downarrow}\to 0} = -\frac{\hbar^2}{4m} \frac{(A,\Delta_{\mathbf{R}} A)}{(A,A)}
\end{equation}
where the denominator is $\propto C$, see [Tab.~II, Eqs.~(2a,2b)].
\section{Generalization to arbitrary statistical mixtures} \label{sec:stat_mix}
In this section, we generalize some of the relations derived in the previous sections for pure states to the case of arbitrary statistical mixtures.
Let us first discuss zero-range interactions.
We consider a statistical mixture of pure states $\psi_n$ with occupation probabilities $p_n$, which is arbitrary, but non-pathological in the following sense~\cite{TanEnergetics}:
Each $\psi_n$ satisfies the contact condition
[Tab. I, Eqs. (1a,1b)];
moreover, $p_n$ decays sufficiently quickly at large $n$ so that
we have $\displaystyle C=\sum_n p_n C_n$,
where
$C_n$ (resp. $C$) is defined by [Tab. II, Eq.~(1)] with $n_\sigma(\mathbf{k})=\langle c^\dagger_\sigma(\mathbf{k})c_\sigma(\mathbf{k})\rangle$ and $\langle\,.\,\rangle=\langle\psi_n|\,.\,|\psi_n\rangle$ (resp. $\langle\,.\,\rangle=\sum_n p_n \langle\psi_n|\,.\,|\psi_n\rangle$).
Then, the relations in lines 3, 5, 6 and 7 of Table II, which were derived in Sec.~\ref{sec:ZR} for any pure state satisfying the contact conditions, obviously generalize to such a statistical mixture.
The relations for the time derivative of $E$ (Tab. \ref{tab:fermions} line 12) hold for any time-evolving pure state satisfying the contact conditions for a time-dependent $a(t)$, and thus also for any statistical mixture of such time-evolving pure states.
For lattice models, one can obviously average the definition of
$\hat{C}$~[Tab.~III, Eqs.~(1a,1b)] to define $C=\langle\hat{C}\rangle$ in any statistical mixture;
taking averages of the relations between operators~[Tab.~III, lines 2, 3, 8] then gives relations valid for any statistical mixture.
\section{Thermodynamic equilibrium in the canonical ensemble} \label{subsec:finiteT}
We turn to the case of thermal
equilibrium in the canonical ensemble. We shall use the notation
\begin{equation}
\lambda\equiv\left\{
\begin{array}{lr}
-1/a & {\rm in}\ 3D
\\
\frac{1}{2}\ln a & {\rm in}\ 2D.
\end{array}
\right.
\label{eq:def_lambda}
\end{equation}
\subsection{First order derivative of $E$}
The thermal average in the canonical ensemble $\overline{dE/d\lambda}$
can be rewritten in the following more familiar way,
as detailed in Appendix \ref{app:adiab}:
\begin{equation}
\overline{\left(\frac{dE}{d\lambda}\right)}
=
\left(
\frac{dF}{d\lambda}
\right)_{\!T}
=
\left(
\frac{d\bar{E}}{d\lambda}
\right)_{\!S}
\label{eq:relation_T}
\end{equation}
where $\overline{(\dots)}$ is the canonical thermal average,
$F$ is the free energy
and $S$ is the entropy.
Taking the thermal average of [Tab.~\ref{tab:fermions}, Eqs.~(4a,4b)]
(which was shown above for any stationary state)
thus gives~[Tab.~\ref{tab:fermions}, Eqs.~(9a,9b)].
\subsection{Second order derivative of $E$}
Taking a thermal average of line 8 in Tab.~\ref{tab:fermions}, we get after a simple manipulation:
\begin{multline}
\label{eq:manipulation}
\frac{1}{2} \overline{\left( \frac{d^2 E}{d\lambda^2}\right)} = \left( \frac{4\pi\hbar^2}{m}\right)^2
\frac{1}{2 Z}\sum_{n,n';E_n\neq E_{n'}}\frac{e^{-\beta E_n}-e^{-\beta E_{n'}}}{E_n-E_{n'}} \\
\times |(A^{(n')},A^{(n)})|^2
\end{multline}
where $Z=\sum_n \exp(-\beta E_n)$.
This implies
\begin{equation}
\overline{\left( \frac{d^2 E}{d\lambda^2}\right)}<0.
\label{eq:d2Ebar<0}
\end{equation}
Moreover one can check that
\begin{equation}
\left(\frac{d^2F}{d\lambda^2}\right)_T - \overline{\left( \frac{d^2 E}{d\lambda^2}\right)} = -\beta\left[\,
\overline{\left( \frac{dE}{d\lambda}\right)^{\phantom{,}2}}
- \overline{\left( \frac{dE}{d\lambda}\right)}^{\phantom{,}2} \right]<0,
\end{equation}
which implies [Tab.~II, Eqs.~(10a,10b)].
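Both the sign in Eq.~(\ref{eq:d2Ebar<0}) and the covariance identity above can be checked on any toy spectrum. A minimal sketch with a two-level Hamiltonian (all parameter values are illustrative; the derivatives are taken by finite differences rather than via the exact matrix elements):

```python
import math

# Two-level model: eigenvalues of [[DELTA, C0+lam*V], [C0+lam*V, -DELTA]],
# i.e. E_pm(lam) = +-sqrt(DELTA^2 + (C0 + lam*V)^2). Illustrative values.
DELTA, C0, V, BETA, DL = 1.0, 0.3, 0.7, 1.0, 1e-4

def levels(lam):
    r = math.hypot(DELTA, C0 + lam*V)
    return [-r, r]  # sorted; no level crossing as lam varies

def thermal_average(values, energies):
    ws = [math.exp(-BETA*e) for e in energies]
    return sum(w*v for w, v in zip(ws, values))/sum(ws)

def free_energy(lam):
    return -math.log(sum(math.exp(-BETA*e) for e in levels(lam)))/BETA

E0, Ep, Em = levels(0.0), levels(DL), levels(-DL)
dE  = [(p - m)/(2*DL) for p, m in zip(Ep, Em)]              # dE_n/dlam
d2E = [(p + m - 2*e)/DL**2 for p, m, e in zip(Ep, Em, E0)]  # d^2E_n/dlam^2

d2E_bar = thermal_average(d2E, E0)                          # thermal avg of E''
d2F = (free_energy(DL) + free_energy(-DL) - 2*free_energy(0.0))/DL**2
var_dE = thermal_average([x*x for x in dE], E0) - thermal_average(dE, E0)**2
```

The assertions below confirm $\overline{d^2E/d\lambda^2}<0$ and the identity $d^2F/d\lambda^2-\overline{d^2E/d\lambda^2}=-\beta\,\mathrm{Var}(dE/d\lambda)$ for this model.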
In usual cold atom experiments, however, there is no thermal reservoir imposing
a fixed temperature on the gas; rather, one can achieve
adiabatic transformations by a slow variation of the scattering
length of the gas
\cite{SalomonMolecules,HuletMolecules,GrimmCrossover}, during which the entropy is fixed
\cite{LandauLifschitzPhysStat,CarrCastin,WernerAF}.
One also has more direct access to the mean energy $\bar{E}$ of the gas than to its
free energy, even if the entropy is also measurable \cite{thomas_entropie_PRL,thomas_entropie_JLTP}. The second order derivative of $\bar{E}$ with
respect to $\lambda$ at fixed entropy is thus the relevant
quantity to consider.
As shown in Appendix \ref{app:adiab}
one has in the canonical ensemble:
\begin{equation}
\label{eq:d2us}
\left(\frac{d^2\bar{E}}{d\lambda^2}\right)_S
=
\overline{\left(\frac{d^2E}{d\lambda^2}\right)}
+\frac
{
\left[\mbox{Cov}\!\left(E,\frac{dE}{d\lambda}\right)\right]^2
-\mbox{Var}(E)\mbox{Var}\!\left(\frac{dE}{d\lambda}\right)
}
{k_B T\,\mbox{Var}(E)}
\end{equation}
where $\mbox{Var}(X)$ and $\mbox{Cov}(X,Y)$ stand for the variance
of the quantity $X$ and the covariance of the quantities $X$ and $Y$
in the canonical ensemble, respectively.
From the Cauchy-Schwarz inequality
$[\mbox{Cov}(X,Y)]^2\leq \mbox{Var}(X)\mbox{Var}(Y)$,
and from the inequality (\ref{eq:d2Ebar<0}), we thus obtain~[Tab.~II, Eqs.~(11a,11b)].
For lattice models, the inequalities~[Tab.~III, Eq.~(7)] are derived in the same way, by taking $\lambda$ now equal to $g_0$, and starting from the expression [Tab.~III, Eq.~(6)] of $d^2E_n/dg_0^2$.
For the case of a finite-range interaction potential $V(r)$ in continuous space, the relations~[Tab. IV, lines~1-3] which were derived for an arbitrary stationary state are generalized to the thermal equilibrium case in the same way.
Finally, consider the relations which asymptotically hold in the zero-range regime $k_{\rm typ} b\ll 1$: [Tab.~III, lines 9-10] for lattice models and [Tab.~IV, lines 4-5] for finite-range interaction potential models. These relations were justified for any eigenstate, with the typical relative wavevector $k_{\rm typ}$ defined in terms of the considered eigenstate as described in Section~\ref{subsec:models}.
They remain true at thermal equilibrium, with $k_{\rm typ}$ now defined as the typical density- and temperature-dependent wavevector described in Section~\ref{subsec:models},
since all the eigenstates which are thermally populated with a non-negligible weight are expected to have a typical wavevector smaller than or on the order of the thermal-equilibrium typical wavevector.
\subsection{Quantum-mechanical adiabaticity}
To be complete, we also consider the process where
$\lambda$ is varied so slowly that there is adiabaticity in the many-body
quantum mechanical sense:
The adiabatic theorem of quantum mechanics~\cite{AdiabThmKato}
implies that in the limit where $\lambda$ is changed infinitely slowly,
the occupation probabilities of each eigenspace of the many-body Hamiltonian
do not change with time,
even in the presence of level crossings~\cite{AdiabThmAvron}.
We note that this may require macroscopically long evolution times
for a large system.
For an initial equilibrium state in the
canonical ensemble, the mean energy then varies with $\lambda$ as
\begin{equation}
E^{\rm quant}_{\rm adiab}(\lambda)=
\sum_n \frac{e^{-\beta_0 E_n(\lambda_0)}}{Z_0}\,E_n(\lambda)
\label{eq:Ead1}
\end{equation}
where the subscript $0$ refers to the initial state.
Taking the second order derivative of (\ref{eq:Ead1}) with respect
to $\lambda$ in $\lambda=\lambda_0$ gives
\begin{equation}
\frac{d^2 E_{\rm adiab}^{\rm quant}}{d\lambda^2}
= \overline{\left(\frac{d^2E}{d\lambda^2}\right)}
<0.
\label{eq:d^2E_quant}
\end{equation}
Note that the sign of the second order derivative of $E_{\rm adiab}^{\rm quant}$ remains negative
at all $\lambda$ provided one assumes that there is no level crossing in the many-body spectrum
when $\lambda$ is varied: $E_n(\lambda)-E_{n'}(\lambda)$ then has the same sign
as $E_n(\lambda_0)-E_{n'}(\lambda_0)$ for all indices $n,n'$, which allows one to conclude on the sign
with the same manipulation as the one leading to Eq.~(\ref{eq:manipulation}).
\noindent{\sl Thermodynamic vs quantum adiabaticity:}
The result of the isentropic transformation~(\ref{eq:d2us})
and the one of the adiabatic transformation in the quantum
sense~(\ref{eq:d^2E_quant}) differ by the second term in the right hand
side of~(\ref{eq:d2us}). A priori this term is extensive, and thus not negligible compared to the first term.
We have explicitly checked this expectation
for the Bogoliubov model Hamiltonian of a weakly interacting Bose gas,
which is however not really relevant since this Bogoliubov model
corresponds to the peculiar case of an integrable dynamics.
For a quantum ergodic system we now show that the second term in the right hand side of (\ref{eq:d2us})
is negligible in the thermodynamic limit, as a consequence of the
Eigenstate Thermalization Hypothesis \cite{Deutsch1991,Srednicki1994,Srednicki1996,Srednicki1999}.
This Hypothesis was tested numerically for several interacting quantum systems \cite{Olshanii,Rigol2009a,Rigol2009b}.
It states that, for a large system, the expectation value $\langle \psi_n|\hat{O}|\psi_n\rangle$ of a few-body observable $\hat{O}$ in a single
eigenstate $|\psi_n\rangle$ of energy $E_n$ can be identified with the microcanonical average $O_{\rm mc}(E_n)$ of $\hat{O}$
at that energy. Here the relevant operator $\hat{O}$ is the two-body observable (the so-called
{\sl contact operator}) such that $\frac{d}{d\lambda} E_n = \langle \psi_n| \hat{O}|\psi_n\rangle$.
In the canonical ensemble, the energy fluctuations scale as $\mathcal{V}^{1/2}$ where $\mathcal{V}$ is the system volume.
We can thus expand the microcanonical average around the mean energy $\bar{E}$:
\begin{equation}
O_{\rm mc}(E) = O_{\rm mc}(\bar{E}) + (E-\bar{E}) O'_{\rm mc}(\bar{E}) + O(1)
\end{equation}
To leading order, we then find that $\mbox{Cov}\!\left(E,\frac{dE}{d\lambda}\right)\sim O_{\rm mc}'(\bar{E})\mbox{Var}\, E$
and $\mbox{Var}\!\left(\frac{dE}{d\lambda}\right)\sim [O_{\rm mc}'(\bar{E})]^2\mbox{Var}\, E$,
so that the second term in the right hand side of~(\ref{eq:d2us}) is $O(\mathcal{V}^{1/2})$,
which is negligible compared to the first term in that right hand side.
For the considered quantity, this shows the equivalence of the thermodynamic adiabaticity and of the quantum adiabaticity
for a large system.
\noindent{\sl A microcanonical detour:}
We now argue that the quantum adiabatic expression (\ref{eq:Ead1}) for the mean energy as a function of the slowly varying parameter
$\lambda$ can be obtained by a purely thermodynamic reasoning.
This implies that the exponentially long evolution times {\sl a priori} required to reach the quantum
adiabatic regime for a large system are actually not necessary to obtain (\ref{eq:Ead1}).
The first step is to realize that the initial canonical ensemble (for $\lambda=\lambda_0$) can be viewed
as a statistical mixture of microcanonical ensembles \cite{SinatraWitkowskaCastin}. These microcanonical ensembles correspond
to non-overlapping energy intervals of width $\Delta$, each interval contains many eigenstates,
but $\Delta$ is much smaller than the width of the probability distribution of the system energy in the canonical ensemble.
For further convenience, we take $\Delta \ll k_B T$.
One can label each energy interval by its central energy value, or more conveniently
by its entropy $S$.
If the eigenenergies $E_n(\lambda)$ are numbered in ascending order, the initial microcanonical ensemble of entropy
$S$ contains the eigenenergies with $n_1(S) \leq n < n_2(S)$ and $S=k_B\ln[n_2(S)-n_1(S)]$. When $\lambda$ is slowly varied, the entropy
is conserved for our isolated system, and the microcanonical ensemble simply follows the evolution of the initial $n_2(S)-n_1(S)$ eigenstates,
which cannot cross for an ergodic system and remain bunched in energy space.
Furthermore, according to the Eigenstate Thermalization Hypothesis, the energy width $E_{n_2}-E_{n_1}$ remains close to its initial
value $\Delta$: Each eigenenergy varies with a macroscopically large slope $dE_n/d\lambda$ but all the eigenenergies in the microcanonical
ensemble have essentially the same slope \footnote{One has $\frac{d}{d\lambda}(E_{n_2}-E_{n_1})= O_{\rm mc}(E_{n_2})
-O_{\rm mc}(E_{n_1})\simeq (E_{n_2}-E_{n_1})O'_{\rm mc}(E_{\rm mc}) = O(\Delta)$, where $O_{\rm mc}$ is the microcanonical expectation
value of the contact operator.}.
The mean microcanonical energy for this isentropic evolution is thus
\begin{equation}
E_{\rm mc}(S,\lambda) = \frac{1}{n_2(S)-n_1(S)} \sum_{n=n_1(S)}^{n_2(S)-1} E_n (\lambda)
\label{eq:Emc}
\end{equation}
Finally, we take the appropriate statistical mixture of the microcanonical ensembles
(so as to reconstruct the initial $\lambda=\lambda_0$ canonical ensemble): The microcanonical ensemble of entropy $S$ has
an initial central energy $E_{\rm mc}(S,\lambda_0)$, and it is weighted in the statistical mixture
by the usual expression $P(S)\propto e^{S/k_B} e^{-\beta E_{\rm mc}(S,\lambda_0)}$.
Since $\Delta \ll k_B T$, one can identify $e^{-\beta E_{\rm mc}(S,\lambda_0)}$ with
$e^{-\beta E_n(\lambda_0)}$, for $n_1(S)\leq n< n_2(S)$. The corresponding
statistical average of (\ref{eq:Emc}) with the weight $P(S)$ gives (\ref{eq:Ead1}).
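This decomposition of the canonical ensemble into microcanonical shells can be illustrated with a toy numerical model (a sketch of ours, not part of the paper; the spectrum, inverse temperature and shell width below are arbitrary choices): the canonical energy average is recovered by mixing flat shell averages with the weight $e^{S/k_B}e^{-\beta E_{\rm mc}}$, up to corrections controlled by $\beta\Delta\ll 1$.

```python
import numpy as np

# Toy dense spectrum (arbitrary choice for illustration)
rng = np.random.default_rng(0)
E = np.sort(rng.uniform(0.0, 10.0, 20000))
beta = 1.0

# Canonical average of the energy
w = np.exp(-beta * E)
w /= w.sum()
E_canonical = np.sum(w * E)

# Statistical mixture of microcanonical shells of width Delta << 1/beta:
# each shell is weighted by (number of states) * exp(-beta * shell energy),
# i.e. e^{S/k_B} e^{-beta E_mc}
Delta = 0.05
shell = (E / Delta).astype(int)
E_mix, norm = 0.0, 0.0
for b in np.unique(shell):
    sel = shell == b
    Ebar = E[sel].mean()                   # flat (microcanonical) average in the shell
    P = sel.sum() * np.exp(-beta * Ebar)   # unnormalized mixture weight
    E_mix += P * Ebar
    norm += P
E_mix /= norm
```

The two averages agree up to terms of relative order $(\beta\Delta)^2$, in line with the reasoning above.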
\section{Applications}\label{sec:appl}
In this Section, we apply some of the above relations in three dimensions, first to the two-body and three-body problems
and then to the many-body problem.
Except for the two-body case, we restrict to the infinite scattering length case $a=\infty$ in three dimensions.
\subsection{Two-body problem in a harmonic trap: Finite range corrections}
\label{subsec:appli_deux_corps}
Two particles interact with the compact-support potential $V(r_{12};b)$ of range $b$ and scattering length $a$
in an isotropic harmonic potential $U(\mathbf{r}_i)=\frac{1}{2}m\omega^2 r_i^2$.
One separates out the center of mass, in an eigenstate of energy $E_{\rm cm}$.
The relative motion is taken with zero angular momentum; its wavefunction $\psi(r)$ is an eigenstate of energy $E_{\rm rel}=E-E_{\rm cm}$
for a particle of mass $\mu=m/2$ in the potential $V(r;b)+\mu \omega^2 r^2/2$.
We take in this subsection $\hbar\omega$ as the unit of energy and $[\hbar/(\mu\omega)]^{1/2}$ as the unit of length.
For $r\geq b$ the solution may be expressed in terms of the Whittaker function $W$, or equivalently of the Kummer function $U$, see \S 13 in \cite{Abramowitz}:
\begin{eqnarray}
\label{eq:whit3d}
\frac{\psi(r)}{C_3}\!\!\! &\stackrel{3D}{=}&\!\!\! \frac{W_{\frac{E_{\rm rel}}{2},\frac{1}{4}}(r^2)}{r^{3/2}} =
e^{-\frac{r^2}{2}} U(\frac{3}{4}-\frac{E_{\rm rel}}{2},\frac{3}{2},r^2) \\
\frac{\psi(r)}{C_2}\!\!\! &\stackrel{2D}{=}&\!\!\! \frac{W_{\frac{E_{\rm rel}}{2},0}(r^2)}{r}= e^{-\frac{r^2}{2}} U(\frac{1-E_{\rm rel}}{2},1,r^2)
\label{eq:whit2d}
\end{eqnarray}
where the factors $C_2$ and $C_3$ ensure that $\psi$ is normalized to unity.
The zero-range limit, where $V(r;b)$ is replaced by the Bethe-Peierls contact conditions at the origin, is exactly solvable; it gives eigenenergies
$E_{\rm rel}^0$. We give here the finite range corrections to the energy in terms of $r_e$.
\noindent {\underline{\it Three dimensions:}}
\\
Imposing the contact condition $\psi(r)=A[r^{-1}-a^{-1}]+O(r)$ to Eq.~(\ref{eq:whit3d}) gives an implicit equation for the spectrum in the zero-range limit,
obtained in \cite{Busch} with a different technique:
\begin{equation}
f(E_{\rm rel}^0) = -\frac{1}{a} \ \ \ \mbox{with}\ \ \ f(E)\equiv -\frac{2\Gamma(\frac{3}{4}-\frac{E}{2})}{\Gamma(\frac{1}{4}-\frac{E}{2})}
\end{equation}
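As a numerical illustration (our own sketch, not taken from the paper), the roots of this implicit equation can be found with standard special-function routines; the reciprocal Gamma function handles the zeros of $f$ smoothly. At unitarity ($1/a=0$) one recovers the well-known spectrum $E_{\rm rel}^0=\frac{1}{2}+2n$ in units of $\hbar\omega$:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma, rgamma

def f(E):
    # f(E) = -2 Gamma(3/4 - E/2) / Gamma(1/4 - E/2), energies in units of hbar*omega;
    # rgamma = 1/Gamma is entire, so f is smooth at its zeros E = 1/2 + 2n
    return -2.0 * gamma(0.75 - 0.5 * E) * rgamma(0.25 - 0.5 * E)

# Unitary limit 1/a = 0: roots of f(E) = 0 lie at E = 1/2 + 2n
unitary_roots = [brentq(f, n + 0.1, n + 0.9) for n in (0, 2, 4)]
```

For finite $a$ the same bracketing strategy applies to $f(E)=-1/a$, branch by branch.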
We have calculated the finite range corrections up to order two in $b$ included; remarkably, they involve only the effective range:
\begin{equation}
E_{\rm rel} = E_{\rm rel}^0 + \frac{E_{\rm rel}^{0} r_e}{f'}+ \left(\frac{E_{\rm rel}^{0} r_e}{f'}\right)^2
\left(\frac{1}{E_{\rm rel}^{0}}-\frac{f''}{2 f'}\right) + O(b^3)
\label{eq:jusqua_re2}
\end{equation}
where the first and second order derivatives $f'$ and $f''$ of $f(E)$ are taken in
$E=E_{\rm rel}^{0}$. To obtain this expansion, we have used the result of Appendix~\ref{app:piege_dans_interaction}
that one can neglect, at this order,
the effect of the trapping potential for $r\leq b$, so that the wavefunction is proportional to the free space scattering
state at energy $E_{\rm rel}=\hbar^2 k^2/(2\mu)$, $\psi(r)=\mathcal{A}\chi(r)$.
Such an approximation was already proposed in \cite{Bolda,Naidon,GaoPiege},
without analytical control on the resulting spectral error
\footnote{
We have employed two equivalent techniques.
The first one is to match in $r=b$ the logarithmic derivatives of Eq.~(\ref{eq:whit3d}) and of Eq.~(\ref{eq:norma_chi}) and to expand
their inverses up to order $b^4$ included. Due to Eq.~(\ref{eq:devuk_3D}) this involves only $r_e$.
The second one is to use relation (\ref{eq:appoz_3d}): The matching of $\mathcal{A}\chi$ with Eq.~(\ref{eq:whit3d}) in $r=b$
gives $\mathcal{A}/C_3=\frac{\pi^{1/2}}{\Gamma(\frac{3}{4}-\frac{E_{\rm rel}}{2})}[1+O(b^2)]$, and the normalization of $\psi$ to unity,
from relation 7.611(4) in \cite{Gradstein} together with the Smorodinski relation (\ref{eq:smorodinski}),
gives $dE_{\rm rel}/dr_e$ up to order one in $b$ included, that one integrates to get the result.}.
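As a sanity check of Eq.~(\ref{eq:jusqua_re2}) (a numerical sketch of ours), the required derivatives of $f$ can be evaluated by finite differences at the unitary ground state $E_{\rm rel}^0=1/2$; one finds $f'=\sqrt{\pi}$ and $f''=2\sqrt{\pi}\ln 2$, so that the linear and quadratic coefficients of the $r_e$ expansion reduce to $1/(2\sqrt{\pi})$ and $(2-\ln 2)/(4\pi)$:

```python
import numpy as np
from scipy.special import gamma, rgamma

# f(E) = -2 Gamma(3/4 - E/2) / Gamma(1/4 - E/2); rgamma = 1/Gamma avoids the pole at E = 1/2
f = lambda E: -2.0 * gamma(0.75 - 0.5 * E) * rgamma(0.25 - 0.5 * E)

E0, h = 0.5, 1e-5   # unitary (1/a = 0) ground state energy; finite-difference step
fp  = (f(E0 + h) - f(E0 - h)) / (2.0 * h)            # f'(E0)  -> sqrt(pi)
fpp = (f(E0 + h) - 2.0 * f(E0) + f(E0 - h)) / h**2   # f''(E0) -> 2 sqrt(pi) ln 2

coef1 = E0 / fp                                      # linear-in-r_e coefficient of Eq. (jusqua_re2)
coef2 = (E0 / fp)**2 * (1.0 / E0 - fpp / (2.0 * fp)) # quadratic coefficient
```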
We have checked that the term of Eq.~(\ref{eq:jusqua_re2}) linear in $r_e$ coincides with the prediction
of [Tab.~V, Eq.~(1a)],
due to the fact that, from relation 7.611(4) in \cite{Gradstein},
the normalization factor in the zero-range limit obeys $(C_3^0)^2 2\pi^2 f'(E_{\rm rel}^0)/\Gamma^2(\frac{3}{4}-\frac{E_{\rm rel}^0}{2})=1$.
The term in Eq.~(\ref{eq:jusqua_re2}) linear in $r_e$ was already written explicitly in \cite{WernerThese}.
This corresponds to the first order perturbative use of the modified version of the zero-range model, as put forward in \cite{Efimov93}.
It can also be obtained by solving to first order in $r_e$ the self-consistent equation considered in \cite{Greene}
obtained by replacing $a_0$ by $a_E$ [see Eq.~(5) of \cite{Greene}] in Eq.~(6) of \cite{Greene}. This self-consistent equation
was also introduced in \cite{Bolda}, and in \cite{GaoPiege} [see Eqs.~(11,12,30) of that reference]
with more elaborate forms for $a_E$.
With our notations and units this self-consistent equation is simply
\begin{equation}
f(E) = - u(k=\sqrt{2E})
\label{eq:autoc}
\end{equation}
where $u(k)$ is related to the $s$-wave scattering amplitude by Eq.~(\ref{eq:introu}).
The self-consistent equation of \cite{Greene} corresponds to the choice $u(k)=\frac{1}{a}-\frac{1}{2} k^2 r_e$
in Eq.~(\ref{eq:autoc}). We have checked that solving that equation
to second order in $r_e$ then exactly gives the term of Eq.~(\ref{eq:jusqua_re2}) that is quadratic in $r_e$.
Our result of Appendix~\ref{app:piege_dans_interaction} shows that going to order three in $r_e$ with the self-consistent equation
should not give the correct result, since one can then no longer neglect the effect of harmonic trapping
within the interaction range. This clarifies the status of that self-consistent equation.
To ascertain this statement, we have calculated the ground state relative energy up to third order included in $b$,
restricting for simplicity to an infinite scattering length, $1/a=0$
\footnote{The result is based on Appendix \ref{app:piege_dans_interaction}. The simplest calculation is as follows: One first neglects
the trapping potential for $r\leq b$, one matches the inverse of the logarithmic derivative of the scattering state
(\ref{eq:norma_chi}) for $r=b^-$ with the inverse of the logarithmic derivative of (\ref{eq:whit3d}) for $r=b^+$, and one expands
the resulting equation up to order $b^5$ included, using relations 13.1.2 and 13.1.3 in \cite{Abramowitz} for $r=b^+$.
Then one includes the $r<b$ trapping effect by applying the usual
first order perturbation theory to the operator $\frac{1}{2}\mu \omega^2 r^2 \theta(b-r)$; at this order
the wavefunction for $r<b$ may be identified with the zero-energy scattering state $\phi(r)$.
An alternative, more complicated technique is to use $\psi^{(1)}$
of Appendix~\ref{app:piege_dans_interaction}. One finds that, up to order $b^4$ included, $\psi(b)/[-b\psi'(b)]=
u(1)/[-u'(1)] + f(1)/u(1)-f'(1)/u'(1)$, where we used the fact that $u(1)/[-u'(1)]=1$ to zeroth order in $b$
and $f(x)$ solves (\ref{eq:inhomo}).
Then from relations (\ref{eq:alphax},\ref{eq:betax}) and from the expression of $v(x)$ in terms of $u(y)$,
given above Eq.~(\ref{eq:estimations_du_chgt}),
one finds $\psi(b)/[-b\psi'(b)]=u(1)/[-u'(1)] + \beta(1)/u^2(1)+ O(b^5)$. Matching this to the $r>b$ solution
gives (\ref{eq:devb3}).
}. We find
\begin{multline}
E_{\rm rel} = \frac{1}{2} + \frac{r_e}{2\pi^{1/2}} + \frac{2-\ln 2}{4\pi} r_e^2
+ \Big[\frac{(1-\ln 2)(2-\ln 2)}{4\pi^{3/2}} \\
- \frac{\pi^2+12\ln^2 2}{192\pi^{3/2}} \Big] r_e^3 -\frac{\lambda_2 + \Lambda_2} {\pi^{1/2}}+O(b^4)
\label{eq:devb3}
\end{multline}
Here $\lambda_2$ is the coefficient of $k^4$ in the low-$k$ expansion of $u(k)$,
$u(k)=\frac{1}{a}-\frac{1}{2} k^2 r_e +\lambda_2 k^4+O(k^6)$,
it can be evaluated by a generalized Smorodinski relation \cite{smoro_en_prepa}.
On the contrary, $\Lambda_2$ is a new coefficient containing the effect of
the trapping potential within the interaction range. It can be expressed in terms of
the zero-energy free space scattering state $\phi(r)$,
normalized as in Eq.~(\ref{eq:normalisation_phi_tilde_3D}):
\begin{equation}
\Lambda_2 = \int_0^{+\infty} dr\, r^2 [1-u_0^2(r)]
\end{equation}
with $u_0(r)=r\phi(r)$. Although our derivation is for a compact-support potential,
we expect that our result is applicable as long as $\lambda_2$ and $\Lambda_2$
are finite. For both quantities, this requires (for $1/a=0$) that the interaction potential drops faster
than $1/r^5$ \cite{smoro_en_prepa}.
Interestingly, if one expands the self-consistent Eq.~(\ref{eq:autoc}) up to order $b^3$ included,
one exactly recovers Eq.~(\ref{eq:devb3}), except for the term $\Lambda_2$. This was expected
from the fact that the derivation of (\ref{eq:autoc}) in \cite{GaoPiege} indeed neglects
the trapping potential within the interaction range.
This discussion is illustrated for the particular case of the square-well potential (\ref{eq:puits_carre}) in Fig.~\ref{fig:autoc},
with the exact spectrum obtained by matching the logarithmic derivative
of a Whittaker $M$ function for $r=b^-$ with the logarithmic derivative of a Whittaker $W$ function for $r=b^+$
as in Eqs.~(6.16,6.17,6.18) of \cite{WernerThese}
\footnote{In \cite{Pietro_r_e} a similar calculation was performed, except that the harmonic trap was neglected
within the interaction range.}.
In this case, one finds $r_e=b$ \cite{YvanVarenna} and, remarkably, $\Lambda_2=-2 \lambda_2$
so that the difference between the ground state energy of (\ref{eq:autoc}) and the exact ground state energy obeys
\begin{equation}
E_{\rm rel}^{\rm self} - E_{\rm rel} = \frac{\Lambda_2}{\pi^{1/2}} + O(b^4) =
\left(\frac{1}{6}-\frac{1}{\pi^2}\right)\frac{b^3}{\pi^{1/2}} + O(b^4).
\label{eq:diff_autoc_exact}
\end{equation}
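Two of the square-well statements above, $r_e=b$ and the value of $\Lambda_2$ entering Eq.~(\ref{eq:diff_autoc_exact}), can be checked by direct integration (our own sketch): at $1/a=0$ one has $u_0(r)=\sin(\pi r/2b)$ for $r<b$ and $u_0=1$ beyond, and we use the Smorodinski relation in the form $r_e = 2\int_0^\infty dr\,[(1-r/a)^2 - u_0^2(r)]$.

```python
import numpy as np
from scipy.integrate import quad

b = 1.0  # range of the square well (arbitrary unit)

def u0(r):
    # zero-energy reduced radial wavefunction r*phi(r) at 1/a = 0
    return np.sin(np.pi * r / (2.0 * b)) if r < b else 1.0

# Effective range at 1/a = 0: r_e = 2 * integral of (1 - u0^2); integrand vanishes for r > b
r_e, _ = quad(lambda r: 2.0 * (1.0 - u0(r)**2), 0.0, b)

# Lambda_2 = integral of r^2 (1 - u0^2) dr
Lam2, _ = quad(lambda r: r**2 * (1.0 - u0(r)**2), 0.0, b)
```

One finds $r_e=b$ and $\Lambda_2=(1/6-1/\pi^2)\,b^3$, the coefficient appearing in Eq.~(\ref{eq:diff_autoc_exact}).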
Note that the case of two fermions with a square-well
interaction in a harmonic trap was numerically studied in \cite{Calarco}, for the $s$-wave and also for the $p$-wave case,
with the exact spectrum compared to the self-consistent equation (\ref{eq:autoc}) or to its $p$-wave equivalent.
No conclusion was given on the scaling with $b$ of the difference between the exact and the approximate spectrum.
\begin{figure}[t]
\includegraphics[width=0.9\linewidth,clip=]{fig2.eps}
\caption{For two opposite spin fermions interacting in $3D$ {\sl via} a potential of short range $b$ in an isotropic harmonic
trap, the self-consistent equation (\ref{eq:autoc}), derived e.g.\ in \cite{GaoPiege}, gives the eigenenergies
with an error of order $b^3$, due to the fact that it neglects the effect of the harmonic trap
within the interaction range, see Appendix~\ref{app:piege_dans_interaction}. This is illustrated with the ground state relative energy
for a square-well potential of infinite scattering length: The deviation (solid line) between the approximate energy $E_{\rm rel}^{\rm self}$
[solving Eq.~(\ref{eq:autoc})] and the exact one $E_{\rm rel}$ (calculated as in \cite{WernerThese}) vanishes
as $b^3$, with a coefficient given by Eq.~(\ref{eq:diff_autoc_exact}) (dotted line). $\mu$ is the reduced mass,
$\omega$ is the angular oscillation frequency in the trap and $a_{\rm ho}=[\hbar/(\mu \omega)]^{1/2}$.
\label{fig:autoc}}
\end{figure}
\noindent {\underline{\it Two dimensions:}}
\\
Imposing the contact condition $\psi(r)=A\ln(r/a)+O(r)$ to Eq.~(\ref{eq:whit2d}) gives an implicit equation for the spectrum in the zero-range limit
\cite{Busch,LeChapitre}:
\begin{equation}
\psi\left(\frac{1-E_{\rm rel}^0}{2}\right)-2\psi(1)= -2 \ln a
\end{equation}
where $\psi$ is the digamma function. We have obtained the finite range correction
\begin{equation}
E_{\rm rel} = E_{\rm rel}^0 + \frac{4 r_e^2 E_{\rm rel}^0}{\psi'(\frac{1-E_{\rm rel}^0}{2})} + O(b^4\ln^4 b)
\label{eq:dev_expli_2d}
\end{equation}
by neglecting the trapping potential for $r\leq b$ as justified by Appendix~\ref{app:piege_dans_interaction}, and by matching
in $r=b$ the scattering state $\mathcal{A}\chi(r)$ to Eq.~(\ref{eq:whit2d}). The bound on the error results in particular
from the statement that $\ldots$ in Eq.~(\ref{eq:devuk_2D}) are $O[(kb)^4\ln(a/b)]$, that one can e.g.\ check for the square-well potential.
As expected, the value of $\partial E_{\rm rel}/\partial (r_e^2)$ in $r_e=0$ obtained from Eq.~(\ref{eq:dev_expli_2d}) coincides with
[Tab.~V, Eq.~(1b)], knowing that the normalization factor in the zero-range limit, according to relation 7.611(5) in \cite{Gradstein},
obeys $(C_2^0)^2 \pi \psi'(\frac{1-E_{\rm rel}^0}{2})/\Gamma^2(\frac{1-E_{\rm rel}^0}{2})=1$.
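The lowest branch of the $2D$ transcendental equation is also easily handled numerically (a sketch of ours): choosing for instance $a=e^{-\gamma/2}$, with $\gamma$ Euler's constant, the lowest root is exactly $E_{\rm rel}^0=-1$, since $\psi(1)=-\gamma$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import digamma

a = np.exp(-np.euler_gamma / 2.0)   # test value: the lowest root is then exactly -1

def h(E):
    # psi((1 - E)/2) - 2 psi(1) + 2 ln a, with psi the digamma function
    return digamma((1.0 - E) / 2.0) - 2.0 * digamma(1.0) + 2.0 * np.log(a)

# Lowest branch: E < 1, so the digamma argument stays positive and h is monotonic
E0 = brentq(h, -5.0, 0.999)
```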
\subsection{Three-body problem: corrections to exactly solvable cases and comparison with numerics}\label{subsec:appl_3body}
In this Subsection, we use the known analytical expressions for the three-body wavefunctions to compute the corrections to the spectrum to first order in the inverse scattering length $1/a$ and in the effective range $r_e$. We shall consider not only spin-1/2 fermions, but also spinless bosons restricting to the universal stationary states~\cite{Werner3corpsPRL,Pethick3corps} which do not depend on the three-body parameter.
The problem of three identical spinless bosons~\cite{Werner3corpsPRL,Pethick3corps}
or spin-1/2 fermions (say $N_\uparrow=2$ and $N_\downarrow=1$)~\cite{Werner3corpsPRL,TanScaling}
is exactly solvable in the unitary limit in an isotropic harmonic trap $U(\mathbf{r})=\frac{1}{2}\,m\omega^2 r^2$.
Here we restrict to zero total angular momentum (see however the last line of Appendix~\ref{app:echelle})
with a center of mass in its ground state,
so that the normalization constants of the wavefunctions are also known analytically \cite{WernerThese}. Moreover we restrict to universal eigenstates~\footnote{For Efimovian eigenstates, computing the derivative of the energy with respect to the effective range would require to use a regularisation procedure similar to the one employed in free space in \cite{Efimov93,PlatterRangeCorrections}. However the derivative with respect to $1/a$ can be computed \cite{WernerThese}.}. The spectrum is then
\begin{equation}
E=E_{\rm cm}+(s+1+2q)\hbar\omega
\label{eq:echelle}
\end{equation}
where $E_{\rm cm}$ is the energy of the center of mass motion,
$s$ belongs to the infinite set of real positive solutions of
\begin{equation}
-s \cos\left(s\frac{\pi}{2}\right) + \eta\frac{4}{\sqrt{3}}\sin\left(s\frac{\pi}{6}\right)=0
\label{eq:s}
\end{equation}
with $\eta=+2$ for bosons and $-1$ for fermions,
and
$q$ is a non-negative integer quantum number describing the degree of excitation of an exactly decoupled bosonic breathing mode
\cite{CRAS,WernerSym}.
We restrict to states with $q=0$. The case of a non-zero $q$ is treated in subsection
\ref{subsec:contactN}.
\paragraph{Derivative of the energy with respect to $1/a$.}
Injecting the expression of the regular part $A$ of the normalized wavefunction \cite{WernerThese} into
[Tab.~II, Eqs.~(2a,4a)] or its bosonic version (Tab.~V, line~1 in \cite{CompanionBosons}) we obtain
\begin{equation}
\frac{\partial E}{\partial(-1/a)}\Big|_{a=\infty}\!\!\!\! =
\frac{\sqrt{\frac{\hbar^3\omega}{m}}\Gamma(s+\frac{1}{2})\sqrt{2}s\sin\left(s\frac{\pi}{2}\right)/\Gamma(s+1)}
{
-\cos\left(s\frac{ \pi}{2 } \right) + s\frac{ \pi}{2 } \sin\left(s\frac{ \pi}{2 } \right)
+\eta\frac{2\pi }{3\sqrt{3} } \cos\left(s\frac{ \pi}{6 } \right)
}
\label{eq:dEda3corps}
\end{equation}
For the lowest fermionic state, this gives
$(\partial E/\partial(1/a))_{a=\infty}\simeq -1.1980\sqrt{\hbar^3\omega/m}$,
in agreement with the value $-1.19(2)$ which we extracted from the numerical solution of a finite-range model
presented in Fig.~4a of \cite{StecherLong},
where the error bar comes from our simple way of extracting the derivative from the numerical data of \cite{StecherLong}.
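This comparison is easy to reproduce (a numerical sketch of ours, not the authors' code): one solves Eq.~(\ref{eq:s}) for the lowest fermionic $\ell=0$ exponent, bracketing so as to discard the root $s=2$ for which $\sin(s\pi/2)=0$ (it would give a vanishing numerator in Eq.~(\ref{eq:dEda3corps}), at odds with the value quoted above), and inserts it into Eq.~(\ref{eq:dEda3corps}):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaln

eta = -1.0  # fermions

def g(s):
    # left-hand side of the transcendental equation for s
    return -s * np.cos(s * np.pi / 2) + eta * (4.0 / np.sqrt(3.0)) * np.sin(s * np.pi / 6)

# Lowest physical l = 0 fermionic exponent; the bracket avoids the root s = 2
s = brentq(g, 2.05, 2.5)

# dE/d(-1/a) in units of sqrt(hbar^3 omega / m), Eq. (dEda3corps)
num = np.exp(gammaln(s + 0.5) - gammaln(s + 1.0)) * np.sqrt(2.0) * s * np.sin(s * np.pi / 2)
den = (-np.cos(s * np.pi / 2) + s * (np.pi / 2) * np.sin(s * np.pi / 2)
       + eta * (2.0 * np.pi / (3.0 * np.sqrt(3.0))) * np.cos(s * np.pi / 6))
dE_dinva = num / den
```

One finds $s\simeq 2.1662$ and $\partial E/\partial(1/a)=-$\,\texttt{dE\_dinva}$\simeq -1.198\,\sqrt{\hbar^3\omega/m}$, as quoted above.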
\paragraph{Derivative of the energy with respect to the effective range.}
Using relation [Tab.~V, Eq.~(1a)], which holds not only for fermions but also for bosonic universal states, we obtain
\begin{equation}
\left(\frac{\partial E}{\partial r_e}\right)_{a=\infty}\!\!\!\!\!=
\frac{\sqrt{\frac{\hbar m\omega^3}{8}} \Gamma(s-\frac{1}{2}) s
(s^2-\frac{1}{2}) \sin(s\frac{\pi}{2})
/\Gamma(s+1)}
{-\cos(s\frac{\pi}{2})+s\frac{\pi}{2}\sin(s\frac{\pi}{2})
+\eta \,\frac{2\pi}{3\sqrt{3}}\cos(s\frac{\pi}{6})
}
\label{eq:dEdre_3}
\end{equation}
For bosons, this result was derived previously using the method of \cite{Efimov93} and found to agree with the numerical solution of a finite-range separable potential model for the lowest state \cite{WernerThese}.
For fermions, (\ref{eq:dEdre_3}) agrees with the numerical data from Fig. 3 of \cite{StecherLong} to $\sim0.3\%$ for the two lowest states and $5\%$ for the third lowest state
\footnote{Here we used the value of the effective range $r_e=1.435\,r_0$~\cite{ThogersenThese}
for the Gaussian interaction potential $V(r)=-V_0 e^{-r^2/r_0^2}$
with $V_0$ equal to the value where the first two-body bound state appears.};
(\ref{eq:dEdre_3}) also agrees to $3\%$
with the numerical data
from p.~21 of \cite{WernerThese} for the lowest state of a finite-range separable potential model.
All these deviations are compatible with the estimated numerical accuracy.
\subsection{$N$-body problem in an isotropic trap: Non-zero $1/a$ and $r_e$ corrections}
\label{subsec:contactN}
We now generalize subsection \ref{subsec:appl_3body} to the case of an arbitrary number $N$ of spin-1/2 fermions
(with an arbitrary spin configuration) at the unitary limit in an isotropic harmonic trap.
Although one cannot calculate $\partial E/\partial (1/a)$ and $\partial E/\partial r_e$, some useful information can be obtained
from the following remarkable property:
For any initial stationary state, and after an arbitrary change of the isotropic trap curvature, the system experiences
an undamped breathing at frequency $2\omega$, $\omega$ being the single atom oscillation frequency in the final trapping potential \cite{CRAS}.
From this one can conclude that, in the case of a time independent trap, the system exhibits
a $SO(2,1)$ dynamical symmetry \cite{WernerSym}: The spectrum is a collection of semi-infinite
ladders indexed by the natural integer $q$. Another crucial consequence is that the eigenstate wavefunctions
are separable in $N$-body hyperspherical coordinates, with a known expression for the dependence on the hyperradius \cite{WernerSym}.
This implies that the functions $A_{ij}$ are also separable in $(N-1)$-body hyperspherical coordinates and that
their hyperradial dependence is also known. As the eigenstates within a ladder have exactly the same hyperangular part,
one can relate the energy derivatives (with respect to $1/a$ or $r_e$) for step $q$ of a ladder
to the derivative for the ground step of the {\sl same} ladder, as detailed
in Appendix~\ref{app:echelle}:
\begin{multline}
\label{eq:relat1}
\left[\frac{\partial E}{\partial (1/a)}\right]_{q} = \left[\frac{\partial E}{\partial (1/a)}\right]_{0}
\frac{\Gamma(s+1)}{\Gamma(s+q+1)} \\
\times \sum_{k=0}^q \left[\frac{\Gamma(k+\frac{1}{2})}{\Gamma(k+1)\Gamma(\frac{1}{2})}\right]^2
\frac{\Gamma(s+q-k+\frac{1}{2})\Gamma(q+1)}{\Gamma(s+\frac{1}{2})\Gamma(q-k+1)}
\end{multline}
where the eigenenergy of step $q$ is written as in Eq.~(\ref{eq:echelle}), $s$ being now unknown for the general
$N$-body problem.
We have checked that this explicit result is consistent with the recursion relations derived
in \cite{Moroz}.
A similar type of result holds for the derivative with respect to $r_e$:
\begin{multline}
\label{eq:relat2}
\left[\frac{\partial E}{\partial r_e}\right]_{q} = \left[\frac{\partial E}{\partial r_e}\right]_{0}
\frac{\Gamma(s+1)}{\Gamma(s+q+1)} \\
\times \sum_{k=0}^q \left[\frac{\Gamma(k+\frac{3}{2})}{\Gamma(k+1)\Gamma(\frac{3}{2})}\right]^2
\frac{\Gamma(s+q-k-\frac{1}{2})\Gamma(q+1)}{\Gamma(s-\frac{1}{2})\Gamma(q-k+1)}
\end{multline}
For non-zero $1/a$ or $r_e$, the level spacing is not constant within a ladder, so the system will not respond to a trap change
with a monochromatic breathing mode. In a small system, a Fourier transform of the system response
can give access to the Bohr frequencies $(E_q-E_{q-1})/\hbar$, which would allow
an experimental test of Eqs.~(\ref{eq:relat1},\ref{eq:relat2}).
In the large $N$ limit, for a system prepared in its ground state,
we now show that the main effects of non-zero $1/a$ or $r_e$ on the breathing
mode are a frequency change and a collapse.
Let us take the macroscopic limit of Eqs.~(\ref{eq:relat1},\ref{eq:relat2}) for a fixed $q$:
Using Stirling's formula for $s\to +\infty$ we obtain
\begin{eqnarray}
\label{eq:devq2unsura}
\frac{[\partial E/\partial(1/a)]_q}{[\partial E/\partial(1/a)]_0}&=& 1 -\frac{q}{4s} +\frac{q(9q+7)}{64 s^2} +\ldots\\
\frac{[\partial E/\partial r_e]_q}{[\partial E/\partial r_e]_0} &=& 1 + \frac{3q}{4s} -\frac{3q(5q+11)}{64s^2}
+\ldots
\label{eq:devq2re}
\end{eqnarray}
The first deviations from unity are thus linear in $q$, and correspond to a shift of
the breathing mode frequency $\omega_{\rm breath}$ to the new value $2\omega + \delta\omega_{\rm breath}$,
that can be obtained to leading order in $1/a$ and $r_e$ from
\begin{equation}
\frac{\partial \omega_{\rm breath}}{\partial (1/a)} = -\frac{\omega}{4 E_0} \frac{\partial E_0}{\partial (1/a)}
\ \ \mbox{and}\ \
\frac{\partial \omega_{\rm breath}}{\partial r_e} = \frac{3\omega}{4 E_0} \frac{\partial E_0}{\partial r_e}
\label{eq:dbreath}
\end{equation}
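The consistency of Eq.~(\ref{eq:relat1}) with the expansion (\ref{eq:devq2unsura}) can be cross-checked numerically (our own sketch); for $q=1$ the sum even reduces to the closed form $(s+3/4)/(s+1)$, obtained by direct evaluation of the two terms:

```python
import numpy as np
from scipy.special import gammaln

def ratio_a(q, s):
    # [dE/d(1/a)]_q / [dE/d(1/a)]_0 from Eq. (relat1), evaluated with log-gammas
    tot = 0.0
    for k in range(q + 1):
        lg = (2.0 * (gammaln(k + 0.5) - gammaln(k + 1.0) - gammaln(0.5))
              + gammaln(s + q - k + 0.5) + gammaln(q + 1.0)
              - gammaln(s + 0.5) - gammaln(q - k + 1.0))
        tot += np.exp(lg)
    return np.exp(gammaln(s + 1.0) - gammaln(s + q + 1.0)) * tot
```

At large $s$ and fixed $q$ the ratio indeed approaches $1 - q/(4s) + q(9q+7)/(64 s^2)$.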
For a non-polarized gas (with the same number $N/2$ of particles in each spin state)
the local density approximation gives $4s \sim (3N)^{4/3} \xi^{1/2}$ \cite{TanScaling,BlumeUnivPRL} and allows one
to obtain the derivative of the energy with respect to $1/a$ \cite{WernerTarruellCastin} or to $r_e$ in terms of $\xi$,
$\zeta$ and $\zeta_e$, defined in Eqs.~(\ref{eq:eq_d_etat},\ref{eq:def_zetae}), so that
\begin{eqnarray}
\delta\omega_{\rm breath}= \frac{256\omega}{525\pi \xi^{5/4}} \left[\frac{\xi^{1/2}\zeta}{k_F a}+2\zeta_e k_F r_e\right]
\end{eqnarray}
where we have introduced the Fermi momentum $k_F$ of the unpolarized trapped ideal gas with the same atom number $N$
as the unitary gas, with $\hbar^2 k_F^2/(2m)=(3N)^{1/3}\hbar\omega$.
For $r_e=0$, we recover the superfluid hydrodynamic prediction of \cite{BulgacModes,CombescotLeyronasComment,YunStringari}.
We have checked that the change of the mode frequency due to finite range effects can also be obtained
from hydrodynamics \footnote{The hydrodynamic frequencies $\Omega$ are given by the
eigenvalue problem $-m\Omega^2 \delta \rho = \mbox{div}\, [\rho_0\, \nabla\, (\mu_{\rm hom}'[\rho_0]\, \delta \rho)]$
where $\delta \rho(\mathbf{r})$ is the infinitesimal deviation from the stationary density profile $\rho_0(\mathbf{r})$,
$\mu_{\rm hom}[\rho]$ is the ground state chemical potential of the homogeneous gas of density $\rho$, and the prime
$'$ denotes differentiation. For the equation of state $\mu_{\rm hom}[\rho]=A \rho^{2/3} + B \rho^{\gamma}$, where $B$ is arbitrarily
small, we treat the term in $B$ to first order in perturbation theory around the breathing mode
to obtain $\Omega=2\omega + \omega \frac{96}{\pi} (\gamma-\frac{2}{3}) \frac{B}{\mu} \left(\frac{\mu}{A}\right)^{3\gamma/2}
\int_0^1 du u^2 (1-2u^2) (1-u^2)^{(3\gamma+1)/2}$ where $\mu=\omega N^{1/3} (2mA/\pi^{4/3})^{1/2}$ is the unperturbed
chemical potential of the trapped gas. To zeroth order in $B$, scaling invariance gives
$\delta \rho^{(0)}(\mathbf{r}) =\frac{d}{d\lambda}[\rho_0(\mathbf{r}/\lambda)/\lambda^3]_{\lambda=1}$.
To use perturbation theory, we made the differential operator Hermitian with the change of function
$\delta f(\mathbf{r}) = (\mu_{\rm hom}'[\rho_0(\mathbf{r})])^{1/2} \delta\rho(\mathbf{r})$. Hermiticity of the perturbation is guaranteed
(i.e.\ surface terms coming from the divergence theorem vanish) for $\gamma$ larger than $1/3$.
For finite-$r_e$ corrections, $\gamma=1$.
};
this change in typical experiments is of the order of $0.1\%$ for
lithium and $0.5\%$ for potassium, see subsection \ref{subsec:app_manips}.
Furthermore, due to the presence of $q^2$ terms in Eqs.~(\ref{eq:devq2unsura},\ref{eq:devq2re}), the Bohr frequencies
$(E_{q}-E_{q-1})/\hbar$ depend on the excitation degree $q$ of the mode: If many steps of the ground state ladder
are coherently populated, this can lead to a {\sl collapse}
of the breathing mode, which constitutes a mechanism for zero-temperature damping \cite{DalfovoMinnitiPitaevskii,CastinDumInstab}.
To coherently excite the breathing mode, we start with a ground state gas, with wavefunction $\psi_{\rm old}$,
and we abruptly change at $t=0$ the trap frequency from $\omega_{\rm old}$ to $\omega =\lambda^2 \omega_{\rm old}$. For the unitary gas,
$\psi_{\rm old}$ is deduced from the $t=0^+$ ground state $\psi_0$ by a dilation with scaling factor $\lambda$,
\begin{equation}
|\psi_{\rm old}\rangle = e^{-i \hat{D} \ln \lambda} |\psi_0\rangle
\end{equation}
where $\hat{D}$ is the generator of the dilations \cite{LeChapitre,WernerSym}. Using the representation of $\hat{D}$
in terms of the bosonic operator $\hat{b}$ \cite{WernerSym}, that annihilates an elementary excitation of the breathing
mode ($\hat{b}|q\rangle = q^{1/2} |q-1\rangle$), and restricting to $|\epsilon|\ll 1$, where $\epsilon=\ln \lambda$,
one has
\begin{equation}
\hat{D} \simeq -i s^{1/2} (\hat{b}^\dagger -\hat{b})
\end{equation}
so that the trap change prepares the breathing mode in a Glauber coherent state with mean occupation number $\bar{q}= \epsilon^2 s$
and standard deviation $\Delta q=\bar{q}^{1/2}$. Similarly, the fluctuations of the squared radius of the gas
$\sum_i r_i^2/N$, which can be measured, are given by $-\frac{\hbar s^{1/2}}{m\omega} (\hat{b}+\hat{b}^\dagger)$
for small $\epsilon$.
In the large system limit, one can have $\bar{q}\gg 1$ so that $1\ll \Delta q \ll \bar{q}$. At times much shorter
than the revival time $2\pi\hbar/|\partial_q^2E_q|$, one then replaces the discrete sum over $q$ by an integral to obtain
\begin{equation}
\left|\frac{\langle \hat{b}\rangle(t)}{\langle \hat{b}\rangle(0)}\right| = e^{-t^2/(2 t_c^2)} \ \ \ \mbox{with}\ \ \ \
t_c = \frac{\hbar}{\Delta q\, \left|\partial_q^2 E_q\right|_{q=\bar{q}}}
\end{equation}
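This Gaussian collapse can be reproduced with a toy dephasing model (our sketch; all parameter values below are arbitrary illustrative choices): take $E_q$ with a small $q^2$ anharmonicity, approximate the Poissonian coherent-state weights by a Gaussian (valid for $\bar{q}\gg\Delta q\gg 1$), and sum the dephasing factors.

```python
import numpy as np

hbar = 1.0
qbar, dq = 400.0, 20.0   # mean occupation and spread, 1 << dq << qbar
kappa = 1.0e-4           # toy anharmonicity: E_q = const + 2*hbar*omega*q + kappa*q^2

q = np.arange(int(qbar - 8 * dq), int(qbar + 8 * dq) + 1)
w = np.exp(-(q - qbar)**2 / (2.0 * dq**2))
w /= w.sum()             # Gaussian approximation to the Poissonian weights

def envelope(t):
    # |<b>(t)| up to the fast common phase: only the q-dependent part of the
    # Bohr frequencies, kappa*(2q+1)/hbar, dephases the sum
    return abs(np.sum(w * np.exp(-1j * 2.0 * kappa * q * t / hbar)))

t_c = hbar / (dq * 2.0 * kappa)   # hbar / (Delta_q |d^2 E_q / dq^2|)
```

At $t=t_c$ the envelope has dropped to $e^{-1/2}$ of its initial value, as the Gaussian law above predicts.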
For an unpolarized gas, using Eqs.~(\ref{eq:devq2unsura},\ref{eq:devq2re}) and the local density approximation, we obtain
the inverse collapse time due to non-zero $1/a$ or $r_e$:
\begin{eqnarray}
(\omega t_c )^{-1} = \frac{64 |\epsilon|}{35\pi \xi (3N)^{2/3}} \left|\frac{3\zeta}{5 k_F a}+\frac{2\zeta_e k_F r_e}{3\xi^{1/2}}\right|
\end{eqnarray}
For lithium experiments, $t_c$ amounts to thousands of mode oscillation periods. To conclude with an exotic note,
we recall that the $q^2$ terms in Eqs.~(\ref{eq:devq2unsura},\ref{eq:devq2re})
lead to the formation of a Schr\"odinger-cat-like state for the breathing mode at half the revival time \cite{YurkeStoler}.
\subsection{Unitary Fermi gas: comparison with fixed-node Monte Carlo}
\label{sec:FNMC}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth,clip=]{fig3.eps}
\caption{Pair distribution function
$g_{\uparrow\downarrow}^{(2)}(r)=\langle
\hat{\psi}^\dagger_\uparrow(\mathbf{r})
\hat{\psi}^\dagger_\downarrow(\mathbf{0})
\hat{\psi}_\downarrow(\mathbf{0})
\hat{\psi}_\uparrow(\mathbf{r})
\rangle$ of the homogeneous non-polarized unitary gas at zero temperature. Circles: fixed-node Monte Carlo results from Ref.~\cite{LoboGiorgini_g2}. Solid line: analytic expression~(\ref{eq:g2_pour_MC}), where the value $\zeta=0.95$ was taken to fit the Monte Carlo results.
The arrow indicates the range $b$ of the square-well interaction potential.
Dashed line: analytic expression~(\ref{eq:g2jolie}), with $\zeta_e=0.12$ \cite{CarlsonAFQMC}.
\label{fig:g2}}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.9\linewidth,clip=]{fig4.eps}
\caption{(Color online) One-body density matrix $g^{(1)}_{\sigma\si}(r)=
\langle\hat{\psi}^\dagger_\sigma(\mathbf{r})\hat{\psi}_\sigma(\mathbf{0})\rangle$ of the homogeneous
non-polarized unitary gas at zero temperature: comparison between the fixed-node Monte
Carlo results from Ref.~\cite{Giorgini_nk}
(black solid line) and the
analytic expression~(\ref{eq:g1_pour_MC}) for the
small-$k_F r$ expansion of $g^{(1)}_{\sigma\sigma}$ up to first order
(red dashed straight line) and second order (blue dotted parabola)
where we took the value $\zeta=0.95$ extracted from the Monte
Carlo data for $g^{(2)}_{\uparrow\downarrow}$, see Fig.~\ref{fig:g2}.
\label{fig:g1} }
\end{figure}
For the homogeneous non-polarized unitary gas (i.e. the spin-1/2 Fermi gas in $3D$ with $a=\infty$ and $N_\uparrow=N_\downarrow$) at zero temperature, we can compare our analytical expressions for the short-distance behavior of the one-body density matrix $g^{(1)}_{\sigma\sigma}$ and the pair distribution function $g^{(2)}_{\uparrow\downarrow}$ to the fixed-node Monte Carlo results in~\cite{Giorgini,Giorgini_nk,LoboGiorgini_g2}. In this case, $g^{(1)}_{\sigma\si}({\bf R}-\mathbf{r}/2,{\bf R}+\mathbf{r}/2)$ and
$g^{(2)}_{\uparrow\downarrow}({\bf R}-\mathbf{r}/2,{\bf R}+\mathbf{r}/2)$ depend only on $r$ and not on $\sigma$, ${\bf R}$ and the direction of $\mathbf{r}$.
Expanding the energy to first order in $1/(k_F a)$ around the unitary limit yields:
\begin{equation}
E=E_{\rm ideal}\left(\xi - \frac{\zeta}{k_F a}+\dots\right)
\label{eq:eq_d_etat}
\end{equation}
where $E_{\rm ideal}$ is the ground state energy of the ideal gas, $\xi$ and $\zeta$ are universal dimensionless numbers, and the Fermi wavevector is related to the density through $k_F=(3\pi^2 n)^{1/3}$.
Expressing $C$ in terms of $\zeta$ thanks to [Tab.~II, Eqs.~(2a,4a)] and Eq.~(\ref{eq:eq_d_etat}), and inserting this into~[Tab.~II, Eq.~(7a)], we get
\begin{equation}
g^{(1)}_{\sigma\si}(r)\simeq\frac{n}{2}\left[ 1 - \frac{3\zeta}{10} k_F r - \frac{\xi}{10} (k_F r)^2 + \dots\right].
\label{eq:g1_pour_MC}
\end{equation}
For a finite interaction range $b$, this expression is valid for $b\ll r \ll k_F^{-1}$
\footnote{For a finite-range potential one has $g^{(1)}_{\sigma\si}(r)=n/2-r^2 m E_{\rm kin}/(3\hbar^2 \mathcal{V})+\dots$ where $\mathcal{V}$ is the volume; the kinetic energy diverges in the zero-range limit as $E_{\rm kin}\sim -E_{\rm int}$, thus $E_{\rm kin}\sim-C/(4\pi)^2 \int d^3r\,V(r)|\phi(r)|^2$ from
[Tab.~IV, Eq.~(2a)], so that $E_{\rm kin}\sim C\pi\hbar^2/(32 m b)$ for the square-well interaction. This behavior of $g^{(1)}(r)$ only holds at very short distance $r\ll b$ and is below the resolution of the Monte Carlo data.}.
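As a quick numerical illustration of Eq.~(\ref{eq:g1_pour_MC}), the sketch below evaluates the expansion with $\zeta=0.95$ (the value used in the comparison with the Monte Carlo data) and an assumed Bertsch parameter $\xi\simeq0.376$, an illustrative input that is not given in this section:

```python
def g1_short_distance(kF_r, xi=0.376, zeta=0.95, n=1.0):
    """Short-distance expansion of g^(1)_{sigma sigma}, Eq. (g1_pour_MC),
    normalized to the density n (g1 -> n/2 for r -> 0)."""
    return 0.5 * n * (1.0 - 0.3 * zeta * kF_r - 0.1 * xi * kF_r**2)

# The linear (contact) term dominates at short distance: at kF*r = 0.1 it
# depletes g1 by ~2.9%, while the quadratic term contributes only ~0.04%.
for kF_r in (0.1, 0.3, 0.5):
    print(kF_r, g1_short_distance(kF_r))
```

The dominance of the linear term at small $k_F r$ is the reason why the first-order slope is the most robust feature to compare with the Monte Carlo data.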
[Tab.~IV, Eq.~(4a)] yields
\begin{equation}
g^{(2)}_{\uparrow\downarrow}(r)\underset{k_F r\ll1}{\simeq}\frac{\zeta}{40\pi^3}k_F^4 |\phi(r)|^2.
\label{eq:g2_pour_MC}
\end{equation}
The interaction potential used in the Monte Carlo simulations~\cite{Giorgini,Giorgini_nk,LoboGiorgini_g2} is a square-well:
\begin{equation}
V(r)=-\left(\frac{\pi}{2}\right)^2 \frac{\hbar^2}{m b^2}\, \theta(b-r)
\label{eq:puits_carre}
\end{equation}
The corresponding zero-energy scattering state is
\begin{equation}
\phi(r)=\frac{\sin\left(\frac{\pi r}{2 b}\right)}{r}\ \ \mbox{for}\ \ r<b, \ \phi(r)=\frac{1}{r} \ \ \mbox{for}\ \ r>b
\end{equation}
and the range $b$ was taken such that $n b^3=10^{-6}$ i.e. $k_F b=0.0309367\dots$. Thus we can assume that we are in the zero-range limit $k_F b\ll1$, so that (\ref{eq:g1_pour_MC},\ref{eq:g2_pour_MC}) are applicable.
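Both ingredients of this paragraph can be checked with a few lines: that $\phi$ is indeed the zero-energy, $a=\infty$ scattering state of the square well (\ref{eq:puits_carre}), and the quoted value of $k_F b$. A minimal sketch in units $\hbar=m=b=1$:

```python
import math

hbar = m = b = 1.0  # units with hbar = m = b = 1

def u(r):
    """Reduced zero-energy scattering wavefunction, u(r) = r*phi(r)."""
    return math.sin(math.pi * r / (2 * b)) if r < b else 1.0

# Zero-energy two-body problem (reduced mass m/2):
#   -(hbar^2/m) u'' + V u = 0, with V = -(pi/2)^2 hbar^2/(m b^2) inside the well.
r, d = 0.4 * b, 1e-5
u2 = (u(r + d) - 2 * u(r) + u(r - d)) / d**2          # numerical u''
V = -(math.pi / 2) ** 2 * hbar**2 / (m * b**2)
residual = -(hbar**2 / m) * u2 + V * u(r)
print(residual)   # ~ 0: phi solves the zero-energy Schroedinger equation

# u and u' are continuous at r = b, and u = const outside => a = infinity.

# Interaction range used in the simulations: n b^3 = 10^-6, k_F = (3 pi^2 n)^(1/3)
kF_b = (3 * math.pi**2 * 1e-6) ** (1 / 3)
print(kF_b)       # 0.0309367...
```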
Figure \ref{fig:g2} shows that the expression (\ref{eq:g2_pour_MC}) for $g^{(2)}_{\uparrow\downarrow}$ fits well the Monte Carlo data of \cite{LoboGiorgini_g2} if one adjusts the value of $\zeta$ to $0.95$. This value is close to the value $\zeta\simeq1.0$ extracted from (\ref{eq:eq_d_etat}) and the $E(1/a)$-data of~\cite{Giorgini}.
Using $\zeta=0.95$ we can compare the expression (\ref{eq:g1_pour_MC}) for $g^{(1)}_{\sigma\sigma}$ with Monte Carlo data of~\cite{Giorgini_nk} without adjustable parameters.
Figure \ref{fig:g1} shows that the first order derivatives agree, while the second order derivatives are compatible within the statistical noise. This provides an interesting check of the numerical results, even though any wavefunction satisfying the contact condition~[Tab. I, Eq. (1a)] leads to $g^{(1)}_{\sigma\sigma}$ and $g^{(2)}_{\uparrow\downarrow}$ functions satisfying [Tab.~II, Eqs.~(3a,6a)] with values of $C$ compatible with each other.
A more interesting check is provided by our expression [Tab.~V, Eq.(3a)] for the subleading term in the short range behavior
of $g^{(2)}_{\uparrow\downarrow}(r)$, which here reduces to
\begin{equation}
g^{(2)}_{\uparrow\downarrow}(r) = \frac{\zeta}{40\pi^3} \frac{k_F^4}{r^2} - \frac{\zeta_e}{20\pi^3} k_F^6 + O(r)
\label{eq:g2jolie}
\end{equation}
where $\zeta_e$ is defined in Eq.~(\ref{eq:def_zetae}). Remarkably, this expression is consistent with the fixed node Monte Carlo
results of \cite{LoboGiorgini_g2} if one uses the value of $\zeta_e$ of \cite{CarlsonAFQMC}, see Fig.~\ref{fig:g2}.
\subsection{Finite-range correction in simulations and experiments}
\label{subsec:app_manips}
We recall that, as we have seen in Section~\ref{sec:re}, the finite-range corrections to eigenenergies
are, to leading order, of the form
$(\partial E/\partial r_e)\,r_e$ for continuous-space models
or (\ref{eq:dEdRe}) for lattice models,
where the coefficients $\partial E/\partial r_e$, and $\partial E/\partial R_e$ for lattice models, are model-independent.
This can be used in practice by extracting the values of these coefficients from numerical simulations,
done with some convenient continuous-space or lattice models
(usually a dramatic simplification of the atomic physics reality);
then, knowing the value of $r_e$ in an experiment,
one can compute the finite-range corrections present in the measurements,
assuming that the universality of finite range corrections, derived in section
\ref{sec:re} for compact support potentials, also applies to multichannel $O(1/r^6)$ models.
The value of $r_e$ is predicted in Ref.~\cite{GaoFeshbach} to be
\begin{multline}
r_e = -2\,R_*\,\left( 1-\frac{a_{\rm bg}}{a}\right)^2
\\
+ \frac{4\pi\,b}{3\, \Gamma^2(1/4)}\,
\left[
\left( \frac{\Gamma^2(1/4)}{2\pi} - \frac{b}{a} \right)^2 + \frac{b^2}{a^2}
\right]
\label{eq:re_Gao}
\end{multline}
where $b$ is the van~der~Waals length $b=(m C_6/\hbar^2)^{1/4}$,
$a_{\rm bg}$ is the background scattering length
and $R_*$ is the so-called Feshbach length~\cite{PetrovBosons}.
We recall that the magnetic-field dependence of $a$ close to a Feshbach resonance reads
$a(B) = a_{\rm bg} [ 1 - \Delta B / (B-B_0)]$
where $B_0$ is the resonance location
and $\Delta B$ is the resonance width,
and that
$R_* = \hbar^2 / (m a_{\rm bg} \mu_b \Delta B)$
where $\mu_b$ is the effective magnetic moment of the closed-channel molecule.
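Eq.~(\ref{eq:re_Gao}) is straightforward to evaluate numerically. A minimal sketch follows; note that on resonance ($1/a\to0$) and for a vanishing Feshbach length ($R_*\to0$) the expression reduces algebraically to the pure van der Waals value $r_e = \Gamma^2(1/4)\, b/(3\pi)\simeq 1.3947\, b$:

```python
from math import gamma, pi

def r_e(a, a_bg, R_star, b):
    """Effective range near a Feshbach resonance, Eq. (re_Gao); all lengths
    in the same units, b being the van der Waals length."""
    g2 = gamma(0.25) ** 2
    open_ch = (4 * pi * b / (3 * g2)) * ((g2 / (2 * pi) - b / a) ** 2 + (b / a) ** 2)
    return -2 * R_star * (1 - a_bg / a) ** 2 + open_ch

# Broad resonance (R_star -> 0) exactly on resonance (1/a -> 0):
# r_e reduces to Gamma(1/4)^2 * b / (3*pi) ~ 1.3947 b.
print(r_e(a=1e12, a_bg=0.0, R_star=0.0, b=1.0))   # ~1.3947
```

For a narrow resonance the first, $R_*$-dependent term takes over and drives $r_e$ negative.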
We note that the $a$-dependent terms in the second term of~(\ref{eq:re_Gao})
are $O(b^2)$ and thus do not contribute to the leading-order correction in $b$.
In contrast, the $a$-dependence of the first term of~(\ref{eq:re_Gao}) can be significant
since $a_{\rm bg}$ can be much larger than $b$
(this is indeed the case for $^6{\rm Li}$)~\footnote{The general structure of Eq.~(\ref{eq:re_Gao}) already appeared for a simple separable two-channel model~\cite{WernerTarruellCastin} with exactly the same expression for the first term,
which explains why the $a$-dependence is correctly reproduced by the simple expression of~\cite{WernerTarruellCastin},
as observed in~\cite{NaidonCRAS} by comparison with a coupled-channel calculation,
provided that the separable-potential range in~\cite{WernerTarruellCastin} was adjusted to reproduce the correct value of $r_e$ at resonance.}.
A key assumption of Ref.~\cite{GaoFeshbach} is that the open-channel interaction potential is well approximated by $-C_6/r^6$ down to interatomic distances $r\ll b$.
This assumption is well satisfied for alkali atoms~\cite{GaoFeshbach,Gao}.
Although we have not calculated the off-shell length $\rho_e$ explicitly, we have checked that it is finite for a $-C_6/r^6$ potential
\cite{smoro_en_prepa}.
As an illustration, we estimate the finite-range corrections to the non-polarized unitary gas energy in typical experiments.
Similarly to (\ref{eq:eq_d_etat}), we have the expansion
\begin{equation}
E=E_{\rm ideal}\left(\xi + \zeta_e k_F r_e+\dots\right)
\label{eq:def_zetae}
\end{equation}
where $E$ and $E_{\rm ideal}$ are the ground state energies of the homogeneous Fermi gas (of fixed density $n=k_F^3/(3\pi^2)$)
for $1/a=0$ and $a=0$ respectively.
The value of $\zeta_e$ was estimated both from fixed-node Monte Carlo and Auxiliary Field Quantum Monte Carlo to be $\zeta_e=0.12(3)$
~\cite{CarlsonAFQMC}~\footnote{As discussed around Eq.~(\ref{eq:dEdRe}), one has to take into account not only $r_e$ but also $R_e$ for lattice models,
which was not done in~\cite{CarlsonAFQMC}.}.
The value of $r_e$ as given by Eq.~(\ref{eq:re_Gao}) is $4.7\,{\rm nm}$
for the $B_0\simeq 834\,{\rm G}$ resonance of $^6{\rm Li}$ (in accordance with~\cite{Strinati})
and $6.7\,{\rm nm}$ for the $B_0\simeq 202.1\,{\rm G}$ resonance of $^{40}{\rm K}$.
The typical value of $1/k_F$ is $\simeq 400\,{\rm nm}$ in~\cite{ZwierleinEOS}, while $1/k_F$ at
the trap center is $\simeq 250\,{\rm nm}$ in~\cite{HuletClosedChannel}
and $\simeq 100\,{\rm nm}$ in~\cite{JinPotentialEnergy},
which respectively lead to the following finite-range corrections to the homogeneous gas energy:
\begin{equation}
\frac{\delta E}{E} \simeq 0.4\%, 0.6\%\ \mbox{and}\ 2\%.
\end{equation}
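The three percentages follow from the ratio of the $\zeta_e k_F r_e$ term to the leading $\xi$ term in Eq.~(\ref{eq:def_zetae}). A minimal sketch, where the Bertsch parameter value $\xi\simeq0.376$ is an assumption (it is not specified in this section):

```python
def relative_range_correction(r_e_nm, inv_kF_nm, zeta_e=0.12, xi=0.376):
    """delta E / E ~ zeta_e * kF * r_e / xi, from Eq. (def_zetae)."""
    return zeta_e * (r_e_nm / inv_kF_nm) / xi

# (r_e, 1/kF) in nm for the three experiments quoted in the text:
for r_e_nm, inv_kF_nm in [(4.7, 400.0), (4.7, 250.0), (6.7, 100.0)]:
    print(f"{100 * relative_range_correction(r_e_nm, inv_kF_nm):.1f}%")
# -> 0.4%, 0.6%, 2.1%
```

The third value, $2.1\%$, rounds to the $2\%$ quoted above.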
In the case of lithium, this type of analysis was used in \cite{ZwierleinEOS} to estimate
the resulting experimental uncertainty on $\xi$.
\section{Conclusion} \label{sec:conclusion}
We derived relations between various observables for $N$ spin-1/2 fermions in an external potential
with zero-range or short-range interactions, in continuous space or on a lattice, in two or three dimensions.
Some of our results generalize the ones of
\cite{Olshanii_nk, TanLargeMomentum, TanEnergetics, ZhangLeggettUniv, TanSimple,CombescotC}:
Large-momentum behavior of the momentum distribution,
short-distance behavior of the pair distribution function and of the one-body density matrix, derivative of the energy with respect to the scattering length or to time, norm of the regular part of the wavefunction (defined through the behavior of the wavefunction when two particles approach each other),
and, in the case of finite-range interactions, interaction energy,
are all related to the same quantity $C$;
and
the difference between the total energy and the trapping potential energy is
related to $C$ and to a functional of the momentum distribution (which is also equal to
the second order term in the short-distance expansion of the one-body density matrix).
We also obtained entirely new relations:
The second order derivative of the energy with respect to the inverse scattering length (or to the logarithm of the scattering length in two dimensions) is related to the regular part of the wavefunctions, and is negative at fixed entropy;
and
the derivative of the energy with respect to the effective range $r_e$
of the interaction potential (or to $r_e^2$ in $2D$) is also related to the regular part,
to the subleading short distance behavior of the pair distribution function,
and to the subleading $1/k^6$ tail of the momentum distribution.
We have found unexpected subtleties in the validity condition of the derived expression of this derivative
in $2D$: Our expression for $\partial E/\partial(r_e^2)$ applies because,
for the class of interaction potentials that we have specified, the effective
range squared $r_e^2$ is much larger than the true range squared $b^2$, than the length squared $\rho_e^2$
characterizing the low-energy $s$-wave off-shell $T$-matrix, and than the length squared $R_1^2$
characterizing the low energy $p$-wave scattering amplitude, by logarithmic factors that diverge in the zero-range limit.
In $3D$, for lattice models, our expression for $\partial E/\partial r_e$ applies only
for magic dispersion relations where an extra parameter $R_e$ quantifying the breaking
of Galilean invariance (as predicted in \cite{zhenyaNJP}) vanishes; also, the magic dispersion relation
should not have cusps at the border of the first Brillouin zone otherwise
the so-called Juillet effect compromises the validity of our $\partial E/\partial r_e$ expression for
finite size systems.
We have explicitly constructed such a magic relation, which may be useful
to reduce lattice discretization effects in Quantum Monte Carlo simulations.
We also considered models with a momentum cut-off used in Quantum Monte Carlo calculations,
either in continuous space \cite{zhenyas_crossover} or on a lattice \cite{bulgacQMC,BulgacCrossover,BulgacPG,BulgacPG2}:
Surprisingly, in the infinite cut-off limit,
the breaking of Galilean invariance survives and one does not exactly recover the unitary gas.
Applications of general relations were presented in three dimensions.
For two particles in an isotropic harmonic trap, finite-interaction-range corrections
were obtained, and were found to be universal up to order $r_e^2$ included in $3D$; in particular, this clarifies
analytically the validity of some approximations and self-consistent equations introduced in \cite{Bolda,Naidon,Greene,GaoPiege}
that neglect the effect of the trapping potential within the interaction range.
For the universal states of three particles with an infinite scattering length in an isotropic harmonic trap,
the derivatives of the energy with respect to the inverse scattering length
and with respect to the effective range
were computed analytically and found to agree with available numerics.
For the unitary gas in an isotropic harmonic trap, which has a $SO(2,1)$ dynamical symmetry
and an undamped breathing mode of frequency $2\omega$,
we have determined the relative finite-$1/a$ and finite range energy corrections within each $SO(2,1)$ ladder, which
allows one, in the large-$N$ limit, to obtain the frequency shift and the collapse time of the breathing mode.
For the bulk unitary Fermi gas, existing fixed-node Monte Carlo data were checked to satisfy exact relations.
Also, the finite-interaction-range correction to the unitary gas energy
expected from our results to be (to leading order) model-independent and thus extractable from Quantum Monte Carlo results,
was estimated for typical experiments: This quantifies one of the experimental uncertainties on the Bertsch parameter $\xi$.
The relations obtained here may be used in various other contexts. For example,
the result [Tab.~II, Eqs.~(11a,11b)] on the sign of the second order derivative of $E$ at constant entropy is relevant to adiabatic ramp experiments~\cite{CarrCastin,GrimmCrossover,JinPotentialEnergy,thomas_entropie_PRL,thomas_entropie_JLTP},
and the relation [Tab.~III, Eq.~(8a)] allows a direct computation of $C$ using determinantal diagrammatic Monte Carlo
\cite{Goulko_C} and bold diagrammatic Monte Carlo~\cite{VanHouckePrepa,FelixIntTalk,VanHouckePrepaC}. $C$ is directly related to the closed-channel fraction in a two-channel model \cite{BraatenLong,WernerTarruellCastin}, which allowed to extract it \cite{WernerTarruellCastin} from the experimental photoassociation measurements in \cite{HuletClosedChannel}.
$C$ was measured from the tail of the momentum distribution \cite{JinC}. For the homogeneous gas
$C$ was extracted from measurements of the equation of state \cite{SylEOS}.
$C$ also plays an important role in the theory of radiofrequency spectra \cite{ZwergerRF,BaymRF,ZhangLeggettUniv,StrinatiRF,RanderiaRF,ZwergerRFLong} and in finite-$a$ virial theorems \cite{TanViriel,Braaten,WernerViriel}, as
verified experimentally \cite{JinC}.
$C$ was also extracted from the momentum tail of the static structure factor $S(k)$, which is the Fourier transform
of the spin-independent pair distribution function $\langle \hat{n}(\mathbf{r}) \hat{n}(\mathbf{0})\rangle$
and was measured by Bragg spectroscopy \cite{AustraliensC,AustraliensT}.
In principle one can also measure {\sl via} $S(k)$ the parameter $\zeta_e$ quantifying the finite range correction to the unitary
gas energy, from the relation
\begin{equation}
\frac{\partial E}{\partial r_e}=-\frac{\pi\hbar^2}{m} \int \frac{d^3k}{(2\pi)^3} \left[S(k)-\frac{C}{4k}\right]
\end{equation}
resulting from [Tab.~V, Eq.~(3a)]. This procedure is not hampered by the small value of $k_F r_e$ in present experiments,
contrary to the extraction of $\zeta_e$ from a direct measurement of the gas relative energy correction
$\propto \zeta_e k_F r_e \lesssim 10^{-2}$.
We can think of several generalizations of
the relations presented here.
All relations can be extended to the case of periodic boundary conditions.
The techniques used here can be applied to the
one-dimensional case to generalize the relations of \cite{Olshanii_nk}.
For two-channel or multi-channel models one may derive relations other than the ones of \cite{BraatenLong,WernerTarruellCastin,ZhangLeggettUniv}.
Generalization of the present relations to arbitrary mixtures of atomic species,
and to situations (such as indistinguishable bosons) where the Efimov effect takes place,
was given in \cite{CompanionBosons}.
\acknowledgments
We thank E.~Burovski, J.~Dalibard, B.~Derrida, V.~Efimov, O.~Goulko, R.~Ignat, O.~Juillet, D.~Lee, S.~Nascimb\`ene, M.~Olshanii,
N.~Prokof'ev, B.~Svistunov, S.~Tan, for useful discussions, as well as
S.~Giorgini and J.~von~Stecher for sending numerical data
from~\cite{Giorgini_nk,LoboGiorgini_g2,StecherLong,ThogersenThes}.
The idea of using the range corrections present in Monte Carlo calculations to estimate the range corrections in cold atom experiments was given by W.~Ketterle during the Enrico Fermi School in 2006.
The question of a possible link between $\partial E/\partial r_e$ and the $1/k^6$ subleading
tail of $n_{\sigma}(\mathbf{k})$ was asked by L. Platter during a talk given by F.W. at the INT workshop 10-46W in Seattle.
The work of F.W. at UMass was supported by NSF under Grants No.~PHY-0653183 and No.~PHY-1005543.
Our group at ENS is a member of IFRAF. We acknowledge support from ERC Project
FERLODIM N.228177.
\section*{Note}
[Tab.~II, Eq.~(4b)], as well as [Tab.~II, Eq.~(12b)], were obtained independently by Tan
\cite{TanUnpublished}
using the formalism of~\cite{TanSimple}.
After our preprint \cite{50pages} appeared, some of our $2D$ relations were tested in \cite{Giorgini2D}
and some of them were rederived in \cite{Moelmer}.
\section{Introduction}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{geometry_scheme}
\caption{
Left: 3D nanoparticles thin film of thickness $h$ adhered on a semi-infinite substrate. The bottom view, as seen looking across the substrate, highlights the `patched' interface.
Centre: 3D Pillar model: \textit{effective} NP layer ($q<z<h$); pillars layer ($0<z<q$); semi-infinite substrate ($z<0$).
The NP layer \textit{effective} density and stiffness tensor are $\rho^{NP}$ and $C^{NP}$, respectively. The pillars layer density, $\rho^{bk}$, and Young modulus, $E^{bk}$, are the same as the ones of the material of which the NPs are made (see text). The bottom view, as seen looking across the substrate, highlights the similarity with the `patched' interface of the real case. Right: reduction of the periodic 3D Pillar model to a single 3D unit cell of base size $L\times L$. The pillar layer filling fraction, $\alpha$, is defined as the ratio of the pillar cross-sectional area to that of the unit cell, irrespective of the geometry of the pillar cross section. The image is for illustrative purposes.
}
\label{fig:geometry_scheme}
\end{figure*}
Nanogranular ultrathin films are at the forefront of a wide range of technological applications \cite{stark2015industrial} ranging from nanomedicine \cite{benetti2019tailored}, sensing \cite{villa2019soft,benetti2018photoacoustic,huang2017fast} to electronics \cite{nasiri2016tunable,minnai2017facile,caruso2016high,santaniello2020additive,mirigliano2019non,tarantino2020,mirigliano2021}.
Accessing their mechanical properties, both within the film's bulk and at the interface region in contact with the supporting substrate, is among the most urgent issues in view of any device development.
In this context photoacoustic nanometrology plays a key role.
For instance, the bulk properties of periodic nanogranular thin films have been explored across a variety of configurations \cite{tournat2010acoustics,boechler2017dynamics} ranging from 1D \cite{allein2016tunable}, 2D \cite{hiraiwa2016complex,vega2017vibrational,wallen2015dynamics,rizzi2020exploring,graczykowski2020,ghanem2020,babacic2020} to 3D \cite{abi2019longitudinal,merkel2010dispersion} arrangements.
Recently, the development of table-top UV laser sources allowed generating surface acoustic waves with periodicity in the 10 nm range \cite{siemens2009high}, hence opening the way to mechanical nanometrology \cite{nardi2015impulsively} of periodic granular thin films of thicknesses down to a few nanometers \cite{abad2020nondestructive,frazer2020full}. Photoacoustic investigations of the \textit{bulk} properties of \textit{non-periodic} nanogranular films have also been performed in several contexts over granularities ranging from a few nm \cite{peli2016mechanical,benetti2017bottom,benetti2018photoacoustic}, to hundreds of nm \cite{ayouch2012elasticity,girard2018contact} up to the micron scale \cite{hiraiwa2017acoustic}. As for \textit{interface properties}, photoacoustic investigations mainly focused on \textit{homogeneous} thin films \cite{tas1998picosecond,dehoux2009nanoscale,dehoux2010picosecond,hettich2011modification,ma2015comprehensive,hoogeboom2016nondestructive,hettich2016viscoelastic,grossmann2017characterization,greener2019high,zhang2020unraveling,clemens2020}, nanogranular thin film interfaces remaining relatively unexplored.
The difficulty is to address `patched' interfaces such as the one emerging between an aperiodic granular film and the adhering substrate, disorder being the critical aspect \cite{peli2016mechanical}.
Acoustic attenuation times for such an interface are hard to derive in analytical terms, calling for full 3D Finite Element Method (FEM) simulations and casting the acoustic wave problem at the interface in scattering terms.
These approaches, whenever applicable, do not shed much light on the underlying physics and are hardly implementable to fit photoacoustic data due to computational costs. Furthermore, implementation of full 3D models requires knowledge of the detailed film morphology at the interface which, for the case of aperiodic granular materials, is unknown or very difficult to achieve \cite{benetti2017bottom}.
Therefore, easy-to-adopt mechanical models are necessary to interpret photoacoustic data, retrieving the interface
physical properties and ultimately unveiling the relevant physics ruling the acoustic to structure relation in materials with disordered interfaces.
From a general viewpoint, the situation here addressed is complementary to that of acoustic damping from a single nano-object to its supporting substrate \cite{hartland2011optical,devkota2019making}. For the latter, the experiments are challenging whereas the modelling is rather straightforward, since it relies on a thorough knowledge of the system \cite{maioli2018mechanical,devkota2018measurement}. On the contrary, in the present case the experiments are relatively simple \cite{peli2016mechanical}, whereas the modelling is the delicate and yet unsolved issue. This is ascribable to the disordered, hence intrinsically undetermined, interface.
A 1D mechanical model for nanogranular thin films adhered on a flat substrate is here proposed. The model, addressed as pillar model, is based on a structural interface \cite{bertoldi2007structural}, meaning that a true structure is introduced to mimic the transition region between the NP's film bulk and the underlying substrate.
Extrinsic attenuation, i.e. acoustic radiation to the substrate, is assumed to prevail over intrinsic attenuation, which is not accounted for.
The analytical dispersion relation for the frequencies and lifetimes of the ultrathin film's acoustic breathing modes, i.e. the ones commonly excited in photoacoustic experiments, is obtained in terms of the interface layer physical parameters: interface porosity and layer thickness.
The model is successfully benchmarked both against a full 3D FEM model and against experimental photoacoustic data available from the literature on a paradigmatic model system, in which knowledge of mechanical properties at the interface is a key asset in a variety of applications \cite{benetti2018photoacoustic,torrisi2019ag,benetti2020antimicrobial}. A simpler 1D model, addressed as Effective Medium Approximation model (EMA) and based on an homogenized interface layer, is also provided together with its dispersion relation. Its limits of validity, restricted to small porosities, are discussed in the light of the pillar model. Assuming the granular film to be made of nanoparticles (NPs), the present theoretical scheme is here tested for the case of NP radii smaller than the film thickness and than the excited breathing modes' wavelength.
The pillar model rationalises the acoustic to structure relation in materials affected by disordered interfaces. The physics is here shown to be ruled by the integral of the stresses exchanged across the interfaces rather than their detailed distribution. The pillar model, on one side, furnishes to the experimentalist an experimentally-benchmarked, easy-to-adopt analytical tool to extract the interface layer physical parameters upon fitting of the acoustic data. On the other side, upon previous knowledge of the interfacial layer parameters, the model allows retrieving the breathing modes frequencies and lifetimes of a nanogranular coating adhering on a substrate. All these aspects bear both a fundamental and applicative interest across a wide range of fields ranging from condensed matter, material science to device physics.
\section{The pillar model}
\begin{figure*}
\centering
\begin{minipage}[b]{\columnwidth}
\includegraphics[width=1.03\columnwidth]{Periodo_caso_limite_pillar_0_1}
\end{minipage}
\hfill
\begin{minipage}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{Tempo_deca_caso_limite_pillar_0_1}
\end{minipage}
\caption{(a) period $T_n$ and (b) decay time $\tau_n$ versus $\alpha$ for the Ag nanogranular film (see text) with $q$=12 nm and $h$=50 nm. The first two modes $n=\{0,1\}$ are addressed. Pillar model (full lines) and limit cases (dashed lines) obtained for the free standing (subscript '$fs$') and perfect adhesion (subscript '$pa$') scenarios respectively. The y-axis in the graph of panel (a) is broken for sake of graphical clarity. The scales above and below the brake are different for ease of representation. The $fs$ scenario yields, for mode $n$=0, an infinite period (corresponding to a film translation), hence it is not reported in panel (a). The $fs$ scenario yields an infinite decay time, hence it is not reported in panel (b).}
\label{fig:limit_case_pil_tot}
\end{figure*}
The mechanical response of a nano-particle film resting on an infinitely extended substrate (Fig.~\ref{fig:geometry_scheme}) is here analysed assuming negligible intrinsic acoustic losses. For the sake of the following discussion three layers are defined: the NP film layer ($q<z<h$), the interfacial layer ($0<z<q$) and the semi-infinite substrate layer ($z<0$). The problem is considered one-dimensional, as is the case for photoacoustic measurements on ultrathin films \cite{ogi2011picosecond,peli2016mechanical,grossmann2017characterization}. The only non-zero component of the displacement field, $u_z^\#(z,t)$, satisfies the classic wave equation:
\begin{equation}
\dfrac{\partial^2 u_z^{\#} (z,t)}{\partial t^2} = v_z^{\#^2} \dfrac{\partial^2 u_z^{\#} (z,t)}{\partial z^2} \, ,
\label{eq:wave_equation}
\end{equation}
where $u_z^\# (z,t)$ is the displacement component in the $z$ direction, the hash refers to each layer, and $v_z^{\#}$ is the velocity of the P-wave travelling in such materials. The solution of Eq.(\ref{eq:wave_equation}) can be written as
\begin{equation}
u_z^{\#} (z,t) = U^{\#} (z) T^{\#} (t) \, ,
\label{eq:variable_decomposition}
\end{equation}
with
\begin{align}
U^{\#} (z) &= u_k^{\#} e^{i k^{\#} z} + u_{-k}^{\#} e^{- i k^{\#} z} \, ,
\notag
\\
T^{\#} (t) &= u_\omega^{\#} e^{- i \omega t} \, ,
\label{eq:variable_decomposition_2}
\end{align}
where $i$, $\omega$, and $k^{\#}$ are the imaginary unit, the frequency and the wave vector, respectively.
Substituting Eq.(\ref{eq:variable_decomposition}) and (\ref{eq:variable_decomposition_2}) into Eq.(\ref{eq:wave_equation}) yields the dispersion relation $\omega^2 = v_z^{\#^2} k^{\#^2}$. The first and the second terms of $U^{\#} (z)$ are the regressive and the progressive components of the wave, respectively. The regressive component of the wave in the substrate is neglected since this layer is considered as infinitely extended in the $z$ direction, a fact accounting for the radiative attenuation of the film's breathing mode towards the substrate.
When dealing with granular solids, like the aforementioned nano-particle film (Fig.~\ref{fig:geometry_scheme},left), imposing a ``perfect adhesion'' condition (\textit{pa}) at the film-substrate interface results in a faulty evaluation of their mechanical behaviour.
Perfect adhesion implies perfect geometrical matching and continuity of stress and displacement.
This fault is particularly relevant when addressing the oscillation's damping time, not as much for the oscillation frequency \cite{peli2016mechanical}.
This can be traced back to the fact that granularity makes the perfect contact condition unlikely to be achieved, whereas a `patched interface' would be more appropriate.
To overcome this issue, the pillar model is introduced (Fig.~\ref{fig:geometry_scheme}, centre). The pillar model partitions the nanogranular film of thickness $h$ (Fig.~\ref{fig:geometry_scheme}, left) into three layers (Fig.~\ref{fig:geometry_scheme}, centre). The actual NP film layer, $q<z<h$, is accounted for by introducing an \textit{effective} homogeneous and isotropic thin film layer extending in the same range. The real NP film morphology is granular rather than homogeneous; nevertheless, simulating the real NP film with an homogeneous one allows defining an \textit{effective} density $\rho^{NP}$ and an \textit{effective} stiffness tensor $C^{NP}$. These constants may be retrieved either from experiments \cite{peli2016mechanical} or theory \cite{benetti2017bottom,benetti2018photoacoustic}. The key element in the model is the introduction of a layer of pillars (dashed orange layer in Fig.~\ref{fig:geometry_scheme}, centre), extending in the range $0<z<q$ and intended to mimic the mechanics in the interfacial layer, i.e. at the interface between the actual film and the substrate (dashed orange layer in Fig.~\ref{fig:geometry_scheme}, left). The pillars density, $\rho^{bk}$, and Young modulus, $E^{bk}$, are taken as the ones of the real material of which the NPs are made. The pillar mechanical properties hence differ from those of the effective NP thin film layer. The pillar layer adheres on a semi-infinite substrate, $z<0$.
The velocity, $v_z^{NP}$, of a P-wave in the NP film layer is governed by the coefficient $C_{11}^{NP}$ since transversal contraction is prevented:
\begin{equation}
v_z^{NP} = \sqrt{\dfrac{C_{11}^{NP}}{\rho^{NP}}} \, ,
\label{eq:vlocityNP}
\end{equation}
while the velocity of a P-wave in the pillars is governed by the Young modulus $E^{bk}$ since they are free to expand transversely:
\begin{equation}
v_z^{pil} = \sqrt{\dfrac{E^{bk}}{\rho^{bk}}} \, .
\label{eq:vlocityPil}
\end{equation}
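The physical difference between Eq.~(\ref{eq:vlocityNP}) and Eq.~(\ref{eq:vlocityPil}), namely that the laterally clamped film propagates at the $C_{11}$-governed speed while the laterally free pillars propagate at the $E$-governed one, can be made concrete with bulk-silver numbers. The values below are illustrative assumptions; in the actual model the film layer uses the \textit{effective} NP constants rather than bulk ones:

```python
import math

# Illustrative bulk-silver inputs (assumed, not taken from the text):
E_bulk = 83e9      # Young modulus of Ag, Pa
C11_bulk = 124e9   # C11 elastic constant of Ag, Pa
rho = 10.49e3      # density of Ag, kg/m^3

v_pillar = math.sqrt(E_bulk / rho)   # Eq. (vlocityPil): free lateral expansion
v_plate = math.sqrt(C11_bulk / rho)  # Eq. (vlocityNP): lateral contraction prevented
print(v_pillar, v_plate)             # ~2.8e3 m/s vs ~3.4e3 m/s
```

The same material thus supports two distinct P-wave speeds depending on the lateral boundary conditions, which is precisely what the pillar layer exploits.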
For the pillar model, the boundary conditions are the following:
\begin{enumerate}
\item free standing at the top of the effective NP-layer ($z=h$):
\begin{equation}
C_{11}^{NP} \dfrac{\partial u_z^{NP} \left(h,t\right)}{\partial z} = 0 \, ,
\label{eq:BCP1}
\end{equation}
\item force equilibrium at the interface between the effective NP-layer and the pillars layer ($z=q$):
\begin{equation}
F^{NP}\left(q,t\right) = F^{pil}\left(q,t\right) \, ,
\label{eq:BCP2}
\end{equation}
\item continuity of the displacement at the interface between the effective NP-layer and the pillars layer:
\begin{equation}
u_z^{NP}\left(q,t\right) = u_z^{pil}\left(q,t\right) \, ,
\label{eq:BCP3}
\end{equation}
\item force equilibrium at the interface between the pillars layer and the sapphire substrate ($z=0$):
\begin{equation}
F^{pil}\left(0,t\right) = F^{sub}\left(0,t\right) \, ,
\label{eq:BCP4}
\end{equation}
\item continuity of the displacement at the interface between the pillars layer and the sapphire substrate:
\begin{equation}
u_z^{pil}\left(0,t\right) = u_z^{sub}\left(0,t\right) \, ,
\label{eq:BCP5}
\end{equation}
\end{enumerate}
It is pinpointed that the continuity of the stresses at the interfaces between the pillars and the two continuous layers is replaced with the balance of their resultant forces, $F$, as can be appreciated in Eq.(\ref{eq:BCP2}) and Eq.(\ref{eq:BCP4}). This is a key point of the model.\\
Eq.(\ref{eq:BCP2}) and Eq.(\ref{eq:BCP4}) reduce to
\begin{align}
C_{11}^{NP} \dfrac{\partial u_z^{NP} \left(q,t\right)}{\partial z} &= \alpha E^{bk} \dfrac{\partial u_z^{pil} \left(q,t\right)}{\partial z} \, ,
\notag
\\
\alpha E^{bk} \dfrac{\partial u_z^{pil} \left(0,t\right)}{\partial z} &= C_{11}^{sub} \dfrac{\partial u_z^{sub} \left(0,t\right)}{\partial z} \, ,
\label{eq:BCP_forces}
\end{align}
respectively, where $\alpha$ is the contact ratio between the areas of the two homogeneous layers (substrate and effective NP film) and that of the pillars (see Fig.~\ref{fig:geometry_scheme}, right), $C_{11}^{sub}$ and $C_{11}^{NP}$ the substrate's and the effective NPs film relevant stiffness tensor's component, respectively.
The model is therefore reduced to 1D.
It is noteworthy that, despite the fact that the pillars in Fig.~\ref{fig:geometry_scheme} are represented with a circular cross-section, the definition of the parameter $\alpha$ and the structure of the model do not change if the shape of such cross-section is chosen to be different, for instance square-shaped rather than circular. Furthermore, the analytical model does not depend on the position of the pillar with respect to the unit cell, this despite the fact that the pillars in Fig.~\ref{fig:geometry_scheme} are shown at its center. These two aspects are crucial for a model intended to correctly rationalize a disordered interface, where the number of possible NP arrangements at the interface, i.e. the number of micro-states or configurations in statistical mechanics terms, is infinite. In photoacoustic experiments, for instance, where both the excitation and probing laser beams are much wider than the NP dimensions, a huge number of possible unit-cell configurations are probed all together within a single measurement. The acoustic problem is therefore not affected by the specific global interface configuration, hence, for the pillar model to correctly capture the physics, it must not depend on the specific pillar cross-sectional geometry or position within the unit cell.
\\
Enforcing the boundary conditions Eqs.(\ref{eq:BCP1})-(\ref{eq:BCP2})-(\ref{eq:BCP3})-(\ref{eq:BCP4})-(\ref{eq:BCP5}) in Eqs.(\ref{eq:variable_decomposition}) yields the following equation in the complex-valued unknown $\omega (q,\alpha)$:
\begin{widetext}
\begin{equation}
Z^{NP} -
\dfrac{\alpha E^{bk} \cot \left(\dfrac{(h-q)\omega
}{v_{z}^{NP}}\right) \left[v_{z}^{pil}
Z^{sub} \cos \left(\dfrac{q~\omega
}{v_{z}^{pil}}\right)-i \alpha E^{bk} \sin
\left(\dfrac{q~\omega
}{v_{z}^{pil}}\right)\right]}{v_{z}^{pil}
\left[v_{z}^{pil} Z^{sub} \sin \left(\dfrac{q~
\omega }{v_{z}^{pil}}\right)+i \alpha E^{bk}
\cos \left(\dfrac{q~\omega
}{v_{z}^{pil}}\right)\right]}
= 0 \, ,
\label{eq:impli_solu_pil}
\end{equation}
\end{widetext}
where $Z^{NP}$ and $Z^{sub}$ are the acoustic impedances of the effective NP layer and the substrate, respectively. Eq.(\ref{eq:impli_solu_pil}) may be solved numerically and yields, for each fixed set of parameters $(q,\alpha)$, infinitely many solutions $\omega = \omega_n (q,\alpha)$, with $n$=$\{$0,1,2,...$\}$ the index numbering the mode.
Once the total thickness $h$ of the actual film is assigned, the free parameters in Eq.(\ref{eq:impli_solu_pil}) are the height of the pillars, $q$, and the contact ratio between the two homogeneous layers and the pillars, $\alpha$. The relations linking the period of vibration, $T_n(q,\alpha)$, and the wave decay time, $\tau_n(q,\alpha)$, to the $n$-mode complex-valued angular frequency are:
\begin{align}
T_n(q,\alpha) = \dfrac{2 \pi}{Re(\omega_n(q,\alpha))} \, ,
\notag
\\
\tau_n(q,\alpha) = \dfrac{1}{Im(\omega_n(q,\alpha))} \, .
\label{eq:relation_period_decay}
\end{align}
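For illustration, Eq.(\ref{eq:impli_solu_pil}) can be solved numerically by scanning the complex-$\omega$ plane for minima of the residual and then applying Eqs.(\ref{eq:relation_period_decay}). The Python sketch below uses placeholder material parameters: all numbers are assumptions for demonstration, not the fitted constants of the samples discussed later.

```python
import numpy as np

def dispersion_residual(omega, q, h, alpha, E_bk, v_np, v_pil, Z_np, Z_sub):
    """Left-hand side of the pillar-model dispersion relation, Eq. (impli_solu_pil)."""
    a = alpha * E_bk
    cot = lambda x: np.cos(x) / np.sin(x)
    num = a * cot((h - q) * omega / v_np) * (
        v_pil * Z_sub * np.cos(q * omega / v_pil)
        - 1j * a * np.sin(q * omega / v_pil))
    den = v_pil * (v_pil * Z_sub * np.sin(q * omega / v_pil)
                   + 1j * a * np.cos(q * omega / v_pil))
    return Z_np - num / den

# hypothetical parameters (SI units) -- placeholders, not the fitted values
pars = dict(q=12e-9, h=50e-9, alpha=0.68,
            E_bk=83e9,                  # assumed bulk Ag Young modulus, Pa
            v_np=2000.0, v_pil=3600.0,  # assumed P-wave speeds, m/s
            Z_np=1.7e7, Z_sub=4.4e7)    # assumed acoustic impedances

# brute-force search: scan a complex-omega grid around the 'pa' estimate for n=1
w0 = 2 * np.pi * pars['v_np'] * 3 / (4 * pars['h'])
re = np.linspace(0.5 * w0, 1.5 * w0, 400)
im = np.linspace(1e8, 2e11, 200)
W = re[None, :] + 1j * im[:, None]
R = np.abs(dispersion_residual(W, **pars))
i, j = np.unravel_index(np.argmin(R), R.shape)
w_root = W[i, j]
T1 = 2 * np.pi / w_root.real   # oscillation period, Eq. (relation_period_decay)
tau1 = 1.0 / w_root.imag       # decay time
```

In practice the crude grid minimum would be refined with a complex root finder; the sketch only shows how period and lifetime follow from the complex root.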
The intuitive idea underlying the pillar model is the possibility of reducing the full 3D acoustic scattering problem, involving a disordered interface, to a more amenable 1D one. This approximation is meaningful provided the detailed distribution of stresses across the interface does not affect the solution in terms of quasi-mode period and lifetime. As a matter of fact, the 1D model retains information on the integral of the stresses exchanged across the interfaces rather than on their detailed distribution. This key point finds its microscopic justification in the fact that the acoustic problem is not affected by the specific interface configuration, as addressed earlier.
The pillar model is more advanced than spring-based interface models, which are commonly exploited to mimic imperfect interfaces; see for instance the seminal work of Ref.~\cite{bigoni2002statics}. In the present case, the pillar has rigidity $\alpha E^{bk}L^{2}/q$, which, contrary to the spring rigidity, arises from the specific geometrical and physical characteristics of the interface. Furthermore, the pillars correctly account for inertia, the mass being distributed rather than concentrated, as is instead the case for mass-spring interface models and the like.
\subsection{The Pillar model: Case Study}
\begin{figure*}
\centering
\begin{minipage}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{para_study_q}
\end{minipage}
\hfill
\begin{minipage}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{para_study_alpha}
\end{minipage}
\caption{$T_1(h;q,\alpha)$ and $\tau_1(h;q,\alpha)$ versus $h$ for $n=1$ for the Ag nanogranular film:
(a) fixed $\alpha = 0.68$ while varying $q$ (expressed in nm);
(b) fixed $q = 12$ nm for a limited span of $\alpha$ values centred around the best fitting value $\alpha = 0.68$.
The plots of $T_1(h;q,\alpha)$ (dashed lines, orange colour range) graphically overlap; not so for $\tau_1(h;q,\alpha)$ (continuous lines, blue colour range).}
\label{fig:parametric_study_pil_tot}
\end{figure*}
The pillar model is here exemplified for the case of a real granular thin film \cite{peli2016mechanical} made of pure Ag NPs $\sim$6 nm in diameter, with total film thickness $h$=50 nm and filling factor 0.8, adhered on a sapphire substrate, a (0001) $\alpha$-Al$_{2}$O$_{3}$ single crystal of acoustic impedance $Z^{sub}$. Acoustic damping was shown to be due to extrinsic losses, a condition that must be met in order for Eq.(\ref{eq:impli_solu_pil}) to be applicable. The NP film is well mimicked by a homogeneous effective film of known mechanical properties: $v_{z}^{NP}$, $\rho^{NP}$ and $Z^{NP}$. The concept of NP film is meaningful beyond the first two deposited layers of NPs, leading to an interface layer of $\sim$12 nm, as detailed in Ref.~\cite{benetti2017bottom}. A value of $q$=12 nm is therefore assumed for the pillars, which are made of pure Ag of density $\rho^{bk}$ and Young modulus $E^{bk}$ and sustain P-waves of sound velocity $v_{z}^{bk}$. The pillar layer filling factor $\alpha$ is here left as the sole free parameter, Eq.(\ref{eq:impli_solu_pil}) thus linking the complex-valued unknown $\omega$ to $\alpha$. The values of the relevant mechanical properties for this system are reported in Table \ref{tab:para_peli}.
The oscillation period $T_{n}$ and lifetime $\tau_{n}$ for the first two modes of the pillar model, $n$=$\{0,1\}$, are reported versus $\alpha$ as full lines in Fig.~\ref{fig:limit_case_pil_tot}, panels (a) and (b), respectively. For $\alpha=0.8$ the density of the pillar layer matches the density of the NP layer, the latter being 0.8 that of bulk Ag. Densification of the interface layer with respect to the NP film's bulk was ruled out for the present scenario \cite{benetti2017bottom}; the maximum value of $\alpha$ is hence here constrained to 0.8. A comment is here due: for the case of cylindrical pillars, a value of $\alpha$$>$$\pi/4\approx0.78$ implies interpenetration of neighbouring pillars. This fact does not constitute a problem though since, as previously discussed, the model is independent of the pillar's cross-sectional geometry. For instance, for a pillar of square cross-section, interpenetration is prevented for any value of $\alpha$$<$1.
For the pillar model, the period and lifetime of a given mode $n$=$\{0,1,2,...\}$ (with $n$=0 meaning $n\to0$) are correctly bounded between those of a ``free standing'' ($\textit{fs}$) NP film of thickness $h-q$:
\begin{equation}
\begin{cases}
T_{n,fs}(\alpha) = \dfrac{2 (h-q)}{v_{z}^{NP}}\dfrac{1}{n} \, , & n=\{0,1,2,...\}
\\
\tau_{n,fs}(\alpha) = \infty \, , & \forall n
\end{cases}
\label{eq:free_T_tau}
\end{equation}
and those of the ``perfect adhesion'' ($\textit{pa}$) model:
\begin{equation}
\quad
\begin{cases}
T_{n,pa}(\alpha) = \dfrac{4 h}{v_{z}^{NP}}\dfrac{1}{\left(1+2n\right)} \, ,
\\
\tau_{n,pa}(\alpha) = \dfrac{2 h}{v_{z}^{NP}}\left|\ln\left(\left|\dfrac{Z^{sub}-Z^{NP}}{Z^{sub}+Z^{NP}}\right|\right)\right|^{-1} \, .
\end{cases}
\label{eq:perf_T_tau}
\end{equation}
Note that $\tau_{n,pa}(\alpha)$ is actually mode independent.
Indeed, as $\alpha$ approaches zero so does the pillars' cross-sectional area, and the pillar model converges to that of a free-standing NP film of thickness $h-q$. On the contrary, as $\alpha$ approaches one, and assuming a square cross-section for the pillars, the situation converges to that of a perfectly adhering film (continuity of the displacement and of the normal stress component at the interface) of thickness $h$.
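The two limiting cases, Eqs.(\ref{eq:free_T_tau})-(\ref{eq:perf_T_tau}), can be evaluated directly. The sketch below assumes plausible values for the NP-film sound speed, densities and impedances of the Ag film/sapphire system; they are illustrative choices, not the original dataset.

```python
import math

# assumed values, representative of the Ag film / sapphire system
h, q = 50e-9, 12e-9              # film thickness and pillar height, m
v_np = 2815.0                    # assumed NP-film P-wave speed, m/s
Z_np = 0.8 * 10490.0 * v_np      # NP-film impedance (density 0.8 x bulk Ag)
Z_sub = 3980.0 * 11100.0         # assumed sapphire impedance

n = 1
T_fs = 2 * (h - q) / (v_np * n)              # Eq. (free_T_tau)
T_pa = 4 * h / (v_np * (1 + 2 * n))          # Eq. (perf_T_tau)
r = abs((Z_sub - Z_np) / (Z_sub + Z_np))
tau_pa = (2 * h / v_np) / abs(math.log(r))   # mode independent

print(T_fs * 1e12, T_pa * 1e12, tau_pa * 1e12)   # ~27.0, ~23.7, ~29.8 ps
```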
Specifically, for $n$=0, $T_0$ diverges (meaning a film rigid shift) as $\alpha$ approaches zero, as expected for the $\textit{fs}$ film, and is 70 ps for $\alpha$=0.8, that is converging to the period of the fundamental mode, $T_{0,pa}$, for the $\textit{pa}$ case, see Fig.~\ref{fig:limit_case_pil_tot}(a). On the same footing, the mode lifetime $\tau_0$ diverges upon approaching the $\textit{fs}$ limit, whereas it approaches the lifetime of the $\textit{pa}$ film, $\tau_{pa}\sim$30 ps, for $\alpha$=0.8, regardless of the specific mode, see Fig.~\ref{fig:limit_case_pil_tot}(b). For $n$=1, $T_1$ evolves from $T_{1,fs}$=27 ps, for $\alpha$=0, to close to $T_{1,pa}$=23 ps, for $\alpha$=0.8.
The small gap between $T_{1}$ and $T_{1,pa}$ is due to the fact that wave propagation is governed by $E^{bk}$ in the pillars layer and by $C^{NP}_{11}$ in the NP film, see Fig.~\ref{fig:limit_case_pil_tot}(a).
As for the lifetime, $\tau_{1}$ qualitatively behaves as $\tau_{0}$ with respect to the $\textit{fs}$ and $\textit{pa}$ cases.
Interestingly, $\tau_{1}>\tau_{0}$ over the entire range of $\alpha$ values, see Fig.~\ref{fig:limit_case_pil_tot}(b).
The present discussion clearly demonstrates that the mode lifetime, rather than its oscillation period, is mostly sensitive to the interface morphology.
For instance, with reference to $n$=1, varying $\alpha$ so as to evolve from the $\textit{fs}$ to the $\textit{pa}$ film, the relative variation in the oscillation period is $\Delta T_{1}/T_{1}$$\sim$17$\%$ whereas the relative variation in lifetime $\Delta \tau_{1}/\tau_{1}$ is infinite.
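The quoted relative variation follows directly from the rounded periods given above; a one-line check:

```python
# rounded values quoted in the text for mode n=1, in ps
T_fs_ps, T_pa_ps = 27.0, 23.0
rel = (T_fs_ps - T_pa_ps) / T_pa_ps
print(f"Delta T1 / T1 ~ {100 * rel:.0f}%")   # prints 17%
```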
This also explains why, in previous photo-acoustics experiments performed on granular thin films, the $\textit{pa}$ model was able to correctly address, within the error bar, the breathing mode oscillation period but failed in reproducing the lifetime \cite{peli2016mechanical}. Furthermore, it shows that the pillar model behaves correctly reproducing the $\textit{fs}$ and $\textit{pa}$ cases.
\subsection{The Pillar model: Parametric Study}
Typically, when undertaking an acoustic or photoacoustic investigation of the mechanical properties of ultra-thin films, one measures the breathing mode period and lifetime of a specific mode, $n$, over varying film's thicknesses, $h$.
The interface layer morphology, accounted for by the interface layer filling factor, $\alpha$, and its thickness, $q$, may therefore be retrieved from fitting of the experimental data exploiting the pillar model.
It is therefore important to undertake a parametric study to inspect how the parameters $\alpha$ and $q$ affect $T_{n}(h)$ and $\tau_{n}(h)$.
The calculations are here performed assuming the mechanical properties of the granular NP film addressed above.
For sake of exemplification, we here focus on mode $n=1$, which was the best characterised mode in previous experimental work.
$T_1(h;q,\alpha)$ and $\tau_1(h;q,\alpha)$ are reported versus the total thickness $h$ of the NP-layer for a fixed value of $\alpha = 0.68$ (the value that gives optimal fitting of the photoacoustic data) while varying the parameter $q$ across the set of values $\{6,8,10, 12\}$ nm (Fig.~\ref{fig:parametric_study_pil_tot}(a)) and, vice versa, fixing a value of $q=12$ nm (the value that gives optimal fitting of the photoacoustic data) while varying $\alpha$ across the set of values $\{0.60,0.65,0.70, 0.75\}$ (Fig.~\ref{fig:parametric_study_pil_tot}(b)).
This set of values has been chosen around the best fitting value $\alpha$=0.68, arising from fitting the experimental data pertaining to the sample here addressed, as detailed further on. \textit{Within this parameter range}, and with reference to $\tau_1(h;q,\alpha)$, the two parameters act rather independently: $q$ and $\alpha$ govern the position of the inflection point (see Fig.~\ref{fig:parametric_study_pil_tot}(a)) and the tangent at that very point (see Fig.~\ref{fig:parametric_study_pil_tot}(b)), respectively. Indeed, for a fixed $\alpha$, the inflection point moves towards higher $h$ values as $q$ increases, whereas, for a fixed $q$, as $\alpha$ approaches unity, the tangent's slope decreases, attaining an asymptotic value concomitantly with the curvature reaching zero. Far enough from the inflection point, $\tau_1(h;\alpha)$ is rather linear in $h$ (see Fig.~\ref{fig:parametric_study_pil_tot}(b)). As for the periods $T_1(h;q,\alpha)$, the differences are not appreciable throughout the presently explored range, see the dashed orange lines in Fig.~\ref{fig:parametric_study_pil_tot}.
Solutions obtained over a wider $\alpha$ and $h$ span are reported in Fig.~\ref{fig:parametric_study_pil_extended} for the same value of $q$=12 nm. Two features clearly arise. First, when extending the analysis to include smaller $\alpha$ values, i.e. slender pillars, a resonance in $\tau_1(h;q,\alpha)$ clearly emerges and grows more pronounced as the pillar gets slender; see the decay-time curves for $\alpha$ values of 0.4, 0.25 and 0.2.
This fact may be intuitively rationalized considering that, as the pillar gets slender, the situation approaches that of a free-standing film. Formally, the pillar stiffness decreases proportionally to its cross-section (which scales with $\alpha$), resulting in a monotonic reduction of the mechanical wave propagation speed, ultimately extending the quasi-mode lifetime.
These resonances stand out also in the mode's $Q$ factor, a feature recently observed also in the context of a single nanodisk adhered on a substrate \cite{medeghini2018controlling}. Secondly, for large enough values of $h$, that is, once the pillar length becomes negligible with respect to the total thickness of the nanoparticle film, $\tau_1(h;q,\alpha)$ scales rather linearly with $h$. In this $h$ range also the minute differences in the periods $T_1(h;q,\alpha)$ for different $\alpha$ values can be appreciated, see Fig.~\ref{fig:parametric_study_pil_extended}, orange-blend curves.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{para_h_450.pdf}
\caption{$T_1(h;q,\alpha)$ and $\tau_1(h;q,\alpha)$ versus an extended $h$ range for $n=1$ for the Ag nanogranular film: $q = 12$ nm and $\alpha$ values over an extended span.}
\label{fig:parametric_study_pil_extended}
\end{figure}
\subsection{Pillar model benchmarking: fitting photoacoustic data}
The pillar model is now deployed to fit photoacoustic data acquired on nanogranular films of different thicknesses \cite{peli2016mechanical}.
The samples are the same as those addressed in the case study and constitute an ideal system for benchmarking purposes.
The peculiarities of the deposition method \cite{wegner2006cluster} make it possible to obtain solvent-free and ultra-pure nanoporous films, avoiding the synthesis-related complications involved in other methods.
Furthermore, these films have been fully characterised in terms of compositional, structural, morphological and mechanical properties.
On a general basis, the interface layer properties are the ones that prove hardest to access.
Whereas the NP film layer filling factor may be retrieved employing a variety of techniques, such as X-ray reflectivity \cite{peli2016mechanical}, environmental ellipsometric porosimetry \cite{bisio2009optical} and combining the Brunauer-Emmett-Teller method (BET) with Atomic Force Microscopy (AFM) \cite{borghi2019quantitative}, the interface layer filling factor $\alpha$ and thickness $q$ escape direct inspection.
Only recently were the latter quantities operatively defined and estimated, via a combined Transmission Electron Microscopy (TEM) and Molecular Dynamics (MD) investigation performed on the samples here addressed \cite{benetti2017bottom}. Specifically, the interface layer thickness is defined as the minimal film thickness beyond which the slice filling factors, calculated for thicker films, overlap, as addressed in full detail in \cite{benetti2017bottom}.
The pillar model is benchmarked by taking $q$ and $\alpha$ as fitting parameters and maximising the likelihood between the $h$-dependent functions $T_1(h;q,\alpha)$ and $\tau_1(h;q,\alpha)$ and the experimental values, $T_{1,exp}(h)$ and $\tau_{1,exp}(h)$, reported in \cite{peli2016mechanical}. Results are reported in Fig.~\ref{fig:optimal_solution_pillar} for the best fit values of $q=12$ nm and $\alpha=0.68$ (continuous lines) together with the experimental data (markers). Fitting eight data points with two free parameters may not be ideal; nevertheless, the best fit parameters are fully consistent with the values that have been retrieved by other means: $q$=12 nm and $\alpha$$\sim$0.7 for the interface layer \cite{benetti2017bottom}. This is to say that, in the fitting procedure, one could have taken $\alpha$ as the sole fitting parameter, or even fixed all the parameters from previous knowledge, still landing on the experimental data with the theoretical curves calculated adopting the pillar model. The value $\tau_{1}$($h$=15 nm) falls at the edge of the error bar of $\tau_{1,exp}$($h$=15 nm): for $h$=15 nm the effective NP layer is only 3 nm thick, approaching the limit where only an interface layer exists and the concept of a film becomes questionable. Summarising, the pillar model allows rationalizing the experimental data, the best fitting parameters being fully consistent with the values expected from previous knowledge.\\
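The structure of such a fitting procedure can be sketched as below. Since evaluating the true model requires the numerical solution of Eq.(\ref{eq:impli_solu_pil}) at each trial $(q,\alpha)$, an entirely hypothetical smooth surrogate stands in for the pillar model here; only the least-squares machinery is the point of the example.

```python
import numpy as np
from scipy.optimize import least_squares

def surrogate(h, q_nm, a):
    """Toy stand-in for the pillar model's T(h;q,alpha) and tau(h;q,alpha).

    Entirely hypothetical closed form; the real procedure evaluates
    Eq. (impli_solu_pil) numerically at each trial (q, alpha).
    """
    q = q_nm * 1e-9
    v = 2815.0                               # assumed NP-film sound speed, m/s
    T = 4 * h / (3 * v) * (1 - 0.05 * a) + 0.3 * q / v
    tau = (2 * h / v) / (a + 0.1) + 5 * q / v
    return T, tau

# synthetic 'experimental' data generated at a known ground truth
h_exp = np.array([15, 20, 30, 40, 50, 60, 70, 80]) * 1e-9
q_true, a_true = 12.0, 0.68                  # q in nm
T_exp, tau_exp = surrogate(h_exp, q_true, a_true)

def residuals(p):
    T, tau = surrogate(h_exp, *p)
    # relative residuals put periods and lifetimes on the same footing
    return np.concatenate([(T - T_exp) / T_exp, (tau - tau_exp) / tau_exp])

fit = least_squares(residuals, x0=[8.0, 0.5],
                    bounds=([1.0, 0.05], [20.0, 0.8]))
q_fit, a_fit = fit.x
```

With noiseless synthetic data the optimiser recovers the ground-truth $(q,\alpha)$; with real data the same residual structure weighs periods and lifetimes on an equal relative footing.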
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{best_alpha_e_q}
\caption{Pillar model's best fit solution for mode $n$=1 for the Ag nanogranular film: $T_1(h;q,\alpha)$ (continuous orange line) and $\tau_1(h;q,\alpha)$ (continuous blue line) vs $h$ plotted for the best fit parameters $q=12$ nm and $\alpha=0.68$. The fitting is performed against the experimental data from \cite{peli2016mechanical}: $T_{1,exp}(h)$ (light orange dots) and $\tau_{1,exp}(h)$ (light blue dots). The error bars on the measured oscillation periods, although present, are too small to be appreciated.}
\label{fig:optimal_solution_pillar}
\end{figure}
\subsection{Pillar model benchmarking: 3D pillar model solved by FEM}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figure4_final}
\caption{Ag nanogranular film (see text).
(a) Simulation domain and displacement field $\textbf{u}_{n,h}(\textbf{r},t)$ (arrows) and modulus (colormap) at increasing times for $n$=1, $h$=40 nm and $q$= 12 nm. The displacement $\textbf{u}_{1,40}(\textbf{r},t=0)$ is constructed to match, for $z\ge0$, the film's eigenmode $n$=1.
(b) Normalized projection coefficient $P_{1,40}$ vs time for the case represented in panel (a) (full red dots); its fit with a damped oscillation of period $T$ and decay time $\tau$ (blue line).
(c) Periods and decay times vs film thickness: FEM simulations (diamonds), pillar model (solid lines) and experimental data from \cite{peli2016mechanical}(dots).
}
\label{fig:fem_simu}
\end{figure*}
We now compare the analytical 1D pillar model, addressed so far as the pillar model for brevity, against FEM simulations performed on the 3D pillar model.
The scope is twofold.
A first question is whether the reduction from a full 3D pillar model (see Fig.~\ref{fig:geometry_scheme}, centre), where acoustic wave scattering is accounted for, to the 1D pillar model expressed by Eq.(\ref{eq:impli_solu_pil}), which does not account for scattering, is justified for the case of low-$n$ modes.
Furthermore, the 3D model accounts for the distribution of stresses across the interfaces whereas the pillar model retains information on the integral of the stresses only.
Comparing results obtained from the pillar model against those of 3D FEM simulations would enable confirming the soundness of these approximations.
Secondly, although the pillar model benchmarked remarkably well against existing experimental data, the question stands whether the model remains effective across a wider range of interface layer filling factors (while keeping the NP film layer mechanical properties unaltered), a situation for which we lack experimental data.
In this sense, comparing against FEM simulations constitutes a valid alternative.
We then proceed as follows.
As a validation step, we first implement FEM simulations on the 3D pillar model, mimicking the situations for which experimental data are available.
That is, we excite a specific film breathing mode, $n$=1, and subsequently simulate its temporal evolution throughout the sample, now comprising the substrate as well, thus accessing the quasi-eigenmode oscillation period, lifetime and quality factor.
As a matter of fact, once the substrate is accounted for, the film breathing mode becomes a quasi-mode radiating acoustic energy into the substrate.
The results will be benchmarked against both those of the pillar model and the experimental ones.
We then run similar simulations varying the pillar layer filling fraction, $\alpha$, for a fixed film thickness, $h$=50 nm, and compare the results against the values obtained from the pillar model.
To this end we first consider the 3D pillar model (see Fig.~\ref{fig:geometry_scheme}, right) mimicking the samples on which experiments were performed and for which $q$=12 nm and $\alpha$=0.68 were obtained.
\noindent
{\textit{Geometry.}} The 3D unit cell geometry, reported in Fig.~\ref{fig:fem_simu}(a) and in right panel of Fig.~\ref{fig:geometry_scheme},
is composed of three domains and has base dimensions $L\times L$.
Domain \textit{`sub'} (-5 $\mu$m$<z<$0) consists of a 5 $\mu$m-thick sapphire substrate.
This value has been chosen long enough so as to avoid any wave front reflection from the bottom of sapphire within the time span of the simulated dynamics.
For the sake of visualization only a small part of it is shown.
Domain \textit{`pil'} (0$<z<$q) consists of a pure Ag cylindrical pillar of height $q$ and radius $r_{pil}=L \sqrt{\alpha/\pi}$.
We take $r_{pil}$=3.2 nm, consistent with the radius of the NPs composing the experimentally investigated films, thus resulting in $L$=7 nm.
Domain \textit{`NP'} (q$<z<$h) consists of the \textit{effective} NP layer of thickness $h-q$.\\
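The quoted cell size follows from inverting $r_{pil}=L\sqrt{\alpha/\pi}$; a quick numerical check with the numbers given above:

```python
import math

alpha = 0.68                             # pillar layer filling factor
r_pil = 3.2e-9                           # pillar radius, m (NP radius of the film)
L = r_pil * math.sqrt(math.pi / alpha)   # inverts r_pil = L*sqrt(alpha/pi)
print(L * 1e9)                           # ~6.9, i.e. L = 7 nm after rounding
```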
\begin{figure*}
\centering
\hspace{-0.5cm}
\begin{minipage}[b]{0.68\columnwidth}
\includegraphics[height=3.5cm]{periodo_n=0}
\end{minipage}
\hfill
\begin{minipage}[b]{0.68\columnwidth}
\includegraphics[height=3.5cm]{decadimento_n=0}
\end{minipage}
\hfill
\begin{minipage}[b]{0.68\columnwidth}
\includegraphics[height=3.5cm]{quality_n=0}
\end{minipage}
\\
\vspace{0.5cm}
\hspace{-0.4cm}
\begin{minipage}[b]{0.68\columnwidth}
\includegraphics[height=3.5cm]{periodo_n=1}
\end{minipage}
\hfill
\begin{minipage}[b]{0.68\columnwidth}
\includegraphics[height=3.5cm]{decadimento_n=1}
\end{minipage}
\hfill
\begin{minipage}[b]{0.68\columnwidth}
\includegraphics[height=3.5cm]{quality_n=1}
\end{minipage}
\caption{Ag nanogranular film.
Comparison between the oscillation period, decay time and quality factor obtained from the FEM solution of the 3D pillar model (filled diamonds) and from the pillar model (full lines) vs the pillar layer filling factor $\alpha$, for a film thickness $h$=50 nm. (a) period, (b) decay time and (c) quality factor for $n$=0. (d) period, (e) decay time (log-lin scale) and (f) quality factor (log-lin scale) for $n$=1. The insets show the relative difference between the quantities calculated in the two models.
}
\label{fig:per_deca_quality_n_1}
\end{figure*}
{\textit{Materials properties.}} As for the domains' mechanical properties, the densities and elastic constants of sapphire and of polycrystalline Ag are taken for the substrate and for the pillar, respectively, whereas the \textit{effective} NP layer is attributed the density $\rho^{NP}$ and the elastic tensor components $c_{11}$=6.96$\times$10$^{10}$ Pa, taken from \cite{peli2016mechanical}, and $c_{44}$=1.86$\times$10$^{10}$ Pa, calculated from the Budiansky homogenization formulas \cite{budiansky1965elastic} for a volumetric filling factor of 0.8.
The $c_{44}$ value is actually of no relevance since, given the problem's symmetry to be discussed shortly, the solution is independent of the choice of $c_{44}$, a fact that we numerically tested.
The adopted values for the above-mentioned quantities are reported in Table \ref{tab:para_peli}.\\
{\textit{Boundary conditions.}}
A zero-displacement boundary condition is enforced at the \textit{`sub'} bottom surface.
The \textit{`NP'} top surface is taken stress-free.
At the portion of the bottom surface of \textit{`NP'} not in contact with the pillar ($z$=$q^{+}$ and $\sqrt{x^2+y^2}>r_{pil}$), a stress-free boundary condition is enforced together with the constraint that the $z$-component of the displacement ($w$) must be spatially constant along the $x$-$y$ plane (rigid connector). Actually, the rigid connector condition does not affect the result but slightly improves the computation time.
The displacement field component normal to the lateral boundaries of \textit{`sub'} and \textit{`NP'} is fixed to zero due to the system periodicity and the experimental excitation symmetry.
The pillar's wall is constrained to move in the vertical (i.e. the direction normal to the substrate) and radial direction only, so as to impede pillar torsion.
These boundary conditions have been chosen so as to be consistent with the pillar model.
Furthermore, the boundary conditions in both models, together with the irrelevance of the choice of $c_{44}$ in the domain \textit{`NP'}, are consistent with the displacement and stress field symmetry triggered by an excitation mechanism such as that of a laser pulse, of waist much greater than the overall film thickness $h$, impinging at normal incidence on the film.\\
{\textit{Film's quasi-mode period, life time and Q-factor.}} We first calculate the set of eigenmodes $\{\tilde{\textbf{u}}_{n,h}\left(\textbf{r}\right)\}$ solutions of the acoustic eigenvalue problem for the domain $\textit{`pil'} \, \cup \textit{`NP'}$ of height $h$:
\begin{equation}
\nabla\cdot \left[\textbf{c}(\textbf{r})\textbf{:}\nabla \tilde{\textbf{u}}_{n,h}(\textbf{r})\right] = -\rho(\textbf{r})\omega_{n}^2 \tilde{\textbf{u}}_{n,h}(\textbf{r}) \, ,
\label{Acoustic_equation3}
\end{equation}
with $\rho\left(\textbf{r}\right)$ and $\textbf{c}\left(\textbf{r}\right)$ the position-dependent mass density and elastic stiffness tensor, respectively, and
with zero displacement enforced at the boundary $z$=0.
The latter is a good approximation for an impulsive excitation of the film (for instance upon absorption of an ultrashort laser pulse) when $Z^{sub}>Z^{pil}$, as in the present case.
The first subscript, $n$, identifies the film's eigenmode, the second, $h$, the film's thickness expressed in nm.
We then define the initial displacement on the entire simulation domain $\textit{`sub'} \, \cup \textit{`pil'} \, \cup \textit{`NP'}$:
\begin{equation}
\textbf{u}_{n,h}(\textbf{r},t=0)=
\begin{cases}
A\tilde{\textbf{u}}_{n,h}(\textbf{r})\, , & \forall z\ge0
\\
0 \, , & \forall z<0
\end{cases}
\label{Initial_displacement}
\end{equation}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{geometry_scheme_ema}
\caption{
Left: 3D nanoparticle thin film of thickness $d$ adhered on a semi-infinite substrate. Centre: EMA model: \textit{effective} NP layer ($d<z<h$); \textit{effective} interface layer ($0<z<d$); semi-infinite substrate ($z<0$). The NP layer is the same one addressed in the pillar model. The interface layer has effective mechanical properties $C^{*}$, $v^{*}$ and $\rho^{*}$ (see text). The image is for illustrative purposes. Right: 1D sketch of the EMA model.
}
\label{fig:geometry_scheme_ema}
\end{figure*}
where the displacement amplitude $A$ will cancel out in the following analysis.
We pinpoint that $\textbf{u}_{n,h}(\textbf{r},t=0)$ \textit{is not} an eigenmode of the acoustic eigenvalue problem for the domain $\textit{`sub'} \, \cup \textit{`pil'} \, \cup \textit{`NP'}$; nevertheless, for $z\ge0$, it matches the eigenmode of domain $\textit{`pil'} \, \cup \textit{`NP'}$.
The initial velocity field is $\dot{\textbf{u}}_{n,h}(\textbf{r},t=0)=0$.
Propagating the initial displacement on the entire unit cell via the Navier equation,
\begin{equation}
\nabla\cdot \left[\textbf{c}(\textbf{r})\textbf{:}\nabla \textbf{u}\right] = \rho\left(\textbf{r}\right)\ddot{\textbf{u}} \, ,
\label{Acoustic_equation_tdep}
\end{equation}
we obtain $\textbf{u}_{n,h}(\textbf{r},t)$.
For the sake of retrieving the film's quasi-eigenmode decay time we calculate the normalized projection coefficient between modes $\textbf{u}_{n,h}(\textbf{r},t=0)$ and $\textbf{u}_{n,h}(\textbf{r},t)$:
\begin{widetext}
\begin{equation}
P_{n,h} (t)=\frac{\langle \textbf{u}_{n,h}(t=0)|\textbf{u}_{n,h}(t) \rangle}{\langle \textbf{u}_{n,h}(t=0)|\textbf{u}_{n,h}(t=0) \rangle} =
\frac{\int_{V} \textbf{u}_{n,h}(\mathbf{r},t=0) \rho(\mathbf{r}) \textbf{u}_{n,h}(\mathbf{r},t) d\mathbf{r}}{\int_{V} \textbf{u}_{n,h}(\mathbf{r},t=0) \rho(\mathbf{r}) \textbf{u}_{n,h}(\mathbf{r},t=0) d\mathbf{r}} \, ,
\end{equation}
\end{widetext}
\noindent
where the integrals are actually calculated on the film's volume, $V_{film}$, since the initial displacement in the substrate is null by construction.
The introduction of the film density $\rho(\mathbf{r})$ is necessary to obtain a formally correct definition of the scalar product, the eigenvalue problem on the entire domain being of the Sturm-Liouville type.\\
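On a discretized mesh, the density-weighted scalar product amounts to a sum over nodes. A minimal sketch with synthetic nodal fields (mesh data, density and mode parameters below are placeholders):

```python
import numpy as np

def projection(u0, u, rho, dv):
    """Normalized projection of the displacement field u onto the t=0 profile u0.

    u0, u: nodal displacement fields, shape (npts, 3); rho: nodal densities;
    dv: nodal volumes (all placeholder mesh data in this sketch).
    """
    num = np.sum(rho * dv * np.einsum('ij,ij->i', u0, u))
    den = np.sum(rho * dv * np.einsum('ij,ij->i', u0, u0))
    return num / den

# toy check: a rigidly damped-oscillating profile gives P(t) = exp(-t/tau)cos(w t)
rng = np.random.default_rng(0)
u0 = rng.normal(size=(500, 3))
rho = np.full(500, 8392.0)            # assumed NP-film density, kg/m^3
dv = np.full(500, 1.0)                # uniform nodal volumes (arbitrary units)
w, tau = 2 * np.pi / 23e-12, 30e-12   # assumed mode angular frequency and decay
t = 10e-12
P = projection(u0, u0 * np.exp(-t / tau) * np.cos(w * t), rho, dv)
```

By construction $P=1$ at $t=0$, and a field that merely rescales the initial profile reproduces the damped-cosine factor exactly.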
For instance, for the case of a sample with $h$=40 nm and focusing on $n$=1, Fig.~\ref{fig:fem_simu} (a) shows the spatial profile of $\textbf{u}_{1,40}(\textbf{r},t=0)$ (arrows) and its modulus (colorbar) together with snapshots of its evolution $\textbf{u}_{1,40}(\textbf{r},t)$ taken for increasing times.
As time evolves, the film's quasi-eigenmode, $\textbf{u}_{1,40}(\textbf{r},t=0)$, fades away, displacement radiating into the substrate.
Fig.~\ref{fig:fem_simu} (b) reports the corresponding $P_{1,40}\left(t\right)$ (full red dots), measuring the overlap between the film's $n$=1 mode displacement profile at time $t$=0 and the actual displacement throughout the sample at any given time $t$.
For the `gedanken' case in which no acoustic radiation to the substrate occurs, the normalized projection coefficient would oscillate between 1 and -1 without any damping, $\textbf{u}_{1,40}(\textbf{r},t)$ representing, for $z\ge$0, the film's quasi-eigenmode displacement at different times.
The normalized projection coefficient's maximum would thus be attained for $t=mT$ (the two displacements fields being in phase), it would be zero for $t=(2m+1)T/4$ (the two displacements fields being in quadrature) and be at its minimum for $t=(2m+1)T/2$ (the two displacements fields being in anti-phase) with $m\in \mathbb{N}_{0}$.
For the real case, in which acoustic radiation is active, the normalized projection coefficient's oscillation is exponentially damped, its period $T_{1,40}$ and decay time $\tau_{1,40}$ being retrieved fitting the numerical results, see Fig.~\ref{fig:fem_simu} (b), blue line.
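The fitting step can be sketched as follows, with synthetic samples standing in for the FEM projection coefficient (times in ps; in practice the fit is performed on the simulated $P_{1,40}(t)$):

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_cos(t, A, T, tau, phi):
    # damped oscillation used to extract the period T and decay time tau
    return A * np.exp(-t / tau) * np.cos(2 * np.pi * t / T + phi)

# synthetic samples standing in for the FEM projection coefficient (times in ps)
t = np.linspace(0.0, 80.0, 400)
T_true, tau_true = 23.0, 30.0
P = damped_cos(t, 1.0, T_true, tau_true, 0.0)

popt, _ = curve_fit(damped_cos, t, P, p0=[0.9, 24.0, 35.0, 0.0])
A_fit, T_fit, tau_fit, phi_fit = popt
```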
Running simulations for varying $h$ we thus obtain $T_{n,h}$ and $\tau_{n,h}$, Fig.~\ref{fig:fem_simu} (c) reporting the case for $n$=1 (filled diamonds).
For the sake of comparison, we report on the same graph the data obtained from the analytic solution of the pillar model (full lines) together with the experimental values from ultrafast optoacoustic measurements \cite{peli2016mechanical} (filled circles).
The three sets of data are in good agreement, pointing to the fact that we correctly addressed the 3D pillar model via FEM and that, at least for $\alpha$=0.68, the approximations entailed in the pillar model are sound.
\begin{figure*}
\centering
\begin{minipage}[b]{1.03\columnwidth}
\includegraphics[width=\columnwidth]{Periodo_caso_limite_ema_0_1}
\end{minipage}
\hfill
\begin{minipage}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{Tempo_deca_caso_limite_ema_0_1}
\end{minipage}
\caption{
(a) period $T_n$ and (b) decay time $\tau_n$ versus $\beta$ for the Ag nanogranular film with $d$=12 nm and $h$=50 nm. The first two modes $n=0,1$ are addressed. EMA model (full lines) and limit cases (dashed lines) obtained for the $fs$ and $pa$ scenarios, respectively. The y-axis in panel (a) is broken for the sake of graphical clarity; the scales above and below the break are different for ease of representation. $T_n$ diverges for $\beta\rightarrow$0.5, an artefact ascribable to the pitfalls of the Budiansky formulas. The $fs$ scenario yields, for mode $n=0$, an infinite period (corresponding to a film translation), hence it is not reported in panel (a). The $fs$ scenario yields an infinite decay time, hence it is not reported in panel (b).}
\label{fig:limit_case_ema_tot}
\end{figure*}
Following the same procedure, we now perform FEM simulations on the 3D pillar model varying the pillar layer filling fraction, $\alpha$, for a fixed film thickness, $h$=50 nm, and compare the results against the values obtained from the pillar model. Fig.~\ref{fig:per_deca_quality_n_1} reports the oscillation period (a), decay time (b) and quality factor (c), $Q_{n}$=$\pi\left(\tau_{n}/T_{n}\right)$, calculated for $n$=0, where the superscripts $FEM$ and $pil$ stand for FEM simulations and pillar model, respectively. The same quantities, calculated for $n$=1, are reported in panels (d-f). The two models yield the same results, the relative differences, $(X^{FEM}_{n}-X^{pil}_{n})/X^{FEM}_{n}$ with $X$=$\{T, \tau, Q\}$, amounting, at most, to a few percent, see insets to each graph. For the case $n$=0 we were able to perform FEM simulations down to $\alpha$=0.05, whereas for $n$=1 numerical problems impeded extending simulations below $\alpha$=0.3. Given the boosting of $Q_{1}$ in the latter case, FEM simulations, such as the one reported in Fig.~\ref{fig:fem_simu} (b), were performed extending the time range to 500 ps. The overall result is that the analytic 1D pillar model perfectly reproduces the results of the more involved 3D FEM pillar model. Furthermore, the former does not pose any problem for the case of low $\alpha$ values, it is orders of magnitude more efficient in terms of computation time (four orders of magnitude for the present geometry), and, being analytic, it is much more amenable to fitting experimental data and clearly identifies the structural parameters leading the acoustic problem.
For low $\alpha$ values simulations were also performed varying the pillar position within the unit cell $xy$ plane and the pillar cross-sectional geometry (a square instead of a circle while keeping the same surface area), the results remaining unaltered. Low $\alpha$ values constitute the worst-case scenarios for these tests, since a slender pillar can be substantially displaced within the unit cell, whereas only small translations can be tested for plump pillars, i.e., greater $\alpha$ values. This evidence suggests that the results are invariant with respect to the specific disordered interface realization. Specifically, detailed knowledge of the stress distribution across the interface does not affect the solution in terms of quasi-mode period and lifetime; the relevant quantity is rather the integral of the stresses exchanged across the interfaces. The latter supports the physical ansatz implied in the 1D pillar model.
For the sake of completeness, in SI we also report the modulus of the displacement $\left|\{\tilde{\textbf{u}}_{n,50}\left(\textbf{r}\right)\}\right|$, for the first (n=0) and the second (n=1) film breathing modes for $\alpha$=0.05, 0.4 and 0.75. These plots give an idea of the quasi-breathing mode evolution from the quasi-$\textit{free standing}$ to the quasi-$\textit{perfect adhesion}$ scenarios.
\section{EMA model}
In order to display the potential of the pillar model, its broad validity range and its added value with respect to more traditional approaches, a simpler 1D model, addressed as the Effective Medium Approximation (EMA) model and based on a homogenized interface layer, is now introduced and its dispersion relation calculated. Its limits of validity, restricted to small porosities, are discussed in light of the pillar model, showing that the latter is needed to correctly access the acoustics-to-structure relation in granular ultra-thin films.
The interface layer, previously identified with the pillar layer, is now accounted for via a continuum, isotropic and homogeneous slab, addressed as $\textit{effective}$ interface layer, see Fig.~\ref{fig:geometry_scheme_ema}. The latter mimics an interface granular layer of thickness $d$, with its solid component made of the same material constituting the NPs and of filling fraction $\beta$. The parameters $d$ and $\beta$ play a similar role as $h$ and $\alpha$ in the pillar model. The elastic properties of the $\textit{effective}$ interface layer, denoted with an asterisk as a superscript, are calculated on the basis of Budiansky theory \cite{budiansky1965elastic}. The bulk, $K^{*} (\beta)$, and shear modulus, $G^{*} (\beta)$, are obtained through:
\begin{align}
\sum\limits_{i=1}^{N} \dfrac{c_{i}}{1 + A\left(\dfrac{K_{i}}{K^{*}}-1\right)}=1 \, ,
\notag
\\
\sum\limits_{i=1}^{N} \dfrac{c_{i}}{1 + B\left(\dfrac{G_{i}}{G^{*}}-1\right)}=1 \, ,
\label{eq:boudiansky}
\end{align}
where the values of $A$ and $B$ are given by
\begin{equation}
A=\dfrac{1+\nu^{*} (\beta)}{3\left(1-\nu^{*} (\beta)\right)} \, ,
\qquad
B=\dfrac{2\left(4-5\nu^{*} (\beta)\right)}{15\left(1-\nu^{*} (\beta)\right)} \, ,
\label{eq:boudiansky_coef}
\end{equation}
in which Poisson's ratio is expressed via $K^{*} (\beta)$ and $G^{*} (\beta)$ through the standard relation
\begin{equation}
\nu^{*}=\dfrac{3 K^{*} (\beta) - 2 G^{*} (\beta)}{6 K^{*} (\beta) + 2 G^{*} (\beta)}\, .
\label{eq:boudiansky_Poisson}
\end{equation}
In Eq.(\ref{eq:boudiansky}) $c_i$, $K_{i}$, and $G_{i}$ are the volume fraction, the bulk modulus and the shear modulus of phase $i$, respectively, where, in the present case, $N=2$, $i=1$ stands for vacuum and $i=2$ for the material constituting the NPs (bulk silver in the following), i.e. $c_2$=$\beta$.
A major pitfall of the Budiansky formulas is that the elastic coefficients vanish when $\beta$ reaches 0.5, thus setting a limit to the applicability of the EMA model, as will be discussed shortly.
Since the transversal contraction is prevented in the $\textit{effective}$ interface layer, the P-wave velocity is $v^{*}(\beta) = \sqrt{\frac{C_{11}^{*} (\beta)}{\rho^{*} (\beta)}}$.
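For the vacuum--solid case considered here, the self-consistent system Eqs.(\ref{eq:boudiansky})-(\ref{eq:boudiansky_Poisson}) reduces to a scalar fixed-point problem in $\nu^*$, which the following sketch solves by bisection. The silver moduli $K_2$, $G_2$ used below are illustrative round values, not the fitted parameters of the paper; the sketch is valid for $0.5<\beta<1$ and reproduces the collapse of the effective moduli as $\beta\rightarrow 0.5$.

```python
import math

# Budiansky self-consistent scheme for vacuum (K1 = G1 = 0) plus a solid of
# filling fraction beta. K2, G2, rho2 are illustrative values for bulk Ag.
def budiansky(beta, K2=100e9, G2=30e9, rho2=10490.0):
    """Return effective (K*, G*, C11* = K* + 4G*/3, v* = sqrt(C11*/rho*))."""
    c1 = 1.0 - beta                      # vacuum volume fraction

    def moduli(nu):
        # For a trial Poisson's ratio nu*, each Budiansky equation can be
        # solved explicitly for K* and G* (the vacuum terms reduce to
        # c1/(1 - A) and c1/(1 - B)).
        A = (1 + nu) / (3 * (1 - nu))
        B = 2 * (4 - 5 * nu) / (15 * (1 - nu))
        S1 = 1 - c1 / (1 - A)
        S2 = 1 - c1 / (1 - B)
        K = A * K2 / (beta / S1 - 1 + A)
        G = B * G2 / (beta / S2 - 1 + B)
        return K, G

    def mismatch(nu):
        # Self-consistency: nu implied by (K*, G*) must equal the trial nu.
        K, G = moduli(nu)
        return (3 * K - 2 * G) / (6 * K + 2 * G) - nu

    # Bisection: mismatch > 0 near nu = 0 and < 0 as K* -> 0 (at nu_max,
    # where 1 - A = c1).
    nu_max = (2 - 3 * c1) / (4 - 3 * c1)
    lo, hi = 1e-9, nu_max - 1e-6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mismatch(lo) * mismatch(mid) <= 0:
            hi = mid
        else:
            lo = mid
    K, G = moduli(0.5 * (lo + hi))
    C11 = K + 4 * G / 3
    return K, G, C11, math.sqrt(C11 / (beta * rho2))
```

For $\beta$=0.75 the scheme gives effective moduli well below the rule-of-mixtures estimate, and $K^*$, $G^*$ and $v^*$ all shrink rapidly as $\beta$ approaches 0.5, illustrating the pitfall discussed above.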
The interface boundary conditions for the EMA model are the ``perfect adhesion'' ones. In the following we summarize the full set of boundary conditions for the EMA model:
\begin{enumerate}
\item free standing at the top of the NP-layer ($z=h$):
\begin{equation}
C_{11}^{NP} \dfrac{\partial u_z^{NP} \left(h,t\right)}{\partial z} = 0 \, ,
\label{eq:BCP1_ema}
\end{equation}
\item continuity of stresses at the interface between the NP-layer and the $\textit{effective}$ homogeneous layer ($z=d$):
\begin{equation}
C_{11}^{NP} \dfrac{\partial u_z^{NP} \left(d,t\right)}{\partial z} = C_{11}^{*} (\beta) \dfrac{\partial u_z^{*} \left(d,t\right)}{\partial z} \, ,
\label{eq:BCP2_ema}
\end{equation}
\item continuity of the displacement at the interface between the NP-layer and the $\textit{effective}$ homogeneous layer:
\begin{equation}
u_z^{NP}\left(d,t\right) = u_z^{*}\left(d,t\right) \, ,
\label{eq:BCP3_ema}
\end{equation}
\item continuity of stresses at the interface between the $\textit{effective}$ homogeneous layer and the sapphire substrate ($z=0$):
\begin{equation}
C_{11}^{*} (\beta) \dfrac{\partial u_z^{*} \left(0,t\right)}{\partial z} = C_{11}^{sub} \dfrac{\partial u_z^{sub} \left(0,t\right)}{\partial z} \, ,
\label{eq:BCP4_ema}
\end{equation}
\item continuity of the displacement condition at the interface between the $\textit{effective}$ homogeneous layer and the sapphire substrate:
\begin{equation}
u_z^{*}\left(0,t\right) = u_z^{sub}\left(0,t\right) \, .
\label{eq:BCP5_ema}
\end{equation}
\end{enumerate}
Enforcing the boundary conditions Eqs.(\ref{eq:BCP1_ema})-(\ref{eq:BCP2_ema})-(\ref{eq:BCP3_ema})-(\ref{eq:BCP4_ema})-(\ref{eq:BCP5_ema}) to Eqs.(\ref{eq:variable_decomposition}) yields the following equation in the unknown $\omega(d,\beta)$:
\begin{widetext}
\begin{equation}
Z^{NP} -
\dfrac{C_{11}^{*} (\beta ) \cot \left(\dfrac{
(h-d) \omega}{v_{z}^{NP}}\right) \left[ v_{z}^{*}(\beta) Z^{sub} \cos \left(\dfrac{d~\omega
}{v_{z}^{*} (\beta)}\right)-i
C_{11}^{*} (\beta ) \sin \left(\dfrac{d~\omega
}{v_{z}^{*} (\beta)}\right)\right]}{v_{z}^{*} (\beta)
\left[ v_{z}^{*} (\beta) Z^{sub} \sin
\left(\dfrac{d~\omega }{v_{z}^{*} (\beta)}\right)+i C_{11}^{*} (\beta ) \cos \left(\dfrac{d~\omega }{v_{z}^{*} (\beta)}\right)\right]}
= 0 \, .
\label{eq:impli_solu_ema}
\end{equation}
\end{widetext}
\noindent
Mutatis mutandis from the pillar model case, Eq.(\ref{eq:impli_solu_ema}) may be solved numerically and yields, for each fixed set of parameters $\left(d,\beta\right)$, infinitely many complex-valued solutions $\omega=\omega_n\left(d,\beta\right)$, with $n$=$\{$0,1,2,...$\}$ the index numbering the mode.
Once the total thickness of the NP-layer is assigned, the free parameters in Eq.(\ref{eq:impli_solu_ema}) are the height of the interface layer, $d$, and its filling fraction, $\beta$. The relations linking the period of vibration, $T_n(d,\beta)$, and the wave decay time, $\tau_n(d,\beta)$, to the $n$-mode complex-valued angular frequency are expressed through Eqs.(\ref{eq:relation_period_decay}).
Comparison of Eq.(\ref{eq:impli_solu_pil}) and Eq.(\ref{eq:impli_solu_ema}) shows that the pillar and EMA models yield the same results provided $d=q$, $\alpha E^{bk} = C_{11}^{*} \left(\beta \right)$ and $v_z^{pil}=v_z^* \left(\beta\right)$. For the case of Ag NPs, the previous equations are satisfied if $\alpha=\beta=0.770439$.\\
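As an illustration, Eq.(\ref{eq:impli_solu_ema}) can be solved for one quasi-mode root by a coarse search in the complex $\omega$ plane followed by Newton refinement. The material parameters below are those of Table~\ref{tab:para_peli}; the effective-layer values assume the matching condition $C_{11}^{*}=\alpha E^{bk}$, $v_z^{*}=v_z^{bk}$ with $\alpha\simeq0.77$ just stated, and the identifications $T=2\pi/\mathrm{Re}\,\omega$ and $\tau=1/|\mathrm{Im}\,\omega|$ are assumptions about the sign convention in Eqs.(\ref{eq:relation_period_decay}).

```python
import cmath

Z_NP, v_NP = 2.42e7, 2880.0             # NP-layer impedance and velocity
Z_sub = 4.49e7                           # sapphire substrate impedance
C11s, vs = 0.770439 * 7.88e10, 2740.0    # effective interface layer (assumed)
h, d = 50e-9, 12e-9                      # total and interface thicknesses [m]

def disp(w):
    """Left-hand side of the dispersion relation; zero at a quasi-mode."""
    cot = cmath.cos((h - d) * w / v_NP) / cmath.sin((h - d) * w / v_NP)
    num = vs * Z_sub * cmath.cos(d * w / vs) - 1j * C11s * cmath.sin(d * w / vs)
    den = vs * (vs * Z_sub * cmath.sin(d * w / vs)
                + 1j * C11s * cmath.cos(d * w / vs))
    return Z_NP - C11s * cot * num / den

# Coarse grid between two poles of the cotangent factor.
w0, best = None, float("inf")
for k in range(61):
    for j in range(21):
        w = complex(2.45e11 + 3.75e9 * k, -2e10 + 2e9 * j)
        if abs(disp(w)) < best:
            w0, best = w, abs(disp(w))

for _ in range(40):                      # Newton, finite-difference derivative
    dw = 1e-6 * abs(w0)
    w0 = w0 - disp(w0) * dw / (disp(w0 + dw) - disp(w0))

T = 2 * cmath.pi / w0.real               # quasi-mode period [s]
tau = 1 / abs(w0.imag)                   # quasi-mode decay time [s]
```

The grid-then-Newton strategy avoids the poles of the cotangent; the refined root gives a period of a few tens of picoseconds, consistent with the case study below.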
\subsection{The EMA model: Case Study}
\begin{figure*}[t]
\centering
\begin{minipage}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{para_study_d_ema}
\end{minipage}
\hfill
\begin{minipage}[b]{\columnwidth}
\includegraphics[width=\columnwidth]{para_study_beta_ema}
\end{minipage}
\caption{$T_1(h;d,\beta)$ and $\tau_1(h;d,\beta)$ vs $h$ for $n=1$ for the Ag nanogranular film:
(a) fixed $\beta = 0.73$ while varying $d$ (expressed in nm);
(b) fixed $d = 12$ nm while varying $\beta$.
}
\label{fig:parametric_study_ema_tot}
\end{figure*}
We now exemplify the EMA model considering the same situation addressed in the case study for the pillar model, the only exception being the replacement of the pillar layer with the $\textit{effective}$ homogeneous film of thickness $d$=12 nm.
The oscillation period $T_{n}$ and lifetime $\tau_{n}$ for the first two modes of the EMA model, $n$=$\{0,1\}$, are reported versus $\beta$ as full lines in Fig.~\ref{fig:limit_case_ema_tot}, panels (a) and (b), respectively. For $\beta=0.8$ the density of the $\textit{effective}$ homogeneous layer matches the density of the NP-layer, the latter being 0.8 that of bulk Ag. Since we do not consider interface densification, the maximum $\beta$ value is once again constrained to 0.8.
$T_n$ diverges as $\beta$ approaches 0.5, due to the elastic constants becoming null in the Budiansky formulas. For the same reason, $T_1$ is not bounded between the values $T_{1,pa}$ and $T_{1,fs}$, as should be the case for a correct model. On the contrary, $T_n$ correctly approaches $T_{1,pa}$=23 ps for $\beta \rightarrow$0.8, that is, when the interface layer becomes identical to the NP layer. As for the lifetime, $\tau_{n}$ diverges as $\beta$ approaches 0.5, again due to the pitfalls of the Budiansky formulas.
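The quoted limit value $T_{1,pa}$=23 ps can be recovered from textbook slab resonances, assuming the $pa$ limit behaves as a fixed-free slab (rigid-substrate approximation, justified by the sapphire impedance being much larger than that of the film) and the $fs$ limit as a free-free slab. These closed forms are standard results, not taken from the paper.

```python
# Limit-case breathing-mode periods for a uniform slab of thickness h and
# sound velocity v; values quoted in the paper for the 50-nm Ag NP film.
def T_pa(n, h, v):
    """Perfect adhesion to a rigid substrate (fixed-free): f_n = (2n+1)v/(4h)."""
    return 4.0 * h / ((2 * n + 1) * v)

def T_fs(n, h, v):
    """Free-standing slab (free-free): f_n = n*v/(2h); n = 0 is a translation."""
    return 2.0 * h / (n * v)

h, v = 50e-9, 2880.0
# T_pa(1, h, v) -> ~23.1 ps, matching the quoted T_{1,pa} = 23 ps;
# T_fs(1, h, v) -> ~34.7 ps.
```

The stiffer (fixed) boundary raises the mode frequency, hence $T_{1,pa}<T_{1,fs}$, consistent with the ordering in Fig.~\ref{fig:limit_case_ema_tot}.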
\subsection{The EMA model: Parametric Study}
We here repeat the same parametric study, previously performed for the pillar model, for the case of the EMA model. $T_1(h;d,\beta)$ and $\tau_1(h;d,\beta)$ are reported versus the total thickness $h$ of the NP-layer for a fixed value of $\beta = 0.73$ (the value that gives optimal fitting of the photoacoustic data, see SI) while varying the parameter $d$ across the set of values $\{6,8,10, 12\}$nm, see Fig.~\ref{fig:parametric_study_ema_tot}(a), and vice versa, fixing a value $d=12$ nm (the value that gives optimal fitting of the photoacoustic data, see SI) and varying $\beta$ across the set of values $\{0.70,0.75,0.80, 0.85\}$, see Fig.~\ref{fig:parametric_study_ema_tot}(b). Fig.~\ref{fig:parametric_study_ema_tot} shows the same salient features observed for the pillar model in Fig.~\ref{fig:parametric_study_pil_tot}: the position of the inflection point and the magnitude of the tangent in such point being governed quite independently by $d$ and $\beta$, respectively.
\subsection{Pillar vs EMA model}
The pillar model is more adherent to physical reality than the EMA model and, contrary to the latter, is reliable across the entire spectrum of interface filling factor values. The EMA model suffers a major drawback in that both the oscillation periods and decay times diverge as the interface layer filling factor approaches 0.5. The EMA and pillar models yield the same results for a very specific value of the layer filling fraction, which happens to be $\sim$0.77 for the case investigated here. The EMA model yields reasonable predictions for small departures of $\beta$ from this value and, in this range, its control parameters work like those of the pillar model. For greater departures of $\beta$ from the optimal value the EMA model fails. Fig.~\ref{fig:optimal_solution_ema} summarises these points well, reporting, on the same graph and for the same sample, $T^{pil}_1$ and $\tau^{pil}_1$ versus $\alpha$, for the pillar model, and $T^{EMA}_1$ and $\tau^{EMA}_1$ vs $\beta$ for the EMA model.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Tempo_deca_pillar_vs_2EMA_1}
\caption{Ag nanogranular film with $q$=12 nm (pillar), $d$=12 nm (EMA) and $h$=50 nm.
Decay time $\tau^{pil}_1$ versus $\alpha$ (pillar model) and $\tau^{EMA}_1$ vs $\beta$ (EMA model). The two models coincide for $\alpha=\beta\approx0.77$.}
\label{fig:optimal_solution_ema}
\end{figure}
\section{Conclusions and perspectives}
The pillar model, a fully analytical 1D acoustic model for nanoporous thin films adhering to a flat substrate, was proposed here. The analytical dispersion relation for the frequencies and lifetimes of the film's acoustic breathing modes was obtained in terms of the interface layer's porosity and thickness. The model was successfully benchmarked both against full 3D FEM simulations of a 3D pillar model and against photoacoustic data available from the literature on an archetypal model system. The interface mechanical properties of the experimental model system itself bear great applicative relevance, as outlined in recent literature. In order to assess the potential of the pillar model and its broad validity range, its performance was compared against a simpler 1D analytical model, addressed as the EMA model, based on a homogenized interface layer of Budiansky type. The limits of applicability of the EMA model were addressed, together with the necessity of deploying the pillar model for most filling factors.
The results reported here are relevant from both a fundamental and an applicative standpoint. As for the former, the pillar model provides a vivid physical representation of the acoustics of porous thin films and its controlling parameters. More generally, it may be deployed to access the acoustics-to-structure relation in materials affected by disordered interfaces. The model showed that the physics is primarily dictated by the integral of the stresses exchanged across the interfaces rather than by their detailed distribution. Being fully analytical and 1D, the model is computationally very efficient and particularly amenable to fitting experimental quasi-breathing mode periods and lifetimes. The model allows accessing the interface layer parameters, which proved challenging to retrieve otherwise. On the other hand, should the porous film morphology be known a priori, the model correctly predicts its acoustic response. As for applications, knowledge of granular thin film interfaces adhering to a substrate is of paramount importance in a variety of sectors. Just to mention a few, the NP interface layer rules the adhesion properties of bactericidal coatings \cite{benetti2020antimicrobial, benetti2017direct}, both the mechanical and the electrical endurance of bendable transparent conductive oxides \cite{torrisi2019ag} and of conductive NP films produced by inkjet techniques \cite{kao2011}, and the sensitivity of photoacoustic sensors \cite{benetti2018photoacoustic}.
The pillar model is scale-invariant and may thus be deployed to investigate systems of greater dimensions, ranging from porous foams for vibration transmission control, to rock sediments lying on a continuous bed, to seismological scenarios \cite{peng2021}. Furthermore, the model is applicable, beyond the case of granular materials, to any patched interface. This is the case, for instance, when acoustically addressing the wrinkled interface that may arise between a 2D or few-layer material and its supporting substrate \cite{vialla2020time}, when investigating the acoustic properties of thin films suspended on pillars \cite{chaste2018}, or when inspecting for the presence of PMMA residues between a nano-patterned structure, fabricated via e-beam lithography, and the substrate it adheres to, an issue of the utmost importance in post-processing quality control.
The pillar model also provides a connection to the adhesion forces. Even though a direct comparison with the pull-off force, as provided by the most common adhesion models (JKR\cite{johnson1971surface}, DMT\cite{derjaguin1975effect}), is not straightforward, a simplified average pull-off pressure estimate is presented in SI.
\vspace{0.25cm}
\renewcommand{\arraystretch}{1.3}
\begin{table}[h!]
\centering
\begin{tabular}{lrl}
\hline
\hline
$\rho^{NP}$ & 8400 & kg m$^{-3}$ \\[1ex]
$v_{z}^{NP}$ & 2880 & m s$^{-1}$ \\[1ex]
$Z^{NP}$ & 2.42$\times$10$^{7}$ & kg s$^{-1}$ m$^{-2}$ \\[1ex]
$C_{11}^{NP}$ & 6.96$\times$10$^{10}$ & Pa \\[1ex]
$\rho^{bk}$ & 10490 & kg m$^{-3}$ \\[1ex]
$v_{z}^{bk}$ & 2740 & m s$^{-1}$ \\[1ex]
$Z^{bk}$ & 2.87$\times$10$^{7}$ & kg s$^{-1}$ m$^{-2}$ \\[1ex]
$E^{bk}$ & 7.88$\times$10$^{10}$ & Pa \\[1ex]
$\rho^{sub}$ & 3986 & kg m$^{-3}$ \\[1ex]
$v_{z}^{sub}$ & 11260 & m s$^{-1}$ \\[1ex]
$Z^{sub}$ & 4.49$\times$10$^{7}$ & kg s$^{-1}$ m$^{-2}$ \\[1ex]
\hline
\hline
\end{tabular}
\caption{Summary of the mechanical properties of the layers}
\label{tab:para_peli}
\end{table}
\renewcommand{\arraystretch}{1}
\noindent
\textbf{SUPPLEMENTARY INFORMATION}
\\
EMA model best fit solution for mode n=1, displacement field modulus for the first (n=0) and the second (n=1) film breathing modes for several $\alpha$ values, parametric study for the pillar model (fixed $q$ and low values of $\alpha$), computation of the surface energy and pull-off pressure.\\
\\
\noindent
\textbf{AUTHOR INFORMATION.}
\noindent
\textit{Corresponding author}
\\
*\textbf{Giulio Benetti} ([email protected])
\\
\textit{ORCID}
\\
\textbf{Gianluca Rizzi}: 0000-0002-5967-5403
\\
\textbf{Giulio Benetti}: 0000-0002-7070-0083
\\
\textbf{Claudio Giannetti}: 0000-0003-2664-9492
\\
\textbf{Luca Gavioli}: 0000-0003-2782-7414
\\
\textbf{Francesco Banfi}: 0000-0002-7465-8417
\section*{acknowledgement}
All the authors are grateful to Prof. Bigoni for enlightening discussions regarding the pillar model.
G.R. acknowledges funding from the French Research Agency ANR, METASMART (ANR-17CE08-0006) and the support from IDEXLYON in the framework of the Programme Investissement d$'$Avenir (ANR-16-IDEX-0005).
C.G. and L.G. acknowledge support from Universit\`a Cattolica del Sacro Cuore through D.2.2 and D.3.1. C.G. acknowledges financial support from MIUR through the PRIN 2017 program (Prot. 20172H2SC4-005). F.B. acknowledges financial support from Universit\'e de Lyon in the frame of the IDEXLYON Project (ANR-16-IDEX-0005) and from Universit\'e Claude Bernard Lyon 1 through the BQR Accueil EC 2019 grant. L.G. acknowledges support from Universit\'e de Lyon as an Invited Professor.
The authors thank Michael Cappozzo for graphical support in realizing 3D renderings of the different models.
\vspace*{3cm}
\section*{Appendix}
\label{sec:appendix}
All eight feature pathways of the neurashed model in Figure~\ref{fig:bottleneck} are shown below. The left and right columns correspond to Class 1 and Class 2, respectively.
\begin{figure}[!htp]
\centering
\begin{minipage}[c]{0.48\textwidth}
\centering
\includegraphics[width=7.5cm, height=3cm]{g6_appendix.pdf}\\[0.5em]
\includegraphics[width=7.5cm, height=3cm]{g6_2.pdf}\\[0.5em]
\includegraphics[width=7.5cm, height=3cm]{g6_3.pdf}\\[0.5em]
\includegraphics[width=7.5cm, height=3cm]{g6_4.pdf}
\end{minipage}
\begin{minipage}[c]{0.48\textwidth}
\centering
\includegraphics[width=7.5cm, height=3cm]{g6_5.pdf}\\[0.5em]
\includegraphics[width=7.5cm, height=3cm]{g6_6.pdf}\\[0.5em]
\includegraphics[width=7.5cm, height=3cm]{g6_7.pdf}\\[0.5em]
\includegraphics[width=7.5cm, height=3cm]{g6_8.pdf}
\end{minipage}
\end{figure}
In the experimental setup of the right panel of Figure~\ref{fig:bottleneck}, all amplification factors at initialization are set to independent uniform random variables on $(0, 0.01)$. We use $g^-(\lambda_F) = 1.022^{-\frac14} \lambda_F$ and $g^+(\lambda_F) = 1.022^{\frac{11}{4}}\lambda_F$ for all hidden nodes except for the 7th (from left to right) node, which uses $g^+(\lambda_F) = 1.022^{\frac{3}{4}}\lambda_F$. In the early phase of training, the firing pattern on the second level improves at distinguishing the two types of samples in Class 1, depending on whether the 1st or 3rd node fires. This also applies to Class 2. Hence, the mutual information between the second level and the input tends to $\log_2 4 = 2$. By contrast, in the late stages, the amplification factors of the 1st and 3rd nodes become negligible compared with that of the 2nd node, leading to indistinguishability between the two types in Class 1. As a consequence, the mutual information tends to $\log_2 2 = 1$. The discussion on the first level is
similar and thus is omitted.
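The compression just described can be reproduced by a direct count of the distinct firing patterns: for a deterministic map from uniformly distributed inputs to patterns, $I(X;T)=H(T)$. The patterns below are illustrative stand-ins for the second-level firing states discussed above, not outputs of the actual experiment.

```python
from math import log2
from collections import Counter

def mutual_information(patterns):
    """I(X;T) = H(T) in bits, one pattern per equiprobable input sample."""
    counts = Counter(patterns)
    n = len(patterns)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# Early phase: the two sample types within each class fire distinct nodes,
# so four inputs map to four distinct patterns -> I = log2(4) = 2 bits.
early = [(1, 0, 0), (0, 0, 1),        # Class 1, types a and b
         (1, 1, 0), (0, 1, 1)]        # Class 2, types a and b
# Late phase: within-class types collapse to a shared pattern
# -> I = log2(2) = 1 bit.
late = [(0, 1, 0), (0, 1, 0),
        (1, 1, 1), (1, 1, 1)]
```

Evaluating `mutual_information` on the two lists reproduces the drop from 2 bits to 1 bit described in the text.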
\section{Outlook}
\label{sec:discussion}
In addition to shedding new light on implicit regularization, information bottleneck, and local elasticity, neurashed is likely to facilitate insights into other common empirical patterns of deep learning. First, a byproduct of our interpretation of implicit regularization might reveal a subnetwork with performance comparable to the original network, which could have implications for the lottery ticket hypothesis of neural networks~\cite{frankle2018lottery}. Second, while a significant fraction of classes in ImageNet~\cite{deng2009imagenet} have fewer than 500 training samples, deep neural networks perform well on these classes in tests. Neurashed could offer a new perspective on these seemingly conflicting observations---many classes are basically the same (for example, ImageNet contains 120 dog-breed classes), so the effective sample size for learning the common features is much larger than the size of an individual class. Last, neurashed might help reveal the benefit of data augmentation techniques such as cropping. In the language of neurashed, \texttt{cat head} and \texttt{cat tail} each suffice to identify \texttt{cat}. If both concepts appear in the image, cropping reinforces the neurashed model by impelling it to learn these concepts separately. Nevertheless, these views are preliminary and require future consolidation.
While closely resembling neural networks in many aspects, neurashed is not merely intended to better explain some phenomena in deep learning. Instead, our main goal is to offer insights into the development of a comprehensive theoretical foundation for deep learning in future research. In particular, neurashed's efficacy in interpreting many puzzles in deep learning could imply that neural networks and neurashed evolve similarly during training. We therefore believe that a comprehensive deep learning theory is unlikely without incorporating the \textit{hierarchical}, \textit{iterative}, and \textit{compressive} characteristics. That said, useful insights can be derived from analyzing models without these characteristics in some specific settings~\cite{jacot2018neural,chizat2019lazy,wu2018sgd,MeiE7665,chizat2018global,belkin2019reconciling,lee2019wide,xu2019training,oymak2020towards,chan2021redunet}.
Integrating the three characteristics in a principled manner might necessitate a novel mathematical framework for reasoning about the composition of nonlinear functions. Because it could take years before such mathematical tools become available, a practical approach for the present, given that such theoretical guidelines are urgently needed~\cite{weinan2021dawning}, is to better relate neurashed to neural networks and develop finer-grained models. For example, an important question is to determine the unit in neural networks that corresponds with a feature node in neurashed. Is it a filter in the case of convolutional neural networks? Another topic is the relationship between neuron activations in neural networks and feature pathways in neurashed. To generalize neurashed, edges could be fired instead of nodes. Another potential extension is to introduce stochasticity to rules $g^+$ and $g^-$ for updating amplification factors and rendering feature pathways random or adaptive to learned amplification factors. Owing to the flexibility of neurashed as a graphical model, such possible extensions are endless.
\section{Introduction}
\label{sec:introduction}
Deep learning is recognized as a monumentally successful approach to many data-intensive applications in image recognition, natural language processing, and board game programs~\cite{krizhevsky2017imagenet,lecun2015deep,silver2016mastering}. Despite extensive efforts~\cite{jacot2018neural,bartlett2017spectrally,berner2021modern}, however, our theoretical understanding of how this increasingly popular machinery works and why it is so effective remains incomplete. This is exemplified by the substantial gap between the highly sophisticated training paradigm of modern neural networks and the capabilities of existing theories. For instance, the optimal architectures for certain specific tasks in computer vision remain unclear~\cite{tolstikhin2021mlp}.
To better fulfill the potential of deep learning methodologies in increasingly diverse domains, heuristics and computation are unlikely to be adequate---a comprehensive theoretical foundation for deep learning is needed. Ideally, this theory would demystify these black-box models, visualize the essential elements, and enable principled model design and training. A useful theory would, at a minimum, reduce unnecessary computational burden and human costs in present-day deep-learning research, even if it could not make all complex training details transparent.
Unfortunately, it is unclear how to develop a deep learning theory from first principles. Instead, in this paper we take a phenomenological approach that captures some important characteristics of deep learning. Roughly speaking, a phenomenological model provides an overall picture rather than focusing on details, and allows for useful intuition and guidelines so that a more complete theoretical foundation can be developed.
To address what characteristics of deep learning should be considered in a phenomenological model, we recall the three key components in deep learning: architecture, algorithm, and data~\cite{zdeborova2020understanding}. The most pronounced characteristic of modern network architectures is their \textit{hierarchical} composition of simple functions. Indeed, overwhelming evidence shows that multiple-layer architectures are superior to their shallow counterparts~\cite{eldan2016power}, reflecting the fact that high-level features are hierarchically represented through low-level features~\cite{hinton2021represent,bagrov2020multiscale}. The optimization workhorse for training neural networks is stochastic gradient descent or Adam~\cite{kingma2015adam}, which \textit{iteratively} updates the network weights using noisy gradients evaluated from small batches of training samples. Overwhelming evidence shows that the solution trajectories of iterative optimization are crucial to generalization
performance~\cite{soudry2018implicit}. It is also known that the effectiveness of deep learning relies heavily on the structure of the data~\cite{blum1992training,goldt2019modelling}, which enables the \textit{compression} of data information in the late stages of deep learning training~\cite{tishby2015deep,shwartz2017opening}.
\section{Neurashed}
\label{sec:neuroshed-model}
We introduce a simple, interpretable, white-box model that simultaneously possesses the \textit{hierarchical, iterative}, and \textit{compressive} characteristics to guide the development of a future deep learning theory. This model, called \textit{neurashed}, is represented as a graph with nodes partitioned into different levels (Figure~\ref{fig:intro_xs}). The number of levels is the same as the number of layers of the neural network that neurashed imitates. Instead of corresponding with a single neuron in the neural network, an $l$-level node in neurashed represents a feature that the neural network can learn in its $l$-th layer. For example, the nodes in the first/bottom level denote the lowest-level features, whereas the nodes in the last/top level correspond to the class membership in the classification problem. To describe the dependence of high-level features on low-level features, neurashed includes edges between a node and its dependent nodes in the preceding level. This reflects the hierarchical nature of features in neural networks.
\begin{figure}[!htp]
\centering
\includegraphics[scale=0.4]{g7.pdf}
\caption{A neurashed model that imitates a four-layer neural network for a three-class classification problem. For instance, the feature represented by the leftmost node in the second level is formed by the features represented by the three leftmost nodes in the first level.}
\label{fig:intro_xs}
\end{figure}
Given any input sample, a node in neurashed is in one of two states: firing or not firing. The unique last-level node that fires for an input corresponds to the label of the input. Whether a node in the first level fires or not is determined by the input. For a middle-level node, its state is determined by the firing pattern of its dependent nodes in the preceding level. For example, let a node represent \texttt{cat} and its dependent nodes be \texttt{cat head} and \texttt{cat tail}. We activate \texttt{cat} when either or both of the two dependent nodes are firing. Alternatively, let a node represent \texttt{panda head} and consider its dependent nodes \texttt{dark circle}, \texttt{black ear}, and \texttt{white face}. The \texttt{panda head} node fires only if all three dependent nodes are firing.
We call the subgraph induced by the firing nodes the \textit{feature pathway} of a given input. Samples from different classes have relatively distinctive feature pathways, commonly shared at lower levels but more distinct at higher levels. By contrast, feature pathways of same-class samples are identical or similar. An illustration is given in Figure~\ref{fig:consistent}.
\begin{figure}[!htp]
\centering
\begin{minipage}[c]{0.68\textwidth}
\centering
\subfigure[Class 1a]{
\centering
\includegraphics[scale=0.3]{g7_1.pdf}
\label{fig:c11}}
\hspace{0.01in}
\subfigure[Class 1b]{
\centering
\includegraphics[scale=0.3]{g7_2.pdf}
\label{fig:c12}}\\
\subfigure[Class 2]{
\centering
\includegraphics[scale=0.3]{g7_3.pdf}
\label{fig:c2}}
\hspace{0.01in}
\subfigure[Class 3]{
\centering
\includegraphics[scale=0.3]{g7_4.pdf}
\label{fig:c3}}
\end{minipage}
\caption{Feature pathways of the neurashed model in Figure~\ref{fig:intro_xs}. Firing nodes are marked in red. Class 1 includes two types of samples with slightly different feature pathways, which is a reflection of heterogeneity in real-life data~\cite{feldman2020does}.}
\label{fig:consistent}
\end{figure}
To enable prediction, all nodes $F$ except for the last-level nodes are assigned a nonnegative value $\lambda_F$ as a measure of each node's ability to sense the corresponding feature. A large value of $\lambda_F$ means that when this node fires it can send out strong signals to connected nodes in the next level. Hence, $\lambda_F$ is the amplification factor of $F$. Moreover, let $\eta_{fF}$ denote the weight of a connected second-last-level node $f$ and last-level node $F$. Given an input, we define the score of each node, which is sent to its connected nodes on the next level: For any first-level node $F$, let score $S_F = \lambda_F$ if $F$ is firing and $S_F = 0$ otherwise; for any firing middle-level node $F$, we recursively define
\begin{equation}\nonumber
S_F = \lambda_F \sum_{f \goto F} S_f,
\end{equation}
where the sum is over all dependent nodes $f$ of $F$ in the lower level. Likewise, let $S_F = 0$ for any non-firing middle-level node $F$. For the last-level nodes $F_1, \ldots, F_K$ corresponding to the $K$ classes, let
\begin{equation}\label{eq:logits}
Z_{j} = \sum_{f \goto {F_j}} \eta_{f F_j} S_f
\end{equation}
be the logit for the $j$th class, where the sum is over all second-last-level dependent nodes $f$ of $F_j$. Finally, we predict the probability that this input is in the $j$th class as
\[
p_j(x) = \frac{\exp(Z_j)}{\sum_{i=1}^K \exp(Z_{i})}.
\]
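The scoring rules above can be condensed into a short forward-pass sketch. This is a plain-Python illustration under our own naming conventions (\texttt{forward}, \texttt{parents}, etc.); the paper defines neurashed mathematically rather than as code.

```python
import math

# A minimal sketch of neurashed's forward pass (all identifiers are ours,
# not from the paper). Nodes are keyed by id; parents[n] lists the
# dependent nodes of n in the preceding level.

def forward(firing, parents, lam, eta, class_nodes):
    """Return class probabilities for one input.

    firing      -- set of node ids that fire for this input
    parents     -- dict: node id -> list of dependent lower-level node ids;
                   first-level nodes have an empty list
    lam         -- dict: node id -> amplification factor lambda_F
    eta         -- dict: (f, F_j) -> weight between a second-last-level
                   node f and a last-level class node F_j
    class_nodes -- list of last-level node ids, one per class
    """
    score = {}

    def S(F):                       # recursive score of a non-class node
        if F in score:
            return score[F]
        if F not in firing:
            s = 0.0                 # non-firing nodes send no signal
        elif not parents[F]:
            s = lam[F]              # first-level node: S_F = lambda_F
        else:                       # middle level: S_F = lambda_F * sum S_f
            s = lam[F] * sum(S(f) for f in parents[F])
        score[F] = s
        return s

    # logit Z_j = sum_f eta_{f F_j} S_f over second-last-level parents f
    Z = [sum(eta[(f, Fj)] * S(f) for f in parents[Fj]) for Fj in class_nodes]
    m = max(Z)                      # softmax with the usual max-shift
    e = [math.exp(z - m) for z in Z]
    tot = sum(e)
    return [x / tot for x in e]
```

On a toy graph with one strongly amplified middle node feeding class 1, the returned probability vector concentrates on class 1, matching the convergence argument given below.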
To mimic the iterative characteristic of neural network training, we must be able to update the amplification factors for neurashed during training. At initialization, because there is no predictive ability as such for neurashed, we set $\lambda_F$ and $\eta_{fF}$ to zero, other constants, or random numbers. In each backpropagation, a node is firing if it is in the \textit{union} of the feature pathways of all training samples in the mini-batch for computing the gradient. We increase the amplification ability of any firing node. Specifically, if a node $F$ is firing in the backpropagation, we update its amplification factor $\lambda_F$ by letting
\[
\lambda_F \leftarrow g^+(\lambda_F),
\]
where $g^+$ is an increasing function satisfying $g^+(x) > x$ for all $x \ge 0$. The simplest choices include $g^+(x) = ax$ for $a > 1$ and $g^+(x) = x + c$ for $c > 0$. The strengthening of firing feature pathways is consistent with a recent analysis of simple hierarchical models~\cite{poggio2020theoretical,allen2020backward}. By contrast, for any node $F$ that is \textit{not} firing in the backpropagation, we decrease its amplification factor by setting
\[
\lambda_F \leftarrow g^-(\lambda_F)
\]
for an increasing function $g^-$ satisfying $0 \le g^-(x) \le x$; for example, $g^-(x) = bx$ for some $0 < b \le 1$. This recognizes regularization techniques such as weight decay, batch normalization~\cite{ioffe2015batch}, layer normalization~\cite{ba2016layer}, and dropout~\cite{srivastava2014dropout} in deep-learning training, which effectively impose certain constraints on the weight parameters~\cite{fang2021layer}. Update rules $g^+, g^-$ generally vary with respect to nodes and iteration number. Likewise, we apply rule $g^+$ to $\eta_{fF}$ when the connected second-last-level node $f$ and last-level node $F$ both fire; otherwise, $g^-$ is applied.
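One backpropagation step then amounts to a union over the mini-batch followed by elementwise updates. The sketch below uses the simplest multiplicative choices $g^+(x)=ax$ and $g^-(x)=bx$ named in the text; the function and variable names are ours.

```python
# A sketch of one neurashed backpropagation step (illustrative names; the
# update functions use the simplest choices from the text: multiplicative
# growth for firing nodes, multiplicative decay otherwise).

def train_step(batch_pathways, lam, eta, a=1.1, b=0.9):
    """Update amplification factors in place for one mini-batch.

    batch_pathways -- list of feature pathways (sets of firing node ids),
                      one per sample in the mini-batch
    lam            -- dict: node id -> lambda_F
    eta            -- dict: (f, F) -> weight of a second-last/last-level edge
    a, b           -- g+(x) = a*x with a > 1; g-(x) = b*x with 0 < b <= 1
    """
    fired = set().union(*batch_pathways)    # union over the mini-batch
    for F in lam:
        lam[F] = a * lam[F] if F in fired else b * lam[F]
    for (f, F) in eta:
        # g+ applies only when both endpoints fire; otherwise g-
        eta[(f, F)] = a * eta[(f, F)] if (f in fired and F in fired) \
                      else b * eta[(f, F)]
```

Nodes appearing in the batch's feature pathways grow by the factor $a$, while the rest decay by $b$, which is exactly the competition that drives the dynamics discussed next.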
The training dynamics above could improve neurashed's predictive ability. In particular, the update rules allow nodes appearing frequently in feature pathways to quickly grow their amplification factors. Consequently, for an input $x$ belonging to the $j$th class, the amplification factors of most nodes in its feature pathway become relatively large during training, and the true-class logit $Z_j$ also becomes much larger than the other logits $Z_i$ for $i \ne j$. This shows that the probability of predicting the correct class satisfies $p_j(x) \goto 1$ as the number of iterations tends to infinity.
The modeling strategy of neurashed is similar to a water\textit{shed}, where tributaries meet to form a larger stream (hence ``neura\textit{shed}''). This modeling strategy gives neurashed the innate characteristics of a hierarchical structure and iterative optimization. As a caveat, we do not regard the feature representation of neurashed as fixed. Although the graph is fixed, the evolving amplification factors represent features in a dynamic manner. Note that neurashed is different from capsule networks~\cite{sabour2017dynamic} and GLOM~\cite{hinton2021represent} in that our model is meant to shed light on the black box of deep learning, not serve as a working system.
\subsection*{Acknowledgments}
We would like to thank Patrick Chao, Zhun Deng, Cong Fang, Hangfeng He, Qingxuan Jiang, Konrad Kording, Yi Ma, and Jiayao Zhang for helpful discussions and comments. This work was supported in part by NSF through CAREER DMS-1847415 and CCF-1934876, an Alfred Sloan Research Fellowship, and the Wharton Dean's Research Fund.
\printbibliography
}
\clearpage
\section{Insights into Puzzles}
\label{sec:implications}
{\noindent \bf Implicit regularization.} Conventional wisdom from statistical learning theory suggests that a model may not perform well on test data if its parameters outnumber the training samples; to avoid overfitting, explicit regularization is needed to constrain the search space of the unknown parameters~\cite{friedman2001elements}. In contrast to other machine learning approaches, modern neural networks---where the number of learnable parameters is often orders of magnitude larger than that of the training samples---enjoy surprisingly good generalization even \textit{without} explicit regularization~\cite{zhang2021understanding}. From an optimization viewpoint, this shows that simple stochastic gradient-based optimization for training neural networks implicitly induces a form of regularization biased toward local minima of low ``complexity''~\cite{soudry2018implicit,bartlett2020benign}. However, it remains unclear how implicit regularization occurs from a geometric perspective~\cite{Nagarajan,razin2020implicit,zhou2021over}.
To gain geometric insights into implicit regularization using our conceptual model, recall that only firing features grow during neurashed training, whereas the remaining features become weaker during backpropagation. For simplicity, consider stochastic gradient descent with a mini-batch size of 1. Here, only \textit{common} features shared by samples from different classes constantly fire in neurashed, whereas features peculiar to some samples or certain classes fire less frequently. As a consequence, these common features become stronger more quickly, whereas the other features grow less rapidly or even diminish.
\begin{figure}[!htp]
\centering
\begin{minipage}[c]{0.6\textwidth}
\centering
\includegraphics[scale=0.4]{g1.pdf}
\includegraphics[scale=0.4]{g2.pdf}\\[0.5em]
\includegraphics[scale=0.4]{g4.pdf}\\[0.5em]
Small-batch training
\end{minipage}
\hspace{-0.3in}
\begin{minipage}[c]{0.35\textwidth}
\centering
\includegraphics[scale=0.4]{g3.pdf}\\[0.5em]
\includegraphics[scale=0.4]{g5.pdf}\\[0.5em]
Large-batch training
\end{minipage}
\hspace{1cm}
\caption{Part of neurashed that corresponds to a single class. The two top plots in the left panel show two feature pathways, and the top plot in the right panel denotes the firing pattern when both feature pathways are included in the batch (the last-level node is firing but is not marked in red for simplicity). The two bottom plots represent the learned neurashed models, where larger nodes indicate larger amplification factors.}
\label{fig:sgd_gd}
\end{figure}
When gradient descent or large-batch stochastic gradient descent is used, many features fire in each update of neurashed, thereby increasing their amplification factors simultaneously. By contrast, a small-batch method constructs the feature pathways in a sparing way. Consequently, the feature pathways learned using small batches are \textit{sparser}, suggesting a form of \textit{compression}. This comparison is illustrated in Figure~\ref{fig:sgd_gd}, which implies that different samples from the same class tend to exhibit vanishing variability in their high-level features during later training, and is consistent with the recently observed phenomenon of neural collapse~\cite{papyan2020prevalence}. Intuitively, this connection is indicative of neurashed's compressive nature.
Although neurashed's geometric characterization of implicit regularization is currently a hypothesis, much supporting evidence has been reported, empirically and theoretically. Empirical studies in \cite{keskar2016large,smith2020generalization} showed that neural networks trained by small-batch methods generalize better than when trained by large-batch methods. Moreover, \cite{ilyas2019adversarial,xiao2020noise} showed that neural networks tend to be more accurate on test data if these models leverage less information from the images. From a theoretical angle, \cite{haochen2020shape} related generalization performance to a solution's sparsity level when a simple nonlinear model is trained using stochastic gradient descent.
\vspace{0.2cm}
{\noindent \bf Information bottleneck.} In \cite{tishby2015deep,shwartz2017opening}, the information bottleneck theory of deep learning was introduced, based on the observation that neural networks undergo an initial fitting phase followed by a compression phase. In the initial phase, neural networks seek to both memorize the input data and fit the labels, as manifested by the increase in mutual information between a hidden level and both the input and labels. In the second phase, the networks compress all irrelevant information from the input, as demonstrated by the decrease in mutual information between the hidden level and input.
Instead of explaining how this mysterious phenomenon emerges in deep learning, which is beyond our scope, we shed some light on information bottleneck by producing the same phenomenon using neurashed. As with implicit regularization, we observe that neurashed usually contains many redundant feature pathways when learning class labels. Initially, many nodes grow and thus encode more information regarding both the input and class labels. Subsequently, more frequently firing nodes become more dominant than less frequently firing ones. Because nodes compete to grow their amplification factors, dominant nodes tend to dwarf their weaker counterparts after a sufficient amount of training. Hence, neurashed starts to ``forget'' the information encoded by the weaker nodes, thereby sharing less mutual information with the input samples (see an illustration in Figure~\ref{fig:bottleneck}). The \textit{compressive} characteristic of neurashed arises, loosely speaking, from the internal competition among nodes. This interpretation of the information bottleneck via neurashed is reminiscent of the human brain, which has many neuron synapses during childhood that are pruned to leave fewer firing connections in adulthood~\cite{feinberg1982schizophrenia}.
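Mutual-information curves of the kind shown in Figure~\ref{fig:bottleneck} can be estimated with a generic plug-in estimator for discrete variables, sketched below. This is not the paper's exact procedure (which adds noise before computing MI); the names are illustrative.

```python
# Plug-in estimate of I(X; Y) from paired samples of two discrete
# variables, e.g. a level's firing pattern vs. the input type or label.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits from empirical joint and marginal frequencies."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # joint counts
    px, py = Counter(xs), Counter(ys)   # marginal counts
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

Tracking this quantity between a level's firing pattern and the input (or label) over training iterations reproduces the fitting-then-compression shape described above.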
\begin{figure}[!htp]
\centering
\includegraphics[width=6cm, height=4.5cm]{g6.pdf}
\hspace{0.5cm}
\includegraphics[width=6cm, height=4.8cm]{ib-crop.pdf}
\caption{A neurashed model for a binary classification problem. All four firing patterns of Class 1 on the first level (from left to right): $(1, 2, 7), (2, 3, 7), (4, 5, 7), (5, 6, 7)$. In the second level, the first and third nodes fire if one or more dependent nodes fire, and the second (dominant) node fires if two or more dependent nodes fire. The left panel displays a feature pathway of Class 1. Class 2 has four feature pathways that are symmetric to those of Class 1. The right panel shows the information bottleneck phenomenon for this neurashed model. As with~\cite{shwartz2017opening}, noise is added in calculating the mutual information (MI) between the first/second level and the input (8 types)/labels (2 types). More details are given in the appendix.}
\label{fig:bottleneck}
\end{figure}
\vspace{0.2cm}
{\noindent \bf Local elasticity.}
Last, we consider a recently observed phenomenon termed local elasticity~\cite{he2019local} in deep learning training, which asks how the update of neural networks via backpropagation at a base input changes the prediction at a test sample. Formally, for $K$-class classification, let $z_1(x, w), \ldots, z_K(x, w)$ be the logits prior to the softmax operation with input $x$ and network weights $w$. Writing $w^+$ for the updated weights using the base input $x$, we define
\begin{equation}\nonumber
\mathrm{LE}(x, x') := \frac{\sqrt{\sum_{i=1}^K (z_i(x', w^+) - z_i(x', w))^2}}{\sqrt{\sum_{i=1}^K (z_i(x, w^+) - z_i(x, w))^2}}
\end{equation}
as a measure of the impact of base $x$ on test $x'$. A large value of this measure indicates that the base has a significant impact on the test input. Through extensive experiments, \cite{he2019local} demonstrated that well-trained neural networks are locally elastic in the sense that the value of this measure depends on the semantic similarity between two samples $x$ and $x'$. If they are similar---say, images of a cat and tiger---the impact is significant, and if they are dissimilar---say, images of a cat and turtle---the impact is low. Experimental results are shown in Figure~\ref{fig:le}. For comparison, local elasticity does not appear in linear classifiers because of the leverage effect. More recently, \cite{chen2020label,deng2020toward,zhang2021imitating} showed that local elasticity implies good generalization ability.
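The measure $\mathrm{LE}(x, x')$ is a ratio of Euclidean norms of logit changes and can be transcribed directly; the function below uses our own names and assumes the four logit vectors have already been computed.

```python
# Direct transcription of the local-elasticity measure LE(x, x') from the
# displayed equation, given logits before and after the update at the base
# input (function and variable names are ours).
import math

def local_elasticity(z_test_before, z_test_after, z_base_before, z_base_after):
    """LE = ||z(x', w+) - z(x', w)|| / ||z(x, w+) - z(x, w)|| over K logits."""
    num = math.sqrt(sum((a - b) ** 2
                        for a, b in zip(z_test_after, z_test_before)))
    den = math.sqrt(sum((a - b) ** 2
                        for a, b in zip(z_base_after, z_base_before)))
    return num / den
```

A value near 1 indicates that the test sample is affected almost as strongly as the base sample itself, the signature of semantic similarity reported in \cite{he2019local}.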
\begin{figure}[!htp]
\centering
\includegraphics[scale=0.5]{le_plot.pdf}
\caption{Histograms of $\mathrm{LE}(x, x')$ evaluated on the pre-trained VGG-19 network~\cite{simonyan2014very}. For example, in the left panel the base inputs $x$ are images of brown bears. Each class contains 120 images sampled from ImageNet~\cite{deng2009imagenet}. Tigers and leopards are both felines and hence similar.}
\label{fig:le}
\end{figure}
We now show that neurashed exhibits the phenomenon of local elasticity, which yields insights into how local elasticity emerges in deep learning. To see this, note that similar training samples share more of their feature pathways. For example, the two types of samples in Class 1 in Figure~\ref{fig:consistent} are presumably very similar and indeed have about the same feature pathways; Class 1 and Class 2 are more similar to each other than Class 1 and Class 3 in terms of feature pathways. Metaphorically speaking, when backpropagation is applied at an image of a leopard, the leopard feature pathway strengthens as the associated amplification factors increase. While this update also strengthens the feature pathway for tiger, it does not impact the brown bear feature pathway as much, which presumably overlaps less with the leopard feature pathway. This update in turn leads to a more significant change in the logits \eqref{eq:logits} of an image of a tiger than those of a brown bear. Returning to Figure~\ref{fig:consistent} for an illustration of this interpretation, the impact of updating at a sample in Class 1a is most significant on Class 1b, less significant on Class 2, and unnoticeable on Class 3.
\section{Introduction}
\label{thintro}
Recent advances in fabrication, manipulation and measurement of artificial
quantum impurity systems such as quantum dots have led to a resurgence of
interest in nanostructures in both experiment and theory. A favorable feature
of these nanostructures is the outstanding tunability of device parameters.
Understanding the dynamical properties of quantum impurity systems is of
fundamental importance for the development of solid-state quantum information
processing\cite{Elz04431, Kop06766, Han081043} and single-electron devices.
\cite{Gab06499, Fev071169}
Moreover, quantum impurity models serve as essential theoretical tools, covering
a broad range of important physical systems. For instance, the Hubbard lattice
model can be mapped onto the Anderson impurity model via a self-consistent
dynamical mean field theory.\cite{Met89324, Geo926479, Geo9613} Besides the
strong electron-electron (\emph{e-e}) interactions, local impurities are also
subject to interactions with itinerant electrons in surrounding bulk materials,
which serve as the electron reservoir as well as thermal bath.
The interplay between the local \emph{e-e} interactions and nonlocal transfer
coupling gives rise to a variety of intriguing phenomena of prominent many-body
nature, such as Kondo effect,\cite{Cro98540, Gol98156, Bul08395} Mott metal-insulator
transition,\cite{Geo921240, Ima981039, Bul01045103} and high-temperature
superconductivity.\cite{Eme872794, Yan031, Mai05237001}
Characterizing the system responses to external perturbation of experimental
relevance is of fundamental significance in understanding the intrinsic
properties of quantum impurity systems and their potential applications.
For instance, the magnetic susceptibility of an impurity system reflects
the redistribution of electron spin under an applied magnetic field, and
its investigation may have important implications for fields such as spintronics.
For the accurate characterization of dynamical properties of the
impurity such as the impurity spectral function and dynamical
charge/magnetic susceptibility, a variety of nonperturbative numerical
approaches have been developed, such as the numerical renormalization group
(NRG) method,\cite{Wil75773, Hof001508, Bul08395} density matrix
renormalization group approach,\cite{Whi922863, Jec02045114, Nis04613}
and quantum Monte Carlo method. \cite{Hir862521,Sil902380, Gub916011,
Gul11349}
While most of this work has focused on equilibrium properties, the accurate
characterization of nonequilibrium dynamical properties has remained very
challenging.
In many experimental setups,\cite{Gol98156, Cro98540} artificial quantum
impurity systems attached to electron reservoirs are subject to applied bias
voltages. This stimulates the experimental and theoretical exploration of
nonequilibrium processes in quantum impurity systems. A variety of interesting
physical phenomena have been observed, which originate from the interplay between
strong electron correlation and nonequilibrium dissipation.\cite{Doy06245326,
Meh08086804, Bou08140601, Chu09216803}
In the past few years, a number of nonperturbative theoretical approaches
have been devised to treat systems away from equilibrium. These include
the time-dependent numerical renormalization group method,\cite{Cos973003, And05196801,
And08066804} time-dependent density matrix renormalization group method,\cite{Whi04076401}
nonequilibrium functional renormalization group,\cite{Jak07150603, Gez07045324}
quantum Monte Carlo method,\cite{Han99236808, Wer09035320, Sch09153302} iterative
real-time path integral approach,\cite{Wei08195316, Seg10205323} and nonequilibrium
Bethe ansatz.\cite{Kon01236801, Meh06216802, Cha11195314}
Despite the progress made, quantitative accuracy is not guaranteed for the
resulting nonequilibrium properties, because of the various simplifications and
approximations involved in these approaches. Therefore, an accurate and universal
approach which is capable of addressing nonequilibrium situations is highly
desirable.
In this work we propose a hierarchical dynamics approach for the characterization
of nonequilibrium response of local impurities to external fields.
A general hierarchical equations of motion (HEOM) approach has been developed,
\cite{Jin08234703, Zhe09164708, Zhe121129} which describes the reduced dynamics
of open dissipative systems under arbitrary time-dependent external fields. The
HEOM theory resolves the combined effects of \emph{e-e} interactions, impurity-reservoir
dissipation, and non-Markovian memory in a nonperturbative manner.
In the framework of HEOM, the nonequilibrium dynamics are treated by following the
same numerical procedures as in equilibrium situations. The HEOM theory is in principle
exact for an arbitrary equilibrium or nonequilibrium system, provided that the full hierarchy
inclusive of infinite levels are taken into account.\cite{Jin08234703} In practice, the
hierarchy needs to be truncated at a finite level for numerical tractability. The convergence
of calculation results with respect to the truncation level should be carefully examined.
Once the convergence is achieved, the numerical outcome is considered to reach quantitative
accuracy for systems in both equilibrium and nonequilibrium situations.
It has been demonstrated that the HEOM approach leads to an accurate
and universal characterization of strong electron correlation effects
in quantum impurity systems, and treats the equilibrium and
nonequilibrium scenarios in a unified manner. For the equilibrium
properties of Anderson model systems, the HEOM approach achieves the
same level of accuracy as the latest state-of-the-art NRG
method.\cite{Li12266403}
In particular, the universal Kondo scaling of zero-bias conductance and the logarithmic
Kondo spectral tail have been reproduced quantitatively.
For systems out of equilibrium, numerical calculations achieving
quantitative accuracy remain very scarce. One of the rare cases where
a numerically exact solution is available is the dynamic current response
of a noninteracting quantum dot to a step-pulse voltage.
\cite{Ste04195318,Mac06085324} This has been precisely reproduced by
the HEOM approach. \cite{Zhe08184112} However, there are very few
calculations at the level of quantitative accuracy for systems
involving strong \emph{e-e} interactions, since most of the existing
methods involve intrinsic approximations. Based on the HEOM formalism,
quantitative accuracy should be achieved once the numerical convergence
with respect to the truncation level of hierarchy is reached.
There are two schemes to evaluate the response properties of quantum
impurity systems in the framework of HEOM: (\emph{i}) Calculate
relevant system correlation/response functions based on a linear
response theory constructed in the HEOM Liouville space;\cite{Li12266403}
and (\emph{ii}) solve the EOM for a hierarchical set of density
operators to obtain the transient reduced dynamics of system in
response to time-dependent external perturbation, followed by a finite
difference analysis. These two schemes are completely equivalent in the
linear response regime, as has been verified numerically.
In previous studies, we employed the second scheme above to
evaluate the dynamic admittance (the frequency-dependent electric current
in response to an external voltage applied to the coupling electron
reservoirs) of quantum dot systems, which led to the identification
of several interesting phenomena, including dynamic Coulomb
blockade,\cite{Zhe08093016} dynamic Kondo transition,\cite{Zhe09164708}
and photon-phonon-assisted transport.\cite{Jia12245427}
In this work, we will elaborate on the first scheme of the HEOM approach.
The external perturbation may be associated with an arbitrary operator in
the impurity subspace, or may originate from a homogeneous shift of the
electrostatic potential (and hence the chemical potential) of an electron
reservoir. The detailed numerical procedures will be exemplified through the
evaluation of a variety of response properties of a single-impurity
Anderson model, including the impurity spectral density function, local
charge fluctuation spectrum, local magnetic susceptibility, and dynamic
admittance.
The remainder of this paper is organized as follows. We will first give a
brief introduction on the HEOM method in \Sec{thheom}.
In \Sec{ththeo} we will elaborate the establishment of a linear
response theory in the HEOM Liouville space. Calculation on system
correlation/response functions which are directly relevant to the
response properties of primary interest will be discussed in detail.
We will then provide numerical demonstrations for the evaluation of
various dynamical properties in \Sec{thnum}. Finally, the concluding
remarks will be given in \Sec{summary}.
\section{A real-time dynamics theory for nonequilibrium impurity systems}
\label{thheom}
\subsection{Prelude}
\label{thheomA}
Consider a quantum impurity system in contact with two electron reservoirs,
denoted as the $\alpha=$ L and R reservoirs, under the bias
voltage $V=\mu_{\rm L}-\mu_{\rm R}$.
The total Hamiltonian
of the composite system assumes the form of
\begin{align}
\label{Htotal}
H_{\T}&=H_{\s}+\sum_{\alpha k} (\epsilon_{\alpha k} + \mu_\alpha)\,
\hat d^{\dg}_{\alpha k}\hat d_{\alpha k}
\nl& \quad
+\sum_{\alpha\mu k}\left(t_{\alpha k\mu}\hat d^{\dg}_{\alpha k}\hat a_{\mu}
+ {\rm H.c.} \right).
\end{align}
The impurity system Hamiltonian $H_{\s}$ is rather general, including
many-particle interactions and external field coupling. Its second
quantization form is given in terms of electron creation and
annihilation operators, $\hat a^\dg_{\mu}\equiv \hat a^+_{\mu}$ and
$\hat a_{\mu}\equiv \hat a^-_{\mu}$, which are associated with the
system spin-state $\mu$.
The reservoirs are modeled by a noninteracting Hamiltonian; see the
second term on the right-hand side (rhs) of \Eq{Htotal}, where $\hat
d^{\dg}_{\alpha k}$ ($\hat d_{\alpha k}$) and $\epsilon_{\alpha k}$ are
the creation (annihilation) operator and energy of single-electron
state $|k\rangle$ of the $\alpha$-reservoir, respectively. While
the equilibrium chemical potential of total system is set to be
$\mu^{\rm eq}_{\alpha}=0$, the reservoir states are subject to a
homogeneous shift, $\mu_\alpha$, under applied voltages.
The last term on the rhs of \Eq{Htotal} is in a standard transfer
coupling form, which is responsible for the dissipative interactions
between the system and the itinerant electrons of the reservoirs. It can be
recast as $H'=\sum_{\alpha\mu}(\hat f^{+}_{\alpha\mu}
\hat a^{-}_{\mu}+\hat a^{+}_{\mu}\hat f^{-}_{\alpha\mu})$,
where $\hat f^{+}_{\alpha\mu}\equiv \sum_{k}t_{\alpha k\mu}
\hat d^{\dg}_{\alpha k}=\big(\hat f^{-}_{\alpha\mu}\big)^{\dg}$.
Throughout this paper we adopt the atomic unit $e=\hbar=1$ and
denote $\beta=1/(k_B T)$, with $k_B$ being the Boltzmann constant
and $T$ the temperature of the electron reservoirs. We also introduce
the sign variable $\sigma=+/-$, with $\bar\sigma \equiv -\sigma$
denoting the opposite sign of $\sigma$.
The $\alpha$-reservoir is characterized by the spectral density
$J_{\alpha \mu\nu}(\omega) \equiv \pi \sum_{k}t^\ast_{\alpha k\mu}
t_{\alpha k\nu} \delta(\omega - \epsilon_{\alpha k})$.
It influences the dynamics of the reduced system through the reservoir
correlation functions $\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\nu}
(t-\tau)\equiv \la\hat f^{\sigma}_{\alpha\mu}(t)
\hat f^{\bar\sigma}_{\alpha \nu}(\tau)\ra_{\alpha}$.
Here, $\la(\cdot)\ra_{\alpha}\equiv {\rm
tr}_{\alpha}\big[(\cdot)\,e^{-\beta H_{\alpha}}\big] /{\rm
tr}_{\alpha}(e^{-\beta H_{\alpha}})$ and $\hat
f^{\sigma}_{\alpha\mu}(t)\equiv e^{iH_{\alpha}t}\hat
f^{\sigma}_{\alpha\mu} e^{-iH_{\alpha}t}$, with $H_\alpha$ being the
Hamiltonian of $\alpha$-reservoir.
The superscript ``st'' highlights the stationary feature of the
nonequilibrium correlation function, under a constant $\mu_{\alpha}$.
It is related to the reservoir spectral density,
$J_{\alpha\mu\nu}(\omega)\equiv J^{-}_{\alpha\mu\nu}(\omega)\equiv
J^{+}_{\alpha\nu\mu}(\omega)$, via the fluctuation-dissipation theorem:
\cite{Jin08234703}
\be\label{FDT}
\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\nu}(t)
=\int_{-\infty}^{\infty}\! d\omega
\frac{ e^{\sigma i\omega t}J^{\sigma}_{\alpha\mu\nu}(\omega-\mu_{\alpha})
}{1+e^{\sigma\beta(\omega-\mu_{\alpha})}}.
\ee
Physically, $\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\nu}(t)$,
with $\sigma=+$ or $-$,
describes the processes of electron
tunneling from the $\alpha$-reservoir into the
specified system coherent state
or the reverse events, respectively.
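The fluctuation-dissipation relation above can be checked by direct numerical quadrature. The sketch below uses a Lorentzian spectral density $J(\omega)=\Delta W^2/(\omega^2+W^2)$ and illustrative parameter values of our own choosing; none of the numbers are from the paper. At $t=0$, $\mu_\alpha=0$, and $\sigma=+$, the integral reduces to $\int J(\omega)f(\omega)\,{\rm d}\omega=\pi\Delta W/2$ by the symmetry $f(\omega)+f(-\omega)=1$, which provides a sanity check.

```python
import numpy as np

def corr(t, beta, mu, sigma=+1, Delta=0.1, W=10.0,
         wmin=-2000.0, wmax=2000.0, n=200001):
    """Trapezoidal quadrature of the FDT integral for the stationary
    reservoir correlation function C^{sigma;st}(t).

    The Lorentzian band J(w) = Delta*W^2 / (w^2 + W^2) and all default
    parameters are illustrative choices, not taken from the paper.
    """
    w = np.linspace(wmin, wmax, n)
    J = Delta * W**2 / ((w - mu)**2 + W**2)          # J^{sigma}(w - mu)
    x = np.clip(sigma * beta * (w - mu), -700.0, 700.0)
    occ = 1.0 / (1.0 + np.exp(x))                    # Fermi-type factor
    integrand = np.exp(sigma * 1j * w * t) * J * occ
    dw = w[1] - w[0]
    # trapezoidal rule on the uniform grid
    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dw
```

With the default parameters, $C(0)$ comes out real and close to $\pi\Delta W/2\approx 1.571$ (the finite grid truncates a small band tail).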
We will be interested in nonequilibrium dynamic responses
to a time-dependent external field acting on either the local
system or the reservoirs. For the latter case, we include a
time-dependent shift in chemical potential $\delta\Delta_{\alpha}(t)$,
on top of the constant $\mu_{\alpha}$, to the $\alpha$-reservoir. Its
effect can be described by rigid homogeneous shifts for the reservoir
conduction bands, resulting in the nonstationary reservoir correlation
functions of
\be\label{CT}
C^{\sigma}_{\alpha\mu\nu}(t,\tau)
=\exp\left[\sigma i\!\int_{\tau}^t\! {\rm d}t'\, \delta\Delta_{\alpha}(t')\right]
\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\nu}(t-\tau).
\ee
This is the generalization of
$\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\nu}(t)=
e^{\sigma i\mu_{\alpha} t} \tilde{C}^{\sigma;{\rm eq}}_{\alpha\mu\nu}(t)$,
as inferred from \Eq{FDT},
with the equilibrium counterpart being of $\mu^{\rm eq}_{\alpha}=0$.
In the following, we focus on the situation of diagonal reservoir
correlation, \emph{i.e.}, $J^{\sigma}_{\alpha\mu\nu}(\omega)=
J^{\sigma}_{\alpha\mu\mu}(\omega)\,\delta_{\mu\nu}$, and
$\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\nu}(t)
=\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\mu}(t)\,\delta_{\mu\nu}$.
In constructing a closed HEOM,\cite{Jin08234703} we should
expand $\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\mu}(t)$
in a finite exponential series,
\be\label{CTexp}
\tilde{C}^{\sigma;{\rm st}}_{\alpha\mu\mu}(t) \simeq \sum_{m=1}^M
\eta^{\sigma}_{\alpha\mu m}e^{-\gamma^{\sigma}_{\alpha\mu m} t}.
\ee
A total of $M=N'+N$ poles are involved, arising
from the reservoir spectral density
and the Fermi function in the contour-integration
evaluation of \Eq{FDT}.
Various sum-over-poles schemes
have been developed, including the Matsubara spectrum decomposition scheme,\cite{Jin08234703}
a hybrid spectrum decomposition and frequency dispersion scheme,\cite{Zhe09164708}
the partial fractional decomposition scheme,\cite{Cro09073102}
and the Pad\'{e} spectrum decomposition (PSD) scheme,\cite{Hu10101106, Hu11244106}
with the primary focus on the Fermi function.
To our knowledge, the PSD scheme offers the best
performance to date. We will return to
this issue later; see remark-(6)
in \Sec{thheomB}.
In the present work we use
the $[N\!-\!1/N]$ PSD scheme.\cite{Hu10101106, Hu11244106}
It leads to a minimum $M=N'+N$ in the exponential
expansion of \Eq{CTexp} and thus an optimal HEOM construction.\cite{Jin08234703}
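As a minimal numerical illustration of why the choice of sum-over-poles scheme matters, the sketch below (Python, with illustrative parameters only) compares a truncated Matsubara expansion of the Fermi function, $f(x)=1/2-\sum_k 2x/(x^2+\nu_k^2)$ with $\nu_k=(2k-1)\pi$, against the exact $f(x)=1/(e^x+1)$; the slow $O(1/N)$ convergence of this expansion is what the PSD scheme improves upon so dramatically.

```python
import numpy as np

def fermi_exact(x):
    """Fermi function f(x) = 1/(e^x + 1), with x = beta*(epsilon - mu)."""
    return 1.0 / (np.exp(x) + 1.0)

def fermi_matsubara(x, n_poles):
    """Sum-over-poles (Matsubara) expansion of the Fermi function,
    truncated after n_poles terms: f(x) ~ 1/2 - sum_k 2x/(x^2 + nu_k^2),
    with purely imaginary poles at x = i*nu_k, nu_k = (2k - 1)*pi."""
    k = np.arange(1, n_poles + 1)
    nu = (2 * k - 1) * np.pi
    return 0.5 - np.sum(2.0 * x / (x**2 + nu**2))

x = 2.0
err_10 = abs(fermi_matsubara(x, 10) - fermi_exact(x))
err_1000 = abs(fermi_matsubara(x, 1000) - fermi_exact(x))
```

Even a thousand Matsubara poles leave a visible residual, whereas the cited PSD papers report machine-precision accuracy with $N\sim 10$ poles at comparable $x$.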
The exponential expansion form
of the reservoir correlation function in \Eq{CTexp}
dictates the explicit expressions for the HEOM formalism.\cite{Jin08234703}
For bookkeeping we introduce the abbreviated index
$j=\{\sigma\alpha\mu m\}$ for $\gamma_j\equiv \gamma^{\sigma}_{\alpha\mu m}$ and
so on, or $j=\{\sigma\mu\}$ for $\hat a_j \equiv \hat a^{\sigma}_{\mu}$.
Denote also $\bar j=\{\bar\sigma\alpha\mu m\}$ or $\{\bar\sigma\mu\}$ whenever
appropriate, with $\bar\sigma$ being the opposite
sign of $\sigma=+$ or $-$.
The dynamical variables in HEOM are a set
of auxiliary density operators (ADOs), $\{\rho^{(n)}_{j_1\cdots j_n}(t);
n=0,1,\cdots,L\}$, with $L$ being the terminal or truncated tier of hierarchy.
The zeroth-tier ADO is set to be the reduced system
density matrix, $\rho^{(0)}(t)\equiv \rho(t) \equiv {\rm tr}_{\B}\,[\rho_{\T}(t)]$,
\emph{i.e.}, the trace of the total system and bath composite density matrix
over reservoir bath degrees of freedom.
\subsection{Hierarchical equations of motion formalism}
\label{thheomB}
The HEOM formalism has been constructed from the Feynman--Vernon
influence functional path integral theory, together with the Grassmann algebra.\cite{Jin08234703}
The initial system-bath decoupling used to express the influence
functional explicitly is set in the infinite past.
It therefore introduces no approximation in characterizing any
realistic physical process that starts from a stationary state,
defined via the HEOM, which includes the coherence between the
system and bath.
For the detailed construction of the HEOM, we refer the reader to Ref.~\onlinecite{Jin08234703}.
Here we just briefly introduce the HEOM formalism and
discuss some of its key features.
The final HEOM formalism reads\cite{Jin08234703}
\begin{align}\label{HEOM}
\dot\rho^{(n)}_{j_1\cdots j_n} =&-\big[i{\cal L}(t)
+\gamma^{(n)}_{j_1\cdots j_n}\!(t)\big]\rho^{(n)}_{j_1\cdots j_n}
-i \sideset{}{'}\sum_{j}{\cal A}_{\bar j}\, \rho^{(n+1)}_{j_1\cdots j_nj}
\nl &
-i \sum_{r=1}^{n}(-)^{n-r}\, {\cal C}_{j_r}\,
\rho^{(n-1)}_{j_1\cdots j_{r-1}j_{r+1}\cdots j_n}\, .
\end{align}
The boundary conditions are $\gamma^{(0)}
=\rho^{(-1)}=0$, together with a truncation by setting all $\rho^{(n>L)}=0$.
The initial conditions to \Eq{HEOM}
will be specified in conjunction with the evaluation of various response
and correlation functions in \Sec{ththeo}.
The time-dependent damping parameter $\gamma^{(n)}_{j_1\cdots j_n}\!(t)$
in \Eq{HEOM}
collects the exponents of nonstationary reservoir correlation function
[\emph{cf.}~\Eqs{CT} and (\ref{CTexp})]:
\be\label{gammaSum}
\gamma^{(n)}_{j_1\cdots j_n}\!(t)=\sum_{r=1}^{n}
\big[\gamma_{j_r}-\sigma i \delta\Delta_{\alpha}(t)\big]_{\sigma,\alpha\in j_r}.
\ee
This expression has been used directly in
the HEOM evaluation of transient current dynamical properties
under the influence of arbitrary time-dependent chemical
potentials applied to electrode leads.\cite{Zhe08184112,Zhe08093016,Zhe09124508,Zhe09164708}
Note that $\gamma_{j}\equiv \gamma^{\sigma}_{\alpha\mu m}=
\gamma^{\sigma;{\rm eq}}_{\alpha\mu m}-\sigma i \mu_{\alpha}$.
In \Sec{ththeo3}, we will treat $\delta\Delta_{\alpha}(t)$ as perturbation
and derive the corresponding linear response theory formulations
for various transport current related properties under nonequilibrium
($\mu_{\alpha}\neq 0$) conditions.
To evaluate nonequilibrium correlation functions of the local system
via linear response theory (\emph{cf.}~\Sec{ththeo2}),
the time-dependent reduced system Liouvillian in \Eq{HEOM}
is formally assumed to take the form
\be\label{calLt}
\mathcal{L}(t)=\mathcal{L}_{s}+\delta\mathcal{L}(t).
\ee
Here, $\mathcal{L}_s\,\cdot\,\equiv [H_{\s},\,\cdot\,]$ remains
the commutator form involving two $H_{\s}$-actions onto
the bra and ket sides individually.
However, the time-dependent perturbation $\delta{\cal L}(t)$
may act only on one side, in line with the
HEOM expressions for local system correlation functions
\cite{Mo05084115,Zhu115678,Xu11497,Xu13024106,Li12266403}.
Other features of HEOM and remarks, covering both the theoretical formulation
and numerical implementation aspects, are summarized as follows.
(1) The Fermi-Grassmannian properties:
(\emph{i}) All $j$-indexes in a nonzero $n^{\rm th}$-tier ADO,
$\rho^{(n)}_{j_1\cdots j_n}$, must be distinct.
Swap in any two of them leads to a minus sign,
such as $\rho^{(2)}_{j_2j_1}=-\rho^{(2)}_{j_1j_2}$.
In line with this property, the sum of the
tier-up dependence in \Eq{HEOM} runs only over $j\not\in \{j_1,\cdots,j_n\}$;
(\emph{ii}) Involved in \Eq{HEOM} are also ${\cal A}_{\bar j}\equiv {\cal A}^{\bar\sigma}_{\mu}$
and ${\cal C}_j\equiv {\cal C}^{\sigma}_{\alpha\mu m}$. They are Grassmann superoperators,
defined via their actions on an arbitrary operator of fermionic or bosonic
(bi-fermion) nature,
$\hat O^{\text{\tiny F}}$ or $\hat O^{\text{\tiny B}}$, by
\be\label{calAC}
\begin{split}
{\cal A}_{\bar j} \hat O^{\text{\tiny F/B}}
&\equiv \hat a_{\bar j}\hat O^{\text{\tiny F/B}}
\mp \hat O^{\text{\tiny F/B}}\hat a_{\bar j} \, ,
\\
{\cal C}_{j} \hat O^{\text{\tiny F/B}}
&\equiv \eta_j\hat a_j\hat O^{\text{\tiny F/B}}
\pm \eta_{\bar j}^{\ast}\hat O^{\text{\tiny F/B}}\hat a_j \, .
\end{split}
\ee
In particular, even-tier ADOs are bosonic, while odd-tier ones are fermionic.
The case of opposite parity would also appear in conjunction with
applications; see comments following \Eq{tibmrho_init1}.
(2) Physical contents of ADOs:
While the zero-tier ADO is the reduced density matrix,
\emph{i.e.}, $\rho^{(0)}(t) = \rho(t)$,
the first-tier ADOs, $\rho^{(1)}_j\equiv \rho^{\sigma}_{\alpha\mu m}$,
are related to the electric current through the interface between the system
and $\alpha$-reservoir, $I_\alpha(t)$, as follows,
\be\label{curr_t}
I_{\alpha}(t)= - 2\,{\rm Im} \sum_{\mu m}
{\rm Tr}\left[\hat{a}^{+}_{\mu}\rho^{-}_{\alpha\mu m}(t)\right].
\ee
Moreover, we have
$\sum_m \rho^{\sigma}_{\alpha\mu m}(t)=
{\rm tr}_{\B}[\hat f^{\sigma}_{\alpha \mu}(t)\rho_{\T}(t)]$,
and can further relate
${\rm tr}_{\B}[\hat f^{\sigma}_{\alpha\mu}(t)
\hat f^{\sigma'}_{\alpha'\mu'}(t)\rho_{\T}(t)]$ to the second-tier ADOs,
and so on.
Note that $\hat f^{+}_{\alpha\mu}(t)\equiv e^{iH_{\alpha}t}
\big(\sum_{k}t_{\alpha k\mu}
\hat d^{\dg}_{\alpha k}\big)e^{-iH_{\alpha}t}
=\big[\hat f^{-}_{\alpha\mu}(t)\big]^{\dg}$
are defined in the bath-space only.
Apparently, the Fermi-Grassmannian properties in remark-(1) above are rooted in the
fermionic nature of individual $\{\hat f^{\sigma}_{\alpha\mu}\}$.
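The relation in \Eq{curr_t} between the first-tier ADOs and the electric current can be made concrete with a minimal sketch; the $2\times2$ matrices below describe a hypothetical single spinless level, and the entries of the first-tier ADO are made-up numbers rather than output of an actual HEOM propagation.

```python
import numpy as np

# Basis {|0>, |1>} of a single spinless impurity level (illustrative only).
a_dag = np.array([[0, 0],
                  [1, 0]], dtype=complex)   # creation operator a^+

# Hypothetical first-tier ADO rho^-_{alpha mu m}(t); in a real calculation
# this comes from propagating the HEOM.  ADOs need not be Hermitian.
rho_minus = np.array([[0.0, 0.1 + 0.05j],
                      [0.0, 0.0]], dtype=complex)

# I_alpha(t) = -2 Im sum_{mu,m} Tr[a^+_mu rho^-_{alpha mu m}(t)]
I_alpha = -2.0 * np.imag(np.trace(a_dag @ rho_minus))
```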
(3) Hermitian property: The ADOs satisfy the
Hermitian relation of $\big[\rho^{(n)}_{j_1\cdots j_n}(t)\big]^{\dg}
=\rho^{(n)}_{{\bar j}_n\cdots {\bar j}_1}(t)$,
whenever the perturbed $i\delta{\cal L}(t)$ action
is Hermitian; see the comments
following \Eq{calLt}.
(4) Nonperturbative nature: The HEOM construction treats properly
the combined effects of system-bath coupling strength,
Coulomb interaction, and bath memory time scales,
as inferred from the following observations.
(\emph{i}) For noninteracting electronic systems, the coupling hierarchy stops at
second tier level ($L=2$) without approximation;\cite{Jin08234703}
(\emph{ii}) HEOM is of finite support, containing in general only a finite number of ADOs.
Let $K$ be the number of all distinct $j$-indexes.
This number sets the maximum tier level
$L_{\text{max}}=K$, at which the HEOM formalism ultimately terminates.
The total number of ADOs, up to the truncated tier level $L$,
is $\sum^{L}_{n=0}\frac{K!}{n!(K-n)!}\leq 2^{K}$, as $L\leq L_{\text{max}}=K$;
(\emph{iii}) The hierarchy resolves collectively the memory contents, as decomposed in
the exponential expansion of bath correlation functions of \Eq{CTexp}.
It goes with the observation that an individual ADO, $\rho^{(n)}_{j_1\cdots j_n}$,
is associated with the collective damping constant
Re\,$\gamma^{(n)}_{j_1\cdots j_n}$ in \Eq{gammaSum}.
Meanwhile $\rho^{(n)}_{j_1\cdots j_n}$ is of leading $(2n)^{\rm th}$ order
in the overall system-bath coupling strength.
One may define proper non-Markovianity parameters to
determine in advance the numerical importance
of individual ADOs;\cite{Xu05041103,Shi09084105,Xu13024106}
(\emph{iv}) Convergence tests so far: for quantum
impurity systems with nonzero e-e interactions, calculations often converge
rapidly and uniformly with increasing truncation level $L$.
Quantitatively accurate results are usually achieved at a relatively low
value of $L$.
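The finite-support counting in remark-(4)(ii) can be checked directly; the sketch below evaluates $\sum_{n=0}^{L}\binom{K}{n}$, using for illustration the $K=2N_\alpha N_\mu M$ value corresponding to the SIAM setup of \Sec{thnum} ($N_\alpha=N_\mu=2$, $M=9$).

```python
from math import comb

def num_ados(K, L):
    """Total number of ADOs kept up to truncation tier L, given K distinct
    j-indexes: sum_{n=0}^{L} C(K, n) <= 2^K."""
    return sum(comb(K, n) for n in range(L + 1))

# K = 2 * N_alpha * N_mu * M; illustrative SIAM values N_alpha = N_mu = 2, M = 9.
K = 2 * 2 * 2 * 9          # = 72 distinct indexes
ados_L4 = num_ados(K, 4)   # truncation level used in the Results section
ados_max = num_ados(K, K)  # untruncated hierarchy, equals 2**K
```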
(5) Nonequilibrium versus equilibrium:
The HEOM formalism presented earlier provides a unified approach
to equilibrium, nonequilibrium, time-dependent and
time-independent situations.
In general, the number $K$ of distinct ADO indexes
amounts to $K=2N_{\alpha}N_{\mu}M$, as inferred from
\Eq{CTexp}, with $N_{\mu}$ being the number of
spin-orbitals of the system in direct contact with the leads.
The factor 2 accounts for the two choices
of the sign variable $\sigma$,
while $N_{\alpha}=2$ for the distinct $\alpha=$ L and R leads.
Interestingly, in the equilibrium
case, together with the $J_{\rm L}(\omega)\propto J_{\rm R}(\omega)$
condition, one can merge all leads into a single effective lead,
reducing $K$ to $2N_{\mu}M$.
The resulting equilibrium HEOM formalism no longer contains the
$\alpha$-index and can therefore be evaluated at
a considerably reduced computational cost.
(6) Control of accuracy and efficiency: The bath correlation function
in exponential expansion of \Eq{CTexp} dictates the accuracy
and efficiency of the HEOM approach.
(\emph{i}) The accuracy in the exponential expansion of \Eq{CTexp}
is found to be directly transferable to the accuracy
of HEOM. In other words, HEOM is exact as long as the expansion is exact;
(\emph{ii}) The expansion of \Eq{CTexp} is uniformly convergent,
and becomes exact when $M$ goes to infinity,
for any realistic bath spectral density with
finite bandwidth at finite temperature ($T \neq 0$);
(\emph{iii}) The $[N\!-\!1/N]$ PSD scheme adopted in this work
is considered the best among all possible sum-over-poles
expansions of the Fermi function.\cite{Hu10101106,Hu11244106,Bak96}
In particular, it is dramatically superior to the commonly
used Matsubara expansion.
The PSD scheme leads to the optimal HEOM, with a minimum $K$-space size,
for either equilibrium or nonequilibrium case, as discussed in
remark-(5) above.
(7) Computational cost: The CPU time and memory space required for
HEOM calculations are rather insensitive
to the Coulomb coupling strength and to the
equilibrium versus nonequilibrium and
time-dependent and time-independent types
of evaluations.
However, the cost grows exponentially as the temperature $T\rightarrow 0$,
and with increasing system-bath hybridization strength,
due to the significant increase of both the
converged $K$-space and $L$-space sizes.
To conclude, HEOM is an accurate and versatile tool,
capable of universal characterizations
of real-time dynamics in quantum impurity systems,
in both equilibrium and nonequilibrium cases.
These remarkable features have been demonstrated
recently in several complex quantum
impurity systems,\cite{Li12266403}
with the focus mainly on equilibrium properties.
The HEOM approach is also very efficient. Calculations often converge
rapidly and uniformly with the increasing truncation level $L$.
Quantitatively accurate results are usually achieved at
a relatively low level of truncation.\cite{Li12266403}
We will show in \Sec{thnum} that these features will largely
remain in the evaluations of nonequilibrium properties.
\section{Nonequilibrium response theory}
\label{ththeo}
\subsection{Linearity of the hierarchical Liouville space}
\label{ththeo1}
To highlight the linearity of HEOM, we arrange
the involving ADOs in a column vector,
denoted symbolically as
\be\label{bfrho}
{\bm\rho}(t)\equiv \big\{\rho(t),\,\rho^{(1)}_{j}\!(t),\,
\rho^{(2)}_{j_1\!j_2}\!(t),\, \cdots\,\big\}.
\ee
Thus, \Eq{HEOM} can be recast in a matrix-vector form (each element of
the vector in \Eq{bfrho} is a matrix) as follows,
\be
\dot{\bm\rho}=-i\bfL(t)\bm\rho, \label{heom-mat-vec}
\ee
with the time-dependent hierarchical-space Liouvillian, as inferred
from \Eqs{HEOM}--(\ref{calLt}), being of
\be\label{dfLsum}
\bfL(t)= \bfL_s+\delta\mathcal{L}(t){\bfone} + \delta{\bfV}(t)\, .
\ee
It consists of not only the time-independent $\bfL_s$
part, but also two time-dependent parts, each of which will
be treated as a perturbation at the linear-response level below.
Specifically, $\delta\mathcal{L}(t){\bfone}$,
with ${\bfone}$ denoting the unit operator in the hierarchical Liouville space,
is attributed to a time-dependent external field acting on the reduced system,
while $\delta{\bfV}(t)$ is diagonal and due to the time-dependent potentials
$\delta\Delta_{\alpha}(t)$ applied to electrodes.
We may denote $\delta\Delta_{\alpha}(t)=x_{\alpha}\delta\Delta(t)$,
with $0\leq x_{\rm L}\equiv 1+x_{\rm R} \leq 1$; thus
$\delta\Delta(t)=\delta\Delta_{\rm L}(t)-\delta\Delta_{\rm R}(t)$. It
specifies the additional time-dependent bias voltage, on top of the
constant $V=\mu_{\rm L}-\mu_{\rm R}$, applied across the two
reservoirs. As inferred from \Eq{gammaSum}, we have then
\be\label{del_bfL_prime}
\delta{\bfV}(t)=-{\bfS}\,\delta\Delta(t) ,
\ee
where ${\bfS}\equiv \text{diag}\big\{0,S^{(n)}_{j_1\cdots j_n};n=1,\cdots,L\big\}$,
with
\be\label{Sn}
S^{(n)}_{j_1\cdots j_n}\equiv \sum_{r=1}^n
\big(\sigma x_{\alpha}\big)_{\sigma,\alpha\,\in j_r}.
\ee
Note that $S^{(0)}=0$.
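Equation~(\ref{Sn}) amounts to simple bookkeeping over the $(\sigma,\alpha)$ content of the ADO indexes; a minimal sketch, with an assumed antisymmetric bias split $x_{\rm L}=1/2$, $x_{\rm R}=-1/2$ (which satisfies $x_{\rm L}=1+x_{\rm R}$), reads:

```python
def S_n(indexes, x):
    """Evaluate S^{(n)}_{j1...jn} = sum_r sigma_r * x_{alpha_r} [Eq. (Sn)].
    Each index carries a sign sigma = +1 or -1 and a lead label alpha;
    x maps the lead label to its bias fraction x_alpha."""
    return sum(sigma * x[alpha] for sigma, alpha in indexes)

# Assumed antisymmetric bias split: x_L = 1/2, x_R = -1/2.
x_bias = {'L': 0.5, 'R': -0.5}
s2 = S_n([(+1, 'L'), (-1, 'R')], x_bias)             # = 1.0

# The alpha'-resolved variant of Eq. (Salpha): x_alpha = delta_{alpha,alpha'}.
x_left = {'L': 1.0, 'R': 0.0}
s3 = S_n([(+1, 'L'), (-1, 'L'), (+1, 'R')], x_left)  # = 0.0
```

The empty index list reproduces $S^{(0)}=0$.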
The additivity of \Eq{dfLsum} and the linearity of HEOM lead readily to
the interaction picture of the HEOM dynamics in response to the
time-dependent external disturbance $\delta\bfL(t)=\delta{\mathcal
L}(t)\bfone +\delta{\bfV}(t)$.
The initial unperturbed ADOs vector assumes the nonequilibrium
steady-state form of
\be\label{st_ADOs}
{\bm\rho}^\text{st}(T,V)\equiv \big\{\bar\rho,\,
\bar\rho^{(1)}_{j},\,
\bar\rho^{(2)}_{j_1\!j_2},\,\cdots\,\big\},
\ee
under given temperature $T$ and constant bias voltage $V$.
It is obtained as the solution to the linear equations,
$\bfL_s{\bm\rho}^\text{st}(T,V)=0$, subject to the normalization
condition for the reduced density
matrix.\cite{Jin08234703,Zhe08184112,Zhe09164708}
The unperturbed HEOM propagator reads $\bfG_s(t)\equiv
\exp(-i\bfL_st)$. Based on the first-order perturbation theory,
$\delta\bm\rho(t) \equiv \bm\rho(t)- {\bm\rho}^\text{st}(T,V)$ is then
\be\label{del_bfrho}
\delta\bm\rho(t)=-i\int_{0}^t\! {d}\tau\,
\bfG_s(t-\tau)\delta{\bfL}(\tau){\bm\rho}^\text{st}(T,V).
\ee
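The first-order formula \Eq{del_bfrho} can be verified numerically on a toy model; the sketch below uses a made-up $2\times2$ Hermitian "Liouvillian" whose zero mode stands in for ${\bm\rho}^{\rm st}$, and checks that the convolution integral reproduces the full propagation up to a residual of second order in the perturbation strength.

```python
import numpy as np

# Toy stand-ins (all numbers made up): a Hermitian "hierarchical Liouvillian"
# L0 with a zero mode playing the role of the stationary ADO vector, plus a
# constant perturbation eps*V switched on at t = 0.
L0 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])
V = np.array([[1.0, 0.0],
              [0.0, -1.0]])
eps = 1e-3
rho_st = np.array([0.5, 0.5])      # L0 @ rho_st = 0

lam, U = np.linalg.eigh(L0)

def G0(t):
    """Unperturbed propagator G_s(t) = exp(-i*L0*t), via eigendecomposition."""
    return (U * np.exp(-1j * lam * t)) @ U.T

# delta_rho(t) = -i * int_0^t G0(t - tau) [eps*V] rho_st dtau  (trapezoid rule)
t_end, dtau = 2.0, 1e-3
taus = np.arange(0.0, t_end + dtau, dtau)
vals = np.array([G0(t_end - tau) @ (eps * V @ rho_st) for tau in taus])
delta_rho = -1j * dtau * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))

# Reference: exact propagation with the full generator L0 + eps*V.
lam1, U1 = np.linalg.eigh(L0 + eps * V)
rho_full = (U1 * np.exp(-1j * lam1 * t_end)) @ U1.T @ rho_st
residual = np.linalg.norm(rho_full - rho_st - delta_rho)
```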
The response magnitude of a local system observable $\hat A$ is
evaluated by the variation in its expectation value, $\delta A(t) =
{\rm Tr}\{\hat A\delta\rho(t)\}$. Apparently, this involves the
zeroth-tier ADO $\delta\rho(t)$ in
$\delta\bm\rho(t)
\equiv \big\{\delta\rho(t),\,
\delta\rho^{(n)}_{j_1\cdots j_n}(t); n=1,\cdots,L\big\}$.
In contrast, the response current under applied voltages cannot be
extracted from $\delta\rho(t)$, because the current operator is not a
local system observable. Instead, as inferred from \Eq{curr_t}, while
the steady-state current $\bar I_{\alpha}$ through $\alpha$-reservoir
is related to the steady-state first-tier ADOs, $\bar\rho^{(1)}_j\equiv
\bar\rho^{\sigma}_{\alpha\mu m}$, the response time-dependent current,
$\delta I_{\alpha}(t)=I_{\alpha}(t)-\bar I_{\alpha}$, is obtained via
$\delta\rho^{(1)}_j(t)=\delta \rho^{\sigma}_{\alpha\mu m}(t)$.
The above two situations will be treated respectively, by considering
$\delta{\bfL}(\tau)=\delta{\mathcal L}(t)\bfone$ and
$\delta{\bfL}(\tau)=\delta\bfV(t)$, in the coming two subsections:
\Sec{ththeo2} treats the local system response to a time-dependent
external field acting on the reduced system, while \Sec{ththeo3}
addresses the issue of electric current response to external voltage
applied to reservoirs.
\subsection{Nonequilibrium correlation and response functions of system}
\label{ththeo2}
Let $\hat A$ and $\hat B$ be two arbitrary local system operators, and
consider the correlation functions, $C_{AB}(t-\tau)=\la \hat A(t)\hat
B(\tau) \ra_{\rm st}$ and $S_{AB}(t-\tau)= \la \{\hat A(t),\hat
B(\tau)\} \ra_{\rm st}$, and response function, $\chi_{AB}(t-\tau)=i\la
[\hat A(t),\hat B(\tau)] \ra_{\rm st}$. It is well known that for the
equilibrium case they are related to each other via the
fluctuation-dissipation theorem. The nonequilibrium case is rather
complicated, and the relation between nonequilibrium correlation and
response functions is beyond the scope of the present paper.
We now focus on the evaluation of local system correlation/response
functions with the HEOM approach. This is based on the equivalence
between the HEOM-space linear response theory of \Eq{del_bfrho} and
that of the full system-plus-bath composite space.
We start with the evaluation of nonequilibrium steady-state correlation
function $C_{AB}(t)=\la \hat A(t)\hat B(0) \ra_{\rm st}$, as follows.
By definition, the system correlation function can be recast into the
form of
\begin{align}\label{Cab_def}
C_{AB}(t)
&={\rm Tr}_{\T} \big\{\hat A{\cal G}_{\T}(t)
[\hat B\rho^{\rm st}_{\T}(T,V)]\big\}
\nl&
\equiv {\rm Tr}_{\T}[\hat A \ti\rho_{\T}(t)]
\nl&
={\rm Tr}[\hat A \ti\rho(t)].
\end{align}
The $\rho^{\rm st}_{\T}(T,V)$ and ${\cal G}_{\T}(t)$ in the first
identity are the steady-state density operator and the propagator,
respectively, in the total system-bath composite space under constant
bias voltage $V$. Also defined in the last two identities of \Eq{Cab_def} are
$\ti\rho_{\T}(t)\equiv{\cal G}_{\T}(t)\ti\rho_{\T}(0)$
and $\ti\rho(t)\equiv {\rm tr}_{\B}\ti\rho_{\T}(t)$,
with
$\ti\rho_{\T}(0) = \hat B\rho^{\rm st}_{\T}(T,V)$.
Equation (\ref{Cab_def}) can be considered in terms of the linear
response theory, in which the perturbation Liouvillian induced by an
external field $\delta\epsilon(t)$ assumes the form of $-i\delta{\cal
L}(t)(\cdot)=\hat B(\cdot)\delta\epsilon(t)$, followed by the
observation on the local system dynamical variable $\hat A$. Both $\hat
A$ and $\hat B$ can be non-Hermitian. Moreover, $\delta{\cal L}(t)$ is
treated formally as a perturbation and can be a one-side action rather
than having a commutator form.
For the evaluation of $C_{AB}(t)$ with the HEOM-space dynamics, the
corresponding perturbation Liouvillian is $\delta{\bfL}(t)=\delta{\cal
L}(t)\bfone$, with the above defined $\delta{\cal L}(t)$. It leads to
$-i\delta{\bfL}(\tau){\bm\rho}^{\rm st}(T,V) =\hat B {\bm\rho}^{\rm
st}(T,V) \delta\epsilon(\tau)$ involved in \Eq{del_bfrho}. The linear
response theory that leads to the last identity of \Eq{Cab_def} is now
of the $\ti\rho(t)$ being just the zeroth-tier component of
\be\label{tibmrho_t}
\ti{\bm\rho}(t)\equiv \big\{\ti\rho(t),\,\ti\rho^{(1)}_{j}\!(t),\,
\ti\rho^{(2)}_{j_1\!j_2}\!(t),\,\cdots\,\big\}
={\bfG}_s(t)\ti{\bm\rho}(0), \ee
with the initial value of [\emph{cf.}~\Eq{st_ADOs}]
\be\label{tibmrho_init1}
\ti{\bm\rho}(0)=\hat B{\bm\rho}^{\rm st}(T,V)
= \big\{\hat B\bar\rho,\hat B\bar\rho^{(1)}_{j}\!,\,
\hat B\bar\rho^{(2)}_{j_1\!j_2},\,\cdots\,\big\}\,.
\ee
The HEOM evaluations of $S_{AB}(t)$ and $\chi_{AB}(t)$
are similar, but with the initial ADOs of
$\ti{\bm\rho}(0)=\{\hat B,{\bm\rho}^{\rm st}(T,V)\}$
and $i[\hat B,{\bm\rho}^{\rm st}(T,V)]$, respectively.
Care must be taken when propagating \Eq{tibmrho_t}, for the HEOM
propagator ${\bfG}_s(t)$ involving the Grassmann superoperators ${\cal
A}_{\bar j}$ and ${\cal C}_j$ defined in \Eq{calAC}. Note that the
steady-state system density operator $\bar\rho$ is always of the
Grassmann-even (or bosonic) parity. Therefore, the zeroth-tier ADO
$\ti\rho(t)$ in the above cases is of the same Grassmann parity as the
operator $\hat B$, while the ADOs at the adjacent neighboring tier
level are of opposite parity. The HEOM propagation in \Eq{tibmrho_t} is
then specified accordingly.
It is also worth pointing out that the HEOM evaluation of equilibrium
correlation and response functions of the local system can be
simplified when $J_{\rm L}(\omega)\propto J_{\rm R}(\omega)$. In this
case, two reservoirs can be combined as a whole entity bath, resulting
in a HEOM formalism that depends no longer on the reservoir-index
$\alpha$.
\subsection{Current response to applied bias voltages}
\label{ththeo3}
\subsubsection{Dynamic differential admittance}
\label{ththeo3.1}
Consider first the differential circuit current through a two-terminal
transport system composed of a quantum impurity and two leads, $\delta
I(t)=\frac{1}{2}[\delta I_{\rm L}(t) -\delta I_{\rm R}(t)]$, in
response to a perturbative shift of reservoir chemical potential
$\delta\Delta(t)$.
We have
\be\label{Galp_t}
\delta I_{\alpha}(t)=\int_{0}^{t}\! {d}\tau\,
G_{\alpha}(t-\tau)\,\delta\Delta(\tau).
\ee
The HEOM-space dynamics results in
\be\label{Galp_t_HEOM}
G_{\alpha}(t)= 2\, {\rm Re} \sum_{\mu m}
{\rm Tr}\left[\hat a^{+}_{\mu}
\ti \rho^{-}_{\alpha\mu m}(t)\right],
\ee
with $\ti \rho^{-}_{\alpha\mu m}(t)$ denoting the first-tier ADOs in
$\ti{\bm\rho}(t)$ [\Eq{tibmrho_t}] with the initial value of
[\emph{cf.}\ \Eqs{del_bfL_prime}-(\ref{st_ADOs})]
\be\label{tibmrho_init2}
\ti{\bm\rho}(0)= -{\bfS}{\bm\rho}^\text{st}\!(T,\!V)
\equiv -\big\{0,S^{(1)}_{j}\!\bar\rho^{(1)}_{j}\!,\!
S^{(2)}_{j_1\!j_2}\bar\rho^{(2)}_{j_1\!j_2},\!\cdots\!
\big\}.
\ee
Denote the half-Fourier transform,
\be\label{Galp_w}
G_{\alpha}(\omega) \equiv \int_0^{\infty}\!\! {d}t\,
e^{i\omega t}G_{\alpha}(t).
\ee
The admittance is given by $G(\omega)= \frac{1}{2}[G_{\rm L}(\omega)- G_{\rm
R}(\omega)]$, with its zero-frequency component recovering the
steady-state differential conductance as $d\bar I/dV = G(\omega=0)$.
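The half-Fourier transform of \Eq{Galp_w} is straightforward to evaluate once $G_\alpha(t)$ has decayed; a minimal sketch (trapezoidal rule, with an illustrative single-exponential $G(t)=e^{-\gamma t}$ whose transform is $1/(\gamma-i\omega)$) is:

```python
import numpy as np

def half_fourier(G, omega, T=30.0, dt=1e-3):
    """G(omega) = int_0^inf e^{i omega t} G(t) dt, truncated at t = T and
    evaluated with the trapezoidal rule; G must have decayed by t = T."""
    t = np.arange(0.0, T + dt, dt)
    f = np.exp(1j * omega * t) * G(t)
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

# Check on G(t) = e^{-gamma t}, whose half-Fourier transform is 1/(gamma - i*omega).
gamma, omega = 1.0, 2.0
num = half_fourier(lambda t: np.exp(-gamma * t), omega)
exact = 1.0 / (gamma - 1j * omega)
```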
\subsubsection{Current-number and current-current response functions}
\label{ththeo3.2}
Consider now the differential current $\delta I_{\alpha}(t)$ in
response to an additional time-dependent chemical potential
$\delta\Delta_{\alpha'}(t)$ applied on a specified $\alpha'$-reservoir.
Note that the Hamiltonian of the total composite system, \Eq{Htotal},
is now subject to a perturbation of $\delta H_{\T}(t)= \hat
N_{\alpha'}\delta\Delta_{\alpha'}(t)$, with $\hat N_{\alpha'}=\sum_k
\hat d^{\dg}_{\alpha' k}\hat d_{\alpha' k}$ being electron number
operator of the $\alpha'$-reservoir. Thus, the hierarchical Liouville
space linear response theory leads to
\be \label{delI_resp1}
\delta I_{\alpha}(t)=\int_{0}^t\! {d}\tau\,
G_{\alpha\alpha'}(t-\tau)\, \delta\Delta_{\alpha'}(\tau) \, ,
\ee
where the kernel is characterized by the nonequilibrium steady-state
current-number response function,
\be\label{g_alp_t}
G_{\alpha\alpha'}(t-\tau) = -i\, \la [\hat I_{\alpha}(t), \hat N_{\alpha'}(\tau)]
\ra_{\rm st} \, ,
\ee
with $\la (\cdot)\ra_{\rm st} \equiv {\rm Tr}_{\T}[(\cdot)\rho^{\rm
st}_{\rm T}(T,V)]$.
Equation~\eqref{g_alp_t} can be derived by following \Eqs{curr_t},
(\ref{del_bfrho}) and (\ref{delI_resp1}), and the HEOM evaluation of
$G_{\alpha\alpha'}(t-\tau)$ can be achieved as follows.
Equation~\eqref{del_bfL_prime} is recast as
$\delta{\bfL}(t)=-{\bfS}_{\alpha'}\delta\Delta_{\alpha'}(t)$, where
${\bfS}_{\alpha'} =\text{diag}\big\{0,
S^{\alpha'}_{j_1\cdots j_n}\big\}$,
with $S^{\alpha'}_{j_1\cdots j_n}$ defined similarly to $S^{(n)}_{j_1\cdots
j_n}$ of \Eq{Sn} but with $x_{\alpha}=\delta_{\alpha\alpha'}$.
Therefore,
\be\label{Salpha}
S^{\alpha'}_{j_1\cdots j_n} = \sum_{r=1}^n
(\sigma \delta_{\alpha\alpha'})_{\sigma\alpha \in j_r} \, .
\ee
Its right-hand side collects the signs ($\sigma = +1$ or $-1$) of the involved
$(j\equiv\{\sigma\alpha\mu m\})$-indexes whenever $\alpha=\alpha'$.
The suitable initial values for the vector of ADOs are
$$
\ti{\bm\rho}_{\alpha'}(0)=-{\bfS}_{\alpha'}{\bm\rho}^\text{st}(T,V)
=-\left\{0,S^{\alpha'}_{j}\!\bar\rho^{(1)}_{j}\!,
S^{\alpha'}_{j_1\!j_2}\bar\rho^{(2)}_{j_1\!j_2},\!\cdots\!
\right\},
$$
followed by the unperturbed HEOM-space evolution,
\be\label{rhot_alpha}
\ti{\bm\rho}_{\alpha'}(t) = {\bfG}_s(t)\ti{\bm\rho}_{\alpha'}(0)
\equiv \big\{\ti\rho(t;\alpha'),\ \ti\rho^{(1)}_{j}\!(t;\alpha'),\,\cdots \big\}.
\ee
The involved first-tier ADOs, $\ti\rho^{(1)}_j(t;\alpha')\equiv$
\ti\rho^{\sigma}_{\alpha\mu m}(t;\alpha')$, are used to evaluate the
current-number response function [\emph{cf}.~\Eq{Galp_t_HEOM}]:
\be\label{gt_final}
G_{\alpha\alpha'}(t) = 2\, {\rm Re} \sum_{\mu m}
{\rm Tr}\left[\hat a^{+}_{\mu}
\ti\rho^{-}_{\alpha\mu m}(t;\alpha')\right] .
\ee
Apparently, $G_{\alpha}(t)= x_{\rm L} G_{\alpha{\rm L}}(t)+x_{\rm R}G_{\alpha{\rm R}}(t)$,
which is just the dynamic admittance
considered in \Sec{ththeo3.1}.
The nonequilibrium steady-state current-current response function,
$\chi_{\alpha\alpha'}(t)$, can be obtained numerically by taking the
time derivative of $G_{\alpha\alpha'}(t)$,
\be\label{chi_current}
\chi_{\alpha\alpha'}(t)\equiv i \la [\hat I_{\alpha}(t),\hat I_{\alpha'}(0)]\ra_{\rm st}
= {\dot G}_{\alpha\alpha'}(t).
\ee
In the hierarchical Liouville space, $\chi_{\alpha\alpha'}(t)$ can be
explicitly expressed by the zeroth-, first- and second-tier ADOs, as
inferred from \Eq{gt_final} and the EOM for $\ti{\rho}^{-}_{\alpha\mu
m}(t;\alpha')$. Its Fourier transform, the current-current response
spectrum, may carry certain information about the shot noise of the
impurity system.
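The time derivative in \Eq{chi_current} can be taken by central differences on the sampled kernel; the sketch below uses a made-up damped-oscillatory stand-in for $G_{\alpha\alpha'}(t)$, not data from an actual HEOM run.

```python
import numpy as np

def chi_from_G(G_vals, dt):
    """chi(t) = dG/dt, evaluated by central differences on a uniform grid;
    the returned array drops the two endpoint samples."""
    return (G_vals[2:] - G_vals[:-2]) / (2.0 * dt)

dt = 1e-3
t = np.arange(0.0, 2.0 + dt, dt)
G_vals = np.exp(-t) * np.cos(3.0 * t)     # stand-in for G_{alpha alpha'}(t)
chi = chi_from_G(G_vals, dt)
exact = -np.exp(-t) * (np.cos(3.0 * t) + 3.0 * np.sin(3.0 * t))
```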
In general, the correlation/response functions between an arbitrary
local system operator $\hat A$ and the electron number operator $\hat
N_{\alpha'}$ of the $\alpha'$-electrode can be evaluated via the
zeroth-tier ADO $\ti\rho(t;\alpha')$ of \Eq{rhot_alpha}, such as $i\la
[\hat A(t), \hat N_{\alpha'}(0)]\ra_{\rm st}=-{\rm Tr}[\hat A\,
\ti\rho(t;\alpha')]$, by using the HEOM Liouville propagator.
Its time derivative gives $i\la [\hat A(t), \hat
I_{\alpha'}(0)]\ra_{\rm st}$.
\section{Results and discussions}
\label{thnum}
We now demonstrate the numerical performance of the HEOM approach on
evaluation of nonequilibrium response properties of quantum impurity
systems. The hierarchical Liouville-space linear response theory
established in \Sec{ththeo} is employed to obtain the relevant
correlation/response functions, from which the response properties are
extracted.
It is worth emphasizing that the numerical examples presented in this
section aim at verifying the accuracy and universality of the proposed
methodology, rather than addressing concrete physical
problems. To this end, the widely studied standard single-impurity Anderson
model (SIAM) is considered.
The Hamiltonian of the single impurity is
$H_\text{sys}=\epsilon_{\uparrow} \hat{n}_{\uparrow}+ \epsilon_{\downarrow}
\hat{n}_{\downarrow} + U\hat{n}_{\uparrow}\hat{n}_{\downarrow}$, with
$\hat{n}_{\mu}=\hat a^{\dg}_{\mu}\hat a_{\mu}$ being the electron
number operator for the spin-$\mu$ ($\uparrow$ or $\downarrow$)
impurity level. The impurity is coupled to two noninteracting electron
reservoirs ($\alpha = $ L and R). For simplicity, the spectral
(or hybridization) function of $\alpha$-reservoir assumes a diagonal
and Lorentzian form, \emph{i.e.},
$J_{\alpha\mu\nu}(\omega)=\delta_{\mu\nu}
\frac{\Gamma_{\alpha}W^{2}_{\alpha}}{2[(\omega-\mu_{\alpha})^{2}+W^{2}_{\alpha}]}$,
with $\Gamma_{\alpha}$ and $W_\alpha$ being the linewidth and bandwidth
parameters, respectively.
Note that the same set of system parameters is adopted for all
calculations (unless otherwise specified): $\epsilon =
\epsilon_{\uparrow} = \epsilon_{\downarrow} = -0.5$, $U=1.5$, $T=0.02$, $\Gamma =
\Gamma_{\rm L} = \Gamma_{\rm R} = 0.1$, $W_{\rm L} = W_{\rm R} = 2$,
all in units of meV.
The nonequilibrium situation concerns a steady state defined by a fixed
bias voltage applied antisymmetrically to the two reservoirs,
\emph{i.e.}, $\mu_{\rm L} = -\mu_{\rm R} = \frac{V_{0}}{2}$ with
$V_{0} = -V = 0.2$, and/or $0.7\,$meV.
A recently developed $[N\!-\!1/N]$ Pad\'{e} spectrum decomposition
scheme\cite{Hu10101106, Hu11244106} with $N=8$ (i.e., $M=9$) is used
for the efficient construction of the hierarchical Liouville propagator
associated with \Eq{HEOM}.
To obtain quantitatively converged numerical results, we increase the
truncation level $L$ and the number of exponential terms $M$
continually until convergence is reached.
Table~\ref{table1} lists the probabilities that the impurity is singly
occupied by spin-$\mu$ electron ($P_\mu = \la\mu|\bar\rho(T,V)|\mu\ra$
with $\mu = \uparrow$ or $\downarrow$); or doubly occupied ($P_{\uparrow\downarrow}
=\la {\uparrow\downarrow} | \bar\rho(T,V) | {\uparrow\downarrow} \ra$).
Here, $\bar\rho(T,V)$ is the nonequilibrium steady-state reduced density
matrix under temperature $T$ and antisymmetric applied voltage $V$.
Calculations are done at different truncation level $L$ (up to $L=5$)
and fixed $M=9$. Apparently, the HEOM results converge rapidly
and uniformly with the increasing $L$, \emph{i.e.}, with higher-tier
ADOs included explicitly in \Eq{HEOM}. In particular, the remaining
relative deviations between the results of $L=4$ and $L=5$ are less than
0.1\%, indicating that the $L=4$ level of truncation is sufficient for
the present set of parameters.
It is also affirmed that $M=9$ is sufficient to yield convergent results;
see Supplemental Material.\cite{SM_skw}
These are further affirmed by the calculated steady-state current $\bar{I}(V)$
across the impurity, which also converges quantitatively with rather minor
residual uncertainty at $L = 4$ and $M = 9$.
Note also that the truncation at $L=1$ level results in the sequential
current contribution, which is negligibly small for the present
nonequilibrium system setup. The values of $\bar I$ evaluated at different
truncation levels clearly indicate the current contributions from different
orders of cotunneling processes.
\begin{table}
\begin{tabular}{c|c|c|c}
\hline \hline
$L$ & $P_{\mu}$ & $P_{\uparrow\downarrow}$ & $\ \bar I$ (nA) \\
\hline
1 & \, 0.500 ({\it 0.500}) & 0.001 ({\it 0.000}) & 0.003 \\
\hline
2 & \, 0.441 ({\it 0.462}) & 0.025 ({\it 0.027}) & 4.654 \\
\hline
3 & \, 0.439 ({\it 0.454}) & 0.024 ({\it 0.025}) & 4.920 \\
\hline
4 & \, 0.440 ({\it 0.457}) & 0.024 ({\it 0.024}) & 4.799 \\
\hline
5 & \, 0.440 ({\it 0.457}) & 0.024 ({\it 0.024}) & 4.799 \\
\hline \hline
\end{tabular}
\caption{Spin-$\mu$ single- and double-occupation probabilities
($P_{\mu}$ and $P_{\uparrow\downarrow}$), and steady-state current of
an SIAM with two electron reservoirs under an antisymmetrically
applied bias voltage of $V_{0} = -V = 0.7\,$meV. Calculations are done
by solving the HEOM of \Eq{HEOM} truncated at different levels $L$.
The parameters adopted are (in units of meV):
$\epsilon=\epsilon_{\uparrow}=\epsilon_{\downarrow}=-0.5$, $U=1.5$,
$\Gamma_{\rm L}=\Gamma_{\rm R}=0.1$, $W_{\rm L}=W_{\rm R}=2$, and
$T=0.02$. For comparison, the corresponding equilibrium ($V_{0} =
0$) values are shown in parentheses.}
\label{table1}
\end{table}
In the following, we first show the spectral function of the SIAM
calculated by using the HEOM approach (see \Fig{fig1}), and then
present the evaluation of some typical response properties in both
equilibrium and nonequilibrium situations. These will include the
local charge fluctuation spectrum $S_{Q}(\omega)$ (see \Fig{fig2}), local
magnetic susceptibility $\chi_{M}(\omega)$ (see \Fig{fig3}), and differential
admittance $G(\omega)$ (see \Fig{fig5}).
\emph{All calculations are carried out
at the truncation level of $L = 4$ and $M=9$.} Based on the analysis
of Table~\ref{table1}, the resulting response properties are expected
to be quantitatively converged with respect to $L=4$ and $M=9$.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig1.eps}
\caption{The HEOM calculated spectral function of an SIAM system, $A(\omega)=A_{\uparrow}(\omega)
=A_{\downarrow}(\omega)$, in units of $(\pi\Gamma)^{-1}$.
The parameters adopted are specified in the caption of Table~\ref{table1}.
The three panels exhibit the variations of $A(\w)$, particularly the
evolution of the Kondo and Hubbard peaks, versus
(a) the applied bias voltage $V_{0}$,
(b) the temperature $T$, and
(c) the shift of impurity level energy $\epsilon$ by a gate voltage, respectively.
} \label{fig1}
\end{figure}
Figure \ref{fig1} depicts the HEOM calculated spin-$\mu$ spectral
function of the impurity,
\be\label{Aomega}
A_{\mu}(\omega)=\frac{1}{\pi}\, {\rm Re} \left\{
\int_{0}^{\infty}\!\! d t\, e^{i\omega t} \left\la
\left\{\hat a_{\mu}(t), \hat a^{\dg}_{\mu}(0) \right\}
\right\ra_{\rm st}\, \right\}.
\ee
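As a side note, the half-Fourier structure of this definition is easy to reproduce numerically. The following sketch is purely illustrative and independent of the HEOM machinery: the damped-oscillation correlator and all parameter values are invented, and the grid-based transform simply recovers the expected Lorentzian peaks.

```python
import numpy as np

def trap(f, dt):
    # trapezoidal rule on a uniform grid (complex-valued allowed)
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

def spectral_function(corr, tgrid, omegas):
    # A(w) = (1/pi) Re int_0^inf dt e^{i w t} C(t), on a finite time grid
    C = corr(tgrid)
    dt = tgrid[1] - tgrid[0]
    return np.array([trap(np.exp(1j * w * tgrid) * C, dt).real / np.pi
                     for w in omegas])

# Toy anticommutator correlator: damped oscillation, mimicking a resonance
# of width gamma at w = +/- omega0 (values invented for illustration).
gamma, omega0 = 0.05, 0.5
corr = lambda t: np.exp(-gamma * t) * np.cos(omega0 * t)

omegas = np.linspace(-1.0, 1.0, 401)
tgrid = np.linspace(0.0, 400.0, 40001)
A = spectral_function(corr, tgrid, omegas)
# A(w) shows Lorentzian peaks of half-width gamma centered at w = +/- omega0.
```

The time grid must extend well beyond the decay time $1/\gamma$ for the truncated integral to approximate the半infinite one; here $e^{-\gamma t_{\max}} \approx e^{-20}$ is negligible.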
The effect of bias voltage $V_{0}$ on $A_\mu(\omega)$ is illustrated in
\Fig{fig1}(a). Clearly, the equilibrium $A_\mu(\omega)$ correctly reproduces the
well-known features of the SIAM, such as the Hubbard peaks at around $\omega =
\epsilon$ and $\omega = \epsilon + U$, and the Kondo peak centered at $\omega =
\mu^{\rm eq} = 0$.\cite{Hew93}
In Ref.\,\onlinecite{Li12266403}, the equilibrium $A_\mu(\omega)$ of the
SIAM in the Kondo regime was investigated thoroughly with the HEOM
approach, and the existence of the Kondo resonance was confirmed by the
correct universal scaling behavior.
In the nonequilibrium situation where an external voltage is applied
antisymmetrically to the L and R reservoirs, the Hubbard peaks remain
largely unchanged in both position and height.
In contrast, the Kondo peak is split by the voltage into two, which
appear at $\omega =\frac{V_{0}}{2}$ and $\omega = -\frac{V_{0}}{2}$ and
correspond to the shifted reservoir chemical potentials $\mu_{\rm L}$
and $\mu_{\rm R}$, respectively.
Obviously, as the bias voltage $V_{0}$ increases from $0$ to $0.2\,$meV,
then to $0.7\,$meV, the progressive splitting of Kondo peak is observed
in \Fig{fig1}(a).
Figure~\ref{fig1}(b) plots the calculated $A(\omega)$ of the same SIAM
system at various temperatures. Apparently, as the temperature
increases over an order of magnitude, the two Hubbard resonance peaks
at $\omega=\epsilon$ and $\omega=\epsilon+U$ almost remain intact. In
contrast, the split peaks at $\omega=\mu_{L}$ and $\omega=\mu_{R}$ vanish
quickly at higher temperatures. This clearly highlights the strong
electron correlation features in the present nonequilibrium SIAM.
To further verify that the split peaks near $\w = 0$ are of Kondo
nature, we examine the variation of $A(\w)$ versus a gate voltage
applied to the dot. The gate voltage is considered to shift the
impurity level energy $\epsilon$ by $0.1\,$meV, and the corresponding
change of calculated $A(\w)$ is shown in \Fig{fig1}(c). Apparently, as
$\epsilon$ drops from $-0.5$ to $-0.6\,$meV by the gate voltage, the
two Hubbard resonance peaks at $\omega=\epsilon$ and
$\omega=\epsilon+U$ are displaced by $0.1\,$meV. In contrast, the two
peaks at $\w = \pm \frac{V_0}{2}$ remain pinned to the reservoir
chemical potentials $\mu_{L}$ and $\mu_{R}$, indicating that these two
peaks are indeed of Kondo origin.
Usually, the Hubbard peaks at $\omega = \epsilon$ and $\omega =
\epsilon + U$ converge more rapidly than the resonance peaks at
$\omega=\pm\frac{V_{0}}{2}$ as the truncation level $L$ increases. This
also reflects the Kondo nature of resonance peaks at $\omega
=\pm\frac{V_{0}}{2}$.\cite{SM_skw} Moreover, there is a long-time
oscillatory tail in the real time evolution which is crucial for the
appearance of Kondo peaks. We also find that the short-time dynamics of
the retarded Green's function and high-frequency part of $A(\omega)$
converge more rapidly when the truncation level $L$ increases. In other
words, within the HEOM framework, one can extract the high-frequency part
of the spectral function at a lower truncation level and with a shorter
evolution time than are required at the resonance frequencies,
without compromising the accuracy.
We then exemplify the numerical tractability of the HEOM approach via
evaluation of three types of response properties. These include the
local charge fluctuation spectrum $S_{Q}(\omega)$, local magnetic
susceptibility $\chi_M(\omega)$, and differential admittance spectrum
$G_{\alpha\alpha'}(\omega)$, defined respectively as follows,
\begin{align}
S_{Q}(\omega) &\equiv \int^{\infty}_{-\infty} dt\, e^{i\omega t}
\big\langle \big\{\Delta \hat{Q}(t),\Delta \hat{Q}(0)\big\}
\big\rangle_{\rm st}\,,
\label{def-sqq} \\
\chi_{M}(\omega) &\equiv i\int_{0}^{\infty} dt\,
e^{i\omega t}\, \big\la \big[\hat M(t), \hat M(0)
\big]\big\ra_{\rm st}\,,
\label{def-chi} \\
G_{\alpha\alpha'}(\omega) &\equiv -i \int_{0}^\infty
dt\, e^{i\omega t}\, \big\la \big[\hat I_{\alpha}(t),
\hat N_{\alpha'}(0) \big] \big\ra_{\rm st}\,.
\label{def-g}
\end{align}
In \Eq{def-sqq}, $\Delta \hat{Q}(t)=\hat{Q}(t)-\langle
\hat{Q}\rangle_{\rm st}$, with $\hat{Q}= \sum_\mu \hat{n}_\mu$ being
the total impurity occupation number operator. Therefore, $\Delta
\hat{Q}(t)$ describes the fluctuation of occupation number around the
averaged value.
For $\chi_M(\omega)$ of \Eq{def-chi}, $\hat M = g\mu_{B}\hat{S}_z$ is the
impurity magnetization operator, which originates from the electron
spin polarization induced by external magnetic field. Here, $g$ is the
electron $g$-factor, $\mu_B$ is the Bohr magneton, and $\hat{S}_z =
(\hat{n}_{\uparrow}-\hat{n}_{\downarrow})/2$ is the impurity spin
polarization operator.
In \Eq{def-g}, $G_{\alpha\alpha'}(\omega)$ is just the half-Fourier
transform of current-number response function of \Eq{g_alp_t}
or (\ref{gt_final}).
The time $t = 0$ in \Eqs{def-sqq}--\eqref{def-g} represents the instant
at which the external perturbation (magnetic field or bias voltage) is
applied.
It is worth pointing out that all the three types of response
properties satisfy the following symmetry: the real (imaginary) part is
an even (odd) function of $\omega$.
This is due to the time-reversal symmetry of steady-state correlation
functions, \emph{i.e.}, $C_{AB}(t) = [C_{BA}(-t)]^\ast$.
In particular, for $S_{Q}(\omega)$ of \Eq{def-sqq}, $\hat{A} = \hat{B} =
\Delta\hat{Q}$. Consequently, it can be shown that $S_{Q}(\omega)$ is a
real function.
For clarity, Figs.~\ref{fig2}, \ref{fig3}, and \ref{fig5} will only
exhibit the $\omega \geq 0$ part of the dynamic response properties, while
the $\omega < 0$ part can be retrieved easily by applying the symmetry
relation.
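The quoted symmetry can be verified directly on toy correlators. The sketch below is illustrative only (the correlator forms and all parameters are invented, not taken from the HEOM calculation): part (i) checks the even/odd property for a half-Fourier response built from a purely imaginary commutator correlator, as for $\chi_M(\omega)$; part (ii) checks that the transform of a correlator obeying $C(-t)=C(t)^\ast$ with $\hat A = \hat B$ is real, as for $S_Q(\omega)$.

```python
import numpy as np

gamma, omega0 = 0.1, 0.6
t = np.linspace(0.0, 300.0, 30001)
dt = t[1] - t[0]

def trap(f):
    # trapezoidal rule on the uniform grid t (complex-valued allowed)
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))

# (i) chi_M-type half-Fourier response: for Hermitian M the commutator
# correlator is purely imaginary, phi(t) = 2i g(t) with g real (toy g below).
g = -np.exp(-gamma * t) * np.sin(omega0 * t)
chi = lambda w: 1j * trap(np.exp(1j * w * t) * (2j * g))
for w in (0.3, 0.6, 1.1):
    assert abs(chi(w).real - chi(-w).real) < 1e-8   # real part even in w
    assert abs(chi(w).imag + chi(-w).imag) < 1e-8   # imaginary part odd in w

# (ii) S_Q-type spectrum: C(-t) = C(t)^* folds the two-sided transform into
# S(w) = 2 Re int_0^inf dt e^{i w t} C(t), which is manifestly real.
C = np.exp(-gamma * t) * np.exp(-1j * omega0 * t)   # toy t >= 0 branch
S = lambda w: 2.0 * trap(np.exp(1j * w * t) * C).real
# S(w) is a real Lorentzian of height about 2/gamma centered at w = omega0.
```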
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig2.eps}
\caption{HEOM calculated local charge fluctuation spectrum,
$S_{Q}(\w)$ of \Eq{def-sqq}, in unit of $e^2$.
The parameters adopted are the same as those specified
in caption of Table~\ref{table1}.}
\label{fig2}
\end{figure}
The local charge fluctuation spectrum $S_{Q}(\omega)$ has been studied in
the context of shot noise of quantum dot systems.\cite{Agu04206601,
Fli05411, Luo07085325}
Figure~\ref{fig2} depicts the HEOM calculated $S_{Q}(\omega)$ of the SIAM
under our investigation. At equilibrium, the spectrum exhibits a
crossover behavior, where the two peaks centered at around $\omega =
|\epsilon| = 0.5$\,meV and $\omega = |\epsilon + U| = 1$\,meV largely overlap
and form a broad peak.
The positions of these two peaks correspond to the excitation energies
associated with change of impurity occupancy state.
In the nonequilibrium situation, the crossover peak is observed to move to
a lower energy, since the chemical potential of reservoir R is drawn
closer to the impurity state by the applied voltage.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig3.eps}
\caption{HEOM calculated local magnetic susceptibility,
$\chi_{M}(\omega)$ of \Eq{def-chi}, in unit of $g^{2}\mu^{2}_{B}/k_{B}T$.
The parameters adopted are the same as those specified
in caption of Table~\ref{table1}.
}
\label{fig3}
\end{figure}
The local magnetic susceptibility $\chi_M(\omega)$ is a response property
of fundamental significance, particularly for strongly correlated
quantum impurity systems. It has been studied by various methods such
as NRG.\cite{Mer12075153}
Figure~\ref{fig3} displays the HEOM calculated $\chi_M(\omega)$ of the SIAM
of our concern.
In both equilibrium and nonequilibrium situations, the main
characteristic features of $\chi_M(\omega)$ appear at around zero energy.
Moreover, the magnitude of $\chi_M(\omega)$ is found to be reduced
significantly in the presence of an applied bias voltage, especially in the
low energy range. This is consistent with the diminishing spectral
density at around zero-frequency due to the voltage-induced splitting
of Kondo peak; see \Fig{fig1}.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig4.eps}
\caption{Static local magnetic susceptibility multiplied by temperature,
$\chi_{M}(\omega = 0)\,T$ (in unit of $g^{2}\mu^{2}_{B}/k_{B}$),
versus $T/T_K$ for a series of equilibrium symmetric SIAM systems of different $U$.
Here, $T_K$ is the Kondo temperature, and $U$, $T$, and $W$ are in
unit of $\Gamma$. The HEOM results (scattered symbols) are compared with
the latest full density matrix NRG calculations (lines) of
Ref.~\onlinecite{Mer12075153} (Fig.~6 therein). The NRG calculations are for a very large reservoir bandwidth $W$,
while the HEOM results are obtained with relatively smaller bandwidths
($W = 10$ and $W = 20$) for saving computational cost.
}
\label{fig4}
\end{figure}
To verify the numerical accuracy of our calculated local magnetic
susceptibility, we compare the HEOM approach with the latest high-level
NRG method. The comparison is shown in \Fig{fig4}, where the equilibrium
static magnetic susceptibilities, $\chi_M(\omega = 0)$, of various symmetric
SIAM systems are calculated to reproduce the Fig.~6 of Ref.~\onlinecite{Mer12075153}.
Apparently, the HEOM and NRG results agree quantitatively with each other.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig5.eps}
\caption{HEOM calculated dynamic admittance,
$G(\omega) = \frac{1}{4}[G_{\rm LL}(\omega) + G_{\rm LR}(\omega) -
G_{\rm RL}(\omega) - G_{\rm RR}(\omega)]$ with $G_{\alpha\alpha'}(\omega)$ defined by
\Eq{def-g}, in unit of $e^2/h$.
The parameters adopted are the same as those specified
in caption of Table~\ref{table1}.
}
\label{fig5}
\end{figure}
The dynamic admittance is one of the most extensively studied response
properties of quantum dot systems. The frequency-dependent admittance
has been studied by scattering theory\cite{But93364, But934114,
Pre968130, But964793} and nonequilibrium Green's function
method.\cite{Win938487, Jau945528, Ana957632, Nig06206804, Wan07155336}
The HEOM approach has also been used to calculate the dynamic
admittance of noninteracting\cite{Zhe08184112} and interacting quantum
dots.\cite{Zhe08093016,Zhe09124508}
This was realized by calculating the transient current in response to a
delta-pulse voltage.\cite{Zhe08184112}
In the following, we revisit the evaluation for dynamic admittance
$G(\omega)$ via an alternative route, \emph{i.e.}, by calculating the
current-number response functions of \Eq{g_alp_t}.
Figure \ref{fig5} depicts the HEOM calculated differential admittance
of the SIAM under study, $G(\omega) = \frac{1}{4}[G_{\rm LL}(\omega) + G_{\rm LR}(\omega) -
G_{\rm RL}(\omega) - G_{\rm RR}(\omega)]=\frac{1}{2}[G_{L}(\omega)-G_{R}(\omega)]$;
\emph{cf.} \Eq{Galp_t}, with $G_{\alpha\alpha'}(\omega)$ defined
by \Eq{def-g}. Here we have chosen antisymmetrically applied probe ac bias,
$\delta\Delta_{L}(t)=-\delta\Delta_{R}(t)=\frac{1}{2}\delta\Delta(t)$.
As discussed extensively in Ref.\,\onlinecite{Zhe09164708},
the characteristic features of $G(\omega)$ appearing at around $\omega = |\epsilon|$
and $\omega = |\epsilon + U|$ correspond to the transitions between Fock states
of different occupancy, while the low-frequency features highlight the presence of
dynamic Kondo transition. Apparently, the dynamic Kondo transition is suppressed
by the applied voltage, which is analogous to the scenario of $\chi_M(\omega)$ as shown in
\Fig{fig3}.
\begin{figure}
\includegraphics[width=0.9\columnwidth]{fig6.eps}
\caption{
Calculated differential conductance $dI/dV$ of various symmetric SIAMs as a function of scaled voltage $V/T_{K}$.
Here, $T_K$ is the Kondo temperature calculated by $T_{K}=\frac{1}{2}\sqrt{\Gamma U}\, {\rm exp}(-\pi U/4\Gamma + \pi\Gamma/4U)$, with $\Gamma =\Gamma_{L}+\Gamma_{R}$.
Systems of three different combinations of parameters $U$ and $\Gamma$ (in units of meV) are demonstrated, with $T/T_{K}=1$ and $\Gamma_{L}=\Gamma_{R}$. The inset depicts $dI/dV$ versus unscaled voltage. The bandwidths of electrodes are $W_{\rm L}=W_{\rm R}=10\,$meV.
}
\label{fig6}
\end{figure}
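As a concrete check of the Kondo temperature formula quoted in the caption of \Fig{fig6}, the short sketch below evaluates it for the Table~\ref{table1} parameters ($U=1.5\,$meV, $\Gamma=\Gamma_{\rm L}+\Gamma_{\rm R}=0.2\,$meV); the numerical value is shown only as an illustration of the formula itself.

```python
import numpy as np

def kondo_temperature(U, Gamma):
    # T_K = (1/2) sqrt(Gamma U) exp(-pi U / 4 Gamma + pi Gamma / 4 U)
    return 0.5 * np.sqrt(Gamma * U) * np.exp(-np.pi * U / (4.0 * Gamma)
                                             + np.pi * Gamma / (4.0 * U))

# Table 1 parameters (in meV): U = 1.5, Gamma = Gamma_L + Gamma_R = 0.2.
T_K = kondo_temperature(1.5, 0.2)
# T_K is on the order of 1e-3 meV for these parameters, reflecting the
# exponential sensitivity of the Kondo scale to U / Gamma.
```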
At last we investigate the universal scaling properties of
nonequilibrium differential conductance $dI/dV$ (or the
zero-frequency admittance).
The universal scaling relations of nonequilibrium properties
of impurity systems with Kondo correlations have been studied.
\cite{Kam008154,Ros01156802,Pus04R513} For instance, Rosch~\emph{et al.}
have concluded that the nonequilibrium conductance at high voltages
scales universally with $V/T_K$; see the inset of Fig.~1 in
Ref.~\onlinecite{Ros01156802}.
To demonstrate the high accuracy of our HEOM approach in regimes
far from equilibrium, we calculate the conductance of various
symmetric SIAMs versus scaled and unscaled voltages, as displayed
in \Fig{fig6}.
Clearly, while the differences among the $dI/dV$--$V$ curves become more
accentuated at larger $V$ (see the inset of \Fig{fig6}), the $dI/dV$--$V/T_K$
curves exhibit a universal scaling relation for systems of different parameters.
Such a universal relation holds for all voltages examined (up to $V/T_K = 30$).
Therefore, the HEOM approach reproduces quantitatively the previously
predicted universal scaling relation for nonequilibrium conductance at
high voltages.
\section{Concluding remarks} \label{summary}
In this work, we develop a hierarchical dynamics approach for evaluation
of nonequilibrium dynamic response properties of quantum impurity systems.
It is based on a hierarchical equations of motion formalism, in conjunction
with a linear response theory established in the hierarchical Liouville space.
It provides a unified approach for arbitrary response and correlation functions
of local impurity systems, and transport current related response properties.
The proposed hierarchical Liouville-space approach resolves \emph{nonperturbatively}
the combined effects of \emph{e-e} interactions, impurity-reservoir dissipation,
and non-Markovian memory of reservoirs.
It offers a unified treatment of equilibrium and nonequilibrium dynamic
response properties of quantum impurity systems and can be applied to more
complex quantum impurity systems without extra derivation effort.
Moreover, the HEOM results converge rapidly and uniformly as
higher-tier ADOs are included explicitly, and one can often obtain
converged results at a relatively low truncation level $L$.
With our present code and the computational resources at our disposal,
the lowest temperature that can be quantitatively accessed is $T \simeq
0.1\, T_K$ for a symmetric SIAM.
For equilibrium properties, our HEOM approach has achieved the same level
of accuracy as the latest state-of-the-art NRG method.\cite{Li12266403}
In this work, the accuracy of HEOM approach for calculations of nonequilibrium
properties is validated by reproducing some known numerical results or analytic
relations,\cite{Mer12075153,Ros01156802,Mei932601,Kon02125304} such as the static
local magnetic susceptibility, and the universal scaling relation of high-voltage
conductance.
In conclusion, the developed hierarchical Liouville-space approach
provides an accurate and universal tool for investigation of general
dynamic response properties of quantum impurity systems. In particular,
it addresses the nonequilibrium situations and resolves the full
frequency dependence details accurately.
It is thus potentially useful for exploration of quantum impurity systems and
strongly correlated lattice systems (combined with dynamical mean field theory),
which are of experimental significance for the advancement of nanoelectronics,
spintronics, and quantum information and computation.
\acknowledgments
The support from the Hong Kong UGC (AoE/P-04/08-2) and RGC (Grant
No.\,605012), the NSF of China (Grant No.\,21033008, No.\,21103157,
No.\,21233007, No.\,10904029, No.\,10905014, No.\,11274085), the
Fundamental Research Funds for Central Universities (Grant No.\,2340000034
and No.\,2340000025) (XZ), the Strategic Priority Research Program (B) of the CAS (XDB01020000) (XZ and YJY), and the Program for Excellent Young Teachers
in Hangzhou Normal University (JJ) is gratefully acknowledged.
\section{Introduction} \label{section:intro}
In surveys with complete response, the Horvitz-Thompson (HT) estimator is a design-unbiased estimator of population totals \citep{hor:tho:52}.
In the presence of auxiliary information, its efficiency can be improved by incorporating into the estimator a working model that links the variable of interest and the auxiliary variables.
The resulting estimators are called model-assisted estimators. Such estimators are asymptotically unbiased and asymptotically more efficient than the HT estimator regardless of whether the working model is correctly specified.
\citet{sar:80}, \citet{rob:sar:83}, and \citet{sar:wri:84} are, to the best of our knowledge, some of the first papers that study model-assisted estimators. They are based on generalized linear regression as working model.
\citet{sar:swe:wre:92} extend traditional sampling theory to the model-assisted approach.
More recent works study model-assisted survey estimation with modern and flexible prediction techniques such as \cite{bre:ops:00}, \cite{bre:cla:ops:05}, \cite{Brei:Opso:John:Rana:semi:2007}, \cite{mcc:bre:13}, \citet{bre:ops:17:modelassist}, and \citet{dag:gog:haz:21b}.
Survey data generally suffer from nonresponse which can be seen as a second phase of the survey.
In this second phase, a sample of survey respondents is selected from the sampled units. This results in a partition of the sample into two subsamples: the respondents for which the value of the variable of interest is observed, and the nonrespondents for which this value is missing. The probability that a sampled unit is a survey respondent is called response probability. It represents the inclusion probability of the second phase and is unknown \citep{sar:swe:87}.
One approach to handle nonresponse consists of reweighting survey respondents to compensate for nonrespondents.
By considering nonresponse as a second phase of the survey, the HT estimator can be adapted to two-phase sampling by increasing the weight of respondents using the inverse of the response probabilities.
However, the response probabilities are unknown in practice.
One solution is to estimate the response probabilities and use the estimated probabilities instead of the true response probabilities in the two-phase estimator.
The resulting estimator is called nonresponse weighting adjusted (NWA) estimator, or empirical double expansion estimator. An overview of NWA methods is available in \citet{lun:sar:99} and \citet{lee:ran:sar:02}.
In presence of nonresponse, the aforementioned model-assisted estimators are unavailable.
In this article, we propose a model-assisted estimator adapted for nonresponse.
This estimator is a blend between a model-assisted estimator and a NWA estimator.
We reweight the respondents in a model-assisted estimator using the inverse of the estimated response probabilities to compensate for the nonrespondents. To the best of our knowledge, \citet{kim:haz:14} are the only other authors who provide a model-assisted estimator that handles nonresponse.
In their approach, both the working model and the nonresponse model are parametric. The models are estimated simultaneously using maximum likelihood. They show that the resulting estimator is doubly-robust. Our proposed approach is more flexible: it allows for these models to be parametric and non-parametric and to be estimated separately.
Different working models are studied.
Asymptotic design unbiasedness and efficiency of the proposed estimator are studied and proven for some working and nonresponse models.
We show that the proposed estimator can be viewed as a reweighted estimator and that the resulting weights are calibrated to the totals of the auxiliary variables for some working models. We provide a formula for the asymptotic variance and a variance estimator of the proposed total estimator. We also conduct a simulation study that shows that our proposed estimator performs well in terms of bias and variance, even when one of the two models, nonresponse model or working model, is misspecified.
The article is organized as follows. In Section~\ref{section:framework}, we provide an introduction to the context and some notations. Our proposed estimator is introduced in Section~\ref{section:nwaestimator}. We study different statistical learning techniques as working models in Section~\ref{section:statistical:learning}.
We develop the asymptotic properties of our proposed estimator in Section~\ref{section:asymptotics}. In section~\ref{section:variance}, we discuss the variance and its estimator. A simulation study is presented in Section~\ref{section:simulation}. It confirms the performance of our estimator. The main part of this article closes with a short discussion in Section~\ref{section:discussion}. Supplementary material is presented in the Appendices.
\section{Context}\label{section:framework}
We consider a finite population $U=\{ 1,2,...,N \}$. Let $s \subset U$ be a sample of size $n$ selected in $U$ according to a sampling design $p(.)$. The first and second order inclusion probabilities are denoted by $\pi_k$ and $\pi_{k\ell} = {\rm pr}(k, \ell \in s)$ for generic units $k$ and $\ell$. Consider the sample membership indicator $a_k$ of a unit $k$. We have ${\rm pr}(a_k=1) = \pi_k$, ${\rm pr}(a_k=0) = 1-\pi_k$, and ${\rm E}_p(a_k) = \pi_k$, where subscript $p$ means that the expectation is computed with respect to the sampling design $p(.)$. The covariance between the sample membership indicators is $\Delta_{k\ell} = \mbox{cov}_p(a_k,a_\ell)=\pi_{k\ell}-\pi_k\pi_\ell$.
The goal is to estimate the population total $t$ of some variable of interest $y$ with values $\{y_k\}$ known only for those units in the sample. With no additional information, the total $t$ can be estimated by the \textit{expansion estimator} or Horvitz-Thompson (HT) estimator \citep{hor:tho:52}
\begin{align}
\label{HT}
\widehat{t}_{HT} &= \sum_{k \in s} \dfrac{y_k}{\pi_k}.
\end{align}
The HT estimator is design-unbiased, that is ${\rm E}_p(\widehat{t}_{HT})=t$, provided that the inclusion probabilities are all strictly larger than 0. Under additional assumptions detailed in Section~\ref{section:asymptotics}, the estimator
\begin{align}\label{estimator:var:HT}
\widehat{{\rm var}}\left(\widehat{t}_{HT}\right) &= \sum_{k,\ell \in s} \frac{y_k}{\pi_k}\frac{y_\ell}{\pi_\ell} \frac{\Delta_{k\ell}}{\pi_{k\ell}}
\end{align}
is design-unbiased and consistent for the variance of the HT estimator.
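To make the estimator and its variance estimator concrete, the following simulation sketch uses an invented population and a Poisson sampling design, for which $\Delta_{k\ell}=0$ when $k\neq\ell$ and the variance estimator \eqref{estimator:var:HT} reduces to its diagonal terms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population and a Poisson sampling design, for which
# Delta_{kl} = 0 when k != l and Delta_{kk} = pi_k (1 - pi_k).
N = 5000
y = rng.gamma(shape=2.0, scale=10.0, size=N)
pi = np.clip(0.02 + 0.3 * y / y.max(), 0.02, 0.9)

def ht_estimate(y, pi, sampled):
    ys, ps = y[sampled], pi[sampled]
    t_hat = np.sum(ys / ps)                          # Horvitz-Thompson total
    v_hat = np.sum((ys / ps) ** 2 * (1.0 - ps))      # Poisson-design variance est.
    return t_hat, v_hat

# Approximate design unbiasedness over repeated Poisson samples:
estimates = [ht_estimate(y, pi, rng.random(N) < pi)[0] for _ in range(500)]
# np.mean(estimates) is close to y.sum(), up to Monte Carlo error.
```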
Suppose that a vector of auxiliary variables $\hbox{$\bf x$}_k = (x_{k1}, \dots, x_{kp})^\top $ is known for each population unit $k \in U$ or at least each sampled unit $k \in s$. We consider a working model $\xi$ that links the $\hbox{$\bf x$}_k$'s and $y_k$'s as follows
\begin{equation}
\label{model}
\xi:y_k=m(\hbox{$\bf x$}_k)+\varepsilon_k,
\end{equation}
where $m(.)$ is an unknown function, ${\rm E}_\xi (\varepsilon_k) = 0$, ${\rm var}_\xi (\varepsilon_k)=\sigma_k^2$, and subscript $\xi$ indicates that the expectation and variance are computed under model $\xi$. This working model may be used to improve the efficiency of the HT estimator while maintaining, or almost, its design unbiasedness. Such methods are called \emph{model-assisted}.
The \emph{difference estimator}
\begin{equation}
\label{estimator:diff:m}
\widehat{t}_{m} = \sum_{k \in U} m(\hbox{$\bf x$}_k) + \sum_{k \in s} \dfrac{y_k-m(\hbox{$\bf x$}_k)}{\pi_k}
\end{equation}
is an estimator of the total $t$. It is design-unbiased and has less variability than the HT estimator provided that the ``residuals'' $\{ y_k - m(\hbox{$\bf x$}_k) \}$ have less variability than the ``raw values'' $\{ y_k \}$. This holds regardless of the quality of model $\xi$ \citep{bre:ops:17:modelassist}. Since the function $m(.)$ is unknown, we may estimate it from the values $\{ (\hbox{$\bf x$}_k, y_k) \}$, $k \in U$, based on a standard estimation method. The estimate is written $m_U(.)$.
Substituting $m_U(.)$ for $m(.)$ yields the \emph{pseudo-generalized difference estimator}
\begin{equation}
\label{estimator:diff:mU}
\widehat{t}_{m_U} = \sum_{k \in U} m_U(\hbox{$\bf x$}_k) + \sum_{k \in s} \dfrac{y_k-m_U(\hbox{$\bf x$}_k)}{\pi_k}.
\end{equation}
It is a typical model-assisted estimator. \cite{bre:ops:17:modelassist} show that it is 1) design-unbiased and 2) more efficient than the HT estimator provided that the ``residuals'' $\{ y_k - m_U(\hbox{$\bf x$}_k) \}$ have less variability than the ``raw values'' $\{ y_k \}$. This holds regardless of the quality of working model $\xi$.
The population estimator $m_U(.)$ is unavailable and can be estimated by $m_s(.)$ based on the known sample values $\{ (\hbox{$\bf x$}_k, y_k) \},k \in s$.
Substituting in~\eqref{estimator:diff:mU} yields the \emph{model-assisted estimator}
\begin{equation}
\label{estimator:diff:ms}
\widehat{t}_{m_s} = \sum_{k \in U} m_s(\hbox{$\bf x$}_k) + \sum_{k \in s} \dfrac{y_k-m_s(\hbox{$\bf x$}_k)}{\pi_k}.
\end{equation}
\cite{bre:ops:17:modelassist} show that, under some regularity conditions and for some specific working models including heteroscedastic multiple regression, linear mixed models, and some statistical learning techniques, the model-assisted estimator $\widehat{t}_{m_s}$ based on $m_s(\cdot)$ is 1) asymptotically design unbiased and 2) asymptotically more efficient than the HT estimator provided that the ``residuals'' $\{ y_k - m_s(\hbox{$\bf x$}_k) \}$ have less variability than the ``raw values'' $\{ y_k \}$. This holds regardless of the quality of working model $\xi$.
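A minimal numerical illustration of $\widehat{t}_{m_s}$, with an invented population, a linear working model fitted by ordinary least squares on the sample, and an equal-probability Poisson design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population with one auxiliary variable known for every unit.
N = 4000
x = rng.uniform(1.0, 5.0, size=N)
y = 3.0 + 2.0 * x + rng.normal(scale=1.0, size=N)
pi = np.full(N, 0.1)                      # equal-probability Poisson design
sampled = rng.random(N) < pi
X = np.column_stack([np.ones(N), x])

# Fit m_s on the sample by ordinary least squares, then form
# t_hat = sum_U m_s(x_k) + sum_s (y_k - m_s(x_k)) / pi_k.
beta_s, *_ = np.linalg.lstsq(X[sampled], y[sampled], rcond=None)
m_s = X @ beta_s
t_ms = m_s.sum() + np.sum((y[sampled] - m_s[sampled]) / pi[sampled])

t_ht = np.sum(y[sampled] / pi[sampled])
# Because the residuals vary far less than the raw y values here, t_ms is
# typically much closer to y.sum() than the HT estimator t_ht.
```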
In practice, some values $\{ y_k \}$ may be missing because they are collected incorrectly or some units refrain from responding. In this case, the HT estimator and all aforementioned estimators are unavailable. Let $p_k$ and $r_k$ denote, respectively, the response probability and response indicator to variable $y$ of a unit $k \in U$. These are related via ${\rm pr}(r_k = 1) = p_k$ and ${\rm pr}(r_k = 0) = 1-p_k$.
Consider $s_r = \{ k \in U \mid a_k = 1, r_k = 1\}$, the set of $n_r$ units in~$s$ for which variable $y$ is known. The units in $s_r$ are called \emph{respondents}. The process that generates the respondents is called the \emph{nonresponse mechanism}. In the main part of this paper, we suppose that the nonresponse mechanism satisfies the following conditions:
\begin{enumerate}[({NR}1):]
\item \label{assumption:MAR} The data is missing at random~\citep[see][for a detailed definition]{rub:76}.
\item \label{assumption:indep} The response indicators are independent of one another and of the selected sample~$s$. This means that the values $\{r_k\}$ are obtained from a Poisson sampling design, i.e. the $\{ r_k \}$ are generated from independent Bernoulli random variables with ${\rm E}_q(r_k \mid s) = {\rm E}_q(r_k) = p_k$, where ${\rm E}_q(.)$ is the expectation under the nonresponse mechanism.
\item The response probabilities are bounded below, i.e. there exists a constant $c>0$ so that $p_k>c$ for all $k \in s$.
\item \label{assumption:logistic} The response probabilities are
\begin{align}\label{logistic}
p_k = \dfrac{1}{F(\hbox{$\bf x$}_k^\top\boldsymbol{\lambda}_0)} = \frac{\exp(\hbox{$\bf x$}_k^\top \boldsymbol{\lambda}_0)}{1 + \exp(\hbox{$\bf x$}_k^\top \boldsymbol{\lambda}_0)} = \frac{1}{1 + \exp(-\hbox{$\bf x$}_k^\top \boldsymbol{\lambda}_0)},
\end{align}
for some true unknown parameter vector $\boldsymbol{\lambda}_0$.
\end{enumerate}
Assumption~(NR\ref{assumption:logistic}) is relaxed in Appendix~\ref{section:mle}, where a general nonresponse function is assumed.
Nonresponse can be seen as a second phase of the survey, where the nonresponse mechanism is unknown \citep{sar:swe:87}. In the first phase, a sample~$s$ is selected from population $U$ according to a sampling design $p(.)$. In the second phase, a sample $s_r$ is selected from $s$ according to a Poisson sampling design with unknown inclusion probabilities~$\{p_k\}$. Under nonresponse, all aforementioned estimators are unavailable. An approach to control nonresponse bias consists of increasing the weights of the respondents in order to compensate for the nonrespondents. If nonresponse is seen as a second phase of the survey, the design weights are multiplied by the inverse of the response probabilities. This yields the \textit{two-phase estimator} or \textit{double expansion estimator}
\begin{equation}
\widehat{t}_{2HT} = \sum_{k \in s_r} \dfrac{y_k}{\pi_k p_k}.
\end{equation}
Since the response probabilities $\{p_k\}$ are unknown in practice, they must be estimated. The estimated response probabilities are denoted by $\widehat{p}_k$. Using the estimated response probabilities in the two-phase estimator yields the \textit{Nonresponse Weighting Adjusted (NWA) estimator} or \textit{empirical double expansion estimator}
\begin{equation}\label{estimator:nwa}
\widehat{t}_{NWA} = \sum_{k \in s_r} \dfrac{y_k}{\pi_k \widehat{p}_k}.
\end{equation}
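A small simulation sketch of the NWA estimator (all data-generating choices are invented for illustration): the response probabilities follow a logistic model in the auxiliary variable, $\boldsymbol{\lambda}$ is estimated here by plain maximum likelihood on the sample rather than by calibration, and the estimated probabilities are plugged into the NWA estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical two-phase setup: Poisson sample, then MAR nonresponse whose
# probability follows a logistic model in the auxiliary variable x.
N = 6000
x = rng.normal(size=N)
y = 5.0 + 3.0 * x + rng.normal(scale=1.0, size=N)
pi = np.full(N, 0.2)
sampled = rng.random(N) < pi

X = np.column_stack([np.ones(N), x])
lam0 = np.array([0.5, 1.0])                        # true response parameters
p_true = 1.0 / (1.0 + np.exp(-X @ lam0))
resp = sampled & (rng.random(N) < p_true)

# Estimate lambda by logistic maximum likelihood on the sample (Newton/IRLS).
lam = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-X[sampled] @ lam))
    grad = X[sampled].T @ (resp[sampled] - p)
    hess = X[sampled].T @ (X[sampled] * (p * (1.0 - p))[:, None])
    lam = lam + np.linalg.solve(hess, grad)

p_hat = 1.0 / (1.0 + np.exp(-X @ lam))
t_nwa = np.sum(y[resp] / (pi[resp] * p_hat[resp]))
# t_nwa is close to y.sum(), whereas the uncorrected respondent-only sum
# np.sum(y[resp] / pi[resp]) is biased downward here (y and p increase with x).
```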
\section{NWA model-assisted estimator}
\label{section:nwaestimator}
In this paper, we introduce a model-assisted estimator adapted to nonresponse. It is a blend between a model-assisted estimator and a NWA estimator. It is constructed as follows. We replace the estimated function $m_s(\cdot)$, unavailable with nonresponse, by an estimator $m_r(\cdot)$ constructed from the respondents in the model-assisted estimator and we see nonresponse as a second phase of the survey. This yields
\begin{equation}
\label{estimator:diff:mr:U:p}
\widehat{t}_{m_r,p} = \sum_{k \in U} m_r(\hbox{$\bf x$}_k) + \sum_{k \in s_r} \dfrac{y_k - m_r(\hbox{$\bf x$}_k)}{\pi_k p_k}.
\end{equation}
We call this estimator the \emph{two-phase model-assisted estimator}. It is unknown in practice since it contains the unknown response probabilities $\{ p_k \}$. We borrow the idea of the NWA estimation and obtain
\begin{equation}
\label{estimator:diff:mr:U:phat}
\widehat{t}_{m_r,\widehat{p}} = \sum_{k \in U} m_r(\hbox{$\bf x$}_k) + \sum_{k \in s_r} \dfrac{y_k - m_r(\hbox{$\bf x$}_k)}{\pi_k \widehat{p}_k}.
\end{equation}
We call this estimator the \emph{NWA model-assisted estimator} as it corresponds to a model-assisted estimator where the weights are adjusted for nonresponse.
This estimator covers a wide range of estimators depending on the chosen working model $\xi$ and the chosen nonresponse model.
The first term of this estimator is the population total of the predicted values $\{m_r(\hbox{$\bf x$}_k)\}$. For most working models, this requires the values $\{\hbox{$\bf x$}_k\}$ to be known for all population units. If this population total is unavailable, we may use a HT-type estimator of this sum, see Appendix~\ref{section:auxiliary:s}.
The NWA model-assisted estimator in~\eqref{estimator:diff:mr:U:phat} contains two estimators: the response probabilities $\{ \widehat{p}_k \}$ and the function $m_r(\cdot)$. Depending on both these choices, we obtain a different estimator. The response probabilities are $p_k=1/F(\hbox{$\bf x$}_k^\top\boldsymbol{\lambda}_0)$ for some unknown parameter vector $\boldsymbol{\lambda}_0$. The estimated response probabilities are $\widehat{p}_k=1/F(\hbox{$\bf x$}_k^\top\widehat{\boldsymbol{\lambda}})$ for some estimator $\widehat{\boldsymbol{\lambda}}$ of $\boldsymbol{\lambda}_0$. Unless otherwise specified, we estimate the response probabilities via calibration. The estimator $\widehat{\boldsymbol{\lambda}}$ is the solution to the estimating equation
\begin{align}\label{estimating:eqn:calib}
Q(\boldsymbol{\lambda}) &= \sum_{k \in U} \hbox{$\bf x$}_k - \sum_{k \in s_r} \frac{\hbox{$\bf x$}_k}{\pi_k}F(\hbox{$\bf x$}_k^\top\boldsymbol{\lambda}) = 0.
\end{align}
We present NWA model-assisted estimators with response probabilities estimated via two alternate techniques, generalized calibration and maximum likelihood, in Appendices~\ref{section:generalized:calibration} and \ref{section:mle}, respectively.
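The estimating equation~\eqref{estimating:eqn:calib} can be solved by Newton-Raphson. The sketch below is a minimal illustration under the logistic response model used later in the simulations, i.e. $F(u)=1+e^{-u}$; the function names and the stopping rule are our own choices:

```python
import numpy as np

def F(u):
    # Link of the assumed logistic response model: p_k = 1 / F(x_k' lambda)
    return 1.0 + np.exp(-u)

def calibrate_lambda(X_U, X_r, pi_r, n_iter=50, tol=1e-10):
    """Solve the calibration equation
        Q(lam) = sum_U x_k - sum_{s_r} x_k F(x_k' lam) / pi_k = 0
    by Newton-Raphson. X_U: auxiliary variables on the population;
    X_r, pi_r: auxiliary variables and inclusion probabilities on s_r."""
    lam = np.zeros(X_U.shape[1])
    t_x = X_U.sum(axis=0)                       # population totals
    for _ in range(n_iter):
        u = X_r @ lam
        Q = t_x - (X_r * (F(u) / pi_r)[:, None]).sum(axis=0)
        if np.max(np.abs(Q)) < tol:
            break
        # dQ/dlam = sum_{s_r} x_k x_k' exp(-u_k) / pi_k, since F'(u) = -e^{-u}
        J = (X_r * (np.exp(-u) / pi_r)[:, None]).T @ X_r
        lam = lam - np.linalg.solve(J, Q)
    return lam
```

At the solution, the design weights $F(\hbox{$\bf x$}_k^\top\widehat{\boldsymbol{\lambda}})/\pi_k$ reproduce the population totals of the auxiliary variables, which is the calibration property exploited throughout the paper.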
\section{Statistical Learning Techniques}\label{section:statistical:learning}
\subsection{Generalized Regression}
\label{subsection:greg}
Consider the working model
\begin{align}
\xi: y_k = \hbox{$\bf x$}_k^\top {\boldsymbol\beta} + \varepsilon_k,
\end{align}
where the $\varepsilon_k$ are uncorrelated with mean $E_\xi(\varepsilon_k) = 0 $ and variance ${\rm var}_\xi(\varepsilon_k)=\sigma^2_k$. The finite population regression coefficient is
\begin{align}
\bf{B}_U &= \left( \sum_{k \in U} \hbox{$\bf x$}_k \hbox{$\bf x$}_k^\top\right)^{-1} \sum_{k \in U} \hbox{$\bf x$}_k y_k.
\end{align}
When the parameter ${\boldsymbol\beta}$ is estimated from the set of respondents $s_r$, we use
\begin{align}
\bf{B}_r &= \left( \sum_{k \in s_r} \frac{\hbox{$\bf x$}_k \hbox{$\bf x$}_k^\top}{c_k}\right)^{-1} \sum_{k \in s_r} \frac{\hbox{$\bf x$}_k y_k}{c_k},
\end{align}
where $m_r(\hbox{$\bf x$}_k) = \hbox{$\bf x$}^\top_k \bf{B}_r$ and $c_k$ is any of $1$, $\sigma_k^2$, $\pi_k \widehat{p}_k$, or $\pi_k \widehat{p}_k \sigma_k^2$.
The NWA model-assisted estimator can be written in weighted form
\begin{align}
\widehat{t}_{m_r,\widehat{p}} &= \sum_{k \in U} \hbox{$\bf x$}^\top_k {\bf{B}}_r + \sum_{k \in s_r} \dfrac{y_k - \hbox{$\bf x$}^\top_k \bf{B}_r}{\pi_k \widehat{p}_k}\\
&= \sum_{k \in s_r} \dfrac{y_k}{\pi_k \widehat{p}_k} + \left( \sum_{k \in U}\hbox{$\bf x$}_k - \sum_{k \in s_r} \dfrac{\hbox{$\bf x$}_k}{\pi_k \widehat{p}_k} \right)^\top
\left( \sum_{k \in s_r} \frac{\hbox{$\bf x$}_k \hbox{$\bf x$}_k^\top}{c_k}\right)^{-1} \sum_{k \in s_r} \frac{\hbox{$\bf x$}_k y_k}{c_k}\\
&= \sum_{k \in s_r}\left\{ \frac{1}{\pi_k \widehat{p}_k} + \left( \bf{t}^X - \widehat{\bf{t}}^X_{NWA} \right)^\top \left( \sum_{\ell \in s_r} \frac{\hbox{$\bf x$}_\ell \hbox{$\bf x$}_\ell^\top}{c_\ell}\right)^{-1} \frac{\hbox{$\bf x$}_k}{c_k} \right\}y_k \\
&= \sum_{k \in s_r} w_{k,s_r} y_k,
\end{align}
where $\bf{t}^X$ is the vector of population totals of the auxiliary variables and $\widehat{\bf{t}}^X_{NWA}$ its NWA estimator. The weights $w_{k,s_r}$ are the NWA weights $1/(\pi_k \widehat{p}_k)$ plus a corrective term induced by the working model. The corrective term vanishes when calibration is applied to estimate the response probabilities; the NWA model-assisted estimator then reduces to the NWA estimator. The weights are free from the values $\{\hbox{$\bf x$}_k\}$ in $U\backslash s_r$ except through the population totals $\bf{t}^X$. Only the values $\{\hbox{$\bf x$}_k\}$ on $s_r$ and the population totals $\bf{t}^X$ are needed to compute the NWA model-assisted estimator, unless further values are required to estimate the response probabilities. The weights are also free from $\{y_k\}$: they can be used for several variables of interest provided that these have observed values on~$s_r$. In particular, applying the weights to the auxiliary variables yields
\begin{align}
\widehat{t}^X_{m_r,\widehat{p}} &= \sum_{k \in s_r}\left\{ \frac{1}{\pi_k \widehat{p}_k} + \left( \bf{t}^X - \widehat{\bf{t}}^X_{NWA} \right)^\top\left( \sum_{\ell \in s_r} \frac{\hbox{$\bf x$}_\ell \hbox{$\bf x$}_\ell^\top}{c_\ell}\right)^{-1} \frac{\hbox{$\bf x$}_k}{c_k} \right\}\hbox{$\bf x$}_k = \bf{t}^X.
\end{align}
This means that the weights of the NWA model-assisted estimator are calibrated to the totals of the auxiliary variables when calibration is applied to estimate the response probabilities.
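The weighted form above can be sketched numerically as follows (function name and argument layout are our own; $c_k$ is passed as an array so any of the choices listed earlier can be used):

```python
import numpy as np

def greg_nwa_weights(X_r, pi_r, p_hat_r, t_x, c_r):
    """Weights of the GREG-type NWA model-assisted estimator:
    the NWA weight 1/(pi_k p_hat_k) plus the regression correction.
    c_r plays the role of c_k (e.g. all ones)."""
    d = 1.0 / (pi_r * p_hat_r)                  # base NWA weights
    t_hat_nwa = X_r.T @ d                       # NWA estimator of t_x
    M = (X_r / c_r[:, None]).T @ X_r            # sum of x_k x_k' / c_k
    g = np.linalg.solve(M, t_x - t_hat_nwa)
    return d + (X_r @ g) / c_r
```

Whatever the choice of $c_k$, the resulting weights reproduce the population totals of the auxiliary variables, which is the calibration property just stated.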
\subsection{$K$-Nearest Neighbor}
Consider the working model where the prediction for a nonrespondent is obtained by averaging the $y$-values of the closest respondents. A predicted value $m_r(\hbox{$\bf x$}_k)$ is obtained by
\begin{align}
m_r(\hbox{$\bf x$}_k) = \frac{1}{K} \sum_{\ell \in L_k}y_\ell,
\end{align}
where $L_k$ is the set of the $K$ nearest respondents of unit $k$. The neighborhood is determined from the auxiliary variables and a distance measure such as the Euclidean distance. Let $\alpha_{k\ell}$ be an indicator that takes value 1 if respondent $\ell \in s_r$ is in the neighborhood $L_k$ of unit $k \in U$, and 0 otherwise; in particular, $\alpha_{k\ell} = 0$ if $\ell \in U \backslash s_r$. A prediction can be written
\begin{align}
m_r(\hbox{$\bf x$}_k) = \frac{1}{K} \sum_{\ell \in s_r} \alpha_{k\ell}y_\ell.
\end{align}
The NWA model-assisted estimator can be written in weighted form
\begin{align}
\widehat{t}_{m_r,\widehat{p}} &= \sum_{k \in U} m_r(\hbox{$\bf x$}_k) + \sum_{k \in s_r} \dfrac{y_k - m_r(\hbox{$\bf x$}_k)}{\pi_k \widehat{p}_k}\\
&= \sum_{k \in U} \frac{1}{K} \sum_{\ell \in s_r} \alpha_{k\ell}y_\ell + \sum_{k \in s_r} \dfrac{y_k}{\pi_k \widehat{p}_k} - \sum_{k \in s_r}\dfrac{1}{\pi_k \widehat{p}_k K} \sum_{\ell \in s_r} \alpha_{k\ell}y_\ell\\
&= \sum_{\ell \in s_r} \left\{ \dfrac{1}{\pi_\ell \widehat{p}_\ell} + \frac{1}{K} \left( \sum_{k \in U} \alpha_{k\ell} - \sum_{k \in s_r}\dfrac{1}{\pi_k \widehat{p}_k}\alpha_{k\ell} \right) \right\}y_\ell.
\end{align}
The weights are the ones of the NWA estimator $1/(\pi_k \widehat{p}_k)$ plus a corrective term induced by the working model. The second term cancels when the response probabilities are calibrated on variables $(\alpha_{1\ell},\alpha_{2\ell},\ldots,\alpha_{N\ell})^\top$, $\ell \in s_r$. The NWA model-assisted estimator is the NWA estimator in this case. The weights depend on the values of the auxiliary variables through the distance measure applied to construct the neighborhoods. They are free from values $\{y_k\}$ and could therefore be used for several variables of interest provided that they have observed values on $s_r$. In particular, they can be applied to $\{\hbox{$\bf x$}_k\}$. This yields
\begin{align}
\widehat{t}^X_{m_r,\widehat{p}} &= \sum_{k \in s_r} \frac{\hbox{$\bf x$}_k}{\pi_k \widehat{p}_k} + \sum_{k \in U}\frac{1}{K}\sum_{\ell \in s_r} \alpha_{k\ell}\hbox{$\bf x$}_\ell - \sum_{k \in s_r}\frac{1}{\pi_k \widehat{p}_k}\frac{1}{K}\sum_{\ell \in s_r} \alpha_{k\ell}\hbox{$\bf x$}_\ell\\
&=\sum_{k \in s_r} \frac{\hbox{$\bf x$}_k}{\pi_k \widehat{p}_k} + \frac{1}{K}\sum_{k \in U}\left(1 - \frac{a_k r_k}{\pi_k \widehat{p}_k}\right)\sum_{\ell \in s_r} \alpha_{k\ell}\hbox{$\bf x$}_\ell.
\end{align}
The weights are calibrated to the totals of the auxiliary variables when $K^{-1}\sum_{\ell \in s_r} \alpha_{k\ell}\hbox{$\bf x$}_\ell = \hbox{$\bf x$}_k$ for all $k \in U$. This is for instance the case when the neighborhoods $L_k$ are disjoint and the values $\{\hbox{$\bf x$}_k\}$ are constant within each neighborhood. In practice, we can reasonably assume that this holds at least approximately for large populations and samples.
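The $K$-nearest-neighbour predictions and the resulting estimator can be sketched as below (a brute-force distance computation; the function name and array layout are ours). Note that, consistently with the definition of $L_k$, a respondent's own $y$-value enters its neighbourhood since its distance to itself is zero:

```python
import numpy as np

def knn_nwa_estimator(y, X, pi, p_hat, in_sr, K):
    """K-nearest-neighbour NWA model-assisted estimator of the total.
    All arrays have population length; y is only used where the
    boolean respondent mask in_sr is True."""
    X_r, y_r = X[in_sr], y[in_sr]
    # alpha[k, l] = 1 if respondent l is among the K nearest of unit k
    d2 = ((X[:, None, :] - X_r[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :K]
    alpha = np.zeros_like(d2)
    np.put_along_axis(alpha, nearest, 1.0, axis=1)
    m = alpha @ y_r / K                          # m_r(x_k), k in U
    resid = (y_r - m[in_sr]) / (pi[in_sr] * p_hat[in_sr])
    return m.sum() + resid.sum()
```

With $K=1$ and full response, each respondent predicts itself exactly, the residual term vanishes, and the estimator returns the population total of $y$.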
\subsection{Local Polynomial Regression}
Local polynomial regression is studied in the context of model-assisted survey estimation in \cite{bre:ops:00}. Consider a working model in which $\hbox{$\bf x$}_k$ is a scalar, i.e. $\hbox{$\bf x$}_k = x_k$, $x_k \in \mathbb{R}$. The function $m(\cdot)$ is approximated locally at $x_k$ by a $q$-th order polynomial regression. The model is fitted via weighted least squares with weights based on a kernel function centered at $x_k$. \cite{bre:ops:00} propose and study the model-assisted estimator with a survey-weighted estimator of $m(\cdot)$ fitted at the sample level. Adapting their estimator to the context of nonresponse yields
\begin{align}
m_r(x_k) &= \hbox{$\bf e$}_1 \cdot\left(\textnormal{\textbf{X}}_{rk}^\top\textnormal{\textbf{W}}_{rk}\textnormal{\textbf{X}}_{rk}\right)^{-1}\textnormal{\textbf{X}}_{rk}^\top\textnormal{\textbf{W}}_{rk}\hbox{$\bf y$}_{rk} = \omega_{rk}^\top \hbox{$\bf y$}_{rk},
\end{align}
where $\hbox{$\bf e$}_j$ is the vector with $1$ at the $j$-th coordinate and $0$ elsewhere,
$$
\textnormal{\textbf{X}}_{rk} =\left[1 \quad x_j - x_k \quad \cdots \quad (x_j - x_k)^q\right]_{j \in s_r},
$$
$$
\textnormal{\textbf{W}}_{rk} = \mbox{diag}\left\{\frac{1}{k_j h}K\left(\frac{x_j-x_k}{h}\right)\right\}_{j\in s_r},
$$
and
$$
\hbox{$\bf y$}^\top_{rk} = \left[y_j\right]_{j \in s_r},
$$
and $k_j$ is either $1$ for all $j \in s_r$ or $\pi_j\widehat{p}_j$, $K(\cdot)$ is a continuous kernel function, and $h$ a bandwidth. The NWA model-assisted estimator can be written in weighted form
\begin{align}
\widehat{t}_{m_r,\widehat{p}} &= \sum_{k \in U} m_r(x_k) + \sum_{k \in s_r} \dfrac{y_k - m_r(x_k)}{\pi_k \widehat{p}_k}\\
&= \sum_{k \in U} \omega_{rk}^\top \hbox{$\bf y$}_{rk} + \sum_{k \in s_r} \dfrac{y_k}{\pi_k \widehat{p}_k} - \sum_{k \in s_r}\dfrac{1}{\pi_k \widehat{p}_k}\omega_{rk}^\top \hbox{$\bf y$}_{rk} \\
&= \sum_{k \in s_r}\left\{ \dfrac{1}{\pi_k \widehat{p}_k} + \sum_{\ell \in U}\left( 1 - \dfrac{a_\ell r_\ell}{\pi_\ell \widehat{p}_\ell} \right)\omega_{r\ell}^\top \hbox{$\bf e$}_k \right\}y_k.
\end{align}
The weights are the weights of the NWA estimator $1/(\pi_k \widehat{p}_k)$ plus a corrective term induced by the working model. They are free from values $\{y_k\}$ and could therefore be used for several variables of interest provided that they have observed values on $s_r$. In particular, they can be applied to $\{x_k\}$. This yields
\begin{align}
\widehat{t}^X_{m_r,\widehat{p}} &= \sum_{k \in s_r} \frac{x_k}{\pi_k \widehat{p}_k} + \sum_{\ell \in U}\omega_{r\ell}^\top \sum_{k \in s_r} \hbox{$\bf e$}_k x_k - \sum_{\ell \in s_r} \dfrac{\omega_{r\ell}^\top}{\pi_\ell \widehat{p}_\ell} \sum_{k \in s_r} \hbox{$\bf e$}_k x_k = \sum_{k \in U}x_k,
\end{align}
where we used
\begin{align}
\omega_{r\ell}^\top \sum_{k \in s_r} \hbox{$\bf e$}_k x_k = x_\ell.
\end{align}
The weights are calibrated to the population total of the auxiliary variable.
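The local fit $m_r(x_k)$ defined above can be sketched numerically as follows; a Gaussian kernel of bandwidth $h$ is our assumed choice for $K(\cdot)$, and the function name is illustrative:

```python
import numpy as np

def local_poly_fit(x_r, y_r, x0, q=1, h=1.0, k_weights=None):
    """Local polynomial prediction m_r(x0): weighted least squares of a
    q-th order polynomial in (x_j - x0). k_weights plays the role of
    k_j (all ones, or pi_j * p_hat_j)."""
    x_r = np.asarray(x_r, dtype=float)
    if k_weights is None:
        k_weights = np.ones_like(x_r)
    Xd = np.vander(x_r - x0, N=q + 1, increasing=True)   # [1, (x - x0), ...]
    w = np.exp(-0.5 * ((x_r - x0) / h) ** 2) / (k_weights * h)
    A = Xd.T @ (w[:, None] * Xd)
    b = Xd.T @ (w * y_r)
    return np.linalg.solve(A, b)[0]    # e_1 . beta_hat, the local intercept
```

For exactly linear data and $q=1$, the local fit reproduces the true regression line at $x_0$ whatever the kernel and bandwidth, a convenient sanity check.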
\section{Asymptotics}\label{section:asymptotics}
\subsection{Preliminaries}
In this section we develop the asymptotic properties of the two-phase model-assisted estimator and of the NWA model-assisted estimator. We build on the asymptotic framework of \cite{isa:ful:82}. Consider a sequence $U_N$ of embedded finite populations of size $N$ where $N$ grows to infinity. A sample $s_N$ of size $n_N$ is selected from $U_N$ with sampling design $p_N(\cdot)$. The associated first- and second-order inclusion probabilities are $\pi_{k(N)}$ and $\pi_{k\ell(N)}$, respectively, for some generic units $k$ and $\ell$. A subsample $s_{rN}$ is obtained from $s_N$ with Poisson sampling design with unknown inclusion probabilities $p_{k(N)}$. We consider the following common regularity conditions on the sequence of sampling designs.
\begin{enumerate}[({A}1):]
\item \label{assumption:f} $\lim\limits_{N \rightarrow + \infty} n_N/N = \pi \in (0,1) $,
\item\label{assumption:pii:bounded} For all $N$, there exists $\lambda_1 \in \mathbb{R}$ such that $\pi_{k(N)} > \lambda_1 > 0$ for all $k \in U_N$,
\item\label{assumption:piij:bounded} For all $N$, there exists $\lambda_2 \in \mathbb{R}$ such that $\pi_{k\ell(N)} > \lambda_2 > 0$, for all $k,\ell \in U_N$,
\item\label{assumption:deltaij} $\limsup\limits_{N\rightarrow + \infty} ~ n_N ~ \max\limits_{\substack{k,\ell\in U_N, \\ k \neq \ell}} \left|\Delta_{k\ell(N)}\right| < + \infty$.
\end{enumerate}
For a sampling design with random sample size, $n_N$ in Assumption (A\ref{assumption:f}) is the expected sample size. We also consider the following condition on the sequence of finite populations.
\begin{enumerate}[({A}1):]\setcounter{enumi}{4}
\item\label{assumption:moments:y}
The study variable has finite second and fourth moments, i.e.
\begin{align}
\limsup\limits_{N \rightarrow + \infty} N^{-1}\sum_{k \in U_N} u_k < + \infty
\end{align}
for $u_k = y_{k}^2, y_{k}^4$.
\end{enumerate}
Conditions (A\ref{assumption:f})-(A\ref{assumption:moments:y}) ensure consistency of the HT estimator and its variance estimator in~\eqref{estimator:var:HT}. Finally we consider the following regularity conditions on the sequence of Poisson sampling designs that generate the sets of respondents.
\begin{enumerate}[({A}1):]\setcounter{enumi}{5}
\item\label{assumption:f:r} $\lim\limits_{N \rightarrow + \infty} \sum_{k \in U_N}\pi_{k(N)} p_{k(N)}/n_N = \pi \in (0,1) $,
\item\label{assumption:p:bounded:r} For all $N$, there exists $\lambda_3 \in \mathbb{R}$ such that $p_{k(N)} > \lambda_3 > 0$, for all $k \in U_N$.
\end{enumerate}
Assumption~(A\ref{assumption:f:r}) states that the expected fraction of respondents among sampled units converges to a positive limit as $N$ grows to infinity. Assumption~(A\ref{assumption:p:bounded:r}) states that each unit has a strictly positive probability of responding.
In what follows, we will omit the subscript $N$ whenever possible to simplify notation.
\subsection{Two-Phase Difference Estimator}
To study the asymptotic properties of the proposed estimator, it is useful to introduce a sampling design $p^*(\cdot)$ that selects the sample $s_r$ directly from the population $U$. The associated first- and second-order inclusion probabilities are, respectively, $\pi^*_k = \pi_k p_k$ and
\begin{align}
\pi^*_{k \ell} =
\left\{
\begin{array}{ll}
\pi_{k\ell} p_k p_\ell, & \hbox{if $k\neq\ell$;} \\
\pi_{k} p_k, & \hbox{if $k=\ell$.}
\end{array}
\right.
\end{align}
The membership indicator of a unit $k \in U$ in the set of respondents $s_r$ is $a_k^* = a_k r_k$. The membership indicator of two different units $k, \ell \in U, k \neq \ell $ in $s_r$ is $a^*_{k\ell} = a_k a_\ell r_k r_\ell$.
Since the nonresponse process is independent of the selected sample, we have ${\rm E}_{p^*} (a_k^*) = {\rm E}_p {\rm E}_q (a_k r_k) = \pi_k p_k$ and ${\rm E}_{p^*} ( a^*_{k \ell}) = \pi_{k \ell} p_k p_\ell $, where the subscript $p^*$ indicates that the expectation is computed with respect to the two-phase sampling design $p^*(\cdot)$. The covariance between the membership indicators $\{a_k^*\}$ is
\begin{align}
\Delta_{k\ell}^* =
\left\{
\begin{array}{ll}
\Delta_{k\ell}p_k p_\ell = (\pi_{k\ell} - \pi_k\pi_\ell)p_k p_\ell, & \hbox{if $k\neq\ell$;} \\
\pi_k p_k(1 - \pi_k p_k), & \hbox{if $k=\ell$.}
\end{array}
\right.
\end{align}
\begin{result}\label{result:asym:mr:U:p}
Suppose that the sequence of sampling designs, populations, and response mechanisms satisfy Assumptions (A\ref{assumption:f})-(A\ref{assumption:moments:y}). Consider a working model $\xi$ in~\eqref{model} for which
\begin{align}
\widehat{t}_{m_s} &= \widehat{t}_{m_U} + R_{m_s}
\end{align}
where the remainder term divided by the population size, $R_{m_s}/N$, converges in probability to 0 under the aforementioned assumptions. Here the reference probability distribution is the sampling design $p(\cdot)$.
Suppose moreover that the sequence of response mechanisms satisfies Assumptions (NR\ref{assumption:MAR}), (NR\ref{assumption:indep}), (A\ref{assumption:f:r}), and (A\ref{assumption:p:bounded:r}). Then the two-phase model-assisted estimator can be written
\begin{align}
\widehat{t}_{m_r,p} &= \widehat{t}_{m_U} + R_{m_r,p}
\end{align}
where the remainder term divided by the population size, $R_{m_r,p}/N$, converges in probability to 0. Here the reference probability distribution is the two-phase design $p^*(\cdot)$.
\end{result}
\begin{proof}
It follows directly from the fact that when the sampling design $p(\cdot)$ satisfies Assumptions (A\ref{assumption:f})-(A\ref{assumption:deltaij}) and the response mechanism satisfies Assumptions (NR\ref{assumption:MAR}), (NR\ref{assumption:indep}), (A\ref{assumption:f:r}), and (A\ref{assumption:p:bounded:r}), the sampling design $p^*(\cdot)$ satisfies Assumptions (A\ref{assumption:f})-(A\ref{assumption:deltaij}).
\end{proof}
\subsection{NWA Model-Assisted Estimator}
We now turn to the NWA model-assisted estimator adapted for nonresponse $\widehat{t}_{m_r, \widehat{p}}$. We can write
\begin{align}\label{eq:asym:mu:p:R}
\widehat{t}_{m_r,\widehat{p}} &= \widehat{t}_{m_U,\widehat{p}} + \sum_{k \in U}\left\{ m_r(\hbox{$\bf x$}_k)-m_U(\hbox{$\bf x$}_k) \right\}\left(1 - \frac{a_kr_k}{\pi_k\widehat{p}_k}\right)\\
&= \widehat{t}_{m_U,\widehat{p}} + R_{m_r,\widehat{p}}
\end{align}
where
\begin{align}
\widehat{t}_{m_U,\widehat{p}} = \sum_{k \in U} m_U(\hbox{$\bf x$}_k) + \sum_{k \in s_r} \dfrac{y_k - m_U(\hbox{$\bf x$}_k)}{\pi_k \widehat{p}_k}.
\end{align}
Estimator $\widehat{t}_{m_U,\widehat{p}}$ is unknown in practice but useful to derive some asymptotic properties of $\widehat{t}_{m_r,\widehat{p}}$. The idea is to first study the asymptotic properties of $\widehat{t}_{m_U,\widehat{p}}$, then show that the remainder $R_{m_r,\widehat{p}}$ is negligible. This allows us to conclude that $\widehat{t}_{m_r,\widehat{p}}$ inherits the asymptotic properties of $\widehat{t}_{m_U,\widehat{p}}$. The asymptotic properties of $\widehat{t}_{m_U,\widehat{p}}$, and whether the remainder $R_{m_r,\widehat{p}}$ is negligible, depend on the working model and on the method applied to estimate the response probabilities.
\begin{result}
Under the aforementioned assumptions and regularity conditions in~\cite{has:22}, the NWA model-assisted estimator $\widehat{t}_{m_r,\widehat{p}}$ can be written
\begin{align}
\widehat{t}_{m_r,\widehat{p}} &= \widehat{t}_{m_U,\widehat{p},\ell} + R
\end{align}
where $R$ is negligible provided that the remainder $R_{m_r,\widehat{p}}$ is negligible, and where estimator $\widehat{t}_{m_U,\widehat{p},\ell}$ is asymptotically unbiased and has a variance expected to be lower than that of the HT estimator.
\end{result}
\begin{proof}
From Result 1 of \cite{has:22}, when the response probabilities are estimated via calibration,
\begin{align}\label{eq:asym:mu:phat:calib:R}
\widehat{t}_{m_U,\widehat{p}} &= \widehat{t}_{m_U,\widehat{p},\ell} + O_{p^*}(Nn^{-1}),
\end{align}
where
\begin{align}
\widehat{t}_{m_U,\widehat{p},\ell} &= \sum_{k \in U} \left[m_U(\hbox{$\bf x$}_k) + \hbox{$\bf x$}_k^\top{\boldsymbol\gamma} + \frac{a_k}{\pi_k}\frac{r_k}{p_k}\left\{y_k - m_U(\hbox{$\bf x$}_k) - \hbox{$\bf x$}_k^\top{\boldsymbol\gamma}\right\} \right],\\
{\boldsymbol\gamma} &= \left\{ \sum_{k \in U} (1-p_k)\hbox{$\bf x$}_k\hbox{$\bf x$}_k^\top \right\}^{-1} \sum_{k \in U} (1-p_k)\hbox{$\bf x$}_k \left\{y_k - m_U(\hbox{$\bf x$}_k)\right\}.
\end{align}
Estimator $\widehat{t}_{m_U,\widehat{p},\ell}$ is unbiased for $t$ and we expect its variance to be lower than that of $\widehat{t}_{m_U,p}$ provided that the ``residuals'' $\{y_k - m_U(\hbox{$\bf x$}_k) - \hbox{$\bf x$}_k^\top{\boldsymbol\gamma}\}$ have less variability than values $\{y_k - m_U(\hbox{$\bf x$}_k)\}$ \citep{has:22}. Moreover we expect the variance of $\widehat{t}_{m_U,p}$ to be lower than that of the HT estimator provided that the ``residuals'' $\{y_k - m_U(\hbox{$\bf x$}_k)\}$ have less variability than the ``raw values'' $\{y_k\}$ \citep[][p.192]{bre:ops:17:modelassist}.
\end{proof}
\subsection{Generalized Regression Estimator}
Consider the generalized regression (GREG) estimator described in Section~\ref{subsection:greg}. Suppose that $(\bf{B}_U - \bf{B}_r) = O_{p^*}(1)$. This is the case for most sequences of populations, sampling designs, and response mechanisms. For instance, if $c_k=1$, this condition holds when the respondent moments of $\hbox{$\bf x$}_k y_k$ and $\hbox{$\bf x$}_k\hbox{$\bf x$}_k^\top$ converge to their population moments in the sense that
\begin{align}
\frac{1}{N} \sum_{k \in U} \hbox{$\bf x$}_k y_k - \frac{1}{N} \sum_{k \in s_r}\hbox{$\bf x$}_k y_k &= o_{p^*}\left(n^{-1/2}\right),\\
\frac{1}{N} \sum_{k \in U} \hbox{$\bf x$}_k \hbox{$\bf x$}_k^\top - \frac{1}{N} \sum_{k \in s_r}\hbox{$\bf x$}_k \hbox{$\bf x$}_k^\top &= o_{p^*}\left(n^{-1/2}\right),
\end{align}
$\hbox{$\bf x$}_k y_k$ is bounded,
\begin{align}
\limsup\limits_{N\rightarrow + \infty} \frac{1}{N} \sum_{k \in U} \hbox{$\bf x$}_k y_k < +\infty,
\end{align}
and $\textnormal{\textbf{X}}_{r}$ is of full rank.
Let us first study the asymptotic properties of the two-phase model-assisted estimator $\widehat{t}_{m_r,p}$, i.e. the model-assisted estimator with the true response probabilities. For this working model, the remainder $R_{m_r,p}$ in Result~\ref{result:asym:mr:U:p} is
\begin{align}
R_{m_r,p} = \left( \bf{B}_r - \bf{B}_U \right)^\top \left(\bf{t}^X - \widehat{\bf{t}}^X_{2HT}\right),
\end{align}
with $\widehat{\bf{t}}^X_{2HT}$ the vector of Horvitz-Thompson estimators of these variables under the two-phase sampling design $p^*$. This remainder divided by $N$ is negligible, since the first factor is $O_{p^*}(1)$ and the second is $O_{p^*}(Nn_r^{-1/2})$.
It follows that the two-phase model-assisted estimator $\widehat{t}_{m_r,p}$ behaves asymptotically like the population pseudo-generalized difference estimator $\widehat{t}_{m_U}$.
It is 1) asymptotically design-unbiased and 2) asymptotically more efficient than the HT estimator provided that the finite population ``residuals'' $\{y_k-\hbox{$\bf x$}^\top_k \bf{B}_U\}$ have less variability than the ``raw values'' $\{y_k\}$. This holds regardless of the quality of the working model.
Let us now turn to the NWA model-assisted estimator $\widehat{t}_{m_r,\widehat{p}}$. From Equation~\eqref{eq:asym:mu:p:R}
\begin{align}
\widehat{t}_{m_r,\widehat{p}} &= \widehat{t}_{m_U,\widehat{p}} + R_{m_r,\widehat{p}},
\end{align}
where
\begin{align}\label{eq:remainder:GREG}
R_{m_r,\widehat{p}} &= \left(\bf{B}_r - \bf{B}_U\right)^\top \left( \bf{t}^X - \widehat{\bf{t}}^X_{NWA} \right),
\end{align}
where $\widehat{\bf{t}}^X_{NWA}$ is the NWA estimator of the vector of population totals of the auxiliary variables. By assumption, the first factor is $O_{p^*}(1)$. When calibration is applied, the second factor is 0. Hence, the remainder $R_{m_r,\widehat{p}}$ is negligible. Estimator $\widehat{t}_{m_r,\widehat{p}}$ is asymptotically unbiased for $t$ and we expect its variance to be smaller than that of the HT estimator. This is true even if the working model is misspecified.
\section{Variance and Variance Estimation}\label{section:variance}
Under nonresponse, we can write the variance of a generic estimator $\widehat{t}_{g}$ as
\begin{align}
{\rm var}\left(\widehat{t}_{g}\right) &= {\rm var}_{sam}\left(\widehat{t}_{g}\right) + {\rm var}_{nr}\left(\widehat{t}_{g}\right),
\end{align}
where the two terms are the sampling variance and the nonresponse variance, respectively, and are given by
$$
{\rm var}_{sam}\left(\widehat{t}_{g}\right) = {\rm var}_p\left\{ {\rm E}_q\left(\left. \widehat{t}_{g} \right|s\right) \right\},
$$
and
$$
{\rm var}_{nr}\left(\widehat{t}_{g}\right)= {\rm E}_p\left\{ {\rm var}_q\left(\left. \widehat{t}_{g} \right|s\right) \right\}.
$$
Using the approximations in Equation~\eqref{eq:asym:mu:p:R} and \eqref{eq:asym:mu:phat:calib:R} and results of Section 5 of \cite{has:22}, the variance of the NWA model-assisted estimator $\widehat{t}_{m_r,\widehat{p}}$ can be approximated by
\begin{align}
{\rm var}\left(\widehat{t}_{m_r,\widehat{p}}\right) &\approx {\rm var}_{sam}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right) + {\rm var}_{nr}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right),
\end{align}
where
\begin{equation}
{\rm var}_{sam}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right) = {\rm var}_p\left[ \sum_{k \in s} \frac{1}{\pi_k} \left\{y_k - m_U(\hbox{$\bf x$}_k) - \hbox{$\bf x$}_k^\top{\boldsymbol\gamma}\right\} \right],\label{eqn:Vsam:calU}
\end{equation}
\begin{equation}
{\rm var}_{nr}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right) = {\rm E}_p \left[ \sum_{k \in s} \frac{1}{\pi_k^2}\frac{1 - p_k}{p_k} \left\{y_k - m_U(\hbox{$\bf x$}_k) - \hbox{$\bf x$}_k^\top{\boldsymbol\gamma}\right\}^2 \right],\label{eqn:Vnr:calU}
\end{equation}
and
\begin{equation}
{\boldsymbol\gamma} = \left\{ \sum_{k \in U} (1-p_k)\hbox{$\bf x$}_k\hbox{$\bf x$}_k^\top \right\}^{-1} \sum_{k \in U} (1-p_k)\hbox{$\bf x$}_k \left\{ y_k - m_U(\hbox{$\bf x$}_k) \right\}.
\end{equation}
The first term is the variance of the full sample HT estimator of the differences $\{y_k - m_U(\hbox{$\bf x$}_k) - \hbox{$\bf x$}_k^\top{\boldsymbol\gamma}\}$. Based on this approximation, a variance estimator is
\begin{align}
\widehat{{\rm var}}\left(\widehat{t}_{m_r,\widehat{p}}\right) &= \widehat{{\rm var}}_{sam}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right) + \widehat{{\rm var}}_{nr}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right),
\end{align}
where
\begin{equation}
\widehat{{\rm var}}_{sam}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right) = \sum_{k \in s_r} \frac{1-\pi_k}{\pi_k^2}\frac{e_k^2}{\widehat{p}_k}
+ \sum_{k,\ell \in s_r;k\neq \ell} \frac{\pi_{k\ell} - \pi_k\pi_\ell}{\pi_{k\ell}\pi_k\pi_\ell}\frac{e_k}{\widehat{p}_k}\frac{e_\ell}{\widehat{p}_\ell},
\end{equation}
\begin{equation}
\widehat{{\rm var}}_{nr}\left(\widehat{t}_{m_U,\widehat{p},\ell}\right) = \sum_{k \in s_r} \frac{1}{\pi_k^2}\frac{1 - \widehat{p}_k}{\widehat{p}_k^2} e_k^2 ,
\end{equation}
\begin{equation}
e_k = y_k - m_r(\hbox{$\bf x$}_k)- \hbox{$\bf x$}_k^\top\widehat{{\boldsymbol\gamma}},
\end{equation}
and
\begin{equation}
\widehat{{\boldsymbol\gamma}} = \left( \sum_{k \in s_r} \frac{1}{\pi_k}\frac{1-\widehat{p}_k}{\widehat{p}_k}\hbox{$\bf x$}_k\hbox{$\bf x$}_k^\top \right)^{-1}\sum_{k \in s_r} \frac{1}{\pi_k}\frac{1-\widehat{p}_k}{\widehat{p}_k}\hbox{$\bf x$}_k\left\{y_k - m_r(\hbox{$\bf x$}_k)\right\},
\end{equation}
where we substituted $\widehat{p}_k$ for the unknown $p_k$ and $m_r(\hbox{$\bf x$}_k)$ for $m_U(\hbox{$\bf x$}_k)$.
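The variance estimator above translates directly into code. The following sketch is ours (function name and array layout are illustrative); it implements $\widehat{{\boldsymbol\gamma}}$, the residuals $e_k$, and the two variance components exactly as displayed:

```python
import numpy as np

def nwa_variance_estimator(y_r, m_r, X_r, pi_r, pikl_r, p_hat_r):
    """Variance estimator of the NWA model-assisted estimator: sampling
    component plus nonresponse component. pikl_r is the matrix of joint
    inclusion probabilities on s_r (diagonal: first-order pi_k)."""
    v = (1.0 - p_hat_r) / (pi_r * p_hat_r)     # weights of gamma-hat
    A = (X_r * v[:, None]).T @ X_r
    gamma = np.linalg.solve(A, X_r.T @ (v * (y_r - m_r)))
    e = y_r - m_r - X_r @ gamma                # residuals e_k
    # sampling variance: diagonal term, then k != l cross terms
    var_sam = np.sum((1.0 - pi_r) / pi_r**2 * e**2 / p_hat_r)
    u = e / p_hat_r
    D = (pikl_r - np.outer(pi_r, pi_r)) / (pikl_r * np.outer(pi_r, pi_r))
    np.fill_diagonal(D, 0.0)
    var_sam += u @ D @ u
    # nonresponse variance
    var_nr = np.sum((1.0 - p_hat_r) / (pi_r**2 * p_hat_r**2) * e**2)
    return var_sam + var_nr
```

A simple sanity check: when the differences $y_k - m_r(\hbox{$\bf x$}_k)$ lie exactly in the span of the auxiliary variables, the residuals $e_k$ are zero and the variance estimate is zero.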
\section{Simulations}
\label{section:simulation}
Let us consider a population $U$ of size $N=1000$.
For each unit $k$ of $U$, a vector $\hbox{$\bf x$}_k = (x_{k1}, x_{k2}, x_{k3}, x_{k4})^\top$ is generated from independent and identically distributed random variables.
Values $\{ x_{k1} \} $ are realisations of a Gaussian random variable with mean 1 and variance 0.25,
$\{ x_{k2} \} $ of a mixture of two Gaussian random variables with respective means 6 and 10, common variance 0.25, and mixing proportions 0.5,
$\{ x_{k3} \} $ of a gamma distribution with shape 2 and rate 3, and
$\{ x_{k4} \} $ of a mixture, with mixing proportions 0.5, of a Gaussian distribution with mean 2 and variance 16 and a gamma distribution with shape 3 and rate 3.
The goal is to estimate the total $t$ on population $U$ of a survey variable $y$ generated as
$$
y_k = 6 \cdot x_{k1} + 4 \cdot x_{k2} + \cos(x_{k3}) + \sqrt{\left| x_{k4} - \bar{x}_{4} \right|} + \varepsilon_k,
$$
where $\bar{x}_{4}$ is the mean of the values $\{ x_{k4} \}$ in population $U$ and $\varepsilon_k$ is a realisation of a Gaussian random variable with mean 0 and variance 1.
Each unit $k$ of the population has a probability
$$
p_k = \big\{ 1+\exp\left[-\boldsymbol{\lambda}^\top (x_{k1}, x_{k2})^\top\right] \big\}^{-1}
$$
of responding to variable $y$, with $\boldsymbol{\lambda} = (0.46,-0.06)^\top$.
The value of $\boldsymbol{\lambda}$ is set so that the expected response rate, i.e. the mean of the $\{p_k\}$ over the population, is $50 \%$, and hence so is the expected rate of missing values.
The correlations between the variable of interest and the other variables are given in Table~\ref{tablecor}.
\begin{table}
\caption{Correlations of the values $\{ y_k \}$ of the survey variable with the response probabilities $\{ p_k \}$ and values $\{x_{k1}\}, \{x_{k2}\}, \{x_{k3}\}$ and $\{x_{k4}\}$ of the auxiliary variables on population $U$. \label{tablecor}}
\centering
\resizebox{8cm}{!}{
\fontsize{9}{11}\selectfont
\begin{tabular}[t]{c|ccccc}
\toprule
& $\{ p_k \}$ & $\{x_{k1}\}$ & $\{x_{k2}\}$ & $\{x_{k3}\}$ & $\{x_{k4}\}$\\
\hline
$\{ y_k \}$ & 0.55 & 0.69 & 0.65 & 0.00 & 0.027 \\
\bottomrule
\end{tabular}}
\end{table}
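The population and response-probability generation just described can be sketched as follows; the seed and the exact mixture mechanism (a fair component indicator) are our own choices, and the Gaussian variances are converted to standard deviations:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000
x1 = rng.normal(1.0, 0.5, N)                        # variance 0.25
comp2 = rng.random(N) < 0.5
x2 = np.where(comp2, rng.normal(6.0, 0.5, N), rng.normal(10.0, 0.5, N))
x3 = rng.gamma(2.0, 1.0 / 3.0, N)                   # shape 2, rate 3
comp4 = rng.random(N) < 0.5
x4 = np.where(comp4, rng.normal(2.0, 4.0, N),       # variance 16
              rng.gamma(3.0, 1.0 / 3.0, N))
eps = rng.normal(0.0, 1.0, N)
y = 6 * x1 + 4 * x2 + np.cos(x3) + np.sqrt(np.abs(x4 - x4.mean())) + eps
lam = np.array([0.46, -0.06])
p = 1.0 / (1.0 + np.exp(-(lam[0] * x1 + lam[1] * x2)))  # response probs
t = y.sum()                                         # true total to estimate
```

Note that numpy's gamma generator is parameterized by shape and scale, so a rate of 3 translates into a scale of $1/3$.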
A comparison between the aforementioned total estimators is performed in different scenarios: when the nonresponse model is correctly versus incorrectly specified, and when the working model is correctly versus incorrectly specified.
The couple $\{ x_{k1}, x_{k2}\}$ is strongly related to $\{y_k\}$ and $\{p_k\}$.
The couple $\{ x_{k3}, x_{k4}\}$ is weakly related to $\{y_k\}$ and unrelated to $\{p_k\}$.
Four different scenarios are considered in which different couples of variables are used to fit the response model and the working model.
In scenarios 1 and 2, the working model fits the data well, whereas in scenarios 3 and 4 it fits poorly.
In scenarios 1 and 3, the nonresponse model fits the data well, whereas in scenarios 2 and 4 it fits poorly.
Note that in scenario 1 both models fit the data well, in scenarios 2 and 3 only one of the two models fits well, and in scenario 4 both models fit poorly.
Table~\ref{table1} shows which couple of variables, i.e. $\{ x_{k1}, x_{k2}\}$ or $\{ x_{k3}, x_{k4}\}$, is used to fit the models.
We compare five estimators: $\widehat{t}_{HT}$, $\widehat{t}_{NWA}$, and $\widehat{t}_{m_r,\widehat{p}}$, defined in Section~\ref{section:framework}, the imputed estimator $\widehat{t}_{imp} = \sum_{k\in s_r} y_k/\pi_k + \sum_{k\in s\backslash s_r} m_r(\hbox{$\bf x$}_k)/\pi_k$, and the naive estimator $\widehat{t}_{naive}= N n_r^{-1} \sum_{k\in s_r} y_k$.
Estimator $\widehat{t}_{HT}$ is unavailable in practice with nonresponse.
It serves here as comparison point.
Estimators $\widehat{t}_{imp}$ and $\widehat{t}_{m_r,\widehat{p}}$ depend on the estimated function $m_r(.)$. Three different prediction methods are used to obtain $m_r(.)$: generalized regression, local polynomial regression, and $K$-nearest neighbors. Estimators $\widehat{t}_{NWA}$ and $\widehat{t}_{m_r,\widehat{p}}$ depend on the estimated response probabilities.
We select $I=10\,000$ samples, denoted by $s^{(i)}$, $i=1,\dots,I$, of size $n=200$ from population $U$ using simple random sampling without replacement.
For each sample, we randomly generate missing values in the survey variable using the response
probabilities $\{ p_k \}$ and a Poisson sampling design.
The expected number of observed values $n_r$ in each sample $s^{(i)}$ is $n/2 = 100$.
We can then define the sub-sample $s^{(i)}_r \subset s^{(i)}$ containing the units for which $ y_k $ is observed at simulation run $i$.
In order to evaluate the quality of the nonresponse model and of the working model at simulation run $i$, $i \in \{ 1, \dots, I \}$, two quantities are computed: the mean absolute error of the estimated response probabilities $\{ \widehat{p}_k \}$
$$
\mbox{MAE}(\widehat{p}_k) = \frac{1}{n_r} \sum_{k\in s_r^{(i)}} \mid \widehat{p}_k - p_k \mid,
$$
and the mean relative prediction error
$$
\mbox{MRPE}(m_r(\textnormal{\textbf{z}}_k)) = \frac{1}{N} \dfrac{\sum_{k\in U} \mid m_r(\textnormal{\textbf{z}}_k) - y_k \mid}{\sum_{k\in U} y_k},
$$
where $\textnormal{\textbf{z}}_k = (x_{k1}, x_{k2})^\top$ or $\textnormal{\textbf{z}}_k = (x_{k3}, x_{k4})^\top$, depending on the scenario.
The goodness of fit of the working and nonresponse models is assessed by averaging, for each scenario, the MAE and MRPE over the simulation runs.
Table~\ref{table2} contains these averages.
\begin{table}[htb!]
\caption{Couple of variables used to obtain the estimated response probabilities $\widehat{p}_k$ and the estimated function $m_r(.)$ for four scenarios. \label{table1}}
\centering
\resizebox{10cm}{!}{
\fontsize{9}{11}\selectfont
\begin{tabular}[t]{cc|cc}
\toprule
& & \multicolumn{2}{c}{$\widehat{p}_k$} \\
& & $\{x_{k1}, x_{k2}\}$ & $\{x_{k3}, x_{k4}\}$ \\
\midrule
\multirow{2}{3em}{$m_r(.)$} & $\{x_{k1}, x_{k2}\}$ & Scenario 1 & Scenario 2 \\
&$\{x_{k3}, x_{k4}\}$& Scenario 3 & Scenario 4 \\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[htb!]
\caption{Average over the simulation runs of the mean absolute error (MAE) of the estimated response probabilities $\widehat{p}_k$ and of the mean relative prediction error (MRPE) of the estimation $m_r(\textnormal{\textbf{z}}_k)$ in four scenarios. \label{table2}}
\centering
\resizebox{8cm}{!}{
\fontsize{9}{11}\selectfont
\begin{tabular}[t]{lrrrr}
\toprule
\multicolumn{1}{c}{ } & \multicolumn{4}{c}{Scenario} \\
\cmidrule(l{3pt}r{3pt}){2-5}
& 1 & 2 & 3 & 4\\
\midrule
\multicolumn{5}{l}{$\mbox{MAE}(\widehat{p}_k)$} \\
& 0.046 & 0.136 & 0.046 & 0.136\\
\addlinespace[1ex]
\multicolumn{5}{l}{$\mbox{MRPE}(m_r(\textnormal{\textbf{z}}_k))$}\\
\hspace{1em}GREG & 0.025 & 0.015 & 0.240 & 0.240\\
\hspace{1em}poly & 0.028 & 0.028 & 0.252 & 0.252\\
\hspace{1em}$K$-nn & 0.058 & 0.058 & 0.252 & 0.252\\
\bottomrule
\end{tabular}}
\end{table}
For each pair of samples $s^{(i)}$ and $s_r^{(i)}$, we estimate the population total with the five aforementioned total estimators.
For a generic total estimator $\widehat{t}$, we compute the Monte Carlo bias relative to the true total
$$
\mbox{RB}(\widehat{t}) = \dfrac{\frac{1}{I} \sum_{i=1}^I (\widehat{t}^{(i)} - t)}{t}
$$
and the Monte Carlo standard deviation relative to the true total
$$
\mbox{RSd}(\widehat{t}) = \dfrac{ \sqrt{\frac{1}{I-1} \sum_{i=1}^I (\widehat{t}^{(i)} - t)^2}}{t},
$$
where $\widehat{t}^{(i)}$ is the value of $\widehat{t}$ obtained at simulation run $i \in \{ 1, \dots, I\}$.
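These two Monte Carlo summaries translate directly into code. The sketch below is a minimal implementation of RB and RSd as defined above, assuming the $I$ simulated estimates are collected in a list; the names are illustrative.

```python
import math

def relative_bias(t_hats, t):
    # RB: Monte Carlo bias of the estimator relative to the true total t
    i_runs = len(t_hats)
    return sum(t_hat - t for t_hat in t_hats) / (i_runs * t)

def relative_sd(t_hats, t):
    # RSd: Monte Carlo standard deviation relative to the true total t.
    # Per the definition in the text, deviations are taken about the true
    # total t rather than about the Monte Carlo mean.
    i_runs = len(t_hats)
    var = sum((t_hat - t) ** 2 for t_hat in t_hats) / (i_runs - 1)
    return math.sqrt(var) / t
```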
We compare the total estimators for each scenario. Figure~\ref{plotres} summarizes the results; more detailed results are given in the Appendices, in Tables~\ref{tablesce1}-\ref{tablesce4} for scenarios~1-4, respectively. Note that only the first three considered estimators are available in practice with nonresponse. The HT estimator $\widehat{t}_{HT}$ is unavailable and serves as a comparison point.
In scenario 1, both the nonresponse and working models fit the data well. Our proposed NWA model-assisted estimator $\widehat{t}_{m_r,\widehat{p}}$ and $\widehat{t}_{NWA}$ perform best in this scenario. They have an RB close to that of the unbiased estimator $\widehat{t}_{HT}$ and have the lowest relative standard deviation.
In scenario 2, our proposed estimator $\widehat{t}_{m_r,\widehat{p}}$ shows the best results of all available estimators even though the nonresponse model fits poorly. For the first two prediction methods, its bias is of the same order as that of $\widehat{t}_{HT}$, which is unbiased. For $K$-nearest neighbours, its bias is smaller than that of the other three available estimators. It shows the best results in terms of standard deviation and is more efficient than the NWA and HT estimators. This confirms that the working model improves the efficiency of the total estimator. In scenario~3, the working model is misspecified. The NWA estimator $\widehat{t}_{NWA}$ provides the best results, followed by our proposed model-assisted estimator $\widehat{t}_{m_r,\widehat{p}}$. The reason is that the response model is correctly specified in this scenario. Finally, in scenario 4, both the nonresponse model and the working model fit the data poorly. In this case, the performance of $\widehat{t}_{m_r,\widehat{p}}$ is comparable to that of $\widehat{t}_{NWA}$ and $\widehat{t}_{imp}$, which rely on only one of the two models.
The general conclusion of the simulation study is that the proposed estimator $\widehat{t}_{m_r,\widehat{p}}$ globally performs as well as or better than the estimators $\widehat{t}_{NWA}$, $\widehat{t}_{imp}$, and $\widehat{t}_{naive}$, even when one or both of the working and nonresponse models are misspecified.
Our estimator hence provides protection against model misspecification and greater confidence in the total estimate.
\begin{figure}[ht!]
\centering
\input{Rplot_res.tex}
\caption{
Bias and standard deviation of total estimators relative to the true total for scenarios 1 to 4 with three prediction methods: generalized regression (GREG), polynomial regression (poly) and $K$-nearest neighbors ($K$-nn) with $K=5$. Estimator $\widehat{t}_{HT}$ is a comparison point and is unavailable with nonresponse.
\label{plotres}}
\end{figure}
\section{Discussion}\label{section:discussion}
We adapt model-assisted total estimators to missing at random data, building on the idea of nonresponse weighting adjustment. We consider nonresponse as a second phase of the survey and reweight the units by the inverse of their estimated response probabilities in model-assisted estimators in order to compensate for the nonrespondents. We develop the asymptotic properties of our proposed estimator and give conditions under which it is asymptotically unbiased. Our proposed estimator can be written as a weighted estimator, and we show cases in which the resulting weights are calibrated to the totals of the auxiliary variables. We conduct a simulation study to empirically assess the performance of our estimator. The results of this study confirm that our estimator generally outperforms the competing estimators, even when the underlying models are misspecified. Further work includes the study of our estimator under other working models as well as the extension to not missing at random data.
\section*{Acknowledgements}
This work was partially funded by the Swiss Federal Statistical Office.
The views expressed in this paper are those of the authors solely.
\clearpage
\bibliographystyle{apalike}
\section{Introduction}\label{Sec:Introduction}
Supermassive black holes (SMBHs) with masses upwards of a billion solar masses have been observed less than one billion years after the Big Bang \citep{Fan_2006, Mortlock_2011, Wu_2015, Banados_2018}. However, the mechanisms which allow for the formation of supermassive black holes are hotly debated and currently unknown \citep[for a recent review see][]{Woods_2018}. The
mainstream scenarios fall into two broad categories. The first invokes \textit{light} seeds as the
origin of the massive black holes. Light seeds are thought to have masses between 30 and 1000 \msolar and may be formed as the end point of Population III (PopIII) stars \citep{Abel_2002, Bromm_2002, Madau_2001}. Light seeds may also emerge from the core collapse of a dense stellar cluster \citep{Begelman_78b, Freitag_2006, Merritt_2008, Devecchi_2008, Freitag_2008, Lupi_2014, Katz_2015}, where stellar collisions result in the formation of a massive black hole. However, there is a general consensus within the community that
growing from light seed masses up to one billion solar masses may be demanding in the early Universe and that the vast majority of light seeds suffer from starvation in their host halo \citep{Whalen_2004, Alvarez_2009, Milosavljevic_2009, Smith_2018}; see, however, \citet{Alexander_2014, Inayoshi_2018, Pacucci_2017} for examples of super-Eddington accretion mechanisms which may circumvent light seed growth restrictions.
The second mechanism advocates \textit{heavy} seeds with initial masses between 1000 \msolar and 100,000 \msolarc. This scenario is commonly referred to as the ``Direct Collapse Black Hole'' (DCBH) scenario \citep{Eisenstein_1995b, Oh_2002, Bromm_2003} and relies on the collapse of a metal-free gas cloud directly into a massive black hole. Depending on the
exact thermodynamic conditions of the collapse, the massive black hole phase may be preceded by an intermediary stage involving a supermassive star \citep{Shapiro_1979, Schleicher_2013, Hosokawa_2013, Inayoshi_2014, Woods_2017, Haemmerle_2017b, Haemmerle_2017} or a quasi-star \citep{Begelman_2006, Begelman_2008}. Initial numerical investigations of the collapse of atomic cooling haloes revealed that the collapse could proceed monolithically and that the formation of a massive black hole seed with a mass of up to 100,000 \msolar was viable in the early Universe, where atomic cooling haloes were both metal-free and free of \molH \citep{Bromm_2002, Wise_2008a, Regan_2009b, Regan_2009}.
As the numerical investigations became more sophisticated, the research focus shifted to understanding how metal-free atomic cooling haloes could exist that remained free of star formation. \molH cooling within mini-haloes, which precede atomic cooling haloes, would lead to the formation of PopIII stars, thus shutting off the pathway to massive black hole seed formation.
\molH can be dissociated by radiation in the Lyman-Werner (LW) band \citep{Field_1966} between 11.2 and 13.6 eV. If the intensity of LW radiation is strong enough then \molH formation can be suppressed, allowing for the formation of an atomic cooling halo in which \molH cooling is prevented and the halo must cool and collapse on the so-called atomic track. A number of authors
\citep{Shang_2010, Wolcott-Green_2011, Sugimura_2014, WolcottGreen_2012, Regan_2014a, Visbal_2014, Agarwal_2015a, Latif_2015}
examined the intensity of LW radiation required to completely suppress \molH formation and found that the intensity of LW radiation impinging onto a nascent halo needed to be upwards of 1000 J$_{21}$\footnote{J$_{21}$ is shorthand for $1 \times 10^{-21} \ \rm{erg \ s^{-1} cm^{-2} Hz^{-1} sr^{-1}}$ and measures the intensity of radiation at a given point.}. Only pristine, metal-free haloes in close proximity to another rapidly star-forming halo would be able to fulfil that criterion, given that 1000 J$_{21}$ is orders of magnitude above expected mean background values \citep[e.g.][]{Ahn_2009}. Two haloes developing closely separated in both time and space would allow for this mechanism, and hence the ``synchronised-halo'' model was developed by \cite{Dijkstra_2008}, which advocated this approach as being conducive to the formation of atomic cooling haloes in which \molH is fully suppressed. \cite{Regan_2017} tested the theory rigorously through numerical simulations, showing that for a closely separated pair of atomic cooling haloes the complete suppression of \molH can be achieved in one of the pair, leading to an isothermal collapse of its core. The exact abundance of synchronised haloes is challenging to predict analytically, and even in optimistic evaluations the number density of synchronised pairs may only be able to seed a sub-population of all SMBHs \citep{Visbal_2014b, Inayoshi_2015b, Habouzit_2016}.
More recently \cite{Wise_2019}, hereafter W19, showed that the rapid assembly of haloes can also lead to the suppression of \molH and should be significantly more common than the synchronised-pair scenario (though this mechanism does not necessarily lead to pure isothermal collapse, while the synchronised scenario should). Dynamical heating \citep{Yoshida_2003a, Fernandez_2014} can suppress the impact of \molH cooling, keeping an assembling halo hot and preventing the formation of stars. W19 investigated two haloes in particular from a set of high-resolution adaptive mesh refinement simulations of the early Universe that were found to have breached the atomic cooling limit while remaining metal-free and star-free. The two haloes targeted for detailed examination were the most massive halo (MMHalo) and the most irradiated halo (LWHalo) at the final output of the simulation, redshift 15. W19 found that the haloes were subject to only relatively mild LW exposure and that, in the absence of all other external effects, they should have formed stars. The haloes experienced especially rapid growth compared to typical haloes, and the extra dynamical heating driven by this rapid growth allowed them to remain star-free. The haloes did not show any initial signs of rapid collapse; however, W19 did not run their simulations beyond the formation of the first density peak, and further evolution of these haloes is still required to determine the detailed characteristics of the objects that form. In this study we examine the entire dataset of metal-free and star-free haloes produced by the simulations used in W19, which allows for a broader analysis of the physics driving the formation of these pristine objects. We identify DCBH candidates at each redshift and investigate the environmental conditions that lead to the emergence of atomic cooling haloes which are both metal-free and star-free.
\begin{figure*}
\centering
\begin{minipage}{175mm} \begin{center}
\centerline{
\includegraphics[width=0.525\textwidth]{FIGURES/NumberOfDCBHs.png}
\includegraphics[width=0.525\textwidth]{FIGURES/NumberOfDCBHs_Cumulative.png}}
\caption{\textit{Left Panel}: The number of DCBH candidate haloes found at each redshift in each region. \textit{Right Panel}: The cumulative number of DCBH candidate haloes found as a function of redshift. The \rarepeak region (blue line) has formed a total of 76 candidate DCBH haloes, while the \normal region (green line) has formed a total of 3. The running total counts the DCBH candidate haloes formed over the entire simulation, excluding duplicates and accounting for candidate haloes that subsequently become polluted. For completeness the age of the Universe is included at the top of each panel.
\label{Fig:NumberDCBHs}}
\end{center} \end{minipage}
\end{figure*}
\section{Renaissance Simulation Suite} \label{Sec:Renaissance}
The Renaissance simulations were carried out on the Blue Waters supercomputer using the adaptive mesh refinement code \texttt{Enzo~}\citep{Enzo_2014}\footnote{https://enzo-project.org/}. \texttt{Enzo~} has been extensively used to study the formation of structure in the early universe \citep{Abel_2002, OShea_2005b, Turk_2012, Wise_2012b, Wise_2014, Regan_2015, Regan_2017}. In particular, \texttt{Enzo~} includes a ray-tracing scheme to follow the propagation of radiation from star formation and black hole formation \citep{WiseAbel_2011}, as well as a detailed multi-species chemistry model that tracks the formation and evolution of nine species \citep{Anninos_1997, Abel_1997}. Crucially, the photo-dissociation of \molH is followed, which is a critical ingredient for determining the formation of the first metal-free stars \citep{Abel_2000}.
The datasets used in this study were originally derived from a simulation of the universe in a box 28.4 \mpch on a side using the WMAP7 best-fit cosmology. Initial conditions were generated using MUSIC \citep{Hahn_2011} at z = 99. A low-resolution simulation was run until z = 6 in order to identify three different regions for re-simulation \citep{Chen_2014}. The volume was then smoothed
on a scale of 5 comoving Mpc, and regions of high
($\langle\delta\rangle \equiv \langle\rho\rangle/(\Omega_M \rho_c) - 1 \simeq 0.68$), average ($\langle\delta\rangle \simeq 0.09$), and low ($\langle\delta\rangle \simeq -0.26$)
mean density were chosen for re-simulation.
These sub-volumes are referred to as the \rarepeak region, the \normal region and the \void region. The \rarepeak region has a comoving volume of 133.6 Mpc$^3$, while the \normal and \void regions each have a comoving volume of 220.5 Mpc$^3$. Each region was then re-simulated with an effective initial resolution of $4096^3$ grid cells and particles within these sub-volumes of the larger initial simulation. This gives a dark matter particle mass resolution of $2.9 \times 10^4$ \msolarc. For the re-simulations of the \voidc, \normal and \rarepeak regions, further refinement was allowed throughout the sub-volumes up to a maximum refinement level of 12, corresponding to a spatial resolution of 19 comoving pc. Since the regions focus on different over-densities, each region was evolved forward in time to a different epoch. The \rarepeak region, being the most over-dense and hence the most computationally demanding at early times, was run until z = 15. The \normal region ran until z = 11.6, and the \void region ran until z = 8. In all of the regions the halo mass function is very well resolved down to M$_{halo} \sim 2 \times 10^6$ \msolarc. The \rarepeak region contained 822 galaxies with masses larger than $10^9$ \msolar at z = 15, the \normal region contained 758 such galaxies at z = 11.6, and the \void region contained 458 such galaxies at z = 8.
As noted in \S \ref{Sec:Introduction}, in W19 we examined two metal-free and star-free haloes from the \rarepeak simulation, using only the z = 15 dataset. In this work we examine all of the datasets available from the \voidc, \normal and \rarepeak regions to obtain a larger sample of emerging DCBH candidate haloes across all three simulations and across all redshift outputs. In the next section we examine both the number density of DCBH candidate haloes across time and the environmental conditions which lead to their appearance.
\section{Results}
We investigate here the emergence of DCBH candidate haloes in the Renaissance simulations. We first investigate the absolute number of DCBH candidate haloes which form in each of the three simulation regions. We then examine in more detail the physical conditions which allow their emergence.
\subsection{The abundance of DCBH candidate haloes}
In the left panel of Figure \ref{Fig:NumberDCBHs} we show the absolute number of candidate DCBH haloes in each simulation region over the range of redshift outputs available to us. In the right hand panel we show the running total for the number of candidate DCBH haloes formed over the course of the entire simulation.
As noted in \S \ref{Sec:Renaissance}, the \rarepeak simulation runs to z = 15, the \normal simulation runs to z = 11.6 and the \void simulation runs to z = 8. At each redshift snapshot we calculate the number of \textit{metal-free, atomic cooling haloes which contain no stars}. The number of these DCBH candidate haloes, N$_{DCBH}$, versus redshift is shown in the left-hand panel of Figure \ref{Fig:NumberDCBHs}. The \rarepeak simulation (blue line) contains the largest absolute number of DCBH candidate haloes. At the final output time (z = 15) there are 12 candidate DCBH haloes in the \rarepeak volume, compared to 0 in the \normal volume at z = 11.6; however, candidates are detected in the \normal region at other outputs. No candidates are detected in the \void region at any redshift output and hence we do not explore the \void region any further in this work.
We can see that the number of DCBH candidate haloes fluctuates over time, although the overall trend is an increase in the number of DCBH candidate haloes per unit redshift. The increase is seen more prominently in the right-hand panel of Figure \ref{Fig:NumberDCBHs}. The running total of DCBH candidate haloes increases rapidly: by z = 15 the \rarepeak simulation has hosted 76 DCBH halo candidates, while the \normal region has hosted 3. The cumulative total accounts for the fact that a previous DCBH candidate halo can become polluted and hence no longer match the criteria, even though it may now host a DCBH\footnote{Renaissance has no subgrid model for DCBH formation, and so DCBH formation is not recorded as haloes assemble.}. In contrast, the left-hand panel is a pure snapshot at each output time and retains no memory of the haloes' histories.
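The bookkeeping behind the running total can be sketched as follows. This assumes halo identifiers can be tracked across outputs (e.g. via merger trees); that data layout is our assumption for illustration, not a description of the Renaissance analysis pipeline.

```python
def cumulative_dcbh_count(snapshots):
    # `snapshots` is a list, ordered from high to low redshift, of sets of
    # halo identifiers satisfying the candidate criteria at that output
    # (atomic cooling, metal-free, star-free).  A halo is counted once,
    # the first time it qualifies, and stays in the running total even if
    # it later becomes polluted and drops out of the per-snapshot count.
    seen = set()
    totals = []
    for candidates in snapshots:
        seen |= candidates
        totals.append(len(seen))
    return totals
```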
\begin{figure*}
\centering
\begin{minipage}{175mm} \begin{center}
\centerline{
\hspace*{0.35cm}\includegraphics[width=10.5cm, height=10cm]{FIGURES/Normal_number_density.pdf}
\includegraphics[width=10.5cm, height=10cm]{FIGURES/Rarepeak_number_density.pdf}}
\caption{Left Panel: Projection of the \normal simulation volume with dashed red circles identifying the location of all 3 DCBH halo candidates across all redshift outputs. Right Panel: Projection of the \rarepeak simulation volume with dashed red circles identifying the location of all 76
DCBH candidates across all redshift outputs. The \rarepeak projection is made at z = 15 and the \normal projection is made at z = 11.6 although the DCBH candidate haloes may have formed at a different epoch.}
\label{Fig:Projections}
\end{center} \end{minipage}
\end{figure*}
In Figure \ref{Fig:Projections} we plot the location of each of the distinct DCBH candidate haloes on top of projections of the number density of the \rarepeak and \normal regions. In each case the projection is made at the final redshift output (\textit{Rarepeak}, z = 15; \textit{Normal}, z = 11.6). The dashed red circles denoting the halo locations are drawn from across all redshift outputs and hence should be seen as approximate. Nonetheless, what is immediately obvious is that the emergence of DCBH candidate haloes is a ubiquitous feature of high-density regions. The number of candidate haloes in the \normal region is significantly reduced compared to the \rarepeak region. The reason behind this difference is multifaceted, depending on the growth of structure, the mean density of the inter-galactic medium in each region and the flux of LW radiation.
The number of galaxies above some given minimum mass M$_{min}$(z) in a redshift bin of width $dz$ and solid angle $d\Omega$ can be defined using the Press-Schechter formalism \citep{PressSchecter_1974}.
\begin{equation}
{{dN} \over {d\Omega dz}}(z) = {{dV} \over {d\Omega dz}}(z) \int^{\infty}_{M_{min}(z)} dM {dn \over dM} (M, z)
\end{equation}
where $dV / d\Omega dz$ is the cosmological comoving volume element at a given redshift and $(dn / dM)dM$ is the
comoving halo number density as a function of mass and redshift. The latter quantity was expressed by \cite{Jenkins_2001} as
\begin{equation}
\begin{aligned}
{dn \over dM}(M, z) = {} & -0.315 {\rho_0 \over M} {1 \over \sigma_M} {d\sigma_M \over dM} \times \\
& \exp\left(-\left|0.61 - \ln(D(z) \sigma_M)\right|^{3.8}\right)
\end{aligned}
\end{equation}
where $\sigma_M$ is the RMS density fluctuation, computed on mass scale $M$ from the $z = 0$ linear power spectrum \citep{Eisenstein_Hu_1999}; $\rho_0$ is the mean matter density of the universe, defined as $\rho_0 = \Omega_M \rho_c$ (with $\rho_c$ being the cosmological critical density, $\rho_c = 3H_0^2/8 \pi G$); and $D(z)$ is the linear growth function (see, e.g., \cite{Hallman_2007} for details). Taken together, we find that $dn/dM$ scales approximately as $\rho \sigma_{M}^{3.8}$.
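To make the scaling concrete, the Jenkins fit can be evaluated numerically. The sketch below is purely illustrative: the mean density and the power-law $\sigma_M(M)$ are toy stand-ins (a real calculation integrates the $z = 0$ linear power spectrum), the growth function defaults to the Einstein-de Sitter approximation $D(z) = 1/(1+z)$, and the logarithm is assumed natural.

```python
import math

RHO0 = 4.1e10  # illustrative mean comoving matter density [Msun / Mpc^3]

def sigma_m(m):
    # Toy power-law RMS density fluctuation on mass scale m [Msun]
    return 9.0 * (m / 1.0e10) ** -0.3

def dsigma_dm(m, eps=1.0e-4):
    # Central-difference derivative of sigma_M with respect to M
    return (sigma_m(m * (1.0 + eps)) - sigma_m(m * (1.0 - eps))) / (2.0 * m * eps)

def jenkins_dndm(m, z, growth=lambda z: 1.0 / (1.0 + z)):
    # Jenkins et al. (2001) fit as quoted in the text; the leading minus
    # sign makes dn/dM positive because dsigma_M/dM < 0
    s = sigma_m(m)
    return (-0.315 * RHO0 / m * (1.0 / s) * dsigma_dm(m)
            * math.exp(-abs(0.61 - math.log(growth(z) * s)) ** 3.8))
```

With these toy inputs the mass function is positive and falls steeply with increasing mass, as expected.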
The higher mean density and higher $\sigma_M$ in the \rarepeak region compared to the \normal region are therefore consistent with previous findings showing that there are approximately 3 - 4 times more haloes, per unit redshift, in the \rarepeak region \citep{Xu_2013, OShea_2015}. Moreover, the higher mean densities in the \rarepeak region lead to a smaller volume filling fraction of metal enrichment there compared to the \normal region: supernova blastwave calculations alone give a volume filling fraction of 0.7 in the \rarepeak region relative to the \normal region. Finally, the flux of LW radiation is also much higher in the \rarepeak region, as there are more haloes producing more stars per unit volume than in the \normal region (see e.g. \cite{Xu_2013}). The combination of these three factors leads to significantly more DCBH candidate haloes in the \rarepeak region. Over the time interval covered by the Renaissance simulations this leads to 76 DCBH candidates in the \rarepeak region compared to just 3 in the \normal region.
\begin{figure*}
\centering
\begin{minipage}{175mm} \begin{center}
\centerline{
\includegraphics[width=9.5cm, height=8cm]{FIGURES/Dist.pdf}
\includegraphics[width=9.5cm, height=8cm]{FIGURES/J21.pdf}}
\caption{Left Panel: The distance from each candidate DCBH halo to the nearest massive galaxy (defined as
the closest star forming halo, see text for more details) for each
region. Right Panel: The value of the LW radiation field, in units of J$_{21}$, felt at the centre of each DCBH
candidate. For the majority of DCBH candidate haloes the LW radiation they are exposed to is within an order of magnitude of the background level at that redshift; only a small number experience radiation levels more than one order of magnitude above the background. The grey vertical band indicates the approximate level of background LW radiation expected at z = 15 \citep{Ahn_2009, Xu_2013}.}
\label{Fig:DistJ21}
\end{center} \end{minipage}
\end{figure*}
\subsection{The physical conditions required for DCBH candidate halo formation}
In Figure \ref{Fig:DistJ21} we plot the distance from each DCBH candidate halo to the nearest massive galaxy, together with the level of LW radiation that each candidate halo is exposed to. In the left-hand panel of Figure \ref{Fig:DistJ21} the distance\footnote{All distances discussed are in physical units unless explicitly stated otherwise.} to the nearest massive galaxy (defined below) is calculated by examining every halo in a sphere of radius 1 Mpc around the DCBH candidate halo. The stellar mass of each halo is then normalised by the square of the distance between that halo and the candidate halo; this normalisation accounts for the $1/r^{2}$ fall-off in radiation intensity with distance. The galaxy with the largest normalised stellar mass is then taken as the nearest massive galaxy. In the \rarepeak simulation most galaxies lie at least 10 kpc away, with a fairly even spread up to nearly 100 kpc, beyond which the distribution starts to decline. In the \normal simulation, which has only 3 candidates, the nearby galaxies lie approximately 5 kpc and (in the other two cases) 50 kpc away. This tells us that close proximity to nearby star-forming galaxies is not (directly) correlated with forming DCBH candidate haloes.
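The selection procedure described above can be sketched as follows; the data layout (a list of position/stellar-mass pairs) and the units are our illustrative assumptions, not the simulation's actual data structures.

```python
def nearest_massive_galaxy(candidate_pos, galaxies, radius=1000.0):
    # Consider every star-forming halo within `radius` (physical kpc,
    # matching the 1 Mpc search sphere in the text) of the candidate,
    # weight its stellar mass by 1/r^2 to mimic the fall-off of radiation
    # intensity, and return the halo with the largest weighted mass.
    # `galaxies` is a list of (position, stellar_mass) tuples.
    best, best_score = None, 0.0
    for pos, mstar in galaxies:
        r2 = sum((a - b) ** 2 for a, b in zip(pos, candidate_pos))
        if 0.0 < r2 <= radius ** 2:
            score = mstar / r2
            if score > best_score:
                best, best_score = (pos, mstar), score
    return best
```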
In the right-hand panel we show the level of LW radiation each candidate halo is exposed to at the associated redshift output. Here the results are somewhat more defined. For the \rarepeak region the values of J$_{LW}$ lie between 0.01 and 10 J$_{21}$, while for the \normal simulation the values lie between approximately 0.1 and 1 J$_{21}$, albeit for significantly fewer DCBH candidate haloes. The values of the LW radiation field in the \rarepeak region are approximately an order of magnitude higher than the expected mean radiation field at this redshift of J$_{LW} = 10^{-2} - 10^{-1}$ J$_{21}$ \citep{Ahn_2009, Xu_2013}, marked by the shaded region in Figure \ref{Fig:DistJ21}. The reason is that the \rarepeak region has significantly more galaxies \citep{OShea_2015} than the \normal region, and the galaxies are also much brighter, especially in the LW band.
The level of LW radiation felt by the vast majority of candidate DCBH haloes is significantly below the level required to fully suppress \molH cooling \citep{Regan_2014b, Latif_2014a, Regan_2016a}, which is typically estimated to be approximately 1000 J$_{21}$. Nonetheless, the haloes do not collapse until after reaching the atomic cooling limit. As we found in W19, rapid halo growth plays a dominant role in the assembly history of these haloes, as we now discuss.
\begin{figure*}
\centering
\begin{minipage}{175mm} \begin{center}
\centerline{
\includegraphics[width=9.5cm, height=8cm]{FIGURES/Normal_LogMass_Stars.pdf}
\includegraphics[width=9.5cm, height=8cm]{FIGURES/Rarepeak_LogMass_Stars.pdf}}
\caption{Left Panel: The evolution of the total mass of each DCBH candidate halo in the \normal simulation. Also included (dashed black lines) is the evolution of three rapidly growing star-forming haloes for comparison. The mass resolution of the Renaissance simulations is approximately 20,000 \msolarc, so values below $10^6$ \msolar should be treated with caution; we therefore set the halo resolution limit of our analysis at $10^6$ \msolarc. Right Panel: The evolution of the total mass of each DCBH candidate halo in the \rarepeak simulation. In the vast majority of cases the halo grows rapidly just prior to reaching the atomic cooling limit.}
\label{Fig:MassGrowth}
\end{center} \end{minipage}
\end{figure*}
In Figure \ref{Fig:MassGrowth} we plot the mass growth of each candidate DCBH halo as a function of redshift. In both panels we plot the mass of the halo versus the redshift. The left panel contains haloes from the \normal simulation while the right hand panel contains haloes from the \rarepeak simulation. The grey region in each panel below $10^6$ \msolar signifies the region below which
the mass resolution of Renaissance becomes insufficient to confidently model haloes. Generally we are able to track haloes below this threshold and into the grey region but below $10^6$ \msolar results should be treated with caution. The dashed blue line is the limit above which a halo must grow in order to overwhelm the impact of LW radiation, M$_{min,LW}$, \citep{Machacek_2001, OShea_2008, Crosby_2013, Crosby_2016}. The dashed red line is the approximate atomic cooling threshold, M$_{atm}$, at which point cooling due to atomic hydrogen line emission becomes effective\footnote{Both M$_{min,LW}$ and M$_{atm}$ evolve with redshift although the dependence is weak over the range considered here}.
Focusing first on the \normal region, in the left panel we plot the growth rate of the three DCBH candidate haloes identified in the left panel of Figure \ref{Fig:Projections}. The DCBH candidate haloes are rapid growers but are not necessarily the fastest-growing haloes in the \normal region. To emphasise this comparison we also plot the growth of three rapidly growing haloes which contain stars. We select these three star-forming haloes from the final output of the \normal region, although rapidly growing star-forming haloes of course exist at other redshifts as well.
Here we see that haloes with high dM/dz (i.e. rapid mass growth with redshift) can be either star-free or star-forming, and hence a high dM/dz does not by itself discriminate DCBH halo candidates. Rapidly growing haloes can become metal-enriched through external enrichment processes. The enrichment allows the halo interior to cool and form stars even in the presence of dynamical heating. Therefore, any semi-analytical model or sub-grid prescription which uses dM/dz alone as a predictor for DCBH candidates will inevitably overestimate the number of candidates.
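The conclusion of this paragraph amounts to a two-condition selection rule. The toy predicate below makes that explicit; the threshold values are illustrative placeholders, not calibrated quantities from the simulations.

```python
def is_dcbh_candidate(mean_dm_dz, metallicity, dm_dz_threshold, z_max=1.0e-6):
    # Rapid growth alone is not sufficient: the halo must also have
    # remained (effectively) metal-free, otherwise external enrichment
    # allows cooling and star formation despite dynamical heating.
    return mean_dm_dz >= dm_dz_threshold and metallicity < z_max
```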
The right-hand panel of Figure \ref{Fig:MassGrowth} shows the growth of DCBH candidate haloes from the \rarepeak simulation. There is a much larger number of DCBH candidate haloes in the \rarepeak region than in the \normal region, and hence only the DCBH candidate haloes are included in this plot. Again we see strong evidence of rapid assembly. All of the haloes show evidence of rapid growth between the LW threshold and the atomic cooling limit, which is able to suppress star formation in all of these haloes. The dynamics of each halo are somewhat unique: some haloes experience major mergers that lead to bursts of dynamical heating, while others experience more steady but nonetheless rapid growth. Furthermore, some haloes are located closer to massive galaxies, which exposes them to higher LW radiation and in turn impacts the chemo-thermodynamical characteristics of the halo in question. We now examine the roles that metallicity, rapid growth and radiation play in the assembly of a DCBH candidate halo in more detail.
\begin{figure*}
\centering
\begin{minipage}{175mm} \begin{center}
\centerline{
\includegraphics[width=9.5cm, height=8cm]{FIGURES/scatter_all.pdf}
\includegraphics[width=9.5cm, height=8cm]{FIGURES/scatter_all_rarepeak.pdf}}
\caption{Left Panel: Phase-space diagram showing the maximum rate of growth (dM/dz) of the DCBH candidate haloes in the \normal region (squares). Also included is the growth rate of a large sample of star-forming haloes for comparison. It should be noted that while the DCBH candidate haloes are among the most rapidly growing haloes, star-forming haloes can grow more rapidly. The colour of the squares, stars and circles is weighted by the LW radiation to which each halo is exposed prior to the onset of star formation. Right Panel: Similar plot for the \rarepeak simulation. The growth rates, dM/dz, for DCBH candidates in the \rarepeak simulation are shown as circles, again coloured by the level of LW radiation to which they are exposed. The DCBH candidate haloes from the \normal simulation are also plotted for direct comparison. Black outer circles identify four DCBH candidates which collapse completely isothermally at T = 8000 K. The DCBH candidate halo marked with a red outer circle is the MMHalo from W19, while the green outer circle is the LWHalo from W19.}
\label{Fig:Scatter}
\end{center} \end{minipage}
\end{figure*}
\subsection{Radiation, Metallicity \& Rapid Growth all play a role}
In Figure \ref{Fig:Scatter} we examine quantitatively the dM/dz values of haloes in both the \normal and \rarepeak regions. We compare, in a 3D representation, the average dM/dz, \JLW \ and metallicity of each of the DCBH candidate haloes, as well as of a subset of star-forming haloes from the \normal region.
In the left hand panel of Figure \ref{Fig:Scatter} we focus on the \normal region. The phase diagram
shows the average growth rate, dM/dz, as a function of halo metallicity. Each symbol is coloured by the
level of LW radiation the halo is exposed to. We plot the dM/dz, metallicity and \JLW \ values of both DCBH candidate haloes (squares) and star-forming (stars) haloes. The dM/dz value is calculated by determining the time taken for a halo to grow from $5 \times 10^6$ \msolar up to the atomic cooling limit ($\sim 3 \times 10^7$ \msolarc). This measures the mean rate at which mass is accumulated by the halo once it crosses the LW threshold (blue line in Figure \ref{Fig:MassGrowth}) and up to the point it reaches the atomic cooling limit (red line in Figure \ref{Fig:MassGrowth}). Both the J$_{LW}$ value and the metallicity are calculated by taking the final value of J$_{LW}$ and metallicity respectively before star formation occurs (star formation leads to additional internal LW radiation and metal enrichment which we cannot disentangle from external effects). The three DCBH candidate haloes have among the highest dM/dz values, which goes some way to explaining why these haloes were able to suppress star formation. The dynamical heating impact of rapid growth is given by
\begin{equation}
\Gamma_{\rm{dyn}} = \alpha M_{\rm{halo}}^{-1/3} {{k_b} \over {\gamma -1}} {{ dM_{\rm{halo}} \over dt}}
\end{equation}
where $\Gamma_{\rm{dyn}}$ is the dynamical heating rate, $ M_{\rm{halo}}$ is the halo total mass and $\alpha$
is a coefficient relating the virial mass and temperature of the halo \citep{Barkana_2001}. Two of the haloes are completely metal-free while one of the haloes is experiencing some slight external metal enrichment ($\sim 2.88 \times 10^{-9}$
\zsolarc). However, it is also clear that there are star-forming haloes growing more rapidly than the star-free haloes. This is not surprising. The halo in the top right of the left panel, for example, became metal enriched early in its assembly: it formed a PopIII star but continued to assemble rapidly. In this case the metal enrichment completely negates the dynamical heating due to rapid assembly. Therefore, only haloes which remain metal-free \textit{and} grow rapidly can remain star-free.
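The scaling of the dynamical heating rate defined above can be illustrated numerically. The snippet below is our own sketch, not the Renaissance analysis code: the coefficient $\alpha$ is left as a placeholder for the virial mass--temperature coefficient of \cite{Barkana_2001}, and the halo mass and growth rate are arbitrary test values in consistent units.

```python
K_B = 1.380649e-23   # Boltzmann constant [J/K]
GAMMA = 5.0 / 3.0    # adiabatic index of a monatomic ideal gas


def dynamical_heating_rate(m_halo, dm_dt, alpha=1.0):
    """Gamma_dyn = alpha * M_halo**(-1/3) * k_B / (gamma - 1) * dM_halo/dt.

    alpha stands in for the virial mass--temperature coefficient of
    Barkana & Loeb (2001); m_halo and dm_dt are placeholder values in
    arbitrary consistent units, so only the scaling is meaningful here.
    """
    return alpha * m_halo ** (-1.0 / 3.0) * K_B / (GAMMA - 1.0) * dm_dt
```

The heating rate is linear in the growth rate and scales as $M_{\rm halo}^{-1/3}$: doubling ${\rm d}M/{\rm d}t$ doubles $\Gamma_{\rm dyn}$, while an eight-fold more massive halo is heated at half the rate for the same accretion rate.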
In the right hand panel of Figure \ref{Fig:Scatter} we plot the same phase plot for the DCBH candidate haloes (circles) in the \rarepeak simulation. Given the large number of DCBH candidate haloes in the \rarepeak region we do not include star-forming haloes from the \rarepeak region in this plot. We do, however, include the DCBH candidate haloes (squares) from the \normal region for direct comparison. For these DCBH candidate haloes there is a wide variation in Log$_{10}$ (dM/dz) with values as low as 6.3 and as high as 7.75.
Naively it would be expected that the haloes with low dM/dz values and moderate to low \JLW values would
form stars. However, inspection of individual haloes reveals bursts of rapid assembly which can result in
the suppression of \molH for at least a sound crossing time (see also W19). The average value of dM/dz, as plotted here, fails to detect these bursts: in many cases haloes with low average dM/dz values nonetheless experience a strong burst of dynamical heating that is not easily captured by an average value. We will return to this point and the impact this can have on deriving a semi-analytic prescription in \S \ref{Sec:Discussion}.
In the right hand panel we identify six haloes with circles. Four are marked with black circles. These are haloes that we have found collapse isothermally up to the maximum resolution of the Renaissance simulations ($\sim 1$ pc) and show no signs of \molH cooling in the core of the halo. Each of the isothermal haloes that we identify here is typically within a few kiloparsecs of a star-forming atomic cooling halo, but the candidate halo has not yet become either significantly metal enriched or photo-evaporated. Nonetheless, the nearby massive galaxies provide a much higher than average (average \JLW $\sim 1$ J$_{21}$) \JLW \ value. This scenario is similar to that explored by \cite{Dijkstra_2014}. We also identify the most massive halo (red circle) and the most irradiated halo (green circle) in the \rarepeak simulation at z = 15. The most massive and most irradiated haloes were previously identified in W19 and
investigated in detail. \\
\indent In Figure \ref{Fig:Rarepeak_Profiles} we show the radial profiles of a number of physical quantities for each of the haloes identified by the circles. The blue line is the most massive halo (MMHalo) and the green line is the
most irradiated halo (LWHalo). The other haloes are those which show well defined isothermal collapse profiles. Both the MMHalo and the LWHalo show clear cooling towards the molecular cooling track (bottom left panel). Each of the other haloes has temperatures greater than 8000 K all the way in to the centre of the halo and so remains on the atomic cooling track. In the top left panel we see that both the MMHalo and the LWHalo have higher \molH fractions as expected. All the haloes increase their \molH fractions as the density increases towards the centre of the halo. In the case of the isothermally collapsing haloes the fraction remains low enough so that cooling remains dominated by atomic cooling. In the top right panel we plot the enclosed gas mass as a function of radius and in the bottom right panel the instantaneous accretion rate as a function of radius. The accretion rates for each of the haloes are extremely high, with accretion rates above 0.1 \msolar per year at all radii. Accretion rates greater than approximately 0.01 \msolarc/yr are thought to be required for supermassive star formation \citep[e.g.][]{Sakurai_2016, Schleicher_2013}. The MMHalo and the LWHalo cool towards the centre of the halo,
meaning that fragmentation into a dense cluster of PopIII stars becomes more likely in those cases.
The MMHalo and the LWHalo cool towards the centre because of their higher \molH
fractions compared to the other four haloes. As can be seen in Figure \ref{Fig:Scatter}, each of the four
selected haloes has systematically higher LW radiation impinging on it, resulting in a lower \molH
fraction.
In addition, for the cases where the collapse remains isothermal the degree of fragmentation can be suppressed, with more
massive objects likely to form in that case \citep{Regan_2018a, Regan_2018b}.
\begin{figure*}
\centering
\begin{minipage}{175mm} \begin{center}
\centerline{
\includegraphics[width=18.0cm, height=12cm]{FIGURES/MultiPlot.pdf}}
\caption{In each of the four panels in this figure we compare the six DCBH haloes identified in the right hand panel of Figure \ref{Fig:Scatter}. Four of the DCBH candidate haloes are collapsing isothermally while the
MMHalo (blue line) and the LWHalo (green line) show strong evidence of non-isothermal collapse. In the bottom left hand panel we plot the temperature against radius illustrating the isothermality of the four selected DCBH candidate haloes. The MMHalo and the LWHalo clearly start to cool in the halo centre. This cooling can be directly attributed to a higher \molH fraction for the MMHalo and the LWHalo as seen in the top left panel. The enclosed mass varies inside approximately 30 pc for each halo, with an average enclosed mass of $10^5$ \msolar inside 20 pc. In the bottom right panel we show the instantaneous accretion rate for each DCBH candidate halo. All of the haloes show accretion rates greater than 0.1 \msolarc/yr across several decades in radius and continuing into the core of the halo. }
\label{Fig:Rarepeak_Profiles}
\end{center} \end{minipage}
\end{figure*}
\begin{figure*}
\centering
\begin{minipage}{175mm} \begin{center}
\centerline{
\includegraphics[width=18.0cm, height=12cm]{FIGURES/FivePanel.pdf}}
\caption{Visualisations of four of the synchronised haloes found in the \rarepeak region. Each member of a synchronised pair is an atomic cooling halo on the cusp of star formation. Typical separations between haloes are between 200 pc and 500 pc at these outputs. The red circles in each panel mark the central core of each halo. The radius of each circle is approximately 10\% of the virial radius. The virial radius of each individual DCBH candidate halo overlaps with that of its synchronised partner halo. Only the system mass is shown in each panel since the haloes are subhaloes of each other.
}
\label{Fig:SyncHaloes}
\end{center} \end{minipage}
\end{figure*}
\subsection{Synchronised Haloes} \label{Sec:Synchronised}
Synchronised haloes have been invoked as a means of generating a sufficiently high LW radiation flux to allow
total suppression of \molH in the core of an atomic cooling halo \citep{Dijkstra_2008, Visbal_2014b, Regan_2017}. The scenario supposes that two pristine progenitor atomic cooling haloes cross the atomic cooling threshold nearly simultaneously. The suppression of star formation in both haloes as they assemble eliminates the possibility of either metal enrichment or photo-evaporation from one halo to the other. The first halo to cross the atomic cooling threshold (Halo1) suffers catastrophic cooling due to neutral hydrogen line emission and begins to collapse and form stars. The LW radiation from Halo1 then irradiates the second halo (Halo2), thus suppressing \molH in Halo2 and allowing for the formation of a DCBH. We search the \rarepeak region for synchronised pairs matching the above criteria.
We look for pairs of atomic cooling haloes (ACHs) which remain pristine and devoid of star formation and are separated from each other by less than 1 kpc, but by more than 150 pc, as they cross the atomic cooling threshold. We note that this is likely somewhat optimistic given that the region of synchronisation is expected to be between approximately 150 pc and 350 pc for haloes
of this size \citep{Regan_2017}. Within the \rarepeak region we find a total of 5 pairs of pristine ACHs that fulfill the basic criteria. In Figure \ref{Fig:SyncHaloes} we show a visualisation of four of the
five haloes which are candidates for synchronised haloes. In each case the haloes are
separated by distances between approximately 200 pc and 500 pc at the time of crossing the atomic cooling threshold. In all cases the haloes are still devoid of star formation but at least one of the haloes in the
pair forms stars before the next data output. The total mass of the two atomic cooling haloes in each case
is above $10^{8}$ \msolarc. Given the proximity of the two haloes at this point it is difficult to estimate the mass of each halo individually.
\cite{Visbal_2014} examined the formation of DCBHs from synchronised haloes and estimated their abundances both analytically and through an n-body simulation. To estimate the
abundances of synchronised haloes analytically they used the following equation
\begin{equation} \label{Eq:sync}
{dn_{DCBH} \over dz} \sim {dn_{cool} \over dz} \Big ({dn_{cool} \over dz} \Delta_{z_{sync}} \int^{R.O.R} dr 4\pi r^2 [1 + \eta(r)] f_s(r) \Big )
\end{equation}
where ${{dn_{cool} \over dz}} $ is the number density of haloes which cross the cooling threshold between $z$ and $z + dz$, $\eta(r)$ is the two point correlation function which describes the enhancement of halo pairs due to clustering and $\Delta_{z_{sync}}$ is the redshift range corresponding to the synchronisation time, $f_s(r)$ is the fraction of haloes that are found at a radius, $r$, when they cross the atomic threshold. \cite{Visbal_2014} used an n-body-only simulation to determine the values required for Equation \ref{Eq:sync}. They predicted 15 synchronised pairs in a 3375 cMpc$^3$ volume. In the \rarepeak region, which has a volume of 133.6 cMpc$^3$, we find 5 synchronised pairs. Given the difference in volume our
abundance is higher by a factor of approximately 5 compared to that of \cite{Visbal_2014b}. However, the
\rarepeak region represents an over-density of approximately 1.7 compared to an average region of the universe and \cite{Visbal_2014} also performed the calculation at a somewhat lower redshift. When this is taken into account our values match those of \cite{Visbal_2014b} quite well. Furthermore, \cite{Visbal_2014b} were unable
to account for metal enrichment in their analysis, which may have led to an over-estimate of the number
density of synchronised halo candidates in that case.
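Equation \ref{Eq:sync} can be evaluated for toy inputs to see how the predicted abundance scales with the synchronisation volume. The snippet below is our own sketch with deliberately simple placeholders, not the calculation of \cite{Visbal_2014}: no clustering enhancement ($\eta = 0$) and $f_s = 1$, in which case the bracketed integral reduces to the volume $(4/3)\pi r_{\rm max}^3$ of the synchronisation sphere.

```python
import math


def _simpson(f, a, b, n=1000):
    """Composite Simpson rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0


def sync_pair_rate(n_cool, dz_sync, r_max,
                   eta=lambda r: 0.0, f_s=lambda r: 1.0):
    """Toy version of the synchronised-pair estimate:

    dn_DCBH/dz ~ n_cool * ( n_cool * dz_sync *
                  Int_0^{r_max} 4 pi r^2 [1 + eta(r)] f_s(r) dr ),

    with n_cool = dn_cool/dz.  The clustering excess eta and the
    fraction f_s are placeholder callables, defaulting to the
    unclustered case eta = 0, f_s = 1.
    """
    integral = _simpson(lambda r: 4.0 * math.pi * r * r
                        * (1.0 + eta(r)) * f_s(r), 0.0, r_max)
    return n_cool * n_cool * dz_sync * integral
```

With the default placeholders the bracketed integral is exactly $(4/3)\pi r_{\rm max}^3$, so the pair rate scales as $n_{\rm cool}^2$ times the synchronisation volume, which is the scaling used in the volume comparison above.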
In order to test the feasibility of the synchronised haloes found in this work, a zoom-in re-simulation of the region surrounding the synchronised pairs is required which accounts for both normal PopIII star formation in Halo1 and possible super-massive star formation in Halo2. To provide a sufficient flux,
\cite{Regan_2017} predicted that Halo1 must form approximately $10^5$ \msolar of stellar mass in order to generate a sufficiently strong LW flux to achieve isothermal collapse. However, the DCBH candidate haloes
found here have already had their ability to form \molH suppressed due to dynamical heating. Therefore, these particular haloes may not require such intense external radiation exposure. Detailed re-simulation of these candidate haloes is now required to quantify the level of LW required in this case.
\section{Discussion and Conclusions} \label{Sec:Discussion}
We have analysed the Renaissance suite of high resolution simulations of the early Universe with the goal of identifying candidate haloes in which DCBHs can form. In total we found 79 haloes over all redshifts and volumes which have crossed the
atomic cooling limit and remain both metal-free and star-free. These 79 haloes represent ideal locations in which to form a DCBH as they will shortly undergo rapid collapse due to neutral hydrogen line emission cooling. The nature of the collapse cannot be probed in these simulations as Renaissance has no sub-grid prescription for super-massive star formation and lacks the resolution to accurately track possible fragmentation into a dense stellar cluster of PopIII stars.
In general the candidate haloes form away from massive galaxies. This allows the candidate haloes to remain free of metal enrichment. In examining the distance that these candidate haloes are from their nearest massive galaxy we find that the DCBH candidate haloes typically lie between 10 kpc and 100 kpc from the nearest massive galaxy. These massive galaxies provide LW intensities that are approximately one order of magnitude higher than the mean intensity expected at these redshifts \citep{Ahn_2015, Xu_2016}. However, only a small fraction of the candidate haloes are exposed to LW intensities greater than 10 J$_{21}$. We find that the primary driver that allows these DCBH haloes to form and remain star-free is dynamical heating achieved through the rapid growth of these haloes. The rapid growth is strongly correlated with overdense environments with 76 DCBH candidate haloes forming in the \rarepeak simulation and only 3 DCBH candidate haloes forming in the \normal region. We also note that rapid growth by itself does not guarantee that a halo will become a DCBH candidate. Successfully avoiding metal enrichment must also be accounted for. Hence, in order to derive
an accurate sub-grid prescription it will be necessary to account for genetic\footnote{The term `genetic' metal pollution was coined by \cite{Dijkstra2014a} and refers to the transfer of metals from smaller to larger haloes via mergers and accretion.} metal pollution \citep{Schneider_2006, Dijkstra2014a}. We therefore note that only hydrodynamic simulations which self-consistently follow metal transport will be able to successfully identify DCBH candidates in this case. Prescriptions which attempt to identify DCBH candidates only through the rapid growth of (dark matter) haloes will over-estimate the number density of DCBH candidates unless a metal enrichment/transport method is also used which can identify genetic metal enrichment. It should also be noted that sufficient particle (mass) resolution will be paramount to resolve bursts of
accretion which can delay \molH formation for at least a sound crossing time \citep{Wise_2019}.
While less than $5 \%$ of DCBH candidate haloes are exposed to LW intensities greater than 2 $J_{21}$, these are nonetheless the candidate haloes which display complete isothermal collapse. In the vast majority of cases our examination of the radial profiles of these DCBH candidate haloes shows that the central core of the haloes cools due to \molHc. The haloes that collapse isothermally are stronger candidates for forming a super-massive star; those which collapse non-isothermally still display rapid mass inflow but are more likely to form a dense stellar cluster \citep{Freitag_2006, Freitag_2008, Lupi_2014, Katz_2015}. However, it should be noted that the resolution, and subgrid physics modules, of Renaissance are not sufficient to probe the further evolution of these haloes. The formation of a supermassive star, a normal population of metal-free stars and/or a dense stellar cluster may be the final outcome. In order to fully understand the further evolution of these systems we are now running zoom-in
simulations across a handful of interesting haloes in order to uncover the next stage of their evolution.
Finally, our analysis also reveals the existence of five synchronised halo pairs with separations of between 200 pc and 500 pc on the cusp of undergoing collapse. These haloes represent excellent candidates for further investigation of the synchronised pair scenario \citep{Dijkstra_2008, Visbal_2014, Regan_2017}. Imminent star formation in one halo of a pair will subject the adjacent halo to intense LW radiation, preventing it from cooling due to \molHc. In that case the adjacent halo will remain on the atomic cooling track and will be a strong candidate for super-massive star formation. In addition, the subsequent merger of the two haloes should provide a plentiful supply of baryonic matter with which to generate a massive black hole seed. Zoom-in simulations of a number of promising DCBH candidate haloes are now underway.
\section*{Acknowledgements}
JHW was supported by NSF awards AST-1614333 and OAC-1835213, NASA grant NNX17AG23G, and Hubble theory grant HST-AR-14326.
BWO was supported in part by NSF awards PHY-1430152, AST-1514700, OAC-1835213, by NASA grants NNX12AC98G, NNX15AP39G, and by Hubble theory Grants HST-AR-13261.01-A and HST-AR-14315.001-A. MLN was supported by NSF grants AST-1109243, AST-1615858, and OAC-1835213.
The Renaissance simulations were
performed on Blue Waters operated by the National Center for Supercomputing Applications (NCSA) with PRAC allocation support by the NSF (awards ACI-0832662, ACI-1238993, ACI-1514580). This research is part of the
Blue Waters sustained-petascale computing project, which is supported by the NSF (awards OCI-0725070, ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its NCSA. We thank an anonymous referee whose comments greatly improved the clarity of the manuscript. The freely available astrophysical analysis code \texttt{yt} \citep{YT} and the plotting library matplotlib were used to construct numerous plots within this paper. Computations described in this work were performed using the publicly-available {\sc Enzo} code, which is the product of a collaborative effort of many independent scientists from numerous institutions around the world.
\bibliographystyle{mn2e_w}
\section{Introduction}
\setcounter{equation}{0}
We consider the two-dimensional nonlinear problem describing steady waves in a
horizontal open channel occupied by an inviscid, incompressible, heavy fluid, say
water. The water motion is assumed to be rotational which, according to
observations, is the type of motion commonly occurring in nature (see, for example,
\cite{SCJ,Th} and references cited therein). A brief characterization of various
results concerning waves with vorticity on water of finite as well as infinite depth
is given in \cite{KK2}. Further details can be found in the survey article
\cite{WS}.
Studies of bounds on characteristics of waves with vorticity were initiated by Keady
and Norbury almost 40 years ago in the pioneering paper \cite{KN}. In our
article~\cite{KK4}, we continued these studies and extended the results of \cite{KN}
to all types of vorticity distributions. In the recent note \cite{KKL}, it was
demonstrated that restrictions on the Lipschitz constant of the vorticity
distribution imposed in \cite{KN} and \cite{KK4} are superfluous in the case of
unidirectional flows. Our aim here is to obtain the same bounds as in \cite{KK4}
under the assumption that the vorticity distribution is just locally Lipschitz
continuous. Moreover, we show that wave flows have counter-currents in the case when
the infimum of the free surface profile exceeds a certain critical value; the latter
is equal to the largest depth attained by unidirectional shear flows with horizontal
free surfaces (see formula \eqref{eq:h_0} below).
The plan of the paper is as follows. Statement of the problem is given in \S\,1.1
and some background facts are listed in \S\,1.2. Necessary facts about auxiliary
one-dimensional problems are presented in \S\,1.3 (see further details in
\cite{KK3}), whereas main results are formulated in \S\,1.4. Two auxiliary lemmas
are proved in \S\,2, whereas proofs of Theorems~1--4 are given in \S\,3. In the last
section, we discuss some improvement of Theorem~1 that follows when a rather weak
condition is imposed on the derivative of the vorticity distribution instead of the
restriction required in Theorem~3.2, \cite{KK4}.
\subsection{Statement of the problem}
Let an open channel of uniform rectangular cross-section be bounded below by a
horizontal rigid bottom and let water occupying the channel be bounded above by a
free surface not touching the bottom. The surface tension is neglected and the
pressure is constant on the free surface. The water motion is supposed to be
two-dimensional and rotational which combined with the water incompressibility
allows us to seek the velocity field in the form $(\psi_y, -\psi_x)$, where $\psi
(x,y)$ is referred to as the {\it stream function} (see, for example, the book
\cite{LSh}). The vorticity distribution $\omega$ is prescribed and depends on $\psi$
as is explained in \cite{LSh}, \S~1. The vorticity distribution is supposed to be
locally Lipschitz continuous, but we impose no condition on the Lipschitz constant
here which distinguishes the present article from \cite{KN} and \cite{KK4}.
Moreover, unlike the recent note \cite{KKL} wave flows with counter-currents are
considered here along with unidirectional ones.
Furthermore, our results essentially use the following classification of vorticity
distributions which is based on properties of $\Omega (\tau) = \int_0^\tau \omega
(t) \, \D t$ and slightly differs from that proposed in \cite{KK3}:
\vspace{1mm}
\noindent \ \ (i) $\max_{0 \leq \tau \leq 1} \Omega (\tau)$ is attained either at an
inner point of $(0, 1)$ or at one (or both) of the end-points. In the latter case,
either $\omega (1) = 0$ when $\Omega (1) > \Omega (\tau)$ for $\tau \in (0, 1)$ or
$\omega (0) = 0$ when $\Omega (0) > \Omega (\tau)$ for $\tau \in (0, 1)$ (or both of
these conditions hold simultaneously).
\noindent \ (ii) $\Omega (0) > \Omega (\tau)$ for $\tau \in (0, 1]$ and $\omega (0)
< 0$.
\noindent (iii) $\Omega (\tau) < \Omega (1)$ for $\tau \in (0, 1)$ and $\omega (1) >
0$. Moreover, if $\Omega (1) = 0$, then $\omega (0) < 0$ and $\omega (1) > 0$ must
hold simultaneously.
\vspace{1mm}
\noindent It should be noted that conditions (i)--(iii) define three disjoint sets
of vorticity distributions whose union is the set of all continuous distributions on
$[0,1]$.
Non-dimensional variables are used with lengths and velocities scaled to
$(Q^2/g)^{1/3}$ and $(Qg)^{1/3}$, respectively; here $Q$ and $g$ are the dimensional
quantities for the volume rate of flow per unit span and the constant gravity
acceleration respectively. (We recall that $(Q^2/g)^{1/3}$ is the depth of the
critical uniform stream in the irrotational case; see, for example, \cite{Ben}.)
Hence the constant rate of flow and the acceleration due to gravity are scaled to
unity in our equations.
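As a concrete check of this non-dimensionalisation (our own illustration, not part of the original text): with, say, $Q = 1$ m$^2$/s and $g = 9.81$ m/s$^2$, the length scale is $(Q^2/g)^{1/3} \approx 0.467$ m and the velocity scale is $(Qg)^{1/3} \approx 2.14$ m/s, and their product recovers $Q$.

```python
def reference_scales(Q, g):
    """Scales used for the non-dimensionalisation: lengths are scaled
    by (Q^2/g)^(1/3) and velocities by (Q*g)^(1/3), where Q is the
    volume rate of flow per unit span and g the gravity acceleration
    (any consistent system of units).  Returns (length, velocity)."""
    return (Q * Q / g) ** (1.0 / 3.0), (Q * g) ** (1.0 / 3.0)
```

Note that the product of the two scales is exactly $Q$, since $(Q^2/g)^{1/3}(Qg)^{1/3} = Q$, which is why the flux is scaled to unity.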
In appropriate Cartesian coordinates $(x,y)$, the bottom coincides with the $x$-axis
and gravity acts in the negative $y$-direction. We choose the frame of reference so
that the velocity field is time-independent as well as the unknown free-surface
profile. The latter is assumed to be the graph of $y = \eta (x)$, $x\in \RR$, where
$\eta$ is a bounded positive $C^1$-function. Therefore, the longitudinal section of the
water domain is
\[ D = \{ x \in \RR,\ 0 < y < \eta (x) \} .
\]
Since the surface tension is neglected, the pair $(\psi , \eta)$ with $\psi \in C^2
(D) \cap C^{1} (\bar D)$ must satisfy the following free-boundary problem:
\begin{eqnarray}
&& \psi_{xx} + \psi_{yy} + \omega (\psi) = 0, \quad (x,y)\in D;
\label{eq:lapp} \\ && \psi (x,0) = 0, \quad x \in \RR; \label{eq:bcp} \\ && \psi
(x,\eta (x)) = 1, \quad x \in \RR; \label{eq:kcp} \\ && |\nabla \psi (x,\eta (x))|^2
+ 2 \eta (x) = 3 r, \quad x \in \RR . \label{eq:bep}
\end{eqnarray}
Here, the constant $r$ is the problem's parameter referred to as the total head or the
Bernoulli constant (see, for example, \cite{KN}). Throughout the paper we assume
that
\begin{equation}
|\psi| \ \mbox{is bounded and} \ |\psi_x| , \ |\psi_y| \ \mbox{are bounded and
uniformly continuous on} \ \bar D . \label{bound}
\end{equation}
The formulated statement of the problem has long been known and its derivation from
the governing equations and the assumptions about the boundary behaviour of water
particles can be found, for example, in \cite{CS}. Notice that the boundary
condition \eqref{eq:kcp} yields that relation \eqref{eq:bep} (Bernoulli's equation)
can be written as follows:
\begin{equation*}
\left[ \partial_n \psi (x,\eta (x)) \right]^2 + 2 \eta (x) = 3 r, \quad x \in \RR
\, . \label{eq:ben}
\end{equation*}
By $\partial_n$ we denote the normal derivative on $\partial D$, where the unit
normal $n = (n_x, n_y)$ points out of $D$.
\subsection{Background}
We begin with results obtained in the irrotational case, an extensive description of
which one finds in \cite{Ben}, \S\,1. Therefore, we restrict ourselves only to the
most important papers. As early as 1954, Benjamin and Lighthill \cite{BenL}
conjectured that $r > 1$ for all steady wave trains on irrotational flows of finite
depth. For a long period only two special kinds of waves were known, namely, Stokes
waves (periodic waves whose profiles rise and fall exactly once per period), and
solitary waves (such a wave has a pulse-like profile that is symmetric about the
vertical line through the wave crest and monotonically decreases away from it). The
inequality $r > 1$ for Stokes waves was proved by Keady and Norbury \cite{KN1} (see
also Benjamin \cite{Ben}), whereas Amick and Toland \cite{AT} obtained the proof for
solitary waves. Finally, Kozlov and Kuznetsov \cite{KK0} proved this inequality
irrespective of the wave type (Stokes, solitary, whatever) under rather weak
assumptions about wave profiles; in particular, stagnation points are possible on
them.
Furthermore, estimates of
\[ \hat{\eta} = \sup_{x \in \RR} \eta (x) \quad \mbox{and} \quad \check{\eta} =
\inf_{x \in \RR} \eta (x)
\]
were found for Stokes waves in the paper \cite{KN1}. They are expressed in terms of
the depths of the supercritical and subcritical uniform streams. Benjamin
recovered these estimates in his article \cite{Ben}, in which the inequality for
$\check{\eta}$ is generalised to periodic waves that bifurcate from the Stokes ones.
(It should be noted that only numerical evidence indicated their existence in 1995
when \cite{Ben} was published and, to the authors knowledge, there is no rigorous
proof up to the present.) For arbitrary steady wave profiles natural estimates of
these quantities were obtained in \cite{KK0} under the same assumptions as the
inequality $r > 1$. Namely, it was shown that $\check{\eta}$ is strictly less than
the depth of the subcritical uniform stream, whereas $\hat{\eta}$ is strictly
greater than it. Also, an arbitrary wave profile lies strictly above the horizontal
surface of the supercritical uniform stream, but it is well known that profiles of
solitary waves asymptote the latter at infinity.
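The sub- and supercritical depths referred to in these estimates are easy to compute in the irrotational case. The sketch below is our own illustration (not taken from the papers cited): with the non-dimensionalisation of \S\,1.1, a uniform stream of depth $d$ has $\psi = y/d$, so Bernoulli's relation \eqref{eq:bep} reduces to $1/d^2 + 2d = 3r$, that is, $2d^3 - 3rd^2 + 1 = 0$, a cubic with a double root $d = 1$ when $r = 1$ and two positive roots $d_{\rm super} < 1 < d_{\rm sub}$ for $r > 1$.

```python
def uniform_stream_depths(r, tol=1e-12):
    """Depths of the conjugate uniform streams for Bernoulli constant r > 1.

    A uniform irrotational stream of depth d with unit flux has
    psi = y/d, so Bernoulli's equation reads 1/d^2 + 2d = 3r, i.e.
    f(d) = 2d^3 - 3r d^2 + 1 = 0.  For r > 1 this cubic has a root in
    (0, 1) (supercritical) and a root in (1, 1.5r) (subcritical), since
    f(0) = 1 > 0, f(1) = 3(1 - r) < 0 and f(1.5r) = 1 > 0.
    """
    f = lambda d: 2.0 * d ** 3 - 3.0 * r * d ** 2 + 1.0

    def bisect(a, b):
        # plain bisection on an interval with a sign change
        while b - a > tol:
            m = 0.5 * (a + b)
            a, b = (m, b) if f(a) * f(m) > 0 else (a, m)
        return 0.5 * (a + b)

    return bisect(0.0, 1.0), bisect(1.0, 1.5 * r)  # (d_super, d_sub)
```

For $r = 1.1$, for instance, both computed depths satisfy $1/d^2 + 2d = 3r$ to machine accuracy, with the supercritical stream shallower and the subcritical stream deeper than the critical depth $d = 1$.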
Now we turn to the case of waves with vorticity considered in the framework of
problem \eqref{eq:lapp}--\eqref{eq:bep}. The first paper relevant to cite in this
connection was the paper \cite{KN} by Keady and Norbury who, in particular,
generalised their estimates of $\check{\eta}$ and $\hat{\eta}$ obtained for
irrotational waves in \cite{KN1}. It should be emphasised that for this purpose they
subjected the vorticity distribution $\omega$ to rather strong assumptions, one of
which restricted their considerations only to distributions that satisfy condition
(i) (see details in \S\,4). In our article \cite{KK4}, this restriction was
eliminated, whereas another one was still used. Unfortunately, some assumptions
required for proving a couple of assertions are missing in \cite{KK4} (see details
in \S\,4). On the other hand, various superfluous requirements imposed in that
paper, in particular, on the derivative of the vorticity distribution were
eliminated in the note \cite{KKL}, but only in the case of unidirectional flows.
\subsection{Auxiliary one-dimensional problems}
First, we outline some properties of solutions to the equation
\begin{equation}
U'' + \omega (U) = 0 , \ \ \ y \in \RR ;
\label{eq:U}
\end{equation}
here and below $'$ stands for $\D / \D y$. These results were obtained in \cite{KK3}
and are essential for our considerations.
The set of solutions is invariant with respect to the following transformations: $y
\mapsto y + \mbox{constant}$ and $y \mapsto - y$. There are three immediate
consequences of this property: (a) if a solution of equation \eqref{eq:U} has no
stationary points, then it is strictly monotonic (either increasing or decreasing)
on the whole $y$-axis; (b) if a solution has a single stationary point, say $y=y_0$,
then the solution's graph is symmetric about the straight line through $y=y_0$ that
is orthogonal to the $y$-axis, the solution decreases (increases) monotonically in
both directions of the $y$-axis away from this point provided it attains its maximum
(minimum respectively) there; (c) if a solution has two stationary points, then
there are infinitely many of them and the solution is periodic with one maximum and
one minimum per period, it is monotonic between the extrema and its graph is
symmetric about the straight line that goes through any extremum point orthogonally
to the $y$-axis.
By $U (y; s)$ we denote a solution of \eqref{eq:U} that satisfies the following
Cauchy data:
\begin{equation}
U (0; s) = 0 , \ \ U' (0; s) = s , \quad \mbox{where} \ s \geq 0 .
\label{eq:cd}
\end{equation}
To describe the behaviour of $U (y; s)$ we denote by $\tau_+ (s)$ and $\tau_- (s)$,
$s > 0$, the least positive and the largest negative root, respectively, of the
equation $2 \, \Omega (\tau) = s^2$. If this equation has no positive (negative)
root, we put $\tau_+ (s) = +\infty$ ($\tau_- (s) = -\infty$ respectively).
Furthermore, if $\omega (0) = 0$, then we put $\tau_\pm (0) = 0$, and if $\pm \omega
(0) > 0$, then $\tau_\pm (0) = 0$, whereas $\tau_\mp (0)$ is defined in the same way
as for $s > 0$. Considering
\begin{equation}
y_\pm (s) = \int_0^{\tau_\pm (s)} \frac{\D \tau}{\sqrt{s^2 - 2 \, \Omega (\tau)}} \,
, \label{eq:y_pm}
\end{equation}
we see that $y_+ (s)$ is finite if and only if $\tau_+ (s) < +\infty$ and $\omega
(\tau_+ (s)) > 0$, whereas the inequalities $\tau_- (s) > -\infty$ and $\omega
(\tau_- (s)) < 0$ are necessary and sufficient for finiteness of $y_- (s)$.
In terms of $y_\pm (s)$ the maximal interval, where the function $U$ given by the
implicit formula
\begin{equation}
y = \int_0^U \frac{\D \tau}{\sqrt{s^2 - 2 \Omega (\tau)}}
\label{eq:Uim}
\end{equation}
increases strictly monotonically, is $(y_- (s), y_+ (s))$. Furthermore, if $y_-
(s)$ is finite, then $U' (y_- (s); s)$ vanishes and
\[ \tau_- (s) = U (y_- (s); s) = \min_{y \in \RR} U (y; s) > -\infty .
\]
Similarly, if $y_+ (s)$ is finite, then $U' (y_+ (s); s)$ vanishes and
\[ \tau_+ (s) = U (y_+ (s); s) = \max_{y \in \RR} U (y; s) < +\infty .
\]
Otherwise, $\min$ and $\max$ must be changed to $\inf$ and $\sup$, respectively, in
these formulae.
Further properties of $U (y; s)$ given by the implicit equation \eqref{eq:Uim} on
$(y_- (s), y_+ (s))$ are as follows:
\noindent $\bullet$ If $y_+ (s) = +\infty$ and $y_- (s) = -\infty$, then $U (y; s)$
increases strictly monotonically for all $y \in \RR$.
\noindent $\bullet$ If $y_- (s) = -\infty$ and $y_+ (s) < +\infty$, then $U (y; s)$
attains its maximum at $y = y_+ (s)$ and decreases monotonically away from this
point in both directions of the $y$-axis.
\noindent $\bullet$ If $y_- (s) > -\infty$ and $y_+ (s) = +\infty$, then $U (y; s)$
attains its minimum at $y = y_- (s)$ and increases monotonically away from this
point in both directions of the $y$-axis.
\noindent $\bullet$ If both $y_+ (s)$ and $y_- (s)$ are finite, then $U (y; s)$ is
periodic; it attains one of its minima at $y = y_- (s)$ and one of its maxima at $y
= y_+ (s)$. Moreover, $U (y; s)$ increases strictly monotonically on $[y_- (s) , \,
y_+ (s)]$.
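The description above rests on the first integral $(U')^2 + 2 \, \Omega (U) = s^2$ of equation \eqref{eq:U} with data \eqref{eq:cd}, which underlies the implicit formula \eqref{eq:Uim}. As an illustrative numerical aside (not part of the argument; the sample vorticity $\omega (u) = 1 - 2 u$, hence $\Omega (u) = u - u^2$, is our own choice), one can verify that this quantity is conserved along a Runge--Kutta trajectory:

```python
# Illustrative check (not from the paper): U'' + omega(U) = 0 with U(0) = 0,
# U'(0) = s conserves (U')^2 + 2*Omega(U) = s^2.  Sample choice:
# omega(u) = 1 - 2*u, so Omega(u) = u - u^2.

def omega(u):
    return 1.0 - 2.0 * u

def Omega(u):
    return u - u * u

def rhs(u, v):
    # first-order system for U'' = -omega(U)
    return v, -omega(u)

def rk4_energy_drift(s, y_end=2.0, n=2000):
    """Integrate from (U, U') = (0, s) by classical RK4 and return the
    maximal drift of (U')^2 + 2*Omega(U) from its initial value s^2."""
    h = y_end / n
    u, v = 0.0, s
    drift = 0.0
    for _ in range(n):
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * h * k1u, v + 0.5 * h * k1v)
        k3u, k3v = rhs(u + 0.5 * h * k2u, v + 0.5 * h * k2v)
        k4u, k4v = rhs(u + h * k3u, v + h * k3v)
        u += h * (k1u + 2.0 * k2u + 2.0 * k3u + k4u) / 6.0
        v += h * (k1v + 2.0 * k2v + 2.0 * k3v + k4v) / 6.0
        drift = max(drift, abs(v * v + 2.0 * Omega(u) - s * s))
    return drift
```

The drift stays at the level of the integrator's truncation error, in agreement with the exactness of the first integral.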
Second, we consider the problem
\begin{equation}
u'' + \omega (u) = 0 \ \ \mbox{on} \ (0, h) , \quad u (0) = 0 , \ \ u (h) = 1 ,
\label{eq:u}
\end{equation}
which involves one-dimensional versions of relations
\eqref{eq:lapp}--\eqref{eq:kcp}. It is clear that formula \eqref{eq:Uim} defines a
solution of problem \eqref{eq:u} on the interval $(0, h (s))$, where
\begin{equation}
h (s) = \int_0^1 \frac{\D \tau}{\sqrt{s^2 - 2 \, \Omega (\tau)}} \quad \mbox{and} \
s > s_0 = \max_{\tau \in [0, 1]} \sqrt{2 \, \Omega (\tau)} .
\label{eq:d}
\end{equation}
Furthermore, all monotonic solutions of problem \eqref{eq:u} have the form
\eqref{eq:Uim} on the interval $(0, h)$. This remains valid for $s = s_0$ with
\begin{equation}
h = h_0 = \int_0^1 \frac{\D \tau}{\sqrt{s^2_0 - 2 \, \Omega (\tau)}} < \infty ,
\quad \mbox{that is}, \ h_0 = \lim_{s \to s_0} h (s) .
\label{eq:h_0}
\end{equation}
It is clear that $h_0 = +\infty$ when $\omega$ satisfies conditions (i) and $h_0$ is
finite otherwise; moreover, $h (s)$ is a strictly monotonically decreasing function
with values in $(0, h_0]$.
Furthermore, the pair $(\psi, \eta)$ with $\psi = u (y) = U (y; s)$ and $\eta = h
(s)$ satisfies problem \eqref{eq:lapp}--\eqref{eq:bep} provided $s$ satisfies the
equation
\begin{equation}
{\cal R} (s) = r \, , \quad \mbox{where} \ {\cal R} (s) = [ s^2 - 2 \, \Omega (1) +
2 \, h (s) ] / 3 \, .
\label{eq:calR}
\end{equation}
This function has only one minimum, say $r_c > 0$, attained at some $s_c > s_0$.
Hence if $r \in (r_c, r_0)$, where
\[ r_0 = \lim_{s \to s_0 + 0} {\cal R} (s) = \frac{1}{3} \left[ s^2_0 -
2 \, \Omega (1) + 2 \, h_0 \right] ,
\]
then equation \eqref{eq:calR} has two solutions $s_+$ and $s_-$ such that $s_0 < s_+
< s_c < s_-$. By substituting $s_+$ and $s_-$ into \eqref{eq:Uim} and \eqref{eq:d},
one obtains the so-called {\it stream solutions} $(u_+, H_+)$ and $(u_-, H_-)$,
respectively. Indeed, these solutions satisfy the Bernoulli equation $[u'_\pm (H_\pm)]^2
+ 2 H_\pm = 3 r$ along with relations \eqref{eq:u}.
It should be mentioned that $s_-$ and the corresponding $H_-$ exist for all values
of $r$ greater than $r_c$, whereas $s_+$ and $H_+$ exist only when $r$ is less than
or equal to $r_0$; if $r = r_0$, then $s_+ = s_0$.
\subsubsection{Solutions with a single minimum}
Let $\omega$ satisfy conditions (ii), then $h_0 < \infty$ and $s_0 = 0$. Without
loss of generality, we consider $\omega$ as extended to $(-\infty, 0)$, and so by
$y^>$ we denote the largest zero of $\omega$ on $(-\infty, 0)$ and set $y^> =
-\infty$ when $\omega$ is negative throughout $(-\infty, 0)$. Putting $s^> = \sqrt{2
\, \Omega (y^>)}$, we see that the function $\tau_- (s)$ attains finite values on
$[0, s^>)$ and is continuous there; moreover, $y_- (s) \to -\infty$ as $s \to s^>$
provided $y^>$ is finite.
What was said in \S\,1.2 implies that for $s \in [0, s^>)$ the Cauchy problem with
data \eqref{eq:cd} has a solution such that
\[ U' (y_- (s); s) = 0 \quad \mbox{and} \quad U (2 y_- (s); s) = 0 .
\]
Now, putting
\begin{equation}
h_- (s) = h (s) - 2 y_- (s) ,
\label{eq:h_-}
\end{equation}
we see that the function
\begin{equation}
u_- (y; s) = U (y + 2 y_- (s); s)
\label{eq:u_-}
\end{equation}
solves problem \eqref{eq:u} on the interval $(0, h_- (s))$. Moreover, if $s$ is
determined from equation \eqref{eq:calR} with $r > r_0$, then $u_- (y; s)$ describes
a shear flow that has a counter-current near the bottom because $u_- (y; s)$ has a
single minimum at $y = - y_- (s)$.
For $\omega \equiv -2$ the solution with Cauchy data \eqref{eq:cd} and formula \eqref{eq:u_-} take the form
\[ U (y; s) = y^2 + s y \quad \mbox{and} \quad u_- (y; s) = y^2 - s y
\]
respectively, whereas according to formula \eqref{eq:h_-} we have $h_- (s) = (s +
\sqrt{s^2 + 4}) / 2$ which is greater than $h_0 = 1$ for all $s > 0$; these
quantities are illustrated for $s=1$ in Figure~1.
\begin{figure}[t!]\centering
\SetLabels
\L (0.9*0.2) $y$\\
\endSetLabels
\leavevmode\AffixLabels{\includegraphics[width=80mm]{pict1}}
\caption{$U (y; 1) = y^2 + y$ is plotted for $y \in (-2, 1)$ and $u_- (y; 1) = y^2 -
y$ is plotted for $y \in (0, 1.618)$ because $h_- (1) = (1 + \sqrt{5}) / 2 \approx
1.618$.}
\label{fig:1}
\end{figure}
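These explicit expressions are easy to verify by machine. As an illustrative check (not part of the argument), the following snippet confirms the boundary relation $u_- (h_- (s); s) = 1$ for $h_- (s) = (s + \sqrt{s^2 + 4})/2$ and reproduces $h (1) = (\sqrt{5} - 1)/2$ (our own evaluation of the integral in \eqref{eq:d} for $\omega \equiv -2$) by quadrature:

```python
import math

def h_quad(s, n=20000):
    # midpoint rule for h(s) = int_0^1 dtau / sqrt(s^2 + 4*tau)   (omega == -2)
    dt = 1.0 / n
    return sum(dt / math.sqrt(s * s + 4.0 * (k + 0.5) * dt) for k in range(n))

def h_minus(s):
    # explicit depth h_-(s) = (s + sqrt(s^2 + 4)) / 2 from the text
    return (s + math.sqrt(s * s + 4.0)) / 2.0

# u_-(y; s) = y^2 - s*y must equal 1 at y = h_-(s), i.e. h*(h - s) = 1
bc_errs = [abs(h_minus(s) * (h_minus(s) - s) - 1.0) for s in (0.5, 1.0, 2.0)]
quad_err = abs(h_quad(1.0) - (math.sqrt(5.0) - 1.0) / 2.0)
```

For $s = 1$ this recovers the golden-ratio value $h_- (1) = (1 + \sqrt{5})/2$ shown in Figure~1.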
\subsubsection{Solutions with a single maximum}
Let $\omega$ satisfy conditions (iii), and so $s_0 = \sqrt{2 \, \Omega (1)}$ and $h_0 <
\infty$. Without loss of generality, we consider $\omega$ as extended to $(1,
+\infty)$, and so by $y^{<}$ we denote the least zero of $\omega$ on $(1, +\infty)$
and set $y^{<} = +\infty$ when $\omega$ is positive throughout $(1, +\infty)$.
Putting $s^{<} = \sqrt{2 \, \Omega (y^{<})}$, we see that the function $\tau_+ (s)$
attains finite values on $[s_0, s^{<})$ and is continuous there. It should be noted
that $y_+ (s) \to +\infty$ as $s \to s^{<}$ provided $y^<$ is finite.
What was said in \S\,1.2 implies that for $s \in [s_0, s^{<})$ the Cauchy problem
with data \eqref{eq:cd} has the solution for which
\[ U' (y_+ (s); s) = 0 \quad \mbox{and} \quad U (h (s) + 2 [y_+ (s) - h (s)]; s)
= 1 .
\]
Let us put
\begin{equation}
h_+ (s) = h (s) + 2 [y_+ (s) - h (s)] ,
\label{eq:h_+}
\end{equation}
then the function
\begin{equation}
u_+ (y; s) = U (y; s)
\label{eq:u_+}
\end{equation}
solves problem \eqref{eq:u} on the interval $(0, h_+ (s))$. Moreover, if $s$ is
found from equation \eqref{eq:calR} with $r > r_0$, then $u_+ (y; s)$ describes a
shear flow that has a counter-current near the free surface because $u_+ (y; s)$ has
a single maximum at $y = y_+ (s)$.
For $\omega \equiv 2$ both the solution with Cauchy data \eqref{eq:cd} and formula
\eqref{eq:u_+} give $U (y; s) = u_+ (y; s) = -y^2 + s y$, whereas according to formula
\eqref{eq:h_+} we have $h_+ (s) = (s + \sqrt{s^2 - 4}) / 2$ which is greater than $h_0
= 1$ for all $s > s_0 = 2$; these quantities are illustrated for $s=3$ in Figure~2.
\begin{figure}[t!]\centering
\SetLabels
\L (0.88*0.48) $y$\\
\endSetLabels
\leavevmode\AffixLabels{\includegraphics[width=45mm]{pict2}}
\caption{$U (y; 3) = -y^2 + 3y$ is plotted for $y \in (-1/2, 7/2)$, and it coincides
with $u_+ (y; 3)$ for $y \in (0, 2.618)$ (plotted bold) because $h_+ (3) = (3 +
\sqrt{5}) / 2 \approx 2.618$.}
\label{fig:2}
\end{figure}
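Analogously to the previous example, the Figure~2 data admit a quick numerical cross-check (an illustrative aside only): $u_+ (h_+ (3); 3) = 1$ for $h_+ (3) = (3 + \sqrt{5})/2$, and quadrature of the integral in \eqref{eq:d} with $\omega \equiv 2$ gives $h (3) = (3 - \sqrt{5})/2$ (our own evaluation):

```python
import math

def h_quad(s, n=20000):
    # midpoint rule for h(s) = int_0^1 dtau / sqrt(s^2 - 4*tau)   (omega == 2)
    dt = 1.0 / n
    return sum(dt / math.sqrt(s * s - 4.0 * (k + 0.5) * dt) for k in range(n))

h_plus_3 = (3.0 + math.sqrt(5.0)) / 2.0               # value from Figure 2
bc_err = abs(-h_plus_3 ** 2 + 3.0 * h_plus_3 - 1.0)   # u_+(h_+(3); 3) = 1
quad_err = abs(h_quad(3.0) - (3.0 - math.sqrt(5.0)) / 2.0)
```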
\subsection{Main results}
Bounds for non-stream solutions of problem \eqref{eq:lapp}--\eqref{eq:bep} are
formulated in terms of solutions to problem \eqref{eq:u} and the characteristics
$r_c$, $H_-$ and $H_+$. As in the irrotational case (see \cite{KK0}), the depths
$H_-$ and $H_+$ of properly chosen supercritical and subcritical shear flows
respectively (they are also referred to as conjugate streams) serve as bounds for
$\hat{\eta}$ and $\check{\eta}$.
We begin with the following two theorems generalising Theorems~3.2 and 3.4 in
\cite{KK4}. The first of them asserts, in particular, that all free-surface profiles
subject to reasonable assumptions are located above the supercritical level.
Moreover, these theorems provide bounds for $\check \eta$ (the upper one), $\hat
\eta$ (the lower one) and $r$, which cannot be less than the critical value $r_c$.
\vspace{2mm}
\noindent {\bf Theorem 1.} {\it Let problem \eqref{eq:lapp}--\eqref{eq:bep} have a
non-stream solution $(\psi, \eta)$ such that $\psi \leq 1$ on $\bar D$. Then the
following two assertions are true.}
1. {\it If $\check \eta < h_0$, then $\psi (x, y) < \check U (y; \check s)$ in $\RR
\times (0, \check \eta)$ and $\check U (y; \check s)$ is given by formula
\eqref{eq:Uim} with $s = \check s$; here $\check s > s_0$ is such that $h (\check s)
= \check \eta$. Besides,} (A) $r \geq r_c$, (B) $H_- < \eta (x)$ {\it for all $x \in
\RR$, and if $r \leq r_0$, then also} (C) $\check \eta \leq H_+$. {\it Moreover,
the inequalities for $r$ and $\check \eta$ are strict provided the latter value is
attained at some point on the free surface.}
2. {\it Relations} (A)--(C) {\it are true when $\check \eta = h_0 \neq +\infty$.}
\vspace{2mm}
\noindent {\bf Theorem 2.} {\it Let problem \eqref{eq:lapp}--\eqref{eq:bep} have a
non-stream solution $\left( \psi, \eta \right)$ such that $\psi \geq 0$ on $\bar D$.
Then the following two assertions are true.}
1. {\it If $\hat \eta < h_0$, then $\psi (x, y) > \hat U (y; \hat s)$ in $D$ and
$\hat U (y; \hat s)$ is given by formula \eqref{eq:Uim} with $s = \hat s$; here
$\hat s > s_0$ is such that $h (\hat s) = \hat \eta$. Moreover, the inequality
$\hat \eta \geq H_+$ holds provided $r \leq r_0$ and is strict when $\hat \eta$ is
attained at some point on the free surface.}
2. {\it The inequality $\hat \eta \geq H_+$ holds when $\hat \eta = h_0 \neq
+\infty$.}
\vspace{2mm}
In the next theorem that generalises Theorem~3.6 in \cite{KK4}, $\omega$ satisfies
conditions (ii) and the inequality satisfied by $\check{\eta}$ in Theorem~1 is
violated. In this case, an upper bound for $\psi$ is formulated in terms of the
family $(u_- (y; s), h_-(s))$ whose components depend on the parameter $s > s_0 = 0$
according to formulae \eqref{eq:u_-} and \eqref{eq:h_-} respectively. It is
essential that $u_-' (y; s)$ changes its sign on $(0, h_- (s))$ being negative near
the bottom, and so the flow described by $\psi$ also has a counter-current near the
bottom.
\vspace{2mm}
\noindent {\bf Theorem 3.} {\it Let $\omega$ satisfy conditions {\rm (ii)}. If
problem \eqref{eq:lapp}--\eqref{eq:bep} has a non-stream solu\-tion $(\psi, \eta)$
satisfying the inequalities $\psi \leq 1$ in $\bar D$ and $h_0 < \check \eta$, then
there exists a small $s_* > 0$ such that $h_0 < h_- (s_*) < \check \eta$ and $\psi
(x, y) < u_- (y; s_*)$ in $\RR \times (0, h_- (s_*))$.}
\vspace{2mm}
When $\omega$ satisfies conditions (iii) and the inequality imposed on $\hat \eta$
in Theorem~2 is violated we give a lower bound for a non-negative stream function
$\psi$ provided an extra condition is fulfilled for $\Omega$. This bound involves a
function characterised by formula \eqref{eq:Uim}. It is essential that the
derivative of this function changes its sign being negative near the free surface,
and so the flow described by $\psi$ also has a counter-current there.
\vspace{2mm}
\noindent {\bf Theorem 4.} {\it Let $\omega$ satisfy conditions {\rm (iii)} and
$\Omega (1) > 0$. If problem \eqref{eq:lapp}--\eqref{eq:bep} has a non-stream
solution $(\psi, \eta)$ for which $\psi \geq 0$ in $\bar D$ and $h_0 < \check \eta$,
then there exists $s^* > s_0$ such that $s^* - s_0$ is small, $h_0 < h_+ (s^*) < \check
\eta$ and the inequality $\psi (x, y) > u_+ (y; s^*)$ holds in $D$. Here $h_+ (s^*)$
and $u_+ (y; s^*)$ are given by formulae \eqref{eq:h_+} and \eqref{eq:u_+}
respectively.}
\vspace{2mm}
The last theorem generalises Theorem~3.8 in \cite{KK4}.
\section{Two lemmas}
In Lemmas 1 and 2, we analyse the asymptotic behaviour as $s \to s_0$ for the
functions defined by formulae \eqref{eq:h_-} and \eqref{eq:h_+} respectively. Here
and below one dot on top denotes the first derivative with respect to $s$.
\vspace{2mm}
\noindent {\bf Lemma 1.} {\it Let $\omega$ satisfy conditions {\rm (ii)}. Then the
following asymptotic formula holds
\begin{equation}\label{L1a}
h_- (s) = h (0) - \frac{s}{\omega (0)} + O (s^2) \quad \mbox{as} \ s \to 0 ,
\end{equation}
and so $\dot h_- (0) > 0$. Moreover, if the function $h_-$ strictly increases on
$[0, s^{(0)}]$ for some $s^{(0)}$, then $u_- (y; s_1) > u_- (y; s_2)$ for all $y \in
(0, h_- (s_1)]$, where $s_1$ and $s_2$ are such that $0 \leq s_1 < s_2 \leq
s^{(0)}$.}
\vspace{2mm}
\noindent {\it Proof.} Let $a = \omega (0)$, and let the new variable $x$ be defined
by the equality $a x = 2 \, \Omega (\tau)$ for small $\tau$. Hence $x = s^2/a$
corresponds to $\tau = \tau_- (s)$, and the change of variable gives
\[ y_-(s) = \int_0^{\tau_-} \frac{\D \tau}{\sqrt{s^2 - 2 \, \Omega (\tau)}} =
\int_0^{s^2/a} \frac{a \, \D x}{2 \, \omega (\tau (x)) \, \sqrt{s^2 - ax}} \, .
\]
Since $\omega (\tau (x)) = a + O (s^2)$ for small $s$ and $x$, we see that
\begin{equation}\label{L1b}
y_- (s) = \frac{s}{a} + O (s^3) \quad \mbox{as} \ s \to 0 .
\end{equation}
Furthermore, for sufficiently small $b$ we have
\[ h(s) = h (0) + \int_0^b \Bigg[ \frac{\D \tau}{\sqrt{s^2 - 2 \, \Omega (\tau)}} -
\frac{\D \tau}{\sqrt{-2\Omega (\tau)}} \Bigg] + O (s^2) \quad \mbox{as} \ s \to 0 .
\]
Using the same change of variable $a x = 2 \, \Omega (\tau)$, we write this as
follows:
\[ h (s) = h (0) + \int_0^{\tilde{b}} \frac{a}{2 \, \omega (\tau(x))} \Bigg[
\frac{\D x}{\sqrt{s^2 - a x}} - \frac{\D x}{\sqrt{-a x}} \Bigg] + O (s^2) \quad
\mbox{as} \ s \to 0 ,
\]
where $a \tilde{b} = 2 \, \Omega (b)$. Using again $\omega (\tau(x)) = a + O (x)$,
we see that
\begin{equation*}
h (s) = h (0) + \frac{1}{2} \int_0^{\tilde{b}} \Bigg[ \frac{\D x}{\sqrt{s^2 - a
x}} - \frac{\D x}{\sqrt{-a x}} \Bigg] + O (s^2) = h (0) + \frac{s}{a} + O (s^2)
\quad \mbox{as} \ s \to 0 .
\end{equation*}
Since $h_- (s)$ is defined by formula (\ref{eq:h_-}), combining the last asymptotics
and (\ref{L1b}), we arrive at the required formula \eqref{L1a}.
To prove the second assertion we assume the contrary, that is,
\[ u_- (y^*; s_1) \leq u_- (y^*; s_2) \quad \mbox{for some} \ y^* \in (0, h_- (s_1)] .
\]
Diminishing $s_1$ and observing that $u_- (y; 0) > u_- (y; s_2)$ when $y \in (0, h_-
(0)]$, we conclude that there exist $\tilde s \in (0, s_1)$ and $\tilde y$ such
that $u_- (\tilde y; \tilde s) = u_- (\tilde y; s_2)$ and $u_- (y; \tilde s) \geq
u_- (y; s_2)$ on $(0, h_- (\tilde s)]$, which is impossible in view of the maximum
principle for non-negative functions.
The proof is complete.
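Lemma~1 can be sanity-checked against the explicit example $\omega \equiv -2$ of Figure~1, where $h_- (s) = (s + \sqrt{s^2 + 4})/2$, $h (0) = 1$ and $\omega (0) = -2$: the remainder $h_- (s) - h (0) + s / \omega (0)$ should be $O (s^2)$. The snippet below (an illustrative aside only) shows that the ratio of the remainder to $s^2$ stabilises near $1/8$, which is the next Taylor coefficient in this example (our own expansion):

```python
import math

def h_minus(s):
    # explicit h_-(s) for omega == -2 (the example of Figure 1)
    return (s + math.sqrt(s * s + 4.0)) / 2.0

# remainder of h_-(s) = h(0) - s/omega(0) + O(s^2), with h(0) = 1, omega(0) = -2
ratios = [(h_minus(s) - 1.0 - s / 2.0) / (s * s) for s in (0.1, 0.05, 0.025)]
```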
\vspace{2mm}
\noindent {\bf Lemma 2.} {\it Let $\omega$ satisfy conditions {\rm (iii)}. Then the
following asymptotic formula holds}
\begin{equation}\label{L2a}
h_+ (s) = h (s_0) + \frac{\sqrt{s^2 - s_0^2}}{\omega (1)} + O (s^2 - s_0^2) \quad
\mbox{as} \ s \to s_0 .
\end{equation}
\vspace{2mm}
\noindent {\it Proof.} According to formula \eqref{eq:h_+}, we have to evaluate
\[ y_+ (s) - h (s) = \int_1^{\tau_+ (s)} \frac{\D \tau}{\sqrt{s^2 -2 \, \Omega (\tau)}}
\]
when $s$ is close to $s_0$. By changing variable to $x = 2 \, [\Omega (\tau) -
\Omega (1)] / b$, where $b = \omega (1)$ is positive, we obtain
\[ y_+ (s) - h (s) = \int_0^{x_+} \frac{b \, \D x}{2 \, \omega (\tau(x)) \sqrt{s^2
- s_0^2 - b \, x}} = \frac{\sqrt{s^2-s_0^2}}{2b} \int_0^1 \frac{\D y}{\sqrt{1-y}} +
O (s^2-s_0^2)
\]
as $s \to s_0$, where
\[ x_+ = \frac{s^2-s_0^2}{b} = \frac{2}{b} \int_1^{\tau_+ (s)} \omega(t) \, \D t .
\]
This gives that
\begin{equation}\label{L2b}
y_+ (s) - h (s) = \frac{\sqrt{s^2-s_0^2}}{b} + O (s^2-s_0^2) \quad \mbox{as} \ s \to
s_0 .
\end{equation}
Furthermore, in the difference
\[ h (s_0) - h (s) = \int_0^1 \frac{\D \tau}{\sqrt{2 \int_\tau^1 \omega (t) \, \D t}}
- \int_0^1 \frac{\D \tau}{\sqrt{s^2 - s_0^2 + 2 \int_\tau^1 \omega (t) \, \D t}}
\]
we change the variable $\tau$ to
\[ y = \frac{2}{s^2-s_0^2} \int_\tau^1 \omega (t) \, \D t ,
\]
thus obtaining
\[ h (s_0) - h (s) = \frac{\sqrt{s^2-s_0^2}}{2b} \int_0^{s_0^2/(s^2-s_0^2)}
\Bigg[ \frac{1}{\sqrt{y}} - \frac{1}{\sqrt{1+y}} \Bigg] \D y + O (s^2-s_0^2)
\]
as $s \to s_0$. Therefore,
\begin{equation*}
h (s_0) - h (s) = \frac{\sqrt{s^2-s_0^2}}{b}+O(s^2-s_0^2).
\end{equation*}
This formula and (\ref{L2b}) imply (\ref{L2a}), which completes the proof.
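In the explicit setting $\omega \equiv 2$ of Figure~2 one has $h_+ (s) = (s + \sqrt{s^2 - 4})/2$, $s_0 = 2$, $h (s_0) = 1$ and $\omega (1) = 2$, and one can check numerically that $h_+ (s) - h (s_0) - \sqrt{s^2 - s_0^2} / \omega (1) = O (s^2 - s_0^2)$ as $s \to s_0$. The snippet below (an illustrative aside only) shows the corresponding ratio staying bounded near $1/8$:

```python
import math

def h_plus(s):
    # explicit h_+(s) for omega == 2 (the example of Figure 2)
    return (s + math.sqrt(s * s - 4.0)) / 2.0

# ratio of the remainder to (s^2 - s0^2), with s0 = 2, h(s0) = 1, omega(1) = 2
ratios = [(h_plus(s) - 1.0 - math.sqrt(s * s - 4.0) / 2.0) / (s * s - 4.0)
          for s in (2.1, 2.05, 2.025)]
```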
\section{Proof of main results}
Without loss of generality, we suppose that the vorticity distribution $\omega (t)$
is prescribed only on the range of $\psi$. Taking into account how Theorems 1--4 are
formulated, this range belongs to the half-axis $t \leq 1$ ($t \geq 0$) in
Theorems~1 and 3 (Theorems~2 and 4 respectively).
\subsection{Proof of Theorem 1}
First, let us consider the case when $\check \eta < h_0$, and so there exists
$\check s > s_0$ such that $h (\check s) = \check \eta$. Thus, the function $\check
U (y; \check s)$ solves problem \eqref{eq:u} on $(0, \check \eta)$. Moreover, this
formula defines $\check U (y; \check s)$ for all $y \geq 0$ provided $\omega (t)$ is
extended to $t > 1$ as a Lipschitz function for which the inequality $\check s^2 > 2
\max_{\tau \geq 0} \Omega (\tau)$ holds. For this purpose it is sufficient to extend
$\omega$ as a linear function to a small interval on the right of $t = 1$ and to put
$\omega \equiv 0$ farther right. Then we have
\[ \check U' (y; \check s) = \sqrt{\check s^2 - 2 \, \Omega (\check U (y; \check s))}
> 0 \quad \mbox{for all} \ y \geq 0 ,
\]
and so $\check U (y; \check s)$ is a monotonically increasing function of $y$ and
$\check U (y; \check s) > 1$ for $y > h_0$.
Let $U_\ell (y) = \check U (y + \ell; \check s)$ for $\ell \geq 0$, and so $U_\ell
(y) > 1$ on $[0, \check \eta]$ when $\ell > \check \eta$. Then $U_\ell - \psi > 0$
on $\RR \times [0, \check \eta]$ for $\ell > \check \eta$. Let us show that there is no
$\ell_0 > 0$ such that
\begin{equation}
\inf_{\RR \times [0, \check \eta]} (U_{\ell_0} - \psi) = 0 . \label{inf}
\end{equation}
Assuming that such $\ell_0$ exists, we see that $U_{\ell_0} - \psi$ attains values
separated from zero on both sides of the strip $\RR \times [0, \check \eta]$. Since
$\psi$ satisfies conditions \eqref{bound}, there exist positive $\epsilon$ and $\delta$
such that
\begin{equation}
U_{\ell_0} (y) - \psi (x,y) \geq \epsilon \quad \mbox{when} \ (x,y) \in (\RR \times
[0, \delta]) \cup (\RR \times [\check \eta - \delta, \check \eta]) .
\label{e_d}
\end{equation}
Therefore, \eqref{inf} holds when either $U_{\ell_0} (y_0) - \psi (x_0, y_0) = 0$
for some $(x_0, y_0) \in \RR \times (0, \check \eta)$ or there exists a sequence
$\{(x_k, y_k)\}_{k=1}^\infty \subset \RR \times (\delta, \check \eta - \delta)$ such
that
\begin{equation}
U_{\ell_0} (y_k) - \psi (x_k, y_k) \to 0 \quad \mbox{as} \ k \to \infty .
\label{seq}
\end{equation}
The first of these options is impossible. Indeed, the elliptic equation
\begin{equation}
\nabla^2 (U_{\ell_0} - \psi) + (U_{\ell_0} - \psi) \int_0^1 \omega' (t [U_{\ell_0} -
\psi]) \, \D t = 0 \quad \mbox{holds in} \ \RR \times (0, \check \eta) .
\label{ell}
\end{equation}
Then the maximum principle (see \cite{GNN}, p.~212) guarantees that the non-negative
function $U_{\ell_0} - \psi$ cannot vanish at $(x_0, y_0) \in \RR \times (0, \check
\eta)$ because otherwise $\psi$ must coincide with $U_{\ell_0}$ everywhere.
In order to show that the second option \eqref{seq} is also impossible we apply
Harnack's inequality (see Corollary~8.21 in \cite{GT}) to the last equation (indeed,
$U_{\ell_0} - \psi$ is positive in $\RR \times [0, \check \eta]$). Therefore, there
exists $C > 0$ such that
\[ \sup_{(x,y) \in B_\rho (x_k, \check \eta/2)} \big[ U_{\ell_0} (y)- \psi (x,y) \big]
\leq C \inf_{(x,y) \in B_\rho (x_k, \check \eta/2)} \big[ U_{\ell_0} (y) - \psi
(x,y) \big]
\]
in every circle $B_\rho (x_k, \check \eta/2)$, $k=1,2,\dots$, with $\rho = (\check
\eta - \delta) / 2$. Then \eqref{seq} yields that the supremum on the left-hand
side is arbitrarily small provided $k$ is sufficiently large, but this is
incompatible with \eqref{e_d}.
The obtained contradictions show that there is no $\ell_0 > 0$ such that \eqref{inf}
is true. Letting $\ell_0 \to 0$, we see that $\check U (y; \check s) - \psi (x,y)$
is non-negative on $\RR \times [0, \check \eta]$ and vanishes when $y = 0$. Since
this function satisfies equation \eqref{ell} with $\ell_0 = 0$, the maximum
principle implies that it does not vanish at inner points of the strip $\RR \times
(0, \check \eta)$, and so $\check U (y; \check s) - \psi (x, y) > 0$ throughout this
strip. Thus, the first inequality of assertion~1 is proved.
Let us show that relations (A)--(C) hold. First, we suppose that there exists $x_0
\in \RR$ such that $\eta (x_0) = \check \eta$ (it is clear that $y = \eta (x)$ is
tangent to $y = \check \eta$ at $x_0$). Then $\check U (\check \eta; \check s) - \psi
(x_0, \check \eta) = 0$ because both terms are equal to one at this point. Now, it
follows from Hopf's lemma (see \cite{GNN}, p.~212) that
\[ \big[ \check U' (y; \check s) - \psi_y (x,y) \big]_{(x,y)=(x_0,\check \eta)} < 0 \, .
\]
Taking into account Bernoulli's equation at $(x_0, \check \eta)$, that is, $\psi_y
(x_0, \check \eta) = \sqrt{3 r - 2 \check \eta}$, we obtain that $\check U' (\check
\eta; \check s) < \sqrt{3 r - 2 \check \eta}$ which is equivalent to
\[ \check s^2 - 2 \Omega (1) < 3 r - 2 \check \eta , \quad \mbox{and so} \quad
{\cal R} (\check s) < r \ \mbox{according to \eqref{eq:calR}}.
\]
The last inequality yields that relations (A)--(C) of assertion~1 are true and
inequalities (A) and (C) are strict in this case.
In the alternative case, we have that $\eta (x) > \check \eta$ for all $x \in \RR$
and there exists a sequence
\[ \{ x_k \}_{k=1}^\infty \subset \RR \quad \mbox{such that} \ \eta (x_k) \to \check
\eta \ \mbox{as} \ k \to \infty .
\]
Let us put $\check u (x,y) = \check U (y; \check s) - \psi (x, y)$ and consider the
behaviour of $\check u (x_k, \check \eta)$ and the derivatives of $\check u$ at
$(x_k, \check \eta)$ as $k \to \infty$.
Since $\check u (x_k, \check \eta) = \psi (x_k, \eta (x_k)) - \psi (x_k, \check
\eta)$, we see that this difference, say $\epsilon_k \geq 0$, tends to zero as $\ k
\to \infty$. Moreover,
\begin{equation}
\epsilon_k \quad \mbox{and} \quad \eta (x_k) - \check \eta \quad \mbox{tend to zero
as} \ k \to \infty .
\label{eq:11}
\end{equation}
Let us prove that $\check u_x (x_k, \check \eta)$ also tends to zero as $k \to
\infty$. Indeed, we have for $t > 0$:
\[ \check u (x_k \pm t, \check \eta) = \check u (x_k, \check \eta) \pm t \, \check u_x
(x_k, \check \eta) + \alpha (t) \quad \mbox{and} \ \alpha (t) = o (t) \ \mbox{as} \
t \to 0 .
\]
By rearrangement we obtain that $\pm \check u_x (x_k, \check \eta) = t^{-1} \left[
\check u (x_k \pm t, \check \eta) - \epsilon_k \right] + t^{-1} \alpha (t)$, where
the first term in the square brackets is non-negative. Therefore,
\[ \mp \check u_x (x_k, \check \eta) \leq t^{-1} \epsilon_k + t^{-1} \alpha (t) ,
\quad \mbox{and the last term is} \ o (1) \ \mbox{as} \ t \to 0 .
\]
Hence $|\check u_x (x_k, \check \eta)|$ is less than arbitrarily small $\delta > 0$
provided $k$ is sufficiently large. First, let $t$ be small enough to guarantee that
$t^{-1} \alpha (t) < \delta / 2$. Fixing this $t$, we use \eqref{eq:11} and take $k$
such that $t^{-1} \epsilon_k < \delta / 2$. This implies that $|\check u_x (x_k,
\check \eta)| < \delta$, thus completing the proof that $\check u_x (x_k, \check
\eta) \to 0$ as $k \to \infty$.
According to the definition of $\check u$, this means that $\psi_x (x_k, \check
\eta) \to 0$ as $k \to \infty$. The next step is to show that
\begin{equation}
\underset{k \to \infty}{\rm lim\,sup} \ \check u_y (x_k, \check \eta) \leq 0 \, .
\label{eq:12}
\end{equation}
For $t > 0$ we have
\[ \check u (x_k, \check \eta - t) = \check u (x_k, \check \eta) - t \, \check u_y
(x_k, \check \eta) + \beta (t) \quad \mbox{and} \ \beta (t) = o (t) \ \mbox{as} \
t \to 0 .
\]
In the same way as above we obtain that
\[ \check u_y (x_k, \check \eta) \leq t^{-1} \epsilon_k + t^{-1} \beta (t) ,
\quad \mbox{where the last term is} \ o (1) \ \mbox{as} \ t \to 0 .
\]
It is clear that this implies \eqref{eq:12}. Then, taking a subsequence if necessary
and using Bernoulli's equations for $\check U$ and $\psi$, we let $k \to \infty$
and conclude that
\begin{equation}
\check s^2 - 2 \, \Omega (1) \leq 3 r - 2 \, \check \eta . \label{check_s}
\end{equation}
By rearranging this inequality, all three relations (A)--(C) follow.
Let us turn to assertion 2 when $\check \eta = h_0 \neq +\infty$. By $\{ h_j
\}_1^\infty$ we denote a sequence such that the inequalities $h (s_c) < h_j < h_0$
hold for its elements (see the line next to \eqref{eq:calR} for the definition of
$s_c$), and $h_j \to h_0$ as $j \to \infty$. Then the function inverse to
\eqref{eq:d} gives $s_j$ for which $h (s_j) = h_j$, and so $s_j \to s_0$ as $j \to
\infty$. Moreover, we have the stream solution $(U (y; s_j), h_j)$ with the first
component defined on $[0, h_j]$ by formula \eqref{eq:Uim}. Thus, each function of
the sequence $u_j (x, y) = U (y; s_j) - \psi (x, y)$, $j=1,2,\dots$, is defined on
$\RR \times [0, h_j]$.
By the definition of $\check \eta$ there exists a sequence $\{ x_j \}_1^\infty
\subset \RR$ (it is possible that all its elements coincide) such that $\eta (x_j) -
h_j \to 0$ as $j \to \infty$. Then considerations similar to those above yield that
\[ \partial_x u_j (x_j, h_j) \to 0 \ \mbox{as} \ j \to \infty \quad \mbox{and} \quad
\underset{j \to \infty}{\rm lim\,sup} \ \partial_y u_j (x_j, h_j) \leq 0 ,
\]
which leads to the inequality
\[ s_0^2 - 2 \, \Omega (1) \leq 3 r - 2 \, \check \eta
\]
instead of \eqref{check_s}. Since $\check \eta = h_0$, this inequality gives all three
required assertions. The proof is complete.
\subsection{Proof of Theorem 2}
At its initial stage the proof of this theorem is similar to that of Theorem~1.
Namely, we consider the case when $\hat \eta < h_0$ first. Since there exists $\hat
s > s_0$ such that $h (\hat s) = \hat \eta$, the function $\hat U (y; \hat s)$
given by formula \eqref{eq:Uim} with $s = \hat s$ solves problem \eqref{eq:u} on
$(0, \hat \eta)$. Moreover, the same formula defines this function for all $y \leq
\hat \eta$ provided $\omega (t)$ is extended to $t < 0$ as a Lipschitz function for
which the inequality $\hat s^2 > 2 \max_{\tau \leq 1} \Omega (\tau)$ holds. As in the proof of
Theorem~1, it is sufficient to extend $\omega$ as a linear function to a small
interval on the left of $t = 0$ and to put $\omega \equiv 0$ farther left. Then we
have
\[ \hat U' (y; \hat s) = \sqrt{\hat s^2 - 2 \, \Omega (\hat U (y; \hat s))}
> 0 \quad \mbox{for all} \ y \leq \hat \eta ,
\]
and so $\hat U (y; \hat s)$ is a monotonically increasing function of $y$ and $\hat
U (y; \hat s) < 0$ for $y < 0$.
Let $U_\ell (y) = \hat U (y - \ell; \hat s)$ for $\ell \geq 0$, and so $U_\ell (y) <
0$ on $[0, \hat \eta]$ when $\ell > \hat \eta$. Then $U_\ell - \psi < 0$ on $\bar D$
for $\ell > \hat \eta$. Let us show that there is no $\ell_0 > 0$ such that
\begin{equation}
\sup_{\bar D} \, (U_{\ell_0} - \psi) = 0 . \label{sup}
\end{equation}
Assuming that such $\ell_0$ exists, we see that $U_{\ell_0} - \psi$ attains values
separated from zero on both sides of $\bar D$. Since $\psi$ satisfies conditions
\eqref{bound}, there exist positive $\epsilon$ and $\delta$ such that
\begin{equation}
\psi (x,y) - U_{\ell_0} (y) \geq \epsilon \quad \mbox{when} \ (x,y) \in (\RR \times
[0, \delta]) \cup \{ x \in \RR , y \in [\eta (x) - \delta, \eta (x)] \} .
\label{e_d'}
\end{equation}
Therefore, \eqref{sup} holds when either $U_{\ell_0} (y_0) - \psi (x_0, y_0) = 0$
for some $(x_0, y_0) \in \bar D$ or there exists a sequence $\{(x_k,
y_k)\}_{k=1}^\infty \subset \{ x \in \RR , y \in ( \delta, \eta (x) - \delta ) \}$
such that
\begin{equation}
U_{\ell_0} (y_k) - \psi (x_k, y_k) \to 0 \quad \mbox{as} \ k \to \infty .
\label{seq'}
\end{equation}
The same methods as in the proof of Theorem~1 demonstrate that either of these two
options leads to a contradiction, and so there is no $\ell_0 >
0$ such that \eqref{sup} is true. Letting $\ell_0 \to 0$, we see that $\hat U (y;
\hat s) - \psi (x,y)$ is non-positive on $\bar D$ and vanishes when $y = 0$. Since
this function satisfies equation \eqref{ell} with $\ell_0 = 0$, the maximum
principle implies that it does not vanish at inner points of the strip $D$, and so
$\hat U (y; \hat s) - \psi (x, y) < 0$ throughout this strip. Thus, the first
inequality of assertion~1 is proved.
To show the inequality $\hat \eta \geq H_+$ we argue by analogy with the proof of
Theorem~1. First, we suppose that there exists $x_0 \in \RR$ such that $\eta (x_0) =
\hat \eta$ (it is clear that $y = \eta (x)$ is tangent to $y = \hat \eta$ at $x_0$).
Then $\hat U (\hat \eta; \hat s) - \psi (x_0, \hat \eta) = 0$ because both terms
are equal to one at this point. Now, it follows from Hopf's lemma (see \cite{GNN},
p.~212) that
\[ \big[ \hat U' (y; \hat s) - \psi_y (x,y) \big]_{(x,y)=(x_0,\hat \eta)} > 0 \, .
\]
Taking into account Bernoulli's equation at $(x_0, \hat \eta)$, we obtain that
\begin{equation}
\hat U' (\hat \eta; \hat s) > \sqrt{3 r - 2 \hat \eta} \label{17oct}
\end{equation}
because $\psi_y (x_0, \hat \eta) \geq 0$. Indeed, $\psi (x_0, \hat \eta) = 1$ and
$\psi \leq \hat U$ in a neighbourhood of $(x_0, \hat \eta)$, whereas $\hat U \leq
1$. A consequence of \eqref{17oct} is that the required inequality holds and is
strict.
In the alternative case, that is, when $\eta (x) < \hat \eta$ for all $x \in \RR$
and there exists a sequence $\{ x_k \}_{k=1}^\infty \subset \RR$ such that $\eta
(x_k) \to \hat \eta$ as $k \to \infty$, the considerations applied for proving the
corresponding part of Theorem~1 must be used with necessary amendments, thus leading
to the required inequality in its non-strict form. The same considerations apply to
the proof of assertion~2. The proof is complete.
\subsection{Proof of Theorem 3}
Since $h_0 < \check \eta$, the function $\psi$ should be compared with a more
sophisticated family of test functions than $U_\ell (y) = \check U (y + \ell; \check
s)$, $\ell \geq 0$, used in the proof of Theorem~1. To define this family, say $v
(y; \lambda)$ depending on the parameter $\lambda \in [0, \Lambda]$ with $\Lambda$
to be described later, we, as in the proof of Theorem~1, extend $\omega (t)$ as a
linear function to a small interval on the right of $t = 1$ and put $\omega \equiv
0$ farther right. Thus, we obtain a Lipschitz function such that the inequality
$s_*^2 > 2 \max_{\tau \geq 0} \Omega (\tau)$ holds for $s_* > s_0 = 0$ which is so
small that $h_0 < h_- (s_*) < \check \eta$ and the function $u_- (y; s)$ is
well-defined for $s \in [0, s_*]$; see formulae \eqref{eq:h_-} and \eqref{eq:u_-}
for the definitions of $h_- (s)$ and $u_- (y; s)$ respectively.
Let us recall the properties of $u_- (y; s_*)$ essential for constructing the family
$v (y; \lambda)$ that depends on $(y; \lambda) \in [0, d_*] \times [0, \Lambda]$
continuously; $d_* = h_- (s_*)$. The function $u_- (y; s_*)$ solves problem
\eqref{eq:u} on the interval $(0, d_*)$; moreover, it attains its single negative
minimum at $- y_- (s_*)$ (see Figure~1). Furthermore, to apply considerations used in
the proof of Theorem~1 the following properties are required:
\vspace{1mm}
(I) $v'' + \omega (v) = 0$ on $(0, d_*)$ for $\lambda \in [0, \Lambda]$; (II) $v (y;
0) = u_- (y; s_*)$;
(III) $v (0; \lambda) > 0$ and $v (d_*; \lambda) > 1$ for $\lambda \in (0,
\Lambda)$; (IV) $v (y; \Lambda) > 1$ for all $y \in [0, d_*]$.
\vspace{1mm}
To construct $v (y; \lambda)$ we first consider $\lambda \in [0, s_*]$ and put
\begin{equation}
v (y; \lambda) = u_- (y - c \lambda; s_* - \lambda) \ \ \mbox{with} \ c \in (0, - [2
\, \omega (0)]^{-1}) \label{c} .
\end{equation}
Then properties (I), (II) and the first inequality (III) hold by the definition of
this function. In order to show that the second inequality is true we write
\begin{eqnarray*}
&& d_* = h_- (s_*) = h_- (s_* - \lambda) + \lambda h_-' (s_* - \lambda) + O
(\lambda^2) \nonumber \\ && \ \ \ \ = h_- (s_* - \lambda) - \lambda [\omega
(0)]^{-1} + O (s_* \lambda) \ \ \mbox{as} \ s_*, \lambda \to 0 .
\end{eqnarray*}
Here the second equality is a consequence of Lemma~1. Since $u_- (h_- (s_* -
\lambda) ; s_* - \lambda)$ is equal to one, we have
\begin{eqnarray*}
&& v (d_*; \lambda) = u_- ( h_- (s_* - \lambda) - \lambda [\omega (0)]^{-1} - c
\lambda + O (s_* \lambda) ; s_* - \lambda ) \\ && \ \ \ \ \ \ \ \ \ \ \ = 1 -
\lambda \{ [\omega (0)]^{-1} + c \} u_-' ( h_- (s_* - \lambda); s_* - \lambda ) + O
(s_* \lambda) \ \ \mbox{as} \ s_*, \lambda \to 0 .
\end{eqnarray*}
According to the definition of $c$, the expression in braces is negative, whereas
\[ u_-' ( h_- (s_* - \lambda); s_* - \lambda ) = \sqrt{s_*^2 - 2 \, \Omega (1)} > 0 . \]
Therefore, the second inequality (III) is true for $\lambda \in [0, s_*]$ provided
$s_*$ is sufficiently small.
The next step is to define $v (y; \lambda)$ for $\lambda \in [s_*, s_>]$ with $s_>$
such that $s_> - s_* > 0$ is small. Let $\lambda \in [s_*, s_>]$ and $V_\lambda (y)$
be given implicitly for $y \in [0, \infty)$ as follows:
\[ y = \int_{\lambda - s_*}^{V_\lambda} \frac{\D \tau}{\sqrt{2 \, [\Omega
(\lambda - s_*) - \Omega (\tau)]}} \, .
\]
Since $\omega$ satisfies conditions (ii) (in particular, $\omega (0) < 0$) and is
extended to the half-axis $\{ t \geq 1 \}$ so that the inequality $\Omega (\tau)
\leq \Omega (1) / 2$ holds for $\tau \geq 1$, we see that $V_\lambda (y)$
monotonically increases from $\lambda - s_*$ to infinity and $V_\lambda' (0) = 0$.
The last equality allows us to consider $V_\lambda (y)$ as an even function on the
whole real axis $\RR$.
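Indeed, differentiation of the implicit relation defining $V_\lambda$ shows that
\[ V_\lambda' (y) = \sqrt{2 \, [\Omega (V_\lambda (0)) - \Omega (V_\lambda (y))]} \, , \quad V_\lambda (0) = \lambda - s_* ,
\]
whence $V_\lambda'' + \omega (V_\lambda) = 0$ and $V_\lambda' (0) = 0$; thus property (I) holds for the functions constructed from $V_\lambda$.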
Putting $v (y; \lambda) = V_\lambda (y - c \, s_*)$ with the same $c$ as in formula
\eqref{c}, we obtain the function $v (y; \lambda)$ continuous for $(y; \lambda) \in
[0, d_*] \times [0, s_>]$ for which properties (I)--(III) hold; in particular, the
second inequality (III) for $\lambda \in [s_*, s_>]$ is a consequence of preceding
considerations and the assumption that $s_> - s_*$ is small.
Finally, let $\Lambda = d_* + s_>$ and $v (y; \lambda) = v (y + \lambda - s_>; s_>)$
for $\lambda \in [s_>, \Lambda]$. For these values of $\lambda$ the second
inequality (III) follows by monotonicity from the same inequality for $\lambda \in
[s_*, s_>]$. Moreover, property (IV) is also a consequence of monotonicity and the
second inequality (III).
Having the family $v (y; \lambda)$, we proceed in the same way as in the proof of
Theorem~1. In view of property (IV) we have that
\[ v (y; \Lambda) - \psi (x, y) > 0 \quad \mbox{for all} \ (x, y) \in \RR \times
[0, d_*] .
\]
Let us show that there is no $\lambda_0 \in (0, \Lambda)$ such that
\begin{equation}
\inf_{\RR \times [0, d_*]} \{ v (y; \lambda_0) - \psi (x, y) \} = 0 . \label{inf*}
\end{equation}
If such $\lambda_0$ exists, then inequalities (III) guarantee that $v (y; \lambda_0) -
\psi (x, y)$ attains values separated from zero on both sides of the strip $\RR
\times [0, d_*]$. Since $\psi$ satisfies conditions \eqref{bound}, there exist
positive $\epsilon$ and $\delta$ such that
\begin{equation*}
v (y; \lambda_0) - \psi (x,y) \geq \epsilon \quad \mbox{when} \ (x,y) \in (\RR
\times [0, \delta]) \cup (\RR \times [d_* - \delta, d_*]) .
\end{equation*}
Therefore, \eqref{inf*} holds when either
\[ v (y_0; \lambda_0) - \psi (x_0, y_0) = 0 \ \ \mbox{for some} \ (x_0, y_0) \in
\RR \times (0, d_*)
\]
or there exists a sequence $\{(x_k, y_k)\}_{k=1}^\infty \subset \RR \times (\delta,
d_* - \delta)$ such that
\begin{equation*}
v (y_k; \lambda_0) - \psi (x_k, y_k) \to 0 \quad \mbox{as} \ k \to \infty .
\end{equation*}
Literally repeating the considerations from the proof of Theorem 1, one demonstrates
that both these options are impossible; moreover, the limit function $v (y; 0) - \psi (x,y)$ as
$\lambda \to 0$ is non-negative on $\RR \times [0, d_*]$ and vanishes when $y = 0$.
We recall that $v (y; 0) = u_- (y; s_*)$ by property (II). Since this function
satisfies property (I), we obtain that
\[ \nabla^2 [u_- (\cdot; s_*) - \psi] + [u_- (\cdot; s_*) - \psi] \int_0^1 \omega'
(t [u_- (\cdot; s_*) - \psi]) \, \D t = 0 \ \ \mbox{in} \ \RR \times (0, d_*) .
\]
Then the maximum principle implies that $u_- (\cdot; s_*) - \psi$ does not vanish at
inner points of this strip, that is, $u_- (y; s_*) - \psi (x, y) > 0$ there.
Recollecting that $d_* = h_- (s_*)$, we arrive at the theorem's assertion.
\subsection{Proof of Theorem 4}
Since $\omega$ satisfies conditions (iii) and $\Omega (1) > 0$, we have that $s_0 =
\sqrt{2 \, \Omega (1)} > 0$. Since $h_0 < \check \eta$, a more sophisticated family of
test functions is required than that used in the proof of Theorem~2 (cf. the proof
of Theorem~3). To construct it we extend $\omega (t)$ in the same way as in the
proof of Theorem~2, that is, as a linear function to a small interval on the left of
$t = 0$ and put $\omega \equiv 0$ farther left so that the inequality $s_0^2 - 2 \,
\Omega (\tau) > s_0^2 / 2$ holds for $\tau \leq 0$. This implies that for every $s$
in a vicinity of $s_0$ the function $U (y; s)$ monotonically increases on the
half-axis $(-\infty, y_+ (s)]$ from $-\infty$ to the maximum value $\tau_+ (s) > 1$
and the graph of $U$ is symmetric about the vertical line through $y_+ (s)$.
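Note that the maximum value $\tau_+ (s)$ is determined by the first integral of the equation $U'' + \omega (U) = 0$. Indeed, using the normalisation $U (0; s) = 0$, $U' (0; s) = s$ (see \eqref{eq:Uim}) and the equality $U' (y_+ (s); s) = 0$, one obtains
\[ \Omega (\tau_+ (s)) = \frac{1}{2} \, s^2 ,
\]
and so $\tau_+ (s_0) = 1$ in view of the formula $s_0 = \sqrt{2 \, \Omega (1)}$.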
Let $s^* > s_0$ be such that $s^* - s_0$ is so small that $U (y; s^*)$ is
well-defined, and $h_0 < d^* < \check \eta$; here and below $d^* = h_+ (s^*)$. Let us
consider $u_+ (y; s^*)$ that solves problem \eqref{eq:u} on the interval $(0, d^*)$;
this function coincides with $U (y; s^*)$ there. It should be noted that $u_+ (y;
s^*) > 1$ when $y \in (h (s^*), d^*)$; moreover, this function monotonically
increases from zero to its maximum value $\tau_+ (s^*) > 1$ on $[0, y_+ (s^*)]$ and
monotonically decreases from $\tau_+ (s^*)$ to one on $[y_+ (s^*), d^*]$.
Now we construct a family of test functions, say $w (y; \theta)$, continuously
depending on $(y; \theta) \in [0, \hat \eta] \times [0, \Theta]$ with $\Theta$ to be
described later. The following properties are analogous to those in the proof of
Theorem~3:
\vspace{1mm}
(I) $w'' + \omega (w) = 0$ on $(0, \hat \eta)$ for $\theta \in [0, \Theta]$; (II) $w
(y; 0) = u_+ (y; s^*)$;
(III) $w (0; \theta) < 0$ and $w (y; \theta) < 1$ for $\theta \in (0, \Theta)$ and $y
\in [d^*, \hat \eta]$;
(IV) $w (y; \Theta) < 0$ for all $y \in [0, \hat \eta]$.
\vspace{1mm}
First, we consider $\theta \in [0, s^* - s_0]$ and put
\begin{equation}
w (y; \theta) = u_+ (y - c \theta; s^* - \theta) \ \ \mbox{with} \ c \in (0, s_0
[\omega (1)]^{-1}) \label{c*} \, .
\end{equation}
This definition guarantees that properties (I), (II) and the first inequality (III)
hold for these values of $\theta$.
Let us show that the second inequality (III) is a consequence of $w (d^*; \theta) <
1$, for which purpose we check that $u_+ (y - c \, \theta; s^* - \theta)$
monotonically decreases when $y$ is greater than $d^*$. Since $u_+ (y; s)$ has this
property for $y > y_+ (s)$, it is sufficient to establish that $d^* - c \, \theta
> y_+ (s^* - \theta)$. Indeed, we have
\[ d^* - c \, \theta = h_+ (s^*) - c \, \theta = h_+ (s^* - \theta) - c \, \theta +
\theta \dot h_+ (s^* - \theta) + O (\theta^2) \quad \mbox{as} \ \theta \to 0 ,
\]
where $\dot h_+ (s^* - \theta) = 2 \, s_0 / \omega (1) + O (s^* - s_0 - \theta)$ in
view of Lemma~2. Combining this and the definition of $c$, we see that $\theta \dot
h_+ (s^* - \theta) - c \, \theta > 0$ provided $s^* - s_0$ is sufficiently small.
Furthermore, $h_+ (s^* - \theta) = y_+ (s^* - \theta) + y_+ (s^* - \theta) - h (s^*
- \theta)$, where the difference is non-negative, which together with the previous
inequality yields the required one.
To evaluate $w (d^*; \theta)$, we write
\begin{eqnarray*}
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\! w (d^*; \theta) = u_+ (h_+ ((s^* - \theta) + \theta) -
c \, \theta; s^* - \theta) \\ && \ \ \ \ = u_+ (h_+ (s^* - \theta) + \theta \dot
h_+ (s^* - \theta) - c \, \theta + O (\theta^2); s^* - \theta) \quad \mbox{as} \
\theta \to 0 .
\end{eqnarray*}
Using Lemma 2 again, we obtain that
\begin{eqnarray*}
&& \!\!\!\!\!\!\!\!\!\!\!\!\!\! w (d^*; \theta) = u_+ ( h_+ (s^* -
\theta) + [2 \, s_0 / \omega (1) - c] \theta + O ((s^* - s_0) \theta); s^* - \theta ) \\ && \ \ \ \
= 1 + u'_+ (h_+ (s^* - \theta); s^* - \theta) \left\{ [2 \, s_0 / \omega (1) - c]
\theta + O ((s^* - s_0) \theta) \right\} \\ && \ \ \ \ + \, 2^{-1} u''_+ (h_+ (s^* -
\theta); s^* - \theta) \left\{ [2 \, s_0 / \omega (1) - c] \theta + O ((s^* - s_0)
\theta) \right\}^2 \\ && \ \ \ \ + \, O \left( (s^* - s_0)^3 \right) \quad
\mbox{as} \ s^* - s_0 \to 0 ,
\end{eqnarray*}
because $u_+ (h_+ (s^* - \theta); s^* - \theta) = 1$. Let us show that the second
and third terms on the right-hand side are negative provided $s^* - s_0$ is
sufficiently small. Indeed, $2 \, s_0 / \omega (1) - c > 0$ by the definition of
$c$, and so the expression in braces is positive for such values of $s^*$ and
$\theta$. Therefore, it remains to investigate the behaviour of
\[ u'_+ (h_+ (s^* - \theta); s^* - \theta) \ \ \mbox{and} \ \ u''_+ (h_+ (s^* -
\theta); s^* - \theta)
\]
for these values. Since $\omega$ satisfies conditions (iii), $u'_+ (h_0; s_0) = 0$
which yields that
\[ u'_+ (h_+ (s^* - \theta); s^* - \theta) = u''_+ (h_0; s_0) [ (s^* - \theta)^2 -
s_0^2 ] / \omega (1) + O \left( (s^* - s_0)^2 \right) \ \ \mbox{as} \ s^* - s_0 \to
0 .
\]
This implies that
\[ u'_+ (h_+ (s^* - \theta); s^* - \theta) = s_0^2 - (s^* - \theta)^2 + O \left(
(s^* - s_0)^2 \right) \ \ \mbox{as} \ s^* - s_0 \to 0 ,
\]
in view of the equality $u''_+ (h_0; s_0) = - \omega (1)$, and so $u'_+ (h_+ (s^* -
\theta); s^* - \theta)$ is either negative or equal to $O \left( (s^* - s_0)^2
\right)$ when $s^*$ is sufficiently close to $s_0$ and $\theta \in [0, s^* - s_0]$.
Besides, the same equality gives that
\[ u''_+ (h_+ (s^* - \theta); s^* - \theta) = - \omega (1) + O (s^* - s_0 - \theta)
\ \ \mbox{as} \ s^* - s_0 \to 0 ,
\]
and the right-hand side is negative provided $s^*$ and $\theta$ have the same
properties. This is a consequence of $\omega (1) > 0$ which is included in
conditions (iii).
Using these facts in the last expression for $w (d^*; \theta)$, we see that it is
less than one for $\theta \in [0, s^* - s_0]$ provided $s^*$ is sufficiently close
to $s_0$.
The next step is to define $w (y; \theta)$ for $\theta \in [s^* - s_0, s^> - s^*]$ with $s^>$
such that $s^> - s^* > 0$ is small. In this case, formula \eqref{eq:Uim} defines the
function $U (y; s)$ for $y \in (-\infty, y_+ (1))$ and $s \in [s^*, s^>]$. This
allows us to put
\begin{equation}
w (y; \theta) = U (y - c \, (s^* - s_0); s^* - \theta) \quad \mbox{for} \ \theta \in
[s^* - s_0, s^> - s^*] \label{c*'} \, ,
\end{equation}
where $c$ is the same as in \eqref{c*}. It is clear that property (I) is fulfilled
and both inequalities (III) hold because they are true for $\theta = s^* - s_0$ and
$s^> - s_0$ is small. Moreover, $w (y; \theta) < 1$ for $\theta$ described in
\eqref{c*'} and all $y \in \RR$ since
\[ \sup_{y \in \RR} w (y; \theta) = \tau_+ (s^* - \theta) < 1 .
\]
Finally, the continuity follows from the fact that it holds for $\theta = s^* - s_0$
which one verifies directly.
Let $\Theta = \hat \eta - c \, (s^* - s_0) + s^* - s^> + \delta$, where $\delta > 0$
is sufficiently small. Putting
\[ w (y; \theta) = U (y - c \, (s^* - s_0) - \theta + s^* - s^>; s^>) \quad \mbox{for}
\ \theta \in [s^> - s^*, \Theta] ,
\]
we see that properties (I) and (III) are fulfilled by continuity, and so it remains
to check property (IV). Indeed, it follows from continuity and the fact that
\[ w (\hat \eta; \Theta) = U (- \delta; s^>) < 0 .
\]
Having the family $w (y; \theta)$, we proceed in the same way as in the proof of
Theorem~2. In view of property (IV) we have that $w (y; \Theta) - \psi (x, y) < 0$
on $\bar D$. Let us assume that there exists $\theta_0 \in (0, \Theta)$ such that
\begin{equation}
\sup_{\bar D} \{ w (y; \theta_0) - \psi (x, y) \} = 0 . \label{sup*}
\end{equation}
In view of inequalities (III), $w (y; \theta_0) - \psi (x, y)$ does not vanish for
$(x, y) \in \partial D$. Moreover, as in the proof of Theorem~3, condition
\eqref{bound} guarantees that this function is separated from zero when $(x,y) \in (\RR \times
[0, \delta]) \cup \{ x \in \RR , y \in [\eta (x) - \delta, \eta (x)] \}$ for some
$\delta > 0$. According to assumption \eqref{sup*}, either there exists $(x_0, y_0)$
belonging to $D$, lying outside the $\delta$-strips described above and such that $w
(y_0; \theta_0) - \psi (x_0, y_0) = 0$, or there exists a sequence $\{(x_k,
y_k)\}_{k=1}^\infty$ lying outside these strips and such that
\begin{equation*}
w (y_k; \theta_0) - \psi (x_k, y_k) \to 0 \quad \mbox{as} \ k \to \infty .
\end{equation*}
In both cases, we arrive at a contradiction in the same way as in the proof of
Theorem~3. Therefore, $w (y; 0) - \psi (x, y)$ is non-positive on $\bar D$ and
vanishes when $y = 0$.
We recall that $w (y; 0) = u_+ (y; s^*)$ by property (II). Since this function
satisfies property (I), we obtain that
\[ \nabla^2 [u_+ (\cdot; s^*) - \psi] + [u_+ (\cdot; s^*) - \psi] \int_0^1 \omega'
(t [u_+ (\cdot; s^*) - \psi]) \, \D t = 0 \ \ \mbox{in} \ D .
\]
Then the maximum principle implies that $u_+ (\cdot; s^*) - \psi$ does not vanish at
inner points of $D$, that is, $u_+ (y; s^*) - \psi (x, y) < 0$ there, which
completes the proof.
\section{Discussion}
In the framework of the standard formulation, we have considered the problem
describing steady, rotational, water waves in the case when a counter-current might
be present in the corresponding flow of finite depth. Bounds on characteristics of
wave flows are obtained in terms of the same characteristics but corresponding to
some particular horizontal, shear flows that have the same vorticity distribution
$\omega$. It is important that our method allowed us to obtain bounds for stream
functions that change sign within the flow domain for which either $\check \eta$ or
$\hat \eta$ is greater than $h_0$.
It should be also mentioned that according to assertion (4) of Theorem~1 in
\cite{KKL} no unidirectional solutions exist for $r > r_0$ in the case of $\omega$
satisfying conditions (iii). Theorems~3 and 4 complement this result as follows. If
$\omega$ satisfies conditions (ii), then a wave flow such that $\check{\eta} \geq
h_0$ and $\psi \leq 1$ has a near-bottom counter-current, whereas if $\omega$
satisfies conditions (iii), then a wave flow such that $\check{\eta} > h_0$ and
$\psi \geq 0$ has a counter-current near the free surface. Thus, no unidirectional
flows exist in these cases.
An essential feature of the obtained bounds is that they, unlike those in
\cite{KN}, vary depending on the vorticity type. Indeed, inequalities (5.2a) in
\cite{KN} exclude the vorticity distributions satisfying conditions (ii) and (iii).
Another important point, that distinguishes our results from those in \cite{KN} and
also in \cite{KK4}, is that no extra requirement is imposed on $\omega$ and the
latter is assumed to be merely a locally Lipschitz function. Indeed, to prove
Theorems~3.2 and 3.4 in \cite{KK4} (they are analogous to Theorems~1 and 2 here) it
was assumed that $\mu = {\rm ess\ sup}_{\tau \in (-\infty, +\infty)}\ \omega'
(\tau)$ is less than $\pi^2 / \check \eta^2$ and $\pi^2 / \hat \eta^2$ respectively
(the last condition was also used in \cite{KN}, whereas another bound was imposed on
$\mu$ in Theorem~3.6, \cite{KK4}). However, the much weaker assumption that $\omega$
satisfies conditions (ii) yields assertion (C) of Theorem~1 for a wider range of
values of the Bernoulli constant than $r \in (r_c, r_0)$, and below we outline how
to demonstrate this.
Another point concerning assertion (C) of Theorem~3.2 in \cite{KK4} should be
mentioned. The assumption that $\check \eta < h (s^>)$ (it is expressed in terms of
the notation adopted in the present paper) used in the proof of this assertion is
missing in the formulation of that theorem. A similar omission made in Theorem~3.4,
\cite{KK4}, is as follows. The assumption $\hat \eta < h (s^<)$ used in the proof
of assertion (B) of this theorem is missing in its formulation.
Let us turn to assertion (C) of Theorem~1. Since $s_0 = 0$ for $\omega$ satisfying
conditions (ii) which is assumed in what follows, the functions $U (y; s)$ and $h
(s)$ are defined by formulae \eqref{eq:Uim} and \eqref{eq:d} respectively for all $s
\geq 0$. Then the following formulae
\[ U(y; s) = U (y - 2 \, y_- (-s); -s) , \quad h (s) = h (-s) + 2 \, y_- (-s)
\]
with $y_-$ given by \eqref{eq:y_pm} extend these functions to the negative values of
$s$ belonging to some interval adjacent to zero. It is clear that both functions are
continuously differentiable, and Lemma~1 implies that $\dot h (s) < 0$ when $s < 0$
is in a neighbourhood of zero.
Let $s' \geq - s^>$ be such that $(s', 0)$ is the largest interval where $\dot h$
is negative; then
\begin{equation}
\partial_s U(y; s) > 0 \quad \mbox{for} \ y \in (0, h (s)) \ \mbox{and} \ s \in (s',
\infty) . \label{lem2}
\end{equation}
Indeed, if $U (y; s)$ is not a monotonically increasing function of $s$ for some $y
\in (0, h (s))$, then there exist small negative
\[ s_1, s_2 > s_1 \ \mbox{and} \ y_* \in (0, h (s_2)) \ \mbox{such that} \ U (y_*;
s_1) \geq U (y_*; s_2) .
\]
Since $U(y; s_1) < U(y; s_2)$ for $y = h (s_2)$ and when $y > 0$ is small, there
exists $s > 0$ such that $U (y; s) \geq U(y; s_2)$ for $y > 0$ and $U(y_*; s) =
U(y_*; s_2)$. However, this is impossible in view of the maximum principle for
non-negative functions.
It is clear that formula \eqref{eq:calR} correctly defines $\mathcal R (s)$ for $s
\in (s', 0)$. Therefore, the stream solutions $(u_+, H_+)$ and $(u_-, H_-)$ can be
found for $r \in [r_0, r')$ in the same way as in \cite{KK3}; here $r' = \lim_{s\to
s'} {\cal R} (s)$. Now we are in a position to complement Theorem~1 by the following
assertion.
\vspace{2mm}
\noindent {\bf Proposition 1.} {\it Let $\omega$ satisfy conditions {\rm (ii)}. If
problem \eqref{eq:lapp}--\eqref{eq:bep} with $r \in (r_c, r')$ has a non-stream
solution $(\psi, \eta)$ such that $\psi \leq 1$ in $\bar D$ and $\check \eta < h (s')$,
then $\check \eta \leq H_+$ and this inequality is strict provided $\check \eta$ is
attained at some point on the free surface.}
\vspace{2mm}
\noindent {\it Sketch of the proof.} It is sufficient to prove the proposition for
$r \in (r_0, r')$, in which case there exists $\check{s} \in (s', 0)$ such that $h
(\check s) = \check{\eta}$. In the same way as in the proof of Theorem~3, one
constructs a family, say $v (y, \lambda)$, that depends on $(y, \lambda) \in (0, h
(\check s)) \times [0, \Lambda]$ continuously and satisfies properties (I)--(IV)
listed in that proof. Then applying inequality \eqref{lem2} and the definition of
$\check s$, one completes the proof using the same argument as in \S\,3.1 with $v
(y, \lambda)$ instead of $U_\ell (y)$.
\vspace{2mm}
To conclude this section, it remains to show that the existence of $s'$ means
that $\omega'$ satisfies the following condition. For every $s > s'$ the inequality
\begin{equation}
\int_0^{h (s)} \left[ |v' (y)|^2 - \omega'(U(y; s)) |v (y)|^2 \right] \D y > 0
\label{fin}
\end{equation}
holds for every non-zero $v$ belonging to the Sobolev space $H_0^1 (0, h (s))$.
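In variational terms, inequality \eqref{fin} says that the lowest eigenvalue of the Dirichlet problem
\[ - z'' - \omega' (U (\cdot; s)) \, z = \mu \, z \ \ \mbox{on} \ (0, h (s)) , \quad z (0) = z (h (s)) = 0 ,
\]
is positive; this follows from the standard Rayleigh-quotient characterisation of the lowest eigenvalue.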
Indeed, we have that $U (h(s); s) = 1$, and so
\[ \D U (h(s); s) / \D s = \partial_y U (h(s); s) \, \dot h (s) + \partial_s U
(h(s); s) = 0 .
\]
Since $[\partial_y U (h(s); s)]_{s=s'} \neq 0$, the equality $\dot h (s') = 0$
implies that $z (y) = [\partial_s U (y; s)]_{s=s'}$ is a nontrivial solution of the
boundary value problem
\[ z'' + \omega' (U (\cdot; s')) \, z = 0 \ \mbox{on} \ (0, h (s')) , \quad z (0) =
z (h (s')) = 0 .
\]
This yields the property of $\omega'$ formulated above.
If $\mu < \pi^2 / [h (s)]^2$, then inequality \eqref{fin} holds for the described
test functions, and so this property of $\omega'$ is weaker than the bound imposed
on $\mu$ in \cite{KN} and \cite{KK4}.
\vspace{6mm}
\noindent {\bf Acknowledgements.} V.~K. was supported by the Swedish Research
Council (VR). N.~K. acknowledges the support from the Link\"oping University.
{\small
\section{\label{sec:intro} Introduction}
The past two decades have seen immense progress in observational cosmology that has led to the establishment of the $\Lambda$CDM model for cosmology.
This development is mainly based on the combination of different cosmological probes such as the CMB temperature anisotropies, galaxy clustering, weak gravitational lensing, supernovae and galaxy clusters. Until now, these probes have been, for the most part, measured and analysed separately using different techniques and combined at late stages of the analysis, i.e. when deriving constraints on cosmological parameters. However, this approach is not ideal for current and future surveys such as the Dark Energy Survey (DES\footnote{\tt{http://www.darkenergysurvey.org}.}), the Dark Energy Spectroscopic Instrument (DESI\footnote{\tt{http://desi.lbl.gov}.}), the Large Synoptic Survey Telescope (LSST\footnote{\tt{http://www.lsst.org}.}), Euclid\footnote{\tt{http://sci.esa.int/euclid/}.} and the Wide Field Infrared Survey Telescope (WFIRST\footnote{\tt{http://wfirst.gsfc.nasa.gov}.}) for several reasons. First, these surveys will cover large, overlapping regions of the observable universe and are therefore not statistically independent. In addition, the analysis of these surveys requires tight control of systematic effects, which might be identified by a direct cross-correlation of the probe statistics. Moreover, each probe provides a measurement of the cosmic structures through a different physical field, such as density, velocity, gravitational potentials, and temperature. A promising way to test for new physics, such as modified gravity, is to look directly for deviations from the expected relationships of the statistics of the different fields. The integrated treatment of the probes from the early stages of the analysis will thus provide the cross-checks and the redundancy needed not only to achieve high precision but also to challenge the different sectors of the cosmological model.
Several earlier studies have considered joint analyses of various cosmological probes. \citet{Mandelbaum:2013, Cacciato:2013} and \citet{Kwan:2016} for example derived cosmological constraints from a joint analysis of galaxy-galaxy lensing and galaxy clustering while \citet{Liu:2016} used the cross-correlation between the galaxy shear field and the overdensity field together with the cross-correlation of the galaxy overdensity with CMB lensing to constrain multiplicative bias in the weak lensing shear measurement in CFHTLenS. Recently, \citet{Singh:2016} performed a joint analysis of CMB lensing as well as galaxy clustering and weak lensing. Furthermore, \citet{Eifler:2014} and \citet{Krause:2016} have theoretically investigated joint analyses for photometric galaxy surveys by modelling the full non-Gaussian covariance matrix between cosmic shear, galaxy-galaxy lensing, galaxy clustering, photometric baryon acoustic oscillations (BAO), galaxy cluster number counts and galaxy cluster weak lensing.
Extending beyond this, we present and implement an integrated approach to probe combination. In this first implementation we combine data from CMB temperature anisotropies, galaxy overdensities and weak lensing. We use data from Planck 2015 \cite{Planck-Collaboration:2015af} for the CMB, for galaxy clustering we use photometric data from the 8$^{\mathrm{th}}$ data release of the Sloan Digital Sky Survey (SDSS DR 8) \cite{Aihara:2011} and the weak lensing shear data comes from SDSS Stripe 82 \cite{Annis:2014}. We combine these probes into a common framework at the map level by creating projected 2-dimensional maps of CMB temperature, galaxy overdensity and the weak lensing shear field. In order to jointly analyse this set of maps we consider the spherical harmonic power spectra of the probes including their cross-correlations. This leads to a spherical harmonic power spectrum matrix that combines CMB temperature anisotropies, galaxy clustering, cosmic shear, galaxy-galaxy lensing and the ISW \cite{Sachs:1967} effect with galaxy and weak lensing shear tracers. We combine this power spectrum matrix together with the full Gaussian covariance matrix and derive constraints on the parameters of the $\Lambda$CDM cosmological model, marginalising over a constant linear galaxy bias and a parameter accounting for possible multiplicative bias in the weak lensing shear measurement. In this first implementation, we use some conservative and simplifying assumptions. For instance we include a limited range of angular scales for the different probes to reduce our sensitivity to systematics, nuisance parameters and nonlinear corrections. With this, we work under the assumption of Gaussian covariance matrices and with a reduced set of nuisance parameters.
This paper is organised as follows. In Section \ref{sec:framework} we describe the framework for integrated probe combination employed in this work. The theoretical modelling of the cosmological observables is summarised in Section \ref{sec:theorypred}. Section \ref{sec:maps} describes the data analysis for each probe, especially the map-making procedure. The computation of the spherical harmonic auto- and cross-power spectra is discussed in Section \ref{sec:cls} and the estimation of the covariance matrix is detailed in Section \ref{sec:covariance}. In Section \ref{sec:results} we present the cosmological constraints derived from the joint analysis and we conclude in Section \ref{sec:conclusions}. More detailed descriptions of data analysis as well as robustness tests are deferred to the Appendix.
\section{\label{sec:framework} Framework}
\begin{figure}
\begin{center}
\includegraphics[scale=0.55]{fig1.pdf}
\caption{Synopsis of the framework for integrated probe combination employed in this work.}
\label{fig:framework}
\end{center}
\end{figure}
The framework for integrated probe combination employed in this work is illustrated in Fig.~\ref{fig:framework}. In a first step we collect data for different cosmological probes as taken by either separate surveys or by the same survey. For our first implementation described below we use cosmological data from the CMB temperature anisotropies, the galaxy overdensity field and the weak lensing shear field. After data collection, we perform probe specific data analysis which involves data selection and systematics removal. We then homogenise the data format by creating projected 2-dimensional maps for all probes considered. The common data format allows us to combine the cosmological probes into a common framework at the map level. We compute both the spherical harmonic auto- and cross-power spectra of this set of maps and combine them into the spherical harmonic power spectrum matrix $C_{\ell}^{ij}$. This matrix captures the cosmological information contained in the two-point statistics of the maps. In a last step we compute the power spectrum covariance matrix and combine it with theoretical predictions to derive constraints on cosmological parameters from a joint fit to the measured spherical harmonic power spectra. The details of the implementation for CMB temperature anisotropies, galaxy overdensities and weak lensing are described below.
\section{\label{sec:theorypred} Theoretical predictions}
The statistical properties of both galaxy overdensity $\delta_{g}$ and weak lensing shear $\gamma$, as well as their cross-correlation, can be measured from their spherical harmonic power spectra. These generally take the form of weighted integrals of the nonlinear matter power spectrum $P^{\mathrm{nl}}_{\delta \delta}(k, z)$ multiplied with spherical Bessel functions $j_{\ell}\boldsymbol{(}k \chi(z)\boldsymbol{)}$. Their computation is time-consuming and we therefore resort to the Limber approximation \cite{Limber:1953, Kaiser:1992, Kaiser:1998} to speed up calculations. This is a valid approximation for small angular scales, typically $\ell > \mathcal{O}(10)$, and broad redshift bins \cite{Peacock:1999}. For simplicity, we further focus on flat cosmological models, i.e. $\Omega_{\mathrm{k}} = 0$, for the theoretical predictions. The spherical harmonic power spectrum $C_{\ell}^{ij}$ at multipole $\ell$ between cosmological probes $i$, $j$ $\in \{\delta_{g}, \gamma\}$ can then be expressed as:
\begin{multline}
C_{\ell}^{ij}=\int \mathrm{d} z \; \frac{c}{H(z)} \; \frac{W^{i}\boldsymbol{\left(}\chi(z)\boldsymbol{\right)}W^{j}\boldsymbol{\left(}\chi(z)\boldsymbol{\right)}}{\chi^{2}(z)} \\
\times P^{\mathrm{nl}}_{\delta \delta}\left(k=\frac{\ell+\sfrac{1}{2}}{\chi(z)}, z\right),
\end{multline}
where $c$ is the speed of light, $\chi(z)$ the comoving distance, $H(z)$ the Hubble parameter and $W^{i}\boldsymbol{\left(}\chi(z)\boldsymbol{\right)}$ denotes the window function for probe $i$.
For galaxy clustering the window function is given by
\begin{equation}
W^{\delta_{g}}\boldsymbol{\left(}\chi(z)\boldsymbol{\right)} = \frac{H(z)}{c} b(z) n(z),
\label{eq:deltagwindow}
\end{equation}
where $b(z)$ denotes a linear galaxy bias and $n(z)$ is the normalised redshift selection function of the survey i.e. $\int \mathrm{d}z \; n(z) = 1$. We focus on scale-independent galaxy bias since we restrict the analysis to large scales, which are well-described by linear theory.
The window function for weak lensing shear is
\begin{equation}
W^{\gamma}\boldsymbol{\left(}\chi(z)\boldsymbol{\right)} = \frac{3}{2} \frac{\Omega_{\mathrm{m}} H^{2}_{0}}{c^{2}} \frac{\chi(z)}{a} \int_{\chi(z)}^{\chi_{\mathrm{h}}} \mathrm{d} z' n(z') \frac{\chi(z')-\chi(z)}{\chi(z')},
\label{eq:gammawindow}
\end{equation}
where $\Omega_{\mathrm{m}}$ denotes the matter density parameter today, $H_{0}$ is the present-day Hubble parameter, $\chi_{\mathrm{h}}$ is the comoving distance to the horizon and $a$ denotes the scale factor.
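As an illustration of how the Limber formula is evaluated in practice, the following sketch computes the galaxy clustering spectrum $C_\ell^{\delta_g \delta_g}$ numerically. It is not the pipeline used in this work: the flat $\Lambda$CDM background values, the power-law stand-in for the matter power spectrum and the Gaussian redshift selection function are placeholder assumptions chosen only to make the example self-contained.

```python
# Hedged numerical sketch (not the analysis pipeline of this work): evaluates
# the Limber integral for the galaxy-clustering spectrum C_ell. The background
# parameters, the toy "matter power spectrum" and the Gaussian n(z) below are
# illustrative assumptions only.
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]
H0 = 70.0           # assumed Hubble constant today [km/s/Mpc]
OM = 0.3            # assumed matter density parameter

def trapz(y, x):
    """Trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hubble(z):
    """H(z) for a flat LCDM background [km/s/Mpc]."""
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + 1.0 - OM)

def comoving_distance(z, n=512):
    """chi(z) = c * int_0^z dz'/H(z') [Mpc]."""
    zs = np.linspace(0.0, z, n)
    return trapz(C_KMS / hubble(zs), zs)

def n_of_z(z, z0=0.3, sigma=0.1):
    """Toy normalised redshift selection function (Gaussian)."""
    return np.exp(-0.5 * ((z - z0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

def p_delta(k):
    """Toy matter power spectrum [Mpc^3], k in 1/Mpc (stand-in for Halofit)."""
    return 1.0e4 * k / (1.0 + (k / 0.02) ** 2.4)

def cl_gg(ell, bias=1.0, zmax=1.0, nz=200):
    """Limber integral for the galaxy clustering C_ell."""
    zs = np.linspace(1e-3, zmax, nz)
    chis = np.array([comoving_distance(z) for z in zs])
    # window W^{delta_g}(chi(z)) = (H(z)/c) * b * n(z)
    w = hubble(zs) / C_KMS * bias * n_of_z(zs)
    integrand = (C_KMS / hubble(zs)) * w ** 2 / chis ** 2 \
        * p_delta((ell + 0.5) / chis)
    return trapz(integrand, zs)

cl100 = cl_gg(100.0)
```

Since the window function enters the integrand quadratically, the result scales as $b^2$ with the linear galaxy bias, which the sketch reproduces exactly.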
Similarly to the spherical harmonic power spectra of galaxy clustering and weak lensing the spherical harmonic power spectrum of CMB temperature anisotropies $T$ can be related to the primordial matter power spectrum generated during inflation as \cite{Dodelson:2003}
\begin{equation}
C^{\mathrm{TT}}_{\ell} = \frac{2}{\pi} \int \mathrm{d}k \; k^{2} P^{\mathrm{lin}}_{\delta \delta}(k) \left \vert \frac{\Delta T_{\ell}(k)}{\delta(k)}\right \vert^{2},
\end{equation}
where $\Delta T_{\ell}$ denotes the transfer function of the temperature anisotropies and $\delta$ is the matter overdensity.
The CMB temperature anisotropies are correlated to tracers of the large-scale structure (LSS) such as galaxy overdensity and weak lensing shear primarily through the integrated Sachs-Wolfe effect \cite{Sachs:1967}. On large enough scales where linear theory holds, the spherical harmonic power spectra between these probes can be computed from expressions similar to those above. In the Limber approximation \cite{Limber:1953, Kaiser:1992, Kaiser:1998}, the spherical harmonic power spectrum between CMB temperature anisotropies and a tracer $i$ of the LSS becomes: \cite{Crittenden:1996}
\begin{multline}
C^{i\mathrm{T}}_{\ell} = 3 \frac{\Omega_{\mathrm{m}} H^{2}_{0}T_{\mathrm{CMB}}}{c^{2}} \frac{1}{(\ell+\sfrac{1}{2})^{2}} \int \mathrm{d}z \frac{\mathrm{d}}{\mathrm{d}z} \left[D(z)(1+z)\right] \\
\times D(z) W^{i}\boldsymbol{\left(}\chi(z)\boldsymbol{\right)} P^{\mathrm{lin}}_{\delta \delta}\left(k=\frac{\ell+\sfrac{1}{2}}{\chi(z)}, 0\right),
\label{eq:clisw}
\end{multline}
where $T_{\mathrm{CMB}}$ denotes the mean temperature of the CMB today, $i \in \{\delta_{g}, \gamma\}$ and $W^{i}\boldsymbol{\left(}\chi(z)\boldsymbol{\right)}$ represents the window functions defined in Equations \ref{eq:deltagwindow} and \ref{eq:gammawindow}. We have further split the linear matter power spectrum $P^{\mathrm{lin}}_{\delta \delta}(k, z)$ into its time-dependent part parametrised by the growth factor $D(z)$ and the scale-dependent part $P^{\mathrm{lin}}_{\delta \delta}(k, 0)$. For a derivation of Eq.~\ref{eq:clisw} for the galaxy overdensity field as tracer of the LSS see e.g. \citet{Padmanabhan:2005}. The derivation for $C^{\gamma\mathrm{T}}_{\ell}$ is similar and is detailed in Appendix \ref{sec:shearisw}.
To compute the auto-power spectrum of the CMB temperature anisotropies we use the publicly available Boltzmann code $\textsc{class}$\footnote{$\tt{http://class\text{-}code.net}$.} \cite{Lesgourgues:2011}. For the other power spectra we use $\textsc{PyCosmo}$ \cite{Refregier:2016}. We calculate the linear matter power spectrum from the transfer function derived by \citet{Eisenstein:1998}. To compute the nonlinear matter power spectrum we use the $\textsc{Halofit}$ fitting function \cite{Smith:2003} with the revisions of \citet{Takahashi:2012}.
\section{\label{sec:maps} Maps}
\begin{table*}
\caption{Summary of used data.} \label{tab:data}
\begin{center}
\begin{tabular}{>{\centering}m{2.5cm}|>{\centering}m{7cm}|>{\centering}m{8cm}@{}m{0pt}@{}} \hline \hline
CMB temperature anisotropies &
\multicolumn{2}{>{\centering}m{8cm}}{
\Tstrut
Survey: Planck 2015 \cite{Planck-Collaboration:2015ab} \\
Fiducial foreground-reduced map: $\tt{Commander}$ \\
Sky coverage: $f_{\text{sky}} = 0.776$
\Bstrut} &
\tabularnewline \hline
galaxy overdensity &
\multicolumn{2}{>{\centering}m{8cm}}{
\Tstrut
Survey: SDSS DR8 \cite{Aihara:2011} \\
Sky coverage: $f_{\mathrm{sky}} = 0.27$ \\
Galaxy sample: CMASS1-4 \\
Number of galaxies: $N_{\mathrm{gal}} = 854\,063$ \\
Photometric redshift range $0.45 \leq z_{\mathrm{phot}} < 0.65$
\Bstrut} &
\tabularnewline \hline
weak lensing &
\multicolumn{2}{>{\centering}m{8cm}}{
\Tstrut
Survey: SDSS Stripe 82 co-add \cite{Annis:2014}\\
Sky coverage: $f_{\mathrm{sky}} = 0.0069$ \\
Number of galaxies: $N_{\mathrm{gal}} = 3\,322\,915$ \\
Photometric redshift range: $0.1 \lesssim z_{\mathrm{phot}} \lesssim 1.1$ \\
r.m.s. ellipticity per component: $\sigma_{e} \sim 0.43$
\Bstrut} &
\tabularnewline \hline \hline
\end{tabular}
\end{center}
\end{table*}
\subsection{\label{subsec:cmbmap}Cosmic Microwave Background}
We use the foreground-reduced CMB anisotropy maps provided by the Planck collaboration \cite{Planck-Collaboration:2015ab} in their 2015 data release. We choose these over the uncleaned single-frequency maps because they allow the foreground correction to be performed at the map rather than at the power spectrum level. This is important when considering probe combination. The Planck foreground-reduced CMB anisotropy maps have been derived using four different algorithms: $\tt Commander$, $\tt NILC$, $\tt SEVEM$ and $\tt SMICA$. The maps are given in HEALPix\footnote{$\tt http://healpix.sourceforge.net$.} \cite{Gorski:2005} format and are provided in Galactic coordinates at two different resolutions of $\tt NSIDE$ $= 1024$ and $\tt NSIDE$ $= 2048$. These correspond to pixel areas of $11.8$ and $2.95$ arcmin$^{2}$, respectively. Different data configurations are available \cite{Planck-Collaboration:2015ab}; we use both the half-mission half-sum (HMHS) maps, which contain both signal and noise, and the half-mission half-difference (HMHD) maps, which contain only noise and potential residual systematic uncertainties. All four maps yield consistent estimates of both the spherical harmonic power spectrum of the CMB temperature anisotropies as well as the spherical harmonic cross-power spectrum between CMB temperature anisotropies and tracers of the LSS \cite{Planck-Collaboration:2015ab, Planck-Collaboration:2015ac, Planck-Collaboration:2015ad}. Since the Planck collaboration found the $\tt Commander$ approach to be the preferred solution for studying the CMB anisotropies at large and intermediate angular scales, we also choose it for our analysis. Each of the four foreground reduction methods also provides a confidence mask inside which the CMB solution is trusted. Following the Planck collaboration \cite{Planck-Collaboration:2015ab}, we adopt the union of the confidence masks for $\tt Commander$, $\tt SEVEM$ and $\tt SMICA$.
This is referred to as the $\tt UT78$ mask and covers $77.6 \%$ of the sky at a resolution of $\tt NSIDE$ $= 2048$. To downgrade the mask to $\tt NSIDE$ $= 1024$, we follow the description outlined in \citet{Planck-Collaboration:2015ab}. The HMHS CMB anisotropy map derived using $\tt Commander$ is shown in the top panel of Fig.~\ref{fig:maps} for resolution $\tt NSIDE$ $= 1024$ and the corresponding HMHD map is shown in Fig.~\ref{fig:cmbnoisemap} in the Appendix.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.7]{fig2.pdf}
\caption{Summary of the three maps in Galactic coordinates used in this analysis. The all-sky maps are in Mollweide projection while the zoom-in versions are in Gnomonic projection. The HMHS map of CMB temperature anisotropies as derived using $\tt{Commander}$ is shown in the top panel. It is masked using the $\tt{UT78}$ mask. The middle panel shows the systematics-corrected (see text) galaxy overdensity map for CMASS1-4 galaxies. Grey areas have been masked either because they lie outside the survey footprint or are potentially contaminated by systematics. The lower panel shows the map of the SDSS Stripe 82 shear modulus $\vert \hat{\gamma} \vert$. Grey areas have been masked because they are either unobserved or do not contain galaxies for shear measurement. The zoom-in figures (left) are enlarged versions of the $5\times5$ deg$^{2}$ region centred on $(\tt{l}, \tt{b})$ $= (53\degree, -33.5\degree)$ shown in the maps. The zoom-in for the galaxy shear map is overlaid with a whisker plot of the galaxy shears. All three maps have resolution $\tt NSIDE$ $= 1024$.}
\label{fig:maps}
\end{center}
\end{figure*}
\subsection{\label{subsec:galaxymap}Galaxy overdensity}
The SDSS \cite{York:2000, Eisenstein:2011, Gunn:1998, Gunn:2006} obtained wide-field images of $14\,555$ deg$^{2}$ of the sky in 5 photometric passbands ($u, g, r, i, z$ \cite{Fukugita:1996, Smith:2002, Doi:2010}) up to a limiting $\tt{r\text{-}band}$ magnitude of $r \simeq 22.5$. The photometric data is complemented with spectroscopic data from the Baryonic Oscillations Spectroscopic Survey (BOSS) \cite{Eisenstein:2011, Dawson:2013, Smee:2013}. BOSS was conducted as part of SDSS III \cite{Eisenstein:2011} and obtained spectra of approximately 1.5 million luminous galaxies distributed over $10\,000$ deg$^{2}$ of the sky. The SDSS photometric redshifts for DR8 \cite{Aihara:2011} are estimated using a local regression model trained on a spectroscopic training set consisting of $850\,000$ SDSS DR8 spectra and spectroscopic data from other surveys\footnote{More details can be found on \tt{http://www.sdss3.org/dr8/algorithms/photo-z.php}.}. The algorithm is outlined in \citet{Beck:2016}.
In our analysis, which is described in the following, we largely follow \citet{Ho:2012}. We select objects classified as galaxies from the $\textsc{PhotoPrimary}$ table in the Catalog Archive Server (CAS\footnote{The SDSS Catalog Archive Server can be accessed through \tt{http://skyserver.sdss.org/CasJobs/SubmitJob.aspx}.\label{footn:cas}}). To obtain a homogeneous galaxy sample we further select CMASS galaxies using the color-magnitude cuts used for BOSS target selection \cite{Eisenstein:2011} and outlined in \citet{Ho:2012}. This selection isolates luminous, high-redshift galaxies that are approximately stellar mass limited \cite{White:2011, Ross:2011}. We further restrict the sample to CMASS galaxies with SDSS photometric redshifts in the range $0.45 \leq z < 0.65$, i.e. we consider the photometric redshift slices CMASS1-4. This selection yields a total of $N_{\mathrm{gal}} = 1\,096\,455$ galaxies.
To compute the galaxy overdensity field, we need to characterise the full area observed by the survey and mask regions heavily affected by foregrounds or potential systematics. The area imaged by the SDSS is divided into units called fields. Several such fields have been observed multiple times in the SDSS imaging runs. The survey footprint is the union of the best observed (primary) fields at each position and is described in terms of $\textsc{Mangle}$ \cite{Hamilton:1993, Hamilton:2004, Swanson:2008} spherical polygons. Each of these polygons is matched to the SDSS field fully covering it\footnote{This information is found in the files $\tt window\_unified.fits$ and $\tt window\_flist.fits$.}. In order to select the survey area least affected by foregrounds and potential systematics we follow \citet{Ho:2012} and \citet{Ross:2011} and restrict the analysis to polygons covered by fields with $\tt{score}$\footnote{\tt{http://www.sdss3.org/dr10/algorithms/resolve.php}.} $\geq 0.6$, full width at half maximum (FWHM) of the point spread function (PSF) $\tt{PSF\text{-}FWHM}$ $< 2.0$ arcsec in the $\tt{r\text{-}band}$ and Galactic extinction $\tt{E(B-V)}$ $\leq 0.08$ as determined from the extinction maps from \citet{Schlegel:1998}.
To facilitate a joint analysis between the LSS probes and the CMB, which is given as a map in Galactic coordinates, we transform both the galaxy positions as well as the survey mask from equatorial ($\tt RA$, $\tt DEC$) to Galactic ($\tt l$, $\tt b$) coordinates. We construct the continuous galaxy overdensity field by pixelising the galaxy overdensities $\delta_{g} = \delta n/\bar{n}$ onto a HEALPix pixelisation of the sphere with resolution $\tt NSIDE$ $= 1024$. We mask the galaxy overdensity map with a HEALPix version of the SDSS survey mask, which is obtained by random sampling of the $\textsc{Mangle}$ mask. To account for the effect of bright stars, we use the Tycho astrometric catalog \cite{Hog:2000} and define magnitude-dependent stellar masks as defined in \citet{Padmanabhan:2007}. We remove galaxies inside the bright star masks and correct for the area covered by the bright stars by removing the area covered by the star masks from the pixel area $A_{\mathrm{pix}, \mathrm{corr}} = A_{\mathrm{pix}, \mathrm{uncorr}} - A_{\mathrm{stars}}$ when computing the galaxy overdensity. The final map covers a fraction $f_{\mathrm{sky}} \approx 0.27$ of the sky and contains $N_{\mathrm{gal}} = 854\,063$ galaxies.
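As an illustration of the map-making step described above, the sketch below builds $\delta_{g} = n/\bar{n} - 1$ on a pixelised sky from a galaxy catalogue. It assumes pixel indices have already been assigned to galaxies (in the actual analysis this is done with HEALPix at $\tt NSIDE$ $=1024$) and uses a per-pixel effective area to mimic the bright-star correction $A_{\mathrm{pix}, \mathrm{corr}} = A_{\mathrm{pix}, \mathrm{uncorr}} - A_{\mathrm{stars}}$; all names are hypothetical.

```python
import numpy as np

def overdensity_map(pix_idx, npix, eff_area, masked):
    """Build a pixelised galaxy overdensity map delta_g = n/nbar - 1.

    pix_idx  : pixel index of each galaxy (e.g. from healpy.ang2pix)
    npix     : total number of pixels in the map
    eff_area : per-pixel area corrected for bright-star masks
    masked   : boolean array, True where a pixel is excluded
    """
    counts = np.bincount(pix_idx, minlength=npix).astype(float)
    # number density per unit effective area
    dens = np.where(eff_area > 0,
                    counts / np.maximum(eff_area, 1e-12), 0.0)
    good = ~masked & (eff_area > 0)
    nbar = dens[good].mean()          # mean density over unmasked pixels
    delta = np.zeros(npix)
    delta[good] = dens[good] / nbar - 1.0
    return delta, good
```

By construction the overdensity averages to zero over the unmasked footprint, and masked pixels carry no signal.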
Even after masking and removal of high contamination regions, there are still systematics left in the galaxy overdensity map. The correction for residual systematic uncertainties in the maps follows \citet{Ross:2011} and \citet{Hernandez-Monteagudo:2014} and is described in Appendix \ref{sec:galaxysys}. The final map is shown in the middle panel of Fig.~\ref{fig:maps}.
As well as the maps we need an estimate for the redshift distribution of the galaxies in our sample. To this end we follow \citet{Ho:2012} and match photometrically detected galaxies to galaxies observed spectroscopically in SDSS DR9 \cite{Ahn:2012}. We then estimate the redshift distribution of the photometric galaxies from the spectroscopic redshift distribution of the matching galaxies. The selected CMASS1-4 galaxies have spectroscopic redshifts $0.4 \lesssim z \lesssim 0.7$ as can be seen from the redshift distribution shown in Fig.~\ref{fig:nz}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.6]{fig3.pdf}
\caption{Redshift distribution for the LSS probes. The figure shows the redshift selection function of SDSS CMASS1-4 galaxies, the redshift selection function for the SDSS Stripe 82 galaxies as well as the weak lensing shear window function defined in Eq.~\ref{eq:gammawindow}. The redshift selection function for CMASS1-4 galaxies as well as the weak lensing shear window function have been rescaled relative to the Stripe 82 redshift selection function.}
\label{fig:nz}
\end{center}
\end{figure}
\subsection{\label{subsec:ellipticitymap}Weak lensing}
We take weak lensing data from the SDSS Stripe 82 co-add \cite{Annis:2014}, which comprises $275$ deg$^{2}$ of co-added SDSS imaging data with a limiting $\tt{r\text{-}band}$ magnitude $r \approx 23.5$ and $\tt{r\text{-}band}$ median seeing of $1.1$ arcsec. The shapes of objects detected in the SDSS were measured from the adaptive moments \cite{Bernstein:2002} by the $\tt PHOTO$ pipeline \cite{Lupton:2001} and are available on the CAS\footnote{See footnote \ref{footn:cas}.}. Photometric redshifts for all detected galaxies were computed using a neural network approach as described in \citet{Reis:2012} and are available as a DR7 value added catalog\footnote{$\tt{http://classic.sdss.org/dr7/products/value\_added/}$, $\tt{http://das.sdss.org/va/coadd\_galaxies/}$.}.
In the following analysis we closely follow the work by \citet{Lin:2012}. We select objects identified as galaxies in the co-add data (i.e. $\tt run = 106$ or $\tt run = 206$) from the CAS and we restrict the sample to galaxies with extinction corrected $\tt{i\text{-}band}$ magnitudes in the range $18 < i < 24$. Further we select only objects that pass the clean photometry cuts as defined by the SDSS\footnote{\tt{http://www.sdss.org/dr12/tutorials/flags/}.\label{footn:flags}} and that have neither flags indicating problems with the measurement of adaptive moments nor negative errors on those measurements. In particular, the former cuts exclude galaxies containing saturated pixels. We use shapes measured in the $\tt{i\text{-}band}$ since it has the smallest seeing ($1.05$ arcsec) \cite{Annis:2014, Lin:2012} and further consider only galaxies with observed sizes at least $50 \%$ larger than the PSF. This requirement is quantified by requiring the resolution factor $R = 1-\tt mRrCcPSF/\tt mRrCc$ \cite{Bernstein:2002} to satisfy $R > 0.33$, where $\tt mRrCc$ and $\tt mRrCcPSF$ denote the sum of the second order moments in the CCD column and row direction for both the object and the PSF.
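The resolution cut can be written compactly as a boolean selection; the sketch below is an illustration with hypothetical variable names, not the pipeline code.

```python
import numpy as np

def resolution_cut(mrrcc, mrrcc_psf, rmin=0.33):
    """Select galaxies resolved relative to the PSF.

    R = 1 - mRrCcPSF/mRrCc > 0.33 corresponds to an observed size at
    least ~50% larger than the PSF, since 1 - 1/1.5 = 1/3.
    """
    r = 1.0 - np.asarray(mrrcc_psf) / np.asarray(mrrcc)
    return r > rmin
```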
For the above galaxy sample we compute PSF-corrected galaxy ellipticities using the linear PSF correction algorithm as described in \citet{Hirata:2003}. For weak lensing shear measurement we follow \citet{Lin:2012} and restrict the sample to galaxies with PSF-corrected ellipticity components $e_{1}, e_{2}$ satisfying $\vert e_{1} \vert < 1.4$ as well as $\vert e_{2} \vert < 1.4$ and photometric redshift uncertainties $\sigma_{z} < 0.15$. This sample has an r.m.s. ellipticity per component of $\sigma_{e} \sim 0.43$. We then turn the PSF-corrected ellipticities for this sample into shear estimates. The details of the analysis are described in Appendix \ref{sec:cosmicshearanalysis}.
After computing weak lensing shear estimates from the ellipticities we apply a rotation to both the galaxy positions and shears from equatorial to Galactic coordinates\footnote{The exact rotation of the shears is described in Appendix \ref{sec:shearrotation}.} to allow for combination with the CMB. We pixelise both weak lensing shear components onto separate HEALPix pixelisations of the sphere choosing a resolution of $\tt NSIDE$ $= 1024$ as for the galaxy overdensity map. At this resolution the mean number of galaxies per pixel is about $38$, which corresponds to $n_{\mathrm{gal}} \simeq 3.2$ arcmin$^{-2}$. We apply a mask to both maps, which accounts for both unobserved and empty pixels. The final maps are constructed using $N_{\mathrm{gal}} = 3\,322\,915$ galaxies and cover a sky fraction $f_{\mathrm{sky}} \approx 0.0069$. The map of the shear modulus $\vert \hat{\gamma} \vert$ is shown in the bottom panel of Fig.~\ref{fig:maps} together with a zoom-in region with overlaid whisker plot illustrating the magnitude and direction of the weak lensing shear.
We follow \citet{Lin:2012} and estimate the redshift distribution of the galaxies from their photometric redshift distribution. The redshift distribution is shown in Fig.~\ref{fig:nz} together with the window function for weak lensing shear defined in Eq.~\ref{eq:gammawindow}. We see that the selected galaxies have photometric redshifts $z \lesssim 1.0$.
\section{\label{sec:cls} Spherical harmonic power spectra}
We calculate the spherical harmonic power spectra of the maps presented in the previous section using the publicly available code $\tt PolSpice$\footnote{\tt{http://www2.iap.fr/users/hivon/software/PolSpice/}.} \cite{Szapudi:2001, Chon:2004}. The $\tt PolSpice$ code is designed to combine both real and Fourier space in order to correct spherical harmonic power spectra measured on a cut-sky from the effect of the mask. The algorithm can be summarised as follows: starting from a masked HEALPix map, $\tt PolSpice$ first computes the so-called pseudo power spectrum, which is then Fourier transformed to the real space correlation function. In order to correct for the effects of the mask, the latter is divided by the mask correlation function. In a last step, the demasked correlation function is Fourier transformed back to the spherical harmonic power spectrum. This approach ensures that $\tt PolSpice$ can exploit the advantages of real space while still performing the computationally expensive calculations in Fourier space.
Demasking can only be performed on angular scales on which information is available, which translates to a maximal angular scale $\theta_{\mathrm{max}}$ for which a demasked correlation function can be computed. This maximal scale leads to ringing when transforming back from real to Fourier space, which can be reduced by apodising the correlation function prior to inversion. Both these steps lead to biases in the power spectrum recovered by $\tt PolSpice$. The kernels relating the average $\tt PolSpice$ estimates to the true power spectra can be computed theoretically for a given maximal angular scale and apodisation prescription and need to be corrected for when comparing theoretical predictions to observed power spectra.
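The demasking logic can be sketched with a toy full-sky round trip. This is a hand-rolled illustration of the idea, not the actual $\tt PolSpice$ implementation: transform a band-limited power spectrum to the real space correlation function via a Legendre sum, divide by the mask correlation function (unity here, since the toy case is full-sky), and transform back with Gauss--Legendre quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def cl_to_xi(cl, x):
    """xi(theta) = sum_l (2l+1)/(4 pi) C_l P_l(x), with x = cos(theta)."""
    ell = np.arange(len(cl))
    return legval(x, (2 * ell + 1) / (4 * np.pi) * cl)

def xi_to_cl(xi, x, w, lmax):
    """C_l = 2 pi * int_{-1}^{1} xi(x) P_l(x) dx via Gauss-Legendre
    quadrature with nodes x and weights w."""
    cl = np.empty(lmax + 1)
    for ell in range(lmax + 1):
        coeff = np.zeros(ell + 1)
        coeff[ell] = 1.0                       # isolate P_ell
        cl[ell] = 2 * np.pi * np.sum(w * xi * legval(x, coeff))
    return cl

lmax = 32
x, w = leggauss(2 * lmax)                      # quadrature nodes = cos(theta)
cl_in = 1.0 / (np.arange(lmax + 1) + 10.0) ** 2  # arbitrary smooth spectrum

xi = cl_to_xi(cl_in, x)                        # power spectrum -> xi(theta)
mask_xi = np.ones_like(x)                      # full sky: mask correlation = 1
xi_demasked = xi / mask_xi                     # PolSpice-style demasking step
cl_out = xi_to_cl(xi_demasked, x, w, lmax)     # back to the power spectrum
```

With a nontrivial mask, `mask_xi` would be the mask correlation function, and the division is only well defined out to the maximal angular scale $\theta_{\mathrm{max}}$ discussed below.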
An additional difficulty arises in the computation of spherical harmonic power spectra of spin-2 fields. Finite sky coverage tends to cause mixing between E- and B-modes. The polarisation version of $\tt PolSpice$ is designed to remove E- to B-mode leakage in the mean \cite{Chon:2004}. Details on our earlier application of $\tt PolSpice$ to LSS data are described in Appendix A of \citet{Becker:2015}.
In order to calculate both the auto- and cross-power spectra for all probes, we need to estimate the maximal angular scale $\theta_{\mathrm{max}}$. This is not a well-defined quantity but we can separately estimate it for each probe from the real space correlation function of its mask. The real space correlation function of the survey mask will fall off significantly or vanish for scales larger than $\theta_{\mathrm{max}}$. We therefore estimate $\theta_{\mathrm{max}}$ as the scale around which the mask correlation function significantly decreases in amplitude. Appendix \ref{sec:maskcorr} illustrates this analysis for the example of the SDSS Stripe 82 weak lensing shear mask. In order to reduce Fourier ringing we apodise the correlation function using a Gaussian window function; following \citet{Chon:2004} we choose the FWHM of the Gaussian window as $\theta_{\mathrm{FWHM}} = \sfrac{\theta_{\mathrm{max}}}{2}$. Survey masks with complicated angular dependence might not exhibit a clear fall-off, which complicates the choice of $\theta_{\mathrm{max}}$. We therefore validate our choices of $\theta_{\mathrm{max}}$ and $\theta_{\mathrm{FWHM}}$ with the Gaussian simulations as described in Appendix \ref{sec:corrmaps} and \ref{sec:validation}. We find our choices to allow the recovery of the input power spectra for all the probes and settings.
All spherical harmonic power spectra are corrected for the effect of the HEALPix pixel window function and the power spectra involving the CMB map are further corrected for the Planck effective beam window function, which complements the CMB maps.
We now separately describe the measurement of all the six spherical harmonic power spectra. To compute the power spectra, we use the maps and masks described in Section \ref{sec:maps} at resolution $\tt NSIDE$ $= 1024$, except for the CMB temperature power spectrum. For the latter we use the maps at resolution $\tt NSIDE$ $= 2048$, but we do not expect this to make a significant difference. The $\tt{PolSpice}$ parameter settings used to compute the power spectra are summarised in Tab.~\ref{tab:clparams}. This table further gives the angular multipole range as well as binning scheme employed for the cosmological analysis. For all probes considered, the uncertainties are derived from the Gaussian simulations described in Section \ref{sec:mockcov} and Appendix \ref{sec:corrmaps}.
\begin{table}
\caption{Spherical harmonic power spectrum parameters and angular multipole ranges.} \label{tab:clparams}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{ccccc}
Power spectrum & $\theta_{\mathrm{max}}$ [deg] & $\theta_{\mathrm{FWHM}}$ [deg] & $\ell$-range & $\Delta \ell$ \\ \hline \Tstrut
$C^{\mathrm{TT}}_{\ell}$ & 40 & 20 & $[10, \,610]$ & 30 \\
$C^{\delta_{g} \delta_{g}}_{\ell}$ & 80 & 40 & $[30, \,210]$ & 30 \\
$C^{\gamma \gamma}_{\ell}$ & 10 & 5 & $[70, \,610]$ & 60 \\
$C^{\delta_{g}\mathrm{T}}_{\ell}$ & 40 & 20 & $[30, \,210]$ & 30 \\
$C^{\gamma \mathrm{T}}_{\ell}$ & 10 & 5 & $[70, \,610]$ & 60 \\
$C^{\gamma \delta_{g}}_{\ell}$ & 10 & 5 & $[30, \,210]$ & 60 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
\subsection{\label{subsec:tempcls} CMB}
We use the half-mission half-sum (HMHS) map to estimate the CMB signal power spectrum and the half-mission half-difference (HMHD) map to estimate the noise in the power spectrum of the HMHS map.
The minimal angular multipole used in the cosmological analysis is chosen so as to minimise demasking effects, and the cut at $\ell = 610$ ensures that we are not biased by residual foregrounds in the maps, as discussed in Section \ref{sec:results}. The resulting power spectrum is shown in the top panel of Fig.~\ref{fig:cls}. In Appendix \ref{sec:cltests} we compare the CMB auto-power spectrum computed from the different foreground-reduced maps. As illustrated in Fig.~\ref{fig:compplanckmaps} in Appendix \ref{sec:cltests}, we find that the measured CMB auto-power spectrum is unaffected by the choice of foreground-reduced map.
\begin{figure*}
\includegraphics[scale=0.4]{fig4.pdf}
\caption{Spherical harmonic power spectra for all probes used in the cosmological analysis. The top left panel shows the power spectrum of CMB anisotropies computed from the $\tt Commander$ CMB temperature map at resolution of $\tt NSIDE$ $= 2048$. The middle left panel shows the cross-power spectrum between CMB temperature anisotropies and galaxy overdensity computed from the systematics-reduced SDSS CMASS1-4 map and the $\tt{Commander}$ map at resolution $\tt NSIDE$ $= 1024$. The middle right panel shows the spherical harmonic power spectrum of the galaxy overdensity computed from the systematics-reduced SDSS CMASS1-4 map at $\tt NSIDE$ $= 1024$. The bottom left panel shows the spherical harmonic power spectrum between CMB temperature anisotropies and weak lensing shear measured from the $\tt{Commander}$ CMB map and the SDSS Stripe 82 weak lensing maps at resolution $\tt NSIDE$ $= 1024$. The bottom-middle panel shows the spherical harmonic power spectrum between galaxy overdensity and galaxy weak lensing shear computed from the systematics-reduced SDSS CMASS1-4 map and the SDSS Stripe 82 galaxy weak lensing shear map at resolution $\tt NSIDE$ $= 1024$. The bottom right panel shows the spherical harmonic power spectrum of cosmic shear E-modes computed from the SDSS Stripe 82 weak lensing shear maps. The angular multipole ranges and binning schemes for all power spectra are summarised in Table \ref{tab:clparams}. All power spectra are derived from the maps in Galactic coordinates. The solid lines show the theoretical predictions for the best-fit cosmological model determined from the joint analysis which is summarised in Tab.~\ref{tab:params}. The theoretical predictions have been convolved with the $\tt{PolSpice}$ kernels as described in Section \ref{sec:cls}. The error bars are derived from the Gaussian simulations described in Section \ref{sec:mockcov} and Appendix \ref{sec:corrmaps}.}
\label{fig:cls}
\end{figure*}
\subsection{\label{subsec:deltagcls} Galaxy clustering}
The galaxy overdensity maps described in Section \ref{sec:maps} are estimated from discrete galaxy tracers. Therefore, their spherical harmonic power spectrum receives contributions from the galaxy clustering signal and Poisson shot noise. To estimate the noise power spectrum, we resort to simulations. We generate noise maps by randomising the positions of all the galaxies in the sample inside the mask. Since this procedure removes all correlations between galaxy positions, the power spectra of these maps will give an estimate of the level of Poisson shot noise present in the data. In order to obtain a robust noise power spectrum, we generate 100 noise maps and estimate the noise power spectrum from the mean of these power spectra.
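The randomisation procedure can be sketched as follows: scatter the galaxies uniformly over the (toy, unmasked) pixels, build an overdensity map per realisation, and average over realisations. For a flat map the resulting per-pixel variance tracks the Poisson expectation $1/\bar{N} = n_{\mathrm{pix}}/N_{\mathrm{gal}}$. The numbers below are illustrative, not the survey's.

```python
import numpy as np

rng = np.random.default_rng(0)
npix, ngal, nsim = 500, 50_000, 100            # toy values (nsim = 100 maps)

variances = []
for _ in range(nsim):
    pix = rng.integers(0, npix, size=ngal)     # randomised galaxy positions
    counts = np.bincount(pix, minlength=npix).astype(float)
    nbar = counts.mean()                       # = ngal / npix
    delta = counts / nbar - 1.0                # overdensity of the noise map
    variances.append(delta.var())

noise = np.mean(variances)                     # mean per-pixel variance
expected = npix / ngal                         # Poisson level: 1 / Nbar
```

In the actual analysis the randomisation is done inside the survey mask and the noise level is estimated from the mean of the 100 noise power spectra rather than from a pixel variance, but the principle is the same: randomising positions destroys clustering while preserving the shot noise.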
The spherical harmonic galaxy clustering power spectrum contains significant contributions from nonlinear structure formation at small angular scales. The effects of nonlinear galaxy bias are difficult to model and we therefore restrict our analysis to angular scales for which nonlinear corrections are small. We can estimate the significance of nonlinear effects by comparing the spherical harmonic galaxy clustering power spectrum computed using the nonlinear matter power spectrum as well as the linear matter power spectrum. Since galaxies are more clustered than dark matter this is likely to underestimate the effect. We find that the difference between the two reaches $5 \%$ of the power spectrum uncertainties and thus becomes mildly significant at around $\ell_{\mathrm{max}} \sim 250$. This difference is smaller than the difference derived in \citet{Ho:2012} and \citet{de-Putter:2012} which is likely due to the fact that we consider a single redshift bin and do not split the data into low and high redshifts. In order not to bias our results we choose $\ell_{\mathrm{max}} = 210$ which is comparable to the limit used in \citet{Ho:2012} and \citet{de-Putter:2012}. To determine the minimal angular multipole we follow \citet{Ho:2012}, who determined that the Limber approximation becomes accurate for scales larger than $\ell = 30$.
The middle right panel in Fig.~\ref{fig:cls} shows the spherical harmonic galaxy clustering power spectrum computed from the systematics-corrected map in Galactic coordinates. In Appendix \ref{sec:cltests}, we compare the spherical harmonic power spectrum derived from the systematics-corrected maps in Galactic and equatorial coordinates. We find small differences at large angular scales, but the effect on the bandpowers considered in this analysis is negligible, as can be seen from Appendix \ref{sec:cltests} (Fig.~\ref{fig:eclgalcls}). To test the procedure for removing systematic uncertainties, we compare the spherical harmonic power spectra before and after correcting the maps for residual systematics. We find that the removal of systematics marginally reduces the clustering amplitude on large scales, which is expected since Galactic foregrounds exhibit significant large scale clustering. Small angular scales on the other hand, are mostly unaffected by the corrections applied. These results are shown in Appendix \ref{sec:cltests} (Fig.~\ref{fig:deltagcorrnocorrcls}).
\subsection{\label{subsec:gammacls} Cosmic shear}
The power spectrum computed from the weak lensing shear maps described in Section \ref{subsec:ellipticitymap} contains contributions from both the cosmic shear signal and the shape noise of the galaxies, which is due to intrinsic galaxy ellipticities. In order to estimate the shape noise power spectrum we follow the same methodology as for galaxy clustering and resort to simulations. We generate noise-only maps by rotating the shears of all the galaxies in our sample by a random angle. This procedure removes spatial correlations between galaxy shapes. Since the weak lensing shear signal is at least an order of magnitude smaller than the intrinsic galaxy ellipticities, the power spectrum of the randomised map gives an estimate of the shape noise power spectrum. As for galaxy clustering, we compute 100 noise maps and estimate the shape noise power spectrum from the mean of these 100 noise power spectra.
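The random-rotation trick for a spin-2 field can be sketched in a few lines: writing the ellipticity as a complex number $e = e_{1} + \mathrm{i}e_{2}$, a rotation by an angle $\phi$ acts as $e' = e\,\mathrm{e}^{2\mathrm{i}\phi}$, which destroys spatial correlations while exactly preserving each galaxy's ellipticity modulus (and hence the shape noise level). The function name is ours.

```python
import numpy as np

def randomise_shears(e1, e2, rng):
    """Rotate each spin-2 ellipticity by a random angle: e' = e * exp(2i phi).

    For a spin-2 quantity, angles in [0, pi) already cover all
    orientations, since phi and phi + pi give the same rotation.
    """
    phi = rng.uniform(0.0, np.pi, size=np.shape(e1))
    e = (np.asarray(e1) + 1j * np.asarray(e2)) * np.exp(2j * phi)
    return e.real, e.imag
```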
For the cosmological analysis we choose broader multipole bins than for the CMB temperature anisotropies and galaxy clustering since the small sky fraction covered by SDSS Stripe 82 causes the cosmic shear power spectrum to be correlated across a significantly larger multipole range. The low and high $\ell$ limits are chosen to minimise demasking uncertainties and the impact of nonlinearities in the cosmic shear power spectrum.
The spherical harmonic power spectrum of the weak lensing shear E-mode is displayed in the bottom right panel of Fig.~\ref{fig:cls} and the B-mode power spectrum is shown in the Appendix (Fig.~\ref{fig:shearclsb}). We see that the E-mode power spectrum is low compared to the best-fit theory power spectrum. These results are similar to those derived by \citet{Lin:2012}, who found a low value of $\Omega_{\mathrm{m}}^{0.7}\sigma_{8}$ for Stripe 82 cosmic shear. As can be seen, we do not detect a significant B-mode signal.
When comparing the weak lensing shear E-mode power spectra computed from the maps in Galactic and equatorial coordinates, we find discrepancies. These are mainly caused by the correction for additive bias in the weak lensing shears. As described in Appendix \ref{sec:cosmicshearanalysis}, the PSF-corrected galaxy shears are affected by an additive bias. Following \citet{Lin:2012}, we correct for this bias by subtracting the mean shear of each CCD camera column from the galaxy shears. This correction is performed in equatorial coordinates and ensures that the mean shear vanishes in this coordinate system. When the galaxy positions and shears are rotated from equatorial to Galactic coordinates, this ceases to be true. Therefore the correction for additive bias is coordinate-dependent and it is this effect that causes the main discrepancies between the measured power spectra. Further descriptions of the impact of the additive shear bias correction can be found in Appendix \ref{subsec:eclgal}.
The discrepancies between the cosmic shear power spectra measured from maps in Galactic and equatorial coordinates are still within the experimental uncertainties. We therefore choose to correct for the additive shear bias in equatorial coordinates, apply the rotation to the corrected shears and compute the cosmic shear power spectrum from the maps in Galactic coordinates. We note however, that these differences will become significant for surveys measuring cosmic shear with higher precision. It is therefore important to develop coordinate-independent methods for shear bias correction when performing a joint analysis of different cosmological probes.
\subsection{\label{subsec:tempcrossdeltagcls} CMB and galaxy overdensity cross-correlation}
To compute the spherical harmonic cross-power spectrum between CMB temperature anisotropies and the galaxy overdensity, we use the maps and masks described in Sections \ref{subsec:cmbmap} and \ref{subsec:galaxymap}.
There are generally two ways to compute the cross-correlation between two maps with different angular masks. We can either compute the cross-correlation keeping the respective mask for each probe, or we can compute a combined mask, which is the union of all pixels masked in at least one of the maps. When testing both these cases on Gaussian simulations, we observed a better recovery of the input power spectra when applying the combined mask to both maps. We therefore mask both maps with the combined mask, which covers a fraction of sky $f_{\mathrm{sky}} \sim 0.26$.
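Taking the union of the masked regions is equivalent to keeping only pixels unmasked in both maps, which is a one-line boolean operation on "keep" maps (a trivial sketch with hypothetical names):

```python
import numpy as np

def combine_masks(keep_a, keep_b):
    """Combined mask: a pixel is kept only if it is unmasked in BOTH maps,
    i.e. the union of the individual masked regions is excluded.

    Returns the combined keep-map and the resulting sky fraction f_sky.
    """
    keep = keep_a & keep_b
    return keep, keep.mean()
```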
The spherical harmonic cross-power spectrum between CMB temperature anisotropies and galaxy overdensity is shown in the middle left panel of Fig.~\ref{fig:cls}. We see that the ISW power spectrum is very noisy, which makes its detection significance small. Since the power spectrum uncertainties for the considered angular scales are mainly due to cosmic variance, we suspect that the low signal-to-noise is mainly due to the fraction of sky covered by the SDSS CMASS1-4 galaxies. Despite its low significance, we include the ISW power spectrum in our analysis, because we expect it to help break degeneracies between cosmological parameters. We check that the ISW power spectrum does not depend on the choice of foreground-reduced CMB map. We find that the results using the maps provided by the $\tt NILC$, $\tt SEVEM$ and $\tt SMICA$ algorithms are virtually the same, as illustrated in Appendix \ref{sec:cltests} (Fig.~\ref{fig:compplanckmaps}).
\subsection{\label{subsec:tempcrossgammacls} CMB and weak lensing shear cross-correlation}
We estimate the spherical harmonic cross-power spectrum between CMB temperature anisotropies and the weak lensing shear E-mode field from the maps and masks described in Sections \ref{subsec:cmbmap} and \ref{subsec:ellipticitymap}. Both maps are masked with the combination of the masks, which covers a fraction of sky $f_{\mathrm{sky}} \sim 0.0065$.
The bottom left panel in Fig.~\ref{fig:cls} shows the spherical harmonic power spectrum between CMB temperature anisotropies and the weak lensing shear E-mode field. As can be seen, the noise level is too high to allow for a detection of the ISW correlation between CMB temperature anisotropies and weak lensing shear. This is to be expected due to the small sky fraction covered by the SDSS Stripe 82 galaxies and the intrinsically low signal-to-noise of this cross-correlation. Nevertheless, we include the power spectrum into the joint analysis to provide an upper limit on the ISW from weak lensing. The measured power spectrum is unaffected by the choice of CMB mapmaking method, as illustrated in Fig.~\ref{fig:compplanckmaps} in Appendix \ref{sec:cltests}.
\subsection{\label{subsec:deltagcrossgammacls} Galaxy overdensity and weak lensing shear cross-correlation}
We compute the spherical harmonic cross-power spectrum between the galaxy overdensity and weak lensing shear E-mode field from the maps and masks described in Sections \ref{subsec:galaxymap} and \ref{subsec:ellipticitymap}. We mask both maps with the combination of the two masks. The combined mask covers a sky fraction $f_{\mathrm{sky}} \sim 0.0053$.
The spherical harmonic cross-power spectrum between galaxy overdensity and weak lensing shear E-mode is shown in the bottom-middle panel of Fig.~\ref{fig:cls}. We see that the signal-to-noise of the power spectrum is low at the angular scales considered. This is probably due to the small sky fraction covered by Stripe 82 galaxies. We nevertheless include this cross-correlation in our analysis to serve as an upper limit. In Appendix \ref{sec:cltests} we show the comparison between the power spectra measured from the maps in Galactic and in equatorial coordinates. We find reasonable agreement between the two, even though the discrepancies are significantly enhanced compared to the effects on the galaxy overdensity power spectrum. As discussed in Section \ref{subsec:gammacls} this is probably due to the coordinate-dependence of the additive shear bias correction.
\section{\label{sec:covariance} Covariance matrix}
In order to obtain cosmological constraints from a joint analysis of CMB temperature anisotropies, galaxy clustering and weak lensing we need to estimate the joint covariance matrix of these cosmological probes. In this work we assume all the fields to be Gaussian random fields, i.e. we assume the covariance between all probes to be Gaussian and neglect any non-Gaussian contribution. This is appropriate for the CMB temperature field as well as the galaxy overdensity field at the scales considered but it is only an approximation for the weak lensing shear field \cite{Sato:2009}. For example, for a survey with source redshifts $z_{s} = 0.6$, \citet{Sato:2009} found that neglecting non-Gaussian contributions leads to an underestimation of the diagonal terms in the cosmic shear covariance matrix by a factor of approximately 5 at multipoles $\ell \sim 600$. In our case the discrepancy may be more pronounced since our sample contains a significant number of galaxies with $z_{s} < 0.6$. On the other hand we will be less sensitive to the non-Gaussian nature of the covariance matrix since the covariance for our galaxy sample is dominated by shape noise especially at the highest multipoles considered. We therefore decide to leave the introduction of non-Gaussian covariance matrices to future work.
In this work, we employ two different models for the joint Gaussian covariance matrix $C_{G}$: the first is a theoretical model and the second is based on simulations of correlated Gaussian realisations of the three cosmological probes. We use the theoretical covariance matrix to validate the covariance matrix obtained from the simulations.
\subsection{\label{sec:theorycov}Theoretical covariance estimate}
The covariance between cosmological spherical harmonic power spectra is composed of two parts: cosmic variance and noise. For spherical harmonic power spectra computed over the full sky, different $\ell$ modes are uncorrelated and the covariance matrix is diagonal. Partial sky coverage, i.e. $f_{\mathrm{sky}}<1$, couples different $\ell$ modes and thus leads to a non-diagonal covariance matrix. This covariance becomes approximately diagonal if it is binned into approximately uncorrelated bandpowers of width $\Delta \ell$ \cite{Cabre:2007}. \citet{Cabre:2007} found the empirical relation $\Delta \ell f_{\mathrm{sky}} \sim 2$ to be a good approximation. In this case the covariance matrix between binned power spectra $C_{\ell}^{ij}$ and $C_{\ell'}^{i'j'}$ can be approximated as \cite{Hu:2004, Cabre:2007, Eifler:2014}
\begin{equation}
\begin{aligned}
\mathrm{Cov}_{G}(C_{\ell}^{ij}, C_{\ell'}^{i'j'}) = \langle \Delta C_{\ell}^{ij} \Delta C_{\ell'}^{i'j'} \rangle &\simeq \\
\frac{\delta_{\ell \ell'}}{(2\ell+1)\Delta \ell f_{\mathrm{sky}}} \left [(C_{\ell}^{ii'} + N^{ii'})(C_{\ell}^{jj'} + N^{jj'}) \right. \\
+ \left. (C_{\ell}^{ij'} + N^{ij'})(C_{\ell}^{ji'} + N^{ji'})\right ],
\end{aligned}
\label{eq:theorycovmat}
\end{equation}
where $i,\,j,\,i',\,j'$ denote different cosmological probes; in our case $i,\,j,\,i',\,j' \in \{\mathrm{T}, \delta_{g}, \gamma\}$. The quantities $N^{ij}$ are the noise power spectra of the different probes, which vanish unless $i = j$.
Given a cosmological model and survey specifications such as fractional sky coverage and noise level, we can approximate $C_{G}$ using Eq.~\ref{eq:theorycovmat} for each block covariance matrix. We choose a hybrid approach: we adopt a cosmological model to calculate the signal power spectra whereas we approximate $N^{ij}$ with the measured noise power spectra used to remove the noise bias in the data as described in Section \ref{sec:cls}.
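A direct transcription of this block-diagonal approximation, with placeholder bandpowers, spectra and noise levels rather than the measured ones, might look as follows:

```python
import numpy as np

# Placeholder bandpower centres and survey parameters (not the values used here).
ell = np.array([100.0, 200.0, 300.0])
delta_ell, fsky = 50.0, 0.26

def spec(d, a, b):
    """Symmetric lookup of the (a, b) spectrum in a dict keyed by probe pairs."""
    return d[(a, b)] if (a, b) in d else d[(b, a)]

def gaussian_cov_diag(C, N, i, j, ip, jp):
    """Diagonal (l = l') of Cov_G(C_l^{ij}, C_l^{i'j'}) for binned spectra."""
    S = lambda a, b: spec(C, a, b) + spec(N, a, b)
    norm = (2.0 * ell + 1.0) * delta_ell * fsky
    return (S(i, ip) * S(j, jp) + S(i, jp) * S(j, ip)) / norm

# Toy signal spectra and noise (the noise vanishes for i != j, as in the text).
C = {("T", "T"): 1.0 / ell**2, ("g", "g"): 2.0 / ell**2, ("T", "g"): 0.5 / ell**2}
N = {("T", "T"): 1e-5 + 0.0 * ell, ("g", "g"): 1e-5 + 0.0 * ell, ("T", "g"): 0.0 * ell}

var_Tg = gaussian_cov_diag(C, N, "T", "g", "T", "g")  # bandpower variances of C_l^{Tg}
```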
\subsection{\label{sec:mockcov} Covariance estimate from Gaussian simulations}
The theoretical covariance matrix estimate described above is expected to only yield accurate results for uncorrelated binned power spectra, since in this approximation the covariance matrix is fully diagonal. For this reason we also estimate the covariance matrix in an alternative way that does not rely on this approximation: we estimate an empirical covariance matrix from the sample variance of Gaussian simulations of the three cosmological probes. To this end, we simulate correlated realisations of both the two spin-0 fields, CMB temperature and galaxy overdensity, as well as the spin-2 weak lensing shear field. We follow the approach outlined in \citet{Giannantonio:2008} for simulating correlated maps of spin-0 fields and we make use of the polarisation version of the HEALPix routine $\tt synfast$ to additionally simulate correlated maps of the spin-2 field. We estimate noise maps from the data and add these to the correlated signal maps. The details of the algorithm are outlined in Appendix \ref{sec:corrmaps}.
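The core of such a simulation for spin-0 fields can be sketched without HEALPix by drawing correlated harmonic coefficients with a per-multipole Cholesky factor; this illustrates the principle only and is not the actual pipeline, which uses $\tt synfast$ and also handles the spin-2 field.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlated_alms(cl_matrix, lmax):
    """Draw harmonic coefficients a_lm of n correlated Gaussian spin-0 fields.

    cl_matrix(l) returns the n x n power-spectrum matrix at multipole l.
    Output shape: (n, lmax + 1, lmax + 1), indexed [field, l, m] with m <= l.
    """
    n = cl_matrix(2).shape[0]
    alm = np.zeros((n, lmax + 1, lmax + 1), dtype=complex)
    for l in range(2, lmax + 1):
        L = np.linalg.cholesky(cl_matrix(l))       # correlates the fields
        alm[:, l, 0] = L @ rng.standard_normal(n)  # m = 0 mode is real
        z = (rng.standard_normal((n, l)) + 1j * rng.standard_normal((n, l))) / np.sqrt(2.0)
        alm[:, l, 1:l + 1] = L @ z                 # m > 0 modes are complex
    return alm

# Demo: two fields with equal auto-spectra and cross-correlation r = 0.8.
r, lmax = 0.8, 300
alm = correlated_alms(lambda l: 1e-3 * np.array([[1.0, r], [r, 1.0]]), lmax)

def cl_hat(a, b, l):
    """Full-sky pseudo-C_l estimator from the m >= 0 coefficients."""
    s = (a[l, 0] * np.conj(b[l, 0])).real
    s += 2.0 * (a[l, 1:l + 1] * np.conj(b[l, 1:l + 1])).real.sum()
    return s / (2 * l + 1)

# Recovered cross-correlation coefficient, averaged over multipoles.
rho = np.mean([cl_hat(alm[0], alm[1], l)
               / np.sqrt(cl_hat(alm[0], alm[0], l) * cl_hat(alm[1], alm[1], l))
               for l in range(2, lmax + 1)])
```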
In order to compute the power spectrum covariance matrix, we apply the masks used on the data to the simulated maps and calculate both the auto- and the cross-power spectra of all the probes using the same methodology and $\tt{PolSpice}$ settings as described in Section \ref{sec:cls}. We generate $N_{\mathrm{sim}}$ random realisations and estimate the covariance matrix as
\begin{equation}
\begin{aligned}
\mathrm{Cov}_{G}(C_{\ell}^{ij}, C_{\ell'}^{i'j'}) = \frac{1}{N_{\mathrm{sim}}-1} \sum_{k=1}^{N_{\mathrm{sim}}} \left[C_{k}^{ij}(\ell) - \bar{C}^{ij}(\ell)\right] \\
\times \left[C_{k}^{i'j'}(\ell') - \bar{C}^{i'j'}(\ell')\right],
\end{aligned}
\end{equation}
where $\bar{C}^{ij}(\ell)$ denotes the mean over all realisations.
The accuracy of the sample covariance estimate depends on the number of simulations. As described in \citet{Cabre:2007}, $N_{\mathrm{sim}} = 1000$ achieves better than $5 \%$ accuracy for estimating the covariance matrix for the ISW effect from Gaussian simulations. We therefore follow \citet{Cabre:2007} and compute the covariance matrix from the sample variance of $N_{\mathrm{sim}} = 1000$ Gaussian realisations of the 4 maps or 6 spherical harmonic power spectra respectively.
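The estimator above is the standard sample covariance; a minimal numpy version, applied to mock Gaussian draws with an invented input covariance, reads:

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock setup: N_sim Gaussian draws of a d-dimensional stacked bandpower vector
# with an invented input covariance (diagonal 1, off-diagonal 0.5).
n_sim, d = 1000, 6
true_cov = 0.5 * np.eye(d) + 0.5
samples = rng.multivariate_normal(np.zeros(d), true_cov, size=n_sim)

# Sample covariance, as in the equation above: subtract the mean over
# realisations, then average the outer products with an (N_sim - 1) norm.
diff = samples - samples.mean(axis=0)
cov_hat = diff.T @ diff / (n_sim - 1)
```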
The correlation matrix for the spherical harmonic power spectra derived from the Gaussian simulations for binning schemes and angular multipole ranges described in Section \ref{sec:cls} is shown in Fig.~\ref{fig:mockcovfull}. We see that the survey masks lead to significant correlations between bandpowers.
\begin{figure}
\includegraphics[scale=0.32]{fig5.pdf}
\caption{Correlation matrix for the spherical harmonic power spectra derived from the sample variance of the Gaussian simulations. The binning scheme and angular multipole range for each probe follow those outlined in Tab.~\ref{tab:clparams}.}
\label{fig:mockcovfull}
\end{figure}
\section{\label{sec:results} Cosmological constraints}
Each of the power spectra presented in Section \ref{sec:cls} carries cosmological information with probe-specific sensitivities and degeneracies. An integrated combination of these cosmological probes therefore helps break these parameter degeneracies. It further provides robust cosmological constraints since it is derived from a joint fit to the auto- as well as cross-correlations of three cosmological probes.
In order to derive cosmological constraints from a joint fit to the six spherical harmonic power spectra discussed in Section \ref{sec:cls}, we assume the joint likelihood to be Gaussian, i.e.
\begin{multline}
\mathscr{L}(D \vert \theta) = \frac{1}{[(2\pi)^{d}\det{C_{G}}]^{\sfrac{1}{2}}} \\
\times e^{-\frac{1}{2}(\mathbf{C}^{\mathrm{obs}}_{\ell}-\mathbf{C}^{\mathrm{theor}}_{\ell})^{\mathrm{T}}C^{-1}_{G}(\mathbf{C}^{\mathrm{obs}}_{\ell}-\mathbf{C}^{\mathrm{theor}}_{\ell})},
\label{eq:likelihood}
\end{multline}
where $C_{G}$ denotes the Gaussian covariance matrix. $\mathbf{C}^{\mathrm{theor}}_{\ell}$ denotes the theoretical prediction for the spherical harmonic power spectrum vector of dimension $d$ and $\mathbf{C}^{\mathrm{obs}}_{\ell}$ is the observed power spectrum vector, defined as
\begin{equation}
\mathbf{C}^{\mathrm{obs}}_{\ell} = \begin{pmatrix}
C^{\mathrm{TT}}_{\ell} & C_{\ell}^{\delta_{g}\mathrm{T}} & C_{\ell}^{\delta_{g} \delta_{g}} & C^{\gamma\mathrm{T}}_{\ell} & C^{\gamma \delta_{g}}_{\ell} & C^{\gamma \gamma}_{\ell}
\end{pmatrix}_{\mathrm{obs}}.
\label{eq:psvector}
\end{equation}
A Gaussian likelihood is a justified assumption for both the CMB temperature anisotropy and galaxy clustering power spectra due to the central limit theorem. Since the weak lensing shear power spectrum receives a significant contribution from non-linear structure formation, its likelihood will deviate from being purely Gaussian \cite{Hartlap:2009}. It has been shown, however, that a Gaussian likelihood is a sensible approximation, especially when CMB data is added to weak lensing \cite{Sato:2010}. In our first implementation we will thus assume both a joint Gaussian likelihood and Gaussian single probe likelihoods.
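Up to parameter-independent constants, evaluating this likelihood reduces to a chi-square plus a log-determinant; a minimal sketch (function name is ours):

```python
import numpy as np

def gauss_loglike(c_obs, c_theor, cov):
    """Log of the Gaussian likelihood for the stacked power-spectrum vector,
    with a fixed joint covariance matrix cov."""
    res = c_obs - c_theor
    _, logdet = np.linalg.slogdet(cov)
    chi2 = res @ np.linalg.solve(cov, res)   # avoids forming cov^{-1} explicitly
    return -0.5 * (chi2 + logdet + res.size * np.log(2.0 * np.pi))
```

In an MCMC only differences of this quantity matter, so the constant term could equally be dropped.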
We estimate the covariance matrix using both methods outlined in Section \ref{sec:covariance}. In both cases we compute the covariance for a $\Lambda$CDM cosmological model, which we keep fixed in the joint fit. Note that the covariance matrices depend on the cosmological model and should therefore vary in the fitting procedure \cite{Eifler:2009}. Following standard practice, (e.g. \cite{DES-Collaboration:2015}), we approximate the covariance matrix to be constant and compute it for a $\Lambda$CDM cosmological model with parameter values $\{h,\, \Omega_{\mathrm{m}}, \,\Omega_{\mathrm{b}}, \, n_{\mathrm{s}}, \,\sigma_{8}, \,\tau_{\mathrm{reion}}, \,T_{\mathrm{CMB}}\} = \{0.7, \,0.3, \,0.049, \,1.0, \,0.88, \,0.078, \,2.275 \,\mathrm{K}\}$, where $h$ is the dimensionless Hubble parameter, $\Omega_{\mathrm{m}}$ is the fractional matter density today, $\Omega_{\mathrm{b}}$ is the fractional baryon density today, $n_{\mathrm{s}}$ denotes the scalar spectral index, $\sigma_{8}$ is the r.m.s. of matter fluctuations in spheres of comoving radius $8 \,h^{-1}$ Mpc and $\tau_{\mathrm{reion}}$ denotes the optical depth to reionisation. We further set the linear, redshift-independent galaxy bias parameter to $b=2$. To obtain an unbiased estimate of the inverse of the covariance matrix derived from the Gaussian simulations, we apply the correction derived in \citet{Kaufman:1967}, \citet{Hartlap:2007} and \citet{Anderson:2003}, i.e. we multiply the inverse covariance matrix by $(N_{\mathrm{sim}}-d-2)/(N_{\mathrm{sim}}-1)$. The theoretical covariance matrix estimate does not suffer from this bias and is thus left unchanged.
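The Kaufman-Hartlap correction mentioned above is a simple rescaling of the inverse sample covariance; the toy matrix below is a placeholder.

```python
import numpy as np

def kaufman_hartlap(n_sim, d):
    """Debiasing factor for the inverse of a sample covariance matrix
    estimated from n_sim realisations of a d-dimensional data vector."""
    return (n_sim - d - 2.0) / (n_sim - 1.0)

# Toy 2 x 2 sample covariance; the debiased inverse is the rescaled inverse.
cov_hat = np.array([[2.0, 0.3], [0.3, 1.0]])
inv_unbiased = kaufman_hartlap(1000, 2) * np.linalg.inv(cov_hat)
```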
From the likelihood given in Eq.~\ref{eq:likelihood}, we derive constraints in the framework of a flat $\Lambda$CDM cosmological model, where our fiducial model includes one massive neutrino eigenstate of mass $0.06$ eV as in \cite{Planck-Collaboration:2015ae}. Our parameter set consists of the six $\Lambda$CDM parameters $\{h,\, \Omega_{\mathrm{m}}, \,\Omega_{\mathrm{b}}, \,n_{\mathrm{s}}, \,\sigma_{8}, \,\tau_{\mathrm{reion}}\}$. We further marginalise over two additional parameters: a redshift independent, linear galaxy bias parameter $b$ and a multiplicative bias parameter $m$ for the weak lensing shear. The multiplicative bias parametrises unaccounted calibration uncertainties affecting the weak lensing shear estimator $\hat{\boldsymbol{\gamma}}$ and is defined as \cite{Heymans:2006}
\begin{equation}
\hat{\boldsymbol{\gamma}} = (1+m)\boldsymbol{\gamma}.
\end{equation}
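Since the estimator scales linearly with $(1+m)$, model spectra involving one or two factors of the shear pick up one or two factors of $(1+m)$; a one-line sketch of this propagation (helper name is ours):

```python
def with_mult_bias(cl_gg, cl_gx, m):
    """Propagate a multiplicative shear bias m to model spectra: the shear
    auto-spectrum picks up (1+m)^2, shear cross-spectra a single (1+m)."""
    return (1.0 + m) ** 2 * cl_gg, (1.0 + m) * cl_gx
```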
We note that we do not include additional nuisance parameters such as additive weak lensing shear bias, stochastic and scale-dependent galaxy bias \cite{Tegmark:1998, Pen:1998, Dekel:1999}, photometric redshift uncertainties, intrinsic galaxy alignments (for reviews, see e.g. \cite{Troxel:2015, Joachimi:2015}) or parameters describing the effect of unresolved point sources on the CMB temperature anisotropy power spectrum \cite{Planck-Collaboration:2014af}. In this present work we restrict the analysis to angular scales where these effects are expected to be subdominant.
We sample the parameter space with a Monte Carlo Markov Chain (MCMC) using $\tt CosmoHammer$ \cite{Akeret:2012}. The parameters sampled are summarised in Table \ref{tab:params} along with their priors. We choose flat, uniform priors for all parameters except for $\tau_{\mathrm{reion}}$ and $m$. The optical depth to reionisation can only be constrained with CMB polarisation data. Since we do not include CMB polarisation in this analysis, we apply a Gaussian prior with $\mu = 0.089$ and $\sigma = 0.02$ on $\tau_{\mathrm{reion}}$. This corresponds to a WMAP9 \cite{Hinshaw:2013} prior with increased variance to accommodate the Planck 2015 results \cite{Planck-Collaboration:2015ae}. We further apply a Gaussian prior on the multiplicative bias $m$ with mean $\mu=0$ and $\sigma=0.1$. This is motivated by \citet{Hirata:2003}, who found the multiplicative bias for the linear PSF correction method to lie in the range $m \in [-0.08, 0.13]$ for the sample considered in this analysis.
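The prior structure described here, flat ranges plus Gaussian priors on $\tau_{\mathrm{reion}}$ and $m$, can be sketched as a log-prior function; the ranges are those of Tab.~\ref{tab:params}, and the function name is ours.

```python
import numpy as np

def log_prior(theta):
    """Log-prior (up to constants): flat ranges on the LambdaCDM parameters
    and b, Gaussian priors on tau_reion and the multiplicative bias m."""
    h, om, ob, ns, s8, tau, b, m = theta
    in_range = (0.2 < h < 1.2 and 0.1 < om < 0.7 and 0.01 < ob < 0.09
                and 0.1 < ns < 1.8 and 0.4 < s8 < 1.5 and 1.0 < b < 3.0)
    if not in_range:
        return -np.inf
    return -0.5 * ((tau - 0.089) / 0.02) ** 2 - 0.5 * (m / 0.1) ** 2
```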
\begin{table*}
\caption{Parameters varied in the MCMC with their respective priors and posterior means. The uncertainties denote the $68 \%$ c.l.} \label{tab:params}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{ccc}
Parameter & Prior & Posterior mean\\ \hline \Tstrut
$h$ & flat $\in [0.2, \,1.2]$ & $0.699 \pm 0.018$ \\
$\Omega_{\mathrm{m}}$ & flat $\in [0.1, \,0.7]$ & $0.278 \substack{+0.019 \\ -0.020}$ \\
$\Omega_{\mathrm{b}}$ & flat $\in [0.01, \,0.09]$ & $0.0455 \pm 0.0018$ \\
$n_{\mathrm{s}}$ & flat $\in [0.1, \,1.8]$ & $0.975 \substack{+0.019 \\ -0.018}$ \\
$\sigma_{8}$ & flat $\in [0.4, \,1.5]$ & $0.799 \pm 0.029$ \\ \\
$\tau_{\mathrm{reion}}$ & Gaussian with $\mu = 0.089$, $\sigma = 0.02$\footnote{This corresponds to a WMAP9 \cite{Hinshaw:2013} prior with increased variance to accommodate the Planck results.} & $0.0792 \pm 0.0196$ \\
$b$ & flat $\in [1., \,3.]$ & $2.13 \pm 0.06$ \\
$m$ & Gaussian with $\mu = 0.0$, $\sigma = 0.1$ & $-0.142 \substack{+0.080 \\ -0.081}$
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table*}
In our fiducial configuration presented below we use the covariance matrix derived from the Gaussian simulations as described in Section \ref{sec:mockcov}. We find that this choice does not influence our results since the constraints derived using the theoretical covariance are consistent. In order to further assess the impact of a cosmology-dependent covariance matrix, we perform the equivalent analysis using a covariance matrix computed with a cosmological model with $\sim 5 \%$ lower $\sigma_{8}$. We find that the derived parameter values change by at most $0.5\sigma$. The width of the contours is only marginally changed.
In addition to the joint analysis, we also derive parameter constraints from separate analyses of the three auto-power spectra $C^{\mathrm{TT}}_{\ell}, C_{\ell}^{\delta_{g} \delta_{g}}$ and $C^{\gamma \gamma}_{\ell}$. In all three cases we assume a Gaussian likelihood as in Eq.~\ref{eq:likelihood} and derive constraints on the base $\Lambda$CDM parameters $\{h,\, \Omega_{\mathrm{m}}, \,\Omega_{\mathrm{b}}, \,n_{\mathrm{s}}, \,\sigma_{8}\}$ as well as additional parameters constrained by each probe. These are $\tau_{\mathrm{reion}}$ for the CMB temperature anisotropies, $b$ for galaxy clustering and $m$ for the cosmic shear.
\begin{figure*}
\includegraphics[scale=0.6]{fig6.pdf}
\caption{Cosmological parameter constraints derived from the joint analysis, marginalised over $\tau_{\mathrm{reion}}$, $b$ and $m$ and from the single probes. The single probe constraints have been marginalised over the respective nuisance parameters i.e. $\tau_{\mathrm{reion}}$ for the CMB temperature anisotropies, $b$ for galaxy clustering and $m$ for the weak lensing shear. In each case the inner (outer) contour shows the $68 \%$ c.l. ($95 \%$ c.l.). For clarity the cosmic shear $68 \%$ c.l. are solid while the $95 \%$ c.l. are dashed. }
\label{fig:constraintssinglevsjoint}
\end{figure*}
\begin{figure*}
\includegraphics[scale=0.5]{fig7.pdf}
\caption{Comparison between the parameter constraints derived from the joint analysis, marginalised over $b$ and $m$ and the constraints from \citet{Planck-Collaboration:2015ae} using only CMB data (TT+lowP) or adding external data (TT,TE,EE+lowP+lensing+ext). The Planck constraints are marginalised over all nuisance parameters. In each case the inner (outer) contour shows the $68 \%$ c.l. ($95 \%$ c.l.).}
\label{fig:constraintscompplanck}
\end{figure*}
Fig.~\ref{fig:constraintssinglevsjoint} shows the constraints on the $\Lambda$CDM parameters $\{h,\, \Omega_{\mathrm{m}}, \,\Omega_{\mathrm{b}}, \,n_{\mathrm{s}}, \,\sigma_{8}\}$ derived from the joint analysis using the spherical harmonic power spectrum vector and likelihood defined in Equations \ref{eq:likelihood} and \ref{eq:psvector}. These have been marginalised over $\tau_{\mathrm{reion}}$, $b$ and $m$. Also shown are the constraints derived from separate analyses of the three auto-power spectra $C^{\mathrm{TT}}_{\ell}, C_{\ell}^{\delta_{g} \delta_{g}}$ and $C^{\gamma \gamma}_{\ell}$, each of them marginalised over the respective nuisance parameter. As expected, we find that the constraints derived from the CMB anisotropies are the strongest, followed by the galaxy clustering and cosmic shear constraints, which both constrain the full $\Lambda$CDM model rather weakly. The constraints from the CMB temperature anisotropies are broader and have central values which differ from those derived in \citet{Planck-Collaboration:2015ae}. The reason for these discrepancies is the limited angular multipole range $\ell \in [10, \,610]$ employed in the CMB temperature analysis. This causes the CMB posterior to become broader, asymmetric and results in a shift of the parameter means. We have verified that the Planck likelihood and our analysis give consistent results when the latter is restricted to a similar $\ell$-range. If on the other hand, we increase the high multipole limit to $\ell_{\mathrm{max}} = 1000$, we find significant differences between our analysis and the Planck likelihood. We therefore choose to be conservative and use $\ell_{\mathrm{max}} = 610$ throughout this work. Comparing the single probe constraints to one another we see that they agree reasonably well, the only slight discrepancy being the low value of both $\Omega_{\mathrm{m}}$ and $\sigma_{8}$ derived from the cosmic shear analysis. 
This is similar to the results derived in \citet{Lin:2012} even though the values for $\Omega_{\mathrm{m}}$ and $\sigma_{8}$ are even lower in our analysis. However, care must be taken since the amplitude of the cosmic shear auto-power spectrum appears to have a small dependence on the choice of the coordinate system as discussed in Appendix \ref{sec:cltests}.
The potential of the joint analysis emerges when the three auto-power spectra are combined together with their three cross-power spectra. Due to the complementarity of the different probes the constraints tighten and the allowed parameter space volume is significantly reduced. This is especially true in our case, since the constraints from CMB temperature anisotropies are broadened due to the restricted multipole range that we employed. Including more CMB data would significantly reduce the impact of adding additional cosmological probes. The numerical values of the best-fit parameters and their $68 \%$ confidence limits (c.l.) derived from the joint analysis are given in Tab.~\ref{tab:params}.
Fig.~\ref{fig:constraintscompplanck} compares the constraints derived from the joint analysis to the constraints derived by the Planck Collaboration \cite{Planck-Collaboration:2015ae}. We show two versions of the Planck constraints: the constraints derived from the combination of CMB temperature anisotropies with the Planck low-$\ell$ polarisation likelihood (TT+lowP) and the ones derived from a combination of the latter with the Planck polarisation power spectra, CMB lensing and external data sets (TT,TE,EE+lowP+lensing+BAO+JLA+$H_{0}$). We see that the joint analysis prefers slightly lower values of the parameters $\Omega_{\mathrm{m}}$ and $\Omega_{\mathrm{b}}$ and a higher Hubble parameter $h$, but these differences are not significant. Despite this fact we find sensible overall agreement between the constraints derived in this work with both versions of the Planck constraints. While the constraints we derived in this analysis are broadened by the restricted multipole range we used, the results already demonstrate the power of integrated probe combination: the complementarity of different cosmological probes and their cross-correlations allows us to obtain reasonable constraints.
The measured power spectra together with the theoretical predictions for the best-fitting cosmological model derived from the joint analysis are shown in Fig.~\ref{fig:cls}. The best-fit cosmology provides a rather good fit to all power spectra except $C^{\gamma \delta_{g}}_{\ell}$ and $C^{\gamma \gamma}_{\ell}$, whose measured values are generally lower than our best-fit model. This is mainly due to the assumed Gaussian prior on the multiplicative shear bias $m$, which does not allow for more negative values of $m$ as would be preferred by the data. If we relax the prior to a Gaussian with standard deviation $\sigma=0.2$, we find a best-fit value for the multiplicative bias parameter of $m = -0.276 \pm 0.108$. This results in an improved fit to both $C^{\gamma\delta_{g}}_{\ell}$ and $C^{\gamma \gamma}_{\ell}$, but is in tension with the values derived for the multiplicative bias by \citet{Hirata:2003}. We therefore find evidence for a slight tension between CMB temperature anisotropy data and weak gravitational lensing, as already seen by e.g. \cite{MacCrann:2015, Grandis:2016}.
\section{\label{sec:conclusions} Conclusions}
To further constrain our cosmological model and gain more information about the dark sector, it will be essential to combine the constraining power of different cosmological probes. This work presents a first implementation of an integrated approach to combine cosmological probes into a common framework at the map level. In our first implementation we combine CMB temperature anisotropies, galaxy clustering and weak lensing shear. We use CMB data from Planck 2015 \cite{Planck-Collaboration:2015af}, photometric galaxy data from the SDSS DR8 \cite{Aihara:2011} and weak lensing data from SDSS Stripe 82 \cite{Annis:2014}.
We take into account both the information contained in the separate maps as well as the information contained in the cross-correlation between the maps by measuring their spherical harmonic power spectra. This leads to a power spectrum matrix with associated covariance, which combines CMB temperature anisotropies, galaxy clustering, cosmic shear, galaxy-galaxy lensing and the ISW \cite{Sachs:1967} effect with galaxy and weak lensing shear tracers.
From the power spectrum matrix we derive constraints in the framework of a $\Lambda$CDM cosmological model assuming both a Gaussian covariance as well as a Gaussian likelihood. We find that the constraints derived from the combination of all probes are significantly tightened compared to the constraints derived from each of the three separate auto-power spectra. This is due to the complementary information carried by different cosmological probes. We further compare these constraints to existing ones derived by the Planck collaboration and find reasonable agreement, even though the joint analysis slightly prefers lower values of both $\Omega_{\mathrm{m}}$ and $\Omega_{\mathrm{b}}$ and a higher Hubble parameter $h$. For a joint analysis of three cosmological probes, the constraints derived are still relatively weak, which is mainly due to our conservative cuts in angular scales. Nevertheless this analysis already demonstrates the potential of integrated probe combination: the complementarity of different data sets, that alone yield rather weak constraints on the full $\Lambda$CDM parameter space, allows us to obtain robust constraints which are significantly tighter than those obtained from probes taken individually. In addition, our analysis reveals challenges intrinsic to probe combination. Examples are the need for foreground-correction at the map as opposed to the power spectrum level and the need for coordinate-independent bias corrections.
In this first implementation we have made simplifying assumptions. We assume a Gaussian covariance matrix for all cosmological probes considered. This is justified for the CMB temperature anisotropies and the galaxy overdensity at large scales. The galaxy shears on the other hand exhibit non-linearities already at large scales and their covariance therefore receives significant non-Gaussian contributions \cite{Sato:2009}. Furthermore, we do not take into account the cosmology-dependence of the covariance matrix \cite{Eifler:2009}. In addition we only include systematic uncertainties from a potential multiplicative bias in the weak lensing shear measurement and neglect effects from other sources. Finally, we also use the Limber approximation for the theoretical predictions. We leave these extensions to future work but we do not expect them to have a significant impact on our results since we restrict the analysis to scales where the above effects are minimised.
In order to fully exploit the wealth of cosmological information contained in upcoming surveys, it will be essential to investigate ways in which to combine these experiments. It will be thus interesting to extend the framework presented here to include additional cosmological probes, 3-dimensional tomographic information and tests of cosmological models beyond $\Lambda$CDM.
\begin{acknowledgments}
We thank Eric Hivon for his valuable help with $\tt PolSpice$. We also thank Chihway Chang for careful reading of the manuscript and helpful comments and Sebastian Seehars as well as Joel Akeret for helpful discussions. We further thank the referee for comments and suggestions that have improved this paper. This work was in part supported by the Swiss National Science Foundation (grant number 200021 143906).
Some of the results in this paper have been derived using the HEALPix (\citet{Gorski:2005}) package. Icons used in Fig.~\ref{fig:framework} made by Freepik from $\tt{www.flaticon.com}$. The colour palettes employed in this work are taken from $\tt{http://colorpalettes.net}$ and $\tt{http://flatuicolors.com}$. We further acknowledge the use of the colour map provided by \cite{Planck-Collaboration:2015af}. The contour plots have been created using $\tt{corner.py}$ \cite{ForemanMackey:2016}.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
\section{Introduction}
It is well known (see \cite{Mcd}) that the representation theory of the symmetric groups is nicely encoded by the space of symmetric functions (in countably many commuting variables). In fact the interplay between the character theory of the symmetric groups and symmetric functions has enriched both theories with very interesting combinatorics. The space of symmetric functions has several algebraic operations (in particular it is a Hopf algebra) and many interesting bases (Schur, power-sum, monomial, and homogeneous symmetric functions).
The algebraic operations and bases can be lifted to the characters of the symmetric groups, and as such are meaningful representation theoretic operations and bases. The character table of the symmetric group is known to be the transition matrix between the Schur basis and the power-sum basis. A natural factorization of this matrix is obtained by using a third basis (the monomial basis). The transition matrix between Schur functions and monomials is unipotent lower triangular and the transition matrix between monomials and power-sums is upper triangular.
In a recent workshop at AIM \cite{AIM}, we showed that the supercharacter theory of the group of unipotent upper triangular matrices over a finite field $\FF_q$ is related to the Hopf algebra $\mathrm{NCSym}(X)$ of symmetric functions in noncommutative variables \cite{BRRZ, BZ,RS}. For $q=2$, the algebraic operations of $\mathrm{NCSym}(X)$ can be lifted to the supercharacter theory and have a representation theoretic meaning.
This inspired us to seek a new basis of $\mathrm{NCSym}(X)$ that will allow a natural decomposition of the supercharacter table.
To this end, we recall in Section~\ref{sec:superchar} some of the results of \cite{AIM}, and then adapt it to a coarser supercharacter theory that allows us to have an isomorphism to $\mathrm{NCSym}(X)$ valid for all $q$. The supercharacter table is given by the transition matrix between the supercharacter basis and the superclass basis.
In Section~\ref{sec:qpowersums} we introduce a $q$-deformation of a new power-sum basis (these power-sums were first introduced in \cite{ABT}). For each $q$, this will give us our desired factorization of the supercharacter table. In subsequent sections, we explore some of the consequences and related combinatorics.
In particular we compute the determinant of the character table.
\section{Preliminaries}\label{sec:superchar}
\subsection{Supercharacters}
A \emph{supercharacter theory} of a finite group $G$ is a pair $(\mathcal{K},\mathcal{X})$ where $\mathcal{K}$ is a partition of $G$ such that
$$\mathbb{C}\text{-span}\bigg\{\sum_{g\in K}g\mid K\in \mathcal{K}\bigg\}$$
is a dimension $|\mathcal{K}|$ subalgebra of $Z(\mathbb{C} G)$ under usual group algebra multiplication and $\mathcal{X}$ is a partition of the irreducible characters of $G$ such that $|\mathcal{X}|=|\mathcal{K}|$ and
\begin{equation}\label{KisX}
\mathrm{SC}(G)=\left\{f:G\rightarrow \mathbb{C}\ \bigg|\ \begin{array}{@{}l@{}} f\text{ constant on}\\ \text{the parts of $\mathcal{K}$}\end{array}\right\}=\mathbb{C}\text{-span}\bigg\{ \sum_{\psi\in X} \psi(1)\psi \mid X\in \mathcal{X}\bigg\}
\end{equation}
We will refer to the parts $K\in \mathcal{K}$ as \emph{superclasses}; we fix a basis of $\mathrm{SC}(G)$ consisting of characters orthogonal with respect to the usual inner product on class functions, and refer to the elements of this basis as \emph{supercharacters}.
There are various natural supercharacter theories for the group
$$\mathrm{UT}_n(q)=\left\{\begin{array}{c}\text{$n\times n$ unipotent upper triangular }\\ \text{matrices over $\FF_q$}\end{array}\right\},$$
but for this paper, we are interested in the following theory. Let $u,v\in \mathrm{UT}_n(q)$ be equivalent if there exist $x,y\in \mathrm{UT}_n(q)$ and $t\in T_n(q)$ such that $u=xt(v-1)t^{-1}y+1$. Here $T_n(q)\subseteq \GL_n(\FF_q)$ denotes the set of diagonal matrices with non-zero entries on the diagonal. We will let $\mathcal{K}$ be the set of equivalence classes of this relation, giving half of our supercharacter theory.
It turns out that these superclasses are indexed by
$$\mathcal{S}_n=\{\text{set partitions of $\{1,2,\ldots, n\}$}\},$$
where a \emph{set partition} $\lambda$ of $\{1,2,\ldots, n\}$ is a subset $\lambda\subseteq \{i\larc{}j\mid 1\leq i<j\leq n\}$ such that
$$i\larc{}k\in \lambda\qquad \text{implies}\qquad i\larc{}j,j\larc{}k\notin \lambda\qquad \text{for $i<j<k$.}$$
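The arc-set encoding is convenient computationally: the defining condition says precisely that no two arcs of $\lambda$ share a left endpoint and no two arcs share a right endpoint. The following Python sketch (our own helper names, offered only as an illustration) enumerates all such arc sets by brute force; the counts recover the Bell numbers, as they must.

```python
from itertools import combinations

def arc_sets(n):
    """Set partitions of {1,...,n} encoded as arc sets: a set of pairs
    (i, j) with i < j is a valid arc set exactly when no two arcs share
    a left endpoint and no two arcs share a right endpoint."""
    pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    result = []
    for r in range(len(pairs) + 1):
        for arcs in combinations(pairs, r):
            if (len({i for i, _ in arcs}) == r
                    and len({j for _, j in arcs}) == r):
                result.append(frozenset(arcs))
    return result

# the number of set partitions of an n-set is the Bell number B_n
print([len(arc_sets(n)) for n in range(1, 6)])  # [1, 2, 5, 15, 52]
```

For $n=4$ this produces the $15$ set partitions indexing the rows and columns of the supercharacter table below.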
Instead of finding the corresponding partition $\mathcal{X}$ of the irreducible characters of $\mathrm{UT}_n(q)$ (which is uniquely determined by $\mathcal{K}$ via (\ref{KisX})), we will give our chosen set of supercharacters. Note that since the supercharacters form a basis for $\mathrm{SC}(\mathrm{UT}_n(q))$, we have that they are also indexed by $\mathcal{S}_n$. Given $\lambda,\mu\in \mathcal{S}_n$ with $u_\mu$ in the superclass corresponding to $\mu$, define $\chi^\lambda\in \mathrm{SC}(\mathrm{UT}_n(q))$ by
\begin{equation}\label{SupercharacterFormula}
\chi^\lambda(u_\mu)=\left\{\begin{array}{ll} \displaystyle\frac{(q-1)^{|\lambda|-|\lambda\cap\mu|}q^{\dim(\lambda)-|\lambda|} (-1)^{|\lambda\cap\mu|}}{q^{\mathrm{nst}^\lambda_\mu}} & \begin{array}{@{}l} \text{if $i<j<k$ with $i\larc{}k\in \lambda$}\\ \text{implies $i\larc{}j,j\larc{}k\notin \mu$,}\end{array}\\ 0 & \text{otherwise,}\end{array}\right.
\end{equation}
where
\begin{align*}
\dim(\lambda) &= \sum_{i\slarc{} j\in \lambda} j-i,\\
\mathrm{nst}^\lambda_\mu &= \#\{i< j<k < l\mid i\larc{}l\in \lambda, j\larc{}k\in \mu\}.
\end{align*}
These superclass functions are characters, and they form a basis for $\mathrm{SC}(\mathrm{UT}_n(q))$ in this case.
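The character formula (\ref{SupercharacterFormula}) is straightforward to evaluate mechanically. The sketch below (our own helper names, not from the literature) computes $\chi^\lambda(u_\mu)$ with exact rational arithmetic; its values agree with the $n=4$ supercharacter table in the example that follows.

```python
from fractions import Fraction

def dim_stat(lam):
    # dim(lambda) = sum over arcs of j - i
    return sum(j - i for (i, j) in lam)

def nst(lam, mu):
    # nst^lam_mu = #{i < j < k < l : (i, l) in lam, (j, k) in mu}
    return sum(1 for (i, l) in lam for (j, k) in mu if i < j and k < l)

def chi(lam, mu, q):
    """Value of chi^lam on the superclass u_mu, following the displayed
    character formula; q is treated as an exact rational number."""
    q = Fraction(q)
    # vanishing condition: an arc (i, k) of lam forbids arcs (i, j) and
    # (j, k) of mu for any i < j < k
    for (i, k) in lam:
        for (a, b) in mu:
            if (a == i and i < b < k) or (b == k and i < a < k):
                return Fraction(0)
    inter = len(lam & mu)
    return ((q - 1) ** (len(lam) - inter)
            * q ** (dim_stat(lam) - len(lam))
            * (-1) ** inter
            / q ** nst(lam, mu))

# with q = 3 (so t = q - 1 = 2): chi^{1arc4}(u_{2arc3}) = t*q = 6
print(chi(frozenset({(1, 4)}), frozenset({(2, 3)}), 3))
```

Here the arc $1\larc{}4$ nests over $2\larc{}3$, so $\mathrm{nst}^\lambda_\mu=1$, and the formula returns $(q-1)q^{3-1}/q = tq$, matching the table entry.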
\begin{remark}
The supercharacter theory defined in this paper is slightly coarser than the usual supercharacter theory used for $\mathrm{UT}_n(q)$ (for example, \cite{An95,DI}). In the finer theory, we discard the conjugation action of $T$ in our equivalence relation. However, these two supercharacter theories coincide when $q=2$.
\end{remark}
\begin{example}\label{SupercharacterTable}
For $n=4$, if $t=q-1$, then the supercharacter table is
$$
\begin{array}{c|ccccccccccccccc}
& \begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.25,.75) and (.75,.75) .. (1);
\end{tikzpicture} &
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} &
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} \\ \hline
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\end{tikzpicture}
& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.25,.75) and (.75,.75) .. (1);
\end{tikzpicture} &
t & -1 & t & t & -1 & -1 & t & -1 & t & t & t & -1 & t & t & t \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} & t & t & -1 & t & -1 & t & -1 & -1 & t & t & t & t & t & t & -1\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t & t & t & -1 & t & -1 & -1 & -1 & t & t & t & -1 & t & t & t\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
& t^2 & -t & -t & t^2 & 1 & -t & -t & 1 & t^2 & t^2 & -t & t^2 & t^2 & t^2 & -t\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^2 & -t & t^2 & -t & -t & 1 & -t & 1 & t^2 & t^2 & -t & -t & t^2 & t^2 & t^2 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^2 & t^2 & -t & -t & -t & -t & 1 & 1 & t^2 & t^2 & t^2 & -t & t^2 & t^2 & -t\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^3 & -t^2 & -t^2 & -t^2 & t & t & t & -1 & t^3 & t^3 & -t^2 & -t^2 & t^3 & t^3 & -t^2\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\end{tikzpicture}
& tq & 0 & 0 & tq & 0 & 0 & 0 & 0 & -q & tq & 0 & -q & -q & tq & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& tq & tq & 0 & 0 & 0 & 0 & 0 & 0 & tq & -q & -q & 0 & -q & tq & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& t^2q & -tq & 0 & 0 & 0 & 0 & 0 & 0 & t^2q & -tq & q & 0 & -tq & t^2q & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^2q & 0 & 0 & -tq & 0 & 0 & 0 & 0 & -tq & t^2q & 0 & q & -tq & t^2 q & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& t^2q^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -tq^2 & -tq^2 & 0 & 0 & q^2 & t^2q^2 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\end{tikzpicture}
& tq^2 & 0 & tq & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -q^2 & -q\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} & t^2 q^2 & 0 & -tq & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -tq^2 & q
\end{array}
$$
\end{example}
\subsection{Hopf algebra of supercharacters}
Let
$$\mathrm{SC}(q)=\bigoplus_{n\geq 0} \mathrm{SC}(\mathrm{UT}_n(q)),$$
where by convention we let
$$\mathrm{SC}(\mathrm{UT}_0(q))=\mathbb{C}\text{-span}\{\chi^{\emptyset_0}\},$$
where $\emptyset_0$ is the empty set partition of the set with 0 elements. Define a product on $\mathrm{SC}(q)$ by
$$\chi_m\cdot \chi_n= \mathrm{Inf}_{\mathrm{UT}_m(q)\times \mathrm{UT}_n(q)}^{\mathrm{UT}_{m+n}(q)}(\chi_m\times \chi_n) = (\chi_m\times \chi_n)\circ \pi,$$
where $\chi_m\in \mathrm{SC}(\mathrm{UT}_m(q))$, $\chi_n\in \mathrm{SC}(\mathrm{UT}_n(q))$, and $\mathrm{Inf}$ is the inflation functor coming from the quotient map
$$\pi\colon \mathrm{UT}_{m+n}(q)\longrightarrow \left[\begin{array}{c|c} \mathrm{UT}_m(q) & 0\\ \hline 0 & \mathrm{UT}_n(q)\end{array}\right] \cong \mathrm{UT}_m(q) \times \mathrm{UT}_n(q).$$
Define a coproduct on $\mathrm{SC}(q)$ by
$$\Delta(\chi_n)=\sum_{\{1,2,\ldots,n\}=J\sqcup K} \Res^{\mathrm{UT}_n(q)}_{\mathrm{UT}_J(q)}(\chi_n)\otimes \Res^{\mathrm{UT}_n(q)}_{\mathrm{UT}_K(q)}(\chi_n),$$
where $\mathrm{UT}_J(q)$ is the subgroup of $\mathrm{UT}_n(q)$ with nonzero entries above the diagonal only in rows and columns in $J$. We make use of the isomorphism $\mathrm{UT}_J(q)\cong \mathrm{UT}_{|J|}(q)$ in this definition.
This product and coproduct give rise to a graded Hopf algebra, and this algebra comes equipped with two distinguished bases:
\begin{align*}
\mathrm{SC}(q) & = \mathbb{C}\text{-span}\{\chi^\lambda\mid \lambda\in \mathcal{S}_n, n\in \mathbb{Z}_{\geq 0}\}\\
& = \mathbb{C}\text{-span}\{\kappa_\mu\mid \mu\in \mathcal{S}_n, n\in \mathbb{Z}_{\geq 0}\},
\end{align*}
where for $u\in \mathrm{UT}_n(q)$,
$$\kappa_\mu(u)=\left\{\begin{array}{ll} 1 & \text{if $u$ is in the superclass indexed by $\mu$,}\\ 0 & \text{otherwise.}\end{array}\right.$$
An American Institute of Mathematics workshop showed that we are already familiar with this Hopf algebra.
\begin{theorem}[\cite{AIM}]
The Hopf algebra $\mathrm{SC}(q)$ is isomorphic to the Hopf algebra of symmetric functions in non-commuting variables $\mathrm{NCSym}(X)$.
\end{theorem}
\begin{remark}
The paper \cite{AIM} actually only addresses the case when $q=2$ since that paper was using a finer supercharacter theory, but the proof for arbitrary $q$ in our current supercharacter theory follows by the same argument. In fact, if we work purely combinatorially and ignore the representation theory, then $\mathrm{SC}(q)$ makes sense for arbitrary $q$. In particular, we also get an interesting isomorphism in the case when $q=1$.
\end{remark}
Given an infinite set $X=\{X_1,X_2,\ldots\}$ of noncommuting variables, the algebra $\mathrm{NCSym}(X)$ has a distinguished basis of monomial symmetric functions given by
$$\{m_\mu=\sum_{(i_1,i_2,\ldots, i_n)\in O_\mu} X_{i_1}X_{i_2}\cdots X_{i_n}\mid \mu\in \mathcal{S}_n,n\in \mathbb{Z}_{\geq 0}\},$$
where
$$O_{\mu}=\{(i_1,\ldots,i_n)\in \mathbb{Z}_{\geq 1}^n\mid i_k=i_l\text{ if and only if $k$ and $l$ are in the same part of $\mu$}\},$$
and the \emph{parts} of $\mu$ are given by the transitive closure of the relation $i\sim j$ if $i\larc{}j\in \mu$ or $j\larc{}i\in \mu$.
We will be interested in a second natural basis of $\mathrm{NCSym}(X)$, which is a slight variation on what is usual in the literature \cite{BZ,RS}. Consider the power-sum symmetric functions,
$$p_\nu=\sum_{\mu\supseteq \nu} m_\mu.$$
The usual definition of $p_\nu$ uses the refinement order on set partitions rather than the subset relation in our definition. There are several consequences of the fact that we use a different order:
\begin{enumerate}
\item[(a)] The sums of monomial symmetric functions have fewer terms,
\item[(b)] If we consider the function $\mathrm{NCSym(X)}\rightarrow \mathrm{Sym}(X)$ induced by allowing the variables to commute, not all the $p_\nu$ get sent to the corresponding power-sum symmetric functions (as the usual ones do). However, if $\nu$ satisfies $i\larc{}j\in \nu$ implies $j-i=1$, then $p_\nu$ will be sent to the appropriate symmetric function. That is, in the usual construction the image of $p_\nu$ depends on the sequence of part sizes, and in ours each $p_\nu$ gets sent to something different.
\end{enumerate}
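Since every subset of a valid arc set is again a valid arc set, the index set $\{\mu\supseteq \nu\}$ in our definition of $p_\nu$ is easy to enumerate directly. A small Python sketch (our own helper names, purely illustrative):

```python
from itertools import combinations

def arc_sets(n):
    # set partitions of {1,...,n} as arc sets: arcs (i, j), i < j, with
    # pairwise distinct left endpoints and distinct right endpoints
    pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    return [frozenset(arcs)
            for r in range(len(pairs) + 1)
            for arcs in combinations(pairs, r)
            if len({i for i, _ in arcs}) == r == len({j for _, j in arcs})]

def p_support(nu, n):
    # the set partitions mu of {1,...,n} with mu >= nu; these index the
    # monomial terms of p_nu
    return [mu for mu in arc_sets(n) if nu <= mu]

# p_{1arc4} on 4 letters has exactly two monomial terms,
# m_{1arc4} and m_{1arc4, 2arc3}, illustrating point (a)
print(sorted(map(sorted, p_support(frozenset({(1, 4)}), 4))))
```

By contrast, in the refinement order every coarsening of $\nu$ would contribute, so the sums here genuinely have fewer terms.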
The isomorphism
$$\mathrm{ch}:\mathrm{SC}(q)\longrightarrow \mathrm{NCSym(X)}$$
given in \cite{AIM} sends $\kappa_\mu$ to $m_\mu$, but there is no representation theoretic interpretation for the power-sum symmetric functions. This paper finds a representation theoretic approach by tweaking the definition of the power-sum symmetric functions.
\section{Transition matrices} \label{sec:qpowersums}
This section defines a $q$-analogue of the power-sum symmetric functions, and studies its transition matrices to the superclass function basis and the supercharacter basis.
\subsection{$q$-deformations of power-sums in $\mathrm{SC}(q)$}
For $\nu\in \mathcal{S}_n$, define
$$\rho_\nu(q)=\sum_{\mu\supseteq \nu} \frac{1}{q^{\mathrm{nst}_{\mu-\nu}^\nu}} \kappa_\mu,$$
so that formally $\mathrm{ch}(\rho_\nu(1))=p_\nu$.
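The coefficients $q^{-\mathrm{nst}^\nu_{\mu-\nu}}$ can be generated mechanically. The following Python sketch (our own helper names, offered as an illustration) computes the expansion of $\rho_\nu(q)$ in the $\kappa$-basis; for $\nu=1\larc{}4$ it reproduces the corresponding row of the transition matrix in the example below.

```python
from itertools import combinations

def arc_sets(n):
    # set partitions of {1,...,n} as arc sets: arcs (i, j), i < j, with
    # pairwise distinct left endpoints and distinct right endpoints
    pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    return [frozenset(arcs)
            for r in range(len(pairs) + 1)
            for arcs in combinations(pairs, r)
            if len({i for i, _ in arcs}) == r == len({j for _, j in arcs})]

def nst(lam, mu):
    # nst^lam_mu = #{i < j < k < l : (i, l) in lam, (j, k) in mu}
    return sum(1 for (i, l) in lam for (j, k) in mu if i < j and k < l)

def rho_expansion(nu, n):
    """Expansion of rho_nu(q) in the kappa basis: pairs (mu, e) meaning
    that kappa_mu appears with coefficient q**(-e)."""
    return [(mu, nst(nu, mu - nu)) for mu in arc_sets(n) if nu <= mu]

# rho_{1arc4}(q) = kappa_{1arc4} + q^{-1} kappa_{1arc4, 2arc3}
for mu, e in rho_expansion(frozenset({(1, 4)}), 4):
    print(sorted(mu), e)
```

The exponent $1$ on the second term records the single nesting of $2\larc{}3$ under $1\larc{}4$, which is the source of the lone $q^{-1}$ entry in the matrix.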
\begin{example} The transition matrix from the $\rho$-basis to the $\kappa$-basis is given by
$$
\begin{array}{c|ccccccccccccccc}
& \begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.25,.75) and (.75,.75) .. (1);
\end{tikzpicture} &
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} &
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} \\ \hline
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\end{tikzpicture}
& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.25,.75) and (.75,.75) .. (1);
\end{tikzpicture} &
0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} & 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\end{tikzpicture}
& 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & q^{-1}\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array}
$$
\end{example}
The following proposition computes the inverse of this matrix.
\begin{proposition} \label{kappatorhoinverse}
$$\kappa_\mu=\sum_{\nu\supseteq \mu} \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^{\nu}}} \rho_\nu(q).$$
\end{proposition}
\begin{proof}
We wish to show that
\begin{align*}
\rho_\nu(q)&=\sum_{\mu\supseteq \nu}\frac{1}{q^{\mathrm{nst}_{\mu-\nu}^\nu}} \sum_{\lambda\supseteq \mu}\frac{(-1)^{|\lambda-\mu|}}{q^{\mathrm{nst}_{\lambda-\mu}^{\lambda}}} \rho_\lambda(q)\\
&=\sum_{\lambda\supseteq\mu\supseteq \nu}\frac{(-1)^{|\lambda-\mu|}}{q^{\mathrm{nst}_{\mu-\nu}^\nu}q^{\mathrm{nst}_{\lambda-\mu}^{\lambda}}} \rho_\lambda(q).
\end{align*}
In other words, for fixed $\nu\subseteq\lambda$,
$$\sum_{\nu\subseteq\mu\subseteq \lambda}\frac{(-1)^{|\lambda-\mu|}}{q^{\mathrm{nst}_{\mu-\nu}^\nu}q^{\mathrm{nst}_{\lambda-\mu}^{\lambda}}}=\left\{\begin{array}{ll} 1 & \text{if $\lambda=\nu$,}\\ 0 &\text{otherwise.}\end{array}\right.$$
If $\lambda=\nu$, then the sum has one term which is
$$\frac{1}{q^{\mathrm{nst}^\nu_\emptyset}q^{\mathrm{nst}^\nu_{\emptyset}}}=1.$$
Assume $\lambda\neq \nu$. To establish
$$ \sum_{\nu\subseteq\mu\subseteq \lambda}\frac{(-1)^{|\lambda-\mu|}}{q^{\mathrm{nst}_{\mu-\nu}^\nu}q^{\mathrm{nst}_{\lambda-\mu}^{\lambda}}} =0$$
we define an involution $\iota$ on the set $\{\mu\in\mathcal{S}_n\mid \nu\subseteq \mu\subseteq \lambda\}$ such that
\begin{enumerate}
\item[(a)] $(-1)^{|\lambda-\iota(\mu)|}=-(-1)^{|\lambda-\mu|}$,
\item[(b)] $q^{\mathrm{nst}_{\iota(\mu)-\nu}^\nu} q^{\mathrm{nst}_{\lambda-\iota(\mu)}^{\lambda}}=q^{\mathrm{nst}_{\mu-\nu}^\nu} q^{\mathrm{nst}_{\lambda-\mu}^{\lambda}}$.
\end{enumerate}
Let $\alpha=i\larc{}l\in \lambda-\nu$ be maximal with respect to the statistic $l-i$ (the particular choice is irrelevant).
Define the involution by
$$\iota(\mu)=\left\{\begin{array}{ll} \mu\cup\{\alpha\} & \text{if $\alpha\notin \mu$},\\ \mu-\{\alpha\} & \text{if $\alpha\in \mu$}.\end{array}\right.$$
Clearly (a) holds under this involution. For (b), suppose $\alpha\in \mu$. Then
\begin{align*}
q^{\mathrm{nst}_{\iota(\mu)-\nu}^{\nu}} & = q^{\mathrm{nst}_{\mu-\nu}^\nu+\#\{i'<i<l<l'\mid i'\slarc{} l'\in \nu\}}\\
&= q^{\mathrm{nst}_{\mu-\nu}^\nu+\#\{i'<i<l<l'\mid i'\slarc{} l'\in \lambda\}}
\end{align*}
by the maximality in the choice of $\alpha$. On the other hand,
$$
q^{\mathrm{nst}_{\lambda-\iota(\mu)}^{\lambda}}= q^{\mathrm{nst}_{\lambda-\mu}^{\lambda}-\#\{i'<i<l<l'\mid i'\slarc{} l'\in \lambda\}}.
$$
Condition (b) follows.
\end{proof}
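The proposition can also be checked numerically: for any fixed rational value of $q$, the two transition matrices must multiply to the identity. A Python sketch using exact arithmetic (our own helper names, purely as a sanity check):

```python
from fractions import Fraction
from itertools import combinations

def arc_sets(n):
    # set partitions of {1,...,n} as arc sets: arcs (i, j), i < j, with
    # pairwise distinct left endpoints and distinct right endpoints
    pairs = [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]
    return [frozenset(arcs)
            for r in range(len(pairs) + 1)
            for arcs in combinations(pairs, r)
            if len({i for i, _ in arcs}) == r == len({j for _, j in arcs})]

def nst(lam, mu):
    # nst^lam_mu = #{i < j < k < l : (i, l) in lam, (j, k) in mu}
    return sum(1 for (i, l) in lam for (j, k) in mu if i < j and k < l)

def check_inversion(n, q):
    """Multiply the rho-to-kappa matrix by the kappa-to-rho matrix of
    the proposition and test that the product is the identity."""
    q, P = Fraction(q), arc_sets(n)
    N = len(P)
    # rho_nu = sum_{mu >= nu} q^{-nst^nu_{mu-nu}} kappa_mu
    R = [[q ** -nst(nu, mu - nu) if nu <= mu else Fraction(0)
          for mu in P] for nu in P]
    # kappa_mu = sum_{nu >= mu} (-1)^{|nu-mu|} q^{-nst^nu_{nu-mu}} rho_nu
    K = [[(-1) ** len(nu - mu) * q ** -nst(nu, nu - mu) if mu <= nu
          else Fraction(0) for nu in P] for mu in P]
    return all(sum(K[a][c] * R[c][b] for c in range(N))
               == (1 if a == b else 0)
               for a in range(N) for b in range(N))

print(check_inversion(4, 3))  # True
```

For $n=4$ and $q=3$ this multiplies the $15\times 15$ matrices exactly and confirms the inversion, including the $q^{-1}$ entries coming from the nesting $2\larc{}3$ under $1\larc{}4$.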
Let $\chi^\lambda_\mu$ denote the value of the supercharacter $\chi^\lambda$ on the superclass indexed by $\mu$. By Proposition \ref{kappatorhoinverse},
\begin{align*}
\chi^\lambda & = \sum_{\mu} \chi^\lambda_\mu \kappa_\mu\\
& = \sum_{\mu} \chi^\lambda_\mu \sum_{\nu\supseteq \mu} \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}} \rho_\nu(q)\\
& =\sum_{\nu} \bigg(\sum_{\mu\subseteq \nu} \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}}\bigg) \rho_\nu(q).
\end{align*}
We are interested in the coefficients of the $\rho_\nu(q)$ in this expansion.
For $\lambda, \mu\in \mathcal{S}_n$, let
\begin{align*}
\mathrm{cflt}(\mu)&=\{j\larc{}k\mid \text{there exists $i\larc{}l\in \mu$ with $i=j<k<l$ or $i<j<k=l$}\}\\
\mathrm{snst}^\lambda_\mu&=\#\{i<j<k<l\mid i\larc{}l\in \lambda, j\larc{}k\in \mu-\mathrm{cflt}(\lambda)\}
\end{align*}
be the set of \emph{arcs conflicting with $\mu$} and the number of \emph{strictly nested pairs}, respectively.
\begin{theorem} \label{CoefficientFormula} For $\lambda,\nu\in \mathcal{S}_n$,
\begin{align*}
\sum_{\mu\subseteq \nu} & \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}}\\
&= \frac{(-1)^{|\nu|}q^{\dim(\lambda)}(q-1)^{|\lambda-\nu|}}{q^{|\lambda|+\mathrm{snst}_\nu^\lambda+\mathrm{nst}_\nu^\nu}}\bigg(\prod_{i\slarc{}j\in\nu\cap \lambda}\hspace{-.2cm} (q-1)q^{\mathrm{nst}_{i\slarc{}j}^\lambda}+q^{\mathrm{nst}_{i\slarc{}j}^\nu}\bigg) \bigg(\prod_{i\slarc{}j\in\nu- \lambda\atop i\slarc{}j\notin \mathrm{cflt}(\lambda)} \hspace{-.3cm} q^{\mathrm{nst}_{i\slarc{}j}^\lambda}-q^{\mathrm{nst}_{i\slarc{}j}^\nu}\bigg).
\end{align*}
\end{theorem}
\begin{proof} For the purpose of this proof, let $t=q-1$.
First note that by (\ref{SupercharacterFormula}) we have that
\begin{align*}
\chi^\lambda_\mu=q^{\dim(\lambda)-|\lambda|}\bigg(\prod_{i\slarc{}j\in \lambda-\mu}t\bigg)\bigg(\prod_{i\slarc{}j\in \lambda\cap\mu} \frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\notin \mathrm{cflt}(\lambda)} \frac{1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\in \mathrm{cflt}(\lambda)} 0\bigg),
\end{align*}
Plugging this into the coefficient formula, we get
\begin{align*}
&\sum_{\mu\subseteq \nu} \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}} \\
&=\frac{q^{\dim(\lambda)}}{q^{|\lambda|}} \sum_{\mu\subseteq \nu}\bigg(\prod_{i\slarc{}j\in \lambda-\mu}\hspace{-.15cm}t\bigg)\bigg(\prod_{i\slarc{}j\in \lambda\cap\mu} \frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\notin \mathrm{cflt}(\lambda)} \frac{1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\in \mathrm{cflt}(\lambda)}\hspace{-.2cm} 0\bigg) \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}}\\
&= \frac{q^{\dim(\lambda)}}{q^{|\lambda|}} \sum_{\mu\subseteq \nu}\bigg(\prod_{i\slarc{}j\in \lambda-\mu}\hspace{-.15cm} t\bigg)\bigg(\prod_{i\slarc{}j\in \lambda\cap\mu} \frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\notin \mathrm{cflt}(\lambda)} \frac{1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\in \mathrm{cflt}(\lambda)} \hspace{-.2cm} 0\bigg) \bigg(\prod_{i\slarc{}j\in \nu-\mu} \frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\nu}}\bigg)\\
&= \frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} \\
&\hspace*{.75cm}\cdot\sum_{\mu\subseteq \nu}\bigg(\prod_{i\slarc{}j\in (\lambda\cap\nu)-\mu}\hspace{-.15cm} t\bigg)\bigg(\prod_{i\slarc{}j\in \lambda\cap\mu} \frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\notin \mathrm{cflt}(\lambda)} \frac{1}{q^{\mathrm{nst}_{i\slarc{}j}^\lambda}}\bigg)\bigg(\prod_{i\slarc{}j\in \mu-\lambda\atop i\slarc{}j\in \mathrm{cflt}(\lambda)} \hspace{-.2cm} 0\bigg) \bigg(\prod_{i\slarc{}j\in \nu-\mu} \frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\nu}}\bigg)
\end{align*}
Thus,
$$\sum_{\mu\subseteq \nu} \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}} = \frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} \sum_{\mu\subseteq \nu} \prod_{i\slarc{}j\in \nu}\mathrm{val}_\mu^\lambda(i\larc{}j),$$
where
\begin{equation}\label{ValueFunction}
\mathrm{val}_\mu^\lambda(i\larc{}j)=\left\{\begin{array}{ll}
-q^{-\mathrm{nst}^\lambda_{i\slarc{}j}} & \text{if $i\larc{}j\in \lambda\cap\mu$,}\\
q^{-\mathrm{nst}_{i\slarc{} j}^\lambda} & \text{if $i\larc{}j\in \mu-\lambda$, $i\larc{}j\notin \mathrm{cflt}(\lambda)$,}\\
0 & \text{if $i\larc{}j\in \mu-\lambda$, $i\larc{}j\in \mathrm{cflt}(\lambda)$,}\\
-tq^{-\mathrm{nst}_{i\slarc{}j}^\nu} & \text{if $i\slarc{}j\in \lambda-\mu$,}\\
-q^{-\mathrm{nst}_{i\slarc{}j}^\nu} & \text{if $i\slarc{}j\notin \lambda\cup\mu$,}\\
\end{array}\right.
\end{equation}
Fix $k\larc{}l\in \nu$. Then
\begin{align*}
\sum_{\mu\subseteq \nu} & \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}} = \frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} \bigg(\sum_{\mu\subseteq \nu\atop k\slarc{}l\in \mu} \prod_{i\slarc{}j\in \nu}\mathrm{val}_\mu^\lambda(i\larc{}j)+\sum_{\mu\subseteq \nu\atop k\slarc{}l\notin \mu} \prod_{i\slarc{}j\in \nu}\mathrm{val}_\mu^\lambda(i\larc{}j)\bigg)\\
&=\frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} \bigg(\sum_{\mu\subseteq \nu\atop k\slarc{}l\in \mu} \mathrm{val}_\mu^\lambda(k\larc{}l) \prod_{i\slarc{}j\in \nu\atop i\slarc{}j\neq k\slarc{} l}\mathrm{val}_\mu^\lambda(i\larc{}j) + \sum_{\mu\subseteq \nu\atop k\slarc{}l\notin \mu} \mathrm{val}_\mu^\lambda(k\larc{}l) \prod_{i\slarc{}j\in \nu\atop i\slarc{}j\neq k\slarc{} l}\mathrm{val}_\mu^\lambda(i\larc{}j)\bigg)\\
&=\frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} \bigg(\sum_{\mu\subseteq \nu\atop k\slarc{}l\in \mu} \mathrm{val}_\nu^\lambda(k\larc{}l) \prod_{i\slarc{}j\in \nu\atop i\slarc{}j\neq k\slarc{} l}\mathrm{val}_\mu^\lambda(i\larc{}j) + \sum_{\mu\subseteq \nu\atop k\slarc{}l\notin \mu} \mathrm{val}_\emptyset^\lambda(k\larc{}l) \prod_{i\slarc{}j\in \nu\atop i\slarc{}j\neq k\slarc{} l}\mathrm{val}_\mu^\lambda(i\larc{}j)\bigg).
\end{align*}
Note that
\begin{align*}
\sum_{\mu\subseteq \nu\atop k\slarc{}l\in \mu} \prod_{i\slarc{}j\in \nu\atop i\slarc{}j\neq k\slarc{} l}\mathrm{val}_\mu^\lambda(i\larc{}j) &= \sum_{\mu\subseteq \nu\atop k\slarc{}l\notin \mu} \prod_{i\slarc{}j\in \nu\atop i\slarc{}j\neq k\slarc{} l}\mathrm{val}_\mu^\lambda(i\larc{}j)\\
&=\sum_{\mu\subseteq \nu-\{k\slarc{}l\}} \prod_{i\slarc{}j\in \nu-\{k\slarc{}l\}}\mathrm{val}_\mu^\lambda(i\larc{}j).
\end{align*}
Thus,
\begin{align*}
\sum_{\mu\subseteq \nu} \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}} & = \frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} ( \mathrm{val}_\nu^\lambda(k\larc{}l)+ \mathrm{val}_\emptyset^\lambda(k\larc{}l))\bigg(\sum_{\mu\subseteq \nu-\{k\slarc{}l\}} \prod_{i\slarc{}j\in \nu-\{k\slarc{}l\}}\mathrm{val}_\mu^\lambda(i\larc{}j)\bigg)\\
&=\frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} \prod_{i\slarc{}j\in \nu} ( \mathrm{val}_\nu^\lambda(i\larc{}j)+ \mathrm{val}_\emptyset^\lambda(i\larc{}j)),
\end{align*}
where the second equality is obtained by iterating (i.e., fix $k'\larc{}l'\in \nu-\{k\larc{}l\}$, etc.).
By separating into the cases given by (\ref{ValueFunction}), we obtain
\begin{align*}
\sum_{\mu\subseteq \nu} & \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}}\\
&= \frac{q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|}} \bigg(\prod_{i\slarc{}j\in \nu\cap \lambda} \frac{-1}{q^{\mathrm{nst}^\lambda_{i\slarc{}j}}}+\frac{-t}{q^{\mathrm{nst}_{i\slarc{}j}^\nu}}\bigg)\bigg(\prod_{i\slarc{}j\in \nu-\lambda\atop i\slarc{}j\notin \mathrm{cflt}(\lambda)} \frac{1}{q^{\mathrm{nst}^\lambda_{i\slarc{}j}}}+\frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\nu}}\bigg)\bigg(\prod_{i\slarc{}j\in \nu-\lambda\atop i\slarc{}j\in \mathrm{cflt}(\lambda)}\frac{-1}{q^{\mathrm{nst}_{i\slarc{}j}^\nu}}\bigg)\\
&=\frac{(-1)^{|\nu|}q^{\dim(\lambda)}t^{|\lambda-\nu|}}{q^{|\lambda|+\mathrm{snst}_\nu^\lambda+\mathrm{nst}_\nu^\nu}}
\bigg(\prod_{i\slarc{}j\in \nu\cap \lambda} tq^{\mathrm{nst}^\lambda_{i\slarc{}j}}+q^{\mathrm{nst}_{i\slarc{}j}^\nu}\bigg)\bigg(\prod_{i\slarc{}j\in \nu-\lambda\atop i\slarc{}j\notin \mathrm{cflt}(\lambda)}q^{\mathrm{nst}^\lambda_{i\slarc{}j}}-q^{\mathrm{nst}_{i\slarc{}j}^\nu}\bigg),
\end{align*}
as desired.
\end{proof}
\begin{example}
The transition matrix from the $\chi$-basis to the $\rho$-basis is therefore
$$
\begin{array}{c|cc@{}c@{}cccccccccccc}
& \begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.25,.75) and (.75,.75) .. (1);
\end{tikzpicture} &
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} &
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture} \\ \hline
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\end{tikzpicture}
& 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.25,.75) and (.75,.75) .. (1);
\end{tikzpicture}
& t & -q & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
& t & 0 & -q & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t & 0 & 0 & -q & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
& t^2 & -tq & -tq & 0 & q^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^2 & -tq & 0 & -tq & 0 & q^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^2 & 0 & -tq & -tq & 0 & 0 & q^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^3 & -t^2q & -t^2q & -t^2q & tq^2 & tq^2 & tq^2 & -q^3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\end{tikzpicture}
& tq & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -q^2 & 0 & 0 & 0 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& tq & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -q^2 & 0 & 0 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (0.25,.75) and (.75,.75) .. (1);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& t^2q & -tq^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -tq^2 & q^3 & 0 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\end{tikzpicture}
& t^2q & 0 & 0 & -tq^2 & 0 & 0 & 0 & 0 & -tq^2 & 0 & 0 & q^3 & 0 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.5,1.25) and (1.5,1.25) .. (2);
\draw (1) .. controls (1.5,1.25) and (2.5,1.25) .. (3);
\end{tikzpicture}
& t^2q^2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -tq^3 & -tq^3 & 0 & 0 & q^4 & 0 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\end{tikzpicture}
& tq^2 & 0 & -t^2q & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -q^3 & 0\\
\begin{tikzpicture}[scale=.2]
\foreach \x in {0,1,2,3}
\node (\x) at (\x,0) [inner sep =-2pt] {$\scriptstyle\bullet$};
\draw (0) .. controls (.75,1.75) and (2.25,1.75) .. (3);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\end{tikzpicture}
& t^2q^2 & 0 & t(q^3-q^2+q) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -tq^3 & q^3
\end{array}
$$
\end{example}
As we can see in the above example, the matrix appears to be lower-triangular. The following defines a total order on $\mathcal{S}_n$ that makes this clear, while respecting our poset of set partition inclusion.
For $\lambda ,\mu\in \mathcal{S}_n$, let $\mathrm{dimv}(\lambda)$ be the integer partition given by the multiset $\{l-i\mid i\larc{}l\in \lambda\}$ and $\mathrm{rnode}(\lambda)$ be the integer partition given by the set $\{l\mid i\larc{}l\in \lambda\}$. For example,
$$\mathrm{dimv}\left(
\begin{tikzpicture}[scale=.5,baseline=0.25cm]
\foreach \x in {0,...,5}
\node (\x) at (\x,0) [inner sep = -1pt] {$\bullet$};
\draw (0) .. controls (1,2) and (3,2) .. (4);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\draw (3) .. controls (3.5,1.25) and (4.5,1.25) .. (5);
\end{tikzpicture}\right)
=(4,2,1,1)\qquad \text{and}\qquad
\mathrm{rnode}\left(
\begin{tikzpicture}[scale=.5,baseline=0.25cm]
\foreach \x in {0,...,5}
\node (\x) at (\x,0) [inner sep = -1pt] {$\bullet$};
\draw (0) .. controls (1,2) and (3,2) .. (4);
\draw (1) .. controls (1.25,.75) and (1.75,.75) .. (2);
\draw (2) .. controls (2.25,.75) and (2.75,.75) .. (3);
\draw (3) .. controls (3.5,1.25) and (4.5,1.25) .. (5);
\end{tikzpicture}\right)=(6,5,4,3).$$
Note that $\dim(\lambda)=|\mathrm{dimv}(\lambda)|$, the size of the corresponding integer partition.
Define a total order $\leq$ on $\mathcal{S}_n$ by
\begin{enumerate}
\item[(a)] $\lambda\geq \mu$ if $\mathrm{dimv}(\lambda)\geq_{\text{lex}} \mathrm{dimv}(\mu)$, where $\geq_{\text{lex}}$ is the biggest part to smallest part lexicographic order on integer partitions, and
\item[(b)] If $\mathrm{dimv}(\lambda)= \mathrm{dimv}(\mu)$, then $\mathrm{rnode}(\lambda)\geq_{\text{lex}} \mathrm{rnode}(\mu)$.
\end{enumerate}
For example, for $n=3$ we have, in increasing order,
$$\begin{array}{|c|c|c|c|c|c|} \hline
\lambda &
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,2}
\node (\x) at (\x,0) [inner sep =-2pt] {$\bullet$};
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,2}
\node (\x) at (\x,0) [inner sep =-2pt] {$\bullet$};
\draw (0) .. controls (.25,.5) and (.75,.5) .. (1);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,2}
\node (\x) at (\x,0) [inner sep =-2pt] {$\bullet$};
\draw (1) .. controls (1.25,.5) and (1.75,.5) .. (2);
\end{tikzpicture}
&
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,2}
\node (\x) at (\x,0) [inner sep =-2pt] {$\bullet$};
\draw (0) .. controls (.25,.5) and (.75,.5) .. (1);
\draw (1) .. controls (1.25,.5) and (1.75,.5) .. (2);
\end{tikzpicture} &
\begin{tikzpicture}[scale=.5]
\foreach \x in {0,1,2}
\node (\x) at (\x,0) [inner sep =-2pt] {$\bullet$};
\draw (0) .. controls (.5,1) and (1.5,1) .. (2);
\end{tikzpicture}\\ \hline
\mathrm{dimv}(\lambda) & \emptyset & (1) & (1) & (1,1) & (2) \\ \hline
\mathrm{rnode}(\lambda) & \emptyset & (2) & (3) & (3,2) & (3) \\ \hline
\end{array}$$
This is also the order used in all of our $n=4$ transition matrices above.
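The displayed ordering can also be checked mechanically. In the sketch below (the arc-set encoding and block labels are ours, not the paper's), each set partition of $\{1,2,3\}$ is represented by its arc set; note that Python's tuple comparison is exactly the largest-part-first lexicographic comparison used in (a) and (b).

```python
# Sketch (encoding ours): set partitions of {1,2,3} as arc sets.
labels = {
    frozenset(): "1|2|3",
    frozenset({(1, 2)}): "12|3",
    frozenset({(2, 3)}): "1|23",
    frozenset({(1, 2), (2, 3)}): "123",
    frozenset({(1, 3)}): "13|2",
}

def dimv(arcs):
    # integer partition of arc lengths, largest part first
    return tuple(sorted((l - i for i, l in arcs), reverse=True))

def rnode(arcs):
    # integer partition of right endpoints, largest part first
    return tuple(sorted((l for _, l in arcs), reverse=True))

# sort by (dimv, rnode); tuple comparison is lexicographic
ordered = [labels[a] for a in sorted(labels, key=lambda a: (dimv(a), rnode(a)))]
# increasing order: 1|2|3, 12|3, 1|23, 123, 13|2 -- matching the table
```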
\begin{remark}
For our purposes any poset that respects the poset obtained by using (a) above is sufficient. We add (b) only to get a total order.
\end{remark}
\begin{corollary} Let $\nu,\lambda\in \mathcal{S}_n$. If $\nu>\lambda$, then
$$\sum_{\mu\subseteq\nu} \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}}=0.$$
\end{corollary}
\begin{proof}
Suppose there exists $j\larc{}k\in \nu-(\lambda\cup\mathrm{cflt}(\lambda))$ such that $\mathrm{nst}_{j\slarc{}k}^\lambda=0$. Pick such an arc maximal with respect to $k-j$. If $\mathrm{nst}_{j\slarc{}k}^\nu\neq 0$, then by the maximality of $j\larc{}k$ there would exist $i\slarc{}l\in \nu\cap\mathrm{cflt}(\lambda)$ such that $i<j<k<l$. However, if $i\larc{}l\in \mathrm{cflt}(\lambda)$, then there exists $i'\larc{}l'\in\lambda$ such that $i'=i<l<l'$ or $i'<i<l=l'$, contradicting $\mathrm{nst}_{j\slarc{}k}^\lambda=0$. Thus, $\mathrm{nst}_{j\slarc{}k}^\nu= 0$. We can conclude that
$$q^{\mathrm{nst}_{j\slarc{}k}^\lambda}-q^{\mathrm{nst}_{j\slarc{}k}^\nu}=0,$$
so our sum is zero if there exists $j\larc{}k\in \nu-(\lambda\cup\mathrm{cflt}(\lambda))$ such that $\mathrm{nst}_{j\slarc{}k}^\lambda=0$.
Suppose $\nu>\lambda$. Then there exists $j\larc{}k\in \nu-\lambda$ maximal with respect to $k-j$. If $\mathrm{nst}_{j\slarc{}k}^\lambda\neq 0$, then there would exist $i\larc{}l\in \lambda$ with $i<j<k<l$. However, the maximality of our choice now contradicts $\nu>\lambda$. Thus, $\mathrm{nst}_{j\slarc{}k}^\lambda= 0$, and our coefficient is 0, as desired.
\end{proof}
Furthermore, the nonzero coefficients are polynomials in $q$ with integer coefficients.
\begin{corollary} For $\nu,\lambda\in \mathcal{S}_n$,
$$\sum_{\mu\subseteq\nu} \chi^\lambda_\mu \frac{(-1)^{|\nu-\mu|}}{q^{\mathrm{nst}_{\nu-\mu}^\nu}}\in \mathbb{Z}[q].$$
\end{corollary}
\begin{proof}
By Theorem \ref{CoefficientFormula}, it suffices to show that
$$\frac{q^{\dim(\lambda)}}{q^{|\lambda|+\mathrm{snst}_\nu^\lambda+\mathrm{nst}_\nu^\nu}}\in \mathbb{Z}[q].$$
Note that any arc $i\larc{}l$ can have at most $\lfloor\frac{l-i-1}{2}\rfloor$ arcs nested in it, so
$$\mathrm{snst}^{\lambda}_\nu\leq \sum_{i\slarc{}l\in \lambda} \lfloor\frac{l-i-1}{2}\rfloor\leq \frac{\dim(\lambda)-|\lambda|}{2} \qquad \text{and}\qquad \mathrm{nst}^{\nu}_\nu\leq \sum_{i\slarc{}l\in \nu} \lfloor\frac{l-i-1}{2}\rfloor\leq \frac{\dim(\nu)-|\nu|}{2}. $$
However, the coefficient is zero if $\nu>\lambda$, so
$$\mathrm{snst}^{\lambda}_\nu + \mathrm{nst}^{\nu}_\nu + |\lambda|\leq\frac{\dim(\lambda)-|\lambda|}{2} + \frac{\dim(\nu)-|\nu|}{2} + |\lambda| \leq \dim(\lambda),$$
as desired.
\end{proof}
There are many specializations of Theorem \ref{CoefficientFormula}. For example, as entries of a $|\mathcal{S}_n|\times |\mathcal{S}_n|$ matrix, we could consider the diagonal entries, as in the following corollary.
\begin{corollary} For $\lambda\in \mathcal{S}_n$,
$$\sum_{\mu\subseteq \lambda} \chi^\lambda_\mu \frac{(-1)^{|\lambda-\mu|}}{q^{ \mathrm{nst}_{\lambda-\mu}^\lambda}}=(-1)^{|\lambda|}q^{\dim(\lambda)-\mathrm{nst}_\lambda^\lambda}.$$
\end{corollary}
\begin{proof}
This follows directly from Theorem \ref{CoefficientFormula} and the observation that
$$\prod_{i\slarc{}j\in\lambda\cap \nu} (q-1)q^{\mathrm{nst}_{i\slarc{}j}^\lambda}+q^{\mathrm{nst}_{i\slarc{}j}^\nu}=\prod_{i\slarc{}j\in\lambda} (q-1)q^{\mathrm{nst}_{i\slarc{}j}^\lambda}+q^{\mathrm{nst}_{i\slarc{}j}^\lambda}=q^{\mathrm{nst}_\lambda^\lambda+|\lambda|}.\qedhere$$
\end{proof}
\section{Consequences}
One of the most immediate consequences of Theorem \ref{CoefficientFormula} is that the $\rho$ basis gives an $LU$-decomposition of the supercharacter table of $\mathrm{UT}_n(q)$ (that is, a factorization into a product of a lower-triangular matrix and an upper-triangular matrix).
\begin{corollary}
The supercharacter table $C$ of $\mathrm{UT}_n(q)$ has a factorization
$$C=AB$$
where $A$ is a lower-triangular matrix with entries in $\mathbb{Z}[q]$ and $B$ is an upper-triangular matrix with entries in $\mathbb{Z}[q^{-1}]$.
\end{corollary}
We expect that interesting applications will come from such a result. For now, we have a combinatorial formula for the determinant of the supercharacter table.
\begin{corollary}
The supercharacter table $C$ of $\mathrm{UT}_n(q)$ has a determinant
$$\det(C)=(-1)^{\sum_{\lambda\in \mathcal{S}_n} |\lambda|} q^{\sum_{\lambda\in \mathcal{S}_n} \dim(\lambda) -\mathrm{nst}_\lambda^\lambda}.$$
\end{corollary}
It is somewhat of a surprise that the sequences (which we have added to Sloane)
\begin{align*}
\dim(n)&=\sum_{\lambda\in \mathcal{S}_n} \dim(\lambda)\qquad \qquad \text{[Sloane A200580]}\\
\mathrm{arcs}(n) & = \sum_{\lambda\in \mathcal{S}_n} |\lambda| \qquad \qquad \qquad \text{[Sloane A200660]}\\
\mathrm{nst}(n) & = \sum_{\lambda\in \mathcal{S}_n} \mathrm{nst}^\lambda_\lambda \qquad \qquad \quad\, \text{[Sloane A200673]}
\end{align*}
did not appear to be in the literature (or at least not in the Sloane integer sequences database). However, the first two at least do appear to be related to the recursive two-variable array (Sloane A011971), known as Aitken's array, given by
$$b[n,k]=\left\{\begin{array}{ll} \#\{\lambda\in \mathcal{S}_n\mid k\larc{}n\in \lambda\} & \text{if $1\leq k<n$,}\\ \#\{\lambda\in \mathcal{S}_n\mid j\larc{}n\notin \lambda, 1\leq j<n\}, & \text{if $k=n$.}\end{array}\right. $$
This sequence satisfies the recursion
\begin{align*}
b[1,1] & =1\\
b[n,1] & = b[n-1,n-1]\\
b[n,k] & =b[n,k-1]+b[n-1,k-1],
\end{align*}
and looks like
$$\begin{array}{c@{}c@{}c@{}c@{}c@{}c@{}c@{}c@{}c}
&& & & 1 & & && \\
& & & 1& & 2 & &&\\
&& 2 & & 3 & & 5 &&\\
& 5 && 7 && 10 && 15& \\
15 && 20 && 27&& 37 && 52
\end{array}$$
Note that the Bell numbers appear on the boundary of this triangle.
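The recursion above is easy to run; the following sketch (Python assumed, not part of the paper) reproduces the displayed triangle row by row.

```python
def aitken(rows):
    # b[1,1] = 1; b[n,1] = b[n-1,n-1]; b[n,k] = b[n,k-1] + b[n-1,k-1]
    tri = [[1]]
    for n in range(2, rows + 1):
        row = [tri[-1][-1]]                    # b[n,1] = b[n-1,n-1]
        for k in range(2, n + 1):
            row.append(row[-1] + tri[-1][k - 2])
        tri.append(row)
    return tri

# rows 1-5 match the triangle above; the Bell numbers 1,2,5,15,52
# appear on the boundary
```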
\begin{proposition}
For $n\in \mathbb{Z}_{\geq 1},$
\begin{align*}
\mathrm{arcs}(n) &= \sum_{k=1}^{n-1} k \cdot b[n,k]\\
\dim(n) &= \sum_{k=1}^{n-1} k (n-k) \cdot b[n,k].
\end{align*}
\end{proposition}
\begin{proof}
Consider first
\begin{equation*}
\mathrm{arcs}(n) = \sum_{\lambda\in \mathcal{S}_n} |\lambda| =\sum_{1\leq i<j\leq n} \#\{\lambda\in \mathcal{S}_n\mid i\larc{}j\in \lambda\}.
\end{equation*}
However,
$$\#\{\lambda\in \mathcal{S}_n\mid i\larc{}j\in \lambda\} = b[n,n-j+i],$$
so
$$\mathrm{arcs}(n) =\sum_{1\leq i<j\leq n} b[n,n-j+i]=\sum_{k=1}^{n-1} k b[n,k].$$
The argument for $\dim(n)$ is similar, but as we enumerate the arcs $i\larc{}j$, we weight each arc by the statistic $j-i$.
\end{proof}
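The proposition can be confirmed by brute force for small $n$. In the sketch below (ours, not from the paper), $b[n,k]$ is computed directly from its defining counts rather than from the recursion.

```python
def set_partitions(n):
    # enumerate set partitions of {1,...,n} as lists of increasing blocks
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [n]] + p[i + 1:]
        yield p + [[n]]

def arcs(p):
    # an arc i--l joins consecutive elements of a block
    return [(blk[i], blk[i + 1]) for blk in p for i in range(len(blk) - 1)]

n = 4
arc_sets = [arcs(p) for p in set_partitions(n)]

def b(k):
    # number of set partitions of [n] containing the arc k--n
    return sum(1 for a in arc_sets if (k, n) in a)

total_arcs = sum(len(a) for a in arc_sets)              # arcs(4)
total_dim = sum(l - i for a in arc_sets for i, l in a)  # dim(4)
assert total_arcs == sum(k * b(k) for k in range(1, n))
assert total_dim == sum(k * (n - k) * b(k) for k in range(1, n))
```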
To understand the sequence $\mathrm{nst}(n)$ in a similar way, we need to define a slight variation of the sequence $b[n,k]$ given by
$$b[n,k,j]=\left\{\begin{array}{ll}
\#\{\lambda\in \mathcal{S}_n\mid j\larc{}n,k\larc{}n-1\in \lambda\} & \text{if $j<k<n-1$,}\\
\#\{\lambda\in \mathcal{S}_n\mid j\larc{}n\in \lambda, i\larc{}n-1\notin \lambda, 1\leq i <n-1\} & \text{if $j<k=n-1$.}
\end{array}\right.$$
These numbers also satisfy a recursion given by
\begin{align*}
b[3,2,1] & = 1\\
b[n,2,1] &= b[n-1,n-2,1]\\
b[n,j+1,j]&=b[n,j+1,j-1]+b[n-1,j,j-1]\\
b[n,k,j] & = b[n,k-1,j]+b[n-1,k-1,j].
\end{align*}
\begin{proposition} For $n\in \mathbb{Z}_{\geq 1},$
$$\mathrm{nst}(n)=\sum_{j=1}^{n-3}\sum_{k=j+1}^{n-2} j (k-j) b[n,k,j].$$
\end{proposition}
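As a quick sanity check (ours; we take $\mathrm{nst}_{j\slarc{}k}^\lambda=\#\{i\larc{}l\in\lambda\mid i<j<k<l\}$, which matches the usage in the proofs above), the formula can be verified by brute force for $n=4$, computing $b[n,k,j]$ from its defining counts.

```python
def set_partitions(n):
    # enumerate set partitions of {1,...,n} as lists of increasing blocks
    if n == 0:
        yield []
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            yield p[:i] + [p[i] + [n]] + p[i + 1:]
        yield p + [[n]]

n = 4
arc_sets = [[(blk[i], blk[i + 1]) for blk in p for i in range(len(blk) - 1)]
            for p in set_partitions(n)]

# nst(n): total number of nested pairs of arcs over all set partitions
nst_total = sum(1 for a in arc_sets for (i, l) in a
                for (j, k) in a if i < j < k < l)

def b3(k, j):
    # b[n,k,j] from its defining counts
    if k < n - 1:
        return sum(1 for a in arc_sets if (j, n) in a and (k, n - 1) in a)
    return sum(1 for a in arc_sets
               if (j, n) in a and all(r != n - 1 for (_, r) in a))

rhs = sum(j * (k - j) * b3(k, j)
          for j in range(1, n - 2) for k in range(j + 1, n - 1))
assert nst_total == rhs  # for n = 4 only 14|23 contributes a nesting
```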
\section*{I. Introduction}
In the paper \cite{zhd97a} we have suggested a generalization of the
Turbiner-Shifman approach \cite{tur88}--\cite{tus89} to the construction
of quasi-exactly solvable (QES) models on line for the case of matrix
Hamiltonians. We remind that originally their method was applied to
scalar one-dimensional stationary Schr\"odinger equations. Later on it
was extended to the case of multi-dimensional scalar stationary
Schr\"odinger equations \cite{tus89}--\cite{gko94b} (see, also
\cite{ush93}).
A systematic description of our approach can be found in the paper
\cite{zhd97b}. The procedure of constructing a QES matrix (scalar) model
is based on the concept of a Lie-algebraic Hamiltonian. We call a
second-order operator in one variable Lie-algebraic if the following
requirements are met:
\begin{itemize}
\item{The Hamiltonian is a quadratic form with constant coefficients of
first-order operators $Q_1, Q_2, \ldots, Q_n$ forming a Lie algebra
$g$;}
\item{The Lie algebra $g$ has a finite-dimensional invariant subspace
$\cal{I}$ of the whole representation space.}
\end{itemize}
Now if a given Hamiltonian $H[x]$ is Lie-algebraic, then after being
restricted to the space $\cal I$ it becomes a matrix operator $\cal H$
whose eigenvalues and eigenvectors are computed in a purely algebraic
way. This means that the Hamiltonian $H[x]$ is quasi-exactly solvable (for
further details on scalar QES models see \cite{ush93}).
It should be noted that there exist alternative approaches to
constructing matrix QES models \cite{tur92}--\cite{fgr97}. The principal
idea of these is fixing the form of basis elements of the invariant
space $\cal I$. They are chosen to be polynomials in $x$. This
assumption leads to a challenging problem of classification of
superalgebras by matrix-differential operators in one variable
\cite{fgr97}.
We impose no {\em a priori}\/ restrictions on the form of basis elements
of the space $\cal I$. What is fixed is the class to which the basis
elements of the Lie algebra $g$ should belong. Following
\cite{zhd97a,zhd97b} we choose this class $\cal L$ as the set of matrix
differential operators of the form
\begin{equation} \label{0-1}
{\cal L} = \left\{ Q:\quad Q=a(x) \partial_x + A(x)\right\}.
\end{equation}
Here $a(x)$ is a smooth real-valued function and $A(x)$ is an $N\times
N$ matrix whose entries are smooth complex-valued functions of $x$.
Hereafter we denote $d/dx$ as $\partial_x$.
Evidently, $\cal L$ can be treated as an infinite-dimensional Lie
algebra with a standard commutator as a Lie bracket. Given a subalgebra
$\langle Q_1, Q_2, \ldots, Q_n \rangle $ of the algebra $\cal L$, that
has a finite-dimensional invariant space, we can easily construct a QES
matrix model. To this end we compose a bilinear combination of the
operators $Q_1, Q_2, \ldots, Q_n$ and of the unit $N\times N$ matrix $I$
with constant complex coefficients $\alpha_{jk}$ and get
\begin{equation}
\label{0-2}
H[x]=\sum_{j,k=1}^n\alpha_{jk}Q_jQ_k+\sum_{j=1}^n\alpha_{j}Q_j+\alpha_{0}I.
\end{equation}
So there arises a natural problem of classification of subalgebras of
the algebra $\cal L$ within its inner automorphism group. The problem of
classification of inequivalent realizations of Lie algebras on line and
on plane has been solved in a full generality by Lie itself
\cite{lie24,lie27} (see, also \cite{gko92}). However, the classification
problem for the case when $A(x)\not = f(x)I$ with a scalar function
$f(x)$ remains open. In the paper \cite{zhd97b} we have classified
realizations of the Lie algebras of dimension up to three by the
operators belonging to $\cal L$ with an arbitrary $N$. Next, fixing
$N=2$ we have studied which of them give rise to QES matrix Hamiltonians
$H[x]$. It occurs that the only three-dimensional algebra that meets
this requirement is the algebra $sl(2)$ (which is fairly easy to predict
taking into account the scalar case!). This yields the two families of
$2\times 2$ QES models, one of them under proper restrictions giving
rise to the well-known family of scalar QES Hamiltonians (for more
details, see \cite{zhd97b}).
As is well-known a physically meaningful QES matrix Schr\"odinger
operator has to be Hermitian. This requirement imposes restrictions on
the choice of QES models which were beyond the scope of our
previous papers \cite{zhd97a,zhd97b}. The principal aim of the present
paper is to formulate and implement an efficient algebraic procedure for
constructing QES Hermitian matrix Schr\"odinger operators
\begin{equation}
\label{0-3}
\hat H[x] = \partial_x^2 + V(x).
\end{equation}
This requires a slight modification of the algebraic procedure used in
\cite{zhd97b}. We consider as an algebra $g$ the direct sum of two
$sl(2)$ algebras which is equivalent to the algebra $o(2,2)$. The
necessary algebraic structures are introduced in Section 2. The next
Section is devoted to constructing in a regular way Hermitian QES matrix
Schr\"odinger operators on line. We give the list of thus obtained QES
models in Section 4. The fifth Section contains a number of examples of
Hermitian QES Schr\"odinger operators that have square integrable
eigenfunctions.
\section*{II. Extension of the algebra $sl(2)$}
Following \cite{zhd97a,zhd97b} we consider the representation of the
algebra $sl(2)$
\begin{equation} \label{11}
\begin{array}{rcl}
sl(2)&=&\langle Q_-,\ Q_0,\ Q_+ \rangle \\[2mm]
&=&\langle \partial_x,\ x\partial_x -{{\textstyle m-1} \over {\textstyle 2}}+S_0,\
x^2\partial_x -(m-1)x+2S_0x+S_+\rangle,
\end{array}
\end{equation}
where $S_0=\sigma_3/ 2$,\ $S_+=(i\sigma_2+\sigma_1)/2$, $\sigma_k$ are
the $2\times 2$ Pauli matrices
$$
{\sigma_1}=\left(\begin{array}{cc}0 & 1\\ 1 &
0\end{array}\right),\
{\sigma_2}=\left(\begin{array}{cc}0 & -i\\ i &
0\end{array}\right),\
{\sigma_3}=\left(\begin{array}{cc}1 & 0\\ 0 &
-1\end{array}\right)
$$
and $m\geq 2$ is an arbitrary natural number. This representation gives
rise to a family of QES models and furthermore the algebra (\ref{11})
has the following finite-dimensional invariant space
\begin{equation} \label{12}
\begin{array}{l} {\cal I}_{sl(2)} ={\cal I}_1\bigoplus{\cal
I}_2 = \langle \vec e_1,x\vec e_1,\ldots,x^{m-2}\vec
e_1\rangle\bigoplus\\[2mm]
\langle m\vec e_2,\ldots,mx^j\vec e_2-jx^{j-1}\vec e_1,\ldots,
mx^m\vec e_2-mx^{m-1}\vec e_1\rangle.
\end{array}
\end{equation}
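The invariance of the space (\ref{12}) under the operators (\ref{11}) can be verified symbolically for a fixed $m$. The sketch below (ours, assuming the sympy library; the polynomial-coordinate encoding is our own) applies each operator to the basis elements and confirms that the image stays in the span.

```python
import sympy as sp

x = sp.symbols('x')
m = 3  # any fixed integer m >= 2; m = 3 keeps the check small

S0 = sp.Matrix([[1, 0], [0, -1]]) / 2   # sigma_3 / 2
Sp = sp.Matrix([[0, 1], [0, 0]])        # (i sigma_2 + sigma_1) / 2
e1 = sp.Matrix([1, 0])
e2 = sp.Matrix([0, 1])

# basis of I_{sl(2)} = I_1 (+) I_2
basis = [x**j * e1 for j in range(m - 1)] \
      + [m * x**j * e2 - j * x**(j - 1) * e1 for j in range(m + 1)]

# the operators (11), each encoded as (a(x), A(x)) with Q = a(x) d/dx + A(x)
Q_minus = (sp.Integer(1), sp.zeros(2, 2))
Q_zero = (x, -sp.Rational(m - 1, 2) * sp.eye(2) + S0)
Q_plus = (x**2, -(m - 1) * x * sp.eye(2) + 2 * S0 * x + Sp)

def apply_Q(Q, v):
    a, A = Q
    return sp.expand(a * sp.diff(v, x) + A * v)

def coords(v, deg):
    # coordinates of a 2-vector of polynomials in the monomials x^k e_i
    return [sp.Poly(v[c], x).coeff_monomial(x**k)
            for c in range(2) for k in range(deg + 1)]

deg = m + 2
B = sp.Matrix([coords(v, deg) for v in basis]).T
for Q in (Q_minus, Q_zero, Q_plus):
    for v in basis:
        w = sp.Matrix(coords(apply_Q(Q, v), deg))
        assert B.rank() == B.row_join(w).rank()  # image stays in the span
```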
Since the spaces ${\cal I}_1$, ${\cal I}_2$ are invariant with respect
to an action of any of the operators (\ref{11}), the above
representation is reducible. A more serious trouble is that it is not
possible to construct a QES operator, that is equivalent to a Hermitian
Schr\"odinger operator, by taking a bilinear combination (\ref{0-2}) of
operators (\ref{11}) with coefficients being complex numbers. To
overcome this difficulty we use the idea indicated in \cite{zhd97b} and
let the coefficients of the bilinear combination (\ref{0-2}) to be
constant $2\times 2$ matrices. To this end we introduce a wider Lie
algebra and add to the algebra (\ref{11}) the following three
matrix operators:
\begin{equation} \label{13}
R_-=S_- ,\quad R_0=S_-x+S_0 ,\quad
R_+=S_-x^2+2S_0x+S_+ ,
\end{equation}
where $S_{\pm}=(i\sigma_2\pm\sigma_1)/2$.
It is straightforward to verify that the space (\ref{12}) is invariant
with respect to an action of a linear combination of the operators
(\ref{13}). Consider next the following set of operators:
\begin{equation} \label{14}
\langle T_{\pm}=Q_{\pm}-R_{\pm},\ T_0=Q_0-R_0,\ R_{\pm},\ R_0,\ I\rangle ,
\end{equation}
where $Q$ and $R$ are operators (\ref{11}) and (\ref{13}), respectively,
and $I$ is a unit $2\times 2$ matrix. By a direct computation we check
that the operators $T_{\pm}, T_{0}$ as well as the operators $R_{\pm},
R_{0}$, fulfill the commutation relations of the algebra $sl(2)$.
Furthermore any of the operators $T_{\pm}, T_{0}$ commutes with any of
the operators $R_{\pm}, R_{0}$. Consequently, operators (\ref{14}) form
the Lie algebra
\[
sl(2)\bigoplus sl(2)\bigoplus I\cong o(2,2)\bigoplus I.
\]
In a sequel we denote this algebra as $g$.
The Casimir operators of the Lie algebra $g$ are multiples of the unit
matrix
$$
C_1=T_0^2-T_+T_--T_0=\left({{m^2-1} \over {4}}
\right)I ,\quad C_2=R_0^2-R_+R_--R_0={{3} \over {4}}I.
$$
Using this fact it can be shown that the representation of $g$ realized
on the space ${\cal I}_{sl(2)}$ is irreducible.
One more remark is that the operators (\ref{14}) satisfy
the following relations:
\begin{equation} \label{15} \begin{array}{lll}
R_-^2=0,\ \ R_0^2={{\textstyle 1} \over {\textstyle 4}},\ \ R_+^2=0, \\[2mm]
\{R_-,R_0\}=0,\ \ \{R_+,R_0\}=0,\ \ \{R_-,R_+\} =-1, \\[2mm]
R_-R_0={{\textstyle 1} \over {\textstyle 2}}R_-,\ \ R_0R_+={{\textstyle 1} \over {\textstyle 2}}R_+,\ \
R_-R_+=R_0-{{\textstyle 1} \over {\textstyle 2}}.
\end{array}
\end{equation}
Here $\{Q_1, Q_2\}=Q_1Q_2 + Q_2Q_1$. One of the consequences of this fact
is that the algebra $g$ may be considered as a superalgebra, which shows
an evident link to the results of the paper \cite{fgr97}.
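Since the operators (\ref{13}) contain no derivatives, relations (\ref{15}), the $sl(2)$ commutation relations and the value of the Casimir operator of the $R$-algebra can all be checked by plain matrix algebra over polynomials in $x$. The sympy sketch below does this; the only ingredient not restated in this section is the form of $S_0$, which we take to be $\sigma_3/2$ (an assumption, consistent with $S_{\pm}=(i\sigma_2\pm\sigma_1)/2$ and with relations (\ref{15}) themselves).

```python
# Symbolic check of relations (15) for the R-operators (13).
# Assumption: S_0 = sigma_3/2 (not restated in this section).
import sympy as sp

x = sp.Symbol('x')
I2 = sp.eye(2)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

Sp = (sp.I*s2 + s1)/2   # S_+
Sm = (sp.I*s2 - s1)/2   # S_-
S0 = s3/2               # assumed form of S_0

# The R-operators are multiplication operators (no derivatives), so their
# products reduce to products of 2x2 matrices with polynomial entries.
Rm = Sm
R0 = Sm*x + S0
Rp = Sm*x**2 + 2*S0*x + Sp

# relations (15)
assert (Rm*Rm).expand() == sp.zeros(2)
assert (R0*R0).expand() == I2/4
assert (Rp*Rp).expand() == sp.zeros(2)
assert (Rm*R0 + R0*Rm).expand() == sp.zeros(2)
assert (Rp*R0 + R0*Rp).expand() == sp.zeros(2)
assert (Rm*Rp + Rp*Rm).expand() == -I2
assert (Rm*Rp).expand() == (R0 - I2/2).expand()

# sl(2) commutation relations and the Casimir operator of the R-algebra
assert (R0*Rp - Rp*R0).expand() == Rp.expand()
assert (R0*Rm - Rm*R0).expand() == (-Rm).expand()
assert (R0*R0 - Rp*Rm - R0).expand() == sp.Rational(3, 4)*I2
```

All assertions pass, confirming in particular that the Casimir operator of the $R$-algebra equals $(3/4)I$.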
\section*{III. The general form of the Hermitian QES operator}
Using the commutation relations of the Lie algebra $g$ together with
relations (\ref{15}) one can show that any bilinear combination of the
operators (\ref{14}) is a linear combination of twenty-one (basis)
quadratic forms of the operators (\ref{14}). Composing this linear
combination yields all QES models which can be obtained with the help of
our approach. However, the final goal of the paper is not to get some
families of QES matrix second-order operators as such but to get QES
Schr\"odinger operators (\ref{0-3}). This means that it is necessary to
transform bilinear combination (\ref{0-2}) to the standard form
(\ref{0-3}). What is more, it is essential that the corresponding
transformation should be given by explicit formulae, since we need to
write down explicitly the matrix potential $V(x)$ of thus obtained QES
Schr\"odinger operator and the basis functions of its invariant space.
The general form of a QES model obtainable within the framework of our
approach is as follows
\begin{equation} \label{6}
H[x]=\xi(x)\partial_x^2+B(x)\partial_x+C(x),
\end{equation}
where $\xi(x)$ is some real-valued function and $B(x), C(x)$ are matrix
functions of the dimension $2\times 2$. Let $U(x)$ be an invertible
$2\times 2$ matrix-function satisfying the system of ordinary
differential equations
\begin{equation} \label{8}
U'(x)={{\textstyle 1} \over {\textstyle 2\xi(x)}}\left({{\textstyle \xi'(x)} \over
{\textstyle 2}}-B(x)\right)U(x),
\end{equation}
and the function $f(x)$ be defined by the relation
\begin{equation}
\label{3-0}
f(x)=\pm\int {{\textstyle dx} \over {\sqrt{\xi(x)}}}.
\end{equation}
Then the change of variables reducing (\ref{6}) to the standard form
(\ref{0-3}) reads as
\begin{equation}
\label{7}
\begin{array} {rcl}
x&\rightarrow& y = f(x),\\[2mm]
H[x]&\rightarrow& \hat H[y] = \hat U^{-1}(y)H[f^{-1}(y)]\hat U(y),
\end{array}
\end{equation}
where $f^{-1}$ stands for the inverse of $f$ and $\hat U(y)=U(f^{-1}(y))$.
Performing the transformation (\ref{7}) yields the Schr\"odinger operator
\begin{equation}
\label{3-1}
\hat H[y]=\partial_y^2+V(y)
\end{equation}
with
\begin{equation} \label{9} \begin{array}{rcl}
V(y)&=&\Biggl\{U^{-1}(x)\left[-{{\textstyle 1} \over {\textstyle 4\xi}}B^2(x)-{{\textstyle 1}
\over {\textstyle 2}}B'(x)+{{\textstyle \xi'} \over {\textstyle 2\xi}}B(x)+C(x)\right]
U(x)\\[2mm]
&&\left.+{{\textstyle \xi''} \over {\textstyle 4}}-{{\textstyle 3{\xi'}^2}
\over {\textstyle 16\xi}}\Biggr\}\right|_{x=f^{-1}(y)} .
\end{array}
\end{equation}
Hereafter, the notation $\{W(x)\}_{x=f^{-1}(y)}$ means that we should
replace $x$ with $f^{-1}(y)$ in the expression $W(x)$.
Furthermore, if we denote the basis elements of the invariant space
(\ref{12}) as $\vec f_1(x),\ldots, \vec f_{2m}(x)$, then the invariant
space of the operator $\hat H[y]$ takes the form
\begin{equation} \label{10}
\hat {\cal I}_{sl(2)}=\left \langle \hat U^{-1}(y)\vec f_1(f^{-1}(y)),\ldots,
\hat U^{-1}(y)\vec f_{2m}(f^{-1}(y))\right \rangle .
\end{equation}
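The chain (\ref{8})--(\ref{9}) can be illustrated by a scalar toy example (an illustrative choice, not a case from the paper): for $\xi=1$, $B=x$, $C=x^2$ one has $f(x)=x$, the solution of (\ref{8}) is $U=\exp(-x^2/4)$, and formula (\ref{9}) predicts $V=3x^2/4-1/2$. The sketch below verifies this by conjugating $H$ directly.

```python
# Scalar sanity check of the reduction (7)-(9) with xi = 1, B = x, C = x^2
# (an illustrative example; then f(x) = x and U(x) = exp(-x^2/4)).
import sympy as sp

x = sp.Symbol('x')
g = sp.Function('g')(x)          # arbitrary test function

xi, B, C = sp.Integer(1), x, x**2
U = sp.exp(-x**2/4)

# U solves (8): U' = (1/(2*xi)) * (xi'/2 - B) * U
assert sp.simplify(sp.diff(U, x) - (sp.diff(xi, x)/2 - B)/(2*xi)*U) == 0

# conjugated operator U^{-1} H U applied to g
Hg = sp.diff(U*g, x, 2) + B*sp.diff(U*g, x) + C*U*g
conj = sp.expand(sp.simplify(Hg/U))

# potential predicted by formula (9)
V = (-B**2/(4*xi) - sp.diff(B, x)/2 + sp.diff(xi, x)*B/(2*xi) + C
     + sp.diff(xi, x, 2)/4 - 3*sp.diff(xi, x)**2/(16*xi))

assert sp.expand(V) == sp.expand(3*x**2/4 - sp.Rational(1, 2))
assert sp.simplify(conj - (sp.diff(g, x, 2) + V*g)) == 0
```

The conjugated operator indeed equals $\partial_x^2 + 3x^2/4 - 1/2$, in agreement with (\ref{9}).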
In view of the remark made at the beginning of this section we are
looking for QES models for which the transformation law (\ref{7}) can be
given explicitly. This means that we should be able to construct a
solution of system (\ref{8}) in an explicit form. To achieve this goal
we select from the above-mentioned set of twenty-one linearly
independent quadratic forms of operators (\ref{14}) the twelve
forms,
\begin{eqnarray}
&&A_0=\partial_x^2,\ A_1=x\partial_x^2,\quad A_2=x^2\partial_x^2+(m-1)\sigma_3,\nonumber \\
&&B_0=\partial_x,\ B_1=x\partial_x+{{\textstyle \sigma_3} \over {\textstyle 2}},\quad
B_2=x^2\partial_x-(m-1)x+\sigma_3x+\sigma_1,\nonumber\\
&&C_1=\sigma_1\partial_x+{{\textstyle m} \over {\textstyle 2}}\sigma_3,\quad
C_2=i\sigma_2\partial_x+{{\textstyle m} \over {\textstyle 2}}\sigma_3,\quad
C_3=\sigma_3\partial_x,\label{16} \\
&&D_1=x^3\partial_x^2-2\sigma_1x\partial_x+(3m-m^2-3)x+(2m-3)x\sigma_3+(4m-4)\sigma_1,\nonumber\\
&&D_2=x^3\partial_x^2-2i\sigma_2x\partial_x+(3m-m^2-3)x+(2m-3)x\sigma_3+(4m-4)\sigma_1,\nonumber\\
&&D_3=2\sigma_3x\partial_x+(1-2m)\sigma_3 ,\nonumber
\end{eqnarray}
whose linear combinations have such a structure that system (\ref{8})
can be integrated in a closed form. However, in the present paper we
study systematically the first nine quadratic forms from the above list.
The quadratic forms $D_1, D_2, D_3$ are used to construct an example of
QES model such that the matrix potential is expressed via the
Weierstrass function.
Thus the general form of the Hamiltonian to be considered in the sequel
is as follows
\begin{eqnarray}
H[x]&=&\sum_{\mu=0}^2(\alpha_\mu A_\mu +\beta_\mu B_\mu)
+ \sum_{i=1}^3 \gamma_iC_i = (\alpha_2x^2+\alpha_1x+\alpha_0)\partial_x^2\nonumber\\
&& +(\beta_2x^2+\beta_1x+\beta_0+
\gamma_1\sigma_1+i\gamma_2\sigma_2+\gamma_3\sigma_3)\partial_x
+\beta_2\sigma_3x\label{17}\\
&&-\beta_2(m-1)x+\beta_2\sigma_1+\left[\alpha_2(m-1)+
{{\textstyle \beta_1} \over {\textstyle 2}}+{{\textstyle m} \over {\textstyle 2}}(\gamma_1+\gamma_2)
\right]\sigma_3.\nonumber
\end{eqnarray}
Here $\alpha_0,\alpha_1, \alpha_2$ are arbitrary real constants and $\beta_0,\ldots,
\gamma_3$ are arbitrary complex constants.
If we denote
\begin{equation}\label{18}\begin{array}{l}
\tilde\gamma_1=\gamma_1,\ \tilde\gamma_2=i\gamma_2,\
\tilde\gamma_3=\gamma_3,\ \delta =2\alpha_2(m-1)+
\beta_1+m(\gamma_1+\gamma_2),\\[2mm]
\xi(x) =\alpha_2x^2+
\alpha_1x+\alpha_0,\ \eta(x) =\beta_2x^2+\beta_1x+\beta_0,
\end{array}
\end{equation}
then the general solution of system (\ref{8}) reads as
\begin{equation}\label{19}\begin{array}{l}
\displaystyle U(x)=\xi^{1/4}(x)\exp\left [-{{\textstyle 1} \over {\textstyle 2}}\int{{{\textstyle \eta(x)}
\over {\textstyle \xi(x)}}dx}\right ]\exp\left [-{{\textstyle 1} \over {\textstyle 2}}
\tilde\gamma_i\sigma_i\int{{{\textstyle 1}
\over {\textstyle \xi(x)}}dx}\right ]\Lambda ,
\end{array}
\end{equation}
where $\Lambda$ is an arbitrary constant invertible $2\times 2$ matrix.
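Formula (\ref{19}) can be checked against system (\ref{8}) for a simple parameter choice (an illustrative assumption, not a case from the paper): $\xi=1$, $\eta=x$, and $\tilde\gamma_i\sigma_i=\sigma_1+i\sigma_2$, which is nilpotent, so the matrix exponential in (\ref{19}) truncates after two terms and $\xi^{1/4}=1$.

```python
# Consistency check of formula (19) against system (8) for an illustrative
# nilpotent choice: xi = 1, eta = x, gamma-tilde.sigma = sigma_1 + i*sigma_2.
import sympy as sp

x = sp.Symbol('x')
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
N = s1 + sp.I*s2                 # tilde-gamma_i sigma_i, with N**2 = 0

xi, eta = sp.Integer(1), x
B = eta*sp.eye(2) + N            # matrix coefficient of d/dx in (17)

# Formula (19) with Lambda = 1 and xi**(1/4) = 1; since N**2 = 0 the matrix
# exponential truncates: exp(-(1/2)*N*Int(1/xi)) = 1 - (1/2)*x*N
U = sp.exp(-sp.Rational(1, 2)*sp.integrate(eta/xi, x)) * \
    (sp.eye(2) - sp.Rational(1, 2)*sp.integrate(1/xi, x)*N)

# System (8): U' = (1/(2*xi)) * (xi'/2 - B) * U
lhs = U.diff(x)
rhs = (sp.diff(xi, x)/2*sp.eye(2) - B)/(2*xi)*U
assert sp.simplify(lhs - rhs) == sp.zeros(2)
```

The nilpotency of $\tilde\gamma_i\sigma_i$ is what makes the exponential in (\ref{19}) elementary here; for generic $\tilde\gamma_i$ the exponential is computed from $(\tilde\gamma_i\sigma_i)^2=\tilde\gamma_i^2$.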
Performing the transformation (\ref{7}) with $U(x)$ being given by (\ref{19})
reduces QES operator (\ref{17}) to a Schr\"odinger form (\ref{3-1}),
where
\begin{eqnarray}
V(y)&=&\Biggl\{{{\textstyle 1} \over {\textstyle 4\xi}}\Lambda^{-1}\lbrace -\eta^2+2\xi
'\eta-2\xi\eta '-4\beta_2(m-1)x\xi -\tilde\gamma_i^2\nonumber\\
&&+2(\xi '-\eta)\tilde\gamma_i\sigma_i+
4\beta_2\xi U^{-1}(x)\sigma_1U(x)+(4\beta_2x+2\delta)\xi\label{20} \\
&&\left.\times U^{-1}(x) \sigma_3U(x)\rbrace\Lambda+{{\textstyle \alpha_2} \over {\textstyle 2}}-
{{\textstyle 3(2\alpha_2x+\alpha_1)^2} \over {\textstyle 16\xi}}\Biggr\}
\right |_{x=f^{-1}(y)}.\nonumber
\end{eqnarray}
Here $\xi, \eta$ are functions of $x$ defined in (\ref{18}) and
$f^{-1}(y)$ is the inverse of $f(x)$ which is given by (\ref{3-0}).
The requirement of hermiticity of the Schr\"odinger operator (\ref{3-1})
is equivalent to the requirement of hermiticity of the matrix $V(y)$.
To select from the multi-parameter family of matrices (\ref{20}) Hermitian
ones we will make use of the following technical lemmas.
\begin{lem} The matrices $z\sigma_a, w(\sigma_a\pm i\sigma_b), a\not
=b$, with $\{z, w\}\subset{\bf C}, z\notin{\bf R}, w\not =0$ cannot be
reduced to Hermitian matrices with the help of a transformation
\begin{equation}
\label{3-2}
A\rightarrow A'=\Lambda^{-1}A\Lambda,
\end{equation}
where $\Lambda$ is an invertible constant $2\times 2$ matrix.
\end{lem}
\vspace{2mm}
\noindent
{\bf Proof.}$\quad$ It is sufficient to prove the statement for the case
$a=1, b=2$, since all other cases are equivalent to this one. Suppose
the contrary, namely that there exists a transformation (\ref{3-2})
transforming the matrix $z\sigma_1$ to a Hermitian matrix $A'$. As ${\rm
tr}\ (z\sigma_1)={\rm tr}\ A'=0$, the matrix $A'$ has the form
$\alpha_i\sigma_i$ with some real constants $\alpha_i$. Next, from the
equality ${\rm det}\,(z\sigma_1)={\rm det}\,A'$ we get $z^2=\alpha_i^2$.
The last relation is in contradiction to the fact that $z\notin {\bf
R}$. Consequently, the matrix $z\sigma_1$ cannot be reduced to a
Hermitian matrix with the aid of a transformation (\ref{3-2}).
Let us turn now to the matrix $w(\sigma_1+i\sigma_2)$. Taking a general form
of the matrix $\Lambda$
\[
\Lambda=\left(\begin{array}{cc} a& b\\ c& d\end{array}\right)
\]
we represent (\ref{3-2}) as follows
$$
A'=\Lambda^{-1}w(\sigma_1+i\sigma_2)\Lambda ={{\textstyle 2w} \over {\textstyle
\delta}}\left(\begin{array}{cc}cd & d^2\\ -c^2 & -cd
\end{array}\right),\quad \delta = {\rm det}\ \Lambda.
$$
The conditions of hermiticity of the matrix $A'$ read
$$
{{\textstyle w} \over {\textstyle \delta}}cd={{\textstyle \bar w} \over {\textstyle
\bar\delta}}\bar c\bar d,\quad
{{\textstyle -w} \over {\textstyle \delta}}c^2={{\textstyle \bar w} \over {\textstyle
\bar\delta}}\bar d^2,
$$
where the bar over a symbol stands for the complex conjugation.
It follows from the second relation that $c, d$ can vanish only
simultaneously, which is impossible in view of the fact that the matrix
$\Lambda$ is invertible. Consequently, the relation $cd\not = 0$ holds.
Hence we get
$$
{{\textstyle -d} \over {\textstyle c}}={{\textstyle \bar c} \over {\textstyle \bar d}}
\leftrightarrow |c|^2+|d|^2=0.
$$
This contradiction proves the fact that the matrix $w(\sigma_1 + i \sigma_2)$
cannot be reduced to a Hermitian form.
Since the matrix $\sigma_1+i\sigma_2$ can be transformed into
$\sigma_1-i\sigma_2$ with the use of an appropriate transformation
(\ref{3-2}), the lemma is proved.
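The matrix identity used in the second part of the proof can be verified symbolically; the sketch below checks it for a generic invertible $\Lambda$.

```python
# Symbolic check of the conjugation identity from the proof of Lemma 1:
# Lambda^{-1} * w*(sigma_1 + i*sigma_2) * Lambda
#   = (2*w/det(Lambda)) * [[c*d, d**2], [-c**2, -c*d]].
import sympy as sp

a, b, c, d, w = sp.symbols('a b c d w')
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])

Lam = sp.Matrix([[a, b], [c, d]])
delta = Lam.det()

lhs = Lam.inv() * (w*(s1 + sp.I*s2)) * Lam
rhs = (2*w/delta) * sp.Matrix([[c*d, d**2], [-c**2, -c*d]])
assert sp.simplify(lhs - rhs) == sp.zeros(2)
```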
\begin{lem}
Let $\vec a=(a_1,a_2,a_3)$, $\vec b=(b_1,b_2,b_3)$, $\vec c=(c_1,c_2,c_3)$ be
complex vectors and $\vec \sigma$ be the vector whose components are the Pauli
matrices $(\sigma_1,\sigma_2,\sigma_3)$. Then the following assertions hold true.
\begin{enumerate}
\item{ A non-zero matrix $\vec a\vec\sigma$ is reduced to a Hermitian form with
the help of a transformation (\ref{3-2}) iff $\vec a^2>0$ (this
inequality means, in particular, that $\vec a^2\in{\bf R}$);}
\item{Non-zero matrices $\vec a\vec\sigma, \vec b\vec\sigma$ with $\vec b\not
=\lambda\vec a$,\ $\lambda\in{\bf R}$, are reduced simultaneously to Hermitian forms with
the help of a transformation (\ref{3-2}) iff
$$
\vec a^2>0,\ \ \vec b^2>0,\ \ (\vec a\times\vec b)^2>0;
$$}
\item{Matrices $\vec a\vec\sigma, \vec b\vec\sigma, \vec
c\vec\sigma$ with $\vec a\not =\vec 0,\ \vec b\not
=\lambda\vec a,\ \vec c\not
=\mu\vec b,\ \{\lambda,\mu\}\subset{\bf R}$ are reduced simultaneously to Hermitian forms with
the help of a transformation (\ref{3-2}) iff
\begin{eqnarray*}
&&\vec a^2>0,\quad \vec b^2>0,\quad (\vec a\times\vec b)^2> 0,\\
&& \left\{\vec a\vec c,\quad \vec b\vec c,\quad
(\vec a\times\vec b)\vec c\right\}
\subset {\bf R}.
\end{eqnarray*}}
\end{enumerate}
Here we designate the scalar product of vectors $\vec a, \vec b$ as $\vec a\vec b$
and the vector product of these as $\vec a\times \vec b$.
\end{lem}
{\bf Proof.}$\quad$ Let us first prove the necessity of the assertion
1 of the lemma. Suppose that the non-zero matrix $\vec a \vec \sigma$
can be reduced to a Hermitian form. We will prove that the inequality
$\vec a^2 > 0$ then follows.
Consider the matrices:
\begin{equation}\label{21}\begin{array}{l}
\Lambda_{ij}(a,b)=
\left\{
\begin{array}{ll} 1+\epsilon_{ijk}{{\textstyle \sqrt{a^2+b^2}-b} \over
{\textstyle a}}i\sigma_k, & a\not =0,\\[3mm] 1, & a=0, \end{array}
\right.
\end{array}
\end{equation}
where $(i,j,k)={\rm cycle}\ (1,2,3)$. It is not difficult to verify that
these matrices are invertible, provided
\begin{equation}\label{22}
\sqrt{a_i^2+a_j^2}\not =0.
\end{equation}
Given the condition (\ref{22}), the following relations hold
\begin{equation}\label{23}
\sigma_l\rightarrow\Lambda^{-1}_{ij}(a,b)\ \sigma_l\
\Lambda_{ij}(a,b)=\left\{\begin{array}{ll} \sigma_k,&l=k,\\[2mm]
{{\textstyle b\sigma_i+a\sigma_j} \over {\textstyle \sqrt{a^2+b^2}}},&l=i,\\[6mm]
{{\textstyle -a\sigma_i+b\sigma_j} \over {\textstyle \sqrt{a^2+b^2}}}, &l=j.
\end{array}\right.
\end{equation}
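The conjugation relations (\ref{23}) can be checked symbolically; the sketch below does so for the cycle $(i,j,k)=(1,2,3)$, i.e. for $\Lambda_{12}(a,b)=1+\bigl((\sqrt{a^2+b^2}-b)/a\bigr)i\sigma_3$, with real $a,b$ (an illustrative verification, not part of the derivation).

```python
# Symbolic check of the conjugation relations (23) for (i,j,k) = (1,2,3).
import sympy as sp

a, b = sp.symbols('a b', positive=True)
s1 = sp.Matrix([[0, 1], [1, 0]])
s2 = sp.Matrix([[0, -sp.I], [sp.I, 0]])
s3 = sp.Matrix([[1, 0], [0, -1]])

r = sp.sqrt(a**2 + b**2)
L = sp.eye(2) + (r - b)/a*sp.I*s3      # matrix (21) for the pair (1, 2)

conj = lambda M: L.inv() * M * L
assert sp.simplify(conj(s3) - s3) == sp.zeros(2)                 # l = k
assert sp.simplify(conj(s1) - (b*s1 + a*s2)/r) == sp.zeros(2)    # l = i
assert sp.simplify(conj(s2) - (-a*s1 + b*s2)/r) == sp.zeros(2)   # l = j
```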
As $\vec a$ is a non-zero vector, there exists at least one pair of the
indices $i, j$ such that $a_i^2+a_j^2\not = 0$. Applying the
transformation (\ref{23}) with $a=a_i, b=a_j$ we get
\begin{equation} \label{24}
\vec a\vec\sigma\rightarrow\vec a'\vec\sigma=
\sqrt{a_i^2+a_j^2}\, \sigma_j+a_k\sigma_k
\end{equation}
(no summation over the indices $i,j,k$ is carried out). As the direct
check shows, the quantity $\vec a^2$ is invariant with respect to
transformation (\ref{23}), i.e. $\vec a^2= \vec a'^2$.
If $\vec a^2=0$, then ${a'}_j^2+{a'}_k^2=0$, or $a'_j=\pm ia'_k$. Hence, by
Lemma 1, the matrix (\ref{24}) cannot be reduced to a Hermitian form.
Consequently, $\vec a^2\not =0$ and the relation ${a'}_j^2+{a'}_k^2\not =0$ holds
true. Applying transformation (\ref{23}) with $a=\sqrt{a_i^2+a_j^2},\ b=a_k$ we get
\begin{equation}\label{25}
\vec a'\vec\sigma\rightarrow\sqrt{\vec a^2}\, \sigma_k.
\end{equation}
Due to Lemma 1, if the number $\sqrt{\vec a^2}$ is complex, then the
above matrix cannot be transformed to a Hermitian matrix. Consequently,
the relation $\vec a^2>0$ holds true.
The sufficiency of the assertion 1 of the lemma follows from the fact
that, given the condition $\vec a^2 > 0$, the matrix (\ref{25}) is
Hermitian.
Now we will prove the necessity of the assertion 2 of the lemma.
First of all we note that due to assertion 1, $\vec a^2>0, \vec b^2
> 0$. Next, without loss of generality we can again suppose that
$a_i^2+a_j^2\not =0$. Taking the superposition of two transformations of
the form (\ref{23}) with $a=a_i, b=a_j$ and $a=\sqrt{a_i^2+a_j^2}, b=a_k$
yields
\begin{equation}\label{26}\begin{array}{l}
\Lambda_{ij}(a_i,a_j) \Lambda_{jk}(\sqrt{a_i^2+a_j^2},a_k)=
1+i\epsilon_{ijk}\frac{\textstyle \sqrt{\vec a^2}-a_k}{\textstyle \sqrt{a_i^2+
a_j^2}}\sigma_i\\[4mm]
\quad +i\epsilon_{ijk}\frac{\textstyle \sqrt{a_i^2+a_j^2}-
a_j}{\textstyle a_i}\sigma_k- i\epsilon_{ijk}\frac{\textstyle \sqrt{a_i^2+a_j^2}-a_j}{\textstyle a_i}
\frac{\textstyle \sqrt{\vec a^2}-a_k}{\textstyle \sqrt{a_i^2+a_j^2}}\sigma_j
\end{array} \end{equation}
(here a finite limit exists as $a_i\rightarrow 0$).
Using this formula and taking into account (\ref{23}) yield
\begin{equation}\label{27}
\vec a\vec\sigma\rightarrow\sqrt{\vec a^2}\sigma_k,\quad
\vec b\vec\sigma\rightarrow{\vec b'}\vec\sigma=
{{\textstyle b_ia_j-b_ja_i} \over {\textstyle \sqrt{a_i^2+a_j^2}}}\sigma_i+
{{\textstyle a_k\vec a\vec b-b_k\vec a^2} \over
{\textstyle \sqrt{\vec a^2}\sqrt{a_i^2+a_j^2}}}\sigma_j+
{{\textstyle \vec a\vec b} \over {\textstyle \sqrt{\vec a^2}}}\sigma_k.
\end{equation}
Let us show that the necessary condition for the matrices $\sqrt{\vec
a^2}\sigma_k$, $\vec b'\vec\sigma$ to be reducible to Hermitian forms
simultaneously reads as $\vec a\vec b\in{\bf R}$. Indeed, as the
matrices ${\vec b'}\vec\sigma, \sigma_k$ are simultaneously reduced to
Hermitian forms, the matrix $\vec b' \vec\sigma + \lambda \sigma_k$ can
be reduced to a Hermitian form with any real $\lambda$. Hence, in view
of the assertion 1 we conclude that
\begin{equation}\label{28}
{b'}_i^2+{b'}_j^2+({b'}_k+\lambda)^2>0,
\end{equation}
where $\lambda$ is an arbitrary real number. This inequality can hold
for all such $\lambda$ only when $b'_k={\vec a\vec b\over \sqrt{\vec a^2}} \in {\bf R}$.
Choosing $\lambda = -b'_k$ in (\ref{28}) yields that
${b'}_i^2+{b'}_j^2>0$. Since ${b'}_i^2+{b'}_j^2=(\vec a\times\vec b)^2$,
hence we get the desired inequality $(\vec a\times\vec b)^2>0$. The necessity
is proved.
In order to prove the sufficiency of the assertion 2, we consider transformation
(\ref{23}) with
\begin{equation}\label{29}
a={{\textstyle b_ia_j-b_ja_i} \over {\textstyle \sqrt{a_i^2+a_j^2}}},\quad
\ b={{\textstyle a_k\vec a\vec b-b_k\vec a^2} \over
{\textstyle \sqrt{\vec a^2}\sqrt{a_i^2+a_j^2}}}.
\end{equation}
This transformation leaves the matrix $\sqrt{\vec a^2}\sigma_k$ invariant, while
the matrix ${\vec b'}\vec\sigma$ (\ref{27}) transforms as follows
\begin{equation}\label{30}
{\vec b'}\vec\sigma\rightarrow{\vec b''}\vec\sigma=
{{\textstyle \sqrt{(\vec a\times\vec b)^2}} \over {\textstyle
\sqrt{\vec a^2}}}\sigma_j+
{{\textstyle \vec a\vec b} \over
{\textstyle \sqrt{\vec a^2}}}\sigma_k,
\end{equation}
whence the sufficiency of the assertion 2 follows.
The proof of the assertion 3 of the lemma is similar to that of the assertion 2.
The first three conditions are obtained by taking the assertion 2 into account.
A sequence of transformations (\ref{23}) with $a,b$ of the form (\ref{26}), (\ref{29})
transforms the matrix $\vec c\vec \sigma$ to become
\begin{equation}\label{31}
\vec c\vec\sigma\rightarrow{\vec c\; ''}\vec\sigma=
{{\textstyle \epsilon_{ijk}\vec a(\vec c\times\vec b)} \over {\textstyle
\sqrt{(\vec c\times\vec b)^2}}}\sigma_i+
{{\textstyle (\vec a\times\vec b)(\vec a\times\vec c)} \over {\textstyle
\sqrt{(\vec c\times\vec b)^2}\sqrt{\vec a^2}}}\sigma_j+
{{\textstyle \vec a\vec c} \over
{\textstyle \sqrt{\vec a^2}}}\sigma_k.
\end{equation}
Using the standard identities for the mixed vector products we establish that
the coefficients of the matrices $\sigma_i, \sigma_j, \sigma_k$ are real
if and only if the relations
\[
\left\{\vec a\vec c,\ \vec b\vec c,\
(\vec a\times\vec b)\vec c\right\}
\subset {\bf R}
\]
hold true. This completes the proof of Lemma 2.
Lemma 2 plays a crucial role when reducing the potentials (\ref{20})
to Hermitian forms. This is done as follows. Firstly, we reduce the QES operator
to the Schr\"odinger form
\[
\partial_y^2 + f(y)\vec a\vec\sigma + g(y)\vec b\vec\sigma +
h(y)\vec c\vec\sigma + r(y).
\]
Here $f, g, h, r$ are some linearly independent scalar functions and
$\vec a=(a_1,a_2,$ $a_3)$, $\vec b=(b_1,b_2,b_3)$, $\vec c=(c_1,c_2,c_3)$ are
complex constant vectors whose components depend on the parameters
$\vec\alpha, \vec\beta, \vec\gamma$. Next, using Lemma 2 we obtain the
conditions for the parameters $\vec\alpha, \vec\beta, \vec\gamma$ that
provide a simultaneous reducibility of the matrices $\vec a\vec\sigma,
\vec b\vec\sigma, \vec c\vec\sigma$ to Hermitian forms. Then, making use
of formulae (\ref{21}), (\ref{26}), (\ref{29}) we find the form of the
matrix $\Lambda$. Formulae (\ref{25}), (\ref{30}), (\ref{31}) yield
explicit forms of the transformed matrices $\vec a\vec\sigma, \vec
b\vec\sigma, \vec c\vec\sigma$ and, consequently, the Hermitian form of
the matrix potential $V(y)$.
\section*{IV. QES matrix models}
Applying the algorithm mentioned at the end of the previous section we
have obtained a complete description of QES matrix models (\ref{17})
that can be reduced to Hermitian Schr\"odinger matrix operators. We list
below the final result, namely, the restrictions on the choice of
parameters and the explicit forms of the QES Hermitian Schr\"odinger
operators and then consider in some detail a derivation of the
corresponding formulae for one of the six inequivalent cases. In the
formulae below we denote the disjunction of two statements $A$ and $B$
as $[A]\bigvee [B]$.
\vspace{2mm}
\noindent
{\bf Case 1.}\ $\tilde \gamma_1= \tilde \gamma_2=\tilde \gamma_3=0$ and
\[
\left[\beta_0,\ \beta_1,\ \beta_2\in{\bf R}\right]\bigvee
[\beta_2=0,\ \beta_1=2\alpha_2,\
\beta_0=\alpha_1+i\mu ,\ \mu\in{\bf R}];
\]
\begin{eqnarray*}
\hat H[y]&=&\partial_y^2 + \Biggl\{{{\textstyle 1} \over
{\textstyle 4(\alpha_2x^2+\alpha_1x+\alpha_0)}}\lbrace
-\beta_2^2x^4-[2\beta_1\beta_2+4\alpha_2\beta_2(m-1)]x^3\\
&&+\left[ 2\alpha_2\beta_1-2\alpha_1\beta_2-\beta_1^2-2\beta_0\beta_2-
4\alpha_1\beta_2(m-1)\right] x^2\\
&&+\left[ 4\alpha_2\beta_0-2\beta_0\beta_1-4m\alpha_0\beta_2\right]
x+2\alpha_1\beta_0-2\alpha_0\beta_1-\beta_0^2\\
&&+4\beta_2(\alpha_2x^2+\alpha_1x+\alpha_0)\sigma_1
+(4\beta_2x+2\delta)(\alpha_2x^2+\alpha_1x+\alpha_0)\sigma_3
\rbrace\\
&&+{{\textstyle \alpha_2} \over {\textstyle 2}}-
{{\textstyle 3(2\alpha_2x+\alpha_1)^2} \over {\textstyle
16(\alpha_2x^2+\alpha_1x+\alpha_0)}}\left.\Biggr\}\right|_{x=f^{-1}(y)},\\
\Lambda&=&1.
\end{eqnarray*}
{\bf Case 2.}\ $\beta_2=0,\ \delta=0$ and
\begin{eqnarray*}
&&2\alpha_2\beta_1-\beta_1^2\in{\bf R}, \
2\alpha_2\beta_0-\beta_0\beta_1\in{\bf R},\
2\alpha_1\beta_0-2\beta_1\alpha_0-\beta_0^2-\tilde\gamma_i^2\in{\bf R},\\
&&\left[(2\alpha_2-\beta_1)^2\tilde\gamma_i^2>0\right]\bigvee
\left[2\alpha_2-\beta_1=0\right],\
\left[(\alpha_1-\beta_0)^2\tilde\gamma_i^2>0\right]\bigvee
\left[\alpha_1-\beta_0=0\right];
\end{eqnarray*}
\begin{eqnarray*}
\hat H[y]&=&\partial_y^2 + \Biggl\{{{\textstyle 1} \over
{\textstyle 4(\alpha_2x^2+\alpha_1x+\alpha_0)}}\Biggl\{
\beta_1(2\alpha_2-\beta_1)x^2+2\beta_0(2\alpha_2-\beta_1)x\\
&&+2\alpha_1\beta_0-2\beta_1\alpha_0-\beta_0^2-\tilde\gamma_i^2+
\left[2(2\alpha_2-\beta_1)x+2(\alpha_1-\beta_0)\right]\sqrt{\tilde\gamma_i^2}
\sigma_3
\Biggr\}\\
&&+{{\textstyle \alpha_2} \over {\textstyle 2}}-
{{\textstyle 3(2\alpha_2x+\alpha_1)^2} \over
{\textstyle 16(\alpha_2x^2+\alpha_1x+\alpha_0)}}\left.\Biggr\}\right|_{x=f^{-1}(y)},\\
\Lambda&=&\Lambda_{12}(\tilde\gamma_1,\tilde\gamma_2)
\Lambda_{23}(\sqrt{\tilde\gamma_1^2+\tilde\gamma_2^2},\tilde\gamma_3), \
\tilde\gamma_1^2+\tilde\gamma_2^2\not =0.
\end{eqnarray*}
(If $\tilde\gamma_1^2+\tilde\gamma_2^2=0$, then one can choose another matrix
$\Lambda$ (27) with $\tilde\gamma_i^2+\tilde\gamma_j^2\not =0$). \\
{\bf Case 3.}\ $\alpha_2\not = 0, \beta_2\not = 0$ and
\begin{eqnarray*}
&&\left[\{\beta_2, \gamma_1\}\subset {\bf R},\ \gamma_3=0,\ \gamma_2 =
\sqrt{\gamma_1^2-2\alpha_2\gamma_1}, \alpha_2\gamma_1<0,\right.\\
&&\left. \beta_1=2\alpha_2+\beta_2{{\textstyle \alpha_1} \over {\textstyle \alpha_2}},\
\beta_0=\alpha_1+\beta_2{{\textstyle \alpha_0} \over {\textstyle \alpha_2}}\right];
\end{eqnarray*}
\begin{eqnarray*}
\hat H[y]&=& \partial_y^2 + \Biggl\{{{\textstyle \alpha_2} \over {\textstyle 2}}
-{{\textstyle 3(2\alpha_2x+\alpha_1)^2}
\over {\textstyle 16(\alpha_2x^2+\alpha_1x+\alpha_0)}}
+{{\textstyle 1} \over {\textstyle 4(\alpha_2x^2+\alpha_1x+\alpha_0)}}
\Biggl\{-\beta_2^2x^4\\
&&-\left[2\beta_2^2{{\textstyle \alpha_1} \over {\textstyle \alpha_2}}+
4\alpha_2\beta_2m\right]x^3-\Biggl[{{\textstyle \beta_2^2} \over {\textstyle
\alpha_2^2}}(\alpha_1^2+2\alpha_0\alpha_2)
+2\alpha_1\beta_2(1+2m)\Biggr] x^2\\
&&-\left[ {{\textstyle 2\alpha_1\beta_2(\alpha_1\alpha_2+\alpha_0\beta_2)}
\over {\textstyle \alpha_2^2}}+4\alpha_0\beta_2m\right] x
+\alpha_1^2-\beta_2^2{{\textstyle \alpha_0^2} \over {\textstyle
\alpha_2^2}}-4\beta_2{{\textstyle \alpha_0\alpha_1} \over {\textstyle \alpha_2}} -
4\alpha_0\alpha_2-2\alpha_2\gamma_1\\
&&+4\beta_2x(\alpha_2x^2+\alpha_1x+\alpha_0)\left[\sin\left(\theta(y)
\sqrt{-2\alpha_2 \gamma_1}\;\right)\sigma_1+
\cos\left(\theta(y)\sqrt{-2\alpha_2\gamma_1}\;\right)\sigma_3\right]\\
&&+2(\alpha_2x^2+\alpha_1x+\alpha_0)\Biggl[{{\textstyle \sin\left(\theta(y)
\sqrt{-2\alpha_2\gamma_1}\;\right)} \over {\textstyle
\sqrt{-2\alpha_2\gamma_1}}}\Biggl(\delta\sqrt{-2\alpha_2\gamma_1}\sigma_1
-2\beta_2\sqrt{\gamma_1^2-2\alpha_2\gamma_1}\sigma_3\Biggr)\\
&&+\left.\cos\left(\theta(y)\sqrt{-2\alpha_2\gamma_1}\;\right)\left(
{{\textstyle 2\beta_2\sqrt{\gamma_1^2-2\alpha_2\gamma_1}} \over {\textstyle
\sqrt{-2\alpha_2\gamma_1}}}\sigma_1+
\delta\sigma_3\right)\Biggr]\Biggr\}\Biggr\}\right|_{x=f^{-1}(y)} ,\\
\Lambda&=&1+\left(\sqrt{1-{{\textstyle 2\alpha_2} \over {\textstyle \gamma_1}}}-
\sqrt{{{\textstyle -2\alpha_2} \over {\textstyle \gamma_1}}}\;\right)\sigma_3 .
\end{eqnarray*}
{\bf Case 4.}\ $\alpha_2\not =0, \beta_2 =0$.
\vspace{2mm}
\noindent
{\bf Subcase 4.1.}\ $\delta\not = 0$, $\gamma_1, \gamma_2$ do not vanish
simultaneously and
\[
\gamma_1^2-\gamma_2^2<0,\ \ \gamma_3=i\mu,\ \ \{\mu, \delta\}
\subset{\bf R},\ \ i(\alpha_1-\beta_0)\in{\bf R},\ \ \beta_1=2\alpha_2;
\]
\begin{eqnarray*}
\hat H[y]&=&\partial_y^2 + \Biggl\{{{\alpha_2} \over {2}}
-{{3(2\alpha_2x+\alpha_1)^2}
\displaystyle\over {16(\alpha_2x^2+\alpha_1x+\alpha_0)}}+{{1} \over {4\xi}}\Biggl\{
-\beta_0^2+2\alpha_1\beta_0-2\alpha_0
\beta_1-\tilde\gamma_i^2\\
&&+2(\alpha_2x^2+\alpha_1x+\alpha_0)\Biggl[\delta\sqrt{\gamma_2^2-\gamma_1^2}
\displaystyle\sigma_1{{\sin\left(\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)}
\displaystyle \over {\sqrt{-\tilde\gamma_i^2}}}\\
&&+\displaystyle {{-i\delta\gamma_3\sqrt{\gamma_2^2-\gamma_1^2}\sigma_2
\displaystyle +\delta(\gamma_1^2-\gamma_2^2)\sigma_3} \over {\tilde\gamma_i^2}}
\cos\left(\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)\Biggr]
+\displaystyle\Biggl[{{2\delta\alpha_2\gamma_3}\over{\tilde\gamma_i^2}}x^2\\
&&+\displaystyle{{2\delta\alpha_1\gamma_3}\over{\tilde\gamma_i^2}}x+
\displaystyle{{(2\alpha_1-2\beta_0)\tilde\gamma_i^2+2\delta\alpha_0\gamma_3}\over
\displaystyle{\tilde\gamma_i^2}}\Biggr]
\left(i\sqrt{\gamma_2^2-\gamma_1^2}\sigma_2
+\gamma_3\sigma_3\right)\Biggr\}
\left.\Biggr\}\right|_{x=f^{-1}(y)},\\
\Lambda&=&\Lambda_{21}(i\gamma_1,\gamma_2).
\end{eqnarray*}
\noindent
{\bf Subcase 4.2.}\ $\delta\not =0,\ \gamma_1=\gamma_2=0,\ \gamma_3\not =0$ and
\begin{eqnarray*}
\{\delta ,\ \beta_1(2\alpha_2-\beta_1),\ \beta_0(2\alpha_2-\beta_1),\
-\beta_0^2+2\alpha_1\beta_0-2\alpha_0\beta_1,
\gamma_3(2\alpha_2-\beta_1),\ \gamma_3(\alpha_1-\beta_0)\}\subset {\bf R};
\end{eqnarray*}
\begin{eqnarray*}
\hat H[y]&=& \partial_y^2 + \Biggl\{{{\textstyle \alpha_2} \over {\textstyle 2}}-
{{\textstyle 3(2\alpha_2x+\alpha_1)^2} \over {\textstyle 16(\alpha_2x^2+\alpha_1x+\alpha_0)}}
+{{\textstyle 1} \over {\textstyle 4(\alpha_2x^2+\alpha_1x+\alpha_0)}}\lbrace
\beta_1(2\alpha_2-\beta_1)x^2\\
&&+2\beta_0 (2\alpha_2-\beta_1)x-\beta_0^2+
2\alpha_1\beta_0-2\beta_1\alpha_0-\gamma_3^2\\
&&+\left[2\delta\alpha_2x^2+2x((2\alpha_2-\beta_1)\gamma_3+\delta\alpha_1)+
2(\alpha_1-\beta_0)\gamma_3+2\delta\alpha_0\right]\sigma_3
\rbrace\left.\Biggr\}\right|_{x=f^{-1}(y)},\\
\Lambda&=&1 .
\end{eqnarray*}
{\bf Case 5.}\ $\alpha_2=0,\ \beta_2\not =0$ and
\begin{eqnarray*}
&&\alpha_1\not =0,\ \gamma_1^2-\gamma_2^2<0,\ \tilde\gamma_i^2<0,\
\gamma_3= \displaystyle{{\tilde\gamma_i^2}\over{2\alpha_1}},\\
&&\{\beta_0,\ \beta_1,\ \beta_2,\ \gamma_2,\
\delta(\gamma_2^2-\gamma_1^2)+2\beta_2\gamma_1\gamma_3\}\subset {\bf R},\\
&&\{i(2\alpha_0\beta_2\gamma_3-\beta_1\tilde\gamma_i^2+2\beta_2\alpha_1\gamma_1
+\delta\alpha_1\gamma_3),\
i((\alpha_1-\beta_0)\tilde\gamma_i^2+2\beta_2\alpha_0\gamma_1
+\delta\alpha_0\gamma_3)\} \subset {\bf R};
\end{eqnarray*}
\begin{eqnarray*}
\hat H[y]&=& \partial_y^2 + \Biggl\{- {{\textstyle 3\alpha_1^2}
\over {\textstyle 16(\alpha_1x+\alpha_0)}}+
{{\textstyle 1} \over {\textstyle 4(\alpha_1x+\alpha_0)}}\Biggl\{
-\beta_2^2x^4-2\beta_1\beta_2x^3\\
&&+\left[(2-4m)\alpha_1\beta_2-\beta_1^2-2\beta_0\beta_2\right] x^2-
\left[ 2\beta_0\beta_1+4m\alpha_0\beta_2\right]x\\
&&+2\alpha_1\beta_0-2\alpha_0\beta_1-\beta_0^2-\tilde\gamma_i^2+
4x(\alpha_1x+\alpha_0)\Biggl[\beta_2\sqrt{\gamma_2^2-\gamma_1^2}\sigma_1
{{\textstyle \sin\left(\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)} \over {\textstyle
\sqrt{-\tilde\gamma_i^2}}}\\
&&+{{\textstyle \beta_2\sqrt{(\gamma_1^2-\gamma_2^2)\tilde\gamma_i^2}} \over {\textstyle
\tilde\gamma_i^2}}\sigma_3\cos\left(\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)
\Biggr]+2(\alpha_1x+\alpha_0)\\
&&\displaystyle \times\Biggl[\left({{\delta(\gamma_2^2-\gamma_1^2)+2\beta_2\gamma_1\gamma_3}
\displaystyle \over {\sqrt{\gamma_2^2-\gamma_1^2}}}\sigma_1-
\displaystyle {{2\beta_2\gamma_2\tilde\gamma_i^2} \over
\displaystyle {\sqrt{(\gamma_1^2-\gamma_2^2)\tilde\gamma_i^2}}}\sigma_3\right)
{{\textstyle \sin\left(\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)} \over {\textstyle
\sqrt{-\tilde\gamma_i^2}}}\\
&&+\displaystyle \left({{2\beta_2\gamma_2} \over
\displaystyle {\sqrt{\gamma_2^2-\gamma_1^2}}}\sigma_1+
\displaystyle {{\delta(\gamma_1^2-\gamma_2^2)-2\beta_2\gamma_1\gamma_3}
\displaystyle \over
{\sqrt{(\gamma_1^2-\gamma_2^2)\tilde\gamma_i^2}}}\sigma_3\right) \cos\left(
\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)\Biggr]\\
&&+\Biggl[x{{\textstyle 4\alpha_0\beta_2\gamma_3-2\beta_1\tilde\gamma_i^2+
4\alpha_1\beta_2\gamma_1+2\delta\alpha_1\gamma_3} \over {\textstyle
\tilde\gamma_i^2}}\\
&&+{{\textstyle (2\alpha_1-2\beta_0)\tilde\gamma_i^2+4\alpha_0\beta_2\gamma_1+2\delta
\alpha_0\gamma_3} \over {\textstyle
\tilde\gamma_i^2}}\Biggr]\left(-i\sqrt{-\tilde\gamma_i^2}\sigma_2\right)
\Biggr\}\left.\Biggr\}\right|_{x=f^{-1}(y)},\\
\Lambda&=&\Lambda_{21}(i\gamma_1,\gamma_2)\Lambda_{23}\left(-i\gamma_3
\sqrt{\gamma_2^2-\gamma_1^2},\gamma_1^2-\gamma_2^2\right).
\end{eqnarray*}
{\bf Case 6.}\ $\alpha_2=0,\ \beta_2=0$.
\vspace{2mm}
\noindent
{\bf Subcase 6.1.}\ $\delta\not =0$, $\tilde\gamma_1, \tilde\gamma_2$ do not vanish
simultaneously and
\begin{eqnarray*}
&&\tilde\gamma_i^2<0,\ \delta^2(\gamma_1^2-\gamma_2^2)<0,\ \{\beta_0,\
\beta_1\} \subset {\bf R},\\
&&\{i(-\beta_1\tilde\gamma_i^2+\delta\alpha_1\gamma_3),\
i((\alpha_1-\beta_0)\tilde\gamma_i^2+\delta\alpha_0\gamma_3)\}\subset {\bf R};
\end{eqnarray*}
\begin{eqnarray*}
\hat H[y]&=& \partial_y^2 + \Biggl\{-{{\textstyle 3\alpha_1^2}
\over {\textstyle 16(\alpha_1x+\alpha_0)}}
+{{\textstyle 1} \over {\textstyle 4(\alpha_1x+\alpha_0)}}\Biggl\{
-\beta_1^2x^2-2\beta_0\beta_1x+2\alpha_1\beta_0\\
&&-2\alpha_0\beta_1-\beta_0^2-\tilde\gamma_i^2
+\displaystyle 2(\alpha_1x+\alpha_0)\Biggl[\delta\sqrt{\gamma_2^2-\gamma_1^2}\sigma_1
\displaystyle {{\textstyle \sin\left(\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)} \over {\textstyle
\displaystyle \sqrt{-\tilde\gamma_i^2}}}\\
&&+\displaystyle {{\delta(\gamma_1^2-\gamma_2^2)} \displaystyle \over
\displaystyle {\sqrt{(\gamma_1^2-\gamma_2^2)\tilde\gamma_i^2}}}\sigma_3\cos\left(
\theta(y)\sqrt{-\tilde\gamma_i^2}\;\right)\Biggr]+
\displaystyle \Biggl[x{{\textstyle -2\beta_1\tilde\gamma_i^2+2\delta\alpha_1\gamma_3} \over {\textstyle
\displaystyle \tilde\gamma_i^2}}\\
&&+{{\textstyle (2\alpha_1-2\beta_0)\tilde\gamma_i^2+2\delta
\alpha_0\gamma_3} \over {\textstyle
\tilde\gamma_i^2}}\Biggr]\left(-i\sqrt{-\tilde\gamma_i^2}\sigma_2\right)\Biggr\}
\left.\Biggr\}\right|_{x=f^{-1}(y)},\\
\Lambda&=&\Lambda_{21}(i\gamma_1,\gamma_2)\Lambda_{23}\left(-i\gamma_3
\sqrt{\gamma_2^2-\gamma_1^2},\gamma_1^2-\gamma_2^2\right).
\end{eqnarray*}
{\bf Subcase 6.2.}
\begin{eqnarray*}
&&\gamma_1=\gamma_2=0,\ \gamma_3\not =0,\ \{\beta_1^2,\
\beta_0\beta_1\}\subset {\bf R},\\
&&\{-\beta_1\gamma_3+\delta\alpha_1,\ (\alpha_1-\beta_0)\gamma_3
+\delta\alpha_0,\ -\beta_0^2+2\alpha_1\beta_0-2\alpha_0\beta_1\}
\subset {\bf R};
\end{eqnarray*}
\begin{eqnarray*}
\hat H[y]&=& \partial_y^2 + \Biggl\{-{{\textstyle 3\alpha_1^2}
\over {\textstyle 16(\alpha_1x+\alpha_0)}}\\
&&+{{\textstyle 1} \over {\textstyle 4(\alpha_1x+\alpha_0)}}\lbrace
-\beta_1^2x^2-2\beta_0\beta_1x+
2\alpha_1\beta_0-2\alpha_0\beta_1-\beta_0^2-\gamma_3^2\\
&&+2(\alpha_1x+\alpha_0)[2x\beta_1(\alpha_1-\gamma_3)+2(\alpha_1-\beta_0)
\gamma_3+2\beta_1\alpha_0]\sigma_3
\rbrace\left.\Biggr\}\right|_{x=f^{-1}(y)},\\
\Lambda&=&1.
\end{eqnarray*}
In the above formulae we denote the inverse of the function
\begin{equation}\label{34}
y=f(x)\equiv\int\displaystyle {{dx}\over{\sqrt{\alpha_2x^2+\alpha_1x+\alpha_0}}},
\end{equation}
as $f^{-1}(y)$; moreover, the function $\theta=\theta(y)$ is
defined as follows
\begin{equation} \label{38}
\theta(y)=-\left.\left\{\int{{dx}\over{\alpha_2x^2+\alpha_1x+\alpha_0}}
\right\} \right|_{x=f^{-1}(y)},
\end{equation}
and $\tilde\gamma_i^2$ stands for $\tilde\gamma_1^2 + \tilde \gamma_2^2
+ \tilde \gamma_3^2$.
The whole procedure of derivation of the above formulae is very
cumbersome. That is why we restrict ourselves to indicating the
principal steps of the derivation of the corresponding formulae for the
case when $\alpha_2\not =0, \beta_2\not =0$, omitting the secondary
details. It is not difficult to prove that $\tilde \gamma_i^2\not = 0$.
Indeed, suppose that the relation $\tilde \gamma_i^2 = 0$ holds and
consider the expression $\Omega =U^{-1}(x)\sigma_3U(x)$ from (\ref{20}).
Making use of the Campbell-Hausdorff formula we get
\[
\Omega =\sigma_3+\theta(i \gamma_1\sigma_2+\gamma_2\sigma_1)-
{{\textstyle \theta^2} \over {\textstyle 2}} \gamma_3\tilde\gamma_i\sigma_i,
\]
where $\theta$ is the function (\ref{38}). Considering the coefficient
at $\theta^2$ yields that $\gamma_3=0$ (otherwise, using Lemma 2 we get
the inequality $\gamma_3^2\tilde\gamma_i^2\not = 0$, which contradicts
the assumption $\tilde\gamma_i^2=0$). Since the matrix coefficient at
$\theta$ has to be Hermitian, we get
$\tilde\gamma_i^2=\gamma_1^2-\gamma_2^2<0$. This contradiction proves
that $\tilde\gamma_i^2\not =0$. Taking into account the proved fact we
represent the matrix potential (\ref{20}) as follows
\begin{eqnarray}
V(y)&=&\Biggl\{{{\textstyle \alpha_2} \over {\textstyle 2}}-{{\textstyle 3(2\alpha_2x+\alpha_1)^2}
\over {\textstyle 16(\alpha_2x^2+\alpha_1x+\alpha_0)}}\nonumber\\
&&+{{\textstyle 1} \over {\textstyle 4(\alpha_2x^2+\alpha_1x+\alpha_0)}}\Lambda^{-1}\Biggl\{
-\beta_2^2x^4-[2\beta_1\beta_2+4\alpha_2\beta_2(m-1)]x^3\nonumber\\
&&+\left[ 2\alpha_2\beta_1-2\alpha_1\beta_2-\beta_1^2-2\beta_0\beta_2-
4\alpha_1\beta_2(m-1)\right] x^2\nonumber\\
&&+\left[ 4\alpha_2\beta_0-2\beta_0\beta_1-4m\alpha_0\beta_2\right]
x+2\alpha_1\beta_0-2\alpha_0\beta_1-\beta_0^2-\tilde\gamma_i^2\nonumber\\
&&+4x(\alpha_2x^2+\alpha_1x+\alpha_0)\Biggl[\beta_2\gamma_3
(\tilde\gamma_i^2)^{-1}
\tilde\gamma_i\sigma_i+\beta_2(\gamma_2\sigma_1+i\gamma_1\sigma_2)
(\tilde\gamma_i^2)^{-1/2}
\sinh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)\nonumber\\
&&+[\beta_2(-\gamma_1\gamma_3\sigma_1-i\gamma_2\gamma_3\sigma_2+
(\gamma_1^2-\gamma_2^2)\sigma_3)]
(\tilde\gamma_i^2)^{-1}\cosh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)\Biggr]
\label{37}\\
&&+2(\alpha_2x^2+\alpha_1x+\alpha_0)\Biggl[(\delta\gamma_2\sigma_1+
i(\delta\gamma_1-2\beta_2\gamma_3)\sigma_2-
2\beta_2\gamma_2\sigma_3)(\tilde\gamma_i^2)^{-1/2}
\sinh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)\nonumber\\
&&+[(2\beta_2(\gamma_3^2-\gamma_2^2)-\delta\gamma_1\gamma_3)\sigma_1-
i(2\beta_2\gamma_1\gamma_2+\delta\gamma_2\gamma_3)\sigma_2+(\delta
(\gamma_1^2-\gamma_2^2)-2\beta_2\gamma_1\gamma_3)\sigma_3]\nonumber\\
&&(\tilde\gamma_i^2)^{-1}\cosh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)
\Biggr]
+[(-2\beta_2\tilde\gamma_i^2+4\alpha_2\beta_2\gamma_1+2\delta
\alpha_2\gamma_3)x^2+
((4\alpha_2-2\beta_1)\tilde\gamma_i^2\nonumber\\
&&+4\alpha_1\beta_2\gamma_1+2\delta
\alpha_1\gamma_3)x
+{{\textstyle (2\alpha_1-2\beta_0)\tilde\gamma_i^2+4\alpha_0\beta_2\gamma_1+2\delta
\alpha_0\gamma_3} \over {\textstyle \tilde\gamma_i^2}}](\tilde\gamma_i^2)^{-1}\tilde\gamma_i
\sigma_i\Biggr\}\Lambda \left.\Biggr\}\right|_{x=f^{-1}(y)},\nonumber
\end{eqnarray}
where $\theta=\theta(y)$ is given by (\ref{38}).
Let us first suppose that $\gamma_1, \gamma_2$ do not vanish
simultaneously. We will prove that this implies
$\tilde\gamma_i^2\in{\bf R}$. Consider the (non-zero) matrix coefficient
at $4x\xi\cosh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)$ in the
expression (\ref{37}) and suppose that $\sqrt{\tilde\gamma_i^2}=a+ib$,
with some non-zero real numbers $a$ and $b$. Now it is easy to prove that
$\cosh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)=f(x)+ig(x)$, where
$f, g$ are linearly-independent real-valued functions. Considering the
matrix coefficients of $f(x)$, $g(x)$ we see that in order to reduce
the matrix (\ref{37}) to a Hermitian form we should reduce to Hermitian
forms the matrices $A$, $iA$ which is impossible. This contradiction
proves that $\tilde\gamma_i^2\in{\bf R}$.
Consider next the non-zero matrix coefficients of $4x\xi{{\textstyle
\sinh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)} \over {\textstyle
\sqrt{\tilde\gamma_i^2}}},
4x\xi\cosh\left(\theta\sqrt{\tilde\gamma_i^2}\;\right)$ in (\ref{37}).
These coefficients can be represented in the form $\vec a \vec \sigma,\
\vec b \vec \sigma$, where
\[
\vec a=\beta_2(\gamma_2,i\gamma_1,0),\ \vec b=\beta_2(-\gamma_1\gamma_3,-i
\gamma_2\gamma_3,\gamma_1^2-\gamma_2^2),
\]
and, what is more,
\[
\vec a\times\vec b=\beta_2^2(\gamma_1^2-\gamma_2^2)(i\gamma_1,-\gamma_2,i
\gamma_3).
\]
Applying Lemma 2 yields
\[
\beta_i\in{\bf R},\ \ \gamma_3=i\mu ,\ \mu\in{\bf R},\ \
\gamma_1^2-\gamma_2^2<0.
\]
Next we turn to the matrix coefficient of $2\xi{{\textstyle \sinh\left(\theta
\sqrt{\tilde\gamma_i^2}\;\right)} \over {\textstyle \sqrt{\tilde\gamma_i^2}}}$ which is
of the form $\vec c\vec \sigma$ with $\vec
c=(\delta\gamma_2,i(\delta\gamma_1-
2\beta_2\gamma_3),-2\beta_2\gamma_2)$. Making use of the assertion 3
of Lemma 2 we obtain the conditions
\[
\{\gamma_1,\ \gamma_2\}\subset{\bf R},\ \ [\gamma_1=0]\bigvee[\gamma_3=0].
\]
Considering in a similar way the matrix coefficient of $2\xi\cosh\left(\theta
\sqrt{\tilde\gamma_i^2}\;\right)$ yields the following restrictions on
the coefficients $\vec \alpha, \vec \beta, \vec \gamma$:
\[
\Biggl[\{\beta_2,\gamma_1\}\subset{\bf{R}},\ \gamma_3=0, \gamma_2=
\sqrt{\gamma_1^2-2\alpha_2\gamma_1},\ \alpha_2\gamma_1<0,\
\beta_1=2\alpha_2+\beta_2{{\textstyle \alpha_1} \over {\textstyle \alpha_2}},\
\beta_0=\alpha_1+\beta_2{{\textstyle \alpha_0} \over {\textstyle \alpha_2}}\Biggr].
\]
As a result we get the formulae of Case 2.
One can prove in an analogous way that, provided $\gamma_1=\gamma_2=0, \gamma_3\not =0$,
the matrix (\ref{37}) cannot be reduced to a Hermitian form.
\section*{V. Some examples}
In this section we give several examples of Hermitian QES matrix Schr\"odinger
operators that have a comparatively simple form and, furthermore, are
in direct analogy to QES scalar Schr\"odinger operators.
\vspace{2mm}
\noindent
{\em Example 1.}\ Let us consider Case I of the previous section with
$\alpha_0=\beta_2=1$, the remaining coefficients being equal to zero.
This choice of parameters yields the following Hermitian QES matrix
Schr\"odinger operator:
\begin{equation}\label{58}
\hat H[y]=\partial_y^2-{{\textstyle 1} \over {\textstyle 4}}y^4-my+\sigma_3y+\sigma_1.
\end{equation}
The invariant space ${\cal I}$ of the above Schr\"odinger
operator has the dimension $2m$ and is spanned by the
vectors
\[
\vec f_j=\exp \left({{\textstyle y^3} \over {\textstyle 6}}\right)\vec e_1y^j,\quad
\vec g_k=\exp \left({{\textstyle y^3} \over {\textstyle 6}}\right)(m\vec e_2y^k-k\vec
e_1y^{k-1}),
\]
where $j=0,\ldots,m-2, k=0,\ldots, m$, $\vec e_1=(1,0)^T, \vec e_2=(0,1)^T$
and $m$ is an arbitrary natural number.
Note that the basis vectors of the invariant space $\cal I$ are
square-integrable on an interval $(-\infty, B]$ with an arbitrary
$B<+\infty$. It is also worth noting that there exists a QES scalar model
of the same structure that has analogous properties \cite{ush93}.
By construction, the QES operator (\ref{58}), when restricted to the
invariant space $\cal I$, becomes a complex $2m\times 2m$ matrix
$M$. However, the fact that operator (\ref{58}) is Hermitian does not
guarantee that the matrix $M$ will be Hermitian. It is straightforward
to check that the necessary and sufficient conditions of hermiticity of
the matrix $M$ read as
\begin{itemize}
\item{basis vectors $\vec f_j(y), \vec g_k(y)$ are square integrable
on the interval $[A, B]$,}
\item{the condition
\begin{equation}\label{57}
(\partial_y\vec r_j(y))\vec r_k(y)-\vec r_j(y)
(\partial_y\vec r_k(y))|_A^B=0,
\end{equation}
where $\vec r_i=\vec f_i,\ i=0,\ldots, m-2$ and $\vec r_i=\vec g_{i-m+1},\
i=m-1,\ldots,2m-1$, holds $\forall j,k=0,\ldots,2m-1$.}
\end{itemize}
In the case considered relation (\ref{57}) does not hold and,
consequently, the matrix $M$ is not Hermitian. The next two examples are
free of this drawback, since the basis vectors of their invariant spaces
are square integrable on the interval $(-\infty, +\infty)$.
\vspace{2mm}
\noindent
{\em Example 2.}\ Let us now consider Case I of the previous section
with $\alpha_1=1, \beta_2=-1, \beta_0=1/2$, the remaining coefficients
being equal to zero. This choice yields the following QES matrix
Schr\"odinger operator
\[
\hat H[y] =\partial_y^2 - {y^6\over 256} +{4m-1\over 16} y^2 -
{1\over 4}y^2 \sigma_3 -\sigma_1.
\]
The invariant space ${\cal I}$ of this operator has the dimension
$2m$ and is spanned by the vectors
\begin{eqnarray*}
\displaystyle\vec f_j&=&\exp \left({{-y^4} \over {64}}\right)
\displaystyle\left({{y}\over{2}}\right)^{2j}\vec e_1,\\
\displaystyle\vec g_k&=&\exp \left({{-y^4} \over
{64}}\right)\left(m\left({{y}\over{2}}\right)^{2k}\vec e_2
\displaystyle -k\left({{y}\over{2}}\right)^{2k-2}\vec e_1\right),
\end{eqnarray*}
where $j=0,\ldots, m-2,\ k=0,\ldots,m$.
It is not difficult to verify that the basis vectors of the invariant
space $\cal I$ are square integrable on the interval $(-\infty
,+\infty)$ and that the corresponding matrix $M$ is Hermitian.
One more remark is that there exists an analogous QES scalar
Schr\"odinger operator whose invariant space has square integrable
basis vectors (see, for more details \cite{tur88,ush88}).
\vspace{2mm}
\noindent
{\em Example 3.}\ Let us now consider Case III of the previous section
with $\alpha_2=1, \beta_2=-1, \gamma_1=-1$. This choice of parameters
yields the following QES matrix Schr\"odinger operator:
\begin{eqnarray*}
\hat H[y] &=& \partial_y^2 -\displaystyle{{1}\over{4}}-{{1}\over{4}}\exp(-2y)+
m\exp(-y)+{{1}\over{2}} \exp(2y)\\
&&+\displaystyle \left[m{{\sqrt 3+1}\over{2}}\sin(\sqrt 2e^y)- \displaystyle{{\sqrt
6}\over{2}}\cos(\sqrt 2e^y)-\exp(-y)\sin(\sqrt 2e^y)\right]\sigma_1\\
&&+\displaystyle \left[m{{\sqrt 3+1}\over{2}}\cos(\sqrt 2e^y)+ \displaystyle{{\sqrt
6}\over{2}}\sin(\sqrt 2e^y)-\exp(-y)\cos(\sqrt 2e^y)\right]\sigma_3.
\end{eqnarray*}
Furthermore, the invariant space ${\cal I}$ of this operator has the
dimension $2m$ and is spanned by the vectors
\begin{eqnarray*}
\vec f_j&=&U^{-1}(y)\exp(-jy)\vec e_1,\\
\vec g_k&=&U^{-1}(y)\left(m\exp(-ky)\vec e_2-
k\exp(-(k-1)y)\vec e_1\right),
\end{eqnarray*}
where $j=0,\ldots, m-2, k=0,\ldots, m$, $m$ is an arbitrary
natural number and
\begin{eqnarray*}
U^{-1}(y)&=&{{1}\over{2\sqrt 2}}\exp\left(-{{y}\over{2}}\right)
\displaystyle\exp\left(-{{1}\over{2}}e^{-y}\right)\\
&&\times (\sqrt 3+\sqrt 2-\sigma_3)\left[\cos(\sqrt 2e^y)+
\displaystyle {{i\sqrt 3\sigma_2-\sigma_1}\over{\sqrt 2}}\sin(\sqrt
2e^y)\right].
\end{eqnarray*}
The basis vectors of the invariant space $\cal I$ are square
integrable and the condition (\ref{57}) holds. Indeed, the functions
$\vec f_j(y)$ and $\vec g_k(y)$ behave asymptotically as
$\exp\left(-{{(2j+1)y}\over{2}}\right)$ and
$\exp\left(-{{(2k+1)y}\over{2}}\right)$, correspondingly, with $y\to
+\infty$. Furthermore, they behave as $\exp\left(-{{(2j+1)y}
\over{2}}\right)\exp\left(-{{1}\over{2}}e^{-y}\right)$ and
$\exp\left(-{{(2k+1)y}\over{2}}\right)
\exp\left(-{{1}\over{2}}e^{-y}\right)$, correspondingly, with $y\to
-\infty$. This means that they vanish rapidly provided $y\to \pm
\infty$. \vspace{2mm}
\noindent
{\em Example 4.}\ The last example to be presented here is the QES
matrix model having a potential containing the Weierstrass function. To
this end we consider the whole set of operators (\ref{16}) and compose
the Hamiltonian
\[
H[x]=D_2 + A_1 +2B_2.
\]
Reducing $H[x]$ to the Schr\"odinger form yields the following QES
matrix model:
\begin{eqnarray*}
\hat H[y] &=& \partial_y^2 + (m-m^2-1)w(y) -
{3(w(y)^2-1)^2\over 16(w(y)^3+w(y))}\\
&&+{2m-1\over w(y)^2+1}\left(2\sigma_1 + (w(y)^3+3w(y))\sigma_3\right).
\end{eqnarray*}
Here $m$ is an arbitrary natural number and $w(y)$ is the
Weierstrass function defined by the quadrature
\[
y=\int\limits_0^{w(y)}{dx\over \sqrt{x^3+x}}.
\]
The invariant space ${\cal I}$ of the operator $\hat H[y]$ has the
dimension $2m$ and is spanned by the vectors
\begin{eqnarray*}
\vec f_j&=&(w(y)^3 +w(y))^{-\frac{1}{4}}(1-i\sigma_2 w(y))
\exp(-jy)\vec e_1,\\
\vec g_k&=&(w(y)^3 +w(y))^{-\frac{1}{4}}(1-i\sigma_2 w(y))
\left(m\exp(-ky)\vec e_2\right.\\
&&\left.-k\exp(-(k-1)y)\vec e_1\right),
\end{eqnarray*}
where $j=0,\ldots, m-2, k=0,\ldots, m$.
Note that the first example of a scalar QES model with an elliptic
potential was constructed by Ushveridze (see \cite{ush93}
and the references therein).
\section*{VI. Some conclusions}
A principal aim of the paper is to give a systematic algebraic treatment
of Hermitian QES Hamiltonians within the framework of the approach to
constructing QES matrix models suggested in our papers
\cite{zhd97a,zhd97b}. The whole procedure is based on a specific
representation of the algebra $o(2,2)$ given by formulae (\ref{11}),
(\ref{13}), (\ref{14}). Making use of the fact that the algebra
(\ref{14}) has an infinite-dimensional invariant subspace (\ref{12}) we
have constructed in a systematic way six multi-parameter families of
Hermitian QES Hamiltonians on the line. Due to computational reasons we do
not present here a systematic description of Hermitian QES Hamiltonians
with potentials depending on elliptic functions (we give only an example
of such a Hamiltonian in Section V).
The problem of constructing all Hermitian QES Hamiltonians of the form
(\ref{17}) having square-integrable eigenfunctions is also beyond the
scope of the present paper. We restricted our analysis of this problem
to giving two examples of such Hamiltonians, postponing its further
investigation to our future publications.
A very interesting problem is a comparison of the results of the present
paper, based on the structure of the representation space of the representation
(\ref{11}), (\ref{13}), (\ref{14}) of the Lie algebra $o(2,2)$ to those
of the paper \cite{fgr97}, where some superalgebras of
matrix-differential operators come into play. The link to the results of
\cite{fgr97} is provided by the fact that the algebra $o(2,2)$ has a
structure of a superalgebra. This is a consequence of the fact that
operators (\ref{14}) fulfill identities (\ref{15}).
One more challenging problem is the utilization of the obtained results
for integrating the multi-dimensional Pauli equation with the help of the
method of separation of variables. An intermediate problem to be
solved within the framework of the method in question is the reduction of
the Pauli equation to four second-order systems of ordinary differential
equations with the help of a separation Ansatz. The next step is
studying whether the corresponding matrix-differential operators belong
to one of the six classes of QES Hamiltonians constructed in Section IV.
Investigation of the above enumerated problems is in progress now and we
hope to report the results obtained in one of our future publications.
\section*{Acknowledgments}
This work is partially supported by the DFFD Foundation of Ukraine
(project 1.4/356).
\section{Introduction}
\vskip1ex
A \textit{Lie type algebra} is by definition an algebra $L$ over a field
$\Bbb F$ with product $[\,,\,]$ satisfying the following property:
for all elements $a, b, c \in L$ there exist $\alpha, \beta \in
\Bbb F$ such that $\alpha\ne 0$ and
\begin{equation}\label{tozh}
[[a,b],c]=\alpha [a,[b,c]]+\beta[[a,c],b].\end{equation} Note that
in general $\alpha, \beta$ depend on elements $a,b,c\in L$; they
can be viewed as functions $\alpha, \beta: L\times L \times L
\rightarrow \Bbb F$.
\vskip1ex If a group $G$ acts on an algebra $L$, we denote by
$$C_L(G)=\{l\in L \,\,\,\mid \,\,\, l^{\varphi}=l \mathrm{\,\,\,
for\, all\,\, } \varphi\in G\}$$ the fixed-point subalgebra of
$G$.
\vskip1ex
By a theorem of Khukhro, Makarenko and Shumyatsky~\cite{khu-ma-shu}, if a Lie algebra $L$ over a field admits a Frobenius group of automorphisms $FH$ with cyclic
kernel $F$ and complement $H$ such that the fixed-point subalgebra
of $F$ is trivial and the fixed-point subalgebra of $H$ is
nilpotent of class $c$, then $L$ is nilpotent and the nilpotency
class of $L$ is bounded in terms of $|H|$ and $c$.
\vskip1ex The aim of the present paper is to extend this result to
the class of Lie type algebras, which includes in particular Lie
algebras, associative algebras and Leibniz algebras.
\vskip1ex
\begin{theorem}\label{th-1}
Suppose that a Lie type algebra $L$ \/$($of possibly infinite dimension\/$)$ over an arbitrary field $\Bbb K$ admits a Frobenius group of automorphisms $FH$ with cyclic
kernel $F$ of order $n$ and complement $H$ of order $q$ such that
the fixed-point subalgebra
of $H$ is nilpotent of class $c$ and the fixed-point subalgebra
of $F$ is trivial. Assume also that either $\Bbb K$ contains a
primitive $n$th root of unity, or $\alpha, \beta$ in property
(\ref{tozh}) are constants and do not depend on the choice of
$a,b,c$. Then $L$ is nilpotent and the nilpotency class of $L$ is
bounded in terms of $q$ and $c$.
\end{theorem}
\vskip1ex
The proof of the theorem is heavily based on the arguments of~\cite{khu-ma-shu}. Since the demonstrations in~\cite{khu-ma-shu}
do not use structure theory and all calculations are founded on
combinatorial reasoning in a graded Lie algebra, most of the
lemmas can be easily adapted to a graded Lie
type algebra by replacing the Jacobi identity with property~(\ref{tozh}). We give them without proof. However, unlike Lie
algebras, Lie type algebras lack the anticommutativity identity,
and therefore some work is needed to overcome this difficulty.
\vskip1ex
We now state a straightforward corollary of Theorem \ref{th-1}.
\vskip1ex
A \textit{\/$($right\/$)$ Leibniz
algebra} or \textit{Loday algebra} is an algebra $L$ over a field
satisfying the Leibniz
identity $$[[a,b],c] = [a,[b,c]]+ [[a,c],b]$$ for all $a,b,c\in
L$.
\vskip1ex
\begin{corollary}\label{th1}
Suppose that a Leibniz algebra $L$ \/$($of possibly infinite dimension\/$)$ over an arbitrary field $\Bbb K$ admits a Frobenius group of automorphisms $FH$ with cyclic
kernel $F$ and complement $H$ of order $q$ such that the
fixed-point subalgebra
of $H$ is nilpotent of class $c$ and the fixed-point subalgebra
of $F$ is trivial. Then $L$ is nilpotent and the nilpotency class
of $L$ is bounded in terms of $q$ and $c$.
\end{corollary}
\vskip1ex Some preliminary definitions and facts are given in
\S\,2. In \S\,3 we prove an auxiliary result --- an analogue for
Lie type algebras of the Shalev--Kreknin theorem \cite{kr, shalev}
on graded Lie algebras with a small number of non-trivial components
(Proposition \ref{small-comp}). This result is used in \S\,5 to
prove the solvability of $L$ with bounded derived length. But before
that, we perform the reduction to graded algebras in \S\,4.
Finally, in \S\,6 we employ induction on the derived length to
prove the nilpotency of $L$ of bounded class.
\section{Preliminaries}
Let $L$ be a Lie type algebra. If $M,\, N$ are subspaces of
$L$, then $[M,N]$
denotes the subspace, generated by all the products $[m,n]$ for
$m\in M$, $n\in N$. In view of (\ref{tozh}), if $M$ and $N$ are
two-side ideals, then $[M,N]+[N,M]$ is also a two-side ideal; if
$H$ is a (sub)algebra, then $[H,H]$ is two-side ideal of $H$ and,
in particular, its subalgebra.
The subalgebra generated by subspaces~$U_1,U_2,\ldots, U_k$ is
denoted by $\left<U_1,U_2,\ldots, U_k\right>$, and the two-sided
ideal generated by~$U_1,U_2,\ldots, U_k$ is denoted by ${}_{\rm
id}\!\left<U_1, U_2,\ldots, U_k\right>$.
\vskip1ex
A simple product $[a_1,a_2,a_3,\ldots, a_s]$ is by
definition the left-normalized product
$$[...[[a_1,a_2],a_3],\ldots, a_s].$$ The analogous notation is also
used for subspaces
$$[A_1,A_2,A_3,\ldots, A_s]=[...[[A_1,A_2],A_3],\ldots, A_s].$$
\vskip1ex Since
\begin{equation}\label{tozh2} [a,[b,c]]=\frac{1}{\alpha}\,[[a,b],c]-\frac{\beta}{\alpha}\,[[a,c],b]\end{equation}
for all $a,b,c\in L$,
any (complex) product in elements in $L$ can be expressed as a linear combination of simple products of
the same length in the same elements. Also, it follows that the
(two-sided) ideal in $L$ generated by a subspace $S$ is the
subspace generated by all the simple products
$[x_{i_1},y_j,x_{i_2},\ldots, x_{i_t}]$ and $[y_j,x_{i_1},
x_{i_2},\ldots, x_{i_t}]$, where $t\in \Bbb N$ and $x_{i_k}\in L,
y_j\in S$. In particular, if $L$ is generated by a subspace $M$,
then its space is spanned by simple products in elements of~$M$.
\vskip1ex
The derived series of an algebra $L$ is defined as
$$L^{(0)}=L, \qquad
L^{(i+1)}=[L^{(i)},L^{(i)}].$$ Then $L$ is solvable of derived
length at most $n$ if $L^{(n)}=0$.
\vskip1ex
Terms of the lower central series of $L$ are defined as $\gamma
_1(L)=L;$ \ $\gamma _{k+1}(L)=[\gamma _k(L),\,L]$. Then $L$ is
nilpotent of class at most $c$ if $\gamma _{c+1}(L)=0$.
\vskip1ex
We will need the following algebra analog of P.~Hall's theorem
\cite{hall}, which will help us in proving the nilpotency of a
solvable Lie type algebra.
\vskip1ex
\begin{lemma}\label{chao} Let $K$ be an ideal of a Lie
type algebra $L$. If $\gamma _{c+1}(L)\subseteq [K,K]$ and $\gamma
_{k+1}(K)=0$ then $$\gamma _{c{k+1 \choose 2} -{k \choose
2}+1}(L)=0.$$
\end{lemma}
\begin{proof} The proof can be easily obtained from the proof of P.~Hall's theorem
\cite{hall} by appropriate simplifications in our ``linear'' case.
\end{proof}
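For orientation, the bound of Lemma \ref{chao} grows quadratically in $k$ and is easy to tabulate. The sketch below is purely illustrative (the function name is ours); it also checks the degenerate case $c=k=1$, where $\gamma_2(L)\subseteq[K,K]=\gamma_2(K)=0$ forces $L$ to be abelian.

```python
from math import comb

def hall_bound(c, k):
    # Nilpotency class bound of the Hall-type lemma: if
    # gamma_{c+1}(L) <= [K,K] and gamma_{k+1}(K) = 0, then
    # gamma_B(L) = 0 with B = c*C(k+1,2) - C(k,2) + 1.
    return c * comb(k + 1, 2) - comb(k, 2) + 1

# c = k = 1: gamma_2(L) <= [K,K] = gamma_2(K) = 0, i.e. L is abelian,
# and the bound consistently gives gamma_2(L) = 0:
assert hall_bound(1, 1) == 2
# a larger sample value: c = 2, k = 3 yields gamma_10(L) = 0,
# i.e. L is nilpotent of class at most 9
assert hall_bound(2, 3) == 10
```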
\vskip1ex
An algebra $L$ over a field
is \textit{ $({\Bbb Z}/n{\Bbb Z})$-graded} if
$$L=\bigoplus_{i=0}^{n-1}L_i\qquad \text{ and }\qquad [L_i,L_j]\subseteq L_{i+j\,({\rm mod}\,n)},$$
where $L_i$ are subspaces of~$L$. Elements of $L_i$ are referred
to as \textit{homogeneous} and the subspaces $L_i$ are called
\textit{homogeneous components} or \textit{grading components}.
In particular, $L_0$ is called the zero component.
\vskip1ex
An additive subgroup $H$ of $L$ is called \textit{homogeneous} if
$H=\bigoplus_i (H\cap L_i)$ and we set $H_i=H\cap L_i$. Obviously,
any subalgebra or an ideal generated by homogeneous subspaces is
homogeneous. A homogeneous subalgebra can be regarded as a
$({\Bbb Z} /n{\Bbb Z} )$-graded algebra with the induced grading.
It follows that the terms of the derived series and of the lower
central series of $L$, the ideals $L^{(k)}$ and $\gamma
_{k}(L)$, are also $({\Bbb Z} /n{\Bbb Z} )$-graded algebras with
induced grading $$L^{(k)}_i=L^{(k)}\cap L_i,\,\,\,\,\,\gamma
_{k}(L)_i=\gamma _{k}(L)\cap L_i$$ and
$$L^{(k+1)}_i=\sum_{u+v\equiv i\,({\rm mod\,}n)
}[L^{(k)}_{u},\,L^{(k)}_{v}]$$ $$\gamma
_{k+1}(L)_i=\sum_{u+v\equiv i\,({\rm mod\,}n ) }\Big([\gamma
_{k}(L)_u,\,L_{v}]+[L_v,\,\gamma _{k}(L)_u]\Big)=\sum_{u+v\equiv
i\,({\rm mod\,}n ) }[\gamma _{k}(L)_u,\,L_{v}].$$ The last
equality in the above formula is due to \eqref{tozh2}.
\vskip1ex
\begin{Index Convention}
Henceforth a small Latin letter with an index $i\in \Bbb Z/n\Bbb
Z$ will denote a homogeneous element in the grading component
$L_i$, with the index only indicating which component this element
belongs to: $x_i\in L_i$. We will not be using numbering indices
for elements of the $L_i$, so that different elements can be
denoted by the same symbol when it only matters which component
the elements belong to. For example, $x_{i}$ and $x_{i}$ can be
different elements of $L_{i}$, so that $[x_{i},\, x_{i}]$ can be a
non-zero element of $L_{2i}$.
\end{Index Convention}
\vskip1ex
We use the abbreviation, say, ``$(m,n,\dots )$-bounded'' for ``bounded
above in terms of $m, n,\dots$''.
\section
{$(\Z/n\Z)$-graded Lie type algebras with a small number of
non-trivial homogeneous components}
By Kreknin's theorem \cite{kr} a $(\Z/n\Z)$-graded Lie algebra
$$L=\bigoplus_{i=0}^{n-1}L_i,$$ where
$[L_i,L_j]\subseteq L_{i+j\,({\rm mod}\,n)}$ and $L_0=0$, is
solvable of derived length at most $2^n-2$.
\vskip1ex In \cite{shalev}, in the framework of studying finite
groups of bounded rank with automorphisms, Shalev noticed an
interesting fact: if among $L_i$
there are only $d\leq n-1$ non-trivial
components, then the derived length does not depend on $n$, but
only on $d$. We will need an analog of this result for Lie type
algebras.
\begin{proposition}\label{small-comp} Let $$L=\bigoplus_{i=0}^{n-1}L_i$$ be a $(\Z/n\Z)$-graded Lie type algebra.
If $L_0=0$ and among $L_i$ there are only $d\leq n-1$ non-trivial
components, then $L$ is solvable of $d$-bounded derived length.
\end{proposition}
\begin{proof} Let $\Omega =\{ w_1,\ldots ,w_ {d}\}$ be the set of
all indices $i$ such that $L_i\ne 0$. We assume that
$0<w_1<w_2<\ldots<w_{d-1}<w_d<n$. We use the same arguments as in
the proof of Kreknin's theorem given in \cite[Theorem
4.3.1]{kh-book} replacing everywhere $i$ by $w_i$ and the Jacobi
identity by (\ref{tozh}). The assertion follows from the two
following inclusions
\begin{equation}\label{f-small-comp-1} L^{(2^{k-1})}\cap L_{w_k}
\subseteq \langle L_{w_{k+1}}, L_{w_{k+2}},\ldots,
L_{w_{d}}\rangle \end{equation}
\begin{equation}\label{f-small-comp-2}
L^{(2^{k}-1)}\subseteq \langle L_{w_{k+1}}, L_{w_{k+2}},\ldots,
L_{w_{d}}\rangle,\end{equation} which are proved simultaneously by
induction on $k$.
\vskip1ex We will also need the following elementary Lemma.
\begin{lemma} [{\cite[Lemma 4.3.5]{kh-book}}]\label{ntl} If $i + j \equiv k (\mathrm{mod}\, n)$ for $1 \leq i, j \leq n - 1$, then the numbers $i$ and $j$ are both greater
than $k$ or less than $k$.
\end{lemma}
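Lemma \ref{ntl} is a purely arithmetical statement and can be confirmed exhaustively for small moduli. The brute-force sketch below is illustrative only; the helper name and the range of $n$ tested are ours.

```python
def lemma_holds(n):
    # If i + j = k (mod n) with 1 <= i, j <= n - 1, then i and j are
    # both greater than k or both less than k.  (When i + j = n the
    # residue is k = 0 and both i, j exceed it; when i + j < n we have
    # k = i + j > i, j; when i + j > n we have k = i + j - n < i, j.)
    for i in range(1, n):
        for j in range(1, n):
            k = (i + j) % n
            if not ((i > k and j > k) or (i < k and j < k)):
                return False
    return True

assert all(lemma_holds(n) for n in range(2, 60))
```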
For $k=1$ the inclusions (\ref{f-small-comp-1}) and
(\ref{f-small-comp-2}) take the forms
$$L^{(1)}\cap L_{w_1}\subseteq \langle L_{w_2}, \ldots,
L_{w_{d}}\rangle,$$
$$L^{(1)}\subseteq \langle L_{w_2}, \ldots,
L_{w_{d}}\rangle.$$ The subspace $L^{(1)}\cap L_{w_1}$ is
generated by non-trivial products $[x_i, y_j]$ such that $x_i\in
L_i, \,\,y_j\in L_j$ with $i,j\in \Omega$ and $i+j\equiv w_1\,
(\mathrm{mod}\, n)$. By Lemma \ref{ntl} either $i,j>w_1$ or
$i,j<w_1$. Since there are no non-trivial components $L_i$ with
$i<w_1$, it follows that $i, j\in \{w_2, \ldots,w_{d}\}$. This
implies (\ref{f-small-comp-1}) and also (\ref{f-small-comp-2}) for
$k=1$.
\vskip1ex Now let $k>1$. We prove the inclusion
(\ref{f-small-comp-1}) using the induction hypothesis for the
inclusion (\ref{f-small-comp-2}). The subspace $L^{(2^{k-1})}\cap
L_{w_k}$ is generated by the products $[x_i,y_j]$ such that
$x_i\in L_i\cap L^{(2^{k-1}-1)}$, $y_j\in L_j\cap L^{(2^{k-1}-1)}$
with $i,j\in \Omega$ and $i+j\equiv w_k\, (\mathrm{mod}\, n)$. By
the induction hypothesis for (\ref{f-small-comp-2}), $y_j\in \langle
L_{w_{k}}, L_{w_{k+1}},\ldots, L_{w_{d}}\rangle$, and hence $y_j$
can be written as a linear combination of products of the form
$$[u_{j_1}, u_{j_2}, \ldots, u_{j_t}]$$ with $j_l\in \{w_k, \ldots, w_d\}$,
$j_1+j_2+\cdots+ j_t\equiv j\,(\mathrm{mod}\, n)$. Applying
repeatedly (\ref{tozh2}), we can represent
$$\big[x_i,[u_{j_1}, u_{j_2}, \ldots, u_{j_t}]\big]$$ as a linear
combination of products
\begin{equation}\label{f-small-comp-3} [x_i, v_{h_1},\ldots, v_{h_t}],\end{equation} where $h_l\in
\{w_k, \ldots, w_d\}$, $h_1+h_2+\cdots +h_t \equiv
j\,(\mathrm{mod}\, n)$. For each such product we have
$$i+h_1+h_2+\cdots +h_t\equiv w_k\,(\mathrm{mod}\, n).$$
If $h_t=w_k$, then $$i+h_1+h_2+\cdots +h_{t-1} \equiv
0\,(\mathrm{mod}\, n)$$ and hence (\ref{f-small-comp-3}) is equal
to zero. If $h_t>w_k$, then $i+h_1+h_2+\cdots +h_{t-1}>w_k$ by
Lemma \ref{ntl}, and consequently the product
(\ref{f-small-comp-3}) lies in $\langle L_{w_{k+1}},
L_{w_{k+2}},\ldots, L_{w_{d}}\rangle$. Note that the case
$h_t<w_k$ is impossible since all the $h_l$ belong to the set
$\{w_k, \ldots, w_d\}$. As $[x_i,y_j]$ is a linear combination of
products of the form (\ref{f-small-comp-3}) it follows that
$[x_i,y_j]$ also belongs to this subalgebra as required.
\vskip1ex To prove (\ref{f-small-comp-2}) for $k>1$ we apply (\ref{f-small-comp-2}) for $k-1$ to the
subalgebra $L^{(2^{k-1})}$:
$$L^{(2^{k}-1)}=(L^{(2^{k-1})})^{(2^{k-1}-1)}\subseteq \langle L^{(2^{k-1})}\cap
L_{w_k},\dots, L^{(2^{k-1})}\cap L_{w_{d}}\rangle.$$
As we have already proved above, the subspace $L^{(2^{k-1})}\cap
L_{w_k}$ lies in $\langle L_{w_{k+1}},\dots, L_{w_{d}}\rangle .$
Hence
$$L^{(2^{k}-1)}\subseteq \langle L_{w_{k+1}},\dots, L_{w_{d}}\rangle .$$
\end{proof}
\section{Reduction to graded algebras with ``selective nilpotency'' condition}
Let $L$ be a Lie type algebra that satisfies the hypothesis of
Theorem \ref{th-1} and let $\varphi$ be a generator and $n$ the
order of the Frobenius kernel $F$. If the ground field $\Bbb K$
contains a primitive $n$th root $\omega$ of 1, we consider the
eigenspaces $L_i=\{x\in L \mid x^{\varphi}=\omega^i x\}$ for the
eigenvalues $\omega^i$. One can verify that
$$[L_i, L_j]\subseteq L_{i+j\,(\rm{mod}\,n)}\qquad \text{and}\qquad L= \bigoplus _{i=0}^{n-1}L_i,$$
so this is a $(\Bbb Z /n\Bbb Z )$-grading. We also have
$L_0=C_L(F)=0$.
In the case where $\Bbb K$ does not contain a primitive $n$th
root of 1, by the hypothesis of Theorem \ref{th-1} the values
$\alpha, \beta$ in \eqref{tozh} are constant and do not
depend on $a,b,c$. We extend the ground field by $\omega$ and
denote the resulting algebra by $\widetilde L$. The group $FH$
acts in a natural way on $\widetilde L$ and this action inherits
the conditions that $C_{\widetilde L}(F)=0$ and $C_{\widetilde
L}(H)$ is nilpotent of class~$c$. Since $\alpha,\beta $ are
constant for all $a,b,c\in L$ in (\ref{tozh}), property
(\ref{tozh}) also holds in $\widetilde L$, and therefore
$\widetilde L$ is a Lie type algebra. Thus, we can assume that
$L=\widetilde L$ and the ground field contains~$\omega$.
\vskip1ex
A known property of Frobenius groups says that if the Frobenius
kernel is cyclic, then the Frobenius complement is also cyclic.
Let $h$ be a generator and $q$ the order of $H$ and let
$\varphi^{h^{-1}} = \varphi^{r}$ for some $1\leq r \leq n-1$.
Since by definition of the Frobenius group $C_H(f) = 1$ for every
non-identity $f$ in $F$, it follows that the numbers $n, q, r$
satisfy the following condition
\begin{equation} \label{prim}
\begin{split}
& n, q, r \text{ are positive integers such that } 1\leq r \leq n-1 \text{ and } \\
&\quad\text{the image of } r \text{ in } {\Bbb Z}/d{\Bbb Z} \text{
is a primitive } q \text{th root of } 1 \\ &\qquad
\qquad\qquad\text{for every divisor } d \text{ of }n.
\end{split}
\end{equation}
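Condition \eqref{prim} is easily tested for concrete triples. In the sketch below (helper names are ours; the trivial divisor $d=1$ is excluded), the triple $(n,q,r)=(7,3,2)$, arising from the Frobenius group of order $21$, satisfies it, while $r=6$ does not.

```python
def is_primitive_qth_root(r, q, d):
    # r is a primitive q-th root of 1 in Z/dZ:
    # r^q = 1 and r^a != 1 for 0 < a < q
    if pow(r, q, d) != 1:
        return False
    return all(pow(r, a, d) != 1 for a in range(1, q))

def satisfies_prim(n, q, r):
    # 1 <= r <= n - 1 and r is a primitive q-th root of 1 modulo
    # every divisor d > 1 of n
    if not 1 <= r <= n - 1:
        return False
    return all(is_primitive_qth_root(r, q, d)
               for d in range(2, n + 1) if n % d == 0)

assert satisfies_prim(7, 3, 2)       # Frobenius group of order 21
assert not satisfies_prim(7, 3, 6)   # 6 has order 2, not 3, mod 7
```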
The group $H$ permutes the components $L_i$: $L_i^h=L_{ri}$, since
if $x_i\in L_i$, then $(x_i^h)^{\varphi}=x_i^{h\varphi
h^{-1}h}=(x_i^{\varphi^r})^h=\omega^{ir}x_i^h.$
\vskip1ex We can assume that the characteristic $p$ of the ground
field does not divide $n=|F|$. In the opposite case, we consider
the Hall $p'$-subgroup $\langle f_1\rangle$ of $F$, the Sylow
$p$-subgroup $\langle f_2\rangle$ of $F$ and the $f_2$-invariant
subspace $C_L(f_1)$. If $C_L(f_1)$ is non-trivial, the
automorphism $f_2$ (as a $p$-automorphism acting on a space over a
field of characteristic $p$) necessarily has a non-trivial fixed
point on $C_L(f_1)$, which would also be a non-trivial fixed point
for $F$, contradicting our assumption. Thus $C_L(f_1)=0$ and we
can replace $F$ by $\langle f_1\rangle$, consider the Frobenius
group $\langle f_1\rangle H$ instead of $FH$ and, consequently,
assume that $p$ does not divide~$n$.
\vskip1ex In what follows, to simplify the notations, (under the
Index Convention) we will denote $(x_s)^{h^i}$ by $x_{r^is}$ for
$x_s\in L_s$. Let $x_{a_1},\dots,x_{a_{c+1}}$ be homogeneous
elements in $L_{a_1},\dots,L_{a_{c+1}}$, respectively. Consider
the sums
\begin{align*}
X_1&=x_{a_1}+x_{ra_1}+\cdots+x_{r^{q-1}a_1},\\
\vdots&\\
X_{c+1}&=x_{a_{c+1}}+x_{ra_{c+1}}+\cdots+x_{r^{q-1}a_{c+1}}.
\end{align*}
Since all of them lie in subalgebra $C_L(H)$, which is nilpotent
of class $c$, it follows that
$$[X_1,\ldots, X_{c+1}]=0.$$
We expand the expressions to obtain on the left a linear
combination of products in the $x_{r^ja_i}$, which in particular
involves the term $[x_{a_1},\ldots, x_{a_{c+1}}]$. Suppose that
the product $[x_{a_1},\ldots, x_{a_{c+1}}]$ is non-zero. Then
there must be other terms in the expanded expression that belong
to the same component $L_{a_1+\cdots+a_{c+1}}$; in other words,
$$a_{1}+\dots+a_{c+1}=r^{\alpha_1}a_{1}+\dots+r^{\alpha_{c+1}}a_{c+1}$$
for some $\alpha_i\in\{0,1,2,\dots,q-1\}$ not all of which are
zero. Equivalently, if for all $(\alpha_1,
\alpha_2,\ldots,\alpha_{c+1})\neq (0,0,\ldots, 0)$ with
$\alpha_i\in\{0,1,2,\dots,q-1\}$ we have
$$a_{1}+\dots+a_{c+1}\neq
r^{\alpha_1}a_{1}+\dots+r^{\alpha_{c+1}}a_{c+1},$$ then the
product $[x_{a_1},\ldots, x_{a_{c+1}}]$ is equal to zero.
\vskip1ex The above considerations lead to the following notion
that plays an important role in further arguments.
\begin{definition}
Let $a_1,\dots,a_k$ be not necessarily distinct non-zero elements
of $\Bbb Z/n\Bbb Z$. We say that the sequence $(a_1,\dots,a_k)$ is
\textit{$r$-dependent} if
$$a_{1}+\dots+a_{k}=r^{\alpha_1}a_{1}+\dots+r^{\alpha_k}a_{k}$$
for some $\alpha_i \in\{0,1,2,\dots,q-1\}$ not all of which are
zero. If the sequence $(a_1,\dots,a_k)$ is not $r$-dependent, i.e.
if for all $(\alpha_1, \alpha_2,\ldots,\alpha_{k})\neq (0,0,\ldots,
0)$ with $\alpha_i\in\{0,1,2,\dots,q-1\}$,
$$a_{1}+\dots+a_{k}\neq
r^{\alpha_1}a_{1}+\dots+r^{\alpha_{k}}a_{k},$$ we call it
\textit{$r$-independent}.
\end{definition}
\begin{remark}\label{remark1}
A single non-zero element $a\in \Bbb Z/n\Bbb Z$ is always
$r$-independent. Indeed, if $a=r^{\alpha}a$ for $\alpha
\in\{1,2,\dots,q-1\}$, then $a=0$ by \eqref{prim}.
\end{remark}
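To illustrate the definition and Remark~\ref{remark1}, the following toy computation uses the sample parameters $n=7$, $q=3$, $r=2$ (assumed here to satisfy \eqref{prim}; indeed $2^3=8\equiv 1$ and $2-1$, $2^2-1$ are invertible modulo $7$), exhibiting both an $r$-independent and an $r$-dependent sequence.

```latex
% Toy illustration with n = 7, q = 3, r = 2 (sample parameters,
% assumed to satisfy \eqref{prim}); the powers of r modulo 7 are 1, 2, 4.
\begin{align*}
&\text{$(1,1)$ is $r$-independent: for $(\alpha_1,\alpha_2)\neq(0,0)$,}\\
&\qquad 2^{\alpha_1}+2^{\alpha_2} \bmod 7 \in \{1,3,4,5,6\},
\ \text{which never equals } 1+1=2;\\
&\text{$(1,2,4)$ is $r$-dependent: taking } \alpha_1=\alpha_2=\alpha_3=1,\\
&\qquad 2\cdot 1+2\cdot 2+2\cdot 4=14\equiv 0\equiv 1+2+4 \pmod 7.
\end{align*}
```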
\begin{definition}
Let $n, q, r$ be integers defined by \eqref{prim}. We say that a $(\Z/n\Z)$-graded Lie type algebra $L$ satisfies
the \textit{selective $c$-nilpotency condition} if, under the
Index Convention,
\begin{equation}\label{select}
[x_{d_1},x_{d_2},\dots, x_{d_{c+1}}]=0\quad \text{ whenever }
(d_1,\dots,d_{c+1}) \text{ is $r$-independent}.
\end{equation}
\end{definition}
\begin{remark}\label{remark2}
If $c=0$, a $(\Z/n\Z)$-graded Lie type algebra $L$ satisfies the
\textit{selective $c$-nilpotency condition} if and only if $L_d=0$
for all $d\ne0$, since any element $d\ne 0$ is $r$-independent by
Remark~\ref{remark1}.
\end{remark}
Summarizing all of the above, we can assert that to prove
Theorem \ref{th1} it suffices to show that a $(\Z/n\Z)$-graded
Lie type algebra satisfying the selective $c$-nilpotency condition
is nilpotent of $(q,c)$-bounded class.
\section{Bounding the derived length}
In this section we suppose that $L$ is a $(\Z/n\Z)$-graded Lie
type algebra with $L_0=0$ that satisfies the selective
$c$-nilpotency condition (\ref{select}). Note that by
Proposition \ref{small-comp} the algebra $L$ is already
solvable of $n$-bounded derived length. We will obtain a bound on
the derived length that does not depend on $n$, but only
on $q=|H|$ and $c$.
\vskip1ex
We start with an elementary fact from \cite{khu-ma-shu} on
$r$-dependent sequences.
\begin{notation}
For a given $r$-independent sequence $(a_1,\dots,a_k) $ we denote
by $D(a_1,\dots,a_k)$ the set of all $j\in\Bbb Z/n\Bbb Z$ such
that $(a_1,\dots,a_k,j)$ is $r$-dependent.
\end{notation}
\begin{lemma} [{\cite[Lemma 4.4]{khu-ma-shu}}]\label{115}
If $(a_1,\dots,a_k)$ is $r$-independent, then
$$|D(a_1,\dots,a_k)|\leq q^{k+1}.$$
\end{lemma}
\vskip1ex
We now present a series of lemmas that were proved in \cite{khu-ma-shu} for Lie algebras but need only some minor modifications to be used for the case of Lie type algebras.
\vskip1ex
\begin{notation} The order of an element $b\in\Bbb Z/n\Bbb Z$ (in
the additive group) is denoted by $o(b)$.
\end{notation}
\begin{lemma}[{\cite[Lemma 4.6]{khu-ma-shu}}]\label{lb}
Suppose that a $(\Z/n\Z)$-graded Lie type algebra $L$ with
$L_0=0$ satisfies the selective $c$-nilpotency
condition~\eqref{select}. Let $b$ be an element of
$\Bbb{Z}/n\Bbb{Z}$ such that $o(b)>2^{2^{2q-3}-1}c^{2^{2q-3}}$.
Then there are at most $q^{c+1}$ elements $a\in\Bbb{Z}/n\Bbb{Z}$
such that $[L_a, \underbrace{L_b,\dots,L_b}_{c}] \neq 0$.
\end{lemma}
\begin{proof} The proof is exactly the same as that of
Lemma~4.6 in \cite{khu-ma-shu}.
\end{proof}
\begin{lemma}[{\cite[Lemma 4.7]{khu-ma-shu}}]\label{l_b}
Suppose that a $(\Z/n\Z)$-graded Lie type algebra $L$ with $L_0=0$
satisfies the selective $c$-nilpotency condition~\eqref{select}.
There is a $(c,q)$-bounded number $w$ such that
$$[
L,\underbrace{L_b,\dots,L_b}_{w}] =0$$ whenever $b$ is an element
of $\Bbb{Z}/n\Bbb{Z}$ such that
$o(b)>\max\{2^{2^{2q-3}-1}c^{2^{2q-3}},q^{c+1} \}$.
\end{lemma}
\begin{proof} See Lemma 4.7 in \cite{khu-ma-shu}.
\end{proof}
\begin{lemma}[{\cite[Lemma 4.8]{khu-ma-shu}}]\label{odin} Let $(d_1,\dots, d_c)$ be an arbitrary $r$-independent
sequence and $U=[u_{d_1},\dots,u_{d_c}]$ be a homogeneous product
with indices $(d_1,\dots, d_c)$ (under the Index Convention). Suppose
that a $(\Z/n\Z)$-graded Lie type algebra $L$ with $L_0=0$
satisfies the selective $c$-nilpotency condition~\eqref{select}.
Then
\begin{enumerate}[a)]
\item
every product of the form
\begin{equation}\label{eq4-1}
[U,x_{i_1},\dots,x_{i_t}]
\end{equation}
can be
written as a linear combination of products of the form
\begin{equation}\label{eq5-1}
[U, m_{j_1},\dots,m_{j_{s}}],
\end{equation}
where $j_k\in D(d_1,\dots,d_c)$ and $s\leq t$. The case $s=t$ is
possible only if $i_k\in D(d_1,\dots, d_c)$ for all $k=1,\dots,t$.
\item every product of the form
\begin{equation}\label{eq4}
[x_{i_1}, U, \dots,x_{i_t}]
\end{equation}
can be
written as a linear combination of products of the form
\begin{equation}\label{eq5}
[m_{j_1},U,\dots,m_{j_{s}}],
\end{equation}
where $j_k\in D(d_1,\dots,d_c)$ and $s\leq t$. The case $s=t$ is
possible only if $i_k\in D(d_1,\dots, d_c)$ for all $k=1,\dots,t$.
\end{enumerate}
\end{lemma}
\begin{proof} Part a) is proved in the same way as Lemma 4.8 in
\cite{khu-ma-shu}, applying everywhere the property~(\ref{tozh})
instead of the Jacobi identity.
\vskip1ex To prove part b) we use induction on $t$. If $t=0$,
there is nothing to prove. If $t=1$ and $i_1\in D(d_1,\dots,d_c)$,
then $[x_{i_1}, U]$ is of the required form. If $i_1\notin
D(d_1,\dots,d_c)$, then $[x_{i_1}, U]=0$ by \eqref{select}.
\vskip1ex
Let $t>1$. If all the indices $i_j$
belong to $D(d_1,\dots,d_c)$, then the product
$[x_{i_1},U,\dots,x_{i_t}] $ is of the required form with $s=t$.
Suppose that in \eqref{eq4} there is an element $x_{i_k}$ with the
index $i_k$ that does not belong to $D(d_1,\dots,d_c)$. Let $k$ be
as small as possible. We use $k$ as a second induction parameter.
\vskip1ex If $k=1$, then the product \eqref{eq4} is zero by
\eqref{select} and we are done. If $k=2$, we rewrite the product
\eqref{eq4} using (\ref{tozh}) as
$$[x_{i_{1}}, U, x_{i_{2}}, \ldots, x_{i_t}]=\alpha [x_{i_{1}}, [U,x_{i_2}], \ldots, x_{i_t}]+\beta [[x_{i_{1}},
x_{i_2}],U,\ldots, x_{i_t}]$$$$= \alpha [x_{i_{1}}, [U,
x_{i_2}],\ldots, x_{i_t}]+\beta [x_{i_{1}+i_2},U,\ldots,
x_{i_t}],
$$
where $x_{i_{1}+i_2}=[x_{i_{1}}, x_{i_2}]$ (under the Index
Convention). The first term is trivial by \eqref{select} because
$i_2\notin D(d_1,\dots,d_c)$. The second term is of the required
form by the induction hypothesis because it is shorter than the
original one.
\vskip1ex Suppose that $k\geq 3$. We rewrite \eqref{eq4} using
\eqref{tozh} as
$$[x_{i_{1}},U,\dots,x_{i_{k-1}},x_{i_k},\dots,x_{i_t}]=\alpha \big[x_{i_{1}},U,\dots,[x_{i_{k-1}},x_{i_k}],
\dots,x_{i_t}\big]+$$
$$+ \beta [x_{i_{1}},U,\dots,x_{i_k},x_{i_{k-1}},\dots,x_{i_t}].$$
By the induction hypothesis the first term is a linear combination
of products of the form~\eqref{eq5} because it is shorter than
\eqref{eq4}, while the second term has the required form because
the index that does not belong to $D(d_1,\dots,d_c)$ here occurs
closer to $U$ than in \eqref{eq4}.
\end{proof}
\begin{lemma}[{\cite[Lemma 4.10]{khu-ma-shu}}]\label{dva}
Let $D=|D(d_1,\dots,d_c)|$ and let $w$ be the number given by
Lemma~\ref{l_b}. Suppose further that $L$ and $U$ are as in
Lemma~\ref{odin}. Then the ideal of $L$ generated by $U$
is spanned by
products of the form
\begin{equation}\label{eq6-1}
[U, m_{i_1},\dots,m_{i_u},m_{i_{u+1}},\dots,m_{i_{v}}]
\end{equation}
and
\begin{equation}\label{eq6-2}
[m_{i_1},U, m_{i_2},\dots,m_{i_u},m_{i_{u+1}},\dots,m_{i_{v}}]
\end{equation}
where $u\leq (w-1)D+1$, $i_k\in D(d_1,\dots, d_c)$ for all $k=1,2,\ldots, v$, and $o(i_k)\leq N(c,q)$ for $k>u$.
\end{lemma}
\begin{proof} By Lemma \ref{odin} the ideal generated by $U$
is spanned by the products of the two forms \eqref{eq5-1} and
\eqref{eq5}. We denote this span by $R$. Exactly in the same
manner as in Lemma 4.10 in \cite{khu-ma-shu}, we prove by
induction on the length of the products that \eqref{eq5-1} and
\eqref{eq5} do not change modulo $R$ under any permutation of the
$m_{j_k}$, $k\geq 3$. To adapt the proof in \cite{khu-ma-shu} we
only need to replace the Jacobi identity by \eqref{tozh} and Lemma
4.8 in~\cite{khu-ma-shu} by Lemma~\ref{odin}.
If among the $m_{j_k}$ there are at least $w$ elements with the
same index $j_k$ such that $o(j_k)\geq N(c,q)$, we move these
elements next to each other. Then by Lemma~\ref{l_b} the
corresponding product is equal to zero. If every such index $j_k$
occurs fewer than $w$ times, we place all these elements right
after $[U,m_{j_1}]$ or $[m_{j_1},U]$, respectively. This
initial segment has length at most $D(w-1)+c+1$, so the resulting
products take the required form~\eqref{eq6-1} or \eqref{eq6-2}.
\end{proof}
\begin{proposition}[{\cite[Corollary 4.11]{khu-ma-shu}}]\label{malocomp}
Suppose that a $(\Z/n\Z)$-graded Lie type algebra $L$ with
$L_0=0$ satisfies the selective $c$-nilpotency
condition~\eqref{select}, and let $(d_1,\dots, d_c)$ be an
$r$-independent sequence. Then the ideal $_{\rm id}\langle
[L_{d_1},\dots, L_{d_c}]\rangle$ has $(c,q)$-boundedly many
non-trivial components of the induced grading.
\end{proposition}
\begin{proof} The proof can be easily reconstructed from the proof of Corollary 4.11 in
\cite{khu-ma-shu}. We only need to replace Lemmas 4.4, 4.7, 4.8
and 4.10 in \cite{khu-ma-shu} by Lemmas \ref{115}, \ref{l_b},
\ref{odin} and \ref{dva} respectively.
\end{proof}
\begin{lemma}[{\cite[Lemma 4.12]{khu-ma-shu}}]\label{pyat}
Suppose that a homogeneous ideal $T$ of a Lie type algebra $L$ has
only $e$ non-trivial components. Then $L$ has at most $e^2$
components that do not centralize $T$.
\end{lemma}
\begin{proposition}[{\cite[Proposition 4.13]{khu-ma-shu}}]\label{razresh}
Suppose that a $(\Z/n\Z)$-graded Lie type algebra $L$ with $L_0=0$
satisfies the selective $c$-nilpotency condition~\eqref{select}.
Then $L$ is solvable of $(c,q)$-bounded derived length $f(c,q)$.
\end{proposition}
\begin{proof} We reproduce the proof of Corollary 5.10 in~\cite{khu-ma-shu}, replacing the Jacobi identity by the property
\eqref{tozh} and applying Proposition \ref{small-comp} instead of
the Shalev--Kreknin theorem \cite{kr, shalev}.
\vskip1ex
We use induction on
$c$. If $c=0$, then $L=0$ by Remark~\ref{remark2} and we are done.
\vskip1ex Let $c\geq 1$. We consider the ideal $I$ of $L$
generated by all products $[L_{i_1},\dots,L_{i_c}]$, where
$(i_1,\dots,i_c)$ ranges through all $r$-independent sequences of
length~$c$. The quotient algebra $L/I$ has an induced
$(\Z/n\Z)$-grading with trivial zero-component, and $L/I$
satisfies the selective $(c-1)$-nilpotency condition. It follows
by the induction hypothesis that $L/I$ is solvable of bounded
derived length, say, $f_0$, that is, $L^{(f_0)}\leq I$.
\vskip1ex Now let $(i_1,\dots,i_c)$ be an $r$-independent
sequence. We set
$$T={}_{\rm id}\langle[L_{i_1},\dots,L_{i_c}]\rangle.$$ Proposition \ref{malocomp} implies that there are only $(c,q)$-boundedly many, say,
$e$, non-trivial grading components in $T$. By Lemma \ref{pyat}
there are at most $e^2$ components that do not centralize $T$. The
subalgebra $C_L(T)$ is also a homogeneous ideal by \eqref{tozh},
since
$$[C_L(T), L, T]\subseteq \big[C_L(T), [L, T]\big]+\big[C_L(T), T, L\big]\subseteq
[C_L(T), T]\subseteq \{0\},$$
$$[L, C_L(T), T]\subseteq \big[L, [C_L(T), T]\big]+\big[L, T, C_L(T)\big]\subseteq
[T, C_L(T)]\subseteq \{0\}.$$ The quotient algebra $L/C_L(T)$ has
an induced $(\Z/n\Z)$-grading with trivial zero-component and with at
most $e^2$ non-trivial components. By Proposition
\ref{small-comp} the algebra $L/C_L(T)$ is solvable of $e$-bounded
derived length, say, $f_1$. Therefore $L^{(f_1)}\subseteq C_L(T)$
and $[L^{(f_1)}, T]=0$. Since $f_1$ does not depend on the choice
of the $r$-independent tuple $(i_1,\dots,i_c)$ and $I$ is the sum
of all such ideals $T$, it follows that $[L^{(f_1)},I]=0$. Recall
that $L^{(f_0)}\leq I$. Hence, $[L^{(f_1)},L^{(f_0)}]=0$. Thus $L$
is solvable of $(c,q)$-bounded derived length at most
$\max\{f_0,f_1\}+1$.
\end{proof}
\section{Bounding the nilpotency class}
In this section we complete the proof of Theorem \ref{th1} by
proving that a $(\Z/n\Z)$-graded Lie type algebra $L$ with $L_0=0$
that satisfies the selective $c$-nilpotency condition
(\ref{select}) is nilpotent of $(c,q)$-bounded class. We have
already proved that $L$ is solvable of $(c,q)$-bounded derived
length, so we can use induction on the derived length of $L$.
\vskip1ex
If $L$ is abelian,
there is nothing to prove. Assume that $L$ is metabelian, that is
$[[L,L], [L,L]]=0$.
\vskip1ex We will use the following two lemmas from \cite{khu-ma-shu}.
\begin{lemma}[{\cite[Lemma 5.2]{khu-ma-shu}}]\label{l_b-metab} Let $L$ be a metabelian
$(\Z/n\Z)$-graded Lie type algebra with $L_0=0$ that satisfies
the selective $c$-nilpotency condition (\ref{select}). Then there
is a $(c,q)$-bounded number $m$ such that
{$[L,\underbrace{L_b,\dots,L_b}_{m} ]=0$} for every $b\in
\Bbb{Z}/n\Bbb{Z}$.
\end{lemma}
\begin{proof} See Lemma 5.2 in \cite{khu-ma-shu}.
\end{proof}
\begin{lemma}[{\cite[Lemma 4.5]{khu-ma-shu}}]\label{rigid}
Suppose that for some $m$ a sequence $(a_1,\dots,a_k)$ of non-zero
elements of $\Bbb Z/n\Bbb Z$ contains at least $q^m+m$ different
values. Then one can choose an $r$-independent subsequence
$(a_1,a_{i_2},\dots,a_{i_m})$ of $m$ elements that contains $a_1$.
\end{lemma}
\begin{proof} See Lemma 4.5 in \cite{khu-ma-shu}.
\end{proof}
Let $m=m(c,q)$ be as in
Lemma~\ref{l_b-metab} and put $g=(m-1)(q^{c+1}+c)+2$. We consider
the product $[[L,L]_{a_1},L_{a_2},\dots,L_{a_g}]$, where
$a_1,\dots,a_g\in \Bbb Z/n\Bbb Z$ are non-zero. If the sequence
$(a_1,\dots,a_g)$ contains an $r$-independent sequence of length
$c+1$ that starts with $a_1$, we permute the $L_{a_i}$ in order
to have an initial segment with indices which form an
$r$-independent subsequence $a_1,\dots,a_{c+1}$. Then
$[[L,L]_{a_1},L_{a_2},\dots,L_{a_g}]=0$ by~\eqref{select}. If the
sequence $(a_1,\dots,a_g)$ does not contain an $r$-independent
subsequence of length $c+1$ starting with $a_1$, then by
Lemma~\ref{rigid} the sequence $(a_1,\dots,a_g)$ contains at most
$q^{c+1}+c$ different values. It follows that either the value
$a_1$ occurs in $(a_1,\dots,a_g)$ at least $m+1$ times or else
another value, different from $a_1$, occurs at least $m$ times. In
any case there are $m$ components $L_{a_i}$ with the same index,
say $j$. We move these components $L_j$ next to $L_{a_1}$. It
follows from Lemma~\ref{l_b-metab} that
$[[L,L]_{a_1},L_{a_2},\dots,L_{a_g}]=0$. Thus, we conclude that
$L$ is nilpotent of class at most $g$.
\vskip1ex Now suppose that the derived length of $L$ is at
least~3. By the induction hypothesis, $[L,L]$ is nilpotent of
bounded class. The quotient algebra $L/[[L,L],[L,L]]$ is
metabelian and hence nilpotent of bounded class. It follows that
$L$ is nilpotent of bounded nilpotency class by the analogue of
P.~Hall's theorem (Lemma \ref{chao}).
\vskip1ex
Throughout the paper, $R$ is a commutative ring.\vspace{5pt}
Let $I$ be an ideal of $R$, and assume in this paragraph that $R$ is $I$-adically complete and local. When $I$ is generated by an $R$-regular sequence, the lifting property of finitely generated modules and of bounded below complexes of finitely generated free modules along the natural surjection $R\to R/I$ was studied by Auslander, Ding, and Solberg~\cite{auslander:lawlom} and Yoshino~\cite{yoshino}.
Nasseh and Sather-Wagstaff~\cite{nasseh:lql} generalized these results to the case where $I$ is not necessarily generated by an $R$-regular sequence. In this case, they considered the lifting property of differential graded (DG) modules along the natural map from $R$ to the Koszul complex on a set of generators of the ideal $I$.
Let $A\to B$ be a homomorphism of DG $R$-algebras. A right DG $B$-module $N$ is \emph{liftable} to $A$ if there is a right DG $A$-module $M$ such that $N \cong M\otimes^{\mathbf{L}}_A B$ (or $N\cong M\otimes_A B$, if $M$ and $N$ are semifree)
in the derived category $\mathcal{D}(B)$.
In their recent works, Nasseh and Yoshino~\cite{nassehyoshino} and Ono and Yoshino~\cite{OY} proved the following results on liftability of DG modules; see~\ref{para20201112a} and~\ref{para20201112c} for notation.
\begin{thm}[\cite{nassehyoshino, OY}]\label{thm20200605a}
Let $A$ be a DG $R$-algebra and $B=A\langle X\rangle$ be a simple free extension of $A$ obtained by adjunction of a variable $X$ of degree $|X|>0$ to kill a cycle in $A$. Assume that $N$ is a semifree DG $B$-module with $\operatorname{Ext}^{|X|+1}_B(N,N)=0$.
\begin{enumerate}[\rm(a)]
\item
If $|X|$ is odd, then $N\oplus N(-|X|)$ is liftable to $A$ (that is, $N$ is weakly liftable to $A$ in the sense of~\cite[Definition 5.1]{NOY}).
\item
If $|X|$ is even and $N$ is bounded below, then $N$ is liftable to $A$.
\end{enumerate}
\end{thm}
Na\"{\i}ve lifting property of DG modules along simple free extensions of DG algebras was introduced in~\cite{NOY} to obtain a new characterization of (weak) liftability of DG modules along such extensions; see~\cite[Theorem 6.8]{NOY}. However, our study of na\"ive lifting property of DG modules in this paper is mainly motivated by a conjecture of Auslander and Reiten as we explain in Section~\ref{sec20201126n}; see Theorem~\ref{thm20210108z}.
For the general definition of na\"{\i}ve liftability, let $A\to B$ be a homomorphism of DG $R$-algebras such that the underlying graded $A$-module $B$ is free.
Let $N$ be a semifree right DG $B$-module, and denote by $N |_A$ the DG $B$-module $N$ regarded as a right DG $A$-module via $A\to B$.
We say that $N$ is {\it na\"ively liftable} to $A$ if
the DG $B$-module epimorphism
$\pi _N\colon N |_A \otimes _A B \to N$
defined by $\pi_N(x \otimes b)=xb$ splits; see~\ref{para20201113a} for more details.
The purpose of this paper is to prove the following result that deals with this version of liftability along finite free and polynomial extensions of DG algebras; see~\ref{para20201203a} and~\ref{para20201112b} for the definitions and notation.
\begin{mthm*}\label{thm20201114a}
Let $n$ be a positive integer. We consider the following two cases:
\begin{enumerate}[\rm(a)]
\item
$B=A[X_1,\ldots,X_n]$ is a polynomial extension of $A$, where $X_1,\ldots,X_n$ are variables of positive degrees; or
\item
$A$ is a divided power DG $R$-algebra and $B=A \langle X_1,\ldots,X_n \rangle$ is a free extension of $A$ obtained by adjunction of variables $X_1,\ldots,X_n$ of positive degrees.
\end{enumerate}
In either case, if $N$ is a bounded below semifree DG $B$-module with $\operatorname{Ext} _B ^i (N, N)=0$ for all $i>0$, then $N$ is na\"ively liftable to $A$. Moreover, $N$ is a direct summand of a DG $B$-module that is liftable to $A$.
\end{mthm*}
A unified method to prove parts (a) and (b) of Theorem~\ref{thm20200605a} is introduced in~\cite{NOY} using the notion of $j$-operators. However, as is noted in~\cite[3.10]{NOY}, this notion cannot be generalized (in a way that preserves the useful properties of $j$-operators) to the case where we have more than one variable. Our approach in this paper to prove Main Theorem is as follows. In Section~\ref{sec20201126a}, we define the notions of diagonal ideals and DG smoothness, the latter being a generalization of the notion of smooth algebras in commutative ring theory. Then, using the notion of homotopy limits, discussed in Section~\ref{sec20201126b}, we prove the following result in Section~\ref{sec20201126c}.
\begin{thm}\label{thm2021017a}
Let $A\to B$ be a DG smooth homomorphism. If $N$ is a bounded below semifree DG $B$-module with $\operatorname{Ext} _B ^i (N, N)=0$ for all $i\geq 1$, then $N$ is na\"ively liftable to $A$. Moreover, $N$ is a direct summand of a DG $B$-module that is liftable to $A$.
\end{thm}
The proof of Main Theorem then follows once we show that, under its assumptions, the homomorphism $A\to B$ is DG smooth; this takes up the entire Section~\ref{sec20210107a}.
\section{Terminology and preliminaries}\label{sec20200314b}
We assume that the reader is fairly familiar with complexes, DG algebras, DG modules, and their properties. Some of the references on these subjects are~\cite{avramov:ifr,avramov:dgha, felix:rht, GL}. In this section, we specify the terminology and include some preliminaries that will be used in the subsequent sections.
\begin{para}\label{para20200329a}
Throughout the paper, $A$ is a \emph{strongly commutative differential graded $R$-algebra} (\emph{DG $R$-algebra}, for short), that is,
\begin{enumerate}[\rm(a)]
\item
$A = \bigoplus _{n \geq 0} A _n$ is a non-negatively \emph{graded commutative} $R$-algebra\footnote{Some authors use the cohomological notation for DG algebras. In such a case, $A$ is described as $A = \bigoplus _{n \leq 0} A ^n$, where $A^{n} = A_{-n}$, and $A$ is called non-positively graded.}, i.e., for all homogeneous elements $a, b \in A$ we have $ab = (-1)^{|a| |b|}ba$, and $a^2 =0$ if the degree of $a$ (denoted $|a|$) is odd;
\item
$A$ is an $R$-complex with a differential $d^A$ (that is, a graded
$R$-linear map $A\to A$ of degree $-1$ with $(d^A)^2=0$) such that
$d^A$ satisfies the \emph{Leibniz rule}: for all homogeneous elements $a,b\in A$ the equality $d^A(ab) = d^A(a) b + (-1)^{|a|}ad^A(b)$ holds.
\end{enumerate}
A \emph{homomorphism} $f\colon A\to B$ of DG $R$-algebras is a graded $R$-algebra homomorphism of degree $0$ which is also a chain map, that is, $d^Bf=fd^A$.
\end{para}
\begin{para}\label{para20201203a}
An $R$-algebra $U$ is a \emph{divided power algebra} if to every element $u\in U$ of positive even degree $|u|$ there corresponds a sequence of elements $u^{(i)}\in U$ with $i\in \mathbb{N}\cup \{0\}$ such that the following conditions are satisfied:
\begin{enumerate}[\rm(1)]
\item
$u^{(0)}=1$, $u^{(1)}=u$, and $|u^{(i)}|=i|u|$ for all $i$;
\item
$u^{(i)}u^{(j)}=\binom{i+j}{i}u^{(i+j)}$ for all $i,j$;
\item
$(u+v)^{(i)}=\sum_{j}u^{(j)}v^{(i-j)}$ for all $i$;
\item
for all $i\geq 2$ we have
$$
(vw)^{(i)}=
\begin{cases}
0& |v|\ \text{and}\ |w|\ \text{are odd}\\
v^iw^{(i)}& |v|\ \text{is even and}\ |w|\ \text{is even and positive}
\end{cases}
$$
\item
For all $i\geq 1$ and $j\geq 0$ we have
$$\left(u^{(i)}\right)^{(j)}=\frac{(ij)!}{j!(i!)^j}u^{(ij)}.$$
\end{enumerate}
A \emph{divided power DG $R$-algebra} is a DG $R$-algebra whose underlying graded $R$-algebra is a divided power algebra.
\end{para}
\begin{para}\label{para20201126d}
If $R$ contains the field of rational numbers and $U$ is a graded $R$-algebra, then $U$ has a structure of a divided power $R$-algebra by defining $u^{(m)}=(1/m!)u^m$ for all $u\in U$ and integers $m\geq 0$; see~\cite[Lemma 1.7.2]{GL}. Also, $R$ considered as a graded $R$-algebra concentrated in degree $0$ is a divided power $R$-algebra.
\end{para}
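Indeed, in this situation axioms (2) and (3) of~\ref{para20201203a} reduce to elementary identities:

```latex
% With u^{(m)} = u^m/m!, axiom (2) is the binomial coefficient identity
\begin{align*}
u^{(i)}u^{(j)}=\frac{u^{i}}{i!}\cdot\frac{u^{j}}{j!}
=\frac{(i+j)!}{i!\,j!}\cdot\frac{u^{i+j}}{(i+j)!}
=\binom{i+j}{i}u^{(i+j)},
\end{align*}
% and axiom (3) is the binomial theorem (u, v of even degree commute):
\begin{align*}
(u+v)^{(i)}=\frac{(u+v)^{i}}{i!}
=\sum_{j}\frac{u^{j}v^{i-j}}{j!\,(i-j)!}
=\sum_{j}u^{(j)}v^{(i-j)}.
\end{align*}
```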
\begin{para}\label{para20201112a}
Let $t\in A$ be a cycle, and let $A \langle X \rangle$ with the differential $d$ denote the \emph{simple free extension of $A$} obtained by adjunction of a variable $X$ of degree $|t|+1$ such that $dX = t$. The DG $R$-algebra $A \langle X \rangle$ can be described as $A \langle X \rangle = \bigoplus_{m\geq 0} X^{(m)}A$ with the conventions $X^{(0)}=1$ and $X^{(1)}=X$, where $\{X^{(m)}\mid m\geq 0\}$ is a free basis of $A\langle X\rangle$ such that:
\begin{enumerate}[\rm(a)]
\item
If $|X|$ is odd, then $X^{(m)}=0$ for all $m\geq 2$, and for all $a + Xb\in A \langle X \rangle$ we have
$$
d(a + Xb)=d^Aa + tb - X d^Ab.
$$
\item
If $|X|$ is even, then $A \langle X \rangle$ is a divided power DG $R$-algebra with the algebra structure given by $X^{(m)}X^{(\ell)} =\binom{m+\ell}{m} X^{(m+\ell)}$ and the differential structure defined by $dX^{(m)}=X^{(m-1)}t$ for all $m\geq 1$.
\end{enumerate}
Also, let $A [X]$ denote the \emph{simple polynomial extension of $A$} with $X$ described as above, that is, $A [X] = \bigoplus_{m\geq 0} X^{m}A$ with $d^{A [X]}(X^m)=mX^{m-1}t$ for positive integers $m$. Note that here $X^m$ is just the ordinary power of $X$.
If $R$ contains the field of rational numbers, then $A\langle X\rangle=A[X]$.
\end{para}
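As a standard first example of this construction, take $A=R$ concentrated in degree $0$ and a cycle $t\in R$; adjoining a variable $X$ of odd degree $|X|=1$ with $dX=t$ recovers the Koszul complex on $t$:

```latex
% R<X> = R + XR with |X| = 1 and dX = t is the Koszul complex on t:
\begin{align*}
R\langle X\rangle:\qquad
0 \longrightarrow XR \xrightarrow{\;Xb\,\mapsto\,tb\;} R
\longrightarrow 0,
\end{align*}
% so H_0(R<X>) = R/(t), while H_1(R<X>) = (0 :_R t) vanishes
% exactly when t is a nonzerodivisor on R.
```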
\begin{para}\label{para20201112b}
Let $n$ be a positive integer, and let $A \langle X_1,\ldots,X_n\rangle$ (which is also denoted by $A \langle X_i \mid 1\leq i\leq n\rangle$) be a \emph{finite free extension of the DG $R$-algebra $A$} obtained by adjunction of $n$ variables. In fact, setting $A^{(0)} =A$ and $A^{(i)}=A^{(i-1)}\langle X_i \rangle$ for all $1 \leq i \leq n$ such that $d^{A^{(i)}}X_i$ is a cycle in $A^{(i-1)}$, we have $A \langle X_1,\ldots,X_n\rangle=A^{(n)}$. We also assume that $0 < |X_1| \leq \cdots \leq |X_n|$. Note that there is
a sequence of DG $R$-algebras $A= A^{(0)} \subset A ^{(1)} \subset \cdots \subset A^{(n)}=A \langle X_1,\ldots,X_n\rangle$.
In a similar way, one can define the \emph{finite polynomial extension of the DG $R$-algebra $A$}, which is denoted by $A [X_1,\ldots,X_n]$.
\end{para}
\begin{para}\label{para20201205a}
Our discussion in~\ref{para20201112b} can be extended to the case of adjunction of countably infinitely many variables to the DG $R$-algebra $A$.
Let $\{ X_i \mid i \in \mathbb{N} \}$ be a set of variables.
Attaching a degree to each variable such that $0 < |X_1| \leq |X_2| \leq \cdots$, similar to~\ref{para20201112b},
we construct a sequence
$A= A^{(0)} \subset A ^{(1)} \subset A^{(2)} \subset \cdots$
of DG $R$-algebras.
We define an \emph{infinite free extension of the DG $R$-algebra $A$} obtained by adjunction of the variables $X_1, X_2, \ldots$ to be
$A \langle X_i \mid i\in \mathbb{N} \rangle = \bigcup _{n \in \mathbb{N}} A^{(n)}$. It is sometimes convenient for us to use the notation $A \langle X_1,\ldots,X_n\rangle$ with $n=\infty$ instead of $A \langle X_i \mid i\in \mathbb{N} \rangle $.
For the infinite extension $A \langle X_i \mid i\in \mathbb{N} \rangle $ of the DG $R$-algebra $A$, we always assume the {\it degree-wise finiteness condition}, that is, for all $n\in \mathbb{N}$, we assume that the set $\{ i \mid |X_i| = n \}$ is finite. As an example of this situation, let $R \to S$ be a surjective ring homomorphism of commutative noetherian rings.
Then the Tate resolution of $S$ over $R$ is an extension of the DG $R$-algebra $R$ (with countably infinitely many variables, in general) which satisfies the degree-wise finiteness condition; see~\cite{Tate}.
In a similar way, one can define the \emph{infinite polynomial extension of the DG $R$-algebra $A$}, which is denoted by $A [X_i\mid i\in \mathbb{N}]$ or $A [X_1,\ldots,X_n]$ with $n=\infty$.
\end{para}
\begin{para}\label{para20201124a}
For $n\leq \infty$, let $\Gamma=\bigcup_{i=1}^{n}\{X_i^{(m)}\mid m\geq 0\}$ with the conventions from~\ref{para20201112a} that if $|X_i|$ is odd, then $X_i^{(0)}=1$, $X_i^{(1)}=X_i$, and $X_i^{(m)}=0$ for all $m\geq 2$.
If $n<\infty$, then the set $\{ X_1 ^{(m_1)}X_2^{(m_2)}\cdots X_n^{(m_n)}\mid X_i^{(m_i)}\in \Gamma\ (1 \leq i \leq n)\}$ is a basis for the underlying graded free $A$-module $A \langle X_1,\ldots,X_n\rangle$.
If $n=\infty$, then the set $\{ X_{i_1} ^{(m_{i_1})}X_{i_2}^{(m_{i_2})}\cdots X_{i_t}^{(m_{i_t})}\mid X_{i_j}^{(m_{i_j})}\in \Gamma\ (i_j\in \mathbb{N}, t<\infty)\}$ is a basis for the underlying graded free $A$-module $A \langle X_i \mid i\in \mathbb{N} \rangle$.
The cases $A [X_1,\ldots,X_n]$ and $A [X_i\mid i\in \mathbb{N}]$ can be treated similarly by using ordinary powers $X_i^m$ instead of divided powers $X_i^{(m)}$.
\end{para}
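For instance, in the case $n=2$ with $|X_1|$ odd and $|X_2|$ even, the conventions above give the following basis of the underlying graded free $A$-module:

```latex
% Basis of A<X_1,X_2> when |X_1| is odd and |X_2| is even:
% since X_1^{(m)} = 0 for m >= 2, the monomial basis is
\begin{align*}
\{\,X_1^{\varepsilon}X_2^{(m)} \mid \varepsilon\in\{0,1\},\ m\geq 0\,\},
\qquad\text{so}\qquad
A\langle X_1,X_2\rangle=\bigoplus_{m\geq 0}
\big(X_2^{(m)}A\oplus X_1X_2^{(m)}A\big).
\end{align*}
```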
\begin{para}\label{para20201112c}
A right \emph{DG $A$-module} $(M, \partial^M)$ (or simply $M$) is a graded right $A$-module $M=\bigoplus_{i\in \mathbb{Z}}M_i$ that is also an $R$-complex with the differential $\partial^M$ satisfying the Leibniz rule, that is, the equality
$\partial^M(ma) = \partial^M(m)\ a + (-1)^{|m|} m\ d^A(a)$ holds for all homogeneous elements $a\in A$ and $m\in M$.
All DG modules considered in this paper are right DG modules, unless otherwise stated. Since $A$ is graded commutative, a DG $A$-module $M$ is also a left DG $A$-module with the left $A$-action
defined by $am = (-1)^{|m||a|} ma$ for $a\in A$ and $m \in M$.
A \emph{DG submodule} of a DG $A$-module $M$ is a subcomplex that is a DG $A$-module under the operations induced by $M$, and a \emph{DG ideal} of $A$ is a DG submodule of $A$.
For a DG $A$-module $M$, let $\inf (M) = \inf \{ i \in \mathbb{Z}\mid M_i \not=0 \}$. We say that $M$ is \emph{bounded below} if $\inf (M) > -\infty$, that is, if $M_i=0$ for all $i\ll 0$.
Note that $\inf (L) \geq \inf (M)$ if $L$ is a DG $A$-submodule of $M$. For an integer $i$, the \emph{$i$-th shift} of $M$, denoted $\mathsf{\Sigma}^i M$ or $M(-i)$, is defined by $\left(\mathsf{\Sigma}^i M\right)_j = M_{j-i}$ with $\partial_j^{\mathsf{\Sigma}^i M}=(-1)^i\partial_{j-i}^M$.
\end{para}
\begin{para}\label{para20201114a}
Let $A^{o}$ denote the \emph{opposite DG $R$-algebra} which is equal to $A$ as a set, but to distinguish elements in $A^o$ and $A$ we write $a^o\in A^o$ if $a \in A$.
The product of elements in $A^o$ and the differential $d^{A^o}$ are given by the formulas
$a^ob^o = (-1)^{|a||b|}(ba)^o=(ab)^o$ and $d^{A^o} (a^o) = d^A(a)^o$, for all homogeneous elements $a, b \in A$.
Since $A$ is a graded commutative DG $R$-algebra, the map $A \to A^o$ that sends $a \in A$ to $a^o\in A^o$ is a DG $R$-algebra isomorphism.
From this point of view, there is no need to distinguish between $A$ and $A^o$.
However, we will continue using the notation $A^o$ to make it clear how we use the graded commutativity of $A$.
Note that every right (resp. left) DG $A$-module $M$ is a left (resp. right) DG $A^o$-module with $a^om=(-1)^{|a^o||m|}ma$ (resp. $ma^o=(-1)^{|a^o||m|}am$) for all homogeneous elements $a\in A$ and $m\in M$.
\end{para}
\begin{para}\label{para20201114e}
Let $A\to B$ be a homomorphism of DG $R$-algebras such that $B$ is projective as an underlying graded $A$-module. Let $B^e$ denote the \emph{enveloping DG $R$-algebra} $B^o \otimes_A B$ of $B$ over $A$.
The algebra structure on $B^e$ is given by
$$
(b_1^o \otimes b_2)( {b'}_1^o \otimes {b'}_2)
= (-1)^{|{b'}_1| |b_2|} b_1^o {b'}_1^o \otimes b_2{b'}_2
= (-1)^{|{b'}_1| |b_2|+|{b'}_1| |b_1|}({b'}_1 b_1)^o\otimes b_2{b'}_2
$$
for all homogeneous elements $b_1, b_2, b'_1, b'_2\in B$, while the graded structure is given by $(B^e)_i = \sum_{j} (B^o)_j\otimes_A B_{i-j}$ and the differential $d^{B^e}$ is defined by
$d^{B^e} ( b_1^o \otimes b_2 ) = d^{B^o} ( b_1^o) \otimes b_2 + (-1)^{|b_1|} b_1^o \otimes d^{B}(b_2 )$.
Note that $B$ and $B^o$ are regarded as subrings of $B^e$.
Moreover, the map $B^o \to B^e$ defined by $b^o\mapsto b ^o \otimes 1$ is an injective DG $R$-algebra homomorphism, via which we can consider $B^o$ as a DG $R$-subalgebra of $B^e$.
Since $B$ is graded commutative, $B \cong B^o$ and hence, $B$ is a DG $R$-subalgebra of $B^e$ as well.
Note also that DG $B^e$-modules are precisely DG $(B, B)$-bimodules. In fact, for a DG $B^e$-module $N$, the right action of an element of $B^e$ on $N$ yields the two-sided module structure $n ( b_1 ^o \otimes b_2) = (-1)^{|b_1||n|}b_1 n b_2$ for all homogeneous elements $n \in N$ and $b_1, b_2 \in B$.
Hence, the differential $\partial ^N$ satisfies the Leibniz rule on both sides:
$\partial ^N ( b_1nb_2 ) = d^B(b_1)nb_2 + (-1)^{|b_1|} b_1 \partial^N(n)b_2 + (-1)^{|b_1|+|n|} b_1n d^B(b_2)$
for all homogeneous elements $n \in N$ and $b_1, b_2 \in B$.
\end{para}
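For instance, one can check directly that the formula above indeed defines a right action of $B^e$ on $N$:

```latex
% Sign check that n(b_1^o \otimes b_2) = (-1)^{|b_1||n|} b_1 n b_2
% is a right B^e-action on N:
\begin{align*}
\big(n(b_1^o\otimes b_2)\big)({b'}_1^o\otimes {b'}_2)
&=(-1)^{|b_1||n|+|{b'}_1|(|b_1|+|n|+|b_2|)}\,{b'}_1 b_1 n\, b_2 {b'}_2\\
&=(-1)^{|b_1||n|+|{b'}_1||n|+|{b'}_1||b_2|}\,b_1 {b'}_1 n\, b_2 {b'}_2\\
&=n\big((b_1^o\otimes b_2)({b'}_1^o\otimes {b'}_2)\big),
\end{align*}
% where b'_1 b_1 = (-1)^{|b_1||b'_1|} b_1 b'_1 by graded
% commutativity of B, used in the second step.
```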
\begin{para}\label{para20201206d}
Consider the notation from~\ref{para20201124a} and~\ref{para20201114e}. Let
\begin{align*}
\mathrm{Mon}(\Gamma)\!\!&=\!\!
\begin{cases}
\!\!\{ (1^o \otimes X_1 ^{(m_1)})\cdots (1^o \otimes X_n^{(m_n)})\mid X_i^{(m_i)}\in \Gamma\ (1 \leq i \leq n) \}&\!\!\!\text{if}\ n<\infty\\
\!\!\{ (1^o \otimes X_{i_1} ^{(m_{i_1})})\cdots (1^o \otimes X_{i_t}^{(m_{i_t})})\mid X_{i_j}^{(m_{i_j})}\in \Gamma\ (i_j\in \mathbb{N}, t<\infty) \}&\!\!\!\text{if}\ n=\infty
\end{cases}\\
&=
\begin{cases}
\{ 1^o \otimes (X_1 ^{(m_1)}\cdots X_n^{(m_n)})\mid X_i^{(m_i)}\in \Gamma\ (1 \leq i \leq n) \}&\text{if}\ n<\infty\\
\{ 1^o \otimes (X_{i_1} ^{(m_{i_1})} \cdots X_{i_t}^{(m_{i_t})})\mid X_{i_j}^{(m_{i_j})}\in \Gamma\ (i_j\in \mathbb{N}, t<\infty) \}&\text{if}\ n=\infty
\end{cases}
\end{align*}
Then the underlying graded $A \langle X_1,\ldots,X_n\rangle^o$-module $A \langle X_1,\ldots,X_n\rangle^e$ with $n\leq \infty$ is free with the basis $\mathrm{Mon}(\Gamma)$.\footnote{``$\mathrm{Mon}$'' is chosen for ``monomial.''}
Once again, the case of $A [X_1,\ldots,X_n]$ with $n\leq \infty$ can be treated similarly by using $X_i^m$ instead of $X_i^{(m)}$.
\end{para}
\begin{para}\label{para20201124b}
A \emph{semifree basis} (or \emph{semi-basis}) of a DG $A$-module $M$ is a well-ordered subset $F\subseteq M$ that is a basis for the underlying graded $A$-module $M$ and satisfies $\partial^M(f)\in \sum_{e<f}eA$ for every element $f\in F$. A DG $A$-module $M$ is \emph{semifree}\footnote{Keller~\cite{keller:ddgc} calls these ``DG modules that have property (P).''} if it has a semifree basis. Equivalently, the DG $A$-module $M$ is semifree if there exists an increasing filtration $$0=F_{-1}\subseteq F_0\subseteq F_1\subseteq \cdots\subseteq M$$ of DG $A$-submodules of $M$ such that $M=\bigcup_{i\geq 0}F_i$ and each DG $A$-module $F_i/F_{i-1}$ is a direct sum of copies of $A(n)$ with $n\in \mathbb{Z}$; see~\cite{AH},~\cite[A.2]{AINSW}, or~\cite{felix:rht}.
\end{para}
\begin{para}\label{para20201124c}
Let $\mathcal{C}(A)$ denote the abelian category of DG $A$-modules and DG $A$-module homomorphisms.
Also, let $\mathcal{K}(A)$ be the \emph{homotopy category} of DG $A$-modules. Recall that objects of $\mathcal{K}(A)$ are DG $A$-modules and
morphisms are the set of homotopy equivalence classes of DG $A$-module homomorphisms $\Hom _{\mathcal{K}(A)} (M, L) = \Hom _{\mathcal{C}(A)} (M, L)/ \sim$,
where $f \sim g$ for $f,g\in \Hom _{\mathcal{C}(A)} (M, L)$ if and only if there is a homomorphism $h\colon M \to L (-1)$ of underlying graded $A$-modules such that $f - g = \partial ^L h + h \partial ^M$. It is known that $\mathcal{K}(A)$ is a triangulated category.
In fact, there is a triangle $M \to L \to Z \to \mathsf{\Sigma} M$ in $\mathcal{K}(A)$
if and only if there is a short exact sequence
$0 \to M \to L \oplus L' \to Z \to 0$
in $\mathcal{C}(A)$ in which $L'$ is split exact, i.e., $\id_{L'}\sim 0$. The \emph{derived category} $\mathcal{D}(A)$ is obtained from $\mathcal{C}(A)$ by formally inverting the quasi-isomorphisms (denoted $\simeq$); see, for instance,~\cite{keller:ddgc} for details.
For each integer $i$ and DG $A$-modules $M,L$ with $M$ being semifree, $\operatorname{Ext}^i_A(M,L)$ is defined to be $\operatorname{H}_{-i}\left(\Hom_A(M,L)\right)$. Note that $\operatorname{Ext}^i_A(M,L)= \Hom _{\mathcal{K}(A)}(M, L(-i))$.
\end{para}
\section{Diagonal ideals and DG smoothness}\label{sec20201126a}
In this section, we introduce the notion of diagonal ideals which play an essential role in the proofs of Theorem~\ref{thm2021017a} and Main Theorem.
\begin{para}\label{para20201114d}
Let $\varphi\colon A\to B$ be a homomorphism of DG $R$-algebras such that $B$ is projective as an underlying graded $A$-module. Let $\pi _B\colon B^e \to B$ denote the map defined by $\pi _B(b_1 ^o \otimes b_2) = b_1b_2$.
For all homogeneous elements $b_1, b_2, b'_1, b'_2\in B$ we have
\begin{eqnarray*}
\pi_B((b_1 ^o \otimes b_2)({b'}_1 ^o \otimes {b'}_2))
&=&(-1) ^{|{b'}_1||b_2|+|{b'}_1||b_1|} \pi_B(({b'}_1 {b}_1) ^o \otimes b_2{b'}_2) \\
&=& (-1) ^{|{b'}_1||b_2|+|{b'}_1||b_1|}({b'}_1 {b}_1) (b_2{b'}_2) \\
&=& ({b}_1 {b}_2) ({b'}_1{b'}_2) \\
&=& \pi_B(b_1 ^o \otimes b_2) \pi _B({b'}_1 ^o \otimes {b'}_2).
\end{eqnarray*}
Hence, $\pi _B$ is an algebra homomorphism.
Also, it is straightforward to check that $\pi _B$ is a chain map.
Therefore, $\pi_B$ is a homomorphism of DG $R$-algebras.
\end{para}
\begin{defn}\label{defn20201206a}
In the setting of~\ref{para20201114d}, the kernel of $\pi_B$ is denoted by $J=J_{B/A}$ and is called the \emph{diagonal ideal} of $\varphi$.\footnote{The definition of diagonal ideals originates in scheme theory. In fact, if $A \to B$ is a homomorphism of commutative rings, then the kernel of the natural mapping $B \otimes _AB \to B$ is the defining ideal of the diagonal set in the Cartesian product $\operatorname{Spec} B \times _{\operatorname{Spec} A} \operatorname{Spec} B$.
}
\end{defn}
\begin{para}\label{para20201206a}
In~\ref{para20201114d}, since $\pi_B$ is a homomorphism of DG $R$-algebras, $J$ is a DG ideal of $B^e$.
The isomorphism $B^e /J \cong B$ of DG $R$-algebras is also an isomorphism of DG $B^e$-modules. Hence, there is an exact sequence of DG $B^e$-modules:
\begin{equation}\label{eq20201114a}
0 \to J \to B^e \xrightarrow{\pi_B} B \to 0.
\end{equation}
\end{para}
Next, we define our notion of smoothness for DG algebras.
\begin{defn}\label{defn20210105a}
Let $\varphi\colon A\to B$ be a homomorphism of DG $R$-algebras.
We say that $B$ is \emph{DG quasi-smooth over $A$} (or simply \emph{$\varphi$ is DG quasi-smooth})
if the following conditions are satisfied:
\begin{enumerate}[\rm(i)]
\item
$B$ is free as an underlying graded $A$-module.
\item
The diagonal ideal $J$ has a filtration consisting of DG $B^e$-submodules\footnote{$J^{[\ell]}$ is just a notation for the $\ell$-th DG $B^e$-submodule of $J$ in the sequence. It is not an $\ell$-th power of any kind.}
$$
J =J^{[1]} \supset J^{[2]} \supset J^{[3]} \supset \cdots \supset J^{[\ell]} \supset J^{[\ell+1]} \supset \cdots
$$
such that $J J^{[\ell]} + J^{[\ell]} J \subseteq J^{[\ell+1]}$ for all $\ell \geq 1$, and each element of $J^{[\ell]}$ has degree $\geq \ell$, that is, $\inf (J^{[\ell]})\geq \ell$. This implies that $\bigcap _\ell J^{[\ell]} =(0)$.
\item
For every $\ell \geq 1$, the DG $B$-module $J^{[\ell]}/J^{[\ell+1]}$ is semifree.
\end{enumerate}
We say that $B$ is \emph{DG smooth over $A$} (or simply \emph{$\varphi$ is DG smooth}) if it is DG quasi-smooth over $A$ and for all positive integers $\ell$, the semifree DG $B$-module $J^{[\ell]}/J^{[\ell+1]}$ has a finite semifree basis.\footnote{If $A \to B$ is a homomorphism of commutative rings, $B$ is projective over $A$, and $J/J^2$ is projective over $B$, then $B$ is smooth over $A$ in the sense of scheme theory. In this case, $J/J^2 \cong \Omega _{B/A}$ is the module of K\"{a}hler differentials.}
\end{defn}
\begin{para}
There exist other definitions of smoothness for DG algebras. For instance, a definition given by Kontsevich is found in~\cite{Kont} (alternatively, in~\cite[Section 18]{Yekutieli}). Also, another version of smoothness for DG algebras is introduced by To\"{e}n and Vezzosi in~\cite{TV} which Shaul~\cite{Shaul} proves is equivalent to Kontsevich's definition. However, our above version of smoothness is new and quite different from any existing definition of smoothness for DG algebras.
\end{para}
\begin{para}\label{para20210130a}
Let $\varphi\colon A\to B$ be a homomorphism of DG $R$-algebras. If $B$ is DG smooth over $A$, then for any integer $\ell \geq 1$,
there is a finite filtration
$$
J = L_0 \supset L_1 \supset L_2 \supset \cdots \supset L _s \supset L_{s+1}= J ^{[\ell]}
$$
of $J$ by its DG $B^e$-submodules, where for each $0 \leq i \leq s$ we have $L_i /L_{i+1} \cong B(-a_i)$ as DG $B^e$-modules, for some positive integer $a_i$.
\end{para}
\begin{para}\label{para20210106a}
We will show in Section~\ref{sec20210107a} that free extensions of divided power DG $R$-algebras and polynomial extensions of DG $R$-algebras are DG quasi-smooth. If these extensions are finite, then we have the DG smooth property; see Corollary~\ref{cor20210105a}.
There are several examples of DG smooth extensions besides free or polynomial extensions.
For instance, as one of the most trivial examples,
let $B = A \langle X \rangle /(X^2)$, where $|X|$ is even and $d^BX =0$.
If $A$ contains a field of characteristic $2$, then $B$ is DG smooth over $A$ by setting $J^{[2]} =(0)$.
\end{para}
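\begin{para}
To elaborate on the last example: since $2=0$ in $A$ and $B$ is graded commutative, the DG algebra $B^e$ is commutative. One checks that the diagonal ideal is $J = \xi B^e$, where $\xi = X^o \otimes 1 - 1^o \otimes X$, and that $J=J^{[1]}/J^{[2]}$ is a semifree DG $B$-module with basis $\{\xi\}$. Moreover, since $|X|$ is even, the multiplication rule of~\ref{para20201114e} gives
$$
\xi^2 = (X^2)^o \otimes 1 - 2\,(X^o \otimes X) + 1^o \otimes X^2 = 0
$$
because $X^2=0$ in $B$ and $2=0$ in $A$. Hence, $J^2=(0)$, so the filtration $J = J^{[1]} \supset J^{[2]} = (0)$ satisfies the conditions of Definition~\ref{defn20210105a}.
\end{para}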
\begin{para}\label{para20201126a}
Let $\varphi\colon A\to B$ be a DG quasi-smooth homomorphism, and use the notation of Definition~\ref{defn20210105a}. Let $N$ be a semifree DG $B$-module. For every positive integer $\ell$, consider the DG $B^e$-module $J/J^{[\ell]}$ as a DG $(B,B)$-bimodule. The tensor product $N \otimes _B J/J^{[\ell]}$ uses the left DG $B$-module structure of $J/J^{[\ell]}$ and in this situation, $N \otimes _B J/J^{[\ell]}$ is a right DG $B$-module by the right $B$-action on $J/J^{[\ell]}$.
\end{para}
The following lemma is useful in the next section.
\begin{lem}\label{lemma for HLtheorem}
Let $\varphi\colon A\to B$ be a DG quasi-smooth homomorphism, and use the notation of Definition~\ref{defn20210105a}. Suppose that $N$ is a semifree DG $B$-module such that $\operatorname{Ext}_B ^i (N,N \otimes _B J/J^{[\ell]})=0$ for all $i \geq 0$ and some $\ell \geq 1$. Then the natural inclusion $J ^{[\ell]} \hookrightarrow J$ induces an isomorphism $\operatorname{Ext}_B ^i (N,N \otimes _B J^{[\ell]}) \cong \operatorname{Ext}_B ^i (N,N \otimes _B J)$ for all $i\geq 1$.
\end{lem}
\begin{proof}
Applying $N \otimes _B - $ to the short exact sequence
$0 \to J^{[\ell]} \to J \to J/J^{[\ell]} \to 0$
of DG $B^e$-modules, we get an exact sequence
\begin{equation}\label{eq20201121a}
0 \to N \otimes _B J^{[\ell]} \to N \otimes _B J \to N \otimes _B J/J^{[\ell]} \to 0
\end{equation}
of DG $B$-modules, where the injectivity on the left comes from the fact that $N$ is free as an underlying graded $B$-module.
The assertion now follows from the long exact sequence of Ext obtained from applying $\Hom _B ( N , - )$ to~\eqref{eq20201121a}.
\end{proof}
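\begin{para}
In detail, applying $\Hom _B (N, -)$ to~\eqref{eq20201121a} yields the long exact sequence
{\small
$$
\cdots \to \operatorname{Ext}_B ^{i-1} (N, N \otimes _B J/J^{[\ell]}) \to \operatorname{Ext}_B ^{i} (N, N \otimes _B J^{[\ell]}) \to \operatorname{Ext}_B ^{i} (N, N \otimes _B J) \to \operatorname{Ext}_B ^{i} (N, N \otimes _B J/J^{[\ell]}) \to \cdots
$$}
and for every $i \geq 1$ the outer terms vanish by assumption, giving the asserted isomorphisms.
\end{para}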
The following result is used in the proof of Theorem~\ref{thm2021017a}.
\begin{thm}\label{Extzero}
Let $\varphi\colon A\to B$ be a DG smooth homomorphism, and use the notation of Definition~\ref{defn20210105a}. Let $N$ be a semifree DG $B$-module with $\operatorname{Ext}_B ^i (N,N)=0$ for all $i \geq 1$. Then for all $i \geq 0$ and all $\ell \geq 1$ we have $\operatorname{Ext}_B ^i (N,N \otimes _B J^{[\ell]}/J^{[\ell +1]})=0=\operatorname{Ext}_B ^i (N,N \otimes _B J/J^{[\ell]})$.
\end{thm}
\begin{proof}
We treat both of the equalities at the same time. Let $L$ denote $J^{[\ell]}/J^{[\ell +1]}$ or $J/J^{[\ell]}$ with $\ell\geq 1$.
By definition of DG smoothness, there is a finite filtration
$$
L = L_0 \supset L_1 \supset L_2 \supset \cdots \supset L _s \supset L_{s+1}=(0)
$$
of $L$ by its DG $B^e$-submodules, where for each $0 \leq i \leq s$ we have $L_i /L_{i+1} \cong B(-a_i)$ as DG $B^e$-modules, for some positive integer $a_i$.
We now prove by induction on $s$ that $\operatorname{Ext}_B ^i (N,N \otimes _B L)=0$ for all $i \geq 0$.
For the base case where $s=0$, we have $L \cong B(-a_0)$. Hence, $N\otimes _B L \cong N(-a_0)$.
Therefore, $\operatorname{Ext}_B ^i (N,N \otimes _B L) \cong \operatorname{Ext}_B ^i (N,N (-a_0)) = \operatorname{Ext} _B ^{i+ a_0} (N, N) = 0$ for all $i \geq 0$.
Assume now that $s \geq 1$. Since $N$ is a semifree DG $B$-module, tensoring the short exact sequence $0 \to L_{1} \to L \to B (-a_0) \to 0$ of DG $B^e$-modules (hence, DG $B$-modules) by $N$, we get a short exact sequence
\begin{equation}\label{eq20201116a}
0 \to N\otimes_BL_{1} \to N\otimes_BL \to N\otimes_BB (-a_0) \to 0
\end{equation}
of DG $B$-modules (hence, DG $B^e$-modules via $\pi_B$).
By the inductive hypothesis we have $\operatorname{Ext}_B ^i (N,N \otimes _B L_{1})=0$ for all $i\geq 0$. Also, $\operatorname{Ext}_B ^i (N,N \otimes _B B(-a_0)) = 0$ for all $i \geq 0$ by the base case $s=0$.
It follows from the long exact sequence of cohomology modules obtained from~\eqref{eq20201116a} that $\operatorname{Ext}_B ^i (N,N \otimes _B L) =0$ for all $i \geq 0$.
\end{proof}
The next result is also crucial in the proof of Theorem~\ref{thm2021017a}.
\begin{thm}\label{HLtheorem}
Let $\varphi\colon A\to B$ be a DG quasi-smooth homomorphism, and use the notation of Definition~\ref{defn20210105a}. Let $N$ be a bounded below semifree DG $B$-module with $\operatorname{Ext}_B ^i (N,N \otimes _B J/J^{[\ell]})=0$ for all $i \geq 0$ and all $\ell\geq 1$. Then $\operatorname{Ext}_B ^i (N,N \otimes _B J)=0$ for all $i\geq 1$.
\end{thm}
The proof of this result needs the machinery of homotopy limits, which we discuss in the next section. We give the proof of this theorem in~\ref{para20201122d} below.
\section{Homotopy limits and proof of Theorem~\ref{HLtheorem}}\label{sec20201126b}
The entire section is devoted to the proof of Theorem~\ref{HLtheorem}.
The notion of homotopy limits, which we define in~\ref{para20201122b}, plays an essential role in the proof of the following result which is a key to the proof of Theorem~\ref{HLtheorem}.
\begin{thm}\label{limit}
Let $M$ and $N$ be DG $A$-modules with $M$ bounded below and $N$ semifree. Assume that there is a descending sequence
$$
M = M^0 \supseteq M^1\supseteq M^2 \supseteq \cdots \supseteq M ^{\ell} \supseteq M ^{\ell +1} \supseteq \cdots
$$
of DG $A$-submodules of $M$ that satisfies the following conditions:
\begin{enumerate}[\rm(1)]
\item $\lim_{\ell\to \infty}\inf (M^{\ell})=\infty$; and
\item there is an integer $k$ such that the natural maps $M ^{\ell} \hookrightarrow M$ induce isomorphisms
$\operatorname{Ext}_A ^k ( N, M^{\ell}) \cong \operatorname{Ext}_A ^k ( N, M)$ for all $\ell \geq 1$.
\end{enumerate}
Then $\operatorname{Ext}_A ^k ( N, M) = 0$.
\end{thm}
By~\ref{para20201124c}, Theorem~\ref{limit} can be restated as follows. We prove this result in~\ref{para20201122c}.
\begin{thm}\label{limit2}
Let $M$ and $N$ be DG $A$-modules with $M$ bounded below and $N$ semifree. Assume that there is a descending sequence
$$M = M^0 \supseteq M^1\supseteq M^2 \supseteq \cdots \supseteq M ^{\ell} \supseteq M ^{\ell +1} \supseteq \cdots$$
of DG $A$-submodules of $M$ that satisfies the following conditions:
\begin{enumerate}[\rm(1)]
\item $\lim_{\ell\to \infty}\inf (M^{\ell})=\infty$; and
\item for all positive integers $\ell$, the natural maps $M ^{\ell} \hookrightarrow M$ induce isomorphisms
$\Hom _{\mathcal{K}(A)}( N, M^{\ell}) \cong \Hom _{\mathcal{K}(A)}( N, M)$.
\end{enumerate}
Then $\Hom _{\mathcal{K}(A)} ( N, M) = 0$.
\end{thm}
\begin{para}\label{para20201122v}
For a family $\{ M ^{\ell} \mid \ell \in \mathbb{N} \}$ of countably many DG $A$-modules,
the \emph{product} (or the \emph{direct product}) $P = \prod _{\ell \in \mathbb{N}} M^{\ell}$ in $\mathcal{C}(A)$ is constructed as follows:
the DG $A$-module $P$ has a $\mathbb{Z}$-graded structure
$P_i = \prod _{\ell \in \mathbb{N}} \left(M ^{\ell}\right)_i$ for all $i\in \mathbb{Z}$ with the differential that is given by the formula
$$\partial_i ^P \left( (m^{\ell})_{\ell \in \mathbb{N}} \right) = \left( \partial_i ^{M^{\ell}}(m^{\ell}) \right) _{\ell \in \mathbb{N}}$$
for all $(m^{\ell})_{\ell \in \mathbb{N}} \in P_i$.
By definition, we have
$$\Hom _{\mathcal{C}(A)} ( - , P) \cong \prod _{\ell \in \mathbb{N}} \Hom _{\mathcal{C}(A)} ( - , M^{\ell})$$
as functors on $\mathcal{C}(A)$.
It can be seen that $P$ is also a product in $\mathcal{K}(A)$. Hence,
\begin{equation}\label{product}
\Hom _{\mathcal{K}(A)} ( - , P) \cong \prod _{\ell \in \mathbb{N}} \Hom _{\mathcal{K}(A)} ( - , M^{\ell})
\end{equation}
as functors on $\mathcal{K}(A)$.
\end{para}
Next, we define the notion of homotopy limits; references on this include~\cite{avramov:dgha, keller:ddgc}.
\begin{para}\label{para20201122b}
Assume that $\{M ^{\ell} \mid \ell \in \mathbb{N}\}$ is a family of DG $A$-modules such that
$$ M^1\supseteq M^2 \supseteq \cdots \supseteq M ^{\ell} \supseteq M ^{\ell +1} \supseteq \cdots.$$
Then, the \emph{homotopy limit} $L= \operatorname{holim} M^{\ell}$ is defined by the triangle in $\mathcal{K}(A)$
\begin{equation}\label{holimit}
L \to P \xrightarrow{\varphi} P \to \mathsf{\Sigma} L
\end{equation}
where $P = \prod _{\ell \in \mathbb{N}} M^{\ell}$ is the product introduced in~\ref{para20201122v} and $\varphi$ is defined by
$$
\varphi \left( (m^{\ell})_{\ell \in \mathbb{N}} \right) = \left( m^{\ell} - m^{\ell +1} \right)_{\ell \in \mathbb{N}}.
$$
Note that~\eqref{holimit} is a triangle in the derived category $\mathcal{D}(A)$ as well.
Let $N$ be a semifree DG $A$-module.
Since $N$ is $\mathcal{K}(A)$-projective, we note that $\Hom _{\mathcal{D}(A)} (N, - ) = \Hom _{\mathcal{K}(A)} (N, - )$ on the object set of $\mathcal{C}(A)$.
Applying the functor $\Hom _{\mathcal{K}(A)} (N, - )$ to the triangle~\eqref{holimit}, by~\ref{para20201122v} we have a triangle
{\small
$$
\Hom _{\mathcal{K}(A)} (N, L) \to \prod _{\ell \in \mathbb{N}} \Hom _{\mathcal{K}(A)} (N, M^{\ell} ) \to \prod _{\ell \in \mathbb{N}} \Hom _{\mathcal{K}(A)} (N, M^{\ell} ) \to \mathsf{\Sigma} \Hom _{\mathcal{K}(A)} (N, L)
$$}
in $\mathcal{D}(R)$.
Therefore, $\Hom _{\mathcal{K}(A)} (N , \operatorname{holim} M^{\ell}) \cong \operatorname{holim} \Hom _{\mathcal{K}(A)} (N , M^{\ell})$ in $\mathcal{D}(R)$.
\end{para}
\begin{lem}\label{lem20201122a}
Under the assumptions of Theorem~\ref{limit2} we have $\operatorname{H} (\operatorname{holim} M^{\ell}) =0$, that is, $\operatorname{holim} M^{\ell}$ is zero in the derived category $\mathcal{D}(A)$.
\end{lem}
\begin{proof}
Let $L = \operatorname{holim} M^{\ell}$.
The triangle~\eqref{holimit} gives the long exact sequence
$$
\cdots \to \operatorname{H}_i(L) \to \prod _{\ell \in \mathbb{N}} \operatorname{H}_i(M^{\ell} ) \xrightarrow{\operatorname{H}(\varphi)} \prod _{\ell \in \mathbb{N}} \operatorname{H}_i(M^{\ell}) \to \operatorname{H}_{i-1} (L) \to \cdots
$$
of homology modules. Fix an integer $i$ and note that $\operatorname{H}_i (M^{\ell}) =0$ if $\inf (M^{\ell}) > i$.
Hence, by Condition (1) we have $\operatorname{H}_i(M^{\ell}) =0$ for almost all $\ell \in \mathbb{N}$. Thus, the product $\prod _{\ell \in \mathbb{N}} \operatorname{H}_i(M^{\ell} )$ has only finitely many nonzero factors.
Hence, $\operatorname{H}(\varphi)$, which maps $(x^{\ell})_{\ell \in \mathbb{N}}$ to $(x^{\ell} - x^{\ell +1})_{\ell \in \mathbb{N}}$, is bijective; indeed, its inverse sends $(y^{\ell})_{\ell \in \mathbb{N}}$ to $(x^{\ell})_{\ell \in \mathbb{N}}$ with $x^{\ell} = \sum_{k \geq \ell} y^{k}$, where each $y^k$ is replaced by its image in $\operatorname{H}_i(M^{\ell})$ and the sum is finite. Therefore, $\operatorname{H}_i(L)=0$ for all $i \in \mathbb{Z}$, as desired.
\end{proof}
\begin{para}{\emph{Proof of Theorem~\ref{limit2}.}}\label{para20201122c}
Since $N$ is semifree, by~\ref{para20201122b} and Condition (2)
\begin{align*}
\Hom _{\mathcal{K}(A)} (N , M)&\cong \operatorname{holim} \Hom _{\mathcal{K}(A)} (N , M^{\ell})\\&\cong \Hom _{\mathcal{K}(A)} (N , \operatorname{holim} M^{\ell})\\ &\cong \Hom _{\mathcal{D}(A)} (N , \operatorname{holim} M^{\ell}).
\end{align*}
Now the assertion follows from Lemma~\ref{lem20201122a}.\qed
\end{para}
\begin{para}{\emph{Proof of Theorem~\ref{HLtheorem}.}}\label{para20201122d}
It follows from Lemma~\ref{lemma for HLtheorem} that $\operatorname{Ext}_B ^i (N,N \otimes _B J^{[\ell]}) \cong \operatorname{Ext}_B ^i (N,N \otimes _B J)$ for all $i\geq 1$ and all $\ell\geq 1$.
Note that $\{\inf \left(N \otimes _B J ^{[\ell]}\right)\mid \ell\in\mathbb{N}\}$ is an increasing sequence of integers that diverges to $\infty$. Now, the assertion follows from Theorem~\ref{limit}.\qed
\end{para}
\section{Na\"ive liftability and proof of Theorem~\ref{thm2021017a}}\label{sec20201126c}
The notion of na\"ive liftability was introduced by the authors in~\cite{NOY} along simple free extensions of DG algebras. It is shown in~\cite[Theorem 6.8]{NOY} that along such extension $A\to A\langle X\rangle$ of DG algebras, weak liftability in the sense of~\cite[Definition 5.1]{NOY} (when $|X|$ is odd) and liftability (when $|X|$ is even) of DG modules are equivalent to na\"ive liftability. In this section, we study the na\"ive lifting property of DG modules in a more general setting using the diagonal ideal. We give the proof of Theorem~\ref{thm2021017a} in~\ref{para20201125a}.
\begin{para}\label{para20201113a}
Let $A\to B$ be a homomorphism of DG $R$-algebras such that $B$ is free as an underlying graded $A$-module. Let $(N,\partial^N)$ be a semifree DG $B$-module, and let $N |_A$ denote $N$ regarded as a DG $A$-module via $A\to B$.
Since $B$ is free as an underlying graded $A$-module, $N |_A$ is a semifree DG $A$-module.
Note that $(N |_A \otimes _A B,\partial)$ is a DG $B$-module with
$\partial (n \otimes b) = \partial ^N (n) \otimes b + (-1)^{|n|} n \otimes d^B (b)$ for all homogeneous elements $n \in N$ and $b \in B$.
Since $N |_A$ is a semifree DG $A$-module, $N |_A \otimes _A B$ is a semifree DG $B$-module, and we have a (right) DG $B$-module epimorphism
$\pi _N\colon N |_A \otimes _A B \to N$
defined by $\pi_N(n \otimes b)=nb$.
\end{para}
\begin{prop}\label{para20201114f}
Let $A\to B$ be a homomorphism of DG $R$-algebras such that $B$ is free as an underlying graded $A$-module. Every semifree DG $B$-module $N$ fits into the following short exact sequence of DG $B$-modules:
\begin{equation}\label{basic SES}
0 \to N \otimes _BJ \to N |_A \otimes _A B \xrightarrow{\pi_N} N \to 0.
\end{equation}
\end{prop}
\begin{proof}
Since the left DG $B$-module $B^o$ is the right DG $B$-module $B$,
we have an isomorphism $N \otimes _B B ^o \cong N$ of right DG $A$-modules such that $x\otimes b^o\mapsto xb$ for all $x\in N$ and $b\in B$.
Hence, there are isomorphisms
$N \otimes _B B^e =N \otimes _B (B^o \otimes _A B ) \cong (N \otimes _B B^o) \otimes _A B \cong N |_A \otimes _A B$ such that $x\otimes(b_1^o\otimes b_2)\mapsto xb_1\otimes b_2$ for all $x\in N$ and $b_1,b_2\in B$.
Therefore, we get the commutative diagram
\begin{equation}\label{eq20201207s}
\xymatrix{
N \otimes _B B^e\ar[rr]^{\id_N\otimes \pi_B}\ar[d]_{\cong}&& N \otimes _B B \ar[d]^{\cong}\\
N |_A \otimes _A B\ar[rr]^{\pi_N}&&N
}
\end{equation}
of DG $B$-module homomorphisms.
Thus, by applying $N \otimes _B - $ to the short exact sequence~\eqref{eq20201114a}, we obtain the short exact sequence~\eqref{basic SES}
in which injectivity on the left follows from the fact that $N$ is free as an underlying graded $B$-module.
\end{proof}
\begin{para}\label{para20201206b}
To clarify, note that the DG algebra homomorphism $\pi_B\colon B^e\to B$ defined in~\ref{para20201114d} coincides with the DG algebra homomorphism $\pi_B\colon B |_A \otimes _A B \to B$ defined in~\ref{para20201113a}. In fact, as we mentioned in the proof of Proposition~\ref{para20201114f} (the left column in~\eqref{eq20201207s} with $N=B$),
we have the isomorphism $B^e\cong B |_A \otimes _A B$.
\end{para}
We remind the reader of the definition of na\"ive liftability from the introduction.
\begin{defn}\label{defn20201125a}
Let $A\to B$ be a homomorphism of DG $R$-algebras such that $B$ is free as an underlying graded $A$-module. A semifree DG $B$-module $N$ is {\it na\"ively liftable} to $A$ if
the map $\pi _N$ is a split DG $B$-module epimorphism, i.e., there exists a DG $B$-module homomorphism $\rho\colon N \to N |_A \otimes _A B$ that satisfies the equality $\pi _N \rho = \id _N$. Equivalently, $N$ is na\"ively liftable to $A$ if $\pi_N$ has a right inverse in the abelian category of right DG $B$-modules.
\end{defn}
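\begin{para}
For instance, if a semifree DG $B$-module $N$ is liftable to $A$, say $N \cong M \otimes _A B$ for a DG $A$-module $M$, then $N$ is na\"ively liftable to $A$: the map $\rho\colon N \to N |_A \otimes _A B$ defined by $\rho(m \otimes b) = (m \otimes 1) \otimes b$ for all homogeneous elements $m\in M$ and $b \in B$ is a DG $B$-module homomorphism satisfying $\pi _N \rho = \id _N$.
\end{para}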
If a semifree DG $B$-module $N$ is na\"ively liftable to $A$, then the short exact sequence~\eqref{basic SES} splits. This implies the following result.
\begin{cor}\label{naive definition}
Let $A\to B$ be a homomorphism of DG $R$-algebras such that $B$ is free as an underlying graded $A$-module. If a semifree DG $B$-module $N$ is na\"ively liftable to $A$, then $N$ is a direct summand of the DG $B$-module $N |_A \otimes _A B$ which is liftable to $A$.
\end{cor}
We use the following result in the proof of Theorem~\ref{thm2021017a}.
\begin{thm}\label{extnaive}
Let $A\to B$ be a homomorphism of DG $R$-algebras such that $B$ is free as an underlying graded $A$-module. If $N$ is a semifree DG $B$-module such that $\operatorname{Ext} _B ^1 (N, N \otimes _BJ)=0$, then $N$ is na\"ively liftable to $A$.
\end{thm}
\begin{proof}
Since $N$ is a semifree DG $B$-module, it follows from our Ext-vanishing assumption and~\cite[Theorem A]{NassehDG} that the short exact sequence~\eqref{basic SES} splits. Hence, $N$ is na\"ively liftable to $A$, as desired.
\end{proof}
\begin{para}{\emph{Proof of Theorem~\ref{thm2021017a}.}}\label{para20201125a}
By Theorem~\ref{Extzero} we have $\operatorname{Ext}_B ^i (N,N \otimes _B J/J^{[\ell]})=0$ for all $i \geq 0$ and all $\ell\geq 1$. It follows from Theorem~\ref{HLtheorem} that $\operatorname{Ext}_B ^{i} (N,N \otimes _B J)=0$ for all $i\geq 1$. Hence, by Theorem~\ref{extnaive}, $N$ is na\"{\i}vely liftable to $A$. The fact that $N$ is a direct summand of a DG $B$-module that is liftable to $A$ was already proved in Corollary~\ref{naive definition}.\qed
\end{para}
\section{Diagonal ideals in free and polynomial extensions and proof of Main Theorem}\label{sec20210107a}
This section is devoted to the properties of diagonal ideals in free and polynomial extensions of a DG $R$-algebra $A$, with the aim of proving that such extensions are DG (quasi-)smooth over $A$; see Corollary~\ref{cor20210105a}. Then we will give the proof of our Main Theorem in~\ref{para20210107s}.
First, we focus on free extensions of DG algebras.
\begin{para}\label{para20201114a}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. It follows from~\cite[Proposition 1.7.6]{GL} that $B$ is a divided power DG $R$-algebra.
Also, $B^o$ is a divided power DG $R$-algebra with the divided power structure $(b^o)^{(i)} = (b^{(i)})^o$, for all $b \in B$ and $i\in \mathbb{N}$.
\end{para}
The next lemma indicates that, in the setting of~\ref{para20201114a}, the map $\pi _B$ is a homomorphism of divided power algebras in the sense of~\cite[Definition 1.7.3]{GL}.
\begin{lem}\label{lem20201115a}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. The DG algebra homomorphism $\pi _B$ preserves the divided powers.
\end{lem}
\begin{proof}
Note that $B^e$ is a divided power DG $R$-algebra by setting
$$
(b_1^o \otimes b_2)^{(i)} = \begin{cases} (b_1^i)^o \otimes b_2 ^{(i)} & \text{if $|b_1|, |b_2|$ are even and $|b_2| > 0$ } \\ 0 & \text{if $|b_1|, |b_2|$ are odd } \end{cases}
$$
for all $b_1^o \otimes b_2\in B^e$ of positive even degree and all integers $i\geq 2$.
By the properties of divided powers in~\ref{para20200329a}, for such $b_1^o \otimes b_2\in B^e$ we have
\begin{align*}
\pi_B \left( (b^o_1 \otimes b_2) ^{(i)} \right)&=\begin{cases} \pi_B\left((b_1^i)^o \otimes b_2 ^{(i)}\right) & \text{if $|b_1|, |b_2|$ are even and $|b_2| > 0$ } \\ 0 & \text{if $|b_1|, |b_2|$ are odd } \end{cases}\\
&=\begin{cases} b_1^i b_2 ^{(i)} & \text{if $|b_1|, |b_2|$ are even and $|b_2| > 0$ } \\ 0 & \text{if $|b_1|, |b_2|$ are odd } \end{cases}\\
&=(b_1 b_2) ^{(i)}\\
&=\left(\pi_B (b^o_1 \otimes b_2) \right)^{(i)}.
\end{align*}
Now, the assertion follows from the fact that every element of $B^e$ is a finite sum of elements of the form
$b^o_1 \otimes b_2$.
\end{proof}
\begin{para}\label{para20201126c}
In Lemma~\ref{lem20201115a}, we assume that $A$ is a divided power DG $R$-algebra to show that $J$ is closed under taking divided powers.
Note that not every element of $J$ is a monomial, so to define the ``powers'' of non-monomial elements we need to consider divided powers.
For example, for a positive integer $\ell$ we cannot define $(X_i+a)^{(\ell)}$ with $a \in A$ unless we assume that $A$ is a divided power DG $R$-algebra.
\end{para}
\begin{para}\label{para20201115a}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. For $1\leq i \leq n$, the \emph{diagonal of the variable $X_i$} is an element of $B^e$ which is defined by the formula
\begin{equation}\label{xi}
\xi _i = X_i^o \otimes 1 - 1^o \otimes X_i.
\end{equation}
Since $\pi _B (\xi _i) =0$, we have that $\xi _i \in J$ for all $i$.
Note that if $|X_i|$ is odd, then $\xi _i^2=0$. From the basic properties of divided powers we have
\begin{equation}\label{xi(m)}
\xi _i^{(m)} = \sum _{j=0}^m (-1)^{m-j} \left(X_i^{(j)}\right)^o \otimes X_i^{(m-j)}
\end{equation}
for all $m \in \mathbb{N}$, considering the conventions that $\xi _i ^{(0)}=1^o\otimes 1$ and $\xi_i^{(m)}=0$ for all $m\geq 2$ if $|\xi_i|=|X_i|$ is odd. Note that $\xi_i^{(1)}=\xi_i$ by definition.
Since, by Lemma~\ref{lem20201115a}, the map $\pi _B$ preserves the divided powers, we see that $\xi _i^{(m)} \in J$ for all $i$ and $m\in \mathbb{N}$.
Let $\Omega=\{ \xi _i ^{(m)} \mid 1 \leq i \leq n, \ m\in \mathbb{N}\}$.
\end{para}
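\begin{para}
For example, when $|X_i|$ is even, the smallest nontrivial case of~\eqref{xi(m)} reads
$$
\xi _i^{(2)} = \left(X_i^{(2)}\right)^o \otimes 1 - X_i^o \otimes X_i + 1^o \otimes X_i^{(2)},
$$
and one verifies directly that $\xi _i^{(2)} \in J$:
$$
\pi _B \left(\xi _i^{(2)}\right) = X_i^{(2)} - X_i^2 + X_i^{(2)} = 2X_i^{(2)} - X_i^2 = 0
$$
by the divided power identity $X_i^2 = 2X_i^{(2)}$ from~\ref{para20200329a}.
\end{para}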
\begin{lem}\label{lem20201116a}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. The diagonal ideal $J$ is generated by $\Omega$, that is, $J= \Omega B^e$.
\end{lem}
The statement of this lemma is equivalent to the equality $J= B \Omega B$.
\begin{proof}
Since $\Omega \subseteq J$, we have $J' := \Omega B^e\subseteq J$.
Now we show $J\subseteq J'$.
We claim that for all $1\leq i\leq n$ with $n\leq \infty$ and all $m\in \mathbb{N}$ we have the equality
\begin{equation}\label{ximb}
(X_i ^{(m)})^o \otimes 1 \equiv 1^o \otimes X_i ^{(m)} \pmod{J'}.
\end{equation}
To prove this claim, we proceed by induction on $m\in \mathbb{N}$. For the base case $m=1$, since $\xi_i\in \Omega\subseteq J'$, we have $X_i^o \otimes 1 \equiv 1^o \otimes X_i \pmod{J'}$ for all $1\leq i\leq n$ with $n\leq \infty$. Note that for all $1\leq i\leq n$ with $n\leq \infty$, we have $\sum _{j=0}^m (-1)^{m-j} (X_i^{(j)})^o \otimes X_i^{(m-j)} = \xi _i^{(m)} \equiv 0 \pmod{J'}$. Hence, we obtain a series of congruences modulo $J'$ as follows:
\begin{eqnarray*}
(X_i ^{(m)})^o \otimes 1 + (-1)^m (1^o \otimes X_i ^{(m)})
&\equiv& -\sum _{j=1}^{m -1} (-1)^{m-j} (X_i^{(j)})^o \otimes X_i^{(m-j)} \\
&=& -\sum _{j=1}^{m -1} (-1)^{m-j} \left((X_i^{(j)})^o\otimes 1\right) \left(1^o\otimes X_i^{(m-j)}\right) \\
&\equiv& -\sum _{j=1}^{m -1} (-1)^{m-j} \left(1^o\otimes X_i^{(j)}\right) \left(1^o\otimes X_i^{(m-j)}\right) \\
&\equiv& -\sum _{j=1}^{m -1} (-1)^{m-j} (1^o \otimes X_i^{(j)} X_i^{(m-j)}) \\
&\equiv&-\sum _{j=1}^{m -1} (-1)^{m-j} \binom{m}{j} (1^o \otimes X_i^{(m)})
\end{eqnarray*}
where the third step uses the inductive hypothesis.
The claim now follows from the well-known equality $\sum _{j=1}^{m -1} (-1)^{m-j} \binom{m}{j}=-1-(-1)^m$.
Now let $\beta\in J\subseteq B^e$ be an arbitrary element. It follows from~\eqref{ximb} that there exists an element $b_{\beta} \in B$ such that $ \beta \equiv 1^o \otimes b_{\beta} \pmod{J'}$.
Since $\pi _B (\beta)=0$, we have
$\pi _B (1^o \otimes b_{\beta})= b_{\beta} = 0$. Hence, $\beta \equiv 0 \pmod{J'}$, which means that $\beta\in J'$. This implies that $J \subseteq J'$, as desired.
\end{proof}
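\begin{para}
To illustrate the inductive step in the proof of Lemma~\ref{lem20201116a}, consider the case $m=2$ with $|X_i|$ even. Since $\xi _i^{(2)} \equiv 0 \pmod{J'}$, we have
$$
(X_i^{(2)})^o \otimes 1 + 1^o \otimes X_i^{(2)} \equiv X_i^o \otimes X_i = (X_i^o \otimes 1)(1^o \otimes X_i) \equiv (1^o \otimes X_i)^2 = 1^o \otimes X_i^2 = 2\,(1^o \otimes X_i^{(2)})
$$
modulo $J'$, where the second congruence uses the base case. Hence, $(X_i^{(2)})^o \otimes 1 \equiv 1^o \otimes X_i^{(2)} \pmod{J'}$, as~\eqref{ximb} predicts.
\end{para}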
\begin{lem}\label{basis}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. The set
$$
\mathrm{Mon}(\Omega)=
\begin{cases}
\{ \xi_1^{(m_1)}\cdots \xi_n^{(m_n)} \ | \ \xi_i ^{(m_i)} \in \Omega \ (1 \leq i \leq n) \} \cup \{1^o \otimes 1 \}&\text{if}\ n<\infty\\
\{ \xi_{i_1}^{(m_{i_1})}\cdots \xi_{i_t}^{(m_{i_t})} \ | \ \xi_{i_j} ^{(m_{i_j})} \in \Omega \ (i_j\in \mathbb{N}, t<\infty) \} \cup \{1^o \otimes 1 \}&\text{if}\ n=\infty
\end{cases}
$$
is a basis for the underlying graded free $B^o$-module $B^e$.
Also, the diagonal ideal $J$ is a free graded $B^o$-module with the graded basis
$\mathrm{Mon} (\Omega) \setminus \{1^o \otimes 1 \}$.
\end{lem}
\begin{proof}
Recall from~\ref{para20201206d} that the underlying graded $B^o$-module $B^e$ is free with the basis $\mathrm{Mon}(\Gamma)$. We prove the lemma for $n<\infty$; the case of $n=\infty$ is similar.
Assume that $n<\infty$. For an integer $\ell \geq 0$ let
\begin{gather*}
\mathrm{Mon} _{\ell} (\Gamma ) = \{ (1^o \otimes X_1^{(m_1)}X_2^{(m_2)}\cdots X_n^{(m_n)})\in \mathrm{Mon}(\Gamma) \mid m_1 + \cdots + m_n = \ell \}\\
\mathrm{Mon} _{\ell} (\Omega ) = \{ \xi_1^{(m_1)}\xi_2^{(m_2)}\cdots \xi_n^{(m_n)}\in \mathrm{Mon}(\Omega) \mid m_1 + \cdots + m_n = \ell \}.
\end{gather*}
Also let $F_{\ell} (B^e)$ be the free $B^o$-submodule of $B^e$ generated by $\bigcup _{0 \leq i \leq \ell} \mathrm{Mon} _{i} (\Gamma )$, i.e.,
$$
F_{\ell} (B^e) =\left( \bigcup _{0 \leq i \leq \ell} \mathrm{Mon} _{i} (\Gamma ) \right) B^o = \sum _{i=0} ^{\ell} \mathrm{Mon} _{i} (\Gamma) B^o.
$$
Then the family $\{ F_{\ell} (B^e) \mid \ell \geq 0\}$ is a filtration of the $B^o$-module $B^e$ satisfying the following properties:
\begin{enumerate}[\rm(1)]
\item $ B^o \otimes 1=F_{0} (B^e) \subset F_{1} (B^e) \subset \cdots \subset F_{\ell} (B^e) \subset F_{\ell +1 } (B^e) \subset \cdots \subset B^e$;
\item $\bigcup _{\ell \geq 0} F_{\ell} (B^e) = B^e$;
\item $F_{\ell} (B^e) F_{\ell '} (B^e) \subseteq F_{\ell + \ell '} (B^e)$; and
\item each $F_{\ell} (B^e) /F_{\ell -1} (B^e)$ is a free $B^o$-module with free basis $\mathrm{Mon} _{\ell } (\Gamma)$.
\end{enumerate}
Regarding $B^e$ as a $B^o$-module, by~\eqref{xi(m)} for all $1\leq i\leq n$ and $m\geq 1$ we have
$$
\xi _i^{(m)} = (-1)^m(1^o \otimes X_i^{(m)}) + \sum _{j=1}^m (-1)^{m-j} \left(1^o \otimes X_i^{(m-j)}\right)\left(X_i^{(j)}\right)^o.
$$
Hence,
$\xi _i^{(m)} - (-1)^m(1^o \otimes X_i^{(m)})\in F_{m-1}(B^e)$.
Therefore, if $\xi_1^{(m_1)}\xi_2^{(m_2)}\cdots \xi_n^{(m_n)} \in \mathrm{Mon} _{\ell} (\Omega)$, then we get a sequence of congruences modulo $F_{\ell -1} (B^e)$ as follows:
\begin{eqnarray*}
\xi_1^{(m_1)}\xi_2^{(m_2)}\cdots \xi_n^{(m_n)}
&\equiv & (-1)^{m_1} (1^o \otimes X_1^{(m_1)})\xi_2^{(m_2)}\cdots \xi_n^{(m_n)} \\
&\equiv& \cdots \\
&\equiv& (-1)^{m_1+\cdots + m_n} (1^o \otimes X_1^{(m_1)}X_2^{(m_2)}\cdots X_n^{(m_n)})\\
&=& (-1)^{\ell} (1^o \otimes X_1^{(m_1)}X_2^{(m_2)}\cdots X_n^{(m_n)}).
\end{eqnarray*}
Thus, $\mathrm{Mon} _{\ell} (\Omega)$ is a basis for the $B^o$-module $F_{\ell} (B^e) /F_{\ell -1} (B^e)$.
By induction on $\ell$ we can see that $F_{\ell} (B^e)$ itself is also a free $B^o$-module with basis
$\bigcup _{0 \leq i \leq \ell} \mathrm{Mon} _{i} (\Omega)$.
In particular, every finite subset of $\mathrm{Mon} (\Omega)$ is linearly independent over $B^o$.
Since $\bigcup _{\ell \geq 0} F_{\ell} (B^e) = B^e$, the set $\mathrm{Mon} (\Omega)$ generates $B^e$ as a $B^o$-module.
Therefore, $\mathrm{Mon} (\Omega)$ is a basis of the free $B^o$-module $B^e$.
The fact that the diagonal ideal $J$ is free over $B^o$ with the basis
$\mathrm{Mon} (\Omega) \setminus \{1^o \otimes 1 \}$ follows from the short exact sequence~\eqref{eq20201114a}.
\end{proof}
\begin{thm}\label{free extension}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. The DG algebra homomorphism $B^o \to B^e$ defined by $b^o\mapsto b^o\otimes 1$ is a free extension of DG $R$-algebras, i.e.,
$B^e =B^o \langle \xi _1, \xi_2, \ldots , \xi_n \rangle$ with $n\leq \infty$.
\end{thm}
\begin{proof}
By Lemma~\ref{basis}, the set $\mathrm{Mon} (\Omega)$ is a basis for the underlying graded free $B^o$-module $B^e$.
To complete the proof, it suffices to show that for each $1\leq i\leq n$ with $n\leq \infty$ the element $d^{B^e} (\xi _i)$ belongs to $B^o \langle \xi _1, \xi_2, \ldots , \xi_{i-1} \rangle$ and is a cycle.
To see this, note that we have the equalities $$d^{B^e} (\xi _i) = (d^{B}(X_i))^o \otimes 1 - 1^o \otimes d^{B}(X_i)=(d^{A^{(i)}}(X_i))^o \otimes 1 - 1^o \otimes d^{A^{(i)}}(X_i)$$
in which $d^{A^{(i)}}(X_i) \in A^{(i-1)}$ is a cycle.
Applying Lemma~\ref{basis} to $A^{(i-1)}$, we see that
$(A^{(i-1)})^e$ is generated by $\{ \xi _1^{(m_1)} \ldots \xi _{i-1}^{(m_{i-1})}\mid m_j \geq 0 \ (1\leq j \leq i-1)\}$ as an $(A^{(i-1)})^o$-module.
Since $(d^{A^{(i)}}(X_i))^o \otimes 1 - 1^o \otimes d^{A^{(i)}}(X_i) \in (A^{(i-1)})^e$ and $d^{A^{(i)}}(X_i) \in A^{(i-1)}$ is a cycle, $d^{B^e} (\xi _i)\in(A^{(i-1)})^o\langle \xi _1, \ldots , \xi _{i-1}\rangle \subseteq B^o\langle \xi _1, \ldots , \xi _{i-1}\rangle$ and is a cycle.
\end{proof}
Our next move is to define the notion of divided powers of the diagonal ideal $J$.
\begin{para}\label{para20201116a}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. Recall from Lemma~\ref{lem20201116a} that $J= \Omega B^e$. For an integer $\ell\geq 0$ let
$$
\mathrm{Mon}_{\geq \ell}(\Omega) =
\begin{cases}
\{ \xi_1^{(m_1)}\cdots \xi_n^{(m_n)}\in \mathrm{Mon}(\Omega)\mid \ m_1+\cdots +m_n \geq \ell \}&\text{if}\ n<\infty\\
\{ \xi_{i_1}^{(m_{i_1})}\cdots \xi_{i_t}^{(m_{i_t})}\in \mathrm{Mon}(\Omega)\mid \ m_{i_1}+\cdots +m_{i_t} \geq \ell \}&\text{if}\ n=\infty.
\end{cases}
$$
This is the set of monomials that are (symbolically) products of at least $\ell$ variables.
By Lemma~\ref{basis}, the set $\mathrm{Mon}_{\geq 1}(\Omega)=\mathrm{Mon}(\Omega)\backslash \{1^o\otimes 1\}$ is a basis for $J$ as a free $B^o$-module.
We define the \emph{$\ell$-th power of $J$} to be $J ^{(\ell)} := \mathrm{Mon}_{\geq \ell}(\Omega)B^e$. Note that $J=J^{(1)}$ and there is a descending sequence
$$B^e \supset J \supset J ^{(2)} \supset \cdots \supset J ^{(\ell)} \supset J ^{(\ell +1)} \supset \cdots$$
of DG ideals in $B^e$; see Lemma~\ref{lem20201116b} below.
\end{para}
\begin{lem}\label{lem20201116b}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. For every $\ell \geq 0$ the ideal $J ^{(\ell)}$ is a DG ideal of $B^e$ and
we have
$J J^{(\ell)} = J^{(\ell)}J \subseteq J ^{(\ell +1)}$.
Moreover, the quotient $J ^{(\ell)}/J^{(\ell +1)}$ is a DG $B$-module.
\end{lem}
\begin{proof}
We prove the assertion for $n<\infty$. The case where $n=\infty$ is treated similarly by using the appropriate notation.
By Theorem \ref{free extension}, for all $1 \leq i \leq n$ and $m_1, \ldots , m_n \geq 0$ we have
$$
\xi _i\ (\xi_1^{(m_1)}\xi_2^{(m_2)}\cdots \xi_n^{(m_n)}) = (-1)^{|\xi_i|\left(\sum_{j=1}^{i-1} m_j|\xi_j|\right)}\ (m_i+1)\ \xi_1^{(m_1)}\cdots \xi_i^{(m_i +1)}\cdots \xi_n^{(m_n)}.
$$
This shows that $J J ^{(\ell)} \subseteq J ^{(\ell +1)}$. Similarly, $J^{(\ell)}J \subseteq J ^{(\ell +1)}$.
To prove that $J ^{(\ell)}$ is a DG ideal, we show that $d^{B^e} (J ^{(\ell)}) \subseteq J ^{(\ell)}$.
Recall, from the definition, that $J$ is the kernel of the DG $R$-algebra homomorphism $\pi_B$. Since $\pi_B$ is a chain map, we have $\pi_B( d^{B^e} (J ))=d^B(\pi_B(J))=0$. Hence, $d^{B^e} (J) \subseteq J$.
Note that $d^{B^e} (\xi_i^{(m)})\in J^{(m)}$ for all $1\leq i\leq n$ and $m\geq 1$.
In fact, we have $d^{B^e} (\xi_i^{(m)}) = \xi_i^{(m-1)} d^{B^e} (\xi_i) \in J ^{(m-1)}J \subseteq J^{(m)}$.
Now assume that $\ell \geq 2$.
If $m_1+ \cdots + m_n \geq \ell$, then we have
$$
d^{B^e}( \xi_1^{(m_1)}\xi_2^{(m_2)}\cdots \xi_n^{(m_n)})
= \sum _{i=1}^n \pm\ d^{B^e} (\xi _i^{(m_i)}) \left(\xi_1^{(m_1)} \cdots \xi_{i-1}^{(m_{i-1})}\xi_{i+1}^{(m_{i+1})}\cdots \xi_n^{(m_n)}\right)
$$
which is an element in $\sum _{i=1}^n J^{(m_i)} J^{(\ell -m_i)}\subseteq J^{(\ell)}$. Therefore, $d^{B^e} (J ^{(\ell)}) \subseteq J ^{(\ell)}$.
The assertion that $J ^{(\ell)}/J^{(\ell +1)}$ is a DG $B$-module follows from the facts that the underlying graded $B^e$-module $J ^{(\ell)}/J^{(\ell +1)}$ is annihilated by $J$ and $B^e/J \cong B$ as graded algebras.
\end{proof}
\begin{thm}\label{Jell/Jell+1}
Let $B=A \langle X_1,\ldots,X_n \rangle$ with $n\leq \infty$, where $A$ is a divided power DG $R$-algebra. For every $\ell \geq 0$, the DG $B$-module $J^{(\ell)}/J^{(\ell +1)}$ is semifree with the semifree basis $\mathrm{Mon}_{\ell}(\Omega)$. In case that $n<\infty$, this is a finite semifree basis.
\end{thm}
\begin{proof}
Recall from~\ref{para20201114e} that $B^o$ is a DG $R$-subalgebra of $B^e$. By definition of $J^{(\ell)}$ from~\ref{para20201116a},
the underlying graded $B^o$-module $J^{(\ell)}/J^{(\ell +1)}$ is free with the basis $\mathrm{Mon}_{\ell}(\Omega)$.
Note that the composition of the maps
$B^o \to B^e \xrightarrow{\pi_B} B$ defined by $b^o\mapsto b^o\otimes 1\mapsto b$ is an isomorphism, and that the $B$-module structure on $J^{(\ell)}/J^{(\ell +1)}$ from Lemma~\ref{lem20201116b} coincides with its $B^o$-module structure.
Thus, $J^{(\ell)}/J^{(\ell +1)}$ is free as an underlying graded $B$-module. Therefore, $J^{(\ell)}/J^{(\ell +1)}$ is a semifree DG $B$-module with semifree basis $\mathrm{Mon}_{\ell}(\Omega)$.
\end{proof}
\begin{para}\label{para20210105c}
Let $B=A[X_1,\ldots,X_n]$ with $n\leq \infty$ be a polynomial extension of the DG $R$-algebra $A$ with variables $X_1,\ldots,X_n$ of positive degrees. For each $1\leq i\leq n$, consider the diagonal $\xi_i$ of the variable $X_i$ defined in~\eqref{xi}. In this case we have
\begin{equation}\label{xim}
\xi_i^m=\sum _{j=0}^m (-1)^{m-j} \binom{m}{j} \left(\left(X_i^{j}\right)^o \otimes X_i^{m-j}\right).
\end{equation}
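For instance, the first two cases of~\eqref{xim} read
$$
\xi_i=X_i^o\otimes 1-1^o\otimes X_i
\qquad\text{and}\qquad
\xi_i^{2}=\left(X_i^{2}\right)^o\otimes 1-2\left(X_i^o\otimes X_i\right)+1^o\otimes X_i^{2}.
$$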
Hence, similar to~\ref{para20201115a}, we can consider the set $\{ \xi _i ^{m} \mid 1 \leq i \leq n, \ m\in \mathbb{N}\}\subseteq J$, which we again denote by $\Omega$ in this case.
Replacing divided powers $X_i^{(m)}$ and $\xi_i^{(m)}$ by ordinary powers $X_i^{m}$ and $\xi_i^m$, we can show that Lemmas~\ref{lem20201116a} and~\ref{basis} hold in this case as well. Hence, similar to Theorem~\ref{free extension}, we have $B^e = B^o [\xi _1, \ldots , \xi _n]$. Note that in this case for an integer $\ell\geq 0$ we have $J^{\ell}=\mathrm{Mon}_{\geq \ell}(\Omega)B^e$ and $J^{\ell}/J^{\ell+1}$ is a semifree DG $B$-module with the semifree basis $\mathrm{Mon}_{\ell}(\Omega)$.
\end{para}
We can now prove the following which is a key to the proof of Main Theorem.
\begin{cor}\label{cor20210105a}
Let $n\leq \infty$. We consider the following two cases:
\begin{enumerate}[\rm(a)]
\item
$B=A[X_1,\ldots,X_n]$; or
\item
$A$ is a divided power DG $R$-algebra and $B=A \langle X_1,\ldots,X_n \rangle$.
\end{enumerate}
Then $B$ is DG quasi-smooth over $A$. If $n < \infty$, then $B$ is DG smooth over $A$.
\end{cor}
\begin{proof}
Note that by definition of $J^{(\ell)}$ and Lemma~\ref{basis}, the quotient $J/J^{(\ell)}$ is a semifree DG $B$-module with the semifree basis $\mathrm{Mon}(\Omega)\backslash \mathrm{Mon}_{\geq \ell}(\Omega)$. In case (a), set $J^{[\ell]} = J ^{(\ell)}$ and in case (b), set $J^{[\ell]} = J ^{\ell}$ for each positive integer $\ell$. The assertion follows from Lemma~\ref{lem20201116b}, Theorem~\ref{Jell/Jell+1}, and~\ref{para20210105c}.
\end{proof}
\begin{para}{\emph{Proof of Main Theorem.}}\label{para20210107s}
The assertion follows from Theorem~\ref{thm2021017a} and Corollary~\ref{cor20210105a}. \qed
\end{para}
The following result follows from Main Theorem(a) and~\ref{para20201126d}.
\begin{cor}\label{cor20201126a}
Assume that $A=R$, or $A$ is a DG $R$-algebra with $R$ containing the field of rational numbers, and let $B=A\langle X_1,\ldots,X_n\rangle$. If $N$ is a bounded below semifree DG $B$-module such that $\operatorname{Ext} _B ^i (N, N)=0$ for all $i\geq 1$, then $N$ is na\"ively liftable to $A$. Moreover, $N$ is a direct summand of a DG $B$-module that is liftable to $A$.
\end{cor}
\section{Auslander-Reiten Conjecture and na\"ive lifting property}\label{sec20201126n}
Our study in this paper is motivated by the following long-standing conjecture posed by Auslander and Reiten, which has been studied in numerous works; see, for instance,~\cite{AY, auslander:lawlom, avramov:svcci, avramov:edcrcvct, avramov:phcnr, huneke:voeatoscmlr, huneke:vtci, jorgensen:fpdve, nasseh:vetfp, nasseh:lrqdmi, nasseh:oeire, MR1974627, sega:stfcar}.
\begin{ARC}[\protect{\cite[p.\ 70]{AR}}]
\emph{Let $(S,\ideal{n})$ be a local ring and $M$ be a finitely generated $S$-module. If $\operatorname{Ext}^i_S(M\oplus S,M\oplus S)=0$ for all $i>0$, then $M$ is a free $S$-module.}
\end{ARC}
Our Main Theorem in this paper considers na\"ive liftability of DG modules along \emph{finite} free extensions of DG algebras. However, in dealing with the Auslander-Reiten Conjecture, we need to work with \emph{infinite} free extensions of DG algebras. So, we pose the following conjecture for which we do not have a proof yet.
\begin{NLC}\label{conj20210108b}
\emph{Assume that $A$ is a divided power DG $R$-algebra, and let $B=A\langle X_i\mid i\in \mathbb{N}\rangle$. If $N$ is a bounded below semifree DG $B$-module such that $\operatorname{Ext}^{i}_{B}(N\oplus B,N\oplus B)=0$ for all $i\geq 1$, then $N$ is na\"ively liftable to $A$.}
\end{NLC}
Our next result explains the relation between these conjectures.
\begin{thm}\label{thm20210108z}
If Na\"ive Lifting Conjecture holds, then the Auslander-Reiten Conjecture holds.
\end{thm}
\begin{proof}
Let $(S,\ideal{n})$ be a local ring and $M$ be a finitely generated $S$-module with $\operatorname{Ext}^i_S(M\oplus S,M\oplus S)=0$ for all $i>0$. Without loss of generality we can assume that $S$ is complete in its $\ideal{n}$-adic topology. Consider the minimal Cohen presentation $S\cong R/I$ of $S$, where $R$ is a regular local ring and $I$ is an ideal of $R$. By a construction of Tate~\cite{Tate}, there is a DG $R$-algebra
$B=R\langle X_i\mid i\in \mathbb{N}\rangle$
that resolves $S$ as an $R$-module, that is, $S\simeq B$.
The $S$-module $M$ is regarded as a DG $B$-module via the natural augmentation $B \to S$.
This homomorphism of DG $R$-algebras induces a functor $\mathfrak{F}\colon \mathcal{D}(S) \to \mathcal{D}(B)$ between the derived categories.
Since $S\simeq B$, by Keller's Rickard Theorem~\cite{Keller}, the functor $\mathfrak{F}$ yields a triangle equivalence and its quasi-inverse is given by $-\otimes^{\mathbf{L}}_BS$.
Let $N\xrightarrow{\simeq} M$ be a semifree resolution of the DG $B$-module $M$; see~\cite{avramov:ifr} for more information.
Then, as an underlying graded free $B$-module, $N$ is non-negatively graded and $\operatorname{H}(N)\cong \operatorname{H}(M)= M$, which is bounded and finitely generated over $\operatorname{H}_0(B)\cong S$.
Note that $M$ corresponds to $N$ and $S$ corresponds to $B$ under the functor $\mathfrak{F}$.
Since $\mathfrak{F}$ is a triangle equivalence, we conclude that
$\operatorname{Ext} _B^i (N\oplus B, N\oplus B)=0$ for all $i\geq 1$.
By our assumption, $N$ is na\"ively liftable to $R$. In particular, by Corollary~\ref{naive definition}, $N$ is a direct summand of $N|_R\otimes_R B$. Using the triangle equivalence $\mathfrak{F}$, we see that $M$ is a direct summand of $M \otimes^{\mathbf{L}} _R S$ in $\mathcal{D} (S)$, which is a bounded free complex over $S$ since $R$ is regular. Hence, $\pd_S(M)<\infty$. It then follows from \cite[Theorem 2.3]{CH} that $M$ is free over $S$.
\end{proof}
\begin{para}\label{para20210108d}
According to the proof of Theorem~\ref{thm20210108z}, we do not need to prove the Na\"ive Lifting Conjecture in its full generality for the Auslander-Reiten Conjecture; only proving it for the case where $A=R$ is a regular local ring would suffice for this purpose. Note that, in contrast to the finite free extension case, the assumption ``$\operatorname{Ext}^{i}_{B}(N,N)=0$ for all $i\geq 1$'' is not enough for the Na\"ive Lifting Conjecture to be true in general. The reason is that there exist non-free finitely generated modules $M$ over a general local ring $S$ satisfying $\operatorname{Ext}^{i}_{S}(M,M)=0$ for all $i\geq 1$; see, for instance, \cite{JLS}.
\end{para}
\begin{para}\label{para20210108b}
In the proof of Theorem~\ref{thm20210108z}, if $S$ is resolved as an $R$-module by a finite free extension $B=R\langle X_1,\ldots,X_n\rangle$, then $S$ is known to be a complete intersection ring. Hence, in this case, our Main Theorem and Theorem~\ref{thm20210108z} just provide another proof for the well-known fact that complete intersection rings satisfy the Auslander-Reiten Conjecture; see, for instance, \cite{AY,avramov:svcci,jorgensen:fpdve}.
\end{para}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section{Introduction}
Seismic full waveform inversion (FWI) produces high resolution images of the subsurface directly from seismic waveforms \cite{tarantola1984inversion}. FWI is traditionally solved using optimization, by minimizing the difference between predicted and observed seismograms. Such methods often require a good starting model because of the multimodality of the misfit function caused by the significant nonlinearity of the problem. They also cannot provide accurate estimates of uncertainties, which are required to better understand and interpret the resulting images.
Monte Carlo sampling methods provide a general way to solve nonlinear inverse problems and quantify uncertainties, and have been applied to solve FWI problems \cite{ray2016frequency, zhao2019gradient, gebraad2019bayesian, guo2020bayesian}. However, Monte Carlo methods are usually computationally expensive and all Markov chain Monte Carlo-based methods are difficult to parallelise fully.
Variational inference provides an efficient, fully parallelisable alternative methodology. This is a class of methods that optimize an approximation to a probability distribution describing post-inversion parameter uncertainties \cite{blei2017variational}. The method has been applied to petrophysical inversion \cite{nawaz2018variational, nawaz2019rapid, nawaz2020variational}, travel time tomography \cite{zhang2020seismic}, and more recently to FWI \cite{zhang2020variational}. In the latter study, strong prior information is imposed on the velocity structure to limit the space of possible models. Unfortunately, such strong information is almost never available in practice. In addition, the method has only been applied to wavefield transmission problems in which seismic data are recorded on a receiver array that lies above the structure to be imaged given known, double-couple (earthquake-like) sources located underneath the same structure. In practice, knowledge of such sources is almost never definitive, and usually depends circularly on the unknown structure itself. In this study, we therefore apply variational inference to solve FWI problems with more practically realistic prior probabilities, and using seismic reflection data acquired from known near-surface sources.
In the next section we briefly summarise the concept of variational inference, specifically Stein variational gradient descent (SVGD). In section 3 we demonstrate the method by solving an acoustic FWI problem using the Marmousi model with practical prior information. To further explore the method we perform multiple inversions using data from different frequency ranges, and demonstrate that the method can be used with practical prior information to produce high resolution images and uncertainties.
\section{Methods}
\subsection{Stein variational gradient descent (SVGD)}
Bayesian inference solves inverse problems by finding the probability distribution function (pdf) of model $\mathbf{m}$ given prior information and observed data $\mathbf{d}_{obs}$. This is called a \textit{posterior} pdf written $p(\mathbf{m}|\mathbf{d}_{obs})$. By Bayes' theorem,
\begin{equation}
p(\mathbf{m}|\mathbf{d}_{obs}) = \frac{p(\mathbf{d}_{obs}|\mathbf{m})p(\mathbf{m})}{p(\mathbf{d}_{obs})}
\label{eq:Bayes}
\end{equation}
where $p(\mathbf{m})$ is the \textit{prior} pdf which characterizes the probability distribution of model $\mathbf{m}$ prior to the inversion, $p(\mathbf{d}_{obs}|\mathbf{m})$ is the \textit{likelihood} which represents the probability of observing data $\mathbf{d}_{obs}$ given model $\mathbf{m}$, and $p(\mathbf{d}_{obs})$ is a normalization factor called the \textit{evidence}.
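As a minimal numerical illustration of equation~\ref{eq:Bayes} (a hypothetical one-parameter sketch, not part of the FWI workflow; the prior interval, observed value and noise level are assumed choices), the posterior pdf can be tabulated on a grid:

```python
import numpy as np

# Hypothetical 1-D problem: model m, data d = m + Gaussian noise.
m_grid = np.linspace(-3.0, 3.0, 601)
dm = m_grid[1] - m_grid[0]

prior = np.where(np.abs(m_grid) <= 2.0, 0.25, 0.0)  # Uniform(-2, 2) pdf p(m)
d_obs, sigma = 0.7, 0.5                             # assumed datum and noise level
likelihood = np.exp(-0.5 * ((d_obs - m_grid) / sigma) ** 2)  # p(d_obs | m)

unnormalized = likelihood * prior                   # numerator of Bayes' theorem
evidence = unnormalized.sum() * dm                  # p(d_obs), by quadrature
posterior = unnormalized / evidence                 # p(m | d_obs)
```

Since the evidence only rescales the product of likelihood and prior, sampling and variational methods can work with the unnormalized posterior, which is what makes Bayes' theorem usable when $p(\mathbf{d}_{obs})$ is intractable.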
Variational inference solves Bayesian inference problems using optimization. The method seeks an optimal approximation to the posterior pdf within a predefined family of pdfs, which is achieved by minimizing the Kullback-Leibler (KL) divergence \cite{kullback1951information} between the approximating pdf and the posterior pdf. Variational inference has been shown to be an efficient alternative to Monte Carlo sampling methods for a range of geophysical applications \cite{nawaz2018variational, zhang2020seismic, zhang2020variational}.
Stein variational gradient descent (SVGD) is one such algorithm, which iteratively updates a set of models, called particles, $\{\mathbf{m}^{i}\}$, generated from an initial distribution $q(\mathbf{m})$ using a smooth transform:
\begin{equation}
T(\mathbf{m}^{i}) = \mathbf{m}^{i} + \epsilon \boldsymbol{\phi} (\mathbf{m}^{i})
\label{eq:transform}
\end{equation}
where $\mathbf{m}^{i}$ is the $i^{th}$ particle, $\boldsymbol{\phi}(\mathbf{m}^{i})$ is a smooth vector function representing the perturbation direction, and $\epsilon$ is the magnitude of the perturbation. At each iteration, the optimal $\boldsymbol{\phi}$, which gives the steepest descent direction of the KL divergence, is found to be:
\begin{equation}
\boldsymbol{\phi} ^{*} (\mathbf{m}) \propto \mathrm{E}_{\{\mathbf{m'} \sim q\}} [\mathcal{A}_{p} k(\mathbf{m'},\mathbf{m})]
\label{eq:phi_qp}
\end{equation}
where $k(\mathbf{m'},\mathbf{m})$ is a kernel function, and $\mathcal{A}_{p}$ is the Stein operator such that for a given smooth function $k(\mathbf{m})$, $\mathcal{A}_{p} k(\mathbf{m}) = \nabla_{\mathbf{m}} \log p(\mathbf{m}) k(\mathbf{m})^{T} + \nabla_{ \mathbf{m} } k( \mathbf{m} )$ \cite{liu2016stein}. The expectation $\mathrm{E}_{\{\mathbf{m'} \sim q\}}$ is calculated using the set of particles $\{\mathbf{m}^{i}\}$, and then $\boldsymbol{\phi} ^{*} (\mathbf{m})$ is used to update each particle using equation \ref{eq:transform}. This process is iterated to equilibrium, when the particles are optimally distributed according to the posterior pdf.
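A minimal sketch of one SVGD iteration (an illustrative implementation with a scalar RBF kernel and the common median-heuristic bandwidth, not the authors' code; the step size is an assumed value) is:

```python
import numpy as np

def svgd_step(particles, grad_logp, eps=0.1):
    """One SVGD update m_i <- m_i + eps * phi*(m_i) with a scalar RBF kernel.

    particles: (n, d) array of models; grad_logp: (n, d) array of
    log-posterior gradients evaluated at the particles.
    """
    n = particles.shape[0]
    diffs = particles[:, None, :] - particles[None, :, :]  # m_j - m_i, (n, n, d)
    sq_dists = np.sum(diffs ** 2, axis=-1)                 # (n, n)
    h = np.median(sq_dists) / max(np.log(n), 1.0) + 1e-12  # median heuristic
    K = np.exp(-sq_dists / (2.0 * h))                      # k(m_j, m_i)
    # The kernel-smoothed gradient drives particles towards high posterior
    # probability; the kernel-gradient term repels particles from each
    # other, maintaining the spread that represents uncertainty.
    drive = K @ grad_logp
    repulse = -np.einsum('ji,jid->id', K, diffs) / h
    return particles + eps * (drive + repulse) / n
```

Each particle's posterior gradient can be evaluated independently, which is the source of the full parallelisability mentioned earlier.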
In SVGD the choice of kernels can affect the efficiency of the method. In this study we apply a matrix-valued kernel instead of a commonly used scalar kernel to improve efficiency:
\begin{equation}
\mathbf{k}(\mathbf{m'},\mathbf{m}) = \mathbf{Q}^{-1}\exp(-\frac{1}{2h}||\mathbf{m}-\mathbf{m'}||^{2}_{\mathbf{Q}})
\end{equation}
where $\mathbf{Q}$ is a positive definite matrix, $||\mathbf{m}-\mathbf{m'}||^{2}_{\mathbf{Q}}=(\mathbf{m}-\mathbf{m'})^{T}\mathbf{Q}(\mathbf{m}-\mathbf{m'})$ and $h$ is a scaling parameter. \cite{wang2019stein} showed that by setting $\mathbf{Q}$ to be the Hessian matrix, the method converges faster than with a scalar kernel. However, the Hessian matrix is usually expensive to compute. An alternative might be to use the covariance matrix calculated from the particles, but the full covariance matrix may require a large amount of memory and is difficult to estimate from a small number of samples \cite{ledoit2004well}. We therefore use a diagonal covariance matrix: $\mathbf{Q}^{-1} = \mathrm{diag}(\mathrm{var}(\mathbf{m}))$, where $\mathrm{var}(\mathbf{m})$ is the variance estimated from the particles. For those parameters with higher variance, this choice applies higher weights to the posterior gradients to induce larger perturbations, and also enables interactions with more distant particles.
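A sketch of the corresponding update with this diagonal matrix-valued kernel (again an illustrative implementation rather than the authors' code; the bandwidth $h$ and step size are assumed values) is:

```python
import numpy as np

def matrix_svgd_step(particles, grad_logp, eps=0.1, h=1.0):
    """SVGD update with k(m', m) = Q^{-1} exp(-||m - m'||_Q^2 / (2h)),
    where Q^{-1} = diag(var(m)) is estimated from the current particles."""
    n = particles.shape[0]
    q_inv = particles.var(axis=0) + 1e-12                  # diagonal of Q^{-1}
    q = 1.0 / q_inv                                        # diagonal of Q
    diffs = particles[:, None, :] - particles[None, :, :]  # m_j - m_i, (n, n, d)
    sq_q = np.sum(q * diffs ** 2, axis=-1)                 # ||m - m'||_Q^2
    g = np.exp(-sq_q / (2.0 * h))                          # scalar kernel part
    # Q^{-1} re-weights the smoothed gradients: parameters with larger
    # variance receive larger perturbations, as described in the text.
    drive = (g @ grad_logp) * q_inv
    # The Q and Q^{-1} factors cancel in the kernel-gradient (repulsion) term.
    repulse = -np.einsum('ji,jid->id', g, diffs) / h
    return particles + eps * (drive + repulse) / n
```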
\subsection{Variational full-waveform inversion}
We apply SVGD to solve an acoustic FWI problem. The wave equation is solved using a time-domain finite difference method. Gradients of the likelihood function with respect to velocity are calculated using the adjoint method \cite{plessix2006review}. For the likelihood function, we assume Gaussian data errors with a diagonal covariance matrix:
\begin{equation}
p(\mathbf{d}_{obs}|\mathbf{m}) \propto \exp[-\frac{1}{2}\sum_{i}(\frac{d_{i}^{obs}-d_{i}(\mathbf{m})}{\sigma_{i}})^{2}]
\end{equation}
where $i$ is the index of time samples and $\sigma_{i}$ is the standard deviation of each data point.
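The gradient of this log-likelihood with respect to the predicted data has a simple closed form; it is the weighted residual that is injected as the source of the adjoint simulation to obtain velocity gradients (the wave and adjoint solvers themselves are not shown, and the helper names below are illustrative):

```python
import numpy as np

def log_likelihood(d_obs, d_pred, sigma):
    """log p(d_obs | m) up to an additive constant, for Gaussian noise
    with a diagonal data covariance matrix."""
    r = (d_obs - d_pred) / sigma
    return -0.5 * np.sum(r ** 2)

def adjoint_source(d_obs, d_pred, sigma):
    """Gradient of the log-likelihood with respect to the predicted data:
    the weighted data residual used as the adjoint-state source term."""
    return (d_obs - d_pred) / sigma ** 2
```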
\section{Results}
\begin{figure}
\includegraphics[width=1.\linewidth]{Figure_true_model.pdf}
\caption{\textbf{(a)} The true velocity model. Red stars denote locations of 10 sources. The 200 receivers are equally spaced at 0.36 km depth. \textbf{(b)} The prior distribution of seismic velocity, which is chosen to be a Uniform distribution over an interval of up to 2 km/s at each depth. A lower velocity bound of 1.5 km/s is imposed to ensure the velocity is higher than the acoustic velocity in water.}
\label{fig:true_model}
\end{figure}
We apply the above method to a 2D acoustic full-waveform inversion to recover part of the Marmousi model \cite{martin2006marmousi2} from waveform data (Figure \ref{fig:true_model}). The model is discretised in space using a regular 200 $\times$ 120 grid. Sources are located at 20 m depth in the water layer. A total of 200 equally spaced receivers are located at a depth of 360 meters across the horizontal extent of the model. We generated two waveform datasets using Ricker wavelets with dominant frequencies of 4 Hz and 10 Hz, respectively. Uncorrelated Gaussian noise with a standard deviation of 0.1 is added to the data.
\cite{zhang2020variational} and \cite{gebraad2019bayesian} imposed strong prior information (a Uniform distribution over an interval of 0.2 km/s) on the velocity to reduce the complexity of their (identical) inverse problems. Such strong prior information is almost never available in practice. In this study we use ten times weaker prior information: a Uniform distribution over an interval of 2 km/s at each depth (Figure \ref{fig:true_model}b). We also impose an extra lower velocity bound of 1.5 km/s to ensure the rock velocity is higher than the acoustic velocity in water. Velocity in the water layer is fixed to be 1.5 km/s in the inversion. This prior information mimics a practical choice which can be applied in real problems.
We perform two independent inversions using the two datasets, respectively. For each inversion we use 600 particles, which are initially generated from the prior distribution and updated using equation \ref{eq:transform} for 600 iterations. Figure \ref{fig:meanstd}a shows the mean model obtained using the low frequency data. In the shallower part ($<$ 2 km) the mean model shows similar features to the true model but with slightly lower resolution, which probably reflects the resolution limit imposed by the frequency range. In comparison, the mean obtained using high frequency data shows higher resolution (Figure \ref{fig:meanstd}c) and is more similar to the true model. In the deeper part ($>$ 2 km) both mean models show differences from the true model: the mean obtained using low frequency data only shows large scale structure, whereas that obtained using high frequency data shows higher resolution details which are different from the true model. This may be because of poor illumination of the deeper part, which causes complex posterior pdfs when using high frequency data that cannot be represented properly by a small number of particles. However, we also note that the mean model need not reflect the true model in nonlinear problems. Both standard deviation models show features that are related to the mean model. For example, in the shallow part ($<$ 1 km) the standard deviation is lower at locations of lower velocity anomalies, and in the deeper part lower standard deviations are associated with higher velocity anomalies. This phenomenon has also been found by previous studies \cite{gebraad2019bayesian,zhang2020variational}, and probably reflects the fact that waves spend comparatively longer in lower velocity areas, resulting in greater sensitivity to those velocity parameters.
\begin{figure}
\includegraphics[width=1.0\linewidth]{Figure_mean_stdev.pdf}
\caption{The \textbf{(a), (c)} and \textbf{(e)} mean and \textbf{(b), (d)} and \textbf{(f)} standard deviation models obtained respectively using low frequency data, high frequency data, and using high frequency data but starting from the results of low frequency data. Black pluses denote locations referred to in the main text.}
\label{fig:meanstd}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\linewidth]{Figure_marginals.pdf}
\caption{The marginal distributions at horizontal location 2 km and depths of 0.6 km, 1.2 km, 2 km and 2.4 km. The \textbf{top}, \textbf{middle} and \textbf{bottom} rows show marginal distributions obtained using low frequency data, high frequency data only, and using high frequency data but starting from the results of low frequency data, respectively. Dashed red lines show true values.}
\label{fig:marginals}
\end{figure}
To improve the results in the deeper part, we conducted another inversion by using high frequency data but starting from the particles generated using the low frequency data, and running the inversion for 300 iterations. By doing this, the mean model shows features more similar to the true model in the deeper part (Figure \ref{fig:meanstd}e). The standard deviation model (Figure \ref{fig:meanstd}f) also shows smoother structure than the previous results.
To further understand the results, we show marginal velocity distributions at four locations (black pluses in Figure \ref{fig:meanstd}): points (2.0, 0.6) km, (2.0, 1.2) km, (2.0, 1.8) km and (2.0, 2.4) km. Overall, the marginal distributions obtained using high frequency data are tighter. At the shallower points (at depths of 0.6 km and 1.2 km) all the marginal distributions show high probabilities around the true velocity (red lines in Figure \ref{fig:marginals}). At the two deeper points the marginal distributions obtained using low frequency data show high uncertainties due to lower resolution. The marginal distributions obtained using only high frequency data show complex, multimodal distributions, and the high probability area deviates from the true value. In comparison, the marginal distributions obtained using the results of low frequency inversion as starting particles show high probabilities around the true value. This clearly indicates that the method can get stuck at local modes in regions of poor illumination when using only high frequency data -- for example, at depth 1.8 km only one incorrect mode is found (Figure \ref{fig:marginals}g). By starting from particles obtained using low frequency data, this issue can largely be resolved.
\section{Discussion}
Since SVGD is based on particles, the method can be computationally expensive. For example, the above inversion with 600 iterations took about 6703 CPU hours, which corresponds to 74 hours of run time using 90 Intel Xeon E5-2630 CPU cores. In practice, stochastic minibatch optimization \cite{robbins1951stochastic} can be used to improve the computational efficiency for larger data sets and 3D applications. Since the method does not require an accurate starting model, unlike linearised FWI, the results obtained using a small dataset could be used to provide a reliable starting model for linearised FWI of larger datasets to produce higher resolution models. This study used a diagonal matrix kernel. To improve the efficiency of the method, other full matrix kernels might be used, for example Hessian matrix kernels \cite{wang2019stein} or Stein variational Newton methods \cite{detommaso2018stein}.
\section{Conclusion}
In this study we presented the first application of variational full-waveform inversion (VFWI) to seismic reflection data. To explore the applicability of the method we imposed realistically weak prior information on seismic velocity: a Uniform prior pdf with a 2 km/s interval, and performed multiple inversions using data from different frequency ranges. The results showed that the method can produce high resolution mean and uncertainty models using only high frequency data. However, the method can still get stuck at local modes in areas of poor illumination. This can be resolved by using the results obtained from low frequency data to initiate high frequency inversions. We therefore conclude that VFWI may be a useful method to produce high resolution images and reliable uncertainties.
\section*{Acknowledgments}
The authors thank the Edinburgh Imaging Project sponsors (BP, Schlumberger and Total) for supporting this research. This work has made use of the resources provided by the Edinburgh Compute and Data Facility (ECDF) (http://www.ecdf.ed.ac.uk/).
\bibliographystyle{plainnat}
\section{Introduction}
The sheer volume of today's digital data has made {\it distributed
storage systems} $($DSS$)$ not only massive in scale but also
critical in importance. Every day, people knowingly or unknowingly
connect to various private and public distributed storage systems,
including large data centers (such as the Google data centers and
Amazon Clouds) and peer-to-peer storage systems (such as
OceanStore \cite{Rhea}, Total Recall \cite{Bhagwan}, and DHash++
\cite{Dabek}). In a distributed storage system, a data file is
stored at a distributed collection of storage devices/nodes in a
network. Since any storage device is individually
unreliable and subject to failure (i.e. erasure), redundancy must be introduced to provide the much-needed system-level protection against data loss due to device/node failure.
The simplest form of redundancy is {\it replication}. By storing $c$
identical copies of a file at $c$ distributed nodes, one copy per node, a $c$-replication system can guarantee the data availability as long as no more than $(c\!-\!1)$ nodes fail. Such systems are very easy to implement, but extremely inefficient in storage space utilization, incurring tremendous waste in devices and equipment, building space, and cost for powering and cooling. More sophisticated systems employing {\it erasure coding} \cite{Weather02} can expect to considerably improve the storage efficiency. Consider a file that is divided into $k$ equal-size fragments. A judiciously-designed $[n,k]$ erasure (systematic) code can be employed to encode the $k$ data fragments (termed {\it systematic symbols} in the coding jargon) into $n$ fragments (termed {\it coded symbols}) stored in $n$ different nodes. If the $[n,k,d]$ code reaches the Singleton bound such that the minimum Hamming distance satisfies $d=n-k+1$, then the code is {\it maximum distance separable} (MDS) and offers redundancy-reliability optimality. With an $[n,k]$ MDS erasure code, the original file can be recovered from any set of $k$ encoded fragments, regardless of whether they are systematic or parity. In other words, the system can tolerate up to $(n-k)$ concurrent device/node failures without jeopardizing the data availability.
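As a concrete illustration of the MDS recovery property just described, the following sketch implements a toy Reed-Solomon style $[n,k]$ code over the prime field GF(257). The field and the evaluation-point encoding are illustrative assumptions for the sketch, not a construction taken from this paper; the point is only that any $k$ of the $n$ encoded fragments suffice to recover the data.

```python
# Toy [n, k] MDS erasure code over GF(257): fragment j is the value at
# x = j of the degree-(k-1) polynomial whose coefficients are the data.
P = 257  # a prime, so every nonzero symbol has a multiplicative inverse

def encode(data, n):
    """Encode k data symbols into n fragments by polynomial evaluation."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(data)) % P
            for x in range(1, n + 1)]

def _solve_mod(A, b, p):
    """Solve the k x k system A x = b over GF(p) by Gaussian elimination."""
    k = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(k):
        piv = next(r for r in range(col, k) if M[r][col] % p)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)       # inverse via Fermat
        M[col] = [v * inv % p for v in M[col]]
        for r in range(k):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(vr - f * vc) % p for vr, vc in zip(M[r], M[col])]
    return [M[r][k] for r in range(k)]

def recover(fragments, k):
    """Recover the k data symbols from ANY k surviving (point, value)
    fragments: the k x k Vandermonde system is always invertible."""
    xs, ys = zip(*fragments[:k])
    A = [[pow(x, i, P) for i in range(k)] for x in xs]
    return _solve_mod(A, list(ys), P)

data = [42, 7, 99, 3, 250]               # k = 5 data fragments
frags = encode(data, 9)                  # n = 9 encoded fragments
# any 4 of the 9 nodes may fail; the surviving 5 recover the file
survivors = [(x, frags[x - 1]) for x in (2, 4, 5, 8, 9)]
assert recover(survivors, 5) == data
```

The repair-bandwidth drawback discussed next is visible here: rebuilding even a single lost fragment requires reading $k$ survivors.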
Despite the huge potential of MDS erasure codes, however, practical application of these codes in massive storage networks has been difficult. Not only are simple (i.e., computationally inexpensive) MDS codes very difficult to construct, but data repair would in general require the access of $k$ other encoded fragments \cite{Rodrigues05}, causing considerable input/output (I/O) bandwidth that would pose huge challenges to a typical storage network.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.73]{fig1}
\end{center}
\vspace{-5pt}
\caption{An example of how a locally repairable linear code is
used to construct a distributed storage system: a file $\mathcal
F$ is first split into five equal packets $\{x_1,\cdots,x_5\}$ and then
is encoded into $12$ packets, using a $(2,3)_a$ linear code. These
$12$ encoded packets are stored at $12$ nodes
$\{\text{v}_{1},\cdots,\text{v}_{12}\}$, which are divided into
three groups $\{\text{v}_1,\text{v}_2,\text{v}_3,\text{v}_4\}$,
$\{\text{v}_5,\text{v}_6,\text{v}_7,\text{v}_8\}$ and
$\{\text{v}_9,\text{v}_{10},\text{v}_{11},\text{v}_{12}\}$. Each group can perform local repair of up to
two node-failures. For example, if Node $\text{v}_9$ fails, it
can be repaired by any two packets among
$\text{v}_{10},\text{v}_{11}$ and $\text{v}_{12}$. Moreover, the
entire file $\mathcal F$ can be recovered by five packets from any five nodes
$\text{v}_{i_1}, \cdots,\text{v}_{i_5}$ which intersect each
group with at most two packets. For example, $\mathcal F$ can be
recovered from five packets stored at
$\text{v}_1,\text{v}_3,\text{v}_7,\text{v}_8$ and $\text{v}_{10}$.
}\label{fig-dss}
\vspace{-17pt}
\end{figure}
\vskip 10pt
Motivated by the desire to reduce repair cost in
the design of erasure codes for distributed storage systems, Gopalan
\emph{et al.}~\cite{Gopalan12} introduced the interesting notion
of {\it symbol locality} in linear codes. The $i$th coded symbol
of an $[n,k]$ linear code ${\mathcal C}$ is said to have locality
$r~(1\leq r\leq k)$ if it can be recovered by accessing at most
$r$ other symbols in $\mathcal C$. The concept was further
generalized to $(r,\delta)$ locality by Prakash \emph{et al.}
\cite{Prakash12}, to address the situation of multiple device
failures.
According to \cite{Prakash12}, the $i$th code symbol $c_i, 1\leq
i\leq n$, in an $[n,k]$ linear code $\mathcal C$ is said to have
locality $(r,\delta)$ if there exists an index set
$S_i\subseteq[n]$ containing $i$ such that $|S_i|-\delta+1\leq r$
and each symbol $c_j, j\in S_i$, can be reconstructed by any
$|S_i|-\delta+1$ symbols in $\{c_\ell;\ell\in S_i\text{~and~}
\ell\neq j\}$, where $\delta\geq 2$ is an integer. Thus, when
$\delta = 2$, the notion of locality in \cite{Prakash12} reduces
to the notion of locality in \cite{Gopalan12}. Two cases of
$(r,\delta)$ codes are introduced in the literature: An
$(r,\delta)_i$ code is a systematic linear code whose {\it
information symbols} all have locality $(r,\delta)$; and an
$(r,\delta)_a$ code is a linear code all of whose {\it symbols}
have locality $(r,\delta)$. Hence, an $(r,\delta)_a$ code is also
referred to as having \emph{all-symbol locality} $(r,\delta)$, and
an $(r,\delta)_i$ code is also referred to as having
\emph{information locality} $(r,\delta)$. A symbol with
$(r,\delta)$ locality can be deduced by reading at most $r$ other
unerased symbols, provided that at most $(\delta-1)$ symbols are
erased.
Clearly, codes with a low symbol locality, such as $r<k$, impose a
low I/O bandwidth and repair cost in a distributed storage system.
In a DSS system, one can use ``group'' to describe storage nodes
situated in the same physical location which enjoy a higher
communication bandwidth and a shorter communication distance than
storage nodes belonging to different groups. In the case of node
failure, a \emph{locally repairable code} makes it possible to
efficiently recover data stored in the failed node by downloading
information from nodes in the same group (or in a minimal number
of other groups). Fig. \ref{fig-dss} provides a simple example of
how an $(r,\delta)_a$ code is used to construct a distributed
storage system. In this example, $\mathcal C$ is a $(2,3)_a$
linear code of length $12$ and dimension $5$. Note that a failed
node can be reconstructed by accessing only two other existing
nodes, while it takes five existing nodes to repair a failed node
if a $[12,5]$ MDS code is used.
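The local-repair step in this example can be sketched in code. Below, one group of a $(2,3)_a$ code is modeled as a $[4,2,3]$ MDS code over GF(5); the field and generator matrix are illustrative assumptions for the sketch, not taken from the paper. A failed node is rebuilt by reading only $r=2$ surviving nodes in its group.

```python
# One group of a (2,3)_a code: a [4,2,3] MDS code over GF(5).
P = 5                       # illustrative field GF(5)
G = [[1, 0, 1, 1],
     [0, 1, 1, 2]]          # assumed [4,2,3] MDS generator; columns = 4 nodes

def store(message):
    """Encode 2 message symbols into the 4 node symbols of one group."""
    return [sum(m * G[i][j] for i, m in enumerate(message)) % P
            for j in range(4)]

def repair(surviving, failed):
    """Rebuild node `failed` from ANY two surviving (node, symbol)
    pairs: solve the 2x2 system for the message, then re-encode."""
    (a, ya), (b, yb) = surviving[:2]
    det = (G[0][a] * G[1][b] - G[1][a] * G[0][b]) % P
    inv = pow(det, P - 2, P)        # inverse via Fermat's little theorem
    x0 = inv * (G[1][b] * ya - G[1][a] * yb) % P
    x1 = inv * (G[0][a] * yb - G[0][b] * ya) % P
    return (x0 * G[0][failed] + x1 * G[1][failed]) % P

nodes = store([3, 4])
# node 2 fails; reading r = 2 surviving nodes of the group repairs it
assert repair([(0, nodes[0]), (3, nodes[3])], 2) == nodes[2]
```

With a $[12,5]$ MDS code instead, the same repair would need symbols from five nodes, which is the contrast the text draws.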
\renewcommand\figurename{Fig}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[width=18.2cm]{fig3}
\end{center}
\caption{Summary of existence of optimal $(r,\delta)_a$ linear
codes.}\label{sumy}
\end{figure*}
\subsection{Related Work}
Locality was identified as a repair cost metric for distributed
storage systems independently by Oggier \emph{et al.}
\cite{Oggier11}, Gopalan \emph{et al.} \cite{Gopalan12} and
PaPailiopoulos \emph{et al.} \cite{Papail121} using different
terms. In \cite{Gopalan12}, Gopalan \emph{et al.} introduced the
concept of symbol locality of linear codes and established a tight
bound for the redundancy in terms of the message length, the
distance, and the locality of information coordinates. A
generalized concept, i.e., $(r,\delta)$ locality, was addressed by
Prakash \emph{et al.} \cite{Prakash12}. It was proved in
\cite{Prakash12} that the minimum distance $d$ of an
$(r,\delta)_i$ linear code $\mathcal C$ is upper bounded by
\begin{align}
d\leq n-k+1-\left(\left \lceil\frac{k}{r}\right \rceil-1\right )
(\delta-1)\label{eqn:1}
\end{align}
where $n$ and $k$ are the length and dimension of $\mathcal C$
respectively. It was also proved that a class of codes known as
pyramid codes \cite{Huang07} achieve this bound. Since an
$(r,\delta)_a$ code is also an $(r,\delta)_i$ code, (\ref{eqn:1})
also presents an upper bound for the minimum distance of
$(r,\delta)_a$ codes.
Locality of general codes (linear or nonlinear) and bounds on the
minimum distance for a given locality were presented in parallel
and subsequent works \cite{Papail122,Rawat12}. An $(r,\delta)_a$
code (systematic or not) is also termed a \emph{locally repairable
code (LRC)}, and $(r,\delta)_a$ codes that achieve the minimum
distance bound are called \emph{optimal}.
It was proved in \cite{Prakash12} that there exist optimal
locally repairable linear codes when $(r+\delta-1)|n$ and
$q>kn^k$. Under the condition that $(r+\delta-1)|n$, a
construction method of optimal locally repairable vector codes was
proposed in \cite{Rawat12}, where maximal rank distance (MRD)
codes were used along with MDS array codes. For the special case
of $\delta=2$, Tamo \emph{et al.} \cite{Tamo13} proposed an
explicit construction of optimal LRCs when $(r+1)\mid n$ or
$n~\text{mod}~(r+1)-1\geq k~\text{mod}~r>0$.\footnote{Note that
this condition is equivalent to the condition that $m\geq v+1$,
where $n=w(r+1)+m$ and $k=ur+v$ satisfying $0<m<r+1$ and $0<v<r$.}
Except for the special case that
$n~\text{mod}~(r+1)-1\geq k~\text{mod}~r>0$, no results are known
about whether there exist optimal $(r,\delta)_a$ codes when
$(r+\delta-1)\nmid n$.
Up to now, designing LRCs with optimal distance remains an intriguing open
problem for most coding
parameters $n, k, r$ and $\delta$.
Since large fields involve rather complicated and expensive computation,
a related open problem asks how to construct optimal LRCs
over relatively small fields.
\subsection{Main Results}
In this paper, we investigate the structure properties and the
construction of optimal $(r,\delta)_a$ linear codes of length $n$
and dimension $k$. A simple property of optimal $(r,\delta)_a$
linear codes is proved in Lemma \ref{low-bound}, which shows that
$\frac{n}{r+k-1}\geq\frac{k}{r}$ for any optimal $(r,\delta)_a$
linear code. Hence we impose this condition of
$\frac{n}{r+k-1}\geq\frac{k}{r}$ throughout our discussion of
optimal $(r,\delta)_a$ codes.
The main results of this
paper include:
(\romannumeral1) We prove a structure theorem for the optimal
$(r,\delta)_a$ linear codes for $r|k$. This structure theorem
indicates that optimal $(r,\delta)_a$ linear codes, a sub-class of
optimal $(r,\delta)_i$ linear codes, have a simpler structure than
optimal $(r,\delta)_i$ codes in general.
(\romannumeral2) We prove that there exist no
optimal $(r,\delta)_a$ linear codes for
\begin{align}
(r+\delta-1)\nmid n~\text{and}~r|k \label{eqn:2}
\end{align}
or
\begin{align}
m<v+\delta-1~\text{and}~u\geq 2(r-v)+1 \label{eqn:3}
\end{align}
where $n=w(r+\delta-1)+m$ and $k=ur+v$ such that $0<v<r$ and
$0<m<r+\delta-1$ (Theorems \ref{non-exst} and \ref{non-exst-1}).
(\romannumeral3) We propose a deterministic algorithm for
constructing optimal $(r,\delta)_a$ linear codes over any field of
size $q\geq\binom{n}{k-1}$ when
\begin{align}
(r+\delta-1)|n \label{eqn:4}
\end{align}
or
\begin{align}
m\geq v+\delta-1 \label{eqn:5}
\end{align}
where $n=w(r+\delta-1)+m$ and $k=ur+v$ such that $0<v<r$ and
$0<m<r+\delta-1$ (Theorem \ref{opt-ext-1} and \ref{opt-ext-2}).
(\romannumeral4) We propose another deterministic algorithm for
constructing optimal $(r,\delta)_a$ linear codes over any field of
size $q\geq\binom{n}{k-1}$ when
\begin{align}
w\geq r+\delta-1-m \text{\ and\ } \text{min}\{r-v,w\}\geq u
\label{eqn:6}
\end{align}
or
\begin{align}
w+1\geq 2(r+\delta-1-m)~\text{and}~\text{min}\{2(r-v),w\}\geq u
\label{eqn:7}
\end{align}
where $n=w(r+\delta-1)+m$ and $k=ur+v$ such that
$0<v<r$ and $0<m<r+\delta-1$ (Theorem \ref{opt-ext-3} and
\ref{opt-ext-4}).
A summary of our results is given in Fig. \ref{sumy}. Note that if
none of the conditions in (\ref{eqn:2})-(\ref{eqn:5}) holds, it
then follows that $$m<v+\delta-1~\text{and}~u\leq 2(r-v).$$ In
that case, if condition (\ref{eqn:6}) does not hold, we have
$w<r+\delta-1-m~\text{or}~r-v<u$; and if condition (\ref{eqn:7})
does not hold, we have $w+1<2(r+\delta-1-m)$, i.e.,
$w<2(r+\delta-1-m)-1$. Hence, if neither condition (\ref{eqn:6})
nor condition (\ref{eqn:7}) holds (in addition to
(\ref{eqn:2})-(\ref{eqn:5})), then one of the following two
conditions must be satisfied:
\begin{align}
w<r+\delta-1-m, \label{eqn:8}
\end{align} or
\begin{align}
r+\delta-1-m\leq w<2(r+\delta-1-m)-1\ \text{and}\ r-v<u.\label{eqn:9}
\end{align}
In other words, if none of the conditions
(\ref{eqn:2})-(\ref{eqn:7}) holds, then either (\ref{eqn:8}) or
(\ref{eqn:9}) will hold. From our non-existence proofs and
constructive results, the existence of optimal $(r,\delta)_a$
linear codes remains unknown only for the limited parameter range
described by (\ref{eqn:8}) and (\ref{eqn:9}).
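The case analysis above can be restated as a small classifier. The sketch below maps parameters $(n,k,r,\delta)$ with $r<k$ to ``exists'' (conditions (\ref{eqn:4})--(\ref{eqn:7}), over a sufficiently large field $q\geq\binom{n}{k-1}$), ``does not exist'' (the necessary condition of Lemma \ref{low-bound} and conditions (\ref{eqn:2})--(\ref{eqn:3})), or ``open'' (conditions (\ref{eqn:8})--(\ref{eqn:9})); the function itself is an illustrative restatement, not part of the paper.

```python
def classify(n, k, r, delta):
    """Known status of optimal (r, delta)_a codes, assuming r < k."""
    if n * r < k * (r + delta - 1):
        return "does not exist"      # necessary condition n/(r+d-1) >= k/r fails
    w, m = divmod(n, r + delta - 1)
    u, v = divmod(k, r)
    if m == 0:
        return "exists"              # condition (4): (r+delta-1) | n
    if v == 0:
        return "does not exist"      # condition (2): (r+delta-1) ∤ n and r | k
    if m < v + delta - 1 and u >= 2 * (r - v) + 1:
        return "does not exist"      # condition (3)
    if m >= v + delta - 1:
        return "exists"              # condition (5)
    if w >= r + delta - 1 - m and min(r - v, w) >= u:
        return "exists"              # condition (6)
    if w + 1 >= 2 * (r + delta - 1 - m) and min(2 * (r - v), w) >= u:
        return "exists"              # condition (7)
    return "open"                    # conditions (8)/(9)

assert classify(12, 5, 2, 3) == "exists"          # Fig. 1 parameters
assert classify(13, 4, 2, 3) == "does not exist"  # r | k but 4 does not divide 13
assert classify(13, 5, 2, 3) == "open"            # falls under condition (9)
```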
The remainder of the paper is organized as follows. In Section
\uppercase\expandafter{\romannumeral2}, we present the notions
used in the paper as well as some preliminary results about
$(r,\delta)_a$ linear codes.
In Section
\uppercase\expandafter{\romannumeral3}, we investigate the
structure of optimal $(r,\delta)_a$ linear codes when $r|k$
(should they exist). In Section
\uppercase\expandafter{\romannumeral4}, we consider the
non-existence conditions for optimal $(r,\delta)_a$ linear codes
under conditions (\ref{eqn:2}) and (\ref{eqn:3}). A construction
of optimal $(r,\delta)_a$ linear codes for conditions
(\ref{eqn:4}) and (\ref{eqn:5}) is presented in Section
\uppercase\expandafter{\romannumeral5}, and a construction of
optimal $(r,\delta)_a$ linear codes for conditions (\ref{eqn:6})
and (\ref{eqn:7}) is presented in Section
\uppercase\expandafter{\romannumeral6}. Finally, we conclude the
paper in Section \uppercase\expandafter{\romannumeral7}.
\section{Locality of Linear Codes}
For two positive integers $t_1$ and $t_2 ~(t_1\leq t_2)$, we
denote $[t_1,t_2]=\{t_1,t_1+1,\cdots,t_2\}$ and
$[t_2]=\{1,2,\cdots,t_2\}$. For any set $S$, the size
$($cardinality$)$ of $S$ is denoted by $|S|$. If $I$ is a subset
of $S$ and $|I|=r$, then we say that $I$ is an $r$-subset of $S$.
Let $\mathbb F_q^k$ be the $k$-dimensional vector space over the
$q$-ary field $\mathbb F_q$. For any subset $X\subseteq\mathbb
F_q^k$, we use $\langle X\rangle$ to denote the subspace of
$\mathbb F_q^k$ spanned by $X$.
In the sequel, whenever we speak of an $(r,\delta)_a$ or
$(r,\delta)_i$ code, we will by default assume it is an $[n,k,d]$
linear code (i.e., its length, dimension and minimum distance are
$n,k$ and $d$ respectively).
Suppose $\mathcal C$ is an $[n,k,d]$ linear code over $\mathbb
F_q$, and $G=(G_1,\cdots,G_n)$ is a generating matrix of $\mathcal
C$, where $G_i, i\in[n],$ is the $i$th column of $G$. We denote by
$\mathcal G=\{G_1,\cdots,G_n\}$ the collection of columns of $G$.
It is well known that the distance property is captured by the
following condition (e.g. \cite{Tsfasman}).
\vskip 10pt
\begin{lem}\label{fact}
An $[n,k]$ code $\mathcal C$ has minimum distance $d$ if and
only if $|S|\leq n-d$ for every $S\subseteq\mathcal G$ having
$\text{Rank}(S)\leq k-1$. Equivalently, $\text{Rank}(T)=k$ for
every $T\subseteq\mathcal G$ of size $n-d+1$.
\end{lem}
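Lemma \ref{fact} can be verified by brute force on a small code. The sketch below uses the binary $[7,4,3]$ Hamming code as an illustrative choice (not a code from this paper): every $(n-d+1)$-subset of generator columns has rank $k$, while some $(n-d)$-subset does not.

```python
from itertools import combinations, product

# generator matrix of the binary [7,4,3] Hamming code (rows = basis)
G = [(1, 0, 0, 0, 0, 1, 1),
     (0, 1, 0, 0, 1, 0, 1),
     (0, 0, 1, 0, 1, 1, 0),
     (0, 0, 0, 1, 1, 1, 1)]
n, k = 7, 4

def rank2(vecs):
    """Rank over GF(2) of vectors given as integer bitmasks."""
    basis = {}                        # leading-bit position -> vector
    for v in vecs:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

# minimum distance by exhaustive search over the 2^k - 1 nonzero codewords
def weight(msg):
    return sum(sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G))

d = min(weight(msg) for msg in product((0, 1), repeat=k) if any(msg))
assert d == 3

# the lemma: every (n - d + 1)-subset of columns has full rank k ...
cols = [sum(G[i][j] << i for i in range(k)) for j in range(n)]
assert all(rank2(T) == k for T in combinations(cols, n - d + 1))
# ... and, since d is exact, some (n - d)-subset has rank below k
assert any(rank2(T) < k for T in combinations(cols, n - d))
```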
\vskip 10pt
For any subset $S\subseteq[n]$, let $\mathcal C|_{S}$ denote the
punctured code of $\mathcal C$ associated with the coordinate set
$S$. That is, $\mathcal C|_{S}$ is obtained from $\mathcal C$ by
deleting all symbols $c_i, i\in[n]\backslash S$, in each codeword
$(c_1,\cdots,c_n)\in\mathcal C$.
\vskip 10pt
\begin{defn}[\cite{Prakash12}]\label{def-locality}
Suppose $1\leq r\leq k$ and $\delta\geq 2$. The $i$th code symbol
$c_i, 1\leq i\leq n$, in an $[n,k,d]$ linear code $\mathcal C$ is
said to have locality $(r,\delta)$ if there exists a subset
$S_i\subseteq[n]$ such that
\begin{itemize}
\item [(1)] $|S_i|\leq r+\delta-1$;
\item [(2)] The minimum distance of the punctured code
$\mathcal C|_{S_i}$ is at least $\delta$.
\end{itemize}
\end{defn}
\vskip 10pt
\begin{rem}\label{rem-loty}
Let $G=(G_1,\cdots,G_n)$ be a generating matrix of $\mathcal C$.
By Lemma \ref{fact}, it is easy to see that the second condition in
Definition \ref{def-locality} is equivalent to the following
condition
\begin{itemize}
\item [(2$'$)] $\text{Rank}(\{G_\ell; \ell\in I\})=
\text{Rank}(\mathcal G_i)$ for any subset $I\subseteq S_i$ of size
$|I|=|S_i|-\delta+1$, where $\mathcal G_i=\{G_\ell; \ell\in S_i\}$;
\end{itemize}
\end{rem}
\vskip 10pt
Moreover, by conditions (1) and (2$'$), we have
$$\text{Rank}(\mathcal G_i)=\text{Rank}(\{G_\ell; \ell\in
S_i\})\leq|S_i|-\delta+1\leq r.$$
That is, $\forall i'\in S_i$ and
$\forall I\subseteq S_i\backslash\{i'\}$ of size
$|I|=|S_i|-\delta+1$, $G_{i'}$ is an $\mathbb F_q$-linear
combination of $\{G_\ell;\ell\in I\}$. This means that the symbol
$c_{i'}$ can be reconstructed by the $|S_i|-\delta+1$ symbols in
$\{c_\ell;\ell\in I\}$.
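Condition (2$'$) is directly checkable on a generator matrix. The sketch below tests whether a candidate index set $S_i$ gives $(r,\delta)$ locality over GF(2); the $[6,4]$ generator matrix is an illustrative assumption, chosen so that each half of the code forms a $(2,2)$ repair set.

```python
from itertools import combinations

def rank2(vecs):
    """Rank over GF(2) of vectors given as integer bitmasks."""
    basis = {}                        # leading-bit position -> vector
    for v in vecs:
        while v:
            lead = v.bit_length() - 1
            if lead not in basis:
                basis[lead] = v
                break
            v ^= basis[lead]
    return len(basis)

def has_locality(cols, S, r, delta):
    """Check conditions (1) and (2') for the index set S: |S| is small
    enough, and every (|S| - delta + 1)-subset spans all of S."""
    if len(S) > r + delta - 1:
        return False
    full = rank2([cols[i] for i in S])
    return all(rank2([cols[i] for i in I]) == full
               for I in combinations(S, len(S) - delta + 1))

# columns of an assumed [6,4] generator matrix as 4-bit masks (bit i = row i);
# columns 0, 1, 4 are e1, e2, e1+e2, so {0, 1, 4} is a (2,2) repair set
cols = [0b0001, 0b0010, 0b0100, 0b1000, 0b0011, 0b1100]
assert has_locality(cols, (0, 1, 4), r=2, delta=2)
assert has_locality(cols, (2, 3, 5), r=2, delta=2)
# three independent columns fail: losing one of them is unrecoverable
assert not has_locality(cols, (0, 2, 3), r=2, delta=2)
```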
An $(r,\delta)_a$ code $\mathcal C$ is said to be
\emph{optimal} if the minimum distance $d$ of $\mathcal C$
achieves the bound in (\ref{eqn:1}).
The following remark follows naturally from Definition
\ref{def-locality} and Remark \ref{rem-loty}.
\vskip 10pt
\begin{rem}\label{rem-locality}
If $\mathcal C$ is an $(r,\delta)_a$ code and $G=(G_1,\cdots,G_n)$
is a generating matrix of $\mathcal C$, then we can always find a
collection $\mathcal S=\{S_1,\cdots, S_t\}$, where
$S_i\subseteq[n], i=1,\cdots,t$, such that
\begin{itemize}
\item [(1)] $|S_i|\leq r+\delta-1, i=1,\cdots,t$;
\item [(2)] $\text{Rank}(\{G_\ell; \ell\in I\})=
\text{Rank}(\mathcal G_i)\leq r,
\forall i\in[t]$ and $I\subseteq S_i$ of size $|I|=|S_i|-\delta+1$,
where $\mathcal G_i=\{G_\ell; \ell\in S_i\}$;
\item [(3)] $\cup_{i\in[t]}S_i=[n]$ and $\cup_{i\in[t]\backslash\{j\}}
S_i\neq[n],\forall j\in[t]$.
\end{itemize}
We call the set $\mathcal S=\{S_1,\cdots,S_t\}$ an
$(r,\delta)$-\emph{cover set} of $\mathcal C$.
\end{rem}
\vskip 10pt
The following lemma presents a simple property of $(r,\delta)_a$ codes.
\vskip 10pt
\begin{lem}\label{low-bound}
An $(r,\delta)_a$ code $\mathcal C$ satisfies
\begin{itemize}
\item [1)] The minimum distance $d\geq\delta$.
\item [2)] If $\mathcal C$ is an optimal $(r,\delta)_a$ code, then
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$.
\end{itemize}
\end{lem}
\begin{proof}
1) Let $\mathcal S=\{S_1,\cdots,S_t\}$ be an $(r,\delta)$-cover
set of $\mathcal C$. For any $0\neq(c_1,\cdots,c_n)\in\mathcal C$,
since $\cup_{i\in[t]}S_i=[n]$, there is an $i\in[t]$ such that the
punctured codeword $(c_j)_{j\in S_i}$ is nonzero in $\mathcal
C|_{S_i}$. By the second condition of Definition \ref{def-locality}, the Hamming
weight of $(c_j)_{j\in S_i}$ is at least $\delta$. Thus, the Hamming
weight of $(c_1,\cdots,c_n)$ is at least $\delta$. Since
$0\neq(c_1,\cdots,c_n)\in\mathcal C$ is arbitrary, the minimum
distance $d\geq\delta$.
2) Since $\mathcal C$ is an optimal $(r,\delta)_a$ code, from the minimum distance bound in (\ref{eqn:1}),
$$n=d+k-1+\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1).$$
From claim 1),
$d\geq\delta$; which leads to
$$n\geq\delta+k-1+\left(\left\lceil\frac{k}{r}\right\rceil-1\right)(\delta-1).$$ Hence,
\vspace{-0.05in}\begin{eqnarray*} nr&\geq&
r(\delta+k-1)+r(\lceil\frac{k}{r}\rceil-1)(\delta-1)\\
&\geq&r(\delta+k-1)+r(\frac{k}{r}-1)(\delta-1)\\
&=&k(r+\delta-1)
\end{eqnarray*}
which implies that $\frac{n}{r+\delta-1}\geq\frac{k}{r}$.
\end{proof}
\section{Structure of Optimal $(r,\delta)_a$ Code when $r|k$}
In this section, we prove a structure theorem for optimal
$(r,\delta)_a$ codes under the condition of $r|k$.
Throughout this section, we assume that $\mathcal C$ is an
$(r,\delta)_a$ code over the field $\mathbb F_q$, $\mathcal
S=\{S_1,\cdots,S_t\}$ is an $(r,\delta)$-cover set of $\mathcal
C$, where $S_i\subseteq[n]$, $i=1,\cdots t$, and
$G=(G_1,\cdots,G_n)$ is a generating matrix of $\mathcal C$. We
denote $\mathcal G=\{G_1,\cdots,G_n\}$ and $\mathcal G_i=\{G_\ell;
\ell\in S_i\}$\footnote{When $G_i$ and $G_j$ are viewed as vectors
of $\mathbb F_q^k$, it is possible for $G_i=G_j$ where $i\neq j$.
However, when treating them as two different columns of $G$, we
shall view $G_i$ and $G_j$ as two separate elements in $\mathcal
G$ (even though they may be identical).}. Then for any
$I\subseteq[t]$, we have
\begin{align}
|\cup_{i\in I}\mathcal G_i|=|\{G_i; i\in\cup_{\ell\in I}S_\ell\}|
=|\cup_{i\in I}S_i| \label{eqn:10}
\end{align} and by
Remark \ref{rem-locality}, we get
\begin{align}
\cup_{i\in[t]}\mathcal G_i=\mathcal G ~\text{and} ~
\cup_{i\in[t]\backslash\{j\}}\mathcal G_i\neq\mathcal G, \forall
j\in[t].\label{eqn:11}
\end{align}
We first give some lemmas to help prove our main results.
\vskip 10pt
\begin{lem}\label{rank-sum}
Consider three sets $A,B,X\subseteq\mathbb F_q^k$. If $C$ is a
subset of $X$ satisfying $\text{Rank}(B\cup C)=\text{Rank}(A\cup
B\cup C)$, then
$$\text{Rank}(X\cup A\cup B)-|B|\leq\text{Rank}(X).$$
\end{lem}
\begin{proof}
Since $C\subseteq X$ and $\text{Rank}(B\cup C)=\text{Rank}(A\cup
B\cup C)$, we have \vspace{-0.05in}\begin{eqnarray*}
\text{Rank}(X\cup A\cup B)&=&\text{Rank}(X\cup C\cup A\cup B)\\
&=&\text{Rank}(X\cup B\cup C)\\&=&\text{Rank}(X\cup B)\\&\leq&
\text{Rank}(X)+\text{Rank}(B)\\&\leq&\text{Rank}(X)+|B|.
\end{eqnarray*}
Therefore, $\text{Rank}(X\cup A\cup B)-|B|\leq\text{Rank}(X)$.
\end{proof}
\vskip 10pt
\begin{lem}\label{cup-rank}
Suppose $\{i_1,\cdots,i_\ell\}\subseteq[t]$ such that $\mathcal
G_{i_j}\nsubseteq\langle\cup_{\lambda=1}^{j-1}\mathcal
G_{i_\lambda}\rangle$, $j=2,\cdots,\ell$. Then
$$|\cup_{j=1}^{\ell}S_{i_j}|\geq\text{Rank}(\cup_{j=1}^{\ell}\mathcal
G_{i_j})+\ell(\delta-1).$$
\end{lem}
\begin{proof} We prove this lemma by induction.
From Remark \ref{rem-loty}, $|S_{i_1}|\geq\text{Rank}(\mathcal
G_{i_1})+(\delta-1)$. Hence the claim holds for $\ell=1$.
Now consider $\ell\ge 2$. We assume that the claim holds for $\ell-1$, i.e.,
\begin{align}
|\cup_{j=1}^{\ell-1}S_{i_j}|\geq\text{Rank}(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})+(\ell-1)(\delta-1).\label{eqn:12}
\end{align}
We shall prove that the claim is true for $\ell$.
First, we point out that $|\mathcal
G_{i_\ell}\backslash(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})|>\delta-1.$ In fact, if $|\mathcal
G_{i_\ell}\backslash(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})|\leq\delta-1$, then $|\mathcal
G_{i_\ell}\cap(\cup_{j=1}^{\ell-1}\mathcal G_{i_j})|\geq|\mathcal
G_{i_\ell}|-(\delta-1)$. From condition (2) of Remark \ref{rem-locality},
$\mathcal G_{i_\ell}\subseteq\langle\mathcal
G_{i_\ell}\cap(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})\rangle\subseteq \langle\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}\rangle$, which presents a contradiction to the assumption that $\mathcal
G_{i_\ell}\nsubseteq\langle\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}\rangle$. Thus,
$$|\mathcal G_{i_\ell}\backslash(\cup_{j=1}^{\ell-1}\mathcal G_{i_j})|>\delta-1.$$
Let $X=\cup_{j=1}^{\ell-1}\mathcal G_{i_j}$ and $C=\mathcal
G_{i_\ell}\cap(\cup_{j=1}^{\ell-1}\mathcal G_{i_j})=\mathcal
G_{i_\ell}\cap X$. Let $A$ be a fixed $(\delta-1)$-subset of
$\mathcal G_{i_\ell}\backslash(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})$ and $B=(\mathcal
G_{i_\ell}\backslash\cup_{j=1}^{\ell-1}\mathcal G_{i_j})\backslash
A$.
From condition (2) of Remark \ref{rem-locality}, $\text{Rank}(B\cup
C)=\text{Rank}(A\cup B\cup C).$ Then, from Lemma \ref{rank-sum}, we
get
$$\text{Rank}(X\cup A\cup B)-|B|\leq\text{Rank}(X)$$ i.e.,
\begin{align}
\text{Rank}(\cup_{j=1}^{\ell}\mathcal G_{i_j})-|B|\leq
\text{Rank}(\cup_{j=1}^{\ell-1}\mathcal G_{i_j}).\label{eqn:13}
\end{align}
Clearly, $\cup_{j=1}^{\ell}\mathcal G_{i_j}$ is a disjoint union
of $A, B$ and $\cup_{j=1}^{\ell-1}\mathcal G_{i_j}$. Hence,
\begin{eqnarray*} |\cup_{j=1}^{\ell}\mathcal
G_{i_j}|&=&|\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}|+|A|+|B|\nonumber\\&=&|\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}|+(\delta-1)+|B|\nonumber
\end{eqnarray*}
and from (\ref{eqn:10}), we get
\begin{align}
|\cup_{j=1}^{\ell}S_{i_j}|=|\cup_{j=1}^{\ell}\mathcal G_{i_j}|
=|\cup_{j=1}^{\ell-1}S_{i_j}|+(\delta-1)+|B|.\label{eqn:14}
\end{align}
Combining (\ref{eqn:12})-(\ref{eqn:14}), we have
\begin{eqnarray*} |\cup_{j=1}^{\ell}S_{i_j}|
&=&|\cup_{j=1}^{\ell-1}S_{i_j}|+(\delta-1)+|B|
\\&\geq&\text{Rank}(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})+\ell(\delta-1)+|B|\\
&\geq&\text{Rank}(\cup_{j=1}^{\ell}\mathcal
G_{i_j})-|B|+\ell(\delta-1)+|B|\\&=&\text{Rank}(\cup_{j=1}^{\ell}\mathcal
G_{i_j})+\ell(\delta-1)
\end{eqnarray*}
which completes the proof.
\end{proof}
\vskip 10pt
\begin{lem}\label{not-ctn}
Suppose $\mathcal C$ is an optimal $(r,\delta)_a$ code. Then
\begin{itemize}
\item [1)] $t\geq\lceil\frac{n}{r+\delta-1}\rceil\geq\lceil\frac{k}{r}\rceil$.
\item [2)] If $J\subseteq[t]$ and $|J|\leq\lceil\frac{k}{r}\rceil-1$,
then $\text{Rank}(\cup_{i\in J}\mathcal G_i)\leq k-1$ and
$\mathcal G_h\nsubseteq\langle\cup_{i\in
J}\mathcal G_i\rangle, \forall h\in[t]\backslash J$.
\item [3)] If $J\subseteq[t]$ and $|J|=\lceil\frac{k}{r}\rceil$,
then $\text{Rank}(\cup_{i\in J}\mathcal G_i)=k$ and
$|\cup_{i\in J}S_i|\geq k+\lceil\frac{k}{r}\rceil(\delta-1)$.
\end{itemize}
\end{lem}
\begin{proof}
1) (Proof by contradiction) Suppose $t\leq\lceil\frac{n}{r+\delta-1}\rceil-1$. Then from
Remark \ref{rem-locality}, $$|S_i|\leq r+\delta-1.$$ Hence,
\begin{eqnarray*} n&=&|\cup_{i\in[t]}S_i|\\
&\leq&t(r+\delta-1)\\
&\leq&(\lceil\frac{n}{r+\delta-1}\rceil-1)(r+\delta-1)\\&<&n
\end{eqnarray*} which presents a contradiction. Hence, it must hold
that $t\geq\lceil\frac{n}{r+\delta-1}\rceil$.
Moreover, from Claim 2) of Lemma \ref{low-bound},
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$. Thus,
$$t\geq\lceil\frac{n}{r+\delta-1}\rceil\geq\lceil\frac{k}{r}\rceil.$$
2) From Remark \ref{rem-loty}, $\text{Rank}(\mathcal G_i)\leq r,
\forall i\in[t]$. Hence, if $|J|\leq\lceil\frac{k}{r}\rceil-1$,
then
$$\text{Rank}(\cup_{i\in J}\mathcal G_i)\leq r|J|\leq
r(\lceil\frac{k}{r}\rceil-1)<r\frac{k}{r}=k.$$ i.e.,
$\text{Rank}(\cup_{i\in J}\mathcal G_i)\leq k-1.$
Now, suppose $\mathcal G_h\subseteq\langle\cup_{i\in J}\mathcal
G_i\rangle$; we will derive a contradiction. First, we can find a subset
$J_0=\{i_1,\cdots,i_s\}\subseteq J$ such that $\mathcal
G_h\subseteq\langle\cup_{\lambda=1}^{s}\mathcal G_{i_\lambda}\rangle$
and $\mathcal G_h\nsubseteq\langle\cup_{i\in J'}\mathcal
G_i\rangle$ for any proper subset $J'$ of $J_0$. In particular, we
have
$$\mathcal G_{i_j}\nsubseteq\langle\cup_{\lambda=1}^{j-1}\mathcal
G_{i_\lambda}\rangle, j=2,\cdots,s.$$ Note that $|J_0|\leq
|J|\leq\lceil\frac{k}{r}\rceil-1$. By the result proved above, we have
$$\text{Rank}(\cup_{i\in J_0}\mathcal G_i)\leq k-1.$$
Next, we can find a sequence $\mathcal G_{i_1},\cdots,\mathcal
G_{i_s}, \mathcal G_{i_{s+1}},\cdots, \mathcal G_{i_\ell}$ such
that $\ell\geq\lceil\frac{k}{r}\rceil,
\text{Rank}(\cup_{j=1}^\ell\mathcal G_{i_j})=k$ and $\mathcal
G_{i_j}\nsubseteq\langle\cup_{\lambda=1}^{j-1}\mathcal
G_{i_\lambda}\rangle, j=2,\cdots,\ell$. In particular,
$\text{Rank}(\cup_{j=1}^{\ell-1}\mathcal G_{i_j})\leq k-1$.
Therefore, there exists a $\mathcal G'_{i_\ell}\subseteq\mathcal
G_{i_\ell}$ such that $\text{Rank}((\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})\cup\mathcal G'_{i_\ell})=k-1$. Denote
$(\cup_{j=1}^{\ell-1}\mathcal G_{i_j})\cup\mathcal G'_{i_\ell}=S$.
Then $\text{Rank}(S)=k-1$ and
\begin{eqnarray}
\nonumber |\mathcal
G'_{i_\ell}\backslash\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}|&\geq&\text{Rank}(S)-\text{Rank}(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})\\
&=&(k-1)-\text{Rank}(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}). \label{eqn:15}
\end{eqnarray}
From Lemma \ref{cup-rank},
\begin{align}
|\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}|\geq\text{Rank}(\cup_{j=1}^{\ell-1}\mathcal
G_{i_j})+(\ell-1)(\delta-1).
\label{eqn:16}
\end{align}
Then by equations (\ref{eqn:15}) and (\ref{eqn:16}),
\begin{eqnarray} \nonumber|S|&=&|\mathcal
G'_{i_\ell}\backslash\cup_{j=1}^{\ell-1}\mathcal
G_{i_j}|+|\cup_{j=1}^{\ell-1}\mathcal G_{i_j}|\\
\nonumber&\geq&(k-1)+(\ell-1)(\delta-1)\\
&\geq&k-1+(\lceil\frac{k}{r}\rceil-1)(\delta-1). \label{eqn:17}
\end{eqnarray}
Since $h\in[t]\backslash J$, $\mathcal G_h\neq \mathcal G_{i_j},
j=1,\cdots,s$. Moreover, since $\mathcal
G_h\subseteq\langle\cup_{\lambda=1}^{s}\mathcal G_{i_\lambda}\rangle$
and $\mathcal
G_{i_j}\nsubseteq\langle\cup_{\lambda=1}^{j-1}\mathcal
G_{i_\lambda}\rangle, j=2,\cdots,\ell$, we have $\mathcal G_h\neq
\mathcal G_{i_j}, j=s+1,\cdots,\ell$. From equation (\ref{eqn:11}), we have
$\mathcal G_h\nsubseteq\cup_{j=1}^\ell \mathcal G_{i_j}$. Then, from
equation (\ref{eqn:17}), we get
$$|\mathcal G_h\cup S|>|S|\geq
k-1+(\lceil\frac{k}{r}\rceil-1)(\delta-1).$$ Since we assumed
$\mathcal G_h\subseteq\langle\cup_{\lambda=1}^{s}\mathcal
G_{i_\lambda}\rangle\subseteq\langle S\rangle$, we have
$\text{Rank}(\mathcal G_h\cup S)=\text{Rank}(S)=k-1$. By Lemma
\ref{fact}, we have
$$d\leq n-|\mathcal G_h\cup
S|<n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1),$$ which
contradicts the assumption that $\mathcal C$ is an optimal
$(r,\delta)_a$ code. Hence, it must be that $\mathcal
G_h\nsubseteq\langle\cup_{i\in J}\mathcal G_i\rangle$.\footnote{In
this proof, for any $(r,\delta)_a$ code $\mathcal C$, we obtain a
subset $S\subseteq\mathcal G$ such that $|S|\geq
k-1+(\lceil\frac{k}{r}\rceil-1)(\delta-1)$ and Rank$(S)=k-1$. Then
by Lemma \ref{fact}, the minimum distance of $\mathcal C$ is
$d\leq n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1)$, which also
provides a proof of the minimum distance bound in (\ref{eqn:1}).}
3) Suppose $J=\{i_1,\cdots,i_s\}$, where
$s=\lceil\frac{k}{r}\rceil$. By claim 2), $$\mathcal
G_{i_j}\nsubseteq\langle\cup_{\lambda=1}^{j-1}\mathcal
G_{i_\lambda}\rangle, j=2,\cdots,s.$$
First, we have $\text{Rank}(\cup_{i\in J}\mathcal G_i)=k$.
Otherwise, as in the proof of claim 2), we can find a sequence
$\mathcal G_{i_1},\cdots,\mathcal G_{i_s}$, $\mathcal
G_{i_{s+1}},\cdots, \mathcal
G_{i_\ell}~(\ell>s=\lceil\frac{k}{r}\rceil)$ and a set
$S=(\cup_{j=1}^{\ell-1}\mathcal G_{i_j})\cup\mathcal
G'_{i_\ell}~(\mathcal G'_{i_\ell}\subseteq\mathcal G_{i_\ell})$
such that $$|S|\geq k-1+(\ell-1)(\delta-1)>
k-1+(\lceil\frac{k}{r}\rceil-1)(\delta-1).$$ By Lemma \ref{fact},
$$d\leq n-|S|<n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1)$$ which
contradicts the assumption that $\mathcal C$ is an optimal
$(r,\delta)_a$ code. Therefore, we have $\text{Rank}(\cup_{i\in
J}\mathcal G_i)=k$.
Now, by Lemma \ref{cup-rank}, \begin{eqnarray*} |\cup_{i\in
J}S_i|&\geq&\text{Rank}(\cup_{i\in J}\mathcal
G_i)+\lceil\frac{k}{r}\rceil(\delta-1)
\\&=&k+\lceil\frac{k}{r}\rceil(\delta-1).
\end{eqnarray*}
This completes the proof.
\end{proof}
\vskip 10pt
We now present our main theorem of this section.
\vskip 10pt
\begin{thm}\label{stru-opt}
Suppose $\mathcal C$ is an optimal $(r,\delta)_a$ linear code. If
$r|k$ and $r<k$, then the following conditions hold:
\begin{itemize}
\item [1)] $S_1,\cdots,S_t$ are mutually disjoint; \item [2)]
$|S_i|=r+\delta-1, \forall i\in[t]$, and the punctured code
$\mathcal C|_{S_i}$ is an $[r+\delta-1,r,\delta]$ MDS code.
\end{itemize}
In particular, we have $(r+\delta-1)\mid n$.
\end{thm}
\begin{proof}
Since $r|k$ and $r<k$, then $k=\ell r$ for some $\ell\geq 2$. By
1) of Lemma \ref{not-ctn}, $t\geq\lceil\frac{k}{r}\rceil=\ell$.
Let $\{i_1,i_2\}\subseteq[t]$ be arbitrarily chosen. Let $J$ be an
$\ell$-subset of $[t]$ such that $\{i_1,i_2\}\subseteq J$. Then by
3) of Lemma \ref{not-ctn},
\begin{align}
\text{Rank}(\cup_{i\in J}\mathcal G_i)=k=\ell r, \label{eqn:18}
\end{align} and
\begin{align}
\left|\cup_{i\in J}S_{i}\right|\geq k+\ell(\delta-1)=
\ell(r+\delta-1).\label{eqn:19}
\end{align}
Since $|S_{i}|\leq r+\delta-1$ and by Remark \ref{rem-locality},
$\text{Rank}(\mathcal G_i)\leq r$, then equations (\ref{eqn:18})
and (\ref{eqn:19}) imply that $\text{Rank}(\mathcal G_i)=r$,
$|S_{i}|=r+\delta-1$, and $\{S_{i}\}_{i\in J}$ are mutually
disjoint.
In particular, $\text{Rank}(\mathcal G_{i_1})=\text{Rank}(\mathcal
G_{i_2})=r$, $\mathcal G_{i_1}\cap\mathcal G_{i_2}=\emptyset$ and
$|S_{i_1}|=|S_{i_2}|=r+\delta-1$. Since $i_1$ and $i_2$ are
arbitrarily chosen, we have proved that $\text{Rank}(\mathcal
G_i)=r$ and $|S_{i}|=r+\delta-1$ for all $i\in[t]$, and that
$S_1,\cdots,S_t$ are mutually disjoint. Hence, $(r+\delta-1)\mid
n$. Moreover, by Lemma
\ref{fact} and Remark \ref{rem-loty}, $\mathcal C|_{S_i}$ is an
$[r+\delta-1,r,\delta]$ MDS code.
\end{proof}
\vskip 10pt
In \cite{Prakash12}, it was proved that if $\mathcal C$ is an
optimal $(r,\delta)_i$ code, then there exists a collection
$\{S_1,\cdots,S_a\}\subseteq\{S_1,\cdots,S_t\}$ which has the same
properties in Theorem \ref{stru-opt}, where $a$ is a
properly-defined value. Thus, Theorem \ref{stru-opt} shows that
optimal $(r,\delta)_a$ codes, as a sub-class of optimal
$(r,\delta)_i$ codes, tend to have a simpler structure.
\section{Non-existence Conditions of Optimal $(r,\delta)_a$ Linear Codes}
In this section, we derive two sets of conditions under which
there exists no optimal $(r,\delta)_a$ linear codes. From the
minimum distance bound in (\ref{eqn:1}), we know that when $r=k$,
optimal $(r,\delta)_a$ linear codes are exactly MDS codes. Hence,
in this section, we focus on the case of $r<k$.
The first result is obtained directly from Theorem \ref{stru-opt}.
\vskip 10pt
\begin{thm}\label{non-exst}
If $(r+\delta-1)\nmid n$ and $r|k$, then there exist no
optimal $(r,\delta)_a$ linear codes.
\end{thm}
\begin{proof}
If $\mathcal C$ is an optimal $(r,\delta)_a$ linear code and
$r|k$, then by Theorem \ref{stru-opt}, $(r+\delta-1)|n$, which
contradicts the condition that $(r+\delta-1)\nmid n$. Hence, there
exist no optimal $(r,\delta)_a$ linear codes when
$(r+\delta-1)\nmid n$ and $r|k$.
\end{proof}
\vskip 10pt
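The condition of Theorem \ref{non-exst} is easy to check mechanically. The following is a minimal Python sketch (the function name is ours, not from the paper):

```python
def no_optimal_code_exists(n, k, r, delta):
    """True when Theorem non-exst rules out optimal (r,delta)_a codes:
    (r+delta-1) does not divide n while r divides k."""
    return n % (r + delta - 1) != 0 and k % r == 0

# n = 13, k = 6, r = delta = 2: r | k and 3 does not divide 13
print(no_optimal_code_exists(13, 6, 2, 2))   # True
# n = 12: (r+delta-1) | n, so the theorem does not apply
print(no_optimal_code_exists(12, 6, 2, 2))   # False
```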
When $(r+\delta-1)\nmid n$ and $r\nmid k$, we provide below a set
of conditions under which no optimal $(r,\delta)_a$ code exists.
\vskip 10pt
\begin{thm}\label{non-exst-1}
Suppose $n=w(r+\delta-1)+m$ and $k=ur+v$, where $0<m<r+\delta-1$
and $0<v<r$. If $m<v+\delta-1$ and $u\geq 2(r-v)+1$, then there
exist no optimal $(r,\delta)_a$ codes.
\end{thm}
\begin{proof}
We prove this theorem by contradiction.
Suppose $\mathcal C$ is an optimal $(r,\delta)_a$ code over the
field $\mathbb F_q$ and $\mathcal S=\{S_1,\cdots,S_t\}$ is an
$(r,\delta)$-cover set of $\mathcal C$. Then by claim 1) of Lemma
\ref{not-ctn}, we have
\begin{align}
t\geq\left\lceil\frac{n}{r+\delta-1}\right\rceil=w+1.\label{eqn:20}
\end{align}
Moreover, by 3) of Lemma \ref{not-ctn}, for any
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$, $$|\cup_{i\in
J}S_i|\geq k+\left\lceil\frac{k}{r}\right\rceil(\delta-1).$$
For each $i\in[t]$, if $|S_i|<r+\delta-1$, let $T_i\subseteq[n]$
be such that $S_i\subseteq T_i$ and $|T_i|=r+\delta-1$; if
$|S_i|=r+\delta-1$, let $T_i=S_i$. Then clearly,
$$\cup_{i\in[t]}T_i=\cup_{i\in[t]}S_i=[n]$$ and for any
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$,
\begin{align} |\cup_{i\in
J}T_i|\geq k+\left\lceil\frac{k}{r}\right\rceil(\delta-1).\label{eqn:21}
\end{align}
Let $M=(m_{i,j})_{t\times n}$ be a $t\times n$ matrix such that
$m_{i,j}=1$ if $j\in T_i$, and $m_{i,j}=0$ otherwise. For each
$j\in[n]$, let
$$A_j=\{i\in[t];m_{i,j}=1\}.$$ Then $|A_j|$ is the number of $T_i
~(i\in[t])$ satisfying $j\in T_i$, and this number equals the
number of $1$s in the $j$th column of $M$. Since
$\cup_{i\in[t]}T_i=[n]$, then $|A_j|>0, \forall j\in[n]$. On the
other hand, by the construction of $M$, for each $i\in[t]$,
$T_i=\{j\in[n];m_{i,j}=1\}.$ Thus, the number of the $1$s in each
row of $M$ is $r+\delta-1$. It then follows that the total number
of the $1$s in $M$ is
\begin{align}
\sum_{j=1}^n|A_j|=\sum_{i=1}^t|T_i|=t(r+\delta-1).\label{eqn:22}
\end{align}
Combining (\ref{eqn:20}) and (\ref{eqn:22}), we have
\begin{align} \nonumber
\sum_{j=1}^n|A_j|\geq&(w+1)(r+\delta-1)\\
=&n+(r+\delta-1-m).\label{eqn:23}
\end{align}
Since $m<v+\delta-1$, then $$r+\delta-1-m>r-v.$$ Hence from (\ref{eqn:23}), we
have
\begin{align}
\sum_{j=1}^n|A_j|\geq n+(r-v+1).\label{eqn:24}
\end{align}
Let $P=\{j\in[n];|A_j|>1\}$. From (\ref{eqn:24}), $P\neq\emptyset$ and
$$\sum_{j\in P}|A_j|\geq|P|+(r-v+1).$$
Without loss of generality, assume $P=\{1,\cdots,\ell\}$. Since
$|A_j|>1, \forall j\in P$, we can find a number
$\lambda\in\{1,\cdots,\ell\}$ such that
$\sum_{j=1}^{\lambda-1}|A_{j}|<\lambda+(r-v)$ and
$\sum_{j=1}^{\lambda}|A_{j}|\geq\lambda+(r-v+1)$. This means that
we can find a subset $B_{\lambda}\subseteq A_{\lambda}$ such that
$|B_{\lambda}|>1$ and
\begin{align}
\sum_{j=1}^{\lambda-1}|A_{j}|+|B_{\lambda}|=\lambda+r-v+1.\label{eqn:25}
\end{align}
Also note that
\begin{align}
\lambda\leq r-v+1,\label{eqn:26}
\end{align} because otherwise,
$\sum_{j=1}^{\lambda-1}|A_{j}|+|B_{\lambda}|\geq
2\lambda>\lambda+r-v+1$, which contradicts (\ref{eqn:25}).
Let $B=(\cup_{j=1}^{\lambda-1}A_{j})\cup B_\lambda$. Then from
(\ref{eqn:25}),
$$|B|=|(\cup_{j=1}^{\lambda-1}A_{j})\cup B_\lambda|
\leq\sum_{j=1}^{\lambda-1}|A_{j}|+|B_{\lambda}|\leq 2(r-v+1),$$
where the last inequality follows from (\ref{eqn:25}) and
(\ref{eqn:26}). Since $u\geq 2(r-v)+1$, we have $2(r-v+1)\leq
u+1$, and hence
Let $J$ be a $\lceil\frac{k}{r}\rceil$-subset of $[t]$ such that
$B\subseteq J$. By the construction of $M$ and $B$, for each
$j\in\{1,\cdots,\lambda-1\}$, there are at least $|A_j|$ subsets
in $\{T_i;i\in B\}$ containing $j$, and there are at least
$|B_\lambda|$ subsets in $\{T_i;i\in B\}$ containing $\lambda$. Hence,
\begin{align}
|\cup_{i\in
J}T_i|\leq|J|(r+\delta-1)-(\sum_{j=1}^{\lambda-1}|A_{j}|+
|B_{\lambda}|-\lambda).\label{eqn:27}
\end{align}
Combining (\ref{eqn:25}) and (\ref{eqn:27}), we
have
\begin{eqnarray*} |\cup_{i\in
J}T_i|&\leq&\lceil\frac{k}{r}\rceil(r+\delta-1)-(r-v+1)\\
&=&ur+v-1+\lceil\frac{k}{r}\rceil(\delta-1)\\
&=&k-1+\lceil\frac{k}{r}\rceil(\delta-1),
\end{eqnarray*}
which contradicts (\ref{eqn:21}).
Thus, we can conclude that there exist no optimal $(r,\delta)_a$
linear codes when $m<v+\delta-1$ and $u\geq 2(r-v)+1$.
\end{proof}
\vskip 10pt
{\it Example:} We now provide an example to help illustrate the
method used in the proof of Theorem \ref{non-exst-1}. Let
$n=13,r=\delta=2$ and $k=7$. Suppose $T_1=\{1,2,3\}$,
$T_2=\{4,5,6\}$, $T_3=\{7,8,9\}, T_4=\{10,11,12\}, T_5=\{1,5,13\}$
and $T_6=\{5,8,13\}$. Following the notations in the proof of
Theorem \ref{non-exst-1}, we have
\begin{center}
\includegraphics[height=3cm]{fig4}
\end{center}
Therefore, $A_1=\{1,5\}, A_5=\{2,5,6\}, A_8=\{3,6\},
A_{13}=\{5,6\}$, and $P=\{1,5,8,13\}$. Note that
$|A_1|+|A_5|=5>2+(r-v+1)$. Let $B_2=\{2,5\}\subseteq A_5$ and
$B=A_1\cup B_2=\{1,2,5\}$; then $|B|<4=\lceil\frac{k}{r}\rceil$.
Let $J=\{1,2,3,5\}\supseteq B$, then $\cup_{i\in
J}T_i=\{1,2,3,4,5,6,7,8,9,13\}.$ Hence, $|\cup_{i\in
J}T_i|=10<11=k+\lceil\frac{k}{r}\rceil(\delta-1)$. (See the
illustration of $M$ below.)
\begin{center}
\includegraphics[height=3cm]{fig5}
\end{center}
More generally, in this example, for any $t\geq 5$ and
$\{T_1,\cdots,T_t\}$ such that $|T_i|=r+\delta-1=3$ and
$\cup_{i=1}^tT_i=[n]=\{1,\cdots,13\}$, we can always find a
$J\subseteq[t]$ such that $|\cup_{i\in
J}T_i|<11=k+\lceil\frac{k}{r}\rceil(\delta-1)$.
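The bookkeeping in this example is small enough to reproduce with a short script; the Python sketch below (variable names are ours) rebuilds the sets $A_j$, the set $P$, and the violating subset $J$:

```python
from math import ceil

n, r, delta, k = 13, 2, 2, 7
T = {1: {1, 2, 3}, 2: {4, 5, 6}, 3: {7, 8, 9}, 4: {10, 11, 12},
     5: {1, 5, 13}, 6: {5, 8, 13}}

# A_j = {i : j in T_i}: the positions of the 1s in column j of M.
A = {j: {i for i in T if j in T[i]} for j in range(1, n + 1)}
P = sorted(j for j in A if len(A[j]) > 1)
print(P)                                  # [1, 5, 8, 13]

J = {1, 2, 3, 5}                          # the subset from the example
union = set().union(*(T[i] for i in J))
bound = k + ceil(k / r) * (delta - 1)     # 7 + 4*1 = 11
print(len(union), bound)                  # 10 11: the cover condition fails
```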
In general, since $0<v<r$, we have $r-v\leq r-1$. If $k>2r^2+r$,
then $u\geq 2(r-1)+1\geq 2(r-v)+1$. Hence, when
$0<n~\text{mod~}(r+\delta-1)<(k~\text{mod}~r)+\delta-1$ and
$k>2r^2+r$, then by Theorem \ref{non-exst-1}, there exist no
optimal $(r,\delta)_a$ codes.
\section{Construction of Optimal $(r,\delta)_a$ Codes: Algorithm 1}
In this section, we propose a deterministic algorithm for
constructing optimal $(r,\delta)_a$ linear codes over the field of
size $q\geq\binom{n}{k-1}$, when $(r+\delta-1)|n$ or $m\geq
v+\delta-1$, where $n=w(r+\delta-1)+m$ and $k=ur+v$ satisfying
$0<v<r$ and $0<m<r+\delta-1$. Recall that when $(r+\delta-1)|n$,
it was proved in \cite{Prakash12} that optimal $(r,\delta)_a$
linear codes exist over the field of size $q>kn^k$. Note that our
method requires a much smaller field than that required in
\cite{Prakash12}, and hence it also has a lower complexity of
implementation.
To present our method, we will use the following definitions and
notations, most of which follow from \cite{Gopalan12}.
\vskip 10pt
\begin{defn}\label{core}
Let $\mathcal S=\{S_1,\cdots, S_t\}$ be a partition of $[n]$ and
$\delta\leq|S_i|\leq r+\delta-1, \forall i\in[t]$. A subset
$S\subseteq[n]$ is called an $(\mathcal S,r)$-\emph{core} if
$|S\cap S_i|\leq|S_i|-\delta+1, \forall i\in[t]$. If $S$ is an
$(\mathcal S,r)$-core and $|S|=k$, then $S$ is called an
$(\mathcal S,r,k)$-\emph{core}.
\end{defn}
\vskip 10pt
Clearly, if $S\subseteq[n]$ is an $(\mathcal S,r)$-core and
$S'\subseteq S$, then $S'$ is also an $(\mathcal S,r)$-core. In
particular, if $S\subseteq[n]$ is an $(\mathcal S,r)$-core and
$S'$ is a $k$-subset of $S$, then $S'$ is an $(\mathcal
S,r,k)$-core.
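Definition \ref{core} translates directly into a predicate. The sketch below (function names are ours) uses the parameters of the worked example later in this section ($r=\delta=2$, $k=3$, $S_1=\{1,2,3\}$, $S_2=\{4,5,6\}$):

```python
def is_core(S, partition, delta):
    """(S, r)-core test: |S ∩ S_i| <= |S_i| - delta + 1 for every part."""
    return all(len(S & Si) <= len(Si) - delta + 1 for Si in partition)

def is_k_core(S, partition, delta, k):
    """(S, r, k)-core: an (S, r)-core of size exactly k."""
    return len(S) == k and is_core(S, partition, delta)

partition = [{1, 2, 3}, {4, 5, 6}]
print(is_k_core({1, 3, 4}, partition, 2, 3))   # True
print(is_k_core({1, 2, 3}, partition, 2, 3))   # False: |S ∩ S_1| = 3 > 2
```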
Before presenting our construction method, we first give a lemma,
which will play an important role in our discussion.
\vskip 10pt
\begin{lem}\label{sub-space}
Let $X_1,\cdots,X_\ell$ and $X$ be $\ell+1$ subspaces of $\mathbb
F_q^k$ and $X\nsubseteq X_i, \forall i\in[\ell]$. If $q\geq\ell$,
then $X\nsubseteq\cup_{i=1}^\ell X_i$.
\end{lem}
\begin{proof}
We prove this lemma by induction.
Clearly, the claim is true when $\ell=1$.
Now, we suppose that the claim is true for $\ell-1$, i.e.,
$$X\nsubseteq\cup_{i=1}^{\ell-1} X_i.$$ Then there exists an
$x\in X$ such that $x\notin\cup_{i=1}^{\ell-1} X_i$. If $x\notin
X_\ell$, then $x\notin\cup_{i=1}^{\ell} X_i$ and
$X\nsubseteq\cup_{i=1}^\ell X_i$. So we assume $x\in X_\ell$.
Since $X\nsubseteq X_\ell$, there exists a $y\in X$ such that
$y\notin X_\ell$. Then for any $\{a,a'\}\subseteq\mathbb F_q$ and
$i\in\{1,\cdots,\ell-1\}$,
$$\{ax+y,a'x+y\}\nsubseteq X_i.$$ $($Otherwise,
$(a-a')x=(ax+y)-(a'x+y)\in X_i$, which contradicts the
assumption that $x\notin\cup_{i=1}^{\ell-1} X_i.)$
Since $q\geq\ell$, we can pick a subset
$\{a_1,\cdots,a_\ell\}\subseteq\mathbb F_q$. Then $\{a_1x+y,
\cdots, a_\ell x+y\}\nsubseteq\cup_{i=1}^{\ell-1} X_i$.
~$($Otherwise, by the Pigeonhole principle, there is a subset
$\{a_{i_1},a_{i_2}\}\subseteq\{a_1,\cdots,a_\ell\}$ and a
$j\in\{1,\cdots,\ell-1\}$ such that
$\{a_{i_1}x+y,a_{i_2}x+y\}\subseteq X_j$, which contradicts the
proven result that for any $\{a,a'\}\subseteq\mathbb F_q$ and
$i\in\{1,\cdots,\ell-1\}$, $\{ax+y,a'x+y\}\nsubseteq X_i.)$
Without loss of generality, assume
$a_1x+y\notin\cup_{i=1}^{\ell-1} X_i$. Note that $x\in X_\ell$ and
$y\notin X_\ell$, then $a_1x+y\notin X_\ell$. Hence,
$a_1x+y\notin\cup_{i=1}^{\ell} X_i$. On the other hand, since
$x,y\in X$, then $a_1x+y\in X$. So $X\nsubseteq\cup_{i=1}^\ell
X_i$, which completes the proof.
\end{proof}
\vskip 10pt
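Lemma \ref{sub-space} can be checked numerically for small parameters. The sketch below (our own toy subspaces over $\mathbb F_5$ with $k=3$, not data from the paper) enumerates the spans directly:

```python
from itertools import product

q, k = 5, 3   # field size and ambient dimension (toy values)

def span(gens):
    """All vectors of the F_q-span of gens (vectors as tuples mod q)."""
    return {tuple(sum(c * g[t] for c, g in zip(coef, gens)) % q
                  for t in range(k))
            for coef in product(range(q), repeat=len(gens))}

X  = span([(1, 0, 0), (0, 1, 0)])          # a plane
Xs = [span([(0, 0, 1), (1, 1, 0)]),        # l = 3 <= q other subspaces
      span([(1, 2, 3)]),
      span([(0, 1, 1), (1, 0, 4)])]

assert all(not X <= Xi for Xi in Xs)       # hypothesis: X not inside any X_i
union = set().union(*Xs)
print(X <= union)                          # False: some x in X avoids all X_i
```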
We present our construction method in the following theorem.
\vskip 10pt
\begin{thm}\label{opt-suf-1}
Let $\mathcal S=\{S_1,\cdots,S_t\}$ be a partition of $[n]$ and
$\delta\leq|S_i|\leq r+\delta-1, \forall i\in[t]$. Suppose
$t\geq\lceil\frac{k}{r}\rceil$ and for any
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$, $|\cup_{i\in
J}S_i|\geq k+\lceil\frac{k}{r}\rceil(\delta-1)$. If
$q\geq\binom{n}{k-1}$, then there exists an optimal $(r,\delta)_a$
linear code over $\mathbb F_q$.
\end{thm}
\begin{proof}
For each $i\in[t]$, let $U_i$ be an $(|S_i|-\delta+1)$-subset of
$S_i$. Let $\Omega_0=\cup_{i\in[t]}U_i$ and $L=|\Omega_0|$. Let
$J$ be a $\lceil\frac{k}{r}\rceil$-subset of $[t]$. Since
$\cup_{i\in J}U_i\subseteq\Omega_0$, from the assumptions of this
theorem,
$$L=|\Omega_0|\geq|\cup_{i\in J}U_i|=|\cup_{i\in
J}S_i|-\lceil\frac{k}{r}\rceil(\delta-1)\geq k.$$
The construction of an optimal $(r,\delta)_a$ code consists of the
following two steps:
\emph{Step 1}: Construct an $[L,k]$ MDS code $\mathcal C_0$ over
$\mathbb F_q$. Since $q\geq\binom{n}{k-1}\geq n>L$, such an MDS
code exists over $\mathbb F_q$. Let $G'$ be a generating matrix of
$\mathcal C_0$. We index the columns of $G'$ by $\Omega_0$, i.e.,
$G'=(G_\ell)_{\ell\in\Omega_0}$, where $G_\ell$ is a column of
$G'$ for each $\ell\in\Omega_0$.
\emph{Step 2}: Extend $\mathcal C_0$ to an optimal $(r,\delta)_a$
code $\mathcal C$ over $\mathbb F_q$. This can be achieved by the
following algorithm.
\vspace{0.12in} \noindent \textbf{Algorithm 1:}

\noindent 1.~Let $\Omega=\Omega_0$.

\noindent 2.~For $i$ from $1$ to $t$:

\noindent 3.~~~~While $S_{i}\backslash\Omega\neq\emptyset$:

\noindent 4.~~~~~~~~Pick a $\lambda\in S_{i}\backslash\Omega$ and
let $G_\lambda\in\langle\{G_\ell;~\ell\in S_i\cap\Omega\}\rangle$
be such that for any $(\mathcal S,r,k)$-core
$S\subseteq\Omega\cup\{\lambda\}$, $\{G_\ell; ~\ell\in S\}$ is
linearly independent.

\noindent 5.~~~~~~~~$\Omega=\Omega\cup\{\lambda\}$.

\noindent 6.~Let $\mathcal C$ be the linear code generated by the
matrix $G=(G_1,\cdots,G_n)$.
\vskip 10pt
To complete the proof of Theorem \ref{opt-suf-1}, we need to prove
three claims: in Claims~1 and~2 below we show that the code
$\mathcal{C}$ output by Algorithm~1 is indeed an optimal
$(r,\delta)_a$ linear code over $\mathbb F_q$; in Claim~3, we
prove that the vector $G_\lambda$ described in Line 4 of
Algorithm~1 can always be found, hence the algorithm
terminates successfully. \\
\noindent\textbf{Claim 1:} The code $\mathcal{C}$ output by
Algorithm~1 is an $(r,\delta)_a$ linear code over $\mathbb F_q$.
By Definition \ref{def-locality} and Remark \ref{rem-loty}, we aim
to show that for every $i \in [t]$ and for every subset $I \subset
S_i$ with $|I|=|S_i|-\delta+1$, it holds that
\begin{equation}
\label{eq:h-1} \text{Rank}(\{G_\ell\}_{\ell\in
I})=\text{Rank}(\{G_\ell\}_{\ell\in S_i}).
\end{equation}
Since in Line 4 of Algorithm 1, we choose
$G_\lambda\in\langle\{G_\ell;\ell\in S_i\cap\Omega\}\rangle$, we
have
$$\text{Rank}(\{G_\ell\}_{\ell\in(S_i\cap\Omega)\cup\{\lambda\}})
=\text{Rank}(\{G_\ell\}_{\ell\in S_i\cap\Omega}).$$ By induction,
\begin{equation}
\label{eq:h-2}
\begin{split}
\text{Rank}(\{G_\ell\}_{\ell\in
S_i})&= \text{Rank}(\{G_\ell\}_{\ell\in S_i\cap\Omega_0})\\
&= \text{Rank}(\{G_\ell\}_{\ell\in U_i})\\
&= |S_i|-\delta+1.
\end{split}
\end{equation}
Suppose $i\in[t]$ and $I\subseteq S_i$ such that
$|I|=|S_i|-\delta+1$. Then $|I|=|S_i|-\delta+1\leq r\leq k$. Since
$t\geq\lceil\frac{k}{r}\rceil$, we can find a
$\lceil\frac{k}{r}\rceil$-subset $J'$ of $[t]$ such that $i\in
J'$. For each $j\in J'$, let $W_j$ be an $(|S_j|-\delta+1)$-subset
of $S_j$ such that $W_i=I$. Clearly, $\cup_{j\in J'}W_j$ is an
$(\mathcal S,r)$-core. From the assumption of this theorem,
$$|\cup_{j\in J'}S_j|\geq k+\lceil\frac{k}{r}\rceil(\delta-1).$$ Hence
$$|\cup_{j\in J'}W_j|=|\cup_{j\in J'}S_j|-
\lceil\frac{k}{r}\rceil(\delta-1)\geq k.$$ Let $S$ be a $k$-subset
of $\cup_{j\in J'}W_j$ such that $I\subseteq S$, then $S$ is an
$(\mathcal S,r,k)$-core. Therefore, $\{G_\ell;\ell\in S\}$ is
linearly independent, which in turn implies that $\{G_\ell;\ell\in
I\}$ is also linearly independent. Therefore,
\begin{equation}
\label{eq:h-3} \text{Rank}(\{G_\ell\}_{\ell\in I}) = |I| =
|S_i|-\delta+1.
\end{equation}
Combining (\ref{eq:h-2}) and (\ref{eq:h-3}) we obtain (\ref{eq:h-1}). \\
\noindent\textbf{Claim 2:} The code $\mathcal{C}$ output by
Algorithm~1 has minimum distance achieving the upper bound
(\ref{eqn:1}), and hence is an optimal $(r,\delta)_a$ linear code.
According to Lemma~\ref{fact} and (\ref{eqn:1}), it suffices to
prove that for any subset $T\subseteq[n]$ of size
$|T|=k+(\lceil\frac{k}{r}\rceil-1)(\delta-1)$,
\[
\text{Rank}(\{G_\ell\}_{\ell\in T}) = k.
\]
Let
$$J=\{j\in[t];|T\cap S_j|\geq|S_j|-\delta+1\}.$$ For each
$j\in J$, let $W_j$ be an $(|S_j|-\delta+1)$-subset of $T\cap
S_j$; For each $j\in[t]\backslash J$, let $W_j=T\cap S_j$. Then
$\cup_{j\in[t]}W_j$ is an $(\mathcal S,r)$-core. We consider the
following two cases:
Case 1: $|J|\geq\lceil\frac{k}{r}\rceil$. Without loss of
generality, assume that $|J|=\lceil\frac{k}{r}\rceil$\footnote{If
$|J|>\lceil\frac{k}{r}\rceil$, then pick a
$\lceil\frac{k}{r}\rceil$-subset $J_0$ of $J$, and replace $J$ by
$J_0$ in our discussion.}. Since $|\cup_{j\in J}S_j|\geq
k+\lceil\frac{k}{r}\rceil(\delta-1)$, then
$$|\cup_{j\in[t]}W_j|\geq|\cup_{j\in J}W_j|\geq k.$$
Case 2: $|J|\leq\lceil\frac{k}{r}\rceil-1$. In that case,
$$|\cup_{j\in
[t]}W_j|\geq |T|-|J|(\delta-1)\geq
|T|-(\lceil\frac{k}{r}\rceil-1)(\delta-1)\geq k.$$
In both cases, $|\cup_{j\in [t]}W_j|\geq k$. Let $S$ be a
$k$-subset of $\cup_{j\in [t]}W_j$, then $S$ is an $(\mathcal
S,r,k)$-core. Therefore, $\{G_\ell;\ell\in S\}$ are linearly
independent and
$$\text{Rank}(\{G_\ell\}_{\ell\in
T})=\text{Rank}(\{G_\ell\}_{\ell\in S})=k.$$ From equation
(\ref{eqn:1}) and Lemma \ref{fact}, we get
$$d=n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1),$$ where $d$ is
the minimum distance of $\mathcal C$. Thus, $\mathcal C$ is an
optimal $(r,\delta)_a$ code.\\
\noindent\textbf{Claim 3:} The vector $G_\lambda$ in Line 4 of
Algorithm 1 can always be found.
The proof of this claim is based on a classical technique in
network coding $($e.g., \cite{Li03,Jaggi05}$)$. Since
$G'=(G_\ell)_{\ell\in\Omega_0}$ is a generating matrix of the MDS
code $\mathcal C_0$, then for any $(\mathcal S,r,k)$-core
$S\subseteq\Omega_0$, $\{G_\ell;\ell\in S\}$ is linearly
independent. By induction, we can assume that for any $(\mathcal
S,r,k)$-core $S\subseteq\Omega$, $\{G_\ell;\ell\in S\}$ are
linearly independent.
Let $\Lambda$ be the set of all $S_0\subseteq\Omega$ such that
$S_0\cup\{\lambda\}$ is an $(\mathcal S,r,k)$-core. By Definition
\ref{core}, for any $S_0\in\Lambda$, $$|S_0|=k-1,$$
$$|S_0\cap S_j|\leq|S_j|-\delta+1, \ \forall j\in[t]\backslash\{i\},$$ and
$$|S_0\cap S_i|\leq|S_i|-\delta.$$ Note that
$$U_i\subseteq S_i\cap\Omega_0\subseteq S_i\cap\Omega.$$
Hence
$$|S_i\cap\Omega|\geq|U_i|=|S_i|-\delta+1>|S_i|-\delta\geq|S_0\cap
S_i|.$$ Thus, there is an
$\eta\in(S_i\cap\Omega)\backslash S_0$. Since $S_1,\cdots,S_{t}$
are mutually disjoint, $\eta\notin S_j, \forall
j\in[t]\backslash\{i\}$. Therefore,
$$|(S_0\cup\{\eta\})\cap S_j|\leq|S_j|-\delta+1, j=1,\cdots, t.$$ Then
$S_0\cup\{\eta\}\subseteq\Omega$ is an $(\mathcal S,r,k)$-core. By
assumption, $\{G_\ell\}_{\ell\in S_0\cup\{\eta\}}$ is linearly
independent. Hence
$$G_\eta\notin\langle\{G_\ell\}_{\ell\in S_0}\rangle, $$ and
$$\langle\{G_\ell\}_{\ell\in
S_i\cap\Omega}\rangle\nsubseteq\langle\{G_\ell\}_{\ell\in
S_0}\rangle.$$ Since $q\geq\binom{n}{k-1}\geq |\Lambda|$, by Lemma
\ref{sub-space},
$$\langle\{G_\ell\}_{\ell\in
S_i\cap\Omega}\rangle\nsubseteq(\cup_{S_0\in\Lambda}\langle\{G_\ell\}_{\ell\in
S_0}\rangle).$$ Let $G_\lambda$ be a vector in
$\langle\{G_\ell\}_{\ell\in
S_i\cap\Omega}\rangle\backslash(\cup_{S_0\in\Lambda}\langle\{G_\ell\}_{\ell\in
S_0}\rangle)$. Then for any $S_0\in\Lambda$, $\{G_\ell\}_{\ell\in
S_0\cup\{\lambda\}}$ are linearly independent.
Suppose $S\subseteq\Omega\cup\{\lambda\}$ is an $(\mathcal
S,r,k)$-core. If $\lambda\notin S$, then $S\subseteq\Omega$ and by
assumption, $\{G_\ell;\ell\in S\}$ is linearly independent. If
$\lambda\in S$, then $S_0=S\backslash\{\lambda\}\in\Lambda$ and by
the selection of $G_\lambda$, $\{G_\ell;\ell\in S\}$ is linearly
independent. Hence we always have that $\{G_\ell;\ell\in S\}$ is
linearly independent. Thus, the vector $G_\lambda$ satisfies the
requirement of Algorithm 1.
\end{proof}
\vskip 10pt
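To make the two steps concrete, here is a hedged Python sketch of Algorithm 1 for the toy parameters $r=\delta=2$, $k=3$, $n=6$, run over the prime field $\mathbb F_{11}$. The helper names are ours, and the search in Step 2 simply scans all candidate combinations rather than invoking Lemma \ref{sub-space}; a valid column is still guaranteed to exist since $11$ exceeds the number of relevant cores here.

```python
from itertools import combinations, product

p = 11                                    # field size (prime, for simplicity)
n, k, r, delta = 6, 3, 2, 2
S = [{1, 2, 3}, {4, 5, 6}]                # partition of [n] into repair groups

def rank_mod_p(vectors):
    """Rank over F_p of a list of length-k vectors (Gaussian elimination)."""
    rows = [list(v) for v in vectors]
    rank = 0
    for col in range(k):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col] % p), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [x * inv % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def k_cores(universe):
    """All (S, r, k)-cores contained in `universe`."""
    return [set(c) for c in combinations(sorted(universe), k)
            if all(len(set(c) & Si) <= len(Si) - delta + 1 for Si in S)]

# Step 1: an [L, k] MDS (Vandermonde) code on Omega_0 = U_1 ∪ U_2.
Omega = {1, 2, 4, 5}
G = {l: tuple(pow(a, e, p) for e in range(k))       # column (1, a, a^2)
     for a, l in enumerate(sorted(Omega), start=1)}

# Step 2: extend column by column as in Lines 2-5 of Algorithm 1.
for Si in S:
    while Si - Omega:
        lam = min(Si - Omega)
        base = [G[l] for l in sorted(Si & Omega)]
        for coef in product(range(p), repeat=len(base)):
            if not any(coef):
                continue
            G[lam] = tuple(sum(c * v[t] for c, v in zip(coef, base)) % p
                           for t in range(k))
            if all(rank_mod_p([G[l] for l in core]) == k
                   for core in k_cores(Omega | {lam}) if lam in core):
                break
        Omega.add(lam)

# Every (S, r, k)-core now has full rank, so by Claim 2 the minimum
# distance meets the bound n - k + 1 - (ceil(k/r) - 1)(delta - 1) = 3.
assert all(rank_mod_p([G[l] for l in core]) == k
           for core in k_cores(set(range(1, n + 1))))
dmin = min(sum(1 for l in range(1, n + 1)
               if sum(x[t] * G[l][t] for t in range(k)) % p)
           for x in product(range(p), repeat=k) if any(x))
print(dmin)                               # 3
```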
From the proof of Theorem \ref{opt-suf-1}, we can see that
$\mathcal S=\{S_1,\cdots,S_t\}$ is in fact an $(r,\delta)$-cover
set of the code $\mathcal C$, where $\mathcal C$ is the output of
Algorithm 1. The following example demonstrates how Algorithm 1
works.
{\it Example:} We now construct an optimal $(r,\delta)_a$ linear
code with $r=\delta=2,k=3$ and $n=6$. Let
$S_1=\{1,2,3\},S_2=\{4,5,6\}$ and $\mathcal S=\{S_1,S_2\}$. Let
$U_1=\{1,2\}, U_2=\{4,5\}$ and $\Omega_0=U_1\cup U_2=\{1,2,4,5\}$.
Our construction involves the following two steps.
Step 1: Construct a $[4,3]$ MDS code, where $4=|\Omega_0|$. Let
$G'=(G_1,G_2,G_4,G_5)$ be a generating matrix of such code.
Step 2: Extend $G'=(G_1,G_2,G_4,G_5)$ to a matrix
$G=(G_1,G_2,G_3,G_4,G_5,G_6)$ such that $G$ is a generating matrix
of an optimal $(2,2)_a$ linear code.
It remains to determine $G_3$ and $G_6$ via two iterations.
\begin{enumerate}
\item $i = 1$: $\Omega = \{1,2,4,5\}$ and $S_1 \setminus \Omega = \{3\}$.
We can verify that $\{1,3,4\}$, $\{1,3,5\}$, $\{2,3,4\}$,
$\{2,3,5\}$ and $\{3,4,5\}$ are exactly the subsets of
$\{1,2,3,4,5\}$ that are $(\mathcal S,r,k)$-cores containing the
index $3$. Let
$\Lambda=\{\{1,4\},\{1,5\}$, $\{2,4\}$, $\{2,5\},\{4,5\}\}$. Since
$G'=(G_1,G_2,G_4,G_5)$ generates an MDS code, then $G_1,G_2$ and
$G_4$ are linearly independent. So $\langle
G_1,G_2\rangle\nsubseteq\langle G_1,G_4\rangle$. Similarly,
$\langle G_1,G_2\rangle\nsubseteq\langle G_i,G_j\rangle, \forall
\{i,j\}\in\Lambda$. By Lemma \ref{sub-space}, if
$q\geq|\Lambda|=5$, then $\langle
G_1,G_2\rangle\nsubseteq\cup_{\{i,j\}\in\Lambda}\langle
G_i,G_j\rangle$. Note that $S_1 \cap \Omega = \{1,2\}$. Therefore,
let
$$G_3\in\langle G_1,G_2\rangle\backslash(\cup_{\{i,j\}\in\Lambda}\langle
G_i,G_j\rangle).$$ Then for any $(\mathcal S,r,k)$-core
$S\subseteq\{1,2,3,4,5\}$, $\{G_\ell; \ell\in S\}$ is linearly
independent.
\item $i = 2$: $\Omega = \{1,2,3,4,5\}$ and $S_2 \setminus \Omega = \{6\}$.
Similarly, we can verify that $\{1,2,6\}$, $\{1,3,6\}$,
$\{2,3,6\}$, $\{1,4,6\}$, $\{1,5,6\}$, $\{2,4,6\}$, $\{2,5,6\}$,
$\{3,4,6\}$ and $\{3,5,6\}$ are exactly the subsets of
$\{1,\cdots,6\}$ that are $(\mathcal S,r,k)$-cores containing the
index $6$. Let $\Lambda=\{\{1,2\}$, $\{1,3\}$, $\{2,3\}$,
$\{1,4\}$, $\{1,5\}$, $\{2,4\}$, $\{2,5\}$, $\{3,4\}$,
$\{3,5\}\}$. Since $G'=(G_1,G_2,G_4,G_5)$ generates an MDS code
and $G_3\notin\langle G_4,G_5\rangle$ by the choice of $G_3$ in
the first iteration, we have $\langle
G_4,G_5\rangle\nsubseteq\langle G_i,G_j\rangle, \forall
\{i,j\}\in\Lambda$. By Lemma \ref{sub-space}, if
$q\geq|\Lambda|=9$, then $\langle
G_4,G_5\rangle\nsubseteq\cup_{\{i,j\}\in\Lambda}\langle
G_i,G_j\rangle$. As $S_2 \cap \Omega = \{4,5\}$, let
$$G_6\in\langle G_4,G_5\rangle\backslash(\cup_{\{i,j\}\in\Lambda}\langle
G_i,G_j\rangle).$$ Then for any $(\mathcal S,r,k)$-core $S$,
$\{G_\ell; \ell\in S\}$ is linearly independent. Thus, we can
obtain a matrix $G=(G_1,G_2,G_3,G_4,G_5,G_6)$ such that for any
$(\mathcal S,r,k)$-core $S$, $\{G_\ell; \ell\in S\}$ is linearly
independent. Let $\mathcal C$ be the linear code generated by $G$.
Then $\mathcal C$ is an optimal $(2,2)_a$ linear code.
\end{enumerate}
We can in fact employ a smaller field. The
following is a generating matrix of an optimal $(2,2)_a$ linear
code:
\begin{equation*}
G=\left(\begin{array}{cccccc}
1 & 0 & 1 & 0 & 1 & 1\\
0 & 1 & 1 & 0 & \alpha & \alpha\\
0 & 0 & 0 & 1 & 1 & \alpha\\
\end{array}\right)
\end{equation*}
over the field $\mathbb F_4=\{0,1,\alpha,1+\alpha\}$, where
$\alpha^2=1+\alpha$. \vskip 10pt
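This matrix can be verified mechanically. Below is a hedged Python check (our own encoding of $\mathbb F_4$ as $\{0,1,2,3\}$ with $2=\alpha$ and $3=\alpha+1$) that $G$ generates a $[6,3,3]$ code whose punctured codes on $S_1=\{1,2,3\}$ and $S_2=\{4,5,6\}$ are $[3,2,2]$ MDS codes:

```python
from itertools import product

# F_4 = {0, 1, alpha, alpha+1} encoded as 0,1,2,3; addition is XOR and
# the multiplication table below uses alpha^2 = 1 + alpha.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]
A = 2                                     # the element alpha

G = [[1, 0, 1, 0, 1, 1],
     [0, 1, 1, 0, A, A],
     [0, 0, 0, 1, 1, A]]                  # rows of the matrix in the text

def codeword(x):
    out = []
    for j in range(6):
        s = 0
        for t in range(3):
            s ^= MUL[x[t]][G[t][j]]       # addition in F_4 is XOR
        out.append(s)
    return out

words = [codeword(x) for x in product(range(4), repeat=3)]
dmin = min(sum(1 for c in w if c) for w in words if any(w))
print(dmin)                               # 3: the bound (1) is met

# Each punctured code C|_{S_i} is a [3, 2, 2] MDS code.
for group in [(0, 1, 2), (3, 4, 5)]:
    proj = {tuple(w[j] for j in group) for w in words}
    assert len(proj) == 16                # dimension 2 over F_4
    assert min(sum(1 for c in v if c) for v in proj if any(v)) == 2
```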
In the rest of this section, we shall use Theorem \ref{opt-suf-1}
to prove that optimal $(r,\delta)_a$ linear codes exist over a
field of size $q\geq\binom{n}{k-1}$ when $(r+\delta-1)|n$ or
$m\geq v+\delta-1$, where $n=w(r+\delta-1)+m$ and $k=ur+v$
satisfying $0<v<r$ and $0<m<r+\delta-1$. By Claim 2) of Lemma
\ref{low-bound}, $\frac{n}{r+\delta-1}\geq\frac{k}{r}$ is a
necessary condition for the existence of optimal $(r,\delta)_a$
linear codes. For this reason, we assume
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$ holds in both cases.
\vskip 10pt
\begin{thm}\label{opt-ext-1}
Suppose $(r+\delta-1)|n$.
If $q\geq\binom{n}{k-1}$, then there exists an optimal
$(r,\delta)_a$ linear code over $\mathbb F_q$.
\end{thm}
\begin{proof}
Let $n=t(r+\delta-1)$.
Note that we have assumed that
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$. Then
$$t=\lceil\frac{n}{r+\delta-1}\rceil\geq\lceil\frac{k}{r}\rceil.$$
Let $\{S_1,\cdots,S_t\}$ be a partition of $\{1,\cdots,n\}$ such
that $|S_i|=r+\delta-1, i=1,\cdots,t$.
For any $J\subseteq[t]$ of size $|J|=\lceil\frac{k}{r}\rceil$,
$$|\cup_{i\in J}S_i|=\lceil\frac{k}{r}\rceil(r+\delta-1)\geq
k+\lceil\frac{k}{r}\rceil(\delta-1).$$ By Theorem \ref{opt-suf-1},
if $q\geq\binom{n}{k-1}$, then there exists an optimal
$(r,\delta)_a$ code over $\mathbb F_q$.
\end{proof}
\vskip 10pt
If $(r+\delta-1)|n$ and $\delta\leq d$, then following a similar
line of proof in \cite{Prakash12}, we can show that
$t=\lceil\frac{n}{r+\delta-1}\rceil\geq\lceil\frac{k}{r}\rceil$.
Under these two conditions, it was proved in \cite{Prakash12} that
there exists an optimal $(r,\delta)_a$ code over the field
$\mathbb F_q$ of size $q>kn^k$. Our method requires a field of
size only $\binom{n}{k-1}$, which is at most a fraction
$\frac{1}{k!}$ of $kn^k$.
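The gap between the two field-size requirements can be tabulated for a few sample parameters (a quick numeric sketch; the parameter choices are ours):

```python
from math import comb, factorial

# Our requirement C(n, k-1) versus the requirement q > k * n^k of Prakash et al.
for n, k in [(6, 3), (12, 4), (20, 5)]:
    ours, prakash = comb(n, k - 1), k * n ** k
    assert ours * factorial(k) <= prakash     # C(n, k-1) <= (1/k!) * k * n^k
    print(n, k, ours, prakash)
```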
\vskip 10pt
\begin{thm}\label{opt-ext-2}
Suppose $n=w(r+\delta-1)+m$ and $k=ur+v$, where $0<m<r+\delta-1$
and $0<v<r$. Suppose $m\geq v+\delta-1$ and $d\geq\delta$. If
$q\geq\binom{n}{k-1}$, then there exists an optimal $(r,\delta)_a$
linear code over $\mathbb F_q$.
\end{thm}
\begin{proof}
Let $t=w+1$. Since we have assumed that
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$, we get
$$t=w+1=\lceil\frac{n}{r+\delta-1}\rceil\geq\lceil\frac{k}{r}\rceil= u+1.$$
Note that
$n-m=w(r+\delta-1)$. Let $\{S_1,\cdots,S_w\}$ be a partition of
$\{1,\cdots,n-m\}$ with $|S_i|=r+\delta-1, \forall i\in[w]$, and
let $S_t=\{n-m+1,\cdots,n\}$.
For any $J\subseteq[t]$ of size $|J|=\lceil\frac{k}{r}\rceil$, we
have the following two cases:
Case 1: $t\notin J$. Then
$$|\cup_{i\in J}S_i|=\left\lceil\frac{k}{r}\right\rceil(r+\delta-1)\geq
k+\left\lceil\frac{k}{r}\right\rceil(\delta-1).$$
Case 2: $t\in J$. Since $m\geq v+\delta-1$, then
\begin{eqnarray*}
|\cup_{i\in J}S_i|&=&(\left\lceil\frac{k}{r}\right\rceil-1)(r+\delta-1)+m,\\
&\geq&(\left\lceil\frac{k}{r}\right\rceil-1)(r+\delta-1)+v+\delta-1,\\
&=&k+\left\lceil\frac{k}{r}\right\rceil(\delta-1).
\end{eqnarray*}
Hence, for any $\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$,
$|\cup_{i\in J}S_i|\geq k+\lceil\frac{k}{r}\rceil(\delta-1)$. By
Theorem \ref{opt-suf-1}, if $q\geq\binom{n}{k-1}$, there
exists an optimal $(r,\delta)_a$ code over $\mathbb F_q$.
\end{proof}
\vskip 10pt
When $\delta=2$, the conditions of Theorem \ref{opt-ext-1} and
Theorem \ref{opt-ext-2} become $(r+1)|n$ and
$n~\text{mod}~(r+1)-1\geq k~\text{mod}~r>0$ respectively. For this
special case, Tamo \emph{et al.} \cite{Tamo13} introduced a
different construction method which is very easy to implement. However,
the method in \cite{Tamo13} requires the field size $q=O(n^k)$,
which is larger than the field size $q=\binom{n}{k-1}$ of our method.
\section{Construction of Optimal $(r,\delta)_a$ Codes: Algorithm 2}
In this section, we present yet another method for constructing
optimal $(r,\delta)_a$ codes. This constructive method also points
out two other sets of coding parameters where optimal
$(r,\delta)_a$ codes exist. Like the method in Section
\uppercase\expandafter{\romannumeral5}, this method constructs an
optimal $(r,\delta)_a$ code which has a given set $\mathcal S$ as
its $(r,\delta)$-cover set. The difference is that the set
$\mathcal S$ used by this method has a more complicated structure.
We again borrow the notion of \emph{core} from \cite{Gopalan12}.
\vskip 10pt
\begin{defn}\label{frame}
Let $\mathcal S=\{S_1,\cdots,S_t\}$ be a collection of
$(r+\delta-1)$-subsets of $[n]$, $\mathcal
A=\{A_1,\cdots,A_\alpha,B\}$ be a partition of $[t]$ and
$\Psi=\{\xi_1,\cdots,\xi_\alpha\}\subseteq[n]$. We say that
$\mathcal S$ is an $(\mathcal A,\Psi)$-\emph{frame} over the set
$[n]$, if the following two conditions are satisfied:
\begin{itemize}
\item [(1)] For each $j\in[\alpha]$, $\{\xi_j\}=\cap_{\ell\in A_j}S_\ell$
and $\{S_i\backslash\{\xi_j\}; i\in A_j\}$ are mutually disjoint;
\item [(2)] $\{\cup_{\ell\in A_j}S_\ell;j\in[\alpha]\}\cup\{S_j;j\in B\}$
is a partition of $[n]$.
\end{itemize}
\end{defn}
\vskip 10pt
\begin{exam}\label{eg-core}
Let $\mathcal S=\{S_1,\cdots,S_8\}$ be as shown in Fig \ref{fig-core}.
Clearly $\mathcal S$ is an $(\mathcal A,\Psi)$-frame over $[n]$,
where the subsets $S_1,S_2,S_3$ have a common element $\xi_1=1$,
and the subsets $S_4,S_5$ have a common element $\xi_2=14$.
\end{exam}
\renewcommand\figurename{Fig}
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=5cm]{fig2}
\end{center}
\caption{An $(\mathcal A,\Psi)$-frame, where $n=37, r=\delta=3,
t=8, A_1=\{1,2,3\}$, $A_2=\{4,5\}, B=\{6,7,8\}, \mathcal A=\{A_1,
A_2, B\}$ and $\Psi=\{1,14\}$.}\label{fig-core}
\end{figure}
\vskip 10pt
\begin{defn}\label{g-core}
A subset $S\subseteq[n]$ is said to be an $(\mathcal
S,r)$-\emph{core} if the following three conditions hold:
\begin{itemize}
\item [(1)] If $j\in[\alpha]$ and $\xi_j\in S$, then
$|S\cap S_i|\leq r, \forall i\in A_j$;
\item [(2)] If $j\in[\alpha]$ and $\xi_j\notin S$, then
there is an $i_j\in A_j$ such that $|S\cap S_{i_j}|\leq r$ and
$|S\cap S_i|\leq r-1, \forall i\in A_j\backslash\{i_j\}$;
\item [(3)] If $i\in B$, then $|S\cap S_i|\leq r$.
\end{itemize}
Additionally, if $S\subseteq[n]$ is an $(\mathcal S,r)$-core and $|S|=k$, then
$S$ is called an $(\mathcal S,r,k)$-\emph{core}.
\end{defn}
\vskip 10pt
Clearly, if $S\subseteq[n]$ is an $(\mathcal S,r)$-core and
$S'\subseteq S$, then $S'$ is also an $(\mathcal S,r)$-core. In
particular, if $S\subseteq[n]$ is an $(\mathcal S,r)$-core and
$S'$ is a $k$-subset of $S$, then $S'$ is an $(\mathcal
S,r,k)$-core.
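Definition \ref{g-core} can likewise be phrased as a predicate. The sketch below uses a small hypothetical frame of our own ($r=\delta=2$, $A_1=\{1,2\}$ with $\xi_1=1$, $S_1=\{1,2,3\}$, $S_2=\{1,4,5\}$, and $B=\{3\}$ with $S_3=\{6,7,8\}$), not the frame of Fig \ref{fig-core}:

```python
def is_gcore(S, groups, singles, r):
    """groups: list of (xi_j, [S_i for i in A_j]); singles: [S_i for i in B]."""
    for xi, parts in groups:
        if xi in S:                        # condition (1): all intersections <= r
            if not all(len(S & Si) <= r for Si in parts):
                return False
        else:                              # condition (2): at most one part meets
            big = sum(len(S & Si) == r for Si in parts)   # S in exactly r points
            if big > 1 or any(len(S & Si) > r for Si in parts):
                return False
    return all(len(S & Si) <= r for Si in singles)        # condition (3)

groups  = [(1, [{1, 2, 3}, {1, 4, 5}])]
singles = [{6, 7, 8}]
print(is_gcore({2, 3, 4}, groups, singles, 2))      # True
print(is_gcore({2, 3, 4, 5}, groups, singles, 2))   # False: condition (2) fails
```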
{\it Example \ref{eg-core} continued:}
In Example \ref{eg-core}, let $k=7$. Then $\{1,2,3,6,7,10,11\}$
and $\{2,3,4,6,7,28,33\}$ are both $(\mathcal S,r,k)$-cores.
However, $S=\{2,3,4,6,7,8,28\}$ and $S'=\{2,6,15,23,24,25,26\}$
are not $(\mathcal S,r)$-cores, because $S$ does not satisfy
Condition (2) and $S'$ does not satisfy Condition (3) of
Definition \ref{g-core}.
\vskip 10pt
\begin{lem}\label{core-form}
Let $\mathcal S$ be an $(\mathcal A,\Psi)$-frame as in Definition
\ref{frame}. Suppose $t\geq\lceil\frac{k}{r}\rceil$ and for any
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$, $|\cup_{i\in J}S_i|
\geq k+\lceil\frac{k}{r}\rceil(\delta-1)$. Then the following
hold:
\begin{itemize}
\item [1)] If $T\subseteq[n]$ has size $|T|\geq k+(\lceil\frac{k}{r}\rceil-1)
(\delta-1)$, then there is an $S\subseteq T$ such that $S$ is an
$(\mathcal S,r,k)$-core.
\item [2)] For any $i\in[t]$ and $I\subseteq S_i$ of size $|I|=r$, there is an
$(\mathcal S,r,k)$-core $S$ such that $I\subseteq S$.
\end{itemize}
\end{lem}
\begin{proof}
1) Let $$J=\{\ell\in[t]; |T\cap S_\ell|\geq r\}.$$
For each $j\in[\alpha]$ and $\ell\in A_j$, we pick a subset
$W_\ell\subseteq T$ as follows:
\romannumeral1) If $J\cap
A_j=\emptyset$, then let $W_\ell=T\cap S_\ell$ for each $\ell\in
A_j$.
\romannumeral2) If $J\cap A_j\neq\emptyset$ and $\xi_j\in
T$, then for each $\ell\in J\cap A_j$, let $W_\ell$ be an
$r$-subset of $T\cap S_\ell$ satisfying $\xi_j\in W_\ell$, and for
each $\ell\in A_j\backslash J$, let $W_\ell=T\cap S_\ell$.
\romannumeral3) If $J\cap A_j\neq\emptyset$ and $\xi_j\notin T$,
then fix an $\ell_j\in J\cap A_j$, and let $W_{\ell_j}$ be an
$r$-subset of $T\cap S_{\ell_j}$, let $W_\ell$ be an
$(r-1)$-subset of $T\cap S_\ell$ for each $\ell\in J\cap
A_j\backslash\{\ell_j\}$, and let $W_\ell=T\cap S_\ell$ for each
$\ell\in A_j\backslash J$.
Moreover, for each $\ell\in J\cap B$, let $W_\ell$ be an
$r$-subset of $T\cap S_\ell$, and for each $\ell\in B\backslash
J$, let $W_\ell=T\cap S_\ell$.
Let $W=\cup_{\ell\in[t]}W_\ell$, then by Definition \ref{g-core},
$W$ is an $(\mathcal S,r)$-core. We now prove that $|W|\geq k$.
Let
$$\Theta(J)=\{j\in[\alpha]; J\cap A_j\neq\emptyset\}.$$
We need to consider the following two cases:
Case 1: $|J|\geq\lceil\frac{k}{r}\rceil$. Without loss of
generality, assume $|J|=\lceil\frac{k}{r}\rceil$\footnote{If
$|J|>\lceil\frac{k}{r}\rceil$, then pick a
$\lceil\frac{k}{r}\rceil$-subset $J_0$ of $J$, and replace $J$ by
$J_0$ in our discussion.}. Then from the assumption of this lemma,
\begin{align}
|\cup_{\ell\in J}S_\ell|\geq k+|J|(\delta-1).\label{eqn:28}
\end{align}
By Definition \ref{frame},
\begin{align}
|\cup_{\ell\in J}S_\ell| = & \sum_{j\in\Theta(J)}|J\cap
A_j|(r+\delta-2) \nonumber\\
&\ \ +|\Theta(J)|+|J\cap B|(r+\delta-1).
\label{eqn:29}
\end{align}
Since $\mathcal A=\{A_1,\cdots,A_\alpha,B\}$ is a partition of
$[t]$, $\{J\cap A_j;j\in\Theta(J)\}\cup\{J\cap B\}$ is a partition
of $J$ and
\begin{align}
|J|=\sum_{j\in\Theta(J)}|J\cap A_j|+|J\cap
B|.\label{eqn:30}
\end{align}
Combining (\ref{eqn:28})$-$(\ref{eqn:30}), we have
\begin{align}
\sum_{j\in\Theta(J)}|J\cap A_j|(r-1)+|\Theta(J)|+|J\cap B|r\geq
k.\label{eqn:31}
\end{align}
By the construction of $W$, we have
\begin{align}
|\cup_{\ell\in J}W_\ell|=\sum_{j\in\Theta(J)}|J\cap A_j|(r-1)+
|\Theta(J)|+|J\cap B|r.\label{eqn:32}
\end{align}
Equations (\ref{eqn:31}) and (\ref{eqn:32}) imply that
$$|W|\geq|\cup_{\ell\in J}W_\ell|\geq k.$$
Case 2: $|J|<\lceil\frac{k}{r}\rceil$. By the construction of $W$,
for each $j\in[\alpha]$ and $\ell\in J\cap A_j$, $W_\ell$ is
obtained by deleting at most $(\delta-1)$ elements from $T\cap
S_\ell$. We thus have
$$|\cup_{\ell\in A_j}W_\ell|\geq|T\cap(\cup_{\ell\in
A_j}S_\ell)|-|J\cap A_j|(\delta-1).$$ Moreover,
$$|\cup_{\ell\in B}W_\ell|\geq|\cup_{\ell\in B}(T\cap S_\ell)|-|J\cap
B|(\delta-1).$$ Then $$|W|=|\cup_{\ell\in[t]}W_\ell|
\geq|T|-|J|(\delta-1).$$ Note that $|T|\geq
k+(\lceil\frac{k}{r}\rceil-1)(\delta-1)$ and
$|J|<\lceil\frac{k}{r}\rceil$. Therefore
$$|W|\geq|T|-(\lceil\frac{k}{r}\rceil-1)(\delta-1)=k.$$
Gathering both cases, we always have $|W|\geq k$. Let $S$ be a
$k$-subset of $W$. Note that $W$ is an $(\mathcal S,r)$-core. So
$S\subseteq W\subseteq T$ is an $(\mathcal S,r,k)$-core.
2) To prove the second claim of Lemma \ref{core-form}, note that
$t\geq\lceil\frac{k}{r}\rceil$, and hence we can always find a
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$ such that $i\in J$.
Similar to the proof of 1), for each $\ell\in J$, we can pick a
$W_\ell$ such that $W_i=I$, $\cup_{\ell\in J}W_\ell$ is an
$(\mathcal S,r)$-core and $|\cup_{\ell\in J}W_\ell|\geq k$. Let
$S$ be a $k$-subset of $\cup_{\ell\in J}W_\ell$ such that
$I\subseteq S$. Then $S$ is an $(\mathcal S,r,k)$-core and
$I\subseteq S$.
\end{proof}
\vskip 10pt
{\it Example \ref{eg-core} further continued}: Consider the
$(\mathcal A,\Psi)$-frame $\mathcal S$ in Example \ref{eg-core}.
Let $k=7$. Then $\mathcal S$ satisfies the conditions of Lemma
\ref{core-form}. We consider the following two instances:
Instance 1: $T=\{2,3,4,6,7,8,14,15,16,17,19,23,24,28\}$. As in the
proof of Lemma \ref{core-form}, $J=\{\ell;|T\cap S_\ell|\geq
r\}=\{1,2,4\}$ and $|J|=3=\lceil\frac{k}{r}\rceil$. Let
$W_1=\{2,3,4\}$, $W_2=\{6,7\}$, $W_4=\{14,15,16\}$, $W_5=\{19\}$,
$W_6=\{23,24\}$, $W_7=\{28\}$ and $W_\ell=\emptyset$ for
$\ell\in\{3,8\}$. Then
$|W|=|\cup_{\ell=1}^8W_\ell|\geq|\cup_{\ell\in J}W_\ell|\geq k=7$.
Instance 2: $T=\{2,3,4,6,7,8,10,11,14,15,19,23,24,28\}$. Then
$J=\{\ell;|T\cap S_\ell|\geq r\}=\{1,2\}$ and
$|J|<\lceil\frac{k}{r}\rceil$. Let $W_1=\{2,3,4\}, W_2=\{6,7\}$,
$W_3=\{10,11\}, W_4=\{14,15\}, W_5=\{19\},
W_6=\{23,24\},W_7=\{28\}$ and $W_8=\emptyset$. Then $|W|=$
$|\cup_{\ell=1}^8W_\ell|\geq|T|-|J|(\delta-1)\geq k=7$.
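Both instances can be checked mechanically. The following Python sketch (our own illustration; the $W_\ell$ and $T$ sets are copied verbatim from the two instances above) verifies the cardinality bounds:

```python
# Numeric check (ours) of the two instances above, with k = 7, r = 3,
# delta = 3 and the W_ell sets copied verbatim from the text.
k, r, delta = 7, 3, 3

# Instance 1: |J| = 3 = ceil(k/r), so |W| >= k directly.
W1 = [{2, 3, 4}, {6, 7}, set(), {14, 15, 16},
      {19}, {23, 24}, {28}, set()]
assert len(set().union(*W1)) >= k

# Instance 2: |J| = 2 < ceil(k/r), so |W| >= |T| - |J|(delta-1) >= k.
T2 = {2, 3, 4, 6, 7, 8, 10, 11, 14, 15, 19, 23, 24, 28}
W2 = [{2, 3, 4}, {6, 7}, {10, 11}, {14, 15},
      {19}, {23, 24}, {28}, set()]
assert len(set().union(*W2)) >= len(T2) - 2 * (delta - 1) >= k
```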
\vskip 10pt
\begin{rem}\label{nor-core}
Let $\mathcal S$ be an $(\mathcal A,\Psi)$-frame as in Definition
\ref{frame}. For each $j\in[\alpha]$ and $i\in A_j$, let $U_i$ be
an $r$-subset of $S_i$ such that $\xi_j\in U_i$. For each $i\in
B$, let $U_i$ be an $r$-subset of $S_i$. Let
$$\Omega_0=\cup_{i\in[t]}U_i.$$ Then
by Definition \ref{g-core}, $\Omega_0$ is an $(\mathcal
S,r)$-core. Clearly,
$$|\Omega_0|=n-t(\delta-1)=|\cup_{j=1}^\alpha A_j|(r-1)+\alpha+
|B|r.$$
\end{rem}
\vskip 10pt
\begin{exam}\label{eg-omg0}
In Example \ref{eg-core}, let $k=7$, then $\Omega_0=\{1,$
$2,3,6,7,10,11,14,15,16,19,20,23,24,25,28,29,30,33,34,$ $35\}$ is
an $(\mathcal S,r)$-core obtained by the process of Remark
\ref{nor-core}.
\end{exam}
\vskip 10pt
\begin{lem}\label{core-exist}
Let $\mathcal S$ be an $(\mathcal A,\Psi)$-frame as defined in Definition
\ref{frame} and let $\Omega_0$ be as described in Remark \ref{nor-core}. Suppose
$\Omega_0\subseteq\Omega\subseteq[n], S_0\subseteq\Omega$ and
$i\in[t]$. If $\lambda\in S_i\backslash\Omega$ and
$S_0\cup\{\lambda\}$ is an $(\mathcal S,r,k)$-core, then there exists
an $\eta\in S_i\cap\Omega$ such that $S_0\cup\{\eta\}$ is an
$(\mathcal S,r,k)$-core.
\end{lem}
\begin{proof}
By the construction of $\Omega_0$, $|S_i\cap\Omega_0|=r$. Since
$\Omega_0\subseteq\Omega$, we have
$$|S_i\cap\Omega|\geq r.$$ Since $S_0\cup\{\lambda\}$ is
an $(\mathcal S,r,k)$-core, by Definition \ref{g-core},
$$|S_0|=k-1$$ and $$|S_0\cap S_i|\leq r-1.$$ Thus, we can find an
$\eta\in(S_i\cap\Omega)\backslash S_0$.
If $i\in B$, then by Definition \ref{frame}, $\eta\notin S_{i'},
\forall i'\in[t]\backslash\{i\}$. Then $S_0\cup\{\eta\}$ is an
$(\mathcal S,r,k)$-core.
Now, suppose $i\in A_j$ for some $j\in[\alpha]$. We need to
consider the following two cases.
Case 1: $\xi_j\in S_0$. Since $\eta\in(S_i\cap\Omega)\backslash
S_0$, then $\eta\neq\xi_j$ and $\eta\notin S_{i'}, \forall
i'\in[t]\backslash\{i\}$. Then $S_0\cup\{\eta\}$ is an $(\mathcal
S,r,k)$-core.
Case 2: $\xi_j\notin S_0$. Since $S_0\cup\{\lambda\}$ is an
$(\mathcal S,r,k)$-core, from Definition \ref{g-core}, we differentiate
the following two sub-cases:
Subcase 2.1: $|S_0\cap S_{i'}|\leq r-1, \forall i'\in A_j$. In that case, it is
clear that $S_0\cup\{\eta\}$ is an $(\mathcal S,r,k)$-core.
Subcase 2.2: There is an $i_j\in A_j\backslash\{i\}$ such that
$|S_0\cap S_{i_j}|=r$, $|S_0\cap S_{i}|\leq r-2$ and $|S_0\cap
S_{i'}|\leq r-1, \forall i'\in A_j\backslash\{i_j,i\}$. In that case, we have
$$|(S_i\cap\Omega)\backslash S_0|\geq 2.$$ Let
$\eta\in(S_i\cap\Omega)\backslash(S_0\cup\{\xi_j\})$, then
$\eta\neq\xi_j$ and $\eta\notin S_{i'}, \forall
i'\in[t]\backslash\{i\}$. It then follows that $S_0\cup\{\eta\}$ is an
$(\mathcal S,r,k)$-core.
\end{proof}
\vskip 10pt
{\it Example \ref{eg-core} and \ref{eg-omg0} continued:}
Consider again Example \ref{eg-core}. Let $k=7$,
$\Omega=\Omega_0\cup\{4,5,8\}$ and $\lambda=9\in S_2$, where
$\Omega_0$ is as in Example \ref{eg-omg0}. We can easily verify the following:
Let $S_0=\{1,2,3,6,10,14\}$; then $S_0\cup\{9\}$ is an $(\mathcal
S,r,k)$-core. If we further let $\eta=7\in S_2$, then
$S_0\cup\{\eta\}$ is also an $(\mathcal S,r,k)$-core.
Let $S_0'=\{2,3,6,7,14,15\}$; then $S_0'\cup\{9\}$
is an $(\mathcal S,r,k)$-core. If we further let $\eta'=8\in S_2$, then
$S_0'\cup\{\eta'\}$ is also an $(\mathcal S,r,k)$-core.
Let
$S_0''=\{2,3,4,10,11,15,23\}$; then $S_0''\cup\{9\}$ is an
$(\mathcal S,r,k)$-core. If we further let $\eta''=6\in S_2$, then
$S_0''\cup\{\eta''\}$ is also an $(\mathcal S,r,k)$-core.
\vskip 10pt
\begin{lem}\label{code-extd}
Let $\mathcal S$ be an $(\mathcal A,\Psi)$-frame as defined in Definition
\ref{frame} and let $\Omega_0$ be as defined in Remark \ref{nor-core}. Let
$\Omega_0\subseteq\Omega\subseteq[n]$ and $\mathcal
G=\{G_\ell\in\mathbb F_q^k; \ell\in\Omega\}$ such that for any
$(\mathcal S,r,k)$-core $S\subseteq\Omega$, the vectors in
$\{G_\ell;\ell\in S\}$ are linearly independent. Suppose $i\in[t]$
and $S_i\backslash\Omega\neq\emptyset$. If $q\geq\binom{n}{k-1}$,
then for any $\lambda\in S_i\backslash\Omega$, there is a
$G_\lambda\in\langle\{G_\ell\}_{\ell\in S_i\cap\Omega}\rangle$
such that for any $(\mathcal S,r,k)$-core
$S\subseteq\Omega\cup\{\lambda\}$, the vectors in
$\{G_\ell;\ell\in S\}$ are linearly independent.
\end{lem}
\begin{proof}
Let $\Lambda$ be the set of all $S_0\subseteq\Omega$ such that
$S_0\cup\{\lambda\}$ is an $(\mathcal S,r,k)$-core. For any
$S_0\in\Lambda$, by Lemma \ref{core-exist}, there is an $\eta\in
S_i\cap\Omega$ such that $S_0\cup\{\eta\}$ is an $(\mathcal
S,r,k)$-core. From the assumptions, $\{G_\ell\}_{\ell\in
S_0\cup\{\eta\}}$ is linearly independent. Hence
$$G_\eta\notin\langle\{G_\ell\}_{\ell\in S_0}\rangle.$$ Thus,
$$\langle\{G_\ell\}_{\ell\in
S_i\cap\Omega}\rangle\nsubseteq\langle\{G_\ell\}_{\ell\in
S_0}\rangle.$$ Since $q\geq\binom{n}{k-1}\geq |\Lambda|$, by Lemma
\ref{sub-space},
$$\langle\{G_\ell\}_{\ell\in
S_i\cap\Omega}\rangle\nsubseteq(\cup_{S_0\in\Lambda}\langle\{G_\ell\}_{\ell\in
S_0}\rangle).$$ Let $G_\lambda\in\langle\{G_\ell\}_{\ell\in
S_i\cap\Omega}\rangle\backslash(\cup_{S_0\in\Lambda}\langle\{G_\ell\}_{\ell\in
S_0}\rangle)$. Then for any $(\mathcal S,r,k)$-core
$S\subseteq\Omega\cup\{\lambda\}$, the vectors in $\{G_\ell;\ell\in
S\}$ are linearly independent.
\end{proof}
\vskip 10pt
The second construction method for optimal $(r,\delta)_a$ codes
is illustrated in the proof of the following theorem.
\vskip 10pt
\begin{thm}\label{opt-suf}
Let $\mathcal S$ be an $(\mathcal A,\Psi)$-frame in Definition
\ref{frame}. Suppose $t\geq\lceil\frac{k}{r}\rceil$ and for any
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$, $|\cup_{i\in J}S_i|
\geq k+\lceil\frac{k}{r}\rceil(\delta-1)$. If
$q\geq\binom{n}{k-1}$, then there exists an optimal $(r,\delta)_a$
linear code over $\mathbb F_q$.
\end{thm}
\begin{proof}
Let $\Omega_0$ be as described in Remark \ref{nor-core} and let $L=|\Omega_0|$.
Clearly, $$L=n-t(\delta-1).$$ Since
$t\geq\lceil\frac{k}{r}\rceil$, let $J$ be a
$\lceil\frac{k}{r}\rceil$-subset of $[t]$; then from the assumptions,
$$|\cup_{i\in J}S_i|\geq k+\lceil\frac{k}{r}\rceil(\delta-1)
=k+|J|(\delta-1).$$
By Remark \ref{nor-core}, $\cup_{i\in J}U_i\subseteq \Omega_0$. Hence
$$L=|\Omega_0|\geq|\cup_{i\in J}U_i|=|\cup_{i\in J}S_i|-
|J|(\delta-1)\geq k.$$
The construction of an optimal $(r,\delta)_a$ code consists of the
following two steps.
\emph{Step 1}: Construct an $[L,k]$ MDS code $\mathcal C_0$ over
$\mathbb F_q$. Such an MDS code exists when
$q\geq\binom{n}{k-1}\geq n>L$. Let $G'$ be a generating matrix of
$\mathcal C_0$. We index the columns of $G'$ by $\Omega_0$, i.e.,
$G'=(G_\ell)_{\ell\in\Omega_0}$, where $G_\ell$ is a column of
$G', \forall\ell\in\Omega_0$.
\emph{Step 2}: Extend the code $\mathcal C_0$ to an optimal
$(r,\delta)_a$ code $\mathcal C$. This can be achieved by the
following algorithm, which is superficially similar to Algorithm 1
but differs in its details.
\vspace{0.12in} \noindent \textbf{Algorithm 2:}

\noindent 1. ~ Let $\Omega=\Omega_0$.

\noindent 2. ~ For $i$ from $1$ to $t$:

\noindent 3. ~ ~ While $S_{i}\backslash\Omega\neq\emptyset$:

\noindent 4. ~ ~ ~ ~Pick a $\lambda\in S_{i}\backslash\Omega$ and let
$G_\lambda\in\langle\{G_\ell;~\ell\in S_i\cap\Omega\}\rangle$ be such
that for any $(\mathcal S,r,k)$-core $S\subseteq\Omega\cup\{\lambda\}$,
$\{G_\ell;~\ell\in S\}$ is linearly independent.

\noindent 5. ~ ~ ~ ~$\Omega=\Omega\cup\{\lambda\}$.

\noindent 6. ~ Let $\mathcal C$ be the linear code generated by the
matrix $G=(G_1,\cdots,G_n)$.
\vspace{0.12in} Since $G'=(G_\ell)_{\ell\in\Omega_0}$ is a
generating matrix of the MDS code $\mathcal C_0$, for any
$(\mathcal S,r,k)$-core $S\subseteq\Omega_0$, $\{G_\ell;\ell\in
S\}$ is linearly independent. Then in Algorithm 2, by induction,
we can assume that for any $(\mathcal S,r,k)$-core
$S\subseteq\Omega$, $\{G_\ell;\ell\in S\}$ is linearly
independent. By Lemma \ref{code-extd}, in line 4 of Algorithm 2,
we can always find a $G_\lambda$ satisfying the requirement.
Hence, by induction, the collection $\{G_\ell;\ell\in[n]\}$
satisfies the condition that for any $(\mathcal S,r,k)$-core
$S\subseteq[n]$, $\{G_\ell;\ell\in S\}$ is linearly independent.
Moreover, in line 4 of Algorithm 2 we choose a
$G_\lambda\in\langle\{G_\ell;\ell\in S_i\cap\Omega\}\rangle$,
which satisfies
$$\text{Rank}(\{G_\ell\}_{\ell\in(S_i\cap\Omega)\cup\{\lambda\}})
=\text{Rank}(\{G_\ell\}_{\ell\in S_i\cap\Omega}).$$ By induction,
\begin{align}
\text{Rank}(\{G_\ell\}_{\ell\in
S_i})&=\text{Rank}(\{G_\ell\}_{\ell\in
S_i\cap\Omega_0})\nonumber \\
&=\text{Rank}(\{G_\ell\}_{\ell\in U_i}) \nonumber\\
&=r.\nonumber
\end{align}
For any $i\in[t]$ and $I\subseteq S_i$ of size $|I|=r$, by Claim 2) of
Lemma \ref{core-form}, there is an $(\mathcal S,r,k)$-core $S$
such that $I\subseteq S$. Hence $\{G_\ell;\ell\in S\}$ is linearly
independent. Thus, $$\text{Rank}(\{G_\ell\}_{\ell\in I})=r.$$ Therefore,
by Definition \ref{def-locality} and Remark \ref{rem-loty},
$\mathcal C$ is an $(r,\delta)_a$ code.
Finally, we prove that the minimum distance of $\mathcal C$ is
$d=n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1)$.
Suppose $T\subseteq[n]$ and
$|T|=k+(\lceil\frac{k}{r}\rceil-1)(\delta-1)$. By 1) of Lemma
\ref{core-form}, there is an $S\subseteq T$ which is an $(\mathcal
S,r,k)$-core. Therefore, $$\text{Rank}(\{G_\ell; \ell\in
T\})=\text{Rank}(\{G_\ell; \ell\in S\})=k.$$ By the minimum distance bound in (\ref{eqn:1}) and
Lemma \ref{fact}, the minimum distance of $\mathcal C$ is
$$d=n-k+1-(\lceil\frac{k}{r}\rceil-1)(\delta-1).$$ Hence
$\mathcal C$ is an optimal $(r,\delta)_a$ code.
\end{proof}
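To make the two-step construction concrete, here is a toy Python sketch. It is entirely our own illustration: the frame, the parameters $n=7$, $k=3$, $r=\delta=2$, the field $\mathbb F_{23}$, and in particular the replacement of the deterministic choice in line 4 of Algorithm 2 by random draws with a retry loop are all assumptions made for this example, not part of the proof.

```python
# Toy sketch (ours) of the two-step construction in the proof above:
# an MDS seed code on Omega_0, then Algorithm-2-style column extension.
# Line 4's deterministic choice is replaced by random draws in the span
# of the group's known columns, retried until the code verifies.
import itertools
import random

p = 23                                 # field size; C(7, 2) = 21 <= p
n, k, r, delta = 7, 3, 2, 2            # toy parameters
S = [[1, 2, 3], [1, 4, 5], [1, 6, 7]]  # frame: alpha = 1, xi_1 = 1, B empty
Omega0 = [1, 2, 4, 6]                  # union of U_i = {1,2}, {1,4}, {1,6}

def rank(vectors):
    """Rank over F_p of the matrix whose rows are the given vectors."""
    rows, rk = [list(v) for v in vectors], 0
    for c in range(k):
        piv = next((i for i in range(rk, len(rows)) if rows[i][c] % p), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        inv = pow(rows[rk][c], p - 2, p)       # inverse via Fermat
        rows[rk] = [x * inv % p for x in rows[rk]]
        for i in range(len(rows)):
            if i != rk and rows[i][c] % p:
                f = rows[i][c]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rk])]
        rk += 1
    return rk

def build(rng):
    # Step 1: a [4, 3] MDS seed code on Omega_0 (Vandermonde columns).
    G = {e: [1, a % p, a * a % p] for a, e in zip([1, 2, 3, 4], Omega0)}
    # Step 2: extend each group, drawing G_lambda from the span of the
    # columns of S_i already indexed by Omega (cf. line 4 of Algorithm 2).
    for Si in S:
        for lam in Si:
            if lam not in G:
                base = [G[e] for e in Si if e in G]
                cf = [rng.randrange(1, p) for _ in base]
                G[lam] = [sum(c * v[j] for c, v in zip(cf, base)) % p
                          for j in range(k)]
    return G

def is_optimal(G):
    # (r,delta)_a locality: each group spans r dimensions, and so does
    # every r-subset of it (so a group tolerates delta-1 = 1 erasure).
    for Si in S:
        if rank([G[e] for e in Si]) != r:
            return False
        if any(rank([G[e] for e in I]) != r
               for I in itertools.combinations(Si, r)):
            return False
    # Optimal distance: every subset of size k + (ceil(k/r)-1)(delta-1)
    # (= n - d + 1 = 4 here) must have full rank k.
    m = k + (-(-k // r) - 1) * (delta - 1)
    return all(rank([G[e] for e in T]) == k
               for T in itertools.combinations(range(1, n + 1), m))

rng = random.Random(0)
G = build(rng)
while not is_optimal(G):   # random line-4 choices may fail; retry
    G = build(rng)
```

Under the field-size assumption of the theorem, Lemma \ref{code-extd} guarantees that a suitable $G_\lambda$ always exists, so the retry loop above is merely a cheap stand-in for the deterministic search.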
\vskip 10pt
{\it Example \ref{eg-core} continued:}
Consider the $(\mathcal A,\Psi)$-frame $\mathcal S$ in Example
\ref{eg-core}. Let $k=7$. Then it is obvious that $\mathcal S$ satisfies the
conditions of Theorem \ref{opt-suf}. Thus, we can use Algorithm 2 to
construct an optimal $(r,\delta)_a$ linear code over the field of
size $q\geq\binom{n}{k-1}=\binom{37}{6}$. Note that $r=\delta=3$.
Hence, $(r+\delta-1)\nmid n$ and this is a new optimal $(r,\delta)_a$
code.
\vskip 5pt
As applications of Theorem \ref{opt-suf}, in the following, we
show that optimal $(r,\delta)_a$ codes exist for two other
sets of coding parameters. From Claim 2) of Lemma \ref{low-bound}, we know that
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$ is a necessary condition for
the existence of optimal $(r,\delta)_a$ linear codes. Thus we will assume
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$ in the following discussion.
\vskip 10pt
\begin{thm}\label{opt-ext-3}
Suppose $n=w(r+\delta-1)+m$ and $k=ur+v$, where $0<m<r+\delta-1$
and $0<v<r$. Suppose $w\geq r+\delta-1-m$ and $r-v\geq u$. If
$q\geq\binom{n}{k-1}$, then there exists an optimal $(r,\delta)_a$
linear code over $\mathbb F_q$.
\end{thm}
\begin{proof}
Let $t=w+1$. Note that we have assumed that
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$. Then
$$t=w+1=\lceil\frac{n}{r+\delta-1}\rceil\geq\lceil\frac{k}{r}\rceil=
u+1.$$ Let
$$\ell=r+\delta-1-m$$ and
\begin{align}
L=(\ell+1)(r+\delta-2)+1.\label{eqn:33}
\end{align} Then from the assumptions,
$w\geq(r+\delta-1)-m=\ell$. Therefore $$t=w+1\geq \ell+1$$ and
\begin{eqnarray}
n-L&=&(w-\ell)(r+\delta-1)\nonumber
\\&=&(t-\ell-1)(r+\delta-1).
\label{eqn:34}
\end{eqnarray}
From equation (\ref{eqn:33}), $L-1=(\ell+1)(r+\delta-2)$. The set $[2,L]$ can
be partitioned into $\ell+1$ mutually disjoint subsets, say,
$T_1,\cdots,T_{\ell+1}$, each of size $r+\delta-2$. Let
$$S_i=\{1\}\cup T_i, i=1,\cdots,\ell+1.$$
Moreover, from equation (\ref{eqn:34}), the set $[L+1,n]$ can be partitioned
into $t-(\ell+1)$ mutually disjoint subsets, say,
$S_{\ell+2},\cdots,S_{t}$, each of size $r+\delta-1$.
Let $\alpha=1$ and $A_1=\{1,\cdots,\ell+1\},B=\{\ell+2,\cdots,t\}$,
$\mathcal A=\{A_1,B\}$, and $\Psi=\{1\}$. Then $\mathcal
S=\{S_1,\cdots,S_t\}$ is an $(\mathcal A,\Psi)$-frame. For any
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$, since $r-v\geq u$,
we have
$$|J|=\lceil\frac{k}{r}\rceil=u+1\leq r-v+1.$$ Let
$J_1=J\cap\{1,\cdots,\ell+1\},$ and
$J_2=J\backslash\{1,\cdots,\ell+1\}$. By the construction of
$\mathcal S$, we have
\begin{eqnarray*} |\cup_{i\in J}S_i|&=&|J_1|(r+\delta-2)+1+|J_2|(r+\delta-1)
\\&=&|J|(r+\delta-1)-|J_1|+1\\&\geq&|J|(r+\delta-1)-|J|+1
\\&\geq&|J|(r+\delta-1)-(r-v+1)+1\\&=&(|J|-1)r+v+|J|(\delta-1)
\\&=&ur+v+\lceil\frac{k}{r}\rceil(\delta-1)
\\&=&k+\lceil\frac{k}{r}\rceil(\delta-1).
\end{eqnarray*}
By Theorem \ref{opt-suf}, if $q\geq\binom{n}{k-1}$, then there
exists an optimal $(r,\delta)_a$ code over $\mathbb F_q$.
\end{proof}
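To make this construction concrete, the following Python sketch (ours; the parameter values $r=\delta=3$, $m=2$, $w=4$, $u=v=1$ are an arbitrary admissible choice, not taken from the paper) builds the frame described in the proof and brute-force checks the hypothesis of Theorem \ref{opt-suf}:

```python
# Hypothetical small instance (ours) of the frame built in the proof of
# Theorem opt-ext-3, with the hypothesis of Theorem opt-suf checked by
# brute force over all ceil(k/r)-subsets J of [t].
import itertools
import math

r, delta, m, w, u, v = 3, 3, 2, 4, 1, 1  # w >= r+delta-1-m, r-v >= u hold
n, k, t = w * (r + delta - 1) + m, u * r + v, w + 1  # n = 22, k = 4, t = 5
ell = r + delta - 1 - m                  # = 3
L = (ell + 1) * (r + delta - 2) + 1      # = 17

# S_1, ..., S_{ell+1} share the common point 1; the remaining sets
# partition [L+1, n] into blocks of size r+delta-1.
size = r + delta - 2
S = [{1} | set(range(2 + i * size, 2 + (i + 1) * size))
     for i in range(ell + 1)]
rest = list(range(L + 1, n + 1))
S += [set(rest[i:i + r + delta - 1])
      for i in range(0, len(rest), r + delta - 1)]

need = k + math.ceil(k / r) * (delta - 1)  # = 8
ok = all(len(set.union(*(S[i] for i in J))) >= need
         for J in itertools.combinations(range(t), math.ceil(k / r)))
```

Here `ok` evaluates to `True`, confirming $|\cup_{i\in J}S_i|\geq k+\lceil\frac{k}{r}\rceil(\delta-1)$ for every admissible $J$ in this instance.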
\vskip 10pt
\begin{thm}\label{opt-ext-4}
Suppose $n=w(r+\delta-1)+m$ and $k=ur+v$, where $0<m<r+\delta-1$
and $0<v<r$. Suppose $w+1\geq 2(r+\delta-1-m)$ and $2(r-v)\geq u$.
If $q\geq\binom{n}{k-1}$, then there exists an optimal
$(r,\delta)_a$ linear code over $\mathbb F_q$.
\end{thm}
\begin{proof}
Let $t=w+1$. Note that we have assumed that
$\frac{n}{r+\delta-1}\geq\frac{k}{r}$. Then
$$t=w+1=\left\lceil\frac{n}{r+\delta-1}\right\rceil\geq
\left\lceil\frac{k}{r}\right\rceil=
u+1.$$ Let
$$\ell=(r+\delta-1)-m$$ and
\begin{align}
L=\ell(2(r+\delta-1)-1).\label{eqn:35}
\end{align}
Then by assumption, $t=w+1\geq 2(r+\delta-1-m)=2\ell$. It then follows that
\begin{align}
n-L=(t-2\ell)(r+\delta-1)\geq 0.\label{eqn:36}
\end{align}
From equation (\ref{eqn:35}), the set $[L]$ can be partitioned
into $\ell$ mutually disjoint subsets, say, $T_1,\cdots,T_{\ell}$,
each of size $2(r+\delta-1)-1$. For each $i\in\{1,\cdots,\ell\}$,
we can find two subsets $S_{2i-1},S_{2i}$ of $T_i$ such that
$$|S_{2i-1}|=|S_{2i}|=r+\delta-1$$ and $$S_{2i-1}\cup
S_{2i}=T_i.$$ Then
$$|S_{2i-1}\cap S_{2i}|=1.$$ Let $S_{2i-1}\cap
S_{2i}=\{\xi_i\}$ and $\Psi=\{\xi_1,\cdots,\xi_\ell\}$.
Moreover, from Equation (\ref{eqn:36}), the set $[L+1,n]$ can be partitioned
into $t-2\ell$ mutually disjoint subsets, say
$S_{2\ell+1},\cdots,S_{t}$, each of size $r+\delta-1$.
Let $A_i=\{2i-1,2i\}, i=1,\cdots,\ell$, $B=[2\ell+1,t]$ and
$\mathcal A=\{A_1,\cdots,A_\ell,B\}$. Then $\mathcal
S=\{S_1,\cdots,S_t\}$ is an $(\mathcal A,\Psi)$-frame. For any
$\lceil\frac{k}{r}\rceil$-subset $J$ of $[t]$, since $2(r-v)\geq
u$, we have
\begin{align}
|J|=\lceil\frac{k}{r}\rceil=u+1\leq 2(r-v)+1.
\label{eqn:37}
\end{align} Let
$\Gamma(J)=\{j\in[\ell];A_j\subseteq J\}$. Then
\begin{align}
|J|\geq|\cup_{j\in\Gamma(J)}A_j|=2|\Gamma(J)|.\label{eqn:38}
\end{align}
Combining (\ref{eqn:37}) and (\ref{eqn:38}), we have
$$|\Gamma(J)|\leq\frac{|J|}{2}\leq\frac{2(r-v)+1}{2}=r-v+\frac{1}{2}.$$
Since $|\Gamma(J)|$ is an integer, then $$|\Gamma(J)|\leq r-v.$$
By the construction of $\mathcal S$, we have
\begin{eqnarray*} |\cup_{i\in J}S_i|&=&|J|(r+\delta-1)-|\Gamma(J)|
\\&\geq&|J|(r+\delta-1)-(r-v)\\&=&(|J|-1)r+v+|J|(\delta-1)
\\&=&k+\lceil\frac{k}{r}\rceil(\delta-1).
\end{eqnarray*}
By Theorem \ref{opt-suf}, if $q\geq\binom{n}{k-1}$, then there
exists an optimal $(r,\delta)_a$ code over $\mathbb F_q$.
\end{proof}
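Again, a small Python sketch (ours; the values $r=2$, $\delta=3$, $m=3$, $w=3$, $u=2$, $v=1$ are an arbitrary admissible choice) builds the frame from this proof, with its pairs of sets overlapping in a single point $\xi_j$, and brute-force checks the hypothesis of Theorem \ref{opt-suf}:

```python
# Hypothetical small instance (ours) of the frame built in the proof of
# Theorem opt-ext-4: each block T_j of size 2(r+delta-1)-1 splits into
# S_{2j-1}, S_{2j} overlapping in exactly one point xi_j.
import itertools
import math

r, delta, m, w, u, v = 2, 3, 3, 3, 2, 1  # w+1 >= 2(r+delta-1-m), 2(r-v) >= u
n, k, t = w * (r + delta - 1) + m, u * r + v, w + 1  # n = 15, k = 5, t = 4
ell = r + delta - 1 - m                  # = 1
L = ell * (2 * (r + delta - 1) - 1)      # = 7

S = []
for j in range(ell):
    T = list(range(1 + j * (2 * (r + delta - 1) - 1),
                   1 + (j + 1) * (2 * (r + delta - 1) - 1)))
    # split T into two (r+delta-1)-sets sharing their middle element xi_j
    S += [set(T[:r + delta - 1]), set(T[r + delta - 2:])]
rest = list(range(L + 1, n + 1))
S += [set(rest[i:i + r + delta - 1])
      for i in range(0, len(rest), r + delta - 1)]

need = k + math.ceil(k / r) * (delta - 1)  # = 11
ok = all(len(set.union(*(S[i] for i in J))) >= need
         for J in itertools.combinations(range(t), math.ceil(k / r)))
```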
\vskip 10pt
We now provide some discussion of Theorem \ref{opt-ext-4}. Since
$0<m<r+\delta-1$, we have $2(r+\delta-1-m)<2(r+\delta-1)$. Given
$k,r$ and $\delta$, let
$\alpha=\text{max}\{2(r+\delta-1),\lceil\frac{k}{r}\rceil\}$. Then
the conditions $w+1\geq 2(r+\delta-1-m)$ and $w\geq u$ are always
satisfied when $n\geq\alpha(r+\delta-1)$. On the other hand,
when $\frac{k}{3}<r<k$ and $r\neq\frac{k}{2}$, we have $u=1$ or $2$
and $r-v\geq 1$, which implies $2(r-v)\geq u$. By Theorem \ref{opt-ext-4},
there exist optimal $(r,\delta)_a$ codes when
$n\geq\alpha(r+\delta-1)$, $\frac{k}{3}<r<k$ and
$r\neq\frac{k}{2}$.
\vspace{0.2cm}\begin{center}
\begin{tabular}{|p{3.5mm}|p{3.5mm}|p{3.5mm}|p{3.5mm}|p{3.5mm}|p{3.5mm}
|p{3.5mm}|p{3.5mm}|p{3.5mm}|p{3.5mm}|p{3.5mm}|}
\hline $r\backslash k$ & \small11 & \small12 & \small13 & \small14 & \small15
& \small16 & \small17 & \small18 & \small19 & \small20 \\
\hline \small$2$ & \small{E$_M$} & \small{E$_M$} & \small{E$_M$} &
\small{E$_M$} & \small{E$_M$} & \small{E$_M$} & \small{E$_M$}
& \small{E$_M$} & \small{E$_M$} & \small{E$_M$} \\
\hline \small$3$ & \small N$_{11}$ & \small N$_{10}$ & \small
E$_{27}$ & \small E$_{27}$ & \small N$_{10}$ & \small N$_{11}$ &
\small N$_{11}$ & \small N$_{10}$ & \small N$_{11}$
& \small N$_{11}$ \\
\hline \small $4$ & \small E$_{27}$ & \small N$_{10}$ & \small
E$_{27}$ & \small E$_{27}$ & \small N$_{11}$ & \small N$_{10}$ &
\small E$_{27}$ & \small E$_{27}$ & \small N$_{11}$
& \small N$_{10}$ \\
\hline \small $5$ & \small E$_{16}$ & \small E$_{27}$ & \small
E$_{27}$ & \small E$_{27}$ & \small N$_{10}$ & \small E$_{27}$ &
\small E$_{27}$ & \small E$_{27}$ & \small N$_{12}$
& \small N$_{10}$ \\
\hline \small $6$ & \small{E$_M$} & \small{E$_M$} & \small{E$_M$}
& \small{E$_M$} & \small{E$_M$}
& \small{E$_M$} & \small{E$_M$} & \small{E$_M$} & \small{E$_M$} & \small{E$_M$} \\
\hline \small $7$ & \small E$_{26}$ & \small E$_{26}$ & \small
E$_{26}$ & \small N$_{10}$ & \small E$_{26}$ & \small E$_{26}$ &
\small E$_{26}$ & \small E$_{26}$
& \small E$_{26}$ & \small $\sim$ \\
\hline \small $8$ & \small{E$_M$} & \small{E$_M$} & \small{E$_M$}
& \small{E$_M$} & \small{E$_M$}
& \small{E$_M$} & \small{E$_M$} & \small{E$_M$} & \small{E$_M$} & \small{E$_M$} \\
\hline \small $9$ & \small E$_{16}$ & \small E$_{16}$ & \small
E$_{16}$ & \small E$_{26}$ & \small E$_{26}$ & \small E$_{26}$ &
\small E$_{26}$ & \small N$_{10}$ & \small E$_{16}$
& \small E$_{16}$ \\
\hline \small $10$ & \small $\sim$ & \small $\sim$ & \small $\sim$
& \small $\sim$ & \small $\sim$
& \small $\sim$ & \small $\sim$ & \small $\sim$ & \small $\sim$ & \small N$_{10}$ \\
\hline \small $11$ & \small{E$_M$} & \small{E$_M$} & \small{E$_M$}
& \small{E$_M$} & \small{E$_M$}
& \small{E$_M$} & \small{E$_M$} & \small{E$_M$} & \small{E$_M$} & \small{E$_M$} \\
\hline
\end{tabular}
\vspace{0.5cm}\footnotesize{Table 1. ~ Existence of optimal
$(r,\delta)_a$ codes for parameters $n=60,\delta=5, 2\leq r\leq
11$ and $11\leq k\leq 20$.}
\end{center}
\section{Conclusions}
We have investigated the structure properties and construction methods
of optimal $(r,\delta)_a$ linear codes, whose length and
dimension are $n$ and $k$ respectively.
A structure theorem for optimal $(r,\delta)_a$ codes with $r|k$ is first
obtained. We then derive two sets of parameters for which no
optimal $(r,\delta)_a$ linear code can exist (over any field), and identify four sets of parameters for which optimal $(r,\delta)_a$ linear codes
exist over any field of size $q\geq\binom{n}{k-1}$. Some of these existence conditions were reported in the literature before, but the minimum field size we derive is considerably smaller than those in previous works.
Our results substantially advance the construction of optimal $(r,\delta)_a$ codes: only two small holes (two subcases with specific parameters) remain where the existence results are unknown.
Except for these two small subcases, for all the other cases, given each tuple of $(n,k,r,\delta)$, either an
optimal $(r,\delta)_a$ linear code does not exist or an optimal $(r,\delta)_a$
linear code can be constructed using a deterministic algorithm.
As an illustrative summary of our results, we also provide in
Table 1 an example of the existence of optimal $(r,\delta)_a$
linear codes for the parameters of $n=60$, $\delta=5, 2\leq r\leq
11$ and $11\leq k\leq 20$. In this table, E$_M$ means that
optimal $(r,\delta)_a$ linear codes can be constructed by the
method in \cite{Prakash12} or by our Theorem \ref{opt-ext-1} and
Algorithm 1 (which requires a substantially smaller field);
E$_{16}~($resp. E$_{26}$, E$_{27})$ means optimal $(r,\delta)_a$
linear codes can be constructed by Theorem \ref{opt-ext-2}
$($resp. Theorem \ref{opt-ext-3}, Theorem \ref{opt-ext-4}$)$;
N$_{10}~($resp. N$_{11})$ means optimal $(r,\delta)_a$ linear
codes do not exist according to Theorem \ref{non-exst} $($resp.
Theorem \ref{non-exst-1}$)$; and $\sim$ means we do not yet know
whether an optimal $(r,\delta)_a$ linear code exists or not.
\section{Conclusion}
We present a novel MAR approach based on a generative adversarial framework with joint projection-sinogram correction and a mask pyramid network. Our experimental evaluations show that existing MAR methods do not effectively reduce metal artifacts. By contrast, the proposed approach leverages the extra contextual information from the sinogram and achieves superior performance over other MAR methods on both the synthesized and clinical datasets.
\noindent \textbf{Acknowledgement.} This work was supported in part by NSF award \#1722847, the Morris K. Udall Center of Excellence in Parkinson's Disease Research by NIH, and the corporate sponsor Carestream.
\section{Experimental Evaluations}
\subsubsection{Implementation Details and Baselines}
We implement the proposed model using PyTorch and train the model with the Adam optimization method. For the hyper-parameters, we set learning rate $= 5e^{-4}$, $\beta_1=0.5$, $\lambda=100$, and batch size $=16$. We compare our projection completion (PC) model and joint projection-sinogram correction (PC+SC) model with the following baseline MAR approaches: 1) LI, sinogram correction by linear interpolation \cite{kalender1987reduction}; 2) BHC, beam hardening correction for MAR \cite{verburg2012ct}; 3) NMAR, a state-of-the-art MAR model \cite{meyer2010normalized} that produces a prior CT image to correct metal artifacts; and 4) CNNMAR, the state-of-the-art deep learning based method \cite{zhang2018convolutional} that uses a CNN to output the prior image for MAR.
\subsubsection{Datasets and Simulation Details}
For the synthesized dataset, we use images collected from a CBCT scanner dedicated to lower-extremity imaging.
The size of the CBCT projections is $448\times512$ and the projections contain no metal objects. We randomly apply masks to the projections to obtain masked and unmasked projection pairs. In total, there are 27 CBCT scans, each with 600 projections. Projections from 24 of the CBCT scans are used for training, and the rest are held out for testing.
Two types of object masks are collected for the experiments: metal masks and blob masks. For the metal masks, we collect 3D binary metal implant volumes from clinical records and forward project them to obtain 2D metal projection masks. In total, we obtain 18,000 projection masks from 30 binary metal implant volumes. During training, we simulate the metal implants insertion process by randomly placing metal segmentation masks on the metal-free projections. For the blob masks, we adopt the method from \cite{pathak2016context} by drawing randomly shaped blobs on the image. Results for projection and sinogram completion with the metal and blob masks are provided in the supplementary material.
For a fair comparison, we adopt the same procedures as in \cite{zhang2018convolutional} to synthesize metal-affected CBCT volumes. We assume a 120 kVp X-ray source with $2 \times 10^7$ photons. The distance from the X-ray source to the rotation center is set to 59.5cm, and 416 projection views are uniformly spaced between 0-360 degrees. The size of the reconstructed volume is $448\times448\times448$. During simulation, we set the material to iron for all the metal masks. Note that since the metal masks are from clinical records, the geometries and intensities of the metal artifacts are extremely diverse, which makes MAR highly challenging.
For the clinical dataset, we use the vertebrae localization and identification dataset from Spineweb\footnote{spineweb.digitalimaginggroup.ca}.
We first define regions with HU values greater than 2,500 as metal regions. Then, we select images with the largest-connected metal region greater than 400 pixels as metal-affected images and images with the largest HU value smaller than 2,000 as metal-free images. The metal masks for the projections and sinograms are obtained by forward projecting the metal regions in the CT image domain. The training for this dataset is performed on the metal-free images with metal masks obtained from the metal-affected images.
\input{figures/mar_metrics.tex}
\input{figures/sample_results.tex}
\subsubsection{Quantitative Comparisons} We use two metrics: root mean square error (RMSE) and the structural similarity index (SSIM) for quantitative evaluations. We conduct a thorough study by evaluating RMSE and SSIM over a wide range of mask sizes. The results are summarized in Fig. \ref{fig:mar_metrics}. We observe that the proposed method achieves superior performance over the other methods. For example, the RMSE error of the second-best method CNNMAR \cite{zhang2018convolutional} almost doubles that of the proposed method when the implant size is large. In addition, by further refining in the sinogram domain, improved performance can be achieved, especially in terms of the SSIM metric. From Fig. \ref{fig:mar_metrics}, we also observe that methods which require tissue segmentation (e.g. NMAR and CNNMAR) perform well when the metallic object is smaller than $1000$ pixels. However, when the size of the metallic implants becomes larger, these methods deteriorate significantly due to erroneous segmentation. The proposed joint correction approach, which does not rely on tissue segmentation, exhibits less degradation.
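For reference, the two metrics can be computed as follows. This is our own minimal illustration, not the evaluation code behind the reported numbers; in particular, this SSIM uses global image statistics, whereas published results typically use the windowed variant.

```python
# Minimal reference implementations (ours) of the two reported metrics.
# The SSIM here uses global statistics over the whole image; treat it as
# an illustrative approximation of the usual windowed SSIM.
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

def ssim_global(x, y, data_range=1.0):
    """Global-statistics SSIM with the standard stabilizing constants."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```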
\subsubsection{Qualitative Comparisons}
Fig. \ref{fig:sample_results} shows MAR results on synthesized metal-affected images. It is clear that the proposed method successfully removes the streaking artifacts caused by metallic implants. Unlike other approaches that generate erroneous surrogates, our method fills in contextually consistent values through generative modeling and joint correction. For the results on clinical data (Fig. \ref{fig:clinical_results}), we also observe that our method produces qualitatively better results. BHC and NMAR cannot fully remove the metal artifacts. LI and CNNMAR can recover most of the metal-affected regions; however, they also produce secondary artifacts. We notice a performance degradation for CNNMAR on the clinical data compared to the synthesized data, which demonstrates that image domain approaches relying on synthesized metal artifacts generalize less well.
\iffalse
\subsection{Ablation Study}
\input{figures/compare_mask_focused.tex}
\input{figures/compare_mask_recalled.tex}
We perform an ablation study on each of the proposed module. Specifically, the following configurations are compared: 1) BF, the model that is trained only with the base framework introduced in Section \ref{sec:basic_framework}; 2) MFL, the model that is trained with the base framework using the mask fusion loss; 3) PC, the model using both the mask fusion loss and MPN; and 4) PC+SC, the model that further refines the outputs of PC with the sinogram correction (SC).
\subsubsection{Analysis of the Mask Fusion Loss}
We first investigate the effectiveness of the mask-fusion loss. In this section, all the experimented models are trained (if applicable) with the projection images masked by the blob masks. We test LI, BF, and MFL in this experiment and compare their performances on blob-masked projection completion. Fig. \ref{fig:compare_mask_focused} shows an example projection completion results of these models. Compared with the results from LI, which contain apparent artifacts and are anatomically inconsistent with the context, the results from BF are more visually plausible, and the completion is more coherent with the surrounding anatomical structures. However, when we look into the results more carefully, we can see that the completed projection patches are still distinct from the nearby projection data, and the borders of the masks can be spotted. By contrast, the results from MFL blend well with the unmasked region.
\subsubsection{Analysis of the Mask Pyramid Network}
We then analyze whether diverse geometries of metallic implants can be learned by incorporating the proposed MPN. Specifically, we compare MFL and PC on the task of completing metal-masked projections. The comparisons are shown in Fig. \ref{fig:compare_mask_recalled}. Here, MFL-blob (Fig. \ref{fig:compare_mask_recalled} (c)) and MFL-metal (Fig. \ref{fig:compare_mask_recalled} (d)) are the MFL model trained with blob masks and metal masks, respectively. We can observe that the projection completion results from MFL-blob are poor and improper, while the results from MFL-metal are more in accordance with the context. This suggests that MFL is sensitive to the shapes of the masks, and it has difficulty completing the projection when an unseen and totally different type of mask is presented. However, even though the MFL-metal model is trained and tested on metal masks, a closer look at the results from MFL-metal shows that the model still has problems with completing metal masks, and the results are not as coherent as those in Fig. \ref{fig:compare_mask_focused} (e). In comparison, PC (Fig. \ref{fig:compare_mask_recalled} (e)) addresses the variations of metal masks much better. Similar to Fig. \ref{fig:compare_mask_focused} (e), it produces contextually concordant outputs. This demonstrates that a MPN can really help to guide the projection completion on irregular-shaped masks.
\input{figures/sinogram_refine.tex}
\subsubsection{Analysis of the Sinogram Correction}
We further investigate if the sinogram correction could aid in producing consistent projection and sinogram completions. We compare the outputs of PC and PC+SC in projection and sinogram domains. Fig. \ref{fig:sinogram_refine} shows a visual comparison. We can observe that even though PC yields reasonable projection completion, when viewed in the sinogram domain inconsistencies still exist. While for PC+SC, the output sinograms are more consistent with the context, which in turn improves the completion results in the projection domain.
\fi
\input{figures/clinical_results.tex}
\section{Introduction}
Metal artifact is one of the most prominent artifacts that impede reliable computed tomography (CT) or cone beam CT (CBCT) image interpretation. It is commonly addressed in the \textit{sinogram domain}, where the metal-affected regions in the sinograms are segmented and replaced with synthesized values so that metal-free CT images can ideally be reconstructed from the corrected sinograms. Early sinogram domain approaches fill the metal-affected regions by interpolation \cite{kalender1987reduction} or from prior images \cite{meyer2010normalized}. These methods can effectively reduce metal artifacts, but secondary artifacts are often introduced due to the loss of structural information in the corrected sinograms. Recent works propose to leverage deep neural networks (DNNs) to directly learn the sinogram correction. Park et al. \cite{park2017sinogram} apply U-Net \cite{ronneberger2015u} to correct metal-inserted sinograms, and Gjesteby et al. \cite{gjesteby2017projection} propose to refine NMAR-corrected sinograms \cite{meyer2010normalized} using a convolutional neural network (CNN). Although better sinogram completions are achieved, the results are still subject to secondary artifacts due to the imperfect completion.
The development of DNNs in recent years also enables an \textit{image domain} approach that directly reduces metal artifacts or the related artifacts in CT/CBCT images. Specifically, the existing methods \cite{gjesteby2017reducing,zhang2018convolutional,xu2018deep,park2017,svar_gan} train image-to-image CNNs to transform artifact-affected CT images to artifact-free CT images. Gjesteby et al. \cite{gjesteby2017reducing} propose to include the NMAR-corrected CT as the input with a two-stream CNN. Zhang et al. \cite{zhang2018convolutional} fuse beam hardening corrected and linear interpolated CT images for better correction. All the current image domain approaches use synthesized data to generate the metal-affected and metal-free image pairs for training. However, the synthesized data may not fully simulate CT imaging under the clinical scenario, making the image domain approaches less robust in clinical applications.
In this work, we propose a novel learning-based sinogram domain approach to metal artifact reduction (MAR). Unlike the existing image domain methods, the proposed method does not require synthesized metal artifacts during training. Instead, we treat MAR as an image inpainting problem, i.e., we apply random metal traces to mask out artifact-free sinograms, and train a DNN to recover the data within the metal traces. Since metal-affected regions are viewed as missing, factors such as the X-ray spectrum and the material of metal implants will not affect the generalizability of the proposed method. Unlike the existing learning-based sinogram domain approaches, our method delivers high-quality sinogram completion with three designs. \textit{First}, we propose a two-stage projection-sinogram\footnote{We denote the X-ray data captured at the same view angle as a ``projection'' and a stack of projections corresponding to the same CT slice as a ``sinogram''.} completion scheme to achieve more contextually consistent correction results. \textit{Second}, we introduce adversarial learning into the projection-sinogram completion so that more structural and anatomically plausible information can be recovered from the metal regions. \textit{Third}, to make the learning more robust to the various shapes of metallic implants, we introduce a novel mask pyramid network (MPN) to distill the geometry information of different scales and a mask fusion loss to penalize early saturation. Our extensive experiments on both synthetic and clinical datasets demonstrate that the proposed method is indeed effective and performs better than the state-of-the-art MAR approaches.
\section{Methodology}
\begin{figure}[t]
\begin{minipage}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{overview.png}
\centering
\caption{Method overview.}
\label{fig:overview}
\end{minipage}
\hfill
\begin{minipage}[b]{.48\textwidth}
\includegraphics[width=\linewidth]{architecture.png}
\centering
\caption{The base framework.}
\label{fig:architecture}
\end{minipage}
\end{figure}
An overview of the proposed method is shown in Fig. \ref{fig:overview}. Our method consists of two major modules: a projection completion module (blue) and a sinogram correction module (green). The projection completion module is an image-to-image translation model enhanced with a novel mask pyramid network. Given an input projection image and a pre-segmented metal mask, the projection completion module generates anatomically plausible and structurally consistent surrogates within the metal-affected regions. The sinogram correction module predicts a residual map to refine the projection-corrected sinograms. This joint projection-sinogram correction approach enforces inter-projection consistency and makes use of the context information between different viewing angles. Note that we perform projection completion first due to the observation that the projection images contain better structural information that facilitates the learning of an image inpainting model.
\subsubsection{Base Framework} \label{sec:basic_framework}
Inspired by recent advances in deep generative models \cite{pathak2016context,isola2017image}, we formulate the projection and sinogram correction problems under a generative image-to-image translation framework. The structure of the proposed model is illustrated in Fig. \ref{fig:architecture}. It consists of two individual networks: a generator $G$ and a discriminator $D$. The generator $G$ takes a metal-segmented projection $x$ as the input and generates a metal-free projection $G(x)$. The discriminator $D$ is a patch-based classifier that predicts whether its input, the real metal-free projection $y$ or the generated $G(x)$, is real or not. Similar to the PatchGAN \cite{isola2017image} design, $D$ is constructed as a CNN without fully-connected layers at the end to enable patch-wise prediction. The detailed structures of $G$ and $D$ are presented in the supplementary material. $G$ and $D$ are trained adversarially with LSGAN \cite{mao2017least}, i.e.,
\begin{align}
\label{eq:D1}
\min_{D} \mathcal{L}_{\text{GAN}} &= \mathbb{E}_{y}[\lVert \mathbf{1} - D(y) \rVert^2] + \mathbb{E}_{x}[\lVert D(G(x)) \rVert^2], \\
\label{eq:G1}
\min_{G} \mathcal{L}_{\text{GAN}} &= \mathbb{E}_{x}[\lVert \mathbf{1} - D(G(x)) \rVert^2].
\end{align}
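As an illustration of Eqs.~\eqref{eq:D1}--\eqref{eq:G1}, the LSGAN objectives can be sketched in plain numpy. Here \texttt{d\_real} and \texttt{d\_fake} are hypothetical stand-ins for the patch-wise discriminator score maps $D(y)$ and $D(G(x))$; this is not the paper's implementation, only the loss targets.

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    # E[(1 - D(y))^2] + E[(D(G(x)))^2]: push real patches towards 1, fake towards 0
    return np.mean((1.0 - d_real) ** 2) + np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    # E[(1 - D(G(x)))^2]: the generator tries to make fake patches score 1
    return np.mean((1.0 - d_fake) ** 2)
```

In practice these quantities would be computed on autograd tensors so that gradients flow into $G$ and $D$; the numpy version only makes the least-squares targets explicit.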
In addition, we also expect the generator output $G(x)$ to be close to its metal-free counterpart $y$. Therefore, we add a content loss $\mathcal{L}_c$ to ensure the pixel-wise consistency between $G(x)$ and $y$,
\begin{equation}
\min_{G} \mathcal{L}_c = \mathbb{E}_{x,y}[\lVert G(x) - y \rVert_1].
\end{equation}
\begin{figure}[t]
\begin{minipage}[b]{.57\textwidth}
\includegraphics[width=\linewidth]{figures/data/generator_and_discriminator.png}
\centering
\caption{Generator and discriminator.}
\label{fig:generator_discriminator}
\end{minipage}
\begin{minipage}[b]{.38\textwidth}
\includegraphics[width=\linewidth]{sinogram_correction.png}
\centering
\caption{Sinogram correction.}
\label{fig:sinogram_correct}
\end{minipage}
\end{figure}
\subsubsection{Mask Pyramid Network}
Metallic implants have various shapes and sizes, such as metallic balls, bars, screws, wires, etc. When X-ray projections are acquired at different angles, the projected implants exhibit complicated geometries. Hence, unlike typical image inpainting problems, where the shape of the mask is usually simple and fixed, projection completion is more challenging since the network has to learn how to fuse such diversified mask information of the metallic implants. Directly using the metal-masked image as the input requires the metal mask information to be encoded by each layer and passed along to the later layers. For unseen masks, this encoding may not work very well and hence the mask information may be lost. To retain a sufficient amount of mask information, we introduce a mask pyramid network (MPN) into the generator to feed the mask information into each layer explicitly.
The architecture of the generator $G$ with this design is illustrated in Fig. \ref{fig:generator_discriminator}. The MPN $M$ takes a metal mask $s$ as the input, and each block (in yellow) of $M$ is coupled with an encoding block (in grey) in $G$. Let $l^i_M$ denote the $i$th block of $M$ and $l^i_G$ denote the $i$th block of $G$. When $l^i_M$ and $l^i_G$ are coupled, the output of $l^i_M$ will be concatenated to the output of $l^i_G$. In this way, the mask information will then be used by $l^{i+1}_G$, and a recall of the mask is achieved. Each block $l^i_M$ of $M$ is implemented with an average pooling layer that has the same kernel, stride, and padding size as the convolutional layer in $l^{i}_G$. Hence, the metal mask output by $l^i_M$ not only has the same size as the feature maps from $l^{i}_G$, but also takes into account the receptive field of the convolution operation in $l^{i}_G$.
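The pooling-based mask pyramid described above can be sketched as follows. The kernel, stride, and padding values here are illustrative placeholders for whatever the coupled convolutional layers in $G$ actually use; the point is that each pyramid level mirrors its coupled block's geometry so the pooled mask aligns with that block's feature maps.

```python
import numpy as np

def avg_pool2d(mask, k=4, stride=2, pad=1):
    # average pooling with the same kernel/stride/padding as the coupled
    # conv layer, so the pooled mask matches the feature-map size and
    # accounts for the conv's receptive field
    m = np.pad(mask, pad)
    h = (m.shape[0] - k) // stride + 1
    w = (m.shape[1] - k) // stride + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = m[i * stride:i * stride + k,
                          j * stride:j * stride + k].mean()
    return out

def mask_pyramid(mask, levels=3):
    # one pooled mask per coupled encoder block l^i_M
    pyr, m = [], mask
    for _ in range(levels):
        m = avg_pool2d(m)
        pyr.append(m)
    return pyr
```

Each level's output would be concatenated to the corresponding encoder block's output before being consumed by the next block.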
\subsubsection{Mask Fusion Loss} \label{sec:mask_focus}
In a conventional image-to-image framework, the loss is usually computed on the entire image. On the one hand, this makes the generation less efficient, as a significant portion of the generator's computation will be spent on recovering the already known information. On the other hand, this also introduces early saturation during adversarial training, in which the generator stops improving in the masked regions, since the generator does not have information about the mask. We address this issue with two strategies. First, when computing the loss function, we only consider the content within the metal mask. That is, the content loss is rewritten as
\begin{equation}
\min_{G} \mathcal{L}_c = \mathbb{E}_{x,y}[\lVert \hat{y} - y \rVert_1],
\end{equation}
where $\hat{y}= s \odot G(x) + (\mathbf{1} - s) \odot x$.
Second, we modulate the output score matrix from the discriminator by the metal mask $s$ so that the discriminator can selectively ignore the unmasked regions. As shown in Fig. \ref{fig:generator_discriminator}, we implement this design using another MPN $N$. But this time, we do not feed the intermediate outputs from $N$ to the coupled blocks in $D$, since the metal mask will, in the end, be applied to the loss. The adversarial part of the mask fusion loss is given as
\begin{equation} \label{eq:D2}
\begin{split}
\min_{D} \mathcal{L}_{\text{GAN}} = \mathbb{E}_{y}[\lVert N(s) \odot (\mathbf{1} - D(y)) \rVert^2] + \mathbb{E}_{x}[\lVert N(s) \odot D(\hat{y}) \rVert^2],
\end{split}
\end{equation}
\begin{equation} \label{eq:G2}
\min_{G} \mathcal{L}_{\text{GAN}} = \mathbb{E}_{x}[\lVert N(s) \odot (\mathbf{1} - D(\hat{y})) \rVert^2],
\end{equation}
and the total mask fusion loss can be written as
\begin{equation} \label{eq:total}
\mathcal{L} = \mathcal{L}_{\text{GAN}} + \lambda \mathcal{L}_c,
\end{equation}
where $\lambda$ balances the importance between $\mathcal{L}_{\text{GAN}}$ and $\mathcal{L}_c$.
\subsubsection{Sinogram Correction with Residual Map Learning} \label{sec:sinogram_correct}
Although the proposed projection completion framework in the previous sections can produce an anatomically plausible result, it only considers the contextual information within a projection. Observing that a stack of consecutive projections forms a set of sinograms, we use a simple yet effective model to enforce the inter-projection consistency by making the completion results look like sinograms.
Let $x$ denote a sinogram formed from previous projection completion step. A generator, as shown in Fig. \ref{fig:sinogram_correct}, predicts a residual map $G(x)$ which is then added to $x$ to correct the projection completion results. Here, we use the same generator structure as the one introduced in Fig. \ref{fig:generator_discriminator}. For the objective function, we apply the same one as used in Eq. \ref{eq:total}, except that we have $\hat{y} = s \odot (G(x) + x) + (\mathbf{1} - s) \odot x$.
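The residual correction step reduces to a one-line composition; \texttt{s} is the metal trace in the sinogram and \texttt{residual} stands for the generator output $G(x)$:

```python
import numpy as np

def sinogram_correct(x, residual, s):
    # \hat{y} = s * (G(x) + x) + (1 - s) * x: the predicted residual is
    # added only inside the metal trace; context outside stays untouched
    return s * (residual + x) + (1.0 - s) * x
```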
\section{Introduction}\label{sec:intro}
Reinforcement learning (RL) has been instrumental in many multi-agent tasks such as simulating social dilemmas \cite{leibo2017multi}, playing real-time strategy games \cite{vinyals2019grandmaster}, and optimising multi-robot control systems \cite{long2018towards}. However, since the joint state and action spaces expand exponentially as the number of agents grows, the application of RL is often hindered by a large population scale, e.g., traffic networks that contain millions of vehicles \cite{bazzan2009opportunities}, online games with massive players \cite{jeong2015analysis}, and online businesses that face a large customer body \cite{ahn2007analysis}. In many situations, though, the system can be assumed to consist of homogeneous agents, i.e., the agents are identical, indistinguishable, and interchangeable \cite{huang2003individual}. This assumption warrants the formulation of {\em mean field games} (MFG) \cite{lasry2007mean} that analyse the system behaviours in the asymptotic limit when the number of agents approaches infinity. Through mean field approximation, MFG leverages an empirical distribution, termed {\em mean field}, to represent the aggregated behaviours of the population at large. The interactions among agents are thus reduced to those between a single representative agent and the average effect of the population. This reduction to a dual-view interplay motivates the conventional solution concept for MFG, called {\em mean field Nash equilibrium} (MFNE), where the policy of the agent achieves a {\em best response} to the mean field and the mean field is in turn consistent with the policy. MFNE is shown to approximate the well-known Nash equilibrium in corresponding games with a finite number of agents \cite{carmona2018probabilistic}. With MFNE, MFG bridges the gap between RL and agent-level control in large-scale multi-agent systems. 
Recently, many RL methods for computing MFNE have been proposed \cite{carmona2019model,guo2019learning,subramanian2019reinforcement}.
However, RL crucially relies on well-designed rewards to elicit agents' desired behaviours \cite{amodei2016faulty}. As a system grows, handcrafted rewards are increasingly prone to inaccuracies that lead to unexpected behaviours \cite{dewey2014reinforcement}. Manually designing rewards for MFG is even more difficult since the reward of a representative agent depends not only on its own action and state but also on the population's mean field. {\em Inverse reinforcement learning} (IRL) \cite{hadfield2017inverse} makes it possible to avoid manual reward engineering by automatically acquiring a reward function from behaviours demonstrated by expert agents.
By recovering reward functions, IRL is not only useful for deducing agents' intentions, but also for re-optimising policies when the environment dynamics changes \cite{ng2000algorithms}.
However, IRL is ill-defined as
multiple optimal policies can explain expert behaviours and multiple reward functions can induce the same optimal policy \cite{ng1999policy}.
The {\em maximum entropy IRL} (MaxEnt IRL) framework \cite{ziebart2010modeling} can solve the former ambiguity by assuming the expert plays a nearly optimal policy, and
finding the trajectory distribution with the maximum entropy that matches the expected reward of the expert. The latter ambiguity can be mitigated by supplying a {\em potential-based reward shaping} function as the reward regularisation \cite{fu2018learning}. Combining the two techniques, we can obtain robust IRL methods that correctly recover underlying reward functions \cite{fu2018learning,yu2019multi}. Unfortunately, this paradigm is not suitable for MFG due to two main reasons.
First,
MFNE suffers from policy ambiguity since it generally does not exist uniquely.
As a result, we cannot characterise the trajectory distribution with a probabilistic model. Second, the agent-level and population-level dynamics are coupled (the policy and mean field are interdependent), making it impossible to analytically express the trajectory distribution induced by the reward function, under an equilibrium point. Consequently, it is intractable to tune the reward parameter by maximising the likelihood of expert demonstrations.
In this paper, we propose {\em mean field inverse reinforcement learning} (MFIRL), a model-free IRL framework for MFG. To the best of our knowledge, MFIRL is the first method that can recover {\em ground-truth} reward functions for MFG. MFIRL is built on a new solution concept, called {\em entropy regularised mean field Nash equilibrium} (ERMFNE), which incorporates the policy entropy regularisation into MFNE. ERMFNE has a good property of uniqueness and can characterise the trajectory distribution induced by a reward function with an energy-based model. It thus allows us to tune the reward parameter by maximising the likelihood of expert demonstrations.
Most critically, MFIRL sidesteps the problem of coupled agent's and population's dynamics by
substituting the mean field with the empirical value estimated from expert demonstrations.
In this manner, MFIRL constructs an asymptotically consistent estimator of the optimal reward parameter. We also give a {\em mean-field type potential-based reward shaping} that preserves the invariance of ERMFNE, which can be used as the reward regularisation to make MFIRL more robust against changing environment dynamics. Experimental results on simulated environments demonstrate the superior performance of MFIRL on reward recovery and sample efficiency, compared to the state-of-the-art method.
\section{Preliminaries}
\subsection{Mean Field Games}\label{sec:MFG}
Mean field games (MFG) approximate the interactions of homogeneous agents by those between a representative agent and the population. Throughout, we focus on MFG with a large but finite number of agents \cite{saldi2018markov}, finite state and action spaces and a finite time horizon \cite{elie2020convergence}. First, consider an $N$-player stochastic game with finite state space $\S$ and action space $\mathcal{A}$.
A {\em joint state} of $N$ agents is a tuple $(s^1,\ldots,s^N)\in \S^N$ where $s^i\in \S$ is the state of the $i$th agent. As $N$ goes large, instead of modelling each agent individually, MFG models a representative agent and collapses the joint state into an empirical distribution, called a {\em mean field}, which is given by $\mu(s) \triangleq \frac{1}{N} \sum_{i=1}^N \mathds{1}_{\{s^i = s\}}$,
where $\mathds{1}$ denotes the indicator function. To describe the game dynamics, write $\mathcal{P}(X)$ for the set of probability distributions over set $X$. The {\em transition function} $p\colon \S \times \mathcal{A} \times \mathcal{P}(\S) \rightarrow \mathcal{P}(\S)$ specifies how states evolve at the agent-level, i.e., an agent's next state depends on its current state, action, and the current mean field. This function also induces a transition of the mean field at the population-level which maps a current mean field to the next mean field based on all agents' current states and actions. Let $T -1 \geq 0$ denote a finite time horizon. A {\em mean field flow} (MF flow) thus consists of a sequence of $T$ mean fields $\bm{\mu}\triangleq\{\mu_t\}_{t=0}^{T-1}$, where the initial value $\mu_0$ is given, and each $\mu_t$ ($0<t< T$) is the empirical distribution obtained by applying the transition above to $\mu_{t-1}$.
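For concreteness, the empirical mean field over a finite state space can be computed directly from the joint state. A small numpy sketch (state indices are illustrative):

```python
import numpy as np

def empirical_mean_field(joint_state, n_states):
    # mu(s) = (1/N) * sum_i 1{s^i = s}
    mu = np.zeros(n_states)
    for s in joint_state:
        mu[s] += 1.0
    return mu / len(joint_state)
```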
The {\em running reward} of an agent at each step is specified by the {\em reward function} $r\colon \S \times \mathcal{A} \times \mathcal{P}(\S) \rightarrow \mathbb{R}$. The agent's {\em long-term reward} is thus the sum
$\sum_{t=0}^{T-1} \gamma^t r(s_t,a_t,\mu_t)$, where $\gamma \in (0,1)$ is the {\em discounted factor}. Summarising the above, MFG is defined as the tuple $(S, A, p, \mu_0, r,\gamma)$.
A {\em time-varying stochastic policy}
in a MFG is $\bm{\pi} \triangleq \{\pi_t\}_{t=0}^{T-1}$ where $\pi_t\colon \S\rightarrow \mathcal{P}(A)$ is the {\em per-step policy} at step $t$, i.e., $\pi_t$ directs the agent to choose action $a_t\sim \pi_t(\cdot\vert s)$.
Given MF flow $\bm{\mu}$ and policy $\bm{\pi}$, the agent's expected return during the whole course of the game is written as
\begin{equation}
\small
J(\bm{\mu}, \bm{\pi}) \triangleq \mathbb{E}_{\bm{\mu}, \bm{\pi}} \left[ \sum_{t = 0}^{T-1} \gamma^{t} r(s_t, a_t, \mu_t) \right],
\end{equation}
where $s_0 \sim \mu_0$, $a_t \sim \pi_t(\cdot \vert s_t)$, $s_{t+1} \sim p(\cdot \vert s_t, a_t, \mu_t)$.
\subsection{The Solution Concept for MFG}\label{sec:MFNE}
An agent seeks an optimal control in the form of a policy that maximises expected return. If a MF flow $\bm{\mu}$ is fixed, we will derive an induced Markov decision process (MDP) with a time-dependent transition function.
An optimal policy of the induced MDP is called a {\em best response} to the corresponding fixed MF flow.
We denote the set of all best-response policies to a fixed MF flow $\bm{\mu}$ by $\Psi (\bm{\mu}) \triangleq \arg\max_{\bm{\pi}} J(\bm{\mu}, \bm{\pi})$. However, the situation is more complex as all agents optimise their behaviours simultaneously, causing the MF flow to shift. The solution thus needs to consider how the policy at the agent-level affects MF flow at the population level. Since all agents are identical and rational, it makes sense to assume that everyone follows the same policy. Under this assumption,
the dynamics of MF flow is governed by the (discrete-time) {\em McKean-Vlasov} (MKV) equation \cite{carmona2013control}:
\begin{equation}\label{eq:MKV}
\mu_{t+1}(s') = \sum_{s \in S} \mu_t(s) \sum_{a \in A} \pi_t(a \vert s) p(s' \vert s, a, \mu_t).
\end{equation}
Denote $\bm{\mu} = \Phi(\bm{\pi})$ as the MF flow induced by a policy that fulfils MKV equation.
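With tabular $\S$ and $\mathcal{A}$, one step of the MKV equation (Eq.~\eqref{eq:MKV}) is a single tensor contraction. A numpy sketch under assumed array shapes: \texttt{mu} is $(|\S|,)$, \texttt{pi} is $(|\S|, |\mathcal{A}|)$, and \texttt{p[s, a, s']} is the transition kernel already evaluated at the current mean field.

```python
import numpy as np

def mkv_step(mu, pi, p):
    # mu_{t+1}(s') = sum_s mu_t(s) sum_a pi_t(a|s) p(s'|s, a, mu_t)
    return np.einsum('s,sa,sax->x', mu, pi, p)
```

Iterating \texttt{mkv\_step} from $\mu_0$ under a fixed policy (re-evaluating \texttt{p} at each step's mean field) yields the induced MF flow $\Phi(\bm{\pi})$.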
The conventional solution concept for MFG is the {\em mean field Nash equilibrium} (MFNE), where agents adopt the same policy that is a best response to the MF flow, and in turn, the MF flow is consistent with the policy.
\begin{definition}[Mean Field Nash Equilibrium]\label{def:MFE}
A pair of MF flow and policy $(\bm{\mu}^{\star}, \bm{\pi}^{\star})$ constitutes a {\em mean field Nash equilibrium} if $\bm{\pi}^\star \in \Psi(\bm{\mu}^\star)$ and $\bm{\mu}^{\star} = \Phi(\bm{\pi}^{\star})$.
\end{definition}
In MFG with finite state and action spaces, a MFNE is guaranteed to exist if both the reward function and the transition function are continuous and bounded \cite{saldi2018markov,cui2021approximately}. Through defining any mapping $\hat{\Psi}: \bm{\mu} \mapsto \bm{\pi}$ that identifies a policy in $\Psi(\bm{\mu})$, we get a composition $\Gamma = \Phi \circ \hat{\Psi}$, the so-called {\em MFNE operator}. Repeating the MFNE operator, we derive the fixed point iteration for the MF flow. The standard assumption for the uniqueness of MFNE in the literature is the contractivity\footnote{The distance between two mean fields is normally measured by $\ell_1$-Wasserstein distance in the literature \cite{huang2017mean,guo2019learning,cui2021approximately}, and hence the contractivity, if holding, is under the $\ell_1$-Wasserstein measure.} of the MFNE operator \cite{carmona2019model, guo2019learning}. However, even if a MFNE exists uniquely, computing it is analytically intractable due to the interdependency between MF flow and policy. This issue poses the main technical challenge for designing IRL methods for MFG, which we will address in Sec.~\ref{sec:MFIRL}.
\subsection{Maximum Entropy Inverse Reinforcement Learning}\label{sec:MaxEnt}
We next give an overview of the {\em maximum entropy inverse reinforcement learning} (MaxEnt IRL) framework \cite{ziebart2008maximum} under the MDP defined by $(\S, \mathcal{A}, p, \rho_0, r, \gamma)$, where
$r(s,a)$ is the reward function and environment dynamics is determined by the transition function $p(s' \vert s,a)$ and initial state distribution $\rho_0(s)$.
In (forward) reinforcement learning (RL), an optimal policy exists but might not be unique. {\em Maximum entropy RL} solves this {\em policy ambiguity} by introducing an entropy regularisation. The procedure to find the {\em unique} optimal policy can be written as follows:
\begin{equation}\label{eq:maxent_policy}
\small
\pi^* = \mathop{\arg\max}_{\pi \in \Pi} \mathbb{E}_{\tau \sim \pi} \left[ \sum_{t=0}^{T-1} \gamma^t \big( r(s_t,a_t) + \beta \H(\pi(\cdot \vert s_t)) \big) \right],
\end{equation}
where $\tau = \{ (s_t, a_t) \}_{t=0}^{T-1}$ is a state-action {\em trajectory} sampled via $s_0 \sim \rho_0$, $a_t \sim \pi(\cdot \vert s_t)$ and $s_{t+1} \sim p(\cdot \vert s_t, a_t)$, $\H(\pi(\cdot \vert s_t)) \triangleq \mathbb{E}_{a \in \mathcal{A}} \left[ - \log \pi(a_t \vert s_t) \right]$ is the {\em policy entropy}, and $\beta > 0$ controls the strength of entropy regularisation. The policy with both the highest expected return and the highest policy entropy is unique since the policy entropy is strictly concave with respect to policies and the set of optimal policies is convex. As shown in \cite{ziebart2010modeling,haarnoja2017reinforcement}, the optimal policy takes the form $\pi^*(a_t \vert s_t) \propto \exp(\frac{1}{\beta} Q^{\pi^*}_{\mathrm{soft}}(s_t,a_t))$, where $Q^\pi_{\mathrm{soft}}(s_t, a_t) \triangleq r(s_t, a_t) + \mathbb{E}_{\pi} [ \sum_{\ell = t+1}^{T-1} \gamma^{\ell - t} ( r(s_\ell, a_\ell) + \H(\pi(\cdot \vert s_\ell)) )]$ is the {\em soft $Q$-function}. It can be seen that $\beta$ essentially embodies the temperature in an energy model.
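For a finite-horizon tabular MDP, the soft $Q$-function and the softmax-form optimal policy can be computed by a backward recursion (soft value iteration). A numpy sketch under assumed array shapes: \texttt{r} is $(|\S|, |\mathcal{A}|)$ and \texttt{p} is $(|\S|, |\mathcal{A}|, |\S|)$; the arrays themselves are hypothetical inputs.

```python
import numpy as np

def soft_backup(r, p, T, gamma=0.99, beta=1.0):
    # finite-horizon soft value iteration: V_t = beta * logsumexp(Q_t / beta),
    # with pi_t(a|s) proportional to exp(Q_t(s, a) / beta)
    n_states, _ = r.shape
    policies = []
    v_next = np.zeros(n_states)
    for _ in range(T):
        q = r + gamma * (p @ v_next)        # soft Q using next-step soft value
        z = q / beta
        m = z.max(axis=1, keepdims=True)    # stabilised logsumexp
        e = np.exp(z - m)
        policies.append(e / e.sum(axis=1, keepdims=True))
        v_next = beta * (m[:, 0] + np.log(e.sum(axis=1)))
    policies.reverse()                      # policies[t] is the step-t policy
    return policies
```

The recursion runs backwards in time, so the list is reversed at the end to index policies by time step.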
Suppose we do not know the reward function $r$ but have a set of expert demonstrations $\mathcal{D}_E=\{\tau_j\}_{j = 1}^M$ sampled by executing an {\em unknown} optimal policy $\pi^E$ for $M$ times.
{\em Inverse reinforcement learning} (IRL) aims to infer the reward function $r$ by rationalising demonstrated expert behaviours.
Let $r_\omega(s,a)$ denote an $\omega$-parameterised reward function. As shown in \cite{ziebart2010modeling}, with $\beta=1$, a trajectory induced by the optimal policy can be characterised with an energy-based model:
\begin{equation}\label{eq:trajectoryMaxEntIRL}
\small
\Pr\nolimits_\omega(\tau) \propto \rho_0(s_0) \exp \left( \sum_{t=0}^{T-1} \gamma^t r_\omega (s_t, a_t) \right) \prod_{t=0}^{T-1} p(s_{t+1} \vert s_t, a_t) .
\end{equation}
MaxEnt IRL thus recovers the reward function by maximising the likelihood of demonstrations with respect to the trajectory distribution induced by the optimal policy, which can be reduced to the following maximum likelihood estimation (MLE) problem:
\begin{equation}\label{eq:MaxEntIRL}
\small
\max_{\omega} \mathbb{E}_{\tau \sim \pi^E} \left[\log\Pr\nolimits_\omega(\tau)\right] = \mathbb{E}_{\tau \sim \pi^E} \left[ \sum_{t=0}^{T-1} \gamma^t r_\omega (s_t, a_t) \right] - \log Z_\omega.
\end{equation}
Here, $Z_\omega \triangleq \int_{\tau} \exp(\sum_{t=0}^{T-1} \gamma^t r_\omega(s_t,a_t) )$ is the {\em partition function} of Eq.~\eqref{eq:trajectoryMaxEntIRL}, i.e., an integral over all feasible trajectories. The initial distribution $\rho_0$ and transition function $p$ are omitted in $Z_\omega$ since they do not depend on $\omega$. Computing $Z_\omega$ is intractable if state-action spaces are large or continuous. Many methods for estimating $Z_\omega$ have been proposed and most are based on {\em importance sampling} \cite{boularias2011relative,finn2016guided,fu2018learning}.
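Only the reward-dependent term of Eq.~\eqref{eq:trajectoryMaxEntIRL} matters for the gradient of the MLE objective in Eq.~\eqref{eq:MaxEntIRL}. A tabular sketch of that term, where the reward table \texttt{r} is a hypothetical parameterisation of $r_\omega$:

```python
import numpy as np

def traj_log_energy(traj, r, gamma=0.99):
    # sum_t gamma^t r(s_t, a_t): the omega-dependent part of log Pr(tau);
    # rho_0 and p drop out since they do not depend on omega
    return sum(gamma ** t * r[s, a] for t, (s, a) in enumerate(traj))
```

The MLE gradient is then the difference between the expert and model expectations of the gradient of this quantity with respect to $\omega$.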
\section{Method}\label{sec:method}
The task of IRL for MFG aims to infer the ground-truth reward function $r(s,a,\mu)$ from demonstrations provided by experts.
Consider an $N$-player MFG. Suppose we have no access to the reward function $r(s,a,\mu)$ but have a set of expert trajectories $\mathcal{D}_E = \{ (\tau_j^1,\ldots,\tau_j^N) \}_{j = 1}^M$ sampled from a total number of $M$ {\em game plays} under an {\em unknown} equilibrium point $(\bm{\mu}^E, \bm{\pi}^E)$. Each $\tau_j^i = \{(s^i_{j,t}, a^i_{j,t})\}_{t=0}^{T-1}$ is the state-action trajectory of the $i$th agent in the $j$th game play. Our ultimate goal is to generalise MaxEnt IRL to MFG. However, this is challenging due to the policy ambiguity in MFNE and the coupling between agent-level and population-level dynamics. We address both issues in this section and derive a model-free IRL framework for MFG.
\subsection{Entropy-Regularised Mean Field Nash Equilibrium}\label{sec:ERMFNE}
To generalise MaxEnt IRL to MFG, we need to characterise the trajectory distribution induced by a reward function in a similar manner as Eq.~\eqref{eq:trajectoryMaxEntIRL}. However, the solution concept of MFNE has the aforementioned policy ambiguity, and thus it cannot provide a tractable trajectory distribution that we can use in maximising the likelihood of demonstrated trajectories. The ambiguity arises because the contractivity of the MFNE operator fails if multiple MFNE exist, and even if the contractivity holds, it remains ambiguous which policy to identify if there exist multiple best-response policies to a MF flow.
To resolve this issue,
a natural way is to incorporate entropy regularisation into the objective of MFG, in the same vein as Eq.~\eqref{eq:maxent_policy}. This motivates a new solution concept -- {\em entropy-regularised mean field Nash equilibrium} (ERMFNE) -- where an agent aims to maximise the entropy-regularised rewards:
\begin{equation}
\small
\tilde{J}(\bm{\mu}, \bm{\pi}) \triangleq \mathbb{E}_{\bm{\mu}, \bm{\pi}} \left[ \sum_{t = 0}^{T-1} \gamma^{t} \big( r(s_t, a_t, \mu_t) + \beta \H(\pi_t(\cdot \vert s_t) \big) \right].
\end{equation}
\begin{definition}[Entropy-Regularised MFNE]
A pair of MF flow and policy $(\bm{\mu}^*, \bm{\pi}^*)$ is called an {\em entropy-regularised mean field Nash equilibrium} if
$\tilde{J} (\bm{\mu}^*, \bm{\pi}^*) = \max_{\bm{\pi}} \tilde{J} (\bm{\mu}^*, \bm{\pi})$ and $\bm{\mu}^{\ast} = \Phi(\bm{\pi}^{\ast})$.
\end{definition}
Note that a similar equilibrium concept is independently proposed in \cite{cui2021approximately} but is motivated from forward RL for MFG, in order to obtain a more robust convergence property. As shown in \cite{cui2021approximately}, with entropy regularisation, the best-response policy $\tilde{\bm{\pi}}$ to a fixed MF flow $\bm{\mu}$ exists {\em uniquely} and fulfils
\begin{equation}\label{eq:softQMFG}
\small
\tilde{\pi}_t (a_t \vert s_t) \propto \exp\left( \frac{1}{\beta} Q^{\bm{\mu}, \tilde{\bm{\pi}}_{t+1:T-1}}_{\mathrm{soft}}(s_t,a_t,\mu_t) \right),
\end{equation}
where $Q^{\bm{\mu}, \bm{\pi}_{t+1:T-1}}_{\mathrm{soft}}(s_t,a_t,\mu_t) \triangleq r(s_t, a_t, \mu_t) + \mathbb{E}_{p, \bm{\pi}_{t+1:T-1}} \left[ \sum_{\ell = t+1}^{T-1} \gamma^{\ell - t} ( r(s_\ell, a_\ell, \mu_\ell) + \H(\pi_\ell (\cdot \vert s_\ell)) )\right]$ denotes the soft $Q$-function for MFG. Analogous to MFNE, ERMFNE exists for any $\beta > 0$ if both the reward function and the transition function are continuous and bounded \cite{cui2021approximately}. Using $\tilde{\Psi}(\bm{\mu}) = \tilde{\bm{\pi}}$ to denote the policy selection specified by Eq.~\eqref{eq:softQMFG}, we obtain the {\em ERMFNE operator} $\tilde{\Gamma} = \Phi \circ \tilde{\Psi}$. It is shown in \cite[Theorem~3]{cui2021approximately} that $\tilde{\Gamma}$ achieves a contraction under sufficiently large temperatures $\beta$, and thus implies a {\em unique} ERMFNE. Also note that we recover optimality as the temperature $\beta \to 0$. Though we cannot simultaneously have $\beta \to \infty$ and $\beta \to 0$, empirically we can often find a low $\beta$ that achieves convergence \cite{cui2021approximately}. Therefore, without loss of generality, we assume $\beta = 1$ in the remainder of this paper, which is a conventional setting in the literature on MaxEnt IRL \cite{ziebart2008maximum,finn2016guided,fu2018learning,yu2019multi}.
Another good property of ERMFNE is that its induced trajectory distribution (of a representative agent) can be characterised with an energy-based model, in analogy to Eq.~\eqref{eq:trajectoryMaxEntIRL}. Intuitively, fixing the MF flow $\bm{\mu}^*$, we can show that the induced MDP (with a non-stationary transition function) retains the property of a maximum entropy trajectory distribution under the best-response policy $\bm{\pi}^*$.
\begin{proposition}\label{prop:trajectory_ERMFNE}
Let $(\bm{\mu}^*, \bm{\pi}^*)$ be an ERMFNE with $\beta=1$ for an MFG $(\S, \mathcal{A}, p, \mu_0, r,\gamma)$. A representative agent's trajectory $\tau = \{(s_t,a_t)\}_{t=0}^{T-1}$ induced by $(\bm{\mu}^*, \bm{\pi}^*)$ fulfils the following generative process
\begin{equation}\label{eq:trajectory_ERMFNE}
\small
\Pr\nolimits(\tau) \propto \mu_0(s_0) \exp\left( \sum_{t=0}^{T-1} \gamma^t r \left(s_t, a_t, \mu^*_t \right) \right) \prod_{t=0}^{T-1} p(s_{t+1} \vert s_t, a_t, \mu^*_t).
\end{equation}
\end{proposition}
\vspace{-1.5em}
\begin{proof}
See Appendix~\ref{proof:prop:trajectory_ERMFNE}.
\end{proof}
\subsection{Mean Field Inverse Reinforcement Learning}\label{sec:MFIRL}
From now on, we assume expert trajectories are sampled from a unique ERMFNE $(\bm{\mu}^E, \bm{\pi}^E)$.
Let $r_\omega(s,a,\mu)$ be a parameterised reward function and $(\bm{\mu}^\omega, \bm{\pi}^\omega)$ be the ERMFNE induced by $\omega$. Also assume $(\bm{\mu}^E, \bm{\pi}^E)$ is induced by some unknown true parameter $\omega^*$, i.e., $(\bm{\mu}^E, \bm{\pi}^E) = (\bm{\mu}^{\omega^*}, \bm{\pi}^{\omega^*})$.
Note that the initial mean field $\mu_0$ is specified by the MFG. According to Proposition~\ref{prop:trajectory_ERMFNE}, by taking ERMFNE as the notion of optimality, we are able to rationalise the expert behaviours by maximising the likelihood of the demonstrated trajectories with respect to the distribution defined by Eq.~\eqref{eq:trajectory_ERMFNE}. Due to the homogeneity of agents, the trajectories of all $N$ expert agents are drawn from the same distribution defined in Eq.~\eqref{eq:trajectory_ERMFNE}. Hence, we can tune the reward parameter by maximising the likelihood over the trajectories of all $N$ expert agents, which reduces to the following MLE problem:
\begin{equation}\label{eq:MFIRL_original}
\small
\begin{aligned}
\max_\omega L(\omega) & \triangleq \mathbb{E}_{\tau_j^i \sim (\bm{\mu}^E, \bm{\pi}^E)} \left[\log(\Pr\nolimits_\omega(\tau_j^i))\right]\\
= \frac{1}{M} & \sum_{j=1}^M \frac{1}{N} \sum_{i=1}^N \sum_{t=0}^{T-1} \left( \gamma^t r_\omega \left(s_{j,t}^i, a_{j,t}^i, \mu^\omega_t \right)
+ \log p(s^i_{j,t+1} \vert s_{j,t}^i, a_{j,t}^i, \mu^\omega_t) \right) - \log Z_\omega,
\end{aligned}
\end{equation}
where $Z_\omega$ is the partition function of the distribution defined in Eq.~\eqref{eq:trajectory_ERMFNE} such that
\begin{equation}\label{eq:partition_original}
\small
Z_\omega = \sum_{j=1}^M \sum_{i=1}^N \exp \left(\sum_{t=0}^{T-1} \gamma^t r_\omega \left(s_{j,t}^i, a_{j,t}^i, \mu^\omega_t \right) \right)
\prod_{t=0}^{T-1} p(s^i_{j,t+1} \vert s_{j,t}^i, a_{j,t}^i, \mu^\omega_t) .
\end{equation}
The initial mean field $\mu_0$ is discarded in Eq.~\eqref{eq:MFIRL_original} and Eq.~\eqref{eq:partition_original} since it does not depend on $\omega$.
However, directly optimising the MLE objective in Eq.~\eqref{eq:MFIRL_original} is intractable since we cannot analytically derive the induced MF flow $\bm{\mu}^\omega$.
In fact, this problem originates from the nature of MFG that the policy and the MF flow are coupled with each other \cite{lasry2007mean}: $\bm{\pi}^* = \tilde{\Psi}(\bm{\mu}^*)$ and in turn $\bm{\mu}^* = \Phi(\bm{\pi}^*)$. As a result, computing an ERMFNE (and also an MFNE) is analytically intractable. This issue poses the main technical challenge for extending MaxEnt IRL to MFG. Moreover, the transition function $p(s,a,\mu)$ also depends on the reward parameter $\omega$ due to the presence of $\bm{\mu}^\omega$. This brings an extra layer of complexity since the environment dynamics is generally unknown in the real world.
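The coupling $\bm{\pi}^* = \tilde{\Psi}(\bm{\mu}^*)$, $\bm{\mu}^* = \Phi(\bm{\pi}^*)$ can be illustrated with a naive fixed-point iteration on a toy two-state MFG. All dynamics, rewards and constants below are our own illustrative assumptions, and convergence of such naive alternation is not guaranteed in general.

```python
import numpy as np

# Toy fixed-point iteration for the ERMFNE operator: alternate the forward
# MF-flow map Phi and the soft best response Psi.  Illustrative numbers only.
S, A, T, gamma, beta = 2, 2, 5, 0.9, 1.0
mu0 = np.array([1.0, 0.0])
P = np.zeros((S, A, S))              # mu-independent dynamics, for simplicity
P[:, 0] = [0.9, 0.1]                 # action 0: drift towards state 0
P[:, 1] = [0.1, 0.9]                 # action 1: drift towards state 1

def Phi(policies):                   # policy -> induced MF flow
    mus = [mu0]
    for t in range(T - 1):
        mus.append(np.einsum('s,sa,sax->x', mus[-1], policies[t], P))
    return mus

def Psi(mus):                        # MF flow -> soft best response (backward)
    V, pols = np.zeros(S), []
    for t in reversed(range(T)):
        r = -np.repeat(mus[t][:, None], A, axis=1)   # crowd-averse: r(s,a,mu) = -mu(s)
        Q = r + gamma * np.einsum('sax,x->sa', P, V)
        z = np.exp((Q - Q.max(1, keepdims=True)) / beta)
        pols.append(z / z.sum(1, keepdims=True))     # softmax policy
        V = beta * np.log(z.sum(1)) + Q.max(1)       # soft value (log-sum-exp)
    return pols[::-1]

policies = [np.full((S, A), 1.0 / A) for _ in range(T)]
for _ in range(50):                  # naive alternation of Phi and Psi
    mus = Phi(policies)
    policies = Psi(mus)
```

In this symmetric toy example the uniform policy is an exact fixed point, so the alternation stabilises immediately; in general such iteration may oscillate, which is why contraction results for $\tilde{\Gamma}$ matter.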
However, we note that if either the policy or the MF flow is given by an ``oracle'', then we can derive the complement analytically according to the ERMFNE operator $\tilde{\Gamma} = \Phi \circ \tilde{\Psi}$. Inspired by this fact, we sidestep this problem by substituting $\bm{\mu}^\omega$ in Eq.~\eqref{eq:MFIRL_original} with the empirical value of the expert MF flow estimated from expert demonstrations: for each sample of game play, we derive an empirical value of $\bm{\mu}^E$, denoted by $\hat{\bm{\mu}}^E_j$, by averaging the frequencies of occurrences of states:
\begin{equation}\label{eq:est_mf}
\small
\hat{\mu}_{j,t}^E(s) \triangleq \frac{1}{N} \sum_{i=1}^N \mathds{1}_{ \{ s_{j,t}^i = s \} }.
\end{equation}
By averaging over all $M$ game plays, we obtain an empirical estimation of the expert MF flow $\hat{\bm{\mu}}^E = \frac{1}{M} \sum_{j=1}^M \hat{\bm{\mu}}_j^E \approx \bm{\mu}^E$. Meanwhile, by substituting $\hat{\bm{\mu}}^E$ for $\bm{\mu}^\omega$, the transition function $p(s_t,a_t,\hat{\mu}_t^E)$ is decoupled from the reward parameter $\omega$ since $\hat{\bm{\mu}}^E$ does not depend on $\omega$, and is henceforth discarded in the likelihood function. Finally, with this substitution, we obtain a simplified and tractable version of the original maximum likelihood objective in Eq.~\eqref{eq:MFIRL_original}:
\begin{equation}\label{eq:final}
\small
\max_{\omega} \hat{L}\left(\omega; \hat{\bm{\mu}}^E \right) \triangleq \frac{1}{M} \sum_{j=1}^M \frac{1}{N} \sum_{i=1}^N \sum_{t=0}^{T-1} \gamma^t r_\omega\left(s_{j,t}^i, a_{j,t}^i, \hat{\mu}^E_t\right) - \log \hat{Z}_\omega,
\end{equation}
where the partition function is simplified as
\begin{equation}
\small
\hat{Z}_{\omega} \triangleq \sum_{j=1}^M \sum_{i=1}^N \exp \left(\sum_{t=0}^{T-1} \gamma^t r_\omega\left(s_{j,t}^i, a_{j,t}^i, \hat{\mu}^E_t \right) \right).
\end{equation}
Intuitively, one can interpret Eq.~\eqref{eq:final} in such a way that we fix the expert MF flow $\hat{\bm{\mu}}^E$ and maximise the likelihood of expert trajectories with respect to the trajectory distribution induced by the best-response policy to $\hat{\bm{\mu}}^E$. Statistically, we use the likelihood function of a ``mis-specified'' model that treats the policy and the MF flow as independent, and replaces the MF flow with its empirical value. In this manner, we construct an estimate of $\omega^\ast$ by maximising a simplified form of the actual log-likelihood function defined in Eq.~\eqref{eq:MFIRL_original}. Note that we sacrifice accuracy for tractability, due to the estimation error of $\hat{\bm{\mu}}^E$. Nevertheless, it can be shown that the optimiser of Eq.~\eqref{eq:final} is still an asymptotically consistent estimator of the optimal $\omega^\ast$, since $\hat{\bm{\mu}}^E$ converges almost surely to $\bm{\mu}^E = \bm{\mu}^{\omega^*}$ as the number of samples tends to infinity (by the law of large numbers).
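A minimal sketch of the pipeline up to Eq.~\eqref{eq:final} — estimating the empirical MF flow and evaluating the simplified log-likelihood for an assumed linear reward $r_\omega = \omega^\top \phi$ — could look as follows. The array shapes and function names are our own assumptions.

```python
import numpy as np

def empirical_mf_flow(states, n_states):
    """Eq. (est_mf) averaged over game plays: states is an (M, N, T) array
    of state indices; returns the (T, |S|) empirical MF flow mu_hat^E."""
    M, N, T = states.shape
    mu = np.zeros((T, n_states))
    for t in range(T):
        for s in range(n_states):
            mu[t, s] = np.mean(states[:, :, t] == s)   # average over M and N
    return mu

def simplified_loglik(omega, feats, gamma=0.9):
    """Simplified objective of Eq. (final) for an assumed linear reward
    r_omega = omega . phi(s, a, mu_hat);  feats has shape (M, N, T, d)."""
    M, N, T, d = feats.shape
    disc = gamma ** np.arange(T)
    returns = np.einsum('mntd,d,t->mn', feats, omega, disc)  # discounted returns
    m = returns.max()
    log_Z = m + np.log(np.exp(returns - m).sum())            # stable log-partition
    return returns.mean() - log_Z
```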
\smallskip
\begin{theorem}\label{thm:MFIRL}\label{thm:consistent}
Let demonstrated trajectories in $\mathcal{D}_E = \{ (\tau_j^1,\ldots,\tau_j^N) \}_{j = 1}^M$ be independent and identically distributed and sampled from a unique ERMFNE induced by some unknown reward function $r_{\omega^*}$. Suppose for all $s \in \S$, $a \in \mathcal{A}$ and $\mu \in \mathcal{P}(\S)$, $r_\omega(s,a,\mu)$ is differentiable with respect to $\omega$. Then,
with probability $1$ as the number of samples $M \to \infty$, the equation $\nabla_\omega \hat{L}\left(\omega; \hat{\bm{\mu}}^E \right) = \bm{0}$ has a root $\hat{\omega}$ such that $\hat{\omega}=\omega^*$.
\end{theorem}
\vspace{-1em}
\begin{proof}
See Appendix~\ref{proof:thm:consistent}.
\end{proof}
In practice, we always have finite samples and ask about the sample complexity, i.e., how many demonstrations we must obtain from experts in order to guarantee the estimation accuracy. We can apply a standard Hoeffding inequality and a union bound over all states and all time steps $1\leq t \leq T-1$ ($\mu_0$ is given as input) to control the estimation error of $\hat{\bm{\mu}}^E$: for any given $\epsilon \in (0 , 1)$,
\begin{equation*}
\Pr\left( \forall 1\leq t \leq T-1, \left\|\mu^E_t - \hat{\mu}^E_t \right\|_\infty \leq \epsilon \right) \geq 1 - 2 |\S|(T-1) \exp\left(-2 \epsilon^2 M \right).
\end{equation*}
Theorem~\ref{thm:consistent} tells us that in principle the estimation error $\|\hat{\omega}-\omega^* \|_\infty$ decreases as $M$ grows. However, it is generally intractable to derive a rigorous error bound, since we cannot ensure the convexity and continuity of $\hat{L}$ with respect to $\omega$ and $\mu$ in general. Nonetheless, we empirically show that our method can guarantee accuracy with a small number of samples (see Sec.~\ref{sec:exp}).
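Inverting the union Hoeffding bound above yields a crude sample-size rule of thumb, $M \geq \log\!\big(2|\S|(T-1)/\delta\big)/(2\epsilon^2)$. The small helper below (our own, purely illustrative) computes it.

```python
import math

def min_game_plays(n_states, T, eps, delta):
    """Smallest M making the union Hoeffding bound hold:
    2 |S| (T - 1) exp(-2 eps^2 M) <= delta."""
    return math.ceil(math.log(2 * n_states * (T - 1) / delta) / (2 * eps ** 2))
```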
\begin{comment}
\begin{theorem}\label{thm:error}
Assume $r_\omega(s, a, \mu)$ is linear in $\omega$ such that $r_\omega(s, a, \mu) = \omega^\top \phi(s,a,\mu)$ and the feature mapping $\phi$ fulfils $\phi(s,a,\mu) = [ \phi'(s,a), \mu]$, where $\phi': \S \times \mathcal{A} \to [0, 1]^{|d - |S||}$ and $[\cdot,\cdot]$ denotes concatenation. Let any $\epsilon, \delta \in (0, 1)$ be given. In order to ensure that with probability at least $1 - \delta$, the equation $ \frac{\partial}{\partial \omega} \hat{L}\left(\omega; \hat{\bm{\mu}}^E \right) = \bm{0}$ has a root $\hat{\omega}$ such that $\| \hat{\omega} - \omega^* \|_\infty \leq \epsilon$, it suffices that
\begin{equation*}
M \geq \frac{(1 - \gamma)^2 (1 + e)^2}{2 \epsilon^2 (1 - \gamma^T)^2 \left( 1 + \frac{1- \gamma^T}{1 - \gamma} e^{\frac{1- \gamma^T}{1 - \gamma}}\right) } \log \frac{2|\S|(T-1)}{\delta}.
\end{equation*}
\end{theorem}
\begin{proof}
See Appendix~\ref{proof:thm:error}.
\end{proof}
\end{comment}
As mentioned in Sec.~\ref{sec:MaxEnt}, computing exactly the partition function $Z_\omega$ is generally intractable. Following IRL methods for MDP \cite{finn2016guided,fu2018learning} and stochastic games \cite{yu2019multi},
we use {\em importance sampling} to estimate $\hat{Z}_{\omega}$ with {\em adaptive samplers}. Since the policy in MFG is non-stationary, we need a set of $T$ samplers $\bm{q}^{\bm{\theta}} = (q^{\theta_0}, q^{\theta_1}, \ldots, q^{\theta_{T-1}})$, where each $q^{\theta_t}$ is a parameterised policy at step $t$. Formally, we use $\bm{q}^{\bm{\theta}}$ to sample a set of trajectories $\mathcal{D}_{\mathrm{samp}}$ and estimate $\hat{Z}_\omega$ as follows:
\begin{equation}\label{eq:importance_sampling}
\small
\hat{Z}_\omega \approx \frac{1 }{| \mathcal{D}_{\mathrm{samp}} |} \sum_{\tau \sim \mathcal{D}_{\mathrm{samp}}} \frac{\exp \left( \sum_{t=0}^{T-1} \gamma^t r_\omega\left(s_t, a_t, \hat{\mu}^E_t\right) \right)}{\Pi_{t=0}^{T-1} q^{\theta_t}(a_t \vert s_t)}.
\end{equation}
The update of the policy parameters $\bm{\theta}$ is interleaved with the update of the reward parameter $\omega$. Intuitively, tuning $\bm{q}^{\bm{\theta}}$ can be considered a {\em policy optimisation} procedure, which finds the optimal policy induced by the current reward parameter, in order to minimise the variance of importance sampling. In the context of MFG, given the estimated expert MF flow $\hat{\bm{\mu}}^E$ and fixing the reward parameter $\omega$, we obtain an induced finite-horizon MDP with the reward function $r_\omega(s_t,a_t,\hat{\mu}^E_t)$ and a non-stationary transition function $p(s_t,a_t,\hat{\mu}^E_t)$. We can then tune $\bm{q}^{\bm{\theta}}$ using {\em soft Q-learning} \cite{haarnoja2017reinforcement} as the forward RL solver, an algorithm dedicated to maximising the entropy-regularised expected return. We combine soft Q-learning with {\em backward induction}, i.e., tuning $q^{\theta_t}$ based on $q^{\theta_{t+1}}, \ldots, q^{\theta_{T-1}}$ that are already well tuned. See Appendix~\ref{app:sampling} for detailed training procedures of $\bm{q}^{\bm{\theta}}$.
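A sketch of the importance-sampling estimator in Eq.~\eqref{eq:importance_sampling}, assuming the per-step rewards and sampler log-probabilities of the sampled trajectories have been collected into arrays (the array conventions are our own):

```python
import numpy as np

def estimate_Z(rewards, log_q, gamma=0.9):
    """Importance-sampling estimate of Eq. (importance_sampling).
    rewards: (K, T) per-step r_omega(s_t, a_t, mu_hat^E_t) for K sampled trajectories;
    log_q:   (K, T) log q^{theta_t}(a_t | s_t) under the adaptive samplers."""
    K, T = rewards.shape
    disc = gamma ** np.arange(T)
    log_w = rewards.dot(disc) - log_q.sum(axis=1)   # log[ exp(return) / prod_t q ]
    return np.exp(log_w).mean()
```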
\subsection{Solving the Reward Ambiguity}\label{sec:shaping}
As mentioned earlier, IRL also faces reward ambiguity. This issue is often referred to as the effect of {\em reward shaping} \cite{ng1999policy}, i.e., there is a class of reward transformations that induce the same optimal policy (equilibrium for games), among which IRL cannot identify the ground-truth one without prior knowledge of the environment. Moreover, when such a transformation involves the environment dynamics, the policy (equilibrium) elicited by the learned reward function may no longer be an unbiased estimate of the ground-truth one if the environment dynamics changes \cite{fu2018learning}. It is shown that for any state-only {\em potential function} $f: \S \to \mathbb{R}$, the following reward transformation ({\em potential-based reward shaping})
\begin{equation*}
r'(s_t, a_t, s_{t+1}) = r(s_t, a_t, s_{t+1}) + \gamma f(s_{t+1}) - f(s_t)
\end{equation*}
is the sufficient and necessary condition to ensure policy (equilibrium) invariance for both MDP \cite{ng1999policy} and stochastic games \cite{devlin2011theoretical}.
Reward shaping for MFG, however, has yet to be explored in the literature. Here, we show that for any potential function $g: \S \times \mathcal{P}(\S) \to \mathbb{R}$, potential-based reward shaping is the sufficient and necessary condition to ensure the invariance of both MFNE and ERMFNE in MFG.
\begin{theorem}\label{thm:shaping}
Let any $\S, \mathcal{A},\gamma$ be given. We say $F: \S \times \mathcal{A} \times \mathcal{P}(\S) \times \S \times \mathcal{P}(\S) \to \mathbb{R}$ is a {\em potential-based reward shaping} for MFG if there exists a real-valued function $g: \S \times \mathcal{P}(\S) \to \mathbb{R}$ such that $F(s_t,a_t,\mu_t,s_{t+1},\mu_{t+1}) = \gamma g(s_{t+1}, \mu_{t+1}) - g(s_t, \mu_t)$. Then, $F$ is sufficient and necessary to guarantee the invariance of the set of MFNE and ERMFNE in the sense that: {\it\bfseries $\bullet$ Sufficiency:} Every MFNE or ERMFNE in $\mathcal M' = (\S, \mathcal{A}, p, \mu_0, r + F,\gamma)$ is also a MFNE or ERMFNE in $\mathcal M = (\S, \mathcal{A}, p, \mu_0, r,\gamma)$;
{\it\bfseries $\bullet$ Necessity:} If $F$ is not a potential-based reward shaping, then there exist an initial mean field $\mu_0$, a transition function $p$, a horizon $T$, a temperature $\beta$ (for ERMFNE only) and a reward function $r$ such that no MFNE or ERMFNE in $\mathcal M'$ is an equilibrium in $\mathcal M$.
\end{theorem}
\vspace{-1em}
\begin{proof}
See Appendix~\ref{app:proof:shaping}.
\end{proof}
To mitigate the effect of reward shaping, similar to the setting in IRL methods for MDP \cite{fu2018learning} and stochastic games \cite{yu2019multi}, we assume the parameterised reward function has the following structure:
\begin{equation*}
r_{\omega, \phi}(s_t, a_t, \mu_t, s_{t+1}, \mu_{t+1}) = r_\omega(s_t, a_t, \mu_t) + \gamma g_\phi(s_{t+1}, \mu_{t+1}) - g_\phi(s_t, \mu_t),
\end{equation*}
where $g_\phi$ is the $\phi$-parameterised potential function for MFG. We tune $\phi$ together with $\omega$, in order to recover a reward function with a higher linear correlation to the ground-truth one. In summary, we name our algorithm {\em mean field IRL} (MFIRL) and present the pseudocode in Alg.~\ref{alg:MFIRL}.
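The reward structure above can be sketched as a simple wrapper. The telescoping check below verifies the defining property of potential-based shaping: discounted returns change only by the policy-independent boundary term $\gamma^T g(s_T,\mu_T) - g(s_0,\mu_0)$. The toy potential and reward are our own assumptions.

```python
def shaped_reward(r, g, gamma=0.9):
    """Potential-based shaping for MFG (cf. the shaping theorem):
    r'(s, a, mu, s', mu') = r(s, a, mu) + gamma * g(s', mu') - g(s, mu)."""
    def r_shaped(s, a, mu, s_next, mu_next):
        return r(s, a, mu) + gamma * g(s_next, mu_next) - g(s, mu)
    return r_shaped
```

Because the shaping terms telescope along any trajectory, the shaped and original objectives differ by a constant offset for every policy, which is what makes the equilibrium set invariant.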
\begin{algorithm}[!htbp]
\caption{Mean Field Inverse Reinforcement Learning (MFIRL)}\label{alg:MFIRL}
\begin{algorithmic}[1]
\STATE {\bf Input:} MFG with parameters $(\S, \mathcal{A}, p, \mu_0, \gamma)$ and demonstrations $\mathcal{D}_E = \{ (\tau_j^1,\ldots,\tau_j^N) \}_{j = 1}^M$.
\STATE {\bf Initialisation:} reward parameter $\omega$ and potential function parameter $\phi$.
\STATE Estimate empirical expert MF flow $\hat{\bm{\mu}}^E$ from $\mathcal{D}_E$ according to Eq.~\eqref{eq:est_mf}.
\FOR{each epoch}
\STATE Train adaptive samplers $\bm{q}^{\bm{\theta}}$ with respect to $r_{\omega}$ using soft Q-learning (Alg.~\ref{alg:sampler} in Appendix~\ref{app:sampling}).
\STATE Sample a set of trajectories $\mathcal{D}_{\mathrm{samp}}$ using adaptive samplers $\bm{q}^{\bm{\theta}}$.
\STATE Estimate the partition function $Z_\omega$ from $\mathcal{D}_{\mathrm{samp}}$ according to Eq.~\eqref{eq:importance_sampling}.
\STATE Sample a minibatch of trajectories $\hat{\mathcal{D}}$ from $\mathcal{D}_E$.
\STATE Estimate the empirical values of gradients $\nabla_\omega \hat{L}$ and $\nabla_\phi \hat{L}$ of Eq.~\eqref{eq:final} using $\hat{\mathcal{D}}$.
\STATE Update $\omega$ and $\phi$ according to $\nabla_\omega \hat{L}$ and $\nabla_\phi \hat{L}$.
\ENDFOR
\STATE {\bfseries Output:} Learned reward function $r_{\omega}$.
\end{algorithmic}
\end{algorithm}
\vspace{-1em}
\section{Related Works}
{\bf RL for MFG.}
MFG were pioneered
by \cite{lasry2007mean,huang2006large} in the continuous setting.
Discrete MFG models were then proposed in \cite{gomes2010discrete}.
Learning in MFG has attracted great attention and most methods are based on RL. Yang et al. use mean field theory to approximate joint actions in large-population stochastic games in order to compute Nash equilibria \cite{yang2018mean}.
Guo et al. present a Q-learning-based algorithm for computing stationary MFNE \cite{guo2019learning}.
Subramanian et al. use RL to compute a relaxed version of MFNE \cite{subramanian2019reinforcement}.
Elie et al. use fictitious play to approximate MFNE \cite{elie2020convergence}.
However, all these works rely on the presence of reward functions. Our work takes a complementary view where the reward function is not given, hence the need for IRL for MFG.
{\bf IRL for MAS.}
Recently, IRL has been extended to the multi-agent setting. Most works assume specific reward structures, including fully cooperative games
\cite{bogert2014multi,barrett2017making}, fully competitive games \cite{lin2014multi}, or either of the two \cite{waugh2011computational,reddy2012inverse}. For general stochastic games,
Yu et al. present MA-AIRL \cite{yu2019multi}, a multi-agent IRL method using adversarial learning. However, none of these prior methods is scalable to games with a large population of agents. Although there are some efforts on IRL for large-scale MAS, they are limited in modelling rewards and/or policies. \v{S}o\v{s}i\'c et al. propose SwarmIRL \cite{vsovsic2017inverse}, which views an MAS as a swarm system consisting of homogeneous agents, sharing the same idea of mean field approximation. But it cannot handle non-stationary (time-varying) policies or non-linear reward functions. Our MFIRL makes no modelling assumptions on policies and reward functions.
The most related work is \cite{yang2018learning}, which proposes an IRL method for MFG that we call {\em MFG-MDP IRL}. It converts an MFG to an MDP that takes mean fields as ``states'' and per-step policies in the MFG as its ``actions''. Then it applies MaxEnt IRL on this constructed MDP, which implies that it assumes the policy-MF flow trajectories at the {\em population level} fulfil the maximum entropy distribution, rather than state-action pairs at the {\em agent level}. There are two issues associated with this setting: (1) MFG-MDP IRL can suffer from low sample efficiency, since it can only extract a single policy-MF flow sample from each game play, whereas one game play contributes $N$ samples in our MFIRL. (2) MFG-MDP IRL may be a biased estimator of the ground-truth reward function. As MaxEnt IRL operates at the population level, it presupposes that agents in the MFG are {\em cooperative}, i.e., they aim to optimise a common ``societal reward''.
This does not necessarily align with the interest of each individual agent, since a MFNE (or ERMFNE) does not always maximise the population's societal reward if multiple equilibria exist \cite{subramanian2019reinforcement,bardi2019non,dianetti2019submodular,delarue2020selection,cui2021approximately}. In fact, the MFNE that maximises societal reward is defined as the so-called {\em mean-field social-welfare optimal} (MF-SO) in \cite[Definition 2.2]{subramanian2019reinforcement}, where the authors give a detailed explanation of the difference between MFNE and MF-SO. As a result, MFG-MDP IRL is a biased estimator of the reward parameter if the observed equilibrium is not a MF-SO.
We present detailed explanations and discussions about MFG-MDP IRL in Appendix~\ref{app:reduce}. In contrast, since at the agent level the expert policy must be optimal with respect to the expert MF flow, our MFIRL can recover the ground-truth reward function with no bias.
\section{Experiments}\label{sec:exp}
{\bf Tasks and baseline.} We empirically evaluate the capacity of MFIRL on reward recovery using five MFG problems with simulated environments: {\em investment in product quality} \cite{weintraub2010computational,subramanian2019reinforcement} (INVEST for short), {\em malware spread} \cite{huang2016mean,huang2017mean,subramanian2019reinforcement} (MALWARE), {\em virus infection} \cite{cui2021approximately} (VIRUS), {\em Rock-Paper-Scissors} \cite{cui2021approximately} (RPS) and {\em Left-Right} \cite{cui2021approximately} (LR), ordered in decreasing complexity. Detailed descriptions and settings of these tasks can be found in Appendix~\ref{app:task}. We compare MFIRL against MFG-MDP IRL \cite{yang2018learning} as discussed above, since it is the only IRL method for MFG in the literature, as of now.
{\bf Performance Metrics.}
A learned reward function is considered of good quality if its induced ERMFNE is close to that induced by the ground-truth reward function. Therefore, we evaluate a recovered reward function using the following three metrics:
\begin{itemize}
\item {\em Expected return.} The expected return of the ERMFNE induced by a learned reward function.
\item {\em Deviation from expert MF flow} (Dev. MF for short). We use KL-divergence to measure the gap between the expert MF flow and that induced by a learned reward function, i.e., $\sum_{t = 1}^{T-1} D_{\mathrm{KL}} \big( \mu_t^E \parallel \mu^\omega_t \big)$ ($\mu_0$ is given as a part of the MFG).
\item {\em Deviation from expert policy} (Dev. Policy). We measure the gap between expert policy and the policy in the elicited ERMFNE from a learned reward function by $\sum_{t = 0}^{T-1} \sum_{s \in S} \mu_t^E (s) D_{\mathrm{KL}} \big( \pi_t^E(s) \parallel \pi^\omega_t(s) \big)$, where the KL-divergence with respect to state $s$ is weighted by the proportion of $s$ specified by the expert ERMFNE.
\end{itemize}
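For completeness, the two deviation metrics can be computed as follows, assuming tabular MF flows and policies with full support (the array conventions are our own):

```python
import numpy as np

def dev_mf(mu_E, mu_w):
    """Dev. MF: sum_{t >= 1} KL(mu^E_t || mu^omega_t); assumes full support."""
    return sum(float(np.sum(p * np.log(p / q))) for p, q in zip(mu_E[1:], mu_w[1:]))

def dev_policy(mu_E, pi_E, pi_w):
    """Dev. Policy: sum_t sum_s mu^E_t(s) KL(pi^E_t(.|s) || pi^omega_t(.|s))."""
    total = 0.0
    for mu, pE, pW in zip(mu_E, pi_E, pi_w):
        kl = np.sum(pE * np.log(pE / pW), axis=1)   # per-state KL at step t
        total += float(mu.dot(kl))
    return total
```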
{\bf Training Procedures.}
In the simulated environments, we have access to the ground-truth reward function, which enables us to compute a ground-truth ERMFNE. To test the effectiveness and accuracy of the methods, we first carry out experiments under the original dynamics. We consider $N = 100$ agents and sample expert trajectories with $50$ time steps, which is the same as the number of time steps used in \cite{song2018multi,yu2019multi,cui2021approximately}. We vary the number of game plays from $1$ to $10$. To investigate the robustness to new environment dynamics, we then change the transition function (see Appendix~\ref{app:task}), recompute the ERMFNE induced by the ground-truth and the learned reward function (learned with $10$ game plays), and calculate the metrics again. Detailed settings and implementation details are in Appendix~\ref{app:exp}.
\begin{figure}[htp]
\centering
\includegraphics[width=\textwidth]{results.pdf}
\vspace{-1em}
\caption{ Results in original environments. The solid line shows the median and the shaded area represents the region between 10\% and 90\% quantiles over 10 independent runs. }\label{fig:original}
\end{figure}
\begin{table}[!h]
\scriptsize
\caption{Results for new environments. Mean and variance are taken across 10 independent runs.}
\label{tab:new}
\centering
\begin{tabular}{c c c c c c c}
\toprule
\multirow{3}{*}{Metric} & \multirow{3}{*}{Algorithm} & \multicolumn{5}{c}{Task}\\
\cmidrule(r){3-7}
& & INVEST & MALWARE & VIRUS & RPS & LR\\
\midrule
\multirow{3}{*}{Expected Return}
& EXPERT & -35.870 & 18.896 & -1.240 & 93.156 & -0.637\\
& MFIRL & \textbf{-35.192} $\pm$ 0.511 & \textbf{18.489} $\pm$ 0.141 & \textbf{-1.713} $\pm$ 0.023 & \textbf{93.355} $\pm$ 2.508 & \textbf{-1.700} $\pm$ 0.095 \\
& MFG-MDP IRL & -34.947 $\pm$ 2.410 & 18.141 $\pm$ 0.402 & -2.658 $\pm$ 0.143 & 91.991 $\pm$ 0.442 & -2.686 $\pm$ 1.009 \\
\midrule
\multirow{2}{*}{Dev. MF} & MFIRL & \textbf{0.464} $\pm$ 0.029 & \textbf{0.435} $\pm$ 0.007 & \textbf{0.031} $\pm$ 0.0003 & \textbf{2.932} $\pm$ 0.057 & \textbf{0.366} $\pm$ 0.035\\
& MFG-MDP IRL & 1.510 $\pm$ 0.697 & 1.731 $\pm$ 2.207 & 0.081 $\pm$ 0.0081 & 3.112 $\pm$ 0.569 & 0.394 $\pm$ 0.230 \\
\midrule
\multirow{2}{*}{Dev. Policy} & MFIRL & \textbf{0.244} $\pm$ 0.016 & \textbf{0.523} $\pm$ 0.011 & \textbf{1.481} $\pm$ 0.006 & \textbf{6.106} $\pm$ 0.459 & \textbf{0.574} $\pm$ 0.039\\
& MFG-MDP IRL & 1.055 $\pm$ 0.210 & 1.538 $\pm$ 1.202 & 1.763 $\pm$ 0.181 & 6.468 $\pm$ 0.978 & 0.621 $\pm$ 0.223 \\
\bottomrule
\end{tabular}
\vspace{-1em}
\end{table}
{\bf Results.}
Results for original and new environment dynamics are shown in Fig.~\ref{fig:original} and Tab.~\ref{tab:new}, respectively.
For all tasks, the expected return and ERMFNE produced by MFIRL are very close to the ground truth, even with a small number of game plays. MFG-MDP IRL requires more samples to produce a relatively good expected return and ERMFNE, but still shows larger deviations and variances. This indicates that MFIRL can recover reward functions highly consistent with the ground-truth ones with high sample efficiency, in line with our theoretical analysis. When the environment dynamics changes, MFIRL outperforms MFG-MDP IRL on all tasks. We attribute the superior performance of MFIRL to two reasons: (1) MFIRL uses a potential function to mitigate the reward shaping problem while MFG-MDP IRL does not, thus MFIRL is more robust against changing dynamics. (2) MFG-MDP IRL rationalises expert behaviours by finding a reward function under which the expert's ERMFNE maximises the population's societal reward. However, an ERMFNE is not necessarily a maximiser of societal reward. Therefore, MFG-MDP IRL may fail to recover the ground-truth reward function.
\vspace{-.5em}
\section{Conclusion and Future Work}\label{sec:con}
\vspace{-.5em}
This paper amounts to an effort towards IRL for MFG. We propose MFIRL, the first IRL method that can recover the ground-truth reward function for MFG, based on a new solution concept termed ERMFNE. ERMFNE incorporates entropy regularisation into the mean field Nash equilibrium, and allows us to characterise the expert demonstrated trajectories with an energy-based model. Most critically, MFIRL sidesteps the problem of the coupled backward-forward dynamics by substituting the mean field with its empirical estimation. In this manner, MFIRL achieves an asymptotically consistent estimation of the optimal reward parameter. We also provide a mean-field type potential-based reward shaping function in order to solve the reward ambiguity. Experimental results show that MFIRL can recover ground-truth reward functions with high accuracy and high sample efficiency. A direction for future work is to combine MFIRL with generative learning or adversarial learning as in \cite{song2018multi,yu2019multi}, in order to make MFIRL more adaptive to high-dimensional or continuous tasks.
\bibliographystyle{plain}
\section{Introduction} \label{intro}
A pairwise relation among a group of agents is represented by a `network' (also known as a `graph'), where the set of agents forms the \emph{vertex set}, and the links among them form the \emph{edge set} of the network \cite{barabasi2016network}. Examples include the `social network' (an interconnected structure among a group of humans) \cite{musial2013creation}, the `information network' (an interconnected structure among a group of data centers) \cite{sun2013mining} and many more. Analysis of such networks for different topological structures brings out many important characteristics of the contact pattern. For example, a cohesive group of users in a social network can be interpreted as close friends. Most real\mbox{-}world networks are time varying in nature, which means the structure of the network changes over time. This kind of network is effectively modeled as a \textit{temporal network} (also known as a \emph{time varying graph} or \emph{link stream}) \cite{kostakos2009temporal}.
\par To analyze a static network, several topological structures have been defined in the literature, such as the \textit{clique} \cite{akkoyunlu1973enumeration} \cite{eppstein2013listing}, \textit{pseudo clique} \cite{zhai2016fast}, \emph{k-plex} \cite{berlowitz2015efficient}, $k$\mbox{-}\emph{club} \cite{almeida2014two}, $k$\mbox{-}\emph{cores} \cite{khaouid2015k} and many more \cite{akiba2013linear}. Among them, the most widely studied topological structure is the clique. Plenty of solution methodologies have been proposed in the literature to enumerate such structures present in a network \cite{xu2013topological}. Initially, E. A. Akkoyunlu \cite{akkoyunlu1973enumeration} proposed an enumeration technique for maximal cliques. After that, a recursive technique was proposed by Bron and Kerbosch \cite{bron1973algorithm}. Since then, significant effort has been put into developing practical algorithms for enumerating maximal cliques in different scenarios, such as for sparse graphs \cite{eppstein2011listing,eppstein2013listing}, in large networks \cite{cheng2010finding,cheng2011finding,rossi2014fast}, in uncertain graphs \cite{mukherjee2015mining,mukherjee2017enumeration,zou2010finding}, using the map reduce framework \cite{hou2016efficient,xiang2013scalable}, in the parallel computing framework \cite{chen2016parallelizing,rossi2015parallel,schmidt2009scalable}, with limited memory resources \cite{cheng2012fast} and so on. As real\mbox{-}world networks are time varying, none of the mentioned techniques can be applied to analyze such networks.
\par To analyze a temporal network for cohesive structures, some generalization of the `clique' is required. In this direction, the first contribution came from Viard et al. \cite{viard2015revealing,viard2016computing}, who introduced the notion of the $\Delta$\mbox{-}Clique. For a given temporal network, a $\Delta$\mbox{-}Clique is a vertex subset and time interval pair, where in every $\Delta$ duration of the interval, there exists at least one link between every pair of vertices in the subset. Recently, Banerjee and Pal \cite{banerjee2019enumeration} extended the notion of the $\Delta$\mbox{-}Clique to the $(\Delta, \gamma)$\mbox{-}Clique by incorporating frequency along with duration, and proposed an enumeration algorithm for the maximal $(\Delta, \gamma)$\mbox{-}Cliques present in a temporal network. In all these studies, it is implicitly assumed that the entire set of temporal links is available before the enumeration algorithm starts execution. However, in reality the scenario may be a little different, which we explain with a real\mbox{-}world case study.
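As a concrete reading of the definition, the following sketch checks whether a given vertex subset and interval form a $(\Delta, \gamma)$\mbox{-}clique over integer time stamps. This is an illustrative simplification of ours; the actual enumeration algorithms are far more involved.

```python
from itertools import combinations

def is_delta_gamma_clique(links, X, b, e, delta, gamma):
    """Check whether (X, [b, e]) is a (Delta, gamma)-clique: every pair of
    vertices in X shares at least gamma links within every window
    [tau, tau + delta] contained in [b, e].  links: iterable of (t, u, v).
    Integer time stamps assumed, purely for illustration."""
    for u, v in combinations(sorted(X), 2):
        times = [t for (t, x, y) in links if {x, y} == {u, v}]
        for tau in range(b, e - delta + 1):        # slide the Delta-window
            if sum(1 for t in times if tau <= t <= tau + delta) < gamma:
                return False
    return True
```

With $\gamma = 1$ this reduces to the $\Delta$\mbox{-}Clique condition of Viard et al.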
\par In $2011$ and $2012$, a human contact dataset was collected among a group of high school children in France on a proximity sensing platform based on wearable sensors, for a duration of almost $8$ days \cite{fournet2014contact}. The goal was to study and analyze the following: how does the mixing pattern change when the students are re-partitioned into groups? how do gender differences impact the contact patterns? how do the contact patterns evolve at two widely distinct timescales? and finally, how do the contact patterns vary across different durations? To address the last question (the broader one), the notion of enumerating maximal $(\Delta, \gamma)$\mbox{-}Cliques can be applied effectively, as it returns a vertex subset and a time interval such that in each $\Delta$ duration of the interval, there are at least $\gamma$ edges between every pair of vertices in the set. Suppose that once a contact happens between two students, that information gets stored in a centralized server. Applying the maximal $(\Delta, \gamma)$\mbox{-}Clique enumeration methodology proposed in \cite{banerjee2019enumeration} requires the completion of the data collection to obtain the entire set of links. However, in many data collection scenarios, the required time is much longer. As an example, for the \textit{`college message'} dataset \cite{panzarasa2009patterns}, the duration of collecting the data was $193$ days. In this scenario, instead of waiting for the end of the data collection process, it is more practical to start analyzing the contact pattern once a fraction of the dataset is available. Suppose the first $24$ hours of contact details are available and the $(\Delta, \gamma)$\mbox{-}Clique enumeration algorithm from \cite{banerjee2019enumeration} is used to obtain the partial results.
Once the contact history of the next $24$ hours is available, instead of running from scratch, can we update the previous maximal clique set to obtain the maximal clique set after $48$ hours? This is the problem that we address in this paper.
\par In summary, we study the maximal $(\Delta, \gamma)$\mbox{-}clique enumeration problem in the `Online Setting', where the entire link set of the network is not known in advance and the links arrive in batches, iteratively. The goal is to update the maximal $(\Delta, \gamma)$\mbox{-}cliques computed from the links up to the previous batch with the links of the current batch, to obtain the updated maximal $(\Delta, \gamma)$\mbox{-}cliques. Formally, we call this problem the \textit{Maximal $(\Delta, \gamma)$\mbox{-}Clique Updation Problem}. In particular, we make the following contributions in this paper:
\begin{itemize}
\item We introduce the novel \emph{Maximal $(\Delta, \gamma)$\mbox{-}Clique Updation Problem}, in which the links arrive in batches, iteratively.
\item For this problem, we propose an efficient methodology, named `edge on clique', that updates the existing cliques by considering the new links in a sequential manner.
\item Through a sequence of arguments, we show that the proposed methodology correctly updates the previous $(\Delta, \gamma)$\mbox{-}cliques.
\item The proposed methodology has been analyzed to understand its time and space requirements.
\item We implement the proposed methodology to perform an extensive set of experiments on four real\mbox{-}world temporal network datasets with two different data partitioning schemes, and show that the updation approach can be effectively used to enumerate maximal $(\Delta, \gamma)$\mbox{-}cliques.
\item We also show that this methodology can be adopted in the offline setting (when all the links are available before the start of execution) to enumerate all the maximal $(\Delta, \gamma)$\mbox{-}cliques present in a temporal network, by splitting the dataset into parts and then applying the proposed $(\Delta, \gamma)$\mbox{-}clique updation approach.
\end{itemize}
The rest of the paper is organized as follows: Section \ref{Sec:RW} describes the relevant studies from the literature. Section \ref{Sec:PPD} contains the required preliminary definitions and formally defines the `Maximal $(\Delta, \gamma)$\mbox{-}Clique Updation Problem'. Section \ref{Sec:PA} discusses the proposed methodology. Section \ref{Sec:EE} contains the experimental evaluation of our proposed approach, and finally, Section \ref{Sec:CFD} concludes this study and gives future directions.
\section{Related Work} \label{Sec:RW}
This study falls under the broad theme of time varying graph analysis and, in particular, structural pattern finding in temporal networks. We describe both in two consecutive subsections.
\subsection{Time Varying Graph Analysis}
As most real\mbox{-}life networks, such as \emph{wireless sensor networks} and \emph{social networks}, are time varying in nature, the past decade has witnessed a significant effort in understanding and mining time varying graphs \cite{casteigts2012time}. Several kinds of problems have been studied, and hence it is not possible to survey all the results here. Instead, we present a few fundamental graph problems that have been studied in the temporal setting, with the corresponding literature. The first is the `temporal connectivity problem'. One very fundamental problem studied in this context is finding shortest paths in temporal graphs \cite{wu2014path}. Another very important problem in the context of temporal graph analysis is `reachability', and there exist several studies. Basu et al. \cite{basu2014sample} studied the reachability estimation problem in temporal graphs. Wildemann et al. \cite{wildemann2015time} studied the traversal and reachability problem on temporal graphs and derived three classes of temporal traversals from a set of realistic use cases. Whitbeck et al. \cite{whitbeck2012temporal} introduced the concept of the $(\tau,\delta)$\mbox{-}reachability graph for a given time\mbox{-}varying graph; they studied the mathematical properties of this graph and also provided several algorithms for computing it. Following this study, there are several works in this direction \cite{casteigts2015efficiently, xu2015network, wu2016reachability}. Huang et al. \cite{huang2015minimum} studied the minimum spanning tree problem on temporal graphs. Another well studied problem in temporal graph analysis is community detection \cite{bazzi2016community, he2015fast, rossetti2018community}.
There are also several theoretical problems studied in the context of temporal graphs, such as finding small `separators in temporal graphs' \cite{zschoche2020complexity, fluschnik2020temporal}, the travelling salesman problem \cite{michail2016traveling}, the Steiner network problem \cite{khodaverdian2016steiner}, and many more.
\subsection{Structural Pattern Findings in Temporal Graphs}
For structural pattern mining of time varying graphs, there exist very few studies. As mentioned previously, the concept of a `clique' in static graphs has been extended to the $\Delta$\mbox{-}clique by Viard et al. \cite{viard2015revealing, viard2016computing} and used to analyze the time varying relationships among a group of school children. Later, Viard et al. \cite{viard2018enumerating} extended their study to temporal clique enumeration on link streams with durations. Recently, Banerjee and Pal \cite{banerjee2019enumeration, banerjee2020first} extended the notion of $\Delta$\mbox{-}clique to $(\Delta, \gamma)$\mbox{-}Clique, and their proposed methodology has been applied to analyze three different temporal network datasets. Recently, Molter et al. \cite{molter2019enumerating} extended the concept of an `isolated clique' to the context of temporal networks and proposed \emph{fixed parameter tractable} algorithms to enumerate such maximal cliques. To the best of our knowledge, there does not exist any further literature on structural pattern analysis in the context of temporal graphs. However, there exist studies on finding structural patterns other than cliques, such as plexes \cite{bentert2019listing}, core decomposition \cite{wu2015core}, and span cores \cite{galimberti2019span}.
\par In this paper, we study the problem of maximal $(\Delta, \gamma)$\mbox{-}clique enumeration when all the links of the temporal network are not known at the beginning and links become available after a time gap. Formally, we name this problem the Maximal $(\Delta, \gamma)$\mbox{-}Clique Updation Problem.
\section{Preliminaries and Problem Definitions} \label{Sec:PPD}
Here, we present some preliminary concepts required to understand the main study presented in this paper. Given a set $\mathcal{X}$, $\binom{\mathcal{X}}{2}$ denotes the set of all $2$ element subsets of $\mathcal{X}$. First, we start by defining `temporal network' in Definition \ref{Def:1}.
\begin{mydef}[Temporal Network] \label{Def:1} \cite{holme2012temporal}
A temporal network (also known as a time varying graph or link stream) is defined as a triplet $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{T})$, where $\mathcal{V}(\mathcal{G})$ and $\mathcal{E}(\mathcal{G})$ ($\mathcal{E}(\mathcal{G}) \subseteq \binom{\mathcal{V}(\mathcal{G})}{2} \times \mathcal{T}$) are the vertex and edge set of the network, and $\mathcal{T}$ is the time interval during which the network is observed. Throughout the paper, we use $|\mathcal{V}(\mathcal{G})|=n$ and $|\mathcal{E}(\mathcal{G})|=m$.
\end{mydef}
Basically, a temporal network is a collection of links of the form $(v_i, v_j, t)$, where $v_i, v_j \in \mathcal{V}(\mathcal{G})$ and $t$ is a timestamp in the time interval $\mathcal{T}$. Such a link signifies that there was a contact between $v_i$ and $v_j$ at time $t$. In our study, we assume that the network is observed in discrete time steps, and that throughout the observation the vertex set remains fixed while the edge set changes over time. We define $t^{min}=\underset{t}{argmin} \ (v_i,v_j,t) \in \mathcal{E}(\mathcal{G})$ over all the vertex pairs, and similarly, $t^{max}=\underset{t}{argmax} \ (v_i,v_j,t) \in \mathcal{E}(\mathcal{G})$. The difference between $t^{max}$ and $t^{min}$ is called the \textit{lifetime of the network} and denoted as $t_{L}$, i.e., $t_{L}=t^{max}-t^{min}$. For any two vertices $v_i, v_j \in \mathcal{V}(\mathcal{G})$, we say that there exists a static edge between $v_i$ and $v_j$ if $\exists t_{ij} \in [t^{min}, t^{max}]$ such that $(v_i,v_j, t_{ij}) \in \mathcal{E}(\mathcal{G})$. The frequency of a static edge is defined as the number of times it appears throughout the lifetime of the network, i.e., $f_{(v_iv_j)}=|\{(v_iv_j,t_{ij}): t_{ij} \in [t^{min}, t^{max}] \}|$. We use the term `link' to denote a temporal link and `edge' to denote a static edge.
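For concreteness, these quantities can be computed directly from the link collection. The following Python sketch is purely illustrative (the link list and variable names are our own, not from any of the paper's datasets):

```python
from collections import Counter

# A temporal network as a collection of links (v_i, v_j, t); discrete timestamps.
links = [(1, 2, 3), (1, 2, 7), (2, 3, 5), (1, 2, 9), (3, 4, 5)]

t_min = min(t for (_, _, t) in links)   # t^min: earliest contact over all pairs
t_max = max(t for (_, _, t) in links)   # t^max: latest contact over all pairs
t_L = t_max - t_min                     # lifetime of the network

# frequency f_(vi vj) of each static edge: how often it appears in [t^min, t^max]
freq = Counter(frozenset((u, v)) for (u, v, _) in links)
```

Here the static edge $(v_1, v_2)$ has frequency $3$, while $(v_3, v_4)$ has frequency $1$.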
\par Given a temporal network $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{T})$, Viard et al. \cite{viard2015revealing,viard2016computing} introduced the notion of the $\Delta$\mbox{-}Clique, a natural extension of a clique in a static graph, stated in Definition \ref{Def:2}.
\begin{mydef}[$\Delta$\mbox{-}Clique] \label{Def:2} \cite{viard2015revealing} \cite{viard2016computing}
For a given time period $\Delta$, a $\Delta$\mbox{-}clique of the temporal network $\mathcal{G}$ is a vertex set, time interval pair $(\mathcal{X}, [t_a,t_b])$ with $\mathcal{X} \subseteq \mathcal{V}(\mathcal{G})$, $\vert \mathcal{X} \vert \geq 2$, and $[t_a,t_b] \subseteq \mathcal{T}$, such that $\forall v_i,v_j \in \mathcal{X}$ and $\forall t \in [t_a, max(t_b - \Delta, t_a)]$ there is a link $(v_i, v_j, t_{ij}) \in \mathcal{E}(\mathcal{G})$ with $t_{ij} \in [t, min (t + \Delta, t_b)]$.
\end{mydef}
Recently, the notion of $\Delta$\mbox{-}Clique has been extended by Banerjee and Pal \cite{banerjee2019enumeration} to $(\Delta, \gamma)$\mbox{-}Clique mentioned in Definition \ref{Def:3}.
\begin{mydef}[$(\Delta, \gamma)$\mbox{-}Clique]\label{Def:3}
For a given time period $\Delta$ and $\gamma \in \mathbb{Z}^{+}$, a $(\Delta, \gamma)$\mbox{-}Clique of the temporal network $\mathcal{G}$ is a vertex set, time interval pair $(\mathcal{X}, [t_a,t_b])$, where $\mathcal{X} \subseteq \mathcal{V}(\mathcal{G})$, $\vert \mathcal{X} \vert \geq 2$, and $[t_a,t_b] \subseteq \mathcal{T}$. Here, $\forall v_i,v_j \in \mathcal{X}$ and $\forall t \in [t_a, max(t_b - \Delta, t_a)]$, there must exist $\gamma$ or more links $(v_i, v_j, t_{ij}) \in \mathcal{E}(\mathcal{G})$ with $t_{ij} \in [t, min (t + \Delta, t_b)]$. It is easy to observe that a $(\Delta, \gamma)$\mbox{-}clique is a $\Delta$\mbox{-}clique when $\gamma=1$.
\end{mydef}
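Definition \ref{Def:3} can be checked mechanically from the links. The following Python sketch (assuming discrete integer timestamps, as in our setting; the function name is illustrative) tests whether a given vertex set and interval form a $(\Delta, \gamma)$\mbox{-}clique:

```python
from itertools import combinations

def is_delta_gamma_clique(links, X, ta, tb, delta, gamma):
    """Test whether (X, [ta, tb]) is a (Delta, gamma)-clique of the temporal
    network given by `links`, a collection of (u, v, t) triples."""
    for u, v in combinations(sorted(X), 2):
        ts = [t for (a, b, t) in links if {a, b} == {u, v}]
        # every t in [ta, max(tb - Delta, ta)] must open a window of length
        # min(t + Delta, tb) - t containing at least gamma links of (u, v)
        for t in range(ta, max(tb - delta, ta) + 1):
            hi = min(t + delta, tb)
            if sum(1 for x in ts if t <= x <= hi) < gamma:
                return False
    return True
```

For example, with links between $v_1$ and $v_2$ at times $0$, $2$, and $4$, the pair $(\{v_1, v_2\}, [0, 4])$ is a $(2, 1)$\mbox{-}clique but not a $(2, 2)$\mbox{-}clique, since the window $[1, 3]$ contains only one link.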
\begin{mydef}[Maximal $(\Delta, \gamma)$\mbox{-}Clique] \label{Def:MDG} \label{Def:maximal}
A $(\Delta, \gamma)$\mbox{-}clique $(\mathcal{X}, [t_a,t_b])$ of the temporal network $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{T})$ is maximal if none of the following is true.
\begin{itemize}
\item $\exists v \in \mathcal{V}(\mathcal{G}) \setminus \mathcal{X}$ such that $(\mathcal{X} \cup \{v\}, [t_a,t_b])$ is a $(\Delta, \gamma)$\mbox{-}Clique.
\item $(\mathcal{X}, [t_a - dt,t_b])$ is a $(\Delta, \gamma)$\mbox{-}clique. This condition applies only if $t_a - dt$ lies within the observation interval $\mathcal{T}$.
\item $(\mathcal{X}, [t_a,t_b + dt])$ is a $(\Delta, \gamma)$\mbox{-}clique. This condition applies only if $t_b + dt$ lies within the observation interval $\mathcal{T}$.
\end{itemize}
\end{mydef}
\begin{mydef}[Maximum $(\Delta, \gamma)$\mbox{-}Clique]
Let $\mathcal{S}$ be the set of all maximal $(\Delta, \gamma)$-cliques of the temporal network $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{T})$. Now, $(\mathcal{X}, [t_a,t_b]) \in \mathcal{S}$ is
\begin{itemize}
\item temporally maximum if $\forall (\mathcal{Y}, [t_a^{'},t_b^{'}]) \in \mathcal{S} \setminus \{(\mathcal{X}, [t_a,t_b])\}$, $t_b-t_a \geq t_b^{'} - t_a^{'}$,
\item cardinally maximum if $\forall (\mathcal{Y}, [t_a^{'},t_b^{'}]) \in \mathcal{S} \setminus \{(\mathcal{X}, [t_a,t_b])\}$, $\vert \mathcal{X} \vert \geq \vert \mathcal{Y} \vert$.
\end{itemize}
\end{itemize}
\end{mydef}
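Given $\mathcal{S}$, the two maximum notions reduce to simple selections. A Python sketch with an illustrative clique set (cliques represented as (vertex set, $t_a$, $t_b$) triples; the data below is made up for illustration):

```python
# An illustrative set S of maximal cliques: (vertex set, t_a, t_b) triples.
S = [
    (frozenset({1, 2}), 2, 11),     # duration 9
    (frozenset({1, 2, 3}), 12, 20), # duration 8, largest vertex set
    (frozenset({3, 4}), 8, 16),     # duration 8
]

# temporally maximum: largest interval length t_b - t_a
temporally_maximum = max(S, key=lambda c: c[2] - c[1])
# cardinally maximum: largest vertex set |X|
cardinally_maximum = max(S, key=lambda c: len(c[0]))
```

Note that the two maxima need not coincide: here the longest-lived clique is $(\{1,2\}, [2,11])$, while the largest one is $(\{1,2,3\}, [12,20])$.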
Now, as mentioned previously, all the links of the temporal network may not be available at the beginning of the execution of the $(\Delta, \gamma)$-clique enumeration algorithm. In this setting, to apply the existing algorithms for $(\Delta, \gamma)$\mbox{-}clique enumeration, we would need to wait until all the links are available. A better way to handle this problem is to adopt the updation approach. Assume that $T_{0}$ is the time stamp from which we are observing the network and that the first batch of links has arrived by time $T_1$. At time stamp $T_1$, without waiting for the entire link set, it is desirable to execute an existing $(\Delta, \gamma)$\mbox{-}clique enumeration algorithm to understand the contact pattern among the entities. Now, suppose the next batch of links has arrived by time stamp $T_2$. At this point, it is desirable to update the previously enumerated $(\Delta, \gamma)$\mbox{-}cliques to obtain the $(\Delta, \gamma)$\mbox{-}cliques till time stamp $T_2$. When the next set of links arrives, the recently updated cliques have to be updated again, and this process continues.
Now, it is easy to observe that the updation procedure is the same irrespective of the number of iterations. Hence, in this work we primarily focus on updating the cliques till time stamp $T_{1}$ to the cliques till time stamp $T_{2}$. Finally, we introduce the Maximal $(\Delta, \gamma)$\mbox{-}Clique Updation Problem, studied in this paper, in Definition \ref{Def:Prob}.
\begin{mydef}[Maximal $(\Delta, \gamma)$\mbox{-}Clique Updation Problem] \label{Def:Prob}
Given a temporal network $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{T})$ with its maximal $(\Delta,\gamma)$-cliques till time stamp $T_{1}$, and the links from time stamp $T_{1}$ to $T_{2}$, the Maximal $(\Delta, \gamma)$\mbox{-}Clique Updation Problem asks to update the $(\Delta, \gamma)$-cliques till time stamp $T_{1}$ to obtain the maximal $(\Delta, \gamma)$-cliques till time stamp $T_{2}$.
\end{mydef}
\section{Proposed Solution Approach} \label{Sec:PA}
In this section, we describe the proposed solution approach for updating maximal $(\Delta, \gamma)$\mbox{-}cliques.
Let $\mathcal{C}^{T_{1}}$, $\mathcal{C}^{T_{2}}$, and $\mathcal{C}^{T_{2} \setminus T_{1}}$ denote the sets of maximal cliques till time $T_{1}$, till time $T_{2}$, and from $T_1$ to $T_2$, respectively. Also assume that, for a clique $(\mathcal{X}, [t_a, t_b])$, the first and last $\gamma$-th occurrence timestamps of a static edge $(u, v)$ within $[t_a,t_b]$ are denoted by $f_{uv}^{\gamma}$ and $l_{uv}^{\gamma}$, respectively, where $u,v \in \mathcal{X}$. Initially, we establish the following important claims.
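The quantities $f_{uv}^{\gamma}$ and $l_{uv}^{\gamma}$ can be read off the sorted link timestamps of the static edge. A small Python sketch (function name illustrative):

```python
def gamma_occurrences(links, u, v, ta, tb, gamma):
    """Return (f_uv^gamma, l_uv^gamma): the first and last gamma-th occurrence
    timestamps of the static edge (u, v) within [ta, tb], or None if the edge
    has fewer than gamma links in the interval."""
    ts = sorted(t for (a, b, t) in links if {a, b} == {u, v} and ta <= t <= tb)
    if len(ts) < gamma:
        return None
    # gamma-th from the start, and gamma-th counting from the end
    return ts[gamma - 1], ts[-gamma]
```

For instance, with links between $u$ and $v$ at times $1, 3, 5, 7$ and $\gamma = 2$, we get $f_{uv}^{2} = 3$ and $l_{uv}^{2} = 5$.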
\begin{mylemma} \label{lemma:1}
If both of the following conditions are true
\begin{itemize}
\item for every $(\mathcal{X}, [t_a, t_b]) \in \mathcal{C}^{T_{1}}$ and every pair $u,v \in \mathcal{X}$ with $l^\gamma_{uv} +\Delta > T_1$, there does not exist a $t$ such that $(u,v,t) \in \mathcal{E}{(\mathcal{G})}$ and $t \in [l^1_{uv} , l^\gamma_{uv} + \Delta]$, and
\item for every $(\mathcal{X}, [t_a, t_b]) \in \mathcal{C}^{T_{2} \setminus T_{1}}$ and every pair $u,v \in \mathcal{X}$ with $f^\gamma_{uv}-\Delta < T_1$, there does not exist a $t$ such that $(u,v,t) \in \mathcal{E}{(\mathcal{G})}$ and $t \in [f^\gamma_{uv} - \Delta,f^{1}_{uv}]$.
\end{itemize}
then $\mathcal{C}^{T_{2}}=\mathcal{C}^{T_{1}} \ \cup \ \mathcal{C}^{T_{2} \setminus T_{1}} \ \cup \ \mathcal{C}_{[T_{1}-\Delta,T_{1}+\Delta]}^{*}$, where $\mathcal{C}_{[T_{1}-\Delta,T_{1}+\Delta]}^{*}$ denotes the set of maximal cliques within the time interval $[T_{1}-\Delta,T_{1}+\Delta]$ which are not contained as a sub clique in any other maximal cliques in $\mathcal{C}^{T_{1}}$ or $\mathcal{C}^{T_{2} \setminus T_{1}}$.
\end{mylemma}
\begin{proof}
\begin{enumerate}
\item Assume that $(\mathcal{X}, [t_a, t_b]) \in \mathcal{C}^{T_{1}}$. If the clique is extended beyond $T_{1}$, then one or both of the following can happen:
\begin{enumerate}
\item The clique $(\mathcal{X}, [t_a, t_b])$ gets extended till $t_c$, where $t_c > T_{1}$, and thus the new maximal clique becomes $(\mathcal{X}, [t_a, t_c])$. If this happens, then by the definition of a $(\Delta, \gamma)$\mbox{-}Clique, between every pair of vertices in $\mathcal{X}$ there must be at least $\gamma$ links in every $\Delta$ duration between $t_a$ and $t_c$. However, Condition $1$ states that for every $u,v \in \mathcal{X}$, there does not exist any link $(u,v,t)$ with $t \in [l^1_{uv} , l^\gamma_{uv} + \Delta]$. Hence, the clique $(\mathcal{X}, [t_a, t_b])$ cannot be extended beyond $t_b$.
\item The second possibility is that a new maximal clique $(\mathcal{Y}, [t_a, t_d])$ is split off from the clique $(\mathcal{X}, [t_a, t_b])$, where $\mathcal{Y} \subset \mathcal{X}$ and $t_d > T_{1}$. Now, it is important to observe that this case can only occur if there does not exist any maximal clique $(\mathcal{Y}, [t_a^{'}, t_b^{'}])$ in $\mathcal{C}^{T_{1}}$ with ($t_a^{'} < t_a$ and $ t_b^{'} \geq t_b$) or ($t_a^{'} \leq t_a $ and $t_b^{'} > t_b$). However, Condition $1$ states that for every $u,v \in \mathcal{Y}$, there does not exist any link $(u,v,t)$ with $t \in [l^1_{uv} , l^\gamma_{uv} + \Delta]$. Hence, the clique $(\mathcal{Y}, [t_a, t_d])$ cannot be formed.
\end{enumerate}
Both sub cases together imply that neither the maximal clique $(\mathcal{X}, [t_a, t_b])$ nor any sub clique of it can be extended beyond $T_{1}$.
\item Assume that $(\mathcal{X}, [t_a, t_b]) \in \mathcal{C}^{T_{2} \setminus T_{1}}$. If the clique is extended before $T_{1}$, then one or both of the following can happen:
\begin{enumerate}
\item The clique $(\mathcal{X}, [t_a, t_b])$ gets extended till $t_c$, where $t_c < T_{1}$, and thus the new maximal clique becomes $(\mathcal{X}, [t_c, t_b])$.
\item A new maximal clique $(\mathcal{Y}, [t_d, t_b])$ is formed from the clique $(\mathcal{X}, [t_a, t_b])$, where $\mathcal{Y} \subset \mathcal{X}$ and $t_d < T_{1}$. Now, it is important to observe that this case can only occur if there does not exist any maximal clique $(\mathcal{Y}, [t_a^{'}, t_b^{'}])$ in $\mathcal{C}^{T_{2} \setminus T_{1}}$ with ($t_a^{'} < t_a$ and $ t_b^{'} \geq t_b$) or ($t_a^{'} \leq t_a $ and $t_b^{'} > t_b$).
\end{enumerate}
Similar to Case 1, it can be shown that such an extension is not possible, as there does not exist any link with $t \in [f^{\gamma}_{uv} - \Delta, f^{1}_{uv}]$ for all $u, v \in \mathcal{X}$.
\end{enumerate}
Figure \ref{fig:lemma1} shows an example scenario for Lemma \ref{lemma:1} with $T_1 = 12$, $\Delta=4$, and $\gamma = 2$. The contents of $\mathcal{C}^{T_1}$, $\mathcal{C}^{T_2 \setminus T_1}$, and $\mathcal{C}_{[T_{1}-\Delta,T_{1}+\Delta]}^{*}$ are shown in blue, green, and red, respectively ($\mathcal{C}^{T_1} = \{ (\{v_1, v_2\}, [2, 11]), (\{v_2, v_3\}, [4, 13]), (\{v_3, v_4\}, [1, 9])\}$, $\mathcal{C}^{T_2 \setminus T_1} = \{ (\{v_1, v_2\}, [12, 21]), (\{v_1, v_3\}, [11, 20]), (\{v_1, v_2, v_3\}, [12, 20])\}$, $\mathcal{C}_{[T_{1}-\Delta,T_{1}+\Delta]}^{*} = \{ (\{v_3, v_4\}, [8, 16])\}$).
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=1.0]{lemma1_demo1.png}
\caption{Demonstration for Lemma \ref{lemma:1}}
\label{fig:lemma1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.1]{new_block_diagram}
\caption{Block Diagram of the Proposed `Edge on Clique' Approach}
\label{Fig:Demo_edge_on_clique}
\end{figure}
\par Assume that we have the maximal $(\Delta, \gamma)$-cliques for the links till time stamp $T_{1}$, and that the links $\mathcal{E}^{(T_1, T_2]}$ have just arrived. As mentioned previously, the goal is to enumerate all the maximal $(\Delta, \gamma)$-cliques for the links till time stamp $T_2$. As our proposed methodology updates the existing $(\Delta, \gamma)$-cliques with the new set of links, we call it `Edge on Clique', and it works as follows. It takes the maximal clique set till time stamp $T_1$ (i.e., $\mathcal{C}^{T_{1}}$), the set of cliques that may be extended from the previous time stamp ($\mathcal{C}^{T_{1}}_{ex}$), the links that arrived in the time duration from $T_1- \Delta$ to $T_2$ ($\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$), the time stamp till which the maximal cliques have been processed ($T_1$), the time stamp up to which the recent links have arrived ($T_2$), $\Delta$, and $\gamma$ as inputs. It produces the maximal $(\Delta, \gamma)$-cliques till time stamp $T_{2}$ ($\mathcal{C}^{T_{2}}$) and the cliques that may be extended in the next update (i.e., when the links $\mathcal{E}^{(T_2, T_3]}$ arrive for some $T_{3} > T_{2}$). The proposed method works in three parts: (i) first, it extends the right timestamps of all the cliques coming from $\mathcal{C}^{T_1}_{ex}$; (ii) second, it processes the links $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$ through initialization, extension of the right and left timestamps, and expansion of the vertex set; (iii) third, it removes the sub\mbox{-}cliques formed due to the absence of links before time $T_1 - \Delta$. Figure \ref{Fig:Demo_edge_on_clique} demonstrates the proposed `edge on clique' approach.
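The three parts compose as in the following high-level Python skeleton. This is only a structural sketch: the three phase callables are hypothetical stand-ins for the subroutines of the pseudocode, and the final union mirrors $\mathcal{C}^{T_{2}} = \mathcal{C}^{T_{2} \setminus T_{1}} \cup (\mathcal{C}^{T_{1}} \setminus \mathcal{C}^{T_{1}}_{ex})$:

```python
def edge_on_clique_update(C_T1, C_T1_ex, new_links, T1, T2,
                          extend_carried, enumerate_fresh, drop_dominated):
    """Structural sketch of the three-part 'Edge on Clique' update.
    extend_carried: grows the right timestamps of the carried-over cliques,
    enumerate_fresh: enumerates cliques on the links in [T1 - Delta, T2],
    drop_dominated:  removes non-maximal sub-cliques straddling T1.
    All three are hypothetical stand-ins, not the paper's exact routines."""
    part1, ex1 = extend_carried(C_T1_ex, new_links, T2)   # part (i)
    part2, ex2 = enumerate_fresh(new_links, T1, T2)       # part (ii)
    fresh = drop_dominated(part1 + part2, T1)             # part (iii)
    # C^{T2} = C^{T2 \ T1}  union  (C^{T1} \ C^{T1}_ex)
    C_T2 = fresh + [c for c in C_T1 if c not in C_T1_ex]
    return C_T2, ex1 + ex2
```

The carried-over cliques in $\mathcal{C}^{T_{1}}_{ex}$ are deliberately excluded from the unchanged part of $\mathcal{C}^{T_{1}}$, since their updated versions are produced by part (i).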
\begin{algorithm}[h]
\caption{Initialization Process of the `Edge on Clique' Approach} \label{Algo:edge_on_clique_ini}
\label{Algo:1}
\KwData{ The Clique Set till Time $T_1$; i.e.; $\mathcal{C}^{T_{1}}$, $\mathcal{C}^{T_{1}}_{ex}$, The link set $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$ of $\mathcal{G}(\mathcal{V}, \mathcal{E}, \mathcal{T}), \Delta, \gamma$, $T_1$, \text{ and } $T_2$.}
\KwResult{The maximal clique set till time $T_2$, i.e., $\mathcal{C}^{T_{2}}$, and the cliques to be extended in the next update, $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$. }
$\mathcal{C}^{I} \longleftarrow \mathcal{C}^{T_{1}}_{ex}$ \; $\mathcal{C}^{T_{2} \setminus T_{1}} \longleftarrow \emptyset, \ \mathcal{C}^{T_{2} \setminus T_{1}}_{ex} \longleftarrow \emptyset, \ \mathcal{C}_{im} \longleftarrow \mathcal{C}^{I}$\;
\While{$\mathcal{C}^{I} \neq \emptyset$}{
take and remove ($\mathcal{Z}$, [$t_x,t_y$]) from $\mathcal{C}^{I}$\;
${r\_flag} = $ \texttt{Extend\_Right\_TS(}$(\mathcal{Z}, [t_x,t_y])$, $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$\texttt{)}\;
\If{$ \text{r\_flag} == TRUE$}{
add $(\mathcal{Z}, [t_x, t_y])$ to $\mathcal{C}^{T_{2} \setminus T_{1}}$\;
}
\If{$t_y \ge T_2$}{
add $(\mathcal{Z}, [t_x, t_y])$ to $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$\;
}
}
$\mathcal{C}^{I} \longleftarrow \text{Execute Algorithm 1 of \cite{banerjee2019enumeration} on the links } \mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$\;
$\mathcal{C}_{im} \longleftarrow \mathcal{C}_{im} \cup \mathcal{C}^{I}$\;
\While{$\mathcal{C}^{I} \neq \emptyset$}{
take and remove ($\mathcal{Z}$, [$t_x,t_y$]) from $\mathcal{C}^{I}$\;
\If{$t_y - t_x == \Delta$}{
Prepare the static graph $G$ for the duration [$t_x,t_y$]\;
Associate $N_{G}(\mathcal{Z})$ to ($\mathcal{Z}$, [$t_x,t_y$])\;
}
${v\_flag} = $ \texttt{Expand\_Vertex\_Set(} $(\mathcal{Z}, [t_x,t_y])$, $N_{G}(\mathcal{Z})$ \texttt{)}\;
${l\_flag} = $ \texttt{Extend\_Left\_TS(}$(\mathcal{Z}, [t_x,t_y])$, $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$\texttt{)}\;
${r\_flag} = $ \texttt{Extend\_Right\_TS(}$(\mathcal{Z}, [t_x,t_y])$, $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$\texttt{)}\;
\If{$\text{v\_flag} \bigwedge \text{l\_flag} \bigwedge \text{r\_flag} == TRUE$}{
add $(\mathcal{Z}, [t_x, t_y])$ to $\mathcal{C}^{T_{2} \setminus T_{1}}$\;
}
\If{$t_y \ge T_2$}{
add $(\mathcal{Z}, [t_x, t_y])$ to $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$\;
}
}
\texttt{EOC\_Remove\_Sub\_Cliques(}$T_1$\texttt{)}\;
$\mathcal{C}^{T_{2}} \longleftarrow \mathcal{C}^{T_{2} \setminus T_{1}} \cup (\mathcal{C}^{T_{1}} \setminus \mathcal{C}^{T_{1}}_{ex}) $ \;
\textbf{return} $\mathcal{C}^{T_{2}}$, $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$\;
\end{algorithm}
\paragraph{Description of the Proposed Approach :} First, we initialize the clique set $\mathcal{C}^{I}$ with the cliques coming from $\mathcal{C}^{T_1}_{ex}$. These cliques are formed by the links of $\mathcal{E}^{T_1}$ and can be extended beyond time stamp $T_{1}$ without violating the properties of a $(\Delta, \gamma)$-clique. Hence, we first process these cliques to extend their right timestamps with the links arriving till timestamp $T_2$. For the enumeration process, four clique sets are maintained: $\mathcal{C}^{I}$ (holding the cliques yet to be processed), $\mathcal{C}_{im}$ (keeping the cliques already processed or yet to be processed), $\mathcal{C}^{T_{2} \setminus T_{1}}$ (storing the maximal cliques whose links are entirely or partially present in $\mathcal{E}^{[T_1 - \Delta, T_2]}$), and $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$ (storing the cliques to be extended in the next update). These clique sets are initialized in Line 2. Here, we highlight that all these clique sets are `global', which means that the subroutines invoked in Algorithm \ref{Algo:1} also have access to them. Next, in the while loop in Lines 3 to 9, we process the cliques of $\mathcal{C}^{I}$ until $\mathcal{C}^{I}$ is empty. An arbitrary clique $(\mathcal{Z}, [t_x,t_y])$ is taken out of $\mathcal{C}^{I}$, and we try to extend its right time stamp with the \texttt{Extend\_Right\_TS()} procedure (Procedure \ref{proc:rightts}). If the extension is possible, the new clique is added to $\mathcal{C}_{im}$ and $\mathcal{C}^{I}$, and the procedure sets $r\_flag$ to $FALSE$ to indicate that $(\mathcal{Z}, [t_x,t_y])$ is non\mbox{-}maximal. Otherwise, $(\mathcal{Z}, [t_x,t_y])$ is maximal and is added to $\mathcal{C}^{T_{2} \setminus T_{1}}$ in Line 7 of Algorithm \ref{Algo:1}. Now, if $t_y \geq T_2$ for the currently popped clique, it is added to $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$ as a candidate for extension.
This completes one step towards enumerating all the maximal cliques with the links till $T_2$.
\par Next, we execute the `initialization' procedure, i.e., Algorithm 1 from \cite{banerjee2019enumeration}, on the links $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$, and the generated cliques are kept in $\mathcal{C}^{I}$. The cliques in $\mathcal{C}^{I}$ hold the following properties: i) the cardinality of the vertex set is 2, ii) the time interval of each $(\Delta, \gamma)$-clique is of exact duration $\Delta$, and iii) each clique has exactly $\gamma$ links within the time interval (this follows from Lemma 1 in \cite{banerjee2019enumeration}). In Line 10, the initialized clique set $\mathcal{C}^{I}$ is the complete seed set for the enumeration process with the links $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$. The correctness of the initialization follows from Lemma 6 of \cite{banerjee2019enumeration}. In Line 11, $\mathcal{C}_{im}$ is updated with the cliques of $\mathcal{C}^{I}$ to start the next step of the enumeration process.
\par Next, an arbitrary clique $(\mathcal{Z}, [t_x,t_y])$ is taken out of $\mathcal{C}^{I}$ in Line $13$. If $t_y - t_x=\Delta$, the static graph $G$ is built from the links present within $[t_x, t_y]$ in Line 15. The neighboring vertices in $G$ that have communicated at least $\gamma$ times with any of the vertices in $\mathcal{Z}$ form the candidate vertex set $N_{G}(\mathcal{Z})$, which is associated with the clique $(\mathcal{Z}, [t_x,t_y])$ in Line 16. Next, the algorithm tries to expand $(\mathcal{Z}, [t_x,t_y])$ in the following three ways: (i) by adding vertices (invoking the function \texttt{Expand\_Vertex\_Set()}), (ii) by stretching $t_x$ towards the left (invoking the function \texttt{Extend\_Left\_TS()}), and (iii) by stretching $t_y$ towards the right (invoking the function \texttt{Extend\_Right\_TS()}). \texttt{Expand\_Vertex\_Set()} selects the neighbors of $\mathcal{Z}$ from the static graph $G$ within the interval $[t_x, t_y]$ and checks whether the new (vertex set, time interval) tuple holds the $(\Delta, \gamma)$-clique property. If the vertex addition is possible, the new clique is added to $\mathcal{C}_{im}$ and $\mathcal{C}^{I}$, and $(\mathcal{Z}, [t_x,t_y])$ is declared non\mbox{-}maximal by setting $v\_flag$ to $FALSE$. \texttt{Extend\_Left\_TS()} tries to extend $t_x$ to $t_{xl} - \Delta$ ($t_{xl}$ being the latest among the $\gamma$-th occurrence time stamps within $[t_x-1, t_y]$ over all vertex pairs in $\mathcal{Z}$) using the links $\mathcal{E}^{ [T_{1} - \Delta, T_{2}]}$. If the new (vertex set, time interval) tuple holds the $(\Delta, \gamma)$-clique property, the new clique is added to $\mathcal{C}_{im}$ and $\mathcal{C}^{I}$, and $(\mathcal{Z}, [t_x,t_y])$ is declared non\mbox{-}maximal by setting $l\_flag$ to $FALSE$.
Similarly, \texttt{Extend\_Right\_TS()} extends $t_y$ to $t_{yr} + \Delta$ ($t_{yr}$ being the earliest among the last $\gamma$-th occurrence time stamps within $[t_x, t_y + 1]$ over all vertex pairs in $\mathcal{Z}$) and sets $r\_flag$ to $FALSE$ if the extension is possible. If any of these functions returns $FALSE$, the clique $(\mathcal{Z}, [t_x,t_y])$ is not maximal. Hence, we perform a logical `AND' operation over the flags in Line 20, and if the outcome is $TRUE$, the clique $(\mathcal{Z}, [t_x,t_y])$ is a maximal clique and is added to $\mathcal{C}^{T_{2} \setminus T_{1}}$. Next, we verify in Line 22 whether the current clique $(\mathcal{Z}, [t_x, t_y])$ can be extended in the next update cycle. If $t_y$ is greater than or equal to $T_2$, it is included in $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$. The correctness of this condition is given in Lemma \ref{lemma:extendend}. This process is repeated until $\mathcal{C}^{I}$ is empty.
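The stretch computed inside \texttt{Extend\_Right\_TS()} can be sketched as follows. This is a simplified illustration of only the endpoint computation (all names are ours); the actual procedure additionally re-verifies the $(\Delta, \gamma)$-clique property for the proposed interval before accepting it:

```python
from itertools import combinations

def propose_right_extension(links, Z, tx, ty, delta, gamma):
    """Candidate new right endpoint t_yr + delta, where t_yr is the earliest
    among the last gamma-th occurrence timestamps within [tx, ty + 1] over
    all vertex pairs in Z; returns None when no extension is proposed."""
    last_gamma = []
    for u, v in combinations(sorted(Z), 2):
        ts = sorted(t for (a, b, t) in links
                    if {a, b} == {u, v} and tx <= t <= ty + 1)
        if len(ts) < gamma:
            return None                # some pair lacks gamma links
        last_gamma.append(ts[-gamma])  # last gamma-th occurrence for this pair
    new_ty = min(last_gamma) + delta
    return new_ty if new_ty > ty else None
```

For example, with links between the two vertices of $\mathcal{Z}$ at times $0, 2, 4$, interval $[0,4]$, $\gamma = 2$, and $\Delta = 3$, the last $\gamma$-th occurrence is at time $2$, so the proposed new right endpoint is $2 + 3 = 5$.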
Now, it is important to observe that the maximal clique set $\mathcal{C}^{T_{2}}$ can be obtained as the union of $(\mathcal{C}^{T_{1}} \setminus \mathcal{C}^{T_{1}}_{ex})$ and $\mathcal{C}^{T_{2} \setminus T_{1}}$. However, in doing so we may end up with some non\mbox{-}maximal cliques as well, and it is important to remove such cliques to obtain the final clique set. Hence, the \texttt{EOC\_Remove\_Sub\_Cliques()} function is invoked. For a fixed $\mathcal{Z}$, it keeps the maximum duration, and for a fixed $[t_x, t_y]$, it keeps the maximum number of vertices in $\mathcal{Z}$. In this way, it ends up with the maximal cliques only. Now, the complete maximal clique set $\mathcal{C}^{T_{2}}$ is computed in Line 25. Finally, the algorithm returns the set of maximal cliques till $T_2$ ($\mathcal{C}^{T_{2}}$) and the cliques that may be extended in the next update cycle ($\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$) as output.
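The domination check underlying \texttt{EOC\_Remove\_Sub\_Cliques()} can be illustrated with a brute\mbox{-}force sketch, quadratic in the number of cliques (the names are illustrative, and the real routine restricts attention to cliques straddling $T_1$ rather than scanning all pairs):

```python
def remove_sub_cliques(cliques):
    """Keep only cliques not dominated by another clique: (X, [ta, tb]) is
    dominated if some other (Y, [ua, ub]) satisfies X <= Y, ua <= ta, tb <= ub."""
    maximal = []
    for (X, ta, tb) in cliques:
        dominated = any(
            (Y, ua, ub) != (X, ta, tb) and X <= Y and ua <= ta and tb <= ub
            for (Y, ua, ub) in cliques
        )
        if not dominated:
            maximal.append((X, ta, tb))
    return maximal
```

For example, $(\{1,2\}, [0,5])$ is dominated both by $(\{1,2,3\}, [0,5])$ (larger vertex set) and by $(\{1,2\}, [0,9])$ (longer interval), so only the latter two survive.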
We highlight that for the first cycle there do not exist any prior clique sets, and Algorithm \ref{Algo:1} executes directly on the links $\mathcal{E}^{T_{1}}$. Hence, we redefine the inputs for the first cycle and introduce the notation $T_0$ for simplicity: the role played by $T_1$ in the second update cycle is played here by $T_0$, and similarly $T_2$ is replaced by $T_1$. So, the inputs of Algorithm \ref{Algo:1} become the previous clique sets $\mathcal{C}^{T_{0}}$ and $\mathcal{C}^{T_{0}}_{ex}$, both $\emptyset$, the link set $\mathcal{E}^{T_{1}}$, $T_0$ set to `$-1$', $T_1$, $\Delta$, and $\gamma$. Hence, the while loop in Lines 3 to 9 does not execute, and the algorithm runs as the procedure described in \cite{banerjee2019enumeration}, with the extra operations in Lines 22 and 23 to obtain $\mathcal{C}^{T_{1} \setminus T_{0}}_{ex}$. The first update cycle does not require any removal of non\mbox{-}maximal cliques, so the function \texttt{EOC\_Remove\_Sub\_Cliques(}$T_0 = -1$\texttt{)} returns from the if condition in its Line 2. Line 25 of Algorithm \ref{Algo:1} then copies the entire maximal clique set $\mathcal{C}^{T_{1} \setminus T_{0}}$ to $\mathcal{C}^{T_{1}}$. Finally, it returns $\mathcal{C}^{T_{1}}$ and $\mathcal{C}^{T_{1} \setminus T_{0}}_{ex}$ as the outputs of the first cycle. The clique set $\mathcal{C}^{T_{1} \setminus T_{0}}_{ex}$ is used as $\mathcal{C}^{T_{1}}_{ex}$ in the next update cycle. Based on the working principle of Algorithm \ref{Algo:1}, we have the following important observation, which is used to prove Lemma \ref{lemma:4}.
\begin{tcolorbox} \label{note:1}
\begin{itemize}
\item \textbf{Note:} The clique extending procedures in Algorithm \ref{Algo:1} (i.e., \texttt{Expand\_Vertex\_Set()}, \texttt{Extend\_Left\_TS()}, and \texttt{Extend\_Right\_TS()}), are independent of their order of execution.
\end{itemize}
\end{tcolorbox}
\begin{procedure}[htb]
\caption{Expanding a clique by vertex addition()} \label{proc:nodeadd}
\SetKwFunction{FMain}{Expand\_Vertex\_Set}
\SetKwProg{Pn}{Function}{:}{\KwRet}
\Pn{\FMain{ $(\mathcal{Z}, [t_x, t_y])$, $N_G(\mathcal{Z})$}}{
$\text{flag} = TRUE$\;
\For{\text{All} $u \in N_G(\mathcal{Z}) \setminus \mathcal{Z}$}{
\If{$(\mathcal{Z} \cup \{u\}, [t_x, t_y])$ is a $(\Delta, \gamma)$\mbox{-}clique}{
$\text{flag} = FALSE$\;
\If{$(\mathcal{Z} \cup \{u\}, [t_x, t_y]) \notin \mathcal{C}_{im}$}{
add $(\mathcal{Z} \cup \{u\}, [t_x, t_y])$ to $\mathcal{C}^I$ and $\mathcal{C}_{im}$\;
}
}
}
\KwRet \text{flag}\;
}
\end{procedure}
\paragraph{Procedure: \texttt{Expand\_Vertex\_Set()}} This procedure takes a $(\Delta, \gamma)$\mbox{-}clique from $\mathcal{C}^{I}$ and its associated candidate vertex set for expanding the clique, as inputs, and returns a Boolean flag indicating whether the input clique is maximal or not. For a clique $(\mathcal{Z}, [t_x, t_y])$, Line 4 verifies whether a vertex $u$ from $N_{G}(\mathcal{Z})$ can be added to $\mathcal{Z}$ such that $\mathcal{Z} \cup \{u\}$ holds the $(\Delta, \gamma)$-clique property within $[t_x, t_y]$. If so, the flag is set to $FALSE$, and the new clique is added to $\mathcal{C}^{I}$ and $\mathcal{C}_{im}$ if it does not already exist in $\mathcal{C}_{im}$.
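The logic of this procedure can be illustrated with a minimal Python sketch (our own illustration, not the paper's implementation; it assumes cliques are `(frozenset, t_x, t_y)` triples, links are `(u, v, t)` triples, and checks the $(\Delta, \gamma)$ property by brute force over every window):

```python
from itertools import combinations

def is_delta_gamma_clique(Z, tx, ty, links, delta, gamma):
    """Every pair in Z must have >= gamma links inside every window of
    length delta within [tx, ty] (assumes ty - tx >= delta)."""
    for u, v in combinations(sorted(Z), 2):
        times = [t for (a, b, t) in links
                 if {a, b} == {u, v} and tx <= t <= ty]
        for w in range(tx, ty - delta + 1):
            if sum(1 for t in times if w <= t <= w + delta) < gamma:
                return False
    return True

def expand_vertex_set(clique, neighbours, links, delta, gamma, c_i, c_im):
    """Try every candidate vertex; return True iff the clique is vertex-maximal."""
    Z, tx, ty = clique
    flag = True
    for u in neighbours - Z:
        cand = (Z | {u}, tx, ty)
        if is_delta_gamma_clique(cand[0], tx, ty, links, delta, gamma):
            flag = False
            if cand not in c_im:   # avoid re-inserting a known clique
                c_i.append(cand)
                c_im.add(cand)
    return flag
```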
\begin{mylemma} \label{lemma:t1-delta}
For updating $\mathcal{C}^{T_1}$ to obtain $\mathcal{C}^{T_2}$, it is enough to have the links for the duration $[T_1 - \Delta, T_2]$.
\end{mylemma}
\begin{proof}
We have to show that it is enough to process $\mathcal{E}^{[T_1 - \Delta, T_2]}$ to get $\mathcal{C}^{T_2}$, along with the inputs $\mathcal{C}^{T_1}$ and $\mathcal{C}^{T_1}_{ex}$. To prove the statement, we need to show the following two points:
\begin{enumerate}
\item It does not miss any clique required to obtain the maximal cliques.
\item It does not perform any redundant processing while building $\mathcal{C}^{T_2 \setminus T_1}$.
\end{enumerate}
During the enumeration process, the initialized cliques are of size $2$ and their time duration is $\Delta$; each holds exactly $\gamma$ occurrences of the vertex pair within that duration. They are also associated with their respective candidate vertex sets, namely the neighbors of the vertex pair in that $\Delta$ duration. Now, the extension of a clique can proceed in three directions, as discussed (vertex addition, extending its start time to the left, extending its end timestamp to the right). In this process, different initialized cliques merge into the extended one until it reaches maximality. In Algorithm \ref{Algo:1}, the cliques from $\mathcal{C}^{T_1}_{ex}$ are extended in the right timestamp only. Now, we know that the smallest possible $t_y$ for an extension from the $\mathcal{C}^{T_1}$ side to $\mathcal{C}^{T_2 \setminus T_1}$ is $T_1$. Assume a clique $(\mathcal{Z}, [t_x, t_y]) \in \mathcal{C}^{T_1}_{ex}$ with $t_y = T_1$ and $t_{yr}=T_1 - \Delta$. To extend $t_y$ to the right, it needs to observe the existence of $\gamma$ links in $[t_{yr} + 1, t_{yr} + 1 + \Delta ]$, i.e., $[T_1 - \Delta + 1, T_1 + 1]$. This requires having the links at least from the timestamp $T_1 - \Delta$. In Procedure \ref{proc:rightts}, this is calculated in Line 3. So, it is necessary to have $\mathcal{E}^{[T_1 - \Delta, T_2]}$ to obtain $\mathcal{C}^{T_2}$.
\par Another case that can arise here is that the $\gamma$ edges of a vertex pair are separated by the partition at $T_1$. Hence, the cliques involving those vertices are not initialized while building $\mathcal{C}^{T_1}$, and consequently do not exist in $\mathcal{C}^{T_1}_{ex}$. To cover this possibility, for every new link in $(T_1, T_1+\Delta]$ one needs to verify the existence of $\gamma$ edges in the corresponding previous $\Delta$ duration for that vertex pair and initialize the $(\Delta, \gamma)$-cliques while building $\mathcal{C}^{T_2 \setminus T_1}$. So, it is necessary to have the set of links $\mathcal{E}^{[T_1 - \Delta, T_2]}$ in computing $\mathcal{C}^{T_2}$.
\par Now, any other cliques initialized with the overlapping links are redundant. Let us assume the set of links used for preparing $\mathcal{C}^{T_2 \setminus T_1}$ is $\mathcal{E}^{[T_1 - \Delta - k, T_2]}$, where $k > 0$. Here, the links between $T_1 - \Delta - k$ and $T_1$ are processed twice: once while preparing $\mathcal{C}^{T_1}$ and again for $\mathcal{C}^{T_2 \setminus T_1}$. We have already shown that the links in $[T_1 - \Delta, T_1]$ are necessary to avoid missing initialized cliques. Hence, adding more links to the left of $T_1-\Delta$ is redundant, and $\mathcal{E}^{[T_1 - \Delta, T_2]}$ is sufficient to enumerate all the maximal cliques till $T_2$ by the `Edge on Cliques' method.
\end{proof}
\textbf{Note:} If any dataset follows the condition of Lemma \ref{lemma:1}, the overlapping links can be ignored.
\begin{mylemma} \label{lemma:3}
The candidate vertex set associated with each $(\Delta, \gamma)$-clique of $\mathcal{C}^{I}$ is complete for the final maximal clique set.
\end{mylemma}
\begin{proof}
To prove the statement by contradiction, assume that for some clique $(\mathcal{Z}, [t_x, t_y])$, the associated candidate vertex set is incomplete. In Algorithm \ref{Algo:1}, the associated candidate set $N_{G}(\mathcal{Z})$ defines the set of vertices which may be added to $\mathcal{Z}$ for vertex set expansion of the clique. Now, $N_{G}(\mathcal{Z})$ contains the vertices that are each a neighbor (connected at least $\gamma$ times within $[t_x, t_y]$) of at least one vertex of $\mathcal{Z}$ in the duration from $t_x$ to $t_y$. It is formed during the initialization of the clique, when $\vert \mathcal{Z} \vert = 2$ and $t_y - t_x = \Delta$, and propagated further with each of its expansions. Now, we need to show that $N_{G}(\mathcal{Z})$ is complete.
\par Assume a vertex $u \notin N_{G}(\mathcal{Z})$ can be added to $\mathcal{Z}$ such that $(\mathcal{Z} \cup \{u\}, [t_x, t_y])$ forms a $(\Delta, \gamma)$-clique. Then $u$ has to be a neighbor of all the vertices of $\mathcal{Z}$; let one such vertex be $v$. As $v \in \mathcal{Z}$, there have to be $\gamma$ edges of a vertex pair $(v, w)$ in a $\Delta$ duration with $w \in \mathcal{Z}$, $[t_{x}^{'}, t_{y}^{'}] \subseteq [t_x, t_y]$, and $t_{y}^{'} - t_{x}^{'} = \Delta$. Hence, $(\{v, w\}, [t_{x}^{'}, t_{y}^{'}])$ is a $(\Delta, \gamma)$-clique and can be one of the initializations leading to $(\mathcal{Z}, [t_x, t_y])$. Hence, $u$ has to be in the candidate set of $(\{v, w\}, [t_{x}^{'}, t_{y}^{'}])$, which is carried forward into $N_{G}(\mathcal{Z})$. Hence, $N_{G}(\mathcal{Z})$ is complete.
\end{proof}
\begin{mylemma} \label{Lemma:5}
The candidate vertex set associated with each $(\Delta, \gamma)$-clique of $\mathcal{C}^{I}$ is correct for the final maximal clique set.
\end{mylemma}
\begin{proof}
In the proof of Lemma \ref{lemma:3}, it has been discussed that for any clique in $\mathcal{C}^I$, the candidate vertex set is associated at the time of its initialization and propagated further (without modification) to the new extended cliques, until it reaches maximality. To prove that the candidate vertex set is correct, we need to show that it does not miss any maximal $(\Delta, \gamma)$-clique in $\mathcal{C}^{T_{2}}$. Let us assume that $(\mathcal{Z}, [t_x, t_y]) \notin \mathcal{C}^{T_{2}}$ is a maximal $(\Delta, \gamma)$-clique. Now, either of the following two situations can occur.
\begin{itemize}
\item \textbf{Case 1:} $[t_x, t_y] \subseteq [T_1 - \Delta, T_2]$
\item \textbf{Case 2:} $t_x < T_1 - \Delta$ and $t_y \geq T_1$
\end{itemize}
\par For Case 1, as $(\mathcal{Z}, [t_x, t_y])$ is a $(\Delta, \gamma)$-clique, all the vertices have to be linked at least $\gamma$ times in each $\Delta$ duration within $[t_x, t_y]$. So, there exist $ \binom{\vert \mathcal{Z} \vert}{2}$ possible vertex sets for the initialized cliques within $[t_x, t_y]$. Now, for any such initial clique $A_{0} = (\mathcal{Z}^{'}, [t_{x}^{'}, t_{y}^{'}])$ in $\mathcal{C}^{I}$, such that $\mathcal{Z}^{'} \subseteq \mathcal{Z}$, $\vert \mathcal{Z}^{'} \vert =2$, $[t_{x}^{'}, t_{y}^{'}] \subseteq [t_x, t_y]$, and $t_{y}^{'} - t_{x}^{'} = \Delta$, the associated candidate set $\mathcal{N}_G(\mathcal{Z}^{'})$ has to contain all the vertices of $\mathcal{Z}$. This holds as all the vertices are connected in the static graph $G$ generated in $[t_{x}^{'}, t_{y}^{'}]$ with the link set $\mathcal{E}^{[T_1-\Delta, T_2]}$. Now, executing Procedure \ref{proc:nodeadd} with $A_{0}$ generates all the cliques of size 3 as $A_{1} = (\mathcal{Z}^{''}, [t_{x}^{'}, t_{y}^{'}])$. Repeating this process forms the sequence $A_{0} \longrightarrow A_{1} \longrightarrow A_{2} \longrightarrow \ldots \longrightarrow A_{k-2}$, where $A_{k-2} = (\mathcal{Z}, [t_x^{'}, t_y^{'}])$ with $\vert \mathcal{Z} \vert = k$. Now, as per Lemma 7 of \cite{banerjee2019enumeration}, $(\mathcal{Z}, [t_x, t_y])$ will be obtained from $(\mathcal{Z}, [t_x^{'}, t_y^{'}])$. So, $(\mathcal{Z}, [t_x, t_y])$ will be in $\mathcal{C}^{T_2 \setminus T_1}$. Also, as the clique is maximal, it will not be removed. Hence, $(\mathcal{Z}, [t_x, t_y])$ will be present in $\mathcal{C}^{T_{2}}$.
\par For Case 2, the clique $(\mathcal{Z}, [t_x, t_y])$ has to emerge from a clique initialized through $\mathcal{C}^{T_1}_{ex}$. Assume there exists a clique $B_{0} = (\mathcal{Z}^{'}, [t_{x}^{'}, t_{y}^{'}])$ with $[t_x^{'}, t_y^{'}] \subset [T_0, T_1]$, $\vert \mathcal{Z}^{'} \vert =2$, and $t_{y}^{'} - t_{x}^{'} = \Delta$. Similar to Case 1, it can be shown that $B_{0}$ forms $B_{k-2} = (\mathcal{Z}, [t_{x}^{'}, t_{y}^{'}])$ while building $\mathcal{C}^{T_1}$. Now, as per Lemma 7 of \cite{banerjee2019enumeration}, $B_{k-2}$ reaches its maximal form $B_{k} = (\mathcal{Z}, [t_{x}, t_{y}^{''}])$ with $\mathcal{E}^{T_1}$, where $t_y^{''} \geq T_1$, and $B_{k}$ is added to $\mathcal{C}^{T_1}_{ex}$. Next, the cliques in $\mathcal{C}^{T_1}_{ex}$ are expanded in the right timestamp only, to reach $t_y$. Hence, $(\mathcal{Z}, [t_x, t_y])$ will be present in $\mathcal{C}^{T_2}$. This completes the proof of the lemma statement.
\end{proof}
Together, Lemmas \ref{lemma:3} and \ref{Lemma:5} imply that the candidate vertex set associated with each of the cliques of $\mathcal{C}^{I}$ is correct and complete.
\begin{mylemma} \label{lemma:4}
There exists a maximal clique $(\mathcal{Z}, [t_x, t_y]) \in \mathcal{C}^{T_2}$ such that $t_x < T_1$ and $t_y \geq T_1$, iff there exists a clique $(\mathcal{Z}, [t_x, t_{y}^{'}]) \in \mathcal{C}^{T_1}_{ex}$ with $t_{y}^{'} \leq t_{y}$.
\end{mylemma}
\begin{proof}
First, we prove the forward direction of the lemma statement, i.e., if $(\mathcal{Z}, [t_x, t_{y}^{'}]) \in \mathcal{C}^{T_1}_{ex}$, then $(\mathcal{Z}, [t_x, t_y]) \in \mathcal{C}^{T_2}$. This amounts to showing that the right timestamp extension in Procedure \ref{proc:rightts} is correct and possible while building $\mathcal{C}^{T_2 \setminus T_1}$. Now, given the links that appeared in the last $\Delta$ duration of any clique, the correctness of Procedure \ref{proc:rightts} is self-explanatory. Also, the possibility of extension for the cliques coming from $\mathcal{C}^{T_1}_{ex}$ is ensured by Lemma \ref{lemma:t1-delta}. Hence, $(\mathcal{Z}, [t_x, t_y])$ will be in $\mathcal{C}^{T_2 \setminus T_1}$. Now, as $(\mathcal{Z}, [t_x, t_y])$ is maximal, it is not contained in any other clique, either by vertex set or by time interval. So, it has to be present in $\mathcal{C}^{T_2}$.
\par Next, we prove the reverse direction, i.e., if $(\mathcal{Z}, [t_x, t_y]) \in \mathcal{C}^{T_2}$, then $(\mathcal{Z}, [t_x, t_{y}^{'}]) \in \mathcal{C}^{T_1}_{ex}$. Assume that we have the entire link set till timestamp $T_2$. Then, by Lemma 7 of \cite{banerjee2019enumeration}, for $(\mathcal{Z}, [t_x, t_y])$ there must exist a clique $A_1= (\mathcal{Z}, [t_x^{'}, t_x^{'} + \Delta])$, where $t_{x}^{'} \in [t_x, t_y]$. In Lemmas \ref{lemma:3} and \ref{Lemma:5}, we have already shown that the candidate vertex set associated with each $(\Delta, \gamma)$-clique is correct and complete. So, for the duration $[t_{x}^{'}, t_{x}^{'} + \Delta]$ there can be $\binom{| \mathcal{Z} |}{2}$ possible initialized cliques, and each of them will have a candidate set containing all the vertices of $\mathcal{Z}$. Let one such clique be $A_0= (\{u,v\}, [t_x^{'}, t_x^{'} + \Delta])$. By executing only Procedure \ref{proc:nodeadd}, the initialized clique $A_0$ will generate $A_1= (\mathcal{Z}, [t_x^{'}, t_x^{'} + \Delta])$. As highlighted in the description of Algorithm \ref{Algo:1}, the order of execution of the enumeration process is irrelevant, so we make the following arguments.
By Procedure \ref{proc:leftts}, $A_1$ will be extended to $A_2 = (\mathcal{Z}, [t_x, t_x^{'} + \Delta])$ by extending its left timestamp. Next, $A_2$ will be extended to $(\mathcal{Z}, [t_x, t_y])$ by Procedure \ref{proc:rightts}. This extension verifies the $(\Delta, \gamma)$-clique property in every last $\Delta$ duration and passes through a clique state $A_3 = (\mathcal{Z}, [t_x, t_y^{'}])$.
Now, while processing $\mathcal{C}^{T_1}$, $A_2$ will not be able to reach $(\mathcal{Z}, [t_x, t_y])$. However, $A_3$ must be reached from $A_2$ while building $\mathcal{C}^{T_1}$. Hence, it will be included in $\mathcal{C}^{T_1}_{ex}$, as $t_y^{'} \geq T_1$. Thus, the extension of $A_3$ will be performed in building $\mathcal{C}^{T_2 \setminus T_1}$. So, $\mathcal{C}^{T_1}_{ex}$ will contain the clique $(\mathcal{Z}, [t_x, t_{y}^{'}])$. This concludes the proof of the lemma statement.
\end{proof}
\begin{mylemma} \label{lemma:6}
It is sufficient to expand only the right timestamp of a clique in $\mathcal{C}^{T_1}_{ex}$ to build the final maximal clique set $\mathcal{C}^{T_2}$.
\end{mylemma}
\begin{proof}
To prove this statement, we need to show that Algorithm \ref{Algo:1} does not miss any maximal clique in $\mathcal{C}^{T_2}$ whose left timestamp is less than $T_1$ and whose right timestamp is greater than or equal to $T_1$. We prove this statement by contradiction. Let us assume that $(\mathcal{Z}, [t_x, t_y]) \notin \mathcal{C}^{T_{2}}$ is a maximal $(\Delta, \gamma)$-clique with $t_x < T_1$ and $t_y \geq T_1$.
\par Now, $\mathcal{C}^{T_1}_{ex}$ contains all the cliques with $t_y \geq T_1$ that are formed while running Algorithm \ref{Algo:1} with the link set $\mathcal{E}^{T_1}$. Hence, we need to show that there must be a clique $(\mathcal{Z}, [t_x, t_y^{'}]) \in \mathcal{C}^{T_1}_{ex}$ with $t_{y}^{'} \leq t_y$, whose right timestamp extension is sufficient to obtain the maximal clique $(\mathcal{Z}, [t_x, t_y])$. According to Lemma \ref{lemma:4}, such a clique $(\mathcal{Z}, [t_x, t_y^{'}])$ exists in $\mathcal{C}^{T_1}_{ex}$, and consequently $(\mathcal{Z}, [t_x, t_y])$ belongs to $\mathcal{C}^{T_2}$, which contradicts the assumption. Hence, the statement is proved.
\end{proof}
\begin{procedure}[htb]
\caption{Extending a clique towards right in the time horizon()} \label{proc:rightts}
\SetKwFunction{FRight}{Extend\_Right\_TS}
\SetKwProg{Pn}{Function}{:}{\KwRet}
\Pn{\FRight{$(\mathcal{Z}, [t_x,t_y])$, $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$}}{
$\text{flag} = TRUE$\;
$t_{yr} = min_{u,v \in \mathcal{Z}} \ t_{yuv}$ \tcp*{last $\gamma^{th}$ occurrence time of an edge $(u,v)$ within $[t_x, t_y+1]$}
\If{$t_{yr}+\Delta > t_y$}{
$\text{flag} = FALSE$\;
\If{$(\mathcal{Z}, [t_x, t_{yr}+\Delta]) \notin \mathcal{C}_{im}$}{
add $(\mathcal{Z}, [t_x, t_{yr}+\Delta])$ to $\mathcal{C}^{I}$ and $\mathcal{C}_{im}$\;
}
}
\KwRet \text{flag}\;
}
\end{procedure}
\paragraph{Procedure \texttt{Extend\_Right\_TS()}: } This procedure takes a $(\Delta, \gamma)$\mbox{-}clique from $\mathcal{C}^{I}$ and the edge list for the current update cycle, as inputs, and returns a Boolean flag indicating whether the input clique is maximal or not. For a clique $(\mathcal{Z}, [t_x, t_y])$, the trivial way of extending $t_y$ is as follows: for every pair of vertices $u,v \in \mathcal{Z}$, let the last $\gamma$-th occurrence timestamp within $[t_x, t_y + 1]$ be $t_{yuv}$. If the result of adding $\Delta$ to the earliest of all $t_{yuv}$ is more than $t_y$, then $(\mathcal{Z}, [t_x, t_y])$ is not maximal and the new clique $(\mathcal{Z}, [t_x, t_{yr} + \Delta])$ is formed. Now, for the cliques which are initialized in the current update cycle, or whose time interval $[t_x, t_y]$ falls within $[T_1 - \Delta, T_2]$, it is easy to get $t_{yr}$ with the current set of links. Similar to Lemma \ref{lemma:t1-delta}, it can be concluded that for the cliques initialized from $\mathcal{C}^{T_1}_{ex}$, it is possible to get the $t_{yr}$ values with the links $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$. This results in the following theorem.
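The $t_{yr}$ computation described above can be sketched in Python (an illustration under our assumed representation of cliques as `(frozenset, t_x, t_y)` triples and links as `(u, v, t)` triples; it presumes the clique invariant that every pair has at least $\gamma$ links in $[t_x, t_y]$):

```python
from itertools import combinations

def extend_right_ts(clique, links, delta, gamma):
    """Return the right-extended clique, or None if it is right-maximal."""
    Z, tx, ty = clique
    t_yr = None
    for u, v in combinations(sorted(Z), 2):
        # last gamma-th occurrence of (u, v) within [tx, ty + 1]
        times = sorted(t for (a, b, t) in links
                       if {a, b} == {u, v} and tx <= t <= ty + 1)
        t_yuv = times[-gamma]
        t_yr = t_yuv if t_yr is None else min(t_yr, t_yuv)
    if t_yr + delta > ty:
        return (Z, tx, t_yr + delta)   # not maximal: extended to the right
    return None
```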
\begin{mytheorem}
Procedure \ref{proc:rightts} is able to extend any $(\Delta, \gamma)$-clique using only the set of links in the current update cycle, irrespective of its initialization.
\end{mytheorem}
\begin{proof}
In the theorem statement, `irrespective of its initialization' refers to two cases: i) cliques coming from $\mathcal{C}^{T_1}_{ex}$, and ii) cliques initialized with the link set $\mathcal{E}^{[T_1- \Delta, T_2]}$. For the first case, it is already shown in Lemma \ref{lemma:t1-delta} that the links from $T_1 - \Delta$ onward are sufficient to extend the right timestamp of the cliques in $\mathcal{C}^{T_1}_{ex}$. For the latter case, Lemmas 6 and 7 of \cite{banerjee2019enumeration} together prove the correctness of the initialized cliques and their extension in the right timestamp. Hence, the statement of the theorem is proved.
\end{proof}
\begin{procedure}[htb]
\caption{Extending a clique towards left in the time horizon()} \label{proc:leftts}
\SetKwFunction{FM}{Extend\_Left\_TS}
\SetKwProg{Pn}{Function}{:}{\KwRet}
\Pn{\FM{$(\mathcal{Z}, [t_x,t_y])$, $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$}}{
$\text{flag} = TRUE$\;
$t_{xl} = max_{u,v \in \mathcal{Z}} \ t_{xuv}$ \tcp*{first $\gamma^{th}$ occurrence time of an edge $(u,v)$ within $ [t_x-1, t_y]$}
\If{$t_{xl} - \Delta < t_x$}{
$\text{flag} = FALSE$\;
\If{$(\mathcal{Z}, [t_{xl}-\Delta, t_y]) \notin \mathcal{C}_{im}$}{
add $(\mathcal{Z}, [t_{xl}-\Delta, t_y])$ to $\mathcal{C}^I$ and $\mathcal{C}_{im}$\;
}
}
\KwRet \text{flag}\;
}
\end{procedure}
\paragraph{Procedure \texttt{Extend\_Left\_TS()}:} Similar to Procedure \ref{proc:rightts}, this procedure takes a $(\Delta, \gamma)$\mbox{-}clique from $\mathcal{C}^{I}$ and the edge list for the current update cycle, as inputs, and returns a Boolean flag indicating whether the input clique is maximal or not. For a clique $(\mathcal{Z}, [t_x, t_y])$, the trivial way of extending $t_x$ is as follows: for every pair of vertices $u,v \in \mathcal{Z}$, let the first $\gamma$-th occurrence timestamp within $[t_x -1, t_y]$ be $t_{xuv}$. If subtracting $\Delta$ from the latest of all $t_{xuv}$ gives a value less than $t_x$, then $(\mathcal{Z}, [t_x, t_y])$ is not maximal and the new clique $(\mathcal{Z}, [t_{xl} - \Delta, t_{y}])$ is formed. Assume that the entire set of links till $T_2$ ($\mathcal{E}^{T_2}$) is being processed to get $\mathcal{C}^{T_2}$. Now, it is easy to observe that for a maximal clique $(\mathcal{Z}, [t_x, t_y])$ in $\mathcal{C}^{T_2}$, if $[t_x, t_y]$ lies entirely within $ [T_1 - \Delta, T_2]$, Procedure \ref{proc:leftts} can reach the maximal clique by extending the start timestamp towards the left. However, if $T_1 - \Delta$ lies within $[t_x, t_y]$ and the first $\gamma$ edges of every pair of vertices in $\mathcal{Z}$ do not belong to $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$, Procedure \ref{proc:leftts} will fail to get the maximal cliques from $\mathcal{C}^I$ initialized using $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$. In this scenario, some non-maximal cliques are returned, falsely identified as maximal. However, we do not miss the maximal ones, as the algorithm uses the cliques from $\mathcal{C}^{T_1}_{ex}$, whose $t_x$ is fixed and whose $t_y$ is extended correctly by Procedure \ref{proc:rightts}. The following lemma highlights this claim.
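Symmetrically to the right extension, the $t_{xl}$ computation can be sketched as follows (again an illustration only, under the same assumed `(frozenset, t_x, t_y)` clique and `(u, v, t)` link representation):

```python
from itertools import combinations

def extend_left_ts(clique, links, delta, gamma):
    """Return the left-extended clique, or None if it is left-maximal."""
    Z, tx, ty = clique
    t_xl = None
    for u, v in combinations(sorted(Z), 2):
        # first gamma-th occurrence of (u, v) within [tx - 1, ty]
        times = sorted(t for (a, b, t) in links
                       if {a, b} == {u, v} and tx - 1 <= t <= ty)
        t_xuv = times[gamma - 1]
        t_xl = t_xuv if t_xl is None else max(t_xl, t_xuv)
    if t_xl - delta < tx:
        return (Z, t_xl - delta, ty)   # not maximal: extended to the left
    return None
```

Note that, as discussed above, the sketch is only reliable when the first $\gamma$ occurrences of every pair lie within the available link window; otherwise the computed $t_{xl}$ is too large and the clique is falsely reported maximal.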
\begin{procedure}[h]
\caption{Removal of the sub cliques()} \label{proc:removal}
\SetKwFunction{FMain}{EOC\_Remove\_Sub\_Cliques}
\SetKwProg{Pn}{Function}{:}{\KwRet}
\Pn{\FMain{$T_1$}}{
\If{$T_1 == -1 $}{
\KwRet \;
}
$\mathcal{C}_{check} \longleftarrow \emptyset$ \;
$R_{dic} = dict()$ \;
\For{all $(\mathcal{Z}, [t_x, t_y]) \in \mathcal{C}^{T_2 \setminus T_1}$}{
%
\If{$\mathcal{Z} \notin R_{dic}.keys()$}{
$\text{The new `key' } \mathcal{Z} \text{ is added to } R_{dic}$\;
$R_{dic}[\mathcal{Z}] \longleftarrow \emptyset$\;
}
add $[t_x, t_y]$ in $R_{dic}[\mathcal{Z}]$\;
\If{$t_x \leq T_1 $}{
add $(\mathcal{Z}, [t_x, t_y])$ in $\mathcal{C}_{check}$\;
}
}
\For{$(\mathcal{Z}, [t_x, t_y]) \in \mathcal{C}_{check}$}{
\If{$\mathcal{Z} \in R_{dic}.keys()$}{
\If{$|\{ [t_{{x}^{'}}, t_{{y}^{'}}]: [t_{{x}^{'}}, t_{{y}^{'}}] \in R_{dic}[\mathcal{Z}] \text{ and } [t_x, t_y] \subset [t_{{x}^{'}}, t_{{y}^{'}}] \}| \geq 1 $}{
remove $(\mathcal{Z}, [t_x, t_y])$ from $\mathcal{C}^{T_{2} \setminus T_{1}}$\;
\textbf{continue}\;
}
}
$temp = \{(\mathcal{Z}^{'}, [t_{{x}^{'}}, t_{{y}^{'}}]) : (\mathcal{Z}^{'}, [t_{{x}^{'}}, t_{{y}^{'}}]) \in \mathcal{C}^{T_2 \setminus T_1} \text{ and } \mathcal{Z} \subset \mathcal{Z}^{'} \}$\;
\For{$ \text{All }(\mathcal{Z}^{'}, [t_{{x}^{'}}, t_{{y}^{'}}]) \in temp$}{
\If{$[t_x, t_y] \subseteq [t_{{x}^{'}}, t_{{y}^{'}}]$}{
remove $(\mathcal{Z}, [t_x, t_y])$ from $\mathcal{C}^{T_{2} \setminus T_{1}}$\;
\textbf{break}\;}
}
}
\KwRet \;
}
\end{procedure}
\paragraph{Procedure \texttt{EOC\_Remove\_Sub\_Cliques()}:} In the description of Procedure \ref{proc:leftts}, we have seen that some non-maximal cliques of $\mathcal{C}^{T_2 \setminus T_1}$ are falsely declared maximal before invoking Procedure \ref{proc:removal}. This procedure removes such cliques and generates the final maximal clique set. It identifies the cliques with $t_x \leq T_1$, which are the possible non-maximal candidates (verified in Lemma \ref{lemma:maximality}). The cliques formed with the links $\mathcal{E}^{ [T_{1}-\Delta, T_{2}]}$ are unable to verify their extendibility towards the left. This results in a $t_x$ greater than the actual value of its maximal counterpart. So, for an identified clique $(\mathcal{Z}, [t_x, t_y])$, there are two possibilities: i) with the same $\mathcal{Z}$, there exists an interval $[t_{x^{'}}, t_{y^{'}}]$ which contains $[t_x, t_y]$, or ii) there exists a clique $(\mathcal{Z}^{'}, [t_{x^{'}}, t_{y^{'}}])$ such that $\mathcal{Z} \subset \mathcal{Z}^{'}$ and $[t_x, t_y] \subseteq [t_{x^{'}}, t_{y^{'}}]$, rendering $(\mathcal{Z}, [t_x, t_y])$ non-maximal. Hence, $(\mathcal{Z}, [t_x, t_y])$ is removed from $\mathcal{C}^{T_{2} \setminus T_{1}}$.
\par Procedure \ref{proc:removal} takes as input $T_1$, the timestamp till which the maximal cliques were enumerated previously. When the algorithm runs for the first time (i.e., with the first batch of links), the input of the procedure becomes $-1$ and it returns from Line 3. In all other cases, the removal of sub-cliques is done from Line 4 to 25. We keep all the cliques to be checked for the maximality condition in $\mathcal{C}_{check}$, initialized to $\emptyset$ in Line 4. We prepare a dictionary $R_{dic}$ to hold all the cliques produced in $\mathcal{C}^{T_2 \setminus T_1}$. The keys of $R_{dic}$ are the vertex sets of the cliques, and each value is the list of all the time intervals in which the corresponding vertex set forms a $(\Delta, \gamma)$-clique. Every clique in $\mathcal{C}^{T_2 \setminus T_1}$ is added to $R_{dic}$ in Lines 7 to 10. Also, if the left timestamp of a clique is less than or equal to $T_1$, it is added to $\mathcal{C}_{check}$ in Line 11.
Now, for each clique $(\mathcal{Z},[t_x, t_y])$ in $\mathcal{C}_{check}$, the following two conditions are checked: (i) $[t_x, t_y]$ is a proper subset of one of the intervals in $R_{dic}[\mathcal{Z}]$; (ii) $\mathcal{Z}$ is a subset of another key in $R_{dic}$ whose interval also contains $[t_x, t_y]$. The clique is removed if either of these cases holds. The first case is checked within Lines 14 to 17. If $\mathcal{Z}$ appears in $R_{dic}.keys()$, the set of intervals which are proper supersets of $[t_x, t_y]$ is computed in Line 15. Now, if the cardinality of that set is at least 1, there exists at least one clique which temporally contains $(\mathcal{Z}, [t_x, t_y])$. Hence, $(\mathcal{Z}, [t_x, t_y])$ is not maximal; it is removed from $\mathcal{C}^{T_2 \setminus T_1}$ in Line 16, and the loop continues with the next clique from $\mathcal{C}_{check}$. If Case (i) is false, Case (ii) is checked. In Line 18, a set $temp$ is constructed with the cliques from $\mathcal{C}^{T_2 \setminus T_1}$ whose vertex sets are proper supersets of $\mathcal{Z}$. Now, for each clique in $temp$, Line 20 checks whether its time interval is also a superset of $[t_x, t_y]$. If so, $(\mathcal{Z},[t_x, t_y])$ is not maximal, as it is contained in another clique both in terms of vertex set and temporally. Hence, it is removed from $\mathcal{C}^{T_2 \setminus T_1}$, and the for loop at Line 19 breaks. Finally, all the non-maximal cliques are removed from $\mathcal{C}^{T_2 \setminus T_1}$. Now, we state and prove a few lemmas to show the correctness of Procedure \ref{proc:removal}.
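The two containment checks can be sketched compactly in Python (an illustration under our assumed `(frozenset, t_x, t_y)` representation, collapsing the procedure's dictionary-based loops into comprehensions; not the paper's implementation):

```python
def eoc_remove_sub_cliques(c_t2_minus_t1, T1):
    """Drop every clique with tx <= T1 that is contained in another clique,
    either temporally (same vertex set, case i) or by vertex set (case ii)."""
    result = set(c_t2_minus_t1)
    r_dic = {}
    for (Z, tx, ty) in c_t2_minus_t1:
        r_dic.setdefault(Z, []).append((tx, ty))
    for (Z, tx, ty) in c_t2_minus_t1:
        if tx > T1:
            continue                    # only these can be falsely maximal
        # case (i): same vertex set, strictly larger interval
        same_z = any((ax, ay) != (tx, ty) and ax <= tx <= ty <= ay
                     for (ax, ay) in r_dic[Z])
        # case (ii): proper vertex superset whose interval covers [tx, ty]
        super_z = any(Z < Z2 and ax <= tx <= ty <= ay
                      for (Z2, ax, ay) in c_t2_minus_t1)
        if same_z or super_z:
            result.discard((Z, tx, ty))
    return result
```

Restricting the scan to cliques with $t_x \leq T_1$ mirrors the candidate identification justified in Lemma \ref{lemma:maximality}.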
\begin{mylemma}\label{lemma:maximality}
It is sufficient to identify the cliques having $t_x \leq T_1$ as possible candidates for checking the maximality condition.
\end{mylemma}
\begin{proof}
Before executing Procedure \ref{proc:removal}, a non-maximal clique $(\mathcal{Z}, [t_x, t_y])$ in $\mathcal{C}^{T_2 \setminus T_1}$ can come from either of the two following sources: i) $\mathcal{C}^{T_1}_{ex}$, and ii) the cliques initialized in Line 10 of Algorithm \ref{Algo:1}.
\par In the first case, we consider all the cliques in $\mathcal{C}^{T_1}_{ex}$ having $t_y \geq T_1$. While preparing $\mathcal{C}^{T_2 \setminus T_1}$, Algorithm \ref{Algo:1} only extends the right timestamp of the cliques in $\mathcal{C}^{T_1}_{ex}$, so their maximality needs to be verified. Now, we need to determine the maximum possible value of $t_x$ for the cliques in $\mathcal{C}^{T_1}_{ex}$. Consider a scenario where, for a particular vertex pair $\{u, v\}$, all the $\gamma$ links appear consecutively, one at each timestamp within $[T_1 -\gamma + 1, T_1]$. By the initialization algorithm of \cite{banerjee2019enumeration}, it will generate two cliques, $(\{u,v\}, [T_1 -\gamma + 1, T_1 -\gamma + 1 + \Delta])$ and $(\{u,v\}, [ T_1 -\Delta, T_1])$. For $\gamma \in [1, \Delta+1]$, the condition $T_1 - \Delta \leq T_1 -\gamma + 1 \leq T_1$ holds.
\par For the second case, the cliques are extended in all three possible ways of expansion. As only the link set $\mathcal{E}^{[T_1 - \Delta, T_2]}$ is available, expanding the left timestamp is not correct with respect to all the links till $T_2$. Now, the minimum possible value of $t_x$ can be less than $T_1 - \Delta$. Consider a scenario where, for a particular vertex pair $\{u, v\}$, all the $\gamma$ links only appear consecutively, one at each timestamp within $[T_1 -\gamma + 1, T_1]$. Then, the maximal clique generated out of it will be $A = (\{u, v\}, [T_1 - \Delta, T_1 -\gamma + 1 + \Delta ])$. Now, if $(u,v)$ appears at $T_1 - \Delta - 1$, then $A$ is not maximal and its left timestamp should be expanded. So, the maximum value of $t_x$ possible for such non-maximal cliques is $T_1 - \Delta$. Hence, it is required to check all the cliques with $t_x \leq T_1 - \Delta$.
\par Combining both cases, it is sufficient to identify the cliques having $t_x \leq T_1$ as possible candidates for checking the maximality condition.
\end{proof}
\begin{mylemma} \label{lemma:extendend}
$\mathcal{C}^{T_{1}}_{ex}$ contains all the cliques for updating $\mathcal{C}^{T_1}$ to $\mathcal{C}^{T_2}$.
\end{mylemma}
\begin{proof}
While building the maximal cliques till timestamp $T_1$, all the intermediate cliques having $t_y \geq T_1$ are kept in $\mathcal{C}^{T_{1}}_{ex}$. To prove the lemma statement, we need to show that $t_y \geq T_1$ is a sufficient condition to have all the cliques for extending to the right (refer to Lemma \ref{lemma:6}). Here, we divide the proof into two parts: i) $t_y \ge T_1 - k$, ii) $t_y \ge T_1 + k$, where $k \in \mathbb{Z}^{+}$.
For part (i), extending the right timestamp of such a clique by Procedure \ref{proc:rightts} needs the links from $t_y - \Delta$ to $T_2$. According to Lemma \ref{lemma:t1-delta}, this would require processing the link set $\mathcal{E}^{[T_1 - \Delta -k, T_2]}$, which incurs redundant processing.
For part (ii), the maximum possible value of $t_y$ for the cliques generated with the link set $\mathcal{E}^{T_1}$ can be greater than $T_1$. Now, consider a scenario where, for a vertex pair $\{u, v\}$, the consecutive $\gamma$ links have only appeared within $[T_1-\Delta , T_1 - \Delta + \gamma - 1]$. Hence, two cliques will be generated by the initialization of \cite{banerjee2019enumeration}, namely $A_0 = (\{u, v\}, [T_1 - \Delta, T_1])$ and $A_1 = (\{u, v\}, [T_1 - 2\Delta + \gamma - 1, T_1 - \Delta + \gamma -1])$. Now, if there exists a link $(u,v,t)$ with $t = T_1 + 1$, then the maximal clique becomes $A=(\{u, v\}, [T_1 - 2\Delta + \gamma - 1, T_1 + 1])$. To get the maximal clique $A$, $\mathcal{C}^{T_{1}}_{ex}$ must contain the clique $A_2=(\{u, v\}, [T_1 - 2\Delta + \gamma - 1, T_1])$. Hence, the smallest possible value of $t_y$ required for the extension is $T_1$.
From parts (i) and (ii), it is proved that $t_y \geq T_1$ is a sufficient condition to have all the cliques for extending to the right. So, $\mathcal{C}^{T_{1}}_{ex}$ contains all the cliques for updating $\mathcal{C}^{T_1}$ to $\mathcal{C}^{T_2}$.
\end{proof}
\begin{mylemma} \label{lemma:9}
Without execution of Procedure \ref{proc:removal}, $\mathcal{C}^{T_{2} \setminus T_{1}} \cup (\mathcal{C}^{T_{1}} \setminus \mathcal{C}^{T_{1}}_{ex})$ contains all the maximal cliques along with some non-maximal ones.
\end{mylemma}
\begin{proof}
We denote $\mathcal{C}^{T_{2} \setminus T_{1}} \cup (\mathcal{C}^{T_{1}} \setminus \mathcal{C}^{T_{1}}_{ex})$ as $\hat{\mathcal{C}}^{T_{2}}$. Assume a maximal clique $(\mathcal{Z}, [t_x, t_y])$ is not present in $\hat{\mathcal{C}}^{T_{2}}$. We prove the lemma statement by contradiction, showing that $(\mathcal{Z}, [t_x, t_y])$ must be in $\hat{\mathcal{C}}^{T_{2}}$. To simplify the proof, we classify $(\mathcal{Z}, [t_x, t_y])$ into one of the following three cases.
\begin{itemize}
\item \textbf{Case 1:} $[t_x, t_y] \subseteq [T_0, T_1)$,
\item \textbf{Case 2:} $[t_x, t_y] \subseteq (T_1, T_2]$,
\item \textbf{Case 3:} $\{t_x < T_1 \text{ and } t_y \geq T_1\}$, or $\{t_x \leq T_1 \text{ and } t_y > T_1 \}$.
\end{itemize}
For both Cases 1 and 2, it can be proved from Lemma 6 of \cite{banerjee2019enumeration} that the initialization is correct to get the maximal cliques with the link sets $\mathcal{E}^{T_1}$ and $\mathcal{E}^{[T_1 - \Delta, T_2]}$, respectively. Now, Lemmas \ref{lemma:3} and \ref{Lemma:5} together verify that the candidate set associated with the initialized cliques is correct and complete. It follows that from any initialized clique $(\{u, v\}, [t_x^{'}, t_x^{'}+\Delta])$, where $u, v \in \mathcal{Z}$ and $ [t_x^{'}, t_x^{'}+\Delta] \subseteq [t_x, t_y]$, the clique $(\mathcal{Z}, [t_x^{'}, t_x^{'}+\Delta])$ will be generated. Now, Lemma 7 of \cite{banerjee2019enumeration} shows that $(\mathcal{Z}, [t_x, t_y])$ will be formed from $(\mathcal{Z}, [t_x^{'}, t_x^{'}+\Delta])$. Hence, $(\mathcal{Z}, [t_x, t_y])$ will be in $\mathcal{C}^{T_1}$ (and not in $\mathcal{C}^{T_1}_{ex}$) for Case 1, and in $\mathcal{C}^{T_{2} \setminus T_{1}}$ (and not in $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$) for Case 2. Hence, $(\mathcal{Z}, [t_x, t_y])$ should be in $\hat{\mathcal{C}}^{T_{2}}$.
\par For Case 3, $(\mathcal{Z}, [t_x, t_y])$ has to be initialized from $\mathcal{C}^{T_1}_{ex}$. According to Lemmas \ref{lemma:4} and \ref{lemma:extendend}, for the maximal clique $(\mathcal{Z}, [t_x, t_y])$ there will be a clique $(\mathcal{Z}, [t_x, t_y^{'}])$ with $t_y^{'} \leq t_y$ in $\mathcal{C}^{T_1}_{ex}$. Hence, $(\mathcal{Z}, [t_x, t_y])$ will only be in $\mathcal{C}^{T_2 \setminus T_1}$ and not in $(\mathcal{C}^{T_{1}} \setminus \mathcal{C}^{T_{1}}_{ex})$. So, $(\mathcal{Z}, [t_x, t_y])$ is present in $\hat{\mathcal{C}}^{T_{2}}$, which is again a contradiction.
\par To prove that $\hat{\mathcal{C}}^{T_{2}}$ contains non-maximal cliques as well, it is enough to show that one such non-maximal clique exists in $\hat{\mathcal{C}}^{T_{2}}$. The proof of Lemma \ref{lemma:maximality} exhibits such a non-maximal clique. This concludes the proof of the lemma statement.
\end{proof}
\begin{mylemma} \label{lemma:10}
Procedure \ref{proc:removal} correctly removes all the non-maximal cliques, while building $\mathcal{C}^{T_2}$ from $\mathcal{C}^{T_1}$.
\end{mylemma}
\begin{proof}
By Lemma \ref{lemma:maximality}, the cliques whose left time stamp is less than or equal to $T_1$ are the only candidates that can become non-maximal in $\mathcal{C}^{T_2}$. From Lemma \ref{lemma:extendend}, it is evident that $\mathcal{C}^{T_2 \setminus T_1}$ will contain the maximal cliques along with some non-maximal ones. By definition, each non-maximal $(\Delta, \gamma)$-clique is contained, either temporally or by vertex set, in some maximal clique. Now, from the description of Procedure \ref{proc:removal}, it is easy to observe that it removes a clique whenever the clique is contained, temporally or by vertex set, in at least one other clique. Hence, Procedure \ref{proc:removal} removes all the non-maximal cliques while building $\mathcal{C}^{T_2}$ from $\mathcal{C}^{T_1}$.
\end{proof}
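The containment relation used above can be sketched in Python. This is only a quadratic illustration of the sub-clique test, assuming each clique is represented as a (vertex frozenset, $(t_x, t_y)$) pair; the actual procedure groups cliques by vertex set via the dictionary $R_{dic}$ instead of scanning all pairs.

```python
def contains(c_big, c_small):
    """True if c_small is contained in c_big, by vertex set and time interval."""
    (z1, (x1, y1)), (z2, (x2, y2)) = c_big, c_small
    return z2 <= z1 and x1 <= x2 and y2 <= y1

def remove_non_maximal(cliques):
    """Keep only the cliques not contained in any other clique.
    cliques: list of (frozenset_of_vertices, (t_x, t_y)) pairs."""
    maximal = []
    for c in cliques:
        if not any(c != d and contains(d, c) for d in cliques):
            maximal.append(c)
    return maximal
```

For example, $(\{1,2\}, [0,5])$ is dropped because it is contained (by vertex set) in $(\{1,2,3\}, [0,5])$, and $(\{1,2\}, [0,3])$ is dropped because it is contained temporally in $(\{1,2\}, [0,5])$.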
From Lemma \ref{lemma:10}, it is clear that $\mathcal{C}^{T_2 \setminus T_1}$ will not contain any non-maximal clique of $\mathcal{C}^{T_2}$. Now, $(\mathcal{C}^{T_1} \setminus \mathcal{C}^{T_1}_{ex})$ contains maximal cliques only, as the minus operation removes the cliques that were falsely declared maximal due to the non-availability of the entire link set till $T_2$. Hence, $\mathcal{C}^{T_{2} \setminus T_{1}} \cup (\mathcal{C}^{T_{1}} \setminus \mathcal{C}^{T_{1}}_{ex})$ contains only the maximal cliques till time stamp $T_2$. From these, we state the following theorems.
\begin{mytheorem}\label{Th:2}
All the cliques in $\mathcal{C}^{T_{2}}$ are maximal cliques till time stamp $T_2$.
\end{mytheorem}
\begin{mytheorem}\label{Th:3}
$\mathcal{C}^{T_{2}}$ contains all the maximal cliques till time stamp $T_2$.
\end{mytheorem}
Together, Theorems \ref{Th:2} and \ref{Th:3} establish the correctness of the proposed methodology. Now, we analyze its time and space complexity, starting with Algorithm \ref{Algo:1}. Let $m_{1}$, $m_{2}$, and $m_{3}$ denote the number of links till time stamp $T_{1}-\Delta$, from $T_{1}-\Delta$ to $T_{1}$, and from $T_{1}$ to $T_{2}$, respectively. Hence, the number of links from time stamp $T_{1}-\Delta$ to $T_{2}$ is $(m_2+m_3)$. As per the analysis in \cite{banerjee2019enumeration}, in the worst case the size of $\mathcal{C}_{ex}^{T_{1}}$ can be $\mathcal{O}(2^{n}(m_1+m_2 - \gamma+1))$, and each clique can be of size $\mathcal{O}(n)$. Hence, copying the cliques from $\mathcal{C}_{ex}^{T_{1}}$ to $\mathcal{C}^{I}$ requires $\mathcal{O}(2^{n}n(m_1+m_2 - \gamma+1))$ time. All the statements in Line $2$ are initialization statements: the first two require $\mathcal{O}(1)$ time, whereas the third one requires $\mathcal{O}(2^{n}n(m_1+m_2 - \gamma+1))$ time. In Line $4$, removing a clique requires $\mathcal{O}(n)$ time. The complexity of the \texttt{Extend\_Right\_TS} procedure (i.e., Procedure 3) is analyzed a little later. Checking the conditions of the \texttt{if} statements in Lines $6$ and $8$ requires $\mathcal{O}(1)$ time, and putting the cliques into $\mathcal{C}^{T_2 \setminus T_1}$ and $\mathcal{C}_{ex}^{T_{1}}$ requires $\mathcal{O}(n)$ time. Now, it is important to understand how many times the \texttt{while} loop of Line $3$ executes. In the worst case, all the cliques of $\mathcal{C}_{ex}^{T_{1}}$ may extend, and hence the \texttt{while} loop executes $\mathcal{O}(|\mathcal{C}_{ex}^{T_{1}}| \cdot (T_2 - T_1))=\mathcal{O}(2^{n}(m_1+m_2 - \gamma+1)(T_2 - T_1))$ times. As per Lemma $1$ of \cite{banerjee2019enumeration}, executing Line $10$ requires $\mathcal{O}(\gamma(m_2+m_3))$ time. In the worst case, the size of $\mathcal{C}^{I}$ can be $\mathcal{O}(\gamma(m_2+m_3))$.
Hence, copying the cliques of $\mathcal{C}^{I}$ into $\mathcal{C}_{im}$ requires $\mathcal{O}(\gamma(m_2+m_3))$ time.
\par Inside the next \texttt{while} loop, removing a clique from $\mathcal{C}^{I}$ in Line $13$ requires $\mathcal{O}(n)$ time. In Line $14$, the condition checking of the \texttt{if} statement requires $\mathcal{O}(1)$ time; it checks whether $t_y - t_x = \Delta$ or not. In the worst case, there can be $\Delta + 1$ links between any two vertices, and all the vertices may be connected within that $\Delta$ duration. So, preparing the static graph in Line 15 requires $\mathcal{O}(n^2\Delta)$ time, determined by the maximum possible number of links within a $\Delta$ duration. Associating $N_{G}(\mathcal{Z})$ with $(\mathcal{Z},[t_x,t_y])$ requires $\mathcal{O}(|\mathcal{Z}|(n-|\mathcal{Z}|))$ time, which is $\mathcal{O}(n^{2})$ in the worst case. Now, following the sequential steps of Algorithm \ref{Algo:1}, we proceed to analyze the time and space requirements of Procedures \ref{proc:nodeadd}, \ref{proc:rightts}, \ref{proc:leftts}, and \ref{proc:removal}.
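The static-graph construction of Line 15 can be sketched as follows. This is an illustrative sketch assuming links are $(t,u,v)$ tuples and that $N_{G}(\mathcal{Z})$ denotes the vertices outside $\mathcal{Z}$ adjacent to every vertex of $\mathcal{Z}$ (which matches the $\mathcal{O}(|\mathcal{Z}|(n-|\mathcal{Z}|))$ cost stated above); the function names are our own.

```python
def static_graph(links, t_x, t_y):
    """Collapse the temporal links inside [t_x, t_y] into a static graph,
    returned as an adjacency dict.  links: iterable of (t, u, v) tuples."""
    adj = {}
    for t, u, v in links:
        if t_x <= t <= t_y:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
    return adj

def neighbourhood(adj, Z):
    """N_G(Z): vertices outside Z adjacent to every vertex of Z."""
    candidates = set(adj) - set(Z)
    return {u for u in candidates if all(u in adj.get(z, set()) for z in Z)}
```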
\par We start with Procedure \ref{proc:nodeadd}. As mentioned in the analysis of Algorithm 2 of \cite{banerjee2019enumeration}, the maximum number of intermediate cliques is $\mathcal{O}(2^{n}(m_2+m_3 -\gamma +1))$. For each of these cliques, the time requirement of Procedure \ref{proc:nodeadd} is as follows. As mentioned previously, for any intermediate clique $(\mathcal{Z},[t_x,t_y])$, $|N_{G}(\mathcal{Z})|$ can be at most $\mathcal{O}(n)$. Hence, the \texttt{for} loop in Line $3$ executes $\mathcal{O}(n)$ times in the worst case. To check whether a clique $(\mathcal{Z},[t_x,t_y])$ satisfies the $(\Delta, \gamma)$\mbox{-}clique property, we need to check all the links among the vertices of $\mathcal{Z}$, which may require $\mathcal{O}(n(m_2+m_3))$ time in the worst case. So, the condition checking of the \texttt{if} statement in Line $4$ requires $\mathcal{O}(n(m_2+m_3))$ time. Setting the `flag' in Line $5$ requires $\mathcal{O}(1)$ time. Now, as the maximum number of intermediate cliques is $\mathcal{O}(2^{n}(m_2+m_3 -\gamma +1))$, in the condition checking of the \texttt{if} statement in Line $6$, the clique $(\mathcal{Z}\cup \{u\},[t_x,t_y])$ needs to be compared with $\mathcal{O}(2^{n}(m_2+m_3 -\gamma +1))$ cliques. If the vertex ids of each clique are always stored in sorted order, then two $(\Delta, \gamma)$\mbox{-}cliques can be compared in $\mathcal{O}(n)$ time. If the cliques in $\mathcal{C}_{im}$ are stored in the sorted order of $t_x$, then the number of comparisons is $\mathcal{O}(\log (2^{n}(m_2+m_3 -\gamma +1)))=\mathcal{O}(n+ \log(m_2+m_3 -\gamma +1))$. Hence, the condition checking of the \texttt{if} statement in Line $6$ requires $\mathcal{O}(n^{2}+ n\log(m_2+m_3 -\gamma +1))$ time in total. Adding the new clique to $\mathcal{C}_{im}$ and $\mathcal{C}^{I}$ requires $\mathcal{O}(n)$ time. Hence, the total time requirement of Procedure \ref{proc:nodeadd} is $\mathcal{O}(n(n(m_2+m_3)+n^{2}+ n\log(m_2+m_3 -\gamma +1) +n))= \mathcal{O}(n^{2}(m_2+m_3)+ n^{3})$.
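The logarithmic membership check can be sketched with binary search. As an illustrative simplification, we sort by the full (sorted vertex tuple, interval) key rather than by $t_x$ alone; each tuple comparison costs $\mathcal{O}(n)$, and the search performs a logarithmic number of them.

```python
from bisect import bisect_left

def member(sorted_cliques, clique):
    """Membership test in a list kept sorted, via binary search.
    A clique is a (sorted-vertex-tuple, (t_x, t_y)) pair; each tuple
    comparison costs O(n), and bisect performs O(log k) comparisons."""
    i = bisect_left(sorted_cliques, clique)
    return i < len(sorted_cliques) and sorted_cliques[i] == clique
```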
\par Next, we analyze Procedure \ref{proc:rightts}. Setting the \texttt{flag} in Line $2$ requires $\mathcal{O}(1)$ time. Finding $t_{yr}$ in Line $3$ requires $\mathcal{O}(m_2+m_3)$ time. The condition checking of the \texttt{if} statement in Line $4$ requires $\mathcal{O}(1)$ time. As mentioned in the analysis of Procedure \ref{proc:nodeadd}, checking the belongingness of any clique $(\mathcal{Z},[t_x,t_y])$ requires $\mathcal{O}(n^{2}+ n\log(m_2+m_3 -\gamma +1))$ time. Hence, the running time of Procedure \ref{proc:rightts} is $\mathcal{O}(m_2+m_3+n^{2}+ n\log(m_2+m_3 -\gamma +1))$. As Procedure \ref{proc:leftts} is identical to Procedure \ref{proc:rightts}, its time requirement is also $\mathcal{O}(m_2+m_3+n^{2}+ n\log(m_2+m_3 -\gamma +1))$.
\par Now, we analyze Procedure \ref{proc:removal}. It is easy to see that all the statements from Line $2$ to $5$ require $\mathcal{O}(1)$ time. The number of keys in the dictionary $R_{dic}$ is $\mathcal{O}(2^{n})$. The \texttt{for} loop in Line $6$ executes $\mathcal{O}(2^{n}(m_2+m_3-\gamma+1))$ times. The condition checking of the \texttt{if} statement in Line $7$ requires $\mathcal{O}(2^{n}n)$ time. Executing Lines $8$ and $9$ requires $\mathcal{O}(n)$ and $\mathcal{O}(1)$ time, respectively. Appending the time duration of the clique to the entry whose `key' is the vertex set of the clique requires $\mathcal{O}(1)$ time. The condition checking of the \texttt{if} statement in Line $11$ requires $\mathcal{O}(1)$ time, and adding the clique $(\mathcal{Z}, [t_x,t_y])$ in Line $12$ requires $\mathcal{O}(n)$ time. Hence, the time requirement from Line $2$ to $12$ is $\mathcal{O}(2^{n}(m_2+m_3-\gamma+1)(n2^{n}+n)) = \mathcal{O}(2^{2n}n(m_2+m_3-\gamma+1))$. Now, in the worst case, the size of $\mathcal{C}_{check}$ is $\mathcal{O}(2^{n}(m_2+m_3-\gamma+1))$. So, the \texttt{for} loop in Line $13$ runs $\mathcal{O}(2^{n}(m_2+m_3-\gamma+1))$ times. The condition checking of the \texttt{if} statement in Line $14$ requires $\mathcal{O}(2^{n}n)$ time. Now, let $f_{max}$ denote the maximum number of cliques with the same vertex set. Hence, the condition checking of the \texttt{if} statement in Line $15$ requires $\mathcal{O}(f_{max})$ time. Then, removing the clique from $\mathcal{C}^{T_2 \setminus T_1}$ requires $\mathcal{O}(n)$ time. The cardinality of `temp' in Line $18$ is given by the following equation:
\begin{equation}
|temp|= \bigg [\binom{n}{|\mathcal{Z}|+1}+ \binom{n}{|\mathcal{Z}|+2}+ \ldots + \binom{n}{|\mathcal{Z}|+(n-|\mathcal{Z}|)} \bigg ] \cdot \mathcal{O}(f_{max})
\end{equation}
In the worst case, $|temp|$ may converge to $\mathcal{O}(2^{n}f_{max})$. Hence, the \texttt{for} loop in Line $19$ runs $\mathcal{O}(2^{n}f_{max})$ times. It is easy to observe that the condition checking of the \texttt{if} statement and removing the clique from $\mathcal{C}^{T_2 \setminus T_1}$ require $\mathcal{O}(1)$ and $\mathcal{O}(n)$ time, respectively. Hence, the total time requirement from Line $13$ to $22$ is $\mathcal{O}(2^{n}(m_2+m_3-\gamma+1)(n2^{n}+ n2^{n}(f_{max} + n) + n2^{n}f_{max}))=\mathcal{O}(2^{2n}n(m_2+m_3-\gamma+1)(f_{max}+n))$. Hence, the total running time of Procedure \ref{proc:removal} is $\mathcal{O}(2^{2n}n(m_2+m_3-\gamma+1)(f_{max}+n))$.
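The $\mathcal{O}(2^{n}f_{max})$ bound on $|temp|$ follows from bounding the partial binomial sum by the full row sum of Pascal's triangle:

```latex
\[
  \sum_{k=|\mathcal{Z}|+1}^{n}\binom{n}{k}
  \;\le\; \sum_{k=0}^{n}\binom{n}{k} \;=\; 2^{n}
  \qquad\Longrightarrow\qquad
  |temp| = \mathcal{O}\!\left(2^{n} \cdot f_{max}\right).
\]
```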
\begin{table}[]
\centering
\begin{tabular}{|c|c|}
\hline
Procedure & Time \\ \hline
\texttt{Expand\_Vertex\_Set} \ref{proc:nodeadd} & $\mathcal{O}(n^{2}(m_2+m_3)+ n^{3})$ \\ \hline
\texttt{Extend\_Left\_TS} \ref{proc:leftts} & $\mathcal{O}(m_2+m_3+n^{2}+ n\log(m_2+m_3 -\gamma +1))$ \\ \hline
\texttt{Extend\_Right\_TS} \ref{proc:rightts} & $\mathcal{O}(m_2+m_3+n^{2}+ n\log(m_2+m_3 -\gamma +1))$ \\ \hline
\texttt{EOC\_Removal\_Sub\_Cliques} \ref{proc:removal} & $\mathcal{O}(2^{2n}n(m_2+m_3-\gamma+1)(f_{max}+n))$ \\ \hline
\end{tabular}
\caption{Computational Time Required by the Procedures}
\label{tab:time_proc}
\end{table}
\par Now, the final task is to add up the step-wise running times of Algorithm \ref{Algo:1} to get the running time of the proposed methodology. The running time from Line $1$ to $11$ adds up to $\mathcal{O}(2^{n}(m_1+m_2-\gamma+1)(T_2-T_1)(m_2+m_3+n^{2}+n \log(m_2+m_3-\gamma+1)))$. The running time from Line $12$ to $23$ of Algorithm \ref{Algo:1} is $\mathcal{O}(2^{n}(m_2+m_3-\gamma+1) (n^3 + n^2(m_2 + m_3)))$. In Line $25$, we perform a set-minus operation between $\mathcal{C}^{T_{1}}_{ex}$ and $\mathcal{C}^{T_{1}}$. Performing set minus between two sets with $k_1$ and $k_2$ elements requires $\mathcal{O}(k_1 k_2)$ element comparisons. As, in the worst case, the size of both $\mathcal{C}^{T_{1}}_{ex}$ and $\mathcal{C}^{T_{1}}$ can be $\mathcal{O}(2^{n}(m_1+m_2-\gamma+1))$ and the size of a $(\Delta,\gamma)$-clique can be $\mathcal{O}(n)$, performing this set-minus operation requires $\mathcal{O}(2^{2n}n^{2}(m_1+m_2-\gamma+1)^{2})$ time. Now, the number of cliques in $\mathcal{C}^{T_2 \setminus T_1}$ can be $\mathcal{O}(2^{n}(m_2+m_3-\gamma+1) + 2^{n}(m_1+m_2-\gamma+1)(T_2 - T_1))$. Line $25$ can be executed by copying the elements of $\mathcal{C}^{T_1} \setminus \mathcal{C}^{T_1}_{ex}$ into $\mathcal{C}^{T_2 \setminus T_1}$ and adding the reference to a new variable $\mathcal{C}^{T_2}$. As the number of elements in $\mathcal{C}^{T_1} \setminus \mathcal{C}^{T_1}_{ex}$ can be $\mathcal{O}(2^{n}(m_1+m_2-\gamma+1))$, copying them requires $\mathcal{O}(n 2^{n}(m_1+m_2-\gamma+1))$ time. Hence, the total time of Line $25$ is $\mathcal{O}(2^{2n}n^{2}(m_1+m_2-\gamma+1)^{2} + n 2^{n}(m_1+m_2-\gamma+1)) = \mathcal{O}(2^{2n}n^{2}(m_1+m_2-\gamma+1)^{2})$. The time complexity of the proposed methodology is thus $\mathcal{O}(2^{n}(m_1+m_2-\gamma+1)(T_2-T_1)(m_2+m_3+n^{2}+n \log(m_2+m_3-\gamma+1)) + 2^{n}(m_2+m_3-\gamma+1)(n^3 + n^2(m_2 + m_3)) + 2^{2n}n^{2}(m_1+m_2-\gamma+1)^{2})$.
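The worst-case bound above assumes a pairwise scan. If cliques are stored as hashable pairs, the set minus of Line 25 can alternatively be done with expected-linear hashing; a sketch (the representation is our illustrative choice, not the paper's data structure):

```python
def set_minus(cliques_a, cliques_b):
    """Return the cliques of cliques_a that are not in cliques_b.
    Hashing cliques_b first gives an expected O(|A| + |B|) number of
    clique comparisons (each O(n)), instead of the O(|A| * |B|)
    pairwise scan used in the worst-case bound."""
    lookup = set(cliques_b)
    return [c for c in cliques_a if c not in lookup]
```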
\par Now, we turn our attention to the space requirement. It can be observed from Algorithm \ref{Algo:1} that the space requirement of the proposed methodology is basically the sum of the space requirements of the following individual structures: $\mathcal{C}^{I}$, $\mathcal{C}_{im}$, $\mathcal{C}^{T_{2} \setminus T_{1}}$, $\mathcal{C}^{T_{2} \setminus T_{1}}_{ex}$, $\mathcal{R}_{dic}$, $\mathcal{C}_{check}$, and $temp$. Among them, the last three structures are used in the \texttt{EOC\_Removal\_Sub\_Cliques} subroutine. Table \ref{tab:space} contains the individual space requirement of each structure. So, the total space requirement of the proposed methodology is $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1) + n2^{n}(m_1 + m_2 - \gamma + 1)(T_2 - T_1))$.
\begin{table}[]
\centering
\begin{tabular}{|c|c|}
\hline
Structure & Space \\ \hline
$\mathcal{C}^I$ & $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1)) $ \\ \hline
$\mathcal{C}_{im}$ & $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1) + n2^{n}(m_1 + m_2 - \gamma + 1)(T_2 - T_1)) $ \\ \hline
$\mathcal{C}^{T_2 \setminus T_1}$ & $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1)) $ \\ \hline
$\mathcal{C}^{T_2 \setminus T_1}_{ex}$ & $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1)) $ \\ \hline
$\mathcal{R}_{dic}$ & $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1)) $ \\ \hline
$\mathcal{C}_{check}$ & $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1)) $ \\ \hline
$temp$ at Line 18 in Procedure \ref{proc:removal} & $\mathcal{O}(n2^{n}(m_2 + m_3 - \gamma + 1)) $ \\ \hline
\end{tabular}
\caption{Computational space required to store different structures}
\label{tab:space}
\end{table}
\section{Experimental Evaluation}\label{Sec:EE}
Here, we describe the experimental evaluation of the proposed solution approach. This section is arranged as follows. Subsection \ref{Sec:DD} contains the description of the datasets. Subsection \ref{SubSec:Exp_Set_Up} contains the experimental setup. The goals of the experimentation are listed in Subsection \ref{Sec:GE}. Finally, Subsection \ref{Sec:ER} contains the experimental results with a detailed discussion.
\subsection{Dataset Description} \label{Sec:DD}
In this study, we use the following four publicly available temporal network datasets:
\begin{itemize}
\item \textbf{Infectious \cite{isella2011s}:} This dataset contains the dynamic contact information collected during the Infectious SocioPatterns event at the science gallery of Dublin city. The dataset is a collection of tuples of the form $(t,u,v)$, each signifying a contact between $u$ and $v$ at time $t$.
\item \textbf{Hypertext \cite{isella2011s}:} This dataset was collected during the ACM Hypertext 2009 conference, where the attendees voluntarily wore wireless devices; their contacts (recorded whenever two attendees came into close proximity) during the conference days are captured in this dataset.
\item \textbf{College Message \cite{panzarasa2009patterns}:} This dataset contains the interaction information among a group of students from University of California, Irvine.
\item \textbf{Autonomous Systems (AS180) \cite{leskovec2005graphs}:} This dataset contains the daily traffic flow between routers in a communication network. The data was collected from the University of Oregon Route Views Project (online data and reports). The dataset contains 733 daily instances spanning an interval of 785 days, from November 8, 1997 to January 2, 2000. As the number of links on each day is very large, we consider the data for the first 180 days only in our experiments.
\end{itemize}
The first two datasets are downloaded from \url{http://www.sociopatterns.org}, and the last two from \url{https://snap.stanford.edu/data/index.html}. Table \ref{Tab:Data_Stat} gives a brief description of the datasets and Figure \ref{fig:dataset_description} shows the number of links present at each time stamp for the entire lifetime of the temporal network.
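All four datasets reduce to lists of time-stamped contacts. A minimal parser for the $(t,u,v)$ tuples described above can be sketched as follows (we assume whitespace-separated `t u v` records, one per line; the actual file formats may differ between repositories):

```python
def parse_temporal_links(lines):
    """Parse 't u v' triples (one per line) into a time-sorted link list."""
    links = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 3:
            links.append((int(parts[0]), parts[1], parts[2]))
    links.sort()  # downstream processing assumes links ordered by time stamp
    return links
```

Reading a file then amounts to `parse_temporal_links(open(path))`.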
\begin{table}
\centering
\caption{Basic statistics of the datasets}
\label{Tab:Data_Stat}
\begin{tabular}{ | p{2.5 cm} | p{1.3cm} | p{1.5cm} | p{2cm} | p{2cm} |}
\hline
Datasets & \#Nodes($n$) & \#Links($m$) & \#Static Edges & Lifetime/Total Duration \\ \hline
Infectious & 410 & 17298 & 2765 & 8 Hours \\ \hline
Hypertext & 113 & 20818 & 2196 & 2.5 Days \\ \hline
College Message & 1899 & 59835 & 20296 & 193 Days \\ \hline
AS180 & 4002 & 2127983 & 8957 & 180 Days \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.37]{infectious_linkvsTS.jpg} &
\includegraphics[scale=0.37]{ht09contact_linkvsTS.jpg} \\
(a) Infectious & (b) Hypertext \\
\includegraphics[scale=0.37]{collegemsg_linkvsTS.jpg} &
\includegraphics[scale=0.37]{as180_linkvsTS.jpg} \\
(c) College Message & (d) AS180 \\
\end{tabular}
\caption{The link count at each time $t$ in the entire life-cycle of the datasets}
\label{fig:dataset_description}
\end{figure}
\subsection{Experimental Setup} \label{SubSec:Exp_Set_Up}
Here, we describe the setup for our experimentation. The only setup required in our experiments is partitioning the dataset. We use the following two partitioning techniques:
\begin{itemize}
\item \textbf{Uniform Time Interval\mbox{-}Based Partitioning (EOC-UT)}: In this technique, the whole dataset is split into parts such that each part spans an equal number of time stamps.
\item \textbf{Uniform Link Count\mbox{-}Based Partitioning (EOC-ULC)}: In this technique, the whole dataset is split into parts such that each part contains an equal number of links.
\end{itemize}
Here, EOC stands for the proposed `Edge on Clique' procedure. We perform the experiments by making two partitions of the datasets, based on the mentioned partitioning schemes. We implement our proposed methodology in Python 3.6 with NetworkX 2.2, and all the experiments are carried out on a 32-core server with 256GB RAM and 2.2 GHz processing speed.
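The two schemes can be sketched as follows. This is a simplified illustration for a time-sorted link list; the handling of the $\Delta$-overlap between consecutive partitions (re-reading the links in $[T_1-\Delta, T_1]$ for the next partition) is omitted, and the function names are our own.

```python
def partition_uniform_time(links, parts):
    """EOC-UT: split so each part spans an equal share of the time stamps.
    links: time-sorted (t, u, v) tuples."""
    t_min, t_max = links[0][0], links[-1][0]
    width = (t_max - t_min + 1) / parts
    buckets = [[] for _ in range(parts)]
    for t, u, v in links:
        i = min(int((t - t_min) / width), parts - 1)
        buckets[i].append((t, u, v))
    return buckets

def partition_uniform_links(links, parts):
    """EOC-ULC: split so each part carries (roughly) the same link count."""
    size = -(-len(links) // parts)  # ceiling division
    return [links[i:i + size] for i in range(0, len(links), size)]
```

Note that EOC-UT parts may carry very different link counts on bursty datasets, which is exactly the imbalance EOC-ULC avoids.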
\subsection{Goals of the Experiments} \label{Sec:GE}
The goal of the experimentation is to address the following research questions:
\begin{itemize}
\item To understand the change in the size of different clique sets used in Algorithm \ref{Algo:1}, with respect to the change of $\Delta$ and $\gamma$ value.
\item To understand the change in computational time and space required \emph{with} and \emph{without partition} for different partition schemes, with respect to the change of $\Delta$ and $\gamma$ value.
\item To understand the change in computational time and space with respect to the number of partitions.
\end{itemize}
All the experiments are done in two ways: (i) varying $\Delta$ with a fixed $\gamma$ value, and (ii) varying $\gamma$ with a fixed $\Delta$ value.
\subsection{Experimental Results with Discussion} \label{Sec:ER}
Here, we report the experimental results with detailed analysis, for the identified goals.
\subsubsection{Change in the Size of Different Clique Sets}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.2]{infectius_fixed_delta_360_e.png} & \includegraphics[scale=0.2]{infectius_fixed_delta_360_m.png} \\
(a) Uniform Interval-Based & (b) Uniform Link Count-Based \\
\includegraphics[scale=0.2]{infectious_fixed_gamma_3_e.png} & \includegraphics[scale=0.2]{infectious_fixed_gamma_3_m.png} \\
(c) Uniform Interval-Based & (d) Uniform Link Count-Based
\end{tabular}
\caption{Results for the change of clique count w.r.t. $\Delta$ and $\gamma$ for the Infectious dataset; (a)-(b) fixed $\Delta = 360$; (c)-(d) fixed $\gamma = 3$}
\label{fig:Infectious}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.2]{ht09contact_fixed_delta_360_e.png} & \includegraphics[scale=0.2]{ht09contact_fixed_delta_360_m.png} \\
(a) Uniform Interval-Based & (b) Uniform Link Count-Based \\
\includegraphics[scale=0.2]{ht09contact_fixed_gamma_3_e.png} & \includegraphics[scale=0.2]{ht09contact_fixed_gamma_3_m.png} \\
(c) Uniform Interval-Based & (d) Uniform Link Count-Based
\end{tabular}
\caption{Results for the change of clique count w.r.t. $\Delta$ and $\gamma$ for the Hypertext dataset; (a)-(b) fixed $\Delta = 360$; (c)-(d) fixed $\gamma = 3$}
\label{fig:ht_cintact}
\end{figure}
Figure \ref{fig:Infectious} a and b show the plots for the change in cardinality of the clique sets $\mathcal{C}^{T_{1}}$, $\mathcal{C}_{ex}^{T_{1}}$, $\mathcal{C}^{T_{2} \setminus T_{1}}$, $\mathcal{C}^{T_{2}}$, and $\mathcal{C}_{check}$ with the change in $\gamma$ for a fixed $\Delta$ (we consider $\Delta=360$), for the Infectious dataset with the uniform interval\mbox{-}based and uniform link count\mbox{-}based partitioning, respectively. Figure \ref{fig:Infectious} c and d show the plots for the same with the change in $\Delta$ for a fixed $\gamma$ (we consider $\gamma=3$). From Figure \ref{fig:Infectious} a and b, it can be observed that in both partitioning schemes, for a fixed $\Delta$, the cardinalities of $\mathcal{C}^{T_{1}}$, $\mathcal{C}_{ex}^{T_{1}}$, $\mathcal{C}^{T_{2} \setminus T_{1}}$, $\mathcal{C}^{T_{2}}$, and $\mathcal{C}_{check}$ decrease as $\gamma$ increases. The reason behind this is quite intuitive: for a fixed duration, if the required frequency of contacts increases, then the number of maximal cliques satisfying this requirement is bound to decrease. The decrease is very sharp till $\gamma=8$; from $\gamma=9$ to $13$ it is quite gradual, and beyond $\gamma \geq 14$ the change is very small. As an example, for the uniform interval\mbox{-}based partitioning scheme, when $\gamma=2$, the value of $|\mathcal{C}^{T_{2}}|$ is $4199$, and the same for $\gamma=8$ is $569$; the values of $|\mathcal{C}^{T_{2}}|$ for $\gamma=9$ and $13$ are $589$ and $311$, and for $\gamma=14$ and $18$ they are $185$ and $69$.
\par From Figure \ref{fig:Infectious} c and d, it can be observed that for a fixed $\gamma$ ($=3$ in our experiments), as the $\Delta$ value is increased gradually, the cardinalities of $\mathcal{C}^{T_{1}}$, $\mathcal{C}^{T_{2}}$, and $\mathcal{C}^{T_{2} \setminus T_{1}}$ decrease very slightly. As an example, for the uniform time interval\mbox{-}based partitioning scheme, when $\Delta =60$, $360$, and $600$, the value of $|\mathcal{C}^{T_{1}}|$ is $1834$, $1552$, and $1526$, respectively. The reason behind this is as follows: for a given frequency of contacts, when the duration is increased, there is a high possibility that two or more maximal cliques get merged, which decreases the maximal clique count. However, the gradual increment of the $\Delta$ value leads to an increase of $|\mathcal{C}_{ex}^{T_{1}}|$ and $|\mathcal{C}_{check}|$. The increase of $|\mathcal{C}_{ex}^{T_{1}}|$ is sharper than that of $|\mathcal{C}_{check}|$, particularly for the uniform link count\mbox{-}based partitioning. As an example, for the uniform interval\mbox{-}based partitioning, when $\Delta=60$, the values of $|\mathcal{C}_{check}|$ and $|\mathcal{C}_{ex}^{T_{1}}|$ are $33$ and $24$, respectively; for $\Delta=660$, they are $1178$ and $2636$. In case of the uniform link count\mbox{-}based partitioning, the values of $|\mathcal{C}_{check}|$ and $|\mathcal{C}_{ex}^{T_{1}}|$ for $\Delta=60$ are $39$ and $36$, respectively; when the $\Delta$ value is increased to $600$, they become $4337$ and $9228$, respectively.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.2]{collegemsg_fixed_delta_43200_e.png} & \includegraphics[scale=0.2]{collegemsg_fixed_delta_43200_m.png} \\
(a) Uniform Interval-Based & (b) Uniform Link Count-Based \\
\includegraphics[scale=0.2]{collegemsg_fixed_gamma_5_e.png} & \includegraphics[scale=0.2]{collegemsg_fixed_gamma_5_m.png} \\
(c) Uniform Interval-Based & (d) Uniform Link Count-Based
\end{tabular}
\caption{Results for the change of clique count w.r.t. $\Delta$ and $\gamma$ for the College Message dataset; (a)-(b) fixed $\Delta = 43200$; (c)-(d) fixed $\gamma = 5$}
\label{fig:collegemsg}
\end{figure}
Figure \ref{fig:ht_cintact} a and b show the plots for the change in cardinality of $\mathcal{C}^{T_{1}}$, $\mathcal{C}_{ex}^{T_{1}}$, $\mathcal{C}^{T_{2} \setminus T_{1}}$, $\mathcal{C}^{T_{2}}$, and $\mathcal{C}_{check}$ with the change in $\gamma$ for a fixed $\Delta$ for the Hypertext dataset. In this dataset also, we conduct our experiments with $\Delta=360$. For these two plots, our observations are the same as for the Infectious dataset, i.e., with the increment of $\gamma$, the cardinalities of all these clique sets gradually decrease. As an example, for the uniform time interval\mbox{-}based partitioning, when $\gamma=2$, $10$, and $18$, the value of $|\mathcal{C}^{T_{2} \setminus T_{1}}|$ is $1307$, $364$, and $268$, respectively. Figure \ref{fig:ht_cintact} c and d show the plots for the change in cardinality of all the lists with the increment of $\Delta$ for a fixed $\gamma$ value. As for the Infectious dataset, we consider $\gamma=3$. In both partitioning schemes, with the increment of $\Delta$, the values of $|\mathcal{C}^{T_{1}}|$, $|\mathcal{C}^{T_{2}}|$, and $|\mathcal{C}^{T_{2} \setminus T_{1}}|$ decrease, whereas $|\mathcal{C}_{ex}^{T_{1}}|$ and $|\mathcal{C}_{check}|$ increase. As an example, when $\Delta=60$, the values of $|\mathcal{C}^{T_{1}}|$, $|\mathcal{C}^{T_{2}}|$, and $|\mathcal{C}^{T_{2} \setminus T_{1}}|$ are $1621$, $1765$, and $3378$, respectively; when $\Delta$ is increased to $600$, they are $928$, $837$, and $1742$, respectively. However, for $\Delta=60$ and $\Delta=600$, the values of $|\mathcal{C}_{ex}^{T_{1}}|$ and $|\mathcal{C}_{check}|$ are $86$, $91$ and $2527$, $1099$, respectively.
Similar to Infectious and Hypertext, Figure \ref{fig:collegemsg} shows the results for the College Message dataset. Due to the large life-cycle of this dataset, we select a higher value of $\Delta$ ($=43200$ sec, i.e., 12 hours) compared to the earlier ones. Figure \ref{fig:collegemsg} a and b show the plots for the change in cardinality of $\mathcal{C}^{T_{1}}$, $\mathcal{C}_{ex}^{T_{1}}$, $\mathcal{C}^{T_{2} \setminus T_{1}}$, $\mathcal{C}^{T_{2}}$, and $\mathcal{C}_{check}$ with the change in $\gamma$ for a fixed $\Delta$. In both partitioning schemes, the sizes of $\mathcal{C}_{ex}^{T_{1}}$ and $\mathcal{C}_{check}$ are small compared to the number of maximal cliques in each partition. This helps to scale up the computation with the number of partitions. In contrast to Infectious and Hypertext, here the clique counts increase with increasing $\Delta$ (Figure \ref{fig:collegemsg} c and d). In case of the \emph{uniform link count\mbox{-}based partitioning}, the size of $\mathcal{C}_{ex}^{T_{1}}$ becomes $12470$. This helps to exploit the partition mechanism and improve the computational efficiency of the algorithm. We will discuss this in detail in the next section.
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.2]{infectious_fixed_delta_360_ts_time.png} & \includegraphics[scale=0.2]{infectious_fixed_delta_360_ts_pspace.png}\\
(a) Fixed - $\Delta=360$ time & (b) Fixed - $\Delta=360$ space \\
\includegraphics[scale=0.2]{infectious_fixed_gamma_3_ts_time.png} & \includegraphics[scale=0.2]{infectious_fixed_gamma_3_ts_pspace.png}\\
(c) Fixed - $\gamma=3$ time & (d) Fixed - $\gamma=3$ space
\end{tabular}
\caption{Results for Computational Time and Space for Infectious dataset}
\label{fig:Computational_Time_infectious}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.2]{ht09contact_fixed_delta_360_ts_time.png} & \includegraphics[scale=0.2]{ht09contact_fixed_delta_360_ts_pspace.png} \\
(a) Fixed - $\Delta=360$ time & (b) Fixed - $\Delta=360$ space \\
\includegraphics[scale=0.2]{ht09contact_fixed_gamma_3_ts_time.png} & \includegraphics[scale=0.2]{ht09contact_fixed_gamma_3_ts_pspace.png}\\
(c) Fixed - $\gamma=3$ time & (d) Fixed - $\gamma=3$ space
\end{tabular}
\caption{Results for Computational Time and Space for Hypertext dataset}
\label{fig:Computational_Time_ht09contact}
\end{figure}
\subsubsection{Computational Time and Space}
Now, we turn our attention to the time and space requirements of our proposed methodology. Note that, here we report the space taken by the process for execution. In case of partitioning, we report the total time to compute $\mathcal{C}^{T_2}$ by adding the time required to process each partition. For the space, we report the maximum space required among the partitions. We continue our experiments in the settings described earlier (i.e., fixed $\Delta$ with varying $\gamma$, and fixed $\gamma$ with varying $\Delta$). Figures \ref{fig:Computational_Time_infectious}, \ref{fig:Computational_Time_ht09contact}, and \ref{fig:Computational_Time_collegemsg} show the plots for the Infectious, Hypertext, and College Message datasets, respectively. For computational time, two comparing values, with and without partitions, are shown for each of the partitioning schemes (\emph{EOC-UT} and \emph{EOC-ULC}). For computational space, we consider three comparing values: \emph{EOC-UT}, \emph{EOC-ULC}, and \emph{all links}. As we run all the experiments on a high-performance computing cluster with multiple programs running simultaneously, at job submission the cluster allocates the resources (cores) according to their availability and the current workload. Hence, we run both settings (with and without partition) in the same job to get an actual comparison. So, we report \emph{All links-UT} and \emph{All links-ULC} for computational time.
\par Now, we discuss the computational time and space requirements for fixed $\Delta$ setting. From the Figure \ref{fig:Computational_Time_infectious} a, \ref{fig:Computational_Time_ht09contact} a, and \ref{fig:Computational_Time_collegemsg} a, it can be observed that the computational time decreases exponentially with the increase of $\gamma$. The reason is with the increment of $\gamma$, the number of maximal cliques decreases (Refer \ref{fig:Infectious} a and b, \ref{fig:ht_cintact} a and b, \ref{fig:collegemsg} a and b). It is also observed that compared to the \emph{all links}, the partition\mbox{-}based schemes lead to an improvement in computational time and the improvement is more when the $\gamma$ value is less. This is because for lower values of $\gamma$, the size of $\mathcal{C}_{ex}^{T_{1}}$ is more, and as shown previously these cliques will only be expanded by extending its right time stamp. However, these cliques will be expanded by vertex addition, and both right and left time stamp expansion while processing all the links at a time. Hence, the larger size of $\mathcal{C}_{ex}^{T_{1}}$ leads to more improvement in computational time. For `Hypertext' dataset (Figure \ref{fig:Computational_Time_ht09contact} a ), the computational time increases for $\gamma \geq 16$ due to the growth in intermediate clique count, while building $\mathcal{C}^{T_{2} \setminus T_{1}}$. Now, we turn our attention for space requirement analysis in Figure \ref{fig:Computational_Time_infectious} b, \ref{fig:Computational_Time_ht09contact} b, and \ref{fig:Computational_Time_collegemsg} b. For a fixed $\Delta$, when the value of $\gamma$ increases, the space requirement decreases. Also, the partition schemes lead to an improvement in terms of space compared to All-Links. As we have made two partitions, in majority of the cases the space requirement become approximately half for small value of $\gamma$. 
As the number of maximal cliques is comparatively larger for small $\gamma$, the improvement is also significant. However, for large $\gamma$ the improvement is negligible in Infectious and College Message (Figures \ref{fig:Computational_Time_infectious} b and \ref{fig:Computational_Time_collegemsg} b, respectively). Also, the space requirement of EOC-ULC is always less than that of EOC-UT. As explained earlier, the size of $\mathcal{C}^{T_1}_{ex}$ plays a major role in improving the computational time in the partition-based scheme; it similarly affects the space requirement. From Figures \ref{fig:Infectious} a and b, \ref{fig:ht_cintact} a and b, and \ref{fig:collegemsg} a and b, it can be noted that the size of $\mathcal{C}^{T_1}_{ex}$ is larger in the \emph{uniform link count\mbox{-}based} than in the \emph{uniform time interval\mbox{-}based} partition scheme. This results in a greater improvement in space requirement in the case of EOC-ULC.
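As a minimal illustration of the two splitting criteria compared above (our own sketch, not the paper's implementation; in particular it ignores the boundary overlap between consecutive partitions that the correctness condition requires), the partitions can be produced from a time-sorted list of links $(u,v,t)$ as follows:

```python
# Illustrative sketch only: the two partition criteria, uniform time
# interval (as in EOC-UT) vs uniform link count (as in EOC-ULC), applied
# to a list of temporal links (u, v, t). Function names are ours.

def uniform_time_partition(links, k):
    """Split links into k partitions covering equal-width time intervals."""
    links = sorted(links, key=lambda e: e[2])
    t0, t1 = links[0][2], links[-1][2]
    width = (t1 - t0) / k
    parts = [[] for _ in range(k)]
    for u, v, t in links:
        idx = min(int((t - t0) / width), k - 1)  # clamp the last boundary
        parts[idx].append((u, v, t))
    return parts

def uniform_link_count_partition(links, k):
    """Split links into k partitions with (almost) equal numbers of links."""
    links = sorted(links, key=lambda e: e[2])
    n = len(links)
    return [links[i * n // k:(i + 1) * n // k] for i in range(k)]
```

On skewed temporal data, the first scheme yields partitions with very unequal link counts, which is why the link-count-based split balances the per-partition work better.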
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.2]{collegemsg_fixed_delta_43200_ts_time.png} & \includegraphics[scale=0.2]{collegemsg_fixed_delta_43200_ts_pspace.png} \\
(a) Fixed - $\Delta=43200$ time & (b) Fixed - $\Delta=43200$ space \\
\includegraphics[scale=0.2]{collegemsg_fixed_gamma_5_ts_time.png} & \includegraphics[scale=0.2]{collegemsg_fixed_gamma_5_ts_pspace.png}\\
(c) Fixed - $\gamma=5$ time & (d) Fixed - $\gamma=5$ space
\end{tabular}
\caption{ Results for Computational Time and Space for College Message dataset}
\label{fig:Computational_Time_collegemsg}
\end{figure}
\par Figures \ref{fig:Computational_Time_infectious} c, \ref{fig:Computational_Time_ht09contact} c, and \ref{fig:Computational_Time_collegemsg} c show the plots of the computational time requirement for fixed $\gamma$ and varying $\Delta$. From these plots, it is observed that with the increase of $\Delta$, in most of the instances the computational time increases. It is also observed that the partition-based schemes lead to an improvement over `all links'. In particular, between the EOC-UT and EOC-ULC schemes, in most of the cases the computational time requirement of the EOC-UT scheme is less. This effect arises for the same reason as described earlier for the varying $\gamma$ scenario. Similar to computational time, the space requirement grows with the increase of $\Delta$, and the improvement is larger in the EOC-ULC partition scheme compared to EOC-UT (Figures \ref{fig:Computational_Time_infectious} d, \ref{fig:Computational_Time_ht09contact} d, and \ref{fig:Computational_Time_collegemsg} d). The rate of improvement also increases for large values of $\Delta$ with a fixed $\gamma$.
\par In all these datasets, the computational time is on the order of seconds. Hence, we experiment on the large dataset AS180 to observe the improvement in computational time and space at a large scale. As the number of links at each time (day) is huge (Figure \ref{fig:dataset_description} d), we partition the entire dataset into three partitions with the uniform link count\mbox{-}based partition scheme. The results are reported in Table \ref{tab:as180_fixed_gamma}. The data is collected on each day, and 180 unique values of $t$ are present. We set $\Delta$ to 5, 10, 15, and 20. In all the cases, the improvement of the partitioned over the non-partitioned execution is significant in both time and space.
\begin{table}
\centering
\begin{tabular}{|c|c||c|c||c|c|}
\hline
& & \multicolumn{2}{c||}{Computational Time (hour)} & \multicolumn{2}{c|}{Program Space (GB)} \\
\hline
$\Delta$ & $\gamma$ & with partition & without partition & with partition & without partition \\ \hline
5 & 3 & 7.52344 & 20.42738 & 77.42066 & 133.41058 \\ \hline
10 & 3 & 8.24277 & 19.5564 & 55.2932 & 92.70591 \\ \hline
15 & 3 & 9.41666 & 17.89813 & 47.7809 & 79.45884 \\ \hline
20 & 3 & 9.52877 & 17.79293 & 44.84441 & 74.34061 \\ \hline
20 & 5 & 8.86788 & $>$ 96 & 50.54501 & $>$ 256 \\ \hline
\end{tabular}
\caption{Results for Computational Time and Space for AS180 dataset}
\label{tab:as180_fixed_gamma}
\end{table}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.2]{partition_ht09contact_fixed_delta_360_ts_time.png} & \includegraphics[scale=0.2]{partition_ht09contact_fixed_delta_360_ts_space.png} \\
(a) Fixed - $\Delta=360$ time & (b) Fixed - $\Delta=360$ space \\
\includegraphics[scale=0.2]{partition_ht09contact_fixed_gamma_3_ts_time.png} & \includegraphics[scale=0.2]{partition_ht09contact_fixed_gamma_3_ts_space.png}\\
(c) Fixed - $\gamma=3$ time & (d) Fixed - $\gamma=3$ space
\end{tabular}
\caption{Computational Time and Space for Hypertext dataset based on number of partitions}
\label{fig:partition_ht09contact}
\end{figure}
\subsubsection{Effect on the Number of Partitions}
To understand the effect of the number of partitions, we select the Hypertext dataset for this experiment. From Figure \ref{fig:dataset_description} b, we can observe that there exist three clear partitions in the dataset. We treat the no-partition case as number of partitions = 1 and EOC-ULC as number of partitions = 2. For the case of number of partitions = 3, out of $T_0, T_1, T_2, T_3$, we select $T_1 = 70800$ and $T_2 = 160000$, where $T_0$ and $T_3$ are the start and end times of the temporal network, respectively. This satisfies the condition of Lemma \ref{lemma:1} for $\Delta = 360$ and $\gamma = 3$, which exploits the maximum effective improvement by partitioning. We report the results for computational time and space, with varying $\Delta$ and $\gamma$, in Figure \ref{fig:partition_ht09contact}. The effect of partitioning clearly scales up the performance. However, for large $\gamma$, i.e., $\gamma \longrightarrow \Delta$ (the data is captured every 20 seconds), the improvement is not observed, as seen for $\gamma \longrightarrow 18$. In contrast, the improvement is more significant with increasing $\Delta$.
\section{Conclusion and Future Research Directions} \label{Sec:CFD}
In this paper, we have introduced the Maximal $(\Delta,\gamma)$\mbox{-}Clique Updation Problem and proposed the `Edge on Clique' framework. We have established the correctness of the proposed methodology and analyzed its time and space requirements. Also, we have conducted an extensive set of experiments with four publicly available temporal network datasets. Experimental results show that the proposed methodology can be used to update the $(\Delta,\gamma)$\mbox{-}cliques efficiently. An immediate future research direction is to consider the probabilistic nature of the links and modify the proposed methodology so that it can be used in the probabilistic setting as well.
\bibliographystyle{spbasic}
\section{Introduction}
In a number of applications from physics, financial engineering, biology and
chemistry it is of interest to compute expectations of some functionals of
solutions of ordinary stochastic differential equations (SDE) and stochastic
partial differential equations (SPDE) driven by white noise. Usually,
evaluation of such expectations requires to approximate solutions of
stochastic equations and then to compute the corresponding averages with
respect to the approximate trajectories. We will not consider the former in
this paper (see, e.g. \cite{MilTre-B04} and references therein) and will
concentrate on the latter. The most commonly used approach for computing the
averages is the Monte Carlo technique, which is known for its slow rate of
convergence and hence limits the
computational efficiency of stochastic simulations. To speed up computation
of the averages, variance reduction techniques (see, e.g. \cite{MilTre-B04,MilTre09} and the references therein), quasi-Monte Carlo
algorithms \cite{Nie-B92,SloJoe-B94} {\color{black}and multi-level quasi-Monte Carlo methods \cite{KuoSS12a}}, and the multi-level Monte Carlo method
\cite{Gil08,Gil13} have been proposed and used.
An alternative approach to computing the averages is (stochastic)
collocation methods in random space, which are deterministic methods in
comparison with the Monte Carlo-type methods that are based on a statistical
estimator of a mean. The expectation can
be viewed as an integral with respect to the measure corresponding to
approximate trajectories. In stochastic collocation methods, one uses
(deterministic) high-dimensional quadratures to evaluate these integrals. In
the context of uncertainty quantification where moments of stochastic
solutions are sought, collocation methods and their close counterparts
(e.g., Wiener chaos expansion-based methods) have been very effective in
reducing the overall computational cost in engineering problems, see e.g.
\cite{GhaSpa-B91,TatMcR94,XiuKar02a}.
Stochastic equations or differential equations with randomness can be split
into differential equations perturbed by time-independent noise and by
time-dependent noise. It has been demonstrated in a number of works (see
e.g. \cite{BabTZ04,BieSch09,BabNT07,XiuHes05,MotNT12,NobTem09,ZhangGun12}
and references therein) that stochastic collocation methods can be a
competitive alternative to the Monte Carlo technique and its variants in the
case of differential equations perturbed by time-independent noise. The
success of these methods relies on smoothness in the random space and can
usually be achieved when it is sufficient to consider only a limited number
of random variables (i.e., in the case of a low dimensional random space).
The small number of random variables significantly limits the applicability
of stochastic collocation methods to differential equations perturbed by
time-dependent noise as, in particular, it will be demonstrated in this
paper.
The class of stochastic collocation methods for SDE with time-dependent
white noise includes cubatures on Wiener space \cite{LyoVic04},
derandomization \cite{MulRY12}, optimal quantization \cite{PagPha05,PagPri03}
and sparse grids of Smolyak type \cite{Ger-Phd07,GerGri98,GriHol10}. While
derandomization and optimal quantization aim at finding quadrature rules
which are in some sense optimal for computing a particular expectation under
consideration, cubatures on Wiener space and a stochastic collocation method
using Smolyak sparse grid quadratures (a sparse grid collocation method,
SGC) use pre-determined quadrature rules in a universal way without being
tailored towards a specific expectation {\color{black}unless some adaptive strategies are applied}. Since SGC is endowed with negative
weights, it is, in practice, different from cubatures on Wiener space, where
only quadrature rules with positive weights are used. Among quadrature
rules, SGC is of particular interest due to its computational convenience.
It has been considered in computational finance \cite{Ger-Phd07,GriHol10},
where high accuracy was observed. We note that the use of SGC in \cite{Ger-Phd07,GriHol10} relies on exact sampling of geometric Brownian motion
and of solutions of other simple SDE models, i.e., SGC in these works was
not studied in conjunction with SDE approximations.
In this paper, we consider a SGC method accompanied by time discretization
of differential equations perturbed by time-dependent noise. Our objective
is twofold. {\em First}, using both analytical and numerical results, we warn that
straightforward carrying over stochastic collocation methods and, in
particular, SGC to the case of differential equations perturbed by
time-dependent noise (SDE or SPDE) usually leads to a failure. The main
reason for this failure is that when integration time increases and/or time
discretization step decreases, the number of random variables in
approximation of SDE and SPDE grows quickly. The number of collocation
points required for sufficient accuracy\ of collocation methods grows
exponentially with the number of random variables. This results in
{\color{black}failure} of algorithms based on SGC and SDE time discretizations.
Further, due to empirical evidence (see e.g. \cite{Pet03}), the use of SGC
is limited to problems with random space dimensionality of up to $40.$
Consequently, SGC algorithms for differential equations perturbed by
time-dependent noise can be used only over small time intervals unless a
cure for its fundamental limitation is found.
In Section~2 (after a brief introduction to the sparse grid of Smolyak \cite{Smolyak63} (see also \cite{WasWoz95,GerGri98,XiuHes05}) and to the
weak-sense numerical integration for SDE (see, e.g. \cite{MilTre-B04})), we
obtain an error estimate for a SGC method accompanied by the Euler scheme
for evaluating expectations of smooth functionals of solutions of a scalar
linear SDE with additive noise. In particular, we conclude that the SGC can
successfully work for a small magnitude of noise and relatively short
integration time while {\color{black} in general} it converges neither with decrease of the
time discretization step used for SDE approximation nor with increase of the
level of Smolyak's sparse grid{\color{black}; see Remark \ref{rem:convergence-sgc}}. Numerical tests in Section~\ref{sec:num-experiments-sg} confirm our theoretical conclusions and we also
observe first-order convergence in time step size of the algorithm
using the SGC method as long as the SGC error is small relative to the error
of time discretization of SDE. We note that our conclusion is, to some
extent, similar to that for cubatures on Wiener space \cite{CasLit11}, for
Wiener chaos method \cite{HouLRZ06,LotMR97,LotRoz06,ZhangRTK12} and some
other functional expansion approaches \cite{BudKal96,BudKal97}.
The {\em second objective} of the paper is to suggest a possible cure for the
aforementioned deficiencies, which prevent SGC to be used over longer time
intervals. For longer time simulation, deterministic replacements (such as
stochastic collocation methods and functional expansion methods) of the
Monte Carlo technique in simulation of differential equations perturbed by
time-dependent noise do not work effectively unless some restarting
strategies allowing to `forget' random variables from earlier time steps are
employed. Examples of such strategies are the recursive approach for Wiener
chaos expansion methods to compute moments of solutions to{\ linear} SPDE
\cite{LotMR97,ZhangRTK12} and an approach for cubatures on Wiener space
based on compressing the history data via a regression at each time step
\cite{LitLyo12}.
Here we exploit the idea of the recursive approach to achieve accurate
longer time integration by numerical algorithms using the SGC. For linear
SPDE with {\em time-independent} coefficients, the recursive approach works as
follows. We first find an approximate solution of an SPDE at a relatively
small time $t=h$, and subsequently take the approximation at $t=h$ as the
initial value in order to compute the approximate solution at $t=2h$, and so
on, until we reach the final integration time $T=Nh$. To find second moments
of the SPDE solution, we store a covariance matrix of the approximate
solution at each time step $kh$ and recursively compute the first two
moments. Such an algorithm is proposed in Section~\ref{sec:recursive-sg-advdiff}; in Section~\ref{sec:num-experiments-sg} we
demonstrate numerically that this algorithm converges in time step $h$ and
that it can work well on longer time intervals. At the same time, a major
challenge remains: how to effectively use restarting strategies for SGC in
the case of nonlinear SDE and SPDE and further work is needed in this
direction.
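The restarting idea can be made concrete for a scalar linear example (a sketch under our own assumptions: the test equation $dX=-\lambda X\,dt+\sigma\,dw$, the weak Euler scheme with $\zeta=\pm 1$, and moments as the quantities of interest, none of which are fixed by the text above). Each step integrates out its single random variable with a two-point quadrature and carries forward only the first two moments, so the dimension of the random space never grows with the number of steps:

```python
import math

# Sketch (our illustration, not the algorithm of the paper): restarted
# moment propagation for the weak Euler scheme applied to the
# Ornstein-Uhlenbeck SDE dX = -lam*X dt + sigma dW. Each step integrates
# out its single random variable zeta = +-1 (weights 1/2) and keeps only
# the mean m and second moment s of the current state.
def restarted_moments(x0, lam, sigma, h, N):
    nodes, weights = (1.0, -1.0), (0.5, 0.5)
    m, s = x0, x0 * x0
    a = 1.0 - lam * h                      # one-step Euler multiplier
    c = sigma * math.sqrt(h)
    for _ in range(N):
        # E[a*X + c*zeta] and E[(a*X + c*zeta)^2] over the 2-point rule,
        # using E[X] = m and E[X^2] = s from the previous step:
        m_new = sum(w * (a * m + c * z) for z, w in zip(nodes, weights))
        s_new = sum(w * (a * a * s + 2 * a * c * z * m + c * c * z * z)
                    for z, w in zip(nodes, weights))
        m, s = m_new, s_new
    return m, s
```

For linear problems the first two moments close on themselves, which is exactly why this restarting works; the nonlinear case discussed above does not admit such a finite closure.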
\section{Sparse grid for weak integration of SDE}
\subsection{Smolyak's sparse grid\label{sec:ssg}}
Sparse grid quadrature is a certain reduction of product quadrature rules
which decreases the number of quadrature nodes and allows effective
integration in moderately high dimensions \cite{Smolyak63} (see also \cite{WasWoz95,NovRit99,GerGri98}). Here we introduce it in the form suitable for
our purposes.
We will be interested in evaluating $d$-dimensional integrals of a function
$\varphi (y),$ $y\in \mathbb{R}^{d},$ with respect to a Gaussian measure:
\begin{equation}
I_{d}\varphi :=\frac{1}{\left( 2\pi \right) ^{d/2}}\int_{\mathbb{R}^{d}}\varphi (y)\exp \left( -\frac{1}{2}\sum_{i=1}^{d}y_{i}^{2}\right)
\,dy_{1}\cdots dy_{d}. \label{Defi}
\end{equation}
Consider a sequence of one-dimensional Gauss--Hermite quadrature rules
$Q_{n}$ with number of nodes $n\in \mathbb{N}$ for univariate functions
$\psi (\mathsf{y}),$ $\mathsf{y}\in \mathbb{R}$:
\begin{equation}
Q_{n}\psi (\mathsf{y})=\sum_{\alpha =1}^{n}\psi (\mathsf{y}_{n,\alpha })\mathsf{w}_{n,\alpha }, \label{eq:1d-Gauss-Hermite-rule}
\end{equation}
where $\mathsf{y}_{n,1}<\mathsf{y}_{n,2}<\cdots <\mathsf{y}_{n,n}$ are the
roots of the Hermite polynomial
$H_{n}(\mathsf{y})=(-1)^{n}e^{\mathsf{y}^{2}/2}\frac{d^{n}}{d\mathsf{y}^{n}}e^{-\mathsf{y}^{2}/2}$
and $\mathsf{w}_{n,\alpha }=n!/(n^{2}[H_{n-1}(\mathsf{y}_{n,\alpha })]^{2})$
are the associated weights. It is known that $Q_{n}\psi $ is exactly equal
to the integral $I_{1}\psi $ when $\psi $ is a polynomial of degree less
than or equal to $2n-1,$ i.e., the polynomial degree of exactness of
Gauss--Hermite quadrature rules $Q_{n}$ is equal to $2n-1.$
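The rule $(\ref{eq:1d-Gauss-Hermite-rule})$ can be obtained from standard library routines; a sketch (`gauss_hermite` is our name) using NumPy's probabilists' Gauss--Hermite nodes, with the weights normalized to sum to one so that the rule integrates against the standard normal density:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Sketch (ours): nodes and weights of the Gauss-Hermite rule for the
# standard normal weight. hermegauss integrates against exp(-y^2/2), with
# weights summing to sqrt(2*pi); dividing by the weight sum turns it into
# a probability rule, i.e. the rule Q_n of the text.
def gauss_hermite(n):
    y, w = hermegauss(n)
    return y, w / w.sum()
```

For instance, `gauss_hermite(3)` reproduces $\mathbb{E}\,\mathsf{y}^{4}=3$ exactly, consistent with the polynomial degree of exactness $2n-1=5$.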
We can approximate the multidimensional integral $I_{d}\varphi $ by a
quadrature expressed as the tensor product rule
\begin{eqnarray}
I_{d}\varphi &\approx &\bar{I}_{d}\varphi :=Q_{n}\otimes Q_{n}\cdots \otimes
Q_{n}\varphi (y_{1},y_{2},\cdots ,y_{d})=Q_{n}^{\otimes d}\varphi
(y_{1},y_{2},\cdots ,y_{d}) \label{appi} \\
&=&\sum_{\alpha _{1}=1}^{n}\cdots \sum_{\alpha _{d}=1}^{n}\varphi (\mathsf{y}_{n,\alpha _{1}},\ldots ,\mathsf{y}_{n,\alpha _{d}})\mathsf{w}_{n,\alpha
_{1}}\cdots \mathsf{w}_{n,\alpha _{d}}, \notag
\end{eqnarray}
where for simplicity we use the same number of nodes in all the directions.
The quadrature $\bar{I}_{d}\varphi $ is exact for all polynomials from the
space $\mathcal{P}_{k_{1}}\otimes \cdots \otimes \mathcal{P}_{k_{d}}$ with
$\max k_{i}=2n-1,$ where $\mathcal{P}_{k}$ is the space of one-dimensional
polynomials of degree less than or equal to $k$ (we note in passing that
this fact is easy to prove using probabilistic representations of
$I_{d}\varphi $ and $\bar{I}_{d}\varphi ).$ Computational costs of quadrature
rules are measured in terms of a number of function evaluations which is
equal to $n^{d}$ in the case of the tensor product (\ref{appi}), i.e., the
computational cost of (\ref{appi}) grows exponentially fast with dimension.
The sparse grid of Smolyak \cite{Smolyak63} reduces computational complexity
of the tensor product rule (\ref{appi}) via exploiting the difference
quadrature formulas:
\begin{equation*}
A(L,d)\varphi :=\sum_{d\leq \left\vert \mathbf{i}\right\vert \leq
L+d-1}(Q_{i_{1}}-Q_{i_{1}-1})\otimes \cdots \otimes
(Q_{i_{d}}-Q_{i_{d}-1})\varphi ,
\end{equation*}
where $Q_{0}=0$ and $\mathbf{i}=(i_{1},i_{2},\ldots ,i_{d})$ is a
multi-index with $i_{k}\geq 1$ and $\left\vert \mathbf{i}\right\vert
=i_{1}+i_{2}+\cdots +i_{d}$. The number $L$ is usually referred to as \emph{the
level of the sparse grid}. The sparse grid rule (\ref{eq:smolyak-tensor-like}) can also be written in the following form \cite{WasWoz95}:
\begin{equation}
A(L,d)\varphi =\sum_{L\leq \left\vert \mathbf{i}\right\vert \leq
L+d-1}(-1)^{L+d-1-\left\vert \mathbf{i}\right\vert }\binom{d-1}{\left\vert
\mathbf{i}\right\vert -L}Q_{i_{1}}\otimes \cdots \otimes Q_{i_{d}}\varphi .
\label{eq:smolyak-tensor-like}
\end{equation}
The quadrature $A(L,d)\varphi $ is exact for polynomials from the space
$\mathcal{P}_{k_{1}}\otimes \cdots \otimes \mathcal{P}_{k_{d}}$ with
$|\mathbf{k}|=2L-1,$ i.e., for polynomials of total degree up to $2L-1$ \cite[Corollary 1]{NovRit99}. Due to (\ref{eq:smolyak-tensor-like}), the total
number of nodes used by this sparse grid rule is estimated by
\begin{equation*}
\#S\leq \sum_{L\leq \left\vert \mathbf{i}\right\vert \leq L+d-1}i_{1}\times
\cdots \times i_{d}.
\end{equation*}
Table \ref{tbl:no-sgc-level5} lists the number of sparse grid points, $\#S$,
up to level 5 when the level is not greater than $d$.
\begin{table}[tbph]
\caption{The number of sparse grid points for the sparse grid quadrature
\eqref{eq:smolyak-tensor-like} using the one-dimensional Gauss-Hermite
quadrature rule \eqref{eq:1d-Gauss-Hermite-rule}, when the sparse grid level
$L$ $\leq d$.}
\label{tbl:no-sgc-level5}
\begin{center}
\scalebox{0.85}{
\begin{tabular}{c|ccccccc}
\hline
& $L=1$ & $L=2$ & $L=3$ & $L=4$ & $L=5$ & & \\ \hline
$\#S$ & $1$ & $2d+1$ & $2d^2+2d+1$ & $\frac{4}{3}d^3+2d^2+\frac{14}{3}d+1$ &
$\frac{2}{3}d^4+\frac{4}{3}d^3 +\frac{22}{3}d^2+\frac{8}{3}d+1$ & & \\
\hline
\end{tabular}}
\end{center}
\end{table}
The quadrature $\bar{I}_{d}\varphi $ from (\ref{appi}) is exact for
polynomials of total degree $2L-1$ when $n=L.$ It is not difficult to see
that if the required polynomial exactness (in terms of total degree of
polynomials) is relatively small then the sparse grid rule (\ref{eq:smolyak-tensor-like}) substantially reduces the number of function
evaluations compared with the tensor-product rule (\ref{appi}). For
instance, suppose that the dimension $d=40$ and the required polynomial
exactness is equal to $5.$ Then the cost of the tensor product rule (\ref{appi}) with $n=3$ is $3^{40} \approx 1.2158\times 10^{19}$ while the cost of the
sparse grid rule (\ref{eq:smolyak-tensor-like}) with level $L=3$ based on the one-dimensional
rule \eqref{eq:1d-Gauss-Hermite-rule} is $3281.$
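A direct, unoptimized sketch of the combination formula $(\ref{eq:smolyak-tensor-like})$ (our own illustration; it re-evaluates repeated nodes instead of merging them, and uses the normalized Gauss--Hermite rules $Q_{1},\ldots,Q_{L}$):

```python
import itertools
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_hermite(n):
    # probabilists' Gauss-Hermite rule, weights normalized to sum to 1
    y, w = hermegauss(n)
    return y, w / w.sum()

# Sketch (ours): the Smolyak combination rule A(L,d), summing over the
# multi-indices i with L <= |i| <= L+d-1 (each i_k <= L then follows).
def smolyak(f, d, L):
    total = 0.0
    for i in itertools.product(range(1, L + 1), repeat=d):
        s = sum(i)
        if not (L <= s <= L + d - 1):
            continue
        coeff = (-1) ** (L + d - 1 - s) * math.comb(d - 1, s - L)
        rules = [gauss_hermite(n) for n in i]
        for idx in itertools.product(*(range(n) for n in i)):
            pt = [rules[k][0][idx[k]] for k in range(d)]
            wt = math.prod(rules[k][1][idx[k]] for k in range(d))
            total += coeff * wt * f(pt)
    return total
```

For example, $A(2,3)$ integrates $y_{1}^{2}+y_{1}y_{2}+y_{3}^{3}$ (total degree $3=2L-1$) exactly against the standard Gaussian, giving $1$.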
\begin{rem}
In this paper we consider the isotropic SGC. More efficient algorithms might
be built using anisotropic SGC methods \cite{GriHol10,NobTW08}, which employ
more quadrature points along the \textquotedblleft most
important\textquotedblright\ direction. Goal-oriented quadrature rules, e.g.
\cite{MulRY12,PagPha05,PagPri03}, can also be exploited instead of
pre-determined quadrature rules used here. {\color{black}
However, the effectiveness of adaptive sparse grids relies heavily on identifying the order of importance of the random dimensions of numerical solutions to stochastic differential equations, which is not always easy to achieve. Furthermore, the cost of all these sparse grids as integration methods in random space grows quickly with the number of random dimensions, and thus they cannot be used for longer-time integration (which usually involves large random dimensions). Hence, we consider only the isotropic SGC.}
\end{rem}
\subsection{Weak-sense integration of SDE}
Let $(\Omega ,\mathcal{F},P)$ be a probability space and $(w(t),\mathcal{F}_{t}^{w})=((w_{1}(t),\ldots ,w_{r}(t))^{\intercal },\mathcal{F}_{t}^{w})$ be
an $r$-dimensional standard Wiener process, where $\mathcal{F}_{t}^{w},\
0\leq t\leq T,$ is an increasing family of $\sigma $-subalgebras of
$\mathcal{F}$ induced by $w(t).$
Consider the system of Ito SDE
\begin{equation}
dX=a(t,X)dt+\sum_{l=1}^{r}\sigma _{l}(t,X)dw_{l}(t),\ \ t\in (t_{0},T],\
X(t_{0})=x_{0}, \label{eq:ito-sde-vector}
\end{equation}
where $X,$\ $a,$\ $\sigma _{l}$ are $m$-dimensional column-vectors and
$x_{0}$ is independent of $w$. We assume that $a(t,x)$ and $\sigma (t,x)$
are sufficiently smooth and globally Lipschitz. We are interested in
computing the expectation
\begin{equation}
u(x_{0})=\mathbb{E}f(X_{t_{0},x_{0}}(T)), \label{eq:weak-apprx-def}
\end{equation}
where $f(x)$ is a sufficiently smooth function with growth at infinity not
faster than a polynomial:
\begin{equation}
\left\vert f(x)\right\vert \leq K(1+|x|^{\varkappa }) \label{f_cond}
\end{equation}
for some $K>0$ and $\varkappa \geq 1.$
To find $u(x_{0}),$ we first discretize the solution of (\re
{eq:ito-sde-vector}). Let
\begin{equation*}
h=(T-t_{0})/N,\ \ t_{k}=t_{0}+kh,\ \ k=0,\ldots ,N.
\end{equation*}
In application to \eqref{eq:ito-sde-vector} the Euler scheme has the form
\begin{equation}
X_{k+1}=X_{k}+a(t_{k},X_{k})h+\sum_{l=1}^{r}\sigma _{l}(t_{k},X_{k})\Delta
_{k}w_{l}, \label{eq:ito-sde-vector-euler}
\end{equation}
where $X_{0}=x_{0}$ and $\Delta _{k}w_{l}=w_{l}(t_{k+1})-w_{l}(t_{k})$. The
Euler scheme can be realized in practice by replacing the increments $\Delta
_{k}w_{l}$ with Gaussian random variables:
\begin{equation}
X_{k+1}=X_{k}+a(t_{k},X_{k})h+\sum_{l=1}^{r}\sigma _{l}(t_{k},X_{k})\sqrt{h}\,\xi _{l,k+1}, \label{eq:ito-sde-vec-strong-euler}
\end{equation}
where $\xi _{l,k+1}$ are i.i.d. $\mathcal{N}(0,1)$ random variables. Due to
our assumptions, the following error estimate holds for
\eqref{eq:ito-sde-vec-strong-euler} (see e.g. \cite[Chapter 2]{MilTre-B04}):
\begin{equation}
|\mathbb{E}f(X_{N})-\mathbb{E}f(X(T))|\leq Kh,
\label{eq:ito-sde-vec-strongEuler-error}
\end{equation}
where $K>0$ is a constant independent of $h$. This first-order weak
convergence can also be achieved by replacing $\xi _{l,k+1}$ with discrete
random variables \cite{MilTre-B04}, e.g., the weak Euler scheme has the form
\begin{equation}
\tilde{X}_{k+1}=\tilde{X}_{k}+ha(t_{k},\tilde{X}_{k})+\sqrt{h}\sum_{l=1}^{r}\sigma _{l}(t_{k},\tilde{X}_{k})\zeta _{l,k+1},\ \ \ \
k=0,\ldots ,N-1, \label{eq:ito-sde-vec-weak-euler}
\end{equation}
where $\tilde{X}_{0}=x_{0}$ and $\zeta _{l,k+1}$ are i.i.d. random variables
with the law
\begin{equation}
P(\zeta =\pm 1)=1/2. \label{eq:weak-euler-discrete-apprx-gauss}
\end{equation}
The following error estimate holds for \eqref{eq:ito-sde-vec-weak-euler}-\eqref{eq:weak-euler-discrete-apprx-gauss} (see e.g. \cite[Chapter 2]{MilTre-B04}):
\begin{equation}
|\mathbb{E}f(\tilde{X}_{N})-\mathbb{E}f(X(T))|\leq Kh\ ,
\label{eq:ito-sde-vec-weakEuler-error}
\end{equation}
where $K>0$ can be a different constant than in (\ref{eq:ito-sde-vec-strongEuler-error}).
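For illustration (our own test problem, not one from the paper), the weak Euler scheme $\eqref{eq:ito-sde-vec-weak-euler}$-$\eqref{eq:weak-euler-discrete-apprx-gauss}$ can be combined with Monte Carlo sampling of the $\pm 1$ variables for the Ornstein--Uhlenbeck equation $dX=-X\,dt+0.5\,dw$ with $f(x)=x^{2}$, where $\mathbb{E}f(X(T))$ is known in closed form; the observed error combines the $O(h)$ bias with the Monte Carlo statistical error:

```python
import math

import numpy as np

# Sketch (our test problem): weak Euler scheme with zeta = +-1 for the
# scalar SDE dX = -X dt + 0.5 dW, f(x) = x^2, estimated by Monte Carlo.
def weak_euler_mc(x0=1.0, T=1.0, h=0.1, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    N = round(T / h)
    X = np.full(n_paths, x0)
    for _ in range(N):
        zeta = rng.choice([-1.0, 1.0], size=n_paths)  # discrete law P(zeta=+-1)=1/2
        X = X - X * h + 0.5 * math.sqrt(h) * zeta
    return (X**2).mean()

def exact_second_moment(x0=1.0, T=1.0):
    # d/dt E X^2 = -2 E X^2 + 0.25, hence the closed form below
    return x0**2 * math.exp(-2 * T) + 0.125 * (1 - math.exp(-2 * T))
```

With $h=0.1$ and $2\cdot 10^{5}$ paths the estimate is within a few times $10^{-3}$ of the exact value, dominated by the $O(h)$ weak-sense bias.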
Introducing the function $\varphi (y),$ $y\in \mathbb{R}^{rN},$ so that
\begin{equation}
\varphi (\xi _{1,1},\ldots ,\xi _{r,1},\ldots ,\xi _{1,N},\ldots ,\xi
_{r,N})=f(X_{N}), \label{eq:1d-sde-functional-weak}
\end{equation}
we have
\begin{eqnarray}
u(x_{0}) &\approx &\bar{u}(x_{0}):=\mathbb{E}f(X_{N})=\mathbb{E}\varphi (\xi
_{1,1},\ldots ,\xi _{r,1},\ldots ,\xi _{1,N},\ldots ,\xi _{r,N})
\label{eq:strong-euler-mean-def} \\
&=&\frac{1}{(2\pi )^{rN/2}}\int_{\mathbb{R}^{rN}}\varphi (y_{1,1},\ldots
,y_{r,1},\ldots ,y_{1,N},\ldots ,y_{r,N})\exp \left( -\frac{1}{2}\sum_{i=1}^{rN}y_{i}^{2}\right) \,dy. \notag
\end{eqnarray}
Further, it is not difficult to see from \eqref{eq:ito-sde-vec-weak-euler}
\eqref{eq:weak-euler-discrete-apprx-gauss} and (\ref{appi}) that
\begin{eqnarray}
u(x_{0}) &\approx &\tilde{u}(x_{0}):=\mathbb{E}f(\tilde{X}_{N})=\mathbb{E}\varphi (\zeta _{1,1},\ldots ,\zeta _{r,1},\ldots ,\zeta _{1,N},\ldots
,\zeta _{r,N}) \label{eq:ito-sde-weak-euler-functional-mean-tensor} \\
&=&Q_{2}^{\otimes rN}\varphi (y_{1,1},\ldots ,y_{r,1},\ldots ,y_{1,N},\ldots
,y_{r,N}), \notag
\end{eqnarray}
where $Q_{2}$ is the Gauss-Hermite quadrature rule {\color{black}defined in \eqref{eq:1d-Gauss-Hermite-rule} with $n=2$ (i.e., the nodes $\mathsf{y}_{2,1}=-1$, $\mathsf{y}_{2,2}=1$ with weights $\mathsf{w}_{2,1}=\mathsf{w}_{2,2}=1/2$).}
{\color{black}
Comparing \eqref{eq:strong-euler-mean-def} and \eqref{eq:ito-sde-weak-euler-functional-mean-tensor}, we can say that $\tilde{u}(x_{0})$ is a tensor-product quadrature rule for the multidimensional integral $\bar{u}(x_{0})$. In other words, the weak Euler scheme \eqref{eq:ito-sde-vec-weak-euler}-\eqref{eq:weak-euler-discrete-apprx-gauss}
can be interpreted as the strong Euler scheme with tensor-product integration in random space. We note that the approximation, $\tilde{u}(x_0)$, of $\bar{u}(x_{0})$ satisfies (cf. \eqref{eq:ito-sde-vec-strongEuler-error} and \eqref{eq:ito-sde-vec-weakEuler-error}) \begin{equation}\label{eq:weak-strong-order-one}
\bar{u}(x_{0})-\tilde{u}(x_{0})=O(h).
\end{equation}
}
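When $rN$ is small, $\tilde{u}(x_{0})=Q_{2}^{\otimes rN}\varphi$ can be evaluated exactly by enumerating all $2^{rN}$ equally weighted $\pm 1$ paths of the weak Euler scheme; a sketch for the Ornstein--Uhlenbeck test problem $dX=-X\,dt+0.5\,dw$ with $f(x)=x^{2}$ and $r=1$ (our own choice of coefficients, so $d=N$):

```python
import itertools
import math

# Sketch (ours): evaluating u~(x0) = Q_2^{tensor N} phi exactly by summing
# over all 2^N equally weighted +-1 paths of the weak Euler scheme for
# dX = -X dt + 0.5 dW with f(x) = x^2 (r = 1).
def u_tilde(x0=1.0, h=0.1, N=10):
    total = 0.0
    for zetas in itertools.product((-1.0, 1.0), repeat=N):
        x = x0
        for z in zetas:
            x = x - x * h + 0.5 * math.sqrt(h) * z
        total += x * x
    return total / 2**N
```

The result coincides with the closed-form moment recursion for this linear scheme, $s_{k+1}=(1-h)^{2}s_{k}+\sigma^{2}h$, which makes the tensor-product interpretation concrete; the $2^{rN}$ cost is precisely what motivates the sparse grid rule below.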
\begin{rem}
Let $\zeta _{l,k+1}$ in \eqref{eq:ito-sde-vec-weak-euler} be i.i.d. random
variables with the law
\begin{equation}
P(\zeta =\mathsf{y}_{n,j})=\mathsf{w}_{n,j},\quad j=1,\ldots ,n,
\label{eq:weak-euler-discrete-apprx-gauss-n}
\end{equation}
where $\mathsf{y}_{n,j}$ are nodes of the Gauss-Hermite quadrature $Q_{n}$
and $\mathsf{w}_{n,j}$ are the corresponding quadrature weights (see $(\ref{eq:1d-Gauss-Hermite-rule})$). Then
\begin{equation*}
\mathbb{E}f(\tilde{X}_{N})=\mathbb{E}\varphi (\zeta _{1,1},\ldots ,\zeta
_{r,N})=Q_{n}^{\otimes rN}\varphi (y_{1,1},\ldots ,y_{r,N}),
\end{equation*}
which can be a more accurate approximation of $\bar{u}(x_{0})$ than $\tilde{u}(x_{0})$ from $(\ref{eq:ito-sde-weak-euler-functional-mean-tensor})$ but
the weak-sense error for the SDE approximation $\mathbb{E}f(\tilde{X}_{N})-\mathbb{E}f(X(T))$ remains of order $O(h).$
\end{rem}
Practical implementation of $\bar{u}(x_{0})$ and $\tilde{u}(x_{0})$ usually
requires the use of the Monte Carlo technique since the computational cost
of, e.g., the tensor product rule in (\ref{eq:ito-sde-weak-euler-functional-mean-tensor}) is prohibitively high (cf.
Section~\ref{sec:ssg}). In this paper, we consider application of the sparse
grid rule (\ref{eq:smolyak-tensor-like}) to the integral in (\ref{eq:strong-euler-mean-def}), motivated by the lower computational cost of (\ref{eq:smolyak-tensor-like}).
{\color{black}In this approach, the total error has two parts:
\begin{equation*}
\abs{\mathbb{E}f(X(T))- A(L,N)\varphi }\leq \abs{ \mathbb{E}f(X(T))- \mathbb{E}f(\tilde{X}_{N}) }+ \abs{\mathbb{E}f(\tilde{X}_{N}) - A(L,N)\varphi},
\end{equation*}
where $A(L,N)$ is defined in \eqref{eq:smolyak-tensor-like} and $\varphi$ is from \eqref{eq:1d-sde-functional-weak}. The first part is controlled by the time step size $h$, see \eqref{eq:ito-sde-vec-strongEuler-error}, and it converges to zero with order one in $h$. The second part is controlled by the
sparse grid level $L$, but it depends on $h$ since a decrease of $h$ increases the dimension of the random space. Some illustrative examples will be presented in Section \ref{sec:sg-errest-illus-b}.}
\subsubsection{Probabilistic interpretation of SGC}
It is not difficult to show that SGC admits a probabilistic interpretation,
e.g. in the case of level $L=2$ we have
\begin{eqnarray}
&&A(2,N)\varphi (y_{1,1},\ldots ,y_{r,1},\ldots ,y_{1,N},\ldots ,y_{r,N})
\label{eq:smolyak-level2-gauss-hermite-prob} \\
&=&\left( Q_{2}\otimes Q_{1}\otimes \cdots \otimes Q_{1}\right) \varphi
+\left( Q_{1}\otimes Q_{2}\otimes Q_{1}\otimes \cdots \otimes Q_{1}\right)
\varphi \notag \\
&&+\cdots +\left( Q_{1}\otimes Q_{1}\otimes \cdots \otimes Q_{2}\right)
\varphi -(Nr-1)\left( Q_{1}\otimes Q_{1}\otimes \cdots \otimes Q_{1}\right)
\varphi \notag \\
&=&\sum_{i=1}^{N}\sum_{j=1}^{r}\mathbb{E}\varphi (0,\ldots ,0,\zeta
_{j,i},0,\ldots ,0)-(Nr-1)\varphi (0,0,\ldots ,0), \notag
\end{eqnarray}
where $\zeta _{j,i}$ are i.i.d. random variables with the law (\ref{eq:weak-euler-discrete-apprx-gauss}). Using
\eqref{eq:ito-sde-weak-euler-functional-mean-tensor},
\eqref{eq:smolyak-level2-gauss-hermite-prob}, Taylor's expansion and
symmetry of $\zeta _{j,i}$, we obtain the relationship between the weak
Euler scheme (\ref{eq:ito-sde-vec-weak-euler}) and the SGC (\ref{eq:smolyak-tensor-like}):
\begin{eqnarray}
\mathbb{E}f(\tilde{X}_{N})-A(2,N)\varphi &=&\mathbb{E}\varphi (\zeta
_{1,1},\ldots ,\zeta _{r,1},\ldots ,\zeta _{1,N},\ldots ,\zeta _{r,N})
\label{eq:sg-ps-exact-remainder} \\
&&-\sum_{i=1}^{N}\sum_{j=1}^{r}\mathbb{E}\varphi (0,\ldots ,0,\zeta
_{j,i},0,\ldots ,0)-(Nr-1)\varphi (0,0,\ldots ,0) \notag \\
&=&\sum_{\left\vert \alpha \right\vert =4}\frac{4}{\alpha !}\mathbb{E}\left[
\prod_{i=1}^{N}\prod_{j=1}^{r}(\zeta _{j,i})^{\alpha
_{j,i}}\int_{0}^{1}(1-z)^{3}D^{\alpha }\varphi (z\zeta _{1,1},\ldots ,z\zeta
_{r,N})\,dz\right] \notag \\
&&-\frac{1}{3!}\sum_{i=1}^{N}\sum_{j=1}^{r}\mathbb{E}\left[ \zeta
_{j,i}^{4}\int_{0}^{1}(1-z)^{3}\frac{\partial ^{4}}{\left( \partial
y_{j,i}\right) ^{4}}\varphi (0,\ldots ,0,z\zeta _{j,i},0,\ldots ,0)\,dz\right] , \notag
\end{eqnarray}
where the multi-index $\alpha =(\alpha _{1,1},\ldots ,\alpha _{r,N})\in
\mathbb{N}_{0}^{rN}$, $\left\vert \alpha \right\vert
=\sum_{i=1}^{N}\sum_{j=1}^{r}\alpha _{j,i}$, $\alpha
!=\prod_{i=1}^{N}\prod_{j=1}^{r}\alpha _{j,i}!$ and $D^{\alpha }=\frac{\partial ^{|\alpha |}}{(\partial y_{1,1})^{\alpha _{1,1}}\cdots (\partial
y_{r,N})^{\alpha _{r,N}}}.$ {\color{black}Similarly, we can write down a probabilistic interpretation for any $L$ and derive a similar error representation. For example, we have for $L=3$ that}
{\color{black}
\begin{eqnarray*}
&& \mean{\varphi(\zeta_{1,1}^{(3)},\cdots, \zeta_{r,N}^{(3)})} - A(3,N) \varphi\\
&=& \sum_{\abs{\alpha}=6} \frac{6}{\alpha !}\mean{\prod_{i=1}^{N}\prod_{j=1}^{r}(\zeta _{j,i}^{(3)})^{\alpha
_{j,i}} \int_0^1 (1-z)^5D^\alpha \varphi(z\zeta_{1,1}^{(3)}, \cdots,z\zeta_{r,N}^{(3)})\,d z }\\
&&- \sum_{\substack{ \abs{\alpha}=\alpha_{j,i}+\alpha_{l,k}=6\\ (j-l)^2+(i-k)^2\neq0}} \frac{6}{\alpha_{j,i}!\alpha_{l,k}!}\mean{ (\zeta _{j,i}^{(3)})^{\alpha
_{j,i}} (\zeta _{l,k}^{(3)})^{\alpha
_{l,k}}\int_0^1 (1-z)^5 D^\alpha \varphi(\cdots,z\zeta_{j,i}^{(3)},0,\cdots,0,z\zeta_{l,k}^{(3)},\cdots)\,dz }\\
&&- \sum_{i=1}^N\sum_{j=1}^r\frac{6}{6 !}\mean{(\zeta_{j,i} )^6 \int_0^1 (1-z)^5 D^\alpha \varphi(0,\cdots,z\zeta_{j,i},\cdots,0)\,dz},
\end{eqnarray*}
where $\zeta_{j,i}$ are defined in \eqref{eq:weak-euler-discrete-apprx-gauss} and $\zeta _{j,i}^{(3)}$ are i.i.d. random variables with the law $P(\zeta _{j,i}^{(3)} =\pm \sqrt{3})=1/6$, $P(\zeta _{j,i}^{(3)}=0)=2/3$.
}
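As a side note, the discrete laws above are precisely the nodes and weights of the $2$- and $3$-point Gauss--Hermite quadratures: the two-point law $P(\zeta=\pm1)=1/2$ reproduces the moments of $\mathcal{N}(0,1)$ up to order three, and the three-point law $P(\zeta^{(3)}=\pm\sqrt{3})=1/6$, $P(\zeta^{(3)}=0)=2/3$ up to order five. The following Python sketch (an illustration only, not part of the derivation) verifies this moment matching:

```python
from math import sqrt, prod

def moment(nodes_weights, p):
    # p-th moment of a discrete law given as [(node, weight), ...]
    return sum(w * x**p for x, w in nodes_weights)

def gauss_moment(p):
    # p-th moment of N(0,1): 0 for odd p, (p-1)!! for even p
    if p % 2 == 1:
        return 0.0
    return float(prod(range(1, p, 2)))  # (p-1)!!

law2 = [(-1.0, 0.5), (1.0, 0.5)]                       # weak Euler law
law3 = [(-sqrt(3), 1/6), (0.0, 2/3), (sqrt(3), 1/6)]   # level-3 law for zeta^(3)

# law2 matches Gaussian moments up to order 3, fails at order 4 (1 vs 3)
assert all(abs(moment(law2, p) - gauss_moment(p)) < 1e-12 for p in range(4))
assert abs(moment(law2, 4) - gauss_moment(4)) == 2.0

# law3 matches up to order 5, fails at order 6 (9 vs 15)
assert all(abs(moment(law3, p) - gauss_moment(p)) < 1e-12 for p in range(6))
assert abs(moment(law3, 6) - 9.0) < 1e-12
print("moment matching verified")
```

The first mismatched moments (order four for the two-point law, order six for the three-point law) are exactly what drive the remainder terms in the error representations above.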
The error of the SGC applied to weak-sense approximation of SDE is further studied in Section~\ref{sec:sg-errest-illus-b}.
\subsubsection{Second-order schemes}
In the SGC context, it is beneficial to exploit higher-order or
higher-accuracy schemes for approximating the SDE \eqref{eq:ito-sde-vector}
because they can allow us to reach a desired accuracy using larger time step
sizes and therefore fewer random variables than the first-order Euler scheme
(\ref{eq:ito-sde-vec-strong-euler}) or \eqref{eq:ito-sde-vec-weak-euler}. For
instance, we can use the second-order weak scheme for
\eqref{eq:ito-sde-vector} (see, e.g. \cite[Chapter 2]{MilTre-B04}):
\begin{eqnarray}
X_{k+1} &=&X_{k}+ha(t_{k},X_{k})+\sqrt{h}\sum_{i=1}^{r}\sigma
_{i}(t_{k},X_{k})\xi _{i,k+1}+\frac{h^{2}}{2}\mathfrak{L}a(t_{k},X_{k})
\label{eq:ito-sde-vec-weak-2nd-order} \\
&&+h\sum_{i=1}^{r}\sum_{j=1}^{r}\Lambda _{i}\sigma _{j}(t_{k},X_{k})\eta
_{i,j,k+1}+\frac{h^{3/2}}{2}\sum_{i=1}^{r}(\Lambda _{i}a(t_{k},X_{k})+\mathfrak{L}\sigma _{i}(t_{k},X_{k}))\xi _{i,k+1}, \notag \\
k &=&0,\ldots ,N-1, \notag
\end{eqnarray}
where $X_{0}=x_{0};$ $\eta _{i,j}=\frac{1}{2}\xi _{i}\xi _{j}-\gamma
_{i,j}\zeta _{i}\zeta _{j}/2$ with $\gamma _{i,j}=-1$ if $i<j$ and $\gamma
_{i,j}=1$ otherwise;
\begin{equation*}
\Lambda _{l}=\sum_{i=1}^{m}\sigma _{l}^{i}\frac{\partial }{\partial x_{i}},\quad \mathfrak{L}=\frac{\partial }{\partial t}+\sum_{i=1}^{m}a^{i}\frac{\partial }{\partial x_{i}}+\frac{1}{2}\sum_{l=1}^{r}\sum_{i,j=1}^{m}\sigma _{l}^{i}\sigma _{l}^{j}\frac{\partial ^{2}}{\partial x_{i}\partial x_{j}};
\end{equation*}
and $\xi _{i,k+1}$ and $\zeta _{i,k+1}$ are mutually independent random
variables with Gaussian distribution or with the laws $P(\xi =0)=2/3,\text{ }P(\xi =\pm \sqrt{3})=1/6\text{ \ and \ }P(\zeta =\pm 1)=1/2.$ The following
error estimate holds for (\ref{eq:ito-sde-vec-weak-2nd-order}) (see e.g.
\cite[Chapter 2]{MilTre-B04}):
\begin{equation*}
\left\vert \mathbb{E}f(X(T))-\mathbb{E}f(X_{N})\right\vert \leq Kh^{2}.
\end{equation*}
Roughly speaking, to achieve $O(h)$ accuracy using
\eqref{eq:ito-sde-vec-weak-2nd-order}, we need only $\sqrt{2rN}$ ($\sqrt{rN}$
in the case of additive noise) random variables, while we need $rN$ random
variables for the Euler scheme \eqref{eq:ito-sde-vec-strong-euler}. This
reduces the dimension of the random space and hence can increase efficiency
and widen applicability of SGC methods (see, in particular, Example~4.1 in
Section~\ref{sec:num-experiments-sg} for a numerical illustration). We note
that when the noise intensity is relatively small, we can use high-accuracy
low-order schemes designed for SDE with small noise \cite{MilTre97} (see
also \cite[Chapter 3]{MilTre-B04}) in order to achieve a desired accuracy
using fewer random variables than the Euler scheme
\eqref{eq:ito-sde-vec-strong-euler}.
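To illustrate the gain concretely, take the scalar linear SDE $dX=\lambda X\,dt+\varepsilon\,dw$, $X_0=1$ (studied in the next subsection). A direct computation, which is our own and not from the text, gives $\mathfrak{L}a=\lambda^2x$, $\Lambda_1\sigma_1=0$, $\Lambda_1a=\varepsilon\lambda$ and $\mathfrak{L}\sigma_1=0$, so the second-order scheme reduces to $X_{k+1}=X_k(1+\lambda h+\lambda^2h^2/2)+\varepsilon\sqrt{h}(1+\lambda h/2)\xi_{k+1}$ and its mean is available in closed form. The Python sketch below checks the weak orders of the mean without any sampling:

```python
import math

# Linear SDE dX = lam*X dt + eps dw, X_0 = 1. For this SDE the second-order
# weak scheme reduces (our computation) to
#   X_{k+1} = X_k*(1 + lam*h + (lam*h)**2/2) + eps*sqrt(h)*(1 + lam*h/2)*xi,
# so its mean is exactly (1 + lam*h + (lam*h)**2/2)**N -- no sampling needed.

def mean_second_order(lam, T, N):
    h = T / N
    return (1.0 + lam * h + (lam * h) ** 2 / 2.0) ** N

def mean_euler(lam, T, N):
    h = T / N
    return (1.0 + lam * h) ** N

lam, T = -1.0, 1.0
exact = math.exp(lam * T)
e2_coarse = abs(mean_second_order(lam, T, 50) - exact)
e2_fine = abs(mean_second_order(lam, T, 100) - exact)
e1_coarse = abs(mean_euler(lam, T, 50) - exact)
e1_fine = abs(mean_euler(lam, T, 100) - exact)
print(e2_coarse / e2_fine)  # ~4: second order in h
print(e1_coarse / e1_fine)  # ~2: first order in h
```

Halving $h$ reduces the mean error by a factor of about four for the second-order scheme versus about two for the Euler scheme, which is the source of the saving in random variables described above.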
\subsection{Illustrative examples\label{sec:sg-errest-illus-b}}
In this section we show limitations of the use of SGC in weak approximation
of SDE. To this end, it is convenient and sufficient to consider the scalar
linear SDE
\begin{equation}
dX=\lambda Xdt+\varepsilon \,dw(t),\quad X_{0}=1, \label{linearSDE}
\end{equation}
where $\lambda $ and $\varepsilon $ are some constants.
We will compute expectations $\mathbb{E}f(X(T))$ for some $f(x)$ and $X(t)$
from (\ref{linearSDE}) by applying the Euler scheme (\ref{eq:ito-sde-vec-strong-euler}) and the SGC (\ref{eq:smolyak-tensor-like}).
This simple example provides us with a clear insight when algorithms of this
type are able to produce accurate results and when they are likely to fail.
Using direct calculations, we first (see Examples~\ref{exm:linear-sde-moments}--\ref{exm:linear-sde-cos} below) derive an estimate
for the error $\left\vert \mathbb{E}f(X_{N})-A(2,N)\varphi \right\vert $
with $X_{N}$ from (\ref{eq:ito-sde-vec-strong-euler}) applied to (\ref{linearSDE}) and for some particular $f(x)$. {\color{black}This will illustrate how the error of SGC with
practical level (no more than six) behaves.}
Then (Proposition~\ref{prop:weak-apprx-euler-sg}) we obtain an estimate for the error $\left\vert
\mathbb{E}f(X_{N})-A(L,N)\varphi \right\vert $ for a smooth $f(x)$ which
grows not faster than a polynomial function at infinity. We will observe
that the considered algorithm is not convergent in time step $h$ and {\color{black} the algorithm
is not convergent in level $L$ unless the noise
intensity and integration time are small.}
It follows from \eqref{eq:ito-sde-vec-strongEuler-error} and
\eqref{eq:ito-sde-vec-weakEuler-error} that
\begin{eqnarray}
\left\vert \mathbb{E}f(X_{N})-A(L,N)\varphi \right\vert &\leq &\left\vert
\mathbb{E}f(\tilde{X}_{N})-A(L,N)\varphi \right\vert +|\mathbb{E}f(X_{N})-\mathbb{E}f(\tilde{X}_{N})| \label{eq:simple-exam-error} \\
&\leq &\left\vert \mathbb{E}f(\tilde{X}_{N})-A(L,N)\varphi \right\vert +Kh,
\notag
\end{eqnarray}
where $\tilde{X}_{N}$ is from the weak Euler scheme
\eqref{eq:ito-sde-vec-weak-euler} applied to (\ref{linearSDE}), which can be
written as $\tilde{X}_{N}=\displaystyle(1+\lambda
h)^{N}+\sum_{j=1}^{N}(1+\lambda h)^{N-j}\varepsilon \sqrt{h}\zeta _{j}.$
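Because the $\zeta _{j}$ are independent with zero mean and unit variance, the first two moments of $\tilde{X}_{N}$ are available in closed form: $\mathbb{E}\tilde{X}_{N}=(1+\lambda h)^{N}$ and $\mathbb{E}\tilde{X}_{N}^{2}=(1+\lambda h)^{2N}+\varepsilon ^{2}h\sum_{j=1}^{N}(1+\lambda h)^{2(N-j)}$. The following Python sketch (our own illustration, using these closed forms together with the exact moments of $(\ref{linearSDE})$) exhibits the first-order weak error absorbed into the $Kh$ term of \eqref{eq:simple-exam-error}:

```python
import math

def weak_euler_moments(lam, eps, T, N):
    # closed-form mean and second moment of tilde X_N (no sampling needed,
    # since the zeta_j are independent with mean 0 and variance 1)
    h = T / N
    a = 1.0 + lam * h
    m1 = a ** N
    m2 = a ** (2 * N) + eps**2 * h * sum(a ** (2 * (N - j)) for j in range(1, N + 1))
    return m1, m2

def exact_moments(lam, eps, T):
    # moments of X(T) for dX = lam*X dt + eps dw, X_0 = 1 (lam != 0)
    m1 = math.exp(lam * T)
    m2 = math.exp(2 * lam * T) + eps**2 * (math.exp(2 * lam * T) - 1) / (2 * lam)
    return m1, m2

lam, eps, T = -2.0, 0.5, 1.0
m1e, m2e = exact_moments(lam, eps, T)
err1 = lambda N: abs(weak_euler_moments(lam, eps, T, N)[0] - m1e)
err2 = lambda N: abs(weak_euler_moments(lam, eps, T, N)[1] - m2e)
print(err1(100) / err1(200), err2(100) / err2(200))  # both ~2: O(h) weak error
```

Both moment errors halve when $h$ halves, consistent with the $Kh$ bound; the SGC-specific remainder $R$ studied next is an additional error on top of this.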
Introducing the function
\begin{equation}
\bar{X}(N;y)=(1+\lambda h)^{N}+\sum_{j=1}^{N}(1+\lambda h)^{N-j}\varepsilon
\sqrt{h}y_{j}, \label{Xy}
\end{equation}
we see that $\tilde{X}_{N}=\bar{X}(N;\zeta _{1},\ldots ,\zeta _{N}).$ We
have
\begin{equation}
\frac{\partial }{\partial y_{i}}\bar{X}(N;y)=(1+\lambda h)^{N-i}\varepsilon
\sqrt{h}\text{\ \ and\ \ }\frac{\partial ^{2}}{\partial y_{i}\partial y_{j}}\bar{X}(N;y)=0. \label{Xder}
\end{equation}
Then we obtain from (\ref{eq:sg-ps-exact-remainder}):
\begin{eqnarray}
R &:=&\mathbb{E}f(\tilde{X}_{N})-A(2,N)\varphi \label{Rnew} \\
&=&\varepsilon ^{4}h^{2}\sum_{\left\vert \alpha \right\vert =4}\frac{4}{\alpha !}\mathbb{E}\left[ \prod_{i=1}^{N}(\zeta _{i}(1+\lambda
h)^{N-i})^{\alpha _{i}}\int_{0}^{1}(1-z)^{3}z^{4}\frac{d^{4}}{dx^{4}}f(\bar{X}(N;z\zeta _{1},\ldots ,z\zeta _{N}))\,dz\right] \notag \\
&&-\frac{1}{3!}\varepsilon ^{4}h^{2}\sum_{i=1}^{N}\mathbb{E}\left[ \zeta
_{i}^{4}\int_{0}^{1}(1-z)^{3}z^{4}\frac{d^{4}}{dx^{4}}f(\bar{X}(N;0,\ldots
,0,z\zeta _{i},0,\ldots ,0))\,(1+\lambda h)^{4N-4i}dz\right] . \notag
\end{eqnarray}
\subsubsection{Non-Convergence in time step $h$}
We will illustrate {\color{black}the lack of convergence} in $h$ {\color{black}for SGC of levels two and three through two examples,
where sharp estimates of the error $\abs{\mathbb{E}f(X_{N})-A(2,N)\varphi}$ are derived for SGC. Higher-level SGC can also be considered, but the conclusions do not change. In contrast,
the algorithm of tensor-product integration in random space and the strong Euler scheme in time
(i.e., the weak Euler scheme \eqref{eq:ito-sde-vec-weak-euler}-\eqref{eq:weak-euler-discrete-apprx-gauss})
is convergent with order one in $h$. We also note that in practice, typically SGC with level no more than six is employed.}
\begin{exm}
\label{exm:linear-sde-moments}
\upshape
For $f(x)=x^{p}$ with $p=1,2,3$, it follows from (\ref{Rnew}) that $R=0,$
i.e., SGC does not introduce any additional error, and hence by
\eqref{eq:simple-exam-error}
\begin{equation*}
\left\vert \mathbb{E}f(X_{N})-A(2,N)\varphi \right\vert \leq Kh,\quad
f(x)=x^{p},\quad p=1,2,3.
\end{equation*}
For $f(x)=x^{4}$, we get from \eqref{Rnew}:
\begin{eqnarray*}
R &=&\frac{6}{35}\varepsilon
^{4}h^{2}\sum_{i=1}^{N}\sum_{j=i+1}^{N}(1+\lambda h)^{4N-2i-2j} \\
&=&\frac{6}{35}\varepsilon ^{4}\times \left\{
\begin{tabular}{ll}
$\frac{(1+\lambda h)^{2N}-1}{\lambda ^{2}(2+\lambda h)^{2}}\left[ \frac{(1+\lambda h)^{2N}+1}{1+(1+\lambda h)^{2}}-1\right] ,$ & $\lambda \neq 0,\ \
1+\lambda h\neq 0,$ \\
$\frac{T^{2}}{2}-\frac{Th}{2},$ & $\lambda =0.$
\end{tabular}
\right.
\end{eqnarray*}
We see that $R$ does not go to zero when $h\rightarrow 0$ and that for
sufficiently small $h>0$
\begin{equation*}
\left\vert \mathbb{E}f(X_{N})-A(2,N)\varphi \right\vert \leq Kh+\frac{6}{35}\varepsilon ^{4}\times \left\{
\begin{tabular}{ll}
$\frac{1}{\lambda ^{2}}(1+e^{4T\lambda }),$ & $\lambda \neq 0,$ \\
$\frac{T^{2}}{2},$ & $\lambda =0.$
\end{tabular}
\right.
\end{equation*}
\end{exm}
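The non-convergence in the example can be checked numerically from the formulas themselves: for $\lambda =0$ the double sum collapses to $\frac{6}{35}\varepsilon ^{4}(T^{2}/2-Th/2)$, which tends to the nonzero limit $\frac{3}{35}\varepsilon ^{4}T^{2}$ as $h\rightarrow 0$. A Python sketch (an illustration only) evaluating both the double sum and the closed form:

```python
# Remainder R for f(x) = x^4 in the lambda = 0 case:
# R = (6/35)*eps^4*h^2 * #{(i,j): i < j} = (6/35)*eps^4*(T^2/2 - T*h/2),
# which tends to the nonzero limit (3/35)*eps^4*T^2 as h -> 0.

def R_sum(eps, T, N, lam=0.0):
    h = T / N
    a = 1.0 + lam * h
    s = sum(a ** (4 * N - 2 * i - 2 * j)
            for i in range(1, N + 1) for j in range(i + 1, N + 1))
    return 6.0 / 35.0 * eps**4 * h**2 * s

def R_closed(eps, T, N):
    h = T / N
    return 6.0 / 35.0 * eps**4 * (T**2 / 2.0 - T * h / 2.0)

eps, T = 1.0, 1.0
assert abs(R_sum(eps, T, 200) - R_closed(eps, T, 200)) < 1e-12
print(R_closed(eps, T, 10**6))  # ~3/35 = 0.0857...: does not vanish as h -> 0
```

Refining the time grid therefore cannot remove this part of the error; only reducing $\varepsilon$ or $T$ (or increasing the level $L$ for polynomial $f$) helps.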
We observe that the SGC algorithm does not converge with $h\rightarrow 0$
for higher moments. In the considered case of linear SDE, increasing the
level $L$ of SGC leads to the SGC error $R$ being $0$ for higher moments,
e.g., for $L=3$ the error $R=0$ for up to $5$th moment but the algorithm
will not converge in $h$ for the $6$th moment and so on (see Proposition~\ref{prop:weak-apprx-euler-sg} below). Further (see the continuation of the
illustration below), in the case of, e.g. $f(x)=\cos x$ for any $L$ this
error $R$ {\color{black} does not converge in $h$}, which is also the case for nonlinear SDE. We also
note that one can expect that this error $R$ is small when noise intensity
is relatively small and either time $T$ is small or SDE has, in some sense,
stable behavior (in the linear case it corresponds to $\lambda <0$).
\begin{exm}
\label{exm:linear-sde-cos}
\upshape
Now consider $f(x)=\cos (x).$ It follows from \eqref{Rnew} that
\begin{eqnarray*}
R &=&\varepsilon ^{4}h^{2}\sum_{\left\vert \alpha \right\vert =4}\frac{4}{\alpha !}\mathbb{E}\left[ \prod_{i=1}^{N}(\zeta _{i}(1+\lambda
h)^{N-i})^{\alpha _{i}}\int_{0}^{1}(1-z)^{3}z^{4}\cos ((1+\lambda
h)^{N}\right. \\
&&\left. +z\sum_{j=1}^{N}(1+\lambda h)^{N-j}\varepsilon \sqrt{h}\zeta
_{j})\,dz\right] \\
&&-\frac{1}{3!}\varepsilon ^{4}h^{2}\sum_{i=1}^{N}(1+\lambda
h)^{4N-4i}\int_{0}^{1}(1-z)^{3}z^{4}\mathbb{E}[\zeta _{i}^{4}\cos
((1+\lambda h)^{N}+z(1+\lambda h)^{N-i}\varepsilon \sqrt{h}\zeta _{i})]\,dz
\end{eqnarray*}
and after routine calculations we obtain
\begin{eqnarray*}
R &=&\varepsilon ^{4}h^{2}\cos ((1+\lambda h)^{N})\left[ \left( \frac{1}{6}\sum_{i=1}^{N}(1+\lambda
h)^{4N-4i}+2\sum_{i=1}^{N}\sum_{j=i+1}^{N}(1+\lambda h)^{4N-2i-2j}\right)
\right. \\
&&\times \int_{0}^{1}(1-z)^{3}z^{4}\prod_{l=1}^{N}\cos (z(1+\lambda
h)^{N-l}\varepsilon \sqrt{h})dz\\
&&+\left( \frac{2}{3}\sum_{i,j=1;i\neq j}^{N}(1+\lambda h)^{4N-3i-j}+2\sum
_{\substack{ k,i,j=1 \\ i\neq j,i\neq k,k\neq j}}^{N}(1+\lambda
h)^{4N-2k-i-j}\right) \\
&&\times \int_{0}^{1}(1-z)^{3}z^{4}\prod_{l=i,j}\sin (z(1+\lambda
h)^{N-l}\varepsilon \sqrt{h})\prod_{\substack{ l=1 \\ l\neq i,l\neq j}}^{N}\cos (z(1+\lambda h)^{N-l}\varepsilon \sqrt{h})dz \\
&&+4\sum_{\substack{ i,j,k,m=1 \\ i\neq j,i\neq k,i\neq m,j\neq k,j\neq
m,k\neq m}}^{N}(1+\lambda h)^{4N-i-j-k-m} \\
&&\times \int_{0}^{1}(1-z)^{3}z^{4}\prod_{l=i,j,{\color{black}k},m}\sin (z(1+\lambda
h)^{N-l}\varepsilon \sqrt{h})\prod_{\substack{ l=1 \\ l\neq i,l\neq j,l\neq
k,l\neq m}}^{N}\cos (z(1+\lambda h)^{N-l}\varepsilon \sqrt{h})dz \\
&&\left. -\frac{1}{6}\sum_{i=1}^{N}(1+\lambda
h)^{4N-4i}\int_{0}^{1}(1-z)^{3}z^{4}\cos (z(1+\lambda h)^{N-i}\varepsilon
\sqrt{h})\,dz\right] .
\end{eqnarray*}
It is not difficult to see that $R$ does not go to zero when $h\rightarrow 0.$ {\color{black}In fact,} taking into account that $|\sin (z(1+\lambda
h)^{N-j}\varepsilon \sqrt{h})|\leq z(1+\lambda h)^{N-j}\varepsilon \sqrt{h},$ {\color{black} and
that there are $N^4$ terms of order $h^4$ and $N^3$ terms of order $h^3$},
we get for sufficiently small $h>0$
\begin{equation*}
|R|\leq C\varepsilon ^{4} (1+e^{4T\lambda }),
\end{equation*}
where $C>0$ is independent of $\varepsilon $ and $h.$ Hence
\begin{equation}
\left\vert \mathbb{E}f(X_{N})-A(2,N)\varphi \right\vert \leq C\varepsilon
^{4}(1+e^{4T\lambda })+Kh, \label{eq:estimate-euler-cos}
\end{equation}
and we have arrived at a similar conclusion for $f(x)=\cos x$ as for
$f(x)=x^{4}$. {\color{black}Similarly, we can also have for $L=3$ that
\begin{equation*}
\abs{\mathbb{E}f(X_{N})-A(3,N)\varphi} \leq C\varepsilon
^{6}(1+e^{6T\lambda })+Kh.
\end{equation*}
This example shows that for $L=3$ the error of SGC with the Euler scheme in time does not converge in $h$.}
\end{exm}
\subsubsection{\color{black} Error estimate for SGC with fixed level}
{\color{black}
Now we will address the effect of the SGC level $L$.} To this end, we will need the following error
estimate of a Gauss-Hermite quadrature. Let $\psi (y),$ $y\in \mathbb{R}$,
be a sufficiently smooth function which, together with its derivatives, grows
not faster than a polynomial at infinity. Using the Peano kernel
theorem (see e.g. \cite{DavRab-B84}) and that a Gauss-Hermite quadrature
with $n$-nodes has the order of polynomial exactness $2n-1,$ we obtain for
the approximation error $R_{n,\gamma }\psi $ of the Gauss-Hermite quadrature
$Q_{n}\psi $:
\begin{equation}
R_{n,\gamma }(\psi ):=Q_{n}\psi -I_{1}\psi =\int_{\mathbb{R}}\frac{d^{\gamma }}{dy^{\gamma }}\psi (y)R_{n,\gamma }(\Gamma _{y,\gamma })\,dy,\quad
1\leq \gamma \leq 2n, \label{eq:gaussHer-quad-remainder}
\end{equation}
where $\Gamma _{y,\gamma }(z)=(z-y)^{\gamma -1}/(\gamma -1)!$ if $z\geq y$
and $0$ otherwise. One can show (see, e.g. \cite[Theorem 2]{MasMon94}) that
there is a constant $c>0$ independent of $n$ and $y$ such that for any
$0<\beta <1$
\begin{equation}
\left\vert R_{n,\gamma }(\Gamma _{y,\gamma })\right\vert \leq \frac{c}{\sqrt{2\pi }}n^{-\gamma /2}\exp \left( -\frac{\beta y^{2}}{2}\right) ,\quad 1\leq
\gamma \leq 2n. \label{eq:estimate-GaussHer-peano-kernel}
\end{equation}
We also note that (\ref{eq:estimate-GaussHer-peano-kernel}) and the triangle
inequality imply, for $1\leq \gamma \leq 2(n-1)$:
\begin{equation}
\left\vert R_{n,\gamma }(\Gamma _{y,\gamma })-R_{n-1,\gamma }(\Gamma
_{y,\gamma })\right\vert \leq \frac{c}{\sqrt{2\pi }}[n^{-\gamma
/2}+(n-1)^{-\gamma /2}]\exp \left( -\frac{\beta y^{2}}{2}\right) .
\label{eq:estimate-GaussHer-peano-kernel-diff}
\end{equation}
Now we consider an error of the sparse grid rule
\eqref{eq:smolyak-tensor-like} accompanied by the Euler scheme
\eqref{eq:ito-sde-vec-strong-euler} for computing expectations of solutions
to \eqref{linearSDE}.
\begin{prop}
\label{prop:weak-apprx-euler-sg} Assume that a function $f(x)$ and its
derivatives up to the $2L$-th order satisfy the polynomial growth condition
$(\ref{f_cond})$. Let $X_{N}$ be obtained by the Euler scheme
\eqref{eq:ito-sde-vec-strong-euler} applied to the linear SDE $(\ref{linearSDE})$ and $A(L,N)\varphi $ be the sparse grid rule $(\ref{eq:smolyak-tensor-like})$ with level $L$ applied to the integral
corresponding to $\mathbb{E}f(X_{N})$ as in $(\ref{eq:strong-euler-mean-def})$. Then for {\color{black} any $L$} and sufficiently small $h>0$
\begin{equation} \label{propstat}
\left\vert \mathbb{E}f(X_{N})-A(L,N)\varphi \right\vert \leq K\varepsilon
^{2L}(1+e^{\lambda (2L+\varkappa )T}){\color{black}\left( 1+\left( 3c/2\right) ^{L\wedge N}\right)
\beta ^{-(L\wedge N)/2} }T^{L},
\end{equation}
where $K>0$ is independent of $h,$ $L$ and $N$; $c$ and $\beta $ are from
\eqref{eq:estimate-GaussHer-peano-kernel}; $\varkappa $ is from
\eqref{f_cond}.
\end{prop}
\begin{proof}
We recall (see (\ref{eq:strong-euler-mean-def})) that
\begin{equation*}
\mathbb{E}f(X_{N})=I_{N}\varphi =\frac{1}{(2\pi )^{N/2}}\int_{\mathbb{R}^{N}}\varphi (y_{1},\ldots ,y_{N})\exp \left( -\frac{1}{2}\sum_{i=1}^{N}y_{i}^{2}\right) \,dy.
\end{equation*}
Introduce the integrals
\begin{equation}
I_{1}^{(k)}\varphi =\frac{1}{\sqrt{2\pi }}\int_{\mathbb{R}}\varphi
(y_{1},\ldots ,y_{k},\ldots ,y_{N})\exp \left( -\frac{y_{k}^{2}}{2}\right) dy_{k},\ \ k=1,\ldots ,N, \label{Ik1}
\end{equation}
and their approximations $Q_{n}^{(k)}$ by the corresponding one-dimensional
Gauss-Hermite quadratures with $n$ nodes. Also, let $\mathcal{U}_{i_{k}}^{(k)}=Q_{i_{k}}^{(k)}-Q_{i_{k}-1}^{(k)}.$
Using (\ref{eq:smolyak-tensor-like}) and the recipe from the proof of
Lemma~3.4 in \cite{NobTW08}, we obtain
\begin{equation}
I_{N}\varphi -A(L,N)\varphi =\sum_{l=2}^{N}S(L,l)\otimes _{k=l+1}^{N}{I
_{1}^{(k)}\varphi +({I}_{1}^{(1)}-Q_{L}^{(1)})\otimes _{k=2}^{N}{I
_{1}^{(k)}\varphi , \label{eq34}
\end{equation}
where
\begin{equation}
S(L,l)=\sum_{i_{1}+\cdots +i_{l-1}+i_{l}=L+l-1}\otimes _{k=1}^{l-1}\mathcal{U}_{i_{k}}^{(k)}\otimes ({I}_{1}^{(l)}-Q_{i_{l}}^{(l)}).
\label{eq:smolyak-int-recursive-component}
\end{equation}
Due to (\ref{eq:gaussHer-quad-remainder}), we have for $n>1$ and $1\leq
\gamma \leq 2(n-1)$
\begin{eqnarray}
\mathcal{U}_{n}\psi &=&Q_{n}\psi -Q_{n-1}\psi =[Q_{n}\psi -I_{1}(\psi
)]-[Q_{n-1}\psi -I_{1}(\psi )] \label{eq:gaussHer-quad-difference} \\
&=&\int_{\mathbb{R}}\frac{d^{\gamma }}{dy^{\gamma }}\psi (y)[R_{n,\gamma
}(\Gamma _{y,\gamma })-R_{n-1,\gamma }(\Gamma _{y,\gamma })]\,dy, \notag
\end{eqnarray}
and for $n=1$
\begin{equation}
\mathcal{U}_{n}\psi =Q_{1}\psi -Q_{0}\psi =Q_{1}\psi =\psi (0).
\label{eq:gaussHer-quad-differencen=1}
\end{equation}
By \eqref{eq:smolyak-int-recursive-component}, (\ref{Ik1}) and
\eqref{eq:gaussHer-quad-remainder}, we obtain for the first term in the
right-hand side of (\ref{eq34}):
\begin{eqnarray*}
S(L,l)\otimes _{n=l+1}^{N}I_{1}^{(n)}\varphi
&=&\sum_{i_{1}+\cdots +i_{l}=L+l-1}\otimes _{k=1}^{l-1}\mathcal{U}_{i_{k}}^{(k)}\otimes ({I}_{1}^{(l)}-Q_{i_{l}}^{(l)})\otimes
_{n=l+1}^{N}I_{1}^{(n)}\varphi \\
&=&\sum_{i_{1}+\cdots +i_{l}=L+l-1}\otimes _{k=1}^{l-1}\mathcal{U}_{i_{k}}^{(k)}\otimes ({I}_{1}^{(l)}-Q_{i_{l}}^{(l)}) \\
&&\otimes \int_{\mathbb{R}^{N-l}}\varphi (y)\frac{1}{(2\pi )^{(N-l)/2}}\exp
(-\sum_{k=l+1}^{N}\frac{y_{k}^{2}}{2})\,dy_{l+1}\ldots dy_{N} \\
&=&-\sum_{i_{1}+\cdots +i_{l}=L+l-1}\otimes _{k=1}^{l-1}\mathcal{U}_{i_{k}}^{(k)}\otimes \int_{\mathbb{R}^{N-l+1}}\frac{d^{2i_{l}}}{dy_{l}^{2i_{l}}}\varphi (y)R_{i_{l},2i_{l}}(\Gamma _{y_{l},2i_{l}}) \\
&&\times \frac{1}{(2\pi )^{(N-l)/2}}\exp (-\sum_{k=l+1}^{N}\frac{y_{k}^{2}}{2})\,dy_{l}\ldots dy_{N}.
\end{eqnarray*}
Now consider two cases: if $i_{l-1}>1$ then by
\eqref{eq:gaussHer-quad-difference}
\begin{eqnarray*}
S(L,l)\otimes _{n=l+1}^{N}I_{1}^{(n)}\varphi &=&-\sum_{i_{1}+\cdots
+i_{l}=L+l-1}\otimes _{k=1}^{l-2}\mathcal{U}_{i_{k}}^{(k)}\otimes \int_{\mathbb{R}^{N-l+2}}\frac{d^{2i_{l-1}-2}}{dy_{l-1}^{2i_{l-1}-2}}\frac{d^{2i_{l}}}{dy_{l}^{2i_{l}}}\varphi (y)R_{i_{l},2i_{l}}(\Gamma
_{y_{l},2i_{l}}) \\
&&\times \lbrack R_{i_{l-1},2i_{l-1}-2}(\Gamma
_{y_{l-1},2i_{l-1}-2})-R_{i_{l-1}-1,2i_{l-1}-2}(\Gamma
_{y_{l-1},2i_{l-1}-2})] \\
&&\times \frac{1}{(2\pi )^{(N-l)/2}}\exp (-\sum_{k=l+1}^{N}\frac{y_{k}^{2}}{2})\,dy_{l-1}\ldots \,dy_{N},
\end{eqnarray*}
otherwise (i.e., if $i_{l-1}=1)$ by \eqref{eq:gaussHer-quad-differencen=1}:
\begin{eqnarray*}
S(L,l)\otimes _{n=l+1}^{N}I_{1}^{(n)}\varphi &=&-\sum_{i_{1}+\cdots
+i_{l}=L+l-1}\otimes _{k=1}^{l-2}\mathcal{U}_{i_{k}}^{(k)}\otimes \int_{\mathbb{R}^{N-l+1}}Q_{1}^{(l-1)}\frac{d^{2i_{l}}}{dy_{l}^{2i_{l}}}\varphi
(y)R_{i_{l},2i_{l}}(\Gamma _{y_{l},2i_{l}}) \\
&&\times \frac{1}{(2\pi )^{(N-l)/2}}\exp (-\sum_{k=l+1}^{N}\frac{y_{k}^{2}}{2})\,dy_{l}\ldots dy_{N}.
\end{eqnarray*}
Repeating the above process for $i_{l-2},\ldots ,i_{1}$, we obtain
\begin{gather}
S(L,l)\otimes _{n=l+1}^{N}I_{1}^{(n)}\varphi =\sum_{i_{1}+\cdots
+i_{l}=L+l-1}\int_{\mathbb{R}^{N-\#F_{l-1}}}[\otimes _{m\in
F_{l-1}}Q_{1}^{(m)}D^{2\alpha _{l}}\varphi (y)]
\label{eq:sg-component-recursive-l} \\
\times \mathcal{R}_{l,\alpha _{l}}(y_{1},\ldots ,y_{l})\frac{1}{(2\pi
)^{(N-l)/2}}\exp (-\sum_{k=l+1}^{N}\frac{y_{k}^{2}}{2})\prod_{n\in
G_{l-1}}\,dy_{n}\times \,dy_{l}\ldots dy_{N}, \notag
\end{gather}
where the multi-index $\alpha _{l}=(i_{1}-1,\ldots ,i_{l-1}-1,i_{l},0,\ldots
,0)$ with the $m$-th element $\alpha _{l}^{m},$ the sets
$F_{l-1}=F_{l-1}(\alpha _{l})=\left\{ m:~\alpha _{l}^{m}=0,\text{ }m=1,\ldots
,l-1\right\} $ and $G_{l-1}=G_{l-1}(\alpha _{l})=\left\{ m:~\alpha
_{l}^{m}>0,\text{ }m=1,\ldots ,l-1\right\} $, the symbols $\#F_{l-1}$ and
$\#G_{l-1}$ stand for the number of elements in the corresponding sets, and
\begin{eqnarray*}
\mathcal{R}_{l,\alpha _{l}}(y_{1},\ldots ,y_{l})=-R_{i_{l},2i_{l}}(\Gamma
_{y_{l},2i_{l}})\otimes _{n\in G_{l-1}}[R_{i_{n},2i_{n}-2}(\Gamma
_{y_{n},2i_{n}-2})-R_{i_{n}-1,2i_{n}-2}(\Gamma _{y_{n},2i_{n}-2})].
\end{eqnarray*}
Note that $\#G_{l-1}\leq (L-1)\wedge (l-1)$ and also recall that $i_{j}\geq
1,$ $j=1,\ldots ,l.$
Using \eqref{eq:estimate-GaussHer-peano-kernel},
\eqref{eq:estimate-GaussHer-peano-kernel-diff} and the inequality
\begin{equation*}
\prod_{n\in
G_{l-1}}[i_{n}^{-(i_{n}-1)}+(i_{n}-1)^{-(i_{n}-1)}]i_{l}^{-i_{l}}\leq
(3/2)^{\#G_{l-1}},
\end{equation*}
we get
\begin{eqnarray}
\left\vert \mathcal{R}_{l,\alpha }(y_{1},\ldots ,y_{l})\right\vert &\leq
&\prod_{n\in
G_{l-1}}[i_{n}^{-(i_{n}-1)}+(i_{n}-1)^{-(i_{n}-1)}]i_{l}^{-i_{l}}\frac{c^{\#G_{l-1}+1}}{(2\pi )^{(\#G_{l-1}+1)/2}}
\label{eq:err-est-multi-peano-kernel} \\
&&\times \exp \left( -\sum_{n\in G_{l-1}}\frac{\beta y_{n}^{2}}{2}-\frac{\beta y_{l}^{2}}{2}\right) \notag \\
&\leq &\frac{(3c/2)^{\#G_{l-1}+1}}{(2\pi )^{(\#G_{l-1}+1)/2}}\exp \left(
-\sum_{n\in G_{l-1}}\frac{\beta y_{n}^{2}}{2}-\frac{\beta y_{l}^{2}}{2}\right) . \notag
\end{eqnarray}
Substituting \eqref{eq:err-est-multi-peano-kernel} in
\eqref{eq:sg-component-recursive-l}, we arrive at
\begin{eqnarray}
&&\left\vert S(L,l)\otimes _{n=l+1}^{N}I_{1}^{(n)}\varphi \right\vert
\label{eq:sg-error-estimates-components-est} \\
&\leq &\sum_{i_{1}+\cdots +i_{l}=L+l-1}\frac{(3c/2)^{\#G_{l-1}+1}}{(2\pi
)^{(N-\#F_{l-1})/2}}\int_{\mathbb{R}^{N-\#F_{l-1}}}\left\vert \otimes _{m\in
F_{l-1}}Q_{1}^{(m)}D^{2\alpha _{l}}\varphi (y)\right\vert \notag \\
&&\times \exp \left( -\sum_{n\in G_{l-1}}\frac{\beta y_{n}^{2}}{2}-\frac{\beta y_{l}^{2}}{2}-\sum_{k=l+1}^{N}\frac{y_{k}^{2}}{2}\right) \prod_{n\in
G_{l-1}}dy_{n}\times \,dy_{l}\ldots dy_{N}. \notag
\end{eqnarray}
Using (\ref{Xder}) and the assumption that $\left\vert \frac{d^{2L}}{dx^{2L}}f(x)\right\vert \leq K(1+|x|^{\varkappa })$ for some $K>0$ and $\varkappa
\geq 1$, we get
\begin{eqnarray}
\left\vert D^{2\alpha _{l}}\varphi (y)\right\vert &=&\varepsilon
^{2L}h^{L}\left\vert \frac{d^{2L}}{dx^{2L}}f(\bar{X}(N,y))\right\vert
(1+\lambda h)^{2LN-2\sum_{i=1}^{l}i\alpha _{l}^{i}} \label{Dfi} \\
&\leq &K\varepsilon ^{2L}h^{L}(1+\lambda h)^{2LN-2\sum_{i=1}^{l}i\alpha
_{l}^{i}}(1+|\bar{X}(N,y)|^{\varkappa }). \notag
\end{eqnarray}
Substituting (\ref{Dfi}) and (\ref{Xy}) in (\ref{eq:sg-error-estimates-components-est}) and doing further calculations, we
obtain
\begin{gather}
\left\vert S(L,l)\otimes _{n=l+1}^{N}I_{1}^{(n)}\varphi \right\vert \leq
K\varepsilon ^{2L}h^{L}(1+e^{\lambda \varkappa T})(1+(3c/2)^{L\wedge
l})\beta ^{-(L\wedge l)/2} \label{fff} \\
\times \sum_{i_{1}+\cdots +i_{l}=L+l-1}(1+\lambda
h)^{2LN-2\sum_{i=1}^{l}i\alpha _{l}^{i}} \notag \\
\leq K\varepsilon ^{2L}h^{L}(1+e^{\lambda (2L+\varkappa
)T})(1+(3c/2)^{L\wedge l})\beta ^{-(L\wedge l)/2}\binom{L+l-2}{L-1} \notag
\\
\leq K\varepsilon ^{2L}h^{L}(1+e^{\lambda (2L+\varkappa
)T})(1+(3c/2)^{L\wedge l})\beta ^{-(L\wedge l)/2}l^{L-1}. \notag
\end{gather}
with a new $K>0$ which does not depend on $h,$ $\varepsilon ,$ $L,$ $c,$
$\beta ,$ and $l.$ In the last line of (\ref{fff}) we used
\begin{equation*}
\binom{L+l-2}{L-1}=\prod_{i=1}^{L-1}(1+\frac{l-1}{i})\leq \left[ \frac{1}{L-1}\sum_{i=1}^{L-1}(1+\frac{l-1}{i})\right] ^{L-1}\leq l^{L-1}.
\end{equation*}
Substituting (\ref{fff}) in (\ref{eq34}) and observing that $\left\vert ({I}_{1}^{(1)}-Q_{L}^{(1)})\otimes _{k=2}^{N}{I}_{1}^{(k)}\varphi \right\vert $
is of order $O(h^{L})$, we arrive at (\ref{propstat}).
\end{proof}
\begin{rem}\label{rem:convergence-sgc}
Due to Examples~\ref{exm:linear-sde-moments} and \ref{exm:linear-sde-cos},
the error estimate $(\ref{propstat})$ proved in Proposition~\ref{prop:weak-apprx-euler-sg} is quite sharp, and we conclude that in general
the SGC algorithm for weak approximation of SDE converges with neither a
decrease of the time step $h$ nor an increase of the level $L$. At the
same time, {\color{black} the algorithm is convergent in $L$ (when $L\leq N$) if $\varepsilon^2T$ is sufficiently small and the SDE has some stable behavior (e.g., $\lambda\leq0$). Furthermore,} the algorithm
is sufficiently accurate when the noise intensity
$\varepsilon $ and integration time $T$ are relatively small.
\end{rem}
\begin{rem}
It follows from the proof (see $(\ref{Dfi})$) that if $\frac{d^{2L}}{dx^{2L}}f(x)=0$ then the error $I_{N}(\varphi )-A(L,N)\varphi =0.$ We emphasize that
this is a feature of the linear SDE $(\ref{linearSDE})$ thanks to $(\ref{Xder})$, while in the case of nonlinear SDE this error remains of the form
$(\ref{propstat})$ even if the $2L$-th derivative of $f$ is zero. See also the
discussion at the end of Example~\ref{exm:linear-sde-moments} and the numerical
tests in Example~\ref{exm:mcir}.
\end{rem}
\begin{rem}
We note that it is possible to prove a proposition analogous to Proposition
\ref{prop:weak-apprx-euler-sg} for a more general SDE, e.g. for SDE with
additive noise. Since such a proposition does not add further information to
our discussion of the use of SGC and its proof is more complex than in the
case of $(\ref{linearSDE})$, we do not consider such a proposition here.
\end{rem}
\section{Recursive collocation algorithm for linear SPDE}
\label{sec:recursive-sg-advdiff}
In the previous section we demonstrated the limitations of SGC
algorithms in application to SDE: in general, such an algorithm will
not work unless the integration time $T$ and the magnitude of the noise are small. It is
not difficult to understand that SGC algorithms have the same limitations in
the case of SPDE as well, which, in particular, is demonstrated in
Example~4.2, where a stochastic Burgers equation is considered. To cure this
deficiency and achieve longer time integration in the case of linear SPDE,
we will exploit the idea of the recursive approach proposed in \cite{LotMR97,ZhangRTK12} in the case of a Wiener chaos expansion method. To this
end, we apply the algorithm of SGC accompanied by a time discretization of
SPDE over a small interval $[(k-1)h,kh]$ instead of the whole interval
$[0,T]$ as we did in the previous section, and build a recursive scheme to
compute the second-order moments of the solutions to linear SPDE.
Consider the following linear SPDE in Ito's form:
\begin{eqnarray}
d{u(t,x)} &=&\left[ \mathcal{L}u(t,x)+f(x)\right] dt+\sum_{l=1}^{r}\left[
\mathcal{M}_{l}u(t,x)+g_{l}(x)\right] {d}w_{l}(t),\;(t,x)\in (0,T]\times
\mathcal{D},\ \ \ \ \ \ \ \ \label{eq:sadv-diff} \\
{u(0,x)} &=&u_{0}(x),\ \ x\in \mathcal{D}, \notag
\end{eqnarray}
where $\mathcal{D}$ is an open domain in $\mathbb{R}^{m}$ and $(w(t),\mathcal{F}_{t})$ is a Wiener process as in \eqref{eq:ito-sde-vector}, and
\begin{eqnarray}
\mathcal{L}u(t,x) &=&\sum_{i,j=1}^{m}a_{ij}\left( x\right) \frac{\partial
^{2}}{\partial x_{i}\partial x_{j}}u(t,x)+\sum_{i=1}^{m}b_{i}(x)\frac{\partial }{\partial x_{i}}u(t,x)+c\left( x\right) u(t,x),
\label{eq:sadv-diff-coefficients} \\
\mathcal{M}_{l}u(t,x) &=&\sum_{i=1}^{m}\alpha _{i}^{l}(x)\frac{\partial }{\partial x_{i}}u(t,x)+\beta ^{l}\left( x\right) u(t,x). \notag
\end{eqnarray}
We assume that $\mathcal{D}$ is either bounded with regular boundary or that
$\mathcal{D}=\mathbb{R}^{m}.$ In the former case we consider periodic
boundary conditions and in the latter the Cauchy problem. We also assume
that the coefficients of the operators $\mathcal{L}$ and $\mathcal{M}$ are
uniformly bounded and
\begin{equation*}
\tilde{\mathcal{L}}:=\mathcal{L}-\frac{1}{2}\sum_{1\leq l\leq r}\,\mathcal{M}_{l}\mathcal{M}_{l}
\end{equation*}
is nonnegative definite. When the coefficients of $\mathcal{L}$ and
$\mathcal{M}$ are sufficiently smooth, existence and uniqueness results for
the solution of \eqref{eq:sadv-diff}-\eqref{eq:sadv-diff-coefficients} are
available, e.g., in \cite{Roz-B90}; for results under weaker assumptions see, e.g.,
\cite{MikRoz98,LotRoz06a}.
We will continue to use the notation from the previous section: $h$ is a
step of uniform discretization of the interval $[0,T],$ $N=T/h$ and
$t_{k}=kh,$ $k=0,\ldots ,N.$ We apply the trapezoidal rule in time to the
SPDE \eqref{eq:sadv-diff}:
\begin{gather}
u^{k+1}(x)=u^{k}(x)+h[\tilde{\mathcal{L}}u^{k+1/2}(x)-\frac{1}{2}\sum_{l=1}^{r}\mathcal{M}_{l}g_{l}(x)+f(x)] \label{eq:sadv-diff-cn} \\
+\sum_{l=1}^{r}\left[ \mathcal{M}_{l}u^{k+1/2}(x)+g_{l}(x)\right] \sqrt{h}\left( \xi _{lh}\right) _{k+1},\;x\in \mathcal{D}, \notag \\
u^{0}(x)=u_{0}(x), \notag
\end{gather}
where $u^{k}(x)$ approximates $u(t_{k},x)$, $u^{k+1/2}=(u^{k+1}+u^{k})/2,$
and $\left( \xi _{lh}\right) _{k}$ are i.i.d. random variables so that
\begin{equation}
\xi _{h}=\left\{
\begin{array}{c}
\xi ,\;|\xi |\leq A_{h}, \\
A_{h},\;\xi >A_{h}, \\
-A_{h},\;\xi <-A_{h}
\end{array}
\right. \label{Fin61}
\end{equation}
with $\xi \sim \mathcal{N}(0,1)$ and $A_{h}=\sqrt{2p|\ln h|}$ with
$p\geq 1.$ We note that the cut-off of the Gaussian random variables is
needed in order to ensure that the implicitness of \eqref{eq:sadv-diff-cn}
does not lead to non-existence of the second moment of $u^{k}(x)$ \cite{MilRT02,MilTre-B04}. Based on the standard results of numerics for SDE \cite{MilTre-B04}, it is natural to expect that under some regularity assumptions
on the coefficients and the initial condition of \eqref{eq:sadv-diff}, the
approximation $u^{k}(x)$ from (\ref{eq:sadv-diff-cn}) converges with order
$1/2$ in the mean-square sense and with order $1$ in the weak sense; in
the latter case one can use the discrete random variables $\zeta _{l,k+1}$ from
(\ref{eq:weak-euler-discrete-apprx-gauss}) instead of $\left( \xi
_{lh}\right) _{k+1}$ (see also, e.g., {\color{black}\cite{Deb11,GreKlo96,KloSho01}}, but we are not
proving such a result here).
In what follows it will be convenient to also use the notation:
$u_{H}^{k}(x;\phi (\cdot ))=u_{H}^{k}(x;\phi (\cdot );\left( \xi _{lh}\right)
_{k},l=1,\ldots ,r)$ for the approximation \eqref{eq:sadv-diff-cn} of the
solution $u(t_{k},x)$ to the SPDE \eqref{eq:sadv-diff} with $f(x)=0$ and
$g_{l}(x)=0$ for all $l$ (homogeneous SPDE) and with the initial condition
$\phi (\cdot )$ prescribed at time $t=t_{k-1};$ $u_{O}^{k}(x)=u_{O}^{k}(x;\left( \xi _{lh}\right) _{k},l=1,\ldots ,r)$ for the approximation
\eqref{eq:sadv-diff-cn} of the solution $u(t_{k},x)$ to the SPDE
\eqref{eq:sadv-diff} with the initial condition $\phi (x)=0$ prescribed at
time $t=t_{k-1}.$ Note that $u_{O}^{k}(x)=0$ if $f(x)=0$ and $g_{l}(x)=0$
for all $l.$
Let $\left\{ e_{i}\right\} =\left\{ e_{i}(x)\right\} _{i\geq 1}$ be a
complete orthonormal system (CONS) in $L^{2}(\mathcal{D})$ with boundary
conditions satisfied and $(\cdot ,\cdot )$ be the inner product in that
space. Then we can write
\begin{equation}
u^{k-1}(x)=\sum_{i=1}^{\infty }c_{i}^{k-1}e_{i}(x) \label{agan1}
\end{equation}
with $c_{i}^{k-1}=(u^{k-1},e_{i})$ and, due to the SPDE's linearity:
\begin{equation*}
u^{k}(x)=u_{O}^{k}(x)+\sum_{i=1}^{\infty }c_{i}^{k-1}u_{H}^{k}(x;e_{i}(\cdot
)).
\end{equation*}
We have
\begin{equation*}
c_{l}^{0}=(u_{0},e_{l}),\ \ \ c_{l}^{k}=q_{Ol}^{k}+\sum_{i=1}^{\infty
}c_{i}^{k-1}q_{Hli}^{k},\ \ l=1,2,\ldots ,\ \ k=1,\ldots ,N,
\end{equation*}
where $q_{Ol}^{k}=(u_{O}^{k},e_{l})$ and $q_{Hli}^{k}=(u_{H}^{k}(\cdot
;e_{i}),e_{l}(\cdot )).$
Using (\ref{agan1}), we represent the second moment of the approximation
$u^{k}(x)$ from \eqref{eq:sadv-diff-cn} of the solution $u(t_{k},x)$ to the
SPDE \eqref{eq:sadv-diff} as follows:
\begin{equation}
\mathbb{E}[u^{k}(x)]^{2}=\sum_{i,j=1}^{\infty }C_{ij}^{k}e_{i}(x)e_{j}(x),
\label{Eu2}
\end{equation}
where the covariance matrix $C_{ij}^{k}=\mathbb{E}[c_{i}^{k}c_{j}^{k}].$
Introducing also the means $M_{i}^{k},$ one can obtain the recurrent
relations in $k:$
\begin{eqnarray}
M_{i}^{0} &=&c_{i}^{0}=(u_{0},e_{i}),\ \ C_{ij}^{0}=c_{i}^{0}c_{j}^{0},
\label{agan2} \\
M_{i}^{k} &=&\mathbb{E}[q_{Oi}^{k}]+\sum_{l=1}^{\infty }M_{l}^{k-1}\mathbb{E}[q_{Hil}^{k}], \notag \\
C_{ij}^{k} &=&\mathbb{E}[q_{Oi}^{k}q_{Oj}^{k}]+\sum_{l=1}^{\infty }M_{l}^{k-1}\left( \mathbb{E}[q_{Oi}^{k}q_{Hjl}^{k}]+\mathbb{E}[q_{Oj}^{k}q_{Hil}^{k}]\right) +\sum_{l,p=1}^{\infty }C_{lp}^{k-1}\mathbb{E}[q_{Hil}^{k}q_{Hjp}^{k}], \notag \\
i,j &=&1,2,\ldots ,\ \ k=1,\ldots ,N. \notag
\end{eqnarray}
Since the coefficients of the SPDE \eqref{eq:sadv-diff} are time
independent, all the expectations involving the quantities $q_{Oi}^{k}$ and
$q_{Hil}^{k}$ in (\ref{agan2}) do not depend on $k$ and hence it is
sufficient to compute them just once, on a single step $k=1,$ and we get
\begin{eqnarray}
M_{i}^{0} &=&c_{i}^{0}=(u_{0},e_{i}),\ \ C_{ij}^{0}=c_{i}^{0}c_{j}^{0},
\label{recMC} \\
M_{i}^{k} &=&\mathbb{E}[q_{Oi}^{1}]+\sum_{l=1}^{\infty }M_{l}^{k-1}\mathbb{E}[q_{Hil}^{1}], \notag \\
C_{ij}^{k} &=&\mathbb{E}[q_{Oi}^{1}q_{Oj}^{1}]+\sum_{l=1}^{\infty }M_{l}^{k-1}\left( \mathbb{E}[q_{Oi}^{1}q_{Hjl}^{1}]+\mathbb{E}[q_{Oj}^{1}q_{Hil}^{1}]\right) +\sum_{l,p=1}^{\infty }C_{lp}^{k-1}\mathbb{E}[q_{Hil}^{1}q_{Hjp}^{1}], \notag \\
i,j &=&1,2,\ldots ,\ \ k=1,\ldots ,N. \notag
\end{eqnarray}
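For illustration only (the computations reported below were done in Matlab, not with this code), the recursion above, truncated to $l_{\ast}$ basis functions and with the step-one expectations stored as arrays, could be sketched in Python as follows; all array names are our own.

```python
import numpy as np

def propagate_moments(M0, C0, EqO, EqH, EqOO, EqOH, EqHH, N):
    """Truncated sketch of the mean/covariance recursion: EqO[i] ~ E[q_Oi],
    EqH[i,l] ~ E[q_Hil], EqOO[i,j] ~ E[q_Oi q_Oj], EqOH[i,j,l] ~ E[q_Oi q_Hjl],
    and EqHH[i,l,j,p] ~ E[q_Hil q_Hjp], all computed once on a single step."""
    M, C = M0.copy(), C0.copy()
    for _ in range(N):
        M_new = EqO + EqH @ M
        cross = np.einsum('l,ijl->ij', M, EqOH)   # sum_l M_l E[q_Oi q_Hjl]
        C_new = EqOO + cross + cross.T + np.einsum('lp,iljp->ij', C, EqHH)
        M, C = M_new, C_new
    return M, C
```

With $q_{O}\equiv 0$ and $u_{H}^{1}$ acting as the identity on the basis, the recursion reproduces the initial moments exactly, which gives a cheap consistency check.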
These expectations can be approximated by quadrature rules from Section~2.1.
If the number of noises $r$ is small, then it is natural to use the tensor
product rule (\ref{appi}) with one-dimensional Gauss--Hermite quadratures of
order $n=2$ or $3$ (note that when $r=1,$ we can use just a one-dimensional
Gauss--Hermite quadrature of order $n=2$ or $3$). If the number of noises $r$
is large then it might be beneficial to use the sparse grid quadrature (\ref{eq:smolyak-tensor-like}) of level $L=2$ or $3.$ More specifically,
\begin{eqnarray}
\mathbb{E}[q_{Oi}^{1}] &\doteq &\sum_{p=1}^{\eta }(u_{O}^{1}(\cdot ;\mathrm{y}_{p}),e_{i}(\cdot ))\mathsf{W}_{p},\ \ \mathbb{E}[q_{Hil}^{1}]\doteq
\sum_{p=1}^{\eta }(u_{H}^{1}(\cdot ;e_{l};\mathrm{y}_{p}),e_{i}(\cdot ))\mathsf{W}_{p}, \label{app_expec} \\
\mathbb{E}[q_{Oi}^{1}q_{Oj}^{1}] &\doteq &\sum_{p=1}^{\eta }(u_{O}^{1}(\cdot
;\mathrm{y}_{p}),e_{i}(\cdot ))(u_{O}^{1}(\cdot ;\mathrm{y}_{p}),e_{j}(\cdot
))\mathsf{W}_{p}, \notag \\
\mathbb{E}[q_{Oi}^{1}q_{Hjl}^{1}] &\doteq &\sum_{p=1}^{\eta
}(u_{O}^{1}(\cdot ;\mathrm{y}_{p}),e_{i}(\cdot ))(u_{H}^{1}(\cdot ;e_{l};\mathrm{y}_{p}),e_{j}(\cdot ))\mathsf{W}_{p}, \notag \\
\mathbb{E}[q_{Hil}^{1}q_{Hjk}^{1}] &\doteq &\sum_{p=1}^{\eta
}(u_{H}^{1}(\cdot ;e_{l};\mathrm{y}_{p}),e_{i}(\cdot ))(u_{H}^{1}(\cdot
;e_{k};\mathrm{y}_{p}),e_{j}(\cdot ))\mathsf{W}_{p}, \notag
\end{eqnarray}
where $\mathrm{y}_{p}\in \mathbb{R}^{r}$ are nodes of the quadrature,
$\mathsf{W}_{p}$ are the corresponding quadrature weights, and $\eta =n^{r}$
in the case of the tensor product rule (\ref{appi}) with one-dimensional
Gauss--Hermite quadratures of order $n$ or $\eta $ is the total number of
nodes $\#S$ used by the sparse-grid quadrature (\ref{eq:smolyak-tensor-like}) of level $L.$ To find $u_{O}^{1}(x;\mathrm{y}_{p})$ and $u_{H}^{1}(x;e_{l};\mathrm{y}_{p}),$ we need to solve the corresponding elliptic PDE problems,
which we do using the spectral method in physical space, i.e., using a
truncation of the CONS $\left\{ e_{l}\right\} _{l=1}^{l_{\ast }}$ to
represent the numerical solution.
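A minimal Python sketch of the tensor-product rule for Gaussian expectations (our own illustration, not the code used in the experiments); note that NumPy's `hermgauss` targets the weight $e^{-x^{2}}$, hence the rescaling by $\sqrt{2}$ and $\pi ^{-1/2}$ per dimension.

```python
import numpy as np
from itertools import product

def gauss_hermite_expectation(f, r, n):
    """Approximate E[f(xi)] for xi ~ N(0, I_r) with the tensor-product rule
    built from a one-dimensional Gauss--Hermite quadrature of order n,
    using eta = n**r nodes in total."""
    x, w = np.polynomial.hermite.hermgauss(n)
    nodes = np.sqrt(2.0) * x          # rescale to the standard normal
    weights = w / np.sqrt(np.pi)      # normalize so the weights sum to 1
    total = 0.0
    for idx in product(range(n), repeat=r):
        y = nodes[list(idx)]
        total += np.prod(weights[list(idx)]) * f(y)
    return total
```

Since the order-$n$ rule is exact for polynomials of degree up to $2n-1$ in each variable, low moments are reproduced exactly even for $n=2$ or $3$.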
To summarize, we formulate the following deterministic recursive algorithm
for the second-order moments of the solution to the SPDE problem (\ref{eq:sadv-diff}).
\begin{algo}
\label{algo:sadv-diff-s4-scm-mom} Choose the algorithm's parameters: a
complete orthonormal basis $\{e_{l}(x)\}_{l\geq 1}$ in $L^{2}(\mathcal{D})$
and its truncation $\{e_{l}(x)\}_{l=1}^{l_{\ast }}$; a time step size $h$;
and a quadrature rule (i.e., nodes $\mathrm{y}_{p}\ $and the quadrature
weights $\mathsf{W}_{p},$ $p=1,\ldots ,\eta )$.
Step 1. For each $p=1,\ldots ,\eta $ and $l=1,\ldots ,l_{\ast },$ find
approximations $\bar{u}_{O}^{1}(x;\mathrm{y}_{p})\approx u_{O}^{1}(x;\mathrm{y}_{p})$ and $\bar{u}_{H}^{1}(x;e_{l};\mathrm{y}_{p})\approx
u_{H}^{1}(x;e_{l};\mathrm{y}_{p})$ using the spectral method in physical
space.

Step 2. Using the quadrature rule, approximately find the expectations as in
$(\ref{app_expec})$ but with the approximate $\bar{u}_{O}^{1}(x;\mathrm{y}_{p})$ and $\bar{u}_{H}^{1}(x;e_{l};\mathrm{y}_{p})$ instead of $u_{O}^{1}(x;\mathrm{y}_{p})$ and $u_{H}^{1}(x;e_{l};\mathrm{y}_{p}),$ respectively.

Step 3. Recursively compute the approximations of the means $M_{i}^{k},$
$i=1,\ldots ,l_{\ast },$ and covariance matrices $\{C_{ij}^{k},$
$i,j=1,\ldots ,l_{\ast }\}$ for $k=1,\ldots ,N$ according to $(\ref{recMC})$
with the approximate expectations found in Step 2 instead of the exact ones.

Step 4. Compute the approximation of the second-order moment $\mathbb{E}\lbrack u^{k}(x)\rbrack^{2}$ using $(\ref{Eu2})$ with the approximate
covariance matrix found in Step~3 instead of the exact one $\{C_{ij}^{k}\}.$
\end{algo}
We emphasize that Algorithm~\ref{algo:sadv-diff-s4-scm-mom} for computing
moments does not have a statistical error. {\color{black}Based on the error estimate in Proposition \ref{prop:weak-apprx-euler-sg}, we expect the one-step error of the SGC for our recursive algorithm
to be of order $h^{L}$. Hence, we expect the total global error from the trapezoidal rule in time and the SGC to be $O(h)+O(h^{L-1})$.} Error analysis of this algorithm
will be considered elsewhere.
\begin{rem}
Algorithms analogous to Algorithm~\ref{algo:sadv-diff-s4-scm-mom} can also
be constructed based on time-discretization methods other than the
trapezoidal rule used here, or on other types of SPDE approximations;
e.g., one can exploit the Wong--Zakai approximation.
\end{rem}
\begin{rem}
\label{rm:s4-scm-cost} The cost of this algorithm is, similar to the
algorithm in \cite{ZhangRTK12}, $\frac{T}{\Delta }\eta l_{\ast}^{4}$ and the
storage is $\eta l_{\ast}^{2}$. The total cost can be reduced by employing
some reduced order methods in physical space and be proportional to
$l_{\ast}^{2}$ instead of $l_{\ast}^{4}$. The discussion on computational
efficiency of the recursive Wiener chaos method is also valid here, see \cite[Remark 4.1]{ZhangRTK12}.
\end{rem}
{\color{black}
\begin{rem}
The choice of an orthonormal basis is an important topic in the research on spectral methods;
see \cite{GotOrs-B77} and many subsequent works. Here we choose the Fourier basis for Problem \ref{eq:sadv-diff} because of the periodic boundary conditions.
\end{rem}
}
\section{Numerical experiments\label{sec:num-experiments-sg}}
In this section we illustrate via three examples how the SGC algorithms can
be used for the weak-sense approximation of SDE and SPDE. The first example
is a scalar SDE with multiplicative noise, where we show that the SGC
algorithm's error is small when the noise magnitude is small. We also
observe that when the noise magnitude is large, the SGC algorithm does not
work well. In the second example we demonstrate that the SGC can be
successfully used for simulating Burgers equation with additive noise when
the integration time is relatively small. In the last example we show that
the recursive algorithm from Section~\ref{sec:recursive-sg-advdiff} works
effectively for computing moments of the solution to an advection-diffusion
equation with multiplicative noise over a longer integration time.
In all the tests we limit the dimension of random spaces to $40$, which is
an empirical limitation of the SGC of Smolyak on the dimensionality \cite{Pet03}. Also, we take the sparse grid level less than or equal to five in
order to avoid an excessive number of sparse grid points. All the tests were
run using Matlab R2012b on a Macintosh desktop computer with Intel Xeon CPU
E5462 (quad-core, 2.80 GHz).
\begin{exm}[modified Cox-Ingersoll-Ross (mCIR), see e.g. {\protect\cite{ComGR07}}]
\label{exm:mcir}
\upshape
Consider the Ito SDE
\begin{equation}
dX=-\theta _{1}X\,dt+\theta _{2}\sqrt{1+X^{2}}\,dw(t),\quad X(0)=x_{0}.
\label{eq:m-cir}
\end{equation}
For $\theta _{2}^{2}-2\theta _{1}\neq 0$, the first two moments of $X(t)$
are equal to
\begin{equation*}
\mathbb{E}X(t)=x_{0}\exp (-\theta _{1}t),\quad \mathbb{E}X^{2}(t)=-\frac{\theta _{2}^{2}}{\theta _{2}^{2}-2\theta _{1}}+(x_{0}^{2}+\frac{\theta
_{2}^{2}}{\theta _{2}^{2}-2\theta _{1}})\exp ((\theta _{2}^{2}-2\theta
_{1})t).
\end{equation*}
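These closed-form moments serve as the reference values in the tables below; a direct transcription (our own, in Python) is:

```python
from math import exp

def mcir_moments(t, x0, theta1, theta2):
    """Exact first two moments of the mCIR SDE, valid when
    theta2**2 - 2*theta1 != 0."""
    a = theta2 ** 2 - 2.0 * theta1
    m1 = x0 * exp(-theta1 * t)
    m2 = -theta2 ** 2 / a + (x0 ** 2 + theta2 ** 2 / a) * exp(a * t)
    return m1, m2
```

For $x_{0}=0.1$, $\theta _{1}=1$, $\theta _{2}=0.3$, $T=1$ this reproduces the reference values $3.679\times 10^{-2}$ and $4.162\times 10^{-2}$ quoted below.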
In this example we test the SGC algorithms based on the Euler scheme (\ref{eq:ito-sde-vector-euler}) and on the second-order weak scheme
\eqref{eq:ito-sde-vec-weak-2nd-order}. We compute the first two moments of
the SDE's solution and {\color{black}use the relative errors to measure accuracy of} the algorithms as
\begin{equation}
\rho _{1}^{r}(T)=\frac{\left\vert \mathbb{E}X(T)-\mathbb{E}X_{N}\right\vert
}{\left\vert \mathbb{E}X(T)\right\vert },\quad \rho _{2}^{r}(T)=\frac{\left\vert \mathbb{E}X^{2}(T)-\mathbb{E}X_{N}^{2}\right\vert }{\mathbb{E}X^{2}(T)}. \label{eq:error-measure-sode-firstTwomom}
\end{equation}
Table \ref{tbl:mcir-1} presents the errors for the SGC algorithms based on
the Euler scheme (left) and on the second-order scheme
\eqref{eq:ito-sde-vec-weak-2nd-order} (right), when the noise magnitude is
small. For the parameters given in the table's description, the exact values
(up to 4 d.p.) of the first and second moments are $3.679\times 10^{-2}$ and
$4.162\times 10^{-2}$, respectively. We see that increase of the SGC level
$L$ above $2$ in the Euler scheme case and above $3$ in the case of the
second-order scheme does not improve accuracy. When the SGC error is
relatively small in comparison with the error due to time discretization, we
observe decrease of the overall error of the algorithms in $h$: proportional
to $h$ for the Euler scheme and to $h^{2}$ for the second-order scheme. We
underline that in this experiment the noise magnitude is small.
\end{exm}
\begin{table}[tbph]
\caption{Comparison of the SGC algorithms based on the Euler scheme (left)
and on the second-order scheme \eqref{eq:ito-sde-vec-weak-2nd-order}
(right). The parameters of the model (\protect\ref{eq:m-cir}) are $x_{0}=0.1$, $\protect\theta _{1}=1,$ $\protect\theta _{2}=0.3,$ and $T=1$. }
\label{tbl:mcir-1}
\begin{center}
\scalebox{0.75}{
\begin{tabular}{cccccc|ccccc}
\hline
$h$ & $L$ & $\rho _{1}^{r}(1)$ & order & $\rho _{2}^{r}(1)$ &
\multicolumn{1}{c|}{order} & $L$ & $\rho _{1}^{r}(1)$ & order & $\rho
_{2}^{r}(1)$ & order \\ \hline\hline
\multicolumn{1}{l}{5$\times 10^{-1}$} & 2 & 3.20$\times 10^{-1}$ & -- & 3.72$\times 10^{-1}$ & \multicolumn{1}{c|}{--} & 3 & $\mathbf{6.05\times 10^{-2}}$ & -- &
8.52$\times 10^{-2}$ & -- \\ \hline
\multicolumn{1}{l}{2.5$\times 10^{-1}$} & 2 & 1.40$\times 10^{-1}$ & 1.2 & 1.40$\times 10^{-1}$ & \multicolumn{1}{c|}{1.4} & 3 & 1.14$\times 10^{-2}$ &
2.4 & 2.10$\times 10^{-2}$ & 2.0 \\ \hline
\multicolumn{1}{l}{1.25$\times 10^{-1}$} & 2 & $\mathbf{6.60\times 10^{-2}}$ & 1.1 & 4.87$\times 10^{-2}$ & \multicolumn{1}{c|}{1.5} & 3 & 1.75$\times 10^{-3}$ &
2.7 & 6.73$\times 10^{-3}$ & 1.6 \\ \hline
\multicolumn{1}{l}{6.25$\times 10^{-2}$} & 2 & 3.21$\times 10^{-2}$ & 1.0 & 8.08$\times 10^{-3}$ & \multicolumn{1}{c|}{2.6} & 4 & 3.64$\times 10^{-4}$ &
2.3 & 1.21$\times 10^{-3}$ & 2.5 \\ \hline
\multicolumn{1}{l}{3.125$\times 10^{-2}$} & 2 & 1.58$\times 10^{-2}$ & 1.0 & 1.12$\times 10^{-2}$ & \multicolumn{1}{c|}{-0.5} & 4 & 8.48$\times 10^{-4}$
& -1.2 & 3.75$\times 10^{-4}$ & 1.7 \\ \hline\hline
\multicolumn{1}{l}{2.5$\times 10^{-2}$} & 2 & 1.26$\times 10^{-2}$ & & 1.49$\times 10^{-2}$ & \multicolumn{1}{c|}{} & 2 & 9.02$\times 10^{-4}$ & & 5.72$\times 10^{-2}$ & \\ \hline
\multicolumn{1}{l}{2.5$\times 10^{-2}$} & 3 & 1.26$\times 10^{-2}$ & & 1.48$\times 10^{-2}$ & \multicolumn{1}{c|}{} & 3 & 9.15$\times 10^{-5}$ & & 2.84$\times 10^{-3}$ & \\ \hline
\multicolumn{1}{l}{2.5$\times 10^{-2}$} & 4 & 1.26$\times 10^{-2}$ & & 1.55$\times 10^{-2}$ & \multicolumn{1}{c|}{} & 4 & 1.06$\times 10^{-4}$ & & 2.77$\times 10^{-4}$ & \\ \hline
\multicolumn{1}{l}{2.5$\times 10^{-2}$} & 5 & 1.26$\times 10^{-2}$ & & 1.56$\times 10^{-2}$ & \multicolumn{1}{c|}{} & 5 & 1.06$\times 10^{-4}$ & & 1.81$\times 10^{-4}$ & \\ \hline\hline
\end{tabular}}
\end{center}
\end{table}
In Table \ref{tbl:mcir-21} we give results of the numerical experiment when
the noise magnitude is not small. For the parameters given in the table's
description, the exact values (up to 4 d.p.) of the first and second moments
are $0.2718$ and $272.3202$, respectively. Though for the Euler scheme the
error in computing the mean decreases proportionally to $h$, there
is almost no decrease of the error in the rest of this experiment. The large
value of the second moment apparently affects efficiency of the SGC here.
For the Euler scheme, increasing $L$ and decreasing $h$ can slightly improve
accuracy in computing the second moment, e.g. the smallest relative error
for the second moment is $56.88\%$ when $h=0.03125$ and $L=5$ (this level
requires 750337 sparse grid points) out of the considered cases of $h=0.5,$\
$0.25,$\ $0.125,$\ $0.0625,\ $and $0.03125$ and $L\leq 5$. For the mean,
increase of the level $L$ from $2$ to $3,$ $4$ or $5$ does not improve
accuracy. For the second-order scheme \eqref{eq:ito-sde-vec-weak-2nd-order},
relative errors for the mean can be decreased by increasing $L$ for a fixed
$h$: e.g., for $h=0.25$, the relative errors are $0.5121$, $0.1753$, $0.0316$,
and $0.0086$ when $L=2,$\ $3,$\ $4,\ $and $5,$ respectively.
We also see in Table~\ref{tbl:mcir-21} that the SGC algorithm based on the
second-order scheme may not admit higher accuracy than the one based on the
Euler scheme, e.g. for $h=0.5,~0.25,~0.125$ the second-order scheme yields
higher accuracy while the Euler scheme demonstrates higher accuracy for
smaller $h=0.0625$ and $0.03125$. Further decrease in $h$ was not considered
because this would lead to increase of the dimension of the random space
beyond 40 when the sparse grid of Smolyak \eqref{eq:smolyak-tensor-like} may
fail and the SGC algorithm may also lose its competitive edge with Monte
Carlo-type techniques.
\begin{table}[tbph]
\caption{Comparison of the SGC algorithms based on the Euler scheme (left)
and on the second-order scheme \eqref{eq:ito-sde-vec-weak-2nd-order}
(right). The parameters of the model (\protect\ref{eq:m-cir}) are
$x_{0}=0.08$, $\protect\theta _{1}=-1,$ $\protect\theta _{2}=2,$ and $T=1$.
The sparse grid level $L=4$.}
\label{tbl:mcir-21}
\begin{center}
\begin{tabular}{cccc|cc}
\hline
$h$ & $\rho _{1}^{r}(1)$ & order & $\rho _{2}^{r}(1)$ & $\rho _{1}^{r}(1)$ &
$\rho _{2}^{r}(1)$ \\ \hline\hline
\multicolumn{1}{l}{5$\times 10^{-1}$} & 1.72$\times 10^{-1}$ & -- & 9.61$\times 10^{-1}$ & 2.86$\times 10^{-2}$ & 7.69$\times 10^{-1}$ \\ \hline
\multicolumn{1}{l}{2.5$\times 10^{-1}$} & 1.02$\times 10^{-1}$ & 0.8 & 8.99$\times 10^{-1}$ & 8.62$\times 10^{-3}$ & 6.04$\times 10^{-1}$ \\ \hline
\multicolumn{1}{l}{1.25$\times 10^{-1}$} & 5.61$\times 10^{-2}$ & 0.9 & 7.87$\times 10^{-1}$ & 1.83$\times 10^{-2}$ & 7.30$\times 10^{-1}$ \\ \hline
\multicolumn{1}{l}{6.25$\times 10^{-2}$} & 2.96$\times 10^{-2}$ & 0.9 & 6.62$\times 10^{-1}$ & 3.26$\times 10^{-2}$ & 8.06$\times 10^{-1}$ \\ \hline
\multicolumn{1}{l}{3.125$\times 10^{-2}$} & 1.52$\times 10^{-2}$ & 1.0 & 5.6$\times 10^{-1}$ & 4.20$\times 10^{-2}$ & 8.40$\times 10^{-1}$ \\ \hline\hline
\end{tabular}
\end{center}
\end{table}
Via this example we have shown that the SGC algorithms based on first- and
second-order schemes can produce sufficiently accurate results when noise
magnitude is small and that the second-order scheme is preferable since for
the same accuracy it uses random spaces of lower dimension than the
first-order Euler scheme, compare e.g. the error values highlighted by bold
font in Table~\ref{tbl:mcir-1} and see also the discussion at the end of
Section~2.2. When the noise magnitude is large (see Table~\ref{tbl:mcir-21}), the SGC algorithms do not work well, as was predicted in Section~2.3.
\begin{exm}[Burgers equation with additive noise]
\upshape
Consider the stochastic Burgers equation \cite{DaPDT94,HouLRZ06}
\begin{equation}
du+u\frac{\partial u}{\partial x}dt=\nu \frac{\partial ^{2}u}{\partial x^{2}}dt+\sigma \cos (x)d{w},\quad 0\leq x\leq \ell ,\quad \nu >0
\label{eq:burgers-additive2}
\end{equation}
with the initial condition $u_{0}(x)=2\nu \frac{2\pi }{\ell }\frac{\sin (\frac{2\pi }{\ell }x)}{a+\cos (\frac{2\pi }{\ell }x)},$ $a>1$, and periodic
boundary conditions. In the numerical tests the values of the
parameters used are $\ell =2\pi $ and $a=2$.
\end{exm}
Apply the Fourier collocation method in physical space and the trapezoidal
rule in time to (\ref{eq:burgers-additive2}):
\begin{equation}
\frac{\vec{u}_{j+1}-\vec{u}_{j}}{h}-\nu D^{2}\frac{\vec{u}_{j+1}+\vec{u}_{j}}{2}=-\frac{1}{2}D(\frac{\vec{u}_{j+1}+\vec{u}_{j}}{2})^{2}+\sigma \Gamma
\sqrt{h}\xi _{j}, \label{eq:cn-burgers-additive}
\end{equation}
where $\vec{u}_{j}=(u(t_{j},x_{1}),\ldots ,u(t_{j},x_{M}))^{\intercal
},\quad t_{j}=jh,$ $D$ is the Fourier spectral differential matrix, $\xi
_{j} $ are i.i.d $\mathcal{N}(0,1)$ random variables, and $\Gamma =(\cos
(x_{1}),\ldots ,\cos (x_{M}))^{\intercal }.$ The Fourier collocation points
are $x_{m}=m\frac{\ell }{M}$ ($m=1,\ldots ,M$) in physical space and in the
experiment we used $M=100$. We aim at computing moments of $\vec{u}_{j},$
which are integrals with respect to the Gaussian measure corresponding to
the collection of $\xi _{j},$ and we approximate these integrals using the
SGC from Section~2. The use of the SGC amounts to substituting $\xi _{j}$ in
\eqref{eq:cn-burgers-additive} by sparse-grid nodes, which results in a
system of (deterministic) nonlinear equations of the form
\eqref{eq:cn-burgers-additive}. To solve the nonlinear equations, we used
the fixed-point iteration method with tolerance $h^{2}/100$.
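The action of the Fourier spectral differential matrix $D$ on a grid function can equivalently be evaluated with the FFT; a sketch for $\ell =2\pi $ (our own illustration, not the code used in the experiments):

```python
import numpy as np

def fourier_derivative(u):
    """Spectral derivative of a periodic grid function sampled at
    x_m = 2*pi*m/M, equivalent to multiplying by the matrix D."""
    M = len(u)
    k = np.fft.fftfreq(M, d=1.0 / M)   # integer wavenumbers 0, 1, ..., -1
    if M % 2 == 0:
        k[M // 2] = 0.0                # drop the unmatched Nyquist mode
    return np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
```

For smooth periodic data this differentiation is accurate to machine precision, which is why the spatial error is negligible relative to the time-discretization and SGC errors studied here.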
The errors in computing the first and second moments are measured as follows
\begin{eqnarray}
\rho _{1}^{r,2}(T) &=&\frac{\left\Vert \mathbb{E}u_{\mathrm{ref}}(T,\cdot )-\mathbb{E}u_{\mathrm{num}}(T,\cdot )\right\Vert }{\left\Vert \mathbb{E}u_{\mathrm{ref}}(T,\cdot )\right\Vert },\quad \rho _{2}^{r,2}(T)=\frac{\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(T,\cdot )-\mathbb{E}u_{\mathrm{num}}^{2}(T,\cdot )\right\Vert }{\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(T,\cdot )\right\Vert },\qquad \label{eq:error-measure-l2} \\
\rho _{1}^{r,\infty }(T) &=&\frac{\left\Vert \mathbb{E}u_{\mathrm{ref}}(T,\cdot )-\mathbb{E}u_{\mathrm{num}}(T,\cdot )\right\Vert _{\infty }}{\left\Vert \mathbb{E}u_{\mathrm{ref}}(T,\cdot )\right\Vert _{\infty }},\ \ \rho _{2}^{r,\infty }(T)=\frac{\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(T,\cdot )-\mathbb{E}u_{\mathrm{num}}^{2}(T,\cdot )\right\Vert _{\infty }}{\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(T,\cdot )\right\Vert _{\infty }}, \notag
\end{eqnarray}
where $\left\Vert v(\cdot )\right\Vert =\displaystyle\left( \frac{2\pi }{M}\sum_{m=1}^{M}v^{2}(x_{m})\right) ^{1/2}$, $\left\Vert v(\cdot )\right\Vert
_{\infty }=\displaystyle\max_{1\leq m\leq M}\left\vert v(x_{m})\right\vert $, $x_{m}$ are the Fourier collocation points, and $u_{\mathrm{num}}$ and
$u_{\mathrm{ref}}$ are the numerical solution obtained by the SGC algorithm
and the reference solution, respectively. The first and second moments of
the reference solution $u_{\mathrm{ref}}$ were computed by the same solver
in space and time \eqref{eq:cn-burgers-additive} but accompanied by the
Monte Carlo method with a large number of realizations ensuring that the
statistical errors were negligible.
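The error measures above amount to a few lines of code; an illustrative transcription (our own, assuming $\ell =2\pi $ and grid values stored as vectors):

```python
import numpy as np

def relative_errors(u_ref, u_num, ell=2.0 * np.pi):
    """Discrete relative L^2 and L^infty errors on the collocation grid."""
    diff = u_ref - u_num
    l2 = lambda v: np.sqrt(ell / len(v) * np.sum(v ** 2))
    rho_l2 = l2(diff) / l2(u_ref)
    rho_inf = np.max(np.abs(diff)) / np.max(np.abs(u_ref))
    return rho_l2, rho_inf
```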
First, we choose $\nu =0.1$ and $\sigma =1$. We obtain the reference
solution with $h=10^{-4}$ and $1.92\times 10^{6}$ Monte Carlo realizations.
The corresponding statistical error is $1.004\times 10^{-3}$ for the mean
(maximum of the statistical error for $\mathbb{E}u_{\mathrm{ref}}(0.5,x_{j})
) and $9.49\times 10^{-4}$ for the second moment (maximum of the statistical
error for $\mathbb{E}u_{\mathrm{ref}}^{2}(0.5,x_{j})$) with $95\%$
confidence interval, and the corresponding estimates of $L^{2}$-norm of the
moments are $\left\Vert \mathbb{E}u_{\mathrm{ref}}(0.5,\cdot )\right\Vert
\doteq 0.18653$ and $\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(0.5,\cdot
)\right\Vert \doteq 0.72817$. We see from the results of the experiment
presented in Table~\ref{tbl:sc-sburgers-CN-large-para} that for $L=2$ the
error in computing the mean decreases when $h$ decreases up to $h=0.05$ but
the accuracy does not improve with further decrease of $h$. For the second
moment, we observe no improvement in accuracy with decrease of $h$. For
$L=4,$ the error in computing the second moment decreases with $h$. When
$h=0.0125$, increasing the sparse grid level improves the accuracy for the
mean: $L=3$ yields $\rho _{1}^{r,2}(0.5)\doteq 9.45\times 10^{-3}$ and $L=4$
yields $\rho _{1}^{r,2}(0.5)\doteq 8.34\times 10^{-3}$. As seen in Table~\ref{tbl:sc-sburgers-CN-large-para}, increase of the level $L$ also improves
accuracy for the second moment when $h=0.05,$ $0.025,$ and $0.0125$.
\begin{table}[tbph]
\caption{Errors of the SGC algorithm to the stochastic Burgers equation
\eqref{eq:burgers-additive2} with parameters $T=0.5$, $\protect\nu =0.1$ and
$\protect\sigma =1$. }
\label{tbl:sc-sburgers-CN-large-para}
\begin{center}
\scalebox{0.79}{
\begin{tabular}{ccc|ccc}
\hline
$h$ & $\rho _{1}^{r,2}(0.5),\ L=2$ & $\rho _{1}^{r,2}(0.5),$ $L=3$ & $\rho
_{2}^{r,2}(0.5),\ L=2$ & $\rho _{2}^{r,2}(0.5),$ $L=3$ & $\rho
_{2}^{r,2}(0.5),$ $L=4$ \\ \hline\hline
\multicolumn{1}{l}{2.5$\times 10^{-1}$} & 1.28$\times 10^{-1}$ & 1.3661$ \times 10^{-1}$ & 4.01$\times 10^{-2}$ & 1.05$\times 10^{-2}$ & 1.25$\times
10^{-2}$ \\ \hline
\multicolumn{1}{l}{1.00$\times 10^{-1}$} & 4.70$\times 10^{-2}$ & 5.3874$ \times 10^{-2}$ & 4.48$\times 10^{-2}$ & 4.82$\times 10^{-3}$ & 4.69$\times
10^{-3}$ \\ \hline
\multicolumn{1}{l}{5.00$\times 10^{-2}$} & 2.75$\times 10^{-2}$ & 2.7273$ \times 10^{-2}$ & 4.73$\times 10^{-2}$ & 5.89$\times 10^{-3}$ & 2.82$\times
10^{-3}$ \\ \hline
\multicolumn{1}{l}{2.50$\times 10^{-2}$} & 2.51$\times 10^{-2}$ & 1.4751$ \times 10^{-2}$ & 4.87$\times 10^{-2}$ & 6.92$\times 10^{-3}$ & 2.34$\times
10^{-3}$ \\ \hline
\multicolumn{1}{l}{1.25$\times 10^{-2}$} & 2.67$\times 10^{-2}$ & 9.4528$ \times 10^{-3}$ & 4.95$\times 10^{-2}$ & 7.51$\times 10^{-3}$ & 2.29$\times
10^{-3}$ \\ \hline\hline
\end{tabular}
}
\end{center}
\end{table}
Second, we choose $\nu =1$ and $\sigma =0.5$. We obtain the first two
moments of the reference $u_{\mathrm{ref}}$ using $h=10^{-4}$ and the Monte
Carlo method with $3.84\times 10^{6}$ realizations. The corresponding
statistical error is $3.2578\times 10^{-4}$ for the mean and $2.2871\times
10^{-4}$ for the second moment with $95\%$ confidence interval, and the
corresponding estimates of $L^{2}$-norm of the moments are $\left\Vert
\mathbb{E}u_{\mathrm{ref}}(0.5,\cdot )\right\Vert \doteq 1.11198$ and
$\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(0.5,\cdot )\right\Vert \doteq
0.66199$.
The results of the experiment are presented in Table~\ref{tbl:sc-sburgers-CN}. We see that accuracy is sufficiently high and there is some decrease of
errors with decrease of time step $h.$ However, as expected, no convergence
in $h$ is observed and further numerical tests (not presented here) showed
that taking $h$ smaller than $1.25\times 10^{-2}$ and level $L=2$ or $3$
does not improve accuracy. In additional experiments we also noticed that
there was no improvement of accuracy for the mean when we increased the
level $L$ up to $5$. For the second moment, we observe some improvement in
accuracy when $L$ increases from 2 to 3 (see Table~\ref{tbl:sc-sburgers-CN})
but additional experiments (not presented here) showed that further increase
of $L$ (up to $5)$ does not reduce the errors.
\begin{table}[tbph]
\caption{Errors of the SGC algorithm applied to the stochastic Burgers
equation \eqref{eq:burgers-additive2} with parameters $\protect\nu =1$,
\protect\sigma =0.5,$ and $T=0.5$.}
\label{tbl:sc-sburgers-CN}\centering
\begin{tabular}{ccc|c}
\hline
$h$ & $\rho _{1}^{r,2}(0.5),\ L=2$ & $\rho _{2}^{r,2}(0.5),\ L=2$ & $\rho
_{2}^{r,2}(0.5),\ L=3$ \\ \hline\hline
\multicolumn{1}{l}{2.5$\times 10^{-1}$} & 4.94$\times 10^{-3}$ & 8.75$\times
10^{-3}$ & 8.48$\times 10^{-3}$ \\ \hline
\multicolumn{1}{l}{1$\times 10^{-1}$} & 8.20$\times 10^{-4}$ & 1.65$\times
10^{-3}$ & 1.13$\times 10^{-3}$ \\ \hline
\multicolumn{1}{l}{5$\times 10^{-2}$} & 4.88$\times 10^{-4}$ & 1.18$\times
10^{-3}$ & 6.47$\times 10^{-4}$ \\ \hline
\multicolumn{1}{l}{2.5$\times 10^{-2}$} & 3.83$\times 10^{-4}$ & 1.08$\times
10^{-3}$ & 5.01$\times 10^{-4}$ \\ \hline
\multicolumn{1}{l}{1.25$\times 10^{-2}$} & 3.45$\times 10^{-4}$ & 1.07$\times 10^{-3}$ & 4.26$\times 10^{-4}$ \\ \hline\hline
\end{tabular}
\end{table}
For the errors measured in $L^{\infty }$-norm \eqref{eq:error-measure-l2} we
had similar observations (not presented here) as in the case of $L^{2}$-norm.
In summary, this example has illustrated that SGC algorithms can produce
accurate results in finding moments of solutions of nonlinear SPDE when the
integration time is relatively small. Comparing Tables \ref{tbl:sc-sburgers-CN-large-para} and \ref{tbl:sc-sburgers-CN}, we observe
better accuracy for the first two moments when the magnitude of noise is
smaller. In some situations higher sparse grid levels $L$ improve accuracy
but dependence of errors on $L$ is not monotone. No convergence in time step
$h$ and in level $L$ was observed which is consistent with our theoretical
prediction in Section~2.
\begin{exm}[Stochastic advection-diffusion equation]
\upshape
Consider the stochastic advection-diffusion equation in the Ito sense:
\begin{gather}
du=\left( \frac{\epsilon ^{2}+\sigma ^{2}}{2}\frac{\partial ^{2}u}{\partial
x^{2}}+\beta \sin (x)\frac{\partial u}{\partial x}\right) dt+\sigma \frac{\partial u}{\partial x}\,d{w}(s),\quad (t,x)\in (0,T]\times (0,2\pi ),
\label{eq:perturbed--sadv-diff} \\
u(0,x)=\phi (x),\quad x\in (0,2\pi ), \notag
\end{gather}
where $w(s)$ is a standard scalar Wiener process and $\epsilon \geq 0$,
$\beta $, and $\sigma $ are constants. In the tests we took $\phi (x)=\cos
(x) $, $\beta =0.1$, $\sigma =0.5,$ and $\epsilon =0.2$.
\end{exm}
We apply Algorithm \ref{algo:sadv-diff-s4-scm-mom} to
\eqref{eq:perturbed--sadv-diff} to compute the first two moments at a
relatively large time $T=5$. The Fourier basis was taken as CONS. Since
\eqref{eq:perturbed--sadv-diff} has a single noise only, we used
one-dimensional Gauss--Hermite quadratures of order $n.$ The implicitness
due to the use of the trapezoidal rule was resolved by the fixed-point
iteration with stopping criterion $h^{2}/100$.
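The fixed-point resolution of the implicit trapezoidal step can be illustrated on a scalar test equation $dy/dt=f(y)$; the following is a sketch of the iteration pattern only (our own code, with the stopping rule matching the $h^{2}/100$ criterion used here):

```python
def trapezoidal_step_fixed_point(y, h, f, tol):
    """One implicit trapezoidal step y_new = y + h/2*(f(y) + f(y_new)),
    resolved by fixed-point iteration until successive iterates differ
    by less than tol (the experiments use tol = h**2/100)."""
    z = y                                   # initial guess: previous value
    while True:
        z_new = y + 0.5 * h * (f(y) + f(z))
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
```

For $f(y)=-y$ the iteration converges to the closed-form trapezoidal update $y\,(1-h/2)/(1+h/2)$, since the iteration map is a contraction with factor $h/2<1$.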
As we have no exact solution of \eqref{eq:perturbed--sadv-diff}, we chose to
find the reference solution by Algorithm~4.2 from \cite{ZhangRTK12} (a
recursive Wiener chaos method accompanied by the trapezoidal rule in time
and Fourier collocation method in physical space) with the parameters: the
number of Fourier collocation points $M=30$, the length of time subintervals
for the recursion procedure $h=10^{-4}$, the highest order of Hermite
polynomials $P=4$, the number of modes approximating the Wiener process $n=4$, and the time step in the trapezoidal rule $h=10^{-5}$. It gives the second
moment in the $L^{2}$-norm $\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(1,\cdot )\right\Vert \doteq 1.065195$. The errors are computed as
follows
\begin{equation}
\varrho _{2}^{2}(T)=\left\vert \left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(T,\cdot )\right\Vert -\left\Vert \mathbb{E}u_{\mathrm{numer}}^{2}(T,\cdot )\right\Vert \right\vert ,\quad \varrho _{2}^{r,2}(T)=\frac{\varrho _{2}^{2}(T)}{\left\Vert \mathbb{E}u_{\mathrm{ref}}^{2}(T,\cdot
)\right\Vert }, \label{eq:error-measure-l2-v1}
\end{equation}
where the norm is defined as in \eqref{eq:error-measure-l2}.
\begin{table}[tbph]
\caption{Errors in computing the second moment of the solution to the
stochastic advection-diffusion equation \eqref{eq:perturbed--sadv-diff} with
$\protect\sigma =0.5$, $\protect\beta =0.1$, $\protect\epsilon =0.2$ at $T=5$
by Algorithm~\protect\ref{algo:sadv-diff-s4-scm-mom} with $l_{\ast }=20$ and
the one-dimensional Gauss--Hermite quadrature of order $n=2$ (left) and $n=3$
(right).}
\label{tbl:recursive-scm-advdiff-onenoise-delta-1st-order}\centering
\scalebox{0.90}{
\begin{tabular}{cccc|ccc}
\hline
$h$ & $\varrho _{2}^{r,2}(5)$ & order & CPU time (sec.) & $\varrho
_{2}^{r,2}(5)$ & order & CPU time (sec.) \\ \hline\hline
5$\times 10^{-2}$ & 1.01$\times 10^{-3}$ & -- & 7.41 & 1.06$\times 10^{-3}$
& -- & 1.10$\times 10$ \\ \hline
2$\times 10^{-2}$ & 4.07$\times 10^{-4}$ & 1.0 & 1.65$\times 10$ & 4.25$\times 10^{-4}$ & 1.0 & 2.43$\times 10$ \\ \hline
1$\times 10^{-2}$ & 2.04$\times 10^{-4}$ & 1.0 & 3.43$\times 10$ & 2.12$\times 10^{-4}$ & 1.0 & 5.10$\times 10$ \\ \hline
5$\times 10^{-3}$ & 1.02$\times 10^{-4}$ & 1.0 & 6.81$\times 10$ & 1.06$\times 10^{-4}$ & 1.0 & 1.00$\times 10^{2}$ \\ \hline
2$\times 10^{-3}$ & 4.08$\times 10^{-5}$ & 1.0 & 1.70$\times 10^{2}$ & 4.25$\times 10^{-5}$ & 1.0 & 2.56$\times 10^{2}$ \\ \hline
1$\times 10^{-3}$ & 2.04$\times 10^{-5}$ & 1.0 & 3.37$\times 10^{2}$ & 2.12$\times 10^{-5}$ & 1.0 & 5.12$\times 10^{2}$ \\ \hline\hline
\end{tabular}
}
\end{table}
The results of the numerical experiment are given in Table~\ref{tbl:recursive-scm-advdiff-onenoise-delta-1st-order}. We observe first-order
convergence in $h$ for the second moments. We notice that increasing the
quadrature order $n$ from $2$ to $3$ does not improve accuracy, which is
expected. Indeed, the trapezoidal rule used is of weak order one in $h$ in
the case of multiplicative noise, and a more accurate quadrature rule cannot
improve the order of convergence.
{\color{black}This observation is consistent with the total error being of order
$O(h)+O(h^{L-1})$, as discussed in Section \ref{sec:recursive-sg-advdiff}.}
We note in passing that in the additive
noise case we expect to see the second order convergence in $h$ when $n=3$
due to the properties of the trapezoidal rule.
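The empirical orders reported in the tables can be reproduced from successive errors on a decreasing sequence of step sizes; a small helper (our own) is:

```python
from math import log

def observed_orders(hs, errors):
    """Empirical convergence orders log(e_i/e_{i+1}) / log(h_i/h_{i+1}),
    as in the 'order' columns of the tables."""
    return [log(errors[i] / errors[i + 1]) / log(hs[i] / hs[i + 1])
            for i in range(len(hs) - 1)]
```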
In conclusion, we showed that recursive Algorithm~\ref{algo:sadv-diff-s4-scm-mom} can work effectively for accurate computing of
second moments of solutions to linear stochastic advection-diffusion
equations at relatively large time. We observed convergence of order one in
$h$.
\section*{Acknowledgments}
MVT was partially supported by the Leverhulme Trust Fellowship SAF-2012-006
and is also grateful to ICERM (Brown University, Providence) for its
hospitality. The rest of the authors were partially supported by an OSD/MURI
grant FA9550-09-1-0613, by NSF/DMS grant DMS-1216437 and also by the
Collaboratory on Mathematics for Mesoscopic Modeling of Materials (CM4)
which is sponsored by DOE. BR was also partially supported by ARO grant W911NF-13-1-0012
and NSF/DMS grant DMS-1148284.
We study the problem of binomiality of polynomial ideals. Given an
ideal with a finite set of generators, we would like to know if there
exists a basis for the ideal such that its elements have at most two
monomials. Such an ideal is called a \textit{binomial ideal}. We use
the Conradi-Kahle Algorithm for testing whether an ideal is
binomial. Our investigations are focused on implementing this
algorithm and performing computations. Binomial ideals offer clear
computational advantages over arbitrary ideals. They appear in various
applications, e.g., in biological and chemical models.
Binomial ideals have been extensively studied in the literature
\cite{eisenbud1996binomial,kahle2012decompositions,kahle2014positive}.
Eisenbud and Sturmfels in \cite{eisenbud1996binomial} have shown that
Gr\"obner bases\xspace \cite{Buchberger:65a} can be used to test binomiality. Recently,
biochemical networks whose \textit{steady state ideals} are binomial
have been studied in the field of \textit{Algebraic Systems Biology}
\cite{craciun_toric_2009,gatermann_bernsteins_2005,millan2012chemical}.
Mill\'an and Dickenstein in \cite{millan_structure_2018} have defined
\textit{MESSI Biological Systems} as a general framework for
modifications of type enzyme-substrate or swap with intermediates,
which includes interesting binomial systems
\cite{millan_structure_2018}.
In the context of biochemical reaction networks, Mill\'an,
Dickenstein, Shiu and Conradi in \cite{millan2012chemical} present a
sufficient condition on the \textit{stoichiometric matrix} for
binomiality of the steady state ideal. Conradi and Kahle
\cite{conradi2015detecting} proved that this condition is necessary
for homogeneous ideals and proposed an algorithm. The Conradi-Kahle
Algorithm is implemented in Macaulay2 \cite{BinomialsPackage}. Iosif,
Conradi and Kahle in \cite{Conradi2019} use the fact that the
irreducible components of the varieties of binomial ideals admit
monomial parametrization in order to reduce the dimension of detecting
total concentrations that lead to multiple steady states.
Our contribution in this article is analysing efficiency and
effectiveness of the Conradi-Kahle Algorithm, using Gr\"obner bases
for reduction, applied to some biological models. We first discuss the
complexity of the algorithm and reduce it to the complexity of
computing a Gr\"obner basis\xspace for a preprocessed input set of polynomials. Then we
present our computations in Macaulay2 \cite{Macaulay2} and Maple
\cite{Maple} and compare the algorithm with simply computing Gr\"obner basis\xspace of
the input ideal which shows the strength of the algorithm. The
experiments are performed on biological models in the BioModels
repository\footnote{\url{https://www.ebi.ac.uk/biomodels/}}, which is
a repository of mechanistic models of bio-medical systems
\cite{BioModels2015a,BioModels2018a}. Our initial motivation was to
understand the advantages and disadvantages of the method in
\cite{millan2012chemical} for testing binomiality of chemical reaction
networks. As the Conradi-Kahle Algorithm follows the idea of the
method in \cite{millan2012chemical} with more subtle reduction steps,
we rather use the Conradi-Kahle Algorithm to check binomiality of
ideals coming from biomodels, although none of our steady state ideals
are homogeneous.
\section{The Conradi-Kahle Algorithm}\label{sec:maple}
The Conradi-Kahle Algorithm is based on the sufficient condition by
Mill\'an, Dickenstein, Shiu and Conradi \cite{millan2012chemical} for
binomiality of steady state ideals. The latter states that if the
kernel of the stoichiometric matrix has a basis with a particular
property then the steady state ideal is binomial. Conradi and Kahle
converted this into a sufficient condition for an arbitrary homogeneous
ideal $I$ generated by a set $F$ of polynomials of fixed degree. They
proved that $I$ is binomial if and only if the reduced row echelon
form of the coefficient matrix of $F$ has at most two non-zero
elements in each row. This leads to Algorithm~\ref{alg:CK}, which
is incremental in the degrees of the generators.
\begin{algorithm}
\caption{(Conradi and Kahle, 2015)}
\label{alg:CK}
\begin{algorithmic}[1]
\REQUIRE Homogeneous polynomials
$f_1,\ldots,f_s\in\KK[X]$, where $\KK$ is a field.\\
\ENSURE \emph{Yes} if the ideal $\langle f_1 , \ldots, f_s \rangle$
is binomial. \emph{No} otherwise.
\STATE Let $B:=\emptyset, R:=\KK[x_1,\ldots,x_n]$ and
$F:=\{f_1,\ldots,f_s\}$.
\WHILE {$F \ne \emptyset$}
\STATE Let $F_{\min}$ be the set of elements of minimal degree in
$F$.
\STATE $F := F \setminus F_{\min}$.
\STATE Compute the reduced row echelon form $A$ of the
coefficient matrix of $F_{\min}$.
\IF{ $A$ has a row with three or more non-zero entries}
\RETURN \emph{No} and stop
\ENDIF
\STATE Let $M$ be the vector of monomials in $F_{\min}$.
\STATE Let $B'$ be the set of entries of $AM$.
\STATE $B := B \cup B'$.
\STATE $R := \KK[x_1, \ldots,x_n]/\langle B \rangle$.
\STATE Redefine $F$ as the image of $F$ in $R$.
\ENDWHILE
\RETURN \emph{Yes}.
\end{algorithmic}
\end{algorithm}
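As a concrete illustration, the core linear-algebra test of the algorithm (steps 5 to 7) can be sketched in plain Python with exact rational arithmetic. The function names and the restriction to a single fixed-degree batch of generators are our own simplifications; the degree-incremental loop and the reduction modulo $\langle B \rangle$ (steps 12 and 13) are omitted.

```python
from fractions import Fraction

def rref(matrix):
    """Reduced row echelon form over the rationals, via Gaussian
    elimination with exact Fraction arithmetic."""
    A = [[Fraction(v) for v in row] for row in matrix]
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # find a pivot in this column at or below pivot_row
        pr = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pr is None:
            continue  # no pivot in this column
        A[pivot_row], A[pr] = A[pr], A[pivot_row]
        piv = A[pivot_row][col]
        A[pivot_row] = [v / piv for v in A[pivot_row]]
        for r in range(rows):
            if r != pivot_row and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

def binomiality_certificate(coeff_matrix):
    """Steps 5-7: the fixed-degree generators pass the binomiality test
    iff every row of the RREF of their coefficient matrix has at most
    two non-zero entries (each such row encodes a binomial via the
    monomial vector M)."""
    return all(sum(1 for v in row if v != 0) <= 2 for row in rref(coeff_matrix))
```

For instance, the generators $x^2 - y^2$ and $xy - z^2$ (coefficient rows over the monomials $x^2, xy, y^2, z^2$) pass the test, whereas $x^2 + xy + y^2$ yields a row with three non-zero entries and fails.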
Now we analyze the complexity of Algorithm~\ref{alg:CK}.
\begin{itemize}
\item \textbf{Steps $3$ and $4$.} These can be ignored.
\item \textbf{Step $5$.} Let $t$ denote the number of distinct
monomials in
$F_{\min}$ and $m:=\max(s,t)$. Computing the reduced row echelon
form of $A$ can be done in at most $m^{\omega}$ steps, where
$\omega$ is the constant in the complexity of matrix multiplication.
\item \textbf{Step $6$.} This needs at most $st$ operations, which is at
  most $m^{\omega}$, so we ignore this term.
\item \textbf{Step $10$.} This can be bounded by $tm$, which itself can
  be bounded by $m^{\omega}$, hence ignored.
\item \textbf{Step $12$.} This can be done via computing a Gr\"obner basis\xspace of
$\langle B \rangle$. Another way to do this, is by means of
Gaussian elimination on the corresponding Macaulay matrix of $B$.
\item \textbf{Step $13$.} is equivalent to reducing $F$ modulo
$\langle B \rangle$, which can be done via reducing $F$ modulo a
Gr\"obner basis of $\langle B \rangle$. Another method to do this
is via Gaussian elimination over the Macaulay matrix of $F \cup B$.
\end{itemize}
Following Mayr and Meyer's work on the complexity of computing Gr\"obner bases\xspace
\cite{mayr_complexity_1982}, the computations in steps $12$ and $13$ of
the algorithm can require exponential space. Conradi and Kahle observe through
experiments that these steps can be performed via graph enumeration
algorithms like breadth first search, which makes it more efficient
than Gr\"obner bases in practice \cite{conradi2015detecting}. In this
article we do not use such graph enumeration algorithms in our
implementations. This is the subject of a future work.
\section{Macaulay2 and Maple experiments}
We consider 20 Biomodels from the BioModels repository
\cite{BioModels2015a,BioModels2018a} whose steady state ideal is
generated by polynomials in $\QQ(k_1,\dots,k_r)[x_1,\dots,x_n]$ where
$k_1$, $\dots$, $k_r$ are the parameters and $x_1,\dots,x_n$ are the
variables corresponding to the species. Our polynomials are taken from
\cite{luders_odebase:_2019}. We use Algorithm~\ref{alg:CK} to test
binomiality of these biomodels. We emphasise that in our computations
we do not assign values to the parameters $k_1,\dots,k_r$ and we work
in $\QQ(k_1,\dots,k_r)[x_1,\dots,x_n]$. We have implemented Algorithm
\ref{alg:CK} in Maple \cite{MapleCK} and also use a slight variant of
the implementation of the algorithm in the Macaulay2 package Binomials
\cite{BinomialsPackage,kahle2012decompositions}. We also test
binomiality of an ideal given by a set of generating polynomials via
computing a Gr\"obner basis\xspace of the ideal, using Corollary 1.2 in
\cite{eisenbud1996binomial}. Our computations are done on a 3.5 GHz
Intel Core i7 with 16 GB RAM. In our computations we used Macaulay2
1.12 and Maple 2019.1.
\begin{table}[htpb]\centering
\scalebox{1.0}{
\begin{tabular}{ | c || c | c | c | c |c|c|c|}
\hline
Biomodel & C-K (M2) & C-K (Maple) & Bin (C-K) & GB (M2) &
GB (Maple) & Bin (GB) \\
\hline
2 & 0.1 & 1 & No && & \\ \hline
9 &0.04&0.2 & Yes & 0.5 &0.001 &Yes\\\hline
28 &0.04&0.1 & No& & & \\\hline
30 &0.5&0.2 &No& & & \\\hline
46 &0.02&0.2 &No&100 &80 &No\\\hline
85 &0.04&0.6 & No& & & \\\hline
86 &0.08&6 &No& & & \\\hline
102 &0.04&0.2 &No& & &\\\hline
103 &0.1&0.9 &No& & & \\\hline
108 &0.01&0.03 &No& & & \\\hline
152 &0.3&400&No& & & \\\hline
153 &0.4&500& No& & & \\\hline
187 &0.02&0.07&No&0.06 & 0.1 &No \\\hline
200 &0.05&1&No& & & \\\hline
205 &0.6&50& No& & & \\\hline
243 &0.04&0.3&No& 0.01 & 0.05 &No \\\hline
262 &0.05&0.02&Yes&0.01 & 0.02& Yes\\\hline
264 &0.7&0.03& Yes&2& 0.04& Yes\\\hline
315 &0.02&0.2& No&& & \\\hline
335 &0.04&0.8&No&30&90 &No \\\hline
\end{tabular}
}
\caption{
\label{fig:computations}
CPU times (in seconds) for Algorithm \ref{alg:CK} and Gr\"obner bases\xspace.}
\end{table}
Table~\ref{fig:computations} shows the results of our computations.
The Biomodel column shows the number of the biomodel. The
columns C-K (M2) and C-K (Maple) show the CPU timings in seconds of
executing Algorithm \ref{alg:CK} in Macaulay2 and Maple,
respectively. In the column Bin (C-K), Yes means that the algorithm
successfully determined that the ideal is binomial, while No means
that the algorithm cannot determine whether the ideal is binomial or
not. The columns GB (M2) and GB (Maple) are the timings of Gr\"obner bases\xspace
computations of the input polynomials in Macaulay2 and Maple,
respectively. The Macaulay2 and Maple timings are rounded to the first
nonzero digit. The Bin (GB) column is blank if the Gr\"obner basis\xspace computation did not
finish after 600 seconds. Yes in the latter column means that Gr\"obner basis\xspace
computation finished and shows that the ideal is binomial, while No
shows that the Gr\"obner basis\xspace computation finished but it detected that the ideal
is not binomial.
None of the ideals in the biomodels that we have studied are
homogeneous. Therefore, in order to use Algorithm~\ref{alg:CK} we need
to homogenise the ideals. Consequently, if the algorithm returns
\emph{No}, we are not able to say whether the ideal is binomial or not
(see~\cite[Section~4]{conradi2015detecting}). As one can see from the
column Bin (C-K), the Conradi-Kahle Algorithm is able to test
binomiality only for Biomodels $9$, $262$ and $264$. If Gr\"obner bases\xspace
computations finish, then they can test binomiality for every
ideal. However, as one can see from the related columns, this is not
the case. Actually in most of the cases, Gr\"obner bases\xspace computations did not
finish within 600 seconds. One can see from the table that whenever
Gr\"obner bases\xspace computations give a yes answer to the binomiality question,
the Conradi-Kahle Algorithm detects this as well. In the Yes
cases, the timings for both methods in both Macaulay2 and Maple are
very close.
Algorithm~\ref{alg:CK} returns the output within at most a few
seconds; however, most of the Gr\"obner bases\xspace computations did not
finish in 600 seconds. The advantage of testing binomiality using Gr\"obner bases\xspace
computations can be seen in Biomodels $46$, $187$, $243$ and $335$,
where Gr\"obner bases\xspace computations---although slower---show that the ideal is not
binomial, but the Conradi-Kahle Algorithm cannot detect this in spite
of its fast execution. With a few exceptions, we do not observe
significant difference between Macaulay2 and Maple computations,
neither for the Conradi-Kahle Algorithm nor for the Gr\"obner bases\xspace
computations. We would like to emphasise that the Conradi-Kahle
Algorithm is complete over homogeneous ideals. However, in this
article we are interested in ideals coming from some biological models
which are inhomogeneous, and this might affect the performance of the
algorithm. In future we will do experiments on homogeneous ideals in
order to better understand the performance of the algorithm in that
case.\\
\noindent\textbf {Acknowledgement.} We would like to thank the anonymous
referees for their comments.
\section{Introduction}
The wide adoption of machine learning algorithms and ubiquitous sensors have together resulted in numerous tightly-coupled `perception-to-control' systems being deployed in the wild. In addition to effectiveness, robustness is an integral characteristic to be considered when building a trustworthy system. Adversarial training aims to increase the robustness of machine learning models by exposing them to perturbations that arise from artificial attacks~\cite{goodfellow2014explaining, madry2017towards} or natural disturbances~\cite{shen2021gradient}. In this work, we focus on the impact of these perturbations on image-based maneuvering and the design of efficient adversarial training for obtaining robust models.
Our test task is `maneuvering through a front-facing camera'--which represents one of the hardest perception-to-control tasks since the input images are taken from partially observable, nondeterministic, dynamic, and continuous environments~\cite{ullman1980against,chen2015deepdriving,codevilla2018end,li2019adaps,shen2021gradient}.
Adversarial training has progressed tremendously in recent years. Inspired by the finding that model robustness can be improved through learning with simulated perturbations~\cite{bhagoji2018enhancing}, effective techniques such as AugMix~\cite{hendrycks2019augmix}, AugMax~\cite{wang2021augmax}, MaxUp~\cite{gong2021maxup}, and AdvBN~\cite{shu2020prepare} have been introduced to improve the performance of language modeling, and image-based classification and segmentation. However, the focus of these studies is not \textit{efficient adversarial training for robust maneuvering}.
AugMix is less effective against gradient-based adversarial attacks due to the lack of sufficiently intense augmentations;
AugMax, based on AugMix, is less efficient because it uses a gradient-based adversarial training procedure, a limitation it shares with AdvBN.
MaxUp requires multiple forward passes for a single data point to determine the most harmful perturbation, which increases computational costs and time proportional to the number of extra passes.
Recent work by Shen et al.~\cite{shen2021gradient} represents the SOTA adversarial training method for achieving robust maneuvering against image perturbations. Their technique adopts the Fr\'{e}chet Inception Distance (FID)~\cite{heusel2017gans} to first determine the intensity levels of the perturbations that minimize model performance. Afterwards, single-perturbation datasets are generated.
Before each round of training, testing is done to select the dataset that minimizes model performance, which is then combined with the clean dataset for training. A fine-tuning step is also introduced to boost model performance on clean images. While effective, FID is used to scrutinize the parameter space of the perturbations, which adds complexity to the approach. The approach also generates a large number of datasets
prior to training, which is not cost effective in terms of computation and storage.
Additional inefficiency and algorithmic complexity are introduced during training as the pre-round selection of datasets requires testing against perturbed datasets, resulting in a large amount of data passing through the model.
Lastly, the choice of distinct intensity levels can limit the model generalizability and hence robust efficacy.
We aim to improve on the \textit{efficiency}, \textit{robustness}, and \textit{algorithm simplicity} of the work by Shen et al.
Figure~\ref{fig:pipeline} illustrates our approach where we divide a steering angle prediction model into an encoder and a regression head. Then, a decoder is attached to the encoder to form a denoising autoencoder (DAE) (the encoder/regression head pair remains the prediction model).
The insight for using the DAE alongside the prediction model is motivated by the idea that \textit{prediction on clean data is easier than on perturbed data}.
Subsequently, the full model -- the DAE and the prediction model -- is jointly learnt: when perturbed images are forward passed, the reconstruction loss is added with the regression loss, enabling the encoder to simultaneously improve on `steering' and `denoising sensor input.'
Our approach enjoys efficiency as the additional computational cost stems only from passing the intermediate features through the decoder.
Algorithmic complexity is kept simple as perturbations are randomly sampled within a moving range that is determined by a linear curriculum learning~\cite{bengio2009curriculum} scheme.
The FID is used only minimally to determine the maximum intensity value of each perturbation.
The model generalizability and robustness are improved because more of the perturbation parameter space is explored, and because `denoising sensor input' allows `steering' to be learnt from the denoised, inner representations of the training data.
We test AutoJoin{} on four real-world driving datasets: Honda~\cite{ramanishka2018toward}, Waymo~\cite{sun2020scalability}, Audi~\cite{geyer2020a2d2}, and SullyChen~\cite{chen2017sully}, totaling over 5M clean and perturbed images.
The results show that our approach achieves \textit{the best performance on the steering task while being the most efficient.}
AutoJoin{} outperforms the recent SOTA method by Shen et al. by up to a 20\% accuracy increase and a 43\% error decrease on the Nvidia~\cite{bojarski2016end} architecture. We also test on ResNet-50~\cite{he2016deep} where AutoJoin{} achieves the best overall `steering' accuracy and decreases error by up to 44\% compared to other adversarial training techniques.
Furthermore, we test on gradient-based white-box transfer attacks where AutoJoin{} outperforms every SOTA method tested.
Lastly, AutoJoin{} enjoys efficiency: it saves 21\% per epoch time compared to the next fastest, AugMix~\cite{hendrycks2019augmix}, and saves 83\% per epoch time and 90\% training data compared to the method by Shen et al~\cite{shen2021gradient}. All code and datasets will be made publicly available.
\section{Related Work}
In this section, we first introduce techniques for improving model robustness against simulated image perturbations. Then, we briefly discuss studies that use a denoising autoencoder (DAE) to improve model robustness and studies that use autoencoders to improve the performance of driving.
So far, most adversarial training techniques against image perturbations have focused on image classification. To mention some notable examples, AugMix~\cite{hendrycks2019augmix} is a technique that enhances model robustness and generalizability by layering randomly sampled augmentations together.
AugMax~\cite{wang2021augmax}, a derivation of AugMix, trains on AugMix-generated images and their gradient-based adversarial variants, which introduces additional computation overhead.
MaxUp~\cite{gong2021maxup} generates multiple augmented images stochastically for a single image and trains the model on the perturbed image that minimizes the model's performance. The inefficiency of MaxUp stems from the requirement of multiple passes of a data point through the model for determining the most harmful perturbation.
Lastly, AdvBN~\cite{shu2020prepare} is an adversarial training technique that switches between batch normalization layers based on whether the training data is clean or perturbed. It achieves SOTA performance when used with techniques such as AugMix on ImageNet-C~\cite{hendrycks2019benchmarking}. However, AdvBN uses a gradient-based technique which prevents high efficiency.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/pipeline.eps}
\vspace{-1em}
\caption{The pipeline of AutoJoin{}. The clean data comes from various driving datasets that contain front-facing camera images along with the corresponding steering angles.
The perturbed data is prepared using various base perturbations and their sampled intensity levels.
Next, the steering angle prediction model and denoising autoencoder are jointly learnt to reinforce each other's accuracy.
The resulting steering angle predictions and reconstructed images are then used to calculate the loss to adjust the perturbation intensity levels for continue learning.
}
\label{fig:pipeline}
\vspace{-1em}
\end{figure}
Recently, Shen et al.~\cite{shen2021gradient} has developed a gradient-free adversarial training technique against image perturbations. Their work uses Fr\'{e}chet Inception Distance (FID)~\cite{heusel2017gans} to select the intensity levels of the perturbations. During training, they select the perturbations such that all base perturbations are exploited; and for each perturbation, the intensity that minimizes the current model's performance is adopted. While being the most effective technique among all aforementioned methods, the algorithmic pipeline combined with pre-training dataset generation are not necessarily efficient: 1) an extensive analysis is needed to determine the intensity levels of the perturbations; 2) the data selection process during training requires testing various combinations of the perturbations and their distinct intensity levels; and 3) significant time and storage are required to generate and store the pre-training datasets. In contrast, our approach maintains efficiency: 1) we use only a minimal analysis to determine the most damaging intensity level; 2) no mid-training data selection is performed as we instead ensure every perturbation, at sampled intensities, is used; and 3) perturbed dataset generation is done during training in memory.
DAEs have been used to improve upon model robustness on various tasks~\cite{roy2018robust, xiong2022robust, aspandi2019robust}. Wang et al.~\cite{wang2020end} use an autoencoder to improve the accuracy of steering angle prediction by removing various roadside distractions such as trees or bushes.
However, their focus is not robustness against perturbed images as only clean images are used in training.
DriveGuard~\cite{papachristodoulou2021driveguard} explores different autoencoder architectures on adversarially degraded images that affect semantic segmentation rather than the steering task. They show that autoencoders can be used to enhance the quality of the degraded images, thus improving overall task performance.
To the best of our knowledge, our work is the first to adopt DAE and joint learning to improve model robustness against perturbed images on the steering task.
\section{Methodology}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/sample.eps}
\vspace{-1em}
\caption{Sample images with perturbations for the six test categories. A column represents a single image that is either clean or perturbed by one of the five perturbation categories.
Single images are perturbed by only one of the perturbations outlined in Section~\ref{sec:perturbation}. Unseen images contain corruptions from ImageNet-C~\cite{hendrycks2019benchmarking}. Combined images have multiple perturbations overlaid; for example, the second column image has G, noise, and blur as the most prominent perturbations. FGSM and PGD adversarial examples are also shown at increasing intensities. The visual differences are not salient, which preserves the potency of the gradient-based adversarial attacks.
}
\label{fig:sample_imgs}
\vspace{-1.5em}
\end{figure}
In this section, we discuss our framework in detail. The pipeline is shown in Figure~\ref{fig:pipeline}. We use four driving datasets in this work, namely Honda~\cite{ramanishka2018toward}, Waymo~\cite{sun2020scalability}, A2D2~\cite{geyer2020a2d2}, and SullyChen~\cite{chen2017sully}.
These datasets contain clean images and their corresponding steering angles.
During training, each image is perturbed by selecting a perturbation from the pre-determined perturbation set at a sampled intensity level (see Sec.~\ref{sec:perturbation}).
The perturbed images are then passed through the encoder to the decoder and the regression head for joint learning (see Sec.~\ref{sec:joint_learning}). The testing is done on the learnt prediction model using datasets of both gradient-free perturbations and gradient-based attacks.
\subsection{Image Perturbations and Intensity Levels}
\label{sec:perturbation}
For a fair comparison, we use the same base perturbations as of Shen et al.~\cite{shen2021gradient}.
To be concrete, we perturb images on RGB color values and HSV saturation/brightness values. These channels are perturbed individually in two directions, lighter or darker, according to a linear model: $v'_c = \alpha(a_c || b_c) + (1-\alpha)v_c$, where $v'_c$ is the perturbed pixel value, $\alpha$ is the intensity level, $a_c$ is the channel value's lower bound, $b_c$ is the channel value's upper bound, and $v_c$ is the original pixel value. For the darker direction, $a_c$ is used, while $b_c$ is used for lighter. The default values for $a_c$ and $b_c$ are zero and 255, respectively with two exceptions: $a_c$ is set to 10 for the V channel to exclude a completely black image, and $b_c$ is set to 179 for the H channel according to its definition. We also adopt Gaussian noise, Gaussian blur, and radial distortion and apply them to the images to simulate natural corruptions.
The Gaussian noise and Gaussian blur perturbations are parameterized by the standard deviations of the noise distribution and the blur kernel, respectively.
We show sample perturbed images in Figure~\ref{fig:sample_imgs}.
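A minimal sketch of the linear channel perturbation model above; the function name and the NumPy formulation are our own, and the paper applies this per channel to the RGB/HSV representations of the image.

```python
import numpy as np

def perturb_channel(v, alpha, direction, a_c=0.0, b_c=255.0):
    """Apply v' = alpha * (a_c or b_c) + (1 - alpha) * v to a channel.

    direction='dark' pulls pixel values toward the lower bound a_c,
    direction='light' toward the upper bound b_c; alpha in [0, 1) is
    the perturbation intensity (alpha = 0 leaves the channel unchanged)."""
    target = a_c if direction == 'dark' else b_c
    v = np.asarray(v, dtype=np.float64)
    return alpha * target + (1.0 - alpha) * v
```

Matching the bounds stated above, one would pass $b_c = 179$ for the H channel and $a_c = 10$ for the V channel in the dark direction.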
In addition to the nine base perturbations, the channel perturbations (i.e., R, G, B, H, S, V) are further discretized into their lighter or darker components such that if $p$ is a channel perturbation, then it is transformed into $p_{light}$ and $p_{dark}$. As a result, the perturbation set contains 15 elements. During learning, we expose the model to all 15 perturbations (see Algorithm~\ref{alg:autojoin}), aiming to improve its generalizability and robustness.
Note that our approach is trained only with \textit{images that have single perturbations} applied to them.
The intensity level of a perturbation is sampled within the range $[0,c)$. The minimum 0 represents no perturbation, and $c$ is the current maximum intensity. The range is upper-bounded by $c_{max}$, a value determined through FID. We adopt the same upper-bounds as of Shen et al.~\cite{shen2021gradient} to ensure comparable experiments. For ease of use, we scale $[0,c_{max})$ to $[0,1)$ for each perturbation.
During training, after each epoch, $c$ is increased by 0.1, provided the model loss has decreased compared to previous epochs. The entire training process begins on clean images ($[0,0)$). Compared to Shen et al., our approach uses FID in a minimal fashion and allows the model to explore the entire parameter space of a perturbation (rather than distinct intensity levels).
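The moving intensity range can be sketched as a small curriculum helper. This is a sketch under our reading of the scheme; the class name and the cap at 1.0 (the scaled $c_{max}$) are our own choices.

```python
import random

class IntensityCurriculum:
    """Linear curriculum: intensities are sampled uniformly from [0, c),
    and c grows by 0.1 (capped at 1.0) whenever the loss improves."""
    def __init__(self, step=0.1, c_max=1.0):
        self.c = 0.0          # training starts on clean images
        self.step = step
        self.c_max = c_max
        self.best_loss = float('inf')

    def sample(self):
        # c == 0 corresponds to no perturbation
        return random.uniform(0.0, self.c) if self.c > 0 else 0.0

    def end_epoch(self, loss):
        # increase the difficulty only if the model improved this epoch
        if loss < self.best_loss:
            self.best_loss = loss
            self.c = min(self.c + self.step, self.c_max)
```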
\subsection{Joint Learning of Denoising Autoencoder and Steering Angle Prediction Model}
\label{sec:joint_learning}
Next, we detail our joint learning procedure. Before training, the prediction model is split into two components: an encoder and a regression head. The decoder is then attached to the encoder alongside the regression head creating two models: the original prediction model and the DAE.
\begin{algorithm}
\caption{AutoJoin{}}
\begin{algorithmic}
\State{\textbf{input: } training batch (clean images) $\{x_{i}\}_n$, encoder $e$, decoder $d$, regression model $p$, perturbations $\mathcal{M}$, curriculum bound $c$}
\For{\textbf{each }epoch}
\For{\textbf{each }$i \in$ {1,...,$n$}}
\State Select perturbation $op$ = $\mathcal{M}$[$i$ mod $len(\mathcal{M})$]
\State Randomly sample intensity level $l$ from [0, $c$)
\State $y_{i}$ = $op$($x_{i}$, $l$) // perturb a clean image
\State $z_{i} = e(y_{i})$ // obtain the latent representation
\State $x'_{i} = d(z_{i})$ // reconstruct an image from the latent representation
\State $a_{p} = p(z_{i})$ // predict a steering angle using the latent representation
\If{$i$ \% $len(\mathcal{M})$ = 0}
\State Shuffle $\mathcal{M}$ // randomize order of perturbations
\EndIf
\EndFor
\State Calculate $\mathcal{L}$ using Eq.~\ref{loss_function}
\If{$\mathcal{L}$ improves}
\State Increase $c$ by 0.1 // increase the curriculum's difficulty
\EndIf
\State Update $e$, $d$, and $p$ to minimize $\mathcal{L}$
\EndFor
\State \textbf{return} $e$ and $p$ // for downstream steering angle regression
\end{algorithmic}
\label{alg:autojoin}
\end{algorithm}
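The perturbation selection and periodic reshuffling in the loop above can be sketched as follows; the helper name is ours, and in AutoJoin{} the schedule is applied on the fly rather than precomputed.

```python
import random

def perturbation_schedule(num_images, perturbations):
    """Cycle through all perturbations so each is used equally often,
    reshuffling the set after every full pass to avoid a fixed pattern."""
    ops = list(perturbations)
    schedule = []
    for i in range(num_images):
        if i % len(ops) == 0 and i > 0:
            random.shuffle(ops)
        schedule.append(ops[i % len(ops)])
    return schedule
```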
The two models are trained simultaneously. Specifically, the prediction model learns how to maneuver given the perturbed sensor input, while the DAE learns how to denoise the perturbed sensor input. As the encoder is shared, both models train its latent representations. This can result in positive transfer between the tasks for two reasons. First, the DAE trains the latent representations to be the denoised versions of the perturbed images. This means the regression head trains on denoised representations rather than noisy representations that may deteriorate the task performance. Second, the prediction model trains the encoder's representations for better task performance.
As the DAE uses these representations, it improves the reconstruction and the downstream task's performance.
Our approach is formally described in Algorithm~\ref{alg:autojoin}. For a clean image $x_i$, a perturbation and intensity $l \in [0,c)$ are sampled. The augmented image $y_i$ is a function of the two and is passed through the encoder $e(\cdot)$ resulting in the latent representation $z_i$. Next, $z_i$ is passed through both the decoder $d(\cdot)$ and the regression model $p(\cdot)$ where the results are the reconstruction $x'_i$ and steering angle $a_{p_{i}}$, respectively. Every 15 images, the perturbation set is randomized to prevent overfitting the images to a pattern of perturbations.
For the DAE, the standard $\ell_2$ loss is used by comparing $x'_i$ to $x_i$. For the regression loss, $\ell_1$ is used between $a_{p_{i}}$ and $a_{t_{i}}$, where the latter is the ground truth angle. The two losses are combined to form the loss of the joint learning:
\vspace{-1.5em}
\begin{equation}
\mathcal{L} = \lambda_1\ell_2 \left(\mathbf{x'_i},\mathbf{x_i}\right)+\lambda_2 \ell_1 \left(\mathbf{a_{p_{i}}},\mathbf{a_{t_{i}}}\right).
\label{loss_function}
\end{equation}
\vspace{-1em}
In Equation~\ref{loss_function}, the two terms are weighted by $\lambda_1$ and $\lambda_2$. For the experiments on the Waymo~\cite{sun2020scalability} dataset, we find setting $\lambda_1$ to 10 and $\lambda_2$ to 1 allows better performance (emphasizing reconstructions). For the other three datasets, $\lambda_1$ is set to 1 and $\lambda_2$ is set to 10 to ensure the main focus of the joint learning is `steering.' Once training is finished, the decoder is detached, leaving the original (now trained) prediction model for testing through datasets in six categories (see Sec.~\ref{sec:exp_setup} for details).
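A NumPy sketch of the joint loss, reading the $\ell_2$ term as mean squared reconstruction error and the $\ell_1$ term as mean absolute steering error; the function name and the mean reduction are our assumptions.

```python
import numpy as np

def joint_loss(x_rec, x_clean, a_pred, a_true, lam1=1.0, lam2=10.0):
    """Weighted sum of the reconstruction (l2) and steering (l1) terms.

    The defaults lam1=1, lam2=10 follow the setting used for the Honda,
    A2D2, and SullyChen datasets; Waymo swaps the weights (10 and 1)."""
    l2 = np.mean((np.asarray(x_rec, float) - np.asarray(x_clean, float)) ** 2)
    l1 = np.mean(np.abs(np.asarray(a_pred, float) - np.asarray(a_true, float)))
    return lam1 * l2 + lam2 * l1
```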
\section{Experiments and Results}
\label{sec:results}
In this section, we first introduce the experiment setups and then discuss our results in detail.
\subsection{Experiment Setup}
\label{sec:exp_setup}
\textbf{Adversarial training techniques.} We compare our approach to five other approaches: the SOTA technique by Shen et al.~\cite{shen2021gradient} (referred to as Shen hereafter), AugMix~\cite{hendrycks2019augmix}, MaxUp~\cite{gong2021maxup}, AdvBN~\cite{shu2020prepare}, and AugMax~\cite{wang2021augmax}.
\textbf{Network architectures.} We test on two model architectures: the Nvidia model~\cite{bojarski2016end} and ResNet-50~\cite{he2016deep}.
For AutoJoin{}, we empirically choose to split the Nvidia architecture where the encoder is the first seven layers and the regression head is the last two layers; for ResNet-50, the encoder is the first 49 layers, and the regression head is the last fully-connected layer. The decoder is chosen to be a simple five-layer network with ReLU activations between each layer and a Sigmoid activation for the final layer.
\textbf{Driving datasets and perturbed datasets.}
We use four driving datasets in our experiments: Honda~\cite{ramanishka2018toward}, Waymo~\cite{sun2020scalability}, A2D2~\cite{geyer2020a2d2}, and SullyChen~\cite{chen2017sully}.
Based on these four datasets, we generate test datasets that contain more than 5M images in six categories.
Four of them are gradient-free, named Clean, Single, Combined, Unseen, and are produced according to Shen to ensure fair comparisons. The other two, FGSM and PGD, are gradient-based and are used to test our approach's adversarial transferability (sample images are shown in Figure~\ref{fig:sample_imgs}):
\begin{itemize}
\item Clean: the original driving datasets Honda, Waymo, A2D2, and SullyChen.
\item Single: images with a single perturbation applied at five intensity levels from Shen over the 15 perturbations introduced in Sec.~\ref{sec:perturbation}. This results in 75 test cases in total.
\item Combined: images with multiple perturbations at the intensity levels drawn from Shen. There are six combined test sets in total.
\item Unseen: images perturbed with simulated effects from ImageNet-C~\cite{hendrycks2019benchmarking}: fog, snow, rain, frost, motion blur, zoom blur, and compression. Each effect is perturbed at five intensity levels for a total of 35 unseen test cases.
\item FGSM: adversarial images generated using FGSM~\cite{goodfellow2014explaining} with either the Nvidia model or ResNet-50 trained only on clean data. FGSM generates adversarial examples in a single step by maximizing the gradient of the loss function with respect to the images. We generate the examples at five intensity levels, which are bounded by $L_{\infty}$ norm attack step sizes at $\epsilon = 0.01, 0.025, 0.05, 0.075$ and $0.1$.
\item PGD: adversarial images generated using PGD~\cite{madry2017towards} with either the Nvidia model or ResNet-50 trained only on clean data. PGD extends FGSM by taking iterative steps to produce an adversarial example at the cost of more computation. We generate these examples at five intensity levels with the same maximum bounds as FGSM.
\end{itemize}
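To make the construction of the gradient-based test sets concrete, the sketch below implements the one-step FGSM update $x_{adv} = \mathrm{clip}(x + \epsilon\,\mathrm{sign}(\nabla_x L))$. For illustration only, it uses a toy linear steering model with an analytic input gradient; the model, data, and variable names are our own, not the networks used in the experiments.

```python
# Hedged sketch of the one-step FGSM update used to build the FGSM test
# sets: x_adv = clip(x + eps * sign(dL/dx)). For illustration we use a
# toy linear "steering" model a_p = w . x with squared error, whose
# input gradient is analytic; all names and values here are our own.

def fgsm(x, w, a_t, eps):
    a_p = sum(wi * xi for wi, xi in zip(w, x))       # model prediction
    grad = [2.0 * (a_p - a_t) * wi for wi in w]      # d(a_p - a_t)^2 / dx
    sign = lambda v: (v > 0) - (v < 0)
    # one gradient-ascent step on the loss, clipped to the pixel range
    return [min(1.0, max(0.0, xi + eps * sign(gi)))
            for xi, gi in zip(x, grad)]

x_adv = fgsm([0.5, 0.5], w=[1.0, -1.0], a_t=0.5, eps=0.1)
print(x_adv)  # the perturbation pushes the prediction away from a_t
```

PGD would repeat this update over several smaller steps, re-projecting onto the $\epsilon$-ball after each one.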
\textbf{Performance metrics.} We evaluate our approach using two metrics: mean accuracy (MA) and mean absolute error (MAE). MA is defined as $\sum_{\tau\in\mathcal{T}}acc_{\tau}/|\mathcal{T}|$ where $acc_{\tau} = count(|a_{p} - a_{t}| < \tau)/n$, $n$ denotes the number of test cases, $\mathcal{T} =$ \{1.5, 3.0, 7.5, 15.0, 30.0, 75.0\}, and $a_{p}$ and $a_{t}$ are the angle prediction and angle ground truth, respectively.
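For concreteness, the MA metric defined above can be sketched as follows (variable names are ours; the thresholds are those listed in $\mathcal{T}$):

```python
# Sketch of the mean accuracy (MA) metric: the average, over the
# thresholds in T, of the fraction of test cases whose absolute
# steering-angle error falls below each threshold.

def mean_accuracy(pred, truth, thresholds=(1.5, 3.0, 7.5, 15.0, 30.0, 75.0)):
    n = len(pred)
    accs = [sum(abs(p - t) < tau for p, t in zip(pred, truth)) / n
            for tau in thresholds]
    return sum(accs) / len(thresholds)

# All three predictions are within 1.5 degrees of the ground truth,
# so every threshold is satisfied and MA = 1.0.
print(mean_accuracy([10.0, -5.0, 0.5], [10.5, -5.2, 0.0]))  # -> 1.0
```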
\textbf{Computing platforms and training parameters.} We conduct experiments using Intel Core i7-11700k CPU with 32G RAM and Nvidia RTX 3080 GPU.
For training, we use the Adam optimizer~\cite{kingma2014adam}, a batch size of 124, and a learning rate of $10^{-4}$. All models are trained for 500 epochs.
\subsection{Results}
We evaluate our approach's effectiveness based on the `steering' performance first under gradient-free perturbations and clean images (Sec.~\ref{sec:effectiveness-f}); and then under gradient-based attacks (Sec.~\ref{sec:effectiveness-g}). The efficiency of our approach compared to other adversarial training techniques is also studied (Sec.~\ref{sec:efficiency}).
As each of the six test categories has multiple test cases, the results reported are the averages over all test cases of a given test category.
\subsubsection{Effectiveness Against Gradient-free Perturbations}
\label{sec:effectiveness-f}
\begin{table}
\caption{Comparison results on the SullyChen dataset with the Nvidia model. The `Standard' model refers to the Nvidia model that is trained only using the original SullyChen dataset (clean images). AutoJoin{} outperforms all other techniques in all test categories (highest MA and lowest MAE), and is one of the few that can improve the performance on clean data over Standard.}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Combined} & \multicolumn{2}{c}{Unseen} \\
\midrule
& MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE \\
\midrule
Standard & 86.19 & 3.35 & 66.19 & 11.33 & 38.50 & 25.03 & 67.38 & 10.94 \\
AdvBN & 79.51 & 5.06 & 69.07 & 9.18 & 44.89 & 20.36 & 67.97 & 9.78 \\
AugMix & 86.24 & 3.21 & 79.46 & 5.21 & 49.94 & 17.24 & 74.73 & 7.10 \\
AugMax & 85.31 & 3.43 & 81.23 & 4.58 & 51.50 & 17.25 & 76.45 & 6.35 \\
MaxUp & 79.15 & 4.40 & 77.40 & 5.01 & 61.72 & 12.21 & 73.46 & 6.71 \\
Shen & 87.35 & 3.08 & 84.71 & 3.76 & 53.74 & 16.27 & 78.49 & 6.01 \\
\midrule
AutoJoin{} (ours) & \textbf{89.46} & \textbf{2.86} & \textbf{86.90} & \textbf{3.53} & \textbf{64.67} & \textbf{11.21} & \textbf{81.86} & \textbf{5.12} \\
\bottomrule
\end{tabular}
\label{results_all_sully_nvidia}
\end{table}
\begin{table}
\caption{Comparison results on the A2D2 dataset with the Nvidia model. The `Standard' model refers to the Nvidia model that is trained only using the original A2D2 dataset (clean images). AutoJoin{} significantly outperforms every other approach in all test categories (highest MA and lowest MAE). For example, AutoJoin{} improves performance on clean data by a wide margin of 4.20\% MA compared to next best approach, Shen.}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Combined} & \multicolumn{2}{c}{Unseen} \\
\midrule
& MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE \\
\midrule
Standard & 78.00 & 8.07 & 61.51 & 21.42 & 43.05 & 28.55 & 59.41 & 26.72 \\
AdvBN & 76.59 & 8.56 & 67.58 & 12.41 & 43.75 & 24.27 & 70.64 & 11.76 \\
AugMix & 78.04 & 8.16 & 73.94 & 10.02 & 58.22 & 20.66 & 71.54 & 11.44 \\
AugMax & 77.21 & 8.79 & 75.14 & 10.43 & 60.81 & 23.87 & 72.74 & 11.87 \\
MaxUp & 78.93 & 8.17 & 78.36 & 8.42 & 71.56 & 13.22 & 76.78 & 9.24 \\
Shen & 80.50 & 7.74 & 78.84 & 8.32 & 67.40 & 15.06 & 75.30 & 9.99 \\
\midrule
AutoJoin{} (ours) & \textbf{84.70} & \textbf{6.79} & \textbf{83.70} & \textbf{7.07} & \textbf{79.12} & \textbf{8.58} & \textbf{80.31} & \textbf{8.23} \\
\bottomrule
\end{tabular}
\label{results_all_audi_nvidia}
\end{table}
Table~\ref{results_all_sully_nvidia} shows the comparison results on the SullyChen dataset with the Nvidia architecture. AutoJoin{} outperforms every other adversarial technique across all test categories in both performance metrics (i.e., highest MAs and lowest MAEs). In particular, AutoJoin{} improves accuracy on Clean by 3.3\% MA and 0.58 MAE compared to the standard model (which is trained solely on clean data). This result is significant as clean performance is the most difficult to improve, yet we achieve $\sim3x$ the improvement on clean data compared to the SOTA performance by Shen. On the perturbed datasets, AutoJoin{} achieves 64.67\% MA on Combined -- a 20\% accuracy increase compared to Shen, 11.21 MAE on Combined -- a 31\% error decrease compared to Shen, and 5.12 MAE on Unseen -- another 15\% error decrease compared to Shen.
Table~\ref{results_all_audi_nvidia} shows the comparison results on the A2D2 dataset with the Nvidia architecture. AutoJoin{} again outperforms all other techniques. To list a few notable improvements over Shen: 6.7\% MA improvement on Clean to the standard model ($\sim3x$ better); 11.72\% MA improvement (17\% accuracy increase) and 6.48 MAE drop (43\% error decrease) on Combined; 5.01\% MA improvement (7\% accuracy increase) and 1.76 MAE drop (18\% error decrease) on Unseen.
Switching from the Nvidia model to ResNet-50, Table~\ref{results_honda_rn50} shows the results on the Honda dataset.
Here, we only compare to AugMix and Shen, because Shen is the SOTA and AugMix was the main approach Shen compared against that can also improve both clean and robust performance on driving datasets.
AutoJoin{} in general outperforms both AugMix and Shen on the perturbed datasets. Specifically, AutoJoin{} achieves the highest MAs across all perturbed categories.
AutoJoin{} also drops the MAE to 1.98 on Single, achieving 44\% improvement over AugMix and 21\% improvement over Shen; and drops the MAE to 2.89 on Unseen, achieving 33\% improvement over AugMix and 41\% improvement over Shen. On this particular dataset, Shen outperforms us on Clean by small margins due to its additional fine-tuning step on clean data. Nevertheless, AutoJoin{} still manages to improve upon the standard model and AugMix on Clean by larger margins.
During testing, we find Waymo to be unique in that the model benefits more from learning the inner representations of the denoised images. Therefore, we slightly modify the procedure of Algorithm~\ref{alg:autojoin} after perturbing the batch as follows: 1) one-tenth of the perturbed batch is sampled; 2) for each single perturbed image sampled, two other perturbed images are sampled; and 3) the three images are averaged to form a `fused' image. We term this procedure AutoJoin{}-Fuse.
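A minimal sketch of this fusing step (our own variable names; images are flattened pixel lists, and the sampling uses a fixed seed for reproducibility):

```python
# Hedged sketch of the AutoJoin-Fuse step: sample one-tenth of the
# perturbed batch and replace each sampled image with the average of
# itself and two other perturbed images from the batch.
import random

def fuse_batch(perturbed, seed=0):
    rng = random.Random(seed)
    batch = [img[:] for img in perturbed]
    k = max(1, len(batch) // 10)                 # one-tenth of the batch
    for i in rng.sample(range(len(batch)), k):
        j, m = rng.sample([x for x in range(len(batch)) if x != i], 2)
        batch[i] = [(a + b + c) / 3.0            # the 'fused' image
                    for a, b, c in zip(batch[i], batch[j], batch[m])]
    return batch

fused = fuse_batch([[float(i), float(i)] for i in range(10)])
```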
Table~\ref{results_waymo_rn50} shows the results on the Waymo dataset using ResNet-50. AutoJoin{}-Fuse makes a noticeable impact by outperforming Shen on every test category except Combined MAE, where it trails by 0.54.
We also improve clean performance over the standard model by 3.24\% MA and 1.93 MAE.
Our approach also outperforms AugMix by margins up to 7.14\% MA and 3.41 MAE. These differences are significant as for all four datasets, the well-performing robust techniques tend to operate within small ranges (e.g., within 1\% MA or 1 MAE).
\begin{table}
\caption{Results of comparing AutoJoin{} to AugMix and Shen on the Honda dataset using ResNet-50. The `Standard' model refers to ResNet-50 trained only with the clean images in the Honda dataset. AutoJoin{} achieves the best overall robust performance; however, Shen's second stage of fine-tuning on solely clean data allows them to outperform ours with small margins.
}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Combined} & \multicolumn{2}{c}{Unseen} \\
\midrule
& MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE \\
\midrule
Standard & 92.87 & 1.63 & 73.12 & 11.86 & 55.01 & 22.73 & 69.92 & 13.65 \\
AugMix & 90.57 & 1.97 & 86.82 & 3.53 & 64.01 & 15.32 & 84.34 & 4.31 \\
Shen & \textbf{97.07} & \textbf{0.93} & 93.08 & 2.52 & 70.53 & \textbf{13.20} & 87.91 & 4.94 \\
\midrule
AutoJoin{} (ours) & 96.46 & 1.12 & \textbf{94.58} & \textbf{1.98} & \textbf{70.70} & 14.56 & \textbf{91.92} & \textbf{2.89} \\
\bottomrule
\end{tabular}
\label{results_honda_rn50}
\end{table}
\begin{table}
\caption{Results of comparing our approaches (AutoJoin{} and AutoJoin{}-Fuse) to AugMix and Shen on the Waymo dataset using ResNet-50. The `Standard' model refers to ResNet-50 trained only with the clean images in the Waymo dataset. Our techniques not only improve the clean performance the most over Standard, but also achieve the best overall robust performance on Single, Combined, and Unseen with margins up to 7.14\% MA and 3.41 MAE.}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Combined} & \multicolumn{2}{c}{Unseen} \\
\midrule
& MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE \\
\midrule
Standard & 61.83 & 19.53 & 55.99 & 31.78 & 45.66 & 55.81 & 57.74 & 24.22 \\
AugMix & 61.74 & 19.19 & 60.83 & 20.10 & 56.34 & 24.23 & 59.78 & 21.75 \\
Shen & 64.77 & 18.01 & 64.07 & 19.77 & 61.67 & \textbf{20.28} & 63.93 & 18.77 \\
\midrule
AutoJoin{} & 64.91 & 18.02 & 63.84 & 19.30 & 58.74 & 26.42 & 64.17 & 19.10 \\
AutoJoin{}-Fuse & \textbf{65.07} & \textbf{17.60} & \textbf{64.34} & \textbf{18.49} & \textbf{63.48} & 20.82 & \textbf{65.01} & \textbf{18.17} \\
\bottomrule
\end{tabular}
\label{results_waymo_rn50}
\end{table}
\subsubsection{Effectiveness Against Gradient-based Attacks}
\label{sec:effectiveness-g}
Although being a gradient-free technique, AutoJoin{} exhibits superb adversarial transferability defense against gradient-based attacks when compared to other techniques.
The evaluation results using the A2D2 dataset with the Nvidia model are shown in Table~\ref{results_transfer_audi_nvidia}: AutoJoin{} outperforms every other approach by a large margin at all intensity levels of FGSM and PGD.
\begin{table}
\caption{Comparison results on gradient-based adversarial examples using the A2D2 dataset and the Nvidia model. The individual columns represent the datasets generated at different intensities. All results are reported in MA (\%). AutoJoin{} demonstrates least adversarial transferability by outperforming every other approach under all intensities of FGSM~\cite{goodfellow2014explaining} and PGD~\cite{madry2017towards}.}
\centering
\begin{tabular}{lccccc}
\toprule
& \multicolumn{5}{c}{FGSM} \\
\midrule
& 0.01 & 0.025 & 0.05 & 0.075 & 0.1 \\
\midrule
Standard & 73.91 & 65.42 & 57.70 & 53.27 & 50.12 \\
AdvBN & 76.34 & 76.14 & 75.50 & 74.25 & 72.75 \\
AugMix & 77.66 & 76.69 & 73.61 & 69.74 & 66.38 \\
AugMax & 77.04 & 76.94 & 76.18 & 75.10 & 73.91 \\
MaxUp & 78.71 & 78.47 & 78.10 & 77.42 & 76.71 \\
Shen & 80.10 & 79.83 & 79.02 & 77.94 & 76.98 \\
\midrule
AutoJoin{} & \textbf{84.11} & \textbf{83.83} & \textbf{83.13} & \textbf{82.02} & \textbf{81.14} \\
\bottomrule
\end{tabular}
\begin{tabular}{ccccc}
\toprule
\multicolumn{5}{c}{PGD} \\
\midrule
0.01 & 0.025 & 0.05 & 0.075 & 0.1 \\
\midrule
73.87 & 65.60 & 57.93 & 53.43 & 51.07 \\
76.35 & 76.17 & 75.62 & 74.46 & 72.91 \\
77.65 & 76.75 & 73.74 & 69.75 & 66.40 \\
77.04 & 76.93 & 76.23 & 75.10 & 73.91 \\
78.71 & 78.47 & 78.09 & 77.39 & 76.72 \\
80.09 & 79.79 & 79.02 & 77.93 & 76.94 \\
\midrule
\textbf{84.14} & \textbf{83.84} & \textbf{83.15} & \textbf{81.97} & \textbf{81.09}\\
\bottomrule
\end{tabular}
\label{results_transfer_audi_nvidia}
\end{table}
\subsubsection{Effectiveness of DAE}
\label{sec:design}
\begin{table}
\caption{Results of comparing AutoJoin{} with or without DAE on the SullyChen dataset with the Nvidia model. Standard is the Nvidia model trained on clean SullyChen images.
Using the DAE allows AutoJoin{} to achieve $\sim3x$ the clean-performance gain over Shen; without it, the gain is only $\sim2x$. AutoJoin{} without DAE also performs worse than Shen on Clean and Single MAE.}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Combined} & \multicolumn{2}{c}{Unseen} \\
\midrule
& MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE \\
\midrule
Standard & 86.19 & 3.35 & 66.19 & 11.33 & 38.50 & 25.03 & 67.38 & 10.94 \\
Shen & 87.35 & 3.08 & 84.71 & 3.76 & 53.74 & 16.27 & 78.49 & 6.01 \\
\midrule
Ours w/o DAE & 88.30 & 3.09 & 85.75 & 3.81 & 62.96 & 11.90 & 81.09 & 5.33 \\
Ours (AutoJoin{}) & \textbf{89.46} & \textbf{2.86} & \textbf{86.90} & \textbf{3.53} & \textbf{64.67} & \textbf{11.21} & \textbf{81.86} & \textbf{5.12} \\
\bottomrule
\end{tabular}
\label{results_no_dae_sully}
\end{table}
To evaluate the effectiveness of DAE, we test our approach with and without it. The results are shown in Table~\ref{results_no_dae_sully}. AutoJoin{} without DAE outperforms Shen in several test categories but not on Clean and Single MAE, meaning the perturbations and sampled intensity levels are effective for performance gains. With DAE added, AutoJoin{} outperforms every model, showing the clear advantage of our architecture design.
\subsubsection{Efficiency}
\label{sec:efficiency}
For the comparison on efficiency, we use AugMix and Shen on the Honda and Waymo datasets with ResNet-50 as the baselines.
On the Honda dataset, AutoJoin{} takes 131 seconds per epoch on average, while AugMix takes 164 seconds and Shen takes 785 seconds. Our approach saves 20\% and 83\% per epoch time compared to AugMix and Shen, respectively.
On the Waymo dataset, AugMix takes 179 seconds and Shen takes 857 seconds, while AutoJoin{} takes only 142 seconds -- a 21\% and 83\% time save on AugMix and Shen, respectively.
We also save 90\% of training data per epoch compared to Shen, as they join the original dataset with nine perturbed versions of the dataset during training, whereas AutoJoin{} uses only the amount of data in the original dataset.
\section{Conclusion and Future Work}
\label{sec:conclusion}
We propose AutoJoin{}, a very simple yet effective and efficient gradient-free adversarial technique for robust autonomous maneuvering. AutoJoin{} attaches a decoder to any maneuvering regression model forming a denoising autoencoder within the architecture. This allows the task `denoising sensor input' to support the task `steering' through joint learning such that `steering' can learn from denoised images.
We show that AutoJoin{} can outperform well-known SOTA adversarial techniques on various real-world driving datasets. AutoJoin{} allows for $\sim3x$ clean performance improvement compared to Shen on both the SullyChen~\cite{chen2017sully} and A2D2~\cite{geyer2020a2d2} datasets. AutoJoin{} also increases robust accuracy by 20\% and 17\% compared to Shen on each dataset, respectively. Furthermore, AutoJoin{} achieves the best overall robust accuracy on the larger datasets Honda~\cite{ramanishka2018toward} and Waymo~\cite{sun2020scalability} while decreasing error by up to 44\% and 14\%, respectively.
Facing gradient-based transfer attacks, AutoJoin{} demonstrates the strongest resilience among all techniques compared.
Lastly, AutoJoin{} is the most efficient technique tested: it is faster per epoch than AugMix and saves 83\% per-epoch time and 90\% training data over Shen.
There exist many future research directions to be pursued. First of all, to further improve the framework itself, we are interested in exploring wider perturbation space and more intensity levels to remove any use of FID as well as using other perturbation sets. Second, we would like to integrate AutoJoin{} with other vehicle-centric applications such as trajectory prediction~\cite{Lin2022Attention} and real-time learning and control of autonomous vehicles~\cite{Poudel2022Micro,sheninverse2022}, and study its impact on those applications. Third, we are interested in testing AutoJoin{} in more complex simulated environments~\cite{Wilkie2015Virtual,Li2017CityFlowRecon,Chao2020Survey}, which can be calibrated using mobile sensor data in reflecting real-world traffic conditions~\cite{Li2017CityEstSparseITSM,Li2018CityEstIterIET,Lin2019Compress,Lin2022Network}. Lastly, we would like to combine the use of AutoJoin{} with other adversarial training techniques~\cite{Poudel2021Attack} for large-scale traffic to build resilient intelligent transportation systems.
\section*{Appendix}
\label{sec:app}
\subsection*{A.1 Loss Function}
\label{sec:app_loss_function}
\begin{table}[H]
\caption{Results comparing our approach using different loss functions on the Sully dataset with Nvidia~\cite{bojarski2016end} architecture. The decoder is attached to the 7th layer of the encoder making the regression head two layers for all experiments. We find that using $\ell_1$ instead of $\ell_2$ for the regression loss results in better performance. We find that using $\ell_2$ for reconstruction loss and $\ell_1$ for regression loss results in the overall best performance.}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Combined} & \multicolumn{2}{c}{Unseen} \\
\midrule
& MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE \\
\midrule
$\ell_1$ + $\ell_2$ & 88.30 & 3.09 & 85.75 & 3.81 & 62.96 & 11.90 & 81.09 & 5.33 \\
$\ell_1$ + $\ell_1$ & 88.90 & 2.90 & 86.50 & 3.59 & 63.65 & 11.71 & 81.59 & 5.17 \\
$\ell_2$ + $\ell_2$ & 80.89 & 4.03 & 78.34 & 4.85 & 56.91 & 14.16 & 74.47 & 6.46 \\
$\ell_2$ + $\ell_1$ & 89.23 & 2.89 & \textbf{87.08} & \textbf{3.53} & 61.62 & 12.25 & 81.71 & 5.14 \\
$\ell_2$ + 10$\ell_1$ & \textbf{89.46} & \textbf{2.86} & 86.90 & \textbf{3.53} & \textbf{64.67} & \textbf{11.21} & \textbf{81.86} & \textbf{5.12} \\
\bottomrule
\end{tabular}
\label{results_loss_function_sully}
\end{table}
We evaluate the effects of using different loss function combinations with our approach to see which results in the best performance. Since our approach adds a term to the initial regression loss, different combinations can be made when evaluating $\ell_1$ and $\ell_2$. From Table~\ref{results_loss_function_sully}, we see that using $\ell_2$ for the regression loss results in worse performance than using $\ell_1$. When keeping the regression loss as $\ell_1$, we find that using $\ell_2$ for the reconstruction loss overall performs better than $\ell_1$: it performs better in the Clean, Single, and Unseen categories, but worse in the Combined category. Weighting the regression loss term by 10, however, allows the combination of $\ell_2$ reconstruction loss and $\ell_1$ regression loss to perform the best overall. This loss function was used for the Sully~\cite{chen2017sully}, A2D2~\cite{geyer2020a2d2}, and Honda~\cite{ramanishka2018toward} datasets. Weighting the regression loss also ensures that the `learning to steer' task is the primary focus of learning, with `learning to denoise' used as support.
On the Waymo~\cite{sun2020scalability} dataset with the ResNet-50~\cite{he2016deep} architecture, we find that weighting the reconstruction loss is more beneficial for `learning to steer'. This means that emphasizing learning the denoised inner representations of the perturbed images helps `learning to steer' more than emphasizing `learning to steer' itself, as was done for the above-mentioned datasets.
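A sketch of the best-performing objective in Table~\ref{results_loss_function_sully}, i.e. $\ell_2$ reconstruction loss plus the $\ell_1$ regression loss weighted by 10 (function and variable names are ours):

```python
# Combined objective: l2 reconstruction loss on the decoder output plus
# the l1 steering regression loss, with the regression term weighted by
# w_reg = 10 as in the best-performing row of the table above.

def combined_loss(recon, clean, a_p, a_t, w_reg=10.0):
    n = len(clean)
    l2_recon = sum((r - c) ** 2 for r, c in zip(recon, clean)) / n
    l1_reg = abs(a_p - a_t)
    return l2_recon + w_reg * l1_reg

# reconstruction term = 0.005, regression term = 10 * 0.5 = 5.0
print(combined_loss([0.1, 0.2], [0.0, 0.2], a_p=1.0, a_t=1.5))
```

For the Waymo variant described above, one would instead place the weight on the reconstruction term.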
\subsection*{A.2 Decoder Position}
We evaluate the effects of placing the decoder at different positions in the regression model. For the Nvidia model, attaching the decoder to the 7th layer results in the best performance, although attaching it to the 4th layer yields similar performance. The 7th layer was chosen over the 4th because the conversion of the intermediate features' dimensions to the decoder results in a smaller overall model than if the decoder were attached at the 4th layer.
For the ResNet-50 model, attaching the decoder after the 4th layer and before the final fully-connected layer results in relatively the same performance as attaching the decoder after the 3rd layer. This is most likely due to the size of the ResNet-50 model enabling the encoder portion to store more information within its features compared to the Nvidia model.
\subsection*{A.3 Using the Decoder}
\begin{table}[H]
\caption{Results comparing AutoJoin{} with and without the DAE on the Sully dataset with the Nvidia architecture, showing that the DAE aids learning and yields the best results. The loss function for AutoJoin{} is the one described in the main paper for Sully. Standard (an Nvidia architecture trained on clean Sully images) and Shen results are given for comparison purposes. Attaching the decoder to form a DAE within the architecture provides clear performance benefits: using the DAE allows AutoJoin{} to achieve 3x the clean-performance gain over Shen, whereas without the DAE the gain is only 2x.}
\centering
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{Clean} & \multicolumn{2}{c}{Single} & \multicolumn{2}{c}{Combined} & \multicolumn{2}{c}{Unseen} \\
\midrule
& MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE & MA (\%) & MAE \\
\midrule
Standard & 86.19 & 3.35 & 66.19 & 11.33 & 38.50 & 25.03 & 67.38 & 10.94 \\
Shen & 87.35 & 3.08 & 84.71 & 3.76 & 53.74 & 16.27 & 78.49 & 6.01 \\
AutoJoin{}, no DAE & 88.30 & 3.09 & 85.75 & 3.81 & 62.96 & 11.90 & 81.09 & 5.33 \\
AutoJoin{}, w/ DAE & \textbf{89.46} & \textbf{2.86} & \textbf{86.90} & \textbf{3.53} & \textbf{64.67} & \textbf{11.21} & \textbf{81.86} & \textbf{5.12} \\
\bottomrule
\end{tabular}
\label{results_no_dae_sully_app}
\end{table}
To verify that the denoising autoencoder (DAE) portion of the architecture contributes to the performance of the model, we evaluate the pipeline with and without it. From Table~\ref{results_no_dae_sully_app}, the model that uses the DAE within the architecture outperforms the other models. Standard (an Nvidia model trained on the clean Sully dataset) and Shen results are provided as points of comparison. AutoJoin{} without a DAE still manages to outperform the SOTA Shen in several categories, which means that the discretization of perturbations and sampling of intensities is effective at improving performance. However, AutoJoin{} with the DAE outperforms AutoJoin{} without the DAE in every metric across every category. This means that the addition of the DAE supports the main task of `steering', providing the best performance on clean and perturbed datasets.
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{See Section~\ref{sec:conclusion}}
\item Did you discuss any potential negative societal impacts of your work?
\answerNo{Our approach's goal is to improve the safety of autonomous driving systems.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{Code will be provided upon acceptance with instructions on getting the datasets and running the code.}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{See Section~\ref{sec:exp_setup} and Section~\ref{sec:joint_learning}.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNo{}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{See Section~\ref{sec:exp_setup}.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{See Section~\ref{sec:exp_setup}.}
\item Did you mention the license of the assets?
\answerNo{We cite the sources our data comes from. The licenses can be found on the sources' official websites.}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNo{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNo{We utilize public data/code from their official website or GitHub.}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNo{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section{Introduction}
Coupled oscillator systems are used to model a wide range of physical and biological phenomena, where the dynamics of the underlying complex system is dominated by the interactions of its elements over multiple temporal and spatial scales. The main advantage of using such a modelling approach stems from the fact that it allows one to obtain vital information about the behaviour of complex systems by studying the laws and rules that govern different dynamical regimes, including (de)synchronization, cooperation, stability, etc. For instance, some brain pathologies and cognitive deficiencies, such as Parkinson's disease, Alzheimer's disease, epilepsy, and autism, are known to be associated with the synchronisation of oscillating neural populations \cite{US06,PHT05,SG05}. Phase synchronisation arises when two or more coupled elements start to oscillate with the same frequency; if the coupling strength is quite weak, the amplitudes stay unchanged. However, when the coupling strength becomes stronger, the amplitudes start to interact, which may lead to complete synchronization or amplitude death \cite{CFHBS07,FIE09}. The coupling between elements is often not instantaneous, accounting, for example, for propagation delays or processing times, and these time delays have been shown to play a crucial role in synchronisation as well as in the (de)stabilisation of general coupled oscillator systems \cite{PRK01,JUS10,DVDF10,CHO09,FYDS10,SHFD10}, coupled semiconductor lasers \cite{HFEMM01,FLU09,HIC11}, neural \cite{DHPS09,SCH08} and engineering systems \cite{KBGHW06,KH10}.
In coupled oscillator systems with delayed couplings one of the intriguing effects of time delays is amplitude death \cite{RSJ98,RSJ99}, or death by delay \cite{strogatz98}, when oscillations are suppressed, and the system is driven to a stable equilibrium. The amplitude death phenomenon has been studied both theoretically and experimentally \cite{HFRPO00,TFE00}, and it has been shown that in the presence of time delays, amplitude death can occur even if the frequencies of the individual oscillators are the same. On the contrary, in the non-delayed case, the amplitude death in coupled oscillator systems can only be observed if the frequencies of the individual oscillators are sufficiently different \cite{AEK90,MS90}.
The majority of research on the effects of time delays upon the stability of coupled oscillators has been focussed on systems with one or several constant time delays. This is a valid, though somewhat limiting, assumption for some systems where the time delay is fixed and does not change with time. However, many real-life applications involve time delays which are non-constant \cite{GJU08,GJU10}, or, more crucially, where the exact value of the time delay is not explicitly known. This limitation can be overcome by considering distributed-delay systems, where the time delay is given by an integral with a memory kernel in the form of a prescribed delay distribution function. In population dynamics models, distributed time delays have been used to represent maturation time, which varies between individuals \cite{GS03,FT10}; in some engineering systems, the use of distributed time delays is better suited due to the fact that only an approximate value of the time delay is known \cite{KK11,MVAN05}; in neural systems, the time delay is different for different paths of the feedback signals, and the inclusion of a distribution over time delays has been shown to increase the stability region of the system as opposed to the same system with constant time delays \cite{TSE03}; a similar effect has been shown to occur in predator-prey systems and ecological food webs \cite{ETF05}; in epidemiological models, distributed time delays have been used to model the effects of waning immunity times after disease or vaccination, which differ between individuals \cite{BK10}. The influence of distributed delays on the overall stability of the system under consideration has been discussed by a number of authors: for example, the effects of distributed delays have been studied in relation to coupled oscillators \cite{Atay03} and traffic dynamics \cite{SAN08}, and in \cite{CJ09} the authors have shown how to approximate the stability boundary around a steady state for a general distribution function.
In this paper we consider a generic system of coupled oscillators with distributed-delay coupling, where the local dynamics is represented by the normal form close to a supercritical Hopf bifurcation. Consider a system of two coupled Stuart-Landau oscillators
\begin{eqnarray}\label{SL}
\dot{z}_1(t)&=&(1+i\omega_1)z_{1}(t)-|z_1(t)|^2z_1(t)\nonumber \\
\nonumber \\
&+&Ke^{i\theta}\left[\int_{0}^{\infty}g(t')z_{2}(t-t')dt'-z_1(t)\right],\nonumber\\ \\
\dot{z}_2(t)&=&(1+i\omega_2)z_{2}(t)-|z_2(t)|^2z_2(t)\nonumber\\ \nonumber\\
&+&Ke^{i\theta}\left[\int_{0}^{\infty}g(t')z_{1}(t-t')dt'-z_2(t)\right],\nonumber
\end{eqnarray}
where $z_{1,2}\in\mathbb{C}$, $\omega_{1,2}$ are the oscillator frequencies, $K\in\mathbb{R}_{+}$ and $\theta\in\mathbb{R}$ are the strength and the phase of coupling, respectively, and $g(\cdot)$ is a distributed-delay kernel, satisfying
\[
g(u)\geq 0,\hspace{0.5cm} \int_{0}^{\infty}g(u)du=1.
\]
When $g(u)=\delta(u)$, one recovers an instantaneous coupling $(z_2-z_1)$; when $g(u)=\delta(u-\tau)$, the coupling takes the form of a discrete time delay $[z_2(t-\tau)-z_1(t)]$. We will concentrate on the case of identical oscillators having the same frequency $\omega_1=\omega_2=\omega_0$. Without coupling, the local dynamics exhibits an unstable steady state $z=0$ and a stable limit cycle with $|z(t)|=1$.
It is known that time delay in the coupling can introduce amplitude death, which means destruction of a periodic orbit and stabilization of the unstable steady state.
The system (\ref{SL}) has been analysed by Atay in the case of zero coupling phase for a uniformly distributed delay kernel \cite{Atay03}. He has shown that distributed delays increase stability of the steady state and lead to merging of death islands in the parameter space. In this paper, we extend this work in three directions. First of all, we also take into consideration a coupling phase, which is important not only theoretically, but also in experimental realizations of the coupling, as has already been demonstrated in laser experiments \cite{SHWSH,FAYSL}. Second, to get a better understanding of the system behavior inside the stability regions, we will numerically compute eigenvalues. Finally, we will also consider the case of a practically important gamma distributed delay kernel to illustrate that it is not only the mean delay and the width of the distribution, but also the actual shape of the distribution that affects amplitude death in systems with distributed-delay coupling.
The outline of this paper is as follows. In Sec. \ref{UD} we study amplitude death in the system (\ref{SL}) with a uniformly distributed delay kernel. This includes finding analytically boundaries of stability of the trivial steady state, as well as numerical computation of the eigenvalues of the corresponding characteristic equations. Section \ref{GaD} is devoted to the analysis of amplitude death for the case of a gamma distributed delay kernel. We illustrate how regions of amplitude death are affected by the coupling parameters and characteristics of the delay distribution. The paper concludes with a summary of our findings, together with an outlook on their implications.
\section{Uniformly distributed delay}\label{UD}
To study the possibility of amplitude death in the system (\ref{SL}), we linearize this system near the trivial steady state $z_{1,2}=0$. The corresponding characteristic equation is given by
\begin{equation}\label{ch_eq}
\left(1+i\omega_0 -Ke^{i\theta}-\lambda\right)^{2}-K^{2}e^{2i\theta}\left[\{\mathcal{L}g\}(\lambda)\right]^{2}=0,
\end{equation}
where $\lambda$ is an eigenvalue of the Jacobian, and
\begin{equation}
\{\mathcal{L}g\}(s)=\int_{0}^{\infty}e^{-su}g(u)du,
\end{equation}
is the Laplace transform of the function $g(u)$. To make further analytical progress, it is instructive to specify a particular choice of the delay kernel.
As a first example, we consider a uniformly distributed kernel
\begin{equation}\label{UKer}
g(u)=\left\{
\begin{array}{l}
\displaystyle{\frac{1}{2\rho} \hspace{1cm}\mbox{for }\tau-\rho\leq u\leq \tau+\rho,}\\\\
0\hspace{1cm}\mbox{elsewhere.}
\end{array}
\right.
\end{equation}
This distribution has the mean time delay
\[
\tau_{m}\equiv<\tau>=\int_{0}^{\infty}ug(u)du=\tau,
\]
and the variance
\begin{equation}\label{VD}
\displaystyle{\sigma^2=\int_{0}^{\infty}(u-\tau_m)^{2}g(u)du=\frac{\rho^{2}}{3}.}
\end{equation}
In the case of a uniformly distributed kernel Eq.~(\ref{UKer}), it is quite easy to compute the Laplace transform of the distribution $g(u)$ as:
\[
\{\mathcal{L}g\}(\lambda)=\frac{1}{2\rho\lambda}e^{-\lambda\tau}\left(e^{\lambda\rho}-e^{-\lambda\rho}\right)=e^{-\lambda\tau}
\frac{\sinh(\lambda\rho)}{\lambda\rho},
\]
and this also transforms the characteristic equation (\ref{ch_eq}) as
\begin{equation}\label{ch_eq_2}
1+i\omega_0 -Ke^{i\theta}-\lambda=\pm Ke^{i\theta}e^{-\lambda\tau}
\frac{\sinh(\lambda\rho)}{\lambda\rho}.
\end{equation}
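As a quick numerical cross-check of this closed form (a sketch we add here, not part of the original analysis; all function names are ours), the Laplace transform of the uniform kernel can be compared against direct quadrature:

```python
import numpy as np

def trapz(y, x):
    # plain trapezoidal rule (kept local to avoid NumPy version differences)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def laplace_uniform_quad(lam, tau, rho, n=200001):
    # Direct quadrature of the Laplace transform of the uniform kernel
    # g(u) = 1/(2*rho) on [tau - rho, tau + rho]
    u = np.linspace(tau - rho, tau + rho, n)
    return trapz(np.exp(-lam * u) / (2.0 * rho), u)

def laplace_uniform_closed(lam, tau, rho):
    # Closed form: exp(-lam*tau) * sinh(lam*rho) / (lam*rho)
    return np.exp(-lam * tau) * np.sinh(lam * rho) / (lam * rho)
```

The two agree to quadrature accuracy for complex arguments $\lambda$, which is the case relevant for the characteristic equation.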
Since the roots of the characteristic equation (\ref{ch_eq_2}) are complex-valued, stability of the trivial steady state can only change if some of these eigenvalues cross the imaginary axis.
To this end, we can look for characteristic roots in the form $\lambda=i\omega$. Substituting this into the characteristic equation (\ref{ch_eq_2}) and separating real and imaginary parts gives the following system of equations for $(K,\tau)$:
\begin{equation}\label{Ktau}
\begin{array}{l}
\displaystyle{K^2\left[1-\delta(\rho,\omega)\right]-2K[\cos\theta+(\omega_0-\omega)\sin\theta]}\\\\
\hspace{2.5cm}+(\omega_0-\omega)^2+1=0,\\\\
\displaystyle{\tan(\theta-\omega\tau)=\frac{\omega_0-\omega-K\sin\theta}{1-K\cos\theta},}
\end{array}
\end{equation}
where
\[
\delta(\rho,\omega)=\left[\frac{\sin(\omega\rho)}{\omega\rho}\right]^2.
\]
We begin the analysis of the effect of the coupling phase $\theta$ on stability by first considering the case $\theta=0$. In this case, the system (\ref{Ktau}) simplifies to
\begin{equation}\label{T0Ktau}
\begin{array}{l}
\displaystyle{K^2\left[1-\delta(\rho,\omega)\right]-2K+(\omega_0-\omega)^2+1=0,}\\\\
\displaystyle{\tan(\omega\tau)=\frac{\omega-\omega_0}{1-K}.}
\end{array}
\end{equation}
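These boundary conditions can be solved pointwise (a numerical sketch of our own, not part of the original analysis; function and variable names are ours): the first equation of the system above is a quadratic in $K$ for each Hopf frequency $\omega$, and the second then yields the corresponding $\tau$ on a chosen branch of the arctangent:

```python
import numpy as np

def boundary_theta0(omega, omega0, rho, branch=-1, n=0):
    # Stability boundary (K, tau) at theta = 0, parametrized by the Hopf
    # frequency omega: solve the quadratic in K, then invert tan(omega*tau).
    x = omega * rho
    delta = (np.sin(x) / x) ** 2 if x != 0 else 1.0   # [sin(omega*rho)/(omega*rho)]^2
    a = 1.0 - delta
    c = (omega0 - omega) ** 2 + 1.0
    if a == 0.0:
        K = c / 2.0                                   # quadratic degenerates to linear
    else:
        disc = 1.0 - a * c
        if disc < 0.0:
            return None                               # no purely imaginary root here
        K = (1.0 + branch * np.sqrt(disc)) / a
    tau = (np.arctan((omega - omega0) / (1.0 - K)) + n * np.pi) / omega
    return K, tau
```

Sweeping $\omega$ and both roots of the quadratic (and several branches $n$) traces out the closed stability islands discussed below.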
To illustrate the effects of varying the coupling strength $K$ and the time delay $\tau$ on the (in)stability of the trivial steady state, we now compute the stability boundaries (\ref{T0Ktau}) as parametrized by the Hopf frequency $\omega$. Besides the stability boundaries themselves, which enclose the amplitude death regions, we also compute the maximum real part of the eigenvalues using the traceDDE package in Matlab.
In order to compute these eigenvalues, we introduce real variables $z_{1r,i}$ and $z_{2r,i}$, where $z_1=z_{1r}+iz_{1i}$ and $z_2=z_{2r}+iz_{2i}$, and rewrite the linearized system (\ref{SL}) with the distributed kernel (\ref{UKer}) as
\begin{equation}\label{Trace}
\displaystyle{\dot{\bf z}(t)=L_0 {\bf z}(t)+\frac{K}{2\rho}\int_{-(\tau+\rho)}^{-(\tau-\rho)}M{\bf z}(t+s)ds,}
\end{equation}
where
\[
\begin{array}{c}
{\bf z}=(z_{1r},z_{1i},z_{2r},z_{2i})^{T},\hspace{0.3cm}
L_0=\left(
\begin{array}{ll}
N&{\bf 0}_{2}\\
{\bf 0}_{2}&N
\end{array}
\right),\\\\
M=\left(
\begin{array}{ll}
{\bf 0}_{2}&R\\
R&{\bf 0}_{2}
\end{array}
\right),\hspace{0.5cm}
R=\left(
\begin{array}{ll}
\cos\theta&\sin\theta\\
-\sin\theta&\cos\theta
\end{array}
\right),\\\\
N=\left(
\begin{array}{ll}
1-K\cos\theta&K\sin\theta-\omega_0\\
\omega_0-K\sin\theta&1-K\cos\theta
\end{array}
\right),
\end{array}
\]
and ${\bf 0}_{2}$ denotes a $2\times 2$ zero matrix. When $\rho=0$, the last term in the system (\ref{Trace}) turns into $KM{\bf z}(t-\tau)$, which describes the system with a single discrete time delay $\tau$. System (\ref{Trace}) is in the form in which it is amenable to the algorithms described in Breda {\it et al.} \cite{BMS06} and implemented in traceDDE.
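To make the block structure concrete (a minimal sketch of our own; traceDDE itself is a Matlab package, so this NumPy version is purely illustrative), one can assemble $L_0$ and $M$ and, in the instantaneous limit $\rho=0$, $\tau=0$, check the eigenvalues of $L_0+KM$ against the roots $\lambda=1+i\omega_0-Ke^{i\theta}\pm Ke^{i\theta}$ of Eq.~(\ref{ch_eq}) with $\{\mathcal{L}g\}(\lambda)=1$:

```python
import numpy as np

def system_matrices(K, theta, omega0):
    # Blocks L0 and M of the linearized real system (Trace); sketch only.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s],
                  [-s, c]])
    N = np.array([[1.0 - K * c, K * s - omega0],
                  [omega0 - K * s, 1.0 - K * c]])
    Z = np.zeros((2, 2))
    L0 = np.block([[N, Z], [Z, N]])
    M = np.block([[Z, R], [R, Z]])
    return L0, M
```

For $\theta=0$ the two complex roots are $1+i\omega_0$ and $1-2K+i\omega_0$, so the real $4\times 4$ Jacobian $L_0+KM$ must have real parts $\{1,1,1-2K,1-2K\}$.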
\begin{figure*}
\hspace{-0.5cm}
\includegraphics[width=18cm]{fig1.jpg}
\caption{Areas of amplitude death in the plane of the coupling strength $K$ and the mean time delay $\tau$ for $\theta=0$, $\omega_0=20$. Colour code denotes $[-\max\{{\rm Re}(\lambda)\}]$ for $\max\{{\rm Re}(\lambda)\}\le 0$. (a) $\rho=0$. (b) $\rho=0.005$. (c) $\rho=0.015$. (d) $\rho=0.018$.}\label{ADfig}
\end{figure*}
Figure~\ref{ADfig} shows the boundaries of the amplitude death together with the magnitude of the real part of the leading eigenvalue for different widths of the delay distribution $\rho$.
When $\rho=0$ (single discrete time delay $\tau$), there are two distinct islands in the ($K,\tau$) parameter space, in which the trivial steady state is stable. It is noteworthy that the number of such stability islands increases with increasing fundamental frequency $\omega_0$ (e.g., there are three islands for $\omega_0=30$).
\begin{figure*}
\hspace{-0.5cm}
\includegraphics[width=18cm]{fig2.jpg}
\caption{Areas of amplitude death depending on the coupling strength $K$ and the phase $\theta$ for $\tau=0.08$ and $\omega_0=20$. Colour code denotes $[-\max\{{\rm Re}(\lambda)\}]$. (a) $\rho=0$. (b) $\rho=0.002$. (c) $\rho=0.004$. (d) $\rho=0.026$.}\label{KTheta}
\end{figure*}
This behaviour agrees with that of an equivalent single system with time-delayed feedback and delay $2\tau$ \cite{HOE05}, where similar islands of stability of the trivial steady state form in the ($K,\tau$) plane around $2\tau= \frac{2n+1}{2} T_0 \equiv \frac{2n+1}{2} 2\pi/\omega_0$ ($n=0,1,2,...$),
and the number and size of those islands increases with increasing ratio $\omega_0/\lambda_0$ \cite{footnote1}. As the width of the delay distribution $\rho$ increases, the stability islands grow (see Fig.~\ref{ADfig} (b) and (c)) until they merge into a single continuous region in the parameter space, as shown in Fig.~\ref{ADfig} (d).
Next, we consider the effects of the coupling phase on stability of the trivial steady state in the system (\ref{SL}) with a uniformly distributed delay kernel (\ref{UKer}). The boundaries of amplitude death in this case, as parametrized by the Hopf frequency $\omega$, are given in Eq.~(\ref{Ktau}). Figure~\ref{KTheta} shows how the areas of amplitude death depend on the coupling strength and coupling phase for various delay distribution widths $\rho$. In the case of a single delay ($\rho=0$), the only area of amplitude death is symmetric around $\theta=0$ and also has the largest range of $K$ values at $\theta=0$, for which amplitude death is achieved, as illustrated in Fig.~\ref{KTheta}(a). Provided that the coupling phase is large enough in absolute value, oscillations in the system (\ref{SL}) are maintained, and amplitude death cannot be achieved for any values of the coupling strength $K$. An increase in the width of the delay distribution $\rho$ leads to an increase in the size of the amplitude death region in ($K,\theta$) space, as well as to an asymmetry with regard to the coupling phase: for sufficiently large $\rho$, amplitude death can occur for an arbitrarily large $K$ if $\theta$ is negative, but only for a very limited range of $K$ values if $\theta$ is positive. At the same time, if the coupling phase exceeds $\pi/2$ in absolute value, amplitude death does not occur, irrespective of the values of $K$.
\begin{figure*}
\hspace{-0.5cm}
\includegraphics[width=17cm]{fig3.jpg}
\caption{(a) Amplitude of oscillations depending on the coupling phase $\theta$ and the width of distribution $\rho$. (b) Sections for various values of $\theta$. Parameter values are $K=120$, $\tau=0.08$ and $\omega_0=20$.}\label{Amp_plot}
\end{figure*}
Whilst distributed-delay coupling may fail to suppress oscillations in a certain part of the parameter space, it will still have an effect on the amplitude of oscillations. In Fig.~\ref{Amp_plot} we illustrate how the amplitude of oscillations varies depending on the coupling phase $\theta$ and the delay distribution width $\rho$ for a fixed value of the coupling strength $K$. As the magnitude of the coupling phase increases, the range of possible delay distribution widths for which oscillations are observed grows, and the amplitude reaches its maximum at $\rho=0$, corresponding to the case of discrete time delay.
\section{Gamma distributed delay}\label{GaD}
In many realistic situations, the distribution of time delays is better represented by a gamma distribution, which can be written as
\begin{equation}
g(u)=\frac{u^{p-1}\alpha^{p}e^{-\alpha u}}{\Gamma(p)},
\end{equation}
with $\alpha,p\geq 0$, and $\Gamma(p)$ being an Euler gamma function defined by $\Gamma(1)=1$ and $\Gamma(p+1)=p\Gamma(p)$. For integer powers $p$, this can be equivalently written as
\begin{equation}\label{GD}
g(u)=\frac{u^{p-1}\alpha^{p}e^{-\alpha u}}{(p-1)!}.
\end{equation}
For $p=1$ this is simply an exponential distribution (also called a {\it weak delay kernel}) with the maximum contribution to the coupling coming from the present values of variables $z_{1}$ and $z_{2}$. For $p>1$ (known as {\it strong delay kernel} in the case $p=2$), the biggest influence on the coupling at any moment of time $t$ is from the values of $z_{1,2}$ at $t-(p-1)/\alpha$.
The delay distribution (\ref{GD}) has the mean time delay
\begin{equation}
\displaystyle{\tau_{m}=\int_{0}^{\infty}ug(u)du=\frac{p}{\alpha},}
\end{equation}
and the variance
\[
\displaystyle{\sigma^2=\int_{0}^{\infty}(u-\tau_m)^{2}g(u)du=\frac{p}{\alpha^2}}.
\]
When studying stability of the trivial steady state of the system (\ref{SL}) with the delay distribution kernel (\ref{GD}), one could use the same strategy as the one described in the previous section. The only complication with such an analysis would stem once again from the Laplace transform of the distribution kernel, which in this case has the form
\[
\{\mathcal{L}g\}(\lambda)=\frac{\alpha^{p}}{(\lambda+\alpha)^{p}}.
\]
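This closed form can again be checked by direct quadrature of the kernel (a sketch of our own; helper names are ours):

```python
import math
import numpy as np

def laplace_gamma_quad(lam, alpha, p, umax=40.0, n=400001):
    # Direct quadrature of the Laplace transform of the gamma kernel (GD),
    # g(u) = u^{p-1} alpha^p exp(-alpha*u) / (p-1)!
    u = np.linspace(0.0, umax, n)
    g = u ** (p - 1) * alpha ** p * np.exp(-alpha * u) / math.factorial(p - 1)
    y = np.exp(-lam * u) * g
    return np.sum((y[1:] + y[:-1]) * (u[1] - u[0])) / 2.0

def laplace_gamma_closed(lam, alpha, p):
    # Closed form: alpha^p / (lam + alpha)^p
    return alpha ** p / (lam + alpha) ** p
```

The truncation at `umax` is harmless because the kernel decays exponentially.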
A convenient way to circumvent this is to use the {\it linear chain trick} \cite{MD78}, which allows one to replace an equation with a gamma distributed delay kernel by an equivalent system of $(p+1)$ ordinary differential equations. To illustrate this, we consider a particular case of system (\ref{SL}) with a weak delay kernel given by (\ref{GD}) with $p=1$, which is equivalent to a low-pass filter \cite{HOE05}:
\begin{equation}\label{WK}
g_{w}(u)=\alpha e^{-\alpha u}.
\end{equation}
Introducing new variables
\[
\begin{array}{l}
\displaystyle{Y_{1}(t)=\int_{0}^{\infty}\alpha e^{-\alpha s}z_{1}(t-s)ds,}\\\\
\displaystyle{Y_{2}(t)=\int_{0}^{\infty}\alpha e^{-\alpha s}z_{2}(t-s)ds,}
\end{array}
\]
allows us to rewrite the system (\ref{SL}) as follows
\begin{eqnarray}\label{ED_sys}
\dot{z}_1(t)&=&(1+i\omega_1)z_{1}(t)-|z_1(t)|^2z_1(t)\nonumber \\
\nonumber \\
&+&Ke^{i\theta}\left[Y_2(t)-z_1(t)\right],\nonumber\\
\nonumber \\
\dot{z}_2(t)&=&(1+i\omega_2)z_{2}(t)-|z_2(t)|^2z_2(t)\nonumber \\
\nonumber \\
&+&Ke^{i\theta}\left[Y_1(t)-z_2(t)\right],\nonumber\\\\
\dot{Y}_{1}(t)&=&\alpha z_{1}(t)-\alpha Y_{1}(t),\nonumber\\
\nonumber\\
\dot{Y}_{2}(t)&=&\alpha z_{2}(t)-\alpha Y_{2}(t),\nonumber
\end{eqnarray}
where the distribution parameter $\alpha$ is related to the mean time delay as $\alpha=1/\tau_{m}$. The trivial equilibrium $z_1=z_2=0$ of the original system (\ref{SL}) corresponds to a steady state $z_1=z_2=Y_1=Y_2=0$ of the modified system (\ref{ED_sys}).
The characteristic equation for the linearization of system (\ref{ED_sys}) near this trivial steady state reduces to
\begin{eqnarray}\label{CEQ}
&&[\lambda^2+\lambda\left(Ke^{i\theta}-1+\alpha-i\omega_0\right)-\alpha(1+i\omega_0)]\times\nonumber\\
\nonumber\\
&&[\lambda^2+\lambda\left(Ke^{i\theta}-1+\alpha-i\omega_0\right)\\
\nonumber\\
&&-\alpha(1+i\omega_0-2Ke^{i\theta})]=0.\nonumber
\end{eqnarray}
\begin{figure*}
\hspace{-0.5cm}
\includegraphics[width=9cm]{fig4a.jpg}\hspace{-0.8cm}
\includegraphics[width=10.5cm]{fig4b.jpg}
\caption{(a) Stability boundary for the system (\ref{SL}) with a weak delay distribution kernel (\ref{WK}) for $\theta=0$ ($p=1$). The trivial steady state is unstable outside the boundary surface and stable inside the boundary surface. (b) Stability boundary for $\omega_0=10$. Colour code denotes $[-\max\{{\rm Re}(\lambda)\}]$.}\label{EB0}
\end{figure*}
Let us consider the first factor in Eq.~(\ref{CEQ})
\begin{equation}\label{lam1}
\lambda^2+\lambda\left(Ke^{i\theta}-1+\alpha-i\omega_0\right)-\alpha(1+i\omega_0)=0.
\end{equation}
Since $\lambda=0$ is not a solution of this equation, the only way stability of the trivial steady state can change is if $\lambda$ crosses the imaginary axis. To find the values of system parameters when this can happen, we look for solutions of equation (\ref{lam1}) in the form $\lambda=i\omega$. Substituting this into the above equation and separating real and imaginary parts gives
\[
\begin{array}{l}
\omega K\sin\theta=\omega\omega_0-\alpha-\omega^2,\\\\
\omega K\cos\theta=\alpha\omega_0+\omega(1-\alpha).
\end{array}
\]
Solving this system gives the coupling strength $K$ and the inverse mean time delay $\alpha$ as functions of the coupling phase $\theta$ and the Hopf frequency $\omega$:
\begin{equation}\label{EBoundT}
\begin{array}{l}
\displaystyle{K=\frac{1+(\omega-\omega_{0})^{2}}{\cos\theta+\sin\theta(\omega_0-\omega)},}\\\\
\displaystyle{\alpha=-\frac{\omega\left[\sin\theta+\cos\theta(\omega-\omega_{0})\right]}{\cos\theta+\sin\theta(\omega_0-\omega)}}.
\end{array}
\end{equation}
When the coupling phase $\theta$ is equal to zero, these expressions simplify to
\begin{equation}\label{EBound0}
K=1+(\omega-\omega_0)^2,\hspace{0.5cm}\alpha=\omega(\omega_0-\omega).
\end{equation}
This implies that the permissible range of Hopf frequencies is $0\leq\omega\leq\omega_0$, the minimal value of the mean time delay for which the amplitude death can occur is $\tau_{\rm min}=1/\alpha_{\rm max}=4/\omega_{0}^{2}$, and the minimal coupling strength required for amplitude death is $K=1$. Figure~\ref{EB0} illustrates how the stability boundary (\ref{EBound0}) depends on the intrinsic frequency $\omega_0$, and it also shows how the leading eigenvalues vary inside the stable parameter region. As the intrinsic frequency $\omega_0$ increases, the region in the ($K,\alpha$) space for which amplitude death occurs grows, thus indicating that it is possible to stabilise a trivial steady state for even smaller values of the mean time delay.
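The extremal values quoted above can be confirmed numerically from the boundary parametrization (\ref{EBound0}) (a short sketch of our own; the helper name is ours):

```python
import numpy as np

def weak_kernel_boundary(omega, omega0):
    # Stability boundary (EBound0) for theta = 0 and the weak delay kernel
    K = 1.0 + (omega - omega0) ** 2
    alpha = omega * (omega0 - omega)
    return K, alpha

omega0 = 10.0
w = np.linspace(0.0, omega0, 100001)   # admissible Hopf frequencies
K, alpha = weak_kernel_boundary(w, omega0)
```

The maximum of $\alpha=\omega(\omega_0-\omega)$ sits at $\omega=\omega_0/2$, giving $\alpha_{\rm max}=\omega_0^2/4$ and hence $\tau_{\rm min}=4/\omega_0^2$, while $K$ attains its minimum value $1$ at $\omega=\omega_0$.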
\begin{figure*}
\hspace{-1cm}
\includegraphics[width=17cm]{fig5.jpg}
\caption{(a) Stability boundary for the system (\ref{SL}) with a weak delay distribution kernel (\ref{WK}) with $\omega_0=10$ ($p=1$). The trivial steady state is unstable outside the boundary and stable inside the boundary. (b) Sections for various values of $\theta$.}\label{EBT}
\end{figure*}
\begin{figure*}\hspace{-0.2cm}
\includegraphics[width=8cm]{fig6a.jpg}\hspace{-0.4cm}
\includegraphics[width=10.5cm]{fig6b.jpg}
\caption{(a) Stability boundary for the system (\ref{SL}) with a strong delay distribution kernel (\ref{SK}) for $\theta=0$ ($p=2$). The trivial steady state is unstable outside the boundary surface and stable inside the boundary surface. (b) Stability boundary for $\omega_0=10$. Colour code denotes $[-\max\{{\rm Re}(\lambda)\}]$.}\label{SKB}
\end{figure*}
For $\theta$ different from zero, the requirement $K\geq 0$, $\alpha\geq 0$ in (\ref{EBoundT}) translates into a restriction on admissible coupling phases $0\leq\theta\leq\arctan(\omega_0)$. Figure~\ref{EBT} shows how for a fixed value of $\omega_0$ the stability area reduces as $\theta$ grows, eventually collapsing at $\theta=\arctan(\omega_0)$. The range of admissible Hopf frequencies also reduces and is given for each $\theta$ by $0\leq\omega\leq\omega_0-\tan\theta$. The second factor in (\ref{CEQ}) provides another stability boundary in the parameter space, but the values of $K$ and $\alpha$ satisfying this equation with $\lambda=i\omega$ lie outside the feasible range of $K\geq 0$, $\alpha\geq 0$. Hence, it suffices to consider the stability boundary given by (\ref{EBoundT}).
Next, we consider the case of the strong delay kernel ($p=2$)
\begin{equation}\label{SK}
g_{s}(u)=\alpha^{2}u e^{-\alpha u}.
\end{equation}
Following the same strategy as in the case of the weak delay kernel (\ref{ED_sys}), we introduce new variables
\[
\begin{array}{l}
Y_{11}(t)=\int_{0}^{\infty}\alpha e^{-\alpha s}z_{1}(t-s)ds,\\\\
Y_{12}(t)=\int_{0}^{\infty}\alpha^{2}s e^{-\alpha s}z_{1}(t-s)ds,\\\\
Y_{21}(t)=\int_{0}^{\infty}\alpha e^{-\alpha s}z_{2}(t-s)ds,\\\\
Y_{22}(t)=\int_{0}^{\infty}\alpha^{2}s e^{-\alpha s}z_{2}(t-s)ds,
\end{array}
\]
and then rewrite the system (\ref{SL}) in the form
\begin{eqnarray}\label{SK_sys}
\dot{z}_1(t)&=&(1+i\omega_1)z_{1}(t)-|z_1(t)|^2z_1(t)\nonumber\\
\nonumber \\
&+&Ke^{i\theta}\left[Y_{12}(t)-z_1(t)\right],\nonumber \\
\nonumber\\
\dot{z}_2(t)&=&(1+i\omega_2)z_{2}(t)-|z_2(t)|^2z_2(t)\nonumber\\
\nonumber \\
&+&Ke^{i\theta}\left[Y_{22}(t)-z_2(t)\right],\nonumber \\\\
\dot{Y}_{11}(t)&=&\alpha z_{1}(t)-\alpha Y_{11}(t),\nonumber\\
\nonumber \\
\dot{Y}_{12}(t)&=&\alpha^2 z_{1}(t)+\alpha Y_{11}(t)-\alpha Y_{12}(t),\nonumber\\
\nonumber\\
\dot{Y}_{21}(t)&=&\alpha z_{2}(t)-\alpha Y_{21}(t),\nonumber\\
\nonumber\\
\dot{Y}_{22}(t)&=&\alpha^2 z_{2}(t)+\alpha Y_{21}(t)-\alpha Y_{22}(t),\nonumber
\end{eqnarray}
where the mean time delay is given by $\tau_{m}=2/\alpha$. Linearizing system (\ref{SK_sys}) near the steady state $z_1=z_2=Y_{11}=Y_{12}=Y_{21}=Y_{22}=0$ yields a characteristic equation, which is more involved than in the case of the weak delay kernel. Solving this characteristic equation provides the boundary of amplitude death depending on system parameters. Figure~\ref{SKB} illustrates how this boundary changes with the coupling strength $K$, fundamental frequency $\omega_0$, and the inverse time delay $\alpha/2$ in the case $\theta=0$. Similar to the case of the weak delay kernel, as the inverse time delay $\sim \alpha$ approaches zero, the upper critical value of the coupling strength $K$ tends to a finite value. At the same time, unlike the case of the weak kernel, the lower critical value of the coupling strength $K$ does vary with the coupling phase. Also, in the case of a strong delay kernel, amplitude death is achieved for a much smaller range of coupling strengths $K$ and only for much larger mean time delays $\tau_{m}=2/\alpha$. However, the effect of the coupling phase on the region of amplitude death is similar to that for a weak delay kernel, i.e., as the coupling phase increases, the area in the $(K,\alpha)$ parameter plane where amplitude death is observed shrinks, as shown in Fig.~\ref{SKtheta}.
\begin{figure*}
\hspace{-1cm}
\includegraphics[width=17cm]{fig7.jpg}
\caption{(a) Stability boundary for the system (\ref{SL}) with a strong delay distribution kernel (\ref{SK}) with $\omega_0=10$ ($p=2$). The trivial steady state is unstable outside the boundary and stable inside the boundary. (b) Sections for various values of $\theta$.
}\label{SKtheta}
\end{figure*}
\section{Discussion}\label{Disc}
In this paper, we have studied the effects of distributed-delay coupling on the stability
of the trivial steady state in a system of coupled oscillators. Using generic Stuart-Landau oscillators, we have identified parameter regimes of amplitude death, i.e., stabilised steady states, in terms of the intrinsic oscillator frequency and the characteristics of the coupling. In order to better understand the dynamics inside stable regimes, we have numerically computed eigenvalues of the corresponding characteristic equation. We have considered two particular types of delay distribution: a uniform distribution around some mean time delay and a class of exponential or more general gamma distributions. For both of these distributions it has been possible to determine stability regions in terms of the coupling strength, coupling phase, and the mean time delay.
These results suggest that the coupling phase plays an important role in determining the ranges of admissible Hopf frequencies and values of the coupling strength, for which stabilisation of the trivial steady state is possible.
For the uniformly distributed delay kernel, as the width of the distribution increases, the region of amplitude death in the parameter space of the coupling strength and the average time delay increases. As the coupling phase $\theta$ moves away from zero, the range of coupling strength providing amplitude death gets smaller until some critical value of the width of distribution, beyond which an asymmetry in $\theta$ is introduced and this range is significantly larger for negative values of $\theta$ than it is for $\theta\geq 0$. Furthermore, as the magnitude of the coupling phase grows, the maximum amplitude of oscillations in the coupled system also grows, and such oscillations can be observed for a larger range of delay distribution widths.
In the case of gamma distributed delay, the region of amplitude death does not consist of isolated stability islands but is always a continuous region in the $(K,\tau_{m})$ plane. Similar to the uniform delay distribution, as the coupling phase increases from zero, the range of coupling strengths for which stabilisation of the steady state can be achieved reduces, while the minimum average time delay required for the stabilization increases. However, unlike the uniform distribution, within the stable region the amplitude death can occur for an arbitrarily large value of the average time delay, provided it is above some minimum value and the coupling strength is within the appropriate range.
So far, we have considered the effects of distributed-delay coupling on the dynamics of identical coupled oscillators only. The next step would be to extend this analysis to the case when the oscillators have differing intrinsic frequencies in order to understand the dynamics of the phase difference, as well as to analyse the stability of in-phase and
anti-phase oscillations.
\section*{Acknowledgements}
This work was partially supported by DFG in the framework of SFB 910: {\em Control of self-organizing nonlinear
systems: Theoretical methods and concepts of application}.
Recently, there has been an increasing interest in spin dependent semi-inclusive deep inelastic processes, motivated by the imminence of high precision semi-inclusive experiments\cite{1,27} and by theoretical considerations related to facto\-ri\-za\-tion\cite{2,7}. Both open the possibility to extract information about the spin of the proton and to study hadronization phenomena.
Originally, the observables proposed to be measured, the so called spin dependent asymmetries, were analysed in terms of their QCD leading order (LO)\cite{21} decomposition, which implied a trivial or even vanishing dependence on the kinematical variable $z$: the hadron energy fraction. The experiments proposed so far are designed to measure these observables in a specific range of this variable.
As the naive description does not take into account events coming from the target fragmentation, which dominate at low values of $z$, the low $z$-cut is chosen in such a way that most of these events are discarded. In order to increase the statistics, the data are then integrated over the measured range assuming the trivial LO $z$-dependence of the observable.
In this picture, once the target fragmentation region is eliminated,
both the $z$ dependence of the asymmetry in the restricted interval and that of the
integrated observable coming from the $z$-cut, become trivial.
However, two facts spoil this simple picture: target fragmentation is always present, and NLO corrections introduce a non trivial dependence\cite{2,7}. It is then highly desirable to have an estimate of how these target fragmentation effects are suppressed by the kinematical cut and how dramatic the residual next to leading order (NLO) $z$ dependence is. These are the main objectives of this presentation.
\section{Semi-Inclusive Asymmetries:}
It is customary to define semi-inclusive spin asymmetries $A_{1\,N}^{h}$, as the ratio between
the differences of semi-inclusive deep inelastic cross sections for the production of a hadron $h$ off a target $N$ with opposite helicities ($\Delta \sigma^h_N$) and the corresponding unpolarized cross section ($\sigma^h_N$), and difference asymmetries $A_{N}^{h^+-h^-}$, for the production of hadrons with opposite charges as
\begin{equation}
A_{1\,N}^{h}=\frac{Y_M}{\lambda Y_P}\frac{\Delta \sigma^h_{N}}{\sigma^h_{N}}
\ \ \ \ \ \ \ A_{N}^{h^+-h^-}=\frac{Y_M}{\lambda Y_P}\frac{\Delta \sigma^{h^+}_{N}-\Delta \sigma^{h^-}_{N}}
{\sigma^{h^+}_{N}-\sigma^{h^-}_{N}}
\end{equation}
where $Y_M, \, Y_P$ are kinematical factors and $\lambda$ is the helicity of the incoming lepton.
In the naive parton model these asymmetries reduce to expressions like
\begin{equation}
A_{1\,N}^{h}=\frac{\sum_{i} e^2_{i}\Delta q(x) D_{h/i}(z)}{\sum_{i} e^2_{i} q(x) D_{h/i}(z)}
\end{equation}
for single asymmetries, or
\begin{eqnarray}
A_{D}^{\pi^+-\pi^-}=\frac{\Delta u_v(x) +\Delta d_v(x)}{u_v(x)+d_v(x)}
\frac{(D_u^{\pi^+}(z)-D_d^{\pi^+}(z))}{(D_u^{\pi^+}(z)-D_d^{\pi^+}(z))}, \nonumber \\ A_{p}^{\pi^+-\pi^-}=\frac{4 \Delta u_v(x) -\Delta d_v(x)}{4 u_v(x)-d_v(x)} \frac{(D_u^{\pi^+}(z)-D_d^{\pi^+}(z))}{(D_u^{\pi^+}(z)-D_d^{\pi^+}(z))}
\end{eqnarray}
for pion production on deuterium and proton targets respectively.
In the first case, eq.2, the $z$ dependence is that of the already known unpolarized fragmentation functions and for difference asymmetries, eq.3, this dependence cancels.
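Both features can be illustrated with a toy numerical example (a sketch of our own with entirely made-up $x$- and $z$-shapes, not realistic parton distributions or fragmentation functions): eq.2 retains a genuine $z$ dependence when the fragmentation functions differ between flavours, whereas a common fragmentation factor cancels in the ratio:

```python
def a1_LO(x, z, flavors):
    # Naive LO asymmetry of eq.2; flavors = [(e_i^2, dq_i, q_i, D_i), ...]
    num = sum(e2 * dq(x) * D(z) for e2, dq, q, D in flavors)
    den = sum(e2 * q(x) * D(z) for e2, dq, q, D in flavors)
    return num / den

# toy shapes, for illustration only
u  = lambda x: x ** 0.5 * (1.0 - x) ** 3
du = lambda x: 0.8 * u(x)
d  = lambda x: 0.7 * u(x)
dd = lambda x: -0.2 * u(x)
Du = lambda z: (1.0 - z) ** 1.2    # "favoured" toy fragmentation function
Dd = lambda z: (1.0 - z) ** 2.0    # "unfavoured" toy fragmentation function
```

Evaluating the asymmetry at fixed $x$ and two values of $z$ shows a residual $z$ dependence for distinct $D$'s and an exact cancellation when all flavours fragment identically.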
In the experimental determination of these asymmetries, the quantities actually measured are the cross sections integrated over a range of the variable $z$, and the data are then analysed taking into account the LO dependence.
This simple picture for the asymmetries is obviously spoiled when either target fragmentation effects or next to leading order corrections are included.
The latter implies that the factors in eq.2 have to be replaced by NLO convolutions, which also prevent the cancellation of the $z$ dependence in eq.3. In addition, at NLO target fragmentation effects must necessarily be included using fracture functions
for a consistent factorization of collinear divergences\cite{2}. The full expressions for the NLO cross sections can be found in references\cite{2,18,19}.
\section{Results:}
In this section we compute semi-inclusive spin asymmetries for proton and deuteron targets including NLO QCD corrections and contributions coming from the target fragmentation region. For these we use NLO fragmentation functions\cite{5}, parton distributions\cite{4,3} and a model for spin dependent fracture functions\cite{7}.
In figures (1a) and (1b) we show the naive (solid lines) and the NLO corrected (dashed lines) single asymmetry results for the production of positive hadrons off protons imposing two different kinematical cuts ({\bf a)} $z>0.20$ as in the COMPASS proposal\cite{6}, and
{\bf b)} $z>0.10$ as in that of HERMES\cite{27}). For comparison we also include data from EMC\cite{9} and SMC\cite{1} experiments and the errors expected by COMPASS\cite{6}.
In these asymmetries the size of the corrections, which is not significant regarding the precision of the available data, is dominated by NLO current fragmentation effects;
fracture functions do not contribute significantly. These results have been obtained using polarized parton distributions with non negligible gluon polarization. As can be expected, for sets without gluon polarization the size of the corrections is even smaller.
Identical results are obtained for the production of negative hadrons and for
neutron and deuteron targets.
\setlength{\unitlength}{1.mm}
\begin{figure}[hbt]
\begin{picture}(120,48)(0,0)
\put(0,0){\mbox{\epsfxsize5.3cm\epsffile{pol1a.ps}}}
\put(60,0){\mbox{\epsfxsize5.3cm\epsffile{pol1b.ps}}}
\end{picture}
\caption{Semi-inclusive asymmetries for muoproduction of charged pions and kaons on a proton target with: {\bf a)} $z>0.2$, {\bf b)} $z>0.1$
\label{fig:radish}}
\end{figure}
These figures also show a very mild $z$-cut dependence, which can be exploited to increase the statistics within one experiment, or simply for a safe comparison of the future experimental results.
In figures (2a) and (2b), we show the naive (solid lines) and the NLO corrected (dashed lines) results for difference asymmetries imposing two different kinematical cuts ({\bf a)} $z>0.25$ and {\bf b)} $z>0.20$). We also include data from the SMC\cite{1,11} experiment.
\setlength{\unitlength}{1.mm}
\begin{figure}[hbt]
\begin{picture}(120,48)(0,0)
\put(0,0){\mbox{\epsfxsize5.3cm\epsffile{pol2a.ps}}}
\put(60,0){\mbox{\epsfxsize5.3cm\epsffile{pol2b.ps}}}
\end{picture}
\caption{Difference asymmetries for: {\bf a)} $z>0.25$, {\bf b)} $z>0.20$
\label{fig:pol2}}
\end{figure}
Notice that,
at variance with what is found for single asymmetries, here the $z$-cut dependence is much more apparent. For $z>0.10$ (figure 3), one finds that the corrections differ in shape and sign, and for the deuteron become singular at $z\sim 0.20$.
\setlength{\unitlength}{1.mm}
\begin{figure}[hbt]
\begin{picture}(120,47)(0,0)
\put(30,0){\mbox{\epsfxsize5.3cm\epsffile{pol2c.ps}}}
\end{picture}
\caption{The same as figure (2) for $z>0.10$
\label{fig:pol2c}}
\end{figure}
The unexpected behaviour of these corrections, which are dominated by target fragmentation, is due to the fact that these effects spoil the cancellation of differences between fragmentation functions in expressions like Eq.~3. These differences vanish in the neighbourhood of $z\sim 0.20$ for pions, making the observable unstable and, for example, leading to the singular behaviour of the deuteron asymmetry.
\section*{Conclusions}
Imposing different kinematical cuts, such as those proposed by the COMPASS and the HERMES collaborations, we have computed predictions for both experiments,
using NLO fragmentation functions, parton distributions and also a model for spin dependent fracture functions.
We have found for the so called ``difference" asymmetries that the corresponding corrections produce significant effects which depend on the inclusion of events where the hadron energy fraction is smaller or greater than approximately 0.2 units. The value chosen for the $z$-cut modifies not only the size of the corrections but also their sign.
These behaviours are closely related to the fact that pions of opposite charge are produced with almost equal probability in this kinematical region.
Our estimates imply for single asymmetries that the corrections are almost within the accuracy limits of the forthcoming experiments and that the different choices for the $z$-cuts have no significant consequences for them. However, for
difference asymmetries, the corrections and their dependence on $z$ are crucial for the interpretation of the data and the comparison of the experimental results.
\section*{References}
\section{Introduction}
\label{sec:intro}
Characteristic masses of the first stars
(Population III or Pop III stars) and the nature of their
supernova explosions
are critically important to determine their role
in the cosmic reionization and subsequent star formation
in the early universe \citep[e.g.][]{bromm11}.
Cosmological simulations have shown
that the Pop III stars could be very massive $\gtrsim 100M_{\odot}$,
as the result of cooling of primordial gas via hydrogen molecules \citep
[e.g.][]{bromm04}.
More recent studies, however, propose mechanisms by which
lower-mass stars can form, through radiation feedback from growing
protostars and/or disk fragmentation
\citep{hosokawa11,clark11,hirano13,bromm13,susa13}.
Abundance patterns of the lowest-metallicity stars in
our Galaxy provide us with a rare opportunity to
observationally constrain the masses
of the Pop III stars.
The chemical abundances in the four iron-poor stars with [Fe/H]$<-4.5$,
HE 0107-5240 \citep{christlieb02}, HE 1327-2326 \citep{frebel05,aoki06},
HE 0557-4840 \citep{norris07}, and SDSS J102915$+$172927 \citep{caffau11}
(see \citet{hansen14} for a recent discovery of another metal-poor star
in this metallicity range) do not show signature of
pair-instability supernovae of very massive
($\gtrsim 140M_{\odot}$) stars as their progenitors
\citep[][and references therein]{nomoto13}.
Instead, the observed abundances in these stars
are better explained by the yields of
core-collapse supernovae of moderately massive Pop III stars
with several tens of $M_{\odot}$
\citep{umeda02,umeda03,limongi03,iwamoto05,tominaga07b,
tominaga09,tominaga14, heger10, kobayashi14}.
Another important insight into the nature of
the Pop III stars is that a large fraction of the most iron-poor stars
are carbon-rich \citep[e.g.][]{hansen14}.
\citet{iwamoto05} suggests that the large
enhancement of carbon observed in both HE 0107-5240
and HE 1327-2326 is explained by their models
with mixing of supernova ejecta and their
subsequent fallback on to the central remnant.
These models with different
extent of the mixing regions simultaneously reproduce
the observed
similarity in [C/Fe] and more than a factor of $\sim 10$
differences in [O, Mg, Al/Fe] between
the two stars.
A metal-poor star SMSS J031300.36-670839.3 (SMSS J0313-6708),
recently discovered by SkyMapper Southern Sky Survey,
provides us with a new opportunity to test theoretical
predictions about the Pop III stars \citep{keller07,keller14}.
Follow-up spectroscopic observations found that this object
shows an extremely low upper limit for its iron abundance
([Fe/H]$<-7.1$); more than $1.0$ dex lower
than the previous record of the lowest iron-abundance stars.
In this letter, we extend the study of \citet{iwamoto05} and examine
whether the abundances of the five stars with [Fe/H]$<-4.5$,
including the most iron-deficient star SMSS 0313-6708, can be
consistently explained by the supernova yields of Pop III
stars which undergo the mixing and fallback.
\section{Models}
\label{sec:model}
We employ the Pop III supernova/hypernova
yields of progenitors with main-sequence masses of
$M=25M_{\odot}$ \citep{iwamoto05} and $40M_{\odot}$
\citep{tominaga07a} and assume kinetic explosion energies,
$E$, of $E_{51}=E/10^{51}$erg$=1, 10$ for
the $25M_{\odot}$ model and
$E_{51}=1, 30$ for the $40M_{\odot}$ model in the same
way as in \citet{tominaga07b}.
The abundance distribution after the explosions of the
$25M_{\odot}$ models with $E_{51}=1$ and $10$
as a function of the
enclosed mass ($M_{r}$) are illustrated in Figure \ref{fig:model}.
For the parameters that describe the extent
of the mixing-fallback, we follow the prescription of
\citet{umeda02} and \citet{tominaga07b} as briefly summarized below.
We assume that the supernova ejecta within a $M_{r}$ range between
an initial mass-cut $M_{\rm cut}(\rm ini)$ and
$M_{\rm mix}(\rm out)$ are mixed and a fraction $f$ of the mixed material
is ejected, while the rest falls back onto the central remnant, presumably forming a black hole.
This approach has been compared with a
hydrodynamical calculation of jet-induced explosions \citep{tominaga09}
and it is demonstrated that the prescription of the mixing-fallback
applied to a one-dimensional calculation mimics
an angle-averaged yield of the aspherical supernovae.
We assume that the $M_{\rm cut}(\rm ini)$ is approximately located at the
outer edge of the Fe-core \citep[$M_{r}=1.79 M_{\odot}$ and $2.24 M_{\odot}$
for $M=25 M_{\odot}$ and $40 M_{\odot}$, respectively: ][]{tominaga07b}.
The value of $M_{\rm cut}({\rm ini})$ is
varied within $\pm 0.3 M_{\odot}$ with steps of 0.1 $M_{\odot}$
to better fit the observed
abundance patterns. Then we calculate the grids of
supernova yields for the range of $\log f=-7.0$-$0.0$
with steps of $\Delta\log f=0.1$ and for
$M_{\rm mix}({\rm out})=1.5$-$9.0 M_{\odot}$ ($M=25M_{\odot}$)
and 2.0-16.0 $M_{\odot}$ ($M=40M_{\odot}$) with steps
of $\Delta M_{\rm mix}({\rm out})=0.1M_{\odot}$.
These ranges have been chosen so that the $M_{\rm mix}({\rm out})$
is searched approximately in the range between the location of the $M_{\rm cut}({\rm ini})$
and the outer boundary of the CO core (see Figure \ref{fig:model}). Using this
grid of yields, parameter sets ($M_{\rm cut}({\rm ini})$, $M_{\rm mix}({\rm out}), f$) that reproduce
the observed [C/Ca] and [C/Mg] ratios in SMSS 0313-6708 are searched.
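The bookkeeping implied by this mixing-fallback prescription can be sketched in a few lines of Python. The helper names and the toy shell profile below are ours for illustration only, not the actual $25M_{\odot}$ progenitor structure:

```python
# Minimal sketch of the mixing-fallback prescription: ejecta between
# M_cut(ini) and M_mix(out) are homogenized, a fraction f of the mixed
# material is ejected, and the rest falls back onto the central remnant.
# The toy shell profile below is illustrative, not the real progenitor.

def remnant_mass(m_cut_ini, m_mix_out, f):
    """Final remnant mass: initial mass cut plus the fallback fraction
    (1 - f) of the mixed region."""
    return m_cut_ini + (1.0 - f) * (m_mix_out - m_cut_ini)

def ejected_mass(shells, m_cut_ini, m_mix_out, f):
    """Ejected mass of one species, given (m_inner, m_outer, mass_fraction)
    shells ordered in enclosed mass M_r.  Material inside the mixed zone is
    ejected with weight f; material outside M_mix(out) is fully ejected;
    material below M_cut(ini) always falls back."""
    ejected = 0.0
    for m_in, m_out, x in shells:
        # overlap of the shell with the mixed region
        lo, hi = max(m_in, m_cut_ini), min(m_out, m_mix_out)
        mixed = max(0.0, hi - lo) * x
        # part of the shell lying fully outside the mixed region
        outer = max(0.0, m_out - max(m_in, m_mix_out)) * x
        ejected += f * mixed + outer
    return ejected

# Toy profile: a Ca-bearing shell deep inside the mixed zone and a C-rich
# layer mostly outside it, with parameters similar to the first row of Table 1
f = 10**-5.1                                   # extensive fallback
m_C = ejected_mass([(5.0, 9.0, 0.2)], 2.0, 5.6, f)
m_Ca = ejected_mass([(2.0, 2.5, 0.05)], 2.0, 5.6, f)
print(remnant_mass(2.0, 5.6, f))               # ~5.6 Msun, cf. Table 1
print(m_C / m_Ca)                              # large C/Ca enhancement
```

This reproduces the qualitative behaviour discussed below: for tiny $f$ the remnant mass approaches $M_{\rm mix}({\rm out})$, while species confined to the mixed zone (here Ca) are suppressed by a factor $f$ relative to species in the outer layers (here C).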
In addition to SMSS 0313-6708, we adopt the same parameter-search
method for the four other iron-poor stars with [Fe/H]$<-4.5$
(see \citet{tominaga14} for models with other $E$).
We adopt the observational data analyzed with 3D and/or NLTE corrections
(see the captions of Figure \ref{fig:abupattern} and \ref{fig:hmpump}
for details)
and normalized
with the solar abundances of \citet{asplund09}.
For Ca abundances, we use abundances estimated from Ca I lines
for the four other iron-poor stars.
We should note, however, that there are uncertainties in the Ca abundances
due to 3D and NLTE effects \citep[e.g.][]{korn09,lind13}, which affect the
normalization of Figure \ref{fig:hmpump}.
We also consider Ca II abundances when we draw our conclusions.
\section{Result}
\label{sec:result}
Table \ref{tab:bestmodels} summarizes the model parameters
($M_{\rm cut}({\rm ini})$, $M_{\rm mix}({\rm out})$, $f$) that best
reproduce the observed abundance patterns of the five iron-poor stars.
The resultant
masses of the central remnant ($M_{\rm rem}$) and ejected masses
of $^{56}$Ni, which
finally decays to $^{56}$Fe, are indicated in the last two columns.
Since only C, Mg, and Ca have been measured for SMSS 0313-6708,
all four models ($(M, E_{51})=(25, 1)$, $(25, 10)$, $(40, 1)$, and
$(40, 30)$) can equally well fit the observed abundance ratios as
indicated in Table \ref{tab:bestmodels}.
For the other four stars, the parameters have been constrained using
a larger number of elements and thus only the best-fit models
are shown.
\subsection{The best-fit models for SMSS 0313-6708 \label{sec:bestfit}}
Figure \ref{fig:abupattern} shows
the abundance patterns relative to Ca of
the best-fit models
for $M=25M_{\odot}$
(top) and 40$M_{\odot}$ (bottom).
In each panel, the models of the supernova ($E_{51}=1$:
solid lines with squares) and hypernovae ($E_{51}=10$ and 30 for
$M=25M_{\odot}$ and 40$M_{\odot}$: solid lines with triangles) are shown
with the observed abundances in
SMSS 0313-6708 (filled circles and arrows for the upper limits).
In the following we describe comparison
of each model with the observed abundance pattern.
{\it $M=25M_{\odot}$, $E_{51}=1$, supernova model: }
The model yield that fits the observed
[C/Ca] and [Mg/Ca] ratios is consistent with
the upper limits of other elements except for
Na and Al. Because the predicted Na and Al abundances vary by more than
a few dex depending on overshooting \citep{iwamoto05}
and stellar rotation \citep{meynet10}, and on the reaction rate of
$^{12}$C$(\alpha, \gamma)$$^{16}$O \citep[][and references therein]{chieffi02}
in the presupernova models,
we restrict our discussion for Na and Al to
their relative yields between the supernova and hypernova models.
In our model, the observed large [Mg/Ca] ratio
results from the large mixing region
($M_{\rm mix}({\rm out})=5.6M_{\odot}$) and
the small ejected fraction, $f$;
as seen in Figure \ref{fig:model}, the material in the
layer containing Ca mostly falls back, while
the material in the outer layer containing Mg is partly ejected.
The abundances of iron-peak elements in the model depend on
the adopted $M_{\rm cut}({\rm ini})$.
For $M_{\rm cut}({\rm ini})=2.0 M_{\odot}$,
the ejected mass of $^{56}$Ni is less than
$10^{-5}M_{\odot}$ for these models,
which is extremely
small compared to those estimated for nearby
supernovae \citep[$^{56}$Ni$\sim 0.1M_{\odot}$: ][]{blinnikov00,nomoto13}.
{\it $M=25M_{\odot}$, $E_{51}=10$, hypernova model: }
The higher explosion energy induces explosive
burning at the bottom of the He layer, which leads to
an extra production of Mg at $M_{r}\sim 6M_{\odot}$ as
can be seen in the bottom panel of Figure \ref{fig:model}.
Consequently, a large value of $M_{\rm mix}({\rm out})$
($6.4M_{\odot}$), which results in a large fallback of Mg
synthesized at $M_r\sim6M_\odot$,
best explains the observed [C/Mg] ratio.
The [O/Ca] ratio is smaller than that of the supernova
model as a result of the associated fallback of oxygen for the given [Mg/Ca]
ratio. The [O/Ca] ratio, however, can be as large as $\sim +4$
depending on $M_{\rm mix}({\rm out})$ and $f$ as shown by
the dotted gray line in Figure \ref{fig:abupattern}.
The [Na/Ca] and [Al/Ca] ratios of the hypernova yields
are lower than those predicted by the supernova model.
This is due to the more extended explosive
O and C burning regions in the more energetic
explosion (cross-hatched regions in
Figure \ref{fig:model}), in which more Na and Al are consumed to synthesize
heavier nuclei \citep{nakamura01}. The larger
$M_{\rm mix}({\rm out})$ with small $f$
also leads to the smaller amount of ejected Na and Al than the
supernova model.
The abundance differences between the odd and even atomic number
iron-peak elements are smaller for the hypernova model.
This results from enhanced production of
odd-Z iron-peak elements due to the higher entropy
and larger neutron excess in the Si burning region in the
more energetic explosion \citep[e.g.][]{nakamura01}.
{\it $M=40M_{\odot}$, $E_{51}=1$, supernova model: }
Compared to the supernova model with $M=25M_{\odot}$,
the outer boundary of the Mg-rich layer extends to a larger
mass at $M_r\sim 13M_{\odot}$.
Consequently, the models with
$M_{\rm mix}({\rm out})\sim 12-14M_{\odot}$ best fit the observed [C/Mg] ratio.
{\it $M=40M_{\odot}$, $E_{51}=30$, hypernova model: }
The hypernova model for the
$40M_{\odot}$ progenitor is characterized by
larger [Si/Ca] and [S/Ca] ratios because of more
extended explosive O and C-burning region, in which elements such
as O and C are consumed to synthesize Si and S. Aluminum
is also synthesized in the
explosive C burning layer but destroyed in the explosive O burning layer
to synthesize Si.
In comparison to the
supernova model, predicted abundances of V, Mn, Co and
Cu relative to Ca are larger, similar to
the $25M_{\odot}$ hypernova model.
\begin{table*}
\begin{center}
\caption{Summary of the observed abundances and the best-fit models \label{tab:bestmodels}}
\begin{tabular}{lcccccccccc}
\hline
Name & [Fe/H]\tablenotemark{a}&[C/Ca]\tablenotemark{a} &[C/Mg]\tablenotemark{a} & $M$ & $E_{51}$& $M_{\rm cut}$(ini) & $M_{\rm mix}$(out) & $\log f$& $M_{\rm rem}$ & $M(^{56}{\rm Ni})$ \\
& (dex) & (dex) & (dex) & ($M_{\odot}$)&($10^{51}$erg) & ($M_{\odot}$) & ($M_{\odot}$) & & ($M_{\odot}$) & ($M_{\odot}$) \\
\hline
SMSSJ0313-6708 & $<-7.1$ & 4.4 & 1.2 & 25 & 1 & 2.0 & 5.6 & $-5.1$ & 5.6 & 2.1$\times 10^{-7}$ \\
& & & & 25 & 10 & 1.7 & 6.4 & $-5.8$ & 6.4 & 1.1$\times 10^{-6}$ \\
& & & & 40 & 1 & 2.0 & 12.7 & $-5.4$ & 12.7 & 3.4$\times 10^{-6}$ \\
& & & & 40 & 30 & 2.5 & 14.3 & $-6.0$ & 14.3 & 1.9$\times 10^{-6}$ \\
HE0107-5240 & $-5.7$ &2.9&2.6 & 25 & 1 & 1.7 & 5.7 & $-3.3$ & 5.7 & 1.4$\times 10^{-4}$ \\
HE1327-2326 & $-6.0$ &3.3 &1.8 & 25 & 1 & 1.7 & 5.7 & $-4.4$ & 5.7 & 1.1$\times 10^{-5}$ \\
HE0557-4840 & $-4.9$ & 1.2 & 1.1& 25 & 1 & 1.7 & 5.7 & $-2.1$ & 5.7 & 2.2$\times 10^{-3}$ \\
SDSSJ102915$+$172927 & $-4.9$ & $<0.1$& $<0.1$ & 40 & 30 & 2.0 & 5.5 & $-0.9$ & 6.0 & 2.8$\times 10^{-1}$ \\
\hline
\end{tabular}
\tablenotetext{a}{See the captions of Figure \ref{fig:abupattern}
and \ref{fig:hmpump} for the references of the observational data.}
\end{center}
\end{table*}
\subsection{Four other iron-poor stars with [Fe/H]$<-4.5$ }
Figure \ref{fig:hmpump} shows the abundance patterns of
the best-fit models for four other iron-poor stars with [Fe/H]$<-4.5$.
We note that the N abundances are not included in the fitting because of
their uncertainty in the progenitor models \citep{iwamoto05}.
For the three carbon-enhanced stars
(HE 1327-2326, HE 0107-5240, and HE 0557-4840)
shown in the top-three panels,
the supernova models of $M=25 M_{\odot}$ with $E_{51}=1$
adopting the parameters $M_{\rm mix}({\rm out})\sim 5.7 M_{\odot}$ and
$f\sim 10^{-4}-10^{-2}$
best explain the observed
abundances (squares).
The hypernova models with $E_{51}=10$ (triangles)
tend to yield higher [Mg/Ca] ratios than the observed values,
which is due to the explosive synthesis of Mg at the
bottom of the He layer.
The abundance pattern of SDSS J102915$+$172927 with no
carbon enhancement is in better agreement with the hypernova yields of
$M=40M_{\odot}$ and $E_{51}=30$ (triangles).
This is due to the relatively high [Si/Ca] ratio
observed in this star, which is better explained by the
larger Si/Ca ratio in the higher explosion energy model.
Such an energy dependence can be understood from the comparison of the
abundance distributions between the top and bottom panels of Figure
\ref{fig:model}. In the hypernova model (bottom),
the region where the postshock
temperature becomes high enough to produce a significant amount of Si
extends to larger $M_r$ than in the supernova model (top).
\subsection{Comparison of the five most iron-poor stars}
Figure \ref{fig:c_ca_mg} summarizes the best-fit
parameters ($M_{\rm mix}({\rm out}), \log f$) obtained for the
five iron-poor stars
(star: SMSS 0313-6708, square: HE 1327-2326, pentagon: HE 0107-5240,
three-pointed star: HE 0557-4840, and asterisk: SDSS J102915$+$172927).
The results for the model
$(M, E_{51})=(25, 1), (25, 10), (40, 1),$ and $(40, 30)$ are shown from
the top-left to bottom-right panels.
The regions where the [C/Ca] and [C/Mg]
ratios measured within $\pm 3\sigma$ in SMSS 0313-6708 are realized
are shown by orange and green, respectively.
In the $M_{\rm mix}({\rm out})-\log f$ parameter space, the [C/Ca] ratio
is at maximum when $M_{\rm mix}({\rm out})$ is located at
the layer where carbon burning takes place ($M_r\sim 3-4M_{\odot}$ in the
$M=25M_{\odot}$ and $E_{51}=1$ model; top panel of Figure \ref{fig:model}).
At a given $M_{\rm mix}({\rm out})$, the [C/Ca]
is larger for smaller $f$ (larger fallback), because Ca
synthesized in the inner layer falls back more efficiently for
smaller $f$ than C synthesized in the outer layer.
As a result, the high [C/Ca] ratio (orange in Figure \ref{fig:c_ca_mg})
in SMSS 0313-6708 is reproduced
with an extensive fallback ($\log f<-4$).
The [C/Mg] ratio is at maximum when the $M_{\rm mix}({\rm out})$
is located at the outer edge of the Mg-rich ejecta.
The ejection of a large amount of C and the fallback of
most of the Mg-rich layer lead to the large [C/Mg] ratios.
The observed [C/Mg] ratio in
SMSS 0313-6708 is close to the maximum for the observed [C/Ca].
Consequently, mixing up to $M_r \sim 6M_{\odot}$
(for $M=25M_{\odot}$) and $\sim 13M_{\odot}$ (for
$M=40M_{\odot}$) and small $f$ are required
(green in Figure \ref{fig:c_ca_mg}).
We note that the gap
in the [C/Mg] constraint, which is only seen in the hypernova models,
stems from the explosive Mg production in the energetic explosion
(Section \ref{sec:bestfit}). The resultant [C/Mg] ratio at the
gap is lower than the observed [C/Mg] ratio.
For the three other iron-poor stars showing
carbon enhancement, the best-fit value of $M_{\rm mix}({\rm out})$
is as large as that required for
SMSS 0313-6708 with $\log f\leq -2$.
The obtained parameters
$M_{\rm mix}({\rm out})$ and $f$
remain similar if we take the average of Ca I and Ca II abundances
instead of Ca I abundances, which gives 0.2 dex systematically
lower [C/Ca] than in Table 1.
They also remain similar
if only C, Mg, and Ca abundances are taken into
account in the fitting. This is because these elements
are synthesized in the different layers and dominant in each layer
(the thick lines in Figure \ref{fig:model}) and
thus their abundance ratios are
mostly determined by the mixing-fallback.
The abundance pattern of SDSS J102915$+$172927,
which does not show carbon enhancement, is best reproduced by the 40$M_{\odot}$
model with $M_{\rm mix}({\rm out})= 6.0 M_{\odot}$ and $\log f=-0.9$ (asterisk in
Figure \ref{fig:c_ca_mg}). If we use the average of Ca I and Ca II abundances,
a model with $M_{\rm mix}({\rm out})= 13.9 M_{\odot}$ and $\log f=-1.9$
best reproduces the observed abundances
(gray triangles with the dotted line in the
bottom panel of Figure \ref{fig:hmpump}).
The differences in masses, energies and the state of mixing-fallback
in the fitted models may explain populations
with and without carbon enhancement in the Galactic halo stars
\citep[e.g.][]{norris13}. Analyses
based on a larger sample are required to examine
whether or not the properties of Pop III supernovae/hypernovae can reproduce
the abundance patterns and the relative fraction of the C-enhanced
and the C-normal metal-poor populations
in the Galactic halo.
The gray-filled region in Figure \ref{fig:c_ca_mg} represents
the range of the parameters that
give the ejected Fe mass less than
$10^{-3} M_{\odot}$, which presumably corresponds to faint Pop III supernovae.
All three stars showing the C enhancement are located
in the faint supernova region.
\section{Discussion}
\label{sec:discussion}
As shown in the previous section, the
abundance measurements (C, Mg, and Ca) in
SMSS 0313-6708 are reproduced with
the Pop III supernova/hypernova yields
($(M, E_{51})=(25, 1), (25, 10), (40, 1)$ and $(40, 30)$),
with the model
parameters corresponding to faint supernova/hypernova with
extensive mixing and prominent fallback.
In order to discriminate
models with different masses and energies, additional
abundance measurements for oxygen as well as
iron-peak elements including V, Mn, Co, and Cu
are particularly useful.
The ejected mass of $^{40}$Ca is only
$\sim 10^{-7}-10^{-8} M_{\odot}$ in the faint supernovae/hypernovae
models. To be compatible with
the observed calcium abundance ([Ca/H]$=-7.0$),
the supernova ejecta should be diluted
with $\sim 10^{3}- 10^{4}M_{\odot}$ of the primordial gas.
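This dilution estimate follows from dividing the ejected Ca mass by the Ca mass fraction implied by [Ca/H]$=-7.0$. In the sketch below, the solar Ca mass fraction ($\sim 6.5\times10^{-5}$, read off the \citet{asplund09} composition) is our assumed value:

```python
# Back-of-the-envelope dilution: mass of primordial gas over which the
# ejected Ca must be spread to give the observed [Ca/H] = -7.0.
# X_CA_SUN ~ 6.5e-5 is our assumed solar Ca mass fraction.

X_CA_SUN = 6.5e-5          # solar Ca mass fraction (assumed value)

def dilution_mass(m_ca_ejected, ca_over_h, x_ca_sun=X_CA_SUN):
    """Gas mass (in Msun) needed to reach the target [Ca/H]; the ejecta
    mass itself is negligible in comparison."""
    x_ca_gas = x_ca_sun * 10.0**ca_over_h
    return m_ca_ejected / x_ca_gas

for m_ca in (1e-7, 1e-8):  # ejected Ca mass range of the faint models
    print(f"M(Ca) = {m_ca:.0e} Msun -> dilute with ~{dilution_mass(m_ca, -7.0):.1e} Msun")
```

For the quoted $^{40}$Ca yields of $10^{-8}-10^{-7} M_{\odot}$ this gives dilution masses of order $10^{3}-10^{4}M_{\odot}$, consistent with the text.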
In the case of the supernova models with $E_{51}=1$, this is
consistent with the suggested relation between the supernova energy
and the swept-up gas mass of primordial composition \citep{thornton98},
as adopted in \citet{tominaga07b},
for the assumed number density of hydrogen $1<n_{H}<100$ cm$^{-3}$.
On the other hand, this relation predicts that the hypernova
sweeps up a much larger amount of hydrogen
($\gtrsim 10^{5}M_{\odot}$ for $n_{H}<10^4$ cm$^{-3}$)
than the above values.
Recent cosmological simulations of the transport of the metals
synthesized in a Pop III supernova, however,
suggest a wide range of
metal abundances in the interstellar gas clouds
after an explosion of a Pop III star \citep{ritter12},
and thus the hypernova
could work as the source of the chemical
enrichment for the formation of stars with [Ca/H]$\lesssim -7$.
In our model, Ca is produced by hydrostatic/explosive O burning
and incomplete Si burning in the Pop III supernova or
hypernova with masses $25$ or $40 M_{\odot}$.
This is different from the 60$M_{\odot}$ model adopted in \citet{keller14},
where Ca originates from the hot CNO cycle during the main-sequence phase.
Synthesis of $\sim 10^{-7}M_{\odot}$ Ca in the hot CNO cycle
is also seen in the $100 M_{\odot}$ models of \citet{umeda05}.
On the other hand, Ca produced in this mechanism is not seen
in the 25 and 40$M_{\odot}$ progenitors analyzed in this paper.
The mass fraction of Ca near the bottom of the hydrogen layer in
these progenitors is
$\log X_{\rm Ca}<-10$.
In order to clarify which of these nucleosynthesis sites is
responsible for the observed Ca, we note a prediction that
differs between the two scenarios.
Our models suggest that a certain amount of Fe distributed in the
inner region as well as explosively synthesized
Ca are ejected as a result of the assumed mixing at $M_{r}=2-6M_{\odot}$.
This results in [Fe/Ca] ratios of $\sim -2$ to $0$,
depending on the $M_{\rm cut}({\rm ini})$. Consequently,
our models of the faint supernova/hypernova
predict the metallicity distribution
function (MDF) to be continuous even below [Fe/H]$<-6$.
On the other hand, the model adopted in \citet{keller14}, in which
Ca is produced in the hot-CNO cycle,
predicts [Fe/Ca]$\lesssim-3$, which is not observed in
other extremely iron-poor stars. Because of the distinct
Ca production sites, the MDF could be
discontinuous in the most metal-poor region.
Future photometric and spectroscopic surveys to
discover the lowest-metallicity stars and determine their MDF will provide clues to discriminate
between these mechanisms.
The models adopted
in this work suggest that the faint Pop III supernovae could be the origin of
the observed abundance patterns and the variation among the
most iron-poor carbon-rich stars.
To understand physics of faint Pop III supernovae,
multi-dimensional simulations are necessary.
The large-scale mixing suggested for the carbon-enhanced stars is
not predicted by models with Rayleigh-Taylor mixing alone
\citep{joggerst09}.
Instead, a more likely origin of such large-scale mixing-fallback would be
the jet-induced supernova/hypernova, where the inner material can be ejected
along the jet-axis while a large fraction of the material along
the equatorial
plane falls back \citep{tominaga09}.
\acknowledgements
This work has been supported by the Grant-in-Aid for JSPS Fellows
(M.N.I.) and for Scientific Research of the JSPS (23224004, 23540262,
26400222), and by WPI Initiative, MEXT, Japan.
The authors thank
S. Keller, M. Asplund, and K. Lind for fruitful discussion.
\section{Introduction}
\label{sec:introduction}
Sensitivity analysis, i.e. the assessment of how much uncertainty in a given model output is conveyed by each model input, is a fundamental step to judge the quality of model-based inferences \cite{Saltelli2008, Jakeman2006, Eker2018}. Among the many sensitivity indices available, variance-based indices are widely regarded as the gold standard because they are model-free (no assumptions are made about the model), global (they account for interactions between the model inputs) and easy to interpret \cite{Saltelli2002b, Iooss2015, Becker2014}. Given a model of the form $y=f(\bm{x})$, $\bm{x}=(x_1, x_2, ...,x_i,..., x_k)\in \mathbb{R}^k$, where $y$ is a scalar output and $x_1,...,x_k$ are the $k$ independent model inputs, the variance of $y$ is decomposed into conditional terms as
\begin{equation}
V(y)=\sum_{i=1}^{k}V_i+\sum_{i}\sum_{i<j}V_{ij}+\dots+V_{1,2,\dots,k} \,,
\label{eq:decomposition}
\end{equation}
where
\begin{equation}
\begin{aligned}
V_i = V_{x_{i}}\big[E_{\bm{x}_{\sim i}}(y | x_i)\big] \hspace{4mm}
V_{ij} &= V_{x_{i}, x_{j}}\big[E_{\bm{x}_{\sim i, j}}(y | x_i, x_j)\big] \hspace{4mm}\\
& - V_{x_{i}}\big[E_{\bm{x}_{\sim i}}(y | x_i)\big] \\
& - V_{x_{j}}\big[E_{\bm{x}_{\sim j}}(y | x_j)\big]
\end{aligned}
\label{eq:Ex_i}
\end{equation}
%
and so on up to the $k$-th order. The notation $\bm{x}_{\sim i}$ means \emph{all-but-$x_i$}. By dividing each term in Equation~\ref{eq:decomposition} by the unconditional model output variance $V(y)$, we obtain the first-order indices for single inputs ($S_i$), pairs of inputs ($S_{ij}$), and for all higher-order terms. First-order indices thus provide the proportion of $V(y)$ caused by each term and are widely used to rank model inputs according to their contribution to the model output uncertainty, a setting known as \emph{factor prioritization} \parencite{Saltelli2008}.
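For a simple additive function the first-order indices can be estimated by brute force, fixing $x_i$ and averaging over the remaining inputs, and compared against their analytic values. The sketch below is a pure-Python illustration; the test function and sample sizes are our choices:

```python
# Brute-force estimate of S_i = V[E(y|x_i)] / V(y) for an additive linear
# function f = x1 + 2*x2 + 3*x3 with x_i ~ U(0,1).  With no interactions,
# the analytic indices are c_i^2 / sum(c_j^2) = 1/14, 4/14, 9/14.
import random
import statistics

random.seed(42)

def f(x1, x2, x3):
    return x1 + 2.0 * x2 + 3.0 * x3

def first_order_index(i, n_outer=400, n_inner=400):
    """Fix x_i at n_outer values; estimate E(y|x_i) with n_inner draws of
    the remaining inputs; then take the variance of the conditional means."""
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = random.random()
        ys = []
        for _ in range(n_inner):
            x = [random.random() for _ in range(3)]
            x[i] = xi
            ys.append(f(*x))
        cond_means.append(statistics.fmean(ys))
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

print([round(first_order_index(i), 2) for i in range(3)])
# analytic values: 1/14 ~ 0.07, 4/14 ~ 0.29, 9/14 ~ 0.64
```

Since the function is purely additive, the three estimates sum to roughly one; the double loop is of course far less efficient than the matrix-based estimators discussed next.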
\textcite{Homma1996a} also proposed the calculation of the total-order index $T_i$, which measures the first-order effect of a model input jointly with its interactions up to the $k$-th order:
\begin{equation}
T_i=1 - \frac{V_{\bm{x}_{\sim i}}\big[E_{x_i}(y | \bm{x}_{\sim i})\big]}{V(y)} = \frac{E_{\bm{x}_{\sim i}}\big[V_{x_{i}}(y | \bm{x}_{\sim i})\big]}{V(y)} \,.
\label{eq:Ti}
\end{equation}
When $T_i \approx 0$, it can be concluded that $x_i$ has a negligible contribution to $V(y)$. For this reason, total-order indices have been applied to distinguish influential from non-influential model inputs and reduce the dimensionality of the uncertain space, a setting known as \emph{factor-fixing} \parencite{Saltelli2008}.
The most direct computation of $T_i$ is via Monte Carlo (MC) estimation because it does not impose any assumption on the functional form of the response function, unlike metamodeling approaches \parencite{LeGratiet2017, Saltelli1999}. The Fourier Amplitude Sensitivity Test (FAST) may also be used to calculate $T_i$; it involves transforming the input variables into periodic functions of a single frequency variable, sampling the model and analysing the sensitivity of the input variables using Fourier analysis in the frequency domain \cite{cukier1973study, cukier1978nonlinear}. While innovative, FAST is sensitive to the characteristic frequencies assigned to the input variables and is not a very intuitive method; for these reasons it has mostly been superseded by Monte Carlo approaches, or by metamodels when computational expense is a serious issue. In this work we focus on the former.
MC methods require generating a $(N, 2k)$ base sample matrix with either random or quasi-random numbers (e.g. Latin Hypercube Sampling, Sobol' quasi-random numbers \cite{Sobol1967, Sobol1976}), where each row is a sampling point and each column a model input. The first $k$ columns are allocated to an $\bm{A}$ matrix and the remaining $k$ columns to a $\bm{B}$ matrix, which are known as the ``base sample matrices''. Any point in either $\bm{A}$ or $\bm{B}$ can be indicated as $x_{vi}$, where $v$ and $i$ respectively index the row (from 1 to $N$) and the column (from 1 to $k$). Then, $k$ additional $\bm{A}_{B}^{(i)}$ ($\bm{B}_{A}^{(i)}$) matrices are created, where all columns come from $\bm{A}$ ($\bm{B}$) except the $i$-th column, which comes from $\bm{B}$ ($\bm{A}$). The numerator in Equation~\ref{eq:Ti} is finally estimated using the model evaluations obtained from the $\bm{A}$ ($\bm{B}$) and $\bm{A}_{B}^{(i)}$ ($\bm{B}_{A}^{(i)}$) matrices. Some estimators may also use a third or further base sample matrices (i.e. $\bm{A}, \bm{B}, \bm{C}, \hdots, \bm{X}$), although the use of more than three matrices has recently been shown to be inefficient by \textcite{LoPiano2021}.
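This matrix construction, together with two common estimators (the Saltelli et al. 2010 first-order estimator and the Jansen 1999 total-order estimator), can be sketched as follows. The Ishigami function is used as the test case; the sample size, seed and plain-MC sampling are our choices for this stdlib-only illustration:

```python
# Build A, B and A_B^(i) matrices and estimate S_i and T_i for the
# Ishigami function, x_i ~ U(-pi, pi), using plain Monte-Carlo sampling.
import math
import random

random.seed(7)

def ishigami(x1, x2, x3, a=7.0, b=0.1):
    return math.sin(x1) + a * math.sin(x2) ** 2 + b * x3 ** 4 * math.sin(x1)

k, N = 3, 4096

def u():
    return random.uniform(-math.pi, math.pi)

A = [[u() for _ in range(k)] for _ in range(N)]
B = [[u() for _ in range(k)] for _ in range(N)]
fA = [ishigami(*row) for row in A]
fB = [ishigami(*row) for row in B]
mean = sum(fA + fB) / (2 * N)
var = sum((y - mean) ** 2 for y in fA + fB) / (2 * N)

S, T = [], []
for i in range(k):
    # A_B^(i): all columns from A except column i, which comes from B
    ABi = [A[v][:i] + [B[v][i]] + A[v][i + 1:] for v in range(N)]
    fABi = [ishigami(*row) for row in ABi]
    # Saltelli et al. (2010) first-order estimator
    S.append(sum(fB[v] * (fABi[v] - fA[v]) for v in range(N)) / N / var)
    # Jansen (1999) total-order estimator
    T.append(sum((fA[v] - fABi[v]) ** 2 for v in range(N)) / (2 * N) / var)

print([round(s, 2) for s in S])  # analytic: 0.31, 0.44, 0.00
print([round(t, 2) for t in T])  # analytic: 0.56, 0.44, 0.24
```

The gap between $T_1$ and $S_1$ reflects the $x_1$--$x_3$ interaction term of the Ishigami function, while $S_3 \approx 0$ together with $T_3 > 0$ shows why $T_i$, not $S_i$, is the right measure for factor fixing.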
\subsection{Total-order estimators and uncertainties in the benchmark settings}
The search for efficient and robust total-order estimators is an active field of research \cite{Homma1996a, Jansen1999, Saltelli2008, Janon2014, Glen2012, Azzini2020, Monod2006a, Razavi2016a}. Although some works have compared their asymptotic properties (i.e. \cite{Janon2014}), most studies have promoted empirical comparisons where different estimators are benchmarked against known test functions and specific sample sizes. However valuable these empirical studies may be, \textcite{Becker2020} observed that their results are very much conditional on the choice of model, its dimensionality and the selected number of model runs. It is hard to say from previous studies whether an estimator outperforming another truly reflects its higher accuracy or simply its better performance under the narrow statistical design of the study. Below we extend the list of factors which \textcite{Becker2020} regards as influential in a given benchmarking exercise and discuss how they affect the relative performance of sensitivity estimators.
\begin{itemize}
\item \textit{The sampling method:} The creation of the base sample matrices can be done using Monte-Carlo (MC) or quasi Monte-Carlo (QMC) methods \cite{Sobol1967, Sobol1976}. Compared to MC, QMC allows one to map the input space more effectively, as it leaves smaller unexplored volumes (Fig.~S1). However, \textcite{Kucherenko2011} observed that MC methods might help obtain more accurate sensitivity indices when the model under examination has important high-order terms. Both MC and QMC have been used when benchmarking sensitivity indices \cite{Jansen1999, Saltelli2010a}.
\item \textit{The form of the test function:} some of the most commonly used functions in SA are the \textcite{Ishigami1990}'s, the Sobol' G and its variants \parencite{Sobol1998, Saltelli2010a}, the \textcite{Bratley1988a}'s or the set of functions presented in \textcite{Kucherenko2011} \parencite{Janon2014, Azzini2020, Saltelli2010a, LoPiano2021}. Despite being analytically tractable, these functions capture only one possible range of model behaviour, and the effects of nonlinearities and nonadditivities are typically unknown in real models. This \emph{black-box} nature of models has become more of a concern in the last decades due to the increase in computational power and code complexity (which prevents the analyst from intuitively grasping the model's behaviour \parencite{Borgonovo2016}), and to the higher demand for model transparency \parencite{Eker2018, Saltelli2019b, Saltelli2020a}. This renders the functional form of the model similar to a random variable \parencite{Becker2020}, something not accounted for by previous works \parencite{Janon2014, Azzini2020, Saltelli2010a, LoPiano2021}.
\item \textit{The function dimensionality:} many studies focus on low-dimensional problems, either by using test functions that only require a few model inputs (e.g. the Ishigami function, where $k=3$), or by using test functions with a flexible dimensionality, but setting $k$ at a small value of e.g. $k\leq8$ (\textcite{Sobol1998}'s G or \textcite{Bratley1988a} functions). This approach trades comprehensiveness for computational manageability: by neglecting higher dimensions, it is difficult to tell which estimator might work best in models with tens or hundreds of parameters. Examples of such models can be readily found in the Earth and Environmental Sciences domain \parencite{Sheikholeslami2019a}, including the Soil and Water Assessment Tool (SWAT) model, where $k=50$ \parencite{Sarrazin2016}, or the Mod\'{e}lisation Environmentale-Surface et Hydrologie (MESH) model, where $k=111$ \parencite{Haghnegahdar2017}.
\item \textit{The distribution of the model inputs}: the large majority of benchmarking exercises assume uniformly-distributed inputs $p(\bm{x})\in U(0,1)^k$ \parencite{Janon2014, Azzini2019, Saltelli2010a, LoPiano2021}. However, there is evidence that the accuracy of $T_i$ estimators might be sensitive to the underlying model input distributions, to the point of overturning the model input ranks \parencite{Shin2013, Paleari2016}. Furthermore, in uncertainty analysis -- e.g. in decision theory -- analysts may use distributions with peaks at the most likely values derived, for instance, from an expert elicitation stage.
\item \textit{The number of model runs:} sensitivity test functions are generally not computationally expensive and can be run without much concern for computational time. This is frequently not the case for real models, whose high dimensionality and complexity might set a constraint on the total number of model runs available. Under such restrictions, the performance of the estimators of the total-order index depends on their efficiency (how accurate they are given the budget of runs that can be allocated to each model input). There are no specific guidelines as to which total-order estimator might work best under these circumstances \parencite{Becker2020}.
\item \textit{The performance measure selected:} typically, a sensitivity estimator has been considered to outperform the rest if, on average, it displays a smaller mean absolute error (MAE), computed as
\begin{equation}
\text{MAE}=\frac{1}{p}\sum_{v=1}^{p} \left ( \frac{\sum_{i=1}^{k} | T_i - \hat{T}_i|}{k} \right )\,,
\label{eq:MAE}
\end{equation}
where $p$ is the number of replicas of the sample matrix, and $T_i$ and $\hat{T}_i$ the analytical and the estimated total-order index of the $i$-th input. The MAE is appropriate when the aim is to assess which estimator better approaches the true total-order indices, because it averages the error for both influential and non-influential indices. However, the analyst might be more interested in using the estimated indices $\bm{\hat{T}}=\{\hat{T}_1,\hat{T}_2,...,\hat{T}_i,...,\hat{T}_k\}$ to accurately rank parameters or screen influential from non-influential model inputs \parencite{Saltelli2008}. In such a context, the MAE may be best substituted or complemented with a measure of rank concordance between the vectors $\bm{r}$ and $\bm{\hat{r}}$, which reflect the ranks in $\bm{T}$ and $\bm{\hat{T}}$ respectively, such as Spearman's $\rho$ or Kendall's $W$ coefficient \parencite{Spearman1904, Kendall1939, Becker2020}. It can also be the case that disagreements on the exact ranking of low-ranked parameters may have no practical importance because the interest lies in the correct identification of the top ranks only \parencite{Sheikholeslami2019a}. \textcite{Savage1956} scores or other measures that emphasize this top-down correlation are then a more suitable choice.
\end{itemize}
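The two families of performance measures discussed above (absolute error and rank concordance) can be sketched in a few lines. The following is an illustrative Python implementation, not the code used in this study; \texttt{T} and \texttt{T\_hat} stand for the analytical and estimated total-order indices:

```python
import numpy as np
from scipy.stats import kendalltau

def mae(T, T_hat):
    """Mean absolute error between analytical and estimated total-order indices."""
    T, T_hat = np.asarray(T), np.asarray(T_hat)
    return np.mean(np.abs(T - T_hat))

def rank_concordance(T, T_hat):
    """Kendall tau-b between the ranks implied by T and T_hat."""
    r = np.argsort(np.argsort(-np.asarray(T)))        # rank 0 = most influential
    r_hat = np.argsort(np.argsort(-np.asarray(T_hat)))
    tau, _ = kendalltau(r, r_hat)
    return tau
```

Since Kendall's tau-b is invariant to monotone transformations, computing it on the derived ranks is equivalent to computing it on the indices themselves.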
Here we benchmark the performance of eight different MC-based formulae available to estimate $T_i$ (Table~\ref{tab:Ti_estimators}). While the list is not exhaustive, they reflect the research conducted on $T_i$ over the last 20 years: from the classic estimators of \textcite{Homma1996a, Jansen1999, Saltelli2008} up to the new contributions by \textcite{Janon2014}, \textcite{Glen2012}, \textcite{Azzini2019} and \textcite{Razavi2016a, Razavi2016b}. In order to reduce the influence of the benchmarking design on the assessment of the estimators' accuracy, we treat the sampling method $\tau$, the underlying model input distribution $\phi$, the number of model runs $N_t$, the test function $\varepsilon$, its dimensionality and degree of non-additivity ($k,k_2,k_3$) and the performance measure $\delta$ as random parameters. This better reflects the diversity of models and sensitivity settings available to the analyst. By relaxing the dependency of the results on these benchmark parameters\footnote{We refer to the set of benchmarking assumptions as \emph{benchmarking parameters} or \emph{parameters}. This is intended to distinguish them from the inputs of each test function generated by the metafunction, which we refer to as inputs.}, we define an unprecedentedly large setting where all formulae can prove their accuracy. We therefore extend \textcite{Becker2020}'s approach by testing a wider set of Monte Carlo estimators, by exploring a wider range of benchmarking assumptions and by performing a formal SA on these assumptions. The aim is to provide a much more global comparison of available MC estimators than exists in the literature, and to investigate how the benchmarking parameters may affect the relative performance of the estimators. Such information can help identify estimators that are not only efficient on a particular case study, but also robust to a wide range of practical situations.
\begingroup
\renewcommand{\arraystretch}{1.7}
\begin{table}[!ht]
\centering
\caption{Formulae to compute $T_i$. $f_0$ and $V(y)$ are estimated according to the original papers. For estimators 2 and 5, $f_0=\frac{1}{N}\sum_{v=1}^{N}f(\bm{A})_v$. For estimators 1, 2 and 5, $V(y)=\frac{1}{N}\sum_{v=1}^{N} \left [ f(\bm{A})_v-f_0 \right]^2$ \cites[Eq.~4.16]{Saltelli2008}[Eqs.~15, 20]{Homma1996a}. For estimator 3, $f_0 = \frac{1}{N} \sum_{v=1}^{N}\frac{f(\bm{A})_v + f(\bm{A}_{B} ^{(i)})_v}{2} $ and $V(y)=\frac{1}{N} \sum_{v=1}^{N} \frac{f(\bm{A})_v^2 + f(\bm{A}_{B} ^{(i)})_v^2}{2} - f_0^2$ \cite[Eq.~15]{Janon2014}. In estimator 4, $\langle f(\bm{A})_v \rangle$ is the mean of $f(\bm{A})_v$. We use a simplified version of the Glen and Isaacs estimator because spurious correlations are zero by design. As for estimator 7, we refer to it as pseudo-Owen given its use of a $\bm{C}$ matrix and its identification with \textcite{Owen2013} in \textcite{Iooss2020}, where we retrieve the formula from. $V(y)$ in Estimator 7 is computed as in Estimator 3 following \textcite{Iooss2020}, whereas $V(y)$ in Estimator 8 is computed as in Estimator 1.}
\label{tab:Ti_estimators}
\begin{tabular}{llp{3.2cm}}
\toprule
Nº & Estimator & Author \\
\midrule
1 & $\frac{\frac{1}{2N} \sum_{v=1}^{N } \left [ f(\bm{A})_v - f(\bm{A}_{B} ^{(i)})_v \right ] ^ 2}{V(y)}$ & \textcite{Jansen1999} \\
2 & $\frac{V(y) - \frac{1}{N} \sum_{v = 1}^{N} f(\bm{A})_v f(\bm{A}_{B} ^{(i)})_v + f_0^2}{V(y)}$ & \textcite{Homma1996a} \\
3 & $1 - \frac{\frac{1}{N} \sum_{v=1}^{N}f(\bm{A})_v f(\bm{A}_{B} ^{(i)})_v - f_0^2}{V(y)}$ & \textcite{Janon2014} \newline \textcite{Monod2006a} \\
4 & $ 1 - \left [\frac{1}{N-1}\sum_{v=1}^{N} \frac{\left [ f(\bm{A})_v - \left \langle f(\bm{A})_v\right \rangle \right ] \left[ f(\bm{A}_{B} ^{(i)})_v - \left \langle f(\bm{A}_{B} ^{(i)})_v\right \rangle \right ]}{\sqrt{V\left [f(\bm{A})_v\right ] V\left [f(\bm{A}_{B} ^{(i)})_v \right]}} \right ]
$ & \textcite{Glen2012} \\
5 & $1 - \frac{\frac{1}{N} \sum_{v=1}^{N} f(\bm{B})_v f(\bm{B}^{(i)}_A)_v - f_0^2}{V(y)}$ & \textcite{Saltelli2008} \\
6 & $\frac{\sum_{v = 1}^{N} [ f(\bm{B})_v - f(\bm{B}^{(i)}_A)_v ] ^ 2 + [ f(\bm{A})_v - f(\bm{A}^{(i)}_B)_v ] ^ 2}{\sum_{v = 1} ^ {N} [ f(\bm{A})_v - f(\bm{B}) _v ] ^ 2 + [ f(\bm{B}^{(i)}_A)_v - f(\bm{A}^{(i)}_B)_v ] ^ 2}$ & \textcite{Azzini2019, Azzini2020} \\
7 & $\frac{V(y) - \left [ \frac{1}{N} \sum_{v=1} ^ {N} \left \{\left [ f(\bm{B})_v - f(\bm{C}_{B} ^{(i)})_v \right ] \left [ f(\bm{B}_{A} ^{(i)})_v - f(\bm{A})_v \right ] \right \} \right ]}{V(y)}$ & pseudo-Owen \\
8 & $\frac{E_{x^*_{\sim_{i}}}\left [ \gamma_{x^*{_\sim i}}(h_i)\right] + E_{x^*{_\sim i}} \left [ C_{x^*{_\sim i}}(h_i) \right ] }{V(y)}$ & \textcite{Razavi2016a, Razavi2016b} (see SM). \\
\bottomrule
\end{tabular}
\end{table}
\endgroup
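As an illustration of how the formulae in Table~\ref{tab:Ti_estimators} operate on the model evaluations, a minimal Python sketch of estimators 1 (Jansen) and 3 (Janon/Monod) follows; \texttt{f\_A} and \texttt{f\_ABi} are assumed to hold the vectors $f(\bm{A})$ and $f(\bm{A}_{B}^{(i)})$:

```python
import numpy as np

def jansen_Ti(f_A, f_ABi):
    """Estimator 1 (Jansen 1999): half the mean squared difference over V(y)."""
    V_y = np.var(f_A)                       # (1/N) * sum (f(A)_v - f0)^2
    return np.mean((f_A - f_ABi) ** 2) / (2 * V_y)

def janon_Ti(f_A, f_ABi):
    """Estimator 3 (Janon et al. 2014 / Monod et al. 2006)."""
    f0 = np.mean((f_A + f_ABi) / 2)
    V_y = np.mean((f_A ** 2 + f_ABi ** 2) / 2) - f0 ** 2
    return 1 - (np.mean(f_A * f_ABi) - f0 ** 2) / V_y
```

For a test function $y=x_1$ with two inputs, both sketches return $\hat{T}_2=0$ (the elementary effect of a non-influential input vanishes) and $\hat{T}_1\approx1$ up to sampling error.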
\section{Assessment of the uncertainties in the benchmarking parameters}
\label{sec:materials}
In this section we formulate the benchmarking parameters as random variables and assess how the performance of estimators is dependent on them by performing a sensitivity analysis. In essence this is a \emph{sensitivity analysis of sensitivity analyses} \cite{Puy2020}, and a natural extension of a similar uncertainty analysis in a recent work by \textcite{Becker2020}. The use of global sensitivity analysis tools to better understand the properties of estimators can give insights into how estimators behave in different scenarios that are not available through analytical approaches.
\subsection{The setting}
The variability in the benchmark settings ($\tau, N_t, k, k_2, k_3, \phi, \epsilon, \delta$) is described by probability distributions (Table~\ref{tab:parameters}). We assign uniform distributions (discrete or continuous) to each parameter. In particular, we choose $\tau\sim\mathcal{DU}(1, 2)$ to check how the performance of $T_i$ estimators is conditioned by the use of Monte-Carlo ($\tau=1$) or Quasi Monte-Carlo ($\tau=2$) methods in the creation of the base sample matrices. For $\tau=2$ we use the Sobol' sequence scrambled according to \textcite{Owen1995} to avoid repeated coordinates at the beginning of the sequence. The total number of model runs and inputs are respectively described as $N_t\sim\mathcal{DU}(10, 1000)$ and $k\sim\mathcal{DU}(3,100)$ to explore the performance of the estimators in a wide range of $N_t,k$ combinations. Given the sampling constraints set by the estimators' reliance on either a $\bm{B}$, $\bm{B}_{A} ^{(i)}$, $\bm{A}_{B} ^{(i)}$ or $\bm{C}_{B} ^{(i)}$ matrices (Table~\ref{tab:Ti_estimators}), we modify the space defined by ($N_t, k$) to a non-rectangular domain (we provide more information on this adjustment in Section~\ref{sec:algorithm}).
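The choice coded by $\tau$ can be sketched with scipy's quasi Monte-Carlo module; the function name is ours, and scipy's \texttt{scramble=True} applies an Owen-type scrambling to the Sobol' sequence:

```python
import numpy as np
from scipy.stats import qmc

def base_sample(N, k, tau, seed=42):
    """Base sample matrix: tau = 1 -> Monte Carlo; tau = 2 -> scrambled Sobol'."""
    if tau == 1:
        return np.random.default_rng(seed).random((N, k))
    sampler = qmc.Sobol(d=k, scramble=True, seed=seed)  # Owen-type scrambling
    return sampler.random(N)  # N should preferably be a power of 2
```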
\begin{table}[ht]
\centering
\caption{Summary of the parameters and their distributions. $\mathcal{DU}$ stands for discrete uniform.}
\label{tab:parameters}
\begin{tabular}{llc}
\toprule
Parameter & Description & Distribution \\
\midrule
$\tau$ & Sampling method & $\mathcal{DU}(1, 2)$ \\
$N_t$ & Total number of model runs & $\mathcal{DU}(10,1000)$\\
$k$ & Number of model inputs & $\mathcal{DU}(3,100)$ \\
$\phi$ & Probability distribution of the model inputs & $\mathcal{DU}(1, 8)$ \\
$\varepsilon$ & Randomness in the test function & $\mathcal{DU}(1, 200)$ \\
$k_2$ & Fraction of pairwise interactions & $\mathcal{U}(0.3, 0.5)$ \\
$k_3$ & Fraction of three-wise interactions & $\mathcal{U}(0.1, 0.3)$ \\
$\delta$ & Selection of the performance measure & $\mathcal{DU}(1, 2)$ \\
\bottomrule
\end{tabular}
\end{table}
For $\phi$ we set $\phi\sim\mathcal{DU}(1,8)$ to ensure an adequate representation of the most common shapes in the $(0,1)^k$ domain. Besides the normal distribution truncated at $(0,1)$ and the uniform distribution, we also take into account four beta distributions parametrized with distinct $\alpha$ and $\beta$ values and a logitnormal distribution (Fig.~\ref{fig:metafunction}a). The aim is to check the response of the estimators under a wide range of probability distributions, including U-shaped distributions and distributions with different degrees of skewness.
\begin{figure}[ht]
\centering
\includegraphics[keepaspectratio]{metafunctions_distributions-1}
\caption{The metafunction approach. a) Probability distributions included in $\phi$. $N_T$ stands for truncated normal distribution. b) Univariate functions included in the metafunction ($f_1(x)=$ cubic, $f_2(x)=$ discontinuous, $f_3(x)=$ exponential, $f_4(x)=$ inverse, $f_5(x)=$ linear, $f_6(x)=$ no effect, $f_7(x)=$ non-monotonic, $f_8(x)=$ periodic, $f_9(x)=$ quadratic, $f_{10}(x)=$ trigonometric).}
\label{fig:metafunction}
\end{figure}
We link each distribution in Fig.~\ref{fig:metafunction}a to an integer value from 1 to 7. For instance, if $\phi=1$, the joint probability distribution of the model inputs is described as $p(x_1,\hdots,x_k)=\mathcal{U}(0,1)^k$. If $\phi=8$, we create a vector $\bm{\phi}=\{\phi_1,\phi_2,...,\phi_i,...,\phi_k\}$ by randomly sampling the seven distributions in Fig.~\ref{fig:metafunction}a, and use the $i$-th distribution in the vector to describe the uncertainty of the $i$-th input. This last case examines the behavior of the estimators when several distributions are used to characterize the uncertainty in the model input space.
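The mapping from $\phi$ to a set of marginal distributions can be sketched as below. The exact parametrizations of the beta distributions in Fig.~\ref{fig:metafunction}a are our assumption, chosen only to reproduce the described shapes (skewed, U-shaped, peaked):

```python
import numpy as np
from scipy import stats

# Illustrative parametrizations: the alpha/beta values of Fig. 2a may differ.
DISTS = [
    lambda u: u,                                      # 1: uniform
    stats.truncnorm(-1, 1, loc=0.5, scale=0.5).ppf,   # 2: normal truncated at (0, 1)
    stats.beta(2, 6).ppf,                             # 3: right-skewed
    stats.beta(6, 2).ppf,                             # 4: left-skewed
    stats.beta(0.8, 0.8).ppf,                         # 5: U-shaped
    stats.beta(2, 2).ppf,                             # 6: symmetric, peaked
    lambda u: 1 / (1 + np.exp(-stats.norm.ppf(u))),   # 7: logitnormal(0, 1)
]

def transform_inputs(U, phi, seed=0):
    """Map a uniform (0,1) sample matrix column-wise to the distributions coded by phi."""
    N, k = U.shape
    if phi == 8:   # mixed case: one randomly drawn distribution per input
        idx = np.random.default_rng(seed).integers(0, 7, size=k)
    else:
        idx = np.full(k, phi - 1)
    return np.column_stack([DISTS[idx[i]](U[:, i]) for i in range(k)])
```

Working through inverse cumulative distribution functions keeps the approach compatible with both MC and QMC base matrices.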
\subsubsection{The test function}
The parameter $\varepsilon$ operationalizes the randomness in the form and execution of the test function. Our test function is an extended version of \textcite{Becker2020}'s metafunction, which randomly combines $p$ univariate functions in a multivariate function of dimension $k$. Here we consider the 10 univariate functions listed in Fig.~\ref{fig:metafunction}b, which represent common responses observed in physical systems and in classic SA test functions (see \textcite{Becker2020} for a discussion on this point). We note that an alternative approach would be to construct orthogonal basis functions which could allow analytical evaluation of true sensitivity indices for each generated function; however, this extension is left for future work.
We construct the test function as follows:
\begin{enumerate}
\item Let us consider a sample matrix such as
\begin{equation}
\bm{M}=
\begin{bmatrix}
x_{11} & x_{12} & \cdots & x_{1i} & \cdots & x_{1k} \\
x_{21} & x_{22} & \cdots & x_{2i} & \cdots & x_{2k} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
x_{v1} & x_{v2} & \cdots & x_{vi} & \cdots & x_{vk} \\
\vdots & \vdots & \ddots & \vdots & \ddots & \vdots \\
x_{N1} & x_{N2} & \cdots & x_{Ni} & \cdots & x_{Nk} \\
\end{bmatrix}
\end{equation}
where every point $\bm{x}_v=x_{v1}, x_{v2}, \hdots, x_{vk}$ represents a given combination of values for the $k$ inputs and $x_i$ is a model input whose distribution is defined by $\phi$.
\item Let $\bm{u}=\{u_1,u_2,...,u_k\}$ be a $k$-length vector formed by randomly sampling with replacement the ten functions in Fig.~\ref{fig:metafunction}b. The $i$-th function in $\bm{u}$ is then applied to the $i$-th model input: for instance, if $k=4$ and $\bm{u}=\{u_3,u_4,u_8, u_1\}$, then $f_3(x_1)=\frac{e^{x_1}-1}{e-1}$, $f_4(x_2)=(10-\frac{1}{1.1})^{-1}(x_2 + 0.1)^{-1}$, $f_8(x_3)=\frac{\sin(2\pi x_3)}{2}$, and $f_1(x_4)=x_4^3$. The elements in $\bm{u}$ thus represent the first-order effects of each model input.
\item Let $\bm{V}$ be a $(n, 2)$ matrix, for $n=\frac{k!}{2!(k-2)!}$, the number of pairwise combinations between the $k$ inputs of the model. Each row in $\bm{V}$ thus specifies an interaction between two columns in $\bm{M}$. In the case of $k=4$ and the same elements in $\bm{u}$ as defined in the previous example,
\begin{equation}
\bm{V}=
\begin{bmatrix}
1 & 2\\
1 & 3 \\
1 & 4 \\
2 & 3 \\
2 & 4 \\
3 & 4 \\
\end{bmatrix}
\end{equation}
e.g., the first row yields $f_3(x_1) \cdot f_4(x_2)$, the second row $f_3(x_1) \cdot f_8(x_3)$, and so on until the $n$-th row. In order to follow the \emph{sparsity of effects principle} (most variations in a given model output should be explained by low-order interactions \parencite{Box2005}), the metafunction activates only a fraction of these effects: it randomly samples $\llceil k_2n \rrceil $ rows from $\bm{V}$, and computes the corresponding interactions in $\bm{M}$. $\llceil k_2n \rrceil $ is thus the number of pairwise interactions present in the function. We make $k_2$ an uncertain parameter described as $k_2\sim\mathcal{U}(0.3, 0.5)$ in order to randomly activate only between 30\% and 50\% of the available second-order effects in $\bm{M}$.
\item Same as before, but for third-order effects: let $\bm{W}$ be a ($m, 3$) matrix, for $m=\frac{k!}{3!(k-3)!}$, the number of three-wise combinations between the $k$ inputs in $\bm{M}$. For $k=4$ and $\bm{u}$ as before,
\begin{equation}
\bm{W}=
\begin{bmatrix}
1 & 2 & 3\\
1 & 2 & 4 \\
1 & 3 & 4 \\
2 & 3 & 4 \\
\end{bmatrix}
\end{equation}
e.g. the first row leads to $f_3(x_1)\cdot f_4(x_2) \cdot f_8(x_3)$, and so on until the $m$-th row. The metafunction then randomly samples $\llceil k_3m \rrceil$ rows from $\bm{W}$ and computes the corresponding interactions in $\bm{M}$. $\llceil k_3m \rrceil $ is therefore the number of three-wise interaction terms in the function. We also make $k_3$ an uncertain parameter described as $k_3\sim\mathcal{U}(0.1,0.3)$ to activate only between 10\% and 30\% of all third-order effects in $\bm{M}$. Note that $k_2>k_3$ because third-order effects tend to be less dominant than second-order effects (Table~\ref{tab:parameters}).
\item Three vectors of coefficients ($\bm{\alpha}, \bm{\beta}, \bm{\gamma}$) of length $k$, $n$ and $m$ are defined to represent the weights of the first, second and third-order effects respectively. These coefficients are generated by sampling from a mixture of two normal distributions $\Psi=0.3\mathcal{N}(0, 5) + 0.7\mathcal{N}(0, 0.5)$. This coerces the metafunction into replicating the \textcite{Pareto1906} principle (around 80\% of the effects are due to 20\% of the parameters), found to widely apply in SA \parencite{Box1986, Saltelli2008}.
\item The metafunction can thus be formalized as \begin{equation}
\begin{aligned}
y = & \sum_{i=1}^{k}\alpha_i f^{u_i}\phi_i(x_i) \\
& + \sum_{i=1}^{\llceil k_2n \rrceil}\beta_i f^{u_{V_{i,1}}} \phi_i(x_{V_{i,1}}) f^{u_{V_{i,2}}} \phi_i(x_{V_{i,2}}) \\
& + \sum_{i=1}^{\llceil k_3m \rrceil}\gamma_i f^{u_{W_{i,1}}} \phi_i(x_{W_{i,1}}) f^{u_{W_{i,2}}} \phi_i(x_{W_{i,2}}) f^{u_{W_{i,3}}} \phi_i(x_{W_{i,3}})\,.
\end{aligned}
\label{eq:metafunction}
\end{equation}
Note that there is randomness in the sampling of $\bm{\phi}$, the univariate functions in $\bm{u}$ and the coefficients in $(\bm{\alpha}, \bm{\beta}, \bm{\gamma})$. The parameter $\varepsilon$ assesses the influence of this randomness by fixing the starting point of the pseudo-random number sequence used for sampling the parameters just mentioned. We use $\varepsilon\sim\mathcal{DU}(1,200)$ to ensure that the same seed does not overlap with the same value of $N_t$, $k$ or any other parameter, an issue that might introduce determinism in a process that should be stochastic. In Figs.~S2--S3 we show the type of $T_i$ indices generated by this metafunction.
\end{enumerate}
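The construction described in steps 1--6 can be sketched as follows. The sketch uses a subset of the univariate functions of Fig.~\ref{fig:metafunction}b and illustrates the logic only; it is not the metafunction code used in this study:

```python
import numpy as np
from itertools import combinations

# A subset of the univariate functions of Fig. 2b (illustrative).
FUNCS = [
    lambda x: x ** 3,                              # cubic
    lambda x: (np.exp(x) - 1) / (np.e - 1),        # exponential
    lambda x: x,                                   # linear
    lambda x: np.zeros_like(x),                    # no effect
    lambda x: np.sin(2 * np.pi * x) / 2,           # periodic
    lambda x: x ** 2,                              # quadratic
]

def metafunction(X, k2=0.4, k3=0.2, seed=0):
    """Randomly assembled test function with first, second and third-order terms."""
    rng = np.random.default_rng(seed)
    N, k = X.shape
    u = rng.integers(0, len(FUNCS), size=k)        # one univariate function per input
    F = np.column_stack([FUNCS[u[i]](X[:, i]) for i in range(k)])
    V = np.array(list(combinations(range(k), 2)))  # candidate pairwise interactions
    W = np.array(list(combinations(range(k), 3)))  # candidate three-wise interactions
    V = V[rng.choice(len(V), int(np.ceil(k2 * len(V))), replace=False)]
    W = W[rng.choice(len(W), int(np.ceil(k3 * len(W))), replace=False)]
    # Coefficients drawn from the mixture 0.3 N(0, 5) + 0.7 N(0, 0.5)
    mix = lambda n: np.where(rng.random(n) < 0.3,
                             rng.normal(0, 5, n), rng.normal(0, 0.5, n))
    alpha, beta, gamma = mix(k), mix(len(V)), mix(len(W))
    y = F @ alpha
    y = y + sum(beta[i] * F[:, p] * F[:, q] for i, (p, q) in enumerate(V))
    y = y + sum(gamma[i] * F[:, p] * F[:, q] * F[:, r] for i, (p, q, r) in enumerate(W))
    return y
```

Fixing the seed, as $\varepsilon$ does in the benchmark, makes the randomly assembled function fully reproducible.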
Finally, we describe the parameter $\delta$ as $\delta\sim\mathcal{DU}(1,2)$. If $\delta=1$, we compute the Kendall $\tau$-b correlation coefficient between $\bm{\hat{r}}$ and $\bm{r}$, the estimated and the ``true'' ranks calculated from $\bm{\hat{T}}$ and $\bm{T}$ respectively. This aims at evaluating how well the estimators in Table~\ref{tab:Ti_estimators} rank all model inputs. If $\delta=2$, we compute the Pearson correlation between $\bm{r}$ and $\bm{\hat{r}}$ after transforming the ranks to Savage scores \cite{Savage1956}. This setting examines the performance of the estimators when the analyst is interested in ranking only the most important model inputs. Savage scores are given as
\begin{equation}
Sa_i=\sum_{j=i}^{k}\frac{1}{j}\,,
\label{eq:savage_scores}
\end{equation}
where $i$ is the rank assigned to the $i$-th element of a vector of length $k$. If $x_1>x_2>x_3$, the Savage scores would then be $Sa_1=1+\frac{1}{2}+\frac{1}{3}$, $Sa_2=\frac{1}{2}+\frac{1}{3}$, and $Sa_3=\frac{1}{3}$. The parameter $\delta$ thus assesses the accuracy of the estimators in properly ranking the model inputs; in other words, when they are used in a factor prioritization setting \cite{Saltelli2008}.
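A minimal Python sketch of Equation~\ref{eq:savage_scores} and of the $\delta=2$ performance measure (Pearson correlation on Savage scores):

```python
import numpy as np

def savage_scores(T):
    """Savage scores Sa_i = sum_{j=i}^{k} 1/j, where i is the rank implied by T."""
    T = np.asarray(T)
    k = len(T)
    ranks = np.empty(k, dtype=int)
    ranks[np.argsort(-T)] = np.arange(1, k + 1)              # rank 1 = largest T_i
    harm = np.cumsum(1 / np.arange(1, k + 1)[::-1])[::-1]    # harm[i-1] = sum_{j=i}^k 1/j
    return harm[ranks - 1]

def savage_correlation(T, T_hat):
    """Pearson correlation on Savage scores (the delta = 2 performance measure)."""
    return np.corrcoef(savage_scores(T), savage_scores(T_hat))[0, 1]
```

Because the scores decay harmonically with rank, agreement on the top-ranked inputs dominates the correlation, which is precisely the intended emphasis.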
In order to examine also how accurate the estimators are in approaching the ``true'' indices, we run an extra round of simulations with the MAE as the only performance measure, which we compute as
\begin{equation}
\text{MAE}= \frac{\sum_{i=1}^{k} | T_i - \hat{T}_i|}{k}\,.
\label{eq:MAE_no}
\end{equation}
Note that, unlike Equation~\ref{eq:MAE}, Equation~\ref{eq:MAE_no} does not make use of replicas. This is because the effect of the sampling is averaged out in our design by simultaneously varying all parameters in many different simulations.
\subsection{The execution of the algorithm}
\label{sec:algorithm}
We examine how sensitive the performance of total-order estimators is to the uncertainty in the benchmark parameters $\tau, N_t, k, k_2, k_3, \phi, \epsilon, \delta$ by means of a global SA. We create an $\bm{A}$, $\bm{B}$ and $k-1$ $\bm{A}^{(i)}_B$ matrices, each of dimension $(2^{11}, k)$, using Sobol' quasi-random numbers. In these matrices each column is a benchmark parameter described with the probability distributions of Table~\ref{tab:parameters} and each row is a simulation with a specific combination of $\tau, N_t, k,\hdots$ values. Note that we use $k-1$ $\bm{A}^{(i)}_B$ matrices because we group $N_t$ and $k$ and treat them as a single benchmark parameter given their correlation (see below).
Our algorithm runs rowwise over the $\bm{A}$, $\bm{B}$ and $k-1$ $\bm{A}^{(i)}_B$ matrices, for $v=1,2,\hdots,18,432$ rows. In the $v$-th row it does the following:
\begin{enumerate}
\item It creates five $(N_{t_v},k_v)$ matrices using the sampling method defined by $\tau_v$. The need for these five sub-matrices responds to the five specific sampling designs requested by the estimators of our study (Table~\ref{tab:Ti_estimators}). We use these matrices to compute the vector of estimated indices $\bm{\hat{T}}_i$ for each estimator:
\begin{enumerate}
\item An $\bm{A}$ matrix and $k_v$ $\bm{A}_{B}^{(i)}$ matrices, each of size $(N_v, k_v)$, $N_v=\llceil \frac{N_{t{_v}}}{k_v+1} \rrceil$ (Estimators 1--4 in Table~\ref{tab:Ti_estimators}).
\item An $\bm{A}$, $\bm{B}$ and $k_v$ $\bm{A}_{B}^{(i)}$ matrices, each of size $(N_v, k_v)$, $N_v=\llceil \frac{N_{t{_v}}}{k_v+ 2} \rrceil$ (Estimator 5 in Table~\ref{tab:Ti_estimators}).
\item An $\bm{A}$, $\bm{B}$ and $k_v$ $\bm{A}_{B}^{(i)}$ and $\bm{B}_{A}^{(i)}$ matrices, each of size $(N_v, k_v)$, $N_v=\llceil \frac{N_{t_{v}}}{2k_v+2} \rrceil$ (Estimator 6 in Table~\ref{tab:Ti_estimators}).
\item An $\bm{A}$, $\bm{B}$ and $k_v$ $\bm{B}_{A}^{(i)}$ and $\bm{C}_{B}^{(i)}$ matrices, each of size $(N_v, k_v)$, $N_v=\llceil \frac{N_{t{_v}}}{2k_v+2} \rrceil$ (Estimator 7 in Table~\ref{tab:Ti_estimators}).
\item A matrix formed by $N_v$ stars, each of size $k_v (\frac{1}{\Delta h} - 1) + 1$. Given that we set $\Delta h$ at 0.2 (see Supplementary Materials), $N_v=\llfloor \frac{N_{t_{v}}}{4k+1} \rrfloor$ (Estimator 8 in Table~\ref{tab:Ti_estimators}).
\end{enumerate}
The different sampling designs and the value for $k_v$ constrain the total number of runs $N_{t_v}$ that can be allocated to each estimator. Furthermore, given the probability distributions selected for $N_t$ and $k$ (Table~\ref{tab:parameters}), specific combinations of ($N_{t_{v}}, k_v$) lead to $N_v\leq1$, which is computationally unfeasible. To minimize these issues we force the comparison between estimators to approximate the same $N_{t_{v}}$ value. Since the sampling design structure of Razavi and Gupta is the most constraining, we use $N_v=\frac{2(4k+1)}{k+1}$ (for estimators 1--4), $N_v=\frac{2(4k+1)}{k+2}$ (for estimator 5) and $N_v=\frac{2(4k+1)}{2k+2}$ (for estimators 6--7) when $N_v\leq1$ in the case of Razavi and Gupta. This compels all estimators to explore a very similar portion of the ($N_t, k$) space, but $N_t$ and $k$ become correlated, which contradicts the requirement of independent inputs characterizing variance-based sensitivity indices \cite{Saltelli2008}. This is why we treat ($N_t,k$) as a single benchmark parameter in the SA.
\item It creates a sixth matrix, formed by an $\bm{A}$ and $k_v$ $\bm{A}_{B}^{(i)}$ matrices, each of size $(2^{11}, k_v)$. We use this sub-matrix to compute the vector of ``true'' indices $\bm{T}$, which could not be calculated analytically due to the wide range of possible functional forms created by the metafunction. Following \textcite{Becker2020}, we assume that a fairly accurate approximation to $\bm{T}$ could be achieved with a large Monte Carlo estimation.
\item The distribution of the model inputs in these six sample matrices is defined by $\phi_v$.
\item The metafunction runs over these six matrices simultaneously, with its functional form, degree of active second and third-order effects as set by $\varepsilon_v$, $k_{2_v}$ and $k_{3_v}$ respectively.
\item It computes the estimated sensitivity indices $\bm{\hat{T}}_v$ for each estimator and the ``true'' sensitivity indices $\bm{T}_v$ using the \textcite{Jansen1999} estimator, which is currently best practice in SA.
\item It checks the performance of the estimators. This is done in two ways:
\begin{enumerate}
\item If $\delta=1$, we compute the correlation between $\bm{\hat{r}}_v$ and $\bm{r}_v$ (obtained respectively from $\bm{\hat{T}}_v$ and $\bm{T}_v$) with Kendall tau, and if $\delta=2$ we compute the correlation between $\bm{\hat{r}}_v$ and $\bm{r}_v$ on Savage scores. The model output in both cases is the correlation coefficient $r$, with higher $r$ values indicating a better performance in properly ranking the model inputs.
\item We compute the MAE between $\bm{\hat{T}}_v$ and $\bm{T}_v$. In this case the model output is the MAE, with lower values indicating a better performance in approaching the ``true'' total-order indices.
\end{enumerate}
\end{enumerate}
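The run allocation of step 1 can be sketched as follows; the dictionary keys are our labels for the five sampling designs (a)--(e):

```python
from math import ceil, floor

def runs_per_design(N_t, k, dh=0.2):
    """Base sample size N allocated to each sampling design of step 1 (a)-(e)."""
    return {
        "a_jansen_homma_janon_glen": ceil(N_t / (k + 1)),      # A + k A_B^(i)
        "b_saltelli":                ceil(N_t / (k + 2)),      # A, B + k A_B^(i)
        "c_azzini":                  ceil(N_t / (2 * k + 2)),  # A, B + k A_B^(i), B_A^(i)
        "d_pseudo_owen":             ceil(N_t / (2 * k + 2)),  # A, B + k B_A^(i), C_B^(i)
        "e_razavi_gupta":            floor(N_t / (k * (1 / dh - 1) + 1)),  # stars
    }
```

For example, with $N_t=1000$ and $k=10$ the star-based design of Razavi and Gupta affords far fewer base points than the matrix-based designs, which is what motivates the adjustment described in step 1.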
\section{Results}
\subsection{Uncertainty analysis}
Under a factor prioritization setting (e.g. when the aim is to rank the model inputs in terms of their contribution to the model output variance), the most accurate estimators are Jansen, Razavi and Gupta, Janon/Monod and Azzini and Rosati. The distribution of $r$ values (the correlation between estimated and ``true'' ranks) when these estimators are used is highly negatively skewed, with median values of $\approx0.9$. Glen and Isaacs, Homma and Saltelli, Saltelli and pseudo-Owen lag behind and display median $r$ values of $\approx0.35$, with pseudo-Owen ranking last ($r\approx0.2$). The range of values obtained with these formulae is much more spread out and includes a significant number of negative $r$ values, suggesting that they overturned the true ranks in several simulations (Figs.~\ref{fig:boxplots}a, S4).
\begin{figure}[ht]
\centering
\includegraphics[keepaspectratio]{boxplots-1}
\caption{Boxplots summarizing the results of the simulations. a) Correlation coefficient between $\bm{\hat{r}}$ and $\bm{r}$, the vector of estimated and ``true'' ranks. b) Mean Absolute Error (MAE).}
\label{fig:boxplots}
\end{figure}
When the goal is to approximate the ``true'' indices, Janon/Monod, Jansen and Azzini and Rosati also offer the best performance. The median MAE obtained with these estimators is generally smaller than Glen and Isaacs' and pseudo-Owen's, and the distribution of MAE values is much narrower than that obtained with Homma and Saltelli, Saltelli or Razavi and Gupta. These three estimators are the least accurate and produce MAE values larger than $10^2$ in several simulations (Fig.~\ref{fig:boxplots}b). The volatility of Razavi and Gupta under the MAE is reflected in the numerous outliers produced and sharply contrasts with its very good performance in a factor prioritization setting (Fig.~\ref{fig:boxplots}a).
To obtain a finer insight into the structure of these results, we plot the total number of model runs $N_t$ against the function dimensionality $k$ (Fig.~\ref{fig:scatter_color}). This maps the performance of the estimators in the input space formed by all possible combinations of $N_t$ and $k$ given the specific design constraints of each formula. Under a factor prioritization setting, almost all estimators perform reasonably well at a very small dimensionality ($k\leq10, r >0.7$), regardless of the total number of model runs available. However, some differences unfold at higher dimensions: Saltelli, Homma and Saltelli, Glen and Isaacs and especially pseudo-Owen swiftly become inaccurate for $k>10$, even with large values for $N_t$. Azzini and Rosati display a very good performance overall except in the upper $N_t,k$ boundary, where most of the orange dots concentrate. The estimators of Jansen, Janon/Monod and Razavi and Gupta rank the model inputs almost flawlessly regardless of the region explored in the $N_t,k$ domain (Fig.~\ref{fig:scatter_color}a).
With regards to the MAE, Janon/Monod, Jansen and Azzini and Rosati maintain their high performance regardless of the $N_t,k$ region explored. The accuracy of Razavi and Gupta, however, drops at the upper-leftmost part of the $N_t,k$ boundary, where most of the largest MAE scores are located ($\mbox{MAE}>10$). In the case of Saltelli and Homma and Saltelli, the largest MAE values concentrate in the region of small $k$ regardless of the total number of model runs, a domain in which they achieved a high performance when the focus was on properly ranking the model inputs.
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio]{scatterplot-1}
\caption{Number of runs $N_t$ against the function dimensionality $k$. Each dot is a simulation with a specific combination of the benchmark parameters in Table~\ref{tab:parameters}. The greener (blacker) the color, the better (worse) the performance of the estimator. a) Accuracy of the estimators when the goal is to properly rank the model inputs, e.g. a factor prioritization setting. b) Accuracy of the estimators when the goal is to approach the ``true'' total-order indices.}
\label{fig:scatter_color}
\end{figure}
The presence of a non-negligible proportion of model runs with $r<0$ suggests that some estimators significantly overturned the true ranks (Figs.~\ref{fig:scatter_color}a, S4). To better examine this phenomenon, we re-plot Fig.~\ref{fig:scatter_color}b with just the simulations yielding $r<0$ (Fig.~S5). We observe that $r<0$ values not only appear in the region of small $N_t$, a foreseeable miscalculation derived from allocating an insufficient number of model runs to each model input: they also emerge at a relatively large $N_t$ and low $k$ in the case of pseudo-Owen, Saltelli and Homma and Saltelli. The Saltelli estimator actually concentrates in the $k<10$ zone most of the simulations with the lowest negative $r$ values (Fig.~S5). This suggests that rank reversing is not an artifact of our study design as much as a by-product of the volatility of these estimators when stressed by the sources of computational uncertainty listed in Table~\ref{tab:parameters}. Such strain may lead these estimators to produce a significant fraction of negative indices or indices beyond 1, thus effectively promoting $r<0$.
We calculate the proportion of $T_i<0$ and $T_i > 1$ in each simulation that yielded $r<0$. In the case of Glen and Isaacs and Homma and Saltelli, $r<0$ values are caused by the production of a large proportion of $T_i < 0$ (25\%--75\%, the $x$ axis in Fig.~\ref{fig:ti>1}). Pseudo-Owen and Saltelli suffer this bias too and in several simulations they also generate a large proportion of $T_i>1$ (up to 100\% of the model inputs, the $y$ axis in Fig.~\ref{fig:ti>1}). The production of $T_i<0$ and $T_i>1$ is caused by numerical errors and fostered by the values generated at the numerator of Equation~\ref{eq:Ti}: $T_i < 0$ may either derive from $E_{\bm{x}_{\sim i}}\big[V_{x_{i}}(y | \bm{x}_{\sim i})\big] < 0$ (e.g. Homma and Saltelli and pseudo-Owen) or $V_{\bm{x}_{\sim i}}\big[E_{x_i}(y | \bm{x}_{\sim i})\big] > V(y)$ (e.g. Saltelli), whereas $T_i > 1$ from $E_{\bm{x}_{\sim i}}\big[V_{x_{i}}(y | \bm{x}_{\sim i})\big] > V(y)$ (e.g. Homma and Saltelli and pseudo-Owen) or $V_{\bm{x}_{\sim i}}\big[E_{x_i}(y | \bm{x}_{\sim i})\big] < 0$ (e.g. Saltelli).
\begin{figure}
\centering
\includegraphics[keepaspectratio]{plot_further_negative-1}
\caption{Scatterplot of the proportion of $T_i<0$ against the proportion of $T_i>1$ mapped against the model output $r$. Each dot is a simulation. Only simulations with $r<0$ are displayed.}
\label{fig:ti>1}
\end{figure}
To better examine the efficiency of the estimators, we summarize their performance as a function of the number of runs available per model input, $N_t/k$ \parencite{Becker2020} (Figs.~\ref{fig:medians}, S6). This information is especially relevant to make an informed decision on which estimator to use in the context of a high-dimensional, computationally expensive model. Even when the budget of runs per input is low $\left [ (N_t/k) \in [2, 20] \right ]$, Razavi and Gupta, Jansen and Janon/Monod are very good at properly ranking model inputs ($r\approx0.9$), and are followed very closely by Azzini and Rosati ($r\approx0.8$). Saltelli, Homma and Saltelli and Glen and Isaacs come after ($r\approx0.3$), with pseudo-Owen scoring last ($r\approx0.2$). When the $N_t/k$ ratio is increased, all estimators improve their ranking accuracy and some quickly reach the asymptote: this is the case of Razavi and Gupta, Janon/Monod and Jansen, whose performance becomes almost flawless from $(N_t/k) \in [40, 60]$ onwards, and of Azzini and Rosati, which reaches its optimum at $(N_t/k) \in [60, 80]$. The accuracy of the other estimators does not seem to fully stabilize within the range of ratios examined. In the case of Homma and Saltelli and of Saltelli, their performance oscillates before plummeting at $(N_t/k) \in [200, 210]$, $(N_t/k) \in [240, 260]$ and $(N_t/k) \in [260, 280]$ due to several simulations yielding large $r<0$ values (Fig.~\ref{fig:medians}a).
Janon/Monod and Jansen are also the most efficient estimators when the MAE is the measure of choice, followed closely by Azzini and Rosati, Razavi and Gupta and Glen and Isaacs. Saltelli and Homma and Saltelli gain accuracy at higher $N_t / k$ ratios yet their precision diminishes all the same from $(N_t/k) \in [200, 210]$ onwards (Fig.~\ref{fig:medians}b).
\begin{figure}[ht]
\centering
\includegraphics[keepaspectratio]{median_plot-3}
\caption{Scatterplot of the model output $r$ against the number of model runs allocated per model input $(N_t / k)$. See Fig.~S6 for a visual display of all simulations and Fig.~S7 for an assessment of the number of model runs that each estimator has in each $N_t/k$ compartment.}
\label{fig:medians}
\end{figure}
\subsection{Sensitivity analysis}
When the aim is to rank the model inputs, the selection of the performance measure ($\delta$) has the highest first-order effect on the accuracy of the estimators (Fig.~\ref{fig:SA}a). The parameter $\delta$ is responsible for between 20\% (Azzini and Rosati) and 30\% (Glen and Isaacs) of the variance in the final $r$ value. On average, all estimators perform better when the ranking is conducted on Savage scores ($\delta=2$), i.e. when the focus is on ranking the most important model inputs only (Figs.~S8--S15). As for the distribution of the model inputs ($\phi$), it has a first-order effect on the accuracy of Azzini and Rosati ($\approx10$\%), Jansen and Janon/Monod ($\approx15$\%) and Razavi and Gupta ($\approx20$\%), regardless of whether the aim is factor prioritization ($r$) or approaching the ``true'' indices (MAE). The performance of these estimators drops perceptibly when the model inputs are distributed as $Beta(8,2)$ or $Beta(2,8)$ ($\phi=3$ and $\phi=4$, Figs.~S8--S23), suggesting that they may be especially stressed by skewed distributions. The selection of random or quasi-random numbers during the construction of the sample matrix ($\tau$) also directly conditions the accuracy of several estimators. If the aim is to approach the ``true'' indices (MAE), $\tau$ conveys from 17\% (Azzini and Rosati) to $\approx30$\% (Glen and Isaacs) of the model output variance, with all estimators except Razavi and Gupta performing better on quasi-random numbers ($\tau=2$, Figs.~S16--S23). In a factor prioritization setting, $\tau$ is mostly influential through interactions. Interestingly, the proportion of active second and third-order interactions ($k_2,k_3$) does not alter the performance of any estimator in any of the settings examined.
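For reference, the Savage score attached to rank $i$ among $k$ model inputs is commonly defined as $SS_i=\sum_{j=i}^{k}1/j$, so the top ranks carry most of the weight. A small Python helper illustrating the definition (our own sketch, not part of the study's code):

```python
def savage_scores(k):
    """Savage score for rank i (1 = most important) among k inputs:
    SS_i = sum_{j=i}^{k} 1/j.  Correlating Savage scores instead of raw
    ranks emphasizes agreement on the most important inputs."""
    return [sum(1.0 / j for j in range(i, k + 1)) for i in range(1, k + 1)]

scores = savage_scores(4)   # weights decay quickly: ~[2.08, 1.08, 0.58, 0.25]
```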
\begin{figure}[!ht]
\centering
\includegraphics[keepaspectratio]{eff_plot_sobol-1}
\caption{Sobol' indices. a) Individual parameters. b) Clusters of parameters. The cluster $f(x)$ includes all parameters that describe the uncertainty in the functional form of the model ($\varepsilon, k_2, k_3, \phi$). $N_t$ and $k$ are assessed simultaneously due to their correlation. Note that the MAE facet does not include the group ($\delta, \tau$) because $\delta$ (the performance measure used) is no longer an uncertain parameter in this setting.}
\label{fig:SA}
\end{figure}
To better understand the structure of the sensitivities, we compute Sobol' indices after grouping the individual parameters in three clusters, which we define based on their commonalities: the first group includes $(\delta,\tau)$ and reflects the influence of those parameters that can be defined by the sensitivity analyst during the setting of the benchmark exercise. The second combines ($\varepsilon,k_2,k_3,\phi$) and examines the overall impact of the model functional form, referred to as $f(x)$, which is often beyond the analyst's grasp. Finally, the third group includes $(N_t,k)$ only and assesses the influence of the sampling design on the accuracy of the estimators (we assume that the total number of model runs, besides being conditioned by the computing resources at hand, is also partially determined by the joint effect of the model dimensionality and the use of either $\bm{B}$, $\bm{A}_{B}^{(i)}$, $\bm{B}_{A}^{(i)}$ or $\bm{C}_{B}^{(i)}$ matrices) (Fig.~\ref{fig:SA}b).
The uncertainty in the functional form of the model [$f(x)$] is responsible for approximately 20\% of the variance in the performance of Azzini and Rosati, Janon/Monod or Jansen in a factor prioritization setting. For Glen and Isaacs, Homma and Saltelli, pseudo-Owen or Saltelli, $f(x)$ is influential only through interactions with the other clusters. When the MAE is the performance measure of interest, $f(x)$ has a much stronger influence on the accuracy of the estimators than the couple $(N_t,k)$, especially in the case of Glen and Isaacs ($\approx 40$\%). In any case, the accuracy of the estimators is significantly conditioned by interactions between the benchmark parameters. The sum of all individual $S_i$ indices plus the $S_i$ index of the $(N_t,k)$ cluster only explains from $\approx 45$\% (Saltelli) to $\approx 70$\% (Glen and Isaacs) of the estimators' variance in ranking the model inputs, and from $\approx 24$\% (pseudo-Owen) to $\approx 60$\% (Razavi and Gupta) of the variance in approaching the ``true'' indices.
\section{Discussion and conclusions}
\label{sec:discussion}
Here we design an eight-dimensional benchmark in which variance-based total-order estimators must prove their value in an unparalleled range of SA scenarios. By randomizing the parameters that condition their performance, we obtain a comprehensive picture of the advantages and disadvantages of each estimator and identify which particular benchmark factors make them more prone to error. Our work thus provides a thorough empirical assessment of state-of-the-art total-order estimators and contributes to defining best practices in variance-based SA. The study also aligns with previous works focused on testing the robustness of the tools available to sensitivity analysts, a line of inquiry that can be described as a \emph{sensitivity analysis of a sensitivity analysis} (SA of SA) \parencite{Puy2020}.
Our results support the assumption that the scope of previous benchmark studies is limited by the plethora of non-unique choices taken during the setting of the analysis \parencite{Becker2020}. We have observed that almost all decisions have a non-negligible effect: from the selection of the sampling method to the choice of the performance measure, the design prioritized by the analyst can influence the performance of the estimator in a non-obvious way, namely through interactions. The importance of non-additivities in conditioning performance suggests that the benchmarking of sensitivity estimators should no longer rely on statistical designs that change one parameter at a time (usually the number of model runs and, more rarely, the test function \cite{Janon2014, Azzini2019, Azzini2020, Saltelli2010a, LoPiano2021, Razavi2016a, Razavi2016b, Owen2013, Puy2020}). Such a setting reduces the uncertain space to a minimum and misses the effects that the interactions between the benchmark parameters have on the final accuracy of the estimator. If global SA is the recommended practice to fully explore the uncertainty space of models, sensitivity estimators, being algorithms themselves, should be likewise validated \parencite{Puy2020}.
Our approach also compensates for the scarcity of studies on the theoretical properties of estimators in the sensitivity analysis literature (see for instance \cite{Azzini2020a, Jansen1999}), and allows a more detailed examination of their performance than theoretical comparisons. Empirical studies like ours mirror the numerical character of sensitivity analysis when the indices cannot be calculated analytically, which is most of the time in ``real-world'' mathematical modeling.
Two recommendations emerge from our work: the estimators by Razavi and Gupta, Jansen, Janon/Monod or Azzini and Rosati should be preferred when the aim is to rank the model inputs. Jansen, Janon/Monod or Azzini and Rosati should also be prioritized if the goal is to estimate the ``true'' total-order indices. The drop in performance of Razavi and Gupta in the second setting may be explained by a bias at lower sample sizes, i.e. a consistent over-estimation of all total-order indices. This is because their estimator relies on a constant mean assumption whose validity degrades with larger values of $\Delta h$ \cite{Razavi2016a, Razavi2016b}. In order to remove this bias, $\Delta h$ should take very small values (e.g., $\Delta h = 0.01$), which may not be computationally feasible. Since the direction of this bias is the same for all parameters, it only affects the calculation of the ``true'' total-order indices, not the capacity of the estimator to properly rank the model inputs.
It is also worth stating that Razavi and Gupta's is the only estimator studied here that requires the analyst to define a tuning parameter, $\Delta h$. In this paper we have set $\Delta h=0.2$ after some preliminary trials with the estimator; other works have used different values (e.g. $\Delta h = 0.002$, $\Delta h = 0.1$, $\Delta h = 0.3$; \cite{Becker2020, Razavi2016a, Razavi2016b}). Selecting the most appropriate value for a given tuning parameter is not an obvious choice and this uncertainty can make an estimator volatile, as shown by \textcite{Puy2020} in the case of the PAWN index.
The fact that Glen and Isaacs, Homma and Saltelli, Saltelli and pseudo-Owen do not perform as well in properly ranking the model inputs and approaching the ``true'' total-order indices may be partially explained by their less efficient computation of elementary effects: by allowing the production of negative terms in the numerator these estimators also permit the production of negative total-order indices, thus leading to biased rankings or sensitivity indices. In the case of Saltelli, the use of a $\bm{B}$ matrix at the numerator and an $\bm{A}$ matrix at the denominator exacerbates its volatility (Table~\ref{tab:Ti_estimators}, Nº 5). Such inconsistency was corrected in \textcite{Saltelli2010a}.
The consistent robustness of Jansen, Janon/Monod and Azzini and Rosati makes their sensitivity to the uncertain parameters studied here almost negligible. They are already highly optimized estimators with not much room for improvement. Most of their performance is conditioned by the first and total-order effects of the model form jointly with the underlying probability distributions ($f(x)$ in Fig.~\ref{fig:SA}b), as well as by their sampling design ($N_t,k$), which are in any case beyond the analyst's control. As for the rest, their accuracy might be enhanced by allocating a larger number of model runs per input (if computationally affordable), and especially in the case of Homma and Saltelli, Saltelli and Glen and Isaacs, by restricting their use to low-dimensional models ($k<10$) and sensitivity settings that only require ranking the most important parameters (a \emph{restricted} factor prioritisation setting; \cite{Saltelli2008}). Nevertheless, their substantial volatility is considerably driven by non-additivities, a combination that makes them hard to tame and should raise caution about their use in any modeling exercise.
Our results slightly differ from \textcite{Becker2020}'s, who observed that Jansen outperformed Janon/Monod under a factor prioritization setting. We did not find any significant difference between these estimators. Although our metafunction approach is based on \textcite{Becker2020}'s, our study tests the accuracy of estimators in a larger uncertain space as we also account for the stress introduced by changes in the sampling method $\tau$, the underlying probability distributions $\phi$ or the performance measure selected $\delta$. These differences may account for the slightly different results obtained between the two papers.
Our analysis can be extended to other sensitivity estimators (i.e. moment-independent measures such as entropy-based ones \cite{Liu2006a}, the $\delta$-measure \cite{Borgonovo2007} or the PAWN index \cite{Pianosi2015, Pianosi2018}). Moreover, it holds potential to be used as a standard crash test every time a new sensitivity estimator is introduced to the modeling community. One of its advantages is its flexibility: \textcite{Becker2020}'s metafunction can be easily extended with new univariate functions or probability distributions, and the settings modified to check performance under different degrees of non-additivities or in a larger $(N_t,k)$ space. With some slight modifications it should also allow the production of functions with dominant low-order or high-order terms, labeled as Type B and C by \textcite{Kucherenko2011}. This should prompt developers of sensitivity indices to severely stress their estimators so that the modeling community and decision-makers can fully appraise how they deal with uncertainties.
\section{Code availability}
The \texttt{R} code to replicate our results is available in \textcite{Puy2020c} and in GitHub (\url{https://github.com/arnaldpuy/battle_estimators}). The uncertainty and sensitivity analysis have been carried out with the \texttt{R} package \texttt{sensobol} \cite{Puyk}, which also includes the test function used in this study.
\section{Acknowledgements}
We thank Saman Razavi for his insights on the behavior of the Razavi and Gupta estimator. This work has been funded by the European Commission (Marie Sk\l{}odowska-Curie Global Fellowship, grant number 792178 to A.P.).
\printbibliography
\end{document}
\section{Razavi and Gupta's estimator (VARS)}
Unlike the other total-order estimators examined in our paper, Razavi and Gupta's VARS (for Variogram Analysis of Response Surfaces \cite{Razavi2016a, Razavi2016b}) relies on the variogram $\gamma(.)$ and covariogram $C(.)$ functions to compute what they call the VARS-TO, for VARS Total-Order index.
Let us consider a function of factors $\bm{x}=(x_1, x_2, ..., x_k)\in \mathbb{R}^k$. If $\bm{x}_A$ and $\bm{x}_B$ are two generic points separated by a distance $\bm{h}$, then the variogram is calculated as
\begin{equation}
\gamma(\bm{x}_A-\bm{x}_B) = \frac{1}{2}V \left [y(\bm{x}_A) - y(\bm{x}_B) \right ]
\end{equation}
and the covariogram as
\begin{equation}
C(\bm{x}_A-\bm{x}_B) = COV \left [y(\bm{x}_A), y(\bm{x}_B) \right ]
\end{equation}
Note that
\begin{equation}
V \left [y(\bm{x}_A) - y(\bm{x}_B) \right ] = V \left [y(\bm{x}_A) \right ] + V \left [y(\bm{x}_B) \right ] - 2COV \left [ y(\bm{x}_A), y(\bm{x}_B) \right ]
\end{equation}
and since $V \left [ y(\bm{x}_A) \right ] = V \left [ y(\bm{x}_B) \right ]$, then
\begin{equation}
\gamma (\bm{x}_A - \bm{x}_B) = V \left [ y(\bm{x}) \right ] - C(\bm{x}_A, \bm{x}_B)
\label{eq:variogram}
\end{equation}
In order to obtain the total-order effect $T_i$, the variogram and covariogram are computed on all pairs of points spaced $h_i$ apart along the $x_i$ axis, with all other factors kept fixed. Thus equation~\ref{eq:variogram} becomes
\begin{equation}
\gamma_{x^*_{\sim i}}(h_i)=V(y|x^*_{\sim i})-C_{x^*_{\sim i}}(h_i)
\label{eq:var_fixed}
\end{equation}
where $x^*_{\sim i}$ is a fixed point in the space of non-$x_i$ factors. \textcite{Razavi2016a, Razavi2016b} suggest taking the mean value across the factors' space on both sides of equation~\ref{eq:var_fixed}, thus obtaining
\begin{equation}
E_{x^*_{\sim i}} \left [ \gamma_{x^*_{\sim i}}(h_i) \right ]=E_{x^*_{\sim i}} \left [ V(y|x^*_{\sim i}) \right ] -E_{x^*_{\sim i}} \left [ C_{x^*_{\sim i}}(h_i) \right ]
\end{equation}
which can also be written as
\begin{equation}
E_{x^*_{\sim i}} \left [ \gamma_{x^*_{\sim i}}(h_i) \right ]=V(y)T_i -E_{x^*_{\sim i}} \left [ C_{x^*_{\sim i}}(h_i) \right ]
\end{equation}
and therefore
\begin{equation}
T_i=\frac{E_{x^*_{\sim i}}\left [ \gamma_{x^*_{\sim i}}(h_i)\right] + E_{x^*_{\sim i}} \left [ C_{x^*_{\sim i}}(h_i) \right ] }{V(y)}
\label{eq:SM_VARS_ti}
\end{equation}
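A toy numerical transcription of equation~\ref{eq:SM_VARS_ti} may help fix ideas. In the sketch below, each row holds model outputs along a cross section of $x_i$ with the remaining coordinates fixed; the layout, the pairing of adjacent points and the squared-difference form of the variogram (which assumes zero-mean increments) are our illustrative choices, not the authors' implementation:

```python
import numpy as np

def vars_total_order(sections, var_y):
    """Toy VARS-TO: for each star center, estimate the directional variogram
    and covariogram from pairs of points spaced one step (h_i) apart along
    x_i, average both across centers, and divide by the unconditional V(y)."""
    gammas, covs = [], []
    for sec in sections:
        a, b = sec[:-1], sec[1:]                    # pairs h_i apart along x_i
        gammas.append(0.5 * np.mean((a - b) ** 2))  # gamma_{x*~i}(h_i)
        covs.append(np.mean(a * b) - np.mean(a) * np.mean(b))  # C_{x*~i}(h_i)
    return (np.mean(gammas) + np.mean(covs)) / var_y
```

For an input on which $y$ does not depend, every cross section is constant, both terms vanish and $T_i=0$; when $y$ depends on $x_i$ alone, the ratio approaches 1.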
The sampling scheme for VARS does not rely on $\textbf{A}, \textbf{B}, \textbf{A}_{B}^{(i)}...$ matrices, but on star centers and cross sections. Star centers are $N$ random points sampled across the input space. For each of these stars, $k$ cross sections of points spaced $\Delta h$ apart are generated, including and passing through the star center. Overall, the computational cost of VARS amounts to $N_t=N\left [k((1 /\Delta h )-1) + 1 \right]$.
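The run-count formula can be evaluated directly; a one-line Python helper (the function name is ours):

```python
def vars_total_runs(N, k, delta_h):
    """Total VARS cost N_t = N * [k * ((1 / delta_h) - 1) + 1]: N star
    centers, each with k cross sections of points spaced delta_h apart;
    the star center itself is shared by all sections of a star."""
    points_per_section = round(1.0 / delta_h) - 1   # points beyond the center
    return N * (k * points_per_section + 1)
```

With $\Delta h = 0.2$, as used in our setting, each cross section contributes $1/0.2 - 1 = 4$ runs per input in addition to the shared star center.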
\newpage
\section{Figures}
\begin{figure}[ht]
\centering
\includegraphics[keepaspectratio]{sampling_method-1}
\caption{Examples of Monte Carlo and quasi-Monte Carlo sampling in two dimensions. $N=200$.}
\label{fig:SM_sampling_method}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[keepaspectratio]{plot_proportion_meta-1}
\caption{Proportion of the total sum of first-order effects and of the active model inputs (defined as $T_i>0.05$) after 1000 random metafunction calls with $k\in(3,100)$. Note how the sum of first-order effects clusters around $0.8$ (thus evidencing the production of non-additivities) and how, on average, the number of active model inputs revolves around 10--20\%, thus reproducing the Pareto principle.}
\label{fig:SM_prove_meta}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[keepaspectratio]{show_metafunction-1}
\caption{Sobol' $T_i$ indices obtained after a run of the metafunction with the following parameter settings: $N=10^4$, $k=17$, $k_2=0.5$, $k_3=0.2$, $\varepsilon=666$. The error bars reflect the 95\% confidence intervals after bootstrapping ($R=10^2$). The indices have been computed with the \textcite{Jansen1999} estimator.}
\label{fig:SM_metafunction}
\end{figure}
\begin{figure}
\centering
\includegraphics[keepaspectratio]{negative_proportion-1}
\caption{Proportion of model runs yielding $r<0$.}
\label{fig:SM_negative}
\end{figure}
\begin{figure}
\centering
\includegraphics[keepaspectratio]{negative_r-1}
\caption{Scatter of the total number of model runs $N_t$ against the function dimensionality $k$ only for $r<0$.}
\label{fig:SM_negative_map}
\end{figure}
\begin{figure}
\centering
\includegraphics[keepaspectratio]{scatter_ratio-1}
\caption{Scatterplot of the correlation between $\bm{T}_i$ and $\bm{\hat{T}}_i$ ($r$) against the number of model runs allocated per model input $(N_t / k)$.}
\label{fig:SM_ratio}
\end{figure}
\begin{figure}
\centering
\includegraphics[keepaspectratio]{n_ntk_ratio-1}
\caption{Bar plot with the number of simulations conducted in each of the $N_t / k$ compartments assessed. All estimators have approximately the same number of simulations in each compartment.}
\label{fig:SM_ntk_ratio}
\end{figure}
\foreach \x in {1,...,16} {
\begin{figure}
\centering
\includegraphics[keepaspectratio, height=\textheight, width=\textwidth]{scatterplots_sens-\x}
\caption{Scatterplots of the model inputs against the model output. The red dots show the mean value in each bin (we have set the number of bins arbitrarily at 30).}
\label{fig:SM_scatters}
\end{figure}
}
\clearpage
\printbibliography
\end{document} |
\section{Introduction}\label{sec:intro}
Intense emission and rapid flux variability provide important clues to understanding the physical processes operating in the relativistic, magnetized jets of blazar sources. This subset of active galactic nuclei (AGN) comprises BL Lacertae objects (BL Lacs) and the high optical polarization flat-spectrum radio quasars (FSRQs), with jets launched from supermassive black hole (SMBH)--accretion disk systems \citep[for a recent review see,][]{Hovatta19}. The flux variability is observed at all frequencies of the electromagnetic spectrum on both long-term (decades to $\simeq$ day; \citealt{Ulrich97, Aller99, Ghosh00, Gopal-Krishna11, Goyal12, Gupta16, Gaur19, Hess17, Abdo10a}) and intranight timescales ($\leq$day; \citealt{Aharonian07, Albert07, Goyal13b, Bachev15, Ackermann16, Nalewajko17, Zhu18, Shukla18}). The two-component broadband spectral energy distribution arises from nonthermal radiation produced within the relativistic jet \citep[][]{Ghisellini08}. Within the leptonic scenario, particle (electron and positron) pairs, accelerated to GeV/TeV energies, produce synchrotron radiation in the presence of the magnetic field at lower frequencies (radio--to--optical/X-rays), while inverse Comptonization of seed photons (either the synchrotron photons themselves or thermal photons from the accretion disk) by the synchrotron-emitting particles produces emission at higher frequencies (X-rays--to--TeV $\gamma$-rays). Alternatively, within the hadronic scenario, direct synchrotron radiation by protons accelerated to PeV/EeV energies, or emission from secondaries, can give rise to the high-energy radiation \citep[e.g.,][]{Blandford19}. Well-defined flare emission has often been attributed to particle acceleration mechanisms related to shocks in the jet \citep[][]{Spada01, Marscher08}, while turbulence can mimic the observed fluctuations on long-term as well as short timescales \citep[][]{Marscher14}.
Annihilation of magnetic field lines at reconnection sites within the jet plasma can also impart energy to the particles \citep[][]{Giannios13, Sironi15}; this scenario is supported by the recent detection of minute-timescale variability at GeV energies for the blazar 3C\,279 \citep[][]{Shukla20}. On the other hand, flux changes on intranight timescales have also been associated with variable Doppler boosting factors related to changes in the viewing angle of the emitting plasma \citep[e.g.,][]{Gopal-Krishna12}, although in such a scenario frequency-independent variability is expected \citep[see, in this context,][]{Pasierb20}.
\begin{deluxetable*}{cccccccc}
\tablenum{1}
\tablecaption{Sample properties.\label{tab:sample}}
\tablewidth{0pt}
\tabletypesize{\small}
\tablehead{
\colhead{IAU name} & \colhead{RA(J2000)} & \colhead{Dec(J2000)} & SED & $z$ & V--mag & $M_{BH}$ & Reference for $M_{BH}$ \\
\colhead{} & \colhead{(h m s)} & \colhead{(d $\prime$ ${\prime\prime}$)} & \colhead{} & \colhead{} & \colhead{} & \colhead{(M$_\odot$)} & \colhead{}
}
\decimalcolnumbers
\startdata
0109$+$224 & 01 12 05.824 & $+$22 44 38.78 & BL Lac$^c$ & 0.265$^f$ & 15.66 & -- & \\
0235$+$164 & 02 38 38.930 & $+$16 36 59.27 & FSRQ$^c$ & 0.940$^f$ & 15.50 & 2.0$\times$10$^{8}$ & \citet{Raiteri07} \\
0420$-$014 & 04 23 15.800 & $-$01 20 33.06 & FSRQ$^c$ & 0.915$^f$ & 17.00 & 7.9$\times$10$^{8}$ & \citet{Liang03} \\
0716$+$714 & 07 21 53.448 & $+$71 20 36.36 & BL Lac$^c$ & 0.300$^f$ & 15.50 & 1.3$\times$10$^{8}$ & \citet{Liang03} \\
0806$+$315 & 08 09 13.440 & $+$31 22 22.90 & BL Lac$^d$ & 0.220$^g$ & 15.70 & -- & \\
0806$+$524 & 08 09 49.186 & $+$52 18 58.25 & BL Lac$^e$ & 0.138$^h$ & 15.59 & 7.9$\times$10$^{8}$ & \citet{Wu02} \\
0851$+$202$^a$ & 08 54 48.874 & $+$20 06 30.64 & BL Lac$^c$ & 0.306$^f$ & 15.43 & 1.5$\times$10$^{8}$ & \citet{Liang03} \\
1011$+$496 & 10 15 04.139 & $+$49 26 00.70 & BL Lac$^c$ & 0.200$^f$ & 16.15 & 2.1$\times$10$^{8}$ & \citet{Wu02} \\
1156$+$295 & 11 59 31.833 & $+$29 14 43.82 & FSRQ$^c$ & 0.729$^f$ & 14.41 & 7.9$\times$10$^{8}$ & \citet{Liang03} \\
1216$-$010 & 12 18 34.929 & $-$01 19 54.34 & BL Lac$^e$ & 0.415$^i$ & 15.64 & -- & \\
1219$+$285 & 12 21 31.690 & $+$28 13 58.50 & BL Lac$^c$ & 0.102$^f$ & 16.11 & 2.5$\times$10$^{7}$ & \citet{Liang03} \\
1253$-$055$^b$ & 12 56 11.166 & $-$05 47 21.52 & FSRQ$^c$ & 0.538$^f$ & 17.75 & 7.9$\times$10$^{8}$ & \citet{Sbarrato12} \\
1510$-$089 & 15 12 50.532 & $-$09 05 59.82 & FSRQ$^d$ & 0.360$^j$ & 16.54 & 4.0$\times$10$^{8}$ & \citet{Sbarrato12} \\
1553$+$113 & 15 55 43.044 & $+$11 11 24.36 & BL Lac$^c$ & 0.360$^f$ & 15.00 & -- & \\
\enddata
\tablecomments{
(1) the name of the blazar following the IAU convention. $^a$ also known as OJ\,287; $^b$ also known as 3C\,279;
(2) right ascension;
(3) declination;
(4) SED classification. $^c$\citet[][]{Healey08}; $^d$\citet[][]{Veron06}; $^e$\citet[][]{Plotkin08};
(5) spectroscopic redshift. $^f$\citet[][]{Healey08}; $^g$\citet{Falco98}; $^h$\citet{Bade98}; $^i$\citet{Dunlop89}; $^j$\citet{Thompson90}.
(6) typical optical V--band magnitude \citep{Veron10};
(7) mass of the SMBH;
(8) reference for the mass of the SMBH.
}
\end{deluxetable*}
The noise-like appearance of blazar light curves has prompted efforts to investigate variability power spectral densities (PSDs), i.e., the distribution of variability amplitudes over different Fourier frequencies (=timescale$^{-1}$). The blazar PSDs are mostly represented by power-law shapes, defined as P($\nu_k$) $\propto$ $\nu_k^{-\beta}$, where $\beta$(=1--3) is the slope and $\nu_k$ is the temporal frequency, indicating that the variability is a {\it correlated}, colored-noise type stochastic process \citep[see,][and references therein]{Goyal20}. Specifically, $\beta$$\simeq$1, $\simeq$2 and $\gtrsim$3 are known as long-memory/pink-noise, damped-random-walk/red-noise, and black-noise type stochastic processes, while $\beta$$\simeq$0 corresponds to an {\it uncorrelated}, white-noise type stochastic process \citep[][]{Press78, Schroeder91}. For a colored-noise type stationary stochastic process, one expects the slope of the PSD to flatten to 0 on longer timescales in order to preserve the finite variance of the process, leading to a relaxation timescale beyond which the variations are generated by uncorrelated processes. Moreover, it also means that different random realizations of the process will have different statistical moments (e.g., mean, sigma) due to statistical fluctuation of the process itself and not due to a change in the nature of the process, which indicates that the process is weakly non-stationary \citep[][]{Vaughan03}. Fluctuations resulting from such stochastic processes obey certain probability distributions, so the light curves tend to produce predictable PSDs. The PSD slope and normalization, as well as breaks, are of particular interest as they carry information about the parameters of the stochastic process and the `characteristic timescales' in the system, which can be related to physical parameters shaping the variability, such as the size of the emission zone or the particle cooling timescales \citep[][]{Sobolewska14, Finke14, Chen16}.
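To make the power-law scaling concrete, one can simulate an evenly sampled light curve with a prescribed PSD slope following the widely used Timmer \& K\"onig (1995) recipe and recover $\beta$ from the raw periodogram. The sketch below uses our own illustrative helpers with a plain least-squares fit, not the Monte Carlo fitting procedure adopted in this work:

```python
import numpy as np

def powerlaw_lightcurve(n, beta, rng):
    """Simulate an evenly sampled light curve whose PSD follows
    P(nu) ~ nu**(-beta), by drawing Gaussian Fourier components with
    standard deviation proportional to nu**(-beta/2)."""
    freqs = np.fft.rfftfreq(n, d=1.0)[1:]        # skip the zero frequency
    amp = freqs ** (-beta / 2.0)
    comps = amp * (rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size))
    return np.fft.irfft(np.concatenate(([0.0], comps)), n=n)

def psd_slope(x):
    """beta from a straight-line fit to the log-log raw periodogram."""
    freqs = np.fft.rfftfreq(x.size, d=1.0)[1:]
    power = np.abs(np.fft.rfft(x)[1:]) ** 2
    return -np.polyfit(np.log10(freqs), np.log10(power), 1)[0]

rng = np.random.default_rng(7)
lc = powerlaw_lightcurve(4096, 2.0, rng)         # red-noise/damped-random-walk type
```

A $\beta\simeq2$ input yields a recovered slope close to 2, whereas white noise yields a slope close to 0.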
The noise-like appearance of the light curves has been modeled as the aggregate flux arising from many cells behind a shock \citep[][see also \citet{Marscher14}, who models the flux and polarization light curves but does not provide PSDs]{Calafut15, Pollack16}, as opposed to emission from well-defined flares that could be attributed to single emission zones \citep[][]{Hughes85, Abdo10b}. The models of \citet[][]{Calafut15} and \citet[][]{Pollack16} compute the light curves and the PSDs, which are shaped by the combination of bulk Lorentz factor fluctuations and turbulence within the jets. In their hydrodynamic simulations of 2D jets, changes in the bulk Lorentz factor produce PSD slopes in the range 2.1 to 2.9, while turbulence produces PSD slopes in the range 1.7 to 2.3 \citep[][]{Pollack16}. The model of \citet[][]{O'Riordan17}, on the other hand, hypothesizes that turbulence in the magnetically arrested disk (MAD) shapes the variability of the synchrotron and IC emission components from the jet, with a cutoff of variability power at timescales governed by the light crossing time of the event horizon of the SMBH.
Unlike the long-term variability timescales, where the slopes of the PSDs of multiwavelength variability have been estimated for large samples of blazar sources \citep[in particular, $\beta$$\sim$2 for radio and optical and $\beta$$\sim$1 for $\gamma$-rays,][]{Max-Moerbeck14a, Park17, Nilsson18, Meyer19, Goyal20}, such studies remain scarce on intranight timescales. This is due to the fact that they require continuous pointing of an observing facility at a single target for many hours, which is usually not feasible due to scheduling constraints, weather, limited photon sensitivity (at high energies), etc. The intranight PSD slopes at GeV and TeV energies exhibited $\beta\sim$1 and 2 for the blazars 3C\,279 and PKS\,2155$-$304, respectively, using the {\it Fermi}-LAT and the High Energy Stereoscopic System data, but only when the blazars were in a flaring state \citep{Aharonian07, Ackermann16}. \citet[][]{Zhang19} obtained X-ray PSD slopes of 1.5, 3.1 and 1.4 using 40--180\,ks long {\it Suzaku} observations. In another study, \citet{Bhattacharyya20} obtained intranight X-ray PSD slopes equal to 2.7, 2.6, 1.9 and 2.7 for Mrk\,421, and 2.2, 2.8 and 2.9 for PKS\,2155$-$304, using 30--90\,ks long XMM-{\it Newton} observations. \citet[][]{Goyal17} obtained optical intranight PSD slopes in the range 1.5--4.1 for five monitoring sessions of the BL Lac PKS\,0735+178. \citet{Wehrle13} and \citet{Wehrle19} obtained the PSD slopes of blazar sources using the {\it Kepler} satellite in the range 1.2--3.8 on timescales ranging from half a day to a few months. Recently, \citet{Raiteri20} obtained a PSD slope $\sim$2.0 using the Transiting Exoplanet Survey Satellite ({\it TESS}) 2\,min integration time light curve for the blazar 0716+714 on variability timescales between a month and a few minutes.
In this respect, a few optical observatories with 1--2\,m class optical telescopes fitted with CCDs have been devoted to blazar/AGN monitoring programs since 1990 \citep[see, for a review,][]{Gopal-Krishna18}. The goal of the present study is to characterize the intranight variability of a large sample of blazars using the ARIES monitoring program, which was carried out between 1998 and 2010 and the results of which are presented in \citet[][]{Goyal13b}. The paper is organized as follows. The sample selection is given in Section~\ref{sec:sample}, while Section~\ref{sec:analysis} provides the details of the analysis method, in particular the derivation of the power spectral densities and the estimation of the best-fit PSD shapes using extensive numerical simulations of light curves. Section~\ref{sec:results} provides the main results, while a discussion and conclusions are given in Section~\ref{sec:discussions}.
\begin{figure*}[ht!]
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0109+224_29oct05.pdf}
\includegraphics[width=0.30\textwidth]{psd_0109+224_29oct05.pdf}
\includegraphics[width=0.30\textwidth]{beta_0109+224_29oct05.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0235+164_12nov19.pdf}
\includegraphics[width=0.30\textwidth]{psd_0235+164_12nov19.pdf}
\includegraphics[width=0.30\textwidth]{beta_0235+164_12nov19.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0235+164_14nov19.pdf}
\includegraphics[width=0.30\textwidth]{psd_0235+164_14nov19.pdf}
\includegraphics[width=0.30\textwidth]{beta_0235+164_14nov19.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0235+164_18nov03.pdf}
\includegraphics[width=0.30\textwidth]{psd_0235+164_18nov03.pdf}
\includegraphics[width=0.30\textwidth]{beta_0235+164_18nov03.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0420-014_19nov03.pdf}
\includegraphics[width=0.30\textwidth]{psd_0420-014_19nov03.pdf}
\includegraphics[width=0.30\textwidth]{beta_0420-014_19nov03.pdf}
}
\begin{minipage}{\textwidth}
\caption{
The intra-night variability power spectra of the blazar sources obtained in this study. Panel (a) presents the light curve on a linear scale (see text). Panel (b) presents the derived power spectrum down to the Nyquist sampling frequency of the (mean) observed data. The dashed line shows the `raw' periodogram, while the blue triangles and red circles give the logarithmically binned power spectrum and the best-fit power spectrum, respectively. The error on the best-fit PSD slope corresponds to a 98\% confidence limit. The dashed horizontal line corresponds to the statistical noise floor level due to measurement noise. Panel (c) shows the probability curve as a function of the input power spectrum slope. The source name and date of monitoring are presented at the top of each panel.\label{fig:analysis}
}
\end{minipage}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[ht!]
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0420-014_25oct09.pdf}
\includegraphics[width=0.30\textwidth]{psd_0420-014_25oct09.pdf}
\includegraphics[width=0.30\textwidth]{beta_0420-014_25oct09.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0716+714_01feb05.pdf}
\includegraphics[width=0.30\textwidth]{psd_0716+714_01feb05.pdf}
\includegraphics[width=0.30\textwidth]{beta_0716+714_01feb05.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0806+315_28dec98.pdf}
\includegraphics[width=0.30\textwidth]{psd_0806+315_28dec98.pdf}
\includegraphics[width=0.30\textwidth]{beta_0806+315_28dec98.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0806+524_04feb05.pdf}
\includegraphics[width=0.30\textwidth]{psd_0806+524_04feb05.pdf}
\includegraphics[width=0.30\textwidth]{beta_0806+524_04feb05.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0851+202_31dec99.pdf}
\includegraphics[width=0.30\textwidth]{psd_0851+202_31dec99.pdf}
\includegraphics[width=0.30\textwidth]{beta_0851+202_31dec99.pdf}
}
\begin{minipage}{\textwidth}
\caption{(continued) }
\end{minipage}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[ht!]
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0851+202_28mar00.pdf}
\includegraphics[width=0.30\textwidth]{psd_0851+202_28mar00.pdf}
\includegraphics[width=0.30\textwidth]{beta_0851+202_28mar00.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0851+202_17feb01.pdf}
\includegraphics[width=0.30\textwidth]{psd_0851+202_17feb01.pdf}
\includegraphics[width=0.30\textwidth]{beta_0851+202_17feb01.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_0851+202_12apr05.pdf}
\includegraphics[width=0.30\textwidth]{psd_0851+202_12apr05.pdf}
\includegraphics[width=0.30\textwidth]{beta_0851+202_12apr05.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1011+496_17feb10.pdf}
\includegraphics[width=0.30\textwidth]{psd_1011+496_17feb10.pdf}
\includegraphics[width=0.30\textwidth]{beta_1011+496_17feb10.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1011+496_07mar10.pdf}
\includegraphics[width=0.30\textwidth]{psd_1011+496_07mar10.pdf}
\includegraphics[width=0.30\textwidth]{beta_1011+496_07mar10.pdf}
}
\begin{minipage}{\textwidth}
\caption{(continued) }
\end{minipage}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[ht!]
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1156+295_31mar12.pdf}
\includegraphics[width=0.30\textwidth]{psd_1156+295_31mar12.pdf}
\includegraphics[width=0.30\textwidth]{beta_1156+295_31mar12.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1156+295_01apr12.pdf}
\includegraphics[width=0.30\textwidth]{psd_1156+295_01apr12.pdf}
\includegraphics[width=0.30\textwidth]{beta_1156+295_01apr12.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1156+295_02apr12.pdf}
\includegraphics[width=0.30\textwidth]{psd_1156+295_02apr12.pdf}
\includegraphics[width=0.30\textwidth]{beta_1156+295_02apr12.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1216-010_16mar02.pdf}
\includegraphics[width=0.30\textwidth]{psd_1216-010_16mar02.pdf}
\includegraphics[width=0.30\textwidth]{beta_1216-010_16mar02.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1219+285_19mar03.pdf}
\includegraphics[width=0.30\textwidth]{psd_1219+285_19mar03.pdf}
\includegraphics[width=0.30\textwidth]{beta_1219+285_19mar03.pdf}
}
\begin{minipage}{\textwidth}
\caption{(continued) }
\end{minipage}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[ht!]
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1219+285_20mar03.pdf}
\includegraphics[width=0.30\textwidth]{psd_1219+285_20mar03.pdf}
\includegraphics[width=0.30\textwidth]{beta_1219+285_20mar03.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1253-055_26jan06.pdf}
\includegraphics[width=0.30\textwidth]{psd_1253-055_26jan06.pdf}
\includegraphics[width=0.30\textwidth]{beta_1253-055_26jan06.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1253-055_28feb06.pdf}
\includegraphics[width=0.30\textwidth]{psd_1253-055_28feb06.pdf}
\includegraphics[width=0.30\textwidth]{beta_1253-055_28feb06.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1253-055_20apr09.pdf}
\includegraphics[width=0.30\textwidth]{psd_1253-055_20apr09.pdf}
\includegraphics[width=0.30\textwidth]{beta_1253-055_20apr09.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1510-089_01may09.pdf}
\includegraphics[width=0.30\textwidth]{psd_1510-089_01may09.pdf}
\includegraphics[width=0.30\textwidth]{beta_1510-089_01may09.pdf}
}
\begin{minipage}{\textwidth}
\caption{(continued) }
\end{minipage}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[ht!]
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1553+113_05may99.pdf}
\includegraphics[width=0.30\textwidth]{psd_1553+113_05may99.pdf}
\includegraphics[width=0.30\textwidth]{beta_1553+113_05may99.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1553+113_24jun09.pdf}
\includegraphics[width=0.30\textwidth]{psd_1553+113_24jun09.pdf}
\includegraphics[width=0.30\textwidth]{beta_1553+113_24jun09.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1553+113_15may10.pdf}
\includegraphics[width=0.30\textwidth]{psd_1553+113_15may10.pdf}
\includegraphics[width=0.30\textwidth]{beta_1553+113_15may10.pdf}
}
\hbox{
\includegraphics[width=0.30\textwidth]{lc_1553+113_16may10.pdf}
\includegraphics[width=0.30\textwidth]{psd_1553+113_16may10.pdf}
\includegraphics[width=0.30\textwidth]{beta_1553+113_16may10.pdf}
}
\begin{minipage}{\textwidth}
\caption{(continued) }
\end{minipage}
\end{figure*}
\section{Sample}\label{sec:sample}
The blazar light curves studied here are obtained from the samples of \citet{Goyal13b}, who studied the intra-night variability properties of different types of active galactic nuclei (AGN) using 262 intranight light curves. The AGNs monitored belong to the radio-quiet quasar, radio-intermediate quasar, radio-loud quasar, and blazar types, the last including BL Lacs and FSRQs. The \citeauthor{Goyal13b} blazar sample consists of 24 sources monitored over 85 sessions. The details of data gathering, the reduction procedure, the generation of differential light curves, and the statistical tests used to infer intranight variability are given in \citet[][]{Goyal13b}; we briefly describe them here. On each monitoring session, continuous CCD observations ($>$4 hr) of the target were performed, with the integration time of each frame chosen such that the flux measurement could be obtained with 0.2--0.5\% accuracy. The preprocessing (bias subtraction and flat fielding) of the raw CCD frames was done using the {\sc Image reduction and analysis facility (IRAF)} software, and the instrumental magnitudes of the target blazar and the comparison stars on the same CCD chip were derived using aperture photometry. The relative instrumental magnitude of the blazar was computed against one steady `star--star' pair, thereby producing two differential light curves (DLCs) for a blazar on a given monitoring session. The differential photometry technique is widely used in variability studies as it counters spurious AGN variability arising from varying atmospheric conditions (variable seeing or the presence of thin clouds during the monitoring session): any change seen in the blazar light should be accompanied by the same change in the comparison starlight, thereby keeping the difference unaltered. Next, the $F$-test was used to infer the statistical significance of variability at significance levels $\alpha$=0.01 and 0.05, corresponding to confidence levels of $>$99\% and $>$95\%, respectively. 
If the value of the $F$-test statistic for a DLC turned out to be larger than the critical value at $\alpha$=0.05 (0.01), the DLC was assigned a `probable variable' (`confirmed variable') status. If the test statistic turned out to be smaller than the critical value at $\alpha$=0.05, the DLC was assigned a `non-variable' status. In the present study, all the monitoring sessions where both DLCs showed a `confirmed variable' status are used. We also included two monitoring sessions where one of the two DLCs showed a `probable variable' status while the other showed a `confirmed variable' status. The monitoring sessions where the blazar DLCs showed a `non-variable' status are not used in this analysis, as the PSDs of `non-variable' light curves will be consistent with $\beta\sim0$, reflecting fluctuations arising from measurement errors. The above criterion reduced the sample to 15 blazars and 34 intranight light curves. We further excluded the BL Lac object PKS\,0735+178 from the sample, as the PSD analysis of its five intranight light curves has been reported in \citet{Goyal17}. Therefore, the current sample consists of 14 blazars which have shown statistically significant variability on 29 intranight monitoring sessions\footnote{Light curves can be obtained upon request.}. Table~\ref{tab:sample} lists the basic properties of the blazars.
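A rough sketch of this classification scheme, assuming the variance-ratio form of the $F$-test (the helper name, the synthetic DLCs, and the approximate $F$-table critical values for 39 degrees of freedom are illustrative choices of ours; the exact estimator follows \citealt{Goyal13b}):

```python
import numpy as np

def classify_dlc(dlc, err, crit95, crit99):
    """F-test sketch: ratio of the DLC variance to the mean photometric
    error variance, compared against tabulated critical values."""
    F = np.var(dlc, ddof=1) / np.mean(np.asarray(err) ** 2)
    if F > crit99:
        return "confirmed variable"
    if F > crit95:
        return "probable variable"
    return "non-variable"

# Illustrative DLCs with ~0.5% photometric errors (N = 40 points);
# the critical values ~1.70 (alpha = 0.05) and ~2.11 (alpha = 0.01)
# are approximate F-table entries for (39, 39) degrees of freedom.
t = np.linspace(0.0, 2.0 * np.pi, 40)
err = np.full(40, 0.005)
variable_dlc = 0.05 * np.sin(t)      # smooth ~5% swing
steady_dlc = 0.003 * np.cos(7 * t)   # fluctuations below the noise
```

With these inputs, the smooth 5\% swing yields $F\gg$ the $\alpha$=0.01 critical value, while the sub-noise wiggle does not exceed the $\alpha$=0.05 one.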
Next, we converted the differential magnitudes of the blazars from a logarithmic to a linear scale using the relation F$_{obs}$$=$F$_{0}$\,$\times 10^{-0.4\times m_{obs}}$, where F$_{0}$(=1) is the arbitrary zero point magnitude flux, $m_{obs}$ is the differential blazar magnitude relative to one comparison star, and F$_{obs}$ is the corresponding differential flux density, as it contains the contribution from the steady comparison star flux. The errors in the flux densities were derived using standard error propagation \citep{Bevington03} and scaled by a factor of 1.54 to account for the underestimation of the photometric errors by {\sc IRAF} \citep[see, for details,][and references therein]{Goyal13a}. We note that the differential blazar flux density can be scaled to proper fluxes using the appropriate F$_{0}$ for the R--band and the apparent magnitudes of the comparison star used. Figure~\ref{fig:analysis} (panel a) shows the flux density intranight light curves.
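A minimal sketch of this conversion and error scaling (the function name and the input magnitudes are illustrative):

```python
import numpy as np

def mag_to_flux(m_obs, dm_obs, F0=1.0, err_scale=1.54):
    """Convert differential magnitudes to linear differential flux
    densities, F_obs = F0 * 10**(-0.4 * m_obs), with propagated errors.
    The 1.54 factor compensates IRAF's underestimated photometric errors."""
    m_obs = np.asarray(m_obs, dtype=float)
    F = F0 * 10.0 ** (-0.4 * m_obs)
    # standard propagation: dF = |dF/dm| * dm = 0.4 ln(10) F dm
    dF = 0.4 * np.log(10.0) * F * np.asarray(dm_obs, dtype=float) * err_scale
    return F, dF

F, dF = mag_to_flux([-0.75, -0.70], [0.005, 0.005])
```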
\begin{deluxetable*}{ccccccccccc}
\tablenum{2}
\tablecaption{Summary of the observations and the PSD analysis.\label{tab:psd}}
\tablewidth{0pt}
\tabletypesize{\small}
\tablehead{
\colhead{IAU name} & \colhead{Date of obs.} & \colhead{Tel.} & \colhead{$T_{\rm obs}$} & $N_{obs}$ & $\psi$ & \colhead{$T_{\rm mean}$} &
\colhead{$\rm \log_{10}(P_{stat})$} & \colhead{$\log_{10}(\nu_k) $ range} & \colhead{{$\beta \pm err$}} & \colhead{$p_\beta^\ast$}\\
\colhead{} & \colhead{} & \colhead{} & \colhead{(hr)} & \colhead{} & \colhead{} & \colhead{(min)} & \colhead{$(\mathrm{rms/mean})^2$\,d} & \colhead{(d$^{-1}$)} & \colhead{} & \colhead{}
}
\decimalcolnumbers
\startdata
0109$+$224 & 2005 Oct 29 & ST & 7.1 & 36 & 3.98 & 11.9 & $-$7.08 & 0.53 to 1.47 &2.7$\pm$4.3 & 0.119 \\
0235$+$164 & 1999 Nov 12 & ST & 6.6 & 40 & 13.25& 9.70 & $-$5.87 & 0.56 to 1.74 &3.9$\pm$2.8 & 0.011 \\
& 1999 Nov 14 & ST & 6.2 & 34 & 10.59& 10.26 & $-$5.35 & 0.59 to 1.53 &3.7$\pm$3.2 & 0.057 \\
& 2003 Nov 18 & ST & 7.8 & 41 & 8.25 & 9.75 & $-$6.35 & 0.48 to 1.67 &4.0$\pm$2.5 & 0.007 \\
0420$-$014 & 2003 Nov 19 & ST & 6.7 & 38 & 2.22 & 10.55 & $-$7.18 & 0.55 to 1.73 &3.7$\pm$2.8 & 0.559 \\
& 2009 Oct 25 & ST & 4.5 & 21 & 5.14 & 12.73 & $-$5.94 & 0.73 to 1.67 &2.7$\pm$2.9 & 0.108 \\
0716$+$714 & 2005 Feb 1 & ST & 1.7 & 26 & 3.36 & 3.88 & $-$7.28 & 1.15 to 2.10 &3.5$\pm$2.3 & 0.048 \\
0806$+$315 & 1998 Dec 28 & ST & 7.3 & 36 & 16.66& 12.16 & $-$5.49 & 0.52 to 1.46 &3.0$\pm$2.4 & 0.134 \\
0806$+$524 & 2005 Feb 4 & ST & 7.2 & 29 & 1.31 & 14.98 & $-$7.01 & 0.52 to 1.46 &2.8$\pm$3.8 & 0.898 \\
0851$+$202 & 1999 Dec 31 & ST & 5.6 & 29 & 4.81 & 11.61 & $-$6.56 & 0.63 to 1.58 &3.8$\pm$2.8 & 0.404 \\
& 2000 Mar 28 & ST & 4.2 & 22 & 5.34 & 11.55 & $-$6.50 & 0.75 to 1.44 &2.9$\pm$2.8 & 0.979 \\
& 2001 Feb 17 & ST & 6.9 & 47 & 2.78 & 8.82 & $-$6.88 & 0.54 to 1.73 &3.3$\pm$2.4 & 0.832 \\
& 2005 Apr 12 & ST & 4.8 & 56 & 9.07 & 5.10 & $-$6.95 & 0.70 to 1.88 &3.9$\pm$2.9 & 0.006 \\
1011$+$496 & 2010 Feb 19 & ST & 5.6 & 43 & 2.59 & 8.43 & $-$6.80 & 0.59 to 1.78 &2.1$\pm$2.9 & 0.248 \\
& 2010 Mar 7 & ST & 5.5 & 36 & 3.95 & 9.16 & $-$6.73 & 0.63 to 1.58 &3.3$\pm$2.8 & 0.057 \\
1156$+$295 & 2012 Mar 31 & IGO & 5.9 & 26 & 9.06 & 19.76 & $-$5.51 & 0.60 to 1.29 &3.5$\pm$2.9 & 0.506 \\
& 2012 Apr 1 & IGO & 8.4 & 26 & 11.50& 19.38 & $-$5.19 & 0.45 to 1.40 &1.4$\pm$3.1 & 0.343 \\
& 2012 Apr 2 & IGO & 7.2 & 20 & 22.34& 21.67 & $-$4.85 & 0.52 to 1.21 &3.5$\pm$4.0 & 0.047 \\
1216$-$010 & 2002 Mar 16 & ST & 8.2 & 22 & 14.27& 22.35 & $-$5.97 & 0.46 to 1.41 &4.0$\pm$2.8 & 0.014 \\
1219$+$285 & 2003 Mar 19 & ST & 6.2 & 60 & 5.76 & 6.20 & $-$7.03 & 0.59 to 1.99 &2.6$\pm$2.0 & 0.852 \\
& 2003 Mar 20 & ST & 6.3 & 67 & 10.16& 5.63 & $-$6.82 & 0.58 to 1.98 &3.3$\pm$1.8 & 0.174 \\
1253$-$055 & 2006 Jan 26 & ST & 4.7 & 21 & 3.23 & 13.56 & $-$7.12 & 0.70 to 1.65 &1.9$\pm$1.7 & 0.259 \\
& 2006 Feb 28 & ST & 6.5 & 42 & 13.20& 9.30 & $-$7.28 & 0.56 to 1.75 &2.3$\pm$1.7 & 1.000 \\
& 2009 Apr 20 & ST & 5.5 & 22 & 25.86& 14.89 & $-$5.80 & 0.64 to 1.51 &3.4$\pm$2.2 & 0.066 \\
1510$-$089 & 2009 May 1 & ST & 6.0 & 25 & 7.73 & 14.45 & $-$6.10 & 0.59 to 1.54 &3.3$\pm$2.8 & 0.021 \\
1553$+$113 & 1999 May 5 & ST & 4.2 & 23 & 2.85 & 10.83 & $-$6.63 & 0.76 to 1.70 &4.0$\pm$2.8 & 0.110 \\
& 2009 Jun 24 & ST & 4.2 & 26 & 5.73 & 9.74 & $-$6.85 & 0.75 to 1.70 &2.4$\pm$2.0 & 0.533 \\
& 2010 May 15 & ST & 6.5 & 22 & 3.13 & 17.73 & $-$6.95 & 0.56 to 1.51 &3.9$\pm$2.3 & 0.345 \\
& 2010 May 16 & ST & 6.3 & 33 & 2.46 & 11.39 & $-$6.95 & 0.58 to 1.53 &3.1$\pm$2.7 & 0.252 \\
\enddata
\tablecomments{
(1) name of the blazar following the IAU convention;
(2) the date of observations;
(3) the telescope facility used. ST = 1\,m Sampurnanand Telescope of Aryabhatta Research Institute of Observational Sciences, India; IGO = 2\,m IUCAA-Girawali Observatory of Inter-University Centre of Astronomy and Astrophysics, India.
(4) the duration of the observed light curve;
(5) number of data points in the light curve;
(6) peak-to-peak variability amplitude \citep[Eq. 9 of][]{Goyal13b};
(7) the mean sampling interval for the observed light curve (light curve duration/number of data points);
(8) the noise level in PSD due to the measurement uncertainty;
(9) the temporal frequency range covered by the binned logarithmic power spectra;
(10) the best-fit power-law slope of the PSD along with the corresponding errors representing 98\% confidence limit (see Section~\ref{sec:psresp});
(11) corresponding $p_\beta$. $^\ast$ power law model is considered as a bad-fit if $p_\beta$ $\leq$ 0.1 as the corresponding rejection confidence for the model is $\geq$90\% (Section~\ref{sec:psresp}).
}
\end{deluxetable*}
\section{PSD Analysis}\label{sec:analysis}
\subsection{Derivation of PSDs: discrete Fourier transform}\label{sec:dft}
Since the aim of the study is to obtain reliable shapes of the PSDs, we subject the light curves to Fourier transformation using the discrete Fourier transform (DFT) method \citep[see, for details,][and references therein]{Goyal20}. For an evenly sampled light curve $f(t_i)$, observed at discrete times $t_i$ and consisting of $N$ data points over a total monitoring duration $T$, the fractional rms-squared-normalized periodogram is given as the squared modulus of its DFT,
\begin{multline}
P(\nu_k) = \frac{2 \, T}{\mu^2 \, N^2} \, \Bigg\{ \Bigg[ \sum_{i=1}^{N} f(t_i) \, \cos(2\pi\nu_k t_i) \Bigg]^2 + \\ \Bigg[ \sum_{i=1}^{N} f(t_i) \, \sin(2\pi\nu_k t_i) \Bigg]^2 \, \Bigg \} ,
\label{eq:psd}
\end{multline}
where $\mu$ is the mean of the light curve, which is subtracted from the flux values $f(t_i)$. The DFT is computed for evenly spaced frequencies ranging from the inverse of the total duration of the light curve up to the Nyquist frequency ($\nu_{\rm Nyq}$) of the observed data. Specifically, the frequencies $\nu_{k} = k/T$ with $k=1, ..., N/2$, $\nu_{\rm Nyq}= N/2T$, and $T = N (t_N-t_1)/(N-1)$ are considered. The normalized periodogram as defined in Eq.~\ref{eq:psd} corresponds to the total excess variance when integrated over positive frequencies. The constant noise floor level from measurement uncertainties is given as \citep[e.g.,][]{Isobe15, Vaughan03}
\begin{equation}
\rm P_{stat} = \frac{2 \, T}{\mu^2 \, N} \, \sigma_{\rm stat}^2 \, ,
\label{eq:poi_psd}
\end{equation}
where $\sigma_{\rm stat}^2= \sum_{j=1}^{N} \Delta f(t_j)^2 / N$ is the mean variance of the measurement uncertainties $\Delta f\!(t_j)$ on the flux values in the observed light curve at times $t_j$, with $N$ denoting the number of data points in the original light curve. The intranight light curves are roughly evenly sampled, but the application of the DFT method requires strictly even sampling of the time series; otherwise, the `spectral window function' corresponding to the sampling times gives a non-zero response in the Fourier domain, resulting in false powers in the periodograms \citep[see, Appendix A of][]{Goyal20,Deeming75}. Therefore, in order to perform the DFT, we obtained regular sampling by linearly interpolating between consecutive observed data points with an interpolation interval of 1 minute, which is roughly 5--15 times smaller than the original sampling interval (see column 7 of Table~\ref{tab:psd}). Even though the choice of interpolation interval is arbitrary, we note that it cannot be longer than the mean sampling interval. We tested our procedure by also using an interpolation interval of about half the original sampling interval, which did not change the results. We refer the reader to \citet{Goyal17} and \citet{Max-Moerbeck14a} for a discussion of the distortions introduced in the PSDs by the discrete sampling and the finite duration of the light curve, known as `red-noise leakage' and `aliasing', respectively. To minimize the effects of red-noise leakage, the PSDs are generated using the `Hanning' window function \citep[e.g.,][]{Press92, Max-Moerbeck14a}. Aliasing, on the other hand, contributes an equal amount of power (around the Nyquist frequency) to the periodograms \citep[][]{Uttley02}, and hence does not distort the shape of the PSDs.
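A minimal numerical sketch of the periodogram of Eq.~\ref{eq:psd} and of the interpolation step (the function name and the grid spacing are our own; times are assumed to be in days):

```python
import numpy as np

def periodogram(t, f):
    """Fractional rms-squared normalized periodogram of an evenly
    sampled light curve, at frequencies nu_k = k/T with k = 1..N/2."""
    t = np.asarray(t, dtype=float)
    f = np.asarray(f, dtype=float)
    N = len(f)
    mu = f.mean()
    T = N * (t[-1] - t[0]) / (N - 1)
    x = f - mu                               # mean-subtracted fluxes
    nu = np.arange(1, N // 2 + 1) / T
    arg = 2.0 * np.pi * nu[:, None] * t      # shape (N/2, N)
    C = (x * np.cos(arg)).sum(axis=1)
    S = (x * np.sin(arg)).sum(axis=1)
    return nu, 2.0 * T / (mu ** 2 * N ** 2) * (C ** 2 + S ** 2)

# regularize the sampling first, e.g. onto a 1-minute grid (t in days):
# t_even = np.arange(t[0], t[-1], 1.0 / 1440.0)
# f_even = np.interp(t_even, t, f)
```

With this normalization, a sinusoid of amplitude $a$ about a mean $\mu$ carries an integrated power of $a^2/(2\mu^2)$, i.e. the excess variance fraction.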
The periodogram obtained using equation~(\ref{eq:psd}), known as the `raw' periodogram, provides a noisy estimate of the spectral power, as it consists of independently distributed $\chi^2$ variables with two degrees of freedom (DOF) \citep{TK95, Papadakis93, Vaughan03}. Therefore, a number of PSD estimates should be averaged in order to obtain a reliable estimate of the spectral power. The periodogram estimates falling within a factor of 1.6 in frequency are averaged, with the representative frequency taken as the geometric mean of each bin \citep{Isobe15, Goyal17, Goyal20}. Except for the first bin, this choice of binning factor provides at least two periodogram estimates in each frequency bin.
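The binning step can be sketched as follows (the function name and inputs are illustrative):

```python
import numpy as np

def log_bin(nu, P, factor=1.6):
    """Average raw periodogram estimates within frequency bins that grow
    by a constant factor; each bin is represented by its geometric mean."""
    nu = np.asarray(nu, dtype=float)
    P = np.asarray(P, dtype=float)
    nu_b, P_b = [], []
    lo = nu[0]
    while lo <= nu[-1]:
        hi = lo * factor
        m = (nu >= lo) & (nu < hi)
        if m.any():
            nu_b.append(np.exp(np.log(nu[m]).mean()))  # geometric mean
            P_b.append(P[m].mean())
        lo = hi
    return np.array(nu_b), np.array(P_b)
```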
The observed power spectrum is related to the `true' power spectrum by $P(\nu_k) = P_{true}({\nu_k}) \frac{\chi^2}{2}$ for a noise-like process \citep{Papadakis93, TK95, Vaughan03}. The transformation to log-log space therefore offsets the observed periodograms as
\begin{equation}
\log_{10}[ P(\nu_k) ] = \log_{10}[ P_{true}({\nu_k}) ] + \log_{10}\Bigl[ \frac{\chi^2}{2} \Bigr] .
\label{logpsd}
\end{equation}
This offset is the expectation value, in log-log space, of a $\chi^2$ distribution with 2 DOF; it equals $-$0.25068, and the observed periodograms are corrected for this bias \citep[][]{Vaughan05}.
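The quoted value can be checked directly: for 2 DOF, $\chi^2/2$ follows a unit exponential distribution, whose logarithm has expectation $-\gamma$, with $\gamma$ the Euler--Mascheroni constant, so that

```latex
\frac{\chi^2_2}{2}\sim\mathrm{Exp}(1), \qquad
\Bigl\langle \ln\tfrac{\chi^2_2}{2}\Bigr\rangle
   =\int_0^{\infty}\!\ln x\; e^{-x}\,dx=-\gamma, \qquad
\Bigl\langle \log_{10}\tfrac{\chi^2_2}{2}\Bigr\rangle
   =\frac{-\gamma}{\ln 10}\simeq\frac{-0.577216}{2.302585}\simeq -0.25068 .
```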
\subsection{Estimation of the spectral shape: PSRESP method}\label{sec:psresp}
Since the aim of the present study is to derive the shapes of intranight PSDs, we use the `power spectral response' (PSRESP) method \citep[e.g.,][]{Uttley02, Chatterjee08, Max-Moerbeck14a, Isobe15, Meyer19, Goyal20}, which further mitigates the deleterious effects of red-noise leakage and aliasing. In this method, an (input) PSD model is tested against the observed PSD, and the best-fit model parameters and their uncertainties are estimated by varying the model parameters. To achieve this, a large number of light curves are generated with a known underlying power-spectral shape using Monte Carlo (MC) simulations. Each simulated light curve is rebinned to mimic the sampling pattern of the data and interpolated before the DFT is applied. The DFT of such a light curve gives a PSD distorted by the effects mentioned above. Averaging a large number of such PSDs gives the mean of the distorted model (input) power spectrum, while the standard deviation around the mean gives the errors on the modeled (input) power spectrum. The goodness of fit of the model is estimated by computing two $\chi^2$-like functions, defined as
\begin{equation}
\chi^2_{\rm obs} = \sum_{\nu_{k}=\nu_{min}}^{\nu_{k}=\nu_{max}} \frac{[\overline{ \log_{10}P_{\rm sim}}(\nu_k)-\log_{10}P_{\rm obs}(\nu_k)]^2}{\Delta \overline{\log_{10}P_{\rm sim}}(\nu_k)^2}
\label{chiobs}
\end{equation}
and
\begin{equation}
\chi^2_{\rm dist, i} = \sum_{\nu_{k}=\nu_{min}}^{\nu_{k}=\nu_{max}} \frac{[\overline{ \log_{10}P_{\rm sim}}(\nu_k)-\log_{10}P_{\rm sim,i}(\nu_k)]^2}{\Delta \overline{\log_{10}P_{\rm sim}}(\nu_k)^2},
\label{chidist}
\end{equation}
where $\log P_{\rm obs}$ and $\log P_{\rm {sim, i}}$ are the observed and the simulated log-binned periodograms, respectively, while $\overline{ \log P_{\rm sim}}$ and $\Delta \overline{\log P_{\rm sim}}$ are the mean and the standard deviation obtained by averaging a large number of simulated PSDs; $k$ runs over the frequencies in the log-binned power spectrum (ranging from $\nu_{min}$ to $\nu_{max}$), while $i$ runs over the simulated light curves for a given $\beta$.
Here $\chi^2_{\rm obs}$ measures the distance of the model from the data, and the $\chi^2_{\rm dist}$ values calibrate the goodness of fit corresponding to $\chi^2_{\rm obs}$. We note that $\chi^2_{\rm obs}$ and $\chi^2_{\rm dist}$ do not follow a standard $\chi^2$ distribution because the $\log_{10}P_{\rm obs}(\nu_k)$'s are not normally distributed variables, since the number of power spectrum estimates averaged in each frequency bin is small \citep[][]{Papadakis93}. Therefore, a reliable goodness of fit is computed using the distribution of $\chi^2_{\rm dist}$ values. For this, the $\chi^2_{\rm dist}$ values are sorted in ascending order. The probability $p_{\beta}$ that a given model provides an acceptable fit is then given by the fraction of the $\chi^2_{\rm dist}$ distribution with values greater than $\chi^2_{\rm obs}$ for a given $\beta$ \citep[the success fraction;][]{Chatterjee08}. A large value of $p_{\beta}$ represents a good fit, in the sense that a large fraction of random realizations of the model (input) power spectrum deviate from it more than the observed PSD does. This analysis therefore uses the MC approach toward a frequentist estimation of the quality of the model compared to the data, a well-known approach when the fitting statistic is not well understood \citep[see, for details,][]{Press92}.
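A schematic of the success-fraction computation (function names and array shapes are illustrative; the simulated log-binned spectra would come from the MC light curves described above):

```python
import numpy as np

def chi2_like(mean, std, spec):
    """Distance of a log-binned spectrum from the simulated model mean,
    in units of the simulated scatter (the two chi^2-like functions)."""
    return np.sum((mean - spec) ** 2 / std ** 2)

def p_beta(log_P_obs, log_P_sims):
    """Success fraction: share of simulated spectra lying farther from
    the model mean than the observed spectrum does."""
    mean = log_P_sims.mean(axis=0)
    std = log_P_sims.std(axis=0)
    chi_obs = chi2_like(mean, std, log_P_obs)
    chi_dist = np.array([chi2_like(mean, std, s) for s in log_P_sims])
    return float(np.mean(chi_dist > chi_obs))
```

An observed spectrum close to the model mean yields $p_\beta$ near 1; one far outside the simulated scatter yields $p_\beta$ near 0 and the model is rejected.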
In this study, the light curve simulations are performed using the method of \citet{Emmanoulopoulos13}, which preserves the probability density function (PDF) of the flux distribution as well as the underlying power spectral shape. In addition to an assumed power spectral shape, the method requires supplying the mean and standard deviation ($\sigma$) of the flux values in order to reproduce the flux distribution and match the variance \citep[][]{Meyer19}. We have assumed single power-law PSDs with a given $\beta$ (to reproduce the PSD shape) and supplied the mean and $\sigma$ of the logarithmically transformed flux values, which has been found to be an adequate representation of the flux distribution on shorter ($\lesssim$days) timescales in a few cases \citep[e.g.,][]{Hess10, Kushwaha20}.
For this purpose, the mean and the $\sigma$ are computed by fitting a Gaussian function to the flux distribution. Finally, the measurement errors in the simulated flux values are incorporated by adding a Gaussian random variable with mean 0 and standard deviation equal to the mean error of the measurement uncertainties on the observed flux values \citep[][]{Meyer19, Goyal20}. In this manner, 1,000 light curves are simulated for each observed light curve, over the $\beta$ range 0.1 to 4.0 with a step of 0.1. For the simulated light curves, the periodograms are derived in an identical manner to that of the observed light curve (Section~\ref{sec:dft}). The best-fit PSD slope for the observed PSD is the one with the highest $p_{\beta}$ value, and the uncertainty is given as 2.354$\sigma$ of the $p_{\beta}$ curve, where $\sigma$ is the standard deviation of the fitted Gaussian. This gives roughly a 98\% confidence limit on the best-fit PSD slope.
Details of the intranight light curves used for the analysis and the derived PSDs, along with the best-fit PSDs and the maximum $p_{\beta}$, are summarized in Table~\ref{tab:psd}. Figure~\ref{fig:analysis} presents the analyzed light curves (panel a), the corresponding best-fit PSDs (panel b), and the probability distribution curves ($p_\beta$ as a function of $\beta$; panel c), covering timescales from the duration of the light curve down to the mean sampling interval of the given light curve. In our analysis, we have not subtracted the constant noise floor level (shown by the dashed horizontal lines in panel b of the figure), as some of the data points are below this level. The PSRESP method also allows us to compute the rejection confidence for the input PSD shape; a maximum probability lower than 10\% means that the rejection confidence (1$-$$p_{\beta}$) is higher than 90\% for the (input) PSD model. In our analysis, we use $p_\beta$$<$0.1 as the rejection threshold for the model, meaning that the input spectral shape does not provide a good fit to the PSD. The distribution of acceptable PSD slopes is shown in Figure~\ref{fig:hist}. The mean of a sample is computed in a straightforward manner, while the error on the mean is computed using the MC bootstrap method as follows. For each light curve in the sample, a $\beta$ is drawn from a Gaussian distribution with mean equal to the best-fit $\beta$ and standard deviation equal to error/2.354 (note that Table~\ref{tab:psd} reports errors equal to 2.354$\sigma$), and the mean $\beta$ is computed. These two steps are repeated 500 times, and the error is given as the standard deviation of the distribution of mean $\beta$ values. In addition, we provide the $\nu_k$ $P(\nu_k)$ vs.\ $\nu_k$ curves in Figure~\ref{fig:jointpsds} for the blazars observed on more than one occasion, to compare the squared fractional variability on the timescales probed by our analysis \citep[][]{Goyal20}.
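The bootstrap estimate of the error on the mean slope can be sketched as follows (the function name is ours, and the four slope/error pairs are illustrative values in the style of Table~\ref{tab:psd}):

```python
import numpy as np

def bootstrap_mean_err(beta, err98, n_rep=500, seed=0):
    """MC bootstrap of the sample mean slope: each slope is redrawn from
    a Gaussian with sigma = (98% error)/2.354, the sample mean is taken,
    and the scatter of the n_rep means gives the error on the mean."""
    rng = np.random.default_rng(seed)
    beta = np.asarray(beta, dtype=float)
    sigma = np.asarray(err98, dtype=float) / 2.354
    means = np.array([rng.normal(beta, sigma).mean() for _ in range(n_rep)])
    return means.mean(), means.std()

m, dm = bootstrap_mean_err([2.7, 3.9, 3.7, 2.6], [4.3, 2.8, 3.2, 2.0])
```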
\begin{figure}
\hbox{
\includegraphics[width=0.4\textwidth]{hist-crop.pdf}
}
\caption{Histograms of the best-fit PSD slopes derived for the entire blazar sample (cyan line; 10 sources and 19 monitoring sessions), BL Lacs (red line; seven sources and 13 monitoring sessions), and FSRQs (blue line; three sources and six monitoring sessions), respectively. The sample mean along with 1$\sigma$ uncertainty estimated using the bootstrap method for different groups is given in parentheses (see Section~\ref{sec:psresp}). }
\label{fig:hist}
\end{figure}
\begin{figure*}
\hbox{
\includegraphics[width=0.33\textwidth]{jointpsd_0420-014-crop.pdf}
\includegraphics[width=0.33\textwidth]{jointpsd_0851+202-crop.pdf}
\includegraphics[width=0.33\textwidth]{jointpsd_1156+295-crop.pdf}
}
\hbox{
\includegraphics[width=0.33\textwidth]{jointpsd_1219+285-crop.pdf}
\includegraphics[width=0.33\textwidth]{jointpsd_1253-055-crop.pdf}
\includegraphics[width=0.33\textwidth]{jointpsd_1553+113-crop.pdf}
}
\caption{$\nu_k$ P($\nu_k$) PSDs for the individual blazars that showed an acceptable fit in the analysis. The lines show the log-binned periodograms, and the filled symbols show the mean and standard deviation of the best-fit PSDs given by the PSRESP method for the different epochs. }
\label{fig:jointpsds}
\end{figure*}
\section{Results}\label{sec:results}
In this study, we have derived the optical intranight variability PSDs of blazar sources, covering the temporal frequency range from 10$^{0.52}$\,day$^{-1}$ to 10$^{1.99}$\,day$^{-1}$ (corresponding to timescales of 7.4 hours and $\sim$15 minutes, respectively). The intranight light curves showed modest variability, with peak-to-peak variability amplitudes in the range 1--15\%, occasionally rising above 15\%, over the span of the observations. Our main results are the following:
\begin{enumerate}
\item{Out of the 29 intranight light curves analyzed in the present study, the PSDs show an acceptable fit to a single power-law spectral shape for 19 monitoring sessions (see Section~\ref{sec:psresp}). The maximum $p_\beta$ is higher than 10\% and reaches as high as 100\% for these sessions (column 11 of Table~\ref{tab:psd}; panel c of Figure~\ref{fig:analysis}). }
\item{For these 19 acceptable PSD fits, the single power-law slopes range from 1.4 to 4.0 (albeit with a large scatter), consistent with the statistical character of red ($\beta$$\sim$2) and black ($\beta$$\geq$3) noise stochastic processes (Table~\ref{tab:psd}, Figure~\ref{fig:hist}). The mean $\beta$ turns out to be 2.9$\pm$0.3 (1$\sigma$ uncertainty) for the blazar sources. }
\item{The computed mean PSD slopes for the BL Lac objects (seven sources and 13 light curves) and the FSRQs (three sources and six light curves) are 3.1$\pm$0.3 and 2.6$\pm$0.4, respectively, consistent with one another within the 1$\sigma$ uncertainty (Figure~\ref{fig:hist}).}
\item{The PSD slopes for the few sources whose intranight PSDs show an acceptable fit to a single power-law on multiple occasions are consistent with each other (column 10 of Table~\ref{tab:psd}).}
\item{The normalization of the PSDs for the sources monitored on different epochs turns out to be consistent within the 1$\sigma$ uncertainty for the blazars 1156+295 and 1219+285. However, an order of magnitude change is noted in the normalization of the PSDs between 2003 November 19 and 2009 October 25 for 0420$-$014, between 1999 December 31 and 2001 February 17 for 0851+202, and between 2006 January 26 and 2006 February 28 for 1253$-$055 (Figure~\ref{fig:jointpsds}).}
\end{enumerate}
Our PSD analysis using the PSRESP method returns a rejection confidence higher than 90\% for 10 out of the 29 analyzed light curves. These are: 0235+164 on 1999 November 12, 1999 November 14, and 2003 November 18; 0716+714 on 2005 February 1; OJ\,287 on 2005 April 12; 1011+496 on 2010 March 7; 1156+295 on 2012 April 2; 1216$-$010 on 2002 March 16; 3C\,279 on 2009 April 20; and 1510$-$089 on 2009 May 1 (Table~\ref{tab:psd}). This is because, for the majority of these light curves, the intensity variations are essentially monotonic, i.e., a steady rise or fall without any other feature over the span of the observations. This means that a PSD model with $\beta$$>$4 could fit the observed PSDs better over the temporal frequency range probed by the observations.
Next, we note that the reported 98\% confidence limits on the acceptable best-fit PSD slopes are, in general, large (Table~\ref{tab:psd}), and in some cases larger than the value itself. These are: 0109+224 on 2005 October 29, 0420$-$014 on 2009 October 25, 0806+524 on 2005 February 4, 1011+496 on 2010 February 19, and 1156+295 on 2012 April 1. A possible cause of this could be the limited number of data points in the studied light curves (20--67; column 4 of Table~\ref{tab:psd}). \citet[][]{Aleksic15a} studied the effect of changing the number of data points in the light curve on the estimation of the best-fit PSD slope (and its uncertainty), using long-term multiwavelength light curves with $\geq$30 data points for the blazar Mrk\,421. They note that the location of the maximum in the probability distribution curve does not change noticeably for different binning factors, but that the width, shape, and amplitude change significantly; however, it is unclear if the broadening of the probability distribution curve, and hence the estimated uncertainty in the best-fit PSD slope, is related to the gradual increase of the binning factor (i.e., a decrease in the number of data points) for different light curves \citep[see, Figure 4 of][]{Aleksic15a}. Also, we note that the reported uncertainties on the best-fit X-ray intranight PSD slopes obtained with the PSRESP method show a similarly large scatter, despite there being $>$300 data points in the examined light curves for the AGNs Mrk\,421, PKS\,2155$-$304, and 3C\,273 \citep[Table 4 of][]{Bhattacharyya20}. Therefore, we conclude that a limited number of data points does not play a significant role in assessing the uncertainties of the spectral shape parameters with the PSRESP method.
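To illustrate how realization-to-realization scatter enters simple slope estimates, the following sketch (a crude stand-in for the PSRESP machinery, not the implementation used here; it combines a Timmer \& K\"onig-style power-law simulation with a raw log--log periodogram fit, and all function names are ours) recovers an input slope of $\beta=2$ with a sizeable spread between individual realizations:

```python
import numpy as np

def simulate_power_law_lc(beta, n, dt, rng):
    """Timmer & Koenig (1995)-style light curve with PSD ~ nu^-beta."""
    freqs = np.fft.rfftfreq(n, dt)[1:]           # drop the zero frequency
    amp = freqs ** (-beta / 2.0)                 # |Fourier amplitude| ~ sqrt(PSD)
    re = rng.standard_normal(freqs.size) * amp
    im = rng.standard_normal(freqs.size) * amp
    if n % 2 == 0:
        im[-1] = 0.0                             # Nyquist component must be real
    spec = np.concatenate(([0.0], re + 1j * im))
    return np.fft.irfft(spec, n)

def periodogram_slope(flux, dt):
    """Least-squares slope of log P(nu) vs log nu for a raw periodogram."""
    n = flux.size
    freqs = np.fft.rfftfreq(n, dt)[1:]
    power = np.abs(np.fft.rfft(flux - flux.mean())[1:]) ** 2
    slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
    return -slope                                # return beta as a positive number

rng = np.random.default_rng(1)
betas = [periodogram_slope(simulate_power_law_lc(2.0, 512, 300.0, rng), 300.0)
         for _ in range(100)]
print(np.mean(betas), np.std(betas))             # mean near the input beta = 2
```

Averaging many realizations recovers the input slope, while any single short light curve can deviate by several tenths, qualitatively in line with the large confidence limits quoted above.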
\section{Discussion and conclusions}\label{sec:discussions}
We report the first systematic study to characterize the intranight variability PSD properties of 14 blazars and 29 densely sampled light curves, covering timescales from several hours down to $\sim$15 minutes. All the analyzed light curves were of duration $\geq$ 4 hours (except for the BL Lac 0716+714, for which the duration was $\sim$1.5\,hr) and could be obtained with measurement accuracies $\lesssim$0.2--0.5\% in 5--15 minutes of integration time using 1--2\,m class telescopes, irrespective of blazar flux state, sky brightness or atmospheric conditions. The intranight monitoring sessions were scheduled solely based on the target's availability in the night sky for a duration longer than 4\,hours from a given telescope site. Therefore, the intranight PSDs are derived irrespective of the flux state of a blazar. The slopes range from 1.4 to 4.0, indicating that the variability at synchrotron emission frequencies has the statistical character of a red to black--noise stochastic process on intranight timescales, with no sign of a cutoff at high frequencies due to the noise-floor levels arising from measurement uncertainties. The mean $\beta$ for the entire sample is $\sim$2.9, indicating a preference for a steeper-than-red-noise character of the variability. Our crude estimates of the mean $\beta$ for the BL Lac and FSRQ subclasses (crude owing to the small number of sources and intranight light curves analyzed) give $\sim$3.1 and 2.6, respectively. These two estimates are comparable with each other, indicating that the processes driving the variability occur in the non-thermal jets of these sources and not in the accretion disk, which could be dominant in FSRQs at optical frequencies \citep[e.g.,][see, however, \citealt{Mangalam93}, for models relating variability to hot spots or instabilities in the accretion disk resulting in $\beta$ =1.4 to 2.1]{Ghisellini17}. 
The majority of the obtained slopes could be reconciled if the intranight fluctuations are driven by changes in the bulk Lorentz factors of the jet, provided that the turbulence is dominant on timescales smaller than a few minutes \citep[$\beta$$\sim$2.1--2.9;][]{Pollack16}.
The PSD slopes obtained in this analysis can be directly compared with those of \citet{Wehrle19}, who derived blazar/AGN PSDs using long-duration ($>$75 days), nearly uniformly sampled {\it Kepler-}satellite light curves with sampling intervals of 30 minutes (long--cadence data) and 1 minute (short--cadence data, for the blazar OJ\,287). First, the PSD slopes obtained for their sample using the long--cadence data range between $\beta\sim$1.8 and 3.8 and cover temporal frequencies between log $\nu_k$$\sim$-6.5\,Hz and $\sim$-5.0\,Hz, with the slope tending to white noise at higher frequencies (timescales $\leq$18 hours). Second, the PSD slopes obtained for the BL Lac and FSRQ types are indistinguishable from one another within this frequency range. Their results are comparable to ours (see above), even though our analysis covers variability frequencies higher than {\it Kepler}'s long--cadence data ($\sim$1.5 decades in frequency range, down to sub--hour timescales). Moreover, using the short--cadence data for the blazar OJ\,287, they obtain $\beta$ $\sim$2.8 in the frequency range log $\nu_k$ = $-$2.7\,Hz to $-$5.7\,Hz, with the PSD flattening to white noise at $\nu_k$ $\geq$ 10$^{-2.7}$\,Hz. Our intranight PSD slopes for OJ\,287, obtained on three separate occasions, are 3.8, 2.9, and 3.1, respectively, consistent with their result over the overlapping variability frequencies (Table~\ref{tab:psd}).
In \citet[][]{Goyal17}, \citet[][]{Goyal18}, and \citet[][]{Goyal20}, based on PSD analyses of long-term variability using decade--long GHz--band radio--to--TeV $\gamma$-ray light curves of a few selected blazars, we hypothesized that the broadband emission is generated in an extended yet highly turbulent jet. The variability appeared to be driven by a single stochastic process at synchrotron frequencies but seemed to require the linear superposition of two stochastic processes at IC frequencies, with relaxation timescales $\geq$1,000 days and $\sim$days, respectively. Stochastic fluctuations in the local jet conditions (e.g., velocity fluctuations in the jet plasma or magnetic field variations) lead to energy dissipation over all spatial scales. The radiative response of the accelerated particles is delayed with respect to the input perturbations, and this forms the red--noise segment of the PSD at synchrotron frequencies. At IC frequencies, however, due to inhomogeneities in the local photon population available for upscattering, the additional relaxation timescale of $\sim$one day, i.e., the light-crossing time of the emission region for a jet with Doppler boosting factor $\sim$30, can result in the pink--noise segment of the PSD. The steeper-than-red--noise PSD slopes on intranight timescales obtained in this analysis, as against the strict red--noise character of long--term variability at optical frequencies \citep[$\beta\sim$2;][]{Chatterjee08, Goyal17, Nilsson18, Goyal20}, indicate a cutoff of variability power on timescales around $\sim$days. We note that such a cutoff of variability power on timescales of $\sim$days has been noted in the X--ray PSD of the blazar Mrk\,421, for which the X-ray emission, although it originates in the non-thermal jet, is believed to have its variability driven by accretion-disk processes \citep[][]{Chatterjee18}. 
However, our conclusion is only tentative, as a joint analysis of the full variability spectrum, using long-term and intranight data covering many orders of frequency without gaps, is needed to reach robust conclusions. The normalization of the PSDs for a few sources that were monitored on multiple occasions turns out to be consistent within 1$\sigma$ uncertainty, with a few exceptions. For the blazars 0420$-$014, OJ\,287 and 3C\,279, the normalization changes by one order of magnitude between different epochs (Figure~\ref{fig:jointpsds}). This indicates a hint of non-stationarity of the variability process on intranight timescales \citep[similar conclusions are obtained for the intranight X--ray variability of the blazar Mrk\,421, for which the intranight light curves are modeled as a non-stationary stochastic process;][]{Bhattacharyya20}.
At this point, we note that the duty cycle of intranight blazar variability at optical frequencies is found to be $\sim$40\% when the sources are monitored for durations $>$4\,hours with measurement accuracies of 0.2--5\% in a few minutes of integration time \citep[][]{Goyal13b}. Almost always, these blazars are variable on longer timescales \citep[$>$days to years;][]{Stalin04, Sagar04, Gopal-Krishna11, Goyal12}, where the variability exhibits a red--noise character down to timescales of a few days (see above). However, in $\sim$60\% of the monitoring sessions these sources turned out to be non-variable on short timescales, meaning that small-scale flux variability, if present, is below the measurement uncertainties. This would imply an intermittently occurring cutoff of variability power on timescales longer than $\sim$0.5\,day. Neglecting these cases clearly introduces a bias in the interpretation of these results, as the PSDs are derived only when statistically significant variability is found. The implication would be that the energy dissipation processes within the jet generating the flux variations on these timescales are transitory in nature and as such should be taken into consideration when modeling the jet emission and its variability down to intranight timescales \citep[see, in this context,][who derive the PSDs over 5 decades of temporal frequency range down to days timescales]{Pollack16}.
Finally, we note that variability timescales smaller than the light--crossing time of the event horizon of the SMBH provide natural scales for the cutoff of variability if the dominant particle acceleration mechanism arises from disturbances or instabilities near the jet base \citep[$\sim$16 minutes for a 10$^8$ solar mass SMBH;][]{Begelman08}. Using the black hole masses of the blazars studied here (column 7; Table~\ref{tab:sample}), the light-crossing time of the event horizon ranges from $\sim$4\,minutes for the smallest SMBH mass (the blazar 1219+285) to $\sim$2\,hours for the largest SMBH masses (the blazars 0420$-$014, 1156+295, 0806+524, and 3C\,279). These timescales translate to $\sim$24\,seconds--12\,minutes in the observer's frame, assuming a typical bulk Lorentz factor of 10 for the jet plasma \citep[][]{Lister16}. Such timescales are not covered by our data, given the typical sampling intervals of $\sim$5--15\,minutes. This will be explored in future studies with dedicated blazar monitoring programs on $>$2\,m aperture telescopes, enabling flux measurements down to sub-percent accuracies in a few seconds of integration time so as to reach the smallest energy dissipation sites in jets.
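The timescale estimates above can be reproduced with a minimal sketch. The conversion used here, the Schwarzschild light-crossing scale $2GM/c^3$ divided by the bulk Lorentz factor, is our simplifying assumption for illustration and may differ in detail from the exact prescription of the cited works:

```python
# Light-crossing time of the event horizon, t ~ 2GM/c^3 (Schwarzschild scale),
# optionally divided by the bulk Lorentz factor for the observer-frame value.
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
c = 2.998e8        # speed of light [m s^-1]
M_SUN = 1.989e30   # solar mass [kg]

def t_cross_minutes(m_bh_solar, lorentz=1.0):
    """Horizon light-crossing time in minutes for the given black hole mass."""
    return 2.0 * G * (m_bh_solar * M_SUN) / c**3 / lorentz / 60.0

print(t_cross_minutes(1e8))        # ~16 minutes for a 10^8 solar-mass SMBH
print(t_cross_minutes(1e8, 10.0))  # shortened by a bulk Lorentz factor of 10
```

For a $10^8\,M_\odot$ black hole this reproduces the $\sim$16-minute scale quoted above.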
\acknowledgments
I thank the referee for careful reading of the manuscript and for providing many insightful comments, which have improved both the content and the presentation. AG acknowledges financial support from the Polish National Science Centre (NCN) through the grant 2018/29/B/ST9/02298. The light curve simulations were performed at the Prometheus cluster of the Cyfronet PL grid under the computing grant `lcsims2'. I thank Micha{\l} Ostrowski, Paul J. Wiita, and Marian Soida for discussions.
\vspace{5mm}
\facilities{NED, ST:1.0m, IGO:2.0m}
\clearpage
\newpage
\section{Introduction \label{sec:Intro}}
The well-known singularity theorems by Hawking and Penrose \cite{Hawking}
show that
cosmological solutions to the Einstein equations generally contain
singularities.
As discussed by Clarke \cite{Clarke} (see also \cite{Ellis} for a
comprehensive overview) there are two types of singularities:
(i) \emph{curvature singularities}, for which components of the Riemann
tensor or its $k$th derivatives are irregular (e.g. unbounded),
and (ii) \emph{quasiregular singularities}, which are associated with peculiarities
in the topology of space-time (e.g. the vertex of a cone), although the local geometry is
well behaved. In addition, the curvature singularities are divided up into \emph{scalar
singularities} (for which some curvature invariants are badly behaved)
and \emph{nonscalar singularities} (for which arbitrarily large or
irregular tidal forces occur).
The singularity theorems mentioned above provide, however, in general no information
about the specific type of singularity --- they make statements solely
about causal geodesic incompleteness. This lack of knowledge
concerning the specific nature of the singular structure
is the reason for many open outstanding problems in
general relativity, including the strong cosmic censorship conjecture
and the BKL conjecture (see \cite{Andersson} for an overview).
A major motivation for the study of \emph{Gowdy spacetimes} as
relatively simple, but non-trivial inhomogeneous cosmological models
results from the desire to understand the
mathematical and physical properties of such cosmological singularities.
The Gowdy cosmologies, first studied in \cite{Gowdy1971,Gowdy1974}, are characterized by an
Abelian isometry group $U(1)\times U(1)$ with spacelike group orbits, i.e.~these spacetimes
possess two associated spacelike and commuting Killing vector
fields
$\xi$ and $\eta$. Moreover, the definition of Gowdy spacetimes
includes that the \emph{twist constants}
$\epsilon_{\alpha\beta\gamma\delta}\xi^\alpha\eta^\beta\nabla^\gamma\xi^\delta$ and
$\epsilon_{\alpha\beta\gamma\delta}\xi^\alpha\eta^\beta\nabla^\gamma\eta^\delta$
(which are constant as a consequence of the field equations) are
zero\footnote[1]{The assumption of vanishing twist constants is non-trivial
only in the case of spatial $T^3$ topology. Note that in spatial $S^3$ or $S^2\times S^1$
topology there are specific axes on which one of the Killing vectors vanishes
identically, which leads to vanishing twist constants.}.
For compact, connected, orientable and smooth three manifolds, the corresponding spatial topology
must be either $T^3$, $S^3$, $S^2\times S^1$ or $L(p,q)$, cf.~\cite{Gowdy1974} (see also
\cite{Mostert,Neumann,Fischer}). Note that the universal cover of the lens space
$L(p,q)$ is $S^3$ and hence this case need not be treated
separately, see references in \cite{Chrusciel1990}.
In the $T^3$-case, global existence in time with respect to the areal
foliation time $t$
was proved by Moncrief \cite{Moncrief1981}.
Moreover, he has shown that the trace of the second fundamental
form blows up uniformly on the hypersurfaces $t=\textrm{constant}$ in
the limit $t\to0$. As a consequence, the solutions do not permit a globally
hyperbolic extension beyond the time $t=0$. However, to date it has not
been clarified whether the solutions are extendible (as
non-globally hyperbolic $C^2$-solutions) or are generically subject to curvature
singularities at $t=0$.
Although global existence of solutions inside the ``Gowdy square'' (i.e. for $0<t<\pi$, cf.~Fig.~\ref{GS} below)
was shown by Chru\'sciel for $S^2\times S^1$ and $S^3$ topology, see Thm.~6.3 in \cite{Chrusciel1990},
it is still an open question whether globally hyperbolic extensions beyond the hypersurfaces $t=0$ or $t=\pi$ exist.
It is expected that these hypersurfaces
contain either curvature singularities or Cauchy horizons; the
theorem in \cite{Chrusciel1990} however does not in fact exclude the
possibility that these are merely coordinate singularities.
For \emph{polarized} Gowdy models, where the Killing vector fields
can be chosen to be orthogonal everywhere, the nature of the singularities
for all possible spatial topologies has been studied in
\cite{Isenberg1990,ChruscielIM}.
In particular, strong cosmic censorship
and a version of the BKL conjecture have been proved.
Investigations of singularities in the \emph{unpolarized} case
for $T^3$ topology
can be found in
\cite{Berger1993,Kichenassamy1998,Ringstrom2006,Ringstrom2006b}.
For unpolarized $S^3$ or $S^2\times S^1$ Gowdy spacetimes
not many results on singularities (strong cosmic
censorship, BKL conjecture, Gowdy spikes) are known.
Particular singular solutions have been constructed with Fuchsian
techniques in \cite{Stahl}. Moreover, numerical studies indicate
that the behavior near singularities and the appearance of spikes are similar
to the $T^3$-case \cite{Garfinkle1999,Beyer2008,Beyer2009}.
In this paper, we study general (unpolarized or polarized)
$S^2\times S^1$ Gowdy models with a
regular Cauchy horizon (with $S^2\times S^1$ topology)
at $t=0$ (cf.~Fig.~\ref{GS})\footnote{Without loss of generality we choose a
\emph{past} Cauchy horizon ${\mathcal H_\mathrm{p}}$.}
and assume that the spacetime
is regular (precise regularity requirements are given below)
at this horizon as well as in a neighborhood. As mentioned above, a theorem by
Chru\'sciel \cite{Chrusciel1990} implies then that the metric is regular
for all $t<\pi$, i.e.~excluding
only the future hypersurface $t=\pi$. With the methods utilized
in this paper we are able to provide the missing piece,
i.e.~we prove that under our regularity assumptions the existence
of a regular second (future) Cauchy horizon ${\mathcal H_\mathrm{f}}$ (at $t=\pi$) is implied,
provided that a particular conserved
quantity $J$ is not zero\footnote{As we will see in
Sec.~\ref{Sec:conserved}, the conserved quantity
$J$ vanishes in \emph{polarized} Gowdy models.}.
Moreover, we derive an explicit
expression for the metric form on the future Cauchy horizon in
terms of the initial data on the past horizon. From this explicit formula,
the universal relation $A_\mathrm{p} A_\mathrm{f}=(8\pi J)^2$
between the areas $A_\mathrm{p}, A_\mathrm{f}$ of past and future Cauchy horizons
and the above mentioned conserved quantity $J$ can be concluded.
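For the explicitly known Kerr solution, where $J=Ma$ and the outer/inner horizon areas are $A_\pm=4\pi(r_\pm^2+a^2)$ with $r_\pm=M\pm\sqrt{M^2-a^2}$ (geometric units $G=c=1$), this relation can be checked directly; a minimal numerical sketch, with the outer and inner horizon areas playing the roles of $A_\mathrm{p}$ and $A_\mathrm{f}$:

```python
import numpy as np

def kerr_horizon_areas(M, a):
    """Outer (event) and inner (Cauchy) horizon areas of Kerr, A = 4*pi*(r^2 + a^2)."""
    r_plus = M + np.sqrt(M**2 - a**2)
    r_minus = M - np.sqrt(M**2 - a**2)
    return 4*np.pi*(r_plus**2 + a**2), 4*np.pi*(r_minus**2 + a**2)

M, a = 1.0, 0.7
A_event, A_cauchy = kerr_horizon_areas(M, a)
J = M * a
print(A_event * A_cauchy, (8*np.pi*J)**2)   # the two products agree
```

Algebraically, $r_+r_-=a^2$ and $r_++r_-=2M$ give $(r_+^2+a^2)(r_-^2+a^2)=4M^2a^2$, hence $A_+A_-=(8\pi Ma)^2$.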
The proofs of these statements can be found by relating any
$S^2\times S^1$
Gowdy model to a corresponding axisymmetric and stationary black hole solution
(with possibly non-pure vacuum exterior, e.g.~with surrounding matter),
considered between outer event and inner Cauchy horizon.
Note that the region between these horizons is regular hyperbolic,
i.e. the Einstein equations are hyperbolic PDEs in an appropriate gauge
with coordinates adapted to the Killing vectors, see
\cite{Ansorg2008,Ansorg2009,Hennig2009}\footnote{The interior of axisymmetric and stationary black hole solutions is
non-compact and has spatial
$S^2\times\mathds R$ topology. Here the $\mathds R$-factor is generated by
a subgroup of the symmetry group corresponding to one of the Killing
fields. Therefore, it is possible to factor out a discrete subgroup such
that $S^2\times S^1$ topology is achieved.}.
(The Kerr metric is an explicitly known solution of these PDEs, see
\cite{Obregon}.)\footnote{
Another interesting example of a
spacetime with a region isometric to Kerr is the Chandrasekhar and
Xanthopoulos solution \cite{Chandrasekhar}
which describes colliding plane waves. It
turns out that the region of interaction of the two waves is an alternative
interpretation of a part of the Kerr spacetime region between event
horizon and Cauchy horizon, cf. \cite{Griffiths,Helliwell}.}
As a consequence, the
results on the regularity of the interior of such black holes and
existence of regular Cauchy horizons inside the black holes
obtained in \cite{Ansorg2008,Ansorg2009,Hennig2009} can be carried over to
Gowdy spacetimes.
The results in \cite{Ansorg2008} were found by utilizing a particular soliton method --- the so-called
\emph{B\"acklund transformation}. Making use of the theorem by
Chru\'sciel mentioned earlier, it was possible to show that
a regular Cauchy horizon inside the black hole always exists, provided that the
angular momentum of the black hole does not vanish. (The above
quantity $J$ is the Gowdy counterpart of the angular
momentum.)
In \cite{Ansorg2009,Hennig2009} these results have been generalized to the case in which an
additional Maxwell field is considered. The corresponding technique, that is the
\emph{inverse scattering method}, again comes from soliton theory and permits the reconstruction of the field quantities
along the entire boundary of the Gowdy square. In this approach, an associated linear matrix problem
is analyzed, whose integrability conditions are equivalent to the non-linear field equations
in axisymmetry and stationarity. Note that in this article we restrict ourselves to the pure
Einstein case (without Maxwell field) and refer the reader to \cite{Ansorg2009,Hennig2009}
for results valid in full Einstein-Maxwell theory.
We start by introducing appropriate coordinates, adapted to
the description of regular axes and Cauchy horizons at the boundaries of
the Gowdy square, see Sec.~\ref{Sec:coords}. Moreover, we revisit the
complex Ernst formulation of the field equations and corresponding
boundary conditions and introduce the conserved
quantity $J$ in question. In this formulation we can translate
the results of \cite{Ansorg2008,Ansorg2009,Hennig2009} and obtain the metric
on the future Cauchy horizon in terms of initial data on the past horizon, see Sec.~\ref{Sec:EP}.
As another consequence we arrive at the above equation relating $A_\mathrm{p},
A_\mathrm{f}$ and $J$, see Sec.~\ref{Sec:formula}.
Finally, in Sec.~\ref{Sec:Disc} we conclude with a discussion of our results.
\section{Coordinates and Einstein equations\label{Sec:coords}}
\subsection{Coordinate system, Einstein equations and regularity requirements\label{Subsec:coords}}
We introduce suitable coordinates and metric functions by adopting our
notation from \cite{Garfinkle1999}. Accordingly, we write the Gowdy line
element in the form
\begin{equation}\label{LE}
\mathrm{d} s^2=\mathrm{e}^M(-\mathrm{d} t^2+\mathrm{d}\theta^2)+\sin t\sin\theta
\left[\mathrm{e}^L(\mathrm{d}\varphi+Q\mathrm{d}\delta)^2+\mathrm{e}^{-L}\mathrm{d}\delta^2\right],
\end{equation}
where the metric functions $M$, $L$ and $Q$ depend on $t$ and $\theta$
alone. In these coordinates, the two Killing vectors are given by
\begin{equation}
\eta = \diff{}{\varphi},\qquad \xi = \diff{}{\delta}.
\end{equation}
As mentioned in Sec.~\ref{sec:Intro}, any $S^2\times S^1$ Gowdy model can be related to the spacetime portion
between outer event and inner Cauchy horizon of an appropriate axisymmetric and stationary black hole solution.
Black hole spacetimes of this kind have been studied by Carter \cite{Carter} and Bardeen \cite{Bardeen}. Among
other issues they discussed conditions for regular horizons.
In this paper we adopt their regularity arguments for our study of Gowdy
spacetimes. Accordingly we rewrite the line element (\ref{LE}) in the form
\begin{equation}\label{LE2}
\mathrm{d} s^2 = \mathrm{e}^M(-\mathrm{d} t^2+\mathrm{d}\theta^2)
+\mathrm{e}^u\sin^2\!\theta(\mathrm{d}\varphi+Q\mathrm{d}\delta)^2
+\mathrm{e}^{-u}\sin^2\! t\,\mathrm{d}\delta^2
\end{equation}
where
\begin{equation}\label{u}
u=\ln\sin t-\ln\sin\theta + L.
\end{equation}
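Indeed, \eref{u} implies $\mathrm{e}^u=\mathrm{e}^L\sin t/\sin\theta$, so the coefficients in \eref{LE} and \eref{LE2} agree term by term:

```latex
\begin{equation*}
\mathrm{e}^u\sin^2\!\theta=\mathrm{e}^L\sin t\sin\theta,
\qquad
\mathrm{e}^{-u}\sin^2\!t=\mathrm{e}^{-L}\sin t\sin\theta.
\end{equation*}
```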
Now, at a {\em regular} horizon (clear statements about the type of regularity follow below)
the metric functions $M,Q$ and $u$ are regular, meaning that $L$ possesses a specific irregular
behavior there.\footnote{We achieve the form of the line element used in
\cite{Ansorg2008,Ansorg2009,Hennig2009} from \eref{LE2}
by introducing the Boyer-Lindquist-type coordinates
$(R,\theta,\varphi,\tilde t)$ with
$R:=r_\mathrm{h}\cos t$, $\tilde t:=\delta/(2r_\mathrm{h})$, $r_\mathrm{h}=\textrm{constant}$,
and the metric functions
$\hat\mu:=\mathrm{e}^M$, $\hat u := \mathrm{e}^u$, $\omega:=-2r_\mathrm{h} Q$. Since the potentials
$\hat\mu>0$, $\hat u>0$ and $\omega$ are regular at the axes and at
the Cauchy horizon
(cf. \cite{Bardeen}), we see that $M$, $u$ and $Q$ are regular as well.}
At this point, some remarks about the specific regularity requirements
needed in our investigation are necessary. A crucial role is played by
a theorem of Chru\'sciel (Theorem 6.3 in \cite{Chrusciel1990})
which provides us
with the essential regularity information valid in the {\em interior}
of the Gowdy square. In this theorem it is assumed that initial data
are given on an interior Cauchy slice, described by
$t=\mbox{constant}=t_0,$ $0 < t_0 <\pi$. These data
are supposed to consist of (i) metric potentials that are
$H^k$-functions of $\theta$
and (ii) first time derivatives that are $H^{k-1}$-functions of $\theta$
(with $k\ge3$). Here $H^k$ denotes the Sobolev space $W^{k,2}$ that contains all functions
for which both the function and its weak
derivatives up to the order $k$ are in $L^2$.
With these assumptions the theorem by Chru\'sciel
guarantees the existence of a unique continuation of the given initial
data for which the metric is $H^k$ on
all future spatial slices $t=\mbox{constant}$ with $t_0<t<\pi$,
i.e.~only the future boundary $t=\pi$ of the Gowdy square is excluded.
(Note that Theorem 6.3 as formulated in \cite{Chrusciel1990} assumes
the metric to be smooth. However, this condition can be
relaxed considerably to the assumption of $H^k$ spaces \cite{Chrusciel}.)
Now, for the applicability of our soliton methods it is essential that the
metric potentials in \eref{LE2} possess $C^2$-regularity.
Therefore, in order to apply both Chru\'sciel's theorem and the soliton
methods, we need to require that the metric potentials $M$, $u$, $Q$ be
$H^4$-functions and the time derivatives $H^3$-functions
of $\theta$ on all slices $t=\textrm{constant}$
in a neighborhood of the horizon ${\mathcal H_\mathrm{p}}$, see Fig.~\ref{GS}.\footnote{In
\cite{Ansorg2008,Ansorg2009,Hennig2009}, the much stronger
assumption was made that the metric functions be \emph{analytic}
in an {\em exterior} neighborhood of the black hole's event
horizon. This stronger requirement was necessary to conclude that the
metric is also regular (in fact analytic) in an {\em interior}
vicinity of the event horizon, a requirement needed for applying
Chru\'sciel's theorem.} Then Chru\'sciel's theorem ensures
the existence of an $H^4$-regular continuation which implies (via Sobolev
embeddings and the validity of the Einstein equations)
that the metric potentials $M$, $u$, $Q$ are $C^2$-functions of $t$ and $\theta$ for
($t,\theta)\in(0,\pi)\times[0,\pi]$, i.e.~in the entire Gowdy square
with the exception of
the two horizons ${\mathcal H_\mathrm{p}}\,(t=0)$ and ${\mathcal H_\mathrm{f}}\,(t=\pi)$. Now, in accordance with Carter's and Bardeen's arguments
concerning regularity at the horizon, we require
that this $C^2$-regularity
holds also for $t=0$, i.e.~we assume in this
manner a specifically regular past horizon ${\mathcal H_\mathrm{p}}$.
As mentioned above, these requirements allow us to utilize our soliton methods at ${\mathcal H_\mathrm{p}}$.
Since ${\mathcal H_\mathrm{p}}$ is a degenerate boundary surface of the interior hyperbolic region,
the study of the Einstein equations provides us with specific relations that permit the identification of
an appropriate set of initial data of the hyperbolic problem at the past Cauchy horizon ${\mathcal H_\mathrm{p}}$.
\begin{figure}
\centering
\includegraphics[scale=0.7]{GowdySquareB1.eps}
\caption{The Gowdy square. We assume an $H^4$-regular metric and
$H^3$-regular time derivatives on all slices
$t=\textrm{constant}$ in a neighborhood (gray region)
of the past Cauchy horizon (${\mathcal H_\mathrm{p}}: t=0$) and find,
by virtue of the results in \cite{Ansorg2008}, that the metric
is then $H^4$-regular on all future slices $t=\textrm{constant}$, $0\le t\le\pi$
(unless the quantity $J$ introduced in (\ref{def_J}) is zero). In
particular, an $H^4$-regular future Cauchy horizon (${\mathcal H_\mathrm{f}}: t=\pi$) exists.}
\label{GS}
\end{figure}
For the line element \eref{LE2}, the Einstein equations read as follows:
\begin{equation}\fl\label{E1}
-u_{,tt}-\cot t\, u_{,t} + u_{,\theta\theta}+\cot\theta\,u_{,\theta}
= 2-\frac{\sin^2\theta}{\sin^2 t}\mathrm{e}^{2u}
\left(Q_{,t}^2 - Q_{,\theta}^2\right),
\end{equation}
\begin{equation}\fl\label{E2}
-Q_{,tt}+\cot t\,Q_{,t}+Q_{,\theta\theta}+3\cot\theta\,Q_{,\theta}
-2(u_{,t}Q_{,t}-u_{,\theta}Q_{,\theta})=0,
\end{equation}
\begin{equation}\fl\label{M}
-M_{,tt}+M_{,\theta\theta}-\frac{1}{2}u_{,t}(u_{,t}-2\cot t)
+\frac{1}{2}u_{,\theta}(u_{,\theta}+2\cot\theta)
-\frac{1}{2}\frac{\sin^2\theta}{\sin^2 t}\mathrm{e}^{2u}
\left(Q_{,t}^2-Q_{,\theta}^2\right)=0.
\end{equation}
Alternatively to \eref{M}, the metric potential $M$ can also be
calculated from the first order field equations
\begin{eqnarray}\fl\label{M1a}
(\cos^2t - \cos^2\theta)M_{,t}
& = & \frac{1}{2}\mathrm{e}^{2u}\frac{\sin^3\theta}{\sin t}\left[
\cos t\sin\theta(Q_{,t}^2+Q_{,\theta}^2)
-2\sin t\cos\theta\, Q_{,t}Q_{,\theta}\right]\nonumber\\
&& + \frac{1}{2}\sin t\sin\theta\left[\cos t\sin\theta(u_{,t}^2+u_{,\theta}^2)
-2\sin t\cos\theta\, u_{,t}u_{,\theta}\right]\nonumber\\
&& +(2\cos^2t\cos^2\theta-\cos^2 t-\cos^2\theta)u_{,t}\nonumber\\
&& +2\sin t\cos t\sin\theta\cos\theta (u_{,\theta}-\tan\theta),
\end{eqnarray}
\begin{eqnarray}\fl\label{M2a}
(\cos^2t - \cos^2\theta)M_{,\theta}
& = & -\frac{1}{2}\mathrm{e}^{2u}\frac{\sin^3\theta}{\sin t}\left[
\sin t\cos\theta(Q_{,t}^2+Q_{,\theta}^2)
-2\cos t\sin\theta\, Q_{,t}Q_{,\theta}\right]\nonumber\\
&& - \frac{1}{2}\sin t\sin\theta\left[\sin t\cos\theta(u_{,t}^2+u_{,\theta}^2)
-2\cos t\sin\theta\, u_{,t}u_{,\theta}\right]\nonumber\\
&& +2\sin t\cos t\sin\theta\cos\theta(u_{,t}+\tan t)\nonumber\\
&& +(2\cos^2t\cos^2\theta-\cos^2 t-\cos^2\theta)u_{,\theta}.
\end{eqnarray}
These expressions tell us that (see \ref{App2} for a detailed derivation)
\begin{equation}\label{BC1}
M_{,t}=Q_{,t}=u_{,t}=0\,,\quad
Q = Q_\mathrm{p} = \mbox{constant}\,,\quad M+u=\textrm{constant}
\end{equation}
holds on ${\mathcal H_\mathrm{p}}$. As the $t$-derivatives of all metric
functions vanish identically at ${\mathcal H_\mathrm{p}}$, a complete set of
initial data at ${\mathcal H_\mathrm{p}}$ consists of
\begin{equation}\label{id}
Q=Q_\mathrm{p}\in\mathds R,\quad u\in H^4,\quad Q_{,tt}\in H^2,
\end{equation}
where $Q_{,tt}$ is in $H^2$ as a consequence of the regularity assumptions discussed above.
Note that among the second $t$-derivatives only
$Q_{,tt}$ can be chosen freely since the values of $M_{,tt}$ as well
as $u_{,tt}$ are then fixed, as again the study of the field
equations \eref{E1}-\eref{M} near ${\mathcal H_\mathrm{p}}$ reveals.
Similarly, $M$ is also fixed on ${\mathcal H_\mathrm{p}}$ by the choice of the data in
\eref{id}.
It turns out that the constant $Q_\mathrm{p}$ is a gauge degree of freedom. This results from the fact that
the line element \eref{LE} is invariant under the coordinate change
\begin{equation}\label{ccord_transf}
\Sigma:(t,\theta,\varphi,\delta) \mapsto \Sigma':(t,\theta,\varphi' =
\varphi -\Omega\delta,\delta),
\end{equation}
leading to $Q_\mathrm{p}'=Q_\mathrm{p}+\Omega$ in the new coordinates\footnote{Note that
for the corresponding black hole spacetimes, the coordinate change \eref{ccord_transf} describes a
transformation into a rigidly rotating frame of reference (for more
details see \cite{Ansorg2008,Ansorg2009,Hennig2009}).}.
We use this freedom in order
to exclude two specific values, namely $Q_\mathrm{p}=0$ and $Q_\mathrm{p}=1/J$, where $J$
is the already mentioned conserved quantity that will be introduced in \eref{J}.
This exclusion becomes necessary since the analysis carried out below
breaks down if $Q_\mathrm{p}$ takes one of these values.
We note further that as another consequence of our regularity requirements, the following axis condition
holds at least in a neighborhood of the points $A$ and $B$
(cf.~Fig.~\ref{GS}):
\begin{equation}\label{BC2}
\mathcal A_{1/2}:\qquad M=u.
\end{equation}
Moreover, at these points $A,B$ we have (see \ref{App2})
\begin{equation}\label{BC3}
M_A=M_B=u_A=u_B.
\end{equation}
Note that solutions which are also $C^2$-regular up to and including
${\mathcal H_\mathrm{f}}$ satisfy corresponding conditions at the points $C$ and $D$.
\subsection{The Ernst equation}
In order to introduce the Ernst formulation of the Einstein
equations, we define the complex Ernst potential
\begin{equation}\label{EP}
\mathcal E(t,\theta)=f(t,\theta)+\mathrm{i} b(t,\theta),
\end{equation}
where the real part $f$ is given by
\begin{equation}\label{Re}
f:=-\xi_i\xi^i=-\mathrm{e}^{-u}\sin^2\!t-Q^2\mathrm{e}^{u}\sin^2\!\theta
\end{equation}
and the imaginary part $b$ is defined in terms of a potential $a$,
\begin{equation}\label{Im}
a:=\frac{\xi^i\eta_i}{\xi^j\xi_j}=-\frac{Q}{f}\mathrm{e}^u\sin^2\!\theta,
\end{equation}
via
\begin{equation}\label{a}
a_{,t} = \frac{1}{f^2}\sin t\sin\theta\, b_{,\theta},\qquad
a_{,\theta} = \frac{1}{f^2}\sin t\sin\theta\, b_{,t}.
\end{equation}
In this formulation, the vacuum Einstein equations are equivalent to the
Ernst equation
\begin{equation}\label{Ernst}
\Re(\mathcal E)\left(-\mathcal E_{,tt}-\cot t\,\mathcal E_{,t}+\mathcal E_{,\theta\theta}
+\cot\theta\,\mathcal E_{,\theta}\right)
=-\mathcal E_{,t}^2+\mathcal E_{,\theta}^2,
\end{equation}
where $\Re(\mathcal E)$ denotes the real part of $\mathcal E$.
As a consequence of \eref{Ernst}, the integrability condition
$a_{,t\theta}=a_{,\theta t}$ of the system \eref{a} is satisfied such
that $a$ may be calculated from (\ref{a}) using $\mathcal E$. Moreover, given $a$ and $\mathcal E$ we
can use \eref{Re} and \eref{Im} to obtain the metric functions $u$ and
$Q$. Finally, the potential $M$ may be calculated
from
\begin{eqnarray}\label{M1}
M_{,t} & = & -\frac{f_{,t}}{f} + \frac{1}{2f^2}
\frac{\sin t\sin\theta}{\cos^2\! t-\cos^2\!\theta}
\Big[\cos t\sin\theta
\left(f_{,t}^2+f_{,\theta}^2+b_{,t}^2
+b_{,\theta}^2\right)\nonumber\\
& & -2\sin t\cos\theta
\left(f_{,t} f_{,\theta} + b_{,t} b_{,\theta}\right)
-4f^2\frac{\cos t}{\sin\theta}\Big],\\
\label{M2}
M_{,\theta} & = & -\frac{f_{,\theta}}{f} - \frac{1}{2f^2}
\frac{\sin t\sin\theta}{\cos^2\! t-\cos^2\!\theta}
\Big[\sin t\cos\theta
\left(f_{,t}^2+f_{,\theta}^2+b_{,t}^2
+b_{,\theta}^2\right)\nonumber\\
& & -2\cos t\sin\theta
\left(f_{,t} f_{,\theta} + b_{,t} b_{,\theta}\right)
-4f^2\frac{\cos\theta}{\sin t}\Big]
\end{eqnarray}
since the Ernst equation \eref{Ernst} also ensures the
integrability condition $M_{,t\theta}=M_{,\theta t}$.
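The first of these integrability statements can be verified directly: the imaginary part of the Ernst equation \eref{Ernst} is exactly the combination that makes $a_{,t\theta}-a_{,\theta t}$ vanish for the system \eref{a}. The following sketch (an illustrative check, not part of the derivation) confirms this symbolically with sympy, treating $f$ and $b$ as unspecified functions of $(t,\theta)$:

```python
import sympy as sp

t, th = sp.symbols('t theta', positive=True)
f = sp.Function('f')(t, th)
b = sp.Function('b')(t, th)

# integrability condition a_{,t theta} - a_{,theta t} of the system (a)
D = sp.diff(sp.sin(t)*sp.sin(th)/f**2*sp.diff(b, th), th) \
  - sp.diff(sp.sin(t)*sp.sin(th)/f**2*sp.diff(b, t), t)

# imaginary part of the Ernst equation (LHS minus RHS)
ImErnst = f*(-sp.diff(b, t, 2) - sp.cos(t)/sp.sin(t)*sp.diff(b, t)
             + sp.diff(b, th, 2) + sp.cos(th)/sp.sin(th)*sp.diff(b, th)) \
          - (-2*sp.diff(f, t)*sp.diff(b, t)
             + 2*sp.diff(f, th)*sp.diff(b, th))

# D is proportional to the imaginary part of the Ernst equation,
# so the Ernst equation implies a_{,t theta} = a_{,theta t}
assert sp.simplify(D - sp.sin(t)*sp.sin(th)/f**3*ImErnst) == 0
```

Hence $a$ is well defined by \eref{a} whenever $\mathcal E$ solves \eref{Ernst}, as stated above.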
As for the potentials introduced in Sec.~\ref{Subsec:coords} we
conclude axis conditions which
hold at least in a neighborhood of the points $A$ and $B$ (cf.~Fig.~\ref{GS}):
\begin{equation}\label{BC2_E}
\mathcal A_{1/2}:\qquad
\mathcal E_{,\theta}=0,\quad a=0.
\end{equation}
Moreover, at the points $A,B$ we have $f=0$.
Again, solutions which are also $H^4$-regular on ${\mathcal H_\mathrm{f}}$ satisfy
corresponding conditions at the points $C$ and $D$.
It turns out that initial data
$\mathcal E_\mathrm{p}(\theta)\equiv\mathcal E(0,\theta)=f_\mathrm{p}(\theta)+\mathrm{i} b_\mathrm{p}(\theta)$ of the
Ernst potential are equivalent to the initial data set consisting of $u$, $Q=Q_\mathrm{p}$, $Q_{,tt}$ at ${\mathcal H_\mathrm{p}}$.
Both sets are related via
\begin{eqnarray}\label{fp}
f_\mathrm{p} & = & -Q_\mathrm{p}^2\mathrm{e}^{u(0,\theta)}\sin^2\!\theta,\\
b_\mathrm{p} & = & b_A+2Q_\mathrm{p}(\cos\theta-1)-Q_\mathrm{p}^2\int_0^\theta
\mathrm{e}^{2u(0,\theta')}Q_{,tt}(0,\theta')\sin^3\!\theta'\,
\mathrm{d}\theta',
\end{eqnarray}
where $b_A=b(0,0)$ is an arbitrary integration constant.
\subsection{Conserved quantities\label{Sec:conserved}}
As a consequence of the symmetries of the Gowdy metric, there exist
{\em conserved} quantities, i.e.~integrals with respect to $\theta$
that are independent of the coordinate time $t$.
One of them is $J$, defined by
\begin{equation}\label{def_J}
J:=-\frac{1}{8}\int_0^\pi\frac{Q_{,t}(t,\theta)}
{\sin t}\,\mathrm{e}^{2u(t,\theta)}\sin^3\!\theta\,\mathrm{d}\theta = \mbox{constant}.
\end{equation}
As for the black hole angular momentum in the
corresponding axisymmetric and stationary black hole
spacetimes (cf.~discussion at the end of Sec.~\ref{sec:Intro}),
this quantity determines whether or not a regular future Cauchy horizon exists.
In fact, it exists if and only if $J\neq 0$ holds. Note that $J$ vanishes in polarized
Gowdy models, where we have $Q_{,t}\equiv 0$.
It turns out that $J$ can be read off directly from the Ernst potential
and its second $\theta$-derivative at the points $A$ and $B$ on ${\mathcal H_\mathrm{p}}$
(see Fig.~\ref{GS}),
\begin{equation}\label{J}
J = -\frac{1}{8Q_\mathrm{p}^2}(b_A-b_B-4Q_\mathrm{p}),\quad
Q_\mathrm{p} = -\frac{1}{2}b_{,\theta\theta}|_A
\end{equation}
where
\[
b_B = b(t=0,\theta=\pi).
\]
A detailed derivation of these formulas can be found in \cite{Hennig2009}.
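The first relation in \eref{J} is consistent with the $t\to0$ limit of \eref{def_J}, in which $Q_{,t}/\sin t\to Q_{,tt}(0,\theta)$, combined with the initial-data relation \eref{fp} for $b_\mathrm{p}$. A numerical sketch illustrates this; the smooth profiles for $u(0,\theta)$ and $Q_{,tt}(0,\theta)$ below are hypothetical and chosen only for illustration:

```python
import numpy as np

def trapz(y, x):
    # simple trapezoidal quadrature (avoids version-dependent numpy names)
    return float(np.sum(0.5*(y[1:] + y[:-1])*np.diff(x)))

# hypothetical smooth initial data on H_p
Qp  = 0.7                                # gauge constant Q = Q_p on H_p
bA  = 1.3                                # integration constant b(0,0)
u0  = lambda th: 0.2*np.cos(th)          # sample u(0,theta)
Qtt = lambda th: 0.5 + 0.1*np.cos(th)    # sample Q_{,tt}(0,theta)

th = np.linspace(0.0, np.pi, 20001)
I  = trapz(np.exp(2.0*u0(th))*Qtt(th)*np.sin(th)**3, th)

# b_B = b(0,pi) from the b_p relation in Eq. (fp)
bB = bA + 2.0*Qp*(np.cos(np.pi) - 1.0) - Qp**2*I

# J from the horizon formula (J) ...
J_horizon = -(bA - bB - 4.0*Qp)/(8.0*Qp**2)
# ... and from the t -> 0 limit of the conserved integral (def_J)
J_limit = -I/8.0

assert abs(J_horizon - J_limit) < 1e-10
```

The agreement holds for any choice of the sample profiles, since the two expressions are algebraically identical.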
\section{Potentials on $\mathcal A_1$, $\mathcal A_2$, and ${\mathcal H_\mathrm{f}}$\label{Sec:EP}}
\subsection{Ernst potential}
In the previous sections we have derived a formulation which permits
the direct translation to the situation in which the hyperbolic region
inside the event horizon of an axisymmetric and stationary black hole
(with possibly non-pure vacuum exterior, e.g.~with surrounding matter)
is considered, as was done in \cite{Ansorg2008,Ansorg2009,Hennig2009}.
In \cite{Ansorg2008} it has been demonstrated that a
specific soliton method (the {\em B\"acklund transformation}, see \ref{App})
can be used to write the Ernst potential $\mathcal E$ in terms of another Ernst
potential $\mathcal E_0$ which corresponds to a spacetime without a black hole, but
with a completely regular central vacuum region. Interestingly, the potential
$\mathcal E_0=\mathcal E_0(t,\theta)$ possesses specific symmetry conditions which
translate here into
\[
\begin{array}{lcll}
\mathcal E_0 (t, 0) &=& \mathcal E_0 (0 , t) &\quad \mbox{potential at $\mathcal A_1$},\\[2mm]
\mathcal E_0 (t, \pi) &=& \mathcal E_0 (0 , \pi-t) &\quad \mbox{potential at $\mathcal A_2$},\\[2mm]
\mathcal E_0 (\pi , \theta) &=& \mathcal E_0 (0 , \pi-\theta) &\quad \mbox{potential at ${\mathcal H_\mathrm{f}}$}.
\end{array}
\]
Hence the potential values at the boundaries $\mathcal A_1$, $\mathcal A_2$ and ${\mathcal H_\mathrm{f}}$ are
given explicitly in terms of those at ${\mathcal H_\mathrm{p}}$.
Now the B\"acklund transformation carries these dependencies over to the
corresponding original Ernst potential $\mathcal E$,
i.e. we obtain $\mathcal E$ at $\mathcal A_1$, $\mathcal A_2$ and ${\mathcal H_\mathrm{f}}$ completely in terms of the
initial data at ${\mathcal H_\mathrm{p}}$.
An alternative approach (see \cite{Ansorg2009,Hennig2009}) uses the \emph{inverse scattering method}.
In these papers the
potentials on $\mathcal A_1$, $\mathcal A_2$ and ${\mathcal H_\mathrm{f}}$ were obtained from the
investigation of an associated linear matrix problem. The integrability conditions of
this matrix problem are equivalent to the non-linear
field equations, see \ref{App}.
We may carry the corresponding procedure over to our considerations of Gowdy spacetimes.
Accordingly we are able to perform an explicit integration
of the linear problem along the
boundaries of the Gowdy square. Since the resulting solution
is closely related to the Ernst potential,
it provides us with the desired expressions
between the metric quantities on the four boundaries of the Gowdy square.
Note that in both approaches the axes $\mathcal A_1$ and $\mathcal A_2$ are considered first.
Starting at ${\mathcal H_\mathrm{p}}$ and using the theorem by Chru\'sciel
\cite{Chrusciel1990}, which ensures $H^4$-regularity of the metric inside the
Gowdy square
(i.e.~excluding only ${\mathcal H_\mathrm{f}}$), we derive first the Ernst potentials at $\mathcal A_1$ and $\mathcal A_2$ in terms of the values at ${\mathcal H_\mathrm{p}}$.
It turns out that for $J\neq 0$ these formulas can be extended continuously to the points $C$ and $D$ at which $\mathcal A_1$ and $\mathcal A_2$ meet ${\mathcal H_\mathrm{f}}$ (cf.~Fig.~\ref{GS}). Moreover, with the values at $C$ and $D$ it is possible to proceed to ${\mathcal H_\mathrm{f}}$, and in this way we eventually find an Ernst potential which is continuous along the entire boundary of the Gowdy square. As the theorem by Chru\'sciel
ensures unique solvability of the Einstein equations inside the Gowdy square, we conclude that the $H^4$-regularity of the Ernst potential
holds up to and including ${\mathcal H_\mathrm{f}}$ which therefore turns out to be an $H^4$-regular future Cauchy horizon.
The resulting expressions of the Ernst potentials at the boundaries
$\mathcal A_1$, $\mathcal A_2$ and ${\mathcal H_\mathrm{f}}$ read
\begin{eqnarray}
\fl\label{EA1}
\mathcal A_1: && \quad \mathcal E_1(x):=\mathcal E(t=\arccos x, \theta=0)\hspace{6.2mm}
= \frac{\mathrm{i}[b_A-2Q_\mathrm{p}(x-1)]\mathcal E_\mathrm{p}(x)+b_A^2}
{\mathcal E_\mathrm{p}(x)-\mathrm{i}[b_A+2Q_\mathrm{p}(x-1)]},\\
\fl\label{EA2}
\mathcal A_2: && \quad \mathcal E_2(x):=\mathcal E(t=\arccos(-x),\theta=\pi)
= \frac{\mathrm{i}[b_B-2Q_\mathrm{p}(x+1)]\mathcal E_\mathrm{p}(x)+b_B^2}
{\mathcal E_\mathrm{p}(x)-\mathrm{i}[b_B+2Q_\mathrm{p}(x+1)]},\\
\fl\label{EHf}
{\mathcal H_\mathrm{f}}: && \quad \mathcal E_\mathrm{f}(x):=\mathcal E(t=\pi,\theta=\arccos(-x))\hspace{0.67mm}
= \frac{a_1(x)\mathcal E_\mathrm{p}(x)+a_2(x)}{b_1(x)\mathcal E_\mathrm{p}(x)+b_2(x)},
\end{eqnarray}
where
\begin{equation}
\mathcal E_\mathrm{p}(x):=\mathcal E(t=0,\theta=\arccos x)
\end{equation}
denotes the Ernst potential on
${\mathcal H_\mathrm{p}}$ and $a_1$, $a_2$, $b_1$, and $b_2$ in \eref{EHf} are
polynomials in $x$, defined by
\begin{eqnarray}\label{a1_b2}
a_1 & = & \mathrm{i}\big[16Q_\mathrm{p}^2(1-x^2)+8Q_\mathrm{p}(b_A(x+1)+b_B(x-1))\nonumber\\
&& \qquad +(b_A-b_B)(b_A(x-1)^2-b_B(x+1)^2)\big],\\
a_2 & = & 8Q_\mathrm{p}[b_A^2(x+1)+b_B^2(x-1)]-4b_Ab_B(b_A-b_B)x,\\
\label{b1}
b_1 & = & 4(4Q_\mathrm{p}+b_B-b_A)x,\\
\label{b2}
b_2 & = & \mathrm{i}\big[4Q_\mathrm{p}(1-x^2)-b_A(1+x)^2+b_B(1-x)^2\big]
(4Q_\mathrm{p}+b_B-b_A).
\end{eqnarray}
A discussion of \eref{EHf} shows that $\mathcal E_\mathrm{f}$ is indeed always
regular
provided that the black hole angular momentum does not vanish, which in turn means that
$J\neq 0$, cf.~\eref{def_J}.
In order to prove this statement, we first note that both numerator and denominator on
the right hand side of \eref{EHf} are completely regular functions in terms of $x$, since
$a_1$, $a_2$, $b_1$, $b_2$ are polynomials in $x$ and the initial function $\mathcal E_\mathrm{p}$
is regular by assumption. Hence, an irregular behavior of the potential $\mathcal E_\mathrm{f}$ could
only be caused by a zero of the denominator. Consequently, we investigate whether the equation
\begin{equation}\label{denom}
b_1(x)\mathcal E_\mathrm{p}(x)+b_2(x)=0
\end{equation}
has solutions $x\in[-1,1]$.
The real part of \eref{denom} is given by
\begin{equation}\label{denomr}
4x(4Q_\mathrm{p}+b_B-b_A)f_\mathrm{p}(x)=0.
\end{equation}
Using \eref{fp} and \eref{J} together with our gauge $Q_\mathrm{p}\neq0$ and the assumption $J\neq0$
we find that \eref{denomr} has exactly the three zeros, $x=-1$, $x=0$ and $x=1$
(corresponding to $\theta=\pi$,
$\theta=\pi/2$ and $\theta=0$). Now, for $x=0$ the imaginary part of
\eref{denom} does not vanish, whereas for $x=\pm 1$ it does.
Thus we find that the only zeros of the
denominator in \eref{EHf} are located at the two axes ($x=\pm 1$).
As a matter of fact, the regular numerator of \eref{EHf} also vanishes
at $x=\pm1$, as can be derived in a similar manner. Consequently, we study the behavior of
$\mathcal E_\mathrm{f}$ at $x=\pm 1$ by means of L'H\^opital's rule. As both numerator and denominator in \eref{EHf} have non-vanishing
values of the derivative with respect to $x$ for $x=\pm 1$, we conclude that the Ernst potential is regular everywhere
whenever $J\neq0$ holds.
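The statements about the real and imaginary parts of the denominator can be confirmed symbolically. The following sympy sketch (illustrative only) treats $f_\mathrm{p}$ and $b_\mathrm{p}$ pointwise as real symbols and uses the axis values $b_\mathrm{p}(x=1)=b_A$ and $b_\mathrm{p}(x=-1)=b_B$, which follow from $b_A=b(0,0)$, $b_B=b(0,\pi)$ with $x=\cos\theta$:

```python
import sympy as sp

x, Qp, bA, bB, fp, bp = sp.symbols('x Qp bA bB fp bp', real=True)

# polynomials b_1, b_2 of Eqs. (b1), (b2)
b1 = 4*(4*Qp + bB - bA)*x
b2 = sp.I*(4*Qp*(1 - x**2) - bA*(1 + x)**2 + bB*(1 - x)**2)*(4*Qp + bB - bA)

Ep = fp + sp.I*bp              # Ernst potential on H_p, taken pointwise
denom = sp.expand(b1*Ep + b2)

# real part reproduces Eq. (denomr)
assert sp.simplify(sp.re(denom) - 4*x*(4*Qp + bB - bA)*fp) == 0

# imaginary part vanishes on the axes x = +1 (b_p = b_A) and x = -1 (b_p = b_B)
im = sp.im(denom)
assert sp.simplify(im.subs({x: 1, bp: bA})) == 0
assert sp.simplify(im.subs({x: -1, bp: bB})) == 0
```

This makes explicit that the only candidate zeros of the denominator in \eref{EHf} are the axis points $x=\pm1$.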
Consider now the limit $J\to0$ for which the expression
\[
4Q_\mathrm{p}+b_B-b_A
\]
vanishes, cf.~\eref{J}. As this term appears as a factor in both $b_1$ and $b_2$ (cf. \eref{b1},\eref{b2}),
we find that the denominator in \eref{EHf} vanishes identically.
The numerator, however, remains non-zero, which means that the Ernst potential
diverges on the entire future boundary $t=\pi$, $0\le\theta\le\pi$.
We conclude that ${\mathcal H_\mathrm{f}}$ becomes singular in the limit $J\to0$.
This divergent behavior of the Ernst potential corresponds to the
formation of a (scalar) curvature singularity at ${\mathcal H_\mathrm{f}}$. In order to illustrate
this property, we calculate the Kretschmann scalar at the
point $C$ on ${\mathcal H_\mathrm{f}}$ (see Fig.~\ref{GS}).
Using the axis conditions discussed in Sec.~\ref{Sec:coords} and the
Einstein equations, we obtain
\begin{equation}
R_{ijkl}R^{ijkl}|_C
=12\left[\mathrm{e}^{-2u}(1+2u_{,tt})^2-Q_{,tt}^2\right]_C.
\end{equation}
In terms of the Ernst potential, this expression reads
(cf. Eq.~\eref{FA1} below)
\begin{equation}
R_{ijkl}R^{ijkl}|_C
= \frac{1}{3}\left[(f_{,tt}+f_{,tttt})^2-(b_{,tt}+b_{,tttt})^2\right]_C.
\end{equation}
Now we can use \eref{EA1} to derive a formula that contains only the
initial data on the past horizon ${\mathcal H_\mathrm{p}}$. Together with \eref{J} we get
\begin{equation}\label{Kret}\fl
R_{ijkl}R^{ijkl}|_C = -\frac{3}{256Q_\mathrm{p}^8
J^6}\left[(16Q_\mathrm{p}^4J^2-4b_{,xx}Q_\mathrm{p}^2J-f_{,x}^2)^2
-16Q_\mathrm{p}^4J^2(f_{,xx}-2f_{,x})^2\right]_B,
\end{equation}
where $x=\cos\theta$.
Note that the numerator is well-defined and bounded for our
$H^4$-regular metric, a fact which is ensured by the validity of the
Einstein equations near ${\mathcal H_\mathrm{p}}$.
Equation \eref{Kret} indicates that the Kretschmann scalar diverges as
$J^{-6}$ in the limit $J\to0$.
In fact, as we choose $Q_\mathrm{p}\neq0$ (see Sec.~\ref{Subsec:coords}),
and furthermore $f_{,x}\neq0$ holds (because $2\pi f_{,x}=-Q_\mathrm{p}^2A_\mathrm{p}$
where $A_\mathrm{p}$, $0<A_\mathrm{p}<\infty$, is the horizon area
of ${\mathcal H_\mathrm{p}}$, see Sec.~\ref{Sec:formula}), we conclude that
$f_{,x}^4$ is the dominating term in the numerator of \eref{Kret} for
sufficiently small $J$.
Hence the Kretschmann scalar indeed diverges as $J^{-6}$ in the limit $J\to0$.
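The statement that $f_{,x}^4$ dominates the numerator of \eref{Kret} for small $J$ can be confirmed symbolically; the following sketch (illustrative only, with $f_{,x}$, $f_{,xx}$, $b_{,xx}$ as free real symbols) checks the $J\to0$ limit of the numerator:

```python
import sympy as sp

J, Qp, fx, fxx, bxx = sp.symbols('J Qp fx fxx bxx', real=True)

# numerator of Eq. (Kret), with fx = f_{,x}, fxx = f_{,xx}, bxx = b_{,xx}
num = (16*Qp**4*J**2 - 4*bxx*Qp**2*J - fx**2)**2 \
      - 16*Qp**4*J**2*(fxx - 2*fx)**2

# only f_{,x}^4 survives as J -> 0, so R_{ijkl}R^{ijkl} ~ J**(-6)
assert sp.simplify(num.subs(J, 0) - fx**4) == 0
```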
\subsection{Metric potentials}
From the Ernst potentials $\mathcal E_1=f_1+\mathrm{i} b_1$, $\mathcal E_2=f_2+\mathrm{i} b_2$,
$\mathcal E_\mathrm{f}=f_\mathrm{f}+\mathrm{i} b_\mathrm{f}$ in \eref{EA1}, \eref{EA2}, \eref{EHf} we may
calculate the metric potentials $M$, $Q$ and $u$ on the boundaries of
the Gowdy square. Using \eref{Re}, \eref{Im}, \eref{a}, \eref{BC1},
\eref{BC2}, \eref{BC3} we obtain
\begin{eqnarray}
\label{FA1}
\mathcal A_1: && \mathrm{e}^{M_1}=\mathrm{e}^{u_1}=-\frac{\sin^2\! t}{f_1},\quad
Q_1 = \frac{b_{1,t}}{2\sin t},\\
\mathcal A_2: && \mathrm{e}^{M_2}= \mathrm{e}^{u_2}=-\frac{\sin^2\! t}{f_2} ,\quad
Q_2 = -\frac{b_{2,t}}{2\sin t},\quad\\
{\mathcal H_\mathrm{f}}: && \mathrm{e}^{M_\mathrm{f}}=-\frac{f_{,\theta\theta}^{\ 2}|_C}{4Q_\mathrm{f}^2}
\frac{\sin^2\!\theta}{f_\mathrm{f}},\quad
Q=Q_\mathrm{f},\quad
\mathrm{e}^{u_\mathrm{f}} = -\frac{f_\mathrm{f}}{Q_\mathrm{f}^2\sin^2\!\theta},
\end{eqnarray}
where
\begin{eqnarray}
Q_\mathrm{f} & = &
\frac{b_A-b_B+4Q_\mathrm{p}}{b_A-b_B-4Q_\mathrm{p}}\,Q_\mathrm{p}.
\end{eqnarray}
Note that $Q_\mathrm{f} \neq 0$ in our gauge (cf.~\eref{J}):
\begin{equation*}\fl
b_A-b_B+4Q_\mathrm{p}
= (b_A-b_B-4Q_\mathrm{p})+8Q_\mathrm{p}=-8Q_\mathrm{p}^2J+8Q_\mathrm{p}
= 8Q_\mathrm{p}(1-JQ_\mathrm{p})\neq 0
\end{equation*}
in accordance with the discussion in Sec.~\ref{Subsec:coords} where the
gauge freedom was used to assure $0\neq Q_\mathrm{p}\neq 1/J$.
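This gauge argument is a one-line algebraic identity, which can be confirmed with a short sympy sketch (illustrative only):

```python
import sympy as sp

Qp, bA, bB = sp.symbols('Qp bA bB', real=True)

# J from the first relation in Eq. (J)
J = -(bA - bB - 4*Qp)/(8*Qp**2)

# b_A - b_B + 4 Q_p = 8 Q_p (1 - J Q_p), nonzero for Q_p != 0, Q_p != 1/J
assert sp.simplify((bA - bB + 4*Qp) - 8*Qp*(1 - J*Qp)) == 0
```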
Furthermore, using \eref{EA1}-\eref{EHf} and our regularity assumptions for
the initial data, it is straightforward to show that $(-\sin^2\!t/f_1)$,
$(-\sin^2\!t/f_2)$ and $(-\sin^2\!\theta/f_\mathrm{f})$ are regular and positive
functions on the entire boundaries $\mathcal A_1$, $\mathcal A_2$ and ${\mathcal H_\mathrm{f}}$,
respectively. Moreover, the terms $(b_{1,t}/\sin t)$ and $(b_{2,t}/\sin t)$
are regular on $\mathcal A_1$ and $\mathcal A_2$, respectively.
Consequently, the above boundary values for the
metric potentials $M$, $u$ and $Q$ are regular, too.
\section{A universal formula for the horizon areas\label{Sec:formula}}
In \cite{Ansorg2008} a relation between the black hole angular momentum
and the two horizon areas of the outer event and inner Cauchy horizons
was found. This relation
emerged from the explicit expressions of the inner Cauchy horizon potentials in terms of those
at the event horizon. Translated to the case of general
$S^2\times S^1$ Gowdy spacetimes,
this relation is given by
\begin{equation}\label{area_formula}
A_\mathrm{p} A_\mathrm{f}=(8\pi J)^2,
\end{equation}
where the areas $A_\mathrm{p}$
and $A_\mathrm{f}$ of the Cauchy horizons ${\mathcal H_\mathrm{p}}$ and ${\mathcal H_\mathrm{f}}$ are defined
as integrals over the horizons (in a slice
$\delta=\textrm{constant}$),
\begin{equation}
A_\mathrm{p/f} =
\int\limits_{S^2}\sqrt{g_{\theta\theta}g_{\varphi\varphi}}\,\mathrm{d}\theta\mathrm{d}\varphi =
2\pi\int\limits_0^\pi\mathrm{e}^{\frac{M+u}{2}}\big|_{\mathcal H_\mathrm{p/f}}
\sin\theta\,\mathrm{d}\theta=4\pi\mathrm{e}^u|_{A/C}.
\end{equation}
\section{Discussion\label{Sec:Disc}}
In this paper we have analyzed general
$S^2\times S^1$ Gowdy models with a past
Cauchy horizon ${\mathcal H_\mathrm{p}}$. As any such spacetime can be related to a
corresponding axisymmetric and stationary black hole solution,
considered between outer event and inner Cauchy horizons, the
results on the regularity of the interior of such black holes
(obtained in \cite{Ansorg2008,Ansorg2009,Hennig2009}) can be carried over to the
Gowdy spacetimes treated here. In particular, two soliton methods have proved to be useful:
(i) the \emph{B\"acklund transformation} and (ii) the \emph{inverse scattering method}.
Both methods imply explicit expressions for the metric potentials on the boundaries
$\mathcal A_1$, $\mathcal A_2$, ${\mathcal H_\mathrm{f}}$ of the Gowdy square in terms of the initial values at ${\mathcal H_\mathrm{p}}$.
Moreover we obtain statements on existence and regularity of a future Cauchy
horizon as well as a universal relation for the horizon areas. These
results are summarized in the following.\\
{\noindent\bf Theorem 1.}
{\it Consider an $S^2\times S^1$ Gowdy spacetime with a past
Cauchy horizon~${\mathcal H_\mathrm{p}}$,
where the metric potentials $M$, $u$ and $Q$ appearing in the line element
\eref{LE2} are $H^4$-functions and the time derivatives
$H^3$-functions of the adapted coordinate $\theta$ on all slices
$t=\textrm{constant}$ in a closed neighborhood $N:=[0,t_0]\times[0,\pi]$,
$t_0\in(0,\pi)$, of ${\mathcal H_\mathrm{p}}$. In addition, suppose $M,Q,u\in C^2(N)$.
Then this spacetime
possesses an $H^4$-regular future Cauchy horizon ${\mathcal H_\mathrm{f}}$ if and only if
the conserved quantity $J$ (cf.~\eref{def_J}) does not vanish.
In the limit $J\to 0$, the future Cauchy horizon transforms into a
curvature singularity. Moreover, for $J\neq 0$ the universal relation
\begin{equation}\label{unirel}
A_{\rm p} A_{\rm f} = (8\pi J)^2
\end{equation}
holds, where $A_{\rm p}$ and $A_{\rm f}$ denote the areas of past and future
Cauchy horizons.}\\
\noindent\emph{Remark.} Note that the statements in Thm.~1 can be
generalized to $S^2\times S^1$ Gowdy spacetimes with additional electromagnetic
fields, see \cite{Ansorg2009,Hennig2009}. The proof
utilizes a more general linear matrix problem in which the Maxwell field
is incorporated. Again the corresponding integrability conditions are equivalent
to the coupled system of field equations that describe the Einstein-Maxwell field
in electrovacuum with two Killing vectors (associated to the two Gowdy symmetries).
It turns out that apart from $J$ a second conserved quantity $Q$
becomes relevant. The corresponding counterpart of this quantity
in Einstein-Maxwell black hole spacetimes describes the electric charge of the black hole.
For Gowdy spacetimes we conclude that a regular future Cauchy horizon exists if and only
if $J$ and $Q$ do not vanish simultaneously. Moreover, we find that
Eq.~\eref{unirel} generalizes to $A_\mathrm{p} A_\mathrm{f}=(8\pi J)^2 +(4\pi Q^2)^2$.\\
With the above theorem we provide a long-outstanding
result on the existence of a regular future Cauchy horizon in $S^2\times S^1$ Gowdy spacetimes.
We note that the soliton methods being utilized in order to derive our conclusions
are not widely used in previous studies of this kind. Therefore we believe
that these techniques might enhance further investigations in the
realm of Gowdy cosmologies.
\ack
We would like to thank Florian Beyer and Piotr T. Chru\'sciel
for many valuable discussions and
John Head for commenting on the manuscript.
This work was supported by the Deutsche
For\-schungsgemeinschaft (DFG) through the
Collaborative Research Centre SFB/TR7
``Gravitational wave astronomy''.
In recent years, there have been quite a few measurements of
quantities in $B$ decays which differ from the predictions of the
Standard Model (SM) by $\sim2\sigma$. For example, in $B \to \pi K$, the
SM has some difficulty in accounting for all the experimental
measurements \cite{piKupdate}. The measured indirect (mixing-induced)
CP asymmetry in some $ b \to s$ penguin decays is found not to be
identical to that in $B_d^0\to J/\psi K_{\sss S}$ \cite{btos-1,btos-2,btos-3},
counter to the expectations of the SM. While the SM predicts that the
indirect CP asymmetry in $\bsbar\to J/\psi \phi$ should be $\simeq 0$, the
measurement of this quantity by the CDF and D\O\ collaborations shows
a deviation from the SM \cite{cdf-d0-note}. One naively expects the
ratio of transverse and longitudinal polarizations of the decay
products in $B\to\phi K^*$ to be $f_{\sss T}/f_{\sss L} \ll 1$, but it is observed
that $f_{\sss T}/f_{\sss L} \simeq 1$ \cite{phiK*-babar,phiK*-belle}. It may be
possible to explain this value of $f_{\sss T}/f_{\sss L}$ within the SM, but this is
not certain. Finally, the recent observation of the anomalous dimuon
charge asymmetry by the D\O\ collaboration \cite{D0-dimuon} also
points towards some new physics in $B_s$ mixing that affects the
lifetime difference and mixing phase involved therein (for example,
see Ref.~\cite{dgs-rc}). Though none of the measurements above show a
strong enough deviation from the SM to claim positive evidence for new
physics (NP), they are intriguing since (i) the effects are seen in
several different $B$ decay channels, (ii) they rely on a number of independent
observables, and (iii) they all involve $ b \to s$ transitions.
A further hint has recently been seen in the leptonic decay channel:
in the exclusive decay $\bdbar\to \kstar \mu^+ \mu^-$, the forward-backward asymmetry
($A_{FB}$) has been found to deviate somewhat from the predictions of
the SM
\cite{Belle-oldKstar,Belle-newKstar,BaBar-Kmumu,BaBar-Kstarmumu}.
This is interesting since it is a CP-conserving process, whereas most
of the other effects involve CP violation. Motivated by this
tantalizing hint of NP in $\bdbar\to \kstar \mu^+ \mu^-$, we explore the
consequences of such NP in related decays. We do not restrict
ourselves to any particular model, but work in the framework of
effective operators with different Lorentz structures.
If NP affects $\bdbar\to \kstar \mu^+ \mu^-$, it must be present in the decay
$ b \to s \mu^+ \mu^-$, and will affect the related decays $\bsbar \to \mu^+ \mu^-$, $\bdbar \to X_s \mu^+ \mu^-$,
$\bsbar \to \mu^+ \mu^- \gamma$, and $\bdbar \to {\bar K} \mu^+ \mu^-$. The analyses of these decays in the
context of the SM as well as in some NP models have been performed in
the literature: $\bsbar \to \mu^+ \mu^-$ \cite{Skiba:1992mg, Choudhury:1998ze,
Huang:2000sm, Bobeth:2001sq, Huang:2002ni, Chankowski:2003wz,
Alok:2005ep, blanke, Alok:2009wk, Buras:2010pi, Golowich:2011cx},
$\bdbar \to X_s \mu^+ \mu^-$
\cite{Ali:1991is,Buras:1994dj,Ali:1996bm,Huang:1998vb,Fukae:1998qy,bmu,Ali:2002jg,
Huber:2007vv,Lee:2006gs,Ligeti:2007sn}, $\bsbar \to \mu^+ \mu^- \gamma$
\cite{Eilam:1996vg,Aliev:1996ud,Geng:2000fs, Dincer:2001hu,
Kruger:2002gf, Melikhov:2004mk, Melikhov:2004fs,
Alok:2006gp,Balakireva:2009kn}, $\bdbar \to {\bar K} \mu^+ \mu^-$
\cite{Ali:2002jg,ali-00,Aliev:2001pq,Bensalem:2002ni, Bobeth:2007dw,
Alok:2008wp,Charles,Beneke},
$\bdbar\to \kstar \mu^+ \mu^-$\cite{Kruger:2000zg,Beneke:2001at, Yan:2000dc,
Aliev:2003fy,Aliev:2004hi,kruger-matias,Lunghi:2006hc,HHM,Egede:2008uy,
Altmannshofer:2008dz,AFBNP,Soni:2010xh,hill1,Lunghi:2010tr,hill2,aoife}.
Correlations between some of these modes have been studied in
Refs.~\cite{Hiller:2003js,Alok:2008aa,Alok:2008hh}.
In this paper, we consider the addition of NP vector-axial vector
(VA), scalar-pseudoscalar (SP), and tensor (T) operators that
contribute to $ b \to s \mu^+ \mu^-$, and compute their effects on the above
decays. Our aim here is not to obtain precise predictions, but rather
to obtain an understanding of how the NP affects the observables, and
to establish which Lorentz structure(s) can provide large deviations
from the SM predictions. Some of these effects have already been
examined by some of us: for example, new VA and SP operators in
$\bsbar \to \mu^+ \mu^-$ \cite{Alok:2005ep}, new VA and SP operators in
$\bsbar \to \mu^+ \mu^- \gamma$ \cite{Alok:2006gp}, the correlation between $\bsbar \to \mu^+ \mu^-$
and $\bdbar \to {\bar K} \mu^+ \mu^-$ with SP operators \cite{Alok:2008aa,Alok:2008hh}, large
forward-backward asymmetry in $\bdbar \to {\bar K} \mu^+ \mu^-$ from T operators
\cite{Alok:2008wp}, and the contribution of all Lorentz structures to
$\bdbar\to \kstar \mu^+ \mu^-$, with a possible explanation of the $A_{FB}$ anomaly
\cite{AFBNP}. Here we perform a combined study of all of these decay
modes with all the Lorentz structures, consolidating and updating some
of the earlier conclusions, and adding many new results and insights.
Such a combined analysis, performed here for the first time, is
crucial for obtaining a consistent picture of the bounds on NP and the
possible effect of NP on the observables of interest. While
observables like the differential branching ratio (DBR) and
$A_{FB}(q^2)$ by themselves are sensitive to NP, we also examine the
correlations between them in the context of NP Lorentz structures.
A full angular distribution of $\bdbar\to \kstar \mu^+ \mu^-$ allows us access to many
independent observables, and hence to multiple avenues for probing NP.
We present here for the first time the full angular distribution,
including all the NP Lorentz structures, for this decay mode. This
leads to the identification of observables that could be significantly
influenced by specific Lorentz structures of NP. In addition to the
DBR and $A_{FB}$, we also examine the longitudinal polarization
fraction $f_L$ and the angular asymmetry $A_T^{(2)}$, introduced
recently in Ref.~\cite{kruger-matias}. We further analyze the
longitudinal-transverse asymmetry $A_{LT}$, which, as we will argue,
has very small hadronic uncertainties.
Hadronic uncertainties are often the main source of error in the
calculation of SM predictions of a quantity, and make the positive
identification of NP rather difficult. In this paper, for $\bdbar \to {\bar K} \mu^+ \mu^-$
we use the form factors from light-cone sum rules. For $\bdbar\to \kstar \mu^+ \mu^-$,
we use the form factors obtained from QCD factorization at low $q^2$,
and those from light-cone sum rules at high $q^2$. The latest
next-to-leading order (NLO QCD) corrections \cite{Beylich:2011aq} have
not been included. These corrections would affect the central values
of the SM predictions to a small extent, while also decreasing the
renormalization-scale uncertainty. However, since our primary interest
is looking for observables for which the NP effects are large, a LO
analysis is sufficient at this stage. In our figures, we display
bands for the SM predictions that include the form-factor
uncertainties as claimed by the respective authors.
In addition to the form-factor uncertainties, the SM prediction bands
also include the uncertainties due to quark masses,
Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and meson decay
constants. In our figures, these bands are overlaid with some
examples of the allowed values of these observables when NP
contributions are included. This allows the scaling of these
uncertainties to be easily visualized. It turns out that in many
cases, the results with the NP can be significantly different from
those without the NP, even taking into account inflated values for the
hadronic uncertainties. We identify and emphasize such observables.
We also show that the hadronic uncertainties in several of these
observables are under control, especially when the invariant mass of
the muon pair is small and one can use the limit of large-energy
effective theory (LEET). This makes such observables excellent probes
of new physics. Also, since all the observables are shown as
functions of $q^2$, we have the information not just about the
magnitudes of the observables, but also about their shape as a
function of $q^2$, where some of the uncertainties are expected to
cancel out.
In this paper, we restrict ourselves to real values for all the NP
couplings, and study only the CP-conserving observables\footnote{The
CP-violating observables, with complex values of the couplings, are
treated in the companion paper \cite{CPviol}.}. In
section~\ref{bsmumuops}, we examine the various SM and NP $ b \to s \mu^+ \mu^-$
operators, and give the current constraints on the NP couplings. The
effects of the NP operators on the observables of the decays are
discussed in the following sections: $\bsbar \to \mu^+ \mu^-$ (Sec.~\ref{Bsmumu}),
$\bdbar \to X_s \mu^+ \mu^-$ (Sec.~\ref{BXsmumu}), $\bsbar \to \mu^+ \mu^- \gamma$
(Sec.~\ref{Bsmumugamma}), $\bdbar \to {\bar K} \mu^+ \mu^-$ (Sec.~\ref{BKmumu}), and
$\bdbar\to \kstar \mu^+ \mu^-$ (Sec.~\ref{BKstarmumu}). Our notation in these sections
clearly distinguishes the contributions from VA, SP and T operators
and their interference terms, which offers many insights into their
impact on modifying the observables. We give the details of the
calculations involved in sections \ref{BXsmumu}--\ref{BKstarmumu} in
the appendices \ref{app-bxsmumu}-\ref{app-bkstarmumu}, respectively,
for the sake of completeness and in order to have a clear consistent
notation for this combined analysis. In Sec.~\ref{summary}, we
summarize our findings and discuss their implications. In particular,
we point out the measurements which will allow one to distinguish
among the different classes of NP operators, and thus clearly identify
which type of new physics is present.
\section{\boldmath $ b \to s \mu^+ \mu^-$ Operators
\label{bsmumuops}}
\subsection{Standard Model and New Physics: effective Hamiltonians}
Within the SM, the effective Hamiltonian for the quark-level
transition $ b \to s \mu^+ \mu^-$ is
\begin{eqnarray}
{\cal H}_{\rm eff}^{SM} &=& -\frac{4 G_F}{\sqrt{2}}
\, V_{ts}^* V_{tb} \, \Bigl\{ \sum_{i=1}^{6} {C}_i (\mu) {\cal O}_i (\mu)
+ C_7 \,\frac{e}{16 \pi^2}\, [\bar{s}
\sigma_{\mu\nu} (m_s P_L + m_b P_R) b] \,
F^{\mu \nu} \nonumber \\
&& +\, C_9 \,\frac{\alpha_{em}}{4 \pi}\, (\bar{s}
\gamma^\mu P_L b) \, \bar{\mu} \gamma_\mu \mu + C_{10}
\,\frac{\alpha_{em}}{4 \pi}\, (\bar{s} \gamma^\mu P_L b) \, \bar{\mu}
\gamma_\mu \gamma_5
\mu \, \Bigr\} ~,
\label{HSM}
\end{eqnarray}
where $P_{L,R} = (1 \mp \gamma_5)/2$. The operators ${\cal O}_i$
($i=1,\ldots,6$) correspond to the $P_i$ in Ref.~\cite{bmu}, and $m_b =
m_b(\mu)$ is the running $b$-quark mass in the $\overline{\rm MS}$
scheme. We use the SM Wilson coefficients as given in
Ref.~\cite{Altmannshofer:2008dz}. In the magnetic dipole operator
with the coefficient $C_7$, we neglect the term proportional to $m_s$.
The operators $O_i$, $i=1,\ldots,6$, can contribute indirectly to $ b \to s \mu^+ \mu^-$
and their effects can be included in an effective Wilson coefficient as
\cite{Altmannshofer:2008dz}
\begin{eqnarray}
\label{effecWC1}
C^{\rm eff}_9 &\!=\!& C_9(m_b) + h(z,\hat{m_c}) \left(\frac{4}{3} C_1 + C_2 + 6\, C_3 + 60\,
C_5 \right) \nonumber\\
&&-~\frac{1}{2} h(z,\hat{m_b}) \left(7 C_3 + \frac{4}{3} C_4 + \, 76
C_5 +
\frac{64}{3} C_6 \right) \\
&&-~\frac{1}{2} h(z,0) \left( C_3 + \frac{4}{3} C_4 + 16\, C_5 +
\frac{64}{3} C_6
\right) + \frac{4}{3} C_3 + \frac{64}{9} C_5 + \frac{64}{27} C_6 ~. \nonumber
\end{eqnarray}
Here $z \equiv q^2/m_b^2$, and $\hat{m}_q \equiv m_q/m_b$ for all
quarks $q$. The function $h(z,\hat m)$ represents the one-loop
correction to the four-quark operators $O_1$-$O_6$ and is given by
\cite{Buras:1994dj,Altmannshofer:2008dz}
\begin{eqnarray}
\label{effecWC}
h(z,\hat m) & = & -\frac{8}{9}\ln\frac{m_b}{\mu_b} - \frac{8}{9}\ln \hat m +
\frac{8}{27} + \frac{4}{9} x \nonumber \\
& & - \frac{2}{9} (2+x) |1-x|^{1/2}
\left\{\begin{array}{ll}
\left( \ln\left| \frac{\sqrt{1-x} + 1}{\sqrt{1-x} - 1}\right| - i\pi \right), &
\mbox{for } x \leq 1 ~, \\
2 \arctan \frac{1}{\sqrt{x-1}}, & \mbox{for } x > 1 ~,
\end{array}
\right.
\end{eqnarray}
where $x \equiv {4\hat m^2}/{z}$. In the numerical analysis, the
renormalization scale $\mu_b$ is varied between $m_b/2$ and
$2m_b$. Note that in the high-$q^2$ region one can perform an operator
product expansion (OPE) in $1/Q$ with $Q = (m_b, \sqrt{q^2})$
\cite{grin1, grin2}. Numerically the results of Refs.~\cite{grin1,
grin2} differ little from those in Eq.~(\ref{effecWC1}) and so we
use the above expression for the entire range of $q^2$. An analysis
of $ b \to s \mu^+ \mu^-$ where the OPE in the high-$q^2$ region is used can be
found in Refs.~\cite{hill1, hill2}.
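For concreteness, the loop function $h(z,\hat m)$ of Eq.~(\ref{effecWC}) can be evaluated numerically. The sketch below (in Python) is valid only for $\hat m > 0$; the $\hat m \to 0$ case requires the analytic limit and is not handled here. The default values $m_b = \mu_b = 4.8$ GeV are illustrative assumptions.

```python
import math

def h(z, mhat, mb=4.8, mu_b=4.8):
    """One-loop function h(z, mhat) of the four-quark operators, for mhat > 0.

    z = q^2/m_b^2 and mhat = m_q/m_b; mb and mu_b (renormalization scale)
    enter only through the constant ln(m_b/mu_b) term.  Defaults are
    illustrative assumptions, not fitted inputs.
    """
    x = 4.0 * mhat**2 / z
    re = (-8.0 / 9.0) * math.log(mb / mu_b) - (8.0 / 9.0) * math.log(mhat) \
         + 8.0 / 27.0 + (4.0 / 9.0) * x
    pref = -(2.0 / 9.0) * (2.0 + x) * math.sqrt(abs(1.0 - x))
    if x <= 1.0:
        # above the q qbar threshold (q^2 > 4 m_q^2): h develops an
        # imaginary part, Im h = -pref * pi
        s = math.sqrt(1.0 - x)
        return complex(re + pref * math.log(abs((s + 1.0) / (s - 1.0))),
                       -pref * math.pi)
    # below threshold: h is real
    return complex(re + pref * 2.0 * math.atan(1.0 / math.sqrt(x - 1.0)), 0.0)
```

For the charm loop ($\hat m_c \approx 0.29$) this reproduces the expected analytic structure: $h$ is real below the $c\bar c$ threshold and acquires a positive imaginary part above it.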
We now add new physics to the effective Hamiltonian for $ b \to s \mu^+ \mu^-$,
so that it becomes
\begin{equation}
{\cal H}_{\rm eff}( b \to s \mu^+ \mu^-) = {\cal
H}_{\rm eff}^{SM} + {\cal H}_{\rm eff}^{VA} + {\cal H}_{\rm eff}^{SP} +
{\cal H}_{\rm eff}^{T} ~,
\label{NP:effHam}
\end{equation}
where ${\cal H}_{\rm eff}^{SM}$ is given by Eq.~(\ref{HSM}), while
\begin{eqnarray}
{\cal H}_{\rm eff}^{VA} &=& - \frac{4 G_F}{\sqrt{2}} \,
\frac{\alpha_{em}}{4\pi} \, V_{ts}^* V_{tb} \,
\Bigl\{ R_V \, (\bar{s} \gamma^\mu P_L b)
\, \bar{\mu} \gamma_\mu \mu + R_A \, (\bar{s} \gamma^\mu P_L b)
\, \bar{\mu} \gamma_\mu \gamma_5 \mu \nonumber \\
&& \hskip3.0 truecm +~R'_V \, (\bar{s} \gamma^\mu P_R b) \,
\bar{\mu} \gamma_\mu \mu + R'_A \, (\bar{s} \gamma^\mu P_R b)
\, \bar{\mu} \gamma_\mu \gamma_5\mu \Bigr\} ~, \\
{\cal H}_{\rm eff}^{SP} &=& - \frac{4G_F}{\sqrt{2}} \,
\frac{\alpha_{em}}{4\pi}\, V_{ts}^* V_{tb} \,
\Bigl\{R_S ~(\bar{s} P_R b) ~\bar{\mu}\mu +
R_P ~(\bar{s} P_R b) ~ \bar{\mu}\gamma_5 \mu \nonumber\\
&& \hskip3.0 truecm +~R'_S ~(\bar{s} P_L b) ~\bar{\mu}\mu +
R'_P ~(\bar{s} P_L b) ~ \bar{\mu}\gamma_5 \mu \Bigr\} \;, \\
{\cal H}_{\rm eff}^{T} &=& -\frac{4 G_F}{\sqrt{2}} \,
\frac{\alpha_{em}}{4\pi}\, V_{ts}^* V_{tb} \,
\Bigl\{C_T (\bar{s} \sigma_{\mu \nu } b)
\bar{\mu} \sigma^{\mu\nu}\mu + i C_{TE} (\bar{s} \sigma_{\mu \nu } b)
\bar{\mu} \sigma_{\alpha \beta } \mu ~\epsilon^{\mu
\nu \alpha \beta} \Bigr\}
\end{eqnarray}
are the new contributions. Here, $R_V, R_{A}, R_V', R_A', R_S, R_P,
R_S', R_P', C_{T}$ and $C_{TE}$ are the NP effective couplings. We do
not consider NP in the form of the $O_7 =\bar s\sigma^{\alpha\beta}
P_R b \, F_{\alpha\beta}$ operator or its chirally-flipped counterpart
$O_7^\prime= \bar s\sigma^{\alpha\beta} P_L b \,F_{\alpha\beta}$.
This is because there has been no hint of NP in the radiative decays
${\bar B} \to X_s \gamma$ and ${\bar B} \to {\bar K}^{(*)} \gamma$ \cite{ali-00}, which
imposes strong constraints on $|C_7^{\rm eff}|$. This by itself does
not rule out the possibility of a flipped-sign $C_7^{\rm eff}$
scenario. However this solution can be ruled out at 3$\sigma$ from
the decay rate of ${\bar B}\to X_s\ell^+\ell^-$ if there are no NP
effects in $C_9$ and $C_{10}$~\cite{GHM}. Thus, NP effects
exclusively in $C_7$ cannot provide large deviations from the SM. The
impact of $O_7^\prime$ on the forward-backward asymmetry in
$\bdbar\to \kstar \mu^+ \mu^-$, together with other observables, was studied in
Ref.~\cite{Egede:2008uy}.
Note that the operators with coefficients $R_V$ and $R_A$ have the same
Lorentz structure as those in the SM involving $C_9$ and $C_{10}$,
respectively [see Eq.~(\ref{HSM})], so that any measurement will be
sensitive only to the combinations $(C_9+R_V)$ or $(C_{10}+R_A)$. For
simplicity, in our numerical analysis of the observables of various
decays, these couplings are taken to be real. As a consequence,
the results in this paper would be the same if the
corresponding CP-conjugate decays were considered.
However, for completeness, the expressions allow for a
complex-coupling analysis.
When calculating the transition amplitudes, for the leptonic part
we use the notation
\begin{equation}
\begin{tabular}{ll}
$L^\mu \equiv \langle \mu^+(p_+) \mu^-(p_-) |
\bar\mu \gamma^\mu \mu | 0 \rangle$ , &
$L^{\mu 5} \equiv \langle \mu^+(p_+) \mu^-(p_-) |
\bar\mu \gamma^\mu \gamma^5 \mu | 0 \rangle$ , \\
$L \equiv \langle \mu^+(p_+) \mu^-(p_-) |
\bar\mu \mu | 0 \rangle$ , &
$L^{5} \equiv \langle \mu^+(p_+) \mu^-(p_-) |
\bar\mu \gamma^5 \mu | 0 \rangle$ , \\
$L^{\mu\nu} \equiv \langle \mu^+(p_+) \mu^-(p_-) |
\bar\mu \sigma^{\mu\nu} \mu | 0 \rangle$ . & \\
\end{tabular}
\label{Ldefs}
\end{equation}
\subsection{Constraints on NP couplings}
\label{constraints}
The constraints on the NP couplings in $ b \to s \mu^+ \mu^-$ come mainly from the
upper bound on the branching ratio $B(\bsbar \to \mu^+ \mu^-)$ and the measurements
of the total branching ratios $B(\bdbar \to X_s \mu^+ \mu^-)$ and $B(\bdbar \to {\bar K} \mu^+ \mu^-)$
\cite{:2007kv,pdg,Barberio:2008fa,Aubert:2004it,Iwasaki:2005sy}:
\begin{eqnarray}
B(\bsbar \to \mu^+ \mu^-) & < & 3.60 \times 10^{-8} \quad \mbox{(90\% C.L.)} \; , \\
B(\bdbar \to X_s \mu^+ \mu^-) & = & \left\{ \begin{array}{ll}
\left( 1.60 \pm 0.50 \right) \times 10^{-6} & (\mbox{low } q^2) \\
\left( 0.44 \pm 0.12 \right) \times 10^{-6} & (\mbox{high } q^2) \\
\end{array} \right. \; , \\
B(\bdbar \to {\bar K} \mu^+ \mu^-) & = & \left(4.5^{+1.2}_{-1.0} \right) \times 10^{-7} \; ,
\end{eqnarray}
where the low-$q^2$ and high-$q^2$ regions correspond to 1 GeV$^2 \le
q^2 \le 6$ GeV$^2$ and $q^2 \ge 14.4$ GeV$^2$, respectively, and $q^2$
is the invariant mass squared of the muon pair. The constraints
from the first two quantities above have been derived in
Ref.~\cite{AFBNP}. Here we also include the additional constraints
from $B(\bdbar \to {\bar K} \mu^+ \mu^-)$. The three decays above provide complementary
information about the NP operators. For the SM predictions here, we
use the latest NNLO calculations. Note that the measurements for
$B(\bdbar\to \kstar \mu^+ \mu^-)$ are also available
\cite{Belle-newKstar,CDF-Kstar}. However, the form-factor
uncertainties in $\bdbar\to \kstar \mu^+ \mu^-$ are rather large, and as a result the
constraints due to this decay mode are subsumed in those from the
other three modes.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{RVA-constraints.eps}
~~~~~\includegraphics[width=0.4\linewidth]{RpVA-constraints.eps}
\caption{The constraints on the couplings $R_V,R_A$ (left panel)
and $R'_V,R'_A$ (right panel) when only primed or unprimed
couplings are present.
\label{VA-constraints}}
}
The constraints on the new VA couplings come mainly from $B(\bdbar \to X_s \mu^+ \mu^-)$
and $B(\bdbar \to {\bar K} \mu^+ \mu^-)$. Their precise values depend on which NP operators
are assumed to be present. For example, if only $R_{V,A}$ or only
$R'_{V,A}$ couplings are present, the constraints on these couplings
take the form shown in Fig.~\ref{VA-constraints}.
For $R_{V,A}$, the allowed parameter space is the region between
two ellipses:
\begin{equation}
1.0 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} \frac{|R_V + 3.6|^2}{(4.7)^2} + \frac{|R_A -4.0|^2}{(4.8)^2} \; ,
\quad \frac{|R_V + 2.8|^2}{(6.5)^2} + \frac{|R_A -4.1|^2}{(6.6)^2} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1 ~,
\end{equation}
while for $R'_{V,A}$, the allowed region is the intersection of
an annulus and a circle:
\begin{equation}
22.2 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}
|R'_V + 3.6|^2 + |R'_A - 4.0|^2 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 56.6\; , \quad |R'_V|^2 + |R'_A|^2 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 17 ~.
\end{equation}
If both $R_{V,A}$ and $R'_{V,A}$ are present, the constraints on them get
individually weakened to
\begin{equation}
\frac{|R_V + 2.8|^2}{(6.5)^2} + \frac{|R_A -4.1|^2}{(6.6)^2} \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1 ~,
\label{RVAconstraints}
\end{equation}
and
\begin{equation}
|R'_V|^2 + |R'_A|^2 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 40 ~,
\end{equation}
respectively\footnote{Note: the constraints on $R_{V,A}$ obtained here
are milder than those obtained in Ref.~\cite{Alok:2006gp} using
$B({\bar B}_d^0 \to ({\bar K}\,,{\bar K}^*)\, \mu^+ \, \mu^-)$. This is
because Ref.~\cite{Alok:2006gp} had neglected the interference terms
between the SM and new physics VA operators. Their inclusion relaxes
the stringent constraints therein.}.
For the SP operators, the present upper bound on $B(\bsbar \to \mu^+ \mu^-)$ provides
the limit
\begin{equation}
|R_S - R'_S|^2 + |R_P - R'_P|^2 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 0.44 ~,
\label{SP-constraints}
\end{equation}
where we have used $f_{B_s}=(238.8 \pm 9.5)\,{\rm MeV}$
\cite{Laiho:2009eu} and $|V_{ts}^* V_{tb}|=0.0407\pm 0.0010$
\cite{pdg}. This constitutes a severe constraint on the NP couplings
if only $R_{S,P}$ or $R'_{S,P}$ are present. However, if both types of
operators are present, these bounds can be evaded due to cancellations
between the $R_{S,P}$ and $R'_{S,P}$. In that case, $B(\bdbar \to X_s \mu^+ \mu^-)$ and
$B(\bdbar \to {\bar K} \mu^+ \mu^-)$ can still bound these couplings. The stronger bound is
obtained from the measurement of the latter quantity, which yields
\begin{equation}
|R_S|^2 + |R_P|^2 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 9 \; ,
\quad R_S \approx R'_S \; , \quad R_P \approx R'_P \; .
\end{equation}
Finally, the constraints on the NP tensor operators come entirely from
$B(\bdbar \to X_s \mu^+ \mu^-)$. When only the T operators are present,
\begin{equation}
|C_T|^2 +4 |C_{TE}|^2 \mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}} 1.0 ~.
\end{equation}
Although the bounds presented in this section for the VA, SP and T
couplings are obtained by taking one kind of Lorentz structure at a
time, in our numerical analysis of scenarios where we consider
combinations of two or more kinds of Lorentz structures, we use the
allowed parameter space obtained by considering the corresponding
combined Lorentz structures.
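To make the use of these bounds concrete, the sketch below encodes the one-structure-at-a-time constraints of this section as simple membership tests. It assumes real couplings and, for illustration only, treats the soft ``$\mathrel{\hbox{\rlap{\hbox{\lower4pt\hbox{$\sim$}}}\hbox{$<$}}}$'' boundaries as hard cuts.

```python
def allowed_RVA(RV, RA):
    """R_{V,A}-only scenario: the region between the two ellipses."""
    inner = (RV + 3.6)**2 / 4.7**2 + (RA - 4.0)**2 / 4.8**2
    outer = (RV + 2.8)**2 / 6.5**2 + (RA - 4.1)**2 / 6.6**2
    return inner >= 1.0 and outer <= 1.0

def allowed_RpVA(RpV, RpA):
    """R'_{V,A}-only scenario: an annulus intersected with a disc."""
    ring = (RpV + 3.6)**2 + (RpA - 4.0)**2
    return 22.2 <= ring <= 56.6 and RpV**2 + RpA**2 <= 17.0

def allowed_SP(RS, RP, RpS=0.0, RpP=0.0):
    """Bs -> mu+ mu- bound on the SP couplings (one-structure case)."""
    return (RS - RpS)**2 + (RP - RpP)**2 <= 0.44

def allowed_T(CT, CTE):
    """Tensor-only bound from B -> Xs mu+ mu-."""
    return CT**2 + 4.0 * CTE**2 <= 1.0
```

Note that the SM point $(R_V, R_A) = (0,0)$ lies inside the allowed $R_{V,A}$ region, as it must.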
We now analyze the $ b \to s \mu^+ \mu^-$ modes in detail and present our results.
As explained in the Introduction, the figures have the SM prediction
bands overlaid with the predictions for specific allowed values of NP
couplings. The SM band is generated by varying the form factors
within their ranges as predicted by the respective authors, while the
CKM matrix elements, quark masses and meson decay constants are varied
within their $1.6 \sigma$ allowed values.
\section{\boldmath $\bsbar \to \mu^+ \mu^-$
\label{Bsmumu}}
In this section we examine the NP contributions to $\bsbar \to \mu^+ \mu^-$. Within
the SM, $\bsbar \to \mu^+ \mu^-$ is chirally suppressed. The SM prediction for the
branching ratio is $B(\bsbar \to \mu^+ \mu^-) = (3.35\pm 0.32)\times 10^{-9}$
\cite{blanke}. The Tevatron gives an upper bound on its branching
ratio (BR) of $3.6 \times 10^{-8}$ at 90\%
C.L. \cite{:2007kv,pdg,Barberio:2008fa}. This decay can be observed
at the Tevatron only if NP enhances its BR above $10^{-8}$. LHCb is
the only experiment which will probe $B(\bsbar \to \mu^+ \mu^-)$ down to its SM
value. It has the potential for a $3 \sigma$ observation ($5 \sigma$
discovery) of $\bsbar \to \mu^+ \mu^-$ with $\sim 2\, {\rm fb}^{-1}$ ($\sim 6\, {\rm
fb}^{-1}$) of data \cite{Lenzi:2007nq}. LHCb therefore has the
potential to observe either an enhancement or a suppression of
$B(\bsbar \to \mu^+ \mu^-)$. It can observe $\bsbar \to \mu^+ \mu^-$ as long as its BR is above
$1.0 \times 10^{-9}$.
\subsection{Branching ratio}
The transition amplitude for ${\bar B}_s^0 \to \mu^+ \mu^-$ is given by
\begin{eqnarray}
i{\cal M}({\bar B}_s^0 \to \mu^+ \mu^-) & = &
(-i)\frac{1}{2}\Bigg[-\frac{4 G_F}{\sqrt{2}} \frac{\alpha_{em}}{4 \pi}
(V_{ts}^* V_{tb})\Bigg] \times \nonumber \\
&& \hskip-1truecm \Bigg\{
\left< 0 \left|\bar{s}\gamma_{\mu}\gamma_{5}b\right|{\bar B}_s^0(p)\right>
(-C_{10}^{\rm eff} - R_A + R'_A) L^{5\mu} \nonumber \\
& + & \left< 0 \left|\bar{s} \gamma_5 b\right|{\bar B}_s^0(p)\right>
\left[ (R_S - R'_S) L + (R_P - R'_P) L^5 \right] \Bigg\} \; ,
\end{eqnarray}
where $L^{5\mu}$, $L$ and $L^5$ are defined in
Eq.~(\ref{Ldefs}). Using the matrix elements \cite{Skiba:1992mg}
\begin{equation}
\left< 0 \left|\bar{s}\gamma_{\mu}\gamma_{5}b\right|{\bar B}_s^0(p)\right> =
i\,p_\mu\,f_{B_s}\;, \quad
\left< 0 \left|\bar{s}\gamma_{5}b\right|{\bar B}_s^0(p)\right> = -
i\,f_{B_s}\frac{m_{B_s}^2}{m_b + m_s}\;,
\end{equation}
the calculation of the BR gives
\begin{eqnarray}
B({{\bar B}_s^0} \to \mu^+ \, \mu^-) & =& \frac{G^2_F \alpha_{em}^2
m^5_{B_s} f_{B_s}^2 \tau_{B_s}}{64 \pi^3}
|V_{tb}^{}V_{ts}^{\ast}|^2 \sqrt{1 - \frac{4 m_\mu^2}{m_{B_s}^2}}\times
\nonumber\\
&& \hskip-2truecm \Bigg\{
\Bigg(1 - \frac{4m_\mu^2}{m_{B_s}^2} \Bigg) \Bigg|
\frac{R_S - R'_S}{m_b + m_s}\Bigg|^2
+ \Bigg|\frac{R_P - R'_P}{m_b + m_s} + \frac{2 m_\mu}{m^2_{B_s}} (C_{10}+R_A-R'_A)\Bigg|^2 \Bigg\}. \phantom{space}
\label{bmumu-BR}
\end{eqnarray}
Clearly, NP in the form of tensor operators does not contribute to
$\bsbar \to \mu^+ \mu^-$. From Eq.~(\ref{bmumu-BR}) and the constraints on NP
couplings obtained in Sec.~\ref{constraints}, one can study the effect
of new VA and SP couplings.
Since the NP contribution from VA operators is suppressed by a factor
of $\sim m_\mu/m_b$ compared to that from the SP operators, the effect
of SP operators dominates. Both VA and SP operators can suppress
$B(\bsbar \to \mu^+ \mu^-)$ significantly below the SM prediction. However while VA
operators can only marginally enhance $B(\bsbar \to \mu^+ \mu^-)$ above $10^{-8}$,
making the decay accessible at the Tevatron in an optimistic scenario,
the SP operators can enhance the branching ratio even up to the
present experimental bound. Indeed, the strongest limit on the SP
couplings comes from this decay. This strong limit prevents the SP
operators from expressing themselves in many other observables, as we
shall see later in this paper.
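As an illustration, Eq.~(\ref{bmumu-BR}) can be evaluated directly for real couplings. The numerical inputs below are rounded, assumed values (in particular $C_{10} \approx -4.1$ and $\alpha_{em} \approx 1/133$), not the precise inputs of the fits, so the output is only expected to reproduce the SM prediction at the $\sim 10\%$ level.

```python
import math

# Rounded, assumed inputs (GeV unless noted); not the paper's exact fit values.
GF = 1.166e-5                   # Fermi constant, GeV^-2
alpha_em = 1.0 / 133.0          # assumed value at the b scale
mBs, mmu = 5.366, 0.10566
mb, ms = 4.8, 0.10              # quark masses (assumed)
fBs = 0.2388                    # Bs decay constant
tauBs = 1.466e-12 / 6.582e-25   # Bs lifetime converted to GeV^-1 via hbar
Vfac = 0.0407                   # |V_tb V_ts*|
C10 = -4.1                      # approximate SM Wilson coefficient

def BR_Bs_mumu(RS=0.0, RpS=0.0, RP=0.0, RpP=0.0, RA=0.0, RpA=0.0):
    """Branching ratio of Eq. (bmumu-BR) for real NP couplings."""
    beta2 = 1.0 - 4.0 * mmu**2 / mBs**2
    pref = (GF**2 * alpha_em**2 * mBs**5 * fBs**2 * tauBs
            / (64.0 * math.pi**3) * Vfac**2 * math.sqrt(beta2))
    S = (RS - RpS) / (mb + ms)
    P = (RP - RpP) / (mb + ms) + 2.0 * mmu / mBs**2 * (C10 + RA - RpA)
    return pref * (beta2 * S**2 + P**2)
```

With all NP couplings set to zero this lands near the quoted SM value of $3.35 \times 10^{-9}$, while an SP coupling of order one readily enhances the branching ratio by an order of magnitude, in line with the discussion above.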
\subsection{Muon polarization asymmetry}
\label{ALP}
The longitudinal polarization asymmetry of muons in $\bsbar \to \mu^+ \mu^-$ is defined as
\begin{equation}
A_{LP} = \frac{N_R-N_L}{N_R+N_L}\;,
\end{equation}
where $N_R\,(N_L)$ is the number of $\mu^{-}$'s emerging with positive
(negative) helicity. $A_{LP}$ is a clean observable; it is
unsuppressed by $m_\mu/m_{B_s}$ only if the NP contribution is in the
form of SP operators, such as in an extended Higgs sector.
$A_{LP}$ for the most general NP is \cite{Alok:2008hh}
\begin{equation}
A_{LP} = \frac{2\sqrt{1 - \frac{4m_\mu^2}{m_{B_s}^2}}\;{\rm Re}\Bigg[\Big(
\frac{R_S - R'_S}{m_b + m_s}\Big)\Big(\frac{R_P - R'_P}{m_b + m_s} + \frac{2 m_\mu}{m^2_{B_s}} (C_{10}+R_A-R'_A)\Big)\Bigg]}
{\Big(1 - \frac{4m_\mu^2}{m_{B_s}^2} \Big) \Bigg|
\frac{R_S - R'_S}{m_b + m_s}\Bigg|^2
+ \Bigg|\frac{R_P - R'_P}{m_b + m_s} + \frac{2 m_\mu}{m^2_{B_s}} (C_{10}+R_A-R'_A)\Bigg|^2} ~.
\end{equation}
From the above equation, we see that $A_{LP}$ can be nonzero if and
only if $R_S-R'_S\neq 0$, i.e.\ there must be a contribution from NP
SP operators. (Within the SM, SP couplings are negligibly small, so
that $A_{LP} \simeq 0$.)
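The asymmetry above can be evaluated in the same spirit. The sketch below assumes real couplings and rounded inputs ($C_{10} \approx -4.1$ is an assumed approximate value), and makes explicit that $A_{LP}$ vanishes when $R_S = R'_S$ and is bounded by unity in magnitude.

```python
import math

# Rounded, assumed inputs (GeV); C10 is the approximate SM value.
mBs, mmu, mb, ms, C10 = 5.366, 0.10566, 4.8, 0.10, -4.1

def A_LP(RS=0.0, RpS=0.0, RP=0.0, RpP=0.0, RA=0.0, RpA=0.0):
    """Longitudinal muon polarization asymmetry for real couplings."""
    beta2 = 1.0 - 4.0 * mmu**2 / mBs**2
    S = (RS - RpS) / (mb + ms)
    P = (RP - RpP) / (mb + ms) + 2.0 * mmu / mBs**2 * (C10 + RA - RpA)
    # numerator ~ S*P, denominator = beta2*S^2 + P^2 >= 2*sqrt(beta2)*|S*P|,
    # so |A_LP| <= 1 automatically
    return 2.0 * math.sqrt(beta2) * S * P / (beta2 * S**2 + P**2)
```

Already for moderate SP couplings (e.g. $R_S = R_P = 0.5$) the asymmetry can approach its maximal value, consistent with the statement that $A_{LP}$ can be as large as $100\%$.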
The present upper bound on $B(\bsbar \to \mu^+ \mu^-)$ puts no constraint on
$A_{LP}$, and it can be as large as $100\%$ \cite{Alok:2008hh}.
$A_{LP}$ can be maximal even if $B(\bsbar \to \mu^+ \mu^-)$ is close to its SM
prediction. Therefore, in principle $A_{LP}$ can serve as an important
tool to probe NP of the SP form. However, in order to measure its
polarization, the muon must decay within the detector. This is not
possible due to the long muon lifetime ($c\tau$ for the muon is 659
m). Hence in practice, this quantity is not measurable at current
detectors.
\section{\boldmath $\bdbar \to X_s \mu^+ \mu^-$
\label{BXsmumu}}
The BR of $\bdbar \to X_s \mu^+ \mu^-$ in the low-${q}^2$ and high-${q}^2$ regions has
been measured to be \cite{Aubert:2004it,Iwasaki:2005sy}
\begin{eqnarray}
{B} ( {\bar B} \to X_s \ell^+ \ell^-)_{{\rm low}~{q}^2} ~=~
\left\{ \begin{array}{ll}
\left( 1.49 \pm 0.50^{+0.41}_{-0.32} \right) \times
10^{-6}~, & (\rm Belle)~, \\
\left( 1.8 \pm 0.7 \pm 0.5 \right) \times 10^{-6} ~, & (\rm
BaBar)~, \\
\left( 1.60 \pm 0.50 \right) \times 10^{-6}~, & (\rm Average)
~. \\
\end{array} \right. \\
{B} ( {\bar B} \to X_s \ell^+ \ell^-)_{{\rm high}~{q}^2} ~=~
\left\{ \begin{array}{ll}
\left( 0.42 \pm 0.12^{+0.06}_{-0.07} \right) \times
10^{-6} ~, & (\rm Belle) ~, \\
\left( 0.50 \pm 0.25 ^{+0.08}_{-0.07} \right) \times 10^{-6}
~, & (\rm BaBar) ~, \\
\left( 0.44 \pm 0.12 \right) \times 10^{-6} ~,
& (\rm Average) ~. \\
\end{array} \right.
\end{eqnarray}
The SM predictions for ${B}({\bar B} \to X_s \, \mu^+ \,
\mu^-)$ in the low-${q}^2$ and high-${q}^2$ regions are $(1.59\pm0.11)
\times 10^{-6}$ and $(0.24 \pm 0.07) \times 10^{-6}$, respectively
\cite{Huber:2007vv}.
Apart from the measurement of the total BR of $\bdbar \to X_s \mu^+ \mu^-$, which has
already been used to restrict the VA and T operators in
Sec.~\ref{constraints}, the differential branching ratio (DBR) as a
function of $q^2$ also contains valuable information that can help us
detect NP. In particular, the SM predicts a positive zero crossing
for $A_{FB}$ in $\bdbar \to X_s \mu^+ \mu^-$ in the low-$q^2$ region, i.e.\ for $q^2$ less
than (greater than) the crossing point, the value of $A_{FB}$ is
negative (positive). This zero crossing is sufficiently away from the
charm resonances so that its value can be determined perturbatively to
an accuracy of $\sim 5\%$. The NNLO prediction \cite{Huber:2007vv} for
the zero of $A_{FB}(q^2)$ is (taking $m_b = 4.8$ GeV)
\begin{equation}
(q^2)_0= (3.5 \pm 0.12)\, {\rm GeV}^2 \,.
\end{equation}
This quantity has not yet been measured. However, estimates show that
a precision of about $5\%$ could be obtained at a Super-$B$ factory
\cite{Browder:2008em}. A deviation from the zero crossing point
predicted above will be a clear signal of NP.
\subsection{Differential branching ratio and forward-backward asymmetry}
After including all the NP interactions, and neglecting terms
suppressed by $m_\mu/m_b$ and $m_s/m_b$, the total differential
branching ratio $\text{d}{B}/{\text{d}z}$ is given by
\begin{eqnarray}
\Bigg(\frac{\text{d}{B}}{\text{d}z}\Bigg)_{\text{Total}}~=~
\Bigg(\frac{\text{d}{B}}{\text{d}z}\Bigg)_{\text{SM}} + {B}_0
\Bigg[{B}_{SM{\hbox{-}}VA} + {B}_{VA} + {B}_{SP} + {B}_{T}\Bigg] \; ,
\end{eqnarray}
where the quantities $B$ depend on the SM and NP couplings and
kinematic variables. The complete expressions for these quantities
are given in Appendix~\ref{app-bxsmumu}. The subscripts denote the
Lorentz structure(s) contributing to that term.
The forward-backward asymmetry in $\bdbar \to X_s \mu^+ \mu^-$ is
\begin{eqnarray}
\label{AFB-Xsmumu-1}
A_{FB}(q^2) &=&\frac{\int^1_0 d\cos{\theta_\mu}
\frac{d^2B}{dq^2d\cos{\theta_\mu} }-\int^0_{-1}
d\cos{\theta_\mu}\frac{d^2B}{dq^2d\cos{\theta_\mu} }}
{\int^1_0 d\cos{\theta_\mu} \frac{d^2B}{dq^2d\cos{\theta_\mu} }
+\int^0_{-1} d\cos{\theta_\mu}\frac{d^2B}{dq^2d\cos{\theta_\mu} }} \; ,
\end{eqnarray}
where $\theta_\mu$ is the angle between the $\mu^+$ and the
$\bar{B}^0$ in the dimuon center-of-mass frame. We can write $A_{FB}$
in the form
\begin{equation}
A_{FB}(q^2)=\frac{N(z)}{\text{d} {B}/ \text{d}z}\;,
\end{equation}
where the numerator is given by
\begin{eqnarray}
N(z) &~=~& B_0
\Bigg[N_{SM} + N_{SM{\hbox{-}}VA} + N_{VA} + N_{SP{\hbox{-}}T}
\Bigg] \;.
\label{N-Xsmumu-1}
\end{eqnarray}
The terms suppressed by $m_\mu/m_b$ and $m_s/m_b$ have been neglected
as before. Again for the detailed expressions, we refer the reader to
Appendix~\ref{app-bxsmumu}.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{afb-incl-va-low.eps}
\includegraphics[width=0.4\linewidth]{afb-incl-va-high.eps} \\
\includegraphics[width=0.4\linewidth]{dbr-incl-va-low.eps}
\includegraphics[width=0.4\linewidth]{dbr-incl-va-high.eps}
\caption{The left (right) panels of the figure show $A_{FB}$ and DBR for
$\bdbar \to X_s \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where only $(R_V, R_A)$ terms are present. The band corresponds to
the SM prediction and its uncertainties; the lines show predictions
for some representative values of NP parameters $(R_V, R_A)$. For
example, the blue curves in the low-$q^2$ and high-$q^2$ regions
correspond to $(-6.85,8.64)$ and $(-9.34,8.85)$, respectively.
\label{fig:incl-va}}
}
Fig.~\ref{fig:incl-va} shows $A_{FB}(q^2)$ and the DBR for $\bdbar \to X_s \mu^+ \mu^-$
in the presence of NP in the form of $R_{V,A}$ couplings, which are
the ones that can most influence these observables. Enhancement or
suppression of the DBR by a factor of 2 is possible. The NP couplings
can enhance $A_{FB}$ up to 30\% at low $q^2$, make it have either sign,
and even make the zero crossing disappear altogether. At high $q^2$,
however, $A_{FB}$ can only be suppressed. The $R'_{V,A}$ couplings
can only affect these observables mildly: a 50\% enhancement in DBR is
possible (no suppression), but $A_{FB}$ can only be marginally
enhanced, and a positive zero crossing in the $q^2 = 2$--$4$ GeV$^2$ region
is maintained. The mild effect of $R'_{V,A}$ couplings as compared to
the $R_{V,A}$ couplings is a generic feature for almost all
observables. This may be attributed to the bounds on the magnitudes of
these couplings: from Sec.~\ref{constraints}, $|R_{V,A}|$ can be as
large as $\sim 10$, while $|R'_{V,A}| < 5$.
Eq.~(\ref{N-Xsmumu-1}) shows that if SP or T couplings are
individually present, their contribution to $A_{FB}$ is either absent
or suppressed by $m_\mu/m_b$. In such a case, though they can enhance
the DBR (marginally for SP, by up to a factor of 2 for T), $A_{FB}$ is
suppressed in general (marginally for SP, significantly for T).
However if both SP and T operators are present, their interference
term is not suppressed and some enhancement of $A_{FB}$ is possible.
This still is not significant, since the magnitude of the SP couplings
is highly constrained from $\bsbar \to \mu^+ \mu^-$ measurements. A positive zero
crossing in the low-$q^2$ region is always maintained. This may be
seen in Fig.~\ref{fig:incl-spsppt}.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{afb-incl-spsppt-low.eps}
\includegraphics[width=0.4\linewidth]{afb-incl-spsppt-high.eps} \\
\includegraphics[width=0.4\linewidth]{dbr-incl-spsppt-low.eps}
\includegraphics[width=0.4\linewidth]{dbr-incl-spsppt-high.eps}
\caption{The left (right) panels of the figure show $A_{FB}$ and DBR for
$\bdbar \to X_s \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where both SP and T terms are present. The band corresponds to the
SM prediction and its uncertainties; the lines show predictions for
some representative values of NP parameters $(R_S, R_P, R'_S, R'_P ,
C_T, C_{TE})$. For example, the magenta curves in the low-$q^2$ and
high-$q^2$ regions correspond to
$(-1.23,-1.79,-0.86,-1.85,0.27,-0.36)$ and
$(-1.23,-0.23,-1.35,0.08,1.37,0.01)$, respectively.
\label{fig:incl-spsppt}}
}
\subsection{Polarization fractions $f_L$ and $f_T$}
In Ref.~\cite{Lee:2006gs} it was pointed out that, besides the
dilepton invariant mass spectrum and the forward-backward asymmetry, a
third observable can be obtained from $\bdbar \to X_s \mu^+ \mu^-$, namely the double
differential decay width:
%
\begin{equation}
\frac{d^2B}{dz\,d\cos\theta_\mu}=
\frac{3}{8}\Big[ (1 + \cos^2\theta_\mu)H_T(z)
+ 2\cos\theta_\mu H_A(z) + 2(1-\cos^2\theta_\mu)H_L(z)\Big]\;.
\end{equation}
The functions $H_i(z)$ do not depend on $\cos \theta_\mu$. The sum
$H_L(z)+H_T(z)$ gives the differential branching ratio $dB/dz$, while
the forward-backward asymmetry is given by $3 H_A/[4(H_L+H_T)]$.
Splitting $dB/dz$ into longitudinal and transverse parts separates the
contributions with different $q^2$ dependences, providing a third
independent observable. This does not require measuring any additional
kinematical variable -- $q^2$ and $\cos \theta_\mu$ are sufficient.
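As a quick consistency check of the angular decomposition (independent of the explicit $H_i$, which are left to the Appendix), one can integrate the distribution numerically for arbitrary trial values of $H_T$, $H_A$, $H_L$ and recover $dB/dz = H_L + H_T$ and $A_{FB} = 3H_A/[4(H_L+H_T)]$:

```python
def simpson(f, a, b, n=200):
    """Composite Simpson rule on [a, b]; n must be even."""
    step = (b - a) / n
    acc = f(a) + f(b)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * f(a + i * step)
    return acc * step / 3.0

# Arbitrary trial values for the angular coefficients.
HT, HA, HL = 0.7, -0.2, 1.3

def dist(c):
    """Angular form (3/8)[(1+c^2) H_T + 2c H_A + 2(1-c^2) H_L] at cos = c."""
    return 0.375 * ((1 + c * c) * HT + 2 * c * HA + 2 * (1 - c * c) * HL)

total = simpson(dist, -1.0, 1.0)                              # -> H_L + H_T
afb = (simpson(dist, 0.0, 1.0) - simpson(dist, -1.0, 0.0)) / total
```

Since the integrand is quadratic in $\cos\theta_\mu$, Simpson's rule reproduces both relations to machine precision.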
Including all the NP interactions, and neglecting terms suppressed by
$m_\mu/m_b$ and $m_s/m_b$, $H_L(z)$ and $H_T(z)$ are given by
\begin{equation}
H_L(z)=H^{SM}_L(z)+H^{SM-VA}_L(z)+H^{VA}_L(z)+H^{SP}_L(z)+H^{T}_L(z)\;,
\end{equation}
\begin{equation}
H_T(z)=H^{SM}_T(z)+H^{SM-VA}_T(z)+H^{VA}_T(z)+H^{SP}_T(z)+H^{T}_T(z)\;,
\end{equation}
where the $H$ functions are given in Appendix~\ref{app-bxsmumu}.
The superscripts indicate the Lorentz structures contributing to
the term.
The polarization fractions $f_L$ and $f_T$ can be defined as
\begin{equation}
f_L=\frac{H_L(z)}{H_L(z)+H_T(z)} ~, \qquad \qquad f_T=\frac{H_T(z)}{H_L(z)+H_T(z)} ~.
\end{equation}
In the SM, $f_L$ can be as large as 0.9 at low $q^2$, and it decreases
to about 0.3 at high $q^2$.
Fig.~\ref{fig:incl-fl-vap} shows that when only $R_{V,A}$ couplings
are present, in the low-$q^2$ region $f_L$ can be suppressed
substantially, or even enhanced up to 1. A similar effect -- small
enhancement or a factor of two suppression -- is possible at high
$q^2$. The suppression at low-$q^2$ is typically correlated with an
enhancement at high-$q^2$. The effect of $R'_{V,A}$ couplings is
similar, but much milder, as expected. SP and T operators,
individually or together, can only have a marginal effect on $f_L$.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{fl-incl-va-low.eps}
\includegraphics[width=0.4\linewidth]{fl-incl-va-high.eps}
\caption{The left (right) panels of the figure show $f_L$ for
$\bdbar \to X_s \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where only $(R_V, R_A)$ terms are present. The band corresponds to
the SM prediction and its uncertainties; the lines show predictions
for some representative values of NP parameters $(R_V, R_A)$. For
example, the blue curves in the low-$q^2$ and high-$q^2$ regions
correspond to $(-8.14,5.75)$ and $(1.87,4.85)$, respectively.
\label{fig:incl-fl-vap}}
}
\section{\boldmath $\bsbar \to \mu^+ \mu^- \gamma$
\label{Bsmumugamma}}
In this section we examine the NP contributions to the radiative
leptonic decay $\bsbar \to \mu^+ \mu^- \gamma$. This decay has not yet been
observed. The SM prediction for the BR in the ranges $q^2 \le 9.5$ GeV$^2$
and $q^2 \ge 15.9$ GeV$^2$ is $\approx 18.9 \times 10^{-9}$
\cite{Melikhov:2004mk}. Although this decay needs the emission of an
additional photon as compared to $\bsbar \to \mu^+ \mu^-$, which would suppress the
BR by a factor of $\alpha_{em}$, the photon emission also frees it
from helicity suppression, making its BR much larger than $\bsbar \to \mu^+ \mu^-$.
This decay has contributions from many channels
\cite{Eilam:1996vg,Aliev:1996ud,Geng:2000fs,Dincer:2001hu,Melikhov:2004mk,Melikhov:2004fs}:
(i) direct emission of real or virtual photons from valence quarks of
the ${\bar B}^0_s$, (ii) real photon emitted from an internal line of
the $b \to s$ loop, (iii) weak annihilation due to the axial anomaly,
and (iv) bremsstrahlung from leptons in the final state. The photon
emission from the $b \to s$ loop is suppressed by $m_b^2/m_W^2$
\cite{Aliev:1996ud}, and the weak annihilation is further suppressed
by $\Lambda_{QCD}/m_b$ \cite{Melikhov:2004mk}. These two
contributions can then be neglected. The bremsstrahlung contribution
is suppressed by $m_\mu/m_b$, and dominates only at extremely low
photon energies due to the infrared divergence. The virtual photon
emission dominates in the low-$q^2$ region around the $\phi$
resonance. If we choose the regions $2$ GeV$^2 \le q^2 \le 6$ GeV$^2$
and $14.4$ GeV$^2 \le q^2 \le 25$ GeV$^2$ as the low-$q^2$ and
high-$q^2$ regions, respectively, then the dominating contribution
comes from the diagrams in which the final-state photon is emitted
either from the $b$ or the $s$ quark. Then the $\bsbar \to \mu^+ \mu^- \gamma$ decay is
governed by the effective Hamiltonian describing the $ b \to s \mu^+ \mu^-$
transition, as given in Eq.~(\ref{HSM}), and our formalism is
applicable. Here we consider the DBR and $A_{FB}$ in
$\bsbar \to \mu^+ \mu^- \gamma$.
\subsection{Differential branching ratio and forward-backward asymmetry}
We begin with the differential branching ratio. The SP operators do
not contribute to the amplitude of $\bsbar \to \mu^+ \mu^- \gamma$ and hence do not
play any role in the decay.
In terms of the dimensionless parameter $x_\gamma=2 E_\gamma/m_{B_s}$,
where $E_\gamma$ is the photon energy in the ${\bar B}_s^0$ rest frame,
one can calculate the double differential decay rate to be
\begin{eqnarray}
\frac{\text{d}^2\Gamma}{\text{d}x_{\gamma}\text{d}(\cos\theta_\mu)} =
\frac{1}{2 m_{B_s}} \dfrac{2 v\, m_{B_s}^2 x_{\gamma}}{(8\pi)^3}
{\cal M}^{\dagger}{\cal M} \; ,
\label{ddbr-mumugamma1}
\end{eqnarray}
where
$v \equiv \sqrt{1- 4 m_{\mu}^2/[m_{B_s}^2(1-x_\gamma)]}$.
From Eq.~(\ref{ddbr-mumugamma1}) we get the DBR to be
\begin{eqnarray}
\frac{\text{d}B}{\text{d} x_{\gamma}} &=&
\tau_{B_s}\int_{-1}^{1} \frac{\text{d}^2\Gamma}
{\text{d} x_{\gamma}\text{d}(\cos\theta_\mu)}\,
\text{d}\cos\theta_\mu \nonumber \\
&=&\tau_{B_s}\Bigg[\frac{1}{2 m_{B_s}}
\dfrac{2 v m_{B_s}^2}{(8\pi)^3}\Bigg]
\Bigg[\frac{1}{4} ~ \frac{16 G_F^2}{2}
\frac{\alpha_{em}^2}{16 \pi^2} |V_{tb}V_{ts}^*|^2 e^2\Bigg] \Theta \; .
\label{dbr:bsg:main-1}
\end{eqnarray}
Here the quantity $\Theta$ has the form
\begin{equation}
\Theta = \frac{2}{3}~m_{B_s}^4~x^3_{\gamma}
\Big[X_{VA}+X_{T}+X_{VA{\hbox{-}}T}\Big] \; ,
\label{mumugamma-theta-expansion-1}
\end{equation}
where the $X$ terms are given in Appendix~\ref{app-Bsmumugamma}.
The subscripts of the $X$ terms denote the Lorentz structure(s)
contributing to that term.
For the sake of brevity, we have included the SM contributions in
$X_{VA}$.
The normalized forward-backward asymmetry of muons in
$\bsbar \to \mu^+ \mu^- \gamma$ is defined as
\begin{equation}
A_{FB}(q^2) = \frac {\displaystyle \int_{0}^{1} d\cos\theta_\mu
\frac{d^2B}{dq^2 d\cos\theta_\mu} - \int_{-1}^{0} d\cos\theta_\mu
\frac{d^2B}{dq^2 d\cos\theta_\mu} }{\displaystyle \int_{0}^{1}
d\cos\theta_\mu \frac{d^2B}{dq^2 d\cos\theta_\mu} + \int_{-1}^{0}
d\cos\theta_\mu \frac{d^2B}{dq^2 d\cos\theta_\mu} } ~,
\end{equation}
where $\theta_\mu$ is the angle between the three-momentum vectors of the
${\bar B}_s^0$ and the $\mu^+$ in the dimuon center-of-mass frame. The
calculation of $A_{FB}$ gives
\begin{eqnarray}
A_{FB}(q^2) &=& \frac{1}{\Theta}~
\Bigg(2 m^4_{B_s} v ~ x_{\gamma}^3\Bigg)\Bigg[
Y_{VA} + Y_{VA{\hbox{-}}T} \Bigg] \; ,
\label{afb-y}
\end{eqnarray}
where the $Y$ terms are defined in Appendix~\ref{app-Bsmumugamma},
which also contains the details of the calculation.
For the numerical calculations, we use the matrix elements given in
Ref.~\cite{Kruger:2002gf}.
The parameters involved in the form factor calculations are chosen
in such a way that the LEET relations
between form factors are satisfied to a 10\% accuracy \cite{Kruger:2002gf}.
In our numerical analysis we take the errors in these form factors
to be $\pm 10\%$.
Within the SM, $A_{FB}(q^2)$ is predicted to vanish around $q^2
\approx 4.3$ GeV$^2$ (i.e. $x_\gamma \approx 0.85$)
\cite{Kruger:2002gf}, and the crossing is predicted to be negative.
It is therefore interesting to see the effects of various NP operators
and their combinations on $A_{FB}$. In the extreme LEET limit, using
the form-factor relations given in Ref.~\cite{Kruger:2002gf}, one can
easily see that the $A_{FB}$ is independent of the form factors. In
Fig.~\ref{fig:bsg-va} we see large bands in the SM predictions of
$A_{FB}$ in the low-$q^2$ region. One may be tempted to interpret these as
large corrections to the LEET limit; however, this would be somewhat
misleading, as we take the errors in the form factors, due to
corrections from the LEET limit, to be uncorrelated. In realistic
models, LEET corrections to the form factors will be correlated,
leading to a smaller uncertainty band for $A_{FB}$ in the SM.
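The translation between $q^2$ and the photon energy fraction $x_\gamma$, $q^2 = m_{B_s}^2(1-x_\gamma)$, is easy to check numerically. The sketch below is a minimal illustration, assuming a PDG-like value $m_{B_s}\simeq 5.366$ GeV (an input not quoted in the text); it confirms that the SM zero crossing quoted at $q^2 \approx 4.3$ GeV$^2$ corresponds to $x_\gamma \approx 0.85$:

```python
# Check of the kinematic map q^2 = m_Bs^2 * (1 - x_gamma) between the
# dimuon invariant mass squared and the photon energy fraction.
# m_Bs is an assumed PDG-like input (GeV), not a value quoted above.
m_Bs = 5.366

def q2_from_xgamma(x_gamma):
    """q^2 in GeV^2 for a given photon energy fraction x_gamma."""
    return m_Bs**2 * (1.0 - x_gamma)

def xgamma_from_q2(q2):
    """Photon energy fraction for a given q^2 in GeV^2."""
    return 1.0 - q2 / m_Bs**2

# The SM zero crossing quoted at q^2 ~ 4.3 GeV^2 sits at x_gamma ~ 0.85:
print(round(xgamma_from_q2(4.3), 2))  # 0.85
```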
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{afb-bsg-va-low.eps}
\includegraphics[width=0.4\linewidth]{afb-bsg-va-high.eps} \\
\includegraphics[width=0.4\linewidth]{dbr-bsg-va-low.eps}
\includegraphics[width=0.4\linewidth]{dbr-bsg-va-high.eps}
\caption{The left (right) panels of the figure show $A_{FB}$ and DBR for
$\bsbar \to \mu^+ \mu^- \gamma$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where only $(R_V, R_A)$ terms are present. Note that here $q^2 =
m_{B_s}^2 (1-x_\gamma)$. The band corresponds to the SM prediction and
its uncertainties; the lines show predictions for some
representative values of NP parameters $(R_V, R_A)$. For example,
the magenta curves in the low-$q^2$ and high-$q^2$ regions
correspond to $(2.47,7.08)$ and $(-7.14,-0.42)$, respectively.
\label{fig:bsg-va}}
}
Fig.~\ref{fig:bsg-va} also shows $A_{FB}$ and DBR in the presence of NP
in the form of $R_{V,A}$ couplings. With the large allowed values of
$|R_{V,A}|$ and the absence of any helicity suppression, we expect VA
operators to have a significant impact on the observables. As can be
seen from the figure, the maximum allowed value of DBR can be 2-3
times larger than the SM prediction. The BR can also be suppressed
below the SM prediction due to destructive interference. In the
low-$q^2$ region, the suppression can be large. The features of the
zero-crossing predicted by the SM can be affected: it can be positive
or negative, can take place at any value of $q^2$, and can disappear
altogether. As expected, the impact of $R'_{V,A}$ couplings is much
milder. In particular, the zero-crossing is always positive and in
the low-$q^2$ region.
With new tensor couplings, an enhancement of the DBR by up to a factor
of 3 in comparison to the SM prediction is possible. Moreover, in the
limit of neglecting the muon mass, T operators do not contribute to
the $Y$-terms in Eq.~(\ref{afb-y}); their contribution is only to
$\Theta$. As a result, they can only suppress $A_{FB}$ from its SM
value.
When all NP operators are allowed, we find that $B(\bsbar \to \mu^+ \mu^- \gamma)$ can
be enhanced by a factor of 4, or it can be suppressed significantly.
The shape of $A_{FB}(q^2)$ is determined by the new VA couplings, while
its magnitude can be suppressed if the T couplings are significant.
\section{\boldmath $\bdbar \to {\bar K} \mu^+ \mu^-$
\label{BKmumu}}
The decay mode $\bdbar \to {\bar K} \mu^+ \mu^-$ is interesting primarily because the
forward-backward asymmetry of muons is predicted to vanish in the SM.
This is due to the fact that the hadronic matrix element for the
${\bar B}_d^0 \to \bar{K}$ transition does not have any axial-vector
contribution. $A_{FB}$ can have a nonzero value only if it receives a
contribution from new physics in the form of SP or T operators. Thus, the
information from this decay is complementary to that from the other
decays considered earlier, which were more sensitive to new physics VA
operators.
The total branching ratio of $\bdbar \to {\bar K} \mu^+ \mu^-$ has been measured to be
\cite{Barberio:2008fa}
\begin{equation}
B(\bdbar \to {\bar K} \mu^+ \mu^-) = \left(4.5 ^{+1.2} _{-1.0} \right) \times 10^{-7} ~,
\label{BR-BKmumu}
\end{equation}
which is consistent with the SM prediction \cite{Ali:2002jg}
\begin{equation}
B(\bdbar \to {\bar K} \mu^+ \mu^-)_{\rm SM} = (3.5 \pm 1.2) \times 10^{-7} \; .
\end{equation}
The integrated asymmetry, $\langle A_{FB} \rangle$, has been measured
by BaBar \cite{babar-06} and Belle \cite{Belle-oldKstar,ikado-06} to be
\begin{equation}
\left\langle A_{FB}\right\rangle = (0.15_{-0.23}^{+0.21} \pm 0.08)\,
\, \, \, \, \,
{\rm (BaBar)} \, ,
\end{equation}
\begin{equation}
\left\langle A_{FB}\right\rangle = (0.10 \pm 0.14 \pm 0.01) \, \,
\, \, {\rm (Belle)}. \label{fb_exp}
\end{equation}
These measurements are consistent with zero. However, within
$2\sigma$ they can be as large as $\sim 40\%$. Experiments such as
the LHC or a future Super-$B$ factory will increase the statistics by
more than two orders of magnitude. For example, at ATLAS at the LHC,
after analysis cuts the number of $\bdbar \to {\bar K} \mu^+ \mu^-$ events is expected to be
$\sim 4000$ with $30$ fb$^{-1}$ of data \cite{Adorisio}.
Thus, $\langle A_{FB} \rangle$ can soon be probed to values as low as $5\%$.
With higher statistics, one will even be able to measure $A_{FB}$ as
a function of the invariant dimuon mass squared $q^2$.
This can provide a stronger handle on this quantity than just its
average value $\langle A_{FB} \rangle$.
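A rough cross-check of this reach: the statistical error on a counting asymmetry built from $N$ events is $\sigma_A \simeq \sqrt{(1-A^2)/N}$. The sketch below applies this to the quoted ATLAS yield; it is a back-of-envelope estimate that ignores efficiencies and backgrounds:

```python
# Back-of-envelope statistical error on an asymmetry
# A = (N_F - N_B)/(N_F + N_B) built from n_events counts:
# sigma_A = sqrt((1 - A^2) / n_events).
# Efficiencies and backgrounds are ignored in this estimate.
import math

def sigma_afb(n_events, afb=0.0):
    return math.sqrt((1.0 - afb**2) / n_events)

# With the ~4000 events expected at ATLAS with 30 fb^-1:
print(round(sigma_afb(4000), 3))  # ~0.016, so a 5% asymmetry is a ~3 sigma effect
```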
The effect of NP on $\langle A_{FB} \rangle$ and the $A_{FB}(q^2)$
distribution in $\bdbar \to {\bar K} \mu^+ \mu^-$ was studied in Refs.~\cite{Bobeth:2007dw}
and \cite{Alok:2008wp} respectively. In the latter, it was shown that
simultaneous new-physics SP and T operators can lead to a large
enhancement of $A_{FB}(q^2)$ in the high-$q^2$ region. However, NP
effects due to other operators were not studied. Here we present a
complete analysis of the effect of NP on the $A_{FB}(q^2)$ distribution
in $\bdbar \to {\bar K} \mu^+ \mu^-$ by taking into account all possible NP operators and
their combinations. In addition, we study the possible zero crossing
of $A_{FB}(q^2)$ and the correlations between the DBR and $A_{FB}$
features.
\subsection{Differential branching ratio and forward-backward asymmetry}
The differential branching ratio for this mode is given by
\begin{eqnarray}
\frac{dB}{dz} & = & B'_0\, \phi^{1/2}\,\beta_\mu
\Bigg[X'_{VA} + X'_{SP} + X'_T + X'_{VA{\hbox{-}}SP} + X'_{VA{\hbox{-}}T} \Bigg] \; ,
\end{eqnarray}
where the normalization factor $B_0'$, the phase factor $\phi$ and the
$X'$ terms are given in Appendix~\ref{app-bkmumu}. The subscripts for
the $X'$ terms denote the Lorentz structure(s) contributing to that
term.
The normalized forward-backward asymmetry for the muons in $\bdbar \to {\bar K} \mu^+ \mu^-$
is defined as
%
\begin{equation}
A_{FB}(q^2) = \frac {\displaystyle \int_{0}^{1} d\cos\theta_\mu
\frac{d^2B}{dq^2 d\cos\theta_\mu} - \int_{-1}^{0} d\cos\theta_\mu
\frac{d^2B}{dq^2 d\cos\theta_\mu} }{\displaystyle \int_{0}^{1}
d\cos\theta_\mu \frac{d^2B}{dq^2 d\cos\theta_\mu} + \int_{-1}^{0}
d\cos\theta_\mu \frac{d^2B}{dq^2 d\cos\theta_\mu} } ~,
\end{equation}
where $\theta_\mu$ is the angle between the three-momenta of the ${\bar B}_d^0$
and the $\mu^+$ in the dimuon center-of-mass frame.
The calculation of $A_{FB}(q^2)$ gives
\begin{equation}
A_{FB}(q^2)=\frac{2B'_0 \, \beta_\mu \, \phi}{dB/dz}
\Bigg[Y'_{VA{\hbox{-}}SP} + Y'_{VA{\hbox{-}}T} + Y'_{SP{\hbox{-}}T} \Bigg]
\label{afb-Kmumu-main-1}
\end{equation}
where the $Y'$ terms are given in Appendix~\ref{app-bkmumu}.
The largest source of uncertainty in the calculations is the $\bar{B}
\to \bar{K}$ form factors. As these cannot be calculated from first
principles within QCD, one has to rely on models. In the numerical
calculations, we use the form factors as calculated in
Ref.~\cite{ali-00} in the framework of QCD light-cone sum rules; the
details are given in Appendix~\ref{app-bkmumu}. There are, however,
certain limits in which relations between form factors can be
rigorously obtained. In the large energy (LEET) limit, these relations
are valid up to $\alpha_s$, $1/E_K$ and $1/m_b$ corrections
\cite{Charles, Beneke}.
In the LEET limit, using the form-factor relations in
Eq.~(\ref{leet_rel}), one can verify that $A_{FB}$ is independent of
the form factors. This is quite useful as it implies that the
measurement of $A_{FB}$ can be used to extract the parameters of the
new-physics operators without form-factor uncertainties in this limit.
In the low-energy, large $q^2$, region one can also derive relations
between form factors in the heavy-quark limit \cite{grin1,
grin2}. However, these relations do not completely eliminate the
form-factor dependence of the calculated quantities, and hence we do
not consider these relations. An analysis where these relations have
been used in the context of $ b \to s \mu^+ \mu^-$ can be found in
Refs.~\cite{hill1,hill2}.
{}From Eq.~(\ref{afb-Kmumu-main-1}), clearly new VA couplings alone
cannot give rise to $A_{FB}$, which vanishes in the SM in any case. Note
that this is one of the few cases where the VA couplings fail to
influence an asymmetry significantly, in spite of the large allowed
values of the couplings. This is because the argument about the
hadronic matrix element $\bar{B}^0_d \to \bar{K}$ not having any
axial-vector contribution stays valid even in the presence of NP. The
DBR can, however, be enhanced by up to a factor of 2, or marginally
suppressed.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{afb-bkll-spspp-low.eps}
\includegraphics[width=0.4\linewidth]{afb-bkll-spspp-high.eps} \\
\includegraphics[width=0.4\linewidth]{dbr-bkll-spspp-low.eps}
\includegraphics[width=0.4\linewidth]{dbr-bkll-spspp-high.eps}
\caption{The left (right) panels of the figure show $A_{FB}$ and DBR for
$\bdbar \to {\bar K} \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where all NP SP couplings are present. The band corresponds to the
SM prediction and its uncertainties; the lines show predictions for
some representative values of NP parameters $(R_S, R_P, R'_S,
R'_P)$. For example, the blue curves in the low-$q^2$ and high-$q^2$
regions correspond to $(-2.50,6.18,-2.84,-5.64)$ and
$(-2.41,1.86,-2.07,1.42)$, respectively.
\label{fig:bkll-spspp}}
}
The contribution of SP operators through the $Y'_{VA{\hbox{-}}SP}$
terms can give rise to $A_{FB}$, where the VA contribution comes from
the SM operators. The effect is rather small when only $R_{S,P}$ or
only $R'_{S,P}$ couplings are present, due to the strong constraints
on their values. The peak value of $A_{FB}$ in the low-$q^2$ region
stays below the percent level, while in the high-$q^2$ region it
can be enhanced up to $2\%$ at the extreme end point ($q^2 \roughly> 22$
GeV$^2$), which is virtually impossible to observe. However if both
the primed and unprimed SP couplings are present simultaneously, the
constraints on them are weakened. In such a situation, the peak value
of $A_{FB}$ in the low-$q^2$ (high-$q^2$) can become $\sim 5\%$ ($\sim
3\%$). This may be seen in Fig.~\ref{fig:bkll-spspp}. It is also
observed that $A_{FB}$ is always positive or always negative,
i.e.\ there is no zero crossing. The DBR also is significantly
affected only if both the primed and unprimed SP couplings are
present: it can be enhanced by up to a factor of 3.
\FIGURE[]{
\includegraphics[width=0.4\linewidth]{afb-bkll-t-low.eps}
\includegraphics[width=0.4\linewidth]{afb-bkll-t-high.eps} \\
\includegraphics[width=0.4\linewidth]{dbr-bkll-t-low.eps}
\includegraphics[width=0.4\linewidth]{dbr-bkll-t-high.eps}
\caption{The left (right) panels of the figure show $A_{FB}$ and DBR for
$\bdbar \to {\bar K} \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where only T terms are present. The band corresponds to the SM
prediction and its uncertainties; the lines show predictions for
some representative values of NP parameters $(C_T, C_{TE})$. For
example, the blue curves in the low-$q^2$ and high-$q^2$ regions
correspond to $(0.30,0.37)$ and $(0.49,0.57)$, respectively.
\label{fig:Kmumu-T}}
}
New T couplings are also expected to give rise to $A_{FB}$ through the
$Y'_{VA{\hbox{-}}T}$ terms in Eq.~(\ref{afb-Kmumu-main-1}). It is
observed from Fig.~\ref{fig:Kmumu-T} that $A_{FB}(q^2)$ can be
enhanced up to 5-6\% in almost the entire $q^2$ region. Moreover, at
$q^2 \roughly> 21$ GeV$^2$, the peak value of $A_{FB}(q^2)$ reaches a
larger value ($\sim 30\%$). The value of $A_{FB}(q^2)$ is always
positive or always negative, i.e.\ there is no zero crossing point.
The DBR values do not go significantly outside the SM-allowed range.
\FIGURE[]{
\includegraphics[width=0.4\linewidth]{afb-bkll-spsppt-low.eps}
\includegraphics[width=0.4\linewidth]{afb-bkll-spsppt-high.eps} \\
\includegraphics[width=0.4\linewidth]{dbr-bkll-spsppt-low.eps}
\includegraphics[width=0.4\linewidth]{dbr-bkll-spsppt-high.eps}
\caption{The left (right) panels of the figure show $A_{FB}$ and DBR for
$\bdbar \to {\bar K} \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where both SP and T terms are present. The band corresponds to the
SM prediction and its uncertainties; the lines show predictions for
some representative values of NP parameters $(R_S, R_P, R'_S, R'_P ,
C_T, C_{TE})$. For example, the magenta curves in the low-$q^2$ and
high-$q^2$ regions correspond to
$(-0.09,-2.24,0.16,-2.14,-0.33,-0.40)$ and
$(-0.40,1.87,-0.59,1.88,-0.34,0.66)$, respectively.
\label{fig:bkll-spsppt}}
}
When VA and T couplings are present simultaneously, a DBR enhancement
of up to a factor of 2 is possible, while $A_{FB}$ can be large only at
extremely high $q^2$. On the other hand, when SP and T couplings are
present simultaneously, their interference can have a large impact on
$A_{FB}$. The interference term $Y'_{SP{\hbox{-}}T}$ that contributes
to $A_{FB}$ is not suppressed by $m_\mu/m_b$, and therefore a large
$A_{FB}$ is possible, as can be seen from Fig.~\ref{fig:bkll-spsppt}.
This is also the only combination of NP couplings where a zero
crossing may occur. Among the asymmetries considered in this paper,
this is the one where the SP and T operators can have the largest
impact. The DBR can also be enhanced by up to a factor of 2-3 at
large $q^2$ due to the simultaneous presence of primed and unprimed SP
operators.
\section{\boldmath $\bdbar\to \kstar \mu^+ \mu^-$
\label{BKstarmumu}}
The measurement of the forward-backward asymmetry in $\bdbar\to \kstar \mu^+ \mu^-$ by
the Belle collaboration \cite{Belle-oldKstar,Belle-newKstar}, which
showed a deviation from the SM prediction, indicates the possibility
of new physics. According to the SM, $A_{FB}$ is $\leq
20\%$ and negative at low $q^2$, has a zero crossing at $q^2 \approx
4$ GeV$^2$, and is positive but $\leq 40\%$ for larger $q^2$
values. The experiment showed the asymmetry to be positive throughout
the range of $q^2$ -- consequently no zero crossing -- and $A_{FB}
\approx 60\%$ at large $q^2$ values. This has generated a special
interest in this decay.
There have already been a number of theoretical studies, both within
the SM \cite{Kruger:2000zg,Beneke:2001at,Egede:2008uy} and in specific
NP scenarios
\cite{kruger-matias,Lunghi:2006hc,Altmannshofer:2008dz,AFBNP},
focusing on the branching fraction and $A_{FB}$ of $\bdbar\to \kstar \mu^+ \mu^-$. For
example, Ref.~\cite{HHM} has pointed out that $A_{FB}(q^2)$ is a
sensitive probe of NP that affects the SM Wilson coefficients. Other
observables based on the $K^*$ spin amplitudes of this decay are at
present under active theoretical and experimental analysis
\cite{kruger-matias,Lunghi:2006hc,Egede:2008uy}. Finally, more
challenging observables, such as the polarized lepton forward-backward
asymmetry \cite{Aliev:2001pq, Bensalem:2002ni,
Aliev:2003fy,Aliev:2004hi}, have also been considered, though the
measurement of this quantity is still lacking.
In the coming years, the LHCb experiment will collect around 3000
events of $\bdbar\to \kstar \mu^+ \mu^-$ per fb$^{-1}$ in the full range of $q^2$.
An integrated luminosity of 2 fb$^{-1}$ would already allow
the extraction of the SM zero of $A_{FB}$ (if it is there)
with a precision of $\pm 0.5$ GeV$^2$ \cite{lhc-roadmap}. Indeed,
a dataset of 100 pb$^{-1}$ would already improve the world precision
obtained by BaBar, Belle and CDF. These measurements would also
permit many of the additional tests for NP mentioned above.
The decay $\bdbar\to \kstar \mu^+ \mu^-$, with $\bar{K}^*$ decaying to $\bar{K}\pi$,
has four particles in the final state. This implies that there are
three physical angles that can specify the relative directions of
these four final-state particles. The differential decay rate as a
function of these three angles has much more information than just the
forward-backward asymmetry. Indeed, $A_{FB}$ is just one of the
observables that can be derived from the complete angular analysis of
this decay. In this section we also consider other CP-conserving
observables.
\subsection{Angular analysis}
The complete angular distribution in $\bdbar\to \kstar \mu^+ \mu^-$ has been calculated
in Refs.~\cite{Chen:2002bq,Chen:2002zk} within the SM. In this
section, we calculate the angular distribution in the presence of NP,
which is a new result. The full transition amplitude for
$\bar{B}(p_B)\rightarrow \bar{K}^*(p_{K^*},\epsilon^*) \mu^+(p_\mu^+)
\mu^-(p_\mu^-)$ is
\begin{eqnarray}
\label{TABKstmumu}
i{\cal M}(\bdbar\to \kstar \mu^+ \mu^-) & = & (-i)\frac{1}{2}~\Bigg[\frac{4~G_F}{\sqrt{2}}
\frac{\alpha_{em}}{4 \pi} (V_{ts}^* V_{tb})\Bigg] \times
\nonumber \\
&& \hspace{-3.2cm}
[ M_{V\mu} L^\mu+ M_{A\mu} L^{\mu5}+M_S L+M_P L^5 +M_{T\mu \nu} L^{\mu\nu}
+i M_{E\mu \nu} L_{\alpha \beta} \epsilon^{\mu\nu\alpha\beta}] \; ,
\end{eqnarray}
where the $L$'s are defined in Eq.~(\ref{Ldefs}). The $M$'s are given in Appendix~\ref{app-bkstarmumu}.
The complete three-angle distribution for the decay
$\bar{B}\rightarrow \bar{K}^* (\rightarrow \bar{K}\pi)\mu^+\mu^-$
can be expressed in terms of $q^2$,
two polar angles $\theta_\mu$, $\theta_{K}$, and the angle between
the planes of the dimuon and $K \pi$ decays, $\phi$.
These angles are described in Fig.~\ref{KstmumuAD}.
We choose the momentum and polarization four-vectors of the $K^*$ meson
in the dimuon rest frame as
\begin{eqnarray}
\label{KstKin1}
& & p_{K^{*}} =(E_{K^{*}},0,0, |\vec{p}_{K^{*}}|) \; , \nonumber \\
& & \varepsilon(0) =\frac{1}{m_{K^{*}}}(|\vec{p}_{K^{*}}|,0,0,E_{K^{*}}) \; ,
\quad \varepsilon(\lambda=\pm 1)= \mp \frac{1}{\sqrt{2}} (0,1,\pm i,0) \; ,
\end{eqnarray}
with
\begin{eqnarray}
\label{KstKin2}
E_{K^{*}} &=& \frac{m^2_B-m^2_{K^*}-q^2}{2 \sqrt{q^2}},
~~|\vec{p}_{K^{*}}|=\sqrt{E^2_{K^{*}}-m^2_{K^*}} \; .
\end{eqnarray}
\FIGURE[t]{
\centerline{
\includegraphics[width=9.5cm]{kstmumudia.eps}}
\caption{The description of the angles $\theta_{\mu,K}$ and $\phi$ in the angular distribution of $\bar{B}\rightarrow \bar{K}^* (\rightarrow\bar{K}\pi)\mu^+\mu^-$ decay.}
\label{KstmumuAD}
}
The three-angle distribution can be obtained using the
helicity formalism:
\begin{eqnarray}
\label{ADKst}
&& \frac{d^4\Gamma}{dq^2d\cos{\theta_\mu} d\cos{\theta_{K}} d\phi } =N_F
\times \nonumber\\
\Bigg\lbrace && \cos^2{\theta_{K}} \Big(I^0_1 + I^0_2 \cos{2 \theta_\mu} + I^0_3 \cos{ \theta_\mu} \Big) + \sin^2{\theta_{K}} \Big(I^T_1 + I^T_2 \cos{2 \theta_\mu} + I^T_3 \cos{ \theta_\mu} \nonumber\\ &&+ I^T_4 \sin^2{\theta_\mu} \cos{2 \phi}+ I^T_5 \sin^2{\theta_\mu} \sin{2 \phi} \Big) + \sin{2\theta_{K}}\Big( I^{LT}_1 \sin{2\theta_\mu}\cos{ \phi} \nonumber\\ && + I^{LT}_2 \sin{2\theta_\mu}\sin{ \phi} + I^{LT}_3 \sin{\theta_\mu}\cos{ \phi} + I^{LT}_4 \sin{\theta_\mu}\sin{ \phi}\Big) \Bigg\rbrace \; ,
\end{eqnarray}
where the normalization factor $N_F$ is
\begin{eqnarray}
\label{NF}
N_F &=& \frac{3 \alpha^2_{em}G^2_F|V^*_{ts} V_{tb}|^2 |\vec{p}^B_{K^*}|
\beta_\mu}{2^{14}\pi^6 m^2_B}Br(K^*\rightarrow K\pi) \; .
\end{eqnarray}
Here $\beta_\mu =\sqrt{1-4 m^2_\mu/q^2}$, and $|\vec{p}^B_{K^*}|$ is
the magnitude of the $K^*$ momentum in the $B$-meson rest frame:
\begin{eqnarray}
\label{KstmominB}
|\vec{p}^B_{K^*}| &=& \frac{1}{2 m_B}\sqrt{m^4_B+m^4_{K^*}+q^4-2
[q^2 m^2_B+m^2_{K^*}(m^2_B+q^2)]}\; .
\end{eqnarray}
The twelve angular coefficients $I$ depend on the couplings, kinematic
variables and form factors, and are given in
Appendix~\ref{app-bkstarmumu}. In this paper we concentrate on the
CP-conserving observables: the DBR, the forward-backward asymmetry
$A_{FB}$, the polarization fraction $f_L$, and the asymmetries
$A_T^{(2)}$ and $A_{LT}$.
The theoretical predictions for the relevant $B \to K^*$ form factors
are rather uncertain in the region ($7$~GeV$^2 \le q^2 \le
12$~GeV$^2$) due to nearby charmonium resonances. The predictions are
relatively more robust in the lower and higher $q^2$ regions. We
therefore concentrate on calculating the angular distribution in the
low-$q^2$ ($1~{\rm GeV^2} \le q^2 \le 6~{\rm GeV^2}$) and the
high-$q^2$ ($q^2 \ge 14.4~{\rm GeV^2}$) regions. For numerical
calculations, we follow Ref.~\cite{AFBNP} for the form factors: in the
low-$q^2$ region, we use the form factors obtained using QCD
factorization, while in the high-$q^2$ region, we use the form factors
calculated in the light-cone sum-rule approach.
\subsection{Differential branching ratio and forward-backward asymmetry}
The forward-backward asymmetry for the muons is defined by
\begin{eqnarray}
\label{FBA}
A_{FB}(q^2) &=&\frac{\int^1_0 d\cos{\theta_\mu}\frac{d^2\Gamma}{dq^2d\cos{\theta_\mu} }-\int^0_{-1} d\cos{\theta_\mu}\frac{d^2\Gamma}{dq^2d\cos{\theta_\mu} }}{\int^1_0 d\cos{\theta_\mu} \frac{d^2\Gamma}{dq^2d\cos{\theta_\mu} }+\int^0_{-1} d\cos{\theta_\mu}\frac{d^2\Gamma}{dq^2d\cos{\theta_\mu} }} \; .
\end{eqnarray}
It can be obtained by integrating over the two angles $\theta_{K}$
and $\phi$ in Eq.~(\ref{ADKst}).
We obtain the double differential decay rate as
\begin{eqnarray}
\label{doubDR2}
\frac{d^2\Gamma}{dq^2d\cos{\theta_\mu} } &=&\frac{8 \pi N_F}{3} \Big[\frac{1}{2} \Big(
I^0_1 + I^0_2 \cos{2 \theta_\mu} + I^0_3 \cos{\theta_\mu} \Big)+\Big (
I^T_1 + I^T_2 \cos{2 \theta_\mu} \nonumber\\ && + I^T_3 \cos{\theta_\mu} \Big)\Big]
\; .
\end{eqnarray}
Further integration over the angle $\theta_\mu$ gives the differential
decay rate.
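As a numerical sanity check, the angular shape of Eq.~(\ref{doubDR2}) can be integrated directly. The sketch below uses illustrative (not fitted) values of the coefficients $I$ at a single $q^2$ point; the prefactor $8\pi N_F/3$ cancels in the ratio, and the result reproduces the closed form $A_{FB} = (I^0_3/2 + I^T_3)/(A_L + A_T)$:

```python
# Numerical check of A_FB from the double-differential rate shape,
# using illustrative (not fitted) values of the angular coefficients
# at one q^2 point; the prefactor 8*pi*N_F/3 cancels in the ratio.
I0_1, I0_2, I0_3 = 1.00, -0.30, 0.20
IT_1, IT_2, IT_3 = 0.60, 0.10, -0.15

def rate_shape(c):
    """Shape in c = cos(theta_mu), with cos(2 theta) = 2c^2 - 1."""
    cos2t = 2.0 * c * c - 1.0
    return (0.5 * (I0_1 + I0_2 * cos2t + I0_3 * c)
            + (IT_1 + IT_2 * cos2t + IT_3 * c))

def integrate(f, a, b, n=2000):
    """Midpoint rule, accurate enough for this smooth integrand."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

forward = integrate(rate_shape, 0.0, 1.0)
backward = integrate(rate_shape, -1.0, 0.0)
afb_numeric = (forward - backward) / (forward + backward)

# Closed form: A_FB = (I0_3/2 + IT_3) / (A_L + A_T), with
# A_L = I0_1 - I0_2/3 and A_T = 2*(IT_1 - IT_2/3).
afb_analytic = (0.5 * I0_3 + IT_3) / ((I0_1 - I0_2 / 3.0)
                                      + 2.0 * (IT_1 - IT_2 / 3.0))
```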
The contribution of the NP operators to the differential branching
ratio and forward-backward asymmetry of $\bdbar\to \kstar \mu^+ \mu^-$ was examined in detail
in Ref.~\cite{AFBNP}.
We do not reproduce the analysis here, but only give
the results below.
If only $R_{V,A}$ couplings are present, $A_{FB}$ can be enhanced at low
$q^2$ while remaining positive, so that there is no zero crossing, as
indicated by the recent data
\cite{Belle-oldKstar,Belle-newKstar,BaBar-Kmumu,BaBar-Kstarmumu}.
However, an enhancement at high $q^2$, also indicated by the same
data, is not possible. On the other hand, if only $R'_{V,A}$
couplings are present, $A_{FB}$ can become large and positive at high
$q^2$, but then it has to be large and negative at low $q^2$. These
couplings are therefore unable to explain the positive values of
$A_{FB}$ at low $q^2$. Thus, in order to reproduce the current
$\bdbar\to \kstar \mu^+ \mu^-$ experimental data, one needs both unprimed and primed NP
VA operators. The NP coupling values that come closest to the data
typically correspond to suppressed DBR at low $q^2$. (See
Fig.~\ref{fig:K*mumu-Afb}.) But it is also possible to have a large
$A_{FB}$ (up to 60\%) in the entire $q^2$ region while being
consistent with the SM prediction for the DBR. At present, the errors
on the measurements are quite large. However, if future experiments
reproduce the current central values with greater precision, this will
put important constraints on any NP proposed to explain the data.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{AFBOnlyRVRARVpRAp.eps}
\includegraphics[width=0.4\linewidth]{AFBOnlyRVRARVpRApH.eps} \\
\includegraphics[width=0.4\linewidth]{BROnlyRVRARVpRAp.eps}
\includegraphics[width=0.4\linewidth]{BROnlyRVRARVpRApH.eps}
\caption{The left (right) panels of the figure show $A_{FB}$ and DBR for
$\bdbar\to \kstar \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where both $(R_V, R_A)$ and $(R'_V, R'_A)$ terms are present. The
band corresponds to the SM prediction and its uncertainties; the
lines show predictions for some representative values of NP
parameters $(R_V, R_A, R'_V, R'_A)$. For example, the red curves
for $A_{FB}$ in the low and high $q^2$ regions correspond to $(-1.55,
1.75, 6.16, 1.73)$ and $(-5.79, 1.10, 0.47,-3.34)$,
respectively. The pink curves for DBR in the low-$q^2$ and high-$q^2$
regions correspond to $(1.96, -4.09, 4.61, 0.13)$. For comparison,
the experimental data are also displayed in blue cross lines.
\label{fig:K*mumu-Afb}}
}
New SP couplings by themselves cannot significantly affect either the
DBR or the $A_{FB}$ predictions of the SM. New T couplings in general
tend to enhance DBR significantly, by up to a factor of 2, while not
contributing any additional terms to the asymmetry. As a result, the
magnitude of $A_{FB}$ is suppressed. The zero crossing can be anywhere
in the entire $q^2$ range, or it may disappear altogether. However,
whenever it is present, it is always a SM-like (positive) crossing.
When SP and T couplings are present simultaneously, additional
contributions to $A_{FB}$ that are not suppressed by $m_\mu/m_B$ are
possible. As a result, $A_{FB}$ obtained with this combination can be
marginally enhanced as compared to the case with only T operators. It
is then possible to have no zero crossing. However, the magnitude of
$A_{FB}$ cannot be large in the high-$q^2$ region.
\subsection{Polarization fraction $f_L$}
The differential decay rate and $K^*$ polarization fractions can be
found by integrating over the three angles in Eq.~(\ref{ADKst}) to get
\begin{eqnarray}
\frac{d\Gamma}{dq^2 } &=& \frac{8 \pi N_F}{3} (A_{L}+A_{T}) ~,
\end{eqnarray}
where the longitudinal and transverse polarization amplitudes
$A_L$ and $A_T$ are obtained from Eq.~(\ref{doubDR2}):
\begin{eqnarray}
\label{HL}
A_L &=& \Big(I^0_1 - \frac{1}{3} I^0_2 \Big),\quad A_T = 2 \Big(I^T_1 - \frac{1}{3} I^T_2 \Big) ~.
\end{eqnarray}
It can be seen from the expressions for the $I$'s in Appendix~\ref{app-bkstarmumu}
[see Eq.~(\ref{eq:IT})] that SP couplings cannot affect $A_T$.
The longitudinal and transverse polarization fractions, $f_L$ and $f_T$,
respectively, are defined as
\begin{eqnarray}
\label{flft}
f_L &=& \frac{A_L}{A_L+A_T} ~~,~~~~
f_T = \frac{A_T}{A_L+A_T} ~.
\end{eqnarray}
In the SM, $f_L$ can be as large as 0.9 at low $q^2$, and it decreases
to about 0.3 at high $q^2$. As can be seen from Fig.~\ref{fig:fL-VA},
new VA couplings can suppress $f_L$ substantially: it can almost
vanish in some allowed parameter range.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{fLOnlyRVRARVpRAp.eps}
\includegraphics[width=0.4\linewidth]{fLOnlyRVRARVpRApH.eps} \\
\caption{The left (right) panel of the figure shows $f_L$ for
$\bdbar\to \kstar \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where both $(R_V, R_A)$ and $(R'_V, R'_A)$ terms are present. For
example, the blue curves in the low-$q^2$ and high-$q^2$ regions
correspond to $(1.64, -0.90, 4.27, -0.91)$ and $(1.96, -4.09, 4.61,
0.13)$, respectively. For comparison, the experimental data are
also displayed in blue cross lines.
\label{fig:fL-VA}}
}
New SP couplings cannot change the value of $f_L$ outside the range
allowed by the SM. This may be attributed to the strong constraints
on the values of these couplings. New T couplings tend to suppress
$f_L$, except at $q^2 \approx 1$-$2$ GeV$^2$, where the value of
$f_L$ cannot be less than 0.5, as may be seen from Fig.~\ref{fig:fL-T}.
Since both VA and T couplings tend to suppress $f_L$, their combined
effect results in a similar behavior.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{fLOnlyCT.eps}
\includegraphics[width=0.4\linewidth]{fLOnlyCTH.eps} \\
\caption{The left (right) panel of the figure shows $f_L$ for
$\bdbar\to \kstar \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in the scenario
where only new T couplings are present. The band corresponds to the
SM prediction and its uncertainties; the lines show predictions for
some representative values of NP parameters $(C_T,C_{TE})$. For
example, the red curves in the low-$q^2$ and high-$q^2$ regions correspond
to $(0.66, -0.14)$ and $(0.3, -0.46)$, respectively.
\label{fig:fL-T}}
}
\subsection{Angular asymmetries $A_T^{(2)}$ and $A_{LT}$}
In this subsection we consider the two angular asymmetries $A_T^{(2)}$
and $A_{LT}$. The first quantity was discussed before in
Ref.~\cite{kruger-matias}, while $A_{LT}$ is introduced here for the
first time.
The CP-conserving transverse asymmetry $A_T^{(2)}$
can be defined through the double differential decay rate
\begin{eqnarray}
\label{doubDR3}
\frac{d^2\Gamma}{dq^2 d\phi } &=&\frac{1}{2\pi} \frac{d\Gamma}{dq^2 }
\Big[ 1+ f_T \left(A^{(2)}_T \cos{2 \phi} + A^{(im)}_T \sin{2 \phi}\right)
\Big] \; .
\end{eqnarray}
Here $ A^{(im)}_T$ depends on the imaginary part of a certain combination
of amplitudes and can be used to construct CP-violating observables.
We will not consider it any further in this work. The asymmetry
$ A^{(2)}_T $ can be obtained by integrating over the two polar
angles $\theta_{\mu}$ and $\theta_{K}$ in Eq.~(\ref{ADKst}).
It can be expressed as
\begin{eqnarray}
\label{AT2}
A^{(2)}_T &=& \frac{4 I^T_4}{3 A_T}.
\end{eqnarray}
We observe that $A_T^{(2)}$ cannot be affected by SP couplings.
In the SM,
\begin{eqnarray}
A^{(2)}_T & \approx& \frac{4 \beta^2_\mu \Big(|A^V_\perp|^2-|A^V_\parallel|^2 + |A^A_\perp|^2-|A^A_\parallel|^2\Big)}{3 A_T} \; .
\end{eqnarray}
The transversity amplitudes $A_{\parallel, \perp}$ are defined through
Eqs.~(\ref{Ksthelamp}) and (\ref{Kstrasamp1}) given in
Appendix~\ref{app-bkstarmumu}. At leading order in
$\Lambda_{QCD}/E_{K^{*}}$, $\Lambda_{QCD}/m_b$ and $\alpha_s$ (the
LEET limit), one can use the form-factor relations of
Refs.~\cite{Charles, Beneke} and neglect terms of ${\cal
O}(m^2_{K^*}/m^2_B)$ to obtain
\begin{equation}
A^{+}_{V} \approx 0 ~~,~~~~ A^{+}_{A} \approx 0 \; .
\label{leet_kstar}
\end{equation}
Thus, in the low-$q^2$ region,
\begin{equation}
A^i_\parallel \approx \frac{A^{-}_i}{\sqrt{2}} ~~,~~~~ A^i_\perp \approx -\frac{A^{-}_i}{\sqrt{2}} \quad \mathrm{for}\quad i= V, A \; ,
\label{leet_kstar2}
\end{equation}
which corresponds to the LEET limit. $A_{T}^{(2)} \approx 0$ in the
SM and is independent of form factors up to corrections of order
$\Lambda_{QCD}/E_{K^{*}}$, $\Lambda_{QCD}/m_b$ and $\alpha_s$,
i.e.\ the hadronic uncertainty is small. This can be seen in
Figs.~\ref{fig:AT2-VA} and \ref{fig:AT2-T}. This indicates that
corrections to the LEET limit are small, and makes $A_{T}^{(2)}$ an
excellent observable to look for new-physics effects
\cite{kruger-matias}.
We now examine the longitudinal-transverse asymmetry $A_{LT}$, defined by
\begin{eqnarray}
\label{ALT1-def}
A_{LT} &=& \frac{\int^{\pi/2}_{-\pi/2}d\phi(\int^1_0 d\cos {\theta_{K}} \frac{d^3\Gamma}{dq^2d\phi d\cos {\theta_{K}}}-\int^0_{-1} d\cos {\theta_{K}} \frac{d^3\Gamma}{dq^2d\phi d\cos {\theta_{K}}})}{\int^{\pi/2}_{-\pi/2}d\phi(\int^1_0 d\cos {\theta_{K}} \frac{d^3\Gamma}{dq^2d\phi d\cos {\theta_{K}}}+\int^0_{-1} d\cos {\theta_{K}} \frac{d^3\Gamma}{dq^2d\phi d\cos {\theta_{K}}})} \; .
\end{eqnarray}
One can compare $A_{LT}$ to $A_{FB}$: in $A_{FB}$ the angle $\phi$ is
integrated over its entire range, while in $A_{LT}$ it is integrated
only over the range $(-\pi/2,\pi/2)$. This choice of
integration range eliminates all terms which depend on the imaginary
part of combinations of amplitudes in the angular distribution. (These
eliminated terms can be used to construct CP-violating observables and
will not be discussed here.) In $A_{LT}$ only the CP-conserving parts
of the angular distribution survive. Note that, in the CP-conserving
limit, $A_{LT}$ is the same as the observable $S_5$ defined in
Ref.~\cite{Altmannshofer:2008dz}, apart from a normalization constant.
The quantity $A_{LT}$ can also be expressed in terms of the
observables $A_{T}^{(3)}$ and $A_{T}^{(4)}$ defined in
Ref.~\cite{Egede:2008uy}. However, $A_{LT}$ is easily extracted from
the angular distribution and has different properties in the LEET
limit than $A_{T}^{(3)}$ and $A_{T}^{(4)}$.
Using Eq.~(\ref{ADKst}), the asymmetry $A_{LT}$ can be expressed as
\begin{eqnarray}
\label{ALT1-expr}
A_{LT} &=& \frac{I^{LT}_3}{2 (A_L + A_T) }.
\end{eqnarray}
We observe from Eq.~(\ref{eq:ILT}) that $A_{LT}$ depends on the VA couplings, as
well as on V-S, S-TE, and P-T interference terms.
In the SM,
\begin{eqnarray}
\label{ALT1-exprSM}
A_{LT} &=& \frac{\beta_\mu {\rm Re}[ A^{L}_{0,VA}(A^{V*}_\perp - A^{A*}_\perp)-A^{R}_{0,VA}(A^{V*}_\perp + A^{A*}_\perp)]}{\sqrt{2} (A_L + A_T) }\; .
\end{eqnarray}
Now, in the LEET limit, $A^{+}_{V,A} \approx 0$. Hence, in this limit,
\begin{eqnarray}
A_{LT}^{LEET} &\propto& \frac{ {\rm Re}[A^0_V A^{-*}_A + A^0_A A^{-*}_V]}{A_L + A_T} \; .
\end{eqnarray}
{}From this it can be shown that the SM predicts ${A}_{LT} = 0$ at
\begin{eqnarray}
q^2 \approx -\frac{C^{eff}_7 m_b m^2_B}{C^{eff}_7 m_b +C^{eff}_9 m_B }
\approx 1.96 ~{\rm GeV}^2 \; .
\end{eqnarray}
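As a rough numerical cross-check, the zero-crossing formula above can be evaluated with representative values of the Wilson coefficients and masses. The numbers used below ($C_7^{\rm eff}\approx -0.31$, $C_9^{\rm eff}\approx 4.2$, $m_b\approx 4.68$~GeV, $m_B = 5.28$~GeV) are illustrative assumptions, not values quoted in this analysis:

```python
# Illustrative evaluation of the LEET-limit zero crossing of A_LT.
# The Wilson coefficients and masses below are representative values
# chosen for illustration only.
def alt_zero_crossing(c7eff, c9eff, m_b, m_B):
    """q^2 (in GeV^2) at which A_LT vanishes in the LEET limit."""
    return -c7eff * m_b * m_B**2 / (c7eff * m_b + c9eff * m_B)

q2_zero = alt_zero_crossing(c7eff=-0.31, c9eff=4.2, m_b=4.68, m_B=5.28)
print(f"A_LT zero crossing: q^2 ~ {q2_zero:.2f} GeV^2")  # ~ 1.95 GeV^2
```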
Thus, just like $A_{FB}$, the quantity $A_{LT}$ also has a zero
crossing which is independent of form factors in the LEET limit. Note
that the zero crossing of $A_{LT}$ is different from that of $A_{FB}$.
Figs.~\ref{fig:AT2-VA} and \ref{fig:AT2-T} also demonstrate that the
zero crossing of $A_{LT}$ has a very small hadronic uncertainty. This
indicates small corrections to the LEET limit, making the position of
the zero crossing of $A_{LT}$ a robust prediction of the SM. This quantity
would therefore be very useful in searching for new-physics effects.
New VA couplings can affect $A_T^{(2)}$ significantly: they can
enhance its magnitude by a large amount, change its sign, and change
its $q^2$-dependence. The zero-crossing point may be at a value of
$q^2$ different from that predicted by the SM.
Since $A_{LT}$ here is identical to the observable $S_5$ in
Refs.~\cite{ Altmannshofer:2008dz,aoife} in the CP-conserving limit
(apart from a normalization factor), the zero-crossing in both of
these observables is expected to take place at the same $q^2$.
Indeed, the results agree at LO, while the NLO corrections can shift
the $q^2$ at the zero-crossing to $q^2= 2.24^{+0.06}_{-0.08}$~GeV$^2$
\cite{Altmannshofer:2008dz}. Note that the deviation due to new VA
couplings can be much larger than the effects due to NLO corrections.
Except at very low $q^2$, the magnitude of $A_{LT}$ is generally
suppressed by new VA couplings. The primed VA couplings can be
constrained by $A_{LT}$ better than the unprimed VA couplings. In both
cases, the value of $A_{LT}$ can be anywhere in the $q^2$ range, and
can be positive or negative. In particular, there may or may not be a
zero crossing, and if there is, its position can be different from
that of the SM.
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{AT2OnlyRVRARVpRAp.eps}
\includegraphics[width=0.4\linewidth]{AT2OnlyRVRARVpRApH.eps} \\
\includegraphics[width=0.4\linewidth]{ALTOnlyRVRARVpRAp.eps}
\includegraphics[width=0.4\linewidth]{ALTOnlyRVRARVpRApH.eps}
\caption{The left (right) panels of the figure show $A_T^{(2)}$ and
$A_{LT}$ for $\bdbar\to \kstar \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in
the scenario where both $(R_V, R_A)$ and $(R'_V, R'_A)$ terms are
present. The band corresponds to the SM prediction and its
uncertainties; the lines show predictions for some representative
values of NP parameters $(R_V, R_A, R'_V, R'_A)$. For example, the
pink curves for $A_T^{(2)}$ in the low-$q^2$ and high-$q^2$ regions
correspond to $(1.96, -4.09, 4.61,0.13)$ and $(1.64, -0.90, 4.27,
-0.91)$, respectively. The red curves for $A_{LT}$ in the low-$q^2$ and
high-$q^2$ regions correspond to $(-1.55, 1.75, 6.16, 1.73)$ and
$(-5.79, 1.10, 0.47, -3.33)$, respectively.
\label{fig:AT2-VA}}
}
New SP couplings do not affect $A_T^{(2)}$, and $A_{LT}$ qualitatively
behaves similarly to the SM. New T couplings in general tend to
suppress the magnitudes of both asymmetries (see
Fig.~\ref{fig:AT2-T}).
\FIGURE[t]{
\includegraphics[width=0.4\linewidth]{AT2OnlyCT.eps}
\includegraphics[width=0.4\linewidth]{AT2OnlyCTH.eps} \\
\includegraphics[width=0.4\linewidth]{ALTOnlyCT.eps}
\includegraphics[width=0.4\linewidth]{ALTOnlyCTH.eps}
\caption{The left (right) panels of the figure show $A_T^{(2)}$ and
$A_{LT}$ for $\bdbar\to \kstar \mu^+ \mu^-$ in the low-$q^2$ (high-$q^2$) region, in
the scenario where only new T couplings are present. The band
corresponds to the SM prediction and its uncertainties; the lines
show predictions for some representative values of NP parameters
$(C_T, C_{TE})$. For example, the blue curves for $A_T^{(2)}$ in
the low-$q^2$ and high-$q^2$ regions correspond to $(0.3, -0.46)$ and
$(-0.005, 0.014)$, respectively. The red curves for $A_{LT}$ in the
low-$q^2$ and high-$q^2$ regions correspond to $(0.3, -0.46)$ and $(0.66,
-0.14)$, respectively.
\label{fig:AT2-T}}
}
\section{Discussion and Summary}
\label{summary}
Flavor-changing neutral current (FCNC) processes are expected to be
incisive probes of new physics. In the SM, they occur only at loop
level, and hence are suppressed. This may allow the new-physics (NP)
effects to be identifiable. Of course, since we have no clue about
what form the NP takes, the observations from a variety of processes
are necessary. In this paper, we have focussed on the processes that
involve the effective transition $ b \to s \mu^+ \mu^-$.
The transition $ b \to s \mu^+ \mu^-$ is responsible for many decay modes such as
$\bsbar \to \mu^+ \mu^-$, $\bdbar \to X_s \mu^+ \mu^-$, $\bsbar \to \mu^+ \mu^- \gamma$, $\bdbar \to {\bar K} \mu^+ \mu^-$ and $\bdbar\to \kstar \mu^+ \mu^-$.
While some of these processes (e.g.\ $\bsbar \to \mu^+ \mu^-$) have not yet been
observed, the upper bounds on their branching ratios have already
yielded strong constraints on NP. Some of these processes have been
observed and the measurements of their branching fractions, as well as
of additional observables such as the forward-backward asymmetries,
are available. Indeed, the recently-observed muon forward-backward
asymmetry in $\bdbar\to \kstar \mu^+ \mu^-$ has been found to deviate slightly from the
SM predictions. If this is in fact due to the presence of NP, such NP
should contribute to all the other decays involving the effective
transition $ b \to s \mu^+ \mu^-$. The effects of this NP on these decay modes
would be correlated, and hence a combined analysis of all these decay
modes would be invaluable in discerning the type of NP present.
While specific models of NP may be used and their effect on the
relevant observables studied, we have chosen to explore the NP in a
model-independent way, in terms of the Lorentz structures of the NP
operators that contribute to the effective $ b \to s \mu^+ \mu^-$ Hamiltonian. We
have performed a general analysis that includes NP vector-axial vector
(VA), scalar-pseudoscalar (SP), and/or tensor (T) operators. We have
computed the effects of such NP operators, individually and in all
combinations, on these decays. We have taken the couplings to be real
and have considered the CP-conserving observables in this paper; the
CP-violating observables are discussed in Ref.~\cite{CPviol}. The aim
is to find NP signals, and using them, to identify the Lorentz
structure of the NP. As the first step towards this goal, we
calculate the constraints on the NP couplings, and, keeping the
couplings within these bounds, we look for the observables where the
NP signal can potentially stand out above the SM background.
It is crucial to understand this SM background, which makes it
imperative to use observables whose values are predicted reasonably
accurately within the SM. The main source of the SM uncertainties is
the hadronic matrix elements, whose theoretical calculations often
have errors of the order of tens of percent. We have handled this on
many levels. First, we have tried to identify observables that will
not be very sensitive to the hadronic uncertainties. For example in
$\bdbar \to {\bar K} \mu^+ \mu^-$, the SM prediction for the forward-backward asymmetry is
simply zero, independent of any hadronic elements. Also, while the
differential branching ratios may be strongly dependent on the
hadronic matrix elements, the forward-backward asymmetries are less
so. Furthermore, the large-energy effective theory (LEET) limits can
be used to control the uncertainties in the low-$q^2$ region for
observables like $A_{FB}$ and $A_T^{(2)}$. For example, certain
observables, such as the zero-crossing of $A_{FB}$ in $\bdbar\to \kstar \mu^+ \mu^-$,
can be shown to be robust under form-factor uncertainties in the LEET
limit. The longitudinal-transverse asymmetry $A_{LT}$ in $\bdbar\to \kstar \mu^+ \mu^-$
also has a zero crossing in the SM with small
hadronic uncertainties. These measurements can even be used to
extract the parameters of the NP operators, to a very good
approximation.
Also, we focus only on the situations where the NP contribution can be
so significant that it will stand out even if the SM errors were
magnified. Our figures show bands for SM predictions that include the
form-factor uncertainties as quoted in the form-factor calculations,
and these are overlaid with some examples of the allowed values of
these observables when NP contributions are included. This allows the
scaling of these uncertainties to be easily visualized. We identify
and emphasize only those situations where the results with the NP can
be significantly different from those without the NP, even if the
hadronic uncertainties were actually much larger. Note that further
inclusion of the NLO QCD corrections would affect the central values
of the SM predictions to a small extent, while also decreasing the
renormalization scale uncertainty. However, since our primary interest
is looking for observables where the NP effects are large, a LO
analysis is sufficient.
\afterpage{\clearpage}
\TABLE[!htb]{
{\footnotesize
\begin{tabular}{p{2.6cm}|p{2.5cm}|p{2.7cm}|p{2.6cm}|p{2.6cm}}
\hline
Observable & SM & {Only new VA} & {Only new SP} & {Only new T} \\
\hline
$\bsbar \to \mu^+ \mu^-$ & & & & \\
\hfill BR & $(3.35 \pm 0.32) \times 10^{-9}$ &
$\bullet$ Marginal E \newline $\bullet$ Significant S
& $\bullet$ Large E \newline
$\bullet$ Maximal S
& No effect \\
\hline
$\bdbar \to X_s \mu^+ \mu^-$ & & & & \\
\hfill DBR & & $\bullet$ E ($\times 2$) \newline
$\bullet$ S ($\div 2$)
& $\bullet$ Marginal E & $\bullet$ E ($\times 2$) \\
& & & & \\
\hfill $A_{FB}$ & ZC$\approx 3.5$ GeV$^2$ &
$\bullet$ E(30\%) low $q^2$ \newline
$\bullet$ ZC shift / \newline disappearance
& $\bullet$ Marginal S & $\bullet$ Marginal S \\
& & & & \\
\hfill $f_L$ & $\bullet$ $0.9 \to 0.3$ \newline
(low$\to$high $q^2$)
& $\bullet$ Large S at low $q^2$
& $\bullet$ Marginal S & $\bullet$ Marginal E \\
\hline
$\bsbar \to \mu^+ \mu^- \gamma$ & & & & \\
\hfill DBR &
& $\bullet$ E ($\times 2- \times 3$) \newline $\bullet$ S (low $q^2$)
& No effect & $\bullet$ E ($\times 3$) \\
& & & & \\
\hfill $A_{FB}$ & ZC$\approx 4.3$ GeV$^2$
& $\bullet$ ZC shift / \newline disappearance
& No effect & $\bullet$ Large S \\
\hline
$\bdbar \to {\bar K} \mu^+ \mu^-$ & & & & \\
\hfill DBR &
& $\bullet$ E ($\times 2$) \newline $\bullet$ Marginal S
& $\bullet$ E at high $q^2$ & $\bullet$ Small effect \\
& & & & \\
\hfill $A_{FB}$ & Vanishes
& $\bullet$ No effect
& $\bullet$ E at low $q^2$ \newline
$\bullet$ No ZC & $\bullet$ E at high $q^2$ \newline
$\bullet$ No ZC \\
\hline
$\bdbar\to \kstar \mu^+ \mu^-$ & & & & \\
\hfill DBR &
& $\bullet$ E ($\times 2$) \newline $\bullet$ S ($\div 2$)
& No effect & $\bullet$ E ($\times 2$) \\
& & & & \\
\hfill $A_{FB}$ & ZC$\approx 3.9$ GeV$^2$
& $\bullet$ E at low $q^2$ \newline
$\bullet$ ZC shift / \newline disappearance
& No effect & $\bullet$ Significant S \newline
$\bullet$ ZC shift \\
& & & & \\
\hfill $f_L$ & $\bullet$ $0.9 \to 0.3$ \newline
(low$\to$high $q^2$)
& $\bullet$ Large S
& No effect & $\bullet$ Significant S \\
& & & & \\
\hfill $A_T^{(2)}$ & $\bullet$ $\uparrow$ with $q^2$ \newline
$\bullet$ No ZC
& $\bullet$ E ($\times 2$) \newline $\bullet$ ZC possible
& No effect & $\bullet$ Significant S \\
& & & & \\
\hfill $A_{LT}$ & $\bullet$ ZC at low $q^2$ \newline
$\bullet$ more -ve \newline at large $q^2$
& $\bullet$ Significant S \newline $\bullet$
ZC shift / \newline disappearance
& No effect & $\bullet$ Significant S \\
\hline
\end{tabular}
}
\caption{The effect of NP couplings on observables.
E($\times n$): enhancement by up to a factor of $n$,
S($\div n$): suppression by up to a factor of $n$,
ZC: zero crossing.
\label{tab:summary}}
}
Our results are summarized in Table~\ref{tab:summary}, for the cases
where the NP has only one type of Lorentz structure: VA, SP or T. We
note certain generic features of the influence of different NP Lorentz
structures.
New VA operators are the ones that influence the observables strongly
in most cases. They typically can interfere with the SM terms
constructively or destructively, thus enhancing or suppressing the
differential branching ratios by up to factors of 2 or 3. They also
are able to enhance almost all the asymmetries, the notable exception
being $A_{FB}$ in $\bdbar \to {\bar K} \mu^+ \mu^-$, where the VA operators cannot
contribute. But for most other observables, this kind of NP can
potentially be observed. This can be traced to the large magnitudes of
the NP couplings still allowed by the data, which in turn follow from
the possibility of interference between the new VA operators and the
SM operators, allowing more freedom for the new VA couplings.
Typically, the $R_{V,A}$ couplings are constrained more weakly than
the $R'_{V,A}$ couplings, since the corresponding operators have the
same structure as those of the SM, allowing strong destructive
interferences. Consequently, the operators with $R_{V,A}$ couplings
are more likely to show themselves over and above the SM
background. We point out that the exception to this rule is the
$A_{FB}$ in $\bdbar\to \kstar \mu^+ \mu^-$ at large $q^2$, where the $R'_{V,A}$
couplings can cause a larger enhancement.
The SP operators, on the other hand, are handicapped by the stringent
constraints from the upper bound on $B(\bsbar \to \mu^+ \mu^-)$. If only $R_{S,P}$ or
$R'_{S,P}$ couplings are present, the constraints become even more
severe. It is for this reason that, even when the SP contributions are
unsuppressed by $m_\mu/m_b$, they are not often large enough to stand
apart from the SM background.
The couplings of the T operators, viz.\ $C_T$ and $C_{TE}$, are not as
suppressed as those of the SP operators. Therefore, they typically
contribute significantly to the DBRs. However, the interference terms
of these operators with the SM operators often suffer from the
$m_\mu/m_b$ helicity suppression, and hence they tend to suppress the
magnitudes of the asymmetries.
The combination of multiple Lorentz structures in general gives rise
to the combination of features of the individual Lorentz structures
involved. In particular, if the VA operators appear in conjunction
with another Lorentz structure, the effects of the VA operators
typically dominate. The T operators can interfere with the SP
operators without the $m_\mu/m_b$ helicity suppression, but the strong
constraints on the SP operators hold them back. A remarkable
exception is the combination of SP and T operators in the
forward-backward asymmetry in $\bdbar \to {\bar K} \mu^+ \mu^-$. This asymmetry, which
vanishes in the SM, can be enhanced to $\sim 5 \%$ at low $q^2$ with
only SP operators, and can be enhanced to $\sim 30\%$ with T operators
but only at $q^2 \approx m_B^2$. However, the presence of both SP and
T operators allows the asymmetry to be $\sim 40\%$ in the whole
high-$q^2$ region. A similar feature, though to a less-spectacular
extent, is observed in $A_{FB}$ of $\bdbar\to \kstar \mu^+ \mu^-$ \cite{AFBNP}.
With the large amount of data expected from the LHC experiments and
$B$-factories in the coming years, we may be able to detect confirmed
NP signals in the above processes. In that case, a combined analysis
of all these decay modes, as carried out in this paper, would enable
us to identify the Lorentz structure of the NP operators. This will
be important in establishing precisely what type of NP is present.
\bigskip
\noindent
{\bf Acknowledgments}: We thank Gagan Mohanty and Zoltan Ligeti for
useful comments, and S. Uma Sankar and Alejandro Szynkman for helpful
collaboration on several parts of this analysis. M.D. would like to
thank Wolfgang Altmannshofer for useful discussions. This work was
financially supported by NSERC of Canada (AKA, DL).
\bigskip
\noindent
{\bf Notes added}: After this paper was submitted, the CDF
Collaboration reported \cite{CDFmeas} the measurement of
\begin{equation}
B(\bsbar \to \mu^+ \mu^-) = (1.8^{+1.1}_{-0.9}) \times 10^{-8} ~.
\end{equation}
On the other hand, the recent LHCb update does not confirm this result
\cite{LHCbupdate}. They improve the present upper bound on
$B(\bsbar \to \mu^+ \mu^-)$ to
\begin{equation}
B(\bsbar \to \mu^+ \mu^-) \le 1.3 \times 10^{-8} ~~~{\hbox{(90\% C.L.)}}
\end{equation}
In addition, LHCb has measured various observables in $\bdbar\to \kstar \mu^+ \mu^-$
\cite{LHCbupdate2}. Their measurement of the $A_{FB}$ distribution is
consistent with the SM prediction, except in the high-$q^2$ region,
where we now see a slight suppression. This is contrary to the
measurement of Belle. That is, LHCb does not confirm the Belle result
of a large FB asymmetry in the low-$q^2$ region. Thus, the jury is
still out on whether NP has already been seen in these measurements.
\subsection{Receiver design}
\label{sec:receiver_design}
A schematic of the LFI pseudo correlation receiver is shown in Figure~\ref{fig:lfi_pseudo_correlation_schematic}. In each radiometer the sky signal and a stable reference load at $\sim$4~K \cite{2009_LFI_cal_R1} are coupled to cryogenic low-noise HEMT amplifiers via a 180$^\circ$ hybrid. A phase shift oscillating between 0 and 180$^\circ$ at a frequency of 4096~Hz is then applied to one of the two signals. A second phase switch is present for symmetry on the second radiometer leg but it does not introduce any phase shift. A second 180$^\circ$ hybrid coupler recombines the signals, so that the output is a sequence of sky and reference-load signals alternating at twice the phase-switch frequency.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=12cm]{figures/fig_front_back_end_v2.eps}
\end{center}
\caption{Schematic of the LFI receivers pseudo correlation architecture}
\label{fig:lfi_pseudo_correlation_schematic}
\end{figure}
In the back-end of each radiometer (see bottom part of Figure~\ref{fig:lfi_pseudo_correlation_schematic}) the RF signals are further amplified, filtered by a low-pass filter and then detected. After detection the sky and reference load signals are integrated and digitised as 14-bit integers by the LFI Digital Acquisition Electronics (DAE) box. Further binning and software quantisation are performed in the Radiometer Electronics Box Assembly (REBA), a digital processing unit that manages telemetry packet production from the raw instrument digital output. Further details about REBA and digital signal processing are described in \cite{2009_LFI_cal_E1} and \cite{2009_LFI_cal_D2}.
The various RCAs are tagged with labels from LFI18 to LFI28 (see Table~\ref{tab:rca_id_correspondence}); the two radiometers connected to the two OMT arms are labelled M-0 (\textit{main} OMT arm) and S-1 (\textit{side} OMT arm, see \cite{2009_LFI_cal_O2}), while the two output detectors of each radiometer are labelled 0 and 1. Therefore with the label LFI18S-1, for example, we indicate radiometer S-1 of RCA LFI18, and with the label LFI24M-01 we indicate detector 1 of radiometer M-0 in RCA LFI24.
\begin{table}[h!]
\caption{Correspondence between receiver centre frequency and RCA label}
\label{tab:rca_id_correspondence}
\begin{center}
\begin{tabular}{l l}
\hline
\mbox{} & \\
70~GHz & LFI18 through LFI23\\
44~GHz & LFI24, LFI25 and LFI26\\
30~GHz & LFI27 and LFI28\\
\mbox{} & \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Signal output}
\label{sec:signal_output}
If the receiver isolation is perfect (i.e. if the sky and reference load signals are completely separated after the second hybrid) the relationship linking $T_{\rm in}$ to $V_{\rm out}$ can be written as:
\begin{equation}
V_{\rm out} = G(T_{\rm in}, T_{\rm noise})\times\left(T_{\rm in}+T_{\rm noise}\right),
\label{eq:vout}
\end{equation}
where $T_{\rm in}$ refers to either $T_{\rm sky}$ or $T_{\rm ref}$, $V_{\rm out}$ is the corresponding voltage output, $T_{\rm noise}$ is the noise temperature and $G(T_{\rm in}, T_{\rm noise})$ is the calibration factor that, in general, may depend on the input and noise temperatures.
In case of a linear response the calibration factor is a constant, so that $G(T_{\rm in}, T_{\rm noise})\equiv G_0$. In Planck-LFI all the 70~GHz receivers have proved very linear over a wide span of input temperatures, ranging from $\sim 8$~K to $\sim 40$~K, while the receivers at 30 and 44~GHz have shown slight compression, which called for the development of a non-linear response model.
In the remainder of this section we provide an analytical overview of the response model, while in Section~\ref{sec:linearity_receivers_response} we discuss the source of the non-linearity, showing that it is linked to compression in the back-end RF amplifiers and in the detector diode.
The parametrisation has been chosen following the work described in \cite{1989_daywitt_nonlinear_equations}. According to this work, compression in the back-end of a radiometric receiver is modelled with a variable gain (i.e.\ one that depends on the input power) with the analytical form described in Eq.~(\ref{eq:non_linearity_parametrisation}):
\begin{eqnarray}
\label{eq:non_linearity_parametrisation}
\mbox{FEM} &=& \left\{ \begin{array}{ll}
\mbox{Gain} = G^{\rm FEM}\\
\mbox{Noise} = T_{\rm noise}^{\rm FEM}\end{array}
\right. \nonumber\\
&&\mbox{}\\
\mbox{BEM} &=& \left\{ \begin{array}{ll}
\mbox{Gain} = G^{\rm BEM} = \frac{G_0^{\rm BEM}}{1+b\cdot G_0^{\rm BEM}\cdot p}\\
\mbox{Noise} = T_{\rm noise}^{\rm BEM},\end{array}
\right.\nonumber
\end{eqnarray}
where FEM stands for \textit{front-end module}, $p$ is the power entering the BEM and $b$ is a parameter defining the BEM non-linearity. This relationship is simple, correctly describes the limits of linear response ($b=0$) and infinite compression ($b=\infty$), and fits the radiometric response curves very well (see plots in Appendix~\ref{sec:best_fits}). This parametrisation therefore constituted our base model for characterising the radiometric voltage output response.
The power entering the BEM (we neglect waveguide attenuation which may be included in the FEM parameters) is:
\begin{equation}
p = k \beta G_0^{\rm FEM} \left(T_{\rm in} + \tilde T_{\rm noise}\right),
\label{eq:bem_input_power}
\end{equation}
where $\beta$ is the bandwidth, $k$ the Boltzmann constant, and $\tilde T_{\rm noise} = T_{\rm noise}^{\rm FEM} + \frac{T_{\rm noise}^{\rm WG}}{G_0^{\rm FEM}}$.
At the output of the BEM we therefore have (the diode constant is absorbed into the BEM gain):
\begin{eqnarray}
V_{\rm out} &=& k\beta G_0^{\rm FEM}\frac{G_0^{\rm BEM}\left(T_{\rm in}+T_{\rm noise}\right)}{1+b k \beta G_0^{\rm FEM} G_0^{\rm BEM}\left(T_{\rm in}+T_{\rm noise}\right)}
= \frac{G_0\left(T_{\rm in}+T_{\rm noise}\right)}{1+b G_0\left(T_{\rm in}+T_{\rm noise}\right)}\nonumber\\
\mbox{}\\
G_0 &=& G_0^{\rm FEM} G_0^{\rm BEM} k \beta\nonumber
\label{eq:vout_full}
\end{eqnarray}
which can be written in the following compact form:
\begin{eqnarray}
&&V_{\rm out} = G_{\rm tot}\left(T_{\rm in}+T_{\rm noise}\right)\nonumber\\
&&G_{\rm tot} = \frac{G_0}{1+b G_0\left(T_{\rm in}+T_{\rm noise}\right)}
\label{eq:vout_compact}
\end{eqnarray}
We see from Eq.~(\ref{eq:vout_compact}) that in the case $b=0$ it reduces to the classical linear equation, whereas if $b\neq 0$ the receiver gain is not constant but depends on the input and noise temperatures coupled with the non-linearity parameter.
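The limiting behaviour of Eq.~(\ref{eq:vout_compact}) can be illustrated with a minimal numerical sketch (the function name and parameter values below are ours, chosen for illustration only):

```python
def v_out(t_in, g0, t_noise, b):
    """Receiver output voltage of Eq. (vout_compact); linear for b = 0,
    compressed (gain decreasing with input temperature) for b > 0."""
    return g0 * (t_in + t_noise) / (1.0 + b * g0 * (t_in + t_noise))

# For b = 0 the response reduces to the linear law V = G0 * (T_in + T_noise):
print(f"{v_out(20.0, g0=0.01, t_noise=15.0, b=0.0):.3f}")  # 0.350 = 0.01 * 35
# For b > 0 the output is compressed below the linear value:
print(f"{v_out(20.0, g0=0.01, t_noise=15.0, b=1.0):.3f}")  # 0.259, compressed
```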
\subsection{Sources of compression in LFI receivers}
\label{sec:sources_of_compression}
The linearity in a microwave receiver depends on the response of its individual components: radio-frequency amplifiers, detector diode and back-end analog electronics.
The main potential sources of compression in the LFI receivers are represented by the RF amplifiers in the front-end and back-end modules and the back-end square-law detector. Let us now estimate the input power at the various stages (FEM amplification, BEM amplification and detector) expected during nominal operations, i.e. observing an input temperature of $\sim$2.7~K. The input power at a given stage in the radiometric chain can be calculated from the following relationship:
\begin{equation}
P_{\rm in} = k\beta G\left(T_{\rm in}+T_{\rm noise}\right)
\label{eq:input_power}
\end{equation}
where $T_{\rm in}$ is the input antenna temperature, $G$ and $T_{\rm noise}$ are gain and noise temperature of the radiometric chain before the stage considered in the calculation, $\beta$ is the bandwidth and $k$ the Boltzmann constant. Table~\ref{tab:input_powers} summarises estimates of the input power at the various receiver stages based on typical gain, noise temperature and bandwidth values.
\begin{table}[h!]
\caption{Typical input power in dBm to the various receiver stages. The calculation has been performed
using the following typical parameters: $G^{\rm FEM} = 30$~dB, $G^{\rm BEM} = 35$~dB, $\beta = 20\%$ of the centre frequency, $T_{\rm noise} = 10$~K at 30~GHz, 16~K at 44~GHz and 30~K at 70~GHz.}
\label{tab:input_powers}
\begin{center}
\begin{tabular}{l c c c}
\hline
\hline
&30 GHz &44 GHz &70 GHz\\
\hline
Front-end &-98 &-97 &-96\\
Back-end &-60 &-57 &-52\\
Diode &-25 &-22 &-17\\
\hline
\end{tabular}
\end{center}
\end{table}
From Table~\ref{tab:input_powers} it is apparent that the input power to front-end amplifiers is extremely low, and very far from the typical compression levels of HEMT devices. Back-end RF amplifiers and, especially, detector diodes, receive a much higher input power so that they can be a source of non linear response.
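As a sketch, the back-end row of Table~\ref{tab:input_powers} can be reproduced from Eq.~(\ref{eq:input_power}) with the typical parameters quoted in the table caption (the function below is our own illustrative implementation, not instrument software):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def input_power_dbm(t_in, t_noise, gain_db, bandwidth_hz):
    """Input power (in dBm) at a receiver stage, Eq. (input_power)."""
    gain = 10.0 ** (gain_db / 10.0)
    p_watt = K_B * bandwidth_hz * gain * (t_in + t_noise)
    return 10.0 * math.log10(p_watt / 1e-3)

# Back-end input (after a 30 dB front-end gain), T_in = 2.7 K, beta = 20% of nu_0:
for nu_ghz, t_noise in [(30, 10.0), (44, 16.0), (70, 30.0)]:
    p = input_power_dbm(2.7, t_noise, gain_db=30.0, bandwidth_hz=0.2 * nu_ghz * 1e9)
    print(f"{nu_ghz} GHz back-end: {p:.0f} dBm")  # within ~1 dB of the table
```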
In particular, this proved to be the case for the 30 and 44~GHz back-end modules, as discussed in detail in Section~\ref{sec:back_end_characterisation}. It must be noticed that the input power received by the back-end RF amplifiers and detector diodes is actually higher in the 70~GHz receivers than at 30 and 44~GHz, which appears to contradict the observed behaviour. We must underline, however, that the 30 and 44~GHz BEM components are different from those in the 70~GHz BEMs; in particular, the RF amplifiers in the low-frequency BEMs are based on GaAs MMIC devices, while InP MMIC devices have been used in the 70~GHz BEMs. Further details about BEM components and response can be found in \cite{2009_LFI_cal_R9,2009_LFI_cal_R10}.
\subsection{Characterisation of non linearity}
\label{sec:non_linearity_characterisation}
\subsubsection{Characterisation of receiver response}
\label{sec:characterisation_of_receiver_response}
The linearity response of the LFI receivers has been derived by measuring, for each output channel, the radiometer voltage output, $V_{\rm out}$, at various input temperatures of the reference loads, $T_{\rm in}$, ranging from $\sim$8~K to $\sim$30~K. Then the linearity parameter $b$ can be determined by fitting the acquired data $V_{\rm out}^j(T_{\rm in}^j)$ with Eq.~(\ref{eq:vout_compact}), where the fitting parameters are $G_0$, $T_{\rm noise}$ and $b$ (see Section~\ref{sec:noise_t_calibration_const}).
We have also characterised linearity with a different and somewhat simpler approach, which avoids a three-parameter fit and allows us to define a normalised non-linearity parameter that is independent of the receiver characteristics, provided that the temperature range over which linearity is characterised is approximately the same for all detectors. This parameter has been calculated as follows:
\begin{itemize}
\item remove the average from the measured input temperature and output voltage, i.e. calculate $\tilde V_{\rm out}^j = V_{\rm out}^j - \langle V_{\rm out}\rangle_j$ and $\tilde T_{\rm in}^j = T_{\rm in}^j - \langle T_{\rm in}\rangle_j$;
\item fit the $\tilde T_{\rm in}^j(\tilde V_{\rm out}^j)$ data with a straight line, obtaining a slope $s$ (in K/V);
\item multiply the voltage outputs by the calculated slope, i.e. calculate $\bar T_{\rm out}^j = s\times\tilde V_{\rm out}^j$;
\item calculate $L = \sum \left( \bar T_{\rm out}^j - \tilde T_{\rm in}^j \right)^2$.
\end{itemize}
In the case of a perfectly linear response, $\bar T_{\rm out}^j = \tilde T_{\rm in}^j$ (i.e.\ the measured points, after normalisation, lie on the $y=x$ line) and $L=0$. The parameter $L$ therefore provides a measure of the deviation from linearity.
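The procedure above can be sketched in a few lines (our own illustrative implementation; we take the slope $s$ from the fit of $\tilde T_{\rm in}$ versus $\tilde V_{\rm out}$, i.e.\ in K/V, so that $s\times\tilde V_{\rm out}$ has units of temperature):

```python
def nonlinearity_metric(t_in, v_out):
    """Normalised non-linearity parameter L; L = 0 for a perfectly linear response."""
    n = len(t_in)
    t_tilde = [t - sum(t_in) / n for t in t_in]    # remove average temperature
    v_tilde = [v - sum(v_out) / n for v in v_out]  # remove average voltage
    # Least-squares slope (K/V) of t_tilde vs v_tilde; the intercept vanishes
    # by construction because both averages have been removed.
    s = sum(t * v for t, v in zip(t_tilde, v_tilde)) / sum(v * v for v in v_tilde)
    return sum((s * v - t) ** 2 for t, v in zip(t_tilde, v_tilde))

t = [9.0 + 21.0 * i / 9 for i in range(10)]        # 9 K ... 30 K
linear = [0.02 * ti + 0.1 for ti in t]             # perfectly linear output
compressed = [0.02 * ti - 1e-4 * ti**2 for ti in t]
print(nonlinearity_metric(t, linear))      # ~ 0
print(nonlinearity_metric(t, compressed))  # > 0
```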
A comprehensive view of the values of $L$ for all detectors (calculated over an input temperature range of the reference load between $\sim$9~K and $\sim$30~K) is provided in Figure~\ref{fig:non_linearity_parameters_all}. From the figure it is apparent that the 70~GHz detectors are extremely linear, while the 30 and, especially, the 44~GHz detectors show significant non-linearity.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{c|c}
\includegraphics[width = 6.6cm]{figures/nonlinearity-m00.eps} &
\includegraphics[width = 6.8cm]{figures/nonlinearity-m01.eps} \\
\hline
\includegraphics[width=6.8cm]{figures/nonlinearity-s10.eps} &
\includegraphics[width=7cm]{figures/nonlinearity-s11.eps}
\end{tabular}
\end{center}
\caption{Non linearity parameters for all LFI channels. In each plot a small inset provides a zoom on the 70 GHz non linearity parameters on an expanded scale.}
\label{fig:non_linearity_parameters_all}
\end{figure}
In Figure~\ref{fig:70_ghz_linearity} we show a comprehensive plot of the normalised receiver response from all 24 detectors at 70~GHz. Notice that the measured points lie almost perfectly on the $y = x$ line. Furthermore, the plot appears to display far fewer points than expected from 24 detectors; this is because, for each normalised temperature $\tilde T_{\rm in}^j$, the normalised voltage values from the various detectors essentially overlap.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{figures/nonlinearity_70.ps}
\caption{Normalised output response from all 70 GHz detectors.}
\end{center}
\label{fig:70_ghz_linearity}
\end{figure}
In Figures~\ref{fig:30_ghz_linearity} and \ref{fig:44_ghz_linearity} the same kind of plot clearly shows significant deviations from linearity, especially for the 44~GHz receivers. Because, in this case, the non-linearity varies among the detectors and overplotting all the data in each frequency channel would make the plots difficult to read, we have plotted the normalised voltage output of each RCA in a separate graph.
\begin{figure}[h!]
\centering
\includegraphics[width=7.5cm]{figures/nonlinearity_LFI27.ps}
\includegraphics[width=7.5cm]{figures/nonlinearity_LFI28.ps}
\caption{Normalised output response from 30 GHz detectors. Each plot represents data from the 4 detectors of each RCA.}
\label{fig:30_ghz_linearity}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=7.5cm]{figures/nonlinearity_LFI24.ps}
\includegraphics[width=7.5cm]{figures/nonlinearity_LFI25.ps} \\
\includegraphics[width=7.5cm]{figures/nonlinearity_LFI26.ps}
\caption{Normalised output response from 44 GHz detectors. Each plot represents data from the 4 detectors of each RCA.}
\label{fig:44_ghz_linearity}
\end{figure}
Deviation from linearity in the 30 and 44~GHz receivers is instead caused by signal compression in the back-end RF amplifiers and diodes in the presence of a broad-band signal. This is discussed in more detail in the next section, where we present tests that were performed on two back-end units at 44~GHz and that provided the best characterisation of the signal compression over a very wide input power range.
\subsubsection{Characterisation of back-end response}
\label{sec:back_end_characterisation}
A set of tests was performed on two back-end modules at 44~GHz with the aim of identifying the source of compression (RF amplifier or diode). The test was performed by observing with the receiver sky and reference signals at $\sim$25~K and $\sim$18~K, respectively, and varying the input power to the back end with a variable power attenuator placed between the front and back end and coupled to a multimeter. In Figure~\ref{fig:raw_attenuation_curves} we show the output (after offset removal) from the two back ends as a function of the attenuator position in millimetres.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{figures/raw_attenuation_curves.ps}
\end{center}
\caption{Output voltage (with offset removed) from both back end modules as a function of the raw attenuation in mm.}
\label{fig:raw_attenuation_curves}
\end{figure}
The next step was to calculate the input power to the back-end module as a function of the attenuator position. This has been done using two independent methods: (i) using a power meter to record the integrated signal reaching the back end, and (ii) using a noise figure meter to measure the input signal level versus frequency.
\paragraph{Attenuation curves using a power meter.}
A power meter with a dynamic range extending down to $-70$~dBm was first calibrated using its internal reference source and then used to measure signals from the front-end module attenuated down to $-21$~dB.
Three independent measurements, taken on different days and in different configurations, showed good repeatability, as shown in Figure~\ref{fig:attenuation_power_meter}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{figures/attenuation_power_meter.ps}
\end{center}
\caption{Attenuation as a function of attenuator position: curve and error bars are the result of three independent measurements.}
\label{fig:attenuation_power_meter}
\end{figure}
It is worth noting that the curve in Figure~\ref{fig:attenuation_power_meter} is an approximation of the effective attenuation, which should strictly be calculated by convolving in frequency the power exiting the front-end module with the back-end insertion gain. Since the RF insertion gain of these particular devices was unknown, we have estimated the magnitude of this approximation by using the insertion gain measured on a different, but similar, back-end module. Although not rigorous, this comparison (shown in Figure~\ref{fig:attenuation_power_meter_comparison_insertion_gain}) demonstrates that the power meter measurements provide a good approximation of the back-end module input power.
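A minimal numerical sketch of the gain-weighted band averaging described above (the frequency grid and both spectra below are illustrative placeholders, not the measured FEM spectrum or BEM insertion gain):

```python
import numpy as np

def band_integral(freq, y):
    """Trapezoidal integral over a uniform frequency grid."""
    d = freq[1] - freq[0]
    return d * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def effective_input_power(freq, p_fem, g_bem):
    """Insertion-gain-weighted band power, normalised so that a flat
    gain reproduces the plain band integral of the FEM spectrum."""
    width = freq[-1] - freq[0]
    return band_integral(freq, p_fem * g_bem) / band_integral(freq, g_bem) * width

# Placeholder spectra over the 44 GHz band.
nu = np.linspace(33.0, 50.0, 200)                  # GHz
p_fem = np.exp(-0.5 * ((nu - 40.0) / 5.0) ** 2)    # arbitrary FEM spectrum
g_flat = np.ones_like(nu)                          # flat insertion gain
```

With a flat gain the weighting is inert, while a frequency-dependent gain shifts the estimate; the size of that shift is the magnitude of the approximation discussed above.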
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{figures/attenuation_power_meter_comparison_insertion_gain.ps}
\end{center}
\caption{Comparison of attenuation curves from the direct power meter measurements and from the same measurements after convolution with the RF insertion gain of a similar back-end module. Differences are very small.}
\label{fig:attenuation_power_meter_comparison_insertion_gain}
\end{figure}
\paragraph{Attenuation curves using a noise figure meter.}
A noise figure meter was also used to measure the power exiting the FEM for several positions of the variable attenuator, roughly corresponding to steps of 1~dB. For each position the values have been integrated over the bandwidth and compared with those obtained with the power meter. In Figure~\ref{fig:attenuation_noise_meter} we show the results obtained with the noise figure meter, integrated over two different frequency ranges, compared with the power meter measurements. The results indicate a good match between the curves obtained with the different methods.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{figures/attenuation_noise_meter.ps}
\end{center}
\caption{Comparison between integrated power in two different intervals ([33 GHz -- 49 GHz] and [38 GHz -- 49 GHz]) using the noise meter and power detected using the power meter in the frequency interval [33 GHz -- 50 GHz].}
\label{fig:attenuation_noise_meter}
\end{figure}
These results eventually led us to use the average power meter measurements (see Figure~\ref{fig:attenuation_power_meter}) to convert the raw attenuation in mm into power units. In Figure~\ref{fig:normalised_compression_curves} we show the normalised compression curves for the two tested back-end modules, highlighting the deviation from the expected linear behaviour. Considering that the maximum power corresponded to an input temperature of $\sim 25$~K, the power range spanned by this test extends well into the region where the receivers operate in flight, i.e. with input temperatures of a few K.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{figures/normalised_compression_curves.ps}
\end{center}
\caption{Normalized compression curves for the two tested 44 GHz back-end modules.}
\label{fig:normalised_compression_curves}
\end{figure}
Analysing the derivative of the compression curves (shown in Figure~\ref{fig:derivative_normalised_compression_curves}) made it apparent that no truly linear response exists anywhere in the input power range.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=10cm]{figures/derivative_output_voltage.ps}
\end{center}
\caption{Derivative of the compression curves. Notice how no region of constant derivative (i.e. linear behaviour) can be found.}
\label{fig:derivative_normalised_compression_curves}
\end{figure}
\subsection{Noise temperature and photometric calibration constant}
\label{sec:noise_t_calibration_const}
Noise temperature and photometric calibration constant have been calculated from experimental datasets in which the sky-load temperature was varied in a range between $\sim 8$~K and $\sim 30$~K. For each detector of the 30 and 44~GHz receivers we fitted the $V_{\rm out}(T_{\rm in}^{\rm ant})$ data against Eq.~(\ref{eq:vout_full}) to retrieve $G_0$, $T_{\rm noise}$ and $b$.
In Figure~\ref{fig:non_linear_fit_results} we show an example of the best fit for a 30~GHz and a 44~GHz receiver, while in Appendix~\ref{sec:best_fits} we display the whole set of best fits for the 30~GHz and 44~GHz detectors. The list of the best-fit parameters is reported in Table~\ref{tab:best_fit_parameters}. Further details about tests and data analysis leading to these values can be found in \cite{2009_LFI_cal_M4}.
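As an illustrative sketch of this fitting step (not the pipeline actually used), with synthetic data standing in for a real RCA dataset and assuming for Eq.~(\ref{eq:vout_full}) the compressed-response form $V_{\rm out}=G_0\,(T_{\rm in}+T_{\rm noise})/[1+b\,G_0\,(T_{\rm in}+T_{\rm noise})]$ consistent with the expressions of Sect.~\ref{sec:calibrated_in-flight_sensitivity}:

```python
import numpy as np
from scipy.optimize import curve_fit

def v_out(t_in, g0, t_noise, b):
    """Compressed response: V = G0 (T + Tn) / (1 + b G0 (T + Tn))."""
    x = t_in + t_noise
    return g0 * x / (1.0 + b * g0 * x)

# Synthetic dataset standing in for measured V_out(T_in^ant) points.
rng = np.random.default_rng(0)
t_ant = np.linspace(8.0, 30.0, 25)               # sky-load temperatures [K]
g0_true, tn_true, b_true = 0.0048, 15.5, 1.8     # LFI24-like values
v_meas = v_out(t_ant, g0_true, tn_true, b_true)
v_meas *= 1.0 + 1e-4 * rng.standard_normal(t_ant.size)

(g0_fit, tn_fit, b_fit), _ = curve_fit(v_out, t_ant, v_meas,
                                       p0=(0.01, 10.0, 1.0))
```

The fit retrieves the three parameters simultaneously, with the quality of the constraint depending on the spanned $T_{\rm in}$ range.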
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7.5cm]{figures/fit_example_2700.ps}
\includegraphics[width=7.5cm]{figures/fit_example_2400.ps}
\end{center}
\caption{Two examples of non-linear fitting of $V_{\rm out}$ vs. $T_{\rm in}^{\rm ant}$ data. Left panel: 30~GHz receiver LFI27 (detector M-00); right panel: 44~GHz receiver LFI24 (detector M-00).}
\label{fig:non_linear_fit_results}
\end{figure}
\begin{table}[h!]
\caption{Photometric calibration constant, noise temperature and non-linearity parameters obtained from the RCA test campaign (see \cite{2009_LFI_cal_M4}).}
\label{tab:best_fit_parameters}
\begin{center}
\begin{tabular}{|l c c c c|}
\multicolumn{5}{c}{$G_0$ (V/K)} \\
\hline
\hline
&M-00 &M-01 &S-10 &S-11\\
\hline
LFI24 & 0.0048 & 0.0044 & 0.0062 & 0.0062\\
LFI25 & 0.0086 & 0.0085 & 0.0079 & 0.0071\\
LFI26 & 0.0052 & 0.0067 & 0.0075 & 0.0082\\
LFI27 & 0.0723 & 0.0774 & 0.0663 & 0.0562\\
LFI28 & 0.0621 & 0.0839 & 0.0607 & 0.0518\\
\hline
\end{tabular}
\vspace{1cm}
\begin{tabular}{|l c c c c|}
\multicolumn{5}{c}{$T_{\rm noise}$ (K)} \\
\hline
\hline
&M-00 &M-01 &S-10 &S-11\\
\hline
LFI24 &15.5 & 15.3 & 15.8 & 15.8\\
LFI25 &17.5 & 17.9 & 18.6 & 18.4\\
LFI26 &18.4 & 17.4 & 16.8 & 16.5\\
LFI27 &12.1 & 11.9 & 13.0 & 12.5\\
LFI28 &10.6 & 10.3 & 9.9 & 9.8\\
\hline
\end{tabular}
\begin{tabular}{|l c c c c|}
\multicolumn{5}{c}{$b$}\\
\hline
\hline
&M-00 &M-01 &S-10 &S-11\\
\hline
LFI24 &1.794 & 1.486 & 1.444 & 1.446\\
LFI25 &1.221 & 1.171 & 0.800 & 1.013\\
LFI26 &1.085 & 1.418 & 0.943 & 1.218\\
LFI27 &0.123 & 0.122 & 0.127 & 0.140\\
LFI28 &0.190 & 0.157 & 0.187 & 0.196\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Calibrated in-flight sensitivity}
\label{sec:calibrated_in-flight_sensitivity}
One of the key performance parameters derived from data acquired during the calibration campaign is the in-flight calibrated sensitivity, estimated starting from the raw uncalibrated white noise sensitivity measured under laboratory conditions, which were similar but not identical to the expected in-flight conditions. In particular, during laboratory tests the input sky temperature was $\gtrsim 8$~K and the front-end unit temperature was in some cases (e.g. during the instrument-level test campaign~\cite{2009_LFI_cal_M3}) greater than 20~K.
In this section we discuss how the raw noise measurements in the laboratory have been extrapolated to flight conditions with particular reference to the effect of response non-linearity on the calculations.
Our starting point is the raw datum: a pair of uncalibrated white noise levels in V$\times\sqrt{\rm s}$ for the two detectors of a radiometer, measured with the sky load at a temperature $T_{\rm sky-load}$ and the front-end unit at physical temperature $T_{\rm test}$.
In order to derive the calibrated white noise level extrapolated to an input temperature equal to $T_{\rm sky}$ and with the front-end unit at a temperature $T_{\rm nominal}$, we have performed the following three steps:
\begin{enumerate}
\item extrapolation to nominal front-end unit temperature;
\item extrapolation to nominal input sky temperature;
\item calibration in units of K$\times \sqrt{\rm s}$.
\end{enumerate}
A detailed discussion of the first step can be found in \cite{2009_LFI_cal_M3}. Here we focus on the latter two steps, which are affected by the non-linearity of the receiver response.
Let us start from the radiometer equation in which, for each detector, the white noise spectral density is given by:
\begin{equation}
\delta T_{\rm rms} = 2\frac{T_{\rm in}+T_{\rm noise}}{\sqrt{\beta}}
\label{eq:single_diode_radiometer_equation}
\end{equation}
Now we want to find a similar relationship for the uncalibrated white noise spectral density linking $\delta V_{\rm rms}$ to $V_{\rm out}$. We start from the following:
\begin{equation}
\delta V_{\rm rms} = \frac{\partial V_{\rm out}}{\partial T_{\rm in}}\delta T_{\rm rms};
\label{eq:delta_vrms_basic}
\end{equation}
calculating the derivative of $V_{\rm out}$ using Eq.~(\ref{eq:vout_full}) and using $\delta T_{\rm rms}$ from Eq.~(\ref{eq:single_diode_radiometer_equation}) we obtain:
\begin{equation}
\delta V_{\rm rms} = \frac{2\,V_{\rm out}}{\sqrt{\beta}}\left[1+
G_0 b \left(T_{\rm in}+T_{\rm noise}\right)\right]^{-1},
\label{eq:white_noise}
\end{equation}
where $\beta$ is the bandwidth and $V_{\rm out}$ is the receiver DC voltage output. Considering the two input temperatures $T_{\rm sky-load}$ and $T_{\rm sky}$ then the ratio $\rho = \frac{\delta V_{\rm rms}(T_{\rm sky})}{\delta V_{\rm rms}(T_{\rm sky-load})}$ is:
\begin{equation}
\rho = \frac{ V_{\rm out}(T_{\rm sky})}
{V_{\rm out}(T_{\rm sky-load})}
\frac{1+G_0 b(T_{\rm sky-load}+T_{\rm noise})}{1+G_0 b(T_{\rm sky}+T_{\rm noise})}.
\label{eq:ratio_uncalibrated_white_noise}
\end{equation}
Using Eq.~(\ref{eq:vout_full}) to expand $\rho$ in Eq.~(\ref{eq:ratio_uncalibrated_white_noise}) we have:
\begin{equation}
\rho = \frac{
T_{\rm sky}+T_{\rm noise}}{T_{\rm sky-load}+T_{\rm noise} }
\left[\frac{1+ b\, G_0
(T_{\rm sky-load}+T_{\rm noise})}{1+b\, G_0
(T_{\rm sky}+T_{\rm noise})}\right]^2,
\label{eq:ratio_uncalibrated_white_noise_1}
\end{equation}
and $\delta V_{\rm rms}(T_{\rm sky}) = \rho\times \delta V_{\rm rms}(T_{\rm sky-load})$.
From Eqs.~(\ref{eq:white_noise}) and (\ref{eq:vout_full}) we obtain that
\begin{equation}
\delta V_{\rm rms} = \frac{G_0}{\left[1+b\, G_0(T_{\rm sky}+T_{\rm noise})\right]^2}\times 2\frac{T_{\rm sky}+T_{\rm noise}}{\sqrt{\beta}}.
\label{eq:tilde_wn_final}
\end{equation}
The calibrated noise extrapolated at the sky temperature, $\delta T_{\rm rms}$, can be obtained considering that, by definition, $\delta T_{\rm rms} = 2\frac{T_{\rm sky}+T_{\rm noise}}{\sqrt{\beta}}$, therefore:
\begin{equation}
\delta T_{\rm rms} = \frac{\left[1+b\, G_0(T_{\rm sky}+T_{\rm noise})\right]^2}{G_0} \delta V_{\rm rms}.
\end{equation}
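Steps 2 and 3 of the extrapolation above can be sketched as follows (the numerical inputs are illustrative placeholders, not actual LFI measurements):

```python
def extrapolate_sensitivity(dv_lab, t_skyload, t_sky, g0, t_noise, b):
    """Extrapolate an uncalibrated lab white-noise level (V sqrt(s))
    from T_sky-load to T_sky via rho of
    Eq. (ratio_uncalibrated_white_noise_1), then calibrate to
    K sqrt(s) with the non-linear response parameters."""
    bracket = lambda t: 1.0 + b * g0 * (t + t_noise)
    rho = ((t_sky + t_noise) / (t_skyload + t_noise)
           * (bracket(t_skyload) / bracket(t_sky)) ** 2)
    dv_sky = rho * dv_lab                      # step 2: extrapolate in voltage
    return bracket(t_sky) ** 2 / g0 * dv_sky   # step 3: calibrate to K sqrt(s)

# Illustrative 44 GHz-like parameters.
dt_rms = extrapolate_sensitivity(dv_lab=1.5e-4, t_skyload=8.0, t_sky=2.73,
                                 g0=0.0048, t_noise=15.5, b=1.8)
```

In the linear limit ($b=0$) this reduces, as expected, to rescaling by the simple temperature ratio followed by division by $G_0$.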
A summary of the expected in-flight sensitivities for the Planck-LFI can be found in \cite{2009_LFI_cal_M3}.
\subsection{Noise effective bandwidth}
\label{sec_noise_effective_bandwidth}
The well-known radiometer equation applied to the single-diode output links the white noise level to sky and noise temperatures and the receiver bandwidth. It reads \cite{seiffert02}:
\begin{equation}
\delta T_{\rm rms} = 2\frac{T_{\rm sky}+T_{\rm noise}}{\sqrt{\beta}}.
\label{eq:radiometer_equation}
\end{equation}
In the case of linear response we can write Eq.~(\ref{eq:radiometer_equation}) in its most useful uncalibrated form:
\begin{equation}
\delta V_{\rm rms} = 2\frac{V_{\rm out}}{\sqrt{\beta}},
\label{eq:radiometer_equation_uncalibrated}
\end{equation}
which is commonly used to estimate the receiver bandwidth, $\beta$, from a simple measurement of the receiver DC output and white noise level, i.e.:
\begin{equation}
\tilde\beta = 4\left(\frac{V_{\rm out}}{\delta V_{\rm rms}}\right)^2.
\label{eq:noise_effective_bandwidth}
\end{equation}
If the response is linear and the noise is purely radiometric (i.e. the additive noise from the back-end electronics is negligible and there are no non-thermal noise inputs from the source), then $\tilde \beta$ is equivalent to the receiver bandwidth, i.e.
\begin{equation}
\tilde \beta \equiv \beta = 4\left(\frac{T_{\rm sky}+T_{\rm noise}}{\delta T_{\rm rms}}\right)^2.
\label{eq:bandwidths_equivalence}
\end{equation}
Conversely, if the receiver output is compressed, from Eq.~(\ref{eq:vout_full}) we have that:
\begin{equation}
\delta V_{\rm rms} = \frac{\partial V_{\rm out}}{\partial T_{\rm in}}\delta T_{\rm rms}.
\label{eq:delta_vrms}
\end{equation}
By combining Eqs.~ (\ref{eq:vout_full}), (\ref{eq:noise_effective_bandwidth}) and (\ref{eq:delta_vrms}) we find:
\begin{equation}
\tilde \beta = 4\left(\frac{T_{\rm sky}+T_{\rm noise}}{\delta T_{\rm rms}}\right)^2
\left[ 1 + b\, G_0(T_{\rm sky}+T_{\rm noise})\right]^2 \equiv
\beta \left[ 1 + b\, G_0(T_{\rm sky}+T_{\rm noise})\right]^2,
\label{eq:bandwidth_compressed}
\end{equation}
which shows that $\tilde \beta$ overestimates the ``optical'' bandwidth unless the non-linearity parameter $b$ is very small. In the left panel of Figure~\ref{fig:eff_bw_lfi27} we show how the noise effective bandwidth calculated from Eq.~(\ref{eq:noise_effective_bandwidth}) depends on the level of the input signal if the receiver response is non-linear. In the right panel of the same figure we show how this dependence disappears if we take into account the receiver non-linearity via Eq.~(\ref{eq:bandwidth_compressed}).
Data presented in Figure~\ref{fig:eff_bw_lfi27} have been taken during the RCA test campaign with various levels of the reference load temperature and refer to the 30 GHz receiver LFI27.
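A minimal sketch of this correction (the input values below are illustrative, not LFI data):

```python
def noise_effective_bandwidth(v_out, dv_rms):
    """Eq. (noise_effective_bandwidth): beta_tilde = 4 (V_out / dV_rms)^2."""
    return 4.0 * (v_out / dv_rms) ** 2

def optical_bandwidth(beta_tilde, g0, t_noise, b, t_sky):
    """Remove the compression bias of Eq. (bandwidth_compressed)."""
    return beta_tilde / (1.0 + b * g0 * (t_sky + t_noise)) ** 2

# Illustrative 30 GHz-like numbers: V_out in V, white noise in V*sqrt(s).
bt = noise_effective_bandwidth(v_out=0.25, dv_rms=1.0e-4)            # Hz
beta = optical_bandwidth(bt, g0=0.0723, t_noise=12.1, b=0.123, t_sky=10.0)
```

For $b=0$ the correction is inert and $\tilde\beta$ coincides with $\beta$; for $b>0$ the corrected value is always smaller than the raw estimate.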
\begin{figure}[h!]
\begin{center}
\includegraphics[width=7.5cm]{figures/eff_bw_lfi27_nocorrection.ps}
\includegraphics[width=7.5cm]{figures/eff_bw_lfi27_corrected.ps}
\end{center}
\caption{Noise effective bandwidth of the 30~GHz receiver LFI27 calculated with different reference load input temperatures,
neglecting (left) and considering (right) the compression effect.}
\label{fig:eff_bw_lfi27}
\end{figure}
In Figure~\ref{fig:eff_bw_lfi19} we show similar data for the 70~GHz receiver LFI19. Data were acquired during the RCA test campaign with a variable input temperature at the sky load. In this case the data show no trend of the noise effective bandwidth calculated from Eq.~(\ref{eq:noise_effective_bandwidth}) with input temperature, which provides an independent confirmation of the linear response of the 70~GHz receivers.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=9cm]{figures/eff_bw_lfi19_nocorrection.ps}
\end{center}
\caption{Noise effective bandwidth for the 70 GHz receiver LFI19 with different sky load input temperatures, calculated without correcting for the compression effect.
}
\label{fig:eff_bw_lfi19}
\end{figure}
\section{Best fits}
\label{sec:best_fits}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=3.5cm]{figures/2400.ps}
\includegraphics[width=3.5cm]{figures/2401.ps}
\includegraphics[width=3.5cm]{figures/2410.ps}
\includegraphics[width=3.5cm]{figures/2411.ps}\\
\includegraphics[width=3.5cm]{figures/2500.ps}
\includegraphics[width=3.5cm]{figures/2501.ps}
\includegraphics[width=3.5cm]{figures/2510.ps}
\includegraphics[width=3.5cm]{figures/2511.ps}\\
\includegraphics[width=3.5cm]{figures/2600.ps}
\includegraphics[width=3.5cm]{figures/2601.ps}
\includegraphics[width=3.5cm]{figures/2610.ps}
\includegraphics[width=3.5cm]{figures/2611.ps}\\
\includegraphics[width=3.5cm]{figures/2700.ps}
\includegraphics[width=3.5cm]{figures/2701.ps}
\includegraphics[width=3.5cm]{figures/2710.ps}
\includegraphics[width=3.5cm]{figures/2711.ps}\\
\includegraphics[width=3.5cm]{figures/2800.ps}
\includegraphics[width=3.5cm]{figures/2801.ps}
\includegraphics[width=3.5cm]{figures/2810.ps}
\includegraphics[width=3.5cm]{figures/2811.ps}
\end{center}
\caption{Non-linear fits for all the 30 and 44~GHz detectors with the parameters in Table~\protect\ref{tab:best_fit_parameters}.}
\end{figure}
\section{Introduction}
\label{sec:introduction}
\input{01_introduction}
\section{Theory}
\label{sec:theory}
\input{02_theory}
\section{Linearity in LFI receivers response}
\label{sec:linearity_receivers_response}
\input{03_linearity_receivers_response}
\section{Impact of output compression on ground calibration}
\label{sec:impact_ground_calibration}
\input{04_impact_ground_calibration}
\section{Impact of output compression in flight operations}
\label{sec:impact_flight_operations}
\input{05_impact_flight_operations}
\section{Conclusions}
\label{sec:conclusions}
\input{06_conclusions}
\acknowledgments
Planck is a project of the European Space Agency with instruments funded by ESA member states, and with special contributions from Denmark and NASA (USA). The Planck-LFI project is developed by an International Consortium led by Italy and involving Canada, Finland, Germany, Norway, Spain, Switzerland, UK, USA. The Italian contribution to Planck is supported by the Italian Space Agency (ASI). The work in this paper has been supported in the framework of the ASI-E2 phase of the Planck contract. The US Planck Project is supported by the NASA Science Mission Directorate. In Finland, the Planck LFI 70 GHz work was supported by the Finnish Funding Agency for Technology and Innovation (Tekes).
\clearpage
In this contribution we deal with vibrational localization
in Hamiltonian lattices {\sl without} any kind of disorder. We consider
a solution of a set of coupled ordinary differential equations (CODE) of
an underlying Hamiltonian system. The localization property
means that the solution is essentially zero (constant)
outside a certain finite volume of the system, while inside that
volume the solution has some oscillatory time dependence. The absence of
disorder implies the existence of certain
discrete (CODE) translational symmetries of all possible
solutions.
Usually vibrational localization is produced by considering
a lattice with a defect (diagonal or off-diagonal disorder) \cite{hb83}.
Another well-known possibility is to consider lattices with
more than one groundstate (global minima of the potential energy)
and static kink-like distortions of the lattice. The presence
of the kink-like static (stable) distortion of the lattice
breaks the discrete translational symmetry, as in the case of
a defect. This is the key ingredient for obtaining localized
vibrations (localized modes) centered around either the defect
or the kink-like distortion \cite{hb83}.
It is worthwhile to mention that the existence of kink-like
distortions requires the underlying Hamiltonian lattice
to be nonlinear.
However it has been known for a long time that special
partial differential equations admit breather solutions.
These breather solutions are exact localized vibrational
modes, which require neither disorder nor kinks.
In the case of the sine-Gordon (sG) equation the {\sl tangent}
of the breather solution
is given by a product of a space-dependent function and a periodic
time-dependent function \cite{sdk53}.
The sG system has a phonon band with
a nonzero lower phonon band edge (an upper band edge is
not present, since its finiteness would imply the
discreteness of the system). The fundamental frequency of
the breather lies in the phonon gap below the phonon band.
The representation of the inverse tangent of the periodic
time master function in a Fourier series contains contributions
from higher harmonics of the fundamental frequency. These higher
harmonics will certainly lie in the phonon band. The stability
of the breather solution in such a partial differential equation
then depends on some orthogonality properties between the
breather (higher harmonics) and the extended solutions (phonons)
\cite{ekns84}.
The fulfilment of all these
orthogonality relations seems to be connected to the
fact that the sG equation is integrable, i.e. admits an infinite
number of conservation laws.
Thus it appears logical that the sG breather
solutions survive only under nongeneric perturbations of
the underlying Hamiltonian field density \cite{bb93},\cite{jd93}.
Indeed, efforts to find breather solutions in partial differential
equations of the Klein-Gordon type (i.e. closely related to
the sG case) failed, e.g., for the $\Phi^4$ equation \cite{sk87}. The
$\Phi^4$ equation is not integrable.
Consider now a Hamiltonian lattice instead of a partial differential
equation. It will have at least one groundstate.
Generically the expansion of the potential energy around the
groundstate yields in lowest order a harmonic system and thus phonons.
However, the phonon band will now have a finite upper band edge.
Thus we can imagine creating a breather-like localized
state with its frequency either above the phonon band
or even in a nonzero gap below the phonon band. In the first
case there will never be resonances between any harmonics
of the time function governing the evolution of the discrete
breather and the phonons. In the second case we can again avoid
resonances by a proper choice of the fundamental frequency,
together with the requirement that the phonon band width be smaller
than the gap width. Hence we seem to lose the necessity
of satisfying an infinite number of orthogonality relations
as in the continuum case. That could mean in turn that
the existence of discrete breather solutions is not
restricted to the subset of nongeneric
Hamiltonian lattices.
Indeed, over the past six years there have been several reports
on the existence of discrete breathers in various one-dimensional
nonintegrable Hamiltonian lattices of the Fermi-Pasta-Ulam
type and the Klein-Gordon type
\cite{st88},\cite{jbp90},\cite{cp90},\cite{bp91},\cite{th91},\cite{st92},\cite{sps92},\cite{bks93}. Unsurprisingly one will find no rigorous
derivation of the discrete breather solution in those reports; it
would be better to say that numerical results and several approximate
analytical results strongly imply the existence of discrete
breathers in one-dimensional nonintegrable Hamiltonian lattices.
Recently a careful study of the above mentioned system classes
revealed a first understanding for the phenomenon of
discrete breathers in terms of phase space properties of the
underlying system \cite{fw3},\cite{fw2},\cite{fw45},\cite{fw6}.
We will call these discrete breathers {\sl Nonlinear Localized
Excitations} (NLEs).
It was shown that the
NLE solution can be reproduced with
very high accuracy by considering the dynamics of a reduced
problem.
In the reduced problem one keeps the few degrees
of freedom which are essentially involved in the NLE solution of the
extended system. It turned out that the NLE solutions correspond
to regular trajectories in the phase space of the reduced problem.
These regular trajectories belong to a certain compact subpart
of the phase space which can be called a regular island. The NLE regular
island is separated by a separatrix from other regular islands which
correspond to extended states in the full system.
Trajectories of the reduced problem on the separatrix
itself, as well as in a certain energy-dependent part of the phase space
surrounding it, are chaotic because the full system as well as
the reduced problem are nonintegrable. The whole emerging picture
we will call the local integrability scenario.
As it follows from that scenario, single-frequency NLEs correspond
to the excitation of one main degree of freedom which can be
characterized by its action $J_1$ and frequency
$\omega_1=\partial H / \partial J_1$ \cite{fw2}.
Many-frequency NLEs correspond
to the excitation of several secondary degrees of freedom which
can be characterized by their actions $J_m$, $m=2,3$ and frequencies
$\omega_m=\partial H / \partial J_m$ \cite{fw2}. Stability of the NLEs
in the infinite lattice environment can be studied with the
help of mappings. A certain movability separatrix can be defined
by $\omega_3=0$. This separatrix separates the phase space into
stationary NLEs (i.e. the center of energy oscillates around a
given mean position) and movable NLEs (i.e. the center of energy
can travel through the lattice) \cite{fw6}.
On the basis of the local integrability scenario it was
recently possible to {\sl prove} the generic existence of NLE solutions
in a one-dimensional nonlinear lattice with {\sl arbitrary} number of
degrees of freedom per unit cell and {\sl arbitrary} (still finite)
interaction range \cite{fw9}. Moreover for the first time a rigorous proof
was given
that periodic NLE solutions {\sl do exist} in a class of Fermi-Pasta-Ulam
lattices \cite{fw9}.
{}From the local integrability scenario it follows that there are
no principal hurdles in going over to higher lattice dimensions
(by that we refer to the topology of the interactions rather than
the number of degrees of freedom per unit cell). Indeed the NLEs
are described through {\sl local} properties of the phase space
of the lattice {\sl and} no topological requirements on the potential
energy are necessary to allow for NLE existence. This is very
different from the well-known topologically induced kink solutions,
for which the one-dimensional lattice is an analytical
requirement. Only under very special constraints can one
discuss kink-like solutions in lattices with higher dimensions.
Thus NLE existence appears to be a {\sl generic} property of
a nonlinear Hamiltonian lattice. Indeed, a few numerical studies on NLEs in
two-dimensional Fermi-Pasta-Ulam lattices showed that NLEs
exist there \cite{bkp90},\cite{ff93}.
The purpose of this contribution is to apply the successful
local integrability picture from one-dimensional lattices
\cite{fw3}-\cite{fw6}
to two-dimensional lattices. We will show that we indeed again
find NLE solutions (which are somewhat richer in their properties
compared to the one-dimensional case)
which are quantitatively describable with a reduced problem.
We will show this by comparing the phase space properties of
the full lattice and the reduced problem. We present a stability
analysis of the NLEs as well as a scheme to account for NLE
properties. Finally we present arguments about the statistical
relevance of the NLEs in the considered lattices at finite temperatures.
Thus we are able to show the correctness of our general approach
to vibrational localization in nonlinear Hamiltonian lattices and
of viewing NLEs as generic solutions in nonlinear discrete systems.
The paper is organized as follows. In section II we introduce the
model and briefly review the properties of NLEs in one dimension.
In section III examples of NLE solutions in two dimensions
are presented. Then we define the reduced problem for the
two-dimensional system,
its phase space structure is compared to the corresponding part of
the phase space of the whole lattice.
A stability analysis is described, and different
evolution scenarios of NLEs are explained.
Section IV is used for a discussion of the results.
\section{Model, solutions in one dimension}
We study the dynamics of lattices with one degree of freedom
per unit cell and nearest neighbour interaction. The general
Hamiltonian is given by
\begin{equation}
H = \sum_{\vec{R}} \frac{1}{2}P_{\vec{R}}^2 +
\sum_{\vec{R}} V(X_{\vec{R}}) + \frac{1}{2}
\sum_{\vec{R}} \sum_{nn}\Phi(X_{\vec{R}} - X_{\vec{R'}}) \;\;. \label{2-1}
\end{equation}
Here $P_{\vec{R}}$ and $X_{\vec{R}}$ are canonically conjugated
momentum and displacement of the particle in the unit cell
characterized by the $d$-dimensional lattice vector $\vec{R}$. The
$d$ components of $\vec{R}$ are multiples of the lattice constant
$a=1$. The interaction and on-site potentials $\Phi(z)$ and
$V(z)$ are defined through
\begin{eqnarray}
\Phi(z)=\sum_{n=2}^{\infty}\phi_n \frac{z^n}{n!} \;\;, \label{2-2} \\
V(z)=\sum_{n=2}^{\infty}v_n \frac{z^n}{n!}\;\;. \label{2-3}
\end{eqnarray}
The abbreviation $(nn)$ in \ref{2-1} means summation over all
nearest neighbour positions $\vec{R'}$
with respect to $\vec{R}$.
Hamilton's equations of motion for the model are
\begin{equation}
\dot{X}_{\vec{R}}=P_{\vec{R}} \;\;, \;\; \dot{P}_{\vec{R}} = -
\frac{\partial H}{\partial X_{\vec{R}}} \;\;. \label{2-4}
\end{equation}
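As a hedged numerical illustration of Eqs.~\ref{2-1}--\ref{2-4}, the sketch below integrates a small two-dimensional lattice with the particular (assumed, not the only one studied here) choice $V(z)=z^2/2+z^4/4$ and harmonic nearest-neighbour coupling $\Phi(z)=Cz^2/2$, using a symplectic velocity-Verlet scheme with periodic boundaries:

```python
import numpy as np

C, DT = 0.1, 0.01   # coupling phi_2 and time step (illustrative values)

def force(x):
    """-dH/dX_R for V(z) = z^2/2 + z^4/4 and Phi(z) = C z^2/2 in 2D."""
    nn = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
          np.roll(x, 1, 1) + np.roll(x, -1, 1))
    return -(x + x**3) - C * (4.0 * x - nn)

def energy(x, p):
    """Total energy of Eq. (2-1), counting each bond once."""
    kin = 0.5 * np.sum(p**2)
    onsite = np.sum(0.5 * x**2 + 0.25 * x**4)
    inter = 0.5 * C * (np.sum((x - np.roll(x, 1, 0))**2) +
                       np.sum((x - np.roll(x, 1, 1))**2))
    return kin + onsite + inter

def step(x, p):
    """One velocity-Verlet step of Hamilton's equations (2-4)."""
    p = p + 0.5 * DT * force(x)
    x = x + DT * p
    p = p + 0.5 * DT * force(x)
    return x, p

# Single-site excitation: the crudest NLE-like initial condition.
x = np.zeros((16, 16)); p = np.zeros((16, 16)); x[8, 8] = 2.0
e0 = energy(x, p)
for _ in range(5000):
    x, p = step(x, p)
```

Conservation of the Hamiltonian \ref{2-1} over the run is a basic sanity check on the integrator before any localization properties are read off.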
Thus we exclude from our consideration cases with i) more than one
degree of freedom per unit cell and ii) a larger interaction range.
The reasons for this are pragmatic - at the present stage it would be
too hard to present a careful study for the excluded
cases. We mention the numerical investigations of one-dimensional
chains with two degrees of freedom per unit cell in
\cite{ma92},\cite{ats93} and
some qualitative thoughts in \cite{yak93} about long-range interactions,
where no indications of a change in the NLE existence properties
are found.
Let us briefly review the results for NLE properties in the one-dimensional
case. They are reported for two major subclasses of \ref{2-1}-\ref{2-3} -
the Klein-Gordon lattices \cite{fw3},\cite{fw2},\cite{fw45}
and the Fermi-Pasta-Ulam lattices \cite{fw6}. In the
case of Klein-Gordon lattices one drops the nonlinearities in the
interaction $\phi_2 = C \neq 0$, $\phi_{n > 2} = 0$ and allows for
nonlinearities to appear in the onsite potentials. Examples are the
$\Phi^3$ model ($V(z)=1/2 z^2 + 1/3 z^3$), the $\Phi^4$ model
($V(z) = 1/4 (z^2 - 1)^2$), and the sine-Gordon model ($V(z)= \cos(z)$).
In the case of Fermi-Pasta-Ulam lattices one drops the on-site potential
$V(z)=0$ and allows for nonlinearities to appear in the interaction $\Phi (z)$.
In a convenient notation we will refer to them as FPU$klm$ models, where
$k,l,m$ are positive integers indicating the corresponding nonvanishing
power coefficients in \ref{2-2}. The Klein-Gordon systems have
a nonzero lower phonon band edge frequency (if $v_2 \neq 0$)
whereas the Fermi-Pasta-Ulam systems have a zero lower phonon band
edge frequency. Consequently the FPU models exhibit total momentum
conservation and show up with a zero frequency Goldstone mode -
in contrast to the Klein-Gordon lattices. Stable periodic (in time) NLEs
can be created in nearly all cited systems with frequencies outside
the phonon band (below or above for the Klein-Gordon systems, above
only for the FPU systems). The lowest NLE energy is nonzero -
i.e. there is a gap in the density of states of NLEs for energies
lower than the threshold energy. There can be gaps at higher energies too,
depending on resonance conditions between the NLE frequency and
the phonon frequencies. To allow for stable NLEs with frequencies
below the phonon band for Klein-Gordon systems one has to require
that the phonon band width is smaller than the phonon gap width.
One can understand the existence of a gap in the NLE density of states
by an approximate method of accounting for the NLE frequency. It consists
of constructing an effective nonlinear one-particle potential. The energy
of a particle moving in this effective potential is the NLE energy,
and the fundamental frequency of its oscillation is the NLE frequency.
For small amplitude oscillations (small energies) the frequency
will lie always inside the phonon band of the corresponding
lattice. Increasing the amplitude (energy) will change the frequency
because of the nonlinearity. Depending on the type of the
effective potential the frequency can decrease or increase. At a certain
value of the amplitude (energy) the frequency leaves the phonon band,
thus the NLE becomes a stable excitation. This is also a very simple
guide to the prediction of the existence/nonexistence of NLEs in
nonlinear lattices. There will be no stable NLEs allowed to
exist in systems with e.g. a zero lower phonon band edge and
an effective potential of the defocusing type, i.e. where the
frequency always decays with increasing amplitude (energy).
Instructive examples are the Toda lattice and the FPU23 lattice.
Because of the localization character of the NLE solutions essentially
only a finite number of particles are involved in the motion. Thus
it is possible to define a {\sl reduced problem} \cite{fw2}. It consists
of defining a finite volume around the NLE center. All particles
inside the finite volume are involved in the NLE solution, particles
outside essentially should not be involved. There is an uncertainty
in the definition of the finite volume. It comes from the
fact that the NLE solutions are not compact, i.e. strictly speaking
they incorporate an infinite number of particles (degrees of freedom)
\cite{fw7}.
But a sharp exponential decay of the amplitudes starting from the
center of the NLE provides a good finite volume choice in many
cases. Since the finite volume (reduced problem) consists of
a finite number of degrees of freedom, it becomes easier to
analyze its phase space properties. As it was shown in \cite{fw2},
there exist regular islands in the phase space of the reduced problem.
These regular islands are separated by stochastic layers (destroyed
regular motions on and near separatrices) from each other. The motion
in each of the regular islands appears to be confined to a torus
of corresponding dimension. Certain islands can be labeled NLE islands.
Periodic orbits (elliptic fixed points in corresponding Poincare mappings)
from these NLE islands
appear to be (nearly) exactly the periodic NLE solutions from
the full system. The surprise came when it was shown that the
quasiperiodic orbits surrounding the periodic one correspond
to many-frequency NLEs in the full system \cite{fw3},\cite{fw2}.
Although a stability analysis
shows that these many frequency NLEs are strictly speaking unstable
(i.e. they can not exist for infinite times) \cite{fw7} it turned out that
their
energy radiation rate can be very weak, such that the lifetimes
of these many frequency NLEs can become several orders of magnitude
larger than the typical internal periods. In numerical experiments
more than five orders of magnitude were easily found \cite{fw2}.
The lifetime of the many frequency NLEs will increase to infinity
if one chooses quasiperiodic orbits which are closer and closer to
the periodic orbit (the periodic NLE).
Other regular islands did not yield NLEs in the full system. The same
can be said about the orbits in the stochastic layer. The reason
for that is the resonance of the fundamental frequencies in those
regular islands with the phonon frequencies. Motion in the
stochastic layer is chaotic, thus frequency spectra are continuous
rather than discrete. Consequently there is generically always overlap
with the phonon frequencies and thus strong energy loss of the
finite volume. We also mention interesting long-time evolution
scenarios for many frequency NLEs as described in \cite{fw2}.
The clear correspondence between regular islands in the reduced problem
and NLE solutions in the full system allows for a deep understanding
of the NLE phenomenon on one side. On the other side it opens
possibilities to apply the apparatus of nonlinear dynamics to
explore NLE properties. That was done in \cite{fw6} to study
the movability properties of NLEs.
In the following we will apply the same procedure to characterize
NLE solutions in two-dimensional systems. The success of our
study will have several impacts. First it will be a proof of
the conjecture that the NLE existence is not a specific one-dimensional
solution as e.g. the kinks. This conjecture was formulated
on the basis of the local integrability picture \cite{fw2} as described above.
Thus we strengthen the whole local integrability picture. Secondly,
establishing NLE solutions in two-dimensional lattices will
undoubtedly increase the interest in the overall phenomenon
because of the variety of physical applications in contrast to
the one-dimensional case. Moreover, by proving the conjecture
about the unimportance of the dimensionality of the lattice
with respect to the NLE occurrence, three-dimensional
applications become of potential interest.
\section{The two-dimensional case}
\subsection{Model specification, numerical details}
As an example we choose the $\phi^4$ lattice in two dimensions, i.e.
$V(z)=1/4 (z^2-1)^2$, $\Phi(z)=1/2 C z^2$, $\vec{R}= (l,m)$
with $l,m = 0,\pm 1, \pm 2, ...$ (cf. \ref{2-1} - \ref{2-3}).
The two groundstates of the system are given by
$X_{\vec{R}}= \pm 1$. The model has a phase transition
at a finite temperature $T_c$ which is
of no further concern here since we are studying properties
of single excitations above the groundstate (i.e. because
of the localized character of the solutions at effectively
zero temperature). The parameter $C$ specifies the 'discreteness'
of the system, i.e. the ratio of the phonon band width to
the phonon gap width. Since we are interested in
vibrations localized on a few particles, it is reasonable to
compare the onsite potential energy ($V(z)$) to the spring
energy ($\Phi(z)$) of a given particle when it is displaced
relative to its nearest neighbours. As it was shown in \cite{fw2}
besides the interaction parameter $C$ the energy (per particle)
becomes a second significant parameter in order to choose a reasonable
ratio between the two components of the potential energy.
One can easily take over the results from \cite{fw2} if one rescales
the parameter $C$ there by multiplying it with 2 (because in the
cited one-dimensional case the coordination number was 2 compared
to 4 in the two-dimensional case). Thus a choice of $C=0.05$ turns
out to be a case of intermediate interaction for not too large
energies, i.e. the on-site potential energy is of the same order
as the interaction potential energy.
The dispersion relation for small amplitude phonons (small
amplitude oscillations around either groundstate) is given
by
\begin{equation}
\omega^2_{k_x,k_y}= 2 + 4 C \left( \sin ^2(\frac{\pi k_x}{N})
+ \sin ^2 ( \frac{\pi k_y}{N}) \right) \label{phonon}
\end{equation}
where $N$ is the length of one side of the square lattice,
and $k_x$ and $k_y$ are two integers under the condition
$0 \leq k_x,k_y \leq (N-1)$.
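As a quick numerical sanity check (a sketch, not part of the paper; the values $N=40$ and $C=0.05$ follow the text), the squared phonon frequencies of \ref{phonon} fill the band $[2,\, 2+8C]$, so the gap (in $\omega^2$) is 2 and the band width is $8C$:

```python
import math

def omega_sq(kx, ky, N, C):
    """Squared phonon frequency of the dispersion relation for mode (kx, ky)."""
    return 2 + 4 * C * (math.sin(math.pi * kx / N) ** 2
                        + math.sin(math.pi * ky / N) ** 2)

N, C = 40, 0.05
band = [omega_sq(kx, ky, N, C) for kx in range(N) for ky in range(N)]

# Lower band edge 2 (nonzero phonon gap), upper band edge 2 + 8C.
band_bottom, band_top = min(band), max(band)
```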
In all numerical simulations
a Runge-Kutta method of 5th order with time step $\Delta t = 0.01$
was used. We compared our results to an independent code
where a Verlet algorithm with $\Delta t = 0.005$ was used and observed
no differences.
In the studies of one-dimensional systems
the simulation of an infinite system was replaced
with a finite chain of
such a length that the fastest phonons could not reach the boundary and
return to the finite volume of the NLE excitation during the
simulation time. In two dimensions such a method would mean a quadratically
larger waste of computing time and force us onto parallel computers. However
there is another way to avoid recurrence of phonons which are radiated
from the NLE - to switch on a (reflectionless) friction outside a
given volume such that the radiated phonons will be captured
and eliminated.
The condition of reflectionlessness implies a gradual increase
of the friction with growing distance or in other words a large number
of collisions between phonons and friction applied lattice sites.
The friction is added to the right-hand side of \ref{2-4}
in the form $-\gamma_{\vec{R}} P_{\vec{R}}$.
In the case of a full system we work with a friction-free volume
of size $20 \times 20$ and an additional friction-applied
boundary of thickness $10$ particle distances on each side.
Thus the overall number of particles is $40 \times 40 = 1600$.
The friction is linearly increased
in the friction-applied walls from zero up to a maximum value of $\gamma_0$
at the boundary layer.
At the boundaries periodic boundary conditions are applied.
To proceed we have to optimize the maximum friction coefficient,
since for $\gamma_0=0$ or $\gamma_0=\infty$ the phonons are
completely transmitted or reflected respectively.
We simulate the linearized $\Phi^4$
lattice ($V(z)=z^2$, $\Phi(z)=1/2 C z^2$)
with an initial condition, where the central particle
is displaced by $\Delta X = 1$ from its groundstate position,
all other particles are held at their groundstate positions
and the velocities are zero. The corresponding initial energy
is $E=1.1$. We let the system evolve, and measure the energy
stored in the system $E(t)$ and the energy stored in the central
particle and its four neighbours $E_5(t)$ for $t=2000$. The result
is shown as a function of $\gamma_0$ in Fig.1. We find that for the chosen
geometry the optimum value for the maximum friction coefficient is
$\gamma_0 \approx 0.005$. The full time dependence of the
two energies $E(t)$ and $E_5(t)$ using $\gamma_0=0.005$ are
shown in Fig.2. We see that within waiting times of $t \leq 2000$
the central particle and its four neighbours lose more than
99.9\% of their initial energy. In the following we will use
the thus chosen value of the maximum friction coefficient
$\gamma_0=0.005$ in all described simulations.
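The linearly increasing friction profile described above can be sketched as follows. The helper below is hypothetical (the paper does not give an explicit formula), but it follows the stated geometry: a friction-free core of half-width 10 sites, walls of thickness 10 sites, and a linear increase up to $\gamma_0$ at the boundary layer:

```python
def gamma(l, m, free_half=10, wall=10, gamma0=0.005):
    """Friction coefficient at site (l, m), measured from the lattice center.

    Zero inside the friction-free volume, rising linearly through the
    wall of thickness `wall` up to gamma0 at the boundary layer.
    """
    # Distance (in particle spacings) beyond the friction-free volume.
    depth = max(abs(l), abs(m)) - free_half
    if depth <= 0:
        return 0.0
    return gamma0 * min(depth, wall) / wall

# The damping term added to the right-hand side of the equation of
# motion at site (l, m) is then -gamma(l, m) * P[l, m].
```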
\subsection{NLE solutions}
Let us show a stable NLE solution. For that we prepare the following
initial condition: central particle at groundstate position,
nearest neighbours displaced to $X_{(nn)}=-1.01163$, velocity
of nearest neighbours $P_{(nn)}=0.0225$, the velocity of the
central particle is adjusted to the initial energy $E=0.3$,
all other particles are at their groundstate positions with
zero velocities.
To characterize the localization properties we use
the local discrete energy density
\begin{equation}
e_{\vec{R}} = \frac{1}{2}P^2_{\vec{R}} +
V(X_{\vec{R}}) + \frac{1}{2}\sum_{nn}\Phi(X_{\vec{R}} - X_{\vec{R'}})
\;\;\;. \label{3-1}
\end{equation}
Let us define the energy stored on five particles
(the central particle $\vec{R}=(0,0)$ and its four neighbours)
\begin{equation}
e_{5}= \sum_{\vec{R}'} e_{\vec{R'}} \;\;, \;\;
|\vec{R'}| \leq 1 \;\;. \label{3-2}
\end{equation}
In the insert in Fig.3 we show $e_{5}$ as a function of time for the
above given initial condition. Clearly we observe localization
of vibrational energy for extremely long times. One has to keep in mind
that the typical oscillation times are of the order
of $t_0 = 4$. The stability property of the observed NLE is very similar
to examples from one-dimensional cases. The energy distribution
in the NLE solution after $t= 3000$
is shown in Fig.3. Essentially five particles are involved in
the NLE motion - a central particle and its four nearest neighbours.
Since we used symmetrical initial conditions essentially two degrees
of freedom are excited. To describe the NLE solution we construct
a {\sl reduced problem} in analogy to the one-dimensional problem.
The reduced problem consists of the five particles which are
essentially involved in the NLE motion. The rest of the lattice
is held at its ground state position. Together with the consideration
of symmetric initial conditions we are left with the following
two-degree of freedom problem:
\begin{eqnarray}
\ddot{Q}= Q - Q^3 + 4C (q-Q) \;\;, \label{3-3} \\
\ddot{q} = q - q^3 + C(Q-q) - 3C(1+q) \;\;. \label{3-4}
\end{eqnarray}
Here $Q=X_{(0,0)}$ and $q=X_{(\pm 1, 0)}=X_{(0,\pm 1)}$ are the coordinates
of the central and nearest neighbour particles respectively.
In the one-dimensional case it was shown that certain solutions
of the reduced problem correspond to NLE solutions in the
full system \cite{fw2},\cite{fw45}.
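One can check that \ref{3-3}-\ref{3-4} conserve the energy of the five-particle cluster, in which the single neighbour coordinate $q$ effectively carries mass 4 (four identical neighbours, each coupled to three outer particles held at $-1$). A minimal integration sketch (the initial condition and step size here are chosen for illustration only) confirms this conservation:

```python
C = 0.05

def V(z):
    """On-site potential of the Phi^4 lattice."""
    return 0.25 * (z * z - 1) ** 2

def accel(Q, q):
    """Right-hand sides of the reduced two-degree-of-freedom problem."""
    ddQ = Q - Q ** 3 + 4 * C * (q - Q)
    ddq = q - q ** 3 + C * (Q - q) - 3 * C * (1 + q)
    return ddQ, ddq

def energy(Q, P, q, p):
    """Conserved energy; the neighbour coordinate q carries mass 4."""
    return (0.5 * P * P + V(Q)
            + 4 * (0.5 * p * p + V(q))
            + 4 * 0.5 * C * (Q - q) ** 2
            + 12 * 0.5 * C * (q + 1) ** 2)

def rk4_step(Q, P, q, p, dt):
    """One classical Runge-Kutta step for the state (Q, P, q, p)."""
    def deriv(s):
        Q_, P_, q_, p_ = s
        aQ, aq = accel(Q_, q_)
        return (P_, aQ, p_, aq)
    s = (Q, P, q, p)
    k1 = deriv(s)
    k2 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + e)
                 for x, a, b, c, e in zip(s, k1, k2, k3, k4))

state = (0.0, 0.5, -1.01, 0.0)   # illustrative NLE-like initial condition
E0 = energy(*state)
for _ in range(10000):           # integrate up to t = 100 with dt = 0.01
    state = rk4_step(*state, 0.01)
E_drift = abs(energy(*state) - E0)
```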
\subsection{The reduced problem}
Before we show that the same correspondence principle works
for the two-dimensional example in the present work,
we want to characterize the main features of the system
of equations \ref{3-3}-\ref{3-4}.
In Figs.4(a-d) we show Poincare mappings for the reduced problem
for energies $E=0.2/0.5/2.5/5$. As can be seen there the reduced
problem is not integrable since we find stochastic motion.
Thus the energy is the only integral of motion. However
we find islands of regular motion (regular islands) which are
separated from each other by stochastic layers. The topology of the
stochastic
layers indicates the topology of destroyed separatrices.
For small energies $E=0.2$ (Fig.4(a)) the thickness of the stochastic
layer is too small to be detected at all (in the presented
resolution) so that we find two regular islands which we label
with the numbers 1 and 2. The elliptic
fixed points of each regular island correspond to time-periodic
solutions of the reduced problem. Increasing the energy we
find a rather abrupt increase of the thickness of the stochastic
layer for $0.35 < E < 0.4$. Thus at $E=0.5$ (Fig.4(b)) we are
faced with effects of period doubling (increasing number of
regular islands) and a decrease of the size of the islands.
For $E=2.5$ (Fig.4(c)) nearly the whole available phase space is
filled with chaotic trajectories. However for higher energies
(here $E=5$ in Fig.4(d)) the size of the regular islands increases
again. In the limit $E \rightarrow \infty$ the reduced problem
becomes infinitely close to an integrable system of two
noninteracting quartic oscillators.
A proper characterization of the regular islands is
the frequency of their corresponding elliptic fixed points.
In Fig.5 the fixed point frequencies of the main regular islands
are shown as a function of energy. For small energies the
frequencies of the fixed points of regular islands 1 and 2
become the eigenfrequencies of the linearized problem (around
the groundstate): $\omega^2= 2+2C$ for island 1 and
$\omega^2=2+6C$ for island 2, both of the frequencies are
in the phonon band of the infinite system \ref{phonon}.
{}From Fig.5 it follows that there is a nonzero lower energy
threshold above which the fixed point frequency from island 1
becomes nonresonant with the phonon band.
For reasons discussed below we concentrate on island 1.
We denote its fixed point frequency by $\omega_1$ (here the
index refers not to the island number but to the degree of freedom
excited in the island). Then several statements can be made
with respect to the secondary degrees of freedom which can
be excited (cf. torus intersection structure around fixed
point in island 1). Considering an infinitesimally small excitation
of the second (symmetric) degree of freedom characterized by its
frequency $\omega_2$, one can show that in the limit
of zero energy $\omega_2^2 = 2+6C$ (cf. Appendix).
If one lifts the symmetry of the initial conditions in the
reduced problem one has to consider the generalized reduced
problem
\begin{eqnarray}
\ddot{Q}= Q - Q^3 + \sum_{i=1}^4 C(q_i - Q) \;\;, \label{3-5} \\
\ddot{q}_i = q_i - q_i^3 - 3C(1+q_i) + C(Q-q_i) \;\;. \label{3-6}
\end{eqnarray}
Here $Q = X_{(0,0)}$ as in \ref{3-3}-\ref{3-4} and the four coordinates
$q_i$, $i=1,2,3,4$ denote the coordinates of the four nearest
neighbours of the central particle. Since we deal with five
degrees of freedom now we have to expect five (instead of two)
fundamental frequencies. System \ref{3-5}-\ref{3-6} has rotational
symmetry of order 4. It follows (cf. Appendix) that the three
new frequencies $\omega_3$, $\omega_4$, $\omega_5$ are equivalent
to each other in the limit of infinitely small asymmetric perturbations
of the fixed point periodic solution. In the limit of zero energy
it follows that $\omega_3^2=\omega_4^2=\omega_5^2= 2 + 4C$. Increasing the
energy from its lowest value leads to a decrease of all five
frequencies. The inequality $\omega_1 < \omega_{3,4,5} < \omega_2$
(which is true only for low energy values)
determines the sequence of the $\omega_i$ crossings of the
lower phonon band edge.
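These zero-energy limits can be verified by linearizing \ref{3-5}-\ref{3-6} around the groundstate $Q=q_i=-1$, which gives $\ddot u = -(2+4C)u + C\sum_i v_i$ and $\ddot v_i = -(2+4C)v_i + Cu$ for the deviations. A small sketch confirms the eigenfrequencies stated above (the explicit eigenvectors are our own computation, not from the paper):

```python
C = 0.05

def apply_M(vec):
    """Apply the omega^2 matrix of the linearized problem to (u, v1..v4)."""
    u, *v = vec
    out = [(2 + 4 * C) * u - C * sum(v)]
    out += [(2 + 4 * C) * vi - C * u for vi in v]
    return out

def is_eigen(vec, lam, tol=1e-12):
    return all(abs(a - lam * b) < tol for a, b in zip(apply_M(vec), vec))

# Symmetric modes: omega^2 = 2+2C (island 1) and 2+6C (island 2);
# antisymmetric, triply degenerate modes: omega^2 = 2+4C.
w1_sq, w2_sq, w345_sq = 2 + 2 * C, 2 + 6 * C, 2 + 4 * C

checks = (is_eigen([2, 1, 1, 1, 1], w1_sq)
          and is_eigen([-2, 1, 1, 1, 1], w2_sq)
          and is_eigen([0, 1, -1, 0, 0], w345_sq))
```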
\subsection{The correspondence principle}
Let us show the connection between the reduced problem and
the NLE solutions of the full system. For that we plot
in Fig.5 the frequencies of (nearly) periodic NLEs as a function
of energy. We observe very good agreement with the data of
the fixed point frequency $\omega_1$ from island 1 of the reduced
problem. In fact one can check that the whole time-dependent periodic
NLE solution of the full system is very close to the corresponding
fixed point periodic solution from island 1 of the reduced problem.
Since the frequency $\omega_2$ of the symmetric perturbation
of the fixed point periodic NLE solution according to the
results from the reduced problem is in resonance with phonon
frequencies up to energy values of 1, we increase the energy
to $E=5$ (cf. Fig.4(d)) and perform a Poincare mapping for
the NLE solutions of the {\sl full system}. The result is shown in Fig.6
{together with the corresponding data from the reduced problem}
(cf. Fig.4(d)). The result is amazing - the torus intersections
are practically identical for the two frequency NLE solution
from the full system and the corresponding regular trajectories
from the regular island of the reduced problem. If one chooses
an initial condition in the full system that corresponds to
the chaotic trajectory in the reduced problem (Fig.4(b))
then we find a quick decay of the energy excitation in the
full system as shown in Fig.7.
If the energy is low enough the frequency $\omega_2$ of the
symmetric perturbation of the periodic NLE will come into
the phonon band. Then we expect a loss of the energy part
stored in the corresponding second degree of freedom, leaving
the main degree of freedom essentially unaffected. To show
that we simply perform a Poincare mapping for the mentioned
case. The result is shown in Fig.8. Indeed instead of an intersection
line with a torus we find a spiral-like relaxation of the NLE
solution onto the periodic fixed point NLE. The fixed point periodic
NLE acts like a limit cycle, although the whole system is
conservative.
\subsection{Effective potential}
Because of the smallness of the nearest neighbours amplitudes
compared to the amplitude of the central particle in a NLE solution,
we can try to account approximately for the motion of the
central particle assuming the nearest neighbours are at rest at
their groundstate positions. Then the central particle would move
in an effective potential
\begin{equation}
V_{eff}(z)= V(z)+2C(z+1)^2 \;\;. \label{3-7}
\end{equation}
The motion in this potential is periodic with an energy-dependent
(or amplitude-dependent) period $T_1=2\pi / \omega_1$. Since most
of the energy in the NLE solution is concentrated on the central
particle and its binding energy to the nearest neighbours,
it is reasonable to compare the results for the energy dependence
of $\omega_1$ for \ref{3-7} with the numerical result as given
in Fig.5. As can be seen in Fig.5, the agreement between the
results from the effective potential, the reduced problem and
the full system is very good. Thus we have a proper method for
predicting the behaviour of the main NLE frequency $\omega_1$
as a function of energy. This method is directly taken over from
the known results in the one-dimensional case.
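The period in the effective potential \ref{3-7} is $T_1 = 2\int_{z_-}^{z_+} dz/\sqrt{2(E-V_{eff}(z))}$ between the turning points $z_\pm$. A numerical sketch (the sine substitution removing the turning-point singularity is a standard quadrature trick, not taken from the paper) reproduces the small-energy limit $\omega_1^2 \to 2+4C$ and the increase of $\omega_1$ beyond the phonon band top with growing energy:

```python
import math

C = 0.05

def V_eff(z):
    """Effective one-particle potential with V(z) = 1/4 (z^2-1)^2."""
    return 0.25 * (z * z - 1) ** 2 + 2 * C * (z + 1) ** 2

def turning_point(E, lo, hi):
    """Bisection for V_eff(z) = E on a bracket with a single crossing."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (V_eff(lo) - E) * (V_eff(mid) - E) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def omega1(E, n=4000):
    """Oscillation frequency at energy E in V_eff.

    Midpoint quadrature with z = z_mid + (dz/2) sin(theta), which removes
    the inverse-square-root singularity at the turning points."""
    z_minus = turning_point(E, -3.0, -1.0)
    z_plus = turning_point(E, -1.0, 3.0)
    z_mid, half = 0.5 * (z_minus + z_plus), 0.5 * (z_plus - z_minus)
    T = 0.0
    for i in range(n):
        th = -math.pi / 2 + math.pi * (i + 0.5) / n
        z = z_mid + half * math.sin(th)
        T += half * math.cos(th) / math.sqrt(2 * (E - V_eff(z)))
    T *= 2 * math.pi / n
    return 2 * math.pi / T

w_small = omega1(1e-4)   # should approach sqrt(2+4C)
w_large = omega1(5.0)    # hardening potential: frequency grows with E
```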
\subsection{Stability properties}
As it was shown in \cite{fw3},\cite{fw2} for the one-dimensional case,
it is possible to carry out a stability analysis for
periodic NLEs (i.e. the fixed point solutions) with respect
to extended phonon-like perturbations. In fact the
procedure for the stability analysis in the two-dimensional case
is exactly the same. Thus we will highlight here only the
necessary parts of the steps one has to follow.
We assume we know an exact periodic NLE solution $X_{\vec{R}}(t)$.
Then we consider a slightly perturbed trajectory $\tilde{X}_{\vec{R}}
= X_{\vec{R}}(t) + \Delta_{\vec{R}}(t)$. Since the assumed
NLE solution is localized, it becomes infinitely small
for large distances from the NLE-center. This circumstance does
not pose a serious problem for the definition of the expression
'slightly perturbed'. One can just consider small
amplitude oscillations (phonons) around the groundstate of our system.
Then we have a well-defined small parameter determining the
weak nonlinear corrections to the linear equations. We take over
this definition of smallness to our problem. In the center of the NLE
the perturbation will thus be small compared to the NLE-contribution.
Far away from the center the perturbation can even become large
compared to the NLE-contribution, but it will be still small enough
to ensure the linearized equations work well. Then we can consider
small perturbations of the NLE solution which are extended.
In the next step we insert the perturbed ansatz into the
lattice equations of motion. Using the fact that the unperturbed
part is a solution of the equations of motion and linearizing
the equations with respect to the perturbation yields a set
of coupled differential equations with time-dependent (periodic)
coefficients. In analogy to \cite{fw2} we can define a map, the stability
of which
is equivalent to the non-growth of the small perturbation
of the NLE solution. The sufficient condition for stability
is that no multiple of half the NLE frequency
equals a phonon frequency \ref{phonon}:
\begin{equation}
\frac{\omega_{k_x,k_y}}{\omega_1} \neq \frac{n}{2}\;\;,
\;\; n = 0,1,2,... \;\;. \label{3-8}
\end{equation}
As we see this result explains the existence of an energy threshold
(gap in the density of NLE states) for the NLE solutions.
Because the NLE frequencies according to the reduced problem
will always lie in the phonon band for small enough NLE energies,
the low energy NLEs are unstable against smallest perturbations. In the
one-dimensional case this statement was tested in the full system
using an entropy-like variable measuring the degree of
energy localization \cite{fw2}. On approaching the energy threshold (predicted
by the results of the reduced problem together with the stability
analysis) from above the entropy drastically increases at the predicted
threshold value. It is still possible that low-energy NLEs exist
with a very small degree of localization and
with frequencies very close to the phonon band edge, but outside the
band itself. In the two-dimensional case considered here
we also observe a very sharp transition in the degree of
localization at the predicted energy threshold value. In fact it
becomes impossible to find a NLE solution with energies below
the threshold value. That indicates the tiny phase space part
at low energies which might be still occupied with weakly localized
states.
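The resonance condition \ref{3-8} is easy to scan numerically (a sketch using the band edges of \ref{phonon} with $C=0.05$): a frequency inside the phonon band always violates the condition at $n=2$, while a frequency just above the band top avoids all resonances.

```python
import math

C = 0.05
band_lo, band_hi = math.sqrt(2), math.sqrt(2 + 8 * C)

def violates(omega1, n_max=20):
    """Multiples n for which n/2 * omega1 falls inside the phonon band,
    i.e. where the sufficient stability condition fails."""
    return [n for n in range(1, n_max + 1)
            if band_lo <= n * omega1 / 2 <= band_hi]

# An NLE frequency inside the band is resonant at n = 2, hence no
# stable low-energy NLE: the origin of the energy threshold.
in_band = violates(0.5 * (band_lo + band_hi))
# A frequency just above the band top avoids all resonances.
above_band = violates(band_hi * 1.05)
```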
A more subtle problem is the {\sl internal} stability of periodic
NLEs. As it is known for several one-dimensional systems,
periodic NLE solutions can become internally unstable, i.e. a
weak symmetry-breaking perturbation of the periodic NLE will
transform the NLE solution into other existing periodic NLEs
of different parity or even into NLEs moving through the lattice
\cite{cp90},\cite{sps92},\cite{cku93}.
Currently it is unclear how to classify and find the different
possible periodic NLE solutions on a two-dimensional lattice. First
efforts to do so are reported in \cite{ff93}. We wish to emphasize that
the periodic NLE solutions reported in this paper are certainly not
the only ones allowed to exist in the underlying lattice. Thus we
can only make statements about the internal stability of the NLE solutions
considered in the present work. Using the results of the linearization
of the equations of motion around the periodic NLE solutions (Appendix)
we can trace the values of the squared secondary eigenfrequencies
$\omega_2^2,\omega_{3,4,5}^2$ and can report here that throughout
the considered cases all squared eigenfrequencies are positive. Consequently
the periodic NLE solutions discussed here are internally stable.
The results of the {\sl stability} analysis drawn above do not allow
us to conclude about the {\sl existence} of NLE solutions in a strict
general sense. As it was shown in \cite{fw7} for the one-dimensional
case, periodic NLEs do not exist if any multiple of
the NLE frequency resonates with phonon frequencies (this condition
corresponds to the cases of even integers $n$ in \ref{3-8}).
Also all multiple frequency NLEs are strictly speaking unstable,
since it is always possible to find combinations
of multiples of two or more frequencies (whose ratio is irrational)
resonating with phonon frequencies. It appears currently unclear
how to take over the methods used in \cite{fw7} for the one-dimensional
case, to the two-dimensional case in order to obtain existence
criteria for NLE solutions. However it can be expected that the
methodological problems do not alter the results obtained in the
one-dimensional case.
As it was shown in \cite{tks88} the decay of the periodic
NLE solutions far away from the NLE center can be well described
by a Green's function method, which yields exponential decay in
the amplitudes.
\section{Discussion}
In the present work we have shown, that it is possible
to take over the results on the existence and properties
of nonlinear localized excitations in nonlinear lattices
from lattice dimension one to lattice dimension two.
Thus several goals were achieved: i) the existence of
NLEs in two-dimensional lattices is verified;
ii) the theory developed for NLEs in one-dimensional
systems appears to be valid independently
of the lattice dimension; iii) the power of the theoretical
framework to predict the existence of NLE solutions
in several one-dimensional lattices has been extended by
its correct prediction of the NLE existence in higher dimensional
lattices. Thus the NLE existence in three-dimensional lattices
can be considered as highly likely. There is at the present
no single reason supporting the nonexistence of NLEs due to
lattice dimensions.
Besides the novel analysis of the properties of the secondary
frequencies (cf. Appendix) the present work has also shown,
that the resonating of secondary frequencies with phonon frequencies
does not imply a shrinkage of the phase space part of the system
corresponding to NLE solutions. Indeed as long as the main frequency
$\omega_1$ stays outside the band, the choice of an initial condition
with excited secondary degrees of freedom will
still yield a NLE.
If the secondary frequencies are outside the phonon
band as well, the solution will be a (very weakly) decaying
multiple frequency NLE. If the secondary frequencies resonate
with the phonon band, the corresponding energy part stored in
the NLE is radiated away and the NLE 'collapses' onto its
periodic fixed point solution. This attractor-like behaviour
ensures, that there is still a finite phase space volume
around the fixed point periodic solution which corresponds
to NLEs even after extremely long waiting times.
Thus we have strong evidence for the statistical relevance
of NLE solutions in corresponding lattices at finite temperatures.
Indeed the only case when NLEs can become statistically
unimportant is when the main frequency $\omega_1$ resonates
with the phonon band.
We can conclude from our results on the energy radiation
of perturbed periodic NLEs and from the mentioned existence
proofs for periodic NLEs, that the results on radiation processes
accounted for in \cite{bam94} are wrong. In order to get
the leading order radiation of perturbed periodic NLEs one has to
linearize the phase space flow of the system {\sl around}
the unperturbed periodic NLE solution, and {\sl not}
around the groundstate of the system as it was done in \cite{bam94}.
Let us finally address the question: what are the physical
applications where one can expect NLEs to exist?
In the mathematical sense the answer is: when the main
frequency $\omega_1$ can be 'pulled out' of the phonon
band with increasing energy. To check the behaviour of the
main frequency we have to construct the effective potential.
Consider e.g. a monoatomic crystal. The pair potential of interaction
is usually
an asymmetric potential around the equilibrium position.
The effective potential can be constructed exactly as described
in the previous section. If the repelling part of the pair potential
grows nonlinearly enough, then there can be oscillations of
a particle in the effective potential with frequencies above
the phonon band. However one has also to check the minimum energy
(energy threshold) required for the NLE existence. As studies
for a particular class of crystals have shown, NLEs can not be excited
there thermally, because the melting point is too low. However
it could be still possible to excite the NLEs locally nonthermally
\cite{bs91}.
If we consider crystals with many atoms per unit
cell, we can expect at least the existence of phonon gaps
between acoustic and different optical zones. It is
a well-known approach to describe structural phase transitions
with the use of $\Phi^4$-like models, simulating the behaviour
of certain soft phonon modes essentially decoupled from
other nonsoftening modes \cite{bc81}. However
there might be too many
problems on this path to really make sure that $\Phi^4$-lattice
type NLEs can exist in such crystals.
Another way of thinking leads us to the fact that the
adding of a periodic external field (onsite potentials)
can produce a finite phonon gap eliminating the conservation
of mechanical momentum. Such situations are very likely in
the case of atomic monolayers on proper substrate surfaces.
If it becomes possible to choose such cases, where the phonon band width
becomes
small compared to the gap width, NLEs could exist.
With these arguments we did not intend to judge
the different physical situations where NLEs are likely or
unlikely to exist. It was the search strategy we had in mind.
It is one of the forthcoming tasks to provide a foundation for these ideas
in order to proceed in the question of applicability. Still
the mathematical result that NLEs are generic solutions of
nonlinear lattices \cite{fw9} serves as a powerful indicator of their
relevance in different physical realizations.
\\
\\
\\
\\
Acknowledgements
\\
\\
We thank J. Krumhansl and E. Olbrich for interesting discussions,
S. Takeno and A. J. Sievers for sending us their preprints
prior to publication, and J. Denzler for helpful comments.
One of us (K.K.) thanks the University of Maine for
warm hospitality and support.
This work was supported in part (S.F.)
by the Deutsche Forschungsgemeinschaft
(Fl 200/1-1).
\newpage
\section{Introduction}
A vector space~$A$ with a bilinear product~$\circ$ satisfying the identities
\begin{gather}
(x_1\circ x_2)\circ x_3-x_1\circ(x_2\circ x_3)=(x_2\circ x_1)\circ x_3-x_2\circ(x_1\circ x_3), \label{LeftSym} \\
(x_1\circ x_2)\circ x_3=(x_1\circ x_3)\circ x_2, \label{RightCom}
\end{gather}
is called a Novikov algebra.
Novikov algebras were introduced in the study of Hamiltonian operators concerning integrability of certain partial differential equations~\cite{GelDor79}.
Later, Novikov algebras appeared in the study of Poisson brackets of hydrodynamic type~\cite{BalNov85}.
It is well-known that given a commutative algebra $A$ with a derivation $d$,
the space $A$ under the product
$x_1\circ x_2 = x_1d(x_2)$
is a Novikov algebra.
Moreover, all identities fulfilled in $(A,\circ)$ are consequences of~\eqref{LeftSym} and~\eqref{RightCom}. Using rooted trees, a monomial basis of the free Novikov algebra in terms of $\circ$ was constructed in~\cite{DzhLofwall}; in terms of Young diagrams, such a basis was constructed in~\cite{DzhIsmailov}.
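The construction above can be checked directly for polynomials with $d = d/dx$ (a sketch; the coefficient-list representation and the random test polynomials are implementation choices, not from the paper):

```python
import random

# Polynomials in one variable as coefficient lists: p[i] is the x^i coefficient.
def mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def d(p):
    """Formal derivative d/dx."""
    return [i * a for i, a in enumerate(p)][1:] or [0]

def circ(p, q):
    """Novikov product x1 o x2 = x1 * d(x2)."""
    return mul(p, d(q))

def sub(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def is_zero(p):
    return all(a == 0 for a in p)

rng = random.Random(0)
x1, x2, x3 = ([rng.randint(-3, 3) for _ in range(4)] for _ in range(3))

# Left symmetry (LeftSym): the associator is symmetric in x1 and x2.
left_sym = is_zero(sub(sub(circ(circ(x1, x2), x3), circ(x1, circ(x2, x3))),
                       sub(circ(circ(x2, x1), x3), circ(x2, circ(x1, x3)))))
# Right commutativity (RightCom): (x1 o x2) o x3 = (x1 o x3) o x2.
right_com = is_zero(sub(circ(circ(x1, x2), x3), circ(circ(x1, x3), x2)))
```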
An algebra~$A$ satisfying only the identity~\eqref{LeftSym} is called a left-symmetric algebra.
Left-symmetric algebras have been studied since the 1960s; they have applications in affine geometry, ring theory, vertex algebras, etc., see the survey~\cite{Burde06}. Left-symmetric algebras embeddable under the operation $x_1\circ x_2 = x_1d(x_2)$ into permutative algebras were studied in~\cite{KS2022}.
Note that every associative algebra is a left-symmetric algebra, and every left-symmetric algebra under the commutator $[a,b] = a\circ b - b\circ a$ is a Lie algebra.
For that reason, every Novikov algebra under the commutator is a Lie algebra satisfying an additional identity of degree~5 of the following form:
$$
\sum_{\sigma\in S_4}(-1)^{\sigma}[x_{\sigma(1)},[x_{\sigma(2)},[x_{\sigma(3)},[x_{\sigma(4)},x_5]]]]=0.
$$
To find all special identities for Novikov algebras considered under the commutator is still an open problem.
It is equivalent to the same question formulated for a~commutative algebra~$C$ with a derivation~$d$
considered under the product
$$
[x_1,x_2] = x_1d(x_2) - x_2d(x_1),
$$
which is called Wronskian bracket.
Given a Poisson algebra~$(P,\cdot,\{,\})$ with a~derivation~$d$ due to both products, define on~$P$ new operations as follows,
$$
x_1\circ x_2 = x_1 d(x_2),\quad
[x_1,x_2] = \{x_1,x_2\}.
$$
Recall that the variety of Poisson algebras is defined by the identities,
\begin{gather}
x_1 \cdot x_2 = x_2\cdot x_1, \quad
(x_1 \cdot x_2)\cdot x_3 = x_1 \cdot (x_2\cdot x_3), \\
\{x_1,x_2\} = - \{x_2,x_1\}, \quad
\{\{x_1,x_2\},x_3\} + \{\{x_2,x_3\},x_1\} + \{\{x_3,x_1\},x_2\} = 0, \label{Lie-id} \\
\{x_1, x_2\cdot x_3\} = \{x_1,x_2\}\cdot x_3 + x_2\cdot\{x_1,x_3\}.
\end{gather}
The algebra~$P^{(d)}:=(P,\circ,[,])$ has a Novikov product~$\circ$, a Lie product~$[,]$, and moreover,
the following identities hold,
\begin{gather}
x_2\circ[x_1,x_3]=[x_1,x_2\circ x_3]-[x_3,x_2\circ x_1]+[x_2,x_1]\circ x_3-[x_2,x_3]\circ x_1, \label{gd} \\
[x_1,(x_2\circ x_3)\circ x_4]=[x_1,x_2\circ x_3]\circ x_4+[x_1,x_2\circ x_4]\circ x_3-([x_1,x_2]\circ x_3)\circ x_4, \label{spec1}
\end{gather}
\vspace{-0.7cm}
\begin{multline}\label{spec2}
[x_3\circ x_1,x_4\circ x_2]=[x_4\circ x_1,x_3\circ x_2]+[x_3,x_4\circ x_1]\circ x_2-[x_4,x_3\circ x_2]\circ x_1 \\
-[x_4,x_3\circ x_1]\circ x_2+[x_3,x_4\circ x_2]\circ x_1+2([x_4,x_3]\circ x_1)\circ x_2.
\end{multline}
There may exist identities of degree greater than~5 fulfilled in~$P$ which are independent from~\eqref{gd}--\eqref{spec2}.
An algebra $(G,\circ,[,])$ such that $(G,\circ)$ is Novikov, $(G,[,])$ is Lie, and the
identity~\eqref{gd} holds is called a Gelfand---Dorfman algebra ($\mathop {\fam0 GD }\nolimits$-algebra)~\cite{WenHong, Xu2000}.
$\mathop {\fam0 GD }\nolimits$-algebras appeared in~\cite{GelDor79} as a source of Hamiltonian operators:
with the help of the structure constants of a Gelfand---Dorfman algebra one may construct a~differential operator.
In~\cite{Xu2000}, it was shown that $\mathop {\fam0 GD }\nolimits$-algebras are closely related with Lie conformal algebras.
It is worth noting that the identities~\eqref{spec1} and~\eqref{spec2} are not fulfilled in the free GD-algebra; thus, these identities are called {\it special identities}. Moreover, the identities~\eqref{spec1} and~\eqref{spec2} are mutually independent \cite{KSO,KS}.
A Gelfand---Dorfman algebra~$G$ is called {\it special} if there exists a Poisson algebra~$P$ with a derivation~$d$ such that $G$ injectively embeds into~$P^{(d)}$.
It is known that the class of special GD-algebras forms a variety~\cite{KS}, thus, we may consider
the free special Gelfand---Dorfman algebra~$\mathop {\fam0 SGD }\nolimits\langle X\rangle$ generated by a~set $X$.
By $\mathrm{Com}\mathrm{Der}\langle X\rangle$ and $\mathrm{Pois}\mathrm{Der}\langle X\rangle$ we denote
the free commutative and the free Poisson algebra with a derivation in the signature generated by~$X$, respectively.
In \cite{KS} it was proved that every 2-dimensional $\mathop {\fam0 GD }\nolimits$-algebra is special.
Another interesting result is that a $\mathop {\fam0 GD }\nolimits$-algebra such that $[a,b] = a\circ b - b\circ a$ is special~\cite{KP2020}.
Therefore, a natural problem arises:
to construct a monomial basis of the free $\mathop {\fam0 SGD }\nolimits$-algebra in terms of $\circ$ and $[\;,\;]$.
In this paper, we solve this problem.
For the solution, we construct a new monomial basis of the free Novikov algebra.
For all mentioned varieties we have the following diagram:
\begin{picture}(30,80)
\put(195,57){$\hookrightarrow$}
\put(160,44){\normalsize\rotatebox[origin=c]{270}{$\hookrightarrow$}}
\put(195,32){$\hookrightarrow$}
\put(230,44){{\normalsize\rotatebox[origin=c]{270}{$\hookrightarrow$}}}
\put(143,57){$\mathrm{Nov}\langle X\rangle$}
\put(143,31){$\mathop {\fam0 SGD }\nolimits\langle X\rangle$}
\put(213,57){$\mathrm{Com}\mathrm{Der}\langle X\rangle$}
\put(213,31){$\mathrm{Pois}\mathrm{Der}\langle X\rangle$}
\end{picture}
\vspace*{-\baselineskip}
In~\S2, we construct a new basis of the free Novikov algebra (Theorem~2).
In~\S3, we define a canonical form for monomials of $\mathrm{Pois}\mathrm{Der}\langle X\rangle$ of weight~$-1$.
Finally, in~\S4, the linear basis of the free special $\mathop {\fam0 GD }\nolimits$-algebra is constructed (Theorem~3).
For simplicity, we identify the element $d(x)$ with $x'$.
In this paper, all algebras are defined over a field of characteristic~0.
\section{Basis of free Novikov algebra}\label{basisofNov}
Let $X = \{x_i \mid i\in I\}$, where $I$ is a well-ordered set.
The free commutative algebra $\mathrm{Com}\mathrm{Der}\langle X,d\rangle$ with a~derivation~$d$ in the signature has a standard linear basis consisting of monomials
$$
x_{i_1}^{(r_1)}\ldots x_{i_k}^{(r_k)}, \quad x_j\in X,\ i_1\leq \ldots\leq i_k,\ r_j\geq0.
$$
Here $x_j^{(0)} = x_j$, $x_j^{(n+1)} = \big(x_j^{(n)}\big)'$.
Thus, the elements $x^{(r)}$, where $x\in X$ and $r\in \mathbb{N}$, generate $\mathrm{Com}\mathrm{Der}\langle X,d\rangle$ as a commutative algebra.
For simplicity, we will denote $\mathrm{Com}\mathrm{Der}\langle X,d\rangle$ as $\mathrm{Com}\mathrm{Der}\langle X\rangle$
omitting the symbol~$d$.
\begin{definition}
Let $u$ be a monomial from the standard basis of $\mathrm{Com}\mathrm{Der}\langle X,d\rangle$.
Define the {\em weight} function $\mathop {\fam 0 wt}\nolimits(u)\in \mathbb Z$ by induction as follows,
\begin{gather*}
\mathop {\fam 0 wt}\nolimits(x)=-1,\quad x\in X; \\
\mathop {\fam 0 wt}\nolimits(d(u)) = \mathop {\fam 0 wt}\nolimits(u)+1; \quad \mathop {\fam 0 wt}\nolimits(uv)=\mathop {\fam 0 wt}\nolimits(u)+\mathop {\fam 0 wt}\nolimits(v).
\end{gather*}
\end{definition}
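For a basis monomial $x_{i_1}^{(r_1)}\ldots x_{i_k}^{(r_k)}$ the definition collapses to $\mathop {\fam 0 wt}\nolimits = \sum_j (r_j-1)$. A one-line sketch (the representation of monomials as lists of pairs is ours):

```python
# A basis monomial is a list of (variable, r) pairs; x^{(r)} has weight r - 1.
def weight(monomial):
    return sum(r - 1 for _, r in monomial)

assert weight([("x", 0)]) == -1                                    # wt(x) = -1
assert weight([("x", 0), ("y", 0), ("z", 2), ("t", 1), ("q", 1)]) == -1
```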
We may consider the space $\mathrm{Com}\mathrm{Der}\langle X\rangle$
under the product $u\circ v = ud(v)$, denote the obtained Novikov algebra as
$\mathrm{Com}\mathrm{Der}\langle X\rangle^{(d)}$.
Let $\mathrm{Com}\mathrm{Der}\langle X\rangle_{-1}$ be a span of monomials from the standard basis of $\mathrm{Com}\mathrm{Der}\langle X\rangle$ of weight~$-1$. Note that $\mathrm{Com}\mathrm{Der}\langle X\rangle_{-1}$ is closed under the Novikov product $\circ$, so, it is a Novikov subalgebra of $\mathrm{Com}\mathrm{Der}\langle X\rangle^{(d)}$.
Let us recall the well-known results related to the free Novikov algebra.
\newpage
\begin{theorem} \label{thm:embedding}
a)~\cite{Umirbayev,DzhLofwall}
We have $(\mathrm{Com}\mathrm{Der}\langle X\rangle_{-1},\circ)\cong \mathrm{Nov}\langle X\rangle$.
b)~\cite{BCZ2017,KS} Every Novikov algebra can be embedded into a free differential commutative algebra.
\end{theorem}
Let us define an order on the elements $x^{(r)}$ of $\mathrm{Com}\mathrm{Der}\langle X\rangle$ as follows:
$x_i^{(r_m)}>x_j^{(r_n)}$ if $r_m>r_n$ or $r_m=r_n$ and $i>j$.
We define a normal form of monomials of $\mathrm{Com}\mathrm{Der}\langle X\rangle$ of weight $-1$ as follows,
\begin{equation}\label{goodwordnov}
x_{i_1}x_{i_2}\ldots x_{i_{l}}x_{j_n}^{(r_n)}\ldots x_{j_2}^{(r_2)}x_{j_1}^{(r_1)}x_{k_m}'\ldots x_{k_2}'x_{k_1}',
\end{equation}
where
\begin{gather*}
n\geq1,\quad r_1,\ldots, r_n\geq2, \quad
l=r_1+r_2+\ldots+r_n-n,\\
x_{i_1}\leq \ldots\leq x_{i_{l}}, \quad
x_{j_n}^{(r_n)}\geq\ldots\geq x_{j_1}^{(r_1)},\quad x_{k_m}\geq \ldots \geq x_{k_1}.
\end{gather*}
Denote by $N(X)$ the set of all normal forms~\eqref{goodwordnov} of monomials from the standard basis of $\mathrm{Com}\mathrm{Der}\langle X\rangle$ of weight $-1$.
For $a\in N(X)$, put
$$
L(a) = (r_1,\ldots,r_n),\quad
M(a) = (i_1,\ldots,i_l),\quad
R(a) = (k_m,\ldots,k_1),
$$
and define $S(a) = (L(a),R(a),M(a))$.
Given $a,b\in N(X)$, we say that $a<b$ if and only if $S(a)<S(b)$, where all tuples involved are compared lexicographically.
Denote by $\mathop {\fam0 Magma }\nolimits\langle X\rangle$ the free magma algebra with binary operation~$\circ$ generated by~$X$. We define a~linear map
$\varphi\colon \mathrm{Com}\mathrm{Der}\langle X\rangle_{-1}\to \mathop {\fam0 Magma }\nolimits\langle X\rangle$.
By linearity it is enough to define $\varphi$ on the set $N(X)$, we do it inductively as follows,
\begin{equation} \label{ComDerPhiStart}
\varphi\big(x_{i_1}x_{i_2}\ldots x_{i_{n-1}}x_{i_n}^{(n-1)}\big)
= x_{i_{n-1}}\circ(x_{i_{n-2}}\circ\ldots\circ(x_{i_1}\circ x_{i_n})\ldots),
\end{equation}
\begin{multline*}
\varphi\big(x_{i_1}x_{i_2}\ldots x_{i_{l}}x_{j_n}^{(r_n)}\ldots x_{j_2}^{(r_2)}x_{j_1}^{(r_1)}x_{k_m}'\ldots x_{k_2}'x_{k_1}'\big) \\
= \varphi\big(x_{i_1}x_{i_2}\ldots x_{i_{l-r_n}}B_1x_{j_{n-1}}^{(r_{n-1})}\ldots x_{j_2}^{(r_2)}x_{j_1}^{(r_1)} x_{k_m}'\ldots x_{k_2}'x_{k_1}'\big),
\end{multline*}
where $B_1=\varphi\big(x_{i_{l+1-r_n}}x_{i_{l+2-r_n}}\ldots x_{i_l}x_{j_n}^{(r_n)}\big)$
is a new letter, so on this step we extend the generating set $X$ to $X_1 = X\cup \{B_1\}$.
Thus, we define $\mathop {\fam 0 wt}\nolimits(B_1)=-1$ and $x<B_1$ for all $x\in X$.
On each step~$i$, we add a new letter~$B_i$ to the generating set, i.\,e. $X_i = X_{i-1}\cup \{B_i\}$, we define $\mathop {\fam 0 wt}\nolimits(B_i)=-1$ and $y<B_i$ for all $y\in X_{i-1}$.
Calculating by the given rule, finally, we get
\begin{multline}\label{Novgeneralform}
\varphi\big(x_{i_1}x_{i_2}\ldots x_{i_{l}}x_{j_n}^{(r_n)}\ldots x_{j_2}^{(r_2)}x_{j_1}^{(r_1)}x_{k_m}'\ldots x_{k_2}'x_{k_1}'\big)\\
= (\ldots((B_{n-1}\circ(x_{i_{r_1-1}}\circ\ldots(x_{i_1}\circ x_{j_1})\ldots))\circ x_{k_m})\ldots)\circ x_{k_1},
\end{multline}
where all previous letters $B_1,\ldots,B_{n-2}$ are inside $B_{n-1}$.
\begin{example}
Let $x,y,z,t,q\in X$, $y>x$, $t>q$, then
$$
\varphi(xyz^{(2)}t'q')
= \varphi(B_1 t'q')
= \varphi(B_2 q')
= B_2\circ q
= (B_1\circ t)\circ q
= ((y\circ (x\circ z))\circ t)\circ q.
$$
\end{example}
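The recursion for $\varphi$ can be sketched mechanically (our own illustration: variables are compared alphabetically, $\circ$ is printed as an infix symbol, and the auxiliary letters $B_i$ are carried implicitly as already-built subexpressions, which are automatically the greatest letters):

```python
def phi(plain, derived):
    """plain: base variables of the monomial (no derivatives), as strings;
    derived: (variable, r) pairs with r >= 1 derivatives applied.
    The monomial must have weight -1."""
    assert sum(r for _, r in derived) == len(plain) + len(derived) - 1
    plain = sorted(plain)
    # Process x^{(r)} letters from the greatest down; each consumes the r
    # greatest remaining plain letters and becomes a new greatest letter B_i.
    for var, r in sorted(derived, key=lambda p: (p[1], p[0]), reverse=True):
        letters, plain = plain[len(plain) - r:], plain[:len(plain) - r]
        expr = var
        for c in letters:       # ascending, so the smallest letter is innermost
            expr = f"({c}∘{expr})"
        plain.append(expr)
    return plain[0]

print(phi(["x", "y"], [("z", 2), ("t", 1), ("q", 1)]))
```

The printed result, `(((y∘(x∘z))∘t)∘q)`, reproduces the example above up to an outermost pair of parentheses.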
Define a homomorphism
$$
\tau\colon \mathop {\fam0 Magma }\nolimits\langle X\rangle\to \mathrm{Com}\mathrm{Der}\langle X\rangle_{-1}
$$
by the formula
$\tau(x) = x$, $x\in X$; the latter algebra is considered as a~Novikov one.
For example, if $x,y,z\in X$, then
$\tau(x\circ (y\circ z)) = x(yz')' = xy'z' + xyz^{(2)}$.
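The computation of $\tau$ can be mechanized by implementing $\mathrm{Com}\mathrm{Der}\langle X\rangle$ directly. A minimal sketch (the representation of a differential monomial as a sorted tuple of (variable, derivative-order) pairs is our choice):

```python
# An element of ComDer<X> is a dict {monomial: coefficient}; a monomial is a
# sorted tuple of (variable, r) pairs, r being the number of derivatives.
def mul(u, v):
    w = {}
    for m1, c1 in u.items():
        for m2, c2 in v.items():
            m = tuple(sorted(m1 + m2))
            w[m] = w.get(m, 0) + c1 * c2
    return {m: c for m, c in w.items() if c != 0}

def d(u):  # the derivation, extended by the Leibniz rule
    w = {}
    for m, c in u.items():
        for i, (var, r) in enumerate(m):
            dm = tuple(sorted(m[:i] + ((var, r + 1),) + m[i + 1:]))
            w[dm] = w.get(dm, 0) + c
    return {m: c for m, c in w.items() if c != 0}

def circ(u, v):  # the Novikov product u ∘ v = u d(v)
    return mul(u, d(v))

x, y, z = {(("x", 0),): 1}, {(("y", 0),): 1}, {(("z", 0),): 1}

# tau(x ∘ (y ∘ z)) = x (y z')' = x y' z' + x y z''
assert circ(x, circ(y, z)) == {
    (("x", 0), ("y", 1), ("z", 1)): 1,
    (("x", 0), ("y", 0), ("z", 2)): 1,
}
```

The assertion reproduces $\tau(x\circ(y\circ z)) = xy'z' + xyz^{(2)}$ from the text.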
\begin{lemma} \label{partitionderivation}
Let $a\in N(X)$. Then $\tau(\varphi(a)) = a + \sum_j b_j$, where $b_j<a$ for all~$j$.
\end{lemma}
\begin{proof}
By the definition of~$\tau$, it is enough to prove the statement for
$$
a=x_{i_1}x_{i_2}\ldots x_{i_{l}}x_{j_n}^{(r_n)}\ldots x_{j_2}^{(r_2)}x_{j_1}^{(r_1)}.
$$
By the Leibniz rule fulfilled for~$d$, we have
\begin{multline*}
\tau\big(\varphi\big(x_{i_1}x_{i_2}\ldots x_{i_{l}}x_{j_n}^{(r_n)}\ldots x_{j_2}^{(r_2)}x_{j_1}^{(r_1)}\big)\big)
= \tau(B_{n-1}\circ(x_{i_{r_1-1}}\circ(\ldots\circ(x_{i_1}\circ x_{j_1}))\ldots)) \\
= \tau(B_{n-1})(\tau(x_{i_{r_1-1}}\circ(\ldots\circ(x_{i_1}\circ x_{j_1})\ldots)))' \\
= \tau(B_{n-1})x_{i_1}\ldots x_{i_{r_1-1}}x_{j_1}^{(r_1)}
+ \sum_{p<r_1} \tau(B_{n-1}) \ldots x_{j_1}^{(p)},
\end{multline*}
and all summands are less than
$x_{i_1}\ldots x_{i_{r_1-1}}\tau(B_{n-1})x_{j_1}^{(r_1)}$ due to the above defined order on~$N(X)$. Analogously, we deal with $\tau(B_{n-1})$ and so on.
\end{proof}
Define $N_\varphi = \{\varphi(a) \mid a\in N(X)\}$.
\begin{theorem}\label{newbasenov}
The set $N_\varphi$ forms a~basis of the free Novikov algebra~$\mathrm{Nov}\langle X\rangle$.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:embedding}a, we identify $\mathrm{Nov}\langle X\rangle$ with the Novikov algebra $\mathrm{Com}\mathrm{Der}\langle X\rangle_{-1}$.
Let $L$ be a linear span of $N_\varphi$ in $\mathop {\fam0 Magma }\nolimits\langle X\rangle$.
We want to show that $\tau$ is an isomorphism of~$L$ and $\mathrm{Nov}\langle X\rangle$ considered as vector spaces.
By Lemma~\ref{partitionderivation}, we have that $\tau(\varphi(a)) = a + \sum_j b_j$ with $b_j<a$ for every $a\in N(X)$. Thus, we derive that $\tau(N_\varphi)$ is linearly independent.
Suppose that $\tau(N_\varphi)$ is not complete, so, the set $M = \{a\in N(X)\mid a$ is not expressed through $\tau(N_\varphi)\}$ is not empty.
Choose a minimal $a\in M$ with respect to the order~$<$;
such an element exists, since the set of tuples $S(a)$ is well-ordered.
We have $a - \tau([a]) = \sum_j b_j$.
By the assumption, all $b_j$ are expressed via $\tau(N_\varphi)$, so, $a$ is expressed too, a~contradiction.
So, $\tau\colon L\to \mathrm{Nov}\langle X\rangle$ is an isomorphism of vector spaces.
Thus, we may define the product on~$L$ by the formula
$n\circ m = \tau^{-1}(\tau(n)\tau(m)')$, where $n,m\in N_\varphi$.
Since $\tau$~is also a~homomorphism between algebras $L$ and $\mathrm{Nov}\langle X\rangle$, we have proved the statement.
\end{proof}
Let $n$ be a positive integer. We consider Young diagrams corresponding to the partitions
$$
\lambda_1+\ldots+\lambda_k=n,\quad \lambda_1>\lambda_2\geq\ldots\geq \lambda_k\geq 1.
$$
We fill the Young diagrams by elements of $X$:
$$
\ytableausetup{mathmode, boxsize=2.7 em}
\begin{ytableau}
x_{i_{{1}_1}} & x_{i_{{1}_{2}}} & \dots & x_{i_{{1}_{\lambda_1-1}}} & x_{t_{1}} \\
\vdots \\
x_{i_{{r}_1}} & \dots & x_{i_{{r}_{\lambda_r-1}}} & x_{t_{r}} \\
x_{t_{r+1}}\\
\vdots \\
x_{t_{r+p}}\\
\end{ytableau}
$$
Here
\begin{gather*}
i_{{1}_{\lambda_1-1}}\geq\ldots\geq i_{{1}_1} \geq\ldots\geq i_{{r}_{\lambda_r-1}}\geq\ldots\geq i_{{r}_1},\;\; t_{r+1}\geq\ldots\geq t_{r+p}, \\
t_{1}\geq t_{2}\;\textrm{if}\;\lambda_1=\lambda_{2}+1,\; \textrm{and} \;t_{s}\geq t_{s+1}\; \textrm{if} \; \lambda_s=\lambda_{s+1}\; \textrm{for} \; s=2,\ldots,r-1.
\end{gather*}
For the diagram with exactly one row
$(x_{i_{{1}_1}},x_{i_{{1}_{2}}},\dots,x_{i_{{1}_{\lambda_1-1}}},x_{t_{1}})$
we attach a monomial of $\mathrm{Nov}\langle X\rangle$ as follows,
$$
u_1:=x_{i_{{1}_{\lambda_1-1}}}\circ (\ldots \circ(x_{i_{{1}_{2}}}\circ(x_{i_{{1}_1}}\circ x_{t_{1}}))\ldots).
$$
For the diagram with $m$~rows we attach a~monomial of $\mathrm{Nov}\langle X\rangle$ inductively,
$$
m\textrm{-th row}\longrightarrow u_{m}
:=u_{m-1}\circ (x_{i_{{m}_{\lambda_m-1}}}\circ(\ldots \circ (x_{i_{{m}_2}}\circ(x_{i_{m_1}}\circ x_{t_{m}}))\ldots)).
$$
The set of the constructed Young diagrams with the corresponding monomials coincides with the set $N_\varphi$.
\section{Normal form of monomials of weight~$-1$ in $\mathrm{Pois}\mathrm{Der}\langle X\rangle$}
Let $Y$ be a well-ordered set with respect to an order
$<$, and let $Y^*$ be the set of all associative words
in the alphabet $Y$ (including the empty word, denoted by 1).
Extend the order to $Y^*$ by induction on the word length as follows.
Put $u<1$ for every nonempty word $u$. Further,
$u < v$ for $u = y_i u'$, $v = y_jv'$, $y_i, y_j\in Y$
if either $y_i < y_j$ or $y_i = y_j$, $u'< v'$.
In particular, every proper beginning of a word is greater than the whole word.
{\it Definition 2}.
A word $w\in Y^*$ is called an associative Lyndon---Shirshov word if
for arbitrary nonempty $u$ and $v$ such that $w=uv$,
we have $w>vu$.
For example, a word $aabac$ is an associative Lyndon---Shirshov word when $a>b>c$.
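Definition 2 is easy to test mechanically. A sketch of the comparison rules from the text (the function names and the rank encoding are ours; a higher rank means a greater letter):

```python
def greater(u, v, rank):
    """True if the word u > v: every nonempty word is smaller than the empty
    word 1, and otherwise words are compared letter by letter."""
    if u == v:
        return False
    if u == "" or v == "":
        return u == ""          # the empty word 1 is the greatest
    if u[0] != v[0]:
        return rank[u[0]] > rank[v[0]]
    return greater(u[1:], v[1:], rank)

def is_als(w, rank):
    """Associative Lyndon--Shirshov word: w > vu for every split w = uv."""
    return all(greater(w, w[i:] + w[:i], rank) for i in range(1, len(w)))

rank = {"a": 2, "b": 1, "c": 0}   # a > b > c, as in the example in the text
assert is_als("aabac", rank)      # the example word is indeed an LS-word
assert is_als("ab", rank)
assert not is_als("aa", rank)     # w equals one of its rotations
assert not is_als("ba", rank)     # the rotation ab is greater
```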
Consider the set $Y^+$ of all nonassociative words in $Y$,
here we exclude the empty word from consideration.
{\it Definition 3}.
A nonassociative word $[u]\in Y^+$ is called a nonassociative
Lyndon---Shirshov word (an LS-word, for short) provided that
(LS1) the associative word $u$ obtained from $[u]$
by eliminating all parentheses is an associative Lyndon---Shirshov word;
(LS2) if $[u] = [[u_1],[u_2]]$, then
$[u_1]$ and $[u_2]$ are LS-words, and $u_1> u_2$;
(LS3) if $[u_1] = [[u_{11}],[u_{12}]]$, then $u_2\geq u_{12}$.
These words appeared independently in the contexts of algebras and groups~\cite{Lyndon1958,Shirshov1958}.
In~\cite{Shirshov1958}, it was proved that the set of all
LS-words in the alphabet $Y$ is a~linear basis for a~free Lie
algebra generated by $Y$.
Moreover, each associative Lyndon---Shirshov word~$w$ possesses the unique arrangement of parentheses which gives an LS-word $[w]$.
We consider the free Poisson algebra $\mathrm{Pois}\langle X\rangle$ generated by~$X$.
Here we denote the operations by~$x\cdot y$ and $\{x,y\}$.
Since $\mathrm{Pois}\langle X\rangle = \mathrm{Com}(\mathrm{Lie} \langle X\rangle)$, the set of commutative words
\begin{equation} \label{PoisFreeBasisForm}
A_1A_2\ldots A_n,\quad A_1\leq \ldots\leq A_n,
\end{equation}
where $A_i$ are Lyndon---Shirshov words in $\mathrm{Lie}\langle X\rangle$ forms a~standard basis of $\mathrm{Pois}\langle X\rangle$.
By~\cite{KSO}, for the free Poisson algebra generated by a~set~$X$ with a derivation~$d$, we have the equality
$\mathrm{Pois}\mathrm{Der}\langle X\rangle = \mathrm{Pois}\langle X_\infty\rangle$, where
$X_\infty = \big\{x_i^{(n)}\mid i\in I,\,n\in\mathbb{N}\big\}$.
Define an order on $X_\infty$ as follows:
$x_i^{(m)}>x_j^{(n)}$ if $m>n$ or $m=n$, $i>j$.
We define an order on Lyndon---Shirshov words forming the basis in $\mathrm{Lie}\langle X_\infty\rangle$ as follows. At first, we compare two Lie words $A_1$ and $A_2$ by degree, i.\,e., $A_1>A_2$ if $\deg A_1>\deg A_2$. If $\deg A_1 = \deg A_2$, then we compare corresponding associative Lyndon---Shirshov words as it was defined above.
Also, we define $A>x_{k}^{(m)}>B$, where $A$ and $B$ are LS-words on $X_\infty$ of degree at least two and $A$ but not $B$ involves~$d$ in its notation.
Recall the definition of the weight function~\cite[Definition\,2]{KSO} on basic monomials~\eqref{PoisFreeBasisForm} with Lie words taken from $H(X,d)$ of~$\mathrm{Pois}\mathrm{Der}\langle X\rangle$,
\begin{gather*}
\mathop {\fam 0 wt}\nolimits(x)=-1,\quad x\in X; \\
\mathop {\fam 0 wt}\nolimits(d(u)) = \mathop {\fam 0 wt}\nolimits(u)+1; \
\mathop {\fam 0 wt}\nolimits(\{u,v\})=\mathop {\fam 0 wt}\nolimits(u)+\mathop {\fam 0 wt}\nolimits(v)+1; \
\mathop {\fam 0 wt}\nolimits(uv)=\mathop {\fam 0 wt}\nolimits(u)+\mathop {\fam 0 wt}\nolimits(v).
\end{gather*}
Due to~\cite{KSO}, we have
$(\mathrm{Pois}\mathrm{Der}\langle X\rangle_{-1},\circ,[,])\cong \mathop {\fam0 SGD }\nolimits\langle X\rangle$, and the linear map
$\xi\colon \mathrm{Pois}\mathrm{Der}\langle X\rangle_{-1}\to \mathop {\fam0 SGD }\nolimits\langle X\rangle$ defined by the formulas $\xi(a\circ b)=ab'$, $\xi([a,b])=\{a,b\}$ provides the isomorphism.
Let us define a canonical form of monomials of $\mathrm{Pois}\mathrm{Der}\langle X\rangle$ of weight $-1$ as follows:
\begin{equation}\label{poisgoodword}
x_{i_1}\ldots x_{i_k}B_1\ldots B_m A_n\ldots A_1 x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)},
\end{equation}
where $A_i,B_j$ are Lie-words of degree at least~2 and $A_i$ but not $B_j$ involves~$d$ in its notation, moreover,
\begin{gather*}
A_1\leq A_2\leq\ldots \leq A_n, \quad
B_m\geq B_{m-1}\geq\ldots\geq B_1, \\
x_{i_k}\geq x_{i_{k-1}}\geq \ldots\geq x_{i_1}, \quad
x_{j_l}^{(r_l)}\geq x_{j_{l-1}}^{(r_{l-1})}\geq \ldots \geq x_{j_1}^{(r_1)}.
\end{gather*}
Denote by $N(X)$ the set of all normal forms~\eqref{poisgoodword} of monomials from the standard basis of $\mathrm{Pois}\mathrm{Der}\langle X\rangle$ of weight $-1$.
Denote by $\mathop {\fam0 Magma }\nolimits_2\langle X\rangle$ the free algebra with two binary (magma) operations~$\circ$ and~$[,]$ generated by~$X$. We define a~linear map
$\psi\colon \mathrm{Pois}\mathrm{Der}\langle X\rangle_{-1}\to \mathop {\fam0 Magma }\nolimits_2\langle X\rangle$
by induction.
At first, we consider a Lie LS-word corresponding to an associative LS-word
$w = x_{i_1}^{(r_1)}\ldots x_{i_t}^{(r_t)}$ with $k = r_1+\ldots+r_t \geq 1$.
Let $\pi\in S_t$ be a permutation acting on the letters of $w\in X_\infty^*$ such that
in the associative word $w^{\pi} = x_{j_1}^{(p_1)}\ldots x_{j_t}^{(p_t)}$
we have $x_{j_m}^{(p_m)}\geq x_{j_{m+1}}^{(p_{m+1})}$ for $m=1,\ldots,t-1$.
Given $c_k\geq \ldots\geq c_1$ such that $\mathop {\fam 0 wt}\nolimits(c_i)=-1$
(here by $c_i$ we mean either $x\in X$ or a Lie LS-word in $X$), we put
$$
\psi(c_{1}\ldots c_{k}[w])\\
= [u],
$$
where the corresponding associative LS-word $u = v^\pi$
and the word $v = v_1 \ldots v_t$ is defined as follows,
$$
v_m = G(c_1,\ldots,c_k,w)_m
:= \varphi\big(c_{k-p_1-\ldots-p_m+1}\ldots c_{k-p_1-\ldots-p_{m-1}} x_{j_m}^{(p_m)}\big),
$$
the map~$\varphi$ is defined by~\eqref{ComDerPhiStart}.
Here, we add new generators
$G(c_1,\ldots,c_k,w)_m$ to $X$ for all $p_m\geq1$ to get the superset~$\widetilde{X}$.
We compare new generators first by their length in terms of the~$\circ$ operation
and then, after eliminating all signs of the~$\circ$ operation,
as associative words.
By this rule, all new letters are greater than elements from~$X$.
Also, we put $\mathop {\fam 0 wt}\nolimits(G(c_1,\ldots,c_k,w)_m) = -1$.
Let $u$ be a word of the form~\eqref{poisgoodword}. We define $\psi(u)$ as follows,
$$
\psi(u)
= \psi\big(c_1\ldots c_m A_n \ldots A_1 x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)}\big)
= \psi\big(c_1\ldots c_{p_{n-1}} \tilde{A}_n A_{n-1}\ldots A_1 x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)}\big),
$$
where $\mathop {\fam 0 wt}\nolimits(c_{p_{n-1}+1}\ldots c_{m} A_n)=-1$,
$\tilde{A}_n=\psi(c_{p_{n-1}+1}\ldots c_{m}\{A_n\})$
is a~letter of the extended alphabet~$\widetilde{X}$.
Thus, by $n$ such steps we exclude all Lie words of degree at least two which involve $d$ in their notation and afterwards apply the map~$\varphi$ from Sec.~2:
\begin{multline}\label{psiAction}
\psi(u)
= \psi \big(c_1 \ldots c_{p_{n-2}}\tilde{A}_{n-1}A_{n-2}\ldots A_1 x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)}\big)
= \ldots \\
= \psi \big(c_1 \ldots c_{p_2}\tilde{A}_2A_1x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)}\big)
= \varphi \big(c_1 \ldots c_{p_1}\tilde{A}_1x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)}\big).
\end{multline}
\begin{example}
We have
\[
\psi(x_3x_4x_5\{x_1',x_2''\})
= [x_3\circ x_1,x_5\circ(x_4\circ x_2)],
\]
\begin{multline*}
\psi(x_5x_6x_7x_8\{x_3',x_4''\}\{x_1,x_2'\}x_9'')
= \psi(x_5[x_6\circ x_3,x_8\circ(x_7\circ x_4)]\{x_1,x_2'\}x_9'') \\
= \psi(x_5[x_1,[x_6\circ x_3,x_8\circ(x_7\circ x_4)]\circ x_2]x_9'')
= [x_1,[x_6\circ x_3,x_8\circ(x_7\circ x_4)]\circ x_2]\circ(x_5\circ x_9).
\end{multline*}
\end{example}
\section{Basis of free $\mathop {\fam0 SGD }\nolimits$-algebra}
Given $a,b\in N(X)$, we compare them as elements from $\mathrm{Com}\mathrm{Der}\langle X\rangle$ (see Sec.~2).
Define a homomorphism
$$
\tau\colon \mathop {\fam0 Magma }\nolimits_2\langle X\rangle\to \mathrm{Pois}\mathrm{Der}\langle X\rangle_{-1}
$$
by the formula
$\tau(x) = x$, $x\in X$.
\begin{lemma}\label{partitionLieword}
Let $a\in N(X)$. Then $\tau(\psi(a)) = a + \sum_j b_j$, where $b_j<a$ for all~$j$.
\end{lemma}
\begin{proof}
Let $u = c_1\ldots c_m A_n \ldots A_1 x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)}\in N(X)$.
By the definition of~$\psi$ and $\varphi$, we have (see also~\eqref{psiAction})
\begin{multline*}
\psi(u)
= \varphi \big(c_1 \ldots c_{q_1}\tilde{A}_1x_{j_l}^{(r_l)}\ldots x_{j_1}^{(r_1)}\big)
= \varphi \big(c_1 \ldots c_{q_2}B_1 x_{j_{l-1}}^{(r_{l-1})}\ldots x_{j_1}^{(r_1)}\big)
= \ldots \\
= \varphi \big(c_1 \ldots c_{q_l}B_{l-1}x_{j_1}^{(r_1)}\big)
= B_{l-1}\circ (c_{q_l}\circ(\ldots (c_1\circ x_{j_1})\ldots)).
\end{multline*}
Applying~$\tau$, we get
\begin{equation} \label{Lemma2Induction}
\tau(\psi(u))
= \tau(B_{l-1})(\tau( c_{q_l}\circ(\ldots (c_1\circ x_{j_1})\ldots) ))'.
\end{equation}
By Lemma~1,
$$
\tau( c_{q_l}\circ(\ldots (c_1\circ x_{j_1})\ldots) )
= c_1 \ldots c_{q_l}x_{j_1}^{(r_1-1)} + \sum_{j} b_j,
$$
where $b_j<c_1 \ldots c_{q_l}x_{j_1}^{(r_1-1)}$.
Writing down~$\tau(B_{l-1})$, we get an expression for it analogous to the right-hand side of~\eqref{Lemma2Induction}.
By this remark and by induction, it is enough to deal with~$\tau(\tilde{A}_1)$.
Let $A_1 = [v_1\ldots v_k]$, where $v_i = x_{j_i}^{(t_i)}$.
We have by Lemma~1,
$$
\tau(\tilde{A}_1)
= \big[\tau(\tilde{A}_2)c_{l_1}\ldots c_{l_{t_1}} x_{j_1}^{(t_1)} + d_1,\ldots,c_{r_1}\ldots c_{r_{t_k}}x_{j_k}^{(t_k)} + d_k\big],
$$
where $d_i$ are less than the corresponding leading terms.
Extracting $\tau(\tilde{A}_2)$ as well as all $c_i$ from the described Lie word by the Leibniz rule, we get $a$ plus a sum of words less than~$a$ with respect to the order defined on $N(X)$.
\end{proof}
\begin{example}
If $u=x_3x_4\{x_1',x_2'\}$, then
\begin{multline*}
\tau(\psi(x_3x_4\{x_1',x_2'\}))
= \tau([x_3\circ x_1,x_4\circ x_2]) \\
= \{x_3x_1',x_4x_2'\}
= x_3x_4\{x_1',x_2'\} + \{x_3,x_4\}x_2'x_1'
+ x_4\{x_3,x_2'\}x_1' + x_3\{x_1',x_4\}x_2'.
\end{multline*}
\end{example}
Define $N_\psi = \{\psi(a) \mid a\in N(X)\}$.
\begin{theorem}\label{basis[S]}
The set $N_\psi$ forms a~basis of the free $\mathop {\fam0 SGD }\nolimits$-algebra~$\mathop {\fam0 SGD }\nolimits\langle X\rangle$.
\end{theorem}
\begin{proof}
The proof is analogous to that of Theorem~2, with Lemma~2 applied in place of Lemma~1.
\end{proof}
Applying Theorem \ref{basis[S]}, we obtain the multiplication table in the free SGD-algebra.
If $a,b\in N_{\psi}$ then we compute $a\circ b$ and $[a,b]$ as follows.
Firstly,
$$
\tau(a*b) = \sum_i \alpha_i c_i\in\mathrm{Pois}\mathrm{Der}\langle X\rangle,
$$
where $* = \circ$ or $* = [,]$, $\alpha_i \in F$ and $c_i\in N(X)$.
Secondly, by Theorem~\ref{basis[S]}, we have
$c_i = \tau\big(\sum_j \beta_{ij} d_{ij}\big)$
for some $\beta_{ij}\in F$ and $d_{ij}\in N_{\psi}$, which gives
$$
a*b = \sum_i\alpha_i\bigg(\sum_j \beta_{ij} d_{ij}\bigg).
$$
\begin{example}
We have that $[x_1,(x_2\circ x_3)\circ x_4]\notin N_{\psi}$ and
\begin{multline*}
\tau([x_1,(x_2\circ x_3)\circ x_4])=\{x_1,x_2\}x_3'x_4'+\{x_1,x_3'\}x_2x_4'+\{x_1,x_4'\}x_2x_3' \\
= \tau(([x_1,x_2]\circ x_3)\circ x_4
+ [x_1,x_2\circ x_3]\circ x_4
- ([x_1,x_2]\circ x_3)\circ x_4 \\
+ [x_1,x_2\circ x_4]\circ x_3 - ([x_1,x_2]\circ x_3)\circ x_4),
\end{multline*}
which gives the identity (\ref{spec1}).
Also, $[x_4\circ x_1,x_3\circ x_2]\notin N_{\psi}$ and
\begin{multline*}
\tau([x_4\circ x_1,x_3\circ x_2])
= x_3x_4\{x_1',x_2\}+\{x_3,x_4\}x_2'x_1'+x_3\{x_4,x_2'\}x_1'+x_4\{x_1',x_3\}x_2' \\
= \tau([x_3\circ x_1,x_4\circ x_2]-([x_3,x_4]\circ x_2)\circ x_1-[x_3,x_4\circ x_2]\circ x_1+([x_3,x_4]\circ x_2)\circ x_1 \\
- [x_3\circ x_1,x_4]\circ x_2+([x_3,x_4]\circ x_2)\circ x_1+([x_3,x_4]\circ x_2)\circ x_1 \\
+ [x_4,x_3\circ x_2]\circ x_1 - ([x_4,x_3]\circ x_2)\circ x_1 + [x_4\circ x_1,x_3]\circ x_2 - ([x_4,x_3]\circ x_2)\circ x_1),
\end{multline*}
which gives the identity (\ref{spec2}).
\end{example}
\section{Introduction}
Monolayer (ML) and bilayer (BL) graphene are semimetals with an electron and a hole band.
Both bands touch each other at two nodes. The low-energy dispersion in the
vicinity of these nodes is linear in ML graphene and quadratic in BL graphene. Exactly
at the nodes both systems obey a chiral symmetry which reflects the sublattice symmetry
of the underlying honeycomb lattice for ML or the inversion symmetry between single layers for BL.
These symmetries can be broken, either by adding hydrogen atoms to ML graphene~\cite{FootNote}~[\onlinecite{Duplock2004, elias2009}],
or by a biased gate voltage applied to BL graphene~[\onlinecite{ohta2006}]. The symmetry breaking is accompanied
by the opening of a gap in the spectrum. The question is then whether such a gap is suppressed
or supported by the Coulomb interaction. Previous studies have shown that disorder
induces random fluctuations of the gap which can suppress the effective gap and allow ML
and BL graphene to be a conductor and to have a metal-insulator transition for a sufficiently
large average gap~[\onlinecite{Ziegler2009a,Ziegler2009b}]. On the other hand, it has been discussed
that a short-range (Gross-Neveu) electron-electron interaction can dominate the long-range
Coulomb interaction, leading eventually to an insulating behavior~[\onlinecite{Juricic2009}].
In contrast to these works, we subsequently follow a more direct route to an insulator by assuming
a small gap and studying how it is affected by the Coulomb interaction itself.
The problem of Coulomb interaction in graphene has been previously studied by employing a perturbative
renormalization-group (RG) approach, for clean ML
graphene~[\onlinecite{Juricic2009,Son2007,Mishchenko2007,Mishchenko2008,Sheehy2007,Kane2006,Drut2009}] as well as for disordered ML graphene~[\onlinecite{Stauber2005,Foster2008,Herbut2008}].
These studies show clearly a strong renormalization of the Fermi velocity in the clean case
and considerable interplay between Coulomb interaction and disorder.
The outline of this paper is as follows. In Sec.~\ref{sec:model} we define the effective field theory for both graphene configurations with Coulomb interaction. We introduce the gap into the action by hand and perform decoupling in the interaction channel by means of the Hubbard-Stratonovich transformation. We obtain expressions for the bare bosonic and fermionic propagators and interaction vertices. In Sec.~\ref{sec:FRG} we write down renormalization-group flow equations for the gap parameter and fermionic wave-function renormalization factors and solve them for both gapless and gapped regimes. Furthermore, we analyze the fixed points for both graphene configurations.
\section{The Model}
\label{sec:model}
We start with the zero-temperature model for gapped ML and BL graphene. In real space, the Euclidean action of noninteracting ML and BL graphene in the vicinity of a nodal point is given by
\begin{eqnarray}
\label{eq:Model}
{\cal S}^{}_0[\psi^\dag,\psi] &=&
-\intop_{X}\psi^\dag_{X}\left(\hbar\partial^{}_t - i{\vec\tau}\cdot{\vec\nabla} + \Delta^{}_0\tau^{}_3\right)\psi^{}_X .
\end{eqnarray}
Here, the Grassmann fields $\psi^{\rm T}=(\psi^{}_{AK},\psi^{}_{BK},\psi^{}_{BK^\prime},\psi^{}_{AK^\prime})$ represent four-component spinors on the sublattices A and B in the vicinity of the nodal points $K$ and $K^\prime$ in momentum space; they depend on the $2+1$-dimensional vector $X$ that contains the imaginary time $t$ and the spatial vector $\vec x$ as components. The matrices $\tau^{}_{i,3}={\mathds 1}\otimes\sigma^{}_{i,3},\,i=1,2$, where $\sigma^{}_{i,3}$ denote the usual Pauli matrices.
For ML graphene the operator $\vec\nabla$ reads
\begin{equation}
{\vec\nabla} = \hbar v \partial^{}_{\vec x},
\end{equation}
where $v = \sqrt{3}ta/2\hbar$ denotes the bare (nonrenormalized) Fermi velocity and $\partial^{}_{\vec x}$ the usual gradient operator. For BL graphene it has the components:
\begin{eqnarray}
\nabla^{}_{1} &=& \frac{\hbar^2}{2\mu i}(\partial^{2}_{x^{}_1}-\partial^{2}_{x^{}_2}),\\
\nabla^{}_{2} &=& \frac{\hbar^2}{2\mu i} 2\partial^{}_{x^{}_1}\partial^{}_{x^{}_2},
\end{eqnarray}
with the bare band mass of electrons defined as
$$
\mu = \frac{2t^{}_\perp\hbar^2}{3t^2a^2}.
$$
Here, $t$ and $t^{}_\perp$ are the in- and out-of-plane hopping energies, respectively; $a$ denotes the lattice spacing. The spectral gap $\Delta^{}_0$ is introduced by hand.
The instantaneous interaction is the same for both graphene configurations:
\begin{equation}
{\cal S}^{}_c[\psi^\dag,\psi]= \frac{\hbar g}{2} \intop_{X}\intop_{X^\prime}(\psi^\dag\psi)^{}_{X}\frac{\delta(t-t^\prime)}{|\vec{x}-\vec{x}^\prime|}(\psi^\dag\psi)^{}_{X^\prime}.
\end{equation}
The microscopic strength of the Coulomb interaction between electrons is given by
$$
g=\frac{e^2}{8\pi\epsilon^{}_0\epsilon\hbar} = \frac{\alpha c}{2\epsilon},
$$
where $e$ denotes the elementary charge, $\epsilon^{}_0$ the dielectric constant of the vacuum, $\alpha$ the fine structure constant, $c$ the speed of light in vacuum and $\epsilon$ the relative dielectric constant of the substrate. After performing a Fourier transform we obtain for both configurations
\begin{eqnarray}
\nonumber
{\cal S}[\psi^\dag,\psi] &=& -\intop_Q{\psi^\dag_Q}\left[i\hbar q^{}_0 + {\vec h}\cdot{\vec\tau}+\Delta^{}_0\tau^{}_3\right]\psi^{}_Q \\
\label{eq:ModelFourier}
&+&\frac{\hbar g^\prime}{2}\intop^{}_Q \frac{1}{q}~\rho^{}_{Q}\rho^{}_{-Q},
\end{eqnarray}
with different kinetic energy parts. The integrals over momentum and frequency $Q=(q^{}_0,{\vec q})$
with the absolute value of the momentum $q$ and zero-temperature Matsubara frequency $q^{}_0$ read
$\intop_Q=(2\pi)^{-3}\int dq^{}_0d^2{\vec q}$ and should be thought of as being regularized by means of a UV cutoff $\Lambda^{}_0$. Furthermore we have re-scaled the interaction strength by the factor $2\pi$ that appears after the Fourier transform, introducing $g^\prime=2\pi g$. The fermionic densities are defined as
$$
\rho^{}_Q = \int^{}_P \psi^\dag_P\psi^{}_{P+Q}.
$$
For ML graphene the components of the vector ${\vec h}$ in the non-interacting part of the action read
\begin{subequations}
\begin{equation}
\label{eq:ML-Hamiltonian}
h^{}_i = \hbar v q^{}_i,
\end{equation}
while for BL graphene
\begin{equation}
\label{eq:BL-Hamiltonian}
h^{}_1 = \frac{\hbar^2}{2\mu}(q^{2}_1-q^{2}_2),\;\; h^{}_2=\frac{\hbar^2}{\mu} q^{}_1 q^{}_2.
\end{equation}
\end{subequations}
Below we will assume $\hbar=1$.
Now we map the pure fermionic action Eq.~(\ref{eq:ModelFourier}) onto the action containing both
fermionic and bosonic degrees of freedom by means of the Hubbard--Stratonovich transformation~[\onlinecite{Schuetz2005}],
\begin{eqnarray}
\label{eq:HubStrat}
{\cal S}^{}_0[\psi^\dag,\psi] + {\cal S}^{}_c[\psi^\dag,\psi]
\to {\cal S}^{}_0[\psi^\dag,\psi] + {\cal S}^{}_0[\phi]+{\cal S}^{}_{Y}[\psi^\dag,\psi,\phi],
\end{eqnarray}
where the free bosonic action reads
\begin{equation}
\label{eq:BosePart}
{\cal S}^{}_0[\phi] = \frac{1}{2 g^\prime}\intop_Q q\phi^{}_Q\phi^{}_{-Q},
\end{equation}
and the third term denotes the interacting Yukawa term describing coupling between fermions and bosons:
\begin{equation}
\label{eq:Yukawa}
{\cal S}^{}_{Y}[\psi^\dag,\psi,\phi] = i\intop_Q\intop_{K}\psi^\dag_{K}\psi^{}_{K+Q}\phi^{}_{-Q}.
\end{equation}
From the mixed action Eq.~(\ref{eq:HubStrat}) we obtain vertices and propagators by taking functional derivatives with respect to each field.
From
\begin{equation}
\left.\frac{\delta^2{\cal S}}{\delta\psi^{}_{Q}\delta\psi^\dag_{Q^\prime}}\right|_{\psi^\dag,\psi,\phi=0} =-(2\pi)^3\delta^{}_{Q,Q^\prime}G^{-1}_0(Q),
\end{equation}
we obtain the inverse fermionic propagator
\begin{equation}
\label{eq:BareFermiPropInv}
G^{-1}_0(Q) = iq^{}_0+{\vec h}\cdot{\vec\tau}+\Delta^{}_0\tau^{}_3,
\end{equation}
with components of the vector ${\vec h}$ defined in Eqs.~(\ref{eq:ML-Hamiltonian}) and (\ref{eq:BL-Hamiltonian}), and from
\begin{equation}
\left.\frac{\delta^2{\cal S}}{\delta\phi^{}_{Q}\delta\phi^{}_{Q^\prime}}\right|_{\psi^\dag,\psi,\phi=0}=
-(2\pi)^3\delta^{}_{Q,-Q^\prime}F^{-1}(Q)
\end{equation}
the inverse bosonic propagator
\begin{equation}
\label{eq:BareBosePropInv}
F^{-1}(Q) = -\frac{q}{g^\prime}.
\end{equation}
Finally, the bare Yukawa vertex is obtained as
\begin{equation}
\Gamma^{}(P^{}_1;P^{}_2,P^{}_3)=
\left.\frac{\delta^3{\cal S}}{\delta\psi^{}_{P_1}\delta\psi^{\dag}_{P_2}\delta\phi^{}_{P_3}}\right|_{\psi^\dag,\psi,\phi=0}.
\end{equation}
We arrive at
\begin{equation}
\label{eq:BareTriVert}
\Gamma^{}(P^{}_1;P^{}_2,P^{}_3) = i(2\pi)^3\delta^{}_{P^{}_1,P^{}_2+P^{}_3}.
\end{equation}
\section{Renormalization group equations}
\label{sec:FRG}
The functional RG is conveniently defined in terms of the field dependent functional of effective action ${\cal L}[\Phi]$,
which in turn represents the Legendre transform of the generating functional of connected Green functions. For our purposes
the ensemble average field $\Phi=(\bar\psi,\bar\psi^\dag,\bar\phi)$ is supposed to contain both fermionic and bosonic entries~[\onlinecite{Schuetz2005}].
The functional ${\cal L}$ depends on the IR-cutoff $\Lambda\leqslant\Lambda^{}_0$, which is eventually removed.
The derivation of the functional RG flow equation is described in detail in Refs.~[\onlinecite{Schuetz2005, Schuetz2006, Wetterich1993, Morris1994}]. The RG flow of ${\cal L}$ is generated by the regulator function introduced into the propagator of the non-interacting system
and is determined by
\begin{equation}
\label{eq:FRGeq}
\partial^{}_{\Lambda}{\cal L}^{}_\Lambda[\Phi] = -\frac{1}{2}{\rm Tr}\left\{\partial^{}_\Lambda [G^{-1}_{0,R^{}_\Lambda}] \left(\frac{\delta^{2}{\cal L}^{R}_{\Lambda}}{\delta\Phi\delta\Phi}[\Phi]\right)^{-1}\right\},
\end{equation}
where $[G^{-1}_{0,R^{}_\Lambda}]$ is the propagator of the non-interacting system depending on the cutoff $\Lambda$ only via the regulator function. The matrix
\begin{equation}
\left.\frac{\delta^{2}{\cal L}^{R}_\Lambda}{\delta\Phi^{}_{}\delta\Phi^{}_{}}[\Phi]\right|^{}_{\Phi=0} = - [G^{R}_\Lambda]^{-1}=-([G^{-1}_{0,R^{}_\Lambda}]-\Sigma^{}_\Lambda)
\end{equation}
denotes the regularized full inverse propagator with $\Sigma^{}_\Lambda$ meaning the irreducible self-energy.
The choice of the regulator will be specified a few lines below. Note that all quantities which appear on the right hand-side of Eq.~(\ref{eq:FRGeq}) dwell on the space of composite fields $\Phi$ and therefore represent 3$\times$3 matrices.
Since our main interest is the determination of the spectrum renormalization of fermions due to the Coulomb interaction, we will focus on
the coupling parameters in the fermionic sector of the theory. In the simplest approximation we make the following ansatz for the running effective action
\begin{eqnarray}
\nonumber
{\cal L}^{}_\Lambda[\Phi] &\approx&
-\intop_Q \bar\psi^\dag_Q \left[iq^{}_0+Z^{-1}_\Lambda {\vec h}\cdot{\vec\tau}+\Delta^{}_\Lambda\tau^{}_3\right] \bar\psi^{}_Q\\
\nonumber
&&-\frac{1}{2}\intop_Q F^{-1}(Q)\bar\phi^{}_Q\bar\phi^{}_{-Q}\\
\label{eq:TruncAct}
&&+i\intop_Q\intop_K\bar\psi^\dag_{Q}\bar\psi^{}_{Q+K}\bar\phi^{}_{-K},\;\;
\end{eqnarray}
which takes only the renormalization of the energy gap and of the electronic dispersion into account.
We do not consider the renormalization of the Matsubara frequency since we assume the Coulomb
interaction to be absolutely instantaneous. The inverse bosonic propagator $F^{-1}(Q)$ is defined in Eq.~(\ref{eq:BareBosePropInv}).
For momenta larger than the UV-cutoff $\Lambda^{}_0$ the
action in Eq.~(\ref{eq:TruncAct}) must reproduce the bare action from Eq.~(\ref{eq:HubStrat}). Therefore
the initial conditions are chosen as $Z^{}_{\Lambda_0}=1$ and $\Delta^{}_{\Lambda^{}_0}=\Delta^{}_0$.
Taking functional derivatives with respect to both Grassmanian fields on both sides of Eq.~(\ref{eq:FRGeq}) and putting subsequently $\Phi=0$
we arrive at the RG flow equation for the inverse renormalized fermionic propagator.
For details of its derivation we refer to Refs.~[\onlinecite{Schuetz2005,Schuetz2006}].
If we employ the regularization scheme with the regulator built into the fermionic lines only, this equation can be written in the following algebraic form (note an additional minus sign due to Fermi statistics):
\begin{equation}
\label{eq:FermionicFlow}
\partial^{}_{\Lambda} G^{-1}_\Lambda(Q) = \intop^{}_P \dot{G}^{}_{\Lambda}(P) F(P-Q),
\end{equation}
where $F(Q)$ is the bare Coulomb potential defined in Eq.~(\ref{eq:BareBosePropInv}), and the single scale propagator $\dot{G}^{}_{\Lambda}$ is defined~as
\begin{equation}
\label{eq:SingScPr}
\dot{G}^{}_{\Lambda} = - G^{R}_\Lambda~\partial^{}_\Lambda [G^{-1}_{0,R^{}_\Lambda}]G^{R}_\Lambda.
\end{equation}
We will work within the so-called sharp cutoff regularization scheme~[\onlinecite{Morris1994}]. Then the momentum cutoff is introduced as follows:
\begin{equation}
\label{eq:CutoffPropI}
G^{}_{0,R^{}_\Lambda}(Q)=\Theta(\Lambda<q<\Lambda^{}_0)G^{}_0(Q),
\end{equation}
where $\Theta(\Lambda<q<\Lambda^{}_0)=\Theta(\Lambda^{}_0-q)-\Theta(\Lambda-q)\to\Theta(q-\Lambda)$ as $\Lambda^{}_0\to\infty$. For momenta smaller than the UV-cutoff $\Lambda_0$, the flowing fermionic propagator $G^{R}_\Lambda(Q)$ is
\begin{equation}
\label{eq:FermFullPr}
G^{R}_\Lambda(Q) = -\Theta(q-\Lambda)\frac{iq^{}_0-Z^{-1}_\Lambda {\vec h}\cdot{\vec\tau}-\Delta^{}_\Lambda\tau^{}_3}
{q^{2}_0 + E^2_\Lambda(q)},
\end{equation}
\noindent
and hence the single-scale propagator~[\onlinecite{Morris1994}]
\begin{equation}
\label{eq:FermFullSSPr}
\dot{G}^{}_\Lambda(Q) = \delta(q-\Lambda)\frac{iq^{}_0-Z^{-1}_\Lambda {\vec h}\cdot{\vec\tau}-\Delta^{}_\Lambda\tau^{}_3}
{q^{2}_0 + E^2_\Lambda(q)},
\end{equation}
where we have introduced $E^{}_\Lambda(q)=\displaystyle\sqrt{\Delta^2_\Lambda+\epsilon^2_\Lambda(q)}$
with the renormalized spectra of free fermions $\epsilon^{}_\Lambda(q) = Z^{-1}_\Lambda v q$
for ML and $\epsilon^{}_\Lambda(q)= {(2\mu Z^{}_\Lambda)^{-1} q^2}$ for BL.
For both ML and BL the flow equation for the coupling parameter $\Delta^{}_\Lambda$ is extracted from Eq.~(\ref{eq:FermionicFlow}) in the same way:
\begin{subequations}
\begin{eqnarray}
\label{eq:EqDelta}
\partial^{}_\Lambda\Delta^{}_\Lambda &=&
\frac{1}{4}{\rm Tr}^{}_{2}\left.\left\{\tau^{}_3\partial^{}_\Lambda G^{-1}_\Lambda(Q)\right\}\right|_{Q=0},
\end{eqnarray}
where ${\rm Tr}^{}_{2}$ denotes a trace operator acting on the pseudospin and valley space only. The RG flow equations for the factor $Z^{}_\Lambda$ are extracted differently for ML and BL due to the different scaling of the spectra in these configurations:
\begin{eqnarray}
\label{eq:EqZml}
{\rm ML}: &&
\partial^{}_\Lambda Z^{-1}_{\Lambda} =
\frac{1}{4v}\frac{\partial}{\partial q^{}_i}{\rm Tr}^{}_{2}\left.\left\{\tau^{}_i \partial^{}_\Lambda G^{-1}_\Lambda(Q)\right\}\right|_{Q=0},\;\;\;\;\\
\label{eq:EqZbl}
{\rm BL}: &&
\partial^{}_\Lambda Z^{-1}_{\Lambda} =
\frac{\mu}{4}
\frac{\partial^2}{\partial q^2_1}{\rm Tr}^{}_{2}\left.\left\{\tau^{}_1 \partial^{}_\Lambda G^{-1}_\Lambda(Q)\right\}\right|_{Q=0},
\end{eqnarray}
\end{subequations}
for $i=1,2$~\cite{footnote2}. Introducing the logarithmic flow parameter $\ell~=\log( \Lambda^{}_0/\Lambda)$ we obtain the same flow equation for the gap for both graphene configurations
\begin{subequations}
\begin{eqnarray}
\label{eq:Gap}
\partial^{}_\ell\Delta^{}_\ell &=& \frac{\bar{g}\Delta^{}_\ell\Lambda}{\sqrt{\Delta^2_\ell+\epsilon^{2}_\ell}},
\end{eqnarray}
where $\bar{g}=g^\prime/4\pi$, but different equations for the wave-function renormalization factor:
\begin{eqnarray}
\label{eq:ZetML}
{\rm ML}:&&\partial^{}_\ell Z^{}_\ell = -\frac{1}{2}\frac{\bar{g} Z^{}_\ell\Lambda}{\sqrt{\Delta^2_\ell+\epsilon^2_\ell}},\\
\label{eq:ZetBL}
{\rm BL}:&&\partial^{}_\ell Z^{}_\ell = -\frac{3}{8}\frac{\bar{g} Z^{}_\ell\Lambda}{\sqrt{\Delta^2_\ell+\epsilon^2_\ell}}.
\end{eqnarray}
\end{subequations}
Here we have used the identity $\partial^{}_\ell Z^{-1}_\ell = -Z^{-2}_\ell\partial^{}_\ell Z^{}_\ell$.
The flowing free fermion spectra are
\begin{subequations}
\begin{eqnarray}
\label{eq:FlSpML}
{\rm ML}: && \epsilon^{}_\ell = v Z^{-1}_\ell\Lambda,\\
\label{eq:FlSpBL}
{\rm BL}: && \epsilon^{}_\ell = (2\mu Z^{}_\ell)^{-1}\Lambda^2.
\end{eqnarray}
\end{subequations}
The scaling dimension of the energy (i.e., the dynamical exponent) is defined as $z=1-\eta^{}_\ell$ for ML and $z=2-\eta^{}_\ell$ for
BL. Here, $\eta^{}_\ell$ is referred to as the anomalous dimension which can be obtained from the parameter $Z^{}_\ell$ by
\begin{equation}
\label{eq:AnDimension}
\eta^{}_\ell = -\partial^{}_\ell \log Z^{}_\ell.
\end{equation}
Below we discuss solutions of these equations in gapless and gapped regimes.
\subsection{Gapless regime}
In the gapless regime Eqs.~(\ref{eq:Gap})-(\ref{eq:ZetBL}) are easily solved. The only solution of Eq.~(\ref{eq:Gap}) is the trivial one $\Delta^{}_\ell = \Delta^{}_{\ell=0}=0$, while Eqs.~(\ref{eq:ZetML}) and (\ref{eq:ZetBL}) reduce to
$$
\partial^{}_\ell Z^{-1}_\ell~=~\lambda^{}_{\rm ML}
$$
with $\lambda^{}_{\rm ML}=\bar{g}/(2v)$
for ML and correspondingly
$$
\partial^{}_\ell Z^{-1}_\ell=\lambda^{}_{\rm BL}e^\ell
$$
with $\lambda^{}_{\rm BL}=3\mu \bar{g}/(4 \Lambda^{}_0)$ for BL with solutions:
\begin{eqnarray}
\label{eq:SolZml}
{\rm ML}: && Z^{-1}_\ell = 1 + \lambda^{}_{\rm ML}\ell,\\
\label{eq:SolZbl}
{\rm BL}: && Z^{-1}_\ell = 1-\lambda^{}_{\rm BL}+\lambda^{}_{\rm BL}e^\ell.
\end{eqnarray}
The result of Eq.~(\ref{eq:SolZml}) corresponds to the well-known logarithmic renormalization of the Fermi velocity $v^{}_\ell~=~Z^{-1}_\ell v$ in clean ML graphene due to the Coulomb interaction~[\onlinecite{Glonzalez1999,Mishchenko2007,Mishchenko2008,Son2007,Sheehy2007}].
Similarly, Eq.~(\ref{eq:SolZbl}) describes the renormalization of the electronic band mass $\mu^{}_\ell=Z^{}_\ell\mu$ in BL. At small momenta the band mass decreases proportionally to the momentum $\mu^{}_\ell\propto\Lambda$, i.e. the particles become effectively faster in analogy to ML.
Using Eq.~(\ref{eq:AnDimension}) we obtain expressions for the anomalous dimension
\begin{subequations}
\begin{eqnarray}
\label{eq:EtaML}
{\rm ML}: && \eta^{}_\ell = \lambda^{}_{\rm ML}Z^{}_\ell,\\
\label{eq:EtaBL}
{\rm BL}: && \eta^{}_\ell = \lambda^{}_{\rm BL} Z^{}_\ell e^\ell.
\end{eqnarray}
\end{subequations}
For $\ell\to\infty$ Eq.~(\ref{eq:EtaML}) approaches zero, meaning that the scaling dimension of the energy in ML remains $z=1$ and nothing changes the relativistic behavior of electrons.
In contrast, Eq.~(\ref{eq:EtaBL}) approaches unity in this limit. This means that the scaling dimension of the energy becomes $z=2-\eta^{}_\ell=1$, i.e. the dispersion in BL becomes relativistic with the velocity $v^{}_s=3\bar{g}/8$, i.e. in vacuum $c/v^{}_s\approx 1450$. Therefore, in the absence of a gap in the spectrum of BL the Coulomb interaction attempts to linearize the fermionic dispersion in the vicinity of the nodal points. Similar conclusions have recently been made by Kusminskiy et al.~[\onlinecite{CastroNeto2009}] for finite values of the chemical potential.
Their findings provided a good explanation of recent cyclotron experiments~[\onlinecite{Stormer2008}], where the effects discussed here have been observed.
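The quoted ratio $c/v^{}_s$ follows by simple arithmetic from the definitions given above: with $\bar{g}=g^\prime/4\pi$, $g^\prime=2\pi g$ and $g=\alpha c/2\epsilon$ one has $\bar{g}=\alpha c/4\epsilon$ and $v^{}_s=3\alpha c/32\epsilon$, hence $c/v^{}_s=32\epsilon/3\alpha$. A one-line numerical check:

```python
# Arithmetic check of the quoted ratio c / v_s at the BL fixed point:
# g = alpha*c/(2*eps), g' = 2*pi*g, gbar = g'/(4*pi) = alpha*c/(4*eps),
# v_s = 3*gbar/8  ->  c/v_s = 32*eps/(3*alpha).
alpha = 1.0 / 137.036        # fine structure constant
eps = 1.0                    # vacuum
ratio = 32.0 * eps / (3.0 * alpha)
print(round(ratio))          # prints 1462, consistent with c/v_s ~ 1450

assert 1400 < ratio < 1500
```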
An estimation for the suitable scale below which this effect is observable can be made as follows: The only scale which affects the flow of the band mass in gapless BL graphene can be read off from Eq.~(\ref{eq:SolZbl}) (cf. Fig.~\ref{fig:BandMassBL}):
\begin{equation}
\label{eq:ScaleBL}
\ell^\prime \approx \log\left(\frac{1-\lambda^{}_{\rm BL}}{\lambda^{}_{\rm BL}}\right).
\end{equation}
Choosing $\Lambda^{}_0$ to be equal to the inverse lattice spacing we find for a realistic experimental situation ($\epsilon=1\div4$) $\ell^\prime\approx2.3\div3.8$ and the corresponding momentum scale to be of the order
$k^{}_c = \Lambda^{}_0e^{-\ell^\prime} \approx 1\cdot10^{-2}\div7\cdot10^{-2}$~\AA$^{-1}$.
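The estimate of Eq.~(\ref{eq:ScaleBL}) can be evaluated directly; the sketch below uses a hypothetical $\lambda^{}_{\rm BL}$ chosen inside the range implied by the quoted $\ell^\prime$, and assumes the graphene lattice spacing $a\approx2.46$~\AA\ (a value not stated in the text) for the inverse-lattice-spacing cutoff.

```python
# Illustrative evaluation of l' = log((1 - lam)/lam) and of the
# momentum scale k_c = Lambda_0 * exp(-l').  lam_bl is hypothetical;
# Lambda_0 is taken as the inverse lattice spacing (a ~ 2.46 Angstrom,
# an assumed value).
import math

lam_bl = 0.05                       # hypothetical coupling
Lambda0 = 1.0 / 2.46                # assumed UV cutoff in 1/Angstrom
l_prime = math.log((1.0 - lam_bl) / lam_bl)
k_c = Lambda0 * math.exp(-l_prime)

assert 2.3 < l_prime < 3.8          # inside the quoted range for l'
assert 0.01 < k_c < 0.07            # inside the quoted window (1/Angstrom)
```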
\subsection{Gapped regime}
\begin{figure}[t]
\includegraphics[height=4cm,width=7cm]{Fig1.eps}
\caption{Renormalization of the gap $\Delta^{}_\ell$ in ML because of Coulomb interaction.
The crossover scale $\ell^{}_\ast$ is determined from Eq.~(\ref{eq:ScaleEq}).
The initial value of the gap is $\Delta^{}_0=0.2v^{}_0\Lambda^{}_0$, the dielectric constant $\epsilon=1$. The dashed line shows the Kane/Mele asymptote from Eq.~(\ref{eq:Kane}).}
\label{fig:MLGap}
\includegraphics[height=4cm,width=7cm]{Fig2.eps}
\caption{Renormalization of the Fermi velocity $v^{}_\ell=v^{}_0 Z^{-1}_\ell$ in ML because of Coulomb interaction with (solid line, full solutions of Eqs.~(\ref{eq:Gap}) and (\ref{eq:ZetML})) and without a gap (dashed line, Eq.~(\ref{eq:SolZml})).
The crossover scale $\ell^{}_\ast$ is determined from Eq.~(\ref{eq:ScaleEq}).}
\label{fig:FermiVelocity}
\end{figure}
Naively, for $\Delta^{}_\ell\ll\epsilon^{}_\ell$ we can neglect $\Delta^{}_\ell$ in the denominator. For ML we obtain from Eqs.~(\ref{eq:Gap})
\begin{eqnarray}
\label{eq:DlAs1ML}
\partial^{}_\ell\log\Delta^{}_\ell = 2\lambda^{}_{\rm ML}Z^{}_\ell,
\end{eqnarray}
which together with Eq.~(\ref{eq:SolZml})
reproduces the Kane/Mele result~[\onlinecite{Kane2006}]:
\begin{equation}
\label{eq:Kane}
\Delta^{}_\ell = \Delta^{}_0(1+\lambda^{}_{\rm ML}\ell)^2 = \Delta^{}_0 Z^{-2}_\ell,
\end{equation}
with the apparently logarithmically diverging gap. However, Eq.~(\ref{eq:Kane}) suggests that at large $\ell$ denominators in Eqs.~(\ref{eq:Gap}) and (\ref{eq:ZetML}) are dominated by the gap, i.e. $E^{}_{\ell\gg\ell^{}_\ast}=\sqrt{\Delta^2_\ell+\epsilon^{2}_\ell}\sim \Delta^{}_\ell$,
where the crossover scale $\ell^{}_\ast$ can be determined from the condition
\begin{equation}
\label{eq:ScaleEq}
\Delta^{}_{\ell^{}_\ast}\approx\epsilon^{}_{\ell^{}_\ast},
\end{equation}
which turns out to be a nonlinear algebraic equation if we take Eqs.~(\ref{eq:FlSpML}), (\ref{eq:SolZml}) and (\ref{eq:DlAs1ML}) into account. However, Eq.~(\ref{eq:ScaleEq}) can be uniquely solved numerically. The solution of Eq.~(\ref{eq:Gap}) in this case becomes
\begin{equation}
\label{eq:Dlrg}
\Delta^{}_\ell \approx \Delta^{}_{\ell^{}_\ast} + \bar{g}\Lambda^{}_\ast(1-e^{-\ell}).
\end{equation}
The physical gap is obtained for $\ell\to\infty$:
\begin{equation}
\Delta^{}_c = \Delta^{}_{\ast}+{\bar{g}\Lambda^{}_\ast}\approx\epsilon^{}_{\ast}+{\bar g\Lambda^{}_\ast}.
\end{equation}
Therefore the Coulomb interaction in ML supports the gap once it is opened, independently of the bare gap magnitude. A typical flow of the gap parameter in ML is shown in Fig.~\ref{fig:MLGap}. At the same scale the logarithmic growth of the Fermi velocity stops and it also stabilizes at the finite value $v^{}_c \approx v\sqrt{1+\bar{g}\Lambda^{}_\ast/\Delta^{}_\ast}$ as depicted in Fig.~\ref{fig:FermiVelocity}.
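The nonlinear crossover condition Eq.~(\ref{eq:ScaleEq}) can be solved, e.g., by bisection. The sketch below works in units $v\Lambda^{}_0=1$ with $\Delta^{}_0=0.2$ (matching the figure captions) and a hypothetical $\lambda^{}_{\rm ML}$, using the Kane/Mele asymptote for $\Delta^{}_\ell$ and Eq.~(\ref{eq:FlSpML}) for $\epsilon^{}_\ell$.

```python
# Solve Delta_{l*} = eps_{l*} with Delta_l = Delta0*(1+lam*l)^2 and
# eps_l = v*Lambda0*exp(-l)*(1+lam*l).  Dividing out (1+lam*l) gives
# Delta0*(1+lam*l) = exp(-l) in units v*Lambda0 = 1.
import math

Delta0, lam = 0.2, 0.4              # illustrative parameters

def f(l):
    return Delta0 * (1.0 + lam * l) - math.exp(-l)

lo, hi = 0.0, 10.0                  # f(lo) < 0 < f(hi): brackets the unique root
for _ in range(60):                 # bisection
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
l_star = 0.5 * (lo + hi)

assert 0.0 < l_star < 10.0
assert abs(f(l_star)) < 1e-12       # the crossover condition is satisfied
```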
\begin{figure}[t]
\includegraphics[height=4cm,width=7cm]{Fig3.eps}
\caption{Renormalization of the gap $\Delta^{}_\ell$ in BL because of Coulomb interaction. The crossover scale $\ell^{}_\ast$ is determined from Eq.~(\ref{eq:ScaleEq}). The initial value of the gap is $\Delta^{}_0=0.2v^{}_0\Lambda^{}_0$, the dielectric constant $\epsilon=1$. The dashed line shows the large kinetic energy asymptote from Eq.~(\ref{eq:DlAs1BL}).}
\label{fig:GapBLG}
\includegraphics[height=4cm,width=7cm]{Fig4.eps}
\caption{The renormalization of the band mass $\mu^{}_\ell = \mu^{}_0Z^{}_\ell$ due to the Coulomb interaction. The solid line shows the flow of the band mass of BL graphene with a gap. Dashed line shows asymptotic renormalization without a gap. In this case the electronic band mass scales to zero. This leads to the linear scaling of the spectrum. The scale $\ell^\prime$ is found from Eq.~(\ref{eq:ScaleBL}).}
\label{fig:BandMassBL}
\end{figure}
The solutions for the gap in BL are similar in spirit but with an extra fixed point. For $\Delta^{}_\ell\ll\epsilon^{}_\ell$ we obtain
\begin{equation}
\label{eq:DlAs1BL}
\Delta^{}_\ell = \Delta^{}_0 Z^{-8/3}_\ell
\end{equation}
and therefore $\Delta_{\ell\to\infty}\to\infty$. The solution for $\Delta^{}_\ell\gg\epsilon^{}_\ell$ is formally given by Eq.~(\ref{eq:Dlrg}), too, such that the flow of the gap stabilizes at some finite value $\Delta^{}_\ast$ (cf. Fig.~\ref{fig:GapBLG}). On the other hand, the presence of the gap stabilizes the flow of the wave function renormalization factor $Z^{}_\ell$ and therefore the flow of the electronic band mass $\mu^{}_\ell = \mu^{}_0Z^{}_\ell$, which in this case remains finite (cf. Fig.~\ref{fig:BandMassBL}). The scaling of the kinetic energy is in this case also preserved and remains equal to 2.
In order to shed some light on the topological properties of the RG flow in the parametric space it is convenient to redefine Eqs.~(\ref{eq:ZetML}) and (\ref{eq:ZetBL}) in terms of kinetic energy and introduce dimensionless parameters by expressing both the gap and kinetic energy in units of Coulomb energy:
\begin{subequations}
\begin{eqnarray}
\label{eq:RescGap}
\bar{\Delta}^{}_\ell &=& \frac{\Delta^{}_\ell}{\bar{g}\Lambda},\\
\label{eq:RescSpect}
\bar{\epsilon}^{}_\ell &=& \frac{\epsilon^{}_\ell}{\bar{g}\Lambda},
\end{eqnarray}
\end{subequations}
with $\epsilon^{}_\ell$ defined in Eq.~(\ref{eq:FlSpML}) for ML and in Eq.~(\ref{eq:FlSpBL}) for BL. For both ML and BL we arrive at the same equation for the rescaled gap
\begin{subequations}
\begin{eqnarray}
\label{eq:GapFlowRescaled}
\partial^{}_\ell\bar\Delta^{}_\ell = \displaystyle\bar\Delta^{}_\ell+\frac{\bar\Delta^{}_\ell}{\sqrt{\bar\epsilon^{2}_\ell+\bar\Delta^{2}_\ell}},
\end{eqnarray}
while equations for the rescaled kinetic energy are different due to different scaling behavior of spectra:
\begin{eqnarray}
\label{eq:ZFlowRescaledMLG}
{\rm ML}: && \partial^{}_\ell\bar\epsilon^{}_\ell = \displaystyle \frac{1}{2} \frac{\bar{\epsilon}^{}_\ell}{\sqrt{\bar{\epsilon}^2_\ell+\bar{\Delta}^2_\ell}},\\
\label{eq:ZFlowRescaledBLG}
{\rm BL}: && \partial^{}_\ell\bar\epsilon^{}_\ell = \displaystyle\frac{3}{8} \frac{\bar{\epsilon}^{}_\ell}{\sqrt{\bar{\epsilon}^2_\ell+\bar{\Delta}^2_\ell}} - \bar{\epsilon}^{}_\ell.
\end{eqnarray}
\end{subequations}
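For the gapless BL case ($\bar\Delta^{}_\ell=0$) Eq.~(\ref{eq:ZFlowRescaledBLG}) reduces to $\partial^{}_\ell\bar\epsilon^{}_\ell = 3/8 - \bar\epsilon^{}_\ell$, and the approach to the fixed point $\bar\epsilon^{}_\ast=3/8$ can be verified by a direct forward-Euler integration; a minimal sketch:

```python
# Fixed point of the rescaled gapless BL flow: with Delta_bar = 0 the
# flow is d(eps_bar)/dl = 3/8 - eps_bar, with fixed point eps_bar* = 3/8.
eps_bar = 5.0                 # arbitrary initial dimensionless kinetic energy
dl = 1e-3
for _ in range(20000):        # flow down to l = 20
    eps_bar += (3.0 / 8.0 - eps_bar) * dl

assert abs(eps_bar - 3.0 / 8.0) < 1e-6   # converged to the fixed point
```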
\begin{figure}[t]
\includegraphics[width=8cm]{Fig5.eps}
\caption{Schematic RG flow for both graphene configurations in the space spanned by the dimensionless kinetic energy $\bar\epsilon^{}_\ell$ and gap parameter $\bar\Delta^{}_\ell$.}
\label{fig:FlowDiagMLG}
\end{figure}
The flow in the parametric space is schematically shown in
Fig.~\ref{fig:FlowDiagMLG}. The fixed points (FPs)
are obtained by setting the right-hand sides of Eqs.~(\ref{eq:GapFlowRescaled})-(\ref{eq:ZFlowRescaledBLG})
to zero and solving the emerging system of algebraic equations. For ML graphene the only unstable fixed point is at $\bar\Delta^{}_\ell=0$ and $\bar\epsilon^{}_\ell=0$. From Eq.~(\ref{eq:RescSpect}) it follows that this fixed point can be reached if
\begin{equation}
\bar\epsilon^{}_\ell = \frac{vZ^{-1}_\ell}{\bar{g}}\to0.
\end{equation}
Since $Z^{-1}_\ell$ flows to a finite value, this can only be satisfied if $\bar{g}\to\infty$. This is the case of the famous quantum phase transition discussed in~Refs.~[\onlinecite{Son2007},\onlinecite{Sheehy2007}]. The instability of the fixed point means that the flow can leave it in every direction. For any finite initial value of the gap it develops an infinitely large value, which indicates a finite physical gap. In contrast to ML graphene, there is a nontrivial fixed point at finite dimensionless kinetic energy $\bar\epsilon^{}_\ast=3/8$ in the case of gapless BL graphene. This fixed point is characterized by the anomalous scaling dimension $\eta^{}_\ell=1$, i.e. the spectrum of BL becomes linear in this case. However, this fixed point is unstable with respect to the finite-gap direction, i.e. once a small gap is opened in the spectrum the flow cannot reach this fixed point anymore but runs towards an infinite value. On the other hand, since the numerical value of $\bar\epsilon^{}_\ast$ at this fixed point suggests a strong-coupling regime, we might need to go beyond the truncation of Eq.~(\ref{eq:TruncAct}) and additionally take the flow of the $\bar\psi\bar\psi^\dag\bar\phi$ vertex into account.
\section{Conclusions}
\label{sec:conclusions}
In conclusion, we have studied both ML and BL graphene with Coulomb interaction and a uniform gap by employing a renormalization-group technique.
In contrast to previous renormalization-group approaches to gapped ML graphene [\onlinecite{Juricic2009,Mishchenko2007,Sheehy2007,Kane2006}], which predict a logarithmically divergent renormalization of the gap and of the Fermi velocity, our results suggest a saturation of the RG flows at an intrinsic scale related to the gap. This saturation takes place for both ML and BL graphene, for any finite initial value of the gap no matter how small it is. Since measured quantities should be finite, this might be suggestive of a gap in the spectrum of both configurations at energies below 0.1 eV.
In ML graphene the Coulomb energy exhibits the same scaling as the kinetic energy. Once a spectral gap is opened it creates an additional length scale which dominates the physics at small momenta. This scale cuts off the logarithmic divergence of the Fermi velocity and gap itself such that the flow of both quantities stabilizes at the finite value.
For gapless BL graphene we have shown that the Coulomb interaction renormalizes the electronic band mass, which scales to zero at small momenta. This leads to the paradoxical result that the electronic spectrum should become linear close to the charge neutrality point. This regime corresponds to a stable fixed point, and the flow should therefore inevitably run into this point.
The quadratic scaling of the spectrum is rescued by the presence of the gap, since for any finite starting values of the gap the flow of the band mass always saturates at a finite value.
\section*{ACKNOWLEDGEMENTS}
We gratefully acknowledge useful discussions with S.~Savel'ev, B.~D\'{o}ra, and A.~Sedrakyan. We thank A.~H.~Castro Neto for bringing Refs.~[\onlinecite{Stormer2008}] and [\onlinecite{CastroNeto2009}] to our attention. This work has been supported by the DFG grant ZI 305/5-1.
\vspace{5mm}
In the standard $\Lambda$CDM scenario, the first structures to form in
the universe are low-mass dark haloes, that progressively merge to
produce
larger and more massive structures \citep[e.g.][]{1984Natur.311..517B}.
Baryons then infall into the dark potential wells, forming
stars and rotationally supported galaxies \citep{1980MNRAS.193..189F}.
Merging is considered one of the main mechanisms for assembling mass in
galaxies and triggering the formation of new stars
\citep{1993MNRAS.264..201K,1998ApJ...498..504B}.
\smallskip
Dark matter haloes and galaxies grow by both mergers and the accretion of
a diffuse gas component (smooth accretion).
In recent years, advances in computational power have allowed us to study
in detail the growth of dark matter haloes, which can be probed by
numerical simulations \citep[e.g.,][]{2008ApJ...679.1260M,
2010MNRAS.401.2245F, 2010ApJ...719..229G, 2011MNRAS.tmp.1293T,
2011MNRAS.413.1373W}, but the role of smooth accretion remains
uncertain.
Galaxy mass assembly has also been investigated using $N$-body
simulations including hydrodynamics
\citep[e.g.,][]{2002ApJ...571....1M,2005A&A...441...55S} and
semi-analytical models \citep[SAMs, e.g., ][]{1991ApJ...379...52W,
1999MNRAS.305....1S, 1999MNRAS.303..188K, 2000MNRAS.319..168C,
2003MNRAS.343...75H, 2003MNRAS.338..913H, 2006MNRAS.370..645B,
2006MNRAS.372..933N, 2007MNRAS.375....2D,2011A&A...533A...5C}.
\citet{2003MNRAS.345..349B} questioned the necessity of shock heating
and found a halo mass threshold of $\simeq 10^{11} \ensuremath{\mathrm{M}_\odot}$
below which the pressure of the shock-heated gas is too low to support
it against its own gravity and the pressure of the infalling material,
so that the shock is unstable.
High resolution simulations based on either particles
\citep{2005MNRAS.363....2K, 2009MNRAS.395..160K, 2009ApJ...694..396B,
2011MNRAS.417.2982F, 2011MNRAS.tmp..554V}
or a grid~\citep{2008MNRAS.390.1326O, 2009Natur.457..451D}, have
emphasised the coexistence of two modes of gas accretion.
They have demonstrated that hot accretion is spherical, isotropic, and dominates
at low redshift and for massive systems, while the cold mode is
anisotropic, coming from filaments, and is most significant for lower mass
galaxies and at high redshift.
Gas accretion by means of cold streams leads to the formation of
clumps in the disc that fall towards the centre of the galaxy and
merge to form a spheroid \citep{2009Natur.457..451D,
2009MNRAS.397L..64A, 2010MNRAS.404.2151C}.
High-redshift ($z\simeq 2$) star-forming galaxies (SFGs) have been
observed by integral field spectroscopy
\citep{2006Natur.442..786G,2009ApJ...706.1364F}.
They seem to contain discs, which is incompatible with major mergers, but
are very clumpy.
This may indicate that they have a high gas fraction, and is
consistent with the
theoretical predictions of cold accretion \citep{2009Natur.457..451D}.
The question now arises of the relative roles of merging and
external accretion in galaxy mass assembly and formation, and whether
some combinations of these processes can explain the observed
anti-hierarchical evolution of galaxies,
which is usually called the downsizing process
\citep[e.g.][]{1996AJ....112..839C}.
The main focus of the present paper is to quantify, through the
analysis of cosmological $N$-body and hydrodynamical simulations, the
importance of ``smooth accretion'' relative to merger rates in the mass
assembly of galaxies.
\smallskip
\label{simu}
\begin{table*}[ht!]
\begin{center}
\begin{tabular}{lcccc}
\toprule
Zoom level & 0 & 1& 2& 3 \\
\otoprule
$m_{\text{DM}}$ (\ensuremath{\mathrm{M}_\odot}) & $7.27\times 10^{10}$ &$9.09\times
10^{9}$& $1.14\times 10^{9}$&$1.42 \times 10^{8}$ \\
$m_{\text{b}}$ (\ensuremath{\mathrm{M}_\odot}) & $1.54\times 10^{10}$ &$1.93 \times
10^{9}$&$2.41 \times 10^{8}$& $3.01 \times 10^{7}$\\
$L_\text{box}$ (Mpc) & 137.0&68.49&34.25& 17.12\\
$\ensuremath{\varepsilon}_\text{soft}$ (kpc) & 50.0 & 25.0 & 12.5 & 6.25 \\
$\ensuremath{\mathrm{d}} t$ (Myr)& 20 & 10 & 5 & 2.5 \\
$\Delta t$ (Gyr)& 0.2 & 0.2 & 0.2 & 0.1 \\
$z_\text{end}$ & 0 &0 &0&0.46\\
\bottomrule
\end{tabular}
\caption{Parameters of the multi-zoom simulation used here for the
four levels of zoom. $L_\text{box}$ is the cube length for level 0
zoom, and the diameter for higher level zoom; $\ensuremath{\varepsilon}_\text{soft}$
is the (comoving) softening parameter; $\ensuremath{\mathrm{d}} t$ is the timestep; and
$\Delta t$ is the separation between two consecutive outputs.
\label{tab:sim_zoom}}
\end{center}
\end{table*}
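The table entries can be cross-checked against the expected refinement pattern: each zoom level halves the comoving length scales, so the particle masses decrease by a factor of $2^3=8$ between consecutive levels. A minimal sketch with the values copied from Table~\ref{tab:sim_zoom}:

```python
# Consistency check of the zoom-level parameters: lengths halve and
# particle masses drop by 2^3 = 8 between consecutive zoom levels.
m_dm = [7.27e10, 9.09e9, 1.14e9, 1.42e8]    # DM particle mass (M_sun)
m_b = [1.54e10, 1.93e9, 2.41e8, 3.01e7]     # baryonic particle mass (M_sun)
L = [137.0, 68.49, 34.25, 17.12]            # box/zoom size (Mpc)

for masses in (m_dm, m_b):
    for a, b in zip(masses, masses[1:]):
        assert abs(a / b - 8.0) < 0.1       # factor-8 mass refinement
for a, b in zip(L, L[1:]):
    assert abs(a / b - 2.0) < 0.01          # length scales halve
```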
Both $N$-body simulations and SAMs have been used to
describe the hierarchical process, by building merging trees tracing
the formation of a given structure.
The Extended Press-Schechter (EPS) formalism
\citep{1991ApJ...379..440B,1993MNRAS.262..627L} has been a remarkably
successful approximation to obtain mass distributions and merging
histories.
However, compared to the results derived from $N$-body simulations, there are
fundamental differences, owing to the simplifying hypothesis of a collapse
independent of the environment, the total mass of the
structure, and its shape, although a significant improvement was made
by considering ellipsoidal collapse
\citep{2001MNRAS.323....1S,2009MNRAS.397..299M}.
These simplifying assumptions treat the hierarchical collapse as a
Markov process independent of the merging history, while in reality it
is not.
Tracing merging trees from cosmological simulations, although more
realistic, is not a trivial task either.
Many new algorithms have been developed and published, with
complementary strengths and weaknesses (the Friend-of-Friends
algorithm (FOF),
\citealt{1985ApJ...292..371D},
SUBFIND,
\citealt{2001MNRAS.328..726S},
AdaptaHOP,
\citealt{2004MNRAS.352..376A};
\citealt{2009A&A...506..647T}, hereafter T09), and
each relies on its own definitions and conventions to either avoid
anomalies, such as the blending of haloes and the resulting unrealistic
histories, or to deal with substructures \citep{2003MNRAS.345.1200S,
2010MNRAS.404..502G}.
In this paper, we use the AdaptaHOP algorithm to detect dark
matter haloes and baryonic galaxies.
\smallskip
Cosmological simulations show strikingly that bound structures are not
the main component of the large-scale morphology of the universe, but
that the filamentary aspect is instead essential and the smooth component of
filaments could contain a large fraction of the mass (both dark matter (DM)
and baryons).
Several methods have tried to quantify the mass in the
various components \citep{2005A&A...434..423S,2010A&A...510A..38S}, or
to identify the structure of the filamentary skeleton
\citep{2006MNRAS.366.1201N,2009MNRAS.393..457S,2010MNRAS.404L..89A}. It
is essential to estimate the smooth accretion mass fraction on any
scale in $N$-body
simulations, to more clearly understand the relative role of mergers and
accretion in galaxy formation.
The problem is tightly related to the
amount of substructures and their evolution, and requires precise
definitions \citep[e.g.][]{2010MNRAS.404..502G}.
\smallskip
When the initial density fluctuations are expressed by a power-law
spectrum $P(k) \propto k^n$, the variance of the power spectrum on
mass scales $M$ is proportional to $M^{\frac{-(n+3)}{3}}$; the
low-mass structures should then collapse first, when $n > -3$.
Since this is the case in our universe, hierarchical clustering is
expected to occur from bottom up.
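This scaling can be recovered in one line by smoothing the power
spectrum on the scale $k_M \propto M^{-1/3}$ enclosing the mass $M$:
```latex
\[
  \sigma^2(M) \;\propto\; \int_0^{k_M} P(k)\, k^2\, \mathrm{d}k
  \;\propto\; k_M^{\,n+3}
  \;\propto\; M^{-\frac{n+3}{3}},
  \qquad k_M \propto M^{-1/3},
\]
```
so that the variance increases towards small masses for $n > -3$, and
low-mass structures indeed collapse first.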
The fact that low-mass dark haloes are found statistically to be the
first to
assemble, has been confirmed both in $N$-body simulations or through the
EPS formalism \citep[e.g.][]{1993MNRAS.262..627L,1997MNRAS.292..835R}.
Their growth is quantified by the mass of the main progenitor, or main
sub-halo that will later merge into it.
Their epoch of formation can then be defined as the time at which the
mass of the main
progenitor is half the present mass of the halo.
However, it is possible to find the opposite trend, when the number of
merged haloes more massive than a fixed $M_\text{min}$ is
considered.
More massive haloes have indeed already assembled most of
their mass in terms of substructures more massive than $M_\text{min}$,
while less massive structures are less advanced at the same epoch
\citep{1991MNRAS.248..332B,2006MNRAS.372..933N}.
This trend can be called downsizing, which is similarly observed for
the dark matter, when a minimum mass $M_\text{min}$ is introduced.
\smallskip
Interestingly, and for other reasons, the downsizing observed in the
baryonic galaxies is also explained when a halo-mass floor
$M_\text{min}$, below which accretion is quenched, is introduced
\citep{2009MNRAS.397..299M, 2010ApJ...718.1001B}.
The cold gas accretion would then be limited to a particular mass
interval, the maximum mass being reached when the infalling gas is
sufficiently shock-heated.
Star formation, linked essentially to the cold
gas available for accretion, then occurs only in this narrow mass
interval.
At early times, the more massive haloes reach this floor earlier, and
the mass interval is then crossed more rapidly.
Star formation in massive galaxies occurs earlier and over a shorter
period of time, as the abundance of their elements indicates.
The value of $M_\text{min}$ required to account for observations is of
the order of $10^{11}$~\ensuremath{\mathrm{M}_\odot}\ \citep{2010ApJ...718.1001B}.
Gas exhaustion can then account for any decline in the star formation
activity \citep[e.g.][]{2007ApJ...660L..47N}.
However, no physical process can explain the existence of this mass
floor.
Another possible explanation could be that stars form rapidly
in low-mass haloes early in the universe, and that most of these small
galaxies then merge to form more massive ones, which are observed to
be passively evolving on the red sequence today.
In this paper, we wish to test this possible scenario, by analysing
$N$-body hydrodynamical multi-zoom simulations.
\smallskip
The numerical techniques and the simulation used are described in
\S~\ref{simu}. The derivations of the bound structure for both dark
matter and baryons are presented in \S~\ref{tree}.
In \S~\ref{resu}, we show physical results of our simulation.
\S~\ref{acc} describes the results in terms of the fraction of mass
accreted by galaxies from the smooth component versus mergers, the
influence of the environment being detailed.
Our considerations of downsizing are presented in \S~\ref{sec:downsiz}.
The discussion in \S~\ref{discuss} compares the various scenarios for
explaining the downsizing process, and our conclusions are drawn in
\S~\ref{conclu}.
\section{Simulations}
\subsection{Techniques}
We use a multi-zoom simulation based on a TreeSPH
code~\citep{1989ApJS...70..419H} that is described in
\citet{2002A&A...388..826S}.
Simulating galaxy formation in a cosmological framework involves a
wide dynamical range, to simulate a large enough simulation
box and to reach a high resolution.
We use here the multi-zoom technique described in
\citetalias{2005A&A...441...55S}.
The cosmological parameters used in this simulation are taken from
WMAP~3 results
$(\Omega_\text{b},\Omega_\text{m},\Omega_\Lambda,h,\sigma_8,n) =
(0.042,0.24,0.76,0.73,0.75,0.95)$.
We start from an initial low-resolution simulation, referred to as
the ``level 0 zoom'', consisting of $128^3/2$ DM particles and $128^3/2$ gas
particles, which has mass resolutions of $m_{0,\text{DM}}\simeq 7.2\times
10^{10}$~\ensuremath{\mathrm{M}_\odot}{} and $m_{0,\text{b}}\simeq 1.54\times 10^{10}$~\ensuremath{\mathrm{M}_\odot}{}
and a force resolution $\ensuremath{\varepsilon} = 50$~kpc, in a cubic box of length $L_0
= 137$~Mpc (comoving).
We
resimulate a sub-region of the original volume at a higher resolution.
Tidal fields and the inflow of particles into and the outflow of
particles away from the region of interest are recorded at every
timestep, and the resimulation is run with these boundary conditions.
At each level, we zoom in by a factor of two, improving the mass
resolution by a factor of eight, i.e. a low resolution particle at
level $N-1$ zoom
becomes eight high resolution particles at level $N$.
This technique enables any shape of zoom region, and we chose
spherical regions.
We used three levels of zoom, and ended with a spherical region of radius
$R_3 = 8.56$~Mpc, a mass resolution of $m_{3,\text{b}} = 3.01 \times
10^7 $~\ensuremath{\mathrm{M}_\odot}{}, $m_{3,\text{DM}} = 1.42\times 10^8 $~\ensuremath{\mathrm{M}_\odot}{}, and a
force resolution $\ensuremath{\varepsilon} = 6.25$~kpc (comoving).
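The quoted level-3 values follow directly from the factor-of-two
refinement per level; a minimal sketch (function and variable names are
ours; masses in solar masses, softening in comoving kpc):

```python
def zoom_resolution(level, m0_dm=7.2e10, m0_b=1.54e10, eps0=50.0):
    """Mass and force resolution at a given zoom level: each level
    halves the linear scale, so particle masses drop by 2**3 = 8
    and the softening length by 2 (defaults from the level-0 run)."""
    f = 8 ** level
    return m0_dm / f, m0_b / f, eps0 / 2 ** level

# level 3: DM ~ 1.4e8 Msun, baryons ~ 3.0e7 Msun, eps = 6.25 kpc
m_dm, m_b, eps = zoom_resolution(3)
```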
\smallskip
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{img/hist_h.eps}
\caption{ Distribution of the SPH smoothing length $h$ with
respect to the gravitational softening parameter $\ensuremath{\varepsilon}$ at $t =
9$~Gyr in the level 3 zoom.
\label{fig:hist_h}
}
\end{center}
\end{figure}
These $128^3/2$ particles of mass $m_{0,\text{DM}}$ and $m_{0,\text{b}}$
at level~0 give the same mass resolution as a simulation of $128^3\times
8^3/2 = 1024^3/2$ particles of mass $m_{3,\text{DM}} \simeq 1.42\times
10^{8}$~\ensuremath{\mathrm{M}_\odot}{} and $m_{3,\text{b}}
\simeq 3.01\times 10^{7}$~\ensuremath{\mathrm{M}_\odot}{}, but focused on a smaller volume of
radius 8.56~Mpc at level 3.
This technique enables us to simulate galaxies with a fairly high resolution
starting with a reasonably sized cosmological box, at a smaller CPU cost.
\smallskip
One of the main characteristics of this technique is that, except at
the zeroth level of zoom, the number of particles does not remain
constant during the whole simulation.
At higher levels of zoom, a particle at timestep $i$ inside the level
$N$ box, but outside the level $N+1$ box, can indeed enter the level
$N+1$ box at timestep $i+1$ and be split into eight high-resolution
particles.
Particles with unrecorded histories thus enter the box, and special
care must be taken when establishing particle identities.
Because of this increasing number of particles, the third level of
simulations could not be run further than $t=9.1$~Gyr, or $z =
0.46$, but lower levels of zoom reached $z= 0$.
\smallskip
At the third level of zoom, we end up with 90 snapshots,
sampled every 100~Myr from $t=0.2$~Gyr to $t=9.1$~Gyr, which
enables us to build the merger tree of structures, while at the
three lower levels of zoom we have 70 snapshots, sampled every
200~Myr from $t=0.2$~Gyr to $t = 14$~Gyr.
The latter can be used for a resolution study (see
section~\ref{sect:resol}).
The properties of each level of zoom are summarised in
table~\ref{tab:sim_zoom}.
\subsection{Physical recipes and initial conditions}
\label{sec:ic}
While collisionless particles, namely stars and dark matter, undergo
only gravitational forces and are treated by a tree algorithm, gas
dynamics is treated by smooth particle hydrodynamics (SPH).
Additional recipes are needed to mimic subgrid physics such as star
formation and feedback.
Our physical treatment is described in \citet{2002A&A...388..826S},
{but in the present paper we only use the SPH phase and not the
cold and clumpy gas that was described by sticky particles. }
The SPH gas is treated with the same equation of state and the same
viscosity prescription.
The range of temperatures is $800-2\times 10^6$~K.
Any dependence of cooling on metallicity is ignored, and only a primordial
metallicity ($10^{-3}$~Z$_\odot$) is considered.
A unique timestep per zoom level is adopted, which is respectively 20,
10, 5, and 2.5~Myr for levels 0 to 3.
The softening length for gravity is respectively 50, 25, 12.5, and
6.25~kpc (comoving), and we checked that the SPH smoothing length is
not far shorter than the softening length, as shown
on the histogram of Fig.~\ref{fig:hist_h} at $t = 9$~Gyr in the level
3 zoom.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8cm]{img/rhom.eps}
\caption{ Evolution of the mean comoving density of the box with
respect to the cosmic density
$\rho_{i,\text{box}}(t)/(\bar\rho_i)$ for the four zoom
levels. Level 0 is shown as a solid line, level 1 a dashed line,
level 2 dash-dotted and level 3 dotted.
The box in the last snapshot at level 3 has a density of 14 times
the cosmic density, for $i\in$ (DM, baryons).
\label{fig:rhom}
}
\end{center}
\end{figure}
The radiative cooling term $\Lambda$ is taken from the normalised
tables of \citet{1993ApJS...88..253S} modelling atomic
absorption-line cooling from $10^4$~K to $10^{8.5}$~K.
The background ultraviolet (UV) radiation field is modelled by a
constant uniform heating term $\Gamma_\text{UV} = 10^{-24}$~erg~s$^{-1}$.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=17cm]{img/full_t91_4pan.eps}
\caption{View of the third zoom level of the simulation.
\emph{Top}: Gas (colour-coded by temperature, logarithmic scale
from 800 to
$1.4\times 10^6$~K), and DM (right). \emph{Bottom:}
Structures and substructures detected by AdaptaHOP. \emph{Left}:
baryonic galaxies and satellites, \emph{right}: DM haloes and
subhaloes.
Haloes and galaxies are represented in dark and bright blue and
green; subhaloes and satellites in yellow, orange, magenta, red,
and white.
The white bar indicates the comoving length-scale.
\label{fig:simu}
}
\end{center}
\end{figure*}
Star formation is modelled by a Schmidt law with a star formation rate
of
\begin{equation}
\label{eq:sf}
\deriv {\rho_*} t = C \rho_\text{gas}^n,
\end{equation}
with $n = 1$.
It is applied to gas particles with densities higher than a density
threshold of
\begin{equation}
\label{eq:thr}
\rho_\text{min} =3 \times 10^{-2}\,\text{at cm}^{-3}.
\end{equation}
Gas particles form stars, and thus carry a stellar fraction within them.
When this fraction reaches a given threshold (set to 20\%), we
search among their neighbours to determine whether there is enough
material to form
a full star particle, \emph{i.e.} whether the sum of the star
fractions among the neighbouring gas particles is greater than 1.
If this is the case, the particle is turned into a star particle, and
the fraction of stars in the neighbouring particles becomes gas again.
In this way, the stellar fraction within a gas particle remains low,
which prevents stars from following the gas dynamics.
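As an illustration, the spawning rule just described can be sketched as
follows; the per-particle stellar fractions and neighbour lists are an
assumed, simplified data layout, not the actual structures of the
TreeSPH code:

```python
def try_spawn_star(frac, neighbours, i, threshold=0.2):
    """Sketch of the star-spawning rule: once the stellar fraction of
    gas particle i exceeds the threshold, a full star particle is
    formed if the neighbourhood holds enough stellar material; the
    neighbours' stellar fractions then turn back into gas."""
    if frac[i] < threshold:
        return False
    pool = frac[i] + sum(frac[j] for j in neighbours[i])
    if pool < 1.0:
        return False          # not enough material for a full particle
    for j in neighbours[i]:
        frac[j] = 0.0         # stellar content returns to gas
    frac[i] = 1.0             # particle i becomes a star particle
    return True
```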
Kinetic feedback from supernovae (SNe) is also included.
Stars more massive than 8~\ensuremath{\mathrm{M}_\odot}{} are assumed to die as SNe,
releasing an energy of $10^{48}$~erg~\ensuremath{\mathrm{M}_\odot}$^{-1}$.
The released energy is distributed assuming a Salpeter initial mass
function, and an
efficiency parameter of 6\%.
Particles within the smoothing length of the former gas particle
receive a velocity kick in the radial direction.
We note that in this study, no AGN feedback is considered, which
leads to an overcooling problem and to high stellar masses in the more
massive haloes (see \S~\ref{sec:agn}).
\smallskip
{Initial conditions were obtained using Grafic
\citep{2001ApJS..137....1B} at the highest resolution ($1024^3$
particles) for level 3, and were then undersampled to build the
initial conditions of low-resolution levels. }
We ran the level 0 simulation and ran a FOF-like algorithm to detect
the haloes to be resimulated.
We chose the most massive halo of about $10^{15}\ensuremath{\mathrm{M}_\odot}$ at $z=0$, thus
all this work is done in a rich environment.
Figure~\ref{fig:rhom} shows the evolution of the mean density,
normalised by the cosmic density, for each zoom level.
Level 0 is shown as a solid line, level 1 a dashed line, level 2 a
dash-dotted line, and level 3 a dotted line.
\section{Building merger trees}
\label{tree}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.5cm]{img/full_t91_stars.eps}
\caption{View of the third zoom level of the simulation (continued):
stars.
\label{fig:simu_stars}
}
\end{center}
\end{figure}
Deriving merger trees from $N$-body simulations is not an easy task,
because there is still a lot of freedom in defining halo and subhalo
masses, and identifying both progenitors and sons in merger trees.
There are two essential steps in building merger trees: detecting the
structures, and linking them from one timestep to the other.
A ``bad'' structure detection {results in} a bad
definition of the detected structures, and makes it impossible to
extract results.
\smallskip
We used here the AdaptaHOP algorithm, and we followed the rules of
\citetalias{2009A&A...506..647T} defining the haloes and subhaloes
hierarchy as well as the merger history.
\smallskip
In addition, since we aim to study galaxies, we wish to detect
separately the baryonic components, namely the central galaxies and
their satellites.
\subsection{Structure detection}
Structure finders are widely used in computational cosmology to
analyse simulations.
Structure finders of the first generation, such as the FOF
algorithm~\citep{1985ApJ...292..371D}, were able to detect virialised
DM haloes in the simulations.
\smallskip
{The FOF algorithm links together particles closer than
$b_\text{link}$ times the mean interparticle distance.
It is efficient in finding haloes, but tends to link together
separate objects when they are too close to each other, especially
during mergers.}
\smallskip
The spherical overdensity algorithm~\citep[SO, ][]{1994MNRAS.271..676L} follows
another approach: it detects density maxima, and grows from each
maximum a sphere such that
the mean density within this sphere is equal to a given value, e.g.
$\Delta_c \times\rho_c$, where $\Delta_c \simeq 200$ (slightly
depending on the cosmology and the redshift) is the virial
overdensity, and $\rho_c$ is the critical density of the universe.
\smallskip
\textsc{Denmax}~\citep{1994ApJ...436..467G}, Bound Density
Maximum~\citep[BDM,][]{1999ApJ...516..530K} and
SKID~\citep{2001PhDT........21S} compute the density field, and move
particles along the density gradients to find local maxima.
\textsc{Denmax} computes the density on a grid, while in SKID the
density field and its gradient are computed in a SPH way.
Unbound particles are then iteratively removed via an unbinding step.
HOP~\citep{1998ApJ...498..137E} has a similar spirit, but instead of
computing density gradients, it jumps {from one particle to
its densest neighbour, thus efficiently identifying local density
maxima}.
SUBFIND \citep{2001MNRAS.328..726S} finds subhaloes within FOF haloes
by identifying saddle points. The density is computed in a SPH
fashion.
Each particle has a list of its two densest neighbours.
Particles without denser neighbours are density maxima, thus
correspond to a new substructure.
Substructures then grow towards lower density particles.
Saddle points are defined as particles that have their two densest
neighbours belonging to two different substructures.
AdaptaHOP \citep{2004MNRAS.352..376A} is close to SUBFIND, except that
the construction of the substructures occurs in a bottom-up manner, as
we will later describe.
\textsc{Amiga}'s Halo finder \citep[AHF,][]{2009ApJS..182..608K} is
quite similar, but computes the density on an adaptive mesh refinement
(AMR) grid, taking advantage of the adaptive nature of the AMR.
VOBOZ~\citep{2005MNRAS.356.1222N} is again similar, but uses a Voronoi
tessellation to estimate the density.
PBS \citep{2006ApJ...639..600K} uses a total boundness criterion
combined with a tidal radius to define substructures within FOF
haloes.
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=17cm]{img/zoom_t91_4pan.eps}
\caption{Zoom on the most massive halo of the third level of
zoom.
Same legend as figure \ref{fig:simu}.
\label{fig:clus}
}
\end{center}
\end{figure*}
\smallskip
All these algorithms are, however, only 3D, and based on the positions of
the particles.
Several algorithms take advantage of the availability of the velocity
data in addition to the positions.
Six-dimensional (6D) FOF~\citep{2006ApJ...649....1D} is a FOF algorithm
with a 6D metric.
Hierarchical structure finder~\citep{2009MNRAS.396.1329M} works in a
similar way to SUBFIND in 6D, using a 6D density estimator.
\texttt{Rockstar} \citep{2011arXiv1110.4372B} uses a 6D metric with
the addition of time information.
\smallskip
For a recent and more detailed comparative study of halo finders, we
refer to \citet{2011MNRAS.415.2293K}.
The structure finder must be applied to every snapshot of the simulation
in order to build a merger tree.
We used AdaptaHOP to detect both DM and baryonic structures in the
simulation.
AdaptaHOP proceeds in four steps:
\begin{itemize}
\item First, the SPH density of particles is computed over $N_\text{ngb}$
neighbours. We chose $N_\text{ngb} = 32$.
\smallskip
\item Then, all particles with densities higher than the
user-defined threshold density \ensuremath{\rho_\text{T}}\xspace{} are selected, and the algorithm
then jumps (``hops'') to their
densest neighbour, thus finding the local maximum they belong to.
A tree of structures is then built, the leaves of the tree
corresponding to particles belonging to the same local maximum.
\item Finally, the saddle points that link together two structures are
identified.
\item The algorithm then reiterates within each leaf of the structure
tree to detect substructures.
\end{itemize}
The result is a tree of structures and substructures, where the leaves
correspond to physical structures.
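The ``hop'' step can be sketched in a few lines of Python; the
precomputed density array and densest-neighbour pointers are assumed
inputs, and the names are ours:

```python
def hop_to_maxima(density, densest_neighbour, rho_T):
    """Assign each particle above the threshold rho_T to its local
    density maximum by repeatedly hopping to the densest neighbour
    (a maximum points to itself); background particles are skipped."""
    group = {}
    for p in range(len(density)):
        if density[p] < rho_T:
            continue                      # background particle
        q = p
        while densest_neighbour[q] != q:  # hop until a local maximum
            q = densest_neighbour[q]
        group[p] = q
    return group
```

Particles sharing the same maximum form a leaf of the structure tree.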
The main parameters of AdaptaHOP are:
\begin{itemize}
\item $N_\text{members}$: minimal number of particles for a
(sub)structure to be considered.
It gives the minimal mass for a (sub)structure.
\item \ensuremath{\rho_\text{T}}\xspace: density threshold of the first level, which is used to
detect main haloes.
Particles with a SPH density below \ensuremath{\rho_\text{T}}\xspace are part of the
background.
\item {$\alpha$: density-peak parameter of the substructures. Only substructures with a density maximum $\rho_\text{max} > \alpha
  \bar\rho_\text{sub}$ are considered significant. }
\item $f_\text{p}$: Poisson noise parameter.
The existence of a substructure is tested by comparing its density
with the Poisson noise:
a substructure with $\bar\rho_\text{sub} > \rho_\text{s}\left(1+\frac
{f_\text{p}}{\sqrt N}\right)$ is statistically significant, where
$\rho_\text{s}$ is the density of the saddle point that separates
the substructure from others, $\bar\rho_\text{sub}$ the mean density
of the substructure, and $N$ the number of particles belonging to
the substructure.
\item $f_\ensuremath{\varepsilon}$: controls the size of the structure.
Every structure must have a radius larger than $f_\ensuremath{\varepsilon}$ times the mean
interparticle distance.
\end{itemize}
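As an illustration, the Poisson-noise criterion for $f_\text{p}$
amounts to a one-line test (variable names are ours):

```python
import math

def substructure_is_significant(rho_sub_mean, rho_saddle, n_part, f_p=3.0):
    """Keep a substructure only if its mean density exceeds the saddle
    density by more than the expected Poisson fluctuation for N particles."""
    return rho_sub_mean > rho_saddle * (1.0 + f_p / math.sqrt(n_part))
```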
Following \citetalias{2005MNRAS.363....2K}, we set $N_\text{members} = 64$,
which yields a minimum mass of
$ \simeq 8.96\times 10^9$~\ensuremath{\mathrm{M}_\odot}{} for the DM haloes, and $1.92 \times 10^9
$~\ensuremath{\mathrm{M}_\odot}{}
for baryonic structures.
\smallskip
We note that we had to slightly modify the algorithm to make it
compatible with the multi-zoom technique.
Since particles enter the box between successive
timesteps, $\bar\rho_\text{box}$ is indeed not constant, as opposed to
regular simulations, and the evolution of $\bar\rho_\text{box}(t)$ for
the four levels of zoom
is shown in figure~\ref{fig:rhom}.
We thus compared the density to \ensuremath{\rho_\text{T}}\xspace times the mean density in
the level 0 zoom, since the mean density in higher levels is higher
than the mean density of the universe.
At the last output, the mean density in the third level of zoom that we
analyse is about 14 times the cosmic density.
\smallskip
\subsubsection{DM halo detection}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{img/zoom_t91_stars.eps}
\caption{Zoom on the most massive halo of the third level of zoom
(continued): stars.
\label{fig:clus_stars}
}
\end{center}
\end{figure}
Following~\citet{2004MNRAS.352..376A}
and~\citetalias{2009A&A...506..647T}, we defined haloes as groups of
particles with densities higher than a given threshold density, and
subhaloes as locally overdense groups of particles within a host (or
main) halo, which are separated by density saddle points.
The centres of the haloes were defined as the positions of the densest
particles in the halo rather than the centres of mass.
The reason for this choice is that for a major merger, the centre of
mass can be halfway between the two merging objects, and we preferred
to define the ``real centre'' as the centre of the main halo.
{In the following, the mass of a halo is defined as the mass
of the main halo plus the mass of the subhaloes; the mass
of a subhalo can therefore be counted several times.}
As advocated in~\citet{1998ApJ...498..137E}, $\ensuremath{\rho_\text{T}}\xspace =
80\times \bar\rho$, where $\bar\rho$ is the mean density of the
universe (see previous section), was set for DM haloes, which is
roughly equivalent to a linking length $b_\text{link}=0.2$ in FOF.
We kept default values for the other parameters: $f_\text{P} = 3,
\alpha = 1, f_\ensuremath{\varepsilon} = 0.05$.
\subsubsection{Galaxy detection}
Several attempts have been made to build baryonic merger trees.
\citet{2007MNRAS.377....2M} used SKID~\citep{2001PhDT........21S} to
detect baryonic structures, and referred to the merger trees of
baryonic galaxies as ``family trees'', in order to avoid any confusion
with subhalo merger trees.
However, they only considered star particles, whereas we also
wish to take gas into account.
Other authors have used SKID to detect both stars and cold gas, with
$\rho/\bar{\rho} > 1000$ and $T < 3\times 10^{4}$~K
\citep[e.g.][]{2009MNRAS.399..650S}.
\smallskip
\citet{2006ApJ...647..763M} also used SKID and introduced ``virtual
galaxies'' in order to account for the fragmentation of baryonic
structure between consecutive outputs.
On the other hand, \citet{2009MNRAS.399..497D,2010MNRAS.405.1544D}
used a modified version of SUBFIND to detect simultaneously dark matter
and baryons, ending with a galaxy composed of DM, stars, and gas.
They used dynamical criteria to distinguish between the central galaxy
and the diffuse stellar component.
\smallskip
Our approach here was slightly different: we detected on the one hand
the hierarchy of DM haloes and subhaloes, and on the other hand the
baryonic component consisting of central galaxies and satellites.
We therefore also used AdaptaHOP to detect baryonic structures.
While input parameters are known for DM detection, we had to find a
more well-suited set of parameters in order to detect the baryonic
structures.
We set the density threshold \ensuremath{\rho_\text{T}}\xspace{} above which structures are
considered to 1000 times the mean (baryonic) density.
We took $f_p = 4$, $f_\ensuremath{\varepsilon} = 5\times 10^{-4}$
in order to allow the detection of structures with sizes of the order of
1~kpc, and kept $\alpha = 1$.
Higher values of $f_p$ tend to remove the smallest structures,
while lower values add unphysical substructures.
\subsubsection{Matching dark and baryonic structures}
\label{sec:matching}
An interesting question that we addressed is how much baryonic matter
there is in DM haloes.
To answer this question, we studied the link between galaxies and
haloes, taking advantage of their independent detections.
We then set several rules to decide whether a galaxy and a halo are
linked together.
\smallskip
The first rule is that a galaxy should belong to at most one (sub)halo
(and of course its host halo hierarchy if it is a subhalo).
The second rule is that the hierarchy of galaxies and satellites on
the one hand and haloes and subhaloes on the other has to be
respected, so that we avoid the case where a satellite is linked to
the host halo while the galaxy is linked to the subhalo.
With these rules, a halo $h$ can host several galaxies,
but at most one main galaxy $g$, which is the most massive
galaxy of $h$ and must have $h$ as its halo.
Bearing these rules in mind, we were able to match DM haloes to galaxies.
\subsection{Merger tree building}
In the context of structure finding, one of the most persistent
problems is the so-called flyby issue.
This phenomenon can occur when two haloes cross each other, and when
their respective centres are too close to each other.
They are then detected as only one halo -- even if they can sometimes
still be
distinguished by eye -- and thus considered as a merger, but
detected again as two separated haloes at a later timestep.
\smallskip
This problem can be partially resolved by using a subhalo finder such
as AdaptaHOP instead of a halo finder such as FOF, since the second
halo can still be tracked as a subhalo, thus conserving its identity;
however the problem remains when the subhalo is too close to the host
halo centre.
An improvement would be to use a phase-space halo finder, such as
HSF \citep[cf][]{2009MNRAS.393..703M,2009MNRAS.396.1329M}.
\citet{2012ApJ...751...17S} introduced an interesting method called
``halo interaction network'', which is a more complex
merger tree that takes flybys into account.
However, for this work we were only interested in discriminating
between particles entering smoothly from the background and particles
belonging to another structure and entering by means of mergers, hence
we did not need such a refinement.
\citet{2009A&A...506..647T} give different sets of rules for building
DM merger trees that include subhaloes, where a (sub)halo at output
$t_{n+1}$ is the son of its progenitor at output $t_n$.
We refer to a structure as either a subhalo or a halo.
These rules are:
\begin{itemize}
\item A structure can have at most one son.
\item The son of structure $i$ at output $t_n$ is the structure $j$ at
output $t_{n+1}$, which inherits most of the mass of structure $i$.
\item Structure $i$ at output $t_n$ is a progenitor of structure $j$ at
output $t_{n+1}$, if $j$ is the son of $i$.
\end{itemize}
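For equal-mass particles, the second rule reduces to a vote over
particle IDs; a minimal sketch with illustrative data structures (sets
of particle IDs per structure):

```python
from collections import Counter

def find_son(progenitor_particles, structures_next):
    """Son of a structure = the structure at the next output that
    inherits most of its particles (hence most of its mass, for
    equal-mass particles). Returns None if no particle is recovered."""
    owner = {pid: sid
             for sid, parts in structures_next.items()
             for pid in parts}
    votes = Counter(owner[pid] for pid in progenitor_particles
                    if pid in owner)
    if not votes:
        return None
    return votes.most_common(1)[0][0]
```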
\begin{figure*}[t]
\begin{center}
\subfigure[face on]{
\includegraphics[width=17cm]{img/faceon.eps}
\label{fig:faceon}
}
\subfigure[edge on]{
\includegraphics[width=17cm]{img/edgeon.eps}
\label{fig:edgeeon}
}
\caption{
View of the most massive central galaxy, at $t=9.1$~Gyr, in the
third level of zoom. Left: gas colour-coded
by temperature from 800 to $1.4 \times 10^{6}\,\text{K}$, right:
stars.
\label{fig:central}
}
\end{center}
\end{figure*}
They introduced a two-step method called the branch history method (BHM)
to determine which of two local maxima should be the subhalo
and which the main halo, according to the results of the previous step.
This method tends to avoid identity switches between the main halo
and the satellite.
The basic idea is to take advantage of the previous snapshot to decide
which node should be the subhalo and which the halo.
\smallskip
Once again, we had to modify the algorithm to take into account the
number of particles, which is not constant with time owing to our use
of the multi-zoom method.
\subsection{Accretion history}
After we had built the full merger tree of each galaxy, we were able
to compute the mass history.
We traced back the main progenitor from the last snapshot to the first,
and tagged each particle entering the main structure at each
snapshot.
Particles coming from either a satellite of the considered galaxy or from
another galaxy were
tagged as \emph{merger}, while particles coming from the background
were defined as \emph{smooth accretion}.
\smallskip
Particles can also leave the main galaxy, either for the background,
which we refer to as \emph{evaporation}, or for another substructure, which
we dub \emph{fragmentation}.
The latter happens mainly during merger events: particles from a
satellite are detected as part of the main galaxy at a given snapshot,
but may have left it before the following one.
The former can happen at almost every snapshot: for particles at the
border of the structure, the density can fluctuate without the particles
moving, placing them on either side of the saddle point.
\smallskip
With these definitions, we were able to compute the net baryonic mass
assembly from \emph{merger} $-$ \emph{fragmentation} and
\emph{accretion} $-$ \emph{evaporation}.
The mass of a structure was counted only once, since a particle entering the
galaxy is counted positively, and negatively when it leaves.
This enables us to overcome the fly-by issue mentioned above, since
a fly-by would be counted first as a merger, then as fragmentation and could
then vanish in the total accretion fraction.
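As an illustration, this bookkeeping can be sketched as follows; the
input layout (sets of particle IDs per snapshot, and a per-snapshot map
from particle to host structure in which background particles are
absent) is an assumption of ours:

```python
def mass_assembly(main_parts, structure_of):
    """Net mass assembly of the main galaxy, counting each particle
    positively when it enters and negatively when it leaves, so that
    a fly-by (merger, then fragmentation) cancels out.
    main_parts[t]   : set of particle IDs in the main galaxy at t
    structure_of[t] : pid -> structure ID (background pids absent)."""
    merger = smooth = fragmentation = evaporation = 0
    snaps = sorted(main_parts)
    for t0, t1 in zip(snaps, snaps[1:]):
        for pid in main_parts[t1] - main_parts[t0]:      # entering
            if pid in structure_of[t0]:
                merger += 1         # came from another structure
            else:
                smooth += 1         # smooth accretion from background
        for pid in main_parts[t0] - main_parts[t1]:      # leaving
            if pid in structure_of[t1]:
                fragmentation += 1  # left for another structure
            else:
                evaporation += 1    # left for the background
    return merger - fragmentation, smooth - evaporation
```

In this sketch a particle that enters from a satellite and later leaves
for another structure contributes zero net merger mass, as intended for
fly-bys.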
\smallskip
However, there is another difficulty: as a consequence of the
multi-zoom technique, particles enter a higher zoom-level box at each
timestep, thus several galaxies enter the box when they have already
formed.
Accretion fractions are computed between $t_\text{app}$, the time when
the galaxy enters the last zoom level box, and $t_\text{end}$, the end
of the simulation.
We thus concentrated on galaxies entering the box before $t = 7$~Gyr
in order to follow them over a sufficient number of timesteps.
\section{Results}
\label{resu}
\subsection{Structure detection and merger trees}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=17cm]{img/tree.eps}
\caption{Baryonic merger tree of the main galaxy. Dark blue circles
are galaxies, and bright blue squares satellites. The $x$ axis
shows the number of the branch, i.e. a galaxy that will eventually
merge with the main galaxy (branch 1), or stay as one of its
satellites.
\label{fig:tree}
}
\end{center}
\end{figure*}
Figures~\ref{fig:simu} and~\ref{fig:clus} show, respectively, a
large-scale view of the third level of zoom of the simulation and a
zoom on the most massive galaxy of the box.
The upper panel shows on the left gas
particles (colour-coded by temperature, on a logarithmic scale from
800~K to $1.4\times 10^6$~K), and on the right, DM
particles.
Figures~\ref{fig:simu_stars} and~\ref{fig:clus_stars} show the
corresponding star distributions.
\smallskip
The lower panel shows the structures detected by AdaptaHOP, baryonic
galaxies and satellites on the left, and DM haloes and subhaloes on
the right.
Haloes and main galaxies appear in dark and light blue and green, and
subhaloes and satellites appear in yellow, orange, red, magenta, and
white.
At the centre, the most massive halo (in blue) can be seen with
massive subhaloes (in magenta and white): it is undergoing a major
merger at this timestep, which explains why these massive substructures
{appear larger than several small and isolated haloes}.
\smallskip
We can see by eye the good agreement between the baryonic and DM
structure detected. However, since for a given galaxy the DM halo is
far more extended than the baryonic structure, this is not easy. In
section~\ref{sec:matching}, we explained how we matched galaxies and
haloes.
\smallskip
The zoom in figure~\ref{fig:clus} is instructive. We can still discern
the close correspondence between the dark and baryonic structures, and
most of the
small structures in the upper panels are indeed detected in the lower
panels.
However, some remarkable features can be found: in the bottom left
panel, there is a satellite
(in orange, under the largest satellite in yellow) with a tidal tail
(in red), which is detected by AdaptaHOP as a satellite whose tail is
a ``satellite'' of this satellite.
This structure finder is thus capable of detecting interesting
features.
Most strikingly, the red arc near the centre of the main galaxy is an
artefact: it is not a satellite, but rather an arm of the spiral
galaxy.
\smallskip
In figure~\ref{fig:clus}, we can see the central galaxy of the halo,
which contains a large disc of $\simeq 160$~kpc (comoving) at $z=0.46$.
Figures~\ref{fig:faceon} and~\ref{fig:edgeeon} show, respectively, a
face-on and an edge-on view of the central galaxy.
Looking at the corresponding galaxy at $z=0$ in our level 2
simulation, it appears that this galaxy still has a gaseous and
stellar disc today, which is quite unexpected since central galaxies
are supposed to be elliptical.
{
The large size and mass ($\simeq 10^{13}\,\ensuremath{\mathrm{M}_\odot}$) of the galaxy are
probably caused by our not taking AGN feedback into account
in our simulations.
}
Figure~\ref{fig:tree} shows the merger tree of this galaxy.
Only the 60 most massive branches of the tree are shown here.
The $y$ axis is the redshift, and each branch is a galaxy that either
completely merges with the main galaxy, or becomes a satellite of
this galaxy at the last timestep.
The first branch on the left is the \emph{main progenitor} branch,
i.e. the ancestors of the main galaxy.
Branches 2 to 35 are galaxies (bright blue circles) that became
satellites (dark blue square) of the main galaxy, and merged with it
before the last timestep.
Branches 36 to 61 are the satellites of the main galaxy at the last
timestep, and their merger trees.
Galaxies that seem to appear in the merger tree at low redshift are
actually entering the level 3 zoom at this time (e.g. branches
42--51).
However, satellites
that seem to appear late (e.g. branches 23, 24, 42) correspond to fly-bys:
they have no identifiable progenitor at any previous timestep.
\subsection{Evolution of the mass function}
\label{sec:massfunc}
Since we built the merger trees of all galaxies and dark matter
haloes, we were able to study the evolution of the mass function
with time.
Figures~\ref{fig:mspec_bar} and~\ref{fig:mspec_dm} show, respectively,
the cumulative distribution of mass of baryonic
galaxies and DM haloes at three timesteps of the simulation, the
three curves
(blue, green, and red) corresponding to $t = 3, 6,$ and 9~Gyr,
respectively (or $z = 2.23, 1.02,$ and 0.47), in our third level of
zoom.
We computed the mass of structures and substructures therein, counting
the mass of substructures several times: once as stand-alone
substructures, and then as part of their host structures.
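Counting scheme aside, the cumulative mass function itself is straightforward to compute; the following is a minimal sketch (the array contents are illustrative, not values from the simulation):

```python
import numpy as np

def cumulative_mass_function(masses):
    """Return (m, n) where n[i] is the number of structures with
    mass >= m[i], for masses m in ascending order."""
    m = np.sort(np.asarray(masses, dtype=float))  # ascending masses
    n = np.arange(m.size, 0, -1)                  # N(>= m) for each entry
    return m, n

# Substructure masses enter twice: once as stand-alone structures,
# and once through the total mass of their host (illustrative values).
halo_masses = [1e12, 5e11, 2e13]
sub_masses = [1e10, 3e10]
m, n = cumulative_mass_function(halo_masses + sub_masses)
```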
The evolution of the mass function is compatible with a hierarchical
growth of structures, with fewer massive structures, both in galaxies
and DM haloes, existing at higher redshift than at lower, and with a
slope that flattens towards lower redshifts.
However, it must be emphasised here that the resulting mass functions
are biased because we have zoomed into an overdense region.
These results could however be compared to
\citet{2009MNRAS.399.1773C}, who performed resimulations of several
regions of the Millennium simulations.
\begin{figure*}[t!]
\begin{center}
\subfigure[Galaxies]{
\includegraphics[width=8cm]{img/mfunc_gal_time.eps}
\label{fig:mspec_bar}
}
\subfigure[Haloes]{
\includegraphics[width=8cm]{img/mfunc_dm_time.eps}
\label{fig:mspec_dm}
}
\caption{
\label{fig:mspec}
Cumulative mass distribution of galaxies and satellites (left)
and haloes and subhaloes (right) at the third level of zoom,
at $t=3$~Gyr (blue), $t = 6$~Gyr (green), and $t = 9$~Gyr (red)
($z = 2.29, 1.02,$ and 0.47).}
\end{center}
\end{figure*}
\smallskip
To compare several structure-finding codes, we performed another
structure detection with FOF, using a linking length of $b=0.2$ times
the mean interparticle distance.
Figure~\ref{fig:4lev} shows the halo mass
function at the four zoom levels, respectively, for FOF (dashed line) and
AdaptaHOP (solid line) haloes.
This time we note that the subhaloes are not counted separately, but are
included within the AdaptaHOP haloes to permit us to compare them with
FOF haloes.
Level 3 is shown in green, level 2 in blue, level 1 in red, and level
0 in magenta.
The black solid line is a Press and Schechter function, computed by
the code described in~\citet{2007MNRAS.374....2R} for our adopted
cosmology, and the dotted line the mass function from the Millennium
simulation \citep{2005Natur.435..629S}.
The FOF and AdaptaHOP mass functions show differences, especially at
low masses.
Indeed, several small haloes that are detected by FOF and located close to the
edges of a larger FOF halo are detected by AdaptaHOP as substructures
of this halo, thus they do not appear as low mass structures in the
AdaptaHOP curve.
However, at higher masses there is a good agreement between the two
structure finders.
Only the level 0 (cosmological run) mass function can be directly
compared with the
theoretical predictions of Press \& Schechter and with the Millennium
mass function, although it is instructive to overplot the mass
functions for the three other simulations.
The comparison with the Press and Schechter mass function can be seen
as a probe of our environment and a way to quantify the overdensity.
Our mass function agrees with both the Millennium and the Press
\& Schechter mass functions.
The mass functions in the other zoom levels show the density of the
environment with respect to the cosmic average.
\begin{center}
\begin{figure}[t!]
\includegraphics[width=8.5cm]{img/mfunc_hm_fof.eps}
\caption{
\label{fig:4lev} {AdaptaHOP (solid line) and FOF (dashed line)
halo mass function for the
four zoom levels: level 3 in green, 2 in blue, 1 in red, and 0 in
magenta. Note that subhaloes are not counted separately, but are
included within the AdaptaHOP haloes in order to compare with
FOF haloes.
The black line is a Press \& Schechter predicted mass function,
for the sake of comparison, and characterisation of the
environment.}
}
\end{figure}
\end{center}
\subsection{Baryonic fraction}
\label{sec:barratio}
According to WMAP data \citep{2011ApJS..192...18K}, baryons represent
about 4\% of the Universe content, yet only a small fraction are seen.
One can legitimately ask where the baryons are in the Universe.
We computed the baryonic fraction, i.e. the ratio of the baryonic
mass to the DM mass, for each halo in the simulation, at several
timesteps.
We defined the halo centre as the position of the densest particle,
and the radius of a (sub)halo as the distance between the centre and
the furthest-away particle.
For each halo or subhalo detected, we computed the total baryonic mass
within and plotted the baryonic fraction $m_\text{b}/m_\text{halo}$ as
a function of the (sub)halo (including all its subhaloes) mass, as
discussed in \S~\ref{sec:massfunc}.
For each baryonic particle, we computed its closest dark matter
particle, and assigned the baryonic particle to the halo of its
corresponding DM particle.
This method enabled us to consider any geometry of halo.
Haloes undergoing major mergers, which is the case for our
largest halo, may indeed have a non-spherical shape.
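The nearest-DM-particle assignment can be sketched as follows (a brute-force neighbour search; a tree-based search would be used in practice for large particle counts, and all names here are illustrative):

```python
import numpy as np

def assign_baryons_to_haloes(pos_dm, halo_id_dm, pos_baryon):
    """Assign each baryonic particle to the halo of its nearest DM
    particle, which accommodates haloes of arbitrary geometry.
    Brute force O(N_baryon * N_dm); fine for a sketch only."""
    ids = np.empty(len(pos_baryon), dtype=halo_id_dm.dtype)
    for i, p in enumerate(pos_baryon):
        d2 = np.sum((pos_dm - p) ** 2, axis=1)  # squared distances to DM
        ids[i] = halo_id_dm[np.argmin(d2)]      # halo of the closest one
    return ids
```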
\smallskip
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=17cm]{img/bfrac.eps}
\caption{Median of the baryonic fraction computed in logarithmic
bins as a function of the halo mass in the level 3 zoom at
$t=3, 6,$ and $9.0$~Gyr, normalised by the
universal fraction $\frac{\Omega_\text{b}}{\Omega_\text{DM}}$.
The total baryonic fraction is shown in black, the stellar
fraction in green, and the gas fraction in blue.
The errorbars represent the 85\textsuperscript{th}\xspace\ and 15\textsuperscript{th}\xspace\ percentiles.
\label{fig:bfrac}}
\end{center}
\end{figure*}
{
The baryon fraction, or the ratio of stellar mass to dark mass, as a
function of time and galaxy mass, can be compared with that expected
from analyses of observations using the halo abundance matching
(HAM) technique.
This tool was developed by \citet{2004MNRAS.353..189V} and has been
used by several groups.
Assuming that there is a tight correspondence between the stellar
masses of galaxies and the masses of their host haloes, and matching
their number densities or abundances, this technique allows one to deduce
average relations linking the baryon and dark matter growth over
time, using the observed galaxy stellar-mass function, and its
variation with redshift, as input.
\citet{2009ApJ...696..620C} show for instance that the stellar mass growth
is essentially due to both accretion and star formation, while the merger
process has little influence and most massive galaxies must form
their stars earlier than less massive ones (downsizing).
The stellar mass fraction with respect to the universal baryon
fraction reaches a maximum of 20\% in haloes that have a virial mass
of a few 10$^{12}$ M$_\odot$ at $z=2$, and the maximum is reached at
lower halo mass with time, down to a few $10^{12}\,\ensuremath{\mathrm{M}_\odot}$ at $z=0$.
Figure~\ref{fig:bfrac} shows the median of the baryon fraction (in
black), stellar fraction (green), and gas fraction (blue) computed in
logarithmic mass bins, at $t = 3,
6, \text{and } 9$~Gyr, and such an evolution can be seen, with a peak
in the stellar fraction at $\simeq 10^{12}\,\ensuremath{\mathrm{M}_\odot}$, which is compatible
with \citet{2009ApJ...696..620C}.
These values are to some extent compatible with the relation found by
\citet{2010ApJ...717..379B}, who consider in more detail the
uncertainties and scatter caused by various assumptions.
However, unlike these authors, we still have a higher stellar fraction
at higher than at lower mass.
This can be attributed to our neglect of feedback from AGN in our
simulations, which is thought to be responsible for preventing star
formation in massive haloes.
Observations of galaxies in groups and clusters \citep[e.g.][]{2005ApJ...635...73H,
2010ApJ...719..119D} also show that there is a drop of the stellar fraction
towards high masses.
Interestingly, this plateau at high masses is compatible with the
simple prescription in \citet{2012MNRAS.422.1714N}, for which the stellar-to-halo mass
relation is modelled by a power law.
}
\begin{figure*}[ht]
\begin{center}
\includegraphics[width=17cm]{img/baryon_phases.eps}
\caption{\label{fig:baryon_phase}Baryon fraction in four phases:
diffuse gas (blue), condensed gas (cyan), hot and warm-hot
(red) and stars (green), at level 3 (solid), 2 (dashed), 1
(dash-dotted), and 0 (dotted).
In the top panels, the fraction is computed in the full box of
each level, and in the bottom panels, it is computed only within
the 8.56~Mpc radius of the level 3 box.
The differences are then caused only by differences in resolution.}
\end{center}
\end{figure*}
\smallskip
We studied the relative fraction of baryons in different phases,
following~\citet{2001ApJ...552..473D}.
We distinguish between four phases according to the gas temperature
and density contrast $\delta = \rho/\bar\rho - 1$.
\begin{itemize}
\item Diffuse gas: $\delta < 10^3$, $T < 10^5$~K.
\item Condensed gas: $\delta > 10^3$, $T < 10^5$~K.
\item Hot and warm-hot: $T > 10^5$~K.
\item Stars.
\end{itemize}
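This classification can be sketched as follows (the threshold values are those listed above; the function name and array names are illustrative):

```python
import numpy as np

def classify_gas(delta, temperature):
    """Classify gas particles into three of the four baryon phases
    following Dave et al. (2001): 'diffuse', 'condensed', and 'hot'
    (hot and warm-hot).  Stars form the fourth phase and are a
    separate particle type.  delta is the density contrast
    rho/rho_mean - 1, temperature is in K."""
    delta = np.asarray(delta)
    temperature = np.asarray(temperature)
    # T > 1e5 K is hot/warm-hot regardless of density; below that,
    # the density contrast separates condensed from diffuse gas.
    return np.where(temperature > 1e5, "hot",
                    np.where(delta > 1e3, "condensed", "diffuse"))
```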
The evolution of each phase for the four levels of zoom is plotted in
Fig.~\ref{fig:baryon_phase}.
Top panels show the hot/warm-hot (red), diffuse (blue) gas, condensed
gas (cyan), and stars (green), computed for the four levels of zoom.
These four levels have a similar trend of diffuse gas that condenses
and forms stars as cosmic structures evolve, and the fraction of hot
and warm-hot gas that increases when there are massive enough
structures to heat the gas, before eventually reaching a plateau.
It is worth noting that the condensed gas fraction reaches a maximum
at $z\simeq 2$, which corresponds to the peak of the cosmic
star-formation rate \citep[e.g.,][]{2010ApJ...709L.133B}.
However, the condensation rate changes from one zoom level to the
other.
The difference between the four levels of zoom may be due to either
the resolution or the environment: level 0 is indeed a cosmological
box whereas level 3 is centred on a dense region, and we expect to
obtain different results owing to the different environments.
To distinguish both effects, we plotted in the bottom panels the same
fraction, but computed for each level only within the 8.56~Mpc
spherical box of level 3.
This time, the differences between the zoom levels are due
only to the resolution.
We can see that there is a good convergence between levels 2 and 3.
\subsection{Dark and orphan galaxies}
Since we were able to match galaxies to haloes, we checked whether
there was any ``dark galaxy'', i.e. a halo without any baryonic
counterpart, or ``orphan galaxy'', a galaxy without a dark matter halo.
\smallskip
We were unable to detect any orphan galaxy: all galaxies in the
simulation lie within a halo.
However, not all galaxies are the main galaxy of either a halo or
subhalo.
We defined galaxies as the main structures identified with AdaptaHOP for
baryonic particles, and satellites to be their substructures.
These definitions differ from those usually assumed, where central
galaxies are at the centre of a halo, and all other galaxies are
satellites.
Therefore, several detected structures that we classified as
``galaxies'' would be called ``satellites'' by other authors.
Several of them are close to the halo centre, and cannot be associated
with a resolved subhalo.
Whether this is due to a lack of resolution or to physical subhalo
stripping is still unclear.
\smallskip
We detected several ``dark galaxies'', i.e. haloes without any baryonic
counterpart: about 100 haloes and 100 subhaloes at $t = 9$~Gyr.
They are represented in figure~\ref{fig:bfrac} in cyan (haloes) and
magenta (subhaloes).
We investigated whether {they contained any baryons} and found
that they appear to contain few gas particles.
When looking at these dark haloes and subhaloes, it appears that most
of the dark haloes have gas, but that these structures are
insufficiently well-resolved, hence not dense enough, to form stars and
be detected as galaxies.
We therefore checked that these dark haloes and several subhaloes are
also detected by FOF, and are not spurious detections by the halo
finder.
Most of the dark AdaptaHOP haloes were also detected by FOF, which
suggests that they are of physical significance.
They often contain clouds of gas that are not dense enough to be
detected as galaxies, which could be an effect of too low a
resolution.
{Some dark subhaloes were also detected as FOF haloes; most
of them, however, appear to be non-physical structures, for example
bridges between two real subhaloes containing a galaxy. }
\smallskip
By varying the sets of parameters in AdaptaHOP, we found different
numbers of dark galaxies.
This is because these haloes are very close to the detection
threshold, and are detected as haloes or subhaloes for a given set of
parameters, while they are undetected for a less conservative set.
\subsection{Velocity dispersion}
\begin{figure*}[ht!]
\begin{center}
\subfigure[Galaxies]{
\includegraphics[width=8.5cm]{img/msigma_bar.eps}
\label{fig:msigma_bar}
}
\subfigure[Haloes]{
\includegraphics[width=8.5cm]{img/msigma_dm.eps}
\label{fig:msigma_halo}
}
\caption{
Mass versus velocity dispersion. Left: galaxies; right:
haloes (red) and subhaloes (blue).
The solid black line is the median of the baryonic (stellar
plus gas) mass of galaxies (panel
\subref{fig:msigma_bar}) and dark matter mass of haloes and
subhaloes (panel \subref{fig:msigma_halo}) and the errorbars are
the 15\textsuperscript{th}\xspace\ and 85\textsuperscript{th}\xspace\ percentiles.
The dashed black line in the right panel shows the expected
slope $m\propto \sigma^{3}$, and in the left panel, the lines
show the slopes $m\propto \sigma^{3}$ and $m\propto \sigma^{2}$.
\label{fig:sigma_mass}
}
\end{center}
\end{figure*}
\begin{figure*}[ht!]
\begin{center}
\subfigure[Galaxy 2]{
\includegraphics[width=0.45\linewidth]{img/mhist_0002.eps}
\label{fig:gal1}
}
\subfigure[Galaxy 56]{
\includegraphics[width=0.45\linewidth]{img/mhist_0056.eps}
\label{fig:gal2}
}
\subfigure[Galaxy 469]{
\includegraphics[width=0.45\linewidth]{img/mhist_0469.eps}
\label{fig:gal3}
}
\subfigure[Galaxy 512]{
\includegraphics[width=0.45\linewidth]{img/mhist_0512.eps}
\label{fig:gal4}
}
\caption{ \emph{Top:}
Mass history of four typical galaxies. Blue curves: baryonic mass of
the galaxy. Red curves: baryonic mass of the galaxy plus its
satellites. Green curve: stellar mass of the main galaxy. Cyan:
gas mass.
\emph{Bottom:} Mass origin, where red represents a merger from
another (sub)structure and blue, smooth accretion from the
background.
\label{fig:mass_hist}
}
\end{center}
\end{figure*}
We computed the velocity dispersion $\sigma_v$ of each structure and
substructure, both baryonic (galaxies and satellites) and dark matter
((sub)haloes), at several snapshots.
Since AdaptaHOP does not perform an unbinding step, some particles
that are spatially close to a structure can be attached to it,
although they are not dynamically bound.
Special care must therefore be taken to remove the contamination by
these particles.
We used a two-step algorithm.
First, we defined the bulk velocity of the structure by computing the
median of each component of the velocity, which is more robust than
taking the mean value because high velocity particles have a weaker
influence on the median.
We then selected only particles with a velocity close enough to the
bulk velocity.
To do so, we computed the circular velocity at the half-mass radius,
i.e. the radius containing half the mass of the structure,
$v^* = \sqrt{\frac{GM(<r_\text{half})}{r_\text{half}}}$, which gives a
characteristic velocity for the structure.
{The velocity dispersion was then computed for particles
whose velocity relative to $\vec{v}_\text{bulk}$ is lower than $n v^*$,
i.e.} the most bound particles.
We checked that our result does not strongly depend on the choice of
$n$, and took $n=5$.
With this technique, only one halo and seven subhaloes could not be
treated because no particle was selected.
We checked that these structures corresponded to unbound structures
and dropped them from our analysis.
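The two-step algorithm can be sketched as follows; this is a minimal illustration (units of kpc, km/s, and solar masses are assumed, the centre is passed in explicitly, and a 3D dispersion is returned; none of this is the actual pipeline code):

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def velocity_dispersion(pos, vel, mass, centre, n=5.0):
    """Two-step velocity-dispersion estimate for a structure whose
    particle list may contain unbound interlopers (AdaptaHOP performs
    no unbinding step).  pos (kpc) and vel (km/s) are (N, 3) arrays,
    mass (Msun) is (N,); `centre` is the structure centre."""
    # Half-mass radius: smallest radius enclosing half the total mass.
    r = np.linalg.norm(pos - centre, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order])
    i_half = np.searchsorted(cum, 0.5 * cum[-1])
    r_half, m_half = r[order][i_half], cum[i_half]
    # Characteristic circular velocity at the half-mass radius.
    v_star = np.sqrt(G * m_half / r_half)
    # Step 1: robust bulk velocity from per-component medians.
    v_bulk = np.median(vel, axis=0)
    # Step 2: keep only particles within n * v_star of the bulk motion.
    dv = np.linalg.norm(vel - v_bulk, axis=1)
    sel = dv < n * v_star
    if not np.any(sel):
        return None  # unbound structure, dropped from the analysis
    return np.sqrt(np.mean(dv[sel] ** 2))
```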
{We computed the 3D velocity dispersion for dark
matter (sub)haloes.
To compare our results for baryonic galaxies and satellites with
observations, we defined the velocity dispersion as follows: for massive
galaxies ($M > 10^{11}$~\ensuremath{\mathrm{M}_\odot}), we computed the dispersion in the
projected stellar velocity in radial bins along the $x$ axis, and defined
the velocity dispersion of the structure as the value in the bin
containing the half-mass radius.
For lower mass galaxies, which are less well-resolved, the velocity
dispersion is computed over all stellar particles.
}
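A sketch of this observationally motivated estimate for massive galaxies (bin edges, the projection axis, and all names are illustrative assumptions):

```python
import numpy as np

def projected_sigma(pos, vel, centre, r_half, n_bins=10):
    """Dispersion of the stellar velocity projected along the x axis,
    computed in radial bins; returns the value in the bin containing
    the half-mass radius r_half."""
    r = np.linalg.norm(pos - centre, axis=1)
    # Equal-width bins out to the outermost particle (illustrative).
    edges = np.linspace(0.0, r.max() * (1.0 + 1e-9), n_bins + 1)
    bin_of = np.digitize(r, edges) - 1            # bin index per particle
    target = np.digitize([r_half], edges)[0] - 1  # bin holding r_half
    vx = vel[:, 0][bin_of == target]              # projected velocities
    return np.std(vx)
```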
In this section, the mass of the structures does not take into account
the substructures, since we do not wish to account for the velocity
dispersion of particles within the substructures.
Figure~\ref{fig:sigma_mass} shows the velocity dispersion as a function
of the mass for galaxies in panel~\subref{fig:msigma_bar} and DM
haloes (red) and subhaloes (blue) in panel~\subref{fig:msigma_halo}.
{
The dashed black line in the right panel shows the expected
slope $m\propto \sigma^{3}$, and in the left panel, the lines
show the slopes $m\propto \sigma^{3}$ and $m\propto \sigma^{2}$.
The relation between mass and velocity dispersion for dark matter
haloes is compatible with a power law with the expected slope of
three, although our data suggest a slightly shallower slope.
Observations indicate that there is a tight correlation between the
total baryonic mass of galaxies and $V_\text{max}$, the maximum of the
velocity curve, which is referred to as the baryonic Tully-Fisher
relation (BTFR).
However the observed exponent is close to four
\citep[e.g.][]{2000ApJ...533L..99M, 2007ApJ...671..203C,
2012AJ....143...40M}.
In our case, the slope agrees with the BTFR at high masses, but at low
masses it is lower than expected.
}
\section{Accretion and merger history}
\label{acc}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.5cm]{img/final_acc.eps}
\caption{Accretion fraction versus galaxy mass, and the associated
histogram (in number), computed for the 530 galaxies
entering the box before 7~Gyr.
The size of the markers is proportional to the
logarithm of the galaxy mass.
In the right panel, red dots correspond to galaxies that are
outside the level 3 box.
The black line shows the median of the accretion fraction, and
the errorbars the 15\textsuperscript{th}\xspace\ and 85\textsuperscript{th}\xspace\ percentiles.
We find a mean accretion fraction of 77\%.
\label{fig:hist_acc}}
\end{center}
\end{figure}
The upper panels of figure~\ref{fig:mass_hist} show the mass history of
four characteristic galaxies of the simulation: a massive galaxy both
accreting gas and growing by mergers, a small galaxy growing only
through accretion, a galaxy growing by means of both mergers and
fragmentation, and a galaxy losing mass by means of fragmentation
while passing close to a larger galaxy.
The blue curve is the baryonic mass of the galaxy itself, while the
red curve shows the mass of the galaxy plus its satellites. The
stellar mass is plotted in green, and the gas mass in cyan.
A galaxy may be detected as a satellite of another
galaxy during its history. These timesteps are plotted as circles
(panel \ref{fig:gal4}).
The bottom panels of figure~\ref{fig:mass_hist} show the origins of
the mass, separated into two components: merger and accretion.
Smooth accretion is shown in blue and mergers in red.
A negative value of merger or accretion, respectively, means that the
galaxy loses mass to either another galaxy (fragmentation) or the
background (evaporation).
These are the two components of the derivative of the blue curve shown
in the upper panel, since with our definition, all mass is acquired
by either merger or smooth accretion, and lost by fragmentation or
evaporation.
Those four galaxies have very different mass accretion histories.
The galaxy in panel~\subref{fig:gal1} undergoes a major
merger that can be seen in the lower panel of \subref{fig:gal1}, at
$t \simeq 5.1$~Gyr.
The galaxy in panel~\subref{fig:gal2} shows the opposite
behaviour:
it does not experience any merger and grows smoothly by accreting gas
until $t\simeq 7$~Gyr, then maintains a constant mass until the end of
the simulation, passively turning its gas reservoir into stars.
In panel~\subref{fig:gal3}, the galaxy grows mainly through accretion
until it reaches a maximum mass at $t\simeq 7$~Gyr, and then
interacts with another structure and
loses more mass than it gains from mergers, and ends with a somewhat
lower mass.
The galaxy in panel~\subref{fig:gal4} shows quite an unusual behaviour:
after entering the level 3 box at $t \simeq 5\text{ Gyr}$, it grows
from both mergers and accretion, and is suddenly accreted by a more
massive galaxy, becomes a satellite, then loses about one third of its
mass, which feeds the host galaxy.
It then leaves its host galaxy and continues to lose mass through
evaporation.
These behaviours are quite typical of what we can see in our
simulations, with high-mass central galaxies undergoing mergers and
accreting, and lower-mass, isolated galaxies accreting gas before
their growth is stopped.
\smallskip
We computed the accretion fraction by considering, at the last output,
the origin of each particle: particles belonging to the galaxy at the
first time of detection were defined as
``initial''; particles that came originally from the background, and
had never belonged to another structure, were
labelled ``accretion''; and we defined as ``merger'' a particle that
had been accreted onto the main progenitor of the galaxy and had
previously belonged to another structure, {even if it originally came from
the background. }
We defined the accretion fraction as
\begin{equation}
f_\text{acc} = \frac{\text{accretion}}{\text{accretion + merger}},
\end{equation}
\noindent and the merger fraction $f_\text{merg}$ such that
$f_\text{acc} + f_\text{merg} = 1$. By definition, we have $0\le
f_\text{acc},f_\text{merg}\le 1$.
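Given per-particle origin labels, the accretion fraction reduces to a simple count; a sketch (the label strings are illustrative):

```python
import numpy as np

def accretion_fraction(origin):
    """Accretion fraction from per-particle origin labels at the last
    output: 'initial' (present at first detection), 'accretion' (from
    the background, never in another structure), or 'merger'
    (previously belonged to another structure)."""
    origin = np.asarray(origin)
    n_acc = np.count_nonzero(origin == "accretion")
    n_mer = np.count_nonzero(origin == "merger")
    if n_acc + n_mer == 0:
        return np.nan  # only 'initial' particles: fraction undefined
    return n_acc / (n_acc + n_mer)
```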
As a consequence of the multi-zoom technique, and of the fact that
galaxies can enter
the level 3 box during the simulation, we had to select the galaxies to be
studied.
We only computed this accretion fraction for galaxies that could be
tracked back in time until before $t = 7$~Gyr so that the fraction
could be computed for at least 2~Gyr of its lifetime.
We discarded galaxies that could not be tracked earlier than 7~Gyr,
which either
entered the box later, or were lost when computing the merger tree,
possibly after a merger event.
This left us with 530 galaxies that had been tracked from
$t_\text{app} < 7.0$~Gyr to $t = 9.1$~Gyr.
Figure~\ref{fig:hist_acc} shows the accretion fraction as a function of
mass for these 530 galaxies, as well as the (number) histogram of the accretion
fraction.
We found a mean accretion fraction of 77\%, and a median value of 92\%.
In black, we plotted the median accretion fraction in mass bins, where
the errorbars are the 15\textsuperscript{th}\xspace\ and 85\textsuperscript{th}\xspace\ percentiles.
We can see that most galaxies have a very high accretion
fraction.
The trend for low-mass galaxies at $f_\text{acc}= 1$ indicates that
several galaxies undergo no mergers and are fed only by accretion.
This could be a spurious effect caused by galaxies entering the box at
late times, and experiencing no mergers.
However, even when we consider only galaxies that are present in the
box between 3~Gyr and 9.1~Gyr, the histogram still shows such a trend with
a mean value of 70\% and median of 82\%.
{ The four points in the upper right region are particularly
striking: they correspond to massive galaxies that would have
acquired their mass mostly smoothly.
On closer inspection, these are galaxies that entered the
level 3 box a few snapshots before our limit of 7~Gyr, and have
experienced no merger since then, hence have a high
accretion fraction.
However, they are likely to have undergone mergers before entering the
level 3 box.
}
\section{Downsizing}
\label{sec:downsiz}
\begin{figure*}[ht!]
\begin{center}
\includegraphics[width=17cm]{img/star_age_clouds.eps}
\caption{\label{fig:downsiz}Median of the mean stellar age of the galaxies as a
function of the galaxy mass at the last output for level 3
($z=0.46$) and 2 ($z=0$).
The marker size is proportional to the logarithm of the mass.
In the right panel, the blue points correspond to galaxies
within the $R_3 = 8.56$~Mpc of the third level of zoom, and the
red points to galaxies outside this region.
The errorbars show the 15\textsuperscript{th}\xspace\ and 85\textsuperscript{th}\xspace\ percentiles.
}
\end{center}
\end{figure*}
We now study the mean stellar age of galaxies as a function of the
galaxy mass.
For this study, we use both zoom levels two and three because the level
three simulation stops before $z=0$, but has a higher mass resolution.
Figure~\ref{fig:downsiz} shows the median of the mean stellar age of
the galaxies as a function of the galaxy mass at the last output for
level 3 ($z=0.46$) and 2 ($z=0$).
The errorbars correspond to the 15\textsuperscript{th}\xspace{} and 85\textsuperscript{th}\xspace\ percentiles.
The differences between the blue dots in the two panels are then due
both to the lower resolution and to the temporal evolution.
It is interesting to see that in the second level of zoom, galaxies
outside the $R_3 = 8.56$~Mpc radius (i.e., red points) formed their
stars more recently than the ones within the box (blue points).
This is certainly due to the lower average density of the region of
the level 2 simulation that lies outside of the level 3 box.
We can see that at low masses, the dispersion is large: among the least
massive galaxies, some form their stars early, while others form them
late.
At each epoch of the universe, there is a large number of dwarfs,
whose stars are forming actively.
This behaviour is not seen for massive galaxies.
The scatter in the stellar age progressively reduces with time, as
mass increases.
The most massive galaxies form their stars at a precise epoch, 7--8
Gyr, which corresponds to half the age of the universe, depending slightly on
the level of resolution.
After this epoch, their star formation drops considerably, which may
be due to environmental effects that suppress the cold gas reservoirs.
This behaviour is compatible with both hierarchical structure formation,
since low-mass galaxies are the first to form and then be involved in
the formation of more massive galaxies, and the observed downsizing,
since the most massive galaxies are not observed to form stars at
$z=0$, but formed most of their stars when the universe was half of
its present age.
Today, star formation continues only in small galaxies, as was also
the case in the early universe.
\section{Discussion}
\label{discuss}
\subsection{Influence of the resolution}
\label{sect:resol}
An important question is how variation in the numerical resolution
influences our results.
Our study of the baryon phase evolution in
figure~\ref{fig:baryon_phase} provides first evidence of numerical
convergence, especially between levels 2 and 3.
To study the consistency in greater detail, we compared the mass assembly
history of several galaxies at zoom levels 2 and 3.
We identified these galaxies in these two levels of simulation, and,
applying the same algorithms, we compared their histories.
Since the mass threshold is the same, 64 particles, we expect that in
the level 2 zoom, fewer satellites are detected and sub-resolution
mergers play a role, thus the accretion fraction should be larger.
There are some galaxies for which the evolution is far from
complete at $t=9$~Gyr.
We take the example of galaxy 1 in figure~\ref{fig:gal1}.
Figure~\ref{fig:lev2} shows its mass assembly; the solid line is level
3 and the dashed line level 2.
We can see that the mass of the galaxy is almost similar in the two
zoom levels, but the mass of the satellites is lower in the level 2
zoom.
However, in the level 2 zoom, the galaxy is first detected at
$t=2.8$~Gyr, whereas in the level 3 one, it is detected at
$t=1.2$~Gyr, owing to a lack of resolution at level 2.
We found that the accretion fraction between $ t= 0$ and $t = 9.0$~Gyr
is 0.64 at level 3 and 0.81 at level 2.
However, since level 2 reached $z=0$, we were able to compute the accretion
fraction for the entire formation history of the galaxy.
We found that $f_\text{acc} = 0.53$, which is lower than the accretion
fraction computed until $t=9.0$~Gyr.
This is because this galaxy undergoes major mergers at $t\simeq
12$~Gyr as we can see in figure~\ref{fig:lev2}.
We also computed the formation time $t_\text{form} = 11.6$~Gyr,
i.e. the time at which the galaxy assembled half of its mass at $z =
0$, which we show as a vertical black line in the figure.
For this galaxy, owing to the late major merger, $t_\text{form}$ is
greater than 9.1~Gyr, which means that it had assembled less than half of
its final mass by then; however, most galaxies have a formation time that is lower
than 9.1~Gyr.
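The formation time used here can be computed from the mass history by simple interpolation; a sketch that assumes a monotonically growing mass history (names are illustrative):

```python
import numpy as np

def formation_time(t, mass):
    """Formation time: the epoch at which the galaxy first assembled
    half of its final mass, linearly interpolated between snapshots.
    `t` and `mass` are time-ordered mass-history arrays; a monotonic
    mass growth is assumed for this sketch."""
    target = 0.5 * mass[-1]
    i = np.argmax(np.asarray(mass) >= target)  # first snapshot above half
    if i == 0:
        return t[0]
    # Linear interpolation between snapshots i-1 and i.
    f = (target - mass[i - 1]) / (mass[i] - mass[i - 1])
    return t[i - 1] + f * (t[i] - t[i - 1])
```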
{
\subsection{Overcooling issue}
\label{sec:agn}
As pointed out in section~\ref{sec:ic}, no AGN feedback was included
in this series of simulations.
Active galactic nucleus feedback has been invoked to resolve the
over-cooling problem in massive galaxies.
This could be done in several ways, and the complete issue has not yet
been definitely settled.
At late times (low redshift), when large structures have formed, the
so-called ``radio mode'', associated with radio jets emitted by
supermassive black holes, is certainly efficient in preventing cooling
flows \citep{2006MNRAS.365...11C, 2008MNRAS.390.1399B}, but
its efficiency is believed to be local, and the
action of AGN at higher redshift (the quasar mode), even before the
formation of groups and clusters, is thought to be more effective in
preventing the over-cooling \citep[e.g.,][]{2011MNRAS.412.1965M}.
Studies taking into account AGN feedback
\citep[e.g., ][]{2011MNRAS.413..101G} obtain more realistic stellar
masses for massive haloes.
Supernova feedback is efficient for low-mass haloes, where kinetic energy
can be transferred into the gas, enabling it to escape the halo.
However, the present simulation is focused on a massive cluster, where
the escape velocity is so high that SNe feedback is insufficient to
expel the gas from the halo.
This results in the overcooling of baryons and eventually in very
high stellar masses in our more massive haloes (about
$10^{13}$~\ensuremath{\mathrm{M}_\odot}).
We believe that, although some results might suffer from overcooling,
our predictions should be robust at least for the range of lower mass
galaxies.
In a future paper, we will include various forms of AGN feedback and
address how our main conclusions should be modified as a consequence.
}
\subsection{Comparison with previous work}
We found that galactic mass assembly is dominated by gas accretion
rather than by mergers, even though we might be unable to detect
mergers of low mass satellites.
However, we do not expect them to add a significant
contribution~(\citealt{2002ApJ...571....1M},
\citetalias{2005MNRAS.363....2K}).
We note that we cover a comparable volume, with a comparable resolution
to \citetalias{2005MNRAS.363....2K},
although the main advantage of our simulation is that we are able to
simulate a rich environment starting from a cosmological box.
Our results are in good agreement with
\citetalias{2005A&A...441...55S}, who found a typical accretion
fraction of 70\%.
We also appear to agree with \citet{2009MNRAS.399..650S}, who studied
the mass growth of central and satellite galaxies.
However, we do not have the same definition of central galaxies and
satellites.
They call a central galaxy the most massive galaxy at the centre of a
FOF-halo, and all other galaxies lying in the halo or one of its
subhaloes are called satellites.
As a consequence, ``central'' galaxies of subhaloes that are
still accreting their own satellites are, under their definitions,
themselves considered satellites.
\citet{2011MNRAS.tmp..662N} studied the satellite loss mass in a SPH
simulation of a Milky-Way type galaxy and its satellites through
UV-ionisation, ram pressure stripping, stellar feedback, and tidal
stripping.
They found that tidal stripping significantly reduces the
mass of bright satellites, which end up less massive than some dark
ones, and that stellar feedback mostly affects medium-mass
satellites.
\Citet{2011MNRAS.tmp..554V} performed a similar study using the
Gadget-3 TreeSPH OWLS simulations.
They computed the accretion rate onto both dark matter haloes and
galaxies, and found that the cold accretion dominates the mass
assembly.
However, we note that we have not separated gas accretion into
cold and hot components, and have considered here only
accretion as opposed to mergers.
\citet{2011MNRAS.417.2982F} also found that the contribution of mergers
to mass assembly is minor, and confined to high mass haloes.
They studied in detail the fate of accreted gas, through cosmological
hydrodynamical simulations.
They confirmed that galaxies accrete mostly warm and hot gas above a
critical halo mass of $3 \times
10^{11}$~\ensuremath{\mathrm{M}_\odot}{} at $z=0$ as proposed by \citet{2003MNRAS.345..349B},
and that the fraction of cold gas accretion increases with redshift.
Their variation of the SNe feedback and of the efficiency of galactic
winds demonstrated that these essentially unknown parameters can decouple the
gas accretion from the star formation rate, and efficiently decrease
the baryon fraction in low-mass haloes.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8.5cm]{img/mhist_2level.eps}
\caption{\label{fig:lev2} Mass assembly history of galaxy 1, same legend as
figure~\ref{fig:mass_hist}. Solid line: level 3 zoom; dashed line: level
2 zoom.}
\end{center}
\end{figure}
\smallskip
Interestingly, \citet{2010ApJ...725.2312O} studied
stellar assembly, distinguishing between ``in situ'' stars that were
formed within the galaxy, and ``ex situ'' stars that were formed in
another galaxy before entering the current one.
Although these definitions differ somewhat from ours, they are closely
related, since ``in situ stars'' are locally formed from cold gas, and
``ex situ'' are assembled by mergers.
They found that ``in situ'' stars dominate low mass galaxies at
earlier times, while massive galaxies are dominated by ``ex situ''
star accretion, and in their case the formation of ``in situ'' stars
occurs as the result of the accretion of cold flows.
We agree, at least qualitatively, with their evidence of downsizing
and observe a similar trend in mean stellar age as a function of
galaxy mass, although they have a smaller scatter in the mean stellar
age.
\citet{2011A&A...533A...5C} performed a similar analysis using
semi-analytical methods.
They found that galaxies less massive than $10^{11}\,\ensuremath{\mathrm{M}_\odot}$ assembled
most of their mass by accretion rather than by mergers.
We qualitatively agree with their results, even though our statistics
for massive galaxies are of insufficiently high quality to draw firm
conclusions.
Using high-resolution dark matter simulations (the Aquarius project),
\citet{2011MNRAS.413.1373W} also found that mergers with mass ratios
larger than 1:10 contribute very little to the DM mass assembly,
less than 20\%.
Most of the major merger contribution is confined to the central
parts of haloes, which does not represent the bulk of the mass.
This investigation can be extended to baryons, through semi-analytical
prescriptions \citep[e.g.][]{2000MNRAS.319..168C,
2006MNRAS.370..645B}, since the lowest mass haloes have a small
baryon fraction.
\smallskip
\section{Conclusions}
\label{conclu}
We have studied the accretion histories of 530 galaxies using multi-zoom
simulations, starting from a cosmological simulation and resimulating
three times smaller zones of interest at higher resolution.
We selected a dense region and detected {the hierarchy of
dark matter haloes and
subhaloes, as well as baryonic galaxies and satellites}, at
each timestep of the simulations, which enabled us to follow the
structures in time, and build merger trees.
We computed the mass assembled through both smooth gas accretion and
mergers, and we found that accretion plays a dominant role, at
least until $z\simeq 0.4$, the end of our highest zoom level
simulation.
Massive galaxies have a lower mass-accretion fraction.
Over all galaxies, about three-quarters of the mass is on average assembled
through smooth accretion, and one-quarter through mergers.
This is in agreement with previous studies examining the role of gas
accretion \citep[e.g.,][]{2002ApJ...571....1M, 2003MNRAS.345..349B,
2005A&A...441...55S,2005MNRAS.363....2K, 2009MNRAS.395..160K,
2009ApJ...694..396B, 2011MNRAS.417.2982F, 2011MNRAS.tmp..554V}, but
we have extended these previous analyses to achieve higher quality
statistics.
The main originality of this work lies in the use of multi-zoom
simulations, which allow us to simulate galaxies at fairly high resolution
in a cosmological context, and to study their mass assembly history.
It is worth noting that, even in quite dense environments, where
mergers are expected to occur, mass assembly is still dominated by
smooth accretion.
\smallskip
We have also studied the evolution of the mass functions of galaxies and DM
haloes, especially in dense environments.
The galaxy density in our final zoom level is intermediate between group and
cluster environments, and the most massive galaxies are spiral (and not
elliptical) at the end of the simulation ($z=0$ at level 2).
This evolution, especially for galaxies, clearly agrees with the
hierarchical model, with low-mass galaxies at high redshift and
massive ones at low redshift.
\smallskip
We have been able to match DM haloes and galaxies, finding no
galaxies without a halo.
In contrast, we did find some dark structures containing no galaxies,
although this is likely to be an artefact caused by a lack of
numerical resolution; at higher resolution, galaxies would probably
be detected in these structures.
We have studied the baryonic content of haloes and found that lower mass
haloes have smaller baryon fractions, as expected from the action of
stellar feedback.
{We have studied the evolution of the baryon phases in our simulations,
separating the baryons into different components, namely diffuse gas,
condensed gas, hot/warm-hot gas, and stars, and distinguishing between
the effects of environment and resolution.}
\smallskip
Finally, we have studied the mean stellar age of our galaxies, at both
$z=0.47$ and $z = 0$, and found evidence of downsizing: low mass
galaxies form stars at each epoch, whereas in massive galaxies, most
stars have formed when the universe was half its present age.
\smallskip
These results however suffer from the problem of overcooling that is
encountered at high masses.
The downsizing trend should remain detectable despite this problem,
although the accretion fraction and the gas content of haloes depend
on the adopted feedback recipes.
The results for the more massive galaxies should thus be taken
with caution.
\smallskip
We propose to develop further simulations to explore different physical
parameters such as feedback or star formation recipes in order to help
us understand their influence on gas accretion as advocated by
\citet{2011MNRAS.tmp..554V} and \citet{2011MNRAS.417.2982F}.
In this next series of simulations, we will be able to address the
role of feedback at various epochs in determining the accretion fraction by
comparing these simulations with SNe feedback only with our present results.
\smallskip
We note that our present study has been performed on a dense region,
and new multi-zoom simulations centred on less dense regions will be
analysed to help us ascertain the role of the environment.
The relative importance of mergers and diffuse accretion might indeed
depend on environment, as \citet{2007ApJ...654...53M} have shown with
dark-matter only simulations.
The time scale for the assembly of massive haloes is shorter in dense
environments, and the relative role of mergers appears to be higher.
Environments with a wide range of densities must therefore be explored
by the use of full simulations with baryons and feedback to perform a
census of mass assembly encompassing wide ranges of environments and
redshifts.
The geometry of gas accretion from filaments onto discs will also be
studied in more detail.
\begin{acknowledgements}
We thank Dylan Tweed for providing us with tools to build merger trees and
for help in the use of both AdaptaHOP and the merger trees, St\'ephane
Colombi for stimulating discussions on structure detection, and
Ana\"elle Hall\'e for her comments on the paper.
The simulations were performed at the CNRS supercomputing center at
IDRIS, Orsay, France.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{Intro}
While chiral perturbation theory ($\chi$pt) has a long
history\cite{Pagels}, modern applications have been driven by
the formulation given by Weinberg in 1979.\cite{Wphilo} Using
power counting techniques, Weinberg demonstrated that for the
most general, non--linear chiral lagrangian in the purely
mesonic sector a loop expansion can be systematically developed
{\it even} though such lagrangians are not renormalizable in the
traditional sense. Infinities generated by loops involving
terms of lower chiral power, a quantity which will be defined
shortly, are removed by terms of higher power in the lagrangian.
The systematics occur because higher power means higher order in
an expansion in terms of derivatives of the pion's field and the
pion's mass. Provided one restricts kinematically the
application of the theory to scales of the order of the pion's
mass, $m$, such an expansion has at least the hope of converging. The
expansion parameter naturally occurring in this loop expansion is
$(m/2 \pi f_\pi)^2$. Of course the introduction of
additional terms in the lagrangian requires additional
experimental information in order to fix the residual finite
piece of these higher power ``counter--terms''. The number of
independent experimental inputs increases rather rapidly with
the order of the loop expansion. For example, while the most general lowest
order chiral lagrangian in the mesonic sector, ${\cal L}_2$,
contains only two terms, there are ten independent terms at next
order, ${\cal L}_4$. Nevertheless, nontrivial predictions
follow once these new terms are determined. This program was
outlined by Weinberg in \cite{Wphilo}; its successful
implementation in the mesonic sector through the one loop level
was performed by Gasser and Leutwyler in their seminal papers of
the mid 1980s\cite{GassLeut}.
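With $f_\pi \simeq 92$~MeV the size of this expansion parameter is easy to check numerically: it is comfortably small for pion loops but much larger for kaon loops, which is the familiar caveat for the $SU(3)$ extension used later. The meson masses and decay constant below are standard values, inserted here purely for illustration.

```python
import math

f_pi = 92.4                        # MeV, pion decay constant (assumed value)
chi = 2 * math.pi * f_pi           # chiral scale 2*pi*f_pi, about 580 MeV

# loop expansion parameter (m / 2 pi f_pi)^2 for each Goldstone boson
eps = {name: (m / chi) ** 2 for name, m in
       [("pion", 139.6), ("kaon", 493.7)]}
# eps["pion"] is about 0.06; eps["kaon"] is about 0.7
```

The pion value suggests a well-behaved expansion, while the kaon value shows why convergence in the three-flavor sector cannot be taken for granted.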
The extension of these methods to the nucleon sector was first
attempted by Gasser, Sainio and Svarc\cite{GassNuc}. The
inclusion of baryons adds the nontrivial complication that the
nucleon mass $M$ is comparable to that of the typical chiral
scale $\chi \sim 2 \pi f_\pi$. A loop expansion, when
calculated with the full nucleon propagator~\cite{GassNuc},
inevitably contains terms proportional to $M/\chi$ and powers
thereof. Clearly one does not hope to form a convergent series
with such an expansion parameter. Nevertheless the leading
infrared, ($m^2 \rightarrow 0$) nonanalytical behavior of
the graphs did appear in \cite{GassNuc} to be systematically
correlated with the loop expansion. The authors of
\cite{GassNuc} thus conjectured that this pattern would continue to all orders
in the loop expansion suggesting that such an expansion, if
organized properly, would be useful.
Weinberg~\cite{Wcount} introduced the notion of chiral power.
A general $2N$ baryon legged graph is assigned the chiral power
$\nu$ given by the expression
\begin{equation}
\nu = 2 - N + 2L + \Sigma_i V_i (d_i + \frac{1}{2}n_i - 2),\label{powerc}
\end{equation}
in which $L$ is the number of loops, $V_i$ is the number of
vertices of type $i$ characterized by $d_i$ derivatives or
factors of $m$ and $n_i$ number of nucleon fields. The
systematic expansion required that the nucleon be considered
nonrelativistic. To the extent that all relevant momenta are
of the order of the pion's mass, this constraint is consistent
with the entire program of chiral perturbation theory.
Weinberg's scheme validated the conjecture of
Ref.~\cite{GassNuc}. We note that we use Eq. (\ref{powerc}) in all
further discussions to label the power of any particular graph.
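Eq.~(\ref{powerc}) is purely combinatorial and can be encoded directly. As a check, the one-loop self-energy graph considered later has $N=1$ (two baryon legs), $L=1$, and two lowest-order meson--baryon vertices with $d_i=1$ and $n_i=2$, giving $\nu=3$. The helper below is a sketch of the counting rule only (the function name and argument layout are ours):

```python
def chiral_power(N, L, vertices):
    """Weinberg chiral power of a 2N-baryon-legged graph, Eq. (1).
    vertices: list of (V_i, d_i, n_i) -- multiplicity, number of
    derivatives or factors of m, and number of nucleon fields."""
    return 2 - N + 2 * L + sum(V * (d + n / 2 - 2) for V, d, n in vertices)

# one-loop self-energy: two lowest-order vertices with d=1, n=2
nu = chiral_power(N=1, L=1, vertices=[(2, 1, 2)])   # gives 3
```

This reproduces the ``one-loop-${\cal O}(p^3)$'' labelling used throughout the text.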
Subsequent work of Weinberg\cite{WNNforce}
and others\cite{Kolck1} have focussed on the NN force.
By applying techniques developed for heavy quark
physics\cite{HeavyEff,pedagogy} to the baryon sector, Jenkins
and Manohar\cite{JenkMan1} formalized the nonrelativistic
treatment of the nucleon and made systematic counting of chiral
power possible. All terms proportional to the nucleon's mass
are absent by construction, and the loop expansion in terms of
momentum and the pion's mass is realized.
The success of the chiral perturbation theory in the nucleon
sector relies on a double expansion: a chiral expansion in
$1/\chi$, and the heavy baryon expansion in $1/M_B$. Among
graphs with the same number of $\pi N$ vertices these two
expansions are distinct in terms of the parameters of the QCD
lagrangian. The chiral expansion is based on the mass of the
light quarks $m_u, m_d, m_s \rightarrow 0$, while the heavy
baryon expansion can be associated with the limit of large $N_c$
among these graphs.
The first comprehensive application of chiral perturbation
theory to the problem of octet and decuplet baryon masses is due
to Jenkins~\cite{Jenkins}. She examined the question of why the
two well-known predictions, namely, the Gell-Mann Okubo~\cite{GMO}
(GMO) relation,
\begin{equation}
\frac34\,M_{\Lambda}+\frac14\,M_{\Sigma}-\frac12\,M_N-\frac12\,M_{\Xi}=0,
\label {SU3GMO}
\end{equation}
and the Decuplet Equal Spacing Rule~\cite{nobel} (DES),
\begin{eqnarray}
(M_{\Sigma^*} - M_{\Delta}) - (M_{\Xi^*}-M_{\Sigma^*})& =&
\nonumber \\
(M_{\Xi^*} - M_{\Sigma^*}) - (M_{\Omega^-}-M_{\Xi^*})& =&
\nonumber \\
\frac12\{(M_{\Sigma^*} - M_{\Delta}) -(M_{\Omega^-}-M_{\Xi^*})\} &&
\label {SU3DES}
\end{eqnarray}
work as well as they do despite
apparently large corrections coming from the one-loop-${\cal O}(p^3)$
level. The experimental value of the left-hand side of
Eq.~(\ref{SU3GMO}) is $6.5\,MeV$, which is $\sim3\%$ of the average
intra-multiplet splitting among the octets. The average
experimental value of the mass combinations in
Eq.~(\ref{SU3DES}) is $27\,MeV$ which is $~20\%$ of the average
intra-multiplet splitting among the decuplets. We remind the
reader that the two predictions above are based on the assumption
that the flavor symmetry breaking term in the lagrangian
transforms like the $\lambda_8$ member of an octet (which is true for QCD)
and that its effect may be derived perturbatively. Jenkins went
up to ${\cal O}(p^4)$ level by
inserting octet and decuplet sigma terms in the loop diagrams
and stressed the importance of these terms in explaining the
surprising success of GMO and DES.
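The quoted GMO number is straightforward to reproduce from isospin-averaged octet masses; the input masses below are assumed, PDG-like values in MeV, used here only as an illustration.

```python
# isospin-averaged octet baryon masses in MeV (assumed inputs)
M_N, M_Lam, M_Sig, M_Xi = 938.9, 1115.7, 1193.2, 1318.3

# left-hand side of the Gell-Mann Okubo relation, Eq. (2)
gmo = 0.75 * M_Lam + 0.25 * M_Sig - 0.5 * M_N - 0.5 * M_Xi
# gmo comes out near 6.5 MeV, a few percent of the octet splittings
```

The smallness of this combination relative to the $\sim 200$~MeV intra-multiplet splittings is precisely what the chiral analysis must explain.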
In this paper we reexamine the application of chiral
perturbation theory to the problem of octet and decuplet baryon
masses. We use the heavy baryon formalism of Jenkins and Manohar
to include the decuplet field\cite{JenkMan2}, and also many of the
useful tables which appear in Ref.~\cite{Jenkins}.
We differ from Jenkins on two points, one major and one minor.
We also report a new result concerning $1/M_B$ corrections to the
heavy fermion theory. The three points are listed below.
\begin{enumerate}
\item As we will see later, divergences occur at the one-loop-${\cal O}(p^3)$
level when the internal baryon and the external baryon are in different
flavor multiplets. The resulting counter--terms are
combinations of flavor singlet and flavor octet
(specifically, the $\lambda_8$ member).
Nevertheless one can predict the results of
GMO and DES, because the flavor structure of the
counter-terms ensures that they do not contribute to these mass combinations.
We also note that the counter--terms have structures similar to those appearing
in ${\cal L}^0$ and ${\cal L}^1$, but have higher chiral power,
namely, ${\cal O}(p^2)$. When one goes to the one-loop-${\cal O}(p^4)$ level
by
inserting octet and decuplet sigma terms one
needs two types of counter--terms not present in
${\cal L}^0$ or ${\cal L}^1$. The wavefunction renormalization
counter--terms arising from Fig.~1a are ${\cal O}(p^2)$ flavor
octets. They contribute through diagrams containing one of these terms
and a sigma term separated by a baryon propagator. The net
effect belongs to flavor
$8\otimes8=1\oplus 8\oplus 8\oplus 10\oplus\bar{10}\oplus 27$ space and
contributes to GMO and DES. The vertex renormalization graphs shown in Fig.~1b
generate counter-terms of ${\cal O}(p^3)$ proportional
to the square of the quark mass matrix. Hence they also belong to flavor
$1\oplus 8\oplus 8\oplus 10\oplus\bar{10}\oplus 27 $ and will also contribute
to GMO and DES. The
most important point is that these counter-terms are not of
forms already present in ${\cal L}^0$ or ${\cal L}^1$.
Unlike multiplicatively renormalizable theories where one can
discuss various regularization schemes
(e.g. MS or $\overline{{\rm MS}}$) without changing
the underlying number of inherent parameters in the theory, these
counter--terms have residual finite pieces that require further experimental
input to determine.
We do not know the relevant coupling
constants and, hence, cannot predict the values of the GMO or
DES mass combinations. We need experimental values of these
combinations and other experimental data to determine the unknown
coupling constants. Thus we are forced to conclude that, so far
as GMO and DES are concerned, we at present have the power to predict
only up to the one-loop-${\cal O}(p^3)$ level and not beyond.
We calculate the left hand sides of
Eqs.~(\ref{SU3GMO}) and (\ref{SU3DES}) at the one-loop-${\cal
O}(p^3)$ level only. In principle, these results are thus contained, at
least in part (see point 2 below), in the results
of Ref.~\cite{Jenkins}, but not explicitly identified. It is
important to know what the results are at one-loop-${\cal
O}(p^3)$ level because we find that this is the limit of
predictability of chiral perturbation theory in the area of
baryon masses.
We note that the GMO results at the one-loop-${\cal O}(p^3)$
level have been published already by Bernard {\it et al}~\cite{BerMeis1}.
Similar results for the DES, contained here, are new.
\item The physical value of the mass difference between the decuplet
and octet baryons is $ M_{10} - M_{8} \approx 2 m$,
and it should share with $m$ the chiral power $1$. Then,
according to Eq.~(\ref{powerc}), the decuplet-octet mass
difference term in the lagrangian has chiral power $0$ and is
included in ${\cal L}^0_v$ of Jenkins~\cite{Jenkins}. The value of a
baryon-meson loop is expressed with the help of a function
$W(m,\delta,\mu)$ parametrized by the meson mass, $m$,
the renormalization scale, $\mu$, and the quantity $\delta$
defined below:
\begin{eqnarray}
{\rm Octet-octet}&&\,\,\,\,\,\,\delta=0, \nonumber \\
{\rm Octet-decuplet}&&\,\,\,\,\,\,\delta=M_{10}-M_8, \nonumber \\
{\rm Decuplet-octet}&&\,\,\,\,\,\,\delta=M_8-M_{10}, \nonumber \\
{\rm Decuplet-decuplet}&&\,\,\,\,\,\,\delta=0. \label{delta}
\end{eqnarray}
The first label on the left of each line is for the external leg
and the next one is for the internal leg.
This function, defined by Eqs.~(\ref{del0}), (\ref{mgtdel}) and (\ref{delgtm}),
has a branch point at $\delta= \mp m$, reflecting the instability of the
decuplet
(octet) to decay into an octet (decuplet) and a meson when the masses allow the
process. Because of the proximity of the
branch point and because both GMO and DES involve cancellation among
large quantities, we argue that the role of $\delta$ should not be
treated perturbatively~\cite{Jenkins}.
\footnote{There are circumstances where such a treatment is appropriate,
as in the case of isospin splittings discussed recently by Lebed\cite{Lebed}.}
It should be included to all
orders~\cite{gangof4}. In Table~1, which appears later in the paper and
justifies our argument, we show that the difference between the results of
perturbative and exact treatments of the $\delta$ term is of the order of the
experimental values of the left hand sides of
Eqs.~(\ref{SU3GMO}) and (\ref{SU3DES}).
We note that interesting questions concerning the two limits,
$\delta \rightarrow 0$ and $m \rightarrow 0$,
in the context of large $N_c$ have been discussed by Cohen and
Broniowski\cite{XptandNc}.
\item The leading $1/M_B$ correction to the heavy fermion theory
results is $\sim (m^4/M_B)$. We find that when the internal baryon
is a decuplet, the $1/M_B$ corrections to the one-loop result is
actually divergent.
Specifically, it has the form
$\sim (m^4/M_{10})(\frac{1}{\epsilon}-\gamma_E+{\rm ln}\,(4\pi))$.
This result has important implications. It means that one must add
counter-terms
$\sim (m^4/M_{10})$ to be fixed with the help of experimental data. It is not
possible to calculate the $1/M_B$ correction terms {\it ab initio}. We have
seen
earlier that counter-terms $\sim m^4$ are needed when one goes to the
one-loop-${\cal O}(p^4)$ level by inserting sigma terms into
one-loop-${\cal O}(p^3)$ graphs. Having the same flavor $SU(3)$ group
structure, both counter-terms will be determined together from the same
experimental information, namely, the octet and decuplet masses. We cannot
separate the contributions to the experimentally fixed counter-term from the
two mechanisms - $1/M_B$ corrections and sigma term insertions.
We discover an additional complication for future one-loop-${\cal O}(p^4)$
level calculations. The $\pi N\Delta$\cite{Nath1} and $\pi\Delta\Delta$
couplings each contain an additional term which is $1/M_B$ suppressed compared
to the term retained in the heavy fermion theory. These terms
contribute to the ultraviolet divergent term in $\frac{m^4}{M_{10}}$.
This by itself is not a matter of concern. As we have noted above, one can only
fix the strength of the total $\sim m^4$ counter-term and not the part coming
from $1/M_B$ effects. But these divergences are also accompanied by finite,
nonanalytic terms in $m/M_B$. Such terms must thus be calculated and
included in the
expressions for the baryon masses used to determine the counter-terms.
But it cannot be done without knowing the values of the secondary coupling
constants. As these coupling constants play their roles only when the $\Delta$
is
off its mass shell, fixing them from experiment in a credible manner
may prove to be a nearly impossible task. The point is illustrated by the work
of
Benmerrouche, Davidson, and Mukhopadhyay~\cite{nimai}.
They attempted to fix the secondary coupling constant $\alpha$,
defined later in Eq.~(\ref{theta}), which appears in the $\pi N\Delta$ interaction
and were able only to place its value within a rather broad range,
namely, $0.30\geq\alpha\geq -0.78$.
\end{enumerate}
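The nearness of the branch point invoked in point 2 above is easy to quantify with $\delta \simeq 226$~MeV (the value adopted later in the text) and standard meson masses (assumed values below): the pion loop lies beyond the branch point, i.e. the decuplet $\rightarrow$ octet $+\,\pi$ channel is open, while the kaon and eta loops lie below it.

```python
delta = 226.0   # MeV, average decuplet-octet splitting used in the text
mesons = {"pion": 139.6, "kaon": 493.7, "eta": 547.9}  # assumed masses, MeV

# delta/m measures how close each loop sits to the branch point of
# W(m, delta, mu) at delta = m; a ratio above 1 means the
# decuplet -> octet + meson channel is kinematically open
proximity = {name: delta / m for name, m in mesons.items()}
```

With ratios of order unity for every channel, no expansion in $\delta/m$ or $m/\delta$ is safe, which is why $\delta$ is kept to all orders here rather than treated perturbatively.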
The lowest order lagrangian depends upon four coupling
constants: $D$ and $F$ describe meson--octet couplings, ${\cal
C}$ baryon octet--decuplet couplings and, ${\cal H}$
meson--decuplet couplings. The values of the first three are
reasonably well determined. The GMO combination of masses
depends only on these quantities and, using the values of
Jenkins~\cite{Jenkins}, we obtain typically $9\,MeV$ while the
experimental value is $6.5\,MeV$. The value of the decuplet
spacing depends upon ${\cal H}$ which is difficult to determine
experimentally. If we choose to fit the average violation of the
decuplet equal spacing rule ($27\,MeV$) at the one-loop-${\cal O}(p^3)$ level
we obtain ${\cal H}^2 = 6.6$.
\section{Baryon Self--Energies}
\label{BSE}
\subsection{{\it The purely Octet sector}}
\label{Octet}
Up to ${\cal O}(p^3)$ the effective chiral lagrangian coupling
octet pseudoscalar mesons to octet baryons
is:\cite{pedagogy,Jenkins}
\begin{eqnarray}
{\cal L}_{eff} &=& {\cal L}_0^{\pi N} + {\cal L}_1^{\pi N} +
{\cal L}_2^{\pi \pi}
\nonumber\\
{\cal L}_0^{\pi N} &=& Tr \overline{B} (i\not\!\!D - M_B) B +
\nonumber\\
&&D Tr \overline{B} \gamma^\mu \gamma_5 \{A_\mu, B\} + F Tr
\overline{B} \gamma^\mu \gamma_5 [A_\mu,B] \nonumber\\
{\cal L}_1^{\pi N} &=& b_D Tr \overline{B} \{\xi^\dagger M
\xi^\dagger + \xi M \xi, B\}
\nonumber\\
&&+ b_F Tr \overline{B} [\xi^\dagger M \xi^\dagger + \xi M \xi,B]\nonumber\\
&&+ \sigma Tr M(\Sigma + \Sigma^\dagger) Tr \overline{B} B
\nonumber\\
{\cal L}_2^{\pi \pi} &=& \frac{f_\pi^2}{4} Tr \partial_\mu \Sigma
\partial^\mu \Sigma^\dagger
+ a Tr M(\Sigma + \Sigma^\dagger),
\label{lagfull}
\end{eqnarray}
\noindent in which,
\begin{eqnarray}
&\xi = e^{i \pi/f_\pi},
\hspace{.5in}
&\Sigma = \xi^2 = e^{i 2\pi/f_\pi},\nonumber\\
&V_\mu = \frac{1}{2}
[(\partial_\mu\xi) ^\dagger\xi + \xi^\dagger
(\partial_\mu \xi)],
&A_\mu = \frac{i}{2}[(\partial_\mu\xi) ^\dagger\xi - \xi^\dagger
(\partial_\mu \xi)] ,\nonumber\\
&D^\mu B = \partial^\mu B + [V^\mu, B].\hspace{.3in}&
\end{eqnarray}
The definitions of the mass matrix, $M$, and the octet meson and
baryon fields are, by now, standard, and are given
in Ref.~\cite{pedagogy,Jenkins}. Note that the subscripts on the
baryonic sector of ${\cal L}_{eff}$ refer to the chiral power
defined by Eq.~(\ref{powerc}).
The one loop nucleon self--energy, $\Sigma(p,M_B)$, is shown
diagrammatically in Fig.~2. The expression for $\Sigma(P,
M_B)$ on mass--shell is given by
\begin{eqnarray}
&&\Sigma(P, M_B) = \frac{i\beta}{2 f_\pi^2}\int \frac{d^4
k}{(2\pi)^4}\frac{\gamma_5 \not\!k (\not\!P + \not\!k + M_B)
\gamma_5 \not\!k}{(k^2 - m^2_\pi + i\eta)(2P \cdot k + k^2 + i\eta)}
\nonumber\\
&&= \frac{-i\beta}{2 f_\pi^2}\int \frac{d^4 k}{(2\pi)^4}
\frac{ (M_B + \not\!P) k^2}
{(k^2 - m^2_\pi + i\eta)(2P \cdot k + k^2 +
i\eta)},\label{selfE1}
\end{eqnarray}
where $\beta$ represents SU(3) algebra factors.
The heavy baryon result\cite{Jenkins} for $\Sigma(P,M_B)$ can be
obtained by introducing $P = m v$ and taking the $M_B
\rightarrow \infty$ limit of the
{\it integrand} in the above, whereby one obtains that
\begin{equation}
\Sigma(P, M_B \rightarrow \infty) =
\frac{-i\beta}{2 f_\pi^2}\int \frac{d^4 k}{(2 \pi)^4} \frac{\frac12(1 +
\not\!v) k^2}
{(k^2 - m^2_\pi + i\eta) (v \cdot k + i\eta)}.\label{mshift1}
\end{equation}
The same result is obtained by first reducing the effective
lagrangian ${\cal L}_{eff}$ in the heavy fermion limit in terms
of velocity fields $B_v$,\cite{pedagogy}
\begin{equation}
\frac{1}{2}(1 + \not\!v) B(x) = e^{-iM_B v\cdot x} B_v(x).\label{wvf}
\end{equation}
Any reference in ${\cal L}_{eff}$ to $M_B$ is thereby removed, so
that, for example, ${\cal L}_0^{\pi N}$ becomes\cite{JenkMan1}
\begin{eqnarray}
{\cal L}_{v}^0 &=& i Tr \overline{B}_v v\cdot D B_v +
\nonumber\\
&&2 D Tr \overline{B_v} S^\mu_v \{A_\mu, B_v\} +2 F Tr
\overline{B_v} S^\mu_v [A_\mu, B_v]. \label{Lheavy}
\end{eqnarray}
where $S^\mu_v$ is a spin factor defined in
Refs.\cite{JenkMan1,JenkMan2}. Observe that from ${\cal
L}^0_v$, the nucleon's propagator is given directly to be
$i/(v\cdot k + i\eta)$.
As in previous works,\cite{GassNuc,Jenkins} we use dimensional
regularization to evaluate all integrals. In the purely mesonic
sector it is well known that dimensional regularization, by not
introducing any additional mass parameters, avoids
complications\cite{Gerstein} in the path--integral arising from
the chiral--invariance of the measure. We know of no such
similar result involving the baryon sector, but find that the use
of alternative regularization schemes, such as a Euclidean cutoff
that introduces additional mass parameters, would complicate the
power counting result of Weinberg, Eq.~(\ref{powerc}). In
order to avoid these complications we use dimensional
regularization.
In Appendix A we present one method of evaluating Eq.
(\ref{mshift1}). One finds that the mass--splitting $\delta
M_B$ is given simply in the heavy baryon limit by
\begin{equation}
\delta M_B \Big|_{M_B \rightarrow \infty} = -\,\beta\, \frac{m^3}{16
\pi f^2_\pi} . \label{delmb}
\end{equation}
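Eq.~(\ref{delmb}) sets the scale of the nonanalytic corrections. Per unit group-theory factor $\beta$, and with an assumed $f_\pi \simeq 93$~MeV, the pion loop shifts masses by a few MeV while the kaon loop shifts them by hundreds of MeV, which is why the GMO and DES cancellations are delicate:

```python
import math

f_pi = 93.0   # MeV, decay constant (assumed value)

def m3_shift(m):
    """Magnitude of the nonanalytic shift m^3 / (16 pi f_pi^2),
    i.e. |delta M_B| per unit SU(3) factor beta, in MeV."""
    return m ** 3 / (16.0 * math.pi * f_pi ** 2)

shift_pi = m3_shift(139.6)   # a few MeV
shift_K  = m3_shift(493.7)   # a few hundred MeV
```

The steep $m^3$ growth between the pion and the kaon is the origin of the apparently large ${\cal O}(p^3)$ corrections whose cancellation in GMO and DES is examined in this paper.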
The wavefunction normalization, $Z_2$, is given by the following expression.
\begin{equation}
Z_2^{-1} =1 - \frac{ m^2}{8 \pi^2f^2_\pi} \left(
\frac{1}{\epsilon} -
\gamma_E + 1 + {\rm ln}(4\pi) -
{\rm ln}\frac{m^2}{\mu^2} \right).
\end{equation}
Clearly $Z_2$ requires renormalization which is accomplished
through counterterms of chiral power 2 in ${\cal L}_{eff}$. These
have been given by Lebed and Luty~\cite{LebedLuty}. These authors have
suggested that the wavefunction renormalization
counterterms may be absorbed by
redefining the baryon field. Although this is correct for the
$Tr\,\bar{B}iD\!\!\!\!/B$ term in the lagrangian, such a redefinition will
necessarily generate new interaction terms. For example,
otherwise charge conservation, which requires cancellation
between wavefunction renormalization and vertex renormalization
($Z_1=Z_2$), cannot be maintained. Thus the need to deal with
these counterterms cannot be avoided.
Use of only the logarithmic piece in $Z_2$ is
not sufficient~\cite{JenkMan1,Jenkins,JenkMan2,gangof4,ButSav}.
For the present case, by confining ourselves to only
the one-loop-${\cal O}(p^3)$ level, we avoid the
complications of the wavefunction
renormalization as well as the $1/M_B$
corrections discussed earlier.
We will now discuss the inclusion of the decuplet, which
involves its own unique features.
\subsection{{\it The Decuplet}}
The decuplet is included as a spin $3/2$ Rarita--Schwinger
field\cite{RaritaS} $\Delta^\mu$. On--shell, $\Delta^\mu$ obeys
the Dirac equation
\begin{equation}
(i \not\!\partial - M_{10}) \Delta^\mu = 0
\end{equation}
along with the constraints
\begin{eqnarray}
\gamma_\mu \Delta^\mu &=& 0,\nonumber\\
\partial_\mu \Delta^\mu &=& 0, \label{constraints}
\end{eqnarray}
which eliminate the spin $1/2$ components of the $\Delta^\mu$
field. The most general free lagrangian for $\Delta^\mu$ that
generates the Dirac equation of motion and the constraints
is\cite{Nath1,MoldC,nimai,Griegel}
\begin{eqnarray}
{\cal L}_{\Delta} &= -\overline{\Delta^\mu}[ (i \not\!\partial -
&M_{10}) g_{\mu \nu} + i A(\gamma_\mu \partial_\nu + \gamma_\nu
\partial_\mu)\nonumber\\
&&+\frac{1}{2}(3A^2 + 2A + 1) \gamma_\mu \partial^\alpha
\gamma_\alpha \gamma_\nu\nonumber\\
&&+M_{10}(3A^2 + 3A + 1)\gamma_\mu \gamma_\nu ] \Delta^\nu,
\label{deltafree}
\end{eqnarray}
where $A$ is an arbitrary (real) parameter subject to the one
requirement that $A \ne -1/2$. Taking $A = -1$ leads to the
most commonly used expression for the decuplet propagator,
\begin{eqnarray}
G^{\mu \nu} &=& \frac{1}{i}\frac{\not\!P + M_{10}}{P^2 - M_{10}^2 +
i\eta}
\left[ g^{\mu \nu} - \frac{1}{3} \gamma^\mu \gamma^\nu
-\frac{2}{3} \frac{P^\mu P^\nu}{M_{10}^2}\right.\nonumber\\ &&\left. +
\frac{P^\mu \gamma^\nu - P^\nu \gamma^\mu}{3 M_{10}} \right]. \label{propdec}
\end{eqnarray}
To leading order in the heavy baryon expansion,
where\cite{HeavyEff,JenkMan2} one takes $P = M_{8} v + k$, the
decuplet propagator becomes
\begin{eqnarray}
G_v^{\mu \nu} &=& \frac{1}{i}\frac{\frac{1}{2}(1 + \not{v})} {v\cdot
k - \delta + i\eta}
\left[ g^{\mu \nu} - \frac{1}{3} \gamma^\mu \gamma^\nu - \frac{2}{3}
v^\mu v^\nu\right.\nonumber\\ &&\left. + \frac{1}{3}(v^\mu \gamma^\nu -
v^\nu\gamma^\mu)
\right]\nonumber\\
&\equiv& \frac{1}{i}\frac{\frac{1}{2}(1 + \not{v})}{v\cdot k - \delta +
i\eta}P_v^{\mu \nu}.
\label{decprop}
\end{eqnarray}
The quantity $\delta$, which we take to be $226\,MeV$, is the mass
difference between the baryon octet and the baryon decuplet.
The constraints on the decuplet field in the heavy baryon theory have
been given by Jenkins and Manohar~\cite{JenkMan2}
\begin{eqnarray}
\gamma_\mu\Delta^\mu&=&0, \nonumber\\
v_\mu\Delta^\mu&=&0. \label{hfconst}
\end{eqnarray}
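The heavy-baryon constraints can be verified mechanically. The following is our own numerical sketch (not part of the paper's calculation): working in the Dirac representation and the rest frame $v^\mu=(1,0,0,0)$, it checks that both $v_\mu$ and $\gamma_\mu$ annihilate the propagator numerator $\frac12(1+\not\!v)P_v^{\mu\nu}$ of Eq.~(\ref{decprop}).

```python
import numpy as np

# Illustrative check: in the rest frame v = (1,0,0,0), the heavy-baryon
# numerator (1 + vslash)/2 * P_v^{mu nu} of Eq. (decprop) satisfies the
# constraints of Eq. (hfconst).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]])                  # gamma^0, Dirac basis
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
metric = np.diag([1.0, -1.0, -1.0, -1.0])
I4 = np.eye(4, dtype=complex)

v = np.array([1.0, 0.0, 0.0, 0.0])                    # v^mu (upper index)
vslash = sum(metric[m, m] * v[m] * gamma[m] for m in range(4))
Lam = (I4 + vslash) / 2                               # (1 + vslash)/2

# P_v^{mu nu} exactly as written in Eq. (decprop)
P = [[metric[m, n] * I4
      - gamma[m] @ gamma[n] / 3
      - 2.0 / 3.0 * v[m] * v[n] * I4
      + (v[m] * gamma[n] - v[n] * gamma[m]) / 3
      for n in range(4)] for m in range(4)]

# v_mu Lam P^{mu nu} = 0 and gamma_mu Lam P^{mu nu} = 0 for every nu
viol_v = max(np.abs(sum(metric[m, m] * v[m] * (Lam @ P[m][n])
                        for m in range(4))).max() for n in range(4))
viol_g = max(np.abs(sum(metric[m, m] * gamma[m] @ Lam @ P[m][n]
                        for m in range(4))).max() for n in range(4))
print(viol_v, viol_g)
```

Both violations vanish to machine precision, confirming that $P_v^{\mu\nu}$ projects out the spin-$1/2$ components.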
The most general, chirally invariant interaction lagrangian
involving decuplets, octet baryons and octet mesons is:
\begin{eqnarray}
{\cal L}^i &=& {\cal C} (\overline{\Delta^\mu} \Theta_{\mu \nu}
A^\nu B + h.c.) +{\cal H} \overline{\Delta^\mu} \gamma_\nu
\gamma_5 A^\nu \Delta_\mu\nonumber\\
&&+ \tilde{H} (\overline{\Delta^\mu} \gamma_\mu \gamma_5 A^\nu
\Delta_\nu + h.c.) \label{Lint}
\end{eqnarray}
where $\Theta_{\mu \nu}$ is given by\cite{Nath1,nimai}
\begin{equation}
\Theta_{\mu \nu} = g_{\mu \nu} + \alpha \gamma_\mu \gamma_\nu. \label{theta}
\end{equation} In the heavy fermion theory the last term vanishes
and the first two terms become the interaction terms~\cite{JenkMan2}:
\begin{equation}
{\cal L}_v^i = {\cal C} (\overline{\Delta^\mu} A^\mu B + h.c.) +
2{\cal H} \overline{\Delta^\mu} S_{v\nu} A^\nu \Delta_\mu. \label {LI10v}
\end{equation}
Using the decuplet propagator given by
Eq. (\ref{decprop}) and the constraints given by Eq.~(\ref{hfconst}),
one obtains for the mass--shift from Fig. (3)
\begin{eqnarray}
\delta M^\prime &=& \frac{-i3 \beta^\prime}{4 f_\pi^2}
\int \frac{d^4 k}{(2 \pi)^4} \frac{k_\mu k_\nu P_v^{\mu \nu}}
{(k^2 - m^2 + i\eta)(v \cdot k - \delta + i\eta)}, \nonumber \\
&=& \frac{\beta^\prime}{16 \pi f^2_\pi} \left[
\frac{-3 \delta}{2 \pi}
( m^2 - \frac23\delta^2 ) ( \frac{1}{\epsilon} - \gamma_E +
{\rm ln}(4\pi) - {\rm ln}\frac{m^2 }{\mu^2} ) \right. \nonumber\\
&-&\left. \frac{3 \delta m^2}{2 \pi} - \frac{2}{\pi}(m^2 -
\delta^2)^{3/2}
{\rm tan}^{-1}\frac{\sqrt{m^2-\delta^2}}{\delta} \right]
.\label{uvterms}
\end{eqnarray}
It is clear from Eq. (\ref{uvterms}), and as announced earlier, that upon
inclusion of the decuplet, the mass--shift requires renormalization. Two
types of counter-terms belonging to ${\cal L}_2^{\pi N}$ are needed,
one to cancel the divergence proportional to $\delta^3$ and the
other in $\delta m^2 $. The $\delta^3$ term turns into an
overall mass--shift when all {\it relevant} intermediate states
are summed over. The $\delta m^2 $ term is a sum of flavor singlet and
flavor octet. As noted earlier all counterterms (divergences) cancel
{\it exactly} in the mass combinations which appear in the
GMO and the DES. For completeness, the counter-terms are listed in Appendix B.
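As a consistency check on Eq.~(\ref{uvterms}), its finite ($\overline{MS}$-subtracted) part must reduce smoothly to the octet-loop form $-\beta^\prime m^3/(16\pi f_\pi^2)$ as $\delta\rightarrow 0$. The sketch below is our own numerical illustration; the values of $\beta^\prime$ and $f_\pi$ are placeholders that cancel in the relative comparison.

```python
import math

f_pi = 0.0924   # GeV, placeholder value; drops out of the relative comparison

def dM_finite(m, delta, mu=1.0, beta=1.0):
    """MS-bar finite part of Eq. (uvterms): drop 1/eps - gamma_E + ln(4 pi)."""
    pref = beta / (16.0 * math.pi * f_pi**2)
    log_piece = (3.0 * delta / (2.0 * math.pi)) \
        * (m**2 - 2.0 * delta**2 / 3.0) * math.log(m**2 / mu**2)
    rest = -3.0 * delta * m**2 / (2.0 * math.pi) \
        - (2.0 / math.pi) * (m**2 - delta**2)**1.5 \
        * math.atan(math.sqrt(m**2 - delta**2) / delta)
    return pref * (log_piece + rest)

m = 0.495                                        # kaon mass in GeV
limit = -m**3 / (16.0 * math.pi * f_pi**2)       # octet-loop (delta = 0) form
val = dM_finite(m, delta=1e-7)
rel = abs(val - limit) / abs(limit)
print(rel)
```

All $\delta$-linear pieces vanish in the limit, and the arctangent supplies the factor $\pi/2$ that restores the familiar $m^3$ term.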
\section{$1/M_B$ Corrections}
If in subsection~\ref{Octet}, instead of adopting the heavy fermion theory,
we had evaluated Eq.~(\ref{selfE1}) we would have obtained the
following expression for the octet self-energy contribution coming from a
loop containing an octet baryon internal line:
\begin{eqnarray}
\delta M_B &=& \frac{\beta}{16 \pi f^2_\pi}
\left[\frac{M_B^3}{\pi}\left(\frac{1}{\epsilon} - \gamma_E + {\rm ln}\,(4\pi) +
1
- {\rm ln}M_B^2\right)\right.\nonumber\\ &&+\frac{M_B m^2}{
\pi}\left(\frac{1}{\epsilon}
- \gamma_E + {\rm ln}\,(4\pi) + 2 - {\rm ln}M_B^2\right)\nonumber\\
&&\left.-m^3\left(1 - \frac{m}{\pi M_B}\left[1 + {\rm ln}
\frac{M_B}{m}
\right]\right)+\cdots\right].
\label{fullmass}
\end{eqnarray}
The ultraviolet divergences proportional to $M^3_B$ and $M_B$
are those first noted by Gasser, Sainio and Svarc\cite{GassNuc}.
The additional divergent terms obtained by letting $M_B\rightarrow\infty$
have identical flavor structure. Observe that no nonanalytic behavior
in $m_\pi$ is thus lost in the $M_B\rightarrow\infty$ limit.
The new information contained in Eq.~(\ref{fullmass}) is the $1/M_B$
correction term in the last line. The contribution of the correction term to
GMO is $\sim40\%$ of that of the $m^3$ term. Thus it is quite substantial.
Unfortunately, it is not enough to include finite, chirally nonanalytic terms
like these to obtain the leading $1/M_B$ corrections.
The reason is that a one-loop graph containing a decuplet internal line gives a
divergent contribution proportional to $m^4/M_B$. These divergences arise from
the presence of
the $-\frac23\frac{P^\mu P^\nu}{M^2_{10}}+
\frac{P^\mu \gamma^\nu - P^\nu \gamma^\mu}{3 M_{10}} $
terms in the decuplet propagator given by Eq.~(\ref{propdec}).
Within a loop $P=p+k$, where $p$ is the external momentum and
$k$ the loop momentum. The appearance of extra powers of the loop
momentum is responsible for the divergent $1/M_B$ terms.
The interaction terms
${\cal C} \alpha (\overline{\Delta^\mu} \gamma_\mu\gamma_\nu A^\nu B
+ h.c.)$ and
$\tilde{H} (\overline{\Delta^\mu} \gamma_\mu \gamma_5 A^\nu \Delta_\nu +
h.c.)$,
which appear in Eq.~(\ref{Lint}), also contribute to the divergent terms.
The combined results are shown below.
\begin{eqnarray}
\delta M_8 &:& \frac{-1}{64 f_\pi^2 \pi^2}
\left[\frac{(M_8+M_{10})}{M_{10}}(1+\alpha)(2-\alpha)-3 \alpha(1+\alpha)\right]
\nonumber\\
&&\times \sum_\lambda \,
\beta^\prime_8(\lambda)\frac{m^4_\lambda}{M_{10}}
\left[\frac{1}{\epsilon}-\gamma_E + {\rm ln} (4\pi)\right],\nonumber\\
\delta M_{10} &:& -\frac{3 [1 - 2 \tilde{H}/{\cal H} - 3 (\tilde{H}/{\cal H})^2]}
{320 f_\pi^2 \pi^2}\nonumber\\
&&\times \sum_\lambda \, \beta_{10}(\lambda)\frac{m^4_\lambda}{M_{10}}
\left[\frac{1}{\epsilon}-\gamma_E + {\rm ln} (4\pi)\right].
\end{eqnarray}
The presence of these counter-terms has four important consequences.
First, we can no longer calculate the $1/M_B$ corrections completely.
Second, the flavor structure of the divergences and, hence, of the
counter-terms
is identical to those appearing at the one-loop-${\cal O}(p^4)$ level due to
the
insertion of sigma terms into one-loop-${\cal O}(p^3)$ graphs.
The sum of the two groups of counter-terms has to be fixed with the help of
experimental data, which may include the octet and decuplet mass
splittings themselves.
Third, since the two share the same counter--term,
when one goes to the one-loop-${\cal O}(p^4)$ level in chiral perturbation
theory one must also include the leading $1/M_B$ corrections.
The finite, nonanalytic terms from both sources must be
regarded, {\it a priori}, as equally important. This leads us to the
next consequence.
The fourth consequence may prove to be a serious impediment to any
one-loop-${\cal O}(p^4)$ calculation. The $m^4/M_{10}$ divergent terms are
inevitably accompanied by nonanalytic terms,
e.g. $(m/M_{10})^4 {\rm ln}(m/M_{10})$ in
Eq.~(\ref{fullmass}). These terms must be calculated
and included in expressions used to fix the full $m^4$ counter-terms.
Unfortunately, these log terms depend on the quantities $\alpha$ and
$\tilde{H}$.
As stated earlier, these coupling constants affect physical results
only through the
role of virtual $\Delta$s. Until these constants can be reliably determined
from
experiment, we are at an impasse in the application of chiral
perturbation theory in the nucleon sector.
The anatomy of the $1/M_B$ divergences suggests that the pattern will persist
in chiral perturbation theory results involving more loops and powers. The $1/M_B$
correction to the heavy fermion theory result at any level will involve
divergent
terms with one more
chiral power, coming always from internal decuplet lines.
These divergences will always be inseparably entangled with chiral
divergences of the
same level in heavy fermion theory. Thus the $1/M_B$ correction terms cannot
be calculated without a complete calculation at the same level of chiral power.
It
must be noted that this requirement is imposed not merely by considerations of
consistency but by the appearance of divergences.
One is not left with any option in the matter.
\section{Mass Splittings}
Most of the algebraic quantities used in this section have appeared in
Refs.~\cite{Jenkins,gangof4,JenkManconf}. For the reader's convenience we
include the coefficients $\alpha_i$ and $\beta_i$ defined by
Jenkins~\cite{Jenkins}.
They appear in Tables~\ref{tablealp} and ~\ref{tablebet}.
Following the style of Ref. \cite{Jenkins} we write for the
mass, $M_i$, of the $i$th baryon through the one--loop order as
\begin{eqnarray}
M_i &=& M_B +\frac12(1 \mp 1)\delta + \alpha_i +
\alpha_i^\delta (\mu)
\frac{3 \delta\zeta}{32 \pi^2 f_\pi^2} \nonumber\\ &&-\Sigma_{\lambda
} \beta_i(\lambda) \frac{m_\lambda^3}{16 \pi f^2_\pi}
- \frac{\delta^3}{16 \pi^2 f^2_\pi} a_R^{\pm, \Delta}(\mu) \nonumber\\
&&- \Sigma_{\lambda} \beta^\prime_i(\lambda)
W(m_\lambda,\delta,\mu). \label{massformula}
\end{eqnarray}
In the second term the upper (lower) sign is for octet (decuplet) baryons.
The coefficients $\alpha_i$ come from ${\cal L}_1^{\pi
N}$. The term proportional to $\beta_i$ arises from the chiral
loops in Fig. 2 in which the propagating baryon is in the same
multiplet as the baryon $i$, while the term proportional to
$\beta^\prime_i$ arises from the loops in Fig. 3 where the
propagating baryon comes from the other multiplet. The
sum over $\lambda$ runs over $\pi$, $K$ and $\eta$ mesons. The
quantities $\alpha_i^\delta$ are obtained by adding a
superscript $\delta$ to each of the entries in
Table~\ref{tablealp}. The set $\{b_D^\delta, b_F^\delta, \cdots\}$,
thus generated, is defined in the appendix. $\zeta$ is the proportionality
constant in the GMOR\cite{GMOR} relation,
$\zeta = m^2_K/(m_s + \tilde{m}) = m^2_\pi/2\tilde{m}$.
The coefficents $a_R^{N, \Delta}(\mu)$ and
$\alpha_i^\delta (\mu)$ depend implicitly upon a
choice of scale, $\mu$. This scale appears explicitly in the function
$W(m,\delta,\mu)$\cite{BerMeis1,JenkManconf}. We
give below the expressions for the function for three cases of
interest:
\begin{eqnarray}
\delta=0,\,\,\,\, && W(m,\delta, \mu)
=\frac{1}{16 \pi f^2_\pi}m^3, \label{del0}\\
m>\mid\delta\mid,\,\,\,\, && W(m,\delta,\mu)
=\nonumber\\ &&\frac{1}{8 \pi^2 f^2_\pi}(m^2 - \delta^2)^{3/2}
{\rm tan}^{-1}\frac{ \sqrt{m^2-\delta^2}}{\delta} \nonumber\\
&&- \frac{3\delta}{32 \pi^2
f^2_\pi}\left(m^2-\frac{2}{3}\delta^2\right)
{\rm ln}\frac{m^2}{4\delta^2}\nonumber\\ &&-\frac{3\delta}{32
\pi^2
f^2_\pi}\left(m^2-\frac{2}{3}\delta^2\right)\,{\rm ln}\frac{4\delta^2}{\mu
^2},\label{mgtdel} \\
\mid\delta\mid>m,\,\,\,\, &&W(m,\delta,\mu) =\nonumber\\
&&\frac{-1}{16 \pi^2f^2_\pi}(\delta^2-m^2)^{3/2}
{\rm ln}\frac{\delta-\sqrt{\delta^2-m^2}}{
\delta+\sqrt{\delta^2-m^2}}\nonumber\\
&&- \frac{3\delta}{32 \pi^2
f^2_\pi}\left(m^2-\frac{2}{3}\delta^2\right)
{\rm ln}\frac{m^2}{4\delta^2}\nonumber\\ &&-\frac{3 \delta}{32
\pi^2
f^2_\pi}\left(m^2-\frac{2}{3}\delta^2\right)\,{\rm ln}\frac{4\delta^2}{\mu
^2},\label{delgtm}
\end{eqnarray}
When the SU(3) algebra factors are included
the $W$'s appear in the combination
\begin{eqnarray}
V(\delta)&=&\left\{-\frac14\,W(m_\pi,\delta, \mu)+\right.\nonumber\\
&&\left.W(m_K,\delta,\mu)-
\frac34\,W(m_\eta,\delta,\mu)\right\} \label{Vcomb}
\end{eqnarray}
In the present order of the chiral expansion the octet meson
mass squares are taken to be proportional to the masses of the
current quarks~\cite{GMOR}:
\begin{equation}
m^2_\eta = \frac{4}{3}m^2_K-\frac13\,m^2_\pi.
\label{etamass}
\end{equation}
We note from Eqs.~(\ref{del0}), (\ref{mgtdel}) and
(\ref{delgtm}) that the ${\rm ln}\,\mu^2$ term appears in
these expressions with factors which are linear in $m^2$.
Combining this fact with Eq.~(\ref{etamass}) it is easy to see
that $V( \delta)$ does not contain terms proportional to
${\rm ln}\,\mu^2$.
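This cancellation can also be confirmed numerically. The sketch below is our own illustration: it implements Eqs.~(\ref{del0})--(\ref{delgtm}) for $\delta\ge 0$ (with $f_\pi=92.4\,MeV$ and the meson masses as assumed inputs, which drop out of the comparison) and checks that $V(\delta)$ of Eq.~(\ref{Vcomb}) is independent of $\mu$.

```python
import math

f_pi = 0.0924    # GeV; placeholder value common to all terms

def W(m, delta, mu):
    """Eqs. (del0), (mgtdel), (delgtm), implemented for delta >= 0."""
    if delta == 0.0:
        return m**3 / (16.0 * math.pi * f_pi**2)
    common = -3.0 * delta / (32.0 * math.pi**2 * f_pi**2) \
        * (m**2 - 2.0 * delta**2 / 3.0) \
        * (math.log(m**2 / (4.0 * delta**2)) + math.log(4.0 * delta**2 / mu**2))
    if m > abs(delta):
        r = math.sqrt(m**2 - delta**2)
        return r**3 / (8.0 * math.pi**2 * f_pi**2) * math.atan(r / delta) + common
    r = math.sqrt(delta**2 - m**2)
    return -r**3 / (16.0 * math.pi**2 * f_pi**2) \
        * math.log((delta - r) / (delta + r)) + common

def V(delta, mu):
    m_pi, m_K = 0.140, 0.495
    m_eta = math.sqrt(4.0 / 3.0 * m_K**2 - m_pi**2 / 3.0)   # Eq. (etamass)
    return -0.25 * W(m_pi, delta, mu) + W(m_K, delta, mu) \
        - 0.75 * W(m_eta, delta, mu)

d = 0.226                                   # GeV
mu_dep = abs(V(d, mu=1.0) - V(d, mu=0.5))   # vanishes by Eq. (etamass)
print(mu_dep)
```

The same code also reproduces the smooth $\delta\rightarrow 0$ limit, Eq.~(\ref{del0}), since the two logarithms combine into $\ln(m^2/\mu^2)$ with a coefficient linear in $\delta$.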
{}From Eq.~(\ref{delgtm}) one finds that when $m\rightarrow\,0$
the quantity
\begin{eqnarray}
W(m,\delta,\mu)&&\simeq\,\frac{-1}{16\pi^2f^2_\pi}
\left[\frac32\delta(m^2-\frac23\delta^2)\,{\rm
ln}\frac{4\delta^2}{\mu^2}\right.
\nonumber\\
&&\left.
+\frac12\delta\,m^2-\frac{9}{16}\frac{m^4}{\delta}+\frac38\,\frac{m^4}
{\delta}\,{\rm ln}\frac{m^2}{4\delta^2}\right].
\end{eqnarray}
Thus it is perfectly well-behaved.
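The quoted small-$m$ expansion can be checked against the full expression, Eq.~(\ref{delgtm}). The following is our own numerical sketch; $f_\pi$ is a placeholder that cancels in the relative difference.

```python
import math

f_pi = 0.0924   # GeV; placeholder, cancels in the relative comparison

def W_full(m, delta, mu):
    """Eq. (delgtm), valid for |delta| > m (delta > 0 here)."""
    r = math.sqrt(delta**2 - m**2)
    return -r**3 / (16.0 * math.pi**2 * f_pi**2) \
        * math.log((delta - r) / (delta + r)) \
        - 3.0 * delta / (32.0 * math.pi**2 * f_pi**2) \
        * (m**2 - 2.0 * delta**2 / 3.0) \
        * (math.log(m**2 / (4.0 * delta**2)) + math.log(4.0 * delta**2 / mu**2))

def W_small_m(m, delta, mu):
    """The m -> 0 expansion quoted in the text."""
    return -1.0 / (16.0 * math.pi**2 * f_pi**2) * (
        1.5 * delta * (m**2 - 2.0 * delta**2 / 3.0)
        * math.log(4.0 * delta**2 / mu**2)
        + 0.5 * delta * m**2
        - 9.0 / 16.0 * m**4 / delta
        + 3.0 / 8.0 * m**4 / delta * math.log(m**2 / (4.0 * delta**2)))

delta, mu, m = 0.226, 1.0, 0.02   # GeV; m << delta
rel = abs(W_full(m, delta, mu) - W_small_m(m, delta, mu)) / abs(W_full(m, delta, mu))
print(rel)
```

The two expressions agree up to the neglected ${\cal O}(m^6/\delta^3)$ terms.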
Finally one obtains for the Gell--Mann Okubo mass relation the
expression
\begin{eqnarray}
\frac34\,M_\Lambda + &&\frac14\,M_\Sigma-\frac12\,M_N-\frac12\,M_\Xi
=\nonumber\\ &&\frac23 \, (D^2-3F^2)V(0)-\frac19\,{\cal C}^2
V(\delta),\label{GMOf}
\end{eqnarray}
and for the violations to the Decuplet Equal Spacing rule:
\begin{eqnarray}
&(M_{\Sigma^*} -& M_{\Delta}) - (M_{\Xi^*}-M_{\Sigma^*})
=\nonumber\\
&(M_{\Xi^*} -& M_{\Sigma^*}) - (M_{\Omega^-}-M_{\Xi^*})
=\nonumber\\
&\frac12\{(M_{\Sigma^*} -& M_{\Delta}) -(M_{\Omega^-}-M_{\Xi^*})\}
=\nonumber \\
&&\frac29{\cal C}^2 V(\delta)-\frac{20}{81}\, {\cal H}^2 V(0).
\label{DESf}
\end{eqnarray}
We remind the reader of our convention, Eq. (\ref{delta}), by which
$\delta$ is a negative quantity in Eq. (\ref{DESf}).
We also remind the reader that all counterterms have explicitly
cancelled in these two relations and that the relations are
independent of the scale $\mu$.
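For reference, the left-hand side of Eq.~(\ref{GMOf}) can be evaluated directly from measured masses. In the short sketch below the isospin-averaged baryon masses (in MeV) are our assumed inputs, not values taken from the paper; the result is consistent with the $6.5\,MeV$ quoted in the caption of Table~\ref{tablex}.

```python
# Experimental value of the GMO combination from isospin-averaged
# baryon masses in MeV (assumed inputs).
M_N = (938.27 + 939.57) / 2
M_Lam = 1115.68
M_Sig = (1189.37 + 1192.64 + 1197.45) / 3
M_Xi = (1314.86 + 1321.71) / 2

gmo = 0.75 * M_Lam + 0.25 * M_Sig - 0.5 * M_N - 0.5 * M_Xi
print(gmo)   # approximately 6.5 MeV
```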
Before discussing the numerical results following from
Eqs.~(\ref{DESf}) and (\ref{GMOf}) and their implications, we
comment on the accuracy of a perturbative evaluation of the
combination $V(\delta)$, Eq. (\ref{Vcomb}). The perturbative
value is obtained by expanding the combination in a power series
of $\delta$ and retaining only the terms independent of $\delta$
and linear in $\delta$. In Table~\ref{pert} we compare GMO and
DES using the exact and the perturbative values of
$V(\delta)$ using the parameter set of Ref. \cite{Jenkins}
given in set 1 of Table~\ref{tablex} below.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
Quantities&Exact&Perturbative\\ \hline $GMO(MeV)$&$10.0$&$-1.1$\\
\hline
$DES(MeV)$&$-4.2$&$24.2$\\
\end{tabular}
\end{center}
\caption{$GMO$ and $DES$ using exact and perturbative values of $V(\delta)$.}
\label{pert}
\end{table}
It is clear that the differences are comparable to the
experimental values of the mass combinations in the two cases.
Hence one cannot treat the effect of $\delta$ perturbatively.
There are certain difficulties in making numerical predictions at
the one loop level. As inputs we need the chiral limit values of
the parameters $D$, $F$, ${\cal C}$ and ${\cal H}$. Consistency
requires that these values be extracted from experimental data
by using chiral perturbation theory results calculated at the
one loop level. As we have discussed earlier, one loop
calculations inevitably lead to requiring new and undetermined
terms of chiral power $2$.
There are serious ambiguities of a different nature involving
the coefficients $\cal{C}$ and $\cal{H}$. The latter can be
determined only from the experimental value of the $\pi \Delta
\Delta$ vertex, etc. Needless to say,
no such data exists. Hence one must either rely on models or
use the results of chiral perturbation theory itself to fix
$\cal{H}$. One such approach has been pursued in Ref.
\cite{ButSavSp}, although without having included necessary
counter--terms.
The quantity $\cal{C}$ can be determined from the decay width of
the decuplets. In principle, we should use the value of
$\cal{C}$ in the chiral limit. Consistency requires that the
decay width be calculated in chiral perturbation theory at the
one loop level and compared with the experimental value to
extract its chiral limit.
We follow the strategy\cite{JenkMan2} of extracting ${\cal C}$
using the full (unapproximated) phase--space expression for the
decay width
\begin{equation}
\Gamma =
\frac{{\cal C}^2 \lambda^{3/2} (M_{10}^2 + M_{8}^2 - m^2 )}
{192 \pi f^2_\pi M_{10}^5}
\label{eval2}
\end{equation}
where $\lambda$ is the usual phase--space factor
\begin{equation}
\lambda = M_{10}^4 + M_{8}^4 + m^4 - 2 M_{8}^2 M_{10}^2 -
2 m^2 M^2_{8} -2 m^2 M^2_{10}.
\end{equation}
The average value obtained is ${\cal C}^2=2.56$\cite{JenkMan2}.
One should note an issue which arises from the use of the heavy
baryon limit. The latter gives the formula:
\begin{equation}
\Gamma = 2 Im\{M_{10}\} =
\frac{{\cal C}^2 (\delta^2 - m^2 )^{3/2}}{12 \pi f^2_\pi}.
\label{eval1}
\end{equation}
For example, for the case of $\Delta\rightarrow N + \pi$, using
$\delta = 292\,MeV$ and $\Gamma=120\,MeV$, one obtains from
Eq. (\ref{eval1}) that ${\cal C}^2 = 1.2$, while from Eq.
(\ref{eval2}) that ${\cal C}^2 = 2.2$. The difference between
these two evaluations, nearly a factor of two, arises from what
are formally $1/M_B$ corrections (Eq. (\ref{eval1}) is indeed
the $M_B \rightarrow \infty$ limit of (\ref{eval2})). They are
nevertheless not small and would have to also be borne in mind
when going to higher power.
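Both statements are convention-independent and easy to verify. The sketch below is our own illustration; the absolute normalization of ${\cal C}^2$ depends on the conventions for $f_\pi$ and the couplings, which we do not fix here, so only the $M_B\rightarrow\infty$ limit and the near-factor-of-two ratio are checked (both independent of $f_\pi$).

```python
import math

def width_full(C2, M10, M8, m, f):
    """Eq. (eval2): full phase-space decay width (masses in GeV)."""
    lam = (M10**2 - M8**2)**2 - 2.0 * m**2 * (M10**2 + M8**2) + m**4
    return C2 * lam**1.5 * (M10**2 + M8**2 - m**2) \
        / (192.0 * math.pi * f**2 * M10**5)

def width_hb(C2, delta, m, f):
    """Eq. (eval1): heavy-baryon (M_B -> infinity) limit."""
    return C2 * (delta**2 - m**2)**1.5 / (12.0 * math.pi * f**2)

f_pi, m_pi, delta = 0.0924, 0.1396, 0.2931   # GeV; f_pi is an assumed input

# (i) Eq. (eval1) is the M_B -> infinity limit of Eq. (eval2)
M8 = 100.0
r_inf = width_full(1.0, M8 + delta, M8, m_pi, f_pi) \
    / width_hb(1.0, delta, m_pi, f_pi)

# (ii) at physical masses the two widths differ by roughly a factor of two,
# so the extracted C^2 values differ by about the same factor
r_phys = width_hb(1.0, delta, m_pi, f_pi) \
    / width_full(1.0, 1.232, 0.939, m_pi, f_pi)
print(r_inf, r_phys)
```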
We present in Table~\ref{tablex} values of the mass combinations
which appear in GMO and Decuplet Equal Spacing (DES) rules.
Several sets of parameters have been used for the purpose of
comparison.
\newpage
\begin{table}[tbp]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
Quantities&Set 1&Set 2&Set 3&Set 4&Set 5 \\ \hline
$D$&$0.61$&$0.56$&$0.61$&$0.61$&$0.61$\\ \hline
$F$&$0.40$&$0.35$&$0.40$&$0.40$&$0.40$ \\ \hline ${\cal
C}^2$&$2.56$&$2.56$&$2.2$&$1.2$&$2.56$ \\ \hline ${\cal
H}^2$&$3.61$&$3.61$&$6.6$&$3.61$&$3.61$ \\ \hline $m_\pi
(MeV)$&$140.$&$140.$&$140.$&$140.$&$0.$ \\ \hline
$GMO(MeV)$&$10.0$&$8.8$&$9.0$&$6.0$&$12.2$ \\ \hline
$DES(MeV)$&-4.2&-4.2&27&14.7&-3.9 \\
\end{tabular}
\end{center}
\caption{The value of $\delta$ is $226\,MeV$ throughout.
The experimental value of $GMO$ is $6.5\,MeV$
and the average value of the violation of $DES$ is $27\,MeV$. }
\label{tablex}
\end{table}
The parameters of set 1 are those used by
Jenkins~\cite{Jenkins}. The sets 2, 3 and 4 are designed to
show the dependence of the results on the parameters $D$, $F$,
${\cal C}$ and ${\cal H}$. The set 5 shows the effect of
zero pion mass, which comes almost entirely from the change
in the mass of $\eta$ as given by Eq.~(\ref{etamass}). It is
clear that unlike GMO, the violations to the DES are
particularly sensitive to the parameters ${\cal C}$ and ${\cal
H}$. Since ${\cal H}$ can only be experimentally extracted
through loop corrections, the importance of a consistent
one--loop evaluation of all the parameters entering the chiral
lagrangian must be emphasized. Set 3 contains a typical set of
parameters which fit the average violation to the DES rule.
\section{Conclusions}
We have calculated the combinations of baryon masses,
given by Eqs.~(\ref{SU3GMO}) and (\ref{SU3DES}), which appear in
the GMO and DES rules, at one--loop-${\cal O}(p^3)$ level in chiral
perturbation theory. At this level, these combinations depend neither on any
counterterms nor on a renormalization scale $\mu$. Some ambiguities
remain concerning the values of the coupling constants. We find that one
cannot calculate the mass combinations at one--loop-${\cal O}(p^4)$ level
because of the presence of undetermined counter--terms required to handle
ultraviolet divergences of two varieties.
One class of divergences arise from the chiral expansion and involves
both wavefunction and vertex renormalization. The other class of
divergences arise from the $1/M_B$ corrections of graphs
containing internal decuplet lines. The $1/M_B$ corrections
also give rise to terms which are finite but non-analytic in $m^2$.
Unfortunately they include a dependence on interactions which arise only when a
decuplet is off its mass shell. The associated coupling constants are
not known at present.
\bigskip
{\centerline{ACKNOWLEDGEMENTS}} Many thanks to Dave Griegel for
his discussions concerning the decuplet. We also thank Ulf-G.
Mei{\ss}ner for bringing to our attention the work of Bernard
{\it et al.}, Ref. \cite{BerMeis1}. This work was supported in
part by DOE Grant DOE-FG02-93ER-40762.
\newpage
\section{Appendix A}
It might be illuminating, especially for the issue of $1/M_B$
corrections, to describe one method of evaluation of the
integral in Eq. (\ref{mshift1}) using dimensional
regularization. An alternative approach, with of course the
same result, can be found in \cite{Jenkins}. Using standard
replacements for the nucleon propagator in terms of real and
imaginary parts, Eq. (\ref{mshift1}) can be rewritten as
\begin{eqnarray}
\delta M_B\left|_{M_B \rightarrow \infty} \right. &=& \frac{-i \beta}{2
f_\pi^2}
\int \frac{d^4 k}{(2 \pi)^4} \frac{k^2}
{(k^2 - m^2 + i\eta)} \times\nonumber\\ &&\left(P\frac{1}{v
\cdot k} - i \pi \delta(v
\cdot k)\right).
\label{mshift2}
\end{eqnarray}
Note that the integrand arising from the principal-value part
of the nucleon's propagator is odd under the transformation $k
\rightarrow -k$ and hence integrates to
zero. Working in the nucleon's rest frame, the $k_0$ integral
is used to integrate over the delta--function. Dimensional
regularization is then used for the remaining integrals over
space--like momenta. We thereby obtain that
\begin{eqnarray}
\delta M_B\left |_{M_B \rightarrow \infty}\right. &=& -\frac{\beta}
{4 f_\pi^2 (2 \pi)^3}
\int d^3 \vec{k} \frac{\vec{k}^2}{\vec{k}^2 + m^2 }\nonumber\\
&=& -\frac{\beta}{4 f_\pi^2 (2 \pi)^3} \int d^3 \vec{k}
\frac{(\vec{k}^2+m^2 ) - m^2 }{\vec{k}^2 + m^2 }\nonumber\\
&=& \frac{\beta m^2 }{4 f_\pi^2 (2 \pi)^3}\int \frac{d^3
\vec{k}}
{\vec{k}^2 +m^2 }\nonumber\\ &=& \frac{\beta m^2 }{32
\pi^3 f_\pi^2} \pi^{3/2}
\Gamma(-1/2) (m^2 )^{1/2}\nonumber\\
&=& -\frac{\beta m^3 }{16 \pi f_\pi^2}.\label{mshift3}
\end{eqnarray}
The fact that the $1/M_B$ corrections to this result (given in
Eq. (\ref{fullmass})) are small might have been anticipated
when noting that the singularity at $v \cdot k = 0$ in
(\ref{mshift2}) is not pinched.
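The $\Gamma(-1/2)$ step in Eq.~(\ref{mshift3}) can be verified mechanically; the trivial numerical sketch below is ours.

```python
import math

# In d = 3 the integral of 1/(k^2 + m^2) gives pi^(3/2) Gamma(-1/2) m,
# with Gamma(-1/2) = -2 sqrt(pi), so the numerical prefactor in
#   delta M = (beta m^2 / (32 pi^3 f^2)) * pi^(3/2) Gamma(-1/2) m
# collapses to -1/(16 pi), i.e. delta M = -beta m^3/(16 pi f_pi^2).
gamma_half = math.gamma(-0.5)            # equals -2 sqrt(pi)
pref = math.pi**1.5 * gamma_half / (32.0 * math.pi**3)
print(gamma_half, pref * 16.0 * math.pi)  # second number is approximately -1
```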
\section{Appendix B}
The factor $a_R^{N, \Delta}(\mu)$ is the residual
finite piece of the counterterm in ${\cal L}_2^{\pi N}$ used to
renormalize the infinity in Eq. (\ref{uvterms}) proportional to
$\delta^3$. Explicitly, these are:
\begin{eqnarray}
{\cal L}_2^{\pi N} &\ni& \frac{\delta^3}{16 \pi^2 f^2_\pi}
\left(\frac53 \kappa + a_R^N \right)
Tr \overline{B} B\nonumber \\ &&+\frac{\delta^3}{16 \pi^2
f^2_\pi}\left(\frac23 \kappa + a_R^\Delta
\right) \overline{\Delta} \Delta
\label{deltathreect}
\end{eqnarray}
where $\kappa$ is an ultraviolet divergent constant given by
\begin{equation}
\kappa = {\cal C}^2\left(- \frac{1}{\epsilon} + \gamma_E -
{\rm ln}(4\pi) \right).
\end{equation}
The counterterms from ${\cal L}_2^{\pi N}$ that renormalize the
infinities in Eq. (\ref{uvterms}) proportional to $\delta
m^2 $ are:
\begin{eqnarray}
{\cal L}_2^{\pi N} &\ni& \frac{3 \delta \zeta}{32 \pi^2 f_\pi^2}\left(
\frac{1}{3}\kappa^\prime +
b_D^\delta\right) Tr \overline{B} \{\xi^\dagger M \xi^\dagger +
\xi M
\xi, B\}\nonumber\\
&&+ \frac{3 \delta \zeta}{32 \pi^2 f_\pi^2}\left(
\frac{-5}{18}\kappa^\prime +
b_F^\delta\right) Tr \overline{B} [\xi^\dagger M \xi^\dagger +
\xi M
\xi, B]\nonumber\\
&&+ \frac{3 \delta \zeta}{32 \pi^2 f_\pi^2}\left(
\frac{-8}{9}\kappa^\prime +
\sigma^\delta\right) Tr M(\Sigma + \Sigma^\dagger) Tr
\overline{B} B\nonumber\\
&&+ \frac{3 \delta \zeta}{32 \pi^2 f_\pi^2} (\tilde{m}- m_s)
\left(\frac{-1}{27} \kappa^\prime \right)
Tr (\Sigma + \Sigma^\dagger) Tr \overline{B} B\nonumber\\ &&+
\frac{3 \delta \zeta}{32 \pi^2 f_\pi^2}\left( \frac{1}{6}\kappa^\prime +
c^\delta \right) \overline{\Delta}( \xi^\dagger M \xi^\dagger +
\xi M
\xi) \Delta\nonumber\\
&&- \frac{3 \delta \zeta}{32 \pi^2 f_\pi^2}\left(
\frac{-1}{6}\kappa^\prime +
\tilde{\sigma}^\delta \right) Tr M(\Sigma + \Sigma^\dagger)
\overline{\Delta} \Delta
\label{deltapict}
\end{eqnarray}
where the ultraviolet divergent constant $\kappa^\prime$ is
given by
\begin{equation}
\kappa^\prime= {\cal C}^2 \left( \frac{1}{\epsilon} - \gamma_E +
{\rm ln}(4\pi) + 1 \right).
\end{equation}
$\zeta$ is the proportionality constant in the
GMOR\cite{GMOR} relation, $\zeta = m^2_K/(m_s + \tilde{m}) =
m^2_\pi/2\tilde{m}$. The set $\{b_D^\delta, b_F^\delta, \cdots
\}$ which enter in the
definition of $\alpha_i^\delta$ in Eq. (\ref{massformula}), are
the residual finite pieces of these counterterms. Note that the
counterterms given in Eqs. (\ref{deltathreect}) and
(\ref{deltapict}) are the only terms from ${\cal L}_2^{\pi,N}$
that contribute to the baryon masses.
\section{Introduction}
\IEEEPARstart{I}{mage} super-resolution (SR) is an important computer vision task that aims at designing effective models to reconstruct the high-resolution (HR) images from the low-resolution (LR) images~\cite{freeman2000learning,guo2020closed,yang2019lightweight,yang2019multilevel,huang2019pyramid}.
Most SR models consist of two components, namely several upsampling layers that increase spatial resolution and a set of computational blocks (\mbox{\textit{e.g.}}, residual block) that increase the model capacity. These two kinds of blocks/layers often follow
a two-level architecture, where the network-level architecture determines the positions of the upsampling layers (\mbox{\textit{e.g.}}, SRCNN~\cite{dong2015image} and LapSRN~\cite{lai2017deep}) and the cell-level architecture controls the computation of each block/layer (\mbox{\textit{e.g.}}, RCAB~\cite{zhang2018image}).
In practice, designing deep models is often very labor-intensive
and the hand-crafted architectures are often not optimal in practice.
Regarding this issue, many efforts have been made to automate the model designing process via Neural Architecture Search (NAS)~\cite{pham2018efficient}.
Specifically, NAS methods seek to find the optimal cell architecture~\cite{zoph2018learning,liu2018darts,pham2018efficient,guo2019nat,guo2020breaking} or a whole network architecture~\cite{zoph2016neural,cai2019proxylessnas,tan2019mnasnet,Cai2020Once}.
However, existing NAS methods may suffer from two limitations if we apply them to search for an optimal SR architecture.
First, it is hard to directly search for the optimal two-level SR architecture.
For SR models, both the cell-level blocks and network-level positions of upsampling layers play very important roles.
However, existing NAS methods only focus on one of the architecture levels. Thus, how to simultaneously find the optimal cell-level block and network-level positions of upsampling layers is still unknown.
Second, most methods only focus on improving SR performance but ignore the computational complexity.
{As a result, SR models are often very large and}
become hard to apply to real-world applications~\cite{zhuang2018discrimination,liu2020discrimination} when the computation resources are limited.
Thus, it is important to design promising architectures with low computation cost.
To address the above issues, we propose a novel Hierarchical Neural Architecture Search (HNAS) method to automatically design SR architectures. Unlike existing methods, HNAS simultaneously searches for the optimal cell-level blocks and the network-level positions of upsampling layers. Moreover, by considering the computation cost to build the joint reward, our method is able to produce promising architectures with low computation cost.
Our contributions are summarized as follows:
\begin{itemize}
\item We propose a novel Hierarchical Neural Architecture Search (HNAS) method to automatically design cell-level blocks and determine network-level positions of upsampling layers.
\item We propose a joint reward that considers both the SR performance and the computation cost of SR architectures. By training HNAS with such a reward, we can obtain a series of architectures with different performance and computation cost.
\item Extensive experiments on several benchmark datasets demonstrate the superiority of the proposed method.
\end{itemize}
\section{Proposed Method}
\begin{figure*}[!ht]
\centering
\includegraphics[scale=0.35]{merge.jpg}
\vspace{-0.3cm}
\caption{{The overview of the proposed HNAS method. (a) The architecture of the two-branch super network. The red line represents a searched model with the upsampling layer at a specific layer. $N$ and $M$ denote the number of layers before and after the upsampling blocks. (b) The proposed hierarchical controller.}}
\label{overview}
\end{figure*}
In this paper, we propose a Hierarchical Neural Architecture Search (HNAS) method to automatically design promising two-level SR architectures, \mbox{\textit{i.e.}}, with good performance and low computation cost.
To this end, we first define our hierarchical search space that consists of a cell-level search space and a network-level search space.
Then, we propose a hierarchical controller as an agent to search for good architectures. To search for promising SR architectures with low computation cost, we develop a joint reward by considering both the performance and computation cost. We show the overall architecture and the controller model of HNAS in Figure~\ref{overview}.
\begin{figure}
\centering
\includegraphics[width=0.75\linewidth]{SRNAS-B_1.png}
\caption{An example of DAG that represents a cell architecture.}
\label{fig:cell_example}
\end{figure}
\subsection{Hierarchical SR Search Space}
\label{sec41}
In general, SR models often consist of two components, namely several upsampling layers that increase spatial resolution and a series of computational blocks that increase the model capacity.
These two components form a two-level architecture,
where the cell-level identifies the computation of each block and the network-level determines the positions of the upsampling layers.
Based on the hierarchical architecture, we propose a hierarchical SR search space that contains a cell-level search space and a network-level search space.
\textbf{Cell-level search space.}
In the cell-level search space, as shown in Fig.~\ref{fig:cell_example}, we represent a cell as a directed acyclic graph (DAG)~\cite{pham2018efficient,liu2018darts,xu2020pcdarts,chen2019progressive}, where the nodes denote the feature maps in deep networks and the edges denote some computational operations, \mbox{\textit{e.g.}}, convolution.
In this paper, we define two kinds of cells: (1) the normal cell that controls the model capacity and keeps the spatial resolution of feature maps unchanged,
and (2) the upsampling cell that
increases the spatial resolution.
To design these cells, we collect two sets of operations that have been widely used in SR models. We show the candidate operations for both cells in TABLE~\ref{operation}.
For the normal cell, we consider seven candidate operations, including identity mapping, $3 \times 3$ and $5 \times 5$ dilated convolution, $3 \times 3$ and $5 \times 5$ separable convolution, up and down-projection block (UDPB)~\cite{haris2018deep}, and residual channel attention block (RCAB)~\cite{zhang2018image}.
For the upsampling cell, we consider 5 widely used operations to increase spatial resolution.
Specifically, there are 3 interpolation-based upsampling operations, including area interpolation, bilinear interpolation \cite{dong2015image}, nearest-neighbor interpolation \cite{dumoulin2016learned}. Moreover, we also consider 2 trainable convolutional layers, namely the deconvolution layer (also known as transposed convolution)~\cite{zeiler2011adaptive} and the sub-pixel convolution \cite{shi2016real}.
Based on the candidate operations, the goal of HNAS is to select the optimal operation for each edge of DAG and learn the optimal connectivity among nodes (See more details in Section~\ref{sec:controller}).
\begin{table}[t]
\centering
\caption{Candidate operations for normal cell and upsampling cell.}
\begin{tabular}{l|l}
\toprule
Normal Cell/Block & Upsampling Cell/Block \\
\hline
$\bullet \text{ identity (skip connection)}$ &
$\bullet \text{ area interpolation}$ \\
$\bullet \text{ 3} \times \text{3 dilated convolution}$ &
$\bullet \text{ bilinear interpolation}$ \\
$\bullet \text{ 5} \times \text{5 dilated convolution}$ &
$\bullet \text{ nearest-neighbor interpolation}$ \\
$\bullet \text{ 3} \times \text{3 separable convolution}$ &
$\bullet \text{ sub-pixel layer}$ \\
$\bullet \text{ 5} \times \text{5 separable convolution}$ &
$\bullet \text{ deconvolution layer}$ \\
$\bullet \text{ up and down-projection block}$~\cite{haris2018deep} & \\
$\bullet \text{ residual channel attention block}$~\cite{zhang2018image} & \\
\bottomrule
\end{tabular}
\label{operation}
\end{table}
\textbf{Network-level search space.}
Note that the position of the upsampling block/layer plays an important role in both the performance and the computation cost of SR models. Specifically, if we put the upsampling block in a very shallow layer, the spatial resolution of the feature maps would increase too early and hence significantly increase the computational cost of the whole model. By contrast, if we put the upsampling block in a deep layer, there would be few or no layers to process the upsampled features, and hence the computation to obtain high-resolution images may be insufficient, leading to suboptimal SR performance. Regarding this issue, we seek to find the optimal position of the upsampling block for different SR models.
{To this end, we design a two-branch super network whose architecture is fixed through the whole search process. As shown in Fig.~\ref{overview}, there are two kinds of cells (\mbox{\textit{i.e.}}, normal cell and upsampling cell) at each layer. Given a specific position of the upsampling cell, we set the selected layer to the upsampling cell and set the others to the normal cells. In this way, the model with a specific position of the upsampling cell becomes a sub-network of the proposed super network.}
Let $N$ and $M$ denote the number of layers before and after the upsampling blocks.
Thus, there are $L=M+N+1$ blocks in total.
We will show how to determine the position of the upsampling layer in Section~\ref{sec:controller}.
\subsection{Hierarchical Controller for HNAS}\label{sec:controller}
Based on the hierarchical search space, we seek to search for the optimal cell-level and network-level architectures.
Following~\cite{zoph2016neural,pham2018efficient}, we use a long short-term memory (LSTM)~\cite{hochreiter1997long} as the controller to produce candidate architectures (represented by a sequence of tokens~\cite{zoph2016neural}).
Regarding the two-level hierarchy of SR models, we propose a hierarchical controller to produce promising architectures.
Specifically, we consider two kinds of controllers, including a cell-level controller that searches for the optimal architectures for both normal block and upsampling block, and a network-level controller that determines the positions of upsampling layers.
\textbf{Cell-level controller.}
We utilize a cell-level controller to find the optimal computational DAG with $B$ nodes (See example in Fig.~\ref{fig:cell_example}).
In a DAG, the input nodes $-2$ and $-1$ denote the outputs of the second nearest and the nearest preceding cells of the current block, respectively.
The remaining $B {-} 2$ nodes are intermediate nodes, each of which also takes two previous nodes in this cell as inputs.
For each intermediate node, the controller makes two kinds of decisions: 1) which previous node should be taken as input and 2) which operation should be applied to each edge.
All of these decisions can be represented as a sequence of tokens and thus can be predicted using the LSTM controller~\cite{zoph2016neural}.
After repeating $B {-} 2$ times, all of the $B {-} 2$ nodes are concatenated together to obtain the final output of the cell, \mbox{\textit{i.e.}}, the output node.
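The decisions above can be serialized as a flat token sequence. The sketch below (the token layout, function name, and defaults are our illustrative assumptions, not the authors' code) shows how such a sequence could be decoded back into a cell DAG:

```python
def decode_cell(tokens, B=6):
    """Decode a flat token sequence into a cell DAG: each of the B-2
    intermediate nodes consumes two (input_node, op_id) pairs.
    Node ids -2 and -1 would refer to the outputs of the two preceding cells."""
    it = iter(tokens)
    dag = []
    for node in range(B - 2):  # intermediate nodes, in prediction order
        inputs = [(next(it), next(it)) for _ in range(2)]
        dag.append((node, inputs))
    return dag
```

For $B = 4$, the controller emits $2 \times 4 = 8$ tokens, i.e., two (input, operation) pairs per intermediate node.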
\textbf{Network-level controller.}
Once we have the normal block and upsampling block, we seek to further determine where we should put the upsampling block to build the SR model.
Given a model with $L$ layers, we predict the position, \mbox{\textit{i.e.}}, an integer ranging from 1 to $L$, where we put the upsampling block. Since such a position relies on the design of both normal and upsampling blocks, we build the network-level controller that takes the embeddings (\mbox{\textit{i.e.}}, hidden states) of two kinds of blocks as inputs to determine the position.
Specifically, let $h_N$ and $h_U$ denote the last hidden states of the controllers for normal block and upsampling block, respectively.
We concatenate these embeddings as the initial state of the network level controller (See Fig.~\ref{overview}(b)). Since the network-level controller considers the information of the architecture design of both the normal and upsampling blocks, it becomes possible to determine the position of the upsampling block.
\begin{algorithm}[t]
\caption{\small{Training method for HNAS.}}
\begin{algorithmic}[1]\small
\REQUIRE The number of iterations $T$, learning rate $\eta$, shared parameters $w$, controller parameters $\theta$.
\STATE {Initialize $w$ and $\theta$.}
\FOR{$i{=}1$ to $T$}
\STATE // \emph{Update $w$ by minimizing the training loss}
\FOR{each iteration on training data}
\STATE Sample $\alpha \sim \pi(\alpha;\theta)$;
\STATE $w \gets w - \eta \nabla_{w} \mathcal{L}(\alpha,w)$. \\
\ENDFOR
\STATE // \emph{Update $\theta$ by maximizing the reward}
\FOR{each iteration on validation data}
\STATE Sample $\alpha \sim \pi(\alpha;\theta)$;
\STATE $\theta \gets \theta + \eta \mathcal{R}(\alpha) \nabla_\theta\log \pi(\alpha;\theta,\Omega_i)$; \\
\ENDFOR
\ENDFOR
\end{algorithmic}
\label{algorithm1}
\end{algorithm}
\subsection{Training and Inference Methods}
To train HNAS, we first propose the joint reward to guide the architecture search process. Then, we depict the detailed training and inference methods of HNAS.
\textbf{Joint reward.}
Designing promising architectures with low computation cost is important for real-world SR applications.
To this end, we build a joint reward by considering both performance and computation cost to guide the architecture search process.
Given any architecture $\alpha$, let ${\rm PSNR}(\alpha)$ be the PSNR performance of $\alpha$, ${\rm Cost}(\alpha)$ be the computation cost of $\alpha$ in terms of FLOPs (\mbox{\textit{i.e.}}, the number of multiply-add operations). The joint reward can be computed by
\begin{equation}
\mathcal{R}(\alpha) = \lambda * \text{PSNR}(\alpha) - (1 - \lambda) * \text{Cost}(\alpha),
\label{eq:reward}
\end{equation}
{where $\lambda$ controls the trade-off between the PSNR performance and the computation cost. Such a trade-off exists when there is a limited budget of computation resources and we can adjust $\lambda$ in the proposed joint reward function to meet different requirements of real-world applications. In general, a larger $\lambda$ makes the controller pay more attention to improving the PSNR performance but regardless of the computation cost. By contrast, a smaller $\lambda$ makes the controller focus more on reducing the computation cost.}
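As a concrete check of the joint reward in Eq.~\eqref{eq:reward}, a minimal sketch (function and argument names are ours):

```python
def joint_reward(psnr, cost, lam):
    """Joint reward: lam * PSNR(alpha) - (1 - lam) * Cost(alpha).
    A larger lam favors PSNR; a smaller lam favors low computation cost."""
    return lam * psnr - (1.0 - lam) * cost
```

For example, with $\lambda = 0.9$ a high-PSNR architecture is rewarded almost regardless of its FLOPs, whereas $\lambda = 0.2$ penalizes cost heavily.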
\textbf{Training method for HNAS.}
With the joint reward, following~\cite{zoph2016neural,pham2018efficient}, we apply the policy gradient \cite{williams1992simple} to train the controller. We show the training method in Algorithm~\ref{algorithm1}.
To accelerate the training process, we adopt the parameter sharing technique~\cite{pham2018efficient}, \mbox{\textit{i.e.}}, we construct a large computational graph, where each subgraph represents a neural network architecture, hence forcing all architectures to share the parameters.
Let $\theta$ and $w$ denote the parameters of the controller and the shared model parameters, respectively. The goal of HNAS is to learn an optimal policy $\pi(\cdot)$ and produce candidate architectures by conducting sampling $\alpha {\sim} \pi(\alpha)$.
To encourage exploration, we introduce an entropy regularization term into the objective.
{In this way, we can train as diverse architectures as possible in the super network and thus alleviate the unfairness issue~\cite{chu2019fairnas} that some sub-networks (or candidate operations) may be over-optimized while the others are under-optimized.}
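The controller update in Algorithm~\ref{algorithm1} is a REINFORCE-style ascent step; the sketch below also includes the entropy bonus mentioned above (the entropy weight $\beta$ and all names are our illustrative assumptions):

```python
import numpy as np

def controller_update(theta, grad_log_pi, reward, grad_entropy,
                      eta=0.01, beta=1e-3):
    """One policy-gradient step on the controller parameters:
    theta <- theta + eta * (R(alpha) * grad log pi(alpha; theta)
                            + beta * grad H(pi)).
    The entropy term encourages exploration over candidate architectures."""
    return theta + eta * (reward * grad_log_pi + beta * grad_entropy)
```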
\begin{table*}[t!]
\centering
\caption{Comparisons with the state-of-the-art methods based on $\times$2 super-resolution task. Results marked with ``$^\dagger$'' were obtained by training the corresponding architectures using our setup. ``-'' denotes the results that are not reported.}
\begin{tabular}{c|c|ccccc}
\toprule
Model & \#FLOPs (G)
& \tabincell{c}{SET5 \\ PSNR / SSIM}
& \tabincell{c}{SET14 \\ PSNR / SSIM}
& \tabincell{c}{B100 \\ PSNR / SSIM}
& \tabincell{c}{Urban100 \\ PSNR / SSIM}
& \tabincell{c}{Manga109 \\ PSNR / SSIM}
\\
\hline
Bicubic & - & 33.65 / 0.930 & 30.24 / 0.869 & 29.56 / 0.844 & 26.88 / 0.841 & 30.84 / 0.935 \\
SRCNN \cite{dong2015image} & 52.7 & 36.66 / 0.954 & 32.42 / 0.906 & 31.36 / 0.887 & 29.50 / 0.894 & 35.72 / 0.968 \\
VDSR \cite{kim2016accurate} & 612.6 & 37.53 / 0.958 & 33.03 / 0.912 & 31.90 / 0.896 & 30.76 / 0.914 & 37.16 / 0.974 \\
DRCN \cite{kim2016deeply} & 17,974.3 & 37.63 / 0.958 & 33.04 / 0.911 & 31.85 / 0.894 & 30.75 / 0.913 & 37.57 / 0.973 \\
DRRN \cite{tai2017image} & 6,796.9 & 37.74 / 0.959 & 33.23 / 0.913 & 32.05 / 0.897 & 31.23 / 0.918 & 37.92 / 0.976 \\
SelNet \cite{choi2017deep} & 225.7 & 37.89 / 0.959 & \textbf{33.61} / 0.916 & 32.08 / 0.898 & - & - \\
CARN \cite{ahn2018fast} & 222.8 & 37.76 / 0.959 & 33.52 / 0.916 & 32.09 / 0.897 & 31.92 / 0.925 & 38.36 / 0.976 \\
MoreMNAS-A \cite{chu2019multi} & 238.6 & 37.63 / 0.958 & 33.23 / 0.913 & 31.95 / 0.896 & 31.24 / 0.918 & - \\
FALSR \cite{chu2019fast} & 74.7 & 37.61 / 0.958 & 33.29 / 0.914 & 31.97 / 0.896 & 31.28 / 0.919 & 37.46 / 0.974 \\
Residual Block~\cite{he2016deep}$^\dagger$ & 47.5 & 36.72 / 0.955 & 32.20 / 0.905 & 31.30 / 0.888& 29.53 / 0.897 & 33.36 / 0.962\\
RCAB~\cite{zhang2018image}$^\dagger$ & 84.9 &37.66 / 0.959& 33.17 / 0.913 & 31.93 / 0.896& 31.19 / 0.918 & 37.80 / 0.974\\
Random$^\dagger$ & 111.7 & 37.83 / 0.959 & 33.31 / 0.915 & 31.98 / 0.897 & 31.42 / 0.920 & 38.31 / 0.976 \\
\hline
HNAS-A ($\lambda=0.2$) & \textbf{30.6} & 37.84 / 0.959 & 33.39 / 0.916 & 32.06 / 0.898 & 31.50 / 0.922 & 38.15 / 0.976 \\
HNAS-B ($\lambda=0.6$) & 48.2 & 37.92 / 0.960 & 33.46 / 0.917 & 32.08 / 0.898 & 31.66 / 0.924 & 38.46 / 0.977 \\
HNAS-C ($\lambda=0.9$) & 83.6 & \textbf{38.11} / \textbf{0.964} & 33.60 / \textbf{0.920} & \textbf{32.17} / \textbf{0.902} & \textbf{31.93} / \textbf{0.928} & \textbf{38.71} / \textbf{0.985}\\
\bottomrule
\end{tabular}
\label{exp1}
\end{table*}
\textbf{Inferring Architectures.}
Based on the learned policy $\pi(\cdot)$,
we conduct sampling to obtain promising architectures. Specifically, we first sample several candidate architectures and then select the architecture with the highest validation performance. Finally, we build SR models using the searched architectures (including both the cell-level blocks and network-level position of upsampling blocks) and train them from scratch.
\section{Experiments}
\label{exp_set}
In the experiments, we use the DIV2K dataset \cite{timofte2017ntire} to train all the models and conduct comparisons on five benchmark datasets, including Set5~\cite{bevilacqua2012low}, Set14~\cite{zeyde2010single}, BSD100~\cite{martin2001database}, Urban100~\cite{huang2015single}, and Manga109~\cite{fujimoto2016manga109}.
We compare different models in terms of PSNR, SSIM, and FLOPs.
Please see more training details in the supplementary material.
We have made the code of HNAS available at \href{https://github.com/guoyongcs/HNAS-SR}{https://github.com/guoyongcs/HNAS-SR}.
\begin{figure}[t]
\centering
\includegraphics[width = 1\columnwidth]{nas-visual.jpg}
\caption{Visual comparisons of different methods for $2 \times$ SR.}
\label{exp2}
\end{figure}
\subsection{Quantitative Results}
In this experiment, we {consider three settings (\mbox{\textit{i.e.}}, $\lambda=\{0.2, 0.6, 0.9\}$)}
{and use HNAS-A/B/C to represent the searched architectures under these settings, respectively.}
We show the detailed architectures in the supplementary material.
Table~\ref{exp1} shows the quantitative comparisons for $2\times$ SR. Note that all FLOPs are measured based on a $3 \times 480 \times 480$ input LR image. Compared with the hand-crafted models, our models tend to yield higher PSNR and SSIM with fewer FLOPs.
Specifically, HNAS-A yields the lowest FLOPs but still outperforms a large number of baseline methods.
Moreover, when we gradually increase $\lambda$, HNAS-B and HNAS-C take higher computation cost and yield better performance.
These results demonstrate that HNAS can produce architectures with promising performance and low computation cost.
\subsection{Visual Results}
To further show the effectiveness of the proposed method, we also conduct visual comparisons between HNAS and several state-of-the-arts.
We show the results in Fig.~\ref{exp2}.
As shown in Fig.~\ref{exp2}, the considered baseline methods often produce very blurry images with salient artifacts. By contrast, the models searched by HNAS are able to produce sharper images than other methods.
These results demonstrate the effectiveness of the proposed method.
\section{Conclusion}
In this paper, we have proposed a novel Hierarchical Neural Architecture Search (HNAS) method to automatically search for the optimal architectures for image super-resolution (SR) models. Since most SR models follow the two-level architecture design, we define a hierarchical SR search space and develop a hierarchical controller to produce candidate architectures. Moreover, we build a joint reward by considering both SR performance and computation cost to guide the search process of HNAS. With such a joint reward, HNAS is able to design promising architectures with low computation cost.
Extensive results on five benchmark datasets demonstrate the effectiveness of the proposed method.
\section*{Acknowledgement}
This work was partially supported by the Guangdong Basic and Applied Basic Research Foundation (No. 2019B1515130001), the Guangdong Special Branch Plans Young Talent with Scientific and Technological Innovation (No. 2016TQ03X445), the Guangzhou Science and Technology Planning Project (No. 201904010197) and Natural Science Foundation of Guangdong Province (No. 2016A030313437).
\bibliographystyle{IEEEtran}
\IEEEtriggeratref{22}
In order to simulate the distributed environment using a shared memory system,
we use threads to simulate the task nodes, and the number of threads is equal to
the number of tasks. As such, each thread is responsible for learning the model parameters of one task and communicating with the central node to achieve knowledge transfer, where the central node is simulated by the shared memory.
Though the framework can be used to solve many regularized MTL formulations,
in this paper we focus on one specific MTL formulation--the low-rank MTL for shared
subspace learning. In the formulation, we assume that all tasks learn a regression
model with the least squares loss $\sum_{t=1}^T \left\|x_{t}w_{t} -
y_{t}\right\|_{2}^{2}$. Recall that $x_{t},~n_{t},~x_{t,i}$, and $y_{t,i}$
denote the data matrix, sample size, $i$-th data sample, and $i$-th label of
the task $t$, respectively. The nuclear norm is used as the regularization
function that couples the tasks by learning a shared low-dimensional subspace,
which serves as the basis for knowledge transfer among tasks. The formulation
is given by:
\begin{align}
\min_{W} \left\{ \sum\nolimits_{t=1}^T \left\|x_{t}w_{t} - y_{t}\right\|_{2}^{2} + \lambda \|W\|_* \right\}.
\end{align}
As it was discussed in the previous sections, we used the backward-forward
splitting scheme for AMTL, since the nuclear norm is not separable. The
proximal mapping for nuclear norm regularization is given
by~\cite{ji2009accelerated,cai2010singular}:
\begin{equation}
\begin{split}
\mbox{Prox}_{\eta \lambda g} \left(\hat{{V}}^{k}\right) &= \sum_{i=1}^{\mbox{min} \left\{d,T\right\}} \mbox{max} \left(0, \sigma_{i} - \eta \lambda\right)u_{i}v_{i}^{\top} \\
&= U \left(\Sigma - \eta \lambda I\right)_{+} V^{\top}
\end{split}
\label{eq:soft_thresholding}
\end{equation}
where $\{u_i\}$ and $\{v_i\}$ are columns of $U$ and $V$, respectively, $\hat{{V}}^{k} = U \Sigma V^{\top}$ is the singular value decomposition
(SVD) of $\hat{{V}}^{k}$ and $\left(x\right)_{+} =
\mbox{max}\left(0,x\right)$. In every stage of the AMTL framework,
copies of all the model vectors are stored in the shared memory. Therefore,
when the central node is performing the backward step, it retrieves the current
versions of the models from the shared memory. Since the proposed framework is
an asynchronous distributed system, copies of the models may be changed by
task nodes while the central node is performing the proximal mapping. Whenever a task node completes its computation, it sends its model to the central
node for proximal mapping without waiting for other task nodes to finish their
forward steps.
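Eq.~\eqref{eq:soft_thresholding} can be implemented directly via an SVD; a minimal sketch (names are ours):

```python
import numpy as np

def prox_nuclear(V_hat, tau):
    """Proximal mapping of the nuclear norm (singular-value soft-thresholding):
    compute the SVD of V_hat, shrink each singular value by tau = eta * lambda,
    and reassemble U (Sigma - tau I)_+ V^T."""
    U, s, Vt = np.linalg.svd(V_hat, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```

Singular values below $\eta \lambda$ are zeroed out, which is what drives the model matrix toward a shared low-rank subspace.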
As it is seen in Eq. \eqref{eq:soft_thresholding}, every backward step
requires a singular value decomposition (SVD) of the model matrix. When the
number of tasks $T$ and the dimensionality $d$ are high, SVD is a
computationally expensive operation. Instead of computing the full SVD at
every step, online SVD can also be used~\cite{brand2003fast}. Online SVD updates $U,~V$, and $\Sigma$
matrices by using the previous values. Therefore, SVD is performed once
at the beginning and those left, right, and singular value matrices are used to
compute the SVD of the updated model matrix. Whenever a column of the model
matrix is updated by a task node, the central node computes the proximal mapping.
Therefore, instead of performing the full SVD every time, we can update
the SVD of the model matrix according to the changed column. When we need to
deal with a huge number of tasks and high dimensionality, online SVD can be used
to mitigate computational complexity.
\subsection{Comparison between AMTL and SMTL}
\subsubsection{Public and Synthetic Datasets}
In this section, the difference in computation times between AMTL and SMTL is investigated with a varying number of tasks, dimensionality, and sample sizes; note that AMTL and SMTL make nearly identical progress per iteration (every task node performs one forward step per iteration).
Synthetic datasets were randomly generated. In
Fig.~\ref{fig:async_vs_sync}, the computation time for a varying number of
tasks, sample sizes, and dimensionalities is shown. In Fig.~\ref{tasks}, the dimensionality of the dataset was chosen as $50$, and the
sample size of each task was chosen as $100$. As observed from
Fig.~\ref{tasks}, computation time increases with increasing
number of tasks because the total computational complexity increases. However, the increase is more drastic for
SMTL. Since each task node has to wait for other task nodes to finish their
computations in SMTL, increasing the number of tasks causes the algorithm to
take much longer than AMTL. For Fig.~\ref{tasks}, computation time still increases after $100$ tasks for AMTL because the number of backward steps increases as the number of tasks increases. An important point we should note is that the number of cores we ran our experiments was less than $100$. There is a dependecy on hardware.
\begin{figure*}[t!]
\centering
\subfloat[Varying number of tasks]{
\includegraphics[width=0.32\textwidth]{figures/async_vs_sync.png}
\label{tasks}}
\subfloat[Varying sample sizes in each task]{
\includegraphics[width=0.32\textwidth]{figures/async_vs_sync_tasksize.png}
\label{sizes}}
\subfloat[Varying dimensionalities of the model]{
\includegraphics[width=0.32\textwidth]{figures/async_vs_sync_dimensionality.png}
\label{dim}}
\caption{Computation times of AMTL and SMTL for a) varying number of tasks for $50$ dimensional $100$ samples in each task, b) varying number of task sizes with $50$ dimensions, and $5$ tasks, c) varying dimensionality of $5$ tasks with $100$ samples in each. SMTL requires more computational time than AMTL for a fixed number of
iterations.}
\label{fig:async_vs_sync}
\end{figure*}
In the next experiment, random datasets were generated with different sample
sizes, and the effect of varying sample sizes on the computation times of AMTL
and SMTL is shown in Fig.~\ref{sizes}. The number of
tasks was chosen as $5$, and the dimensionality was $50$. Increasing the
sample size did not cause abrupt changes in computation times for both
asynchronous and synchronous settings. This is because, for small sample sizes, computing the gradient has a cost similar to that of the proximal mapping; as the sample size increases, the gradient cost grows while the proximal-mapping cost remains unchanged.
However, AMTL is still faster than SMTL
even for a small number of tasks.
In Fig.~\ref{dim}, the
computational times of AMTL and SMTL for different dimensionalities is shown. As
expected, the time required for both schemes increases with higher
dimensionalities. On the other hand, we can observe from
Fig.~\ref{dim} that the gap between AMTL and
SMTL also increases. In SMTL, task nodes have to wait longer for the updates,
since the calculations of the backward and the forward steps are prolonged
because of higher $d$.
In Table~\ref{tab:summary}, computation times of AMTL and SMTL with different
network characteristics for synthetic datasets with a varying number of tasks
are summarized. Synthetic datasets were randomly generated with $100$ samples
for each task, and the dimensionality was set to $50$. The results are shown for
datasets with $5, 10$, and $15$ tasks. Similar to the previous
experimental settings, a regression problem with the squared loss and nuclear norm
regularization was taken into account. As it was shown, AMTL outperforms SMTL
at every dimensionality, sample size, and number of tasks considered here. AMTL also
performs better than SMTL under different communication delay patterns. In Table~\ref{tab:summary}, AMTL-5, AMTL-10, and AMTL-30 represent the AMTL framework
where the offset values of the delay were chosen as 5, 10, and 30 seconds.
To simulate different network settings in our experiments, an offset parameter was taken as an input from the user. This parameter represents an
average delay related to the infrastructure of the network. Then the amount of
delay was computed as the sum of the offset and a random value in each task
node. AMTL is shown to be more time efficient than SMTL under the same network
settings.
\begin{table}[!t]
\small
\caption{Computation times (sec.) of AMTL and SMTL with different network
characteristics. The offset value of the delay for AMTL-5, 10, 30 was chosen as 5, 10, 30 seconds. Same network settings were used to
compare the performance of AMTL and SMTL. AMTL performed better than SMTL for
all the network settings and numbers of tasks considered here.}
\label{tab:summary}
\centering
\begin{tabular}{| c | c | c | c |}
\hline
Network & 5 Tasks & 10 Tasks & 15 Tasks \\
\hline
AMTL-5 & 156.21 & 172.59 & 173.38 \\
\hline
AMTL-10 & 297.34 & 308.55 & 313.54 \\
\hline
AMTL-30 & 902.22 & 910.39 & 880.63\\
\hline
SMTL-5 & 239.34 & 248.23 & 256.94 \\
\hline
SMTL-10 & 452.84 & 470.79 & 494.13\\
\hline
SMTL-30 & 1238.16 & 1367.38 & 1454.57\\
\hline
\end{tabular}
\vspace{-0.07in}
\end{table}
The performance of AMTL and SMTL is also shown for three public datasets.
The number of tasks, sample sizes, and the dimensionality of the data sets are
given in Table~\ref{tab:table_datasets}. School and MNIST are commonly used
public datasets. School dataset has exam records of 139 schools in 1985, 1986,
and 1987 provided by the London Education Authority (ILEA) \cite{school}. MNIST is a popular handwritten digits dataset with $60,000$
training samples and $10,000$ test samples \cite{MNIST}. MNIST is prepared as $5$ binary
classification tasks such as $0$ vs $9$, $1$ vs $8$, $2$ vs $7$, $3$ vs $6$,
and $4$ vs $5$. Another public dataset used in the experiments was the Multi-Task
Facial Landmark (MTFL) dataset~\cite{MTFL}. In this dataset, there are
$12,995$ face images with different genders, head poses, and characteristics.
The annotations include five facial landmarks and the attributes of gender, smiling/not smiling, wearing glasses/not wearing glasses, and head pose. We designed four
binary classification tasks such as male/female, smiling/not smiling,
wearing/not wearing glasses, and right/left head pose. The logistic loss
was used for binary classification tasks, and the squared loss was used for the
regression task. Training times of AMTL and SMTL are given in Table
\ref{tab:table_computation_time}. When the number of tasks is high, the gap
between training times of AMTL and SMTL is wider. The training time of AMTL is
always less than the training time of SMTL for all of the real-world datasets
for a fixed amount of delay in this experiment.
\begin{table}[!t]
\small
\caption{Benchmark public datasets used in this paper.}
\label{tab:table_datasets}
\centering
\begin{tabular}{| c | c | c | c |}
\hline
Data set & Number of tasks & Sample sizes & Dimensionality \\
\hline
\hline
School & 139 & 22-251 & 28 \\
\hline
MNIST & 5 & 13137-14702 & 100 \\
\hline
MTFL & 4 & 2224-10000 & 10 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\small
\caption{Training time (sec.) comparison of AMTL and SMTL for public datasets.
Training time of AMTL is less than the training time of SMTL for real-world
datasets with different network settings.}
\label{tab:table_computation_time}
\centering
\begin{tabular}{| c | c | c | c |}
\hline
Network & School & MNIST & MTFL \\
\hline
\hline
AMTL-1 & 194.22 & 54.96 & 50.40 \\
\hline
AMTL-2 & 231.58 & 83.17 & 77.44 \\
\hline
AMTL-3 & 460.15 & 115.46 & 103.45 \\
\hline
SMTL-1 & 299.79 & 57.94 & 50.59 \\
\hline
SMTL-2 & 298.42 & 114.85 & 92.84 \\
\hline
SMTL-3 & 593.36 & 161.67 & 146.87 \\
\hline
\end{tabular}
\vspace{-0.06in}
\end{table}
Experiments show that AMTL is more time efficient than SMTL, especially,
when there are delays due to the network communication. In this situation, asynchronous
algorithm becomes a necessity, because network communication increases the
training time drastically. Since each task node in AMTL performs the backward-forward splitting steps and the variable updates without waiting for any other
node in the network, it outperforms SMTL for both synthetic and real-world
datasets with various numbers of tasks and network settings. Moreover, AMTL
does not need to carry the raw data samples to a central node. Therefore, it is
very suitable for private datasets located at different data centers compared
to many distributed frameworks in the literature. In addition to time efficiency, the convergence of AMTL and SMTL under the same network configurations is compared. In Fig.~\ref{fig:convergence}, convergence curves of AMTL and SMTL are given for a fixed number of iterations on synthetic data with $5$ and $10$ tasks. As seen in the figure, AMTL tends to converge faster than SMTL in terms of the number of iterations as well.
\vspace{-0.1in}
\begin{figure}[t!]
\vspace{-0.3in}
\centering
\includegraphics[width=0.4\textwidth]{figures/Conv_5_10.png}
\caption{Convergence of AMTL and SMTL under the same network configurations. The experiment was conducted on randomly generated synthetic datasets with $5$ and $10$ tasks. AMTL is not only more time efficient than SMTL, it also tends to converge faster than SMTL.}
\label{fig:convergence}
\end{figure}
\subsection{Dynamic step size}
In this section, we present the experimental result of the proposed dynamic
step size. A dynamic step size is proposed by utilizing the delays in the
network. We simulate the delays caused by the network communication by keeping
each task node idle for a while after it completes its forward step. The
dynamic step size was computed by Eq.~(\ref{eq:transformation}). In this
experiment, the average delay of the last 5 iterations was used to modify the step
size. The effect of the dynamic step size was shown for randomly generated
synthetic datasets with $100$ samples in each task and the dimensionality set to $50$. The objective values of each dataset with different numbers of tasks and
with different offset values were examined. These objective values were
calculated at the end of the total number of iterations and the final updated
versions of the model vectors were used. Because of the delays, some of the
task nodes have to wait much longer than other nodes, and the convergence slows
down for these nodes. If we increase the step size of the nodes which had to
wait for a long time in previous iterations, we can boost the convergence. The
experimental results are summarized in Tables
\ref{tab:dynamic1},~\ref{tab:dynamic2}, and
\ref{tab:dynamic3}. According to our observation, if we use dynamic step size,
the objective value decreases compared with the objective value of the AMTL
with constant step size. The convergence criterion was chosen as a fixed number of iterations, such as~$10$. As we can see from the tables, the dynamic step
size helps to speed up the convergence. Another observation is that objective
values tend to decrease with the increasing amount of delay, when the dynamic
step size is used. Although no theoretical results are available to quantify the dynamic step size in existing literature, the empirical results we obtained indicate that the dynamic step size is a very promising strategy and is especially effective when there are significant delays in the network.
\begin{table}[!t]
\vspace{-0.1in}
\caption{Objective values of the synthetic dataset with 5 tasks under different network settings.}
\small
\label{tab:dynamic1}
\centering
\begin{tabular}{| c | c | c |}
\hline
Network & Without dynamic step size & Dynamic step size \\
\hline
\hline
AMTL-5 & 163.62 & 144.83\\
\hline
AMTL-10 & 163.59 & 144.77\\
\hline
AMTL-15 & 163.56 & 143.82\\
\hline
AMTL-20 & 168.63 & 143.50\\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{Objective values of the synthetic dataset with 10 tasks. Objective values are shown for different network settings. Dynamic step size yields lower objective values at the end of the last iteration than fixed step size.}
\small
\label{tab:dynamic2}
\centering
\begin{tabular}{| c | c | c |}
\hline
Network & Without dynamic step size & Dynamic step size \\
\hline
\hline
AMTL-5 & 366.27 & 334.24\\
\hline
AMTL-10 & 367.63 & 333.71 \\
\hline
AMTL-15 & 366.26 & 333.12 \\
\hline
AMTL-20 & 366.35 & 331.13 \\
\hline
\end{tabular}
\end{table}
\begin{table}[!t]
\caption{Objective values of the synthetic dataset with 15 tasks. The difference between objective values of AMTL with and without dynamic step size is more visible when the amount of delay increases.}
\small
\label{tab:dynamic3}
\centering
\begin{tabular}{| c | c | c |}
\hline
Network & Without dynamic step size & Dynamic step size \\
\hline
\hline
AMTL-5 & 559.07 & 508.65 \\
\hline
AMTL-10 & 561.68 & 505.64 \\
\hline
AMTL-15 & 561.87 & 500.05\\
\hline
AMTL-20 & 561.21 & 499.97\\
\hline
\end{tabular}
\vspace{-0.06in}
\end{table}
\section{Introduction}
\label{sect:intro}
\input{intro}
\section{Related Work}
\label{sect:related-work}
\input{relatedwork}
\section{Asynchronous Multi-Task Learning}
\label{sect:method}
\input{method}
\section{Numerical Experiments}
\label{sect:exp}
\input{exp}
\section{Conclusion}
\label{sect:conclusion}
In conclusion, a distributed regularized multi-task learning approach is
presented in this paper. An asynchronous distributed coordinate update method
is adopted to perform full updates on model vectors. Compared to other
distributed MTL approaches, AMTL is more time efficient because task nodes do not need to wait for other nodes to perform the gradient updates.
A dynamic step size to boost the convergence performance is investigated by
scaling the step size according to the delays in the communication. Training
times are compared for several synthetic and public datasets, and the results
show that the proposed AMTL is faster than traditional synchronous MTL.
We also study the convergence behavior of AMTL and SMTL by comparing the precision of the two approaches.
We note that the current AMTL implementation is based on the ARock~\cite{Arock} framework, which largely limits our capability to conduct experiments for different network structures. As future work, we will develop
a standalone AMTL implementation\footnote{Available at \url{https://github.com/illidanlab/AMTL}} that allows us to validate AMTL in real-world network settings. A stochastic gradient approach will also be incorporated into the current distributed
AMTL setting.
\section*{Acknowledgment}
This research is supported in part by the National Science Foundation (NSF) under grant
numbers IIS-1565596, IIS-1615597, and DMS-1621798 and the Office of Naval Research (ONR) under grant number
N00014-14-1-0631.
\bibliographystyle{IEEEtran}
\subsection{Regularized multi-task learning}
In MTL, multiple related learning tasks are involved, and the goal of
MTL is to obtain models with improved generalization performance for {\it all} the tasks involved. Assume that we have $T$ supervised learning tasks, and for each
task, we are given the training data $\mathcal D_t = \{x_t, y_t\}$ of $n_t$ data
points, where $x_t \in \Real^{n_t \times d}$ contains the feature vectors of the
training data points and $y_t \in \Real^{n_t}$ contains the corresponding
responses. Assume that the target model is a linear model parametrized by the vector
$w_t \in \Real^d$ (in this paper we slightly abuse the term ``model'' to denote
the vector $w_t$), and we use $\ell_t(w_t)$ to denote the loss
function $\ell_t(x_t, y_t; w_t)$, examples of which include the least squares loss and the logistic loss. In addition, we assume that the tasks can be heterogeneous~\cite{yang2009heterogeneous},
i.e., some tasks can be regression problems while others are classification problems. In a
single learning task, we treat the task independently and minimize the
corresponding loss function, while in MTL, the tasks are related, and we hope
that by properly assuming task relatedness, the learning of one task (the
inference process of $w_t$) can benefit from other tasks~\cite{Caruana97}. Let $W = [w_1, \dots,
w_T] \in \Real^{d \times T}$ collectively denote the learning parameters from
all $T$ tasks. We note that simply minimizing the joint objective
$f(W) =\sum\limits_{t=1}^T \ell_t(w_t)$ cannot achieve the desired
knowledge transfer among tasks because the minimization problems are
decoupled for each $w_t$. Therefore, MTL is typically achieved by adding
a penalty term~\cite{Theo2004,zhou2011malsar,zhou2011clustered}:
\begin{align}
\min_{W} \left\{ \sum\nolimits_{t=1}^T \ell_t(w_t) + \lambda g(W) \right\}
\equiv f(W) + \lambda g(W), \label{eqt:mtl}
\end{align}
where $g(W)$ encodes the assumption of task relatedness and couples $T$ tasks,
and $\lambda$ is the regularization parameter controlling how much knowledge
is shared among tasks. In this paper, we assume that the loss function $f(W)$ is convex and $L$-Lipschitz differentiable with $L>0$, and that $g(W)$ is closed, proper, and convex.
One representative task relatedness is joint feature learning, which is
achieved via the grouped sparsity induced by penalizing the $\ell_{2,1}$-norm~\cite{JunLiu09}
of the model matrix $W$: $g(W) = \|W\|_{2,1} = \sum_{i=1}^d \|w^i\|_2$ where
$w^i$ is the $i$-th row of $W$. The grouped sparsity would encourage many
rows of $W$ to be zero and thus remove the effects of the corresponding features on the
predictions in linear models. Another commonly used MTL method is the shared
subspace learning~\cite{argyriou2008convex}, which is achieved by penalizing
the nuclear norm $g(W) = \|W\|_* = \sum_{i=1}^{\min(d, T)} \sigma_i(W)$, where
$\sigma_i(W)$ is the $i$-th singular value of the matrix $W$. Intuitively, a
low-rank $W$ indicates that the columns of $W$ (the models of tasks) are
linearly dependent and come from a shared low-dimensional subspace. The nuclear
norm is the tightest convex relaxation of the rank
function~\cite{fazel2001rank}, and the problem can be solved via proximal gradient
methods~\cite{ji2009accelerated}.
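As a concrete illustration of the first penalty, the $\ell_{2,1}$ norm admits a simple row-wise proximal operator: each row of $W$ is shrunk by group soft-thresholding. The sketch below is our own minimal NumPy illustration of this standard closed form, not the authors' implementation; the function names are ours.

```python
import numpy as np

def l21_norm(W):
    # Sum of the l2 norms of the rows of the d x T model matrix W.
    return np.sum(np.linalg.norm(W, axis=1))

def prox_l21(W, thresh):
    # Row-wise group soft-thresholding: the proximal operator of thresh * ||.||_{2,1}.
    # Rows with norm below thresh are zeroed, removing those features entirely.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return scale * W

W = np.array([[3.0, 4.0], [0.3, 0.4], [0.0, 0.0]])
P = prox_l21(W, 1.0)
# The first row (norm 5) shrinks but survives; the second row (norm 0.5 < 1) is zeroed.
```

Rows that survive the thresholding correspond to features shared across all tasks, which is exactly the grouped-sparsity effect described above.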
\vspace{-0.05in}
\subsection{(Synchronized) distributed optimization of MTL}
Because of the non-smooth regularization $g(W)$, the composite objectives in
MTL are typically solved via the proximal gradient based first order
optimization methods such as FISTA~\cite{beck2009fast},
SpaRSA~\cite{wright2009sparse}, and more recently, second order proximal
methods such as PNOPT~\cite{lee2014proximal}. Below we review the two key
computations involved in these algorithms:
\vspace{+0.03in}
\noindent {\it 1) Gradient Computation}. The gradient of the smooth component
is computed by aggregating gradient vectors from the loss function of each task:
\begin{align}
\nabla & f(W) =
\nabla \sum\nolimits_{t=1}^T \ell_t(w_t) = [\nabla\ell_1(w_1), \dots, \nabla\ell_T(w_T) ]. \label{eqt:pg_gradient}
\end{align}
\vspace{+0.03in}
\noindent {\it 2) Proximal Mapping}. After the gradient update, the next search
point is obtained by the proximal mapping which solves the following optimization problem:
\begin{align}
\Prox{\eta\lambda g}(\hat W) = \argmin_W \frac{1}{2\eta}\|W - \hat W\|_F^2 + \lambda g(W),
\label{eqt:pg_proximal}
\end{align}
where $\eta$ is the step size, $\hat W$ is computed from the gradient descent with step size $\eta$:
$\hat W = W - \eta \nabla f(W)$, and $\|\cdot\|_F$ is the Frobenius norm.
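Putting the two computations together, one iteration of a proximal gradient method alternates the gradient step of Eq.~(\ref{eqt:pg_gradient}) with the proximal mapping of Eq.~(\ref{eqt:pg_proximal}). The sketch below is a hedged NumPy illustration with least-squares losses and the nuclear-norm regularizer on toy data of our own; singular-value soft-thresholding is the standard proximal operator of the nuclear norm.

```python
import numpy as np

def grad_f(W, Xs, ys):
    # Per-task least-squares gradients, stacked column-wise into a d x T matrix.
    return np.column_stack([X.T @ (X @ W[:, t] - y) / len(y)
                            for t, (X, y) in enumerate(zip(Xs, ys))])

def prox_nuclear(W_hat, thresh):
    # Proximal operator of thresh * ||.||_*: soft-threshold the singular values.
    U, s, Vt = np.linalg.svd(W_hat, full_matrices=False)
    return U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

rng = np.random.default_rng(0)
d, T, n = 5, 3, 20
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
ys = [rng.standard_normal(n) for _ in range(T)]
W = np.zeros((d, T))
eta, lam = 0.1, 0.05
W_hat = W - eta * grad_f(W, Xs, ys)   # forward (gradient) step
W = prox_nuclear(W_hat, eta * lam)    # backward (proximal) step
```

Iterating these two lines is the basic template behind FISTA-style solvers for problem~(\ref{eqt:mtl}).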
When the size of data is big, the data is typically stored in different pieces
that spread across multiple computers or even multiple data centers. For MTL,
the learning tasks typically involve different sets of samples, i.e., $\mathcal
D_1, \dots, \mathcal D_T$ have different data points, and it is very common
that these datasets are collected through different procedures and stored at
different locations. It is not always feasible to transfer the relevant data pieces
to one center to provide a centralized data environment for optimization. A centralized data environment is difficult to achieve simply
because the datasets are often far too large for efficient network
transfer, and because of concerns such as network security and data privacy.
Recall the scenario in the introduction, where the learning involves patients'
medical records from multiple hospitals. Transferring patients' data outside
the respective hospital data center would be a huge concern, even if the data is
properly encrypted. Therefore, it is imperative to seek distributed
optimization for MTL, where summarizing information from data is computed only
locally and then transferred over the network.
Without loss of generality, we assume a general setting where the datasets
involved in tasks are stored in different computer systems that are connected
through a star-shaped network. Each of these computer systems, called a {\it node}, {\it
worker} or an {\it agent}, has full access to the data $\mathcal D_t$ for one
task, and is capable of numerical computations (e.g., computing gradient). We
assume that there is a {\it central server} that collects the information
from the task agents and performs the proximal mapping. We now investigate
aforementioned key operations involved and see how the optimization can be
distributed across agents to fit this architecture and minimize the
computational cost. Clearly, the gradient computation in
Eq.~(\ref{eqt:pg_gradient}) can be easily parallelized and distributed because
of the independence of gradients among tasks. Naturally, the proximal mapping
in Eq.~(\ref{eqt:pg_proximal}) can be carried out as soon as the gradient
vectors are collected and $\hat W$ is obtained. The projected solution after
the proximal mapping is then sent back to the task agents to prepare for the next
iteration. This {\it synchronized} distributed approach resembles a map-reduce
procedure and can be easily implemented. The term ``synchronized'' indicates that,
at each iteration, we have to wait for all the gradients to be collected before the
server (and other task agents) can proceed.
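The synchronized scheme just described can be summarized in a few lines: the server gathers the gradients from {\it all} task agents, applies the proximal mapping, and broadcasts the result. The sketch below is our own serial illustration of one such round, with toy quadratic losses and, for simplicity, an identity proximal step standing in for $g$; it is not the paper's implementation.

```python
import numpy as np

def smtl_round(W, task_grads, eta, prox):
    # One synchronized round: wait for ALL task gradients,
    # then apply a gradient step and the proximal mapping at the server.
    G = np.column_stack([g(W[:, t]) for t, g in enumerate(task_grads)])
    return prox(W - eta * G)

# Toy setup: task t has loss 0.5 * ||w - a_t||^2, so its gradient is w - a_t;
# with no regularizer the prox is the identity and W converges to A.
rng = np.random.default_rng(0)
d, T = 4, 3
A = rng.standard_normal((d, T))
task_grads = [lambda w, t=t: w - A[:, t] for t in range(T)]
W = np.zeros((d, T))
for _ in range(100):
    W = smtl_round(W, task_grads, eta=0.5, prox=lambda M: M)
```

The single `column_stack` call is where the synchronization barrier sits: the round cannot finish until the slowest task's gradient arrives.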
Since the server waits for {\it all} task agents to finish at every iteration,
one apparent disadvantage is that when one or more task agents are suffering from
high network delay or even failure, all other agents must wait. Because the
first-order optimization algorithm typically requires many iterations to
converge to an acceptable precision, the extended period of waiting time in a
synchronized optimization will lead to prohibitive algorithm running
time and a waste of computing resources.
\subsection{Asynchronized framework for distributed MTL}
To address the aforementioned challenges in distributed MTL, we propose to
perform asynchronous multi-task learning (AMTL), where the central server begins to
update model matrix $W$ after it receives a gradient computation from one
task node, without waiting for the other task nodes to finish their computations.
While the server and all task agents maintain their own copies of $W$ in the memory,
the copy at one task node may be different from the copies at other nodes. The
convergence analysis of the proposed AMTL framework is backed up by a recent
approach for asynchronous parallel coordinate update problems by using
Krasnosel'skii-Mann (KM) iteration~\cite{Arock,TMAC}.
A task node is said to be {\it activated} when it performs computation and network
communication with the central server for updates. The framework is based on
the following assumption on the activation rate:
\begin{assumption}\label{asp:act_rate}
All the task nodes follow independent Poisson processes and
have the same activation rate.
\end{assumption}
\begin{remark}\label{rmk:act_rate}
We note that when the activation rates are different for different task nodes,
we can modify the theoretical result by changing the step size: if a task
node's activation rate is large, then the probability that this task node is activated is large and thus the corresponding step size should be small. In
Section~\ref{sect:method:step_size} we propose a dynamic step size strategy
for the proposed AMTL.
\end{remark}
\begin{algorithm*}[t!]
\small
\caption{The proposed Asynchronous Multi-Task Learning framework}
\label{alg:amtl}
\begin{algorithmic}
\REQUIRE Multiple related learning tasks reside at task nodes, including the training data and the loss
function for each task $\{x_1, y_1, \ell_1\},...,\{x_T, y_T, \ell_T\}$, maximum delay $\tau$, step size $\eta$, multi-task regularization parameter $\lambda$.
\ENSURE Predictive models of each task $v_{1},...,v_{T}$.
\STATE Initialize task nodes and the central server.
\STATE Choose $\eta_k \in [\eta_{\min}, \frac{c}{2\tau/\sqrt{T}+1} ]$ for any
constant $\eta_{\min}>0$ and $0<c<1$
\WHILE{ \textit{every task node asynchronously and continuously} }
\STATE Task node $t$ requests the server for the forward step computation $\mbox{Prox}_{\eta \lambda g}\left(\hat{{v}}^{k}\right)$, and
\STATE Retrieves
$\left(\mbox{Prox}_{\eta \lambda g}\left(\hat{{v}}^{k}\right)\right)_t$ from the central server and
\STATE Computes the coordinate update on $v_t$
\begin{equation}
{v}_t^{k+1} = v_t^k + \eta_k \left(\left(\mbox{Prox}_{\eta \lambda g}\left(\hat{{v}}^{k}\right)\right)_t-\eta \nabla \ell_t \left(\left(\mbox{Prox}_{\eta \lambda g} (\hat{{v}}^{k})\right)_t\right)-v_t^k\right)
\label{eq:update}
\end{equation}
\STATE Sends updated ${v}_t$ to the central node.
\ENDWHILE
\end{algorithmic}
\end{algorithm*}
The proposed AMTL uses a backward-forward operator splitting method
\cite{combettes2005signal,Coordinate_update} to solve problem~\eqref{eqt:mtl}. Solving
problem~\eqref{eqt:mtl} is equivalent to finding the optimal solution $W^*$
such that $0\in \nabla f(W^*)+ \lambda \partial g(W^*)$, where $\partial g(W^*)$ denotes the set of subgradients of non-smooth
function $g(\cdot)$ at $W^*$ and we have the
following:
\begin{align*}
0\in \nabla f(W^*) & + \lambda \partial g(W^*)
\iff -\nabla f(W^*)\in \lambda \partial g(W^*) \\
\iff & W^* -\eta \nabla f(W^*)\in W^* +\eta \lambda \partial g(W^*).
\end{align*}
Therefore the forward-backward iteration is given by:
\begin{align*}
W^+=(I+\eta \lambda\partial g)^{-1}(I-\eta \nabla f)(W),
\end{align*}
which converges to the solution if $\eta\in (0,2/L)$. Since $\nabla
f(W)$ is separable, i.e., $\nabla f(W) = [\nabla \ell_1(w_1), \nabla
\ell_2(w_2),\cdots,\nabla \ell_T(w_T)]$, the forward operator, i.e.,
$I-\eta\nabla f$, is also separable. However, the backward operator, i.e.,
$(I+\eta\lambda\partial g)^{-1}$, is not separable. Thus, we cannot apply
the coordinate update directly on the forward-backward iteration. If we switch the
order of forward and backward steps, we obtain the following backward-forward iteration:
\begin{align*}
V^+=(I-\eta \nabla f)(I+\eta \lambda \partial g)^{-1}(V),
\end{align*}
where we use an auxiliary matrix $V\in\Real^{d\times T}$ instead of $W$ during
the update. This is because the update variables in the forward-backward and
backward-forward are different variables. Moreover, one additional backward step is
needed to obtain $W^*$ from $V^*$. We can thus follow~\cite{Arock} to perform
task block coordinate update at the backward-forward iteration, where each
{\it task block} is defined by the variables associated to a task. The update
procedure is given as follows:
\begin{align*}
v_t^+=(I-\eta \nabla \ell_t)\left((I+\eta \lambda \partial g)^{-1}(V)\right)_t,
\end{align*}
where $v_t \in \Real^d$ is the corresponding auxiliary variable of $w_t$ for
task $t$. Note that updating one task block $v_t$ will need one full backward
step and a forward step only on the task block. The overall AMTL algorithm
is given in Algorithm~\ref{alg:amtl}. The formulation in Eq.~\ref{eq:update} is the update rule of
the KM iteration discussed in~\cite{Arock}. The KM iteration provides a generic framework for solving fixed-point
problems, where the goal is to find the fixed point of a nonexpansive operator. In the problem
setting of this study, the backward-forward operator is our fixed-point operator.
We refer to Section 2.4 of~\cite{Arock} to see how Eq.~\ref{eq:update} is derived.
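To make the task-block update of Eq.~\ref{eq:update} concrete, the following sketch simulates it serially, with a random activation order standing in for true asynchrony. The prox and losses are our own toy stand-ins (entrywise soft-thresholding and quadratic losses), not the paper's ARock-based implementation.

```python
import numpy as np

def km_task_update(V, t, eta, eta_k, lam, grad_t, prox):
    # One KM task-block update: a full backward (prox) step on V,
    # then a forward (gradient) step on task block t only,
    # and a relaxed move of v_t toward the resulting target.
    P = prox(V, eta * lam)
    target = P[:, t] - eta * grad_t(P[:, t])
    V = V.copy()
    V[:, t] += eta_k * (target - V[:, t])
    return V

# Toy stand-ins: entrywise soft-thresholding for the prox, and quadratic
# losses 0.5 * ||v - a_t||^2, whose gradients are v - a_t.
soft = lambda M, th: np.sign(M) * np.maximum(np.abs(M) - th, 0.0)
rng = np.random.default_rng(1)
d, T = 4, 3
A = rng.standard_normal((d, T))
V = np.zeros((d, T))
for _ in range(600):
    t = int(rng.integers(T))  # random activation order mimics asynchrony
    V = km_task_update(V, t, eta=0.5, eta_k=0.5, lam=0.01,
                       grad_t=lambda v, t=t: v - A[:, t], prox=soft)
```

Note that each activation performs the backward step on the whole matrix but the forward step only on the activated block, matching the structure of the update above.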
In general, the choice between forward-backward or backward-forward is largely
dependent on the difficulty of the sub problem. If the backward step is
easier to compute compared to the forward step, e.g., data $(x_t,y_t)$ is large,
then we can apply coordinate update on the backward-forward iteration.
Specifically in the MTL settings, the backward step is given by a proximal
mapping in Eq.~\ref{eqt:pg_proximal} and usually admits an analytical solution
(e.g., soft-thresholding on singular values for trace norm). On the other hand, the gradient computation in
Eq.~\ref{eqt:pg_gradient} is typically the most time consuming step for large datasets. Therefore
backward-forward provides a more computationally efficient optimization
framework for distributed MTL. In addition, we note that the backward-forward
iteration is a non-expansive operator if $\eta\in(0,2/L)$ because both the
forward and backward steps are non-expansive.
When applying the coordinate update scheme in~\cite{Arock}, all task nodes have
access to the shared memory, and they do not communicate with each other. Further,
the communication between each task node and the central server is only the vector
$v_t$, which is typically much smaller than the data $\mathcal D_t$ stored
locally at each task node. In the proposed AMTL scheme, the task nodes do not
share memory but are exclusively connected and communicate with the central
node. The computation of the backward step is located in the central node,
which performs the proximal mapping after one gradient update is received from
a task node (the proximal mapping can also be applied after several gradient updates, depending on the speed of the gradient updates). In this case, we further save the communication cost between each
task node and the central node, because each task node only needs the task block
corresponding to it.
To illustrate the asynchronous update mechanism in AMTL, we show an example in
Fig.~\ref{fig:async_update_fig}. The figure shows the order of the backward and
the forward steps performed by the central node and the task nodes. At time
$t_1$, the task node 2 receives the model corresponding to the task 2 which
was previously updated by the proximal mapping step in the central node. As
soon as task node~2 receives its model, the forward (gradient) step is
launched. After the task gradient descent update is done, the model of the task~2
is sent back to the central node. When the central node receives the updated
model from the task node~2, it starts applying the proximal step on the whole
multi-task model matrix. As we can see from the figure, while task node~2 was
performing its gradient step, task node~4 had already sent its updated model
to the central node and triggered a proximal mapping step during time steps $t_2$ and $t_3$. Therefore, the model matrix was updated upon the request of
task node~4 during the gradient computation of task node 2. Thus, we know
that the model received by task node~2 at time $t_1$ is no longer the same copy
as the one in the central server.
When the updated model from the task node 2 is received by the central node,
proximal mapping computations are done by using the model received from task
node~2 and the updated models at the end of the proximal step triggered by the
task node 4. Similarly, if we think of the model received by task node~4 at
time $t_3$, we can say that it will not be the same model as in the central
server when task node 4 is ready to send its model to the central node
because of the proximal step triggered by the task node~2 during time steps $t_4$ and $t_5$. This is because in AMTL, there is no memory lock during reads.
As we can see, the asynchronous update scheme has inconsistencies when it
comes to reading model vectors from the central server. We note that such
inconsistency caused by the backward step is already taken into
account in the convergence analysis.
\begin{figure}[t!]
\center
\includegraphics[width=0.5\textwidth]{figures/async_update.pdf}
\caption{Illustration of asynchronous updates in AMTL.
The asynchronous update scheme has an inconsistency when it
comes to reading model vectors from the central server. Such
inconsistencies caused by the backward step of the AMTL are taken into
account in the convergence analysis.
\vspace{-0.1in}
\label{fig:async_update_fig}
\end{figure}
We summarize the proposed AMTL algorithm in
Algorithm~\ref{alg:amtl}. The AMTL framework
enjoys the following convergence property:
\begin{theorem}
Let $(V^k)_{k\geq0}$ be the sequence generated by the proposed AMTL with
$\eta_k \in [\eta_{\min}, \frac{c}{2\tau/\sqrt{T}+1} ]$ for any
$\eta_{\min}>0$ and $0<c<1$, where $\tau$ is the maximum delay. Then $(V^k)_{k\geq 0}$ converges to a
$V^*$-valued random variable almost surely. If the MTL problem in
Eq.~\ref{eqt:mtl} has a unique solution, then the sequence converges to the
unique solution.
\end{theorem}
According to our assumptions, all the task nodes are independent Poisson
processes and each task node has the same activation rate. The probability
that each task node is activated before other task nodes is
$1/T$~\cite{larson1981urban}, and therefore we can assume that each coordinate
is selected with the same probability. The results in~\cite{Arock} can be
applied to directly obtain the convergence. We note that some MTL algorithms based on
sparsity-inducing norms may not have a unique solution, as commonly seen in
many sparse learning formulations, but in this case we typically can add an
$\ell_2$ term to ensure the strict convexity and obtain linear convergence as shown in~\cite{Arock}. An example of such a technique is
the elastic net variant of the original Lasso
problem~\cite{zou2005regularization}.
\subsection{Dynamic step size controlling in AMTL}
\label{sect:method:step_size}
As discussed in Remark~\ref{rmk:act_rate}, the AMTL is based on the same
activation rate in Assumption~\ref{asp:act_rate}. However, because of the
network topology in real-world settings, identical activation rates are
almost impossible to guarantee. In this section, we propose a dynamic step size for AMTL to
overcome this challenge. We note that in order to ensure convergence, the step
sizes used in asynchronous optimization algorithms are typically much smaller
than those used in synchronous optimization algorithms, which
limits the algorithmic efficiency of the solvers. A dynamic step size was recently
used in a specific asynchronous optimization setting to achieve
better overall performance~\cite{cheung2014amortized}.
Our dynamic step size is motivated by this design, and revises the update of AMTL in
Eq.~\ref{eq:update} by augmenting it with a time-related multiplier:
\begin{equation}
\begin{split}
v_t^{k+1} &= v_t^k + c_{(t, k)}\eta_k \left(\left(\mbox{Prox}_{\eta \lambda g
}\left(\hat{{v}}^{k}\right)\right)_t \vphantom{\sum_{t=1}^{T}} \right. \\
& \left. - \eta \nabla \ell_t
\left(\left(\mbox{Prox}_{\eta \lambda g}
(\hat{{v}}^{k})\right)_t\right)-v_t^k\right)
\end{split}
\label{eq:update-dynamic}
\end{equation}%
where the multiplier is given by:
\begin{equation}
c_{(t, k)} = \log \left(\max \left(\bar \nu_{t, k},10\right)\right)
\label{eq:transformation}
\end{equation}
and $\bar \nu_{t, k} = \tfrac{1}{k+1}\sum_{i=z-k}^{z} \nu_t^{(i)} $ is the
average of the last $k+1$ delays in task node $t$, $z$ is the current time point,
and $\nu_t^{(i)}$ is the delay at time $i$ for task $t$. As such, the actual
step size given by $c_{(t, k)} \eta_k$ will be scaled by the history of
communication delay between the task nodes and the central server. The longer
the delay, the larger the step size, to compensate for the loss from
the lower activation rate. We note that though our experiments show the
effectiveness of the proposed scheme, functions other than
Eq.~(\ref{eq:transformation}) could also be used.
Currently, there are no theoretical results on how a dynamic step size could improve
the general problem. Further, what types of dynamics could better serve
the purpose remains an open problem in optimization research.
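The delay-based multiplier of Eq.~(\ref{eq:transformation}) is cheap to compute at each task node. Below is a minimal sketch of our own, taking $\log$ as the natural logarithm since the paper does not state the base.

```python
import math

def step_multiplier(recent_delays):
    # c_(t,k) = log(max(average of recent delays, 10)).
    # The floor of 10 keeps the multiplier at least log(10) ~ 2.3;
    # longer average delays yield a larger scaled step size.
    nu_bar = sum(recent_delays) / len(recent_delays)
    return math.log(max(nu_bar, 10.0))

# A node with short delays gets the floor value; a slow node gets a larger step.
c_fast = step_multiplier([1.0, 2.0, 3.0])   # mean 2  -> log(10)
c_slow = step_multiplier([40.0, 60.0])      # mean 50 -> log(50)
```

The actual step size used in the update is then $c_{(t,k)}\eta_k$, so only nodes whose recent communication was slow receive a boosted step.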
\subsection{Distributed optimization}
Distributed optimization techniques exploit technological improvements in hardware to solve massive optimization problems fast. One commonly used distributed optimization approach is the alternating direction method of multipliers (ADMM), which was first proposed in the 1970s~\cite{boyd2011}. Boyd {\it et al.} described it as a method well suited to distributed convex optimization. In the distributed ADMM framework in~\cite{boyd2011}, local copies are introduced for the local subproblems, and the communication between the worker nodes and the center node serves the purpose of consensus. Though it can fit a large-scale distributed optimization setting, the introduction of local copies increases the number of iterations needed to achieve the same accuracy. Furthermore, the approaches in~\cite{boyd2011} are all synchronized.
In order to avoid introducing multiple local copies and introduce asynchrony, Iutzeler {\it et al.} proposed an asynchronous distributed approach using randomized ADMM~\cite{Franck2013} based on randomized Gauss-Seidel iterations of a Douglas-Rachford splitting (DRS) because ADMM is equivalent to DRS. For the asynchronous distributed setting, they assumed a set of network agents, where each agent has an individual cost function. The goal was to find a consensus on the minimizer of the overall cost function which consists of individual cost functions.
Aybat {\it et al.} introduced an asynchronous distributed proximal gradient method by using the randomized block coordinate descent method in~\cite{Aybat2014} for minimizing the sum of a smooth and a non-smooth functions. The proposed approach was based on synchronous distributed first-order augmented Lagrangian (DFAL) algorithm.
Liu {\it et al.} proposed an asynchronous stochastic proximal coordinate descent method in~\cite{Liu2015}. The proposed approach introduced a distributed stochastic optimization scheme for composite objective functions. They adopted an inconsistent read mechanism, where the elements of the optimization variable may be updated by multiple cores while being read by another core. Therefore, cores may read a hybrid version of the variable that never existed in memory. It was shown that the proposed algorithm has a linear convergence rate for a suitable step size under the assumption that optimal strong convexity holds.
Recently, Peng {\it et al.} proposed a general asynchronous parallel framework for coordinate updates based on solving fixed-point problems with non-expansive operators~\cite{Arock,TMAC}. Since many well-known optimization algorithms, such as gradient descent, the proximal gradient method, ADMM/DRS, and primal-dual methods, can be expressed as non-expansive operators, this framework can be applied to many optimization problems and monotone inclusion problems~\cite{Coordinate_update}. The procedure for applying this framework includes transferring the problem into a fixed-point problem with a non-expansive operator, applying ARock to this non-expansive operator, and transferring it back to the original algorithm. Depending on the structure of the problem, there may be multiple choices of non-expansive operators and multiple choices of asynchronous algorithms. These asynchronous algorithms may have very different performance on different platforms and different datasets.
\subsection{Distributed multi-task learning}
In this section, studies on distributed MTL in the literature are summarized. Real-world MTL problems often involve vast amounts of geographically distributed data, such as healthcare datasets. Each hospital has its own data consisting of patient records that include diagnostics, medication, test results, etc. In this scenario, two of the main challenges are network communication and privacy. Traditional MTL approaches require transferring data from the different sources, which can be quite costly because of bandwidth limitations. Moreover, hospitals may not want to share their data with others because of patient privacy. Distributed MTL addresses these challenges by using distributed optimization techniques. In distributed MTL, data does not need to be transferred to a central node. Since only the learned models are transferred instead of the whole raw data, the cost of network communication is reduced. In addition, distributed MTL mitigates the privacy problem, since each worker updates its models independently. For instance, Dinuzzo and Pillonetto in~\cite{client11} proposed a client-server MTL approach for distributed datasets in 2011. They designed an architecture that simultaneously solved multiple learning tasks. In their setting, each client was an individual learning task, and the server was responsible for collecting the data from the clients to encode the information in a common database in real time. Each client could access the information content of all the data on the server without knowing the actual data. The MTL problem was solved by regularized kernel methods in that paper.
Mateos-N$\acute{\mbox{u}}\tilde{\mbox{n}}$ez and Cort$\acute{\mbox{e}}$z focused on distributed optimization techniques for MTL in~\cite{nunez15}. The authors defined a separable convex optimization problem on local decision variables, where the objective function comprises a separable convex cost function and a joint regularization for low-rank solutions. Local gradient calculations were divided among a network of agents. Their second solution consisted of a separable saddle-point reformulation through Fenchel conjugation of quadratic forms. A separable min-max problem was derived, which admits iterative distributed approaches that avoid calculating the inverses of local matrices.
Jin {\it et al.}, on the other hand, proposed a collaboration between local and global learning for distributed online multiple tasks in~\cite{online15}. Their proposed method learned individual models from continuously arriving data. They combined MTL with distributed learning and online learning. Their proposed scheme performed local and global learning alternately. In the first step, online learning was performed locally by each client. Then, global learning was done on the server side. In their framework, clients still send a portion of the raw data to the global server, and the global server coordinates the collaborative learning.
In 2016, Wang {\it et al.} also proposed a distributed scheme for MTL with shared representation. They defined shared-subspace MTL in a distributed multi-task setting by proposing two subspace pursuit approaches. In their setting, there were $m$ separate machines, and each machine was responsible for one task; therefore, each machine had access only to the data of the corresponding task. The central node broadcast the updated models back to the machines. As in~\cite{nunez15}, a strategy for nuclear norm regularization that avoids heavy communication was investigated. In addition, a communication-efficient greedy approach was proposed for the subspace pursuit. Optimization was done in a synchronous fashion; therefore, worker and master nodes had to wait for each other before proceeding.
The prominent observation is that all these methods follow a synchronous approach when updating the models. When there is data imbalance, the computation in workers holding larger amounts of data takes longer, so the other workers must wait even though they have completed their own computations. In this paper, we propose to use an asynchronous optimization approach to prevent workers from waiting for others and to reduce the training time for MTL.
The idea of spontaneous violation of Lorentz invariance through tensor fields
with non-vanishing expectation values has garnered substantial attention in
recent years \cite{Will:1972zz, Gasperini:1987nq, Kostelecky:1989jw, Colladay:1998fq, Jacobson:2000xp, Eling:2003rd, Carroll:2004ai, Jacobson:2004ts,Lim:2004js, Eling:2004dk,Dulaney:2008ph, Jimenez:2008sq}.
Hypothetical interactions between Standard Model fields and Lorentz-violating (LV)
tensor fields are tightly constrained by a wide variety of experimental probes,
in some cases leading to limits at or above the Planck scale \cite{Colladay:1998fq, Kostelecky:2000mm, Carroll:2004ai, Elliott:2005va, Mattingly:2005re, Will:2005va, Jacobson:2008aj}.
If these constraints are to be taken seriously, it is necessary to have a sensible theory
of the dynamics of the LV tensor fields themselves, at least at the level of
low-energy effective field theory. The most straightforward way to construct such a theory
is to follow the successful paradigm of scalar field theories with spontaneous
symmetry breaking, by introducing a tensor potential that is minimized at some
non-zero expectation value, in addition to a kinetic term for the fields. (Alternatively,
it can be a derivative of the field that obtains an expectation value, as in
ghost condensation models \cite{ArkaniHamed:2003uy, ArkaniHamed:2005gu, Cheng:2006us}.) As an additional simplification, we may
consider models in which the nonzero expectation value is enforced by a
Lagrange multiplier constraint, rather than by dynamically minimizing a potential;
this removes the ``longitudinal'' mode of the tensor from consideration, and may
be thought of as a limit of the potential as the mass near the minimum is
taken to infinity. In that case, there will be a vacuum manifold of zero-energy
tensor configurations, specified by the constraint.
All such models must confront the tricky question of stability.
Ultimately, stability problems stem from the basic fact that the metric has an
indefinite signature in a Lorentzian spacetime. Unlike in the case of scalar fields,
for tensors it is necessary to use the spacetime metric to define both the kinetic
and potential terms for the fields. A generic choice of potential would
have field directions in which the energy is unbounded from below, leading to
tachyons, while a generic choice of kinetic term would have modes with negative
kinetic energies, leading to ghosts. Both phenomena represent instabilities;
if the theory has tachyons, small perturbations grow exponentially in time at
the linearized level, while if the theory has ghosts, nonlinear interactions
create an unlimited number of positive- and negative-energy excitations
\cite{Carroll:2003st}. There is no simple argument that these
unwanted features are necessarily present in any model of LV tensor fields,
but the question clearly warrants careful study.
In this paper we revisit the question of the stability of theories of dynamical Lorentz
violation, and argue that most such theories are unstable. In particular, we
examine in detail the case of a vector field $A_\mu$ with a nonvanishing expectation
value, known as the ``{\ae}ther'' model or a ``bumblebee'' model. For generic choices of
kinetic term, it is straightforward to show that the Hamiltonian of such a model is
unbounded from below, and there exist solutions with bounded initial data that grow exponentially in time.
There are three specific
choices of kinetic term for which the analysis is more subtle. These are the
sigma-model kinetic term,
\begin{equation}
{\cal{L}}_K = -\frac{1}{2} \partial_\mu A_\nu \partial^\mu A^\nu\,,
\label{sigmamodel}
\end{equation}
which amounts to a set of four scalar fields defined on a target space with a Minkowski metric;
the Maxwell kinetic term,
\begin{equation}
{\cal L}_K = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\,,
\label{maxwell}
\end{equation}
where $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ is familiar from
electromagnetism; and what we call the ``scalar'' kinetic term,
\begin{equation}
{\cal L}_K = \frac{1}{2} (\partial_\mu A^\mu)^2\,,
\label{scalar}
\end{equation}
featuring a single scalar degree of freedom.
Our findings may be summarized as follows:
\begin{itemize}
\item The sigma-model Lagrangian with the vector field constrained by a Lagrange
multiplier to take on a timelike expectation value is the only {\ae}ther\ theory for which the
Hamiltonian is bounded from below in every frame, ensuring stability. In a companion paper,
we examine the cosmological behavior and observational constraints on this
model \cite{Carroll:2009en}. If the vector field is
spacelike, the Hamiltonian is unbounded and the model is unstable.
However, if the constraint in the sigma-model theory
is replaced by a smooth potential, allowing the length-changing mode to
become a propagating degree of freedom, that mode is necessarily ghostlike (negative
kinetic energy) and tachyonic (correct sign mass term), and the Hamiltonian is unbounded
below, even in the timelike case.
It is therefore unclear whether models of this form can arise in any full theory.
\item In the Maxwell case, the Hamiltonian is unbounded below; however, a perturbative
analysis does not reveal any explicit instabilities in the form of tachyons or ghosts.
The timelike mode of the vector acts as a Lagrange multiplier, and there are fewer
propagating degrees of freedom at the linear level
(a ``spin-1'' mode propagates, but not a ``spin-0'' mode).
Nevertheless, singularities can arise in evolution from generic initial data: for a spacelike vector,
for example, the field evolves to a configuration in which the
fixed-norm constraint cannot be satisfied (or perhaps just to a point where the effective field theory breaks down). In the timelike case, a certain subset of initial data
is well-behaved, but, provided the vector field couples only to conserved currents, the theory reduces precisely to conventional electromagnetism, with no
observable violations of Lorentz invariance. It is unclear whether there exists a subset
of initial data that leads to observable violations of Lorentz invariance while
avoiding problems in smooth time evolution.
\item The scalar case is superficially similar to the Maxwell case, in that the Hamiltonian
is unbounded below, but a perturbative analysis does not reveal any instabilities.
Again, there are fewer degrees of freedom at the linear
level; in this case, the spin-1 mode does not propagate.
There is a scalar degree of freedom, but it does not correspond to a propagating
mode at the level of perturbation theory (the dispersion relation is conventional, but the
energy vanishes to quadratic order in the perturbations).
For the timelike {\ae}ther\ field, obstacles arise in the time evolution that are similar to
those of a spacelike vector in the Maxwell case; for a spacelike {\ae}ther\ field with
a scalar action, the behavior is less clear.
\item For any other choice of kinetic term, {\ae}ther\ theories are always unstable.
\end{itemize}
Interestingly, these three choices of {\ae}ther\ dynamics are precisely those for which
there is a unique propagation speed for all dynamical modes; this is the same
condition required to ensure that the Generalized Second Law is respected by a
Lorentz-violating theory \cite{Dubovsky:2006vk,Eling:2007qd}.
One reason why our findings concerning stability seem more restrictive than those
of some previous analyses is that we insist on perturbative stability in all Lorentz frames,
which is necessary in theories where the form of the Hamiltonian is frame-dependent.
In a Lorentz-invariant field theory, it suffices to pick a Lorentz frame and examine the
behavior of small fluctuations; if they grow exponentially, the model is unstable, while
if they oscillate, the model is stable. In Lorentz-violating theories, in contrast, such
an analysis might miss an instability in one frame that is manifest at the linear level in some other
frame \cite{Kostelecky:2001xz, Mattingly:2005re,Adams:2006sv}.
This can be traced to the fact that a perturbation that is ``small'' in one
frame (the value of the perturbation is bounded everywhere along some initial
spacelike slice), but grows exponentially with time as measured in that frame,
will appear ``large'' (unbounded on every spacelike slice) in some other frame.
As an explicit example, consider a model of a timelike vector with a background
configuration $\bar{A}_\mu = (m, 0, 0, 0)$, and perturbations
$\delta a^\mu = \epsilon^\mu e^{-i\omega t} e^{i\vec k \cdot \vec x}$, where $\epsilon^\mu$
is some constant polarization vector. In this frame, we will see that
the dispersion relation takes the form
\begin{equation}
\omega^2 = v^2 \vec k^2\,.
\end{equation}
Clearly, the frequency $\omega$ will be real for every real wave vector $\vec k$,
and such modes simply oscillate rather than growing in time. It is tempting to conclude
that models of this form are perturbatively stable for any value of $v$.
However, we will see below that when $v > 1$, there exist other frames (boosted with
respect to the original) in which $\vec k$ can be real but $\omega$ is necessarily
complex, indicating an instability.
These correspond to wave vectors for which, evaluated in the original
frame, both $\omega$ and $\vec k$ are complex. Modes with complex
spatial wave vectors are not considered to be ``perturbations,'' since
the fields blow up at spatial infinity. However, in the presence of
Lorentz violation, a complex spatial wave vector in one frame may
correspond to a real spatial wave vector in a boosted frame. We will
show that instabilities can arise from initial data defined on a
constant-time hypersurface (in a boosted frame) constructed solely
from modes with real spatial wave vectors. Such modes are bounded at
spatial infinity (in that frame), and could be superimposed to form
wave packets with compact support. Since the notion of stability is
not frame dependent, the existence of at least one such frame
indicates that the theory is unstable, even if there is no linear
instability in the {\ae}ther\ rest frame.
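To make this frame dependence concrete, one can boost the rest-frame dispersion relation by hand. The following numerical sketch (our own illustration; the values of $v$, the boost velocity, and the wave vector are arbitrary example choices) substitutes the Lorentz transformation $\omega = \gamma(\omega' + u k_x')$, $k_x = \gamma(k_x' + u\omega')$ into $\omega^2 = v^2\vec k^2$ and solves the resulting quadratic for the boosted-frame frequency:

```python
import cmath

# Rest-frame dispersion: omega^2 = v^2 (kx^2 + kperp^2).  Boost along x with
# velocity u and express everything in terms of the boosted-frame (primed)
# wave vector.  Substituting omega = gamma*(w + u*kxp), kx = gamma*(kxp + u*w),
# kperp = kperpp gives a quadratic a*w^2 + b*w + c = 0 for w = omega'.
def boosted_frequencies(v, u, kxp, kperpp):
    gamma2 = 1.0 / (1.0 - u * u)
    a = gamma2 * (1.0 - v * v * u * u)
    b = 2.0 * gamma2 * u * (1.0 - v * v) * kxp
    c = gamma2 * (u * u - v * v) * kxp ** 2 - v * v * kperpp ** 2
    sq = cmath.sqrt(b * b - 4.0 * a * c)
    return (-b + sq) / (2.0 * a), (-b - sq) / (2.0 * a)

# Subluminal mode (v < 1): the frequencies stay real in the boosted frame too.
w_plus, w_minus = boosted_frequencies(v=0.5, u=0.6, kxp=0.0, kperpp=1.0)
print(w_plus.imag == 0.0 and w_minus.imag == 0.0)  # True

# Superluminal mode (v = 2) viewed from a frame with u = 0.6 > 1/v: a real
# spatial wave vector now carries a complex frequency -- a growing mode.
w_plus, w_minus = boosted_frequencies(v=2.0, u=0.6, kxp=0.0, kperpp=1.0)
print(w_plus.imag != 0.0 and w_minus.imag != 0.0)  # True
```

The transverse wave vector is what drives the effect here: for $u > 1/v$ the coefficient of $\omega'^2$ changes sign, so modes dominated by $k_\perp'$ acquire complex frequencies.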
Several prior investigations have considered the question of stability in theories with
LV vector fields. Lim \cite{Lim:2004js} calculated the Hamiltonian for small perturbations
around a constant timelike vector field in the rest frame, and derived restrictions on
the coefficients of the kinetic terms. Bluhm et al. \cite{Bluhm:2008yt} also examined the
timelike case with a Lagrange multiplier constraint, and showed that the Maxwell kinetic
term led to stable dynamics on a certain branch of the solution space if the
vector was coupled to a conserved current. It was also found, in \cite{Bluhm:2008yt}, that
most LV vector field theories have Hamiltonians that are unbounded below. Boundedness of the Hamiltonian was also considered in \cite{Chkareuli:2006yf}. In the context of effective field theory, Gripaios \cite{Gripaios:2004ms} analyzed small fluctuations of LV vector fields about a flat background. Dulaney,
Gresham, and Wise \cite{Dulaney:2008ph} showed that only the Maxwell choice was stable to small perturbations in the spacelike case, assuming the energy of the linearized modes was non-zero.\footnote{This effectively eliminates the scalar case.} Elliott, Moore, and Stoica \cite{Elliott:2005va} showed that the sigma-model kinetic term is stable in the presence of a constraint, but not with a potential.
In the next section, we define notation and fully specify the models we are considering.
We then turn to an analysis of the Hamiltonians for such models, and show that they are
always unbounded below unless the kinetic term takes on the sigma-model form and the vector field is timelike.
This result does not by itself indicate an instability, as there may not be any dynamical
degree of freedom that actually evolves along the unstable direction. Therefore, in the following
section we look carefully at linear stability around constant configurations, and isolate
modes that grow exponentially with time. In the section after that we show that the models
that are not already unstable at the linear level
end up having ghosts, with the exception of the Maxwell and scalar cases.
We then examine some features of those two theories in particular.
\section{Models}
We will consider a dynamical vector field $A_\mu$
propagating in Minkowski spacetime with signature $(-+++)$. The action takes the form
\begin{equation}
S_A = \int d^4x \left({\cal L}_K + {\cal L}_V\right)\,,
\end{equation}
where ${\cal L}_K$ is the kinetic Lagrange density and ${\cal L}_V$ is (minus) the potential.
A general kinetic term that is quadratic in derivatives of the field can be written\footnote{In terms of the coefficients, $c_i$, defined in \cite{Jacobson:2004ts} and used in many other publications on {\ae}ther theories,
\begin{equation}
\b_i = {c_i \over 16 \pi G m^2}
\end{equation}
where $G$ is the gravitational constant.}
\begin{equation}
\label{aLag}
{\cal{L}}_K = -\b_1(\partial_\mu A_\nu)(\partial^\mu A^\nu) - \beta_2 (\partial_\mu A^\mu)^2
- \b_3 (\partial_\mu A_\nu)(\partial^\nu A^\mu)
-{\beta_4} {A^\mu A^\nu \over m^2} (\partial_\mu A_\rho)(\partial_\nu A^\rho)\,.
\end{equation}
In flat spacetime, setting the fields to constant values at infinity, we can integrate by parts
to write an equivalent Lagrange density as
\begin{equation}
\label{aLag2}
{\cal{L}}_K = -{1\over 2}\b_1 F_{\mu\nu}F^{\mu\nu}
-\beta_{*}(\partial_\mu A^\mu)^2 - {\beta_4} {A^\mu A^\nu \over m^2} (\partial_\mu A_\rho)(\partial_\nu A^\rho)\,,
\end{equation}
where $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$ and we have defined
\begin{equation}
\beta_{*} = \beta_1 + \beta_2 +\beta_3\,.
\end{equation}
In terms of these variables, the models specified above with no linear instabilities or
negative-energy ghosts are:
\begin{itemize}
\item Sigma model: $\b_1 = \beta_{*}$,
\item Maxwell: $\beta_{*} = 0$, and
\item Scalar: $\b_1 = 0$,
\end{itemize}
in all cases with $\b_4 = 0$.
The vector field will obtain a nonvanishing vacuum expectation value from the
potential. For most of the paper we will take the potential to be a Lagrange
multiplier constraint that strictly fixes the norm of the vector:
\begin{equation}
{\cal L}_V = \lambda(A^{\mu} A_{\mu} \pm m^2)\,,
\label{constraintl}
\end{equation}
where $\lambda$ is a Lagrange multiplier whose variation enforces the constraint
\begin{equation}
A^{\mu} A_{\mu} = \mp m^2\,.
\label{constraint}
\end{equation}
If the upper sign is chosen, the vector will be timelike, and it will be spacelike for the
lower sign. Later we will examine how things change when the constraint is replaced
by a smooth potential of the form ${\cal L}_V = -V(A_\mu)$, with $V(A_\mu) \propto \xi\,(A_\mu A^\mu \pm m^2)^2$.
It will turn out that the theory defined with a smooth potential is only stable in the limit as $\xi \rightarrow \infty$. In any case, unless we specify otherwise, we assume that the norm of the vector is determined by the constraint (\ref{constraint}).
We are left with an action
\begin{equation}
\label{vf action}
S_A = \int d^4x \left[-{1\over 2}\b_1 F_{\mu\nu}F^{\mu\nu}
- \beta_{*}(\partial_\mu A^\mu)^2
-{\beta_4} {A^\mu A^\nu \over m^2} (\partial_\mu A_\rho)(\partial_\nu A^\rho)
+ \lambda(A^{\mu} A_{\mu} \pm m^2)\right]\,.
\end{equation}
The Euler-Lagrange equation obtained by varying with respect to $A_\mu$ is
\begin{equation}
\label{ele1}
\b_1\partial_\mu F^{\mu \nu} + \beta_{*} \partial^\nu \partial_\mu A^\mu
+ \beta_4 G^\nu = -\lambda A^\nu\,,
\end{equation}
where we have defined
\begin{equation}
\label{Gnu}
G^\nu = \frac{1}{m^2}\left[A^\lambda (\partial_\lambda A^\sigma)F_\sigma{}^\nu
+A^\sigma(\partial_\lambda A^\lambda \partial_\sigma A^\nu
+A^\lambda \partial_\lambda \partial_\sigma A^\nu)\right]\,.
\end{equation}
Since the fixed-norm condition (\ref{constraint}) is a constraint, we can consistently
plug it back into the equations of motion. Multiplying (\ref{ele1}) by $A_\nu$ and
using the constraint, we can solve for the Lagrange multiplier,
\begin{equation}
\lambda = \pm\frac{1}{m^2}\left(\b_1 \partial_\mu F^{\mu \nu}
+ \beta_{*} \partial^\nu \partial_\mu A^\mu + \b_4 G^\nu\right)A_\nu \,.
\end{equation}
Inserting this back into (\ref{ele1}), we can write the equation of motion as a system of
three independent equations:
\begin{align}
Q_\rho \equiv \left(\eta_{\rho \nu} \pm {A_\rho A_\nu \over m^2}\right)
\left(\b_1\partial_\mu F^{\mu \nu} + \beta_{*} \partial^\nu \partial_\mu A^\mu
+ \b_4 G^\nu\right) = 0.
\label{eoms}
\end{align}
The tensor $\eta_{\rho\nu} \pm m^{-2}A_\rho A_\nu$ acts to take what would be the equation
of motion in the absence of the constraint, and project it into the hyperplane orthogonal
to $A_\mu$.
There are only three independent equations because $A^\rho Q_\rho$ vanishes identically, given the fixed-norm constraint.
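The projection property quoted above is easy to verify symbolically. The following sympy sketch (our own consistency check, written for the timelike sign of the constraint) confirms that the tensor $\eta_{\rho\nu} + m^{-2}A_\rho A_\nu$ annihilates $A^\nu$ once the fixed-norm condition is imposed:

```python
import sympy as sp

m = sp.symbols('m', positive=True)
A1, A2, A3 = sp.symbols('A1 A2 A3', real=True)
eta = sp.diag(-1, 1, 1, 1)                      # Minkowski metric, (-+++)

# Impose the timelike constraint A_mu A^mu = -m^2 by solving for A^0.
A0 = sp.sqrt(m ** 2 + A1 ** 2 + A2 ** 2 + A3 ** 2)
A_up = sp.Matrix([A0, A1, A2, A3])              # contravariant components A^mu
A_dn = eta * A_up                               # covariant components A_mu

assert sp.simplify((A_dn.T * A_up)[0]) == -m ** 2

# Projector eta_{rho nu} + A_rho A_nu / m^2 (timelike sign of the constraint).
P = eta + (A_dn * A_dn.T) / m ** 2
print((P * A_up).applyfunc(sp.simplify))        # the zero 4-vector
```

Contracting $Q_\rho$ with $A^\rho$ then vanishes for exactly the same algebraic reason.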
\subsection{Validity of effective field theory}\label{Validity of effective field theory}
As in this paper we will restrict our attention to classical field theory, it is important to
check that any purported instabilities are found in a regime where a low-energy
effective field theory should be valid. The low-energy degrees of freedom in our
models are Goldstone bosons resulting from the breaking of Lorentz invariance.
The effective Lagrangian will consist of an infinite series of terms of progressively
higher order in derivatives of the fields, suppressed by appropriate powers of some
ultraviolet mass scale $M$. If we were dealing with the theory of a scalar field
$\Phi$, the low-energy effective theory would be valid when the canonical kinetic
term $(\partial \Phi)^2$ was large compared to a higher-derivative term such as
\begin{equation}
\frac{1}{M^2}(\partial^2 \Phi)^2\, .
\end{equation}
For fluctuations with wavevector $k^\mu = (\omega, \vec k)$, we have $\partial\Phi \sim k \Phi$,
and the lowest-order terms accurately describe the dynamics whenever
$|\vec k| < M$. A fluctuation that has a low momentum in one frame can, of course,
have a high momentum in some other frame, but the converse is also true; the set of
perturbations that can be safely considered ``low-energy'' looks the same in any frame.
With a Lorentz-violating vector field, the situation is altered.
In addition to higher-derivative terms of the form $M^{-2}(\partial^2 A)^2$, the
possibility of extra factors of the vector expectation value leads us to consider terms
such as
\begin{equation}
{\cal L}_4 = \frac{1}{M^8} A^6 (\partial^2 A)^2\, .
\end{equation}
The number of such higher dimension operators in the effective field
theory is greatly reduced because $A_\mu A^\mu = -m^2$ and, therefore,
$A_\mu \partial_\nu A^\mu =0$. It can be shown that an independent
operator with $n$ derivatives includes at most $2 n$ vector fields, so that the term highlighted
here has the largest number of $A$'s with four derivatives. We expect that the ultraviolet
cutoff $M$ is of order the vector norm, $M\approx m$. Hence, when we
consider a background timelike vector field in its rest frame,
\begin{equation}
\bar A_\mu = (m, 0, 0, 0)\,,
\end{equation}
the ${\cal L}_4$ term reduces to $m^{-2}(\partial^2 A)^2$, and the effective field theory is
valid for modes with $k< m$, just as in the scalar case.
But now consider a highly boosted frame, with
\begin{equation}
\bar A_\mu = (m\cosh\h, m\sinh\h, 0, 0)\,.
\end{equation}
At large $\h$, individual components of $A$ will scale as $e^{|\h|}$, and the
higher-derivative term schematically becomes
\begin{equation}
{\cal L}_4 \sim \frac{1}{m^2} e^{6|\h|} (\partial^2 A)^2\,.
\end{equation}
For modes with spatial wave vector $k=|\vec k|$ (as measured in this boosted frame),
we are therefore comparing $m^{-2}e^{6|\h|}k^4$ with the canonical term
$k^2$. The lowest-order terms therefore only dominate for wave vectors with
\begin{equation}
k < e^{-3|\h|}m\,.
\end{equation}
In the presence of Lorentz violation, therefore, the realm of validity of the effective
field theory may be considerably diminished in highly boosted frames. We will be
careful in what follows to restrict our conclusions to those that can be reached by
only considering perturbations that are accurately described by the two-derivative terms.
The instabilities we uncover are infrared phenomena, which cannot be cured by
changing the behavior of the theory in the ultraviolet.
We have been careful to include all of the lowest order terms in the effective field theory expansion---the terms in~\eqref{aLag2}.
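As a sanity check on this bound (a rough numerical sketch of ours, with arbitrarily chosen example values for the mass scale and the boost parameter), one can compare the two terms directly and confirm that they become comparable precisely at the boundary wave vector:

```python
import math

def term_ratio(k, m, boost):
    """Ratio of the schematic higher-derivative term e^{6|boost|} k^4 / m^2
    to the canonical two-derivative term k^2 (one factor of k^2 cancels)."""
    return math.exp(6.0 * abs(boost)) * k ** 2 / m ** 2

m, boost = 1.0, 2.0          # example values, chosen arbitrarily
k_bound = math.exp(-3.0 * abs(boost)) * m

print(term_ratio(k_bound, m, boost))         # ~1: terms comparable at the bound
print(term_ratio(0.1 * k_bound, m, boost))   # ~0.01: safely inside the EFT
```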
\section{Boundedness of the Hamiltonian}
We would like to establish whether there are any values of the parameters $\b_1$,
$\beta_{*}$ and $\b_4$ for which the {\ae}ther\ model described above is physically
reasonable. In practice, we take this to mean that there exist background configurations
that are stable under small perturbations. It seems hard
to justify taking an unstable background as a starting point for phenomenological
investigations of experimental constraints, as we would expect the field to evolve
on microscopic timescales away from its starting point.
``Stability'' of a background solution $X_0$ to a set of classical equations of motion
means that, for any small neighborhood $U_0$ of $X_0$ in the phase space, there is another
neighborhood $U_1$ of $X_0$ such that the time evolution of any point in
$U_0$ remains in $U_1$ for all times. More informally, small perturbations oscillate
around the original background, rather than growing with time.
A standard way of demonstrating stability is to show that the Hamiltonian is a local
minimum at the background under consideration. Since the Hamiltonian is
conserved under time evolution, the allowed evolution of a small perturbation will
be bounded to a small neighborhood of that minimum, ensuring stability.
Note that the converse does not necessarily hold; the presence of other conserved
quantities can be enough to ensure stability even if the Hamiltonian is not
bounded from below.
One might worry about invoking the Hamiltonian in a theory where Lorentz
invariance has been spontaneously violated. Indeed, as we shall see, the form of
the Hamiltonian for small perturbations will depend on the Lorentz frame in which
they are expressed. To search for possible linear instabilities, it is necessary to consider
the behavior of small perturbations in every Lorentz frame.
The Hamiltonian density, derived from the action (\ref{vf action})
via a Legendre transformation, is
\begin{align} \label{Hamiltonian Density}
{\cal{H}} &= {\partial {\mathcal{L}}_A \over \partial(\partial_0 A_\mu)} \partial_0 A_\mu - {\mathcal{L}}_A \\
&= {\b_1 \over 2} F_{ij}^2 + \b_1 (\partial_0 A_i)^2 -\b_1 (\partial_i A_0)^2
+ \beta_{*}(\partial_i A_i)^2 - \beta_{*} (\partial_0 A_0)^2 \nonumber \\
&\qquad + \beta_4 {A^j A^k \over m^2} (\partial_j A_\rho)(\partial_k A^\rho)
- \beta_4 {A^0 A^0 \over m^2} (\partial_0 A_\rho)(\partial_0 A^\rho) ,
\end{align}
where Latin indices $i, j$ run over $\{1, 2, 3\}$. The total Hamiltonian corresponding to this density is
\begin{align}
H &= \int d^3 x {\cal{H}} \nonumber \\
&= \int d^3 x \big( \b_1(\partial_\mu A_i \partial_\mu A_i - \partial_\mu A_0 \partial_\mu A_0)
+(\b_1- \beta_{*})[(\partial_0 A_0)^2 - (\partial_i A_i)^2 ] \nonumber \\
&\qquad+ \beta_4 {A_j A_k \over m^2} (\partial_j A_\rho)(\partial_k A^\rho)
- \beta_4 {A_0 A_0 \over m^2} (\partial_0 A_\rho)(\partial_0 A^\rho)\big)\,.
\label{hamiltonian}
\end{align}
We have integrated by parts and assumed that $\partial_i A_j$ vanishes at spatial infinity; repeated lowered indices are summed (without any factors of the metric). Note that this Hamiltonian is identical to that of a theory with a smooth (positive semi-definite) potential instead of a Lagrange multiplier term, evaluated at field configurations for which the potential is minimized. Therefore, if the Hamiltonian is unbounded when the
fixed-norm constraint is enforced by a Lagrange multiplier, it will also be unbounded in the
case of a smooth potential.
There are only three dynamical degrees of freedom, so we may reparameterize $A_\mu$ such
that the fixed-norm constraint is automatically enforced and the allowed three-dimensional
subspace is manifest.
We define a boost variable $\phi$ and angular variables $\theta$ and $\psi$, so that
we can write
\begin{align}
A_0 &\equiv m \cosh \phi \\
A_i &\equiv m \sinh \phi f_i(\theta, \psi)
\end{align}
in the timelike case with $A_\mu A^\mu = - m^2$, and
\begin{align}
A_0 &\equiv m \sinh \phi \\
A_i &\equiv m \cosh \phi f_i(\theta, \psi)
\end{align}
in the spacelike case with $A_\mu A^\mu = + m^2$.
In these expressions,
\begin{align}
f_1 &\equiv \cos \theta \cos \psi \\
f_2 &\equiv \cos \theta \sin \psi \\
f_3 &\equiv \sin \theta\,,
\end{align}
so that $f_if_i = 1$.
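Both parameterizations satisfy the fixed-norm constraint identically; the following sympy sketch (our own consistency check) verifies the timelike and spacelike cases at once:

```python
import sympy as sp

m, phi, theta, psi = sp.symbols('m phi theta psi', real=True)

f = sp.Matrix([sp.cos(theta) * sp.cos(psi),
               sp.cos(theta) * sp.sin(psi),
               sp.sin(theta)])
assert sp.simplify((f.T * f)[0]) == 1           # f_i f_i = 1

eta = sp.diag(-1, 1, 1, 1)                      # Minkowski metric, (-+++)

# Timelike: A_0 = m cosh(phi), A_i = m sinh(phi) f_i.
A_t = sp.Matrix([m * sp.cosh(phi)]).col_join(m * sp.sinh(phi) * f)
# Spacelike: A_0 = m sinh(phi), A_i = m cosh(phi) f_i.
A_s = sp.Matrix([m * sp.sinh(phi)]).col_join(m * sp.cosh(phi) * f)

print(sp.simplify((A_t.T * eta * A_t)[0]))      # -m**2 (timelike norm)
print(sp.simplify((A_s.T * eta * A_s)[0]))      # m**2 (spacelike norm)
```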
In terms of this parameterization, the Hamiltonian density for a timelike {\ae}ther\ field becomes
\begin{align}
\label{tham}
{{\cal{H}}^{(t)} \over m^2} &= \b_1\sinh^2\phi\, \partial_\mu f_i \partial_\mu f_i +\b_1\partial_\mu \phi \partial_\mu \phi
+(\b_1 - \beta_{*})\left[ (\partial_0 \phi)^2 \sinh^2\phi - (\cosh \phi f_i \partial_i \phi + \sinh \phi \partial_i f_i)^2 \right] \nonumber \\
&\qquad +\beta_4 \sinh^2\phi \left[ (f_i \partial_i \phi)^2 + \sinh^2\phi (f_i \partial_i f_l)(f_j \partial_j f_l)\right] - \beta_4 \cosh^2\phi \left[ (\partial_0 \phi)^2 + \sinh^2\phi (\partial_0 f_i)^2 \right],
\end{align}
while for the spacelike case we have
\begin{align}
\label{sham}
{{\cal{H}}^{(s)} \over m^2} &= \b_1 \cosh^2\phi\, \partial_\mu f_i \partial_\mu f_i
-\b_1 \partial_\mu \phi \partial_\mu \phi + (\b_1- \beta_{*}) \left[ (\partial_0 \phi)^2 \cosh^2\phi
- (\sinh \phi f_i \partial_i \phi + \cosh \phi \partial_i f_i)^2 \right] \nonumber \\
&\qquad -\beta_4 \cosh^2\phi \left[ (f_i \partial_i \phi)^2 - \cosh^2\phi (f_i \partial_i f_l)(f_j \partial_j f_l)\right] +\beta_4 \sinh^2\phi \left[ (\partial_0 \phi)^2 - \cosh^2\phi (\partial_0 f_i)^2 \right].
\end{align}
Expressed in terms of the variables $\phi, \theta, \psi$, the Hamiltonian is a function of initial data that automatically respects the fixed-norm constraint. We assume that the derivatives $\partial_\mu A_\nu (t_0, \vec{x})$ vanish at spatial infinity.
\subsection {Timelike vector field}
We can now determine which values of the parameters $\{ \b_1, \beta_{*}, \b_4\}$ lead to
Hamiltonians that are bounded below, starting with the case of a timelike {\ae}ther\
field. We can examine the various possible cases in turn.
\begin{itemize}
\item {\bf Case One: $\b_1=\beta_{*}$ and $\beta_4 = 0$.}
This is the sigma-model kinetic term (\ref{sigmamodel}).
In this case the Hamiltonian density simplifies to
\begin{equation}
{\cal{H}}^{(t)} = m^2 \b_1(\sinh^2\phi\, \partial_\mu f_i \partial_\mu f_i
+\partial_\mu \phi \partial_\mu \phi) \,.
\end{equation}
It is manifestly non-negative when $\beta_1 >0$, and non-positive when $\beta_1 < 0$.
The sigma-model choice $\b_1=\beta_{*} >0$ therefore results in a theory that is stable. (See also \S 6.2 of \cite{Eling:2004dk}.)
\item {\bf Case Two: $\b_1 < 0$ and $\beta_4 = 0$.}
In this case, consider configurations with
$(\partial_0 f_i) \neq 0$, $(\partial_i f_j) = 0$, $\partial_\mu \phi = 0$, $\sinh^2 \phi \gg 1$. Then we have
\begin{equation}
{\cal{H}}^{(t)} \sim m^2 \b_1 \sinh^2\phi (\partial_0 f_i)^2.
\end{equation}
For $\b_1<0$, the Hamiltonian can be arbitrarily negative for any value of $\beta_{*}$.
\item {\bf Case Three: $\b_1 \geq 0$, $\beta_{*} < \b_1$, and $\beta_4 = 0$.}
We consider configurations with
$\partial_\mu f_i = 0$, $f_i \partial_i \phi \neq 0 $, $\partial_0 \phi = 0$, $\cosh^2 \phi \gg 1 $, which gives
\begin{equation}
{\cal{H}}^{(t)} \sim m^2 (\beta_{*}-\b_1) \cosh^2\phi (f_i \partial_i \phi)^2.
\end{equation}
Again, this can be arbitrarily negative.
\item {\bf Case Four: $\b_1 \geq 0$, $\beta_{*} > \b_1$, and $\beta_4 = 0$.}
Now we consider configurations with
$\partial_\mu f_i = 0$, $f_i \partial_i \phi = 0$, $\partial_0 \phi \neq 0$, $\sinh^2 \phi \gg 1 $. Then,
\begin{equation}
{\cal{H}}^{(t)} \sim m^2 (\b_1-\beta_{*}) \sinh^2\phi (\partial_0 \phi)^2,
\end{equation}
which can be arbitrarily negative.
\item {\bf Case Five: $\beta_4 \neq 0$.}
Now we consider configurations with $\partial_\mu f_i \neq 0$, $\partial_\mu \phi =0$ and $\sinh^2 \phi \gg 1 $. Then,
\begin{equation}
{\cal{H}}^{(t)} \sim m^2 \beta_4 \left[ \sinh^4\phi (f_i \partial_i f_l)(f_k \partial_k f_l) - \sinh^2\phi \cosh^2\phi (\partial_0 f_i)^2\right]\,,
\end{equation}
which can be arbitrarily negative for any non-zero $\beta_4$ and for any values of
$\b_1$ and $\beta_{*}$.
\end{itemize}
For any case other than the sigma-model choice $\b_1=\beta_{*}$, it is therefore straightforward
to find configurations with arbitrarily negative values of the Hamiltonian.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{hamplotcropped.pdf}
\caption{Hamiltonian density (vertical axis) when $\b_1 = 1$, $\beta_{*} = 1.1$, and $\theta = \psi = \partial_y \phi = \partial_z \phi = 0$ as a function of $\partial_t \phi$ (axis pointing into page) and $\partial_x \phi$ (axis pointing out of page) for various $\phi$ ranging from zero to $\phi_{crit} = \tanh^{-1} \sqrt{\b_1/\beta_{*}}$, the value of $\phi$ for which the Hamiltonian is flat at $\partial_x \phi = 0$, and beyond. Notice that the Hamiltonian density turns over and becomes negative in the $\partial_t \phi$ direction when $\phi > \phi_{crit}$.}\label{hamiltonian plots}
\end{figure}
Nevertheless, a perturbative analysis of the Hamiltonian would not necessarily discover
that it was unbounded. The reason for this is shown in
Fig.~\ref{hamiltonian plots}, which shows the Hamiltonian density for the theory
with $\b_1 = 1$, $\beta_{*} = 1.1$, in a restricted subspace
where $\partial_y\phi = \partial_z\phi = 0$ and $\theta = \psi = 0$, leaving only $\phi$,
$\partial_t\phi$, and $\partial_x\phi$ as independent variables. We have plotted
${\cal{H}}$ as a function of $\partial_t\phi$ and $\partial_x\phi$ for four different values of
$\phi$.
When $\phi$ is sufficiently small, so that the vector is close to being purely timelike,
the point $\partial_t\phi = \partial_x\phi = 0 $ is a local minimum. Consequently,
perturbations about constant configurations with small $\phi$ would appear stable. But for large values of $\phi$, the unboundedness of the Hamiltonian
becomes apparent. This phenomenon will arise again when we consider the evolution
of small perturbations in the next section. At the end of this section, we will explain why such regions of large $\phi$ are still in the regime of validity of the effective field theory expansion.
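On the subspace used in Fig.~\ref{hamiltonian plots} ($\theta = \psi = 0$, constant $f_i$, $\b_4 = 0$, and only $\partial_t\phi$ nonzero), the density (\ref{tham}) reduces to ${\cal H}^{(t)}/m^2 = \left[\b_1 + (\b_1 - \beta_{*})\sinh^2\phi\right](\partial_t\phi)^2$. A quick numerical sketch (our own check, using the figure's parameter values) confirms that the sign of this coefficient flips exactly at $\phi_{crit} = \tanh^{-1}\sqrt{\b_1/\beta_{*}}$:

```python
import math

beta1, beta_star = 1.0, 1.1      # the parameter values used in the figure

def H_timelike(phi, dtphi):
    """Timelike Hamiltonian density / m^2 restricted to theta = psi = 0,
    constant f_i, beta4 = 0, with only the d_t phi gradient switched on."""
    return (beta1 + (beta1 - beta_star) * math.sinh(phi) ** 2) * dtphi ** 2

phi_crit = math.atanh(math.sqrt(beta1 / beta_star))

print(H_timelike(phi_crit - 1e-3, 1.0) > 0)   # True: still a local minimum
print(H_timelike(phi_crit + 1e-3, 1.0) < 0)   # True: the unbounded direction opens
```

This is why a perturbative analysis around small $\phi$ misses the unboundedness: the negative direction only appears once the background is boosted beyond $\phi_{crit}$.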
\subsection{Spacelike vector field}
We now perform an equivalent analysis for an {\ae}ther\ field with a spacelike expectation
value. In this case all of the possibilities lead to Hamiltonians \eqref{sham} that are unbounded below,
and the case $\b_1=\beta_{*} > 0$ is not picked out.
\begin{itemize}
\item {\bf Case One: $\b_1 < 0$ and $\beta_4 = 0$.}
Taking $(\partial_\mu \phi) = 0$, $\partial_j f_i = 0$, $\partial_0 f_i \neq 0$, we find
\begin{equation}
{\cal{H}}^{(s)} \sim m^2 \b_1\cosh^2\phi (\partial_0 f_i)^2.
\end{equation}
\item {\bf Case Two: $\b_1 > 0$, $\beta_{*} \leq \b_1$, and $\beta_4 = 0$.}
Now we consider $\partial_\mu f_i = 0$, $\partial_i \phi \neq 0 $, $\partial_0 \phi = 0$, giving
\begin{equation}
{\cal{H}}^{(s)} \sim m^2 \left[ - \b_1 \partial_i \phi \partial_i \phi + (\beta_{*}-\b_1) \sinh^2\phi (f_i \partial_i \phi)^2\right].
\end{equation}
\item {\bf Case Three: $\b_1 \geq 0$, $\beta_{*} > \b_1$, and $\beta_4 = 0$.}
In this case we examine $(\partial_0 \phi) \neq 0$, $\partial_\mu f_i = 0$, $\partial_i \phi = 0$, which leads to
\begin{equation}
{\cal{H}}^{(s)} \sim m^2 (\b_1-\beta_{*}) \cosh^2\phi (\partial_0 \phi)^2.
\end{equation}
\item {\bf Case Four: $\beta_4 \neq 0$.}
Now we consider configurations with $\partial_\mu f_i \neq 0$, $\partial_\mu \phi =0$ and $\sinh^2 \phi \gg 1 $. Then,
\begin{equation}
{\cal{H}}^{(s)} \sim m^2 \beta_4 \left( \cosh^4\phi (f_i \partial_i f_l)(f_k \partial_k f_l) - \cosh^2\phi \sinh^2\phi (\partial_0 f_i)^2\right).
\end{equation}
\end{itemize}
In every case, it is clear that we can find initial data for a spacelike vector field that makes the
Hamiltonian as negative as we please, for all possible $\b_1$, $\beta_4$ and $\beta_{*}$.
\subsection{Smooth Potential}
The usual interpretation of a Lagrange multiplier constraint is that it is the low-energy limit of smooth potentials when the massive degrees of freedom associated with excitations away from the minimum cannot be excited. We now investigate whether these degrees of freedom can destabilize the theory. Consider the most general, dimension four, positive semi-definite smooth potential that has a minimum when the vector field takes a timelike vacuum expectation value,
\begin{equation}
V = {\xi \over 4} (A_\mu A^\mu + m^2)^2,
\end{equation}
where $\xi$ is a positive dimensionless parameter. The precise form of the potential should not affect the results as long as the potential is non-negative and has the global minimum at $A_\mu A^\mu = -m^2$.
We have seen that the Hamiltonian is unbounded from below unless the kinetic term takes the sigma-model form, $(\partial_\mu A_\n)(\partial^\mu A^\n)$. Thus we take the Lagrangian to be
\begin{equation}
{\cal{L}} = -{1\over 2}(\partial_\mu A_\n)(\partial^\mu A^\n) - {\xi \over 4} (A_\mu A^\mu + m^2)^2.
\end{equation}
Consider some fixed timelike vacuum $\bar A_\mu$ satisfying $\bar{A}_\mu \bar{A}^\mu = -m^2$.
We may decompose the {\ae}ther\ field into a scaling of the norm, represented by a scalar $\Phi$,
and an orthogonal displacement, represented by vector $B_\mu$ satisfying $\bar{A}_\mu B^\mu = 0$.
We thus have
\begin{equation}
A_\mu = \bar{A}_\mu - {\bar{A}_\mu \Phi \over m} + B_\mu\,,
\end{equation}
where
\begin{equation}
B_\mu = \left(\eta_{\mu\n}+{\bar{A}_\mu\bar{A}_\n \over m^2}\right) A^\n~~\text{and}~~ \Phi = {\bar{A}_\mu A^\mu \over m}+m.
\end{equation}
With this parameterization, the Lagrangian is
\begin{equation}
{\cal{L}} = {1 \over 2} (\partial_\mu \Phi)( \partial^\mu \Phi) - {1 \over 2} (\partial_\mu B_\n)(\partial^\mu B^\n) -{\xi \over 4} (2m\Phi + B_\mu B^\mu - \Phi^2)^2.
\end{equation}
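As a quick numerical sanity check on this parameterization (an illustrative sketch, not part of the derivation; the background boost and field values are arbitrary choices), one can verify that $B_\mu$ is orthogonal to $\bar{A}_\mu$, that the decomposition reassembles $A_\mu$, and that $A_\mu A^\mu + m^2 = 2m\Phi + B_\mu B^\mu - \Phi^2$, which is exactly the combination appearing in the potential term above:

```python
import numpy as np

# Minkowski metric with signature (-,+,+,+); "dot" contracts two contravariant vectors.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda u, v: u @ eta @ v

m = 2.0
Abar = m * np.array([np.cosh(0.3), np.sinh(0.3), 0.0, 0.0])  # timelike vacuum, Abar.Abar = -m^2
A = np.array([2.7, 1.1, -0.4, 0.8])                          # an arbitrary field configuration

# Orthogonal displacement and norm excitation (contravariant components)
B = A + Abar * dot(Abar, A) / m**2
Phi = dot(Abar, A) / m + m
```

Both defining relations and the potential identity hold to machine precision for any configuration, as expected from the algebra.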
The field $\Phi$ automatically has a wrong sign kinetic term, and, at the linear level, propagates with a dispersion relation of the form
\begin{equation}
\omega_\Phi^2 = \vec{k}^2 - 2\xi m^2.
\end{equation}
We see that in the case of a smooth potential, there exists a ghostlike mode (wrong-sign kinetic term) that is also tachyonic with spacelike wave vector and a group velocity that generically exceeds the speed of light. It is easy to see that sufficiently long-wavelength perturbations will exhibit exponential growth. The existence of a ghost when the norm of the vector field is not strictly fixed was shown in \cite{Elliott:2005va}.
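The long-wavelength instability is easy to see numerically. The following sketch (with arbitrary illustrative values of $m$, $\xi$, and the wave numbers) evaluates the dispersion relation above: modes with $\vec{k}^2 < 2\xi m^2$ have imaginary frequency and therefore grow exponentially.

```python
import numpy as np

m, xi = 1.0, 0.5
k = np.array([0.1, 1.0, 2.0])        # sample spatial wave numbers
omega_sq = k**2 - 2 * xi * m**2      # dispersion relation for the Phi mode

# omega_sq < 0 means omega is imaginary: the mode grows like exp(|omega| t)
growth_rate = np.sqrt(np.clip(-omega_sq, 0.0, None))
```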
In the limit as $\xi$ goes to infinity, the equations of motion enforce a fixed-norm constraint and the ghostlike and tachyonic degree of freedom freezes. The theory is equivalent to one of a Lagrange multiplier if the limit is taken appropriately.
\subsection{Discussion}
To summarize, we have found that the action in~\eqref{vf action} leads to a Hamiltonian that is globally bounded from below only in the case of a timelike sigma-model Lagrangian,
corresponding to $\b_1 = \beta_{*} > 0$ and $\beta_4 = 0$. Furthermore, we have verified (as was shown in \cite{Elliott:2005va}) that if the Lagrange multiplier term is replaced by a smooth, positive semi-definite potential, then a tachyonic ghost propagates and the theory is destabilized.
If the Hamiltonian is bounded below, the theory is stable, but the converse is not
necessarily true. The sigma-model theory is the only one for which this criterion suffices to guarantee
stability. In the next section, we will examine the linear stability of these models by
considering the growth of perturbations. Although some models are stable at the linear
level, we will see in the following section that most of these have negative-energy ghosts,
and are therefore unstable once interactions are included. The only exceptions, both
ghost-free and linearly stable, are the
Maxwell \eqref{maxwell} and scalar \eqref{scalar}
models.
We showed in the previous section that, unless
$\beta_{*} - \beta_1$ and $\beta_4$ are exactly zero, the
Hamiltonian is unbounded from below.
However, the effective field theory breaks down before arbitrarily
negative values of the Hamiltonian can be reached; when $\beta_{*} \neq
\b_1$ and/or $\b_4 \neq 0$, in regions of phase space in which ${\cal H} < 0$ (schematically),
\begin{equation}
{\cal H} \sim - m^2 e^{4 |\phi|} (\partial \Theta)^2 \qquad \text{where} \qquad \Theta \in \{\phi, \theta, \psi\}.
\end{equation} The effective field theory breaks down when kinetic terms with four derivatives (the terms of next highest order in the effective field theory expansion) are on the order of terms with two derivatives, or, in the angle parameterization, when
\begin{equation}
m^2 e^{4 |\phi|} (\partial \Theta)^2 \sim e^{8 |\phi|} (\partial \Theta)^4.
\end{equation}
In other words, the effective field theory is only valid when
\begin{equation}
e^{2 |\phi|} |\partial \Theta | < m.
\end{equation}
In principle, terms in the effective action with four or more derivatives could
add positive contributions to the Hamiltonian to make it bounded from below.
However, our analysis shows that the Hamiltonian (in models other than the timelike sigma model
with fixed norm) is necessarily concave down around the
set of configurations with constant {\ae}ther\ fields. If higher-derivative terms intervene to
stabilize the Hamiltonian, the true vacuum would not have $H=0$.
Theories could also be deemed stable if there are additional symmetries that lead to conserved currents (other than energy-momentum density) or to a reduced number of physical degrees of freedom.
Regardless of the presence of terms beyond leading order in the
effective field theory expansion, due to the presence of the
ghost-like and tachyonic mode (found in the previous section),
there is an unavoidable problem with
perturbations when the field moves in a smooth, positive semi-definite
potential. This exponential instability will be present regardless
of higher order terms in the effective field theory expansion because it occurs for very long-wavelength modes
(at least around constant-field backgrounds).
\section{Linear instabilities}
We have found that the Hamiltonian of a generic {\ae}ther\ model is unbounded below.
In this section, we investigate whether there exist actual physical instabilities at the
linear level---{\it i.e.}, whether small perturbations grow exponentially with time.
It will be necessary to consider the behavior of small fluctuations in every
Lorentz frame,\footnote{The theory of perturbations about a constant background is
equivalent to a theory with explicit Lorentz violation because the first order
Lagrange density includes the term, $\lambda \bar{A}^\mu \delta A_\mu$, where
$\bar{A}^\mu$ is effectively some constant coefficient.} not
only in the {\ae}ther\ rest frame \cite{Kostelecky:2001xz, Mattingly:2005re,Adams:2006sv}.
We find a range of parameters $\beta_i$ for which the theories are tachyon-free; these
correspond (unsurprisingly) to dispersion relations for which the phase velocity
satisfies $0 \leq v^2 \leq 1$. In \S \ref{ghosts} we consider the existence of ghosts.
\subsection{Timelike vector field}
Suppose Lorentz invariance is spontaneously broken so that there is a preferred rest frame, and imagine that perturbations of some field in that frame have the following dispersion relation:
\begin{equation} \label{disprelation}
v^{-2} \o^2 = \vec{k} \cdot \vec{k}.
\end{equation}
This can be written in frame-invariant notation as
\begin{equation}\label{Tdispersion}
(v^{-2} - 1) (t^\mu k_\mu)^2 = k_\mu k^\mu,
\end{equation} where $t^\mu$ is a timelike Lorentz vector that characterizes the
4-velocity of the preferred rest frame. So, in the rest frame, $t^\mu =
\{1,0,0,0\}$. Indeed, in the Appendix, we find dispersion relations for the {\ae}ther\ modes of exactly the form in \eqref{Tdispersion} with $t^\mu = \bar{A}^\mu / m$ and \eqref{apA:spin-1}
\begin{equation}
v^2 = {\b_1 \over \b_1 - \b_4}
\end{equation} and \eqref{scalar mode}
\begin{equation}
v^2 = {\b_* \over \b_1 - \b_4}.
\end{equation}
Now consider the dispersion relation for perturbations
of the field in another (``primed'') frame. Let's solve for $k_0' =
\o'$, the frequency of perturbations in the new
frame. Expanded out, the dispersion relation reads
\begin{equation}
\o'^2(1 + (v^{-2}-1)(t'^0)^2) + 2 \o' (v^{-2} - 1)t'^0 t'^i k_i'
- \vec{k}' \cdot \vec{k}' + (v^{-2} - 1) (t'^i k_i')^2 = 0
\end{equation}
where $i \in \{ 1,2,3 \}$. The solution for $\o'$ is:
\begin{equation}
\label{omegaTdispersion}
\o' = {-(v^{-2} - 1)t'^0 t'^i k'_i \pm \sqrt{D_{(t)}} \over 1 +(v^{-2}-1)(t'^0)^2}\,,
\end{equation}
where
\begin{equation}
{D_{(t)}} = \vec{k}' \cdot \vec{k}' + (v^{-2}-1)\left((t'^0)^2 \vec{k}' \cdot \vec{k}' - (t'^i k'_i)^2 \right).
\end{equation}
In general, $t'^0 = \cosh \h$ and $t'^i = \sinh \h \, \hat{n}^i$, where $\hat{n}_i \hat{n}^i
= 1$ and $\h = \cosh^{-1}\gamma$ is a boost parameter. We therefore have
\begin{equation}
{D_{(t)}} = \vec{k}' \cdot \vec{k}' \left\{1 + (v^{-2}-1)\left[\cosh^2\h -
\sinh^2\h \, (\hat{n}\cdot \hat{k}')^2 \right] \right\},
\end{equation}
where $\hat{k}' = \vec{k}'/|\vec{k}'|$.
Thus ${D_{(t)}} $ is clearly greater than zero if $v \leq 1$. However, if $v > 1$ then ${D_{(t)}} $ can be negative for very large boosts if $\vec{k}'$ is not parallel to the boost direction.
The sign of the discriminant ${D_{(t)}} $ determines whether the frequency
$\o'$ is real- or complex-valued. We have shown that when the phase
velocity $v$ of some field excitation is greater than the speed of
light in a preferred rest frame, then there is a (highly boosted)
frame in which the excitation looks unstable---that is, the frequency
of the field excitation can be imaginary. More specifically, plane
waves with wave vectors perpendicular to the boost direction, with boost parameter $\gamma = \cosh \h$, have a growing
amplitude if $\gamma^2 > 1/(1-v^{-2}) > 0$.
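This threshold can be checked numerically. The sketch below (with arbitrary illustrative values $v = 1.2$, boost parameter $\eta = 1.5$, and a unit wave vector perpendicular to the boost direction) solves the quadratic above for $\o'$ and finds a complex-conjugate pair of roots; the root with positive imaginary part is a growing mode. Here $\gamma^2 = \cosh^2\eta \approx 5.5$ indeed exceeds $1/(1-v^{-2}) \approx 3.3$.

```python
import numpy as np

v, eta_b = 1.2, 1.5                       # superluminal phase velocity, boost parameter
t0, tx = np.cosh(eta_b), np.sinh(eta_b)   # boosted components of t^mu (boost along x)
kx, ky = 0.0, 1.0                         # wave vector perpendicular to the boost

# Coefficients of the quadratic in omega' from the boosted dispersion relation
a = 1 + (v**-2 - 1) * t0**2
b = 2 * (v**-2 - 1) * t0 * tx * kx
c = -(kx**2 + ky**2) + (v**-2 - 1) * (tx * kx)**2
roots = np.roots([a, b, c])

# The discriminant D_(t) is negative here, so both roots are complex; the one
# with positive imaginary part corresponds to an exponentially growing mode.
```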
In Appendix~\ref{Ap:A}, we find dispersion relations of the form in~\eqref{Tdispersion} for the various massless excitations about a constant timelike background ($t^\mu = \bar{A}^\mu /m$). Requiring stability and thus $0 \leq v^2 \leq 1$ leads to the inequalities,
\begin{equation} \label{spin1timelike}
0 \le {\beta_1 \over \beta_1 - \beta_4} \le 1
\end{equation}
and
\begin{equation} \label{spin0timelike}
0 \le {\beta_{*} \over \beta_1 - \beta_4} \le 1 \, .
\end{equation}
Models satisfying these relations are stable with respect to linear perturbations in any
Lorentz frame.
\subsection{Spacelike vector field}\label{s superluminal}
We show in Appendix~\ref{Ap:A} that fluctuations about a spacelike,
fixed-norm, vector field background have dispersion relations of the form
\begin{equation}\label{Sdispersion}
(v^2 - 1) (s^\mu k_\mu)^2 = - k_\mu k^\mu,
\end{equation} with $s^\mu = \bar{A}^\mu / m$ and \eqref{apA:spin-1}
\begin{equation}
v^2 = { \b_1 + \b_4 \over \b_1}
\end{equation} and \eqref{scalar mode}
\begin{equation}
v^2 = { \b_1 + \b_4 \over \b_*}.
\end{equation}
In frames where $s^\mu = \{0, \hat{s}\}$,
$v$ is the phase velocity in the $\hat{s}$ direction.
Consider solving for $k'_0 = \omega'$ in an arbitrary (``primed'') frame. The
solution is as in~\eqref{omegaTdispersion}, but with $v^{-2} \rightarrow
2 - v^{2}$ and $t'^\mu \rightarrow s'^\mu$. Thus,
\begin{equation}
\label{omegaSdispersion}
\o' = {(v^{2} - 1)s'^0 s'^i k'_i \pm \sqrt{D_{(s)}} \over 1 +(1-
v^{2})(s'^0)^2}\,,
\end{equation}
where
\begin{equation}
D_{(s)} = \vec{k}' \cdot \vec{k}' - (v^{2}-1)\left[(s'^0)^2 \vec{k}' \cdot \vec{k}' - (s'^i k'_i)^2 \right].
\end{equation}
In general, $s'^0 = \sinh \h$ and $s'^i = \cosh \h \, \hat{n}^i$ where $\hat{n}_i \hat{n}^i
= 1$ and $\h = \cosh^{-1}\gamma$ is a boost parameter. So,
\begin{equation}
D_{(s)} = \vec{k}' \cdot \vec{k}' \left\{ 1 - (v^{2}-1)\left[\sinh^2\h -
\cosh^2\h \, (\hat{n}\cdot \hat{k'})^2 \right] \right\}.
\end{equation}
This can be rewritten as
\begin{equation}
D_{(s)} = \vec{k}' \cdot \vec{k}'\left\{ v^{2} + (1-v^{2})\cosh^2\h\left[1-
(\hat{n}\cdot \hat{k'})^2 \right]\right\}.
\end{equation}
It is clear that $D_{(s)}$ is non-negative for all values of $\eta$ if and only if $0 \leq v^2 \leq 1$; the theory will therefore be unstable unless $0 \leq v^2 \leq 1$.
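A quick numerical scan (an illustrative sketch with arbitrary parameter grids) confirms this: for $0 \leq v^2 \leq 1$ the braced factor in $D_{(s)}$ stays non-negative for all boosts and directions, while for $v^2 > 1$ it turns negative at large boost whenever $\vec{k}'$ is off the boost axis, and stays positive on the axis itself.

```python
import numpy as np

def D_s_factor(v2, eta_b, mu):
    """Braced factor in D_(s), per unit |k'|^2; mu = n.khat."""
    return v2 + (1 - v2) * np.cosh(eta_b)**2 * (1 - mu**2)

etas = np.linspace(0.0, 5.0, 51)
mus = np.linspace(-1.0, 1.0, 41)

# v^2 = 0.5: non-negative for every boost and direction
stable = np.array([[D_s_factor(0.5, e, mu) for mu in mus] for e in etas])

# v^2 = 1.5: negative at large boost for wave vectors off the boost axis
unstable = np.array([[D_s_factor(1.5, e, mu) for mu in mus] for e in etas])
```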
The dispersion relations of the form~\eqref{Sdispersion} for the massless excitations about the spacelike background are given in Appendix~\ref{Ap:A}. The requirement that $ 0 \leq v^2 \leq 1$ implies
\begin{equation} \label{spin1spacelike}
0 \le {\beta_1 + \beta_4 \over \beta_1} \le 1
\end{equation}
and
\begin{equation} \label{spin0spacelike}
0 \le {\beta_1 + \beta_4 \over \beta_{*}} \le 1 \, .
\end{equation}
Models of spacelike {\ae}ther\ fields will only be stable with respect to linear perturbations
if these relations are satisfied.
The requirements \eqref{spin0timelike} or \eqref{spin0spacelike} do not apply in the Maxwell case
(when $\b_*=0=\b_4$), and those of \eqref{spin1timelike} or \eqref{spin1spacelike} do not
apply in the scalar case (when $\b_1 = 0 = \b_4$), since the corresponding degrees of freedom in each case do not propagate.
\subsection{Stability is not frame-dependent}
The excitations about a constant background are massless (\emph{i.e.}~the frequency is proportional to the magnitude of the spatial wave vector), but they generally do not propagate along the light cone. In fact, when $v >1$, the wave vector is timelike even though the cone along which excitations propagate is strictly outside the light cone. We have shown that such excitations blow up in some frame.
The exponential instability occurs for observers in boosted frames. In these frames, portions of constant-time hypersurfaces are actually inside the cone along which excitations propagate.
Why do we see the instability in only \emph{some} frames when performing a linear stability analysis?
Consider boosting the wave four-vectors of such excitations with
complex-valued frequencies and real-valued spatial wave vectors back
to the rest frame. Then, in the rest frame, both the frequency and the
spatial wave vector will have non-zero imaginary parts. Such solutions
with complex-valued $\vec{k}$ require initial data that grow at
spatial infinity and are therefore not really ``perturbations'' of the
background. But even though the {\ae}ther\ field defines a rest frame,
there is no restriction against considering small perturbations
defined on a constant-time hypersurface in any frame. Well-behaved
initial data can be decomposed into modes with real spatial wave
vectors; if any such modes lead to runaway growth, the theory is
unstable.
\section{Negative Energy Modes} \label{ghosts}
We found above that manifest perturbative stability in all frames requires $0\le v^2 \le 1$. In the Appendix, we show that there are two kinds of propagating modes, except when $\b_*= \b_4 = 0$ or when $\b_1 = \b_4 = 0$. Based on the dispersion relations for these modes, the $0\le v^2 \le 1$ stability requirements translated into the inequalities for $\b_*, \b_1$, and $\b_4$ in~\eqref{spin1timelike}-\eqref{spin0timelike} for timelike {\ae}ther\ and~\eqref{spin1spacelike}-\eqref{spin0spacelike} for spacelike {\ae}ther. We shall henceforth assume that these inequalities hold and, therefore, that $\o$ and $\vec{k}$ for each mode are real in every frame. We will now show that, even when these requirements
are satisfied and the theories are linearly stable, there will be negative-energy ghosts that imply
instabilities at the nonlinear level (except for the sigma model, Maxwell, and scalar cases).
For timelike vector fields, with respect to the {\ae}ther rest frame, the various modes correspond to two spin-1 degrees of freedom and one spin-0 degree of freedom. Based on their similarity in form to the timelike {\ae}ther rest frame modes, we will label these modes once and for all as ``spin-1'' or ``spin-0,'' even though these classifications are only technically correct for timelike fields in the {\ae}ther rest frame.
The solutions to the first order equations of motion for perturbations
$\delta A_\mu$ about an arbitrary, constant, background $\bar{A}_\mu$ satisfying $\bar{A}^\mu \bar{A}_\mu \pm m^2 = 0$
are (see Appendix~\ref{Ap:A}):
\begin{equation}
\delta A_\mu = \int d^4 k \, q_\mu(k) e^{i k_\mu x^\mu}, \qquad q_\mu(k) = q_\mu^*(-k)
\end{equation} where either,
\begin{equation}
\label{mode1}
q_\mu(k) = i \alpha^\n k^\r {\bar{A}^\sigma \over m} \e_{\mu \n \r
\sigma}~~\text{and}~~\b_1 k_\mu k^\mu + \b_4{ ( \bar{A}_\mu k^\mu)^2 \over m^2}= 0~~\text{and}~~\alpha^\n \bar{A}_\n = 0 \qquad \text{(spin-1)}
\end{equation} where $\alpha^\n$ are real-valued constants or
\begin{equation}
\label{mode2}
q_\mu = i \alpha \left(\h_{\mu \n} \pm {\bar{A}_{\mu} \bar{A}_{\n}
\over m^2}\right)k^\n \qquad
\text{and} \qquad \left( \beta_{*} \h_{\mu \n} + \left(\b_4 \pm (\beta_{*} -
\b_1) \right) {\bar{A}_{\mu} \bar{A}_{\n}\over m^2}\right)k^\mu k^\n = 0 \qquad \text{(spin-0)}
\end{equation}
where $\alpha$ is a real-valued constant.
Note that when $\b_1 = \b_4 = 0$, corresponding to the scalar form of \eqref{scalar}, the spin-1 dispersion relation is satisfied trivially, because the spin-1 mode does not propagate in this case. Similarly, when $\b_* = \b_4 = 0$, the kinetic term takes on the Maxwell form in \eqref{maxwell} and the spin-0 dispersion relation becomes $\bar{A}_\mu k^\mu = 0$; the spin-0 mode does not propagate in that case.
The Hamiltonian~\eqref{hamiltonian} for either of these modes is
\begin{equation}
\label{k hamiltonian}
H = \int d^3 k \, \left\{ \left[ \b_1 (\o^2+\vec{k}\cdot \vec{k}) +\b_4(-(\bar{a}^0 \o)^2 + (\bar{a}^i k_i)^2) \right] q^\mu q_\mu^{*} + (\b_1 - \beta_{*}) (\o^2 q_0^*q_0 + k_i q_i^* k_j q_j ) \right\}\,,
\end{equation}
where $k_0 = \o = \o(\vec{k})$ is given by the solution to a dispersion relation and where
$\bar{a}^\mu \equiv \bar{A}^\mu / m$.
One can show that, as long as $\b_1$ and $\b_4$ satisfy the conditions~\eqref{spin1timelike}
or~\eqref{spin1spacelike} that guarantee real frequencies $\o$ in all frames, we will have
\begin{equation}
q^*_\mu q^\mu \geq 0
\end{equation}
for all timelike and spacelike vector perturbations.
We will now proceed to evaluate the Hamiltonian for each mode in different
theories.
\subsection{Spin-1 energies}
In this section we consider nonvanishing $\b_4$, and show that the spin-1 mode can carry
negative energy even when the conditions for linear stability are satisfied.
\paragraph*{Timelike vector field.}
Without loss of generality, set
\begin{equation}
\bar{A}_\mu = m (\cosh \h, \sinh \h \, \hat{n}).
\end{equation} where $\hat{n} \cdot \hat{n} = 1$.
The energy of the spin-1 mode in the timelike case is given by
\begin{equation}
H = \int d^3k (\vec{k}\cdot \vec{k}) q^*_\mu q^\mu \left[{2 X \mp \beta_4 \sinh(2\h) (\hat{n}\cdot\hat{k})\sqrt{X} \over \beta_1 -\beta_4 \cosh^2\h } \right],
\end{equation}
where
\begin{equation}
X = \beta_1\left\{\beta_1 + \beta_4\left[(\hat{n}\cdot \hat{k})^2 \sinh^2\h - \cosh^2\h
\right]\right\}. \end{equation}
Looking specifically at modes for which $\hat{n} \cdot \hat{k} = +1$, we find
\begin{equation}
H = \int d^3k (\vec{k}\cdot \vec{k})q^*_\mu q^\mu \left[{2\beta_1(\beta_1 - \beta_4) \mp \beta_4 \sinh(2\h) \sqrt{\beta_1(\beta_1 - \beta_4)} \over \beta_1 -\beta_4 \cosh^2\h } \right]\,.
\end{equation}
The energy of such a spin-1 perturbation can be negative when
$|\beta_4 \sinh(2\h)| > 2\sqrt{\beta_1(\beta_1 - \beta_4)}$. Thus it is possible to have negative energy perturbations whenever $\beta_4 \neq 0$. Perturbations with wave numbers perpendicular to the boost direction have positive semi-definite energies.
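This threshold is straightforward to verify numerically. The sketch below uses the illustrative, linearly stable choice $\beta_1 = 1$, $\beta_4 = -1$ (so that $0 \le \beta_1/(\beta_1-\beta_4) \le 1$) and boost $\eta = 1.2$, for which $|\beta_4 \sinh 2\eta| > 2\sqrt{\beta_1(\beta_1-\beta_4)}$; evaluating the bracketed factor for both signs shows one branch carries negative energy.

```python
import numpy as np

b1, b4, eta_b = 1.0, -1.0, 1.2   # linearly stable: 0 <= b1/(b1 - b4) = 0.5 <= 1
X = b1 * (b1 - b4)
den = b1 - b4 * np.cosh(eta_b)**2

# Bracketed energy factor for modes with n.khat = +1, for the two sign choices
brackets = [(2 * X - s * b4 * np.sinh(2 * eta_b) * np.sqrt(X)) / den
            for s in (+1, -1)]
```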
\paragraph*{Spacelike vector field.}
Without loss of generality, for the spacelike case we set
\begin{equation}
\bar{A}_\mu = m (\sinh \h, \cosh \h\,\hat{n})\,,
\end{equation} where $\hat{n} \cdot \hat{n} = 1$.
The energy of the spin-1 mode in this case is given by
\begin{equation}
H = \int d^3k (\vec{k}\cdot \vec{k}) q^*_\mu q^\mu
\left[{2 X \mp \beta_4 \sinh(2\h) (\hat{n}\cdot\hat{k})\sqrt{X} \over \beta_1 -\beta_4 \sinh^2\h } \right],
\end{equation}
where
\begin{equation}
X = \beta_1\left\{\beta_1 + \beta_4\left[(\hat{n}\cdot \hat{k})^2 \cosh^2\h - \sinh^2\h\right]\right\}.
\end{equation}
Looking at modes for which $\hat{n} \cdot \hat{k} = +1$, we find
\begin{equation}
H = \int d^3k (\vec{k}\cdot \vec{k})q^*_\mu q^\mu \left[{2\beta_1(\beta_1 + \beta_4) \mp \beta_4 \sinh(2\h) \sqrt{\beta_1(\beta_1 + \beta_4)} \over \beta_1 - \beta_4 \sinh^2\h } \right]\,.
\end{equation}
The energy of such perturbations can be negative when $|\beta_4 \sinh(2\h)| > 2\sqrt{\beta_1(\beta_1 + \beta_4)}$; it is thus possible to have negative energy perturbations whenever $\beta_4 \neq 0$. Perturbations with wave numbers perpendicular to the boost direction have positive semi-definite energies. In either the timelike or spacelike case, models with $\b_4\neq 0$ feature spin-1
modes that can be ghostlike.
We note that the effective field theory is valid when $k < e^{- 3 |\h|} m$, as detailed in \S \ref{Validity of effective field theory}. But even if $\h$ is very large, the effective field theory is still valid for very long wavelength perturbations, and therefore such long wavelength modes with negative energies lead to genuine instabilities.
\subsection{Spin-0 energies}
We now assume the inequalities required for linear stability, \eqref{spin0timelike} or~\eqref{spin0spacelike}, and also that $\b_4 = 0$. We showed above that, otherwise, there are growing modes in some frame or there are propagating spin-1 modes that have negative energy in some frame.
When $\beta_{*} \neq 0$, the energy of the spin-0 mode in~\eqref{mode2} is given by
\begin{equation}
\label{mode2 energy}
H = 2 \b_1 \alpha^2 \int d^3 k \, (\bar{a}_\r k^\r)^2\left( \o^2(\vec{k})\left[\pm 1 - (1 - \b_1/\beta_{*}) \bar{a}_0^2 \right] + \o(\vec{k})\, \bar{a}_0 (1 - \b_1/\beta_{*}) \bar{a}_i k_i \right)
\end{equation} for $\bar{A}_\mu \bar{A}^\mu \pm m^2 = 0$ and $\bar{a}_\mu \equiv \bar{A}_\mu / m$.
\paragraph*{Timelike vector field.}
We will now show that the quadratic order Hamiltonian can be negative when the background is timelike and the kinetic term does not take one of the special forms (sigma model, Maxwell, or scalar).
Without loss of generality we set $\bar{a}_0 = \cosh \h $ and $\bar{a}_i = \sinh \h \, \hat{n}_i$, where $\hat{n}\cdot \hat{n} = 1$. Then plugging the frequency $\o(\vec{k})$, as defined by the spin-0 dispersion relation, into the Hamiltonian~\eqref{mode2 energy} gives
\begin{equation}
\label{T spin-0 energy}
H = \b_1 \alpha^2 \int d^3 k \, (\bar{a}_\r k^\r)^2 \left[{
2 X \pm (1- \beta_1/\beta_{*})\sinh 2\h (\hat{n}\cdot \hat{k})\sqrt{X}
\over
1 + (\b_1/\beta_{*} - 1)\cosh^2\h
}\right],
\end{equation}
where
\begin{equation}
X = {1+ (\b_1/\beta_{*} -1) [\cosh^2\h - (\hat{n}\cdot\hat{k})^2\sinh^2\h ]}.
\end{equation}
If $\hat{n}\cdot\hat{k} \neq 0$, the energy can be negative. In particular, if $\hat{n}\cdot\hat{k} = 1$ we have
\begin{equation}
H = \b_1 \alpha^2 \int d^3 k \, (\bar{a}_\r k^\r)^2 \left[{
2 \b_1/\beta_{*} \pm (1- \beta_1/\beta_{*})\sinh 2\h \sqrt{\b_1 / \beta_{*}}
\over
1+ (\b_1/\beta_{*} - 1)\cosh^2\h
}\right].
\end{equation}
Given that $ \b_1 / \beta_{*} -1 \geq 0$, $H$ can be negative when $| \sinh 2 \h | > 2 \sqrt{\b_1 / \beta_{*}} / (\b_1 / \beta_{*} - 1)$.
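Numerically (with the illustrative choices $\beta_1/\beta_* = 4$, which satisfies the linear-stability condition $0 \le \beta_*/\beta_1 \le 1$, and boost $\eta = 1$), the bracketed factor in the spin-0 energy is negative for one sign choice, since $|\sinh 2\eta| \approx 3.63$ exceeds $2\sqrt{\beta_1/\beta_*}/(\beta_1/\beta_* - 1) \approx 1.33$:

```python
import numpy as np

r, eta_b = 4.0, 1.0    # r = b1/b_*, with b4 = 0; linearly stable since 1/r <= 1
den = 1 + (r - 1) * np.cosh(eta_b)**2

# Bracketed factor in the spin-0 Hamiltonian for n.khat = +1, both sign choices
brackets = [(2 * r + s * (1 - r) * np.sinh(2 * eta_b) * np.sqrt(r)) / den
            for s in (+1, -1)]
```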
We have thus shown that, for timelike backgrounds, there are modes that in some frame have
negative energies and/or growing amplitudes as long as $\b_1 \neq \beta_{*}$, $\b_1 \neq 0$,
and $\beta_{*}\neq 0$. Therefore, the only possibly stable theories of timelike {\ae}ther\ fields
are the special cases mentioned earlier: the sigma-model ($\b_1 = \beta_{*}$), Maxwell
($\beta_{*} = 0$), and scalar ($\b_1=0$) kinetic terms.
\paragraph*{Spacelike vector field.} For the spacelike case,
without loss of generality we set $\bar{a}_0 = \sinh \h $ and $\bar{a}_i = \cosh \h \, \hat{n}_i$,
where $\hat{n}\cdot \hat{n} = 1$. Once again, plugging
the frequency $\o(k)$ into the Hamiltonian~\eqref{mode2 energy} gives
\begin{equation}
\label{S spin-0 energy}
H = \b_1 \alpha^2 \int d^3 k \, (\bar{a}_\r k^\r)^2 \left[{
- 2 X \pm (1 - \beta_1/\beta_*)\sinh 2\h (\hat{n}\cdot \hat{k})\sqrt{X}
\over
1 + (1- \b_1 / \beta_{*})\sinh^2\h
}\right] ,
\end{equation}
where
\begin{equation}
X = {1 + (1- \b_1 / \beta_{*} ) \left[\sinh^2\h - (\hat{n}\cdot\hat{k})^2\cosh^2\h \right]}.
\end{equation}
Upon inspection, one can see that there are values of
$\hat{n}\cdot \hat{k}$ and $\h$ that make $H$ negative, except when $\beta_{*} = 0$
(Maxwell) or $\b_1 = 0$ (scalar).
Again, the Hamiltonian density is less than zero for modes with wavelengths sufficiently long
($k < e^{-3 |\h|} m$), so the effective theory is valid.
\section{Maxwell and Scalar Theories}
We have shown that the only version of the {\ae}ther\ theory \eqref{vf action} for which the
Hamiltonian is bounded below is the timelike
sigma-model theory ${\cal L}_K = -(1/2)(\partial_\mu A_\nu)
(\partial^\mu A^\nu)$, corresponding to the choices $\b_1=\beta_{*}$, $\b_4=0$, with the
fixed-norm condition imposed by a Lagrange multiplier constraint. (Here and below, we
rescale the field to canonically normalize the kinetic terms.)
However, when we looked for explicit instabilities in the form of
tachyons or ghosts in the last two sections, we found two other models for which such
pathologies are absent: the Maxwell Lagrangian
\begin{equation}
{\cal L}_K = -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\, ,
\label{maxwell2}
\end{equation}
corresponding to $\beta_{*} = 0 = \b_4$, and the scalar Lagrangian
\begin{equation}
{\cal L}_K = \frac{1}{2} (\partial_\mu A^\mu)^2\, ,
\end{equation}
corresponding to $\b_1 = 0 = \b_4$.
In both of these cases, we found that the Hamiltonian is unbounded below,\footnote{Boundedness of the Hamiltonian was considered in \cite{Clayton:2001vy}.} but
a configuration with a small positive energy does not appear to run away into an
unbounded region of phase space characterized by large negative and positive balancing
contributions to the total energy.
These two models are also distinguished in another way:
there are fewer than three propagating degrees of freedom at first order in perturbations in the Maxwell and scalar Lagrangian cases, while there are three in all others. This is closely tied
to the absence of perturbative instabilities; the ultimate cause of those instabilities can
be traced to the difficulty in making all of the degrees of freedom simultaneously well-behaved.
The drop in number of degrees of freedom stems from the fact that $A_0$ lacks time derivatives in the Maxwell Lagrangian and that the $A_i$ lack time derivatives in the scalar Lagrangian. In other words, some of the vector components are themselves Lagrange multipliers in these special cases.
Only two perturbative degrees of freedom---the spin-1 modes---propagate in the Maxwell case (cf.~\eqref{mode1}-\eqref{mode2} when $\b_* = 0 = \b_4$). The ``mode'' in~\eqref{mode2} is a gauge degree of freedom; at first order in perturbations the Lagrangian has a gauge-like symmetry under $\delta A_\mu \rightarrow \delta A_\mu + \partial_\mu \phi(x)$ where $\bar{A}^\mu \partial_\mu \phi = 0$. As expected of a gauge degree of freedom, the spin-0 mode has zero energy and does not propagate. Meanwhile, the spin-1 perturbations propagate as well-behaved plane waves and have positive energy. We note that the Dirac method for counting degrees of freedom in constrained dynamical systems implies that there are \emph{three} degrees of freedom \cite{Bluhm:2008yt}.\footnote{For a discussion of constrained dynamical systems see \cite{Henneaux:1992ir}.} The additional degree of freedom, not apparent at the linear level, could conceivably cause an instability; this mode does not propagate because it is gauge-like at the linear level, but there is no gauge symmetry in the full theory.
In the scalar case, there are no propagating spin-1 degrees of freedom. The spin-0 degree of freedom has a nontrivial dispersion relation but no energy density (cf.~\eqref{mode1}-\eqref{mode2},~\eqref {T spin-0 energy}, and \eqref {S spin-0 energy} when $\b_1 = 0 = \b_4$) at leading order in the
perturbations. Essentially, the fixed-norm constraint is incompatible with what would be
a single propagating scalar mode in this model; the theory is still dynamical, but perturbation
theory fails to capture its dynamical content.
Each of these models displays some idiosyncratic features, which we now consider in turn.
\subsection{Maxwell action}
The equation of motion for the Maxwell Lagrangian with a fixed-norm constraint is
\begin{equation}
\partial_\mu F^{\mu\nu} = -2\lambda A^\nu\,.
\end{equation}
Setting $A_\mu A^\mu = \mp m^2$, the Lagrange multiplier is given by
\begin{equation}
\lambda = \pm \frac{1}{2m^2}A_\nu \partial_\mu F^{\mu\nu}\,.
\end{equation}
For timelike {\ae}ther fields, the sign of $\lambda$ is preserved along timelike trajectories since, when the kinetic term takes the special Maxwell form, there is a conserved current (in addition to energy-momentum density) due to the Bianchi identity\footnote{If $\lambda > 0$ initially, then it must pass through $\lambda = 0$ to reach $\lambda < 0$---but $\lambda = 0$ is conserved along timelike trajectories, so $\lambda$ can at best stop at $\lambda = 0$.}:
\begin{equation}
\label{eq:conservedcurrent}
0 = \partial_\nu (\partial_\mu F^{\mu \nu}) = -2\, \partial_\nu (\lambda A^\nu).
\end{equation}
In particular, the condition that $\lambda = 0$ is conserved along timelike $A^\nu$
\cite{Jacobson:2000xp,Bluhm:2008yt}. In the presence of interactions this will continue to be true
only if the coupling to external sources takes the form of an
interaction with a conserved current, $A_\mu J^\mu$ with $\partial_\mu J^\mu=0$.
If we take the timelike Maxwell theory coupled to a conserved current
and restrict to initial data satisfying $\lambda = 0$
at every point in space, the theory reduces precisely to Maxwell electrodynamics---not only in the equation of motion, but also in the energy-momentum tensor.
We can therefore be confident that this theory, restricted to this subset of initial
data, is perfectly well-behaved, simply because it is identical to conventional electromagnetism
in a nonlinear gauge \cite{Nambu:1968qk, Chkareuli:2006yf, Bluhm:2007bd}.
In the case of a spacelike vector expectation value, there is an explicit obstruction to
finding smooth time evolution for generic initial data. In this case, the constraint equations are
\begin{equation}
- A_0^2 + A_i A_i = m^2 \qquad \text{and} \qquad \partial_i \partial^i A_0 - \partial_0 \partial_i A^i = -2\lambda A_0.
\end{equation}
Suppose spatially homogeneous initial conditions for the $A_i$ are given. Without loss of generality, we can align axes such that
\begin{equation}
A_\mu(t_0) = (A_0(t_0),0,0,A_3(t_0)),
\end{equation}
where $-A^2_0 + A^2_3 = m^2$. If $A_i A_i \neq m^2$, then $A_0 \neq 0$, so for homogeneous fields the second constraint equation forces $\lambda = 0$ and the equations of motion are
\begin{equation}\label{badexample}
\partial_\mu {F^{\mu}}_\nu = 0.
\end{equation}
The $\nu = 3$ equation reads
\begin{equation}
\partial_\mu {F^\mu}_3 = -\frac{\partial^2 A_3}{\partial t^2} = 0,
\end{equation}
whose solutions are given by
\begin{equation}
A_3(t) = A_3(t_0) + C (t-t_0),
\end{equation}
where $C$ is determined by initial conditions. $A_0$ is determined by the fixed-norm constraint $A_0 = \pm \sqrt{A_3^2 - m^2}$. If $C \neq 0$, $A_0$ will eventually evolve to zero. Beyond this point, $A_3$ keeps decreasing, and the fixed-norm condition requires that $A_0$ be imaginary, which is unacceptable since $A_\mu$ is a real-valued vector field. Note that this never happens in the timelike case, as there always exists some real $A_0$ that satisfies the constraint for any value of $A_3$. The problem is that $A_3$ evolves into the ball $A^2_i < m^2$, which is catastrophic for the spacelike, but not the timelike, case. An analogous problem arises even when the
Lagrange multiplier constraint is replaced by a smooth potential.
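This finite-time breakdown is easy to illustrate numerically. The following sketch (with illustrative values $m = 1$, $A_3(t_0) = \sqrt{2}$, $C = -1/2$; none of these numbers come from the text) evolves $A_3$ linearly and locates the time at which the fixed-norm constraint can no longer be solved by a real $A_0$:

```python
import numpy as np

# Illustrative parameters (not from the text): m = 1, A_3(t0) = sqrt(2), C = -1/2.
m = 1.0
A3_0 = np.sqrt(2.0)   # satisfies -A_0^2 + A_3^2 = m^2 with A_0(t0) = 1
C = -0.5              # nonzero velocity drives A_3 toward the ball A_i A_i < m^2

t = np.linspace(0.0, 3.0, 3001)
A3 = A3_0 + C * t                  # free evolution A_3(t) = A_3(t0) + C (t - t0)
constraint = A3**2 - m**2          # A_0^2 must equal this to stay on the constraint

# The constraint has no real solution for A_0 once A_3^2 < m^2.
breakdown = t[np.argmax(constraint < 0.0)]
t_star = (A3_0 - m) / (-C)         # analytic breakdown time: A_3(t*) = m

print(f"numerical breakdown at t = {breakdown:.3f}, analytic t* = {t_star:.3f}")
```

The numerically detected breakdown agrees with the analytic time $t_* = (A_3(t_0) - m)/|C|$ at which $A_3$ enters the ball $A_i A_i < m^2$.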
It is possible that this obstruction to a well-defined evolution will be regulated by terms of higher order in the effective field theory. Using the fixed-norm constraint and solving for $A_0$, the derivative is
\begin{equation}
\partial_\mu A_0 = \frac{A_i}{\sqrt{A_j A_j - m^2}}\, \partial_\mu A_i.
\end{equation}
As $A_j A_j$ approaches $m^2$, with finite derivatives of the spatial components, the derivative of the $A_0$ component becomes unbounded. If higher-order terms in the effective action have time derivatives of
the component $A_0$, these terms could become relevant to the vector
field's dynamical evolution, indicating that we have left the realm of
validity of the low-energy effective field theory we are considering.
We are left with the question of how to interpret the timelike Maxwell theory with initial
data for which $\lambda \neq 0$. If we restrict our attention to initial data for which $\lambda < 0$ everywhere,
then the evolution of the $A_i$ would be determined and the Hamiltonian would be positive. We have
\begin{align}
H &= {1 \over 2} \int d^3 x \,\left( {1 \over 2}F^2_{ij} + (\partial_0 A_i)^2 - (\partial_i A_0)^2 \right)\\
&= {1 \over 2} \int d^3 x \,\left( {1 \over 2}F^2_{ij} + F_{0i}F_{0i} - 2(\partial_i A_0)F_{i0} \right)\\
&= {1 \over 2} \int d^3 x \,\left( {1 \over 2}F^2_{ij} + F_{0i}F_{0i} + 2A_0 \partial_i F_{i0} \right)\\
&= {1 \over 2} \int d^3 x \,\left( {1 \over 2}F^2_{ij} + F_{0i}F_{0i} - 4 \lambda A_0^2 \right), \label{ham with lambda}
\end{align}
which is manifestly positive when $\lambda < 0$. However, it is not clear why we should be restricted
to this form of initial data, nor whether even this restriction is enough to ensure stability
beyond perturbation theory.
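The integration-by-parts step between the second and third lines of the Hamiltonian manipulation above can be checked numerically. The sketch below is an illustration only: one spatial dimension, periodic boundary conditions (so the surface term vanishes), band-limited random fields, and spectral derivatives via the FFT. It verifies $\int (\partial_i A_0)\, F_{i0} = -\int A_0\, \partial_i F_{i0}$:

```python
import numpy as np

# Check int (d A_0) F = -int A_0 (d F) on a 1D periodic grid (sketch assumptions:
# periodic boundary conditions so the surface term vanishes; F is a random field
# standing in for F_{i0}). Derivatives are taken spectrally with the FFT.
rng = np.random.default_rng(0)
N, L = 256, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi

def smooth_field():
    # random band-limited periodic field (modes 1..8, well resolved on the grid)
    coeffs = rng.normal(size=8)
    return sum(ci * np.sin((i + 1) * x + i) for i, ci in enumerate(coeffs))

def deriv(f):
    # exact spectral derivative for band-limited fields
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

A0 = smooth_field()
F = smooth_field()
dx = L / N

lhs = np.sum(deriv(A0) * F) * dx
rhs = -np.sum(A0 * deriv(F)) * dx
print(lhs, rhs)   # equal up to machine precision: the surface term vanishes
```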
The status of this model in both the spacelike and timelike cases remains unclear. However, there are indications of further problems. For the spacelike case, Peloso \emph{et al.}~find a linear instability for perturbations with wave numbers on the order of the Hubble parameter in an exponentially expanding cosmology \cite{Himmetoglu:2008zp, Himmetoglu:2008hx}. For the timelike case, Seifert found a gravitational instability in the presence of a spherically symmetric source \cite{Seifert:2007fr}.
\subsection{Scalar action}
The equation of motion for the scalar Lagrangian with a fixed-norm constraint is
\begin{equation}
\partial^\nu \partial_\mu A^\mu = 2 \lambda A^\nu.
\end{equation}
Using the fixed-norm constraint ($A_\mu A^\mu = \mp m^2$), we can solve for the Lagrange multiplier field,
\begin{equation}
\lambda = \mp {1 \over 2 m^2} A_\nu \partial^\nu \partial_\mu A^\mu.
\end{equation}
In contrast with the Maxwell theory, in the scalar theory it is the timelike case for which
we can demonstrate obstacles to smooth evolution, while the spacelike case is less clear.
(The Hamiltonian is bounded below, but there are no perturbative instabilities or known
obstacles to smooth evolution.)
When the vector field is timelike, we have four constraint equations in the scalar case,
\begin{equation}
A_0^2 - A_iA_i = m^2 \qquad \text{and} \qquad \partial_i(\partial_\mu A^\mu) = 2\lambda A_i.
\end{equation}
Suppose we give homogeneous initial conditions such that $A_0(t_0) > m$. Align axes such that,
\begin{equation}
A_\mu(t_0) = \left(A_0(t_0),0,0,A_3(t_0) \right),
\end{equation}
where $A_3(t_0)^2 = A_0(t_0)^2 - m^2$. Note that, since $A_3(t_0) \neq 0$, we have $\lambda = 0$ from the $\nu = 3$ equation of motion. The $\nu = 0$ equation of motion therefore gives,
\begin{equation}
{d^2 A_0 \over d t^2} = 0.
\end{equation}
We see that the timelike component of the vector field has the time-evolution,
\begin{equation}
A_0(t) = A_0(t_0) + C (t-t_0).
\end{equation}
For generic homogeneous initial conditions, $C \neq 0$. In this case, $A_0$ will not have a smooth time evolution since $A_0$ will saturate the fixed-norm constraint, and beyond this point $A_0$ will continue to decrease in magnitude. To satisfy the fixed-norm constraint, the spatial components of the vector field $A_i$ would need to be imaginary, which is unacceptable since $A_\mu$ is a real-valued vector field. This problem never occurs for the spacelike case since there always exist real values of $A_i$ that satisfy the constraint for any $A_0$.
Again, it is possible that this obstruction to a well-defined evolution will be regulated by terms of higher order in the effective field theory. The time derivative of $A_3$ is
\begin{equation}
\partial_\mu A_3 = \frac{A_0}{\sqrt{A_0 A_0 - m^2}}\, \partial_\mu A_0.
\end{equation}
As $A_0 A_0$ approaches $m^2$, with finite derivatives of $A_0$, the derivative of the spatial component $A_3$ becomes unbounded. If higher-order terms in the effective action have time derivatives of
the components $A_i$, these terms could become relevant to the vector
field's dynamical evolution, indicating that we have left the realm of
validity of the low-energy effective field theory we are considering.
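The divergence of this factor as the constraint surface is approached can be made explicit; a minimal numerical sketch (with the illustrative value $m = 1$):

```python
import numpy as np

# The factor A_0 / sqrt(A_0^2 - m^2) multiplying the derivative of A_0 diverges
# as A_0 -> m^+ (illustrative value m = 1; a sketch, not the paper's code).
m = 1.0
A0 = m + np.logspace(-1, -8, 8)        # approach the constraint surface A_0 -> m^+
factor = A0 / np.sqrt(A0**2 - m**2)    # |dA_3 / dA_0| from the chain rule
for a, f in zip(A0, factor):
    print(f"A_0 - m = {a - m:.1e}   |dA_3/dA_0| = {f:.3e}")
```

The factor grows without bound as $A_0 \to m$, signaling the breakdown of the derivative expansion discussed above.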
Whether or not a theory with a scalar kinetic term and fixed expectation value is viable remains uncertain.
\section{Conclusions}
In this paper, we addressed the issue of stability in theories in which Lorentz invariance is spontaneously broken by a dynamical fixed-norm vector field with an action
\begin{equation}
S = \int d^4 x \, \left( -{1\over 2}\beta_1 F_{\mu\nu}F^{\mu\nu}
-\beta_{*}(\partial_\mu A^\mu)^2 -\beta_4 {A^\mu A^\nu \over m^2} (\partial_\mu A_\rho)(\partial_\nu A^\rho) + \lambda(A^{\mu} A_{\mu} \pm m^2) \right)\,,
\end{equation}
where $\lambda$ is a Lagrange multiplier that strictly enforces the fixed-norm constraint. In the spirit of effective field theory, we limited our attention to only kinetic terms that are quadratic in derivatives, and took care to ensure that our discussion applies to regimes in which an effective field theory expansion is valid.
We examined the boundedness of the Hamiltonian of the theory and showed that, for generic choices of kinetic term, the Hamiltonian is unbounded from below. Thus for a generic kinetic term, we have shown that a constant fixed-norm background is not the true vacuum of the theory. The only exception is the timelike sigma-model Lagrangian ($\beta_1 = \beta_{*}$, $\beta_4 = 0$ and $A^{\mu} A_{\mu} = -m^2$), in which case the Hamiltonian is positive-definite, ensuring stability. However, if the vector field instead acquires its vacuum expectation value by minimizing a smooth potential, we demonstrated (as was done previously in \cite{Elliott:2005va}) that the theory is plagued by the existence of a tachyonic ghost, and the Hamiltonian is unbounded from below.
The timelike fixed-norm sigma-model theory nevertheless serves as a viable starting point for phenomenological investigations of Lorentz invariance; we explore some of this phenomenology in a separate paper \cite{Carroll:2009en}.
We next examined the dispersion relations and energies of first-order perturbations about constant background configurations. We showed that, in addition to the sigma-model case, there are only two other choices of kinetic term for which perturbations have non-negative energies and do not grow exponentially in any frame: the Maxwell ($\beta_{*} = \beta_4 = 0$) and scalar ($\beta_1 = \beta_4 = 0$) Lagrangians. In either case, the theory has fewer than three propagating degrees of freedom
at the linear level, as some of the vector components in the action lack time derivatives and act as additional Lagrange multipliers.
A subset of the phase space for the Maxwell theory with a timelike {\ae}ther\ field is
well-defined and stable, but is identical to ordinary electromagnetism.
For the Maxwell theory with a spacelike {\ae}ther\ field, or the scalar theory with a timelike field,
we can find explicit obstructions to smooth time evolution. It remains unclear whether
the timelike Maxwell theory or the spacelike scalar theory can exhibit true violation of Lorentz
invariance while remaining well-behaved.
\section*{Acknowledgments}
We are very grateful to Ted Jacobson, Alan Kostelecky, and Mark Wise for helpful comments.
This research was supported in part by the U.S. Department of Energy and by the
Gordon and Betty Moore Foundation.
\section*{Acknowledgments}
This work is supported in part by
the National Science Council of R.O.C. under Grant No:
NSC-97-2112-M-006-001-MY3.
\section{Introduction}
\begin{figure}[b]
\centering
\includegraphics[width=0.9\linewidth]{figures/teaser.pdf}
\caption{We introduce a method to perform 3D reconstruction from the shadow cast on the floor by occluded objects. In the middle, we visualize our reconstruction of the occluded chair from a new camera view.}
\label{fig:teaser}
\end{figure}
Reconstructing the 3D shape of objects is a fundamental challenge in computer vision, with a number of applications in robotics, graphics, and data science. The task aims to estimate a 3D model from one or more camera views, and researchers over the last twenty years have developed excellent methods to reconstruct visible objects \cite{horry1997tour,hoiem2005automatic,ye2021shelf,mescheder2019occupancy,mildenhall2020nerf,hartley2003multiple,agarwal2010bundle}. However, objects are often occluded, with the line of sight obstructed either by another object in the scene, or by themselves (self-occlusion).
Reconstruction from a single image is an under-constrained problem, and occlusions further reduce the number of constraints.
To reconstruct occluded objects, we need to rely on additional context.
One piece of evidence that people use to uncover occlusions is the shadow cast on the floor by the hidden object. For example, Figure \ref{fig:teaser} shows a scene with an object that has become fully occluded. Even though no appearance features are visible, the shadow reveals that another object exists behind the chair, and the silhouette constrains the possible 3D shapes of the occluded object. What hidden object caused that shadow?
In this paper, we introduce a framework for reconstructing 3D objects from their shadows. We formulate a generative model of objects and their shadows cast by a light source, which we use to jointly infer the 3D shape and the location of the light source. Our model is fully differentiable, which allows us to use gradient descent to efficiently search for the best shapes that explain the observed shadow. Our approach integrates both learned empirical priors about the geometry of typical objects and the geometry of cameras in order to estimate realistic 3D volumes that are often encountered in the visual world.
Since we model the image formation process, we are able to jointly reason over the object geometry and the parameters of the light source. When the light source is unknown, we recover multiple different shapes and multiple different positions of the light source that are consistent with each other. When the light source location is known, our approach can make use of that information to further refine its outputs.
We validate our approach for a number of different object categories on a new ground truth dataset that we will publicly release.
The primary contribution of this paper is a method to use the shadows in a scene to infer the 3D structure, and the rest of the paper will analyze this technique in detail. Section 2 provides a brief overview of related work for using shadows. Section 3 formulates a generative model for objects and how they cast shadows, which we are able to invert in order to infer shapes from shadows. Section 4 analyzes the capabilities of this approaches with a known and unknown light source. We believe the ability to use shadows to estimate the spatial structure of the scene will have a large impact on computer vision systems' ability to robustly handle occlusions.
\section{Related Work}
We briefly review related work in 3D reconstructions, shadows, and generative models. Our paper combines a model of image formation with generative models.
\textbf{Single-view 3D Reconstruction and 3D Generative Models:}
The task of single-view 3D reconstruction -- given a single image view of a scene or object, generate its underlying 3D model -- has been approached by deep learning methods in recent years. This task is related to unconditional 3D model generation; while unconditional generation creates 3D models a priori, single-view reconstruction can be thought of as generation a posteriori where the condition is the input image view. Given the under-constrained nature of the problem, this is usually done with 3D supervision. Different lines of work address this by generating 3D models in different types of representations \cite{shin_pixels_2018}: specifically, whether they use voxels \cite{brock_generative_2016}, point cloud representations \cite{fan_point_2016}, meshes \cite{groueix2018papier,pontes2018image2mesh}, or the more recently introduced \textit{occupancy networks} \cite{mescheder2019occupancy}. While our general approach is compatible with any of these types of 3D representations, we elect to use occupancy networks in our implementation here as they achieve excellent performance at generating 3D structure.
Occupancy networks~\cite{mescheder2019occupancy} learn a function mapping from 3D position to a scalar describing the probability of occupancy -- that is, whether part of the 3D object occupies that position or not. This function is parameterized by a neural network. The learned occupancy function can be thought of as a binary classifier for each point, and the decision boundary of this classifier then provides a description of the object's surface. To extend this to the generative model setting, a latent vector with a Gaussian prior can be provided to the occupancy function in conjunction with the position vector~\cite{mescheder2019occupancy}. This enables sampling different occupancy functions by sampling different latent vectors.
The cost of obtaining 3D ground truth for supervision \cite{h36m_pami} poses a major limitation on single-view 3D reconstruction. To scale up applications, another line of work uses multi-view 2D images as supervision \cite{niemeyer2020differentiable,humanMotionKanazawa19,yu2020pixelnerf}, or even only a single image as supervision \cite{cmrKanazawa18,liu2019soft,ucmrGoel20,wu2020unsupervised,li2020self,ye2021shelf}. More classically, approaches using Multi-View Stereo (MVS) reconstruct 3D objects by combining multiple views \cite{bleyer2011patchmatch,de1999poxels,broadhurst2001probabilistic,galliani2016gipuma,schonberger2016pixelwise,seitz2006comparison,seitz1999photorealistic}.
\textbf{Occlusions and Shadows:}
A major challenge towards 3D reconstruction is the existence of occlusions. From a single image, we do not have access to the full 3D structure of what we are viewing, as some parts are covered up either by other objects, or even by the object we are trying to reconstruct itself (self-occlusion). For example, when we view a chair from behind, the back of the chair may occlude the sides from our viewing angle, making it unclear whether the chair has arms or not.
Shadows present a naturally-occurring visual signal that can help to clarify this uncertainty. By observing the shadows cast by what we cannot see, we gain insight into the 3D structure of the unseen portion. Previous work has considered the use of shadows towards elucidating structure in a classic vision context. \cite{waltz_understanding_1975} first applied shadows to determine shapes in 3D line drawings. This was extended by \cite{shafer_using_1983}, who determined surface orientations for polyhedra and curved surfaces with shadow geometry. Shadows can also be used more actively to recover 3D shape. \cite{bouguet1993shadows} shows how shadows can help infer the geometry of a shaded region. They cast shadows on an object sitting on a plane, by moving a straight-line occlusion (a stick) around in front of a light source, and propose an efficient method to extract the underlying 3D object shape from the moving contours of the shadow. \cite{savarese_3d_2007} also propose \textit{shadow carving}, a way of using multiple images from the same viewpoint but with different lighting conditions to discover object concavities. They prove that shadows can provide information that guarantees conservative and reasonable shape estimates under some assumptions. Meanwhile, \cite{troccoli_shadow_2004} use shadows as cues to determine parameters for refining 3D textures. Recent work has leveraged deep learning tools to enable detection of shadows from realistic images \cite{wang_instance_2020}, making it possible to extend the use of shadows to realistic settings. Thus far, shadows have not seen much application in determining structure using the tools afforded to vision by the latest deep learning techniques.
\textbf{Generation Under Constraints:}
Generation under constraints appears throughout the literature in many forms. It falls under the general framework of analysis by synthesis \cite{krull2015learning,yuille2006vision}. Tasks such as super-resolution, image denoising, and image inpainting, begin with an incomplete image and ask for possible reconstructions of the complete image \cite{ongie_deep_2020}. In other words, the goal is to generate realistic images that satisfy the constraint imposed by the given information. Typical approaches consider this as conditional generation, where a function (usually, a neural network) is learned to map from corrupted inputs to the desired outputs \cite{dong_image_2015,kim_accurate_2016,ongie_deep_2020}. More recently, \cite{menon2020pulse} propose using search rather than regression to address these types of tasks in the context of the super-resolution problem. They use pretrained generative models as a map of the natural image manifold, using gradient-based search to find points in the latent space that map to high-resolution images that downscale to the appropriate low-resolution. They avoid generating unrealistic outputs by constraining the search in the latent space to a constant deviation from a spherical Gaussian mean. While posing the inverse problem as search makes inference more costly, it has benefits in realism and the ability to generate multiple solutions. It further eschews the need to learn the generative process from scratch by leveraging the knowledge captured by the unconditional generator. Recent work by~\cite{sadekar2022shadow} uses differentiable rendering to deform an icosphere to generate targeted shadow art sculptures, with interesting results. Unlike their work, ours focuses on generating a set of \textit{plausible} objects which could explain a given shadow in a real scene. Our more general approach also handles the common scenario in which the light source generating the shadow is unknown.
\section{Method}
We represent the observation of the shadow as a binary image ${\bf s} \in \mathbb{R}^{W \times H}$. Our goal is to estimate a set of possible 3D shapes, their poses, and corresponding light sources that are consistent with the shadow ${\bf s}$. We approach this problem by defining a generative model for objects and their shadows. We will use this forward model to find the best 3D shape that could have produced the shadow.
\subsection{Explaining Shadows with Generative Models}
Let $\Omega = G(\bm{z})$ be a generative model for 3D objects, where $\Omega$ parameterizes a 3D volume and $\bm{z} \sim \mathcal{N}(0, I)$ is a latent vector with an isotropic prior. When the volume blocks light, it will create a shadow. We write the location of the illumination source as ${\bf c} \in \mathbb{R}^3$ in world coordinates, which radiates light outwards in all directions. The camera will observe the shadow $\hat{{\bf s}} = \pi({\bf c}, \Omega)$, where $\pi$ is a rendering of the shadow cast by the volume $\Omega$ onto the ground plane.
To reconstruct the 3D objects from their shadow, we formulate the problem as finding a latent vector $\bm{z}$, object pose $\phi$, and light source location ${\bf c}$ such that the predicted shadow $\hat{{\bf s}}$ is consistent with the observed shadow ${\bf s}$. We perform inference by solving the optimization problem:
\begin{equation}
\min_{\bm{z}, {\bf c}, \phi} \; \mathcal{L}\left(s, \pi({\bf c}, \Omega) \right) \quad \textrm{where} \quad \Omega = \mathcal{T}_\phi\left(G(\bm{z})\right)
\label{eqn:inference}
\end{equation}
The loss function $\mathcal{L}$ compares the candidate shadow $\hat{{\bf s}} = \pi({\bf c}, \Omega)$ and the observed shadow ${\bf s}$, and since silhouettes are binary images, we use a binary cross-entropy loss. We model the object pose with an SE(3) transformation $\mathcal{T}$ parameterized by quaternions $\phi$. In other words, we want to find a latent vector that corresponds to an appropriate 3D model of the object that, in the appropriate pose, casts a shadow matching the observed shadow.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/method.pdf}
\vspace{-2em}
\caption{\textbf{Overview of our method.} Given an observation of a shadow ${\bf s}$, we optimize for an explanation jointly over the location of the light ${\bf c}$, the pose of the object $\mathcal{T}_\phi$, and the latent vector of the object 3D shape $\bm{z}$. Since every step is differentiable, we are able to solve this optimization problem with gradient descent. By starting the optimization algorithm with different initializations, we are able to recover multiple possible explanations $\Omega$ for the shadow.\vspace{-1em}}
\label{fig:method}
\end{figure}
Figure \ref{fig:method} illustrates an overview of this setup.
The solution $\bm{z}^*$ of the optimization problem will correspond to a volume that is consistent with the observed shadow. We can obtain the resulting shape through $\Omega^* = \mathcal{T}_{\phi^*}\left(G(\bm{z}^*)\right)$.
By solving Equation \ref{eqn:inference} multiple times with different initializations, we obtain a set of solutions $\{ \bm{z}^ *\}$ yielding multiple possible 3D reconstructions.
\subsection{$G(\bm{z})$: Generative Models of Objects}
To make the reconstructions realistic, we need to incorporate priors about the geometry of objects typically observed in the visual world.
Rather than searching over the full space of volumes $\Omega$, our approach searches over the latent space $\bm{z}$ of a pretrained deep generative model $G(\bm{z})$. Generative models that are trained on large-scale 3D data are able to learn empirical priors about the structure of objects; for example, this can include priors about shape (e.g., automobiles usually have four wheels) and physical stability (e.g., object parts must be supported). By operating over the latent space $\bm{z}$, we can use our knowledge of the generative model's prior to constrain our solutions to 3D objects that match the generative model's output distribution.
Our approach is compatible with many choices of 3D representation. In this implementation, we choose to model our 3D volumes with an occupancy field \cite{mescheder2019occupancy}. An occupancy network $y = f_\Omega({\bf x})$ is defined as a neural network that estimates the probability $y \in [0,1]$ that the world coordinate ${\bf x} \in \mathbb{R}^3$ contains mass. The generative model $G(\bm{z})$ is trained to produce the parameters $\Omega$ of the occupancy network.
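Schematically, such a latent-conditioned occupancy function can be sketched as a small MLP. The toy below uses random weights and hypothetical layer sizes (only the 128-dimensional latent matches the text); it illustrates the mapping $({\bf x}, \bm{z}) \mapsto y \in (0,1)$, not the trained network of \cite{mescheder2019occupancy}:

```python
import numpy as np

rng = np.random.default_rng(0)

def occupancy(x, z, params):
    """Toy occupancy network: maps (3D point, latent z) -> probability in (0, 1).
    A sketch with random weights; the real model's weights come from training."""
    h = np.concatenate([x, z])
    for W, b in params[:-1]:
        h = np.maximum(W @ h + b, 0.0)       # ReLU hidden layers
    W, b = params[-1]
    logit = (W @ h + b).item()
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid -> occupancy probability

dims = [3 + 128, 64, 64, 1]                  # input: 3D coords + 128-dim latent
params = [(rng.normal(scale=0.1, size=(dout, din)), np.zeros(dout))
          for din, dout in zip(dims[:-1], dims[1:])]

z = rng.normal(size=128)                     # latent ~ N(0, I)
p = occupancy(np.array([0.1, -0.2, 0.3]), z, params)
print(p)   # thresholding at 0.5 gives the decision boundary, i.e. the surface
```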
\subsection{$\pi$: Differentiable Rendering of Shadows}
\begin{SCfigure}[1][t]
\centering
\includegraphics[width=0.5\linewidth]{figures/shadowprocess.pdf}
\caption{\textbf{Differentiable rendering of shadows.} A point ${\bf p}$ on the ground plane will be a shadow if the corresponding ray from the light source intersects with the volume $\Omega$. We calculate whether ${\bf p}$ is a shadow by finding the intersecting ray ${\bf r}_\theta$, and max pooling $f_\Omega$ along the ray.}
\label{fig:imageformation}
\end{SCfigure}
To optimize Equation \ref{eqn:inference} with gradient descent, we need to calculate gradients of the shadow rendering $\pi$ and its projection to the camera. This operation can be made differentiable by max-pooling the value of the occupancy network along a light ray originating at the light source. Although integrating occupancy along the ray may be more physically correct to deal with partially transmissive media as in NeRF \cite{mildenhall2020nerf}, since we are primarily concerned with solid, opaque objects and binary shadow masks, we find max-pooling to be a useful simplifying approximation.
Let ${\bf r}_\theta \in \mathbb{R}^3$ be a unit vector at an angle $\theta$, and let ${\bf n} \in \mathbb{R}^3$ be a vector normal to the ground plane. We need to calculate whether the ray from the light source ${\bf c}$ along the direction of ${\bf r}_\theta$ will intersect with the ground plane ${\bf n}$, or whether it will be blocked by the object $\Omega$.
The shadow will be an image $\pi({\bf c},\Omega)$ formed on the ground plane, and the intensity on the plane at position ${\bf p}$ is given by:
\begin{align}
\pi({\bf c},\Omega)[{\bf p}] = \max_{d \in \mathbb{R}} \; f_\Omega({\bf c} + d {\bf r}_\theta) \quad \textrm{s.t.} \quad {\bf p} = {\bf c} - \frac{{\bf c}^T{\bf n}}{{\bf r}_\theta^T {\bf n}} {\bf r}_\theta
\end{align}
where we use the notation $\pi({\bf c},\Omega)[{\bf p}]$ to index into $\pi({\bf c},\Omega)$ at coordinate ${\bf p}$. The right-hand constraint between ${\bf p}$ and ${\bf r}_\theta$ is obtained by calculating the intersection of the light ray with the ground plane.
For the light ray ${\bf r}_\theta$ landing at ${\bf p}$, the result of $\pi$ is the maximum occupancy value
$f_\Omega$ along that ray.
Since $\pi({\bf c},\Omega)$ is an image of the shadow on a plane, it is straightforward to use a homography to transform $\pi({\bf c},\Omega)$ into the perspective image $\hat{{\bf s}}$ captured by the camera view.
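A minimal version of this renderer can be sketched with a hard spherical occupancy standing in for $f_\Omega$ (the sphere, light position, and sampling grid are all illustrative assumptions of the sketch; the actual method uses a soft occupancy network so that gradients flow through the max-pooling):

```python
import numpy as np

# Toy shadow renderer: light at c above the ground plane z = 0 (normal n = e_z);
# the occluder is a sphere standing in for the occupancy network f_Omega.
# For each ground point p we march along the ray from c to p and max-pool occupancy.
c = np.array([0.0, 0.0, 3.0])                     # light source position
center, radius = np.array([0.0, 0.0, 1.0]), 0.5   # illustrative occluder

def f_occ(x):
    # hard occupancy for the sketch; a sigmoid of (radius - distance)
    # would keep the renderer differentiable
    return (np.linalg.norm(x - center, axis=-1) < radius).astype(float)

xs = np.linspace(-2, 2, 41)
shadow = np.zeros((41, 41))
for i, px in enumerate(xs):
    for j, py in enumerate(xs):
        p = np.array([px, py, 0.0])               # ground-plane point
        d = np.linspace(0.0, 1.0, 128)[:, None]   # 128 samples along the ray
        pts = c + d * (p - c)                     # from light source to ground
        shadow[i, j] = f_occ(pts).max()           # max-pool occupancy along ray

print("shadow under sphere:", shadow[20, 20], " far corner:", shadow[0, 0])
```

The center of the grid lies directly below the sphere and is shadowed, while rays to the far corners miss the occluder.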
\subsection{Optimization}
Given a shadow ${\bf s}$, we optimize $\bm{z}$, ${\bf c}$, and $\phi$ in Equation \ref{eqn:inference} with gradient descent while holding the generative model $G(\bm{z})$ fixed. We randomly initialize $\bm{z}$ by sampling from a multivariate normal distribution, and we randomly sample both a light source location ${\bf c}$ and an initial pose $\phi$. We then calculate gradients using back-propagation to minimize the loss between the predicted shadow $\hat{{\bf s}}$ and the observed shadow ${\bf s}$.
During optimization, we need to enforce that $\bm{z}$ resembles a sample from a Gaussian distribution. If this is not satisfied, the inputs to the generative model will no longer match the inputs it has seen during training. This could result in undefined behavior and will not make use of what the generator has learned. We follow the technique from \cite{menon2020pulse}, which made the observation that the density of a high-dimensional Gaussian distribution will condense around the surface of a hyper-sphere (the `Gaussian bubble' effect). By enforcing a hard constraint that $\bm{z}$ should be near the hyper-sphere, we can guarantee the optimization will find a solution that is consistent with the generative model prior.
The objective in Equation \ref{eqn:inference} is non-convex, and there are many local solutions for which gradient descent can become stuck. Motivated by \cite{sgld}, we found that adding linearly decaying Gaussian noise helped the optimization find better solutions. Algorithm \ref{alg:method} summarizes the full procedure.
\begin{algorithm}[t]
\caption{Inference by Inverting the Generative Model}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Shadow image ${\bf s}$, step size $\eta$, number of iterations $K$, and generator $G$.
\STATE {\bfseries Output:} Parameters of a 3D volume $\Omega$
\STATE{\bfseries Inference: }
\STATE{Randomly initialize $\bm{z} \sim \mathcal{N}(0, I)$}
\FOR{$k=1,...,K$}
\STATE{$J(\bm{z}, {\bf c}, \phi) = \mathcal{L}\left({\bf s}, \pi({\bf c}, \mathcal{T}_\phi(G(\bm{z}))) \right)$ }
\STATE{$\bm{z} \leftarrow \bm{z} - \eta \cdot (\nabla_{\bm{z}} J(\bm{z}, {\bf c}, \phi) + \mathcal{N}(0, \sigma I))$ where $\sigma = \frac{K-1-k}{K}$}
\STATE{$\bm{z} \leftarrow \bm{z} / ||\bm{z}||_2$}
\STATE{${\bf c} \leftarrow {\bf c} - \eta \nabla_{{\bf c}} J(\bm{z}, {\bf c}, \phi)$}
\STATE{$\phi \leftarrow \phi - \eta \nabla_{\phi} J(\bm{z}, {\bf c}, \phi)$}
\ENDFOR
\STATE{Return parameters of 3D volume $\Omega = \mathcal{T}_\phi(G(\bm{z}))$}
\end{algorithmic}
\label{alg:method}
\end{algorithm}
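The update schedule in the algorithm above can be sketched in a few lines of numpy. This is a toy illustration only: a quadratic loss toward a fixed target latent stands in for the full generator-transform-renderer pipeline, and the dimensionality, step size, and seed are arbitrary choices, not values from the paper.

```python
import numpy as np

# Toy sketch of the inference loop: spherical gradient descent on z
# with linearly decaying Gaussian noise (sigma = (K-1-k)/K), followed
# by projection back onto the unit sphere after every step.
rng = np.random.default_rng(1)
d, K, eta = 16, 300, 0.1
z_star = rng.standard_normal(d)
z_star /= np.linalg.norm(z_star)       # target latent on the unit sphere

z = rng.standard_normal(d)
z /= np.linalg.norm(z)
for k in range(K):
    grad = z - z_star                  # d/dz of 0.5 * ||z - z*||^2
    sigma = (K - 1 - k) / K            # noise decays linearly to zero
    z = z - eta * (grad + sigma * rng.standard_normal(d))
    z = z / np.linalg.norm(z)          # project back onto the sphere

print(np.linalg.norm(z - z_star))      # small: converged near the target
```

The annealed noise helps the search escape poor basins early on, while the final noise-free steps settle onto a nearby minimum.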
\subsection{Implementation Details}
We will release all code, models, and data. To create $G(\bm{z})$, we use the unconditional 3D generative model from \cite{mescheder2019occupancy}, which is trained to produce an occupancy network with a $128$-dimensional latent vector. The generative model is trained separately on four categories of the ShapeNet dataset, as in~\cite{mescheder2019occupancy}. When the location of the illumination source is unknown, we sample a 3-dimensional coordinate ${\bf c}$ from the surface of the northern hemisphere above the ground plane with a fixed radius of 3. When the SE(3) transformation for the object pose is unknown, we sample a 4-dimensional quaternion $\phi$ to parameterize the rotation matrix. Non-zero rotations for ``pitch'' and ``roll'' are physically implausible given a level ground plane and the assumption of an upright object, so we constrain them to be zero during optimization. To optimize the full model, we use spherical gradient descent to optimize $\bm{z}$ (and optionally ${\bf c}$ and $\phi$) for up to 300 steps. We use a step size of 1.0 for known light and pose experiments and 0.01 for unknown light and pose experiments.
To accomplish the differentiable shadow rendering $\pi$, we evenly sample 128 points along each light ray emitted from the illumination source, then evaluate them for occupancy. In the case of occlusion from other objects as well as self-occlusion, we calculate the segmentation mask of all objects in the scene, and disable gradients coming from light rays intersecting with these masks.
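The ray-sampling scheme just described can be sketched as follows. For clarity this sketch uses a hard (non-differentiable) maximum over an analytic sphere occupancy; the paper's renderer instead queries the generator's occupancy network with a differentiable formulation. The light position, scene geometry, and two test rays are illustrative choices.

```python
import numpy as np

# Sketch of the shadow renderer: sample points along each light ray and
# take the maximum occupancy as the "blocked" value of that ray. An
# analytic sphere stands in for the occupancy network; 128 samples per
# ray matches the count stated in the paper.
def occupancy(points, center=np.array([0.0, 0.0, 0.5]), radius=0.4):
    return (np.linalg.norm(points - center, axis=-1) < radius).astype(float)

def render_shadow(light, ground_points, n_samples=128):
    out = np.empty(len(ground_points))
    for i, g in enumerate(ground_points):
        t = np.linspace(0.0, 1.0, n_samples)[:, None]
        pts = light + t * (g - light)      # samples from light to ground
        out[i] = occupancy(pts).max()      # any hit -> ray is blocked
    return out

light = np.array([0.0, 0.0, 3.0])
ground = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 0.0]])
shadow = render_shadow(light, ground)
print(shadow)   # [1. 0.]: the first ray is blocked, the second is not
```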
\section{Experimental Results}
The goal of our experiments is to analyze how well our method can estimate 3D shapes that are consistent with an observed shadow. We first introduce a new 3D shadow dataset, then we perform two different quantitative experiments to evaluate the 3D reconstruction performance of our model. We further provide several visualizations and qualitative analysis of our method.
\subsection{Kagemusha: A Dataset of Shadows}
Kagemusha\footnote{Named after Akira Kurosawa's movie, which translates to ``shadow warrior''.} is a dataset of 3D objects and their shadows. The dataset contains four common objects of the ShapeNet dataset \cite{chang2015shapenet}. For each 3D object, we sample a random point light source location from the northern hemisphere with a radius of 3 and a camera location sampled from the northern hemisphere with a radius of 2, to create a scene with shadow. We then compute the segmentation mask and shadow mask of the object. The dataset uses the same train/validation/test split as the original ShapeNet dataset \cite{chang2015shapenet}.
\subsection{Common Experimental Setup}
\textbf{Evaluation Metric:}
We use volumetric IoU to evaluate the accuracy of 3D reconstruction. Volumetric IoU is calculated by dividing the intersection of the two volumes by their union. We uniformly sample 100k points in the bounding volume. We then calculate the occupancy agreement of the points between the candidate 3D volume and the original 3D volume.
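The Monte Carlo IoU computation described above can be sketched directly; the sphere occupancy functions below are stand-ins for the predicted and ground-truth volumes, and the $[-1,1]^3$ bounding volume is an assumption for the illustration.

```python
import numpy as np

# Volumetric IoU by uniform sampling: draw points in a bounding volume,
# evaluate both occupancy fields, and divide intersection by union.
def iou(occ_a, occ_b, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n, 3))    # bounding volume [-1,1]^3
    a, b = occ_a(pts), occ_b(pts)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / max(union, 1)

sphere = lambda c, r: (lambda p: np.linalg.norm(p - c, axis=-1) < r)
same = iou(sphere(np.zeros(3), 0.5), sphere(np.zeros(3), 0.5))
shifted = iou(sphere(np.zeros(3), 0.5), sphere(np.array([0.3, 0.0, 0.0]), 0.5))
print(same)      # 1.0 for identical volumes
print(shifted)   # strictly between 0 and 1 for partial overlap
```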
\begin{table}[t]
\centering
\begin{tabular}{p{3.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm}}
\toprule
Method & Car & Chair & Plane & Sofa & All \\
\midrule
Random & .329 & .203 & .211 & .209 & .238 \\
Nearest Neighbor & .414 & .299 & .349 & .352 & .322 \\
Regression & .611 & .274 & .410 & .524 & .467 \\
Latent Search (Ours) & \textbf{.706} & \textbf{.371} & \textbf{.537} & \textbf{.598} & \textbf{.553} \\
\midrule
\midrule
Im2Mesh (full image) & .737 & .501 & .571 & .680 & .622 \\
\bottomrule
\end{tabular}
\
\caption{Results for 3D reconstruction from the shadows assuming the object pose and the light source position are both known. We report volumetric IoU, and higher is better. The Im2Mesh result shows the performance at 3D reconstruction when the entire image is observable, not just the shadows.}
\label{tab:known}%
\end{table}
\textbf{Baselines:}
To validate our method quantitatively, we selected several baselines for comparison. Since we are analyzing how well generative models can explain shadows in images, we compare against the following approaches.
\textit{(i) Regression:}
An alternative approach to the same problem is to train a regression model to map images of shadows ${\bf s}$ to 3D volumes $\Omega$. We modified the occupancy network from \cite{mescheder2019occupancy} to perform this task, which is a widely adopted and highly competitive model for single-view 3D reconstruction. During training and inference, we replace the input RGB image with a shadow image. We also loaded the occupancy decoder pre-trained on ShapeNet\cite{chang2015shapenet} and supervised further training with 3D ground truth. \textit{(ii) Nearest Neighbor:} We experiment with a nearest neighbor approach. For each shadow image ${\bf s}$ in the test set, we search in the training set for the object whose shadow ${\bf s}'$ minimizes $||{\bf s} - {\bf s}'||_2$ and use the corresponding 3D object as prediction.
\textit{(iii) Random:} We compute chance by selecting a random 3D object from the training set.
\textit{(iv) Full Image:} For analysis purposes, we also compare against an off-the-shelf 3D reconstruction method~\cite{mescheder2019occupancy} that is able to see the entire image (not just the shadows). Since this approach has more information than our method, we do not expect to outperform it. However, this comparison allows us to quantify the amount of information lost when we only operate with shadows.
\begin{figure}[p]
\centering
\includegraphics[width=\linewidth]{figures/occlusion.pdf}
\caption{\textbf{3D Reasoning Under Occlusion.} We show several examples of 3D object reconstruction under occlusion. The \textbf{1st} column shows the original scenes including both objects. Shadow masks are shown in the \textbf{2nd} column. The \textbf{3rd} and \textbf{4th} columns show our reconstruction as seen from another camera view. Note that the red chair in the front is not reconstructed by our model.}
\label{fig:occlusion}
\end{figure}
\subsection{Reconstruction with Known Light and Object Pose}
We first evaluate our method on the task of 3D reconstruction when the light position and pose are known. For each scene, we randomize the location of the light source, and put the objects in their canonical pose. Since the problem is under-constrained, there is not a single unique answer. We consequently run each method eight times to generate diverse predictions, and calculate the average volumetric IoU using the best reconstruction for each example.
Table \ref{tab:known} compares the performance of our approach versus baselines on this task. Our approach is able to significantly outperform the baselines on this task (by nearly 9 points), showing that it can effectively find 3D object shapes that are consistent with the shadows. Since our approach integrates empirical priors from generative models with the geometry of camera and shadows, it is able to better generalize to the testing set. The regression baseline, for example, does not benefit from these inductive biases, and instead must learn them from data, which our results show is difficult.
When the full image is available, Table \ref{tab:known} shows that established 3D reconstruction methods are able to perform better, which is expected because more information is available. However, when there is an occlusion, the full image will not be available, and we instead must rely on shadows to reconstruct objects.
Figure \ref{fig:occlusion} shows qualitative examples where we were able to reconstruct objects that are occluded by other objects. Although there is no appearance information, these results show that shadows allow our model to ``see through'' occlusions in many cases. The examples show that the method is able to reconstruct objects faithfully with diverse shapes and across different categories. We include more examples from all categories in the supplementary materials.
\subsection{Reconstruction with Unknown Light and Object Pose}
Since our approach is generative and not discriminative, a key advantage is the flexibility to adapt to different constraints and assumptions. In this experiment, we relax our previous assumption that the light source location and the object pose are both known. We evaluate our approach at reconstruction where all three variables (latent vector $\hat{\bm{z}}$, light source location $\hat{{\bf c}}$, and object pose parameters $\phi$) must be jointly optimized by gradient descent to minimize the shadow reconstruction loss.
\begin{table}[h]
\centering
\begin{tabular}{p{3.5cm} p{1cm} p{1cm} p{1cm} p{1cm} p{1cm}}
\toprule
Method & Car & Chair & Plane & Sofa & All \\
\midrule
Random & .283 & .175 & .177 & .161 & .199 \\
Nearest Neighbor & .346 & \textbf{.233} & .241 & .233 & .264 \\
Regression & .559 & .116 & .218 & .317 & .303 \\
Latent Search (Ours) & \textbf{.618} & .187 & \textbf{.343} & \textbf{.413} & \textbf{.390} \\
\bottomrule
\end{tabular}
\\
\caption{Results for 3D reconstruction from the shadows assuming the object pose and the light source position are \textbf{both unknown}. We report volumetric IoU, and higher is better.\vspace{-1em}}
\label{tab:unknown}%
\end{table}
Table \ref{tab:unknown} shows the performance of our model at reconstructing objects with an unknown illumination position and pose. In this under-constrained setting, our approach is able to significantly outperform the baseline methods by as much as 29\%. In this setting, the most difficult object to reconstruct is a chair, which often has thin structures in the shadow.
Discriminative regression models are limited to producing reconstructions that are consistent with their training conditions, which is a principal restriction of prior methods. As we relax the number of known variables, the size of the input space significantly increases, which requires the regression baseline to increasingly generalize outside of the training set. Table \ref{tab:unknown} shows that regression is only marginally better than a nearest neighbor search on average. However, since our approach is generative, and integrates inductive biases about scene illumination, it is able to better generalize to more unconstrained settings.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/soba.pdf}
\caption{
Qualitative results of 3D reconstructions in natural images. We first automatically segment shadow masks with~\cite{wang_instance_2020}. We then run our algorithm.
}
\label{fig:soba}
\end{figure}
\subsubsection{Natural Image}
We applied our method to the real-world dataset in~\cite{wang_instance_2020}, and automatically obtained shadow segmentations with the detector proposed by the same work. Fig.~\ref{fig:soba} shows our 3D reconstructions from just the estimated shadows. Our method remains robust to both real-world images and slightly inaccurate shadow masks.
These results also show our method estimates reasonable reconstructions when the ground-truth camera pose or light source location are unknown. Our method also returns reasonable-looking models even if the floor is not flat (e.g.\ car on sand).
\subsection{Diversity of Reconstructions}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/diversity.pdf}
\caption{\textbf{Diversity of Reconstructions.} Given one shadow (\textbf{left}), our method is able to estimate multiple possible reconstructions (\textbf{middle}) that are consistent with the shadow. We show four samples from the model (columns), each under two different camera views (rows). The \textbf{right} side shows the original object.}
\label{fig:diversity}
\end{figure}
\begin{SCfigure}[2][h]
\centering
\includegraphics[width=0.47\linewidth]{figures/random_initialization_carl.pdf}
\caption{\textbf{Performance with Diverse Samples.} We show that our approach is able to make diverse 3D reconstructions from shadows. We plot best volumetric IoU versus the number of random samples from our method. The upward trends indicate the diversity of the prediction results from our method.}
\label{fig:random}
\end{SCfigure}
By modeling the generative process of shadows, our approach is able to find multiple possible 3D shapes to explain the observed shadow. When we sample different latent vectors as initialization, coupled with stochasticity from Gaussian noise in gradient descent, our method can generate a diverse set of solutions to minimize the shadow reconstruction loss.
Estimation of multiple possible scenes is an important advantage of our approach when compared with a regression model. There are many correct solutions to the 3D reconstruction task. When a regression model is trained to make a prediction for these tasks, the optimal solution is to predict the average of all the possible shapes in order to minimize the loss. In comparison, our approach does not regress to the mean under uncertainty.
Figure \ref{fig:diversity} shows how the generative model is able to produce multiple, diverse samples that are all consistent with the shadow. For example, when given a shadow of a car, the method is able to produce both trucks and sedans that might cast the same shadow. When given the shadow of a sofa, the latent search discovers both L-shaped sofas and rectangular sofas that are compatible with the shadow. Figure \ref{fig:random} quantitatively studies the diversity of samples from our method. As we perform latent search on the generative model with different random seeds, the likelihood of producing a correct prediction monotonically increases. This is valuable for deploying our approach in practice to resolve occlusions in settings such as robotics, where systems need to reason over all possible hypotheses for up-stream decision making.
\subsection{Analysis}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/optimization_process.pdf}
\vspace{-1em}
\caption{\textbf{Visualizing Optimization Iterations.} Visualizing the process of our model searching for 3D shapes that cast a shadow consistent with the input. The \textbf{1st row} shows the shadow used as a constraint for searching. \textbf{The middle} sequence of figure shows the process of searching in the latent space. The \textbf{last row} shows the original object as a reference, which is unseen by our model.\vspace{-1em}}
\label{fig:optimization}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=\linewidth]{figures/manipulation.pdf}
\vspace{-2em}
\caption{\textbf{Reconstructing Manipulated Shadows.} We manually modify a shadow mask and compare the 3D objects reconstructed from the original and modified shadows. View 1 is the same as the original shadow image. View 2 is a second view for visualizing more details.}
\label{fig:manipulation}
\end{figure}
\begin{SCfigure}[2][t!]
\centering
\includegraphics[width=0.55\linewidth]{figures/failures.pdf}
\caption{\textbf{Failures.} We visualize representative failures where the model produces incorrect shapes that still match the shadow. Our experiments suggest that results can be further improved with more priors, such as physical knowledge (\textbf{top}) and refined generative models (\textbf{bottom}).}
\label{fig:failures}
\end{SCfigure}
\textbf{Optimization Process:}
To gain intuition into how our model progresses in the latent space to reach the final shadow-consistent reconstruction, we visualize in Figure \ref{fig:optimization} the optimization process by extracting the meshes corresponding to several optimization iterations before convergence. Figure \ref{fig:optimization} shows a clear transition from the first mesh, generated from a randomly sampled latent vector, to the last mesh, which casts shadows that accurately match the input. The reconstructed meshes at the end also match the original objects.
\textbf{Reconstructions of Modified Shadows:}
We found that our approach is able to exploit subtle details in shadows in order to produce accurate reconstructions. To study this behavior, we manually made small modifications to some of the shadow images, and analyzed how the resulting reconstructions changed. Figure \ref{fig:manipulation} shows two examples. In the example on the left, we modified the shadow of a chair to add an arm rest in the shadow image. In the comparison between the original reconstruction and modified reconstruction, we can see an arm rest being added to the reconstructed chair. In the example on the right, we take a shadow image of a sedan and make the shadow corresponding to the rear trunk part higher. The reconstructed car from the modified image becomes an SUV to adapt to the modified shadow.
\textbf{Analysis of Failures:} We show a few representative examples of failures in Figure \ref{fig:failures}. Although these shapes match the provided shadow, they are incorrect because they either lack physical stability or produce objects that are unlikely to be found in the natural visual world. These failures suggest that further progress on this problem is possible by integrating more extensive priors about objects and physics.
\section{Conclusions}
This paper shows that generative models are a promising mechanism to explain shadows in images. Our experiments show that jointly searching the latent space of a generative model and parameters for the light position and object pose allows us to reconstruct 3D objects from just an observation of the shadow. We believe tightly integrating empirical priors about objects with models of image formation will be an excellent avenue for resolving occlusions in scenes.\\
\textbf{Acknowledgements:} This research is based on work supported by Toyota Research Institute, the NSF CAREER Award \#2046910, and the DARPA MCS program under Federal Agreement No. N660011924032. SM is supported by the NSF GRFP fellowship. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the sponsors.
\bibliographystyle{splncs04}
\section{Method}
We begin by modeling how shadows are formed in images. Based on this model, we then propose two different approaches to recover possible 3D shapes of objects from just their shadows.
\subsection{Image Formation of Shadows}
A point light source positioned in the world $l_0 \in \mathbb{R}^3$ will cast ray vectors $r_i \in R_\odot$ radially outward. When some of these light rays are blocked by an object, they will never reach the ground plane, which creates a shadow. To model the silhouette that is formed, we can denote the plane of the floor as the set of points that satisfy $(p - p_0) \cdot n = 0$, where $n \in \mathbb{R}^3$ is the normal vector and $p_0 \in \mathbb{R}^3$ is a point on the plane. The regions of the floor that are illuminated will be the intersection between the floor and the ray vectors:
\begin{equation}
p_i=l_0 + r_i d \quad \textrm{for} \quad d = \frac{(p_0 - l_0) \cdot n}{r_i \cdot n}
\end{equation}
Shadows on the floor are formed by points of intersection between the floor and the ray vectors that also intersect with the object.
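The ray-plane intersection above evaluates directly; the sketch below checks it on an illustrative configuration (the light height, ray direction, and floor plane are made-up values, not from the paper).

```python
import numpy as np

# Ray-plane intersection from the equation above: a ray from the light
# l0 with direction r hits the floor plane (point p0, normal n) at
# p = l0 + r*d, with d = ((p0 - l0) . n) / (r . n).
def intersect(l0, r, p0, n):
    d = np.dot(p0 - l0, n) / np.dot(r, n)
    return l0 + r * d

l0 = np.array([0.0, 0.0, 3.0])                    # light 3 units up
r = np.array([1.0, 0.0, -1.0])                    # a ray slanting down
p0, n = np.zeros(3), np.array([0.0, 0.0, 1.0])    # floor plane z = 0
hit = intersect(l0, r, p0, n)
print(hit)                                        # lands at (3, 0, 0)
```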
When a camera captures a picture of the shadow on the ground plane, the image will
be subject to perspective effects. In our model, we assume the geometric center of the object is located at the center of the world. Let $P = (P_x, P_y, P_z) \in \mathbb{R}^3$ be a point in the 3D world, which is projected to a pixel $p = (u, v, 1)$ in the reference of the camera with the following projections:
\begin{equation}
p = K T P, \hspace{0.1in} K = \begin{pmatrix}
f & 0 & c_u \\
0 & f & c_v \\
0 & 0 & 1
\end{pmatrix}, \hspace{0.1in}
T = \left(
\begin{array}{c|c}
R & t \\
\hline
0 & 1
\end{array} \right)
\end{equation}
where $K$ denotes the camera intrinsic matrix and $T$ denotes the camera extrinsic matrix. We also assume the image plane is located 1\,m away from the camera sensor.
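The projection $p = KTP$ above can be verified numerically in homogeneous coordinates. The intrinsic values and camera placement below are illustrative assumptions, not parameters used by the paper.

```python
import numpy as np

# Pinhole projection p = K T P in homogeneous coordinates: the world
# origin maps to the principal point (c_u, c_v) after the perspective
# divide, since the camera looks at it head-on.
f, c_u, c_v = 1.0, 0.5, 0.5
K = np.array([[f, 0, c_u],
              [0, f, c_v],
              [0, 0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])   # camera 2 units from origin
T = np.hstack([R, t[:, None]])                # 3x4 extrinsics [R | t]

P = np.array([0.0, 0.0, 0.0, 1.0])            # world origin, homogeneous
p = K @ T @ P
u, v = p[:2] / p[2]                           # perspective divide
print(u, v)                                   # -> (c_u, c_v) = (0.5, 0.5)
```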
Because our 3D object is placed on top of a floor, not all of the shadow can be captured by our camera. Let $R_\otimes$ be the rays passing through a pixel in the image plane and converging at the camera sensor. The part of the shadow that is captured in our image contains only the rays from the shadow on the ground that are not blocked by the object itself. Therefore, the resulting shadow image $S$ is partially occluded.
Our goal is to learn a model that, given only the partially self-occluded shadow, reconstructs the 3D shape of the object.
\begin{equation}
S \xrightarrow{} V
\end{equation}
To solve this extremely under-constrained problem, we propose two different approaches. The first is an end-to-end regression model. The second is a generative model that generates a set of shadow-consistent 3D shapes.
\subsection{Regression Model}
One way to solve this problem, like many other problems in computer vision, is to train an end-to-end regression model where the input to the model is a shadow image of an object, and the output of the model is a 3D shape of the object.
\subsubsection{Encoder-Decoder Architecture}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/regression.png}
\caption{Place holder for regression model architecture}
\label{fig:dataset}
\end{figure}
We propose an encoder-decoder style regression model $\phi$ to predict a 3D representation $\hat{V}=\phi(S)$ from a partially occluded shadow mask $S$. The encoder projects the input shadow mask image $S$ to a latent variable $\hat{z} = \phi_E(S)$. The decoder then generates a 3D volumetric representation from the latent variable, $\hat{V} = \phi_D(\hat{z})$. Since the camera and light locations are randomly sampled, the decoder learns to generate a 3D representation independent of the input view.
\subsubsection{Training}
In order to train the regression model, we calculate the reconstruction loss between the predicted and ground truth 3D model:
\begin{equation}
L_{recon} = M(V_{GT}, \hat{V})
\end{equation}
where $M$ is a reconstruction similarity metric dependent on the type of 3D representation being used.
\subsection{Shadow-Constrained Latent Space Exploration}
The problem of reconstructing a 3D shape from a partially occluded shadow mask is highly under-constrained: a single shadow can be the product of many different 3D objects. As a result, a deterministic regression model tends to predict the average of all the possibilities that minimizes the overall objective function. To generate a set of diverse 3D shapes that are consistent with the input shadow, we propose our second approach, Shadow-Constrained Latent Space Exploration.
Concretely, we propose to search in the latent space of a 3D generative model for a shape that produces the shadow most similar to the input. In other words, instead of using the shadow as input to produce a 3D shape, we use the shadow as a constraint to guide our latent variable search.
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/approach.png}
\caption{Method}
\label{fig:dataset}
\end{figure}
\subsubsection{Inverting the Generative Process}
We first randomly sample a latent variable $z \in Z$ from the latent space as an initialization. We then use a 3D generative model, termed $G$, regardless of its architecture and the type of 3D representation used. The generative model outputs a 3D shape given the input latent variable, $\hat{V} = G(z)$. With the 3D shape $\hat{V}$, we can render the shadow mask given the object pose and the location of the light source, $\hat{S} = \pi_c(P(\hat{V}))$, where $\pi_c$ denotes a shadow rendering function and $P$ denotes an $SE(3)$ transformation representing the object pose. Now that we have the rendered shadow mask, we can calculate the shadow reconstruction loss:
\begin{equation}
L_{S} = L(\pi_c(P(G(\bm{z}))), S)
\end{equation}
where $L(\cdot, \cdot)$ is a binary cross-entropy classification loss.
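The per-pixel binary cross-entropy used in $L_S$ can be sketched as below; the target and predicted masks are small illustrative arrays, not real shadow renders.

```python
import numpy as np

# Binary cross-entropy between a rendered shadow mask and the target
# mask. Predictions are clipped away from 0 and 1 for numerical safety.
def bce(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

target = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.9, 0.8, 0.1, 0.2])    # close to the target mask
bad = np.array([0.1, 0.2, 0.9, 0.8])     # mask flipped everywhere
print(bce(good, target) < bce(bad, target))   # True: better match, lower loss
```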
In order to backpropagate the gradient from $L_{S}$ to the latent variable $z$, we need to make the shadow rendering function $\pi_c$ differentiable. Inspired by \cite{niemeyer2020differentiable}, we designed a similar shadow rendering function based on differentiable ray marching from the light source location in the world towards the ground, where the shadow is located. We introduce the details of the differentiable shadow rendering function in the ###SPACE HOLDER###.
\subsubsection{Light Source Location and Object Pose Estimation}
Given the flexibility of our approach, we can further relax our assumption of a known light location and object pose. Similar to the latent variable $z$, we can sample a random light location $l_0 \in \mathbb{R}^3$ and an object pose matrix $M \in SE(3)$ parametrized by a translation vector $T \in \mathbb{R}^3$ and a quaternion $Q = (x, y, z, w)$ subject to the constraint $x^2 + y^2 + z^2 + w^2 = 1$. Since the whole process is differentiable, we allow the gradients to backpropagate from the shadow reconstruction loss to all three variables being searched.
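The unit-norm quaternion parameterization above converts to a rotation matrix with the standard formula, sketched below; this is a generic illustration of the parameterization, not code from the paper.

```python
import numpy as np

# Quaternion (x, y, z, w), normalized to satisfy x^2+y^2+z^2+w^2 = 1,
# converted to a 3x3 rotation matrix via the standard formula.
def quat_to_rot(q):
    x, y, z, w = q / np.linalg.norm(q)   # enforce the unit-norm constraint
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

print(quat_to_rot(np.array([0.0, 0.0, 0.0, 1.0])))   # identity rotation
R = quat_to_rot(np.array([0.0, 0.0, 1.0, 1.0]))      # 90 degrees about z
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))    # x-axis -> y-axis
```

Normalizing inside the conversion means gradient updates to the raw 4-vector never leave the valid rotation manifold.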
\subsubsection{Optimization}
We have now defined a shadow reconstruction loss and a differentiable generative process that produces a shadow image from a randomly sampled latent variable and, optionally, the light source location and object pose. Ideally, the latent space would contain a data manifold covering all physically plausible and natural-looking shapes of a given object class, and we would simply progress along this manifold to minimize our shadow reconstruction loss. However, such a well-defined data manifold does not exist in practice. Prior work \cite{menon2020pulse} encountered a similar problem in image super-resolution, another under-constrained task. To tackle this, \cite{menon2020pulse} observes that the density of a high-dimensional Gaussian distribution condenses around the surface of a hypersphere, from which generative models such as VAEs and GANs sample their latent variables. They therefore approximate the natural image manifold by constraining the latent search to the surface of the hypersphere. Following the same philosophy, we apply a similar constraint to our search in the latent space of a 3D generative model. We also show an ablation study of the effect of this constraint in the experiments section.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}\label{sec:introduction}
The scalar resonance discovered by the CMS and ATLAS Collaborations at the LHC~\cite{Higgs-Discovery_ATLAS,Higgs-Discovery_CMS,Higgs-Discovery_CMS_long} in 2012 has been found to have properties consistent with the predictions of the standard model (SM) for a Higgs boson with a mass of about $125\GeV$~\cite{Aad:2015zhl}.
In particular, its couplings to bosons (\ensuremath{g_{\PH\mathrm{VV}}}\xspace) and fermions ($y_\mathrm{f}$) corroborate an SM-like dependence on the respective masses.
Furthermore, data indicate that it has zero spin and positive parity~\cite{HIG-14-018}.
Recently, the associated production of top quark pairs with a Higgs boson (\ensuremath{\ttbar\PH}\xspace) and Higgs boson decays to pairs of bottom quarks have been observed~\cite{cms_tthobs,Aaboud:2018urx,HIG18016}, thereby directly probing the Yukawa interactions between the Higgs boson and top as well as bottom quarks for the first time.
In addition to measuring the absolute strengths of Higgs boson couplings, it is pertinent to assess the possible existence of relative phases among the couplings, as well as their general Lorentz structure.
Hence a broad sweep of Higgs boson production mechanisms and decay modes must be considered to reveal any potential deviations from the SM expectations.
The production rate of \ensuremath{\ttbar\PH}\xspace\ is sensitive only to the magnitude of the top quark-Higgs boson Yukawa coupling \ensuremath{y_\cPqt}\xspace\ and has no sensitivity to its sign.
Measurements of processes such as Higgs boson decays to photon pairs~\cite{Biswas:2012bd} or the associated production of \PZ\ and Higgs bosons via gluon-gluon fusion~\cite{Hespel:2015zea} on the other hand do have sensitivity to the sign, because of indirect effects in loop interactions.
Those measurements currently disfavor a negative value of the coupling~\cite{Khachatryan:2014jba,Khachatryan:2016vau}, but rely on the assumption that only SM particles contribute to the loops~\cite{Ellis}.
In contrast, the production of Higgs bosons in association with single top quarks in proton-proton (\Pp\Pp) collisions proceeds via two categories of Feynman diagrams~\cite{Maltoni:2001hu,ennio,Agrawal:2012ga,Demartin:2015uha}, where the Higgs boson couples either to the top quark or the \PW\ boson.
The two leading order (LO) diagrams for the $t$ channel production process (\ensuremath{\cPqt\PH\Pq}\xspace) are shown in Fig.~\ref{fig:thqdiagrams}, together with one of the five LO diagrams for the \cPqt\PW\ process (\ensuremath{\cPqt\PH\PW}\xspace), for illustration.
Because of the interference of these diagrams, the production cross section is uniquely sensitive to the magnitude as well as the relative sign and phase of the couplings.
\begin{figure*}[!htb]
\centering
\includegraphics[width=0.30\textwidth]{Figure_001-a.pdf} \hspace{\fill}
\includegraphics[width=0.30\textwidth]{Figure_001-b.pdf} \hspace{\fill}
\includegraphics[width=0.30\textwidth]{Figure_001-c.pdf}
\caption{Leading order Feynman diagrams for the associated production of a single top quark and a Higgs boson in the $t$ channel, where the Higgs boson couples either to the top quark (left) or the \PW\ boson (center), and one representative diagram of \ensuremath{\cPqt\PH\PW}\xspace\ production, where the Higgs boson couples to the top quark (right).}
\label{fig:thqdiagrams}
\end{figure*}
In the SM, the interference between these two diagrams is maximally destructive and leads to very small production cross sections of about $71$, $16$, and $2.9$\unit{fb} for the $t$ channel, \cPqt\PW\ process, and $s$ channel, respectively, at a center-of-mass energy $\sqrt{s} = 13\TeV$~\cite{deFlorian:2016spz,Demartin:2016axk}.
Hence measurements using the data collected at the LHC so far are not yet sensitive to the SM production.
However, in the presence of new physics, there may be relative opposite signs between the \cPqt-\PH\ and \PW-\PH couplings which lead to constructive interference and enhance the cross sections by an order of magnitude or more.
In such scenarios, realized, \eg, in some two-Higgs doublet models~\cite{Celis:2013rcs}, \ensuremath{\cPqt\PH\Pq}\xspace\ production would exceed that of \ensuremath{\ttbar\PH}\xspace\ production, making it accessible with current LHC data sets.
In this paper, the \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ processes are collectively referred to as \ensuremath{\cPqt\PH}\xspace\ production, while $s$ channel production is neglected.
The event topology of \ensuremath{\cPqt\PH\Pq}\xspace\ production is that of two heavy objects---the top quark, and the Higgs boson---in the central portion of the detector recoiling against one another, while a light-flavor quark and a soft \cPqb\ quark escape in the forward-backward regions of the detector.
Leptonic top quark decays produce high-momentum electrons and muons that can be used to trigger the detector readout.
Higgs boson decays to vector bosons or $\tau$ leptons ($\PH\to\PW\PW^*$, $\PZ\PZ^*$, or \ensuremath{\Pgt\Pgt}\xspace), which subsequently decay to light leptons ($\ell$ = $\Pepm$, $\Pgm^\pm$), lead to a multilepton final state with comparatively small background contributions from other processes.
Higgs boson decays to bottom quark-antiquark pairs ($\PH\to\bbbar$), on the other hand, provide a larger event rate albeit with challenging backgrounds from \ensuremath{\ttbar{+}\text{jets}}\xspace\ production.
In contrast, the rarer Higgs boson decays to two photons ($\PH\to\Pgg\Pgg$) result in easily triggered and relatively clean signals for both leptonic and fully hadronic top quark decays, with backgrounds mainly from other production modes of the Higgs boson.
\ensuremath{\cPqt\PH\PW}\xspace\ production involves three heavy objects and lacks forward activity; it therefore does not exhibit the defining features of \ensuremath{\cPqt\PH\Pq}\xspace\ events and instead closely resembles the \ensuremath{\ttbar\PH}\xspace\ topologies, with identical final states.
The CMS Collaboration has previously searched for anomalous \ensuremath{\cPqt\PH\Pq}\xspace\ production in \Pp\Pp\ collision data at $\sqrt{s} = 8\TeV$, assuming a negative sign of the top quark Yukawa coupling relative to its SM value, $\ensuremath{y_\cPqt}\xspace = -\ensuremath{y_\cPqt}\xspace^{\mathrm{SM}}$, using all the relevant Higgs boson decay modes, and set limits on the cross section of this process~\cite{Khachatryan:2015ota}.
This paper describes two new analyses targeting multilepton final states and single-lepton + \bbbar\ final states, using a data set of \Pp\Pp\ collisions at $\sqrt{s}=13\TeV$ corresponding to an integrated luminosity of 35.9\fbinv, collected in 2016.
Furthermore, a previous measurement of Higgs boson properties in the $\PH\to\Pgg\Pgg$ final state at 13\TeV~\cite{Sirunyan:2018ouh} has been reinterpreted in the context of \ensuremath{\cPqt\PH\Pq}\xspace\ signal production and the results are included in a combination with those from the other channels.
This paper is structured as follows: the experimental setup and data samples are described in Sections~\ref{sec:experiment} and \ref{sec:datasimulation} respectively; the two analysis channels and their event selection, background estimations, and signal extraction techniques are described in Sections~\ref{sec:multilepton} and \ref{sec:bbchannels}; the reinterpretation of the $\PH\to\Pgg\Pgg$ result is described in Section~\ref{sec:aa}; and the results and interpretation are given in Section~\ref{sec:results}. The paper is summarized in Section~\ref{sec:conclusion}.
\section{The CMS experiment}\label{sec:experiment}
The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T} along the beam direction.
Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter, and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections providing pseudorapidity coverage up to $\abs{\eta}<3.0$.
Forward calorimeters employing Cherenkov light detection extend the acceptance to $\abs{\eta}<5.0$.
Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid with a fiducial coverage of $\abs{\eta}<2.4$.
The silicon tracker system measures charged particles within the range $\abs{\eta}<2.5$.
The impact parameters in the transverse and longitudinal directions are measured with uncertainties of about 10 and 30\mum, respectively~\cite{Chatrchyan:2014fea}.
Tracks of isolated muons with transverse momentum $\pt\geq100\GeV$ and $\abs{\eta}<1.4$ are reconstructed with an efficiency close to 100\% and a relative \pt\ resolution of about 1.3 to 2\%, remaining better than 6\% at higher values of $\eta$.
For $\pt \leq 1 \TeV$ the resolution in the central region is better than 10\%.
A two-level trigger system is used to reduce the rate of recorded events to a level suitable for data acquisition and storage.
The first level of the CMS trigger system~\cite{Khachatryan:2016bia}, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events in a time interval of less than 4\mus.
The high-level trigger processor farm further decreases the event rate from around 100\unit{kHz} to about 1\unit{kHz}.
A more detailed description of the CMS detector, together with a definition of the coordinate system and the kinematic variables used in the analysis, can be found in Ref.~\cite{Chatrchyan:2008aa}.
A full event reconstruction is performed by the particle-flow (PF) algorithm using optimized and combined information from all the subdetectors~\cite{Sirunyan:2017ulk}.
The individual PF candidates reconstructed are muons, electrons, photons, and charged and neutral hadrons, which are then used to reconstruct higher-level objects such as jets, hadronic taus, and missing transverse momentum (\ptmiss).
Additional quality criteria are applied to the objects to improve the selection purity.
Collision vertices are reconstructed using a deterministic annealing algorithm~\cite{Chabanat:2005zz,Fruhwirth:2007hz}.
The reconstructed vertex position is required to be compatible with the location of the LHC beam in the $x$--$y$ plane.
The vertex with the largest value of summed physics-object $\pt^2$ is considered to be the primary \Pp\Pp\ interaction (PV).
Charged particles, which are subsequently reconstructed, are required to be compatible with originating from the selected PV.
The identification of muons is based on linking track segments reconstructed in the silicon tracker and in the muon system~\cite{Chatrchyan:2012xi}.
If a link can be established, the track parameters are recomputed using the combination of hits in the inner and outer detectors.
Quality requirements are applied on the multiplicity of hits in the track segments, on the number of matched track segments, and on the quality of the track fit~\cite{Chatrchyan:2012xi}.
Electrons are reconstructed using an algorithm that matches tracks found in the silicon tracker with energy deposits in the electromagnetic calorimeter while limiting deposits in the hadronic calorimeter~\cite{Khachatryan:2015hwa}.
A dedicated algorithm takes into account the emission of bremsstrahlung photons and determines the energy loss~\cite{Khachatryan:2015iwa}.
A multivariate analysis (MVA) approach based on boosted decision trees (BDT) is employed to distinguish real electrons from hadrons mimicking an electron signature.
Additional requirements are applied in order to remove electrons originating from photon conversions~\cite{Khachatryan:2015hwa}.
Both muons and electrons from signal events are expected to be isolated, while those from heavy-flavor decays are often situated near jets.
Lepton isolation is quantified using the scalar \pt\ sum over PF candidates reconstructed within a cone centered on the lepton direction and shrinking with increasing lepton \pt.
The effect of additional \Pp\Pp\ interactions in the same and nearby bunch crossings (pileup) on the lepton isolation is mitigated by considering only charged particles consistent with the PV in the sum, and by subtracting an estimate of the contribution from neutral pileup particles within the cone area.
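The pileup-mitigated isolation sum described above can be sketched as follows. This is an illustrative simplification, not CMS reconstruction code; the candidate lists, the relative normalization, and the $\Delta\beta$-style correction factor of 0.5 are assumptions for the example only.

```python
# Illustrative sketch of a pileup-corrected relative lepton isolation:
# only charged candidates compatible with the primary vertex enter the
# charged sum, and an estimate of the neutral pileup contribution
# (here a delta-beta-style 0.5 * charged-pileup sum, an assumption for
# this example) is subtracted from the neutral sum, clamped at zero.

def relative_isolation(lep_pt, charged_pv, charged_pu, neutral, delta_beta=0.5):
    """All list inputs are candidate pT values (GeV) inside the isolation cone.

    charged_pv: charged candidates compatible with the primary vertex
    charged_pu: charged candidates from pileup vertices (proxy for neutral PU)
    neutral:    neutral candidates (photons and neutral hadrons)
    """
    charged_sum = sum(charged_pv)
    neutral_sum = max(0.0, sum(neutral) - delta_beta * sum(charged_pu))
    return (charged_sum + neutral_sum) / lep_pt
```

Small values of this ratio indicate an isolated (prompt-like) lepton; the working points actually used in the analysis are \pt- and flavor-dependent.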
Jets are reconstructed from charged and neutral PF candidates using the anti-\kt\ algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma} with a distance parameter of $0.4$, and with the constraint that the charged particles are compatible with the selected PV.
Jets originating from the hadronization of \cPqb\ quarks are identified using the ``combined secondary vertex'' (CSVv2) algorithm~\cite{Sirunyan:2017ezt}, which exploits observables related to the long lifetime of \cPqb\ hadrons and to the higher particle multiplicity and mass of \cPqb\ jets compared to light-quark and gluon jets.
Two working points of the CSVv2 discriminant output are used: a ``medium'' one, with a tagging efficiency for real \cPqb\ jets of $69\%$ and a probability of wrongly tagging jets from light-flavor quarks and gluons of about $1\%$, and a ``loose'' one, with a tagging efficiency of $83\%$ and a mistag rate for light-flavor jets of $8\%$.
Finally, the missing transverse momentum is defined as the magnitude of the vectorial \pt\ sum of all PF candidates in the event.
\section{Data and simulation}\label{sec:datasimulation}
Collision events for this analysis are selected by the following high-level trigger algorithms.
Events in the multilepton channels must pass at least one of single-lepton, dilepton, or trilepton triggers with loose identification and isolation requirements and with a minimum \pt\ threshold based on the lepton multiplicity in the final state.
Events in the single lepton + \bbbar\ channels must pass the same single-lepton triggers, or a dilepton trigger for the control region described in Section~\ref{sec:bbchannels}.
The minimum $\pt$ threshold for single lepton triggers is 24\GeV for muons and 27\GeV for electrons.
For dilepton triggers, the $\pt$ thresholds on the leading and subleading leptons are 17\GeV and 8\GeV for muons, and 23\GeV and 12\GeV for electrons, respectively.
For the trilepton trigger, the third hardest lepton $\pt$ must be greater than 5\GeV for muons and 9\GeV for electrons.
The data are compared to signal and background estimations based on Monte Carlo (MC) simulated samples and techniques based on control samples in data.
All simulated samples include the response of the CMS detector based on the \GEANTfour~\cite{GEANT} toolkit and are generated with a Higgs boson mass of 125\GeV and a top quark mass of 172.5\GeV.
The event generator used for the \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ signal samples is \MGvATNLO\ (version 2.2.2)~\cite{amcatnlo} at LO precision~\cite{Alwall:2007fs} and using the \textsc{NNPDF3.0} set of parton distribution functions (PDF)~\cite{Ball:2014uwa} with the \textsc{PDF4LHC} prescription~\cite{Botje:2011sn,Alekhin:2011sk}.
The samples are normalized to next-to-leading order (NLO) SM cross sections at 13\TeV of 71.0 and 15.6\unit{fb} for \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace, respectively~\cite{deFlorian:2016spz,Demartin:2016axk}.
The Higgs boson production cross sections and branching fractions are expressed as functions of Higgs boson coupling modifiers in the kappa framework~\cite{Heinemeyer:2013tqa}, where the coupling modifiers $\kappa$ are defined as the ratio of the actual value of a given coupling to the one predicted by the SM.
Particularly relevant for the \ensuremath{\cPqt\PH}\xspace\ case are the top quark and vector boson coupling modifiers: $\ensuremath{\kappa_\cPqt}\xspace\equiv\ensuremath{y_\cPqt}\xspace/\ensuremath{y_\cPqt}\xspace^{\mathrm{SM}}$ and $\ensuremath{\kappa_\text{V}}\xspace\equiv\ensuremath{g_{\PH\mathrm{VV}}}\xspace/\ensuremath{g_{\PH\mathrm{VV}}}\xspace^{\mathrm{SM}}$, where V stands for either \PW\ or \PZ\ bosons.
The dependence of the \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ production cross sections on \ensuremath{\kappa_\cPqt}\xspace\ and \ensuremath{\kappa_\text{V}}\xspace\ are assumed to be as follows (calculated at NLO using \MGvATNLO~\cite{deFlorian:2016spz,Demartin:2015uha,Demartin:2016axk}):
\begin{align*}
\sigma_{\ensuremath{\cPqt\PH\Pq}\xspace} &= (2.63\,\ensuremath{\kappa_\cPqt}\xspace^2 + 3.58\,\ensuremath{\kappa_\text{V}}\xspace^2 - 5.21\,\ensuremath{\kappa_\cPqt}\xspace\ensuremath{\kappa_\text{V}}\xspace) \sigma_{\ensuremath{\cPqt\PH\Pq}\xspace}^{\mathrm{SM}}, \\
\sigma_{\ensuremath{\cPqt\PH\PW}\xspace} &= (2.91\,\ensuremath{\kappa_\cPqt}\xspace^2 + 2.31\,\ensuremath{\kappa_\text{V}}\xspace^2 - 4.22\,\ensuremath{\kappa_\cPqt}\xspace\ensuremath{\kappa_\text{V}}\xspace) \sigma_{\ensuremath{\cPqt\PH\PW}\xspace}^{\mathrm{SM}}.
\end{align*}
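Evaluated at the maximally enhanced point $\ensuremath{\kappa_\cPqt}\xspace=-1$, $\ensuremath{\kappa_\text{V}}\xspace=1$, the interference terms change sign and become constructive:
\begin{align*}
\sigma_{\ensuremath{\cPqt\PH\Pq}\xspace} &= (2.63 + 3.58 + 5.21)\,\sigma_{\ensuremath{\cPqt\PH\Pq}\xspace}^{\mathrm{SM}} \approx 11.4\,\sigma_{\ensuremath{\cPqt\PH\Pq}\xspace}^{\mathrm{SM}}, \\
\sigma_{\ensuremath{\cPqt\PH\PW}\xspace} &= (2.91 + 2.31 + 4.22)\,\sigma_{\ensuremath{\cPqt\PH\PW}\xspace}^{\mathrm{SM}} \approx 9.4\,\sigma_{\ensuremath{\cPqt\PH\PW}\xspace}^{\mathrm{SM}},
\end{align*}
\ie, an enhancement of roughly an order of magnitude with respect to the SM rates quoted above.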
Event weights are produced in the generation of both samples corresponding to $33$ values of \ensuremath{\kappa_\cPqt}\xspace\ between $-6.0$ and $+6.0$, and for $\ensuremath{\kappa_\text{V}}\xspace=1.0$.
The \ensuremath{\cPqt\PH\Pq}\xspace\ events are generated with the four-flavor scheme (4FS) while the \ensuremath{\cPqt\PH\PW}\xspace\ process uses the five-flavor scheme (5FS) to disentangle the LO interference with the \ensuremath{\ttbar\PH}\xspace\ process~\cite{Demartin:2016axk}.
The \MGvATNLO\ generator is also used for simulation of the \ensuremath{\ttbar\PH}\xspace\ process and the main backgrounds: associated production of \ttbar\ pairs with vector bosons, \ensuremath{\ttbar\PW}\xspace, \ensuremath{\ttbar\PZ}\xspace, at NLO~\cite{Frederix:2012ps}, and with additional jets or photons, \ensuremath{\ttbar{+}\text{jets}}\xspace, $\ttbar\gamma+\text{jets}$ at LO.
All the rates are normalized to next-to-next-to-leading-order cross sections.
In particular, the \ensuremath{\ttbar\PH}\xspace\ production cross section is taken as 0.507\unit{pb}~\cite{deFlorian:2016spz}.
A set of minor backgrounds are also simulated with \MGvATNLO\ at LO, or with other generators, such as NLO \POWHEG v2~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Re:2010bp,Alioli:2009je,Melia:2011tj}.
All generated events are interfaced with \PYTHIA\ (8.205)~\cite{PYTHIA8} for the parton shower and hadronization steps.
The object reconstruction in MC events uses the same algorithm as used in data.
Furthermore, the trigger selection is simulated and applied for generated signal events.
However, the triggering and selection efficiencies for leptons differ between data and simulation at the level of 1$\%$.
All simulated events used in the analyses are corrected by applying small data-to-MC scale factors to improve the modeling of the data.
Separate scale factors are applied to correct for the difference in trigger efficiency, lepton reconstruction and selection efficiency, as well as the \cPqb\ tagging efficiency and the resolution of the missing transverse momentum.
Simulated events are weighted according to the number of pileup interactions so that the distribution of additional \Pp\Pp\ interactions in the simulated samples matches that observed in data, as estimated from the measured bunch-to-bunch instantaneous luminosity and the total inelastic cross section.
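The pileup reweighting described above amounts to a per-event weight given by the ratio of the observed to the simulated pileup distribution. The following is a minimal sketch under that assumption, not the CMS implementation; in practice the data distribution is derived from the measured luminosity and the total inelastic cross section.

```python
# Illustrative pileup reweighting: each simulated event with n pileup
# interactions receives the weight data[n] / mc[n], so that the weighted
# simulation reproduces the pileup profile observed in data.

def pileup_weights(data_hist, mc_hist):
    """Both inputs are normalized histograms (lists of probabilities)
    over the same bins in the number of pileup interactions."""
    return [d / m if m > 0 else 0.0 for d, m in zip(data_hist, mc_hist)]
```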
\section{Multilepton channels}\label{sec:multilepton}
Signal \ensuremath{\cPqt\PH}\xspace\ events where the top quark decay produces leptons and the Higgs boson decays to vector bosons or \Pgt\ leptons can lead to final states containing multiple isolated, high-\pt\ leptons with different charge and flavor configurations.
Of particular interest among these are those with three or more charged leptons or with two leptons of the same electric charge, as they appear with comparatively low backgrounds.
Selecting such events in $\Pp\Pp$ collisions while requiring the presence of \cPqb-tagged jets typically yields a mixture of mostly \ensuremath{\ttbar{+}\text{jets}}\xspace\ events with nonprompt leptons and events from the associated production of \ttbar\ with a vector boson (\ensuremath{\ttbar\PW}\xspace\ and \ensuremath{\ttbar\PZ}\xspace) or with a Higgs boson (\ensuremath{\ttbar\PH}\xspace) that decay to additional prompt leptons.
The analysis described in this section separates the \ensuremath{\cPqt\PH\Pq}\xspace\ signal from these two dominant background sources by training two multivariate classifiers using features such as the forward light jet, the difference in multiplicities of jets and \cPqb-tagged jets (``\cPqb\ jets''), as well as the kinematic properties of the leptons.
The two classifier outputs are combined into a single binned distribution, which is then fit to the data to extract the signal yield and constrain the background contributions.
\subsection{Event and object selections}\label{ssec:multilepselection}
In the multilepton channels, events are selected with trigger algorithms involving one, two, or three leptons passing the given \pt\ thresholds.
At the offline analysis level, a distinction is made between prompt signal leptons (from \PW, \PZ, or leptonic \Pgt\ decays) and nonprompt leptons (either genuine leptons from heavy-flavor hadron decays, asymmetric \Pgg\ conversions, or jets misidentified as leptons).
For this purpose an MVA classifier is used~\cite{tthlepton}, exploiting the properties of the jet associated with individual leptons in addition to the lepton kinematics, isolation, and reconstruction quality.
Leptons passing a given threshold of the classifier output are referred to as ``tight'' leptons, while a lower threshold defines a relaxed selection of ``loose'' leptons.
The final \ensuremath{\cPqt\PH}\xspace\ event selection targets signatures with \ensuremath{\PH\to\PW\PW}\xspace\ and $\cPqt\to\PW\cPqb\to \ell\nu\cPqb$, which results in three \PW\ bosons, one \cPqb\ quark, and a light quark at high rapidity.
Three mutually exclusive channels are defined based on the number of tight leptons and their flavors: exactly two same-sign leptons (\ensuremath{2\ell\text{ss}}\xspace), either \ensuremath{\Pgm^\pm\Pgm^\pm}\xspace\ or \ensuremath{\Pepm\Pgm^\pm}\xspace, or exactly three leptons (\ensuremath{\ell\ell\ell}\xspace, $\ell=\Pgm$ or $\Pe$).
The same-sign dielectron channel suffers from larger backgrounds, does not add sensitivity, and is therefore not included in the analysis.
There is an additional requirement of at least one \cPqb-tagged jet (using the medium working point of the CSVv2 algorithm) and at least one light-flavor (untagged, using the loose working point) jet.
The full selection is summarized in Table~\ref{tab:mlcuts}.
\begin{table}[!ht]
\topcaption{Summary of the event selection for the multilepton channels.\label{tab:mlcuts}}
\centering
\begin{scotch}{p{8cm}}
\quad Same-sign channel (\ensuremath{\Pgm^\pm\Pgm^\pm}\xspace or \ensuremath{\Pepm\Pgm^\pm}\xspace) \\
Exactly two tight SS leptons \\
$\pt>25/15\GeV$ \\
No loose leptons with $m_{\ell\ell} < 12\GeV$ \\
One or more \cPqb-tagged jet with $\pt>25\GeV$ and $\abs{\eta}<2.4$ \\
One or more untagged jets with $\pt>25\GeV$ for $\abs{\eta}<2.4$ and $\pt>40\GeV$ for $\abs{\eta}>2.4$ \\
[\cmsTabSkip]
\quad $\ensuremath{\ell\ell\ell}\xspace$ channel \\
Exactly three tight leptons \\
$\pt>25/15/15\GeV$ \\
No lepton pair with $\abs{m_{\ell\ell}-m_\PZ}<15\GeV$ \\
No loose leptons with $m_{\ell\ell} < 12\GeV$ \\
One or more \cPqb-tagged jet with $\pt>25\GeV$ and $\abs{\eta}<2.4$ \\
One or more untagged jets with $\pt>25\GeV$ for $\abs{\eta}<2.4$ and $\pt>40\GeV$ for $\abs{\eta}>2.4$ \\
\end{scotch}
\end{table}
About one quarter of the events in the finally selected sample are from \ensuremath{\PH\to\tautau}\xspace\ and \ensuremath{\PH\to\PZ\PZ}\xspace\ decays, with the rest coming from \ensuremath{\PH\to\PW\PW}\xspace\ decays, as determined from the \ensuremath{\cPqt\PH\Pq}\xspace\ signal simulation.
A significant fraction of selected events also pass the selection used in the dedicated search for \ensuremath{\ttbar\PH}\xspace\ in multilepton channels~\cite{tthlepton}: about 50\% in the dilepton channels and about 80\% in the \ensuremath{\ell\ell\ell}\xspace\ channels.
\subsection{Backgrounds}\label{ssec:multilepbackgrounds}
The background processes contributing to the signal sample can be divided into two classes, reducible and irreducible, and are estimated respectively from data and MC simulation.
Irreducible physics processes, such as the associated production of an electroweak boson with a top quark pair (\ensuremath{\ttbar\cmsSymbolFace{V}}\xspace, $\mathrm{V}=\PW, \PZ$), give rise to final states very similar to the \ensuremath{\cPqt\PH\Pq}\xspace\ signal and are directly estimated from MC simulation.
However, the dominant contribution is from the reducible background arising from nonprompt leptons, mainly from \ttbar\ production.
This background is suppressed to a certain extent by tightening the lepton selection criteria.
The background estimation methods employed here and summarized below are identical to those used in the dedicated search for \ensuremath{\ttbar\PH}\xspace\ in multilepton channels~\cite{tthlepton}.
The yield of reducible backgrounds is estimated from the data, using a ``tight-to-loose'' ratio measured in a control region dominated by nonprompt leptons.
The ratio represents the probability with which the nonprompt leptons that pass the looser selection can also pass the tight criteria, and is measured in categories of the lepton \pt\ and $\eta$.
A sideband region in data which has loosely selected leptons is then extrapolated with this ratio to obtain the nonprompt background contribution.
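The extrapolation from the sideband can be sketched as a per-lepton weight $f/(1-f)$, where $f$ is the tight-to-loose ratio in the matching $(\pt,\eta)$ category. This is a schematic of the standard method, not the CMS implementation; the category granularity and ratio values below are invented for the example.

```python
# Illustrative "tight-to-loose" extrapolation: events in the data sideband
# (leptons passing the loose but failing the tight selection) are weighted
# by f / (1 - f), with f the measured tight-to-loose ratio in the lepton's
# (pT, eta) category. The weighted sideband sum estimates the nonprompt
# background yield in the tight signal region.

def nonprompt_estimate(sideband_leptons, tight_to_loose):
    """sideband_leptons: list of (pt_bin, eta_bin) keys, one per
    loose-not-tight lepton; tight_to_loose: dict mapping such keys to
    the measured ratio f in that category."""
    total = 0.0
    for cat in sideband_leptons:
        f = tight_to_loose[cat]
        total += f / (1.0 - f)  # extrapolation weight for this lepton
    return total
```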
A further background in the same-sign dilepton channels arises from events where the charge of one lepton is wrongly assigned.
This can be estimated from the data, by measuring the charge misidentification probability using the \PZ\ boson mass peak in same-sign dilepton events, and weighting events with opposite-sign leptons to determine the yield in the signal region.
The effect is found to be negligible for muons but sizable for electrons.
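For a dielectron pair, the weight applied to opposite-sign events can be sketched as the probability that exactly one of the two charges is misassigned. This toy expression is an illustration of the weighting logic, not the analysis code; the probabilities used are hypothetical.

```python
# Toy charge-misidentification weight for an opposite-sign electron pair:
# the probability that exactly one of the two charges flips, turning the
# pair into a same-sign pair. p1 and p2 are the per-electron flip
# probabilities, measured in practice on the Z boson mass peak.

def charge_flip_weight(p1, p2):
    return p1 * (1.0 - p2) + p2 * (1.0 - p1)
```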
The production of \ensuremath{\PW\PZ}\xspace\ pairs with leptonic \PZ\ boson decays has leptonic features similar to those of the signal, but usually lacks the hadronic activity required in the signal selection.
To determine the corresponding diboson contribution in the signal region, simulated \ensuremath{\PW\PZ}\xspace\ events have been used along with a normalization scale factor determined from data in an exclusive control region.
Other subdominant backgrounds are estimated from MC simulation and include additional multiboson production, such as \ensuremath{\PZ\PZ}\xspace, \ensuremath{\PW^{\pm}\PW^{\pm}{\Pq\Pq}}\xspace, VVV, same-sign \PW\ boson production from double-parton scattering (DPS), associated production of top quarks with \PZ\ bosons (\ensuremath{\cPqt\PZ\Pq}\xspace, \ensuremath{\cPqt\PZ\PW}\xspace), events with four top quarks, and \ttbar\ production in association with photons and subsequent asymmetric conversions.
The expected and observed event yields after the selections described in Table~\ref{tab:mlcuts} are shown in Table~\ref{tab:mlyields}.
\begin{table*}[!htbp]
\topcaption{Data yields and expected backgrounds after the event selection for the three multilepton search channels in 35.9\fbinv\ of integrated luminosity. Quoted uncertainties include statistical uncertainties reflecting the limited size of MC samples and data sidebands, and unconstrained systematic uncertainties.\label{tab:mlyields}}
\centering
\begin{scotch}{lrrr}
Process & \ensuremath{\Pgm^\pm\Pgm^\pm}\xspace & \ensuremath{\Pepm\Pgm^\pm}\xspace & $\ell\ell\ell$ \\
\hline
$\ensuremath{\ttbar\PW}\xspace$ & $ 68 \pm 10 $ & $97 \pm 13 $ & $22.5 \pm 3.1 $ \\
$\ensuremath{\ttbar\PZ}\xspace/\ensuremath{\ttbar\gamma}\xspace$ & $ 25.9 \pm 3.9 $ & $64.8 \pm 9.0 $ & $32.8 \pm 5.1 $ \\
$\ensuremath{\PW\PZ}\xspace$ & $ 15.1 \pm 7.7 $ & $26 \pm 13 $ & $ 8.2 \pm 2.4 $ \\
$\ensuremath{\PZ\PZ}\xspace$ & $ 1.16\pm 0.65$ & $2.9 \pm 1.5 $ & $ 1.62 \pm 0.87$ \\
$\ensuremath{\PW^{\pm}\PW^{\pm}{\Pq\Pq}}\xspace$ & $ 4.0 \pm 2.1 $ & $7.0 \pm 3.6 $ & \NA\ \\
$\PW^\pm\PW^\pm$ (DPS) & $ 2.5 \pm 1.3 $ & $4.2 \pm 2.2 $ & \NA\ \\
VVV & $ 3.0 \pm 1.5 $ & $4.9 \pm 2.5 $ & $ 0.42 \pm 0.26$ \\
$\ensuremath{\ttbar\ttbar}\xspace$ & $ 2.3 \pm 1.2 $ & $4.1 \pm 2.1 $ & $ 1.8 \pm 1.0 $ \\
$\ensuremath{\cPqt\PZ\Pq}\xspace$ & $ 5.8 \pm 3.6 $ & $10.7 \pm 6.1 $ & $ 3.9 \pm 2.5 $ \\
$\ensuremath{\cPqt\PZ\PW}\xspace$ & $ 2.1 \pm 1.1 $ & $3.9 \pm 2.0 $ & $ 1.70 \pm 0.86$ \\
$\gamma$ conversions & \NA\ & $23.8 \pm 7.8 $ & $ 7.4 \pm 2.8 $ \\ [\cmsTabSkip]
Nonprompt & $ 80.9 \pm 9.4$ & $135 \pm 35 $ & $26 \pm 14 $ \\
Charge misidentification & \NA\ & $58 \pm 17 $ & \NA\ \\ [\cmsTabSkip]
Total background & $211 \pm 17 $ & $443 \pm 45 $ & $106 \pm 16 $ \\ [\cmsTabSkip]
$\ensuremath{\ttbar\PH}\xspace$ & $ 24.2 \pm 2.1 $ & $ 35.2 \pm 2.9 $ & $ 18.3 \pm 1.7 $ \\
$\ensuremath{\cPqt\PH\Pq}\xspace$ (SM) & $ 1.43\pm 0.12$ & $ 1.92 \pm 0.15$ & $ 0.52 \pm 0.04$ \\
$\ensuremath{\cPqt\PH\PW}\xspace$ (SM) & $ 0.71\pm 0.06$ & $ 1.11 \pm 0.09$ & $ 0.62 \pm 0.05$ \\ [\cmsTabSkip]
Total SM & $237 \pm 17 $ & $482 \pm 45 $ & $126 \pm 16 $ \\ [\cmsTabSkip]
$\ensuremath{\cPqt\PH\Pq}\xspace$ ($\ensuremath{\kappa_\text{V}}\xspace=1=-\ensuremath{\kappa_\cPqt}\xspace$) & $ 18.5 \pm 1.6 $ & $ 27.4 \pm 2.1 $ & $ 7.48 \pm 0.58$ \\
$\ensuremath{\cPqt\PH\PW}\xspace$ ($\ensuremath{\kappa_\text{V}}\xspace=1=-\ensuremath{\kappa_\cPqt}\xspace$) & $ 7.72\pm 0.65$ & $ 11.23 \pm 0.91$ & $ 7.38 \pm 0.60$ \\ [\cmsTabSkip]
Data & 280 & 525 & 127 \\
\end{scotch}
\end{table*}
\subsection{Signal extraction}\label{ssec:multilepsignalextrac}
After applying the event selection of the multilepton channels, only about one percent of selected events are expected to be from \ensuremath{\cPqt\PH}\xspace\ production (assuming SM cross sections), while roughly 10\% of events are from \ensuremath{\ttbar\PH}\xspace\ production.
To discriminate this small signal from the backgrounds, an MVA method is employed: a classification algorithm is trained twice with \ensuremath{\cPqt\PH\Pq}\xspace\ events as the signal class, and either \ensuremath{\ttbar\cmsSymbolFace{V}}\xspace\ (mixing \ensuremath{\ttbar\PW}\xspace\ and \ensuremath{\ttbar\PZ}\xspace\ according to their respective cross sections) or \ensuremath{\ttbar{+}\text{jets}}\xspace\ as background classes.
The two separate trainings allow the exploitation of the different jet and \cPqb\ jet multiplicity distributions, and of the different kinematic properties of the leptons in the two dominant background classes.
Several machine learning algorithms were studied for potential use, and the best performance was obtained with a gradient BDT using a maximum tree depth of three and an ensemble of 800 trees~\cite{Hocker:2007ht}.
Events from \ensuremath{\cPqt\PH\PW}\xspace\ and \ensuremath{\ttbar\PH}\xspace\ production are not used in the training and, because of their kinematic similarity with the \ensuremath{\ttbar\cmsSymbolFace{V}}\xspace\ background, tend to be classified as backgrounds.
As observed above, the features of the \ensuremath{\cPqt\PH\Pq}\xspace\ signal can be split into three broad categories: those related to the forward jet activity; those related to jet and \cPqb-jet multiplicities; and those related to kinematic properties of leptons, as well as their total charge.
A set of ten observables were used as input features to the classification training, and are listed in Table~\ref{tab:mlbdtinputs}.
The training is performed separately for the \ensuremath{2\ell\text{ss}}\xspace\ and the \ensuremath{\ell\ell\ell}\xspace\ channels with the same or equivalent input features.
\begin{table}[ht!]
\topcaption{Input observables to the signal discrimination classifier.}
\centering
\begin{scotch}{lp{12cm}}
& Number of jets with $\pt>25\GeV$, $\abs{\eta}<2.4$\\
& Maximum $\abs{\eta}$ of any (untagged) jet (``forward jet'')\\
& Sum of lepton charges \\
& Number of untagged jets with $\abs{\eta}>1.0$\\
& $\Delta \eta$ between forward light jet and leading \cPqb-tagged jet\\
& $\Delta \eta$ between forward light jet and subleading \cPqb-tagged jet \\
& $\Delta \eta$ between forward light jet and closest lepton\\
& $\Delta \phi$ of highest-\pt\ same-sign lepton pair\\
& Minimum $\Delta R$ between any two leptons\\
& \pt\ of the subleading (or third) lepton\\
\end{scotch}
\label{tab:mlbdtinputs}
\end{table}
A selection of the main discriminating input observables is shown in Figs.~\ref{fig:2lss_inputs_mm}--\ref{fig:2lss_inputs_3l}, comparing the data and the estimated distribution of signal and background processes.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.32\linewidth]{Figure_002-a.pdf}
\includegraphics[width=0.32\linewidth]{Figure_002-b.pdf}
\includegraphics[width=0.32\linewidth]{Figure_002-c.pdf} \\
\caption{Distributions of discriminating observables for the same-sign \ensuremath{\Pgm^\pm\Pgm^\pm}\xspace\ channel, normalized to 35.9\fbinv, before fitting the signal discriminant to the data. The grey band represents the unconstrained (pre-fit) statistical and systematic uncertainties. In the panel below each distribution, the ratio of the observed and predicted event yields is shown. The shape of the two \ensuremath{\cPqt\PH}\xspace\ signals for $\ensuremath{\kappa_\cPqt}\xspace=-1.0$ is shown, normalized to their respective cross sections for $\ensuremath{\kappa_\cPqt}\xspace=-1.0, \ensuremath{\kappa_\text{V}}\xspace=1.0$.\label{fig:2lss_inputs_mm}}
\end{figure*}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.32\linewidth]{Figure_003-a.pdf}
\includegraphics[width=0.32\linewidth]{Figure_003-b.pdf}
\includegraphics[width=0.32\linewidth]{Figure_003-c.pdf} \\
\caption{Distributions of discriminating observables for the same-sign \ensuremath{\Pepm\Pgm^\pm}\xspace\ channel, normalized to 35.9\fbinv, before fitting the signal discriminant to the data. The grey band represents the unconstrained (pre-fit) statistical and systematic uncertainties. In the panel below each distribution, the ratio of the observed and predicted event yields is shown. The shape of the two \ensuremath{\cPqt\PH}\xspace\ signals for $\ensuremath{\kappa_\cPqt}\xspace=-1.0$ is shown, normalized to their respective cross sections for $\ensuremath{\kappa_\cPqt}\xspace=-1.0, \ensuremath{\kappa_\text{V}}\xspace=1.0$.\label{fig:2lss_inputs_em}}
\end{figure*}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.32\linewidth]{Figure_004-a.pdf}
\includegraphics[width=0.32\linewidth]{Figure_004-b.pdf}
\includegraphics[width=0.32\linewidth]{Figure_004-c.pdf} \\
\caption{Distributions of discriminating observables for the three lepton channel, normalized to 35.9\fbinv, before fitting the signal discriminant to the data. The grey band represents the unconstrained (pre-fit) statistical and systematic uncertainties. In the panel below each distribution, the ratio of the observed and predicted event yields is shown. The shape of the two \ensuremath{\cPqt\PH}\xspace\ signals for $\ensuremath{\kappa_\cPqt}\xspace=-1.0$ is shown, normalized to their respective cross sections for $\ensuremath{\kappa_\cPqt}\xspace=-1.0, \ensuremath{\kappa_\text{V}}\xspace=1.0$.\label{fig:2lss_inputs_3l}}
\end{figure*}
The six classifier output distributions, trained against \ensuremath{\ttbar\cmsSymbolFace{V}}\xspace\ and \ensuremath{\ttbar{+}\text{jets}}\xspace\ processes for each of the three channels, are shown in Fig.~\ref{fig:mlpostfitshapes}, before a fit to the data.
The events are then sorted into ten categories depending on the output of the two BDT classifiers according to an optimized binning strategy, resulting in a one-dimensional histogram with ten bins.
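The mapping from the two classifier outputs to ten categories can be sketched as a partition of the two-dimensional score plane. The rectangular grid below is a placeholder; the actual category boundaries are optimized and not reproduced here.

```python
# Schematic of combining two BDT outputs into one 10-bin observable:
# the plane of (score vs ttV training, score vs tt+jets training) is
# partitioned into 10 regions, each becoming one bin of the final
# histogram. A simple 5x2 rectangular grid is assumed for illustration.

def category(score_ttv, score_ttjets, n_per_axis=(5, 2)):
    """Map a pair of scores in [0, 1] to a category index in [0, 10)."""
    nx, ny = n_per_axis
    ix = min(int(score_ttv * nx), nx - 1)
    iy = min(int(score_ttjets * ny), ny - 1)
    return ix * ny + iy
```

Filling one histogram with this index yields the ten-bin distribution that enters the maximum likelihood fit.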
Figure~\ref{fig:mlfinalbins} shows the post-fit categorized classifier output distributions for each of the three channels, after the combined maximum likelihood fit to extract the limits, as described in Section~\ref{sec:results}.
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.32\textwidth]{Figure_005-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_005-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_005-c.pdf} \\
\includegraphics[width=0.32\textwidth]{Figure_005-d.pdf}
\includegraphics[width=0.32\textwidth]{Figure_005-e.pdf}
\includegraphics[width=0.32\textwidth]{Figure_005-f.pdf}
\end{center}
\caption{Pre-fit classifier outputs, for the \ensuremath{\Pgm^\pm\Pgm^\pm}\xspace\ channel (left), \ensuremath{\Pepm\Pgm^\pm}\xspace\ channel (center), and three-lepton channel (right), for training against \ensuremath{\ttbar\cmsSymbolFace{V}}\xspace\ (top row) and against \ensuremath{\ttbar{+}\text{jets}}\xspace\ (bottom row).
In the box below each distribution, the ratio of the observed and predicted event yields is shown.
The shape of the two \ensuremath{\cPqt\PH}\xspace\ signals for $\ensuremath{\kappa_\cPqt}\xspace=-1.0$ is shown, normalized to their respective cross sections for $\ensuremath{\kappa_\cPqt}\xspace=-1.0, \ensuremath{\kappa_\text{V}}\xspace=1.0$.
The grey band represents the unconstrained (pre-fit) statistical and systematic uncertainties.\label{fig:mlpostfitshapes}}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=0.32\textwidth]{Figure_006-a.pdf}
\includegraphics[width=0.32\textwidth]{Figure_006-b.pdf}
\includegraphics[width=0.32\textwidth]{Figure_006-c.pdf}
\end{center}
\caption{Post-fit categorized classifier outputs as used in the maximum likelihood fit for the \ensuremath{\Pgm^\pm\Pgm^\pm}\xspace\ channel (left), \ensuremath{\Pepm\Pgm^\pm}\xspace\ channel (center), and three-lepton channel (right), for 35.9\fbinv.
In the box below each distribution, the ratio of the observed and predicted event yields is shown.
The shape of the \ensuremath{\cPqt\PH}\xspace\ signal is indicated with 10 times the SM.
\label{fig:mlfinalbins}}
\end{figure*}
\subsection{Systematic uncertainties}\label{ssec:multilepsystematics}
Both the yield of signal and background events after the selection and the shape of the classifier output distributions for signal and background processes are subject to systematic uncertainties from a variety of sources, both experimental and theoretical.
Experimental uncertainties relate either to the reconstruction of physics objects or to imprecisions in estimating the background contributions.
Uncertainties in the efficiency of reconstructing and selecting physics objects affect all yields and kinematic shapes taken from MC simulation, for both signal and background.
Background contributions estimated from the data are not affected by these.
Uncertainties from unknown higher-order contributions to \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ production are estimated by varying the factorization and renormalization scales to double and half their nominal values, evaluated separately for each value of \ensuremath{\kappa_\cPqt}\xspace.
The \ensuremath{\ttbar\PH}\xspace\ component has an uncertainty of 5.8--9.3\% from scale variations and an additional $3.6\%$ from the knowledge of PDFs and the strong coupling constant $\alpS$~\cite{deFlorian:2016spz}.
Uncertainties related to the choice of the PDF set and its scale are estimated to be $3.7\%$ for \ensuremath{\cPqt\PH\Pq}\xspace\ and $4.0\%$ for \ensuremath{\cPqt\PH\PW}\xspace.
The effect of missing higher-order corrections on the kinematic shape of the classifier outputs is taken into account for the \ensuremath{\cPqt\PH}\xspace, \ensuremath{\ttbar\PH}\xspace, and \ensuremath{\ttbar\cmsSymbolFace{V}}\xspace\ components by independently varying the renormalization and factorization scales to double and half their nominal values.
The cross sections of \ensuremath{\ttbar\PZ}\xspace\ and \ensuremath{\ttbar\PW}\xspace\ production are known with uncertainties of $+9.6\%/\!-11.2\%$ and $+12.9\%/\!-11.5\%$, respectively, from missing higher-order corrections to the perturbative expansion.
The corresponding values due to uncertainties in the PDFs and $\alpS$ are $3.4$ and $4.0\%$, respectively~\cite{deFlorian:2016spz}.
The efficiency for events passing the combination of trigger requirements is measured separately for events with two or more leptons, and has an uncertainty in the range of 1--3\%.
Efficiencies for the reconstruction and selection of muons and electrons are measured as a function of their \pt, using a tag-and-probe method with uncertainties of 2--4\%~\cite{EWK-10-002}.
The energy scale of jets is determined using event balancing techniques and carries uncertainties of a few percent, depending on \pt\ and $\eta$ of the jets~\cite{Khachatryan:2016kdb}.
Their impact on the kinematic distributions used in the signal extraction is estimated by varying the scales within their respective uncertainties and propagating the effects to the final result, recalculating all kinematic quantities and reapplying the event selection criteria.
The \cPqb\ tagging efficiencies are measured in heavy-flavor enriched multijet events and in \ttbar\ events, with \pt- and $\eta$-dependent uncertainties of a few percent~\cite{Sirunyan:2017ezt}.
The uncertainty in the integrated luminosity is 2.5\%~\cite{LUM-17-001} and affects the normalization of all processes modeled in simulation.
The estimate of events containing nonprompt leptons is subject to uncertainties in the determination of the tight-to-loose ratio on the one hand, and to the inherent bias in the selection of the control region dominated by nonprompt leptons, as tested in simulated events, on the other.
The measurement of the lepton tight-to-loose rate has statistical as well as systematic uncertainties from the removal of residual prompt leptons in the control region, amounting to a total uncertainty of 10--40\%, depending on the flavor of the leptons and their \pt\ and $\eta$.
The validity of the method itself is tested in simulated events and contributes a small additional uncertainty both to the normalization and shape of the classifier distributions for such events.
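The transfer from the loose-but-not-tight control region to the signal region in such tight-to-loose (fake factor) methods can be sketched as below. This is a generic illustration of the technique, not the collaboration's exact implementation, and the rate value in the test is invented.

```python
def nonprompt_weight(fail_rates):
    """
    Transfer weight for an event in the application region, where
    fail_rates lists the tight-to-loose ratio f of each lepton that
    passes the loose but fails the tight selection.  One such lepton
    gives f/(1-f); each further one multiplies by another factor and
    flips the sign, so that double counting is subtracted.
    """
    w = -1.0
    for f in fail_rates:
        w *= -f / (1.0 - f)
    return w
```

Summing these weights over the control-region events yields the nonprompt-lepton prediction in the signal region, with the 10--40\% rate uncertainty applied on top.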
The estimate of backgrounds from electron charge misidentification in the \ensuremath{\Pepm\Pgm^\pm}\xspace\ channel carries an uncertainty of about 30\% from the measurement of the misidentification probability.
It is composed of a dominant statistical component from the limited event yields, and one related to the residual disagreement observed when testing the prediction in a control region.
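A minimal sketch of how such a charge-misidentification estimate is typically constructed: opposite-sign \Pe\Pgm\ events are reweighted by the electron's flip probability. The probability values in the test and the exact propagation of the 30\% uncertainty are illustrative assumptions.

```python
def charge_flip_weight(p_flip):
    """
    Weight for an opposite-sign e-mu event used to predict the same-sign
    yield from electron charge misassignment; p_flip is the pt- and
    eta-dependent misidentification probability, which in practice is
    measured in data.
    """
    return p_flip / (1.0 - p_flip)

def misid_prediction(flip_probs, rel_unc=0.30):
    """Sum of weights, with an overall ~30% relative uncertainty."""
    yield_ss = sum(charge_flip_weight(p) for p in flip_probs)
    return yield_ss, rel_unc * yield_ss
```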
The estimate of backgrounds from \ensuremath{\PW\PZ}\xspace\ production is normalized in a control region with three leptons and carries uncertainties due to its limited statistics (10\%), the residual non-\ensuremath{\PW\PZ}\xspace\ backgrounds (20\%), the \cPqb\ tagging rate ($10\%$), and the theoretical uncertainties related to the flavor composition of jets produced in association with the boson pair (up to 10\%).
In the dilepton channels, this uncertainty is increased to 50\% to account for the differences with respect to the control region.
Additional smaller backgrounds which have not yet been observed at the LHC (VVV, same-sign \PW\ boson production, \ensuremath{\cPqt\PZ\Pq}\xspace, \ensuremath{\cPqt\PZ\PW}\xspace, \ensuremath{\ttbar\ttbar}\xspace) are assigned a normalization uncertainty of 50\%.
Of these sources of systematic uncertainties, the ones with largest impact on the final result are found to be those related to the normalization of the nonprompt backgrounds, the scale variations for the \ensuremath{\ttbar\cmsSymbolFace{V}}\xspace\ and \ensuremath{\ttbar\PH}\xspace\ processes, and the lepton selection efficiencies.
\section{Single-lepton + \texorpdfstring{\bbbar}{bbbar} channels}\label{sec:bbchannels}
Events from a \ensuremath{\cPqt\PH}\xspace\ signal where the Higgs boson decays to a bottom quark-antiquark pair (\ensuremath{\PH\to\bbbar}\xspace) produce final states with at least three central \cPqb\ jets and a hard lepton from the top quark decay chain used for triggering.
Selecting such events leads to challenging backgrounds from \ttbar\ production with additional heavy-flavor quarks, which can be produced in gluon splittings from initial- or final-state radiation.
The analysis described in this section uses two selections aimed at identifying signal events, with either three or four \cPqb-tagged jets, and a separate sample with opposite-sign dileptons, dominated by \ensuremath{\ttbar{+}\text{jets}}\xspace\ events, to control $\ttbar+\text{heavy-flavor}$ (\ensuremath{\ttbar{+}\mathrm{HF}}\xspace) events in a simultaneous fit.
A multivariate classification algorithm is trained to discriminate different \ensuremath{\ttbar{+}\text{jets}}\xspace\ background components in the control region.
Additional multivariate algorithms are used to optimize the jet-parton assignment used to reconstruct kinematic properties of signal and background events which, in turn, are used to distinguish these components.
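The jet-parton assignment step amounts to scoring every injective jet-to-quark mapping and keeping the best one. A minimal sketch, with a stand-in scoring function in place of the trained BDT (the real analysis evaluates the JA-BDT response for each assignment):

```python
from itertools import permutations

def best_assignment(jets, quarks, score):
    """
    Try every injective jet-to-quark mapping and keep the one with the
    highest score, mimicking how the JA-BDT output is maximized.
    'score' stands in for the trained BDT and is an assumption here.
    """
    best, best_s = None, float("-inf")
    for perm in permutations(range(len(jets)), len(quarks)):
        s = score([jets[i] for i in perm], quarks)
        if s > best_s:
            best, best_s = perm, s
    return best, best_s
```

The combinatorics grow factorially with jet multiplicity, which is why a dedicated classifier per hypothesis is needed to rank the assignments.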
\subsection{Selection}\label{ssec:bbselection}
Selected events in the single-lepton + \bbbar\ signal channels must pass a single-lepton trigger.
Each event is required to contain exactly one muon or electron.
Muon (electron) candidates are required to satisfy $\pt > 27$ $(35)\GeV$ and $\abs{\eta} < 2.4$ $(2.1)$, motivated by the trigger selection, and to be isolated and fulfill strict quality requirements.
Events with additional leptons that have $\pt > 15\GeV$ and pass less strict quality requirements are rejected.
At the analysis level, the selection criteria target the \ensuremath{\PH\to\bbbar}\xspace\ and $\cPqt\to\PW\cPqb\to\ell\nu\cPqb$ decay channels.
With these decays, the final state of the \ensuremath{\cPqt\PH\Pq}\xspace\ process consists of one \PW\ boson, three \cPqb\ quarks, and the light-flavor quark recoiling against the top quark-Higgs boson system.
In addition, a fourth \cPqb\ quark is expected because of the initial gluon splitting, but often falls outside the detector acceptance.
The main signal region is therefore required to have either three or four \cPqb-tagged jets and at least one additional untagged jet, both defined using the medium working point.
Central jets with $\abs{\eta} < 2.4$ are required to have $\pt > 30\GeV$, while jets in the forward region ($2.4 \leq \abs{\eta} \leq 4.7$) are required to have $\pt > 40\GeV$.
The neutrino is accounted for by requiring a minimal amount of missing transverse momentum of $\ptmiss>35\GeV$ in the muon channel and $\ptmiss>45\GeV$ in the electron channel.
This renders the background from QCD multijet events negligible.
In addition to the signal regions, a control region is defined to constrain the main background contribution from top quark pair production.
Events selected for this control region must pass a dilepton trigger.
Each event is required to contain exactly two oppositely charged leptons, where their flavor can be any combination of muons or electrons.
Two jets in each event must be \cPqb\ tagged.
Furthermore, at least one additional jet must pass the loose \cPqb\ tagging requirement.
Similarly to the signal regions, each event is further required to have a minimum amount of missing transverse momentum.
All selection criteria are summarized in Table~\ref{tab:bb_cuts}.
\begin{table}[!h]
\topcaption{Summary of event selection for the single-lepton + \bbbar\ channels.\label{tab:bb_cuts}}
\centering
\begin{scotch}{p{8cm}}
\quad Signal region \\
One muon (electron) with $\pt>27 (35)\GeV$ \\
No additional loose leptons with $\pt>15\GeV$ \\
Three or four medium \cPqb-tagged jets \\
$\pt>30\GeV$ and $\abs{\eta}<2.4$ \\
One or more untagged jets \\
$\pt>30\GeV$ for $\abs{\eta}<2.4$ or \\
$\pt>40\GeV$ for $\abs{\eta}\ge2.4$ \\
$\ptmiss>35 (45)\GeV$ for muons (electrons) \\
[\cmsTabSkip]
\quad Control region \\
Two leptons: $\pt>20/20\GeV$ ($\Pgm^\pm\Pgm^\mp$) \\
or $\pt>20/15\GeV$ ($\Pepm\Pe^\mp/\Pgm^\pm\Pe^\mp$)\\
No additional loose leptons \\
with $\pt>20/15\GeV$ ($\Pgm^\pm/\Pepm$) \\
Two medium \cPqb-tagged jets \\
$\pt>30\GeV$ and $\abs{\eta}<2.4$ \\
One or more additional loose \cPqb-tagged jets \\
$\pt>30\GeV$ and $\abs{\eta}<2.4$ \\
$\ptmiss>40\GeV$\\
\end{scotch}
\end{table}
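As a cross-check of the signal-region cuts summarized above, the selection can be sketched as a single predicate. The event record layout is an assumption made for illustration.

```python
def passes_signal_region(ev):
    """
    Sketch of the single-lepton + bb signal-region selection.  'ev' is an
    assumed dict: lepton flavor and pt, jets as dicts with pt, eta, and a
    btag level in {'none', 'loose', 'medium'}, the number of extra loose
    leptons, and the missing transverse momentum.
    """
    lep_pt_min = 27.0 if ev["flavor"] == "mu" else 35.0
    if ev["lep_pt"] <= lep_pt_min or ev["n_extra_loose_leptons"] > 0:
        return False
    tagged = [j for j in ev["jets"]
              if j["btag"] == "medium" and j["pt"] > 30 and abs(j["eta"]) < 2.4]
    if len(tagged) not in (3, 4):
        return False
    untagged = [j for j in ev["jets"] if j["btag"] != "medium"
                and ((abs(j["eta"]) < 2.4 and j["pt"] > 30)
                     or (2.4 <= abs(j["eta"]) <= 4.7 and j["pt"] > 40))]
    if not untagged:
        return False
    ptmiss_min = 35.0 if ev["flavor"] == "mu" else 45.0
    return ev["ptmiss"] > ptmiss_min
```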
\subsection{Backgrounds}\label{ssec:bbbackgrounds}
The main background contribution in the single-lepton + \bbbar\ channels arises from SM processes with multiple \cPqb\ quarks.
The modeling and estimation of all background processes are done using samples of simulated events.
In particular, the dominant background process is top quark pair production because of the similar final state and, comparatively, a large cross section.
As the modeling of the additional heavy-flavor partons in \ttbar\ events is theoretically difficult, the sample of simulated \ttbar\ events is further divided into different subcategories, defined by the flavor of possible additionally radiated quarks and taking into account a possible merging of \cPqb\ hadrons into single jets.
The control region is specifically designed to separate the \ensuremath{\ttbar{+}\mathrm{HF}}\xspace\ and $\ttbar+\text{light-flavor}$ (\ensuremath{\ttbar{+}\mathrm{LF}}\xspace) components with a multivariate approach.
The different categories are listed in Table~\ref{tab:ttjetcats}.
\begin{table}[htb]
\topcaption{Subcategories of \ensuremath{\ttbar{+}\text{jets}}\xspace\ backgrounds used in the analysis.\label{tab:ttjetcats}}
\centering
\begin{scotch}{ll}
\ensuremath{\ttbar{+}\bbbar}\xspace & Two additional jets arising from \cPqb\ hadrons \\
\ensuremath{\ttbar{+}2\cPqb}\xspace & One additional jet arising from two merged \\
& \cPqb\ hadrons \\
\ensuremath{\ttbar{+}\cPqb}\xspace & One additional jet arising from one \cPqb\ hadron \\
\ensuremath{\ttbar{+}\mathrm{c\bar{c}}}\xspace & The three former categories combined for \cPqc\ hadrons \\
& instead of \cPqb\ hadrons \\
\ensuremath{\ttbar{+}\mathrm{LF}}\xspace & All events that do not meet the criteria of the other \\
& four categories \\
\end{scotch}
\end{table}
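The priority with which an event enters these subcategories can be sketched from generator-level counts of additional heavy-flavor jets. The actual matching of \cPqb\ and \cPqc\ hadrons to jets is more involved than this illustration; a "merged" jet here means one containing two heavy-flavor hadrons.

```python
def ttbar_category(n_b_jets, n_merged_b_jets, n_c_jets, n_merged_c_jets=0):
    """
    Assign a tt+jets event to one subcategory from counts of additional
    jets matched to b or c hadrons, in decreasing order of priority.
    """
    if n_b_jets >= 2:
        return "tt+bb"       # two additional jets from b hadrons
    if n_merged_b_jets >= 1:
        return "tt+2b"       # one jet from two merged b hadrons
    if n_b_jets == 1:
        return "tt+b"        # one jet from one b hadron
    if n_c_jets + n_merged_c_jets >= 1:
        return "tt+cc"       # same logic for c hadrons, combined
    return "tt+LF"           # everything else
```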
Other backgrounds contributing to the signal region are single top quark production and top quark pair production in association with electroweak bosons, namely \ensuremath{\ttbar\PW}\xspace\ and \ensuremath{\ttbar\PZ}\xspace.
An irreducible background for the \ensuremath{\cPqt\PH\Pq}\xspace\ processes comes from \ensuremath{\cPqt\PZ\Pq}\xspace\ production with $\PZ\to\bbbar$.
Background contributions also arise from \ensuremath{\PZ{+}\text{jets}}\xspace production, especially in the dilepton control region.
The expected and observed event yields for the signal and control regions are listed in Table~\ref{tab:bb_yields}.
\begin{table*}[htb]
\topcaption{Data yields and expected backgrounds after the event selection for the two signal regions and in the dilepton control region. Uncertainties include both systematic and statistical components.\label{tab:bb_yields}}
\centering
\begin{scotch}{lrrr}
Process & 3 tags & 4 tags & Dilepton \\
\hline
\ensuremath{\ttbar{+}\mathrm{LF}}\xspace & $24100 \pm 5800$ & $320 \pm 180$ & $5300 \pm 1000$ \\
\ensuremath{\ttbar{+}\mathrm{c\bar{c}}}\xspace & $8500 \pm 4900$ & $340 \pm 260$ & $2100 \pm 1200$ \\
\ensuremath{\ttbar{+}\bbbar}\xspace & $4100 \pm 2300$ & $780 \pm 430$ & $750 \pm 440 $ \\
\ensuremath{\ttbar{+}\cPqb}\xspace & $4000 \pm 2100$ & $180 \pm 110$ & $770 \pm 430 $ \\
\ensuremath{\ttbar{+}2\cPqb}\xspace & $2300 \pm 1200$ & $138 \pm 88 $ & $400 \pm 230 $ \\
Single top & $1980 \pm 350 $ & $78 \pm 26 $ & $285 \pm 37 $ \\
\ensuremath{\ttbar\PZ}\xspace & $202 \pm 30 $ & $32.0 \pm 6.6$ & $54.8 \pm 7.3 $ \\
\ensuremath{\ttbar\PW}\xspace & $90 \pm 23 $ & $4.2 \pm 2.8$ & $31.4 \pm 5.9 $ \\
\ensuremath{\cPqt\PZ\Pq}\xspace & $28.3 \pm 5.7 $ & $2.9 \pm 2.3$ & \NA\ \\
\ensuremath{\PZ{+}\text{jets}}\xspace\ & \NA\ & \NA\ & $69 \pm 32$ \\
[\cmsTabSkip]
Total background & $45300 \pm 8300$ & $1880 \pm 550$ & $9700 \pm 1700$ \\
[\cmsTabSkip]
\ensuremath{\ttbar\PH}\xspace & $268 \pm 31$ & $62.0 \pm 9.9$ & $48.9 \pm 5.9 $ \\
$\ensuremath{\cPqt\PH\Pq}\xspace$ (SM) & $11.1 \pm 3.3$ & $ 1.3 \pm 0.3$ & $0.31 \pm 0.08$ \\
$\ensuremath{\cPqt\PH\PW}\xspace$ (SM) & $ 7.6 \pm 1.1$ & $ 1.1 \pm 0.3$ & $1.4 \pm 0.2 $ \\
[\cmsTabSkip]
Total SM & $45700 \pm 8300$ & $1940 \pm 550$ & $9700 \pm 1700$ \\
[\cmsTabSkip]
$\ensuremath{\cPqt\PH\Pq}\xspace$ ($\ensuremath{\kappa_\text{V}}\xspace=1=-\ensuremath{\kappa_\cPqt}\xspace$) & $160 \pm 38$ & $19.1 \pm 5.2$ & $ 3.9 \pm 1.0$ \\
$\ensuremath{\cPqt\PH\PW}\xspace$ ($\ensuremath{\kappa_\text{V}}\xspace=1=-\ensuremath{\kappa_\cPqt}\xspace$) & $ 92 \pm 12$ & $13.7 \pm 2.3$ & $17.6 \pm 2.2$ \\
[\cmsTabSkip]
Data & 44311 & 2035 & 9065 \\
\end{scotch}
\end{table*}
\subsection{Signal extraction}\label{ssec:bbsignalextrac}
As the assignment of final-state quarks to reconstructed jets is nontrivial in the multijet environment of the 3 and 4 tag signal regions, the jet-to-quark assignment is performed with dedicated jet assignment BDTs (JA-BDTs).
Each event is reconstructed under three different hypotheses: \ensuremath{\cPqt\PH\Pq}\xspace\ signal event, \ensuremath{\cPqt\PH\PW}\xspace\ signal event, or \ensuremath{\ttbar{+}\text{jets}}\xspace\ background event.
Each assignment hypothesis utilizes a separate BDT, which is trained with correct and wrong jet-to-quark assignments of the respective process.
When a JA-BDT is applied, all possible jet-to-quark assignments are evaluated and the one with the highest JA-BDT output value is chosen for the given hypothesis.
The matching efficiency for a complete \ensuremath{\cPqt\PH\Pq}\xspace\ event is 58 (45)\% in the 3 (4) tag signal region; for a complete \ensuremath{\cPqt\PH\PW}\xspace\ event it is 38 (29)\%, and for a complete \ttbar\ event 58 (31)\%.
The different assignment hypotheses provide sensitive variables, which can be exploited in a further signal classification BDT (SC-BDT) to separate the \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ processes from the main background of the analysis, \ttbar\ events.
Global event variables that do not rely on any particular jet-to-quark assignment are used in addition to the assignment-dependent variables.
The input variables used for the SC-BDT are listed in Table~\ref{tab:class-vars} with the result of the training illustrated in Fig.~\ref{FigClassTHMVA}.
\begin{table*}[h!]
\renewcommand{\arraystretch}{1.3}
\centering{
\topcaption[Classification variable description]{Description of variables used in the SC-BDT.
There are four types of variables: variables independent of any jet assignment, and variables based on objects obtained under the \ttbar, \ensuremath{\cPqt\PH\Pq}\xspace, or \ensuremath{\cPqt\PH\PW}\xspace\ jet assignment. The natural logarithm transformation is used to smooth broad distributions and constrain them to a narrower range.}
\begin{scotch}{p{0.35\textwidth}p{0.6\textwidth}}
Variable & Description \\
\hline
\quad Event variables & \\
$\ln{m_3}$ & Invariant mass of three hardest jets in the event\\
Aplanarity & Aplanarity of the event~\cite{Barger:1993ww} \\
Fox--Wolfram \#1 & First Fox--Wolfram moment~\cite{PhysRevLett.41.1581} of the event\\
$q(\ell)$ & Electric charge of the lepton \\
[\cmsTabSkip]
\quad \ttbar\ jet assignment variables & \\
$\ln{m(\ensuremath{\cPqt_{\text{had}}}\xspace)}$ & Invariant mass of the reconstructed hadronically decaying top quark \\
CSV(\ensuremath{\PW_{\text{had}}}\xspace jet 1) & Output of the \cPqb\ tagging discriminant for the first jet assigned to the hadronically decaying \PW\ boson \\
CSV(\ensuremath{\PW_{\text{had}}}\xspace jet 2) & Output of the \cPqb\ tagging discriminant for the second jet assigned to the hadronically decaying \PW\ boson \\
$\Delta R$(\ensuremath{\PW_{\text{had}}}\xspace jets) & $\Delta R$ between the two light jets assigned to the hadronically decaying \PW\ boson \\
[\cmsTabSkip]
\quad \ensuremath{\cPqt\PH\Pq}\xspace\ jet assignment variables & \\
$\ln{\pt(\PH)}$ & Transverse momentum of the reconstructed Higgs boson candidate \\
$\abs{\eta(\text{light-flavor jet})}$ & Absolute pseudorapidity of light-flavor forward jet \\
$\ln{m(\PH)}$ & Invariant mass of the reconstructed Higgs boson candidate\\
CSV(\PH jet 1) & Output of the \cPqb\ tagging discriminant for the first jet assigned to the Higgs boson candidate \\
CSV(\PH jet 2) & Output of the \cPqb\ tagging discriminant for the second jet assigned to the Higgs boson candidate \\
$\cos \theta(\text{b}_\text{t},\,\ell)$ & Cosine of the angle between the \cPqb-tagged jet from the top quark decay and the lepton\\
$\cos \theta^{*}$ & Cosine of the angle between the light-flavor forward jet and the lepton in the top quark rest frame \\
$\abs{\eta(\text{t}) - \eta(\text{H})}$ & Absolute pseudorapidity difference of reconstructed Higgs boson and top quark\\
$\ln{\pt(\text{light jet})}$ & Transverse momentum of the light-flavor forward jet \\
[\cmsTabSkip]
\quad \ensuremath{\cPqt\PH\PW}\xspace jet assignment variable & \\
{JA-BDT response } & Best output of the \ensuremath{\cPqt\PH\PW}\xspace\ JA-BDT\\
\end{scotch}
\label{tab:class-vars}
}
\end{table*}
\begin{figure}[h]
\centering
\includegraphics[width=0.50\textwidth]{Figure_007.pdf}
\caption{Output values of the SC-BDT.}
\label{FigClassTHMVA}
\end{figure}
In addition, a dedicated flavor classification BDT (FC-BDT) is used in the dilepton region to constrain the contribution of different $\ttbar+\mathrm{X}$ components.
The training is performed with \ensuremath{\ttbar{+}\mathrm{LF}}\xspace\ as signal process and \ensuremath{\ttbar{+}\bbbar}\xspace\ as background process.
This FC-BDT exploits information of the number of jets per event and their response to \cPqb\ and \cPqc\ tagging algorithms.
The full list of input variables is provided in Table~\ref{TabFlavorVars} and the result of the training of the FC-BDT is shown in Fig.~\ref{FigFlavorMVA}.
\begin{table*}[tb]
\renewcommand{\arraystretch}{1.5}
\topcaption{Input variables used in the training of the FC-BDT. The variables are sorted by their importance in the training within each category. In total, eight variables are used for the training of the FC-BDT.}
\label{TabFlavorVars}
\small
\begin{scotch}{p{0.3\textwidth}p{0.65\textwidth}}
Variable & Description \\
\hline
{CSV(\cPqb jet 3)} & Output of the \cPqb\ tagging discriminant for the \cPqb-tagged jet with the third-highest \cPqb\ tagging value in the event \\
{n$_{\text{jets}}$(tight)} & Number of jets in the event passing the tight working point of the \cPqb\ tagging algorithm \\
{CvsL(jet \pt 3)} & Output of the charm \vs\ light-flavor tagging algorithm for the jet with the third-highest transverse momentum in the event \\
{CSV(\cPqb-tagged jet 2)} & Output of the \cPqb\ tagging discriminant for the \cPqb-tagged jet with the second-highest \cPqb\ tagging value in the event \\
{CvsL(jet \pt 4)} & Output of the charm \vs\ light-flavor tagging algorithm for the jet with the fourth-highest transverse momentum in the event \\
{CvsB(jet \pt 3)} & Output of the charm \vs\ bottom flavor tagging algorithm for the jet with the third-highest transverse momentum in the event \\
{CSV(\cPqb-tagged jet 4)} & Output of the \cPqb\ tagging discriminant for the \cPqb-tagged jet with the fourth-highest \cPqb\ tagging value in the event \\
{n$_{\text{jets}}$(loose)} & Number of jets in the event passing the loose working point of the \cPqb\ tagging algorithm \\
\end{scotch}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.50\textwidth]{Figure_008.pdf}
\caption{Response values of the FC-BDT. The background consists of \ensuremath{\ttbar{+}\bbbar}\xspace, \ensuremath{\ttbar{+}\cPqb}\xspace, and \ensuremath{\ttbar{+}2\cPqb}\xspace\ events.}
\label{FigFlavorMVA}
\end{figure}
To determine the signal yield, the output distributions of the SC-BDT in the three and four \cPqb\ tag regions are fitted simultaneously with the output of the FC-BDT in the dilepton region.
The SC-BDT output distributions before the fit are shown in Fig.~\ref{fig:bb_prefit} and the result of the fit is shown in Fig.~\ref{fig:bb_postfit}.
The pre- and post-fit distributions of the FC-BDT are shown in Fig.~\ref{fig:bb_dilepton}.
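The core of such a simultaneous fit is a binned Poisson likelihood over the concatenated bins of all regions, with a shared signal strength. A minimal sketch with nuisance parameters omitted, using a grid scan instead of a proper minimizer; the templates in the test are invented.

```python
import math

def nll(mu, data, signal, background):
    """Binned Poisson negative log-likelihood, constant terms dropped."""
    total = 0.0
    for n, s, b in zip(data, signal, background):
        lam = mu * s + b
        total += lam - n * math.log(lam)
    return total

def fit_mu(data, signal, background, lo=0.0, hi=5.0, steps=2001):
    """Grid-scan sketch of the combined fit; the bin lists would be the
    concatenation of the 3 tag, 4 tag, and dilepton region templates."""
    best_mu, best = None, float("inf")
    for k in range(steps):
        mu = lo + (hi - lo) * k / (steps - 1)
        v = nll(mu, data, signal, background)
        if v < best:
            best_mu, best = mu, v
    return best_mu
```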
\begin{figure*}[!htb]
\hspace{0.5cm}
\begin{center}
\includegraphics[width=0.48\textwidth]{Figure_009-a.pdf}
\includegraphics[width=0.48\textwidth]{Figure_009-b.pdf}
\end{center}
\caption{Pre-fit classifier outputs of the signal classification BDT for the 3 tag channel (left) and the 4 tag channel (right), for 35.9\fbinv. In the box below each distribution, the ratio of the observed and predicted event yields is shown. The shape of the \ensuremath{\cPqt\PH}\xspace\ signal is indicated with 800 times the SM.
\label{fig:bb_prefit}}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figure_010-a.pdf}
\includegraphics[width=0.48\textwidth]{Figure_010-b.pdf}
\end{center}
\caption{Post-fit classifier outputs of the signal classification BDT as used in the maximum likelihood fit for the 3 tag channel (left) and the 4 tag channel (right). In the box below each distribution, the ratio of the observed and predicted event yields is shown. The shape of the \ensuremath{\cPqt\PH}\xspace\ signal is indicated with 800 times the SM.
\label{fig:bb_postfit}}
\end{figure*}
\begin{figure*}[!htb]
\begin{center}
\includegraphics[width=0.48\textwidth]{Figure_011-a.pdf}
\includegraphics[width=0.48\textwidth]{Figure_011-b.pdf}
\end{center}
\caption{Pre- (left) and post-fit (right) classifier outputs of the flavor classification BDT for the dilepton selection. In the box below each distribution, the ratio of the observed and predicted event yields is shown.
\label{fig:bb_dilepton}}
\end{figure*}
\subsection{Systematic uncertainties}\label{ssec:bbsystematics}
Many systematic uncertainties affect the result of the analysis, arising both from experimental and theoretical sources.
All uncertainties are parametrized as nuisance parameters in the statistical inference performed in the final analysis step described in Section~\ref{sec:results}.
The uncertainty in the signal normalization due to the choice of factorization and renormalization scales is evaluated by changing their values to double and half of the nominal values.
A rate uncertainty of around $5\%$ is assigned to each process to account for the choice of PDFs, since shape variations are found to be negligible.
Furthermore, for each \ensuremath{\ttbar{+}\mathrm{HF}}\xspace\ category, an individual $50\%$ rate uncertainty is assigned, since the modeling of these components is theoretically difficult and cross section measurements are affected by large systematic uncertainties~\cite{Khachatryan:2015mva,Sirunyan:2017snr}.
The observed top quark \pt\ spectrum in \ttbar\ events is found to be softer than the theoretical prediction~\cite{Khachatryan:2015oqa}.
A systematic uncertainty for this effect is derived by applying event-by-event weights that correct the disagreement.
Efficiency corrections for the selection of isolated leptons by the trigger and quality requirements are evaluated with a tag-and-probe method.
Uncertainties in correcting the distribution of pileup interactions are accounted for by varying the total inelastic cross section by $4.6\%$~\cite{Sirunyan:2018nqx}.
The corrections applied to the jet energy scale and resolution are varied within their given uncertainties and the migration between different categories is used to determine the effect.
In addition, the contribution to \ptmiss\ of unclustered particles is varied within the resolution of each particle~\cite{Khachatryan:2014gga}.
The \cPqb\ tagging efficiencies for jets are measured in QCD multijet and \ttbar\ enriched samples and varied within their uncertainties~\cite{Sirunyan:2017ezt}.
As for the multilepton channel, an uncertainty of $2.5\%$ is assigned to the integrated luminosity~\cite{LUM-17-001} and affects the normalization of all processes.
The dominant systematic uncertainties are those related to the factorization and renormalization scales, as well as the uncertainties in the overall normalization of the \ensuremath{\ttbar{+}\mathrm{HF}}\xspace\ processes and the jet energy corrections.
\section{Reinterpretation of the \texorpdfstring{$\PH\to\ensuremath{\gamma\gamma}\xspace$}{H to gamma gamma} measurement}\label{sec:aa}
The standard model \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ signal processes with $\PH\to\Pgg\Pgg$ were included in previous measurements of the Higgs boson properties in the inclusive diphoton final state~\cite{Sirunyan:2018ouh}.
Events with two prompt high-\pt\ photons were divided into different event categories, each enriched with a particular production mechanism of the Higgs boson.
The \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ processes contribute mostly to the ``\ensuremath{\ttbar\PH}\xspace\ hadronic'' and ``\ensuremath{\ttbar\PH}\xspace\ leptonic'' categories as defined in Ref.~\cite{Sirunyan:2018ouh}, which target the \ensuremath{\ttbar\PH}\xspace\ process for fully hadronic top quark decays and for single-lepton or dilepton decay modes, respectively.
Events in the \ensuremath{\ttbar\PH}\xspace\ leptonic category are selected to have at least one lepton well separated from the photons, and well reconstructed, as well as at least two jets of which at least one passes the medium \cPqb\ tagging requirement.
The \ensuremath{\ttbar\PH}\xspace\ hadronic category is defined as events with no identified leptons and at least three jets, of which at least one is \cPqb\ tagged with the loose working point.
The signal is modeled with a sum of Gaussian functions describing the diphoton invariant mass ($m_{\Pgg\Pgg}$) shape derived from simulation.
The background contribution is determined from the data without reliance on simulated events, using the discrete profiling method~\cite{Dauncey:2014xga,Khachatryan:2014ira,Aad:2015zhl}.
Different classes of models describing the falling $m_{\Pgg\Pgg}$ distribution in the background processes are used as input to the method.
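In essence, the discrete profiling treats the choice of background function as a discrete nuisance parameter: each candidate is fit and the best is retained after penalizing additional degrees of freedom. A minimal sketch, with the 0.5-per-parameter correction taken as an assumption about the prescription in the method papers:

```python
def discrete_profile(nlls, n_params, penalty=0.5):
    """
    Among candidate background functions with fitted negative
    log-likelihoods 'nlls' and free-parameter counts 'n_params', pick
    the index minimizing NLL + penalty * n, so that the functional-form
    choice is profiled like a nuisance parameter.
    """
    scores = [nll + penalty * n for nll, n in zip(nlls, n_params)]
    return min(range(len(scores)), key=scores.__getitem__)
```

The penalty prevents functions with many parameters from winning purely by overfitting the sidebands.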
Sources of systematic uncertainties affecting the signal model and leading to migrations of signal events among the categories are considered.
The inputs to Ref.~\cite{Sirunyan:2018ouh} from the \ensuremath{\ttbar\PH}\xspace\ categories are used here in a combination with the multilepton and single-lepton + \bbbar\ channels to put constraints on the coupling modifier \ensuremath{\kappa_\cPqt}\xspace\ and on the production cross section of \ensuremath{\cPqt\PH}\xspace\ events.
The coupling modifiers \ensuremath{\kappa_\cPqt}\xspace\ and \ensuremath{\kappa_\text{V}}\xspace\ affect both the \ensuremath{\cPqt\PH}\xspace\ and \ensuremath{\ttbar\PH}\xspace\ production cross sections, as well as the Higgs boson decay branching fraction into two photons through the interference of bosonic and fermionic loops.
Changes in the kinematic properties of the \ensuremath{\cPqt\PH}\xspace\ signal arising from the modified couplings are taken into account by considering their effect on the signal acceptance and selection efficiency.
Figure~\ref{fig:effaa} shows the modified \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace\ selection efficiencies including acceptances for the two relevant categories of the $\PH\to\ensuremath{\gamma\gamma}\xspace$ measurement as a function of the ratio of coupling modifiers $\ensuremath{\kappa_\cPqt}\xspace/\ensuremath{\kappa_\text{V}}\xspace$.
The signal diphoton mass shape is found to be independent of $\ensuremath{\kappa_\cPqt}\xspace/\ensuremath{\kappa_\text{V}}\xspace$.
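The strong \ensuremath{\kappa_\cPqt}\xspace\ dependence of the \ensuremath{\cPqt\PH\Pq}\xspace\ rate comes from the interference term in its quadratic coupling structure. The sketch below uses approximate 13\TeV coefficients quoted in the literature; they are an assumption here, not values derived in this analysis.

```python
def thq_xsec_ratio(kt, kv=1.0):
    """
    Ratio of the tHq cross section to its SM value as a function of the
    coupling modifiers.  Coefficients are approximate 13 TeV literature
    values (assumption); the negative interference term is what makes
    kt = -1 enhance the rate by roughly an order of magnitude.
    """
    return 2.63 * kt**2 + 3.58 * kv**2 - 5.21 * kt * kv
```

By construction the ratio is 1 at the SM point $(\kappa_\text{t}, \kappa_\text{V}) = (1, 1)$ and grows to about 11 at $\kappa_\text{t} = -1$.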
\begin{figure}[htb]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_012.pdf}
\caption{Acceptance and selection efficiency for the \ensuremath{\cPqt\PH\Pq}\xspace\ (red) and \ensuremath{\cPqt\PH\PW}\xspace\ (blue) signal processes as a function of $\ensuremath{\kappa_\cPqt}\xspace/\ensuremath{\kappa_\text{V}}\xspace$, for the \ensuremath{\ttbar\PH}\xspace\ leptonic (solid lines) and \ensuremath{\ttbar\PH}\xspace hadronic categories (dashed lines) of the $\PH\to\ensuremath{\gamma\gamma}\xspace$ measurement.}
\label{fig:effaa}
\end{figure}
The dependence of the signal acceptance and efficiency on $\ensuremath{\kappa_\cPqt}\xspace/\ensuremath{\kappa_\text{V}}\xspace$ is implemented in the same statistical framework as that of Ref.~\cite{Sirunyan:2018ouh}, modifying the signal only in the \ensuremath{\ttbar\PH}\xspace\ categories.
\section{Results and interpretation}\label{sec:results}
The different discriminator output distributions in the multilepton and single-lepton + \bbbar\ channels and the $\ensuremath{\gamma\gamma}\xspace$ invariant mass distributions in the diphoton channel are compared to the data in a combined maximum likelihood fit for various assumptions on the signal kinematics and normalizations, and are used to derive constraints on the signal yields.
The event selections in the different channels are mutually exclusive, therefore allowing a straightforward combination.
Common systematic uncertainties such as the integrated luminosity normalization, the \cPqb\ tagging uncertainties, and the theoretical uncertainties related to the signal modeling are taken to be fully correlated among different channels.
A profile likelihood scan is performed as a function of the coupling modifier \ensuremath{\kappa_\cPqt}\xspace, which affects the production cross sections of the three signal components \ensuremath{\cPqt\PH\Pq}\xspace, \ensuremath{\cPqt\PH\PW}\xspace, and \ensuremath{\ttbar\PH}\xspace, as well as the Higgs boson branching fractions.
Modifications of the loop-induced Higgs boson decays to \ensuremath{\gamma\gamma}\xspace, $\PZ\gamma$, and gluon-gluon final states also affect the branching fractions of the other decay channels.
Furthermore, the kinematic properties of the two \ensuremath{\cPqt\PH}\xspace\ processes and thereby the shape of the classifier outputs entering the fit depend on the value of $\ensuremath{\kappa_\cPqt}\xspace$.
Systematic uncertainties are included in the form of nuisance parameters in the fit and are treated via the frequentist paradigm, as described in Refs.~\cite{ATL-PHYS-PUB-2011-011,HIG-11-032}.
Uncertainties affecting the normalization are constrained either by $\Gamma$-function distributions, if they are statistical in origin, or by log-normal probability density functions.
Systematic uncertainties that affect both the normalization and shape in the discriminating observables are included in the fit using the technique detailed in Ref.~\cite{Conway:2011in}, and represented by Gaussian probability density functions.
Table~\ref{tab:systematics} shows the impact of the most important groups of nuisance parameters on the \ensuremath{\cPqt\PH+\ttH}\xspace\ signal yield.
Pre-fit systematic uncertainties of the same groups are shown for comparison.
\begin{table}[htb]
\topcaption{Summary of the main sources of systematic uncertainty. $\Delta\mu/\mu$ corresponds to the relative change in \ensuremath{\cPqt\PH+\ttH}\xspace\ signal yield induced by varying the systematic source within its associated uncertainty.\label{tab:systematics}}
\begin{center}
\begin{scotch}{lcc}
Source & Uncertainty [\%] & $\Delta\mu/\mu$ [\%]\\
\hline
\Pe, \Pgm\ selection efficiency & 2--4 & 17 \\
$\cPqb$ tagging efficiency & 2--15 & 6 \\
Jet energy calibration & 2--15 & 3 \\
Forward jet modeling & 10--35 & 3 \\
Integrated luminosity & 2.5 & 10 \\
Reducible background estimate & 10--40 & 14 \\
Theoretical sources & $\approx$10 & 14 \\
\ensuremath{\ttbar{+}\mathrm{HF}}\xspace\ normalization & $\approx$50 & 7 \\
PDFs & 2--6 & 8 \\
\end{scotch}
\end{center}
\end{table}
To derive constraints on \ensuremath{\kappa_\cPqt}\xspace\ for a fixed value of $\ensuremath{\kappa_\text{V}}\xspace=1.0$, a scan of the likelihood ratio $\mathcal{L}(\ensuremath{\kappa_\cPqt}\xspace)/\mathcal{L}(\hat{\ensuremath{\kappa_\cPqt}\xspace})$ is performed, where $\hat{\ensuremath{\kappa_\cPqt}\xspace}$ is the best fit value of \ensuremath{\kappa_\cPqt}\xspace.
Figure~\ref{fig:nll_scan} shows the negative of twice the logarithm of this likelihood ratio ($-2\Delta\ln{(\mathcal{L})}$), for scans on the data, and for an Asimov data set~\cite{Cowan2011} with SM expectations for \ensuremath{\ttbar\PH}\xspace\ and \ensuremath{\cPqt\PH}\xspace.
On this scale, a 95\% confidence interval covers values below 3.84, while one, two, three, and four standard deviations correspond to values of 1, 4, 9, and 16, respectively.
The expected performance for an SM-like signal is to favor a value of $\ensuremath{\kappa_\cPqt}\xspace=1.0$ over one of $\ensuremath{\kappa_\cPqt}\xspace=-1.0$ by more than four standard deviations, and to exclude values outside the range of about $[-0.5, 1.6]$ at 95\% confidence level (\CL).
In the combined scan, the data slightly favor a positive value of \ensuremath{\kappa_\cPqt}\xspace\ over a negative one, by about 1.5 standard deviations, while excluding values outside the ranges of about $[-0.9, -0.5]$ and $[1.0, 2.1]$ at 95\% \CLnp.
The sensitivity is driven by the \ensuremath{\gamma\gamma}\xspace\ channel at negative values of the coupling modifiers and by the multilepton channels at positive values.
An excess of observed over expected events is seen both in the multilepton and \ensuremath{\gamma\gamma}\xspace\ channels, with a combined significance of about two standard deviations.
Consequently, floating a signal strength modifier (defined as the ratio of the fitted signal cross section to the SM expectation) of a combined \ensuremath{\cPqt\PH+\ttH}\xspace\ signal yields a best fit value of $2.00\pm0.53$ under the SM hypothesis.
These results are in agreement with those from the dedicated \ensuremath{\ttbar\PH}\xspace\ searches~\cite{cms_tthobs}, as expected, since they share a large fraction of events with the data set used here.
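The likelihood-ratio thresholds quoted above follow from Wilks' theorem: for a single parameter of interest, $-2\Delta\ln{(\mathcal{L})}$ is asymptotically $\chi^2$-distributed with one degree of freedom. A minimal numerical check (illustrative Python only, not part of the analysis code):

```python
# Thresholds on -2*Delta*ln(L) for one parameter of interest, assuming the
# asymptotic chi-squared distribution with one degree of freedom (Wilks).
from scipy.stats import chi2

# 95% CL threshold: values of -2*Delta*ln(L) below this are covered
print(round(chi2.ppf(0.95, df=1), 2))  # 3.84

# n-sigma levels correspond to -2*Delta*ln(L) = n^2, i.e. 1, 4, 9, 16, ...
print([n**2 for n in (1, 2, 3, 4)])
```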
To establish limits on \ensuremath{\cPqt\PH}\xspace\ production, a different signal strength parameter is introduced for the combination of \ensuremath{\cPqt\PH\Pq}\xspace\ and \ensuremath{\cPqt\PH\PW}\xspace, not including \ensuremath{\ttbar\PH}\xspace.
A maximum likelihood fit for this signal strength is then performed based on the profile likelihood test statistic~\cite{ATL-PHYS-PUB-2011-011,HIG-11-032} at fixed points of \ensuremath{\kappa_\cPqt}\xspace.
Upper limits on the signal strength are then derived using the \CLs method~\cite{Junk:1999kv,Read:2002hq} and using asymptotic formulae for the distribution of the test statistic~\cite{Cowan2011}.
They are multiplied by the \ensuremath{\kappa_\cPqt}\xspace-dependent \ensuremath{\cPqt\PH}\xspace\ production cross section times the combined Higgs boson branching fractions to $\PW\PW^*+\ensuremath{\Pgt\Pgt}\xspace+\PZ\PZ^*+\bbbar+\ensuremath{\gamma\gamma}\xspace$ and are shown in Fig.~\ref{fig:limits_smexp}.
Limits for the SM and for a scenario with $\ensuremath{\kappa_\cPqt}\xspace=-1.0$ for the individual channels are shown in Table~\ref{tab:limits}.
The \ensuremath{\ttbar\PH}\xspace\ contribution is kept fixed to its \ensuremath{\kappa_\cPqt}\xspace-dependent expectation.
The fiducial cross section for SM-like \ensuremath{\cPqt\PH}\xspace\ production is limited to about 1.9\unit{pb}, with an expected limit of 0.9\unit{pb}, corresponding, respectively, to about 25 and 12 times the expected cross section times branching fraction in the combination of the channels explored.
The visible discrepancy between observed and expected limits around $\ensuremath{\kappa_\cPqt}\xspace=0.0$ is caused by the fact that the predicted \ensuremath{\ttbar\PH}\xspace\ cross section vanishes in that scenario while the data favor even larger than expected yields for \ensuremath{\ttbar\PH}\xspace.
\begin{table}[htb]
\topcaption{Expected and observed 95\% \CL upper limits on the \ensuremath{\cPqt\PH}\xspace\ production cross section times $\PH\to\PW\PW^*+\ensuremath{\Pgt\Pgt}\xspace+\PZ\PZ^*+\bbbar+\ensuremath{\gamma\gamma}\xspace$ branching fraction for a scenario of inverted couplings ($\ensuremath{\kappa_\cPqt}\xspace=-1.0$, top rows), vanishing top quark Yukawa coupling ($\ensuremath{\kappa_\cPqt}\xspace=0.0$, middle rows), and for an SM-like signal ($\ensuremath{\kappa_\cPqt}\xspace=1.0$, bottom rows), in pb. The expected limit is calculated on a background-only data set, \ie, without \ensuremath{\cPqt\PH}\xspace\ contribution, but including a \ensuremath{\kappa_\cPqt}\xspace-dependent contribution from the \ensuremath{\ttbar\PH}\xspace production. The \ensuremath{\ttbar\PH}\xspace\ normalization is kept fixed in the fit, while the \ensuremath{\cPqt\PH}\xspace\ signal strength is allowed to float. Limits can be compared to the expected product of \ensuremath{\cPqt\PH}\xspace\ cross sections and branching fractions of 0.83, 0.28, and 0.077\unit{pb} for the inverted top quark Yukawa coupling, the $\ensuremath{\kappa_\cPqt}\xspace=0$ scenario, and for the SM, respectively.\label{tab:limits}}
\begin{center}
\begin{scotch}{llcc}
Scenario & Channel & Observed & Expected \\
\multirow{4}{*}{$\ensuremath{\kappa_\cPqt}\xspace=-1$}
& \bbbar\ & $4.98\unit{pb}$ & $2.52\,^{+1.29}_{-0.81}\unit{pb}$ \\
& \ensuremath{\gamma\gamma}\xspace\ & $0.84\unit{pb}$ & $0.88\,^{+0.46}_{-0.28}\unit{pb}$ \\
& $\ensuremath{\Pgm^\pm\Pgm^\pm}\xspace+\ensuremath{\Pepm\Pgm^\pm}\xspace+\ensuremath{\ell\ell\ell}\xspace$ & $0.85\unit{pb}$ & $0.77\,^{+0.36}_{-0.24}\unit{pb}$ \\
& Combined & $0.74\unit{pb}$ & $0.53\,^{+0.24}_{-0.16}\unit{pb}$ \\
[\cmsTabSkip]
\multirow{4}{*}{\shortstack[l]{$\ensuremath{\kappa_\cPqt}\xspace=0$ \\ ($\ensuremath{\ttbar\PH}\xspace=0$)}}
& \bbbar\ & $5.18\unit{pb}$ & $2.60\,^{+1.35}_{-0.84}\unit{pb}$ \\
& \ensuremath{\gamma\gamma}\xspace\ & $2.63\unit{pb}$ & $0.96\,^{+0.50}_{-0.31}\unit{pb}$ \\
& $\ensuremath{\Pgm^\pm\Pgm^\pm}\xspace+\ensuremath{\Pepm\Pgm^\pm}\xspace+\ensuremath{\ell\ell\ell}\xspace$ & $0.83\unit{pb}$ & $0.76\,^{+0.36}_{-0.23}\unit{pb}$ \\
& Combined & $1.50\unit{pb}$ & $0.54\,^{+0.25}_{-0.16}\unit{pb}$ \\
[\cmsTabSkip]
\multirow{4}{*}{\shortstack[l]{$\ensuremath{\kappa_\cPqt}\xspace=1$ \\ (SM-like)}}
& \bbbar\ & $6.88\unit{pb}$ & $3.19\,^{+1.64}_{-1.02}\unit{pb}$ \\
& \ensuremath{\gamma\gamma}\xspace\ & $3.68\unit{pb}$ & $2.03\,^{+1.05}_{-0.67}\unit{pb}$ \\
& $\ensuremath{\Pgm^\pm\Pgm^\pm}\xspace+\ensuremath{\Pepm\Pgm^\pm}\xspace+\ensuremath{\ell\ell\ell}\xspace$ & $1.36\unit{pb}$ & $1.18\,^{+0.53}_{-0.35}\unit{pb}$ \\
& Combined & $1.94\unit{pb}$ & $0.92\,^{+0.40}_{-0.27}\unit{pb}$ \\
\end{scotch}
\end{center}
\end{table}
\begin{figure}[htb]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_013.pdf}
\caption{Scan of $-2\Delta\ln{(\mathcal{L})}$ versus \ensuremath{\kappa_\cPqt}\xspace\ for the data (black line) and the individual channels (blue, red, and green), compared to Asimov data sets corresponding to the SM expectations (dashed lines).}
\label{fig:nll_scan}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=\cmsFigWidth]{Figure_014.pdf}
\caption{Observed and expected 95\% \CL upper limit on the \ensuremath{\cPqt\PH}\xspace\ cross section times combined $\PH\to\PW\PW^*+\ensuremath{\Pgt\Pgt}\xspace+\PZ\PZ^*+\bbbar+\ensuremath{\gamma\gamma}\xspace$ branching fraction for different values of the coupling ratio \ensuremath{\kappa_\cPqt}\xspace. The expected limit is calculated on a background-only data set, \ie, without \ensuremath{\cPqt\PH}\xspace\ contribution, but including a \ensuremath{\kappa_\cPqt}\xspace-dependent contribution from \ensuremath{\ttbar\PH}\xspace. The \ensuremath{\ttbar\PH}\xspace\ normalization is kept fixed in the fit, while the \ensuremath{\cPqt\PH}\xspace\ signal strength is allowed to float.
}
\label{fig:limits_smexp}
\end{figure}
\section{Summary}\label{sec:conclusion}
Events from proton-proton collisions at $\sqrt{s}=13\TeV$ compatible with the production of a Higgs boson (\PH) in association with a single top quark (\cPqt) have been studied to derive constraints on the magnitude and relative sign of Higgs boson couplings to top quarks and vector bosons.
Dedicated analyses of multilepton final states and final states with single leptons and a pair of bottom quarks are combined with a reinterpretation of a measurement of Higgs bosons decaying to two photons for the final result.
For standard model-like Higgs boson couplings to vector bosons, the data favor a positive value of the top quark Yukawa coupling, \ensuremath{y_\cPqt}\xspace, by 1.5 standard deviations and exclude values outside the ranges of about $[-0.9, -0.5]$ and $[1.0, 2.1]$ times $\ensuremath{y_\cPqt}\xspace^\mathrm{SM}$ at the 95\% confidence level.
An excess of events compared with expected backgrounds is observed, but it is still compatible with the standard model expectation for \ensuremath{\cPqt\PH+\ttH}\xspace\ production.
\begin{acknowledgments}
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract No. 675440 (European Union); the Leventis Foundation; the A. P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science - EOS" - be.h project n. 30820817; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850 and 125105 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA). \end{acknowledgments}
Hidden Markov models (HMMs) are flexible probabilistic models for sequential data which assume the observations to depend on an underlying latent state process.
Emerging from the field of speech recognition \citep{rab89}, they find applications in various areas, such as medicine \citep{lan13}, psychology \citep{vis02}, finance \citep{ngu18}, and ecology \citep{beu20}, where they are used for classification tasks, forecasting, or general inference on the data-generating process; for an overview of the various HMM applications, see, for example, \citet{zuc16}. In an HMM's basic model formulation, the underlying state sequence is assumed to be a finite-state first-order Markov chain. This assumption is mathematically and computationally very convenient and allows for an efficient likelihood evaluation and inference \citep{zuc16}. However, it also implicitly restricts the state dwell time, that is the number of consecutive time points spent in a given state, to follow a geometric distribution. Thus, the modal dwell time is fixed at one and the dwell time's distributional shape, with a strictly monotonically decreasing probability function, is completely predefined \citep{lan11}. This might be appropriate for some applications, but inappropriate or too restrictive for others. Examples for the latter include the modelling of daily share returns \citep{bul06}, the analysis of rainfall event data \citep{san01}, and speech unit modelling \citep{gue90}.
Hidden semi-Markov models (HSMMs) overcome this limitation by assuming the underlying state sequence to be a semi-Markov chain, thereby allowing for arbitrary dwell-time distributions defined on the natural numbers. First introduced in the field of speech recognition \citep{fer80}, the additional flexibility makes HSMMs attractive for various areas of application; an overview is provided by \citet{yu10}. However, in order to formulate an HSMM and apply it to data, again some class of dwell-time distributions must be chosen. This raises a new problem: How to select distributions which adequately describe the unobserved states' dwell times? The usual choice is a family of standard discrete parametric distributions, such as the (shifted) Poisson or negative binomial \citep{bul06,eco14,van15}. In that case, the geometric dwell-time distribution implied by conventional HMMs is replaced by another parametric distribution, which again corresponds to a restrictive assumption on the distribution's shape, and hence on the way the state process evolves over time.
An alternative approach which avoids restrictions on the distribution's shape is the use of discrete non-parametric distributions, that is, for each dwell time and state, an individual dwell-time probability is estimated (see, for example, \citealp{san01, gue03}). Such procedures usually require finite dwell-time domains with fixed maximum dwell times for each state \citep{bul10}. This is not necessarily restrictive if the domain is chosen large enough to capture the main dwell-time support, however, a large domain implies a large number of parameters to be estimated. Thus, usually, a large number of observations is needed to fit the model \citep{bul10}. More importantly, there is a high risk to obtain wiggly dwell-time distributions with implausible gaps and spikes. Consequently, the estimation could suffer from both overfitting and numerical instability due to probabilities estimated close to zero.
We aim to overcome these problems by proposing a penalised maximum likelihood (PML) approach that allows for the exploration of the underlying state dynamics in a data-driven way while providing flexible yet smooth estimates. Our method is built on dwell-time distributions with an unstructured (i.e.\ `non-parametric') start and a geometric tail \citep{san01, lan11} to avoid the use of finite dwell-time domains. The introduced penalty term then penalises higher-order differences between adjacent dwell-time probabilities of the unstructured start. This leads to smoothed probability functions and thereby helps to avoid overfitting. Using a state expansion trick, the considered HSMM can exactly be represented by an HMM, thereby opening the way for an efficient likelihood evaluation and numerical (penalised) maximum likelihood estimation \citep{lan11}.
The remaining paper is structured as follows: In Section \ref{Sec2}, we discuss the HSMM model formulation and introduce our PML approach. Section \ref{Sec3} illustrates the feasibility and potential usefulness of the method with a real data case study using movement data from a muskox tracked in northeast Greenland. We conclude with a discussion in Section \ref{Sec4}.
\section{Methodology}\label{Sec2}
\subsection{Hidden semi-Markov models}\label{Sec2.1}
An HSMM is a doubly stochastic process comprising a latent $N$-state semi-Markov chain $\{S_{t}\}_{t=1}^{T}$ and an observed state-dependent process $\{Y_{t}\}_{t=1}^{T}$. Its basic dependence structure is illustrated in Figure \ref{fig:HSMM}. The model assumes that at each time point, the observation $Y_{t}$ is generated by one out of $N$ \textit{state-dependent distributions} $f(y_{t}|S_{t}=i)=f_{i}(y_{t})$, $i=1,\ldots,N$, as selected by the current state. Thus, given the current state $S_t=s_t$, $Y_{t}$ is assumed to be conditionally independent of past observations and states. Note that here, $f$ is used either to denote a probability mass function, if $Y_{t}$ is discrete, or a density function, if $Y_{t}$ is continuous-valued. For multivariate time series, $\mathbf{Y}_{t}=(Y_{1,t},\ldots,Y_{p,t})$, another simplifying assumption is often made, that is, given the current state $S_{t}=s_{t}$, the observations are contemporaneously conditionally independent of each other: $f(\mathbf{y}_t|S_{t}=s_{t})=\prod_{k=1}^{p} f(y_{k,t}|S_{t}=s_{t})$. This allows suitable classes of univariate distributions to be chosen for the different variables observed. Alternatively, multivariate state-dependent distributions can be used.
The underlying semi-Markov chain $\{S_{t}\}_{t=1}^{T}$ is described by two components: (i) Whenever the chain enters a new state $i$ at some time point $t$, a draw from the corresponding \textit{state dwell-time distribution} $d_{i}$ determines the number of consecutive time points the chain spends in that state. It is defined by its probability mass function (PMF)
$$
d_{i}(r)=\Pr(S_{t+r} \neq i, S_{t+r-1}=i,\ldots,S_{t}=i|S_{t}=i,S_{t-1} \neq i),
$$
with $r\in \mathbb{N}$ denoting the duration; (ii) The state switching is described by an \textit{embedded Markov chain} with \textit{conditional transition probabilities} $\omega_{ij}=\Pr(S_{t}=j|S_{t-1}=i, S_{t} \neq i)$, summarised in the $N \times N$ conditional transition probability matrix $\boldsymbol{\Omega}$ with $\omega_{ii}=0$. The \textit{initial distribution} describes the state probabilities at $t=1$, $\bm{\delta}=(\Pr(S_{1}=1),\ldots,\Pr(S_{1}=N))$.
In case that all state dwell times are geometrically distributed, the HSMM reduces to the special case of an HMM and the underlying state-sequence $\{S_{t}\}_{t=1}^{T}$ becomes a first-order Markov chain. The state-switching is then characterised by the $N \times N$ \textit{transition probability matrix} (TPM) $\boldsymbol{\Gamma}=(\gamma_{ij})$ with $\gamma_{ij}=\Pr(S_{t}=j|S_{t-1}=i)$ denoting the \textit{transition probabilities}. This automatically implies the geometric dwell-time distribution with $d_{i}(r)=(1-\gamma_{ii})\gamma_{ii}^{r-1}$ for each state $i=1,\ldots,N$.
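The restriction implied by the geometric dwell-time distribution can be made concrete with a short numerical sketch (illustrative Python; the value $\gamma_{ii}=0.8$ is an arbitrary choice for demonstration):

```python
import numpy as np

# Dwell-time PMF implied by an HMM with self-transition probability gamma_ii:
# d_i(r) = (1 - gamma_ii) * gamma_ii^(r-1), a geometric distribution.
gamma_ii = 0.8
r = np.arange(1, 200)
d = (1 - gamma_ii) * gamma_ii ** (r - 1)

print(d[:3].round(3))          # strictly decreasing PMF: mode fixed at r = 1
print(round((r * d).sum(), 2))  # mean dwell time 1/(1 - gamma_ii) = 5
```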
The parameter vector $\bm{\theta}$ characterising an $N$-state HSMM contains the parameters defining the dwell-time distributions $d_i(r)$ and the state-dependent distributions $f_{i}(y_{t})$, for $i=1,\ldots,N$, the conditional transition probabilities $\omega_{ij}$, for $i,j=1,\ldots,N$, $i\neq j$, and the initial probabilities $\delta_i$, $i=1,\ldots,N$. Thus, for parameter estimation, it is necessary to choose classes of parametric or non-parametric state-dependent and state dwell-time distributions. Although not trivial, the former can usually be chosen and evaluated directly based on an inspection of the observations at hand. For instance, for daily share return data, normal or t-distributions are common options \citep{bul06, oel20}, and for movement data, gamma or Weibull distributions are often suitable to model the observed step lengths \citep{lan12}. The state dwell times, however, are usually unobserved, which makes the choice of appropriate distributions difficult. As a way to solve this problem, in the subsequent section, we propose a penalised maximum likelihood approach which avoids strong assumptions about the distributions' shape.
\begin{center}
\begin{figure}[!t]
\centering
\begin{tikzpicture}[node distance = 1.5cm]
\tikzset{state/.style = {circle, draw, minimum size = 55pt, scale = 0.725}}
\node [state] (1) at (0,0) {$S_{t-1}=j$};
\node [state] (2) at (2,0) {$S_{t}=i$};
\node [state] (3) at (4,0) {$S_{t+1}=i$};
\node [] (4) at (6,0) {$\ldots$};
\node [state] (5) at (8,0) {$S_{t+r-1}=i$};
\node [state] (6) at (10,0) {$S_{t+r}=j$};
\node [state] (7) at (0,2) {$Y_{t-1}$};
\node [state] (8) at (2,2) {$Y_{t}$};
\node [state] (9) at (4,2) {$Y_{t+1}$};
\node [state] (11) at (8,2) {$Y_{t+r-1}$};
\node [state] (12) at (10,2) {$Y_{t+r}$};
\draw[->, line width=0.3pt, black] (1) to (2);
\draw[->, line width=0.3pt,black] (2) to (3);
\draw[->, line width=0.3pt,black] (3) to (4);
\draw[->, line width=0.3pt,black] (4) to (5);
\draw[->, line width=0.3pt,black] (5) to (6);
\draw[->, line width=0.3pt,black] (1) to (7);
\draw[->, line width=0.3pt,black] (2) to (8);
\draw[->, line width=0.3pt,black] (3) to (9);
\draw[->, line width=0.3pt,black] (5) to (11);
\draw[->, line width=0.3pt,black] (6) to (12);
\draw[decoration={brace,mirror,raise=0.5cm,amplitude=1em},decorate,thick] (2,-0.6) to (8,-0.6);
\node[] (13) at (5,-2) {\footnotesize{dwell time $r$ drawn from $d_i$}};
\draw [->] (1) to [out=-60,in=-120] (2);
\node[] (15) at (1,-1.2) {\footnotesize{$\omega_{ji}$}};
\draw [->] (5) to [out=-60,in=-120] (6);
\node[] (16) at (9,-1.2) {\footnotesize{$\omega_{ij}$}};
\end{tikzpicture}
\caption{Dependence structure of an HSMM. Whenever the semi-Markov chain enters a new state $i$ at time $t$, the dwell time $r$, i.e.\ the time spent in that state, is drawn from the corresponding dwell-time distribution $d_i(r)$. Consequently, a state switch must occur at time $t+r$ and state $j$ is entered with the conditional probability $\omega_{ij}$. Each observation $Y_t$ depends on the corresponding state $S_t$ and is generated by the associated state-dependent distribution $f_{S_{t}}(y_t)$.}
\label{fig:HSMM}
\end{figure}
\end{center}
\subsection{Flexible estimation of the state dwell-time distributions}\label{Sec2.2}
\subsubsection{Flexible dwell-time distributions and HMM representation}\label{Sec2.2.1}
Similar to \citet{san01} and \citet{lan11}, we consider dwell-time distributions with an unstructured start and a geometric tail. That is, for each state $i=1,\ldots, N$ and dwell times $r \in \{1,2,\ldots,R_{i}\}$, we assign a parameter $\pi_{i,r}$ to each individual dwell-time probability $d_{i}(r)$, where $R_i$ denotes the upper boundary for the unstructured start. A geometric tail accounts for dwell-times $r>R_{i}$:
$$d_{i}(r)=\begin{cases}
\pi_{i,r} & \text{if } 0<r\leq R_{i}; \\
\pi_{i,R_{i}}\left(\ \cfrac{1-\sum_{r=1}^{R_{i}} \pi_{i,r}}{1-\sum_{r=1}^{R_{i}-1} \pi_{i,r}} \right)^{r-R_{i}} & \text{if }r>R_{i},
\end{cases}
$$
with $0 < \pi_{i,r} < 1$ and $\sum_{r=1}^{R_{i}} \pi_{i,r} < 1$. This allows for a flexible and data-driven shape on the support $\{1,\ldots,R_i\}$ while avoiding a restriction for the dwell-time domain. Usually, only small ranges are considered for the unstructured start (for instance, $R_i=1$ in \citealp{san01}; $R_i \in \{1,2,3\}$ in \citealp{lan11}); for our purposes, however, the upper boundary $R_{i}$ should be chosen large enough to capture the main dwell-time support. This can be explored by initially using large values for $R_{i}$, which can subsequently be replaced by suitable smaller values.
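The dwell-time PMF above can be sketched directly in Python (the function and variable names are ours, chosen for illustration):

```python
import numpy as np

def dwell_pmf(pi, r):
    """Dwell-time probability d_i(r) for one state: unstructured start
    pi = (pi_{i,1}, ..., pi_{i,R_i}) for r <= R_i, geometric tail beyond."""
    pi = np.asarray(pi, dtype=float)
    R = len(pi)
    if r <= R:
        return pi[r - 1]
    # geometric decay rate (1 - sum_{r<=R} pi) / (1 - sum_{r<=R-1} pi)
    tail_ratio = (1.0 - pi.sum()) / (1.0 - pi[:-1].sum())
    return pi[-1] * tail_ratio ** (r - R)

# example with R_i = 3; the unstructured probabilities sum to less than 1,
# leaving mass 0.2 for the geometric tail with decay rate 0.2/0.4 = 0.5
pi = [0.2, 0.4, 0.2]
print(dwell_pmf(pi, 2))  # 0.4 (unstructured start)
print(dwell_pmf(pi, 4))  # ~0.1 (= 0.2 * 0.5, first tail value)
```

Summing `dwell_pmf` over a large range of dwell times confirms that the probabilities add to one.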
Using a state-space expansion and a suitable block structure in the resulting enlarged TPM, an HSMM with such dwell-time distributions can \textit{exactly} be represented as an HMM \citep{lan11,zuc16}. This opens up the way for the efficient standard HMM machinery for parameter estimation and further inference. In the HMM representation, each HSMM state $i$ is represented by a set of $R_{i}$ sub-states forming a so-called state aggregate $I_i=\{\tilde{i}_{1},\ldots, \tilde{i}_{R_{i}}\}$, which leads to a state space of dimension $\tilde{N}=\sum_{i=1}^{N} R_{i}$. We denote the corresponding HMM Markov chain by $\{\tilde{S}_t\}_{t=1}^{T}$. Each HMM sub-state belonging to the state aggregate $I_i$ is associated with the same state-dependent distribution $f_i(y_t)$ and the corresponding transition probabilities are structured and parameterised such that they exactly mirror the HSMM dwell-time distribution $d_i(r)$. For instance, except for the last sub-state $\tilde{i}_{R_{i}}$ which is associated with the geometric tail, no self-transitions are allowed and the state aggregate can only be traversed through in the indexed order, starting with $\tilde{i}_{1}$. This structure is illustrated in Figure \ref{fig:transition_graph} for a 2-state HSMM. For the HMM transition probabilities within the state aggregates, this implies: $\gamma_{\tilde{i}_{r},\tilde{i}_{r}}=\Pr(\tilde{S}_{t}=\tilde{i}_{r}|\tilde{S}_{t-1}=\tilde{i}_{r})=0$ and $\gamma_{\tilde{i}_{r},\tilde{i}_{l}}=\Pr(\tilde{S}_{t}=\tilde{i}_{l}|\tilde{S}_{t-1}=\tilde{i}_{r})=0$ for $r=1,\ldots,R_{i}-1$ and $l \neq r+1$. Furthermore, $\gamma_{\tilde{i}_{R_i},\tilde{i}_{r}}=\Pr(\tilde{S}_{t}=\tilde{i}_{r}|\tilde{S}_{t-1}=\tilde{i}_{R_i})=0$ for $r \neq R_i$. Thus, most of the transition probabilities are fixed to zero. Further details about the HMM representation are provided in the appendix.
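While the exact construction of the enlarged TPM is deferred to the appendix, its block structure can be sketched in code, adapting the hazard-rate construction of \citet{lan11}: from sub-state $r$ of aggregate $i$, the chain either advances within the aggregate with probability $1-c_i(r)$, where $c_i(r)=d_i(r)/(1-\sum_{k<r}d_i(k))$ is the dwell-time hazard rate, or leaves to the first sub-state of aggregate $j$ with probability $c_i(r)\,\omega_{ij}$ (a sketch with illustrative names, not the paper's implementation):

```python
import numpy as np

def expanded_tpm(pis, Omega):
    """Enlarged HMM transition matrix from the unstructured dwell-time
    probabilities pis[i] = (pi_{i,1}, ..., pi_{i,R_i}) and the conditional
    transition matrix Omega (zero diagonal, rows summing to one)."""
    N = len(pis)
    R = [len(p) for p in pis]
    starts = np.cumsum([0] + R[:-1])   # index of first sub-state per aggregate
    Gamma = np.zeros((sum(R), sum(R)))
    for i, pi in enumerate(pis):
        pi = np.asarray(pi, dtype=float)
        cdf = np.concatenate(([0.0], np.cumsum(pi)[:-1]))  # F_i(r-1)
        c = pi / (1.0 - cdf)                               # hazard rates c_i(r)
        for r in range(R[i]):
            row = starts[i] + r
            # advance to the next sub-state, or (last sub-state) self-transition
            stay = row + 1 if r < R[i] - 1 else row        # geometric tail
            Gamma[row, stay] = 1.0 - c[r]
            for j in range(N):         # leave aggregate i with probability c_i(r)
                if j != i:
                    Gamma[row, starts[j]] = c[r] * Omega[i, j]
    return Gamma

# 2-state example with R_1 = R_2 = 2
Gamma = expanded_tpm([[0.3, 0.3], [0.5, 0.2]],
                     np.array([[0.0, 1.0], [1.0, 0.0]]))
print(Gamma.round(3))  # rows sum to 1
```

The self-transition probability of the last sub-state reproduces exactly the decay rate of the geometric tail defined above.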
\begin{center}
\begin{figure}[!t]
\centering
\begin{tikzpicture}[node distance = 1.5cm]
\tikzset{state/.style = {circle, draw, minimum size = 55pt, scale = 0.725},every loop/.style={}}
\node [state] (1) at (-0.55,0) {$\tilde{i}_1$};
\node [state] (2) at (1.5,0) {$\tilde{i}_2$};
\node [] (3) at (3,0) {$\ldots$};
\node [state] (4) at (4.5,0) {$\tilde{i}_{R_{i}}$};
\node [state] (5) at (1.5,2) {$\tilde{j}_{1}$};
\node [] (6) at (3,2) {$\ldots$};
\node [state] (7) at (4.5,2) {$\tilde{j}_{R_{j}}$};
\draw[->, line width=0.3pt, black] (1) to (2);
\draw[->, line width=0.3pt,black] (2) to (3);
\draw[->, line width=0.3pt,black] (3) to (4);
\draw[->, line width=0.3pt,black] (1) to (5);
\draw[->, line width=0.3pt,black] (2) to (5);
\draw[->, line width=0.3pt,black] (4) to (5);
\draw[->, line width=0.3pt,black] (5) to (6);
\draw[->, line width=0.3pt,black] (6) to (7);
\path (4) edge [loop right,<-,line width=0.3pt,black] (4);
\draw [->] (5) to [out=180,in=90] (1);
\path (7) edge [loop right,<-,line width=0.3pt,black] (7);
\draw [->] (7) to [out=120,in=120] (1);
\end{tikzpicture}
\caption{Example transition graph illustrating the structure of the HMM-representation for a 2-state HSMM. The actual HSMM states are represented by the state aggregates $I_{i}=\{\tilde{i}_{1}, \ldots, \tilde{i}_{R_{i}}\}$ and $I_{j}=\{\tilde{j}_{1},\ldots,\tilde{j}_{R_{j}}\}$, respectively.}
\label{fig:transition_graph}
\end{figure}
\end{center}
\subsubsection{Penalised maximum likelihood estimation}\label{Sec2.2.2}
For parameter estimation, we use the HMM representation described above (Section \ref{Sec2.2.1}) and focus on numerical maximisation of the (penalised) log-likelihood. Alternatively, maximum likelihood estimation can be carried out using expectation-maximisation (EM) algorithms specifically tailored for HSMM applications (for example, \citealp{san01,gue03,yu03}). However, they usually assume that a new state is entered at the beginning of the observation period ($t=0$). Besides being unrealistic in some cases, this also impedes stationarity \citep{lan11}. For Bayesian HSMM parameter estimation, see, for example, \citet{eco14}.
Using its HMM representation, the likelihood of the HSMM can efficiently be evaluated using the so-called forward algorithm (see, for example, \citealp{zuc16}). It exploits the fact that the likelihood of an HMM can be written as a matrix product,
$$\mathcal{L}(\boldsymbol{\theta}|y_1,\ldots,y_T)=\bm{\delta}\bm{\Gamma}\mathbf{P}(y_{1})\bm{\Gamma}\mathbf{P}(y_{2})\ldots \bm{\Gamma}\mathbf{P}(y_{T})\bm{1}^\top,$$
where $\bm{\delta}$ is the $\Tilde{N}$-dimensional initial distribution, $\bm{\Gamma}$ is the corresponding $\Tilde{N} \times \Tilde{N}$ TPM (see the appendix for further details on its structure), $\bm{1}$ is an $\Tilde{N}$-dimensional row-vector of ones, and $\mathbf{P}(y_t)$ is an $\tilde{N} \times \tilde{N}$ diagonal matrix containing the state-dependent densities evaluated at $y_t$,
$$\mathbf{P}(y_t)=\text{diag} \bigl(\underbrace{f_{1}(y_{t}),\ldots,f_{1}(y_{t})}_{R_{1} \text{ times}},\ldots,\underbrace{f_{N}(y_{t}),\ldots,f_{N}(y_{t})}_{R_{N} \text{ times}}\bigr).$$
The forward algorithm corresponds to a recursive calculation of the likelihood with computational costs of order $\mathcal{O}(\Tilde{N}^2T)$, which renders numerical maximisation practically feasible. We denote the corresponding log-likelihood by $\ell(\boldsymbol{\theta}|y_1,\ldots,y_T)=\log(\mathcal{L}(\boldsymbol{\theta}|y_1,\ldots,y_T))$.
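A direct implementation of this recursion, with the usual rescaling of the forward probabilities to prevent numerical underflow, might look as follows (illustrative Python; the state-dependent densities are precomputed into an array, and the toy example below uses arbitrary normal densities):

```python
import numpy as np
from scipy.stats import norm

def log_likelihood(delta, Gamma, P):
    """Scaled forward algorithm: log of delta Gamma P(y_1) ... Gamma P(y_T) 1'.
    P is a (T, Ntilde) array with P[t, i] = f_i(y_t) for each sub-state
    (densities repeated within each state aggregate)."""
    logL = 0.0
    phi = delta                      # state distribution before the first step
    for t in range(P.shape[0]):
        v = (phi @ Gamma) * P[t]     # one factor of the matrix product
        u = v.sum()
        logL += np.log(u)            # accumulate the log of the scale factor
        phi = v / u                  # rescale to keep phi a probability vector
    return logL

# toy example: 2 sub-states with normal state-dependent densities
y = np.array([0.1, 2.3, 1.9])
P = np.column_stack([norm.pdf(y, 0, 1), norm.pdf(y, 2, 1)])
print(log_likelihood(np.array([0.5, 0.5]),
                     np.array([[0.9, 0.1], [0.1, 0.9]]), P))
```

The recursion gives the same value as the explicit matrix product, but with $\mathcal{O}(\tilde{N}^2 T)$ cost and without underflow for long series.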
To avoid overfitting with respect to the dwell-time PMFs, we enforce smoothness by adding a penalty term for the $m$-th order differences of adjacent state dwell-time probabilities. Thus, for parameter estimation, we maximise the resulting penalised log-likelihood,
$$\hat{\boldsymbol{\theta}} = \underset{\boldsymbol{\theta}}{\text{argmax}} \; \ell(\boldsymbol{\theta}|y_1,\ldots,y_T)-\sum_{i=1}^{N} \lambda_{i} \sum_{r=m+1}^{R_{i}} (\Delta^m \pi_{i,r})^{2},$$
where $\Delta^m \pi_{i,r}$ denotes the $m$-th order difference, defined recursively by $\Delta \pi_{i,r}=\pi_{i,r}-\pi_{i,r-1}$ and $\Delta^{m}\pi_{i,r}=\Delta^{m-1} (\Delta \pi_{i,r})$. There are three types of tuning parameters which influence the estimation. First, the smoothing parameter vector $\bm{\lambda}=(\lambda_{1},\ldots,\lambda_N)$ controls the balance between goodness-of-fit and smoothness of the dwell-time PMFs $d_{i}(r)$. For $\bm{\lambda}=\bm{0}$, the penalty term vanishes and the estimation reduces to standard maximum likelihood estimation. Since the different states' dwell-time distributions generally require different degrees of smoothing, the smoothing parameters are chosen for each state individually, i.e.\ $\lambda_i \neq \lambda_j$ for $i \neq j$ is possible. A common way to select the smoothing parameters is via cross validation (see \citealp{lan15, ada19}). Second, the difference order $m$ influences the shape of $d_{i}(r)$, especially when $\lambda_{i}$ becomes large. For instance, for $m=1$ and $\lambda_{i} \rightarrow \infty$, $d_i(r)$ approaches a uniform distribution, while for $m=2$ and $\lambda_{i} \rightarrow \infty$, $d_i(r)$ approaches a distribution with a linearly decreasing PMF. Higher-order differences allow for more flexible distributional shapes. We recommend a pragmatic choice of $m$ based on the data at hand, the results arising from an initial unpenalised estimation, and a close inspection of the goodness of fit. Similar to \citet{ada19}, we have found that $m \ge 3$ provides a reasonable choice in many applications. Third, the upper boundary $R_{i}$ determines the range over which $d_{i}(r)$ is explored. If chosen too small, the estimation might miss important patterns of the dwell-time distribution. If chosen very large, numerical instabilities might arise (especially for small $\lambda_{i}$), the required memory increases, and the computational costs become demanding.
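The penalty term itself is simple to compute; below is a sketch for a single state $i$ (Python/NumPy, illustrative names), where `np.diff` with `n=m` yields exactly the $m$-th order differences $\Delta^m \pi_{i,r}$ for $r=m+1,\ldots,R_i$:

```python
import numpy as np

def dwell_penalty(pi, lam, m):
    """Penalty lam * sum_{r=m+1}^{R} (Delta^m pi_r)^2 for one
    state's dwell-time probabilities pi = (pi_1, ..., pi_R)."""
    d = np.diff(pi, n=m)      # m-th order differences, length R - m
    return lam * np.sum(d ** 2)
```

As $\lambda_i \rightarrow \infty$ the penalty forces $\Delta^m \pi_{i,r} \rightarrow 0$, i.e.\ the PMF towards a polynomial of degree $m-1$ in $r$, which is the source of the uniform ($m=1$) and linearly decreasing ($m=2$) limiting shapes just described.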
A simple and pragmatic approach to find suitable boundary values for the unstructured start is to carry out an initial estimation with large values for $R_{i}$, $i=1,\ldots,N$, and no penalisation, i.e.\ $\bm{\lambda}=\bm{0}$. This provides first insights about the core dwell-time support which can then be used to adjust $R_i$ accordingly.
\section{Case study: Investigating dwell times in muskox movements}\label{Sec3}
We illustrate our PML approach using real GPS-based muskox (\textit{Ovibos moschatus}) movement data. For HMMs, movement ecology is an important area of application, with the states usually being interpreted as proxies for the animals' unobserved behavioural modes driving the observed movement patterns \citep{mcc20}. Similarly, HSMMs with parametric (e.g.\ shifted Poisson and negative binomial) dwell-time distributions have successfully been applied in this context \citep{lan12,lan14,van15}. For muskox movements in northeast Greenland, \citet{beu20} found that a 3-state HMM adequately describes the main behavioural states `resting', `foraging', and `relocating'. They applied the model to step length (metre) and turning angle (radian) based on hourly GPS locations. While \citet{beu20} account for temporal variation in the transition probabilities using environmental covariates, here, we focus on the direct estimation of the state dwell-time distributions. As ruminants, muskoxen need to forage and rest on a regular basis. Thus, the explicit estimation of the states' dwell-time distributions could provide new insights into the animals' behavioural patterns, in particular into the durations of foraging and resting bouts.
For simplicity, we consider the movement track from a single muskox during the winter season 2013/14 with length $T=6825$ (including $6769$ registered GPS locations and $56$ missing locations), a subset of the data used by \citet{beu20}.
\begin{figure}[h!t]
\centering
\includegraphics[width=0.8\textwidth]{track14_1314_zoom_x.pdf}
\caption{Recorded muskox movement track based on hourly GPS locations.}
\label{fig:track}
\end{figure}
The movement track is displayed in Figure \ref{fig:track}. Assuming contemporaneous conditional independence, we consider a 3-state HMM and 3-state PML-based HSMMs, hereafter denoted as PML-HSMMs, with state-dependent gamma distributions for step length and von Mises distributions for turning angle. This is in line with the analysis of \citet{beu20}. To account for the zero step length observations included in the data, we consider additional parameters corresponding to point masses on zero. The tuning parameters $R_{i}$ within the PML-HSMM are selected based on a preliminary unpenalised estimation ($\bm{\lambda}=\bm{0}$) using $30$ freely estimated dwell-time probabilities for each state, respectively (i.e.\ $R_1=R_2=R_3=R=30$). The resulting PMFs are displayed in Figure S1 in the Supplementary Material, indicating that dwell times $r \le 10$ capture most of the probability mass for all three states ($98.24 \%$, $98.74 \%$, and $94.73 \%$ for state 1, 2, and 3, respectively). This is also biologically reasonable as the muskox is generally expected to switch its behavioural modes during the day. Thus, for our analysis, we use an unstructured start of length $R=10$ for all states. To ensure enough flexibility for the dwell-time distributions, we penalise the $4$-th order differences ($m=4$). However, in the Supplementary Material, we provide results arising from $m \in \{1,2,3\}$ using $R=10$, and $R \in \{5,20\}$ using $m=4$, to provide information about the sensitivity of these choices. All models were fitted in \texttt{R} \citep{rco20} using the numerical optimisation procedure \texttt{nlm}. To speed up estimation, the forward algorithm was implemented in \texttt{C++}.
To demonstrate the effect of the penalisation, we first present results from simplified PML-HSMMs with $\lambda_{1}=\lambda_{2}=\lambda_{3}=\lambda$ and $\lambda \in \{0,10^1,10^2,10^5\}$. Figure \ref{fig:sdd} shows the estimated state-dependent gamma distributions (for step length) and von Mises distributions (for turning angle) resulting from the fitted 3-state HMM and PML-HSMMs, respectively. The state-specific patterns are very similar across the models and comparable to the results of \citet{beu20}. Thus, the states can reasonably be interpreted as corresponding roughly to resting (state 1), foraging (state 2), and relocating (state 3), respectively.
\begin{figure}[!t]
\centering
\includegraphics[width=0.95\textwidth]{sdds_step_diff4.pdf}
\vspace{-3em}
\includegraphics[width=0.95\textwidth]{sdds_angle_diff4.pdf}
\caption{Estimated state-dependent gamma distributions for step length and von Mises distributions for turning angle, resulting from the 3-state models considered. The left panels show the results of the HMM. The right panels show the results of all PML-HSMMs, with the distributions resulting from the different choices of $\lambda$ plotted on top of each other; differences between them are barely visible because the corresponding estimates are nearly identical. All distributions are weighted by the stationary distribution, and the background shows the corresponding histograms of the observed variables.}
\label{fig:sdd}
\end{figure}
The dwell-time distributions, however, differ markedly across the fitted models, as displayed in Figure \ref{fig:dwell_time_sl}. Regardless of the choice of $\lambda$, the estimated PML-HSMM dwell-time distributions differ substantially from geometric distributions, especially for states 2 and 3, where the modal dwell time is clearly greater than one. This suggests that a basic HMM would not correctly represent the dynamics in the state process. The necessity of penalisation becomes clear, for example, in view of $\hat{d}_3(r)$, the dwell-time distribution estimated for state 3: when increasing $\lambda$, the distribution becomes smoother, and in particular the gaps in the PMF obtained without penalisation ($\lambda=0$; top right panel in Figure \ref{fig:dwell_time_sl}) are filled due to the enforced smoothness. With a strong penalisation using $\lambda=10^5$, even the second mode in $\hat{d}_3(r)$, which appears for the smaller smoothing parameter values, is smoothed away (bottom right panel). Note that especially for large values of $\lambda$, the shape of the smoothed PMFs depends on the choice of the difference order $m$. This is illustrated in the Supplementary Material, where Figures S2--S4 display the dwell-time distributions resulting from $m=1,2,3$, respectively. While for $\lambda=10^1$ and $\lambda=10^2$ the results are comparable across the choice of $m$, for $\lambda=10^5$ the estimated dwell-time distributions differ greatly. For instance, the PMFs approach uniform distributions on $r \le 10$ when penalising the first-order differences ($m=1$, Figure S2) and linearly decreasing distributions when using the second-order differences ($m=2$, Figure S3). Based on the biological context and the results from $\lambda=0$, neither seems appropriate in this case study, and we expect the same to hold for most applications.
\begin{figure}[!t]
\centering
\includegraphics[width=\textwidth]{d_i_diff4_HMM.pdf}
\caption{Estimated dwell-time distributions of the 3-state HMM and 3-state PML-HSMMs using different smoothing parameter values $\lambda$.}
\label{fig:dwell_time_sl}
\end{figure}
To find an appropriate model for the muskox movement data, we carried out a two-step model selection procedure: (i) To select an appropriate vector $\bm{\lambda}=(\lambda_{1},\lambda_{2},\lambda_{3})$ for the PML-HSMM, we used a $10$-fold cross validation based on the neighbourhood algorithm proposed by \citet{lan15} with scores being the averaged log-likelihood across the validation samples. With the focus being on the dwell-time distributions, we used a blockwise partitioning of the data and considered a $3$-dimensional grid of powers of ten, i.e.\ $\{10^0,10^1,10^2,\ldots\}^3$. This resulted in the selection of $\bm{\lambda}=(10^5,10^4,10^2)$. (ii) The HMM, the HSMM with negative binomial distribution, and the PML-HSMM with $\bm{\lambda}=\bm{0}$ form a set of natural candidate models for the PML-HSMM selected via cross validation. We used the AIC to select among these candidate models, where for the PML-HSMM, we approximated the effective degrees of freedom by the trace of the product of the empirical Fisher information matrix of the unpenalised model ($\bm{\lambda}=\bm{0}$) and the inverse of the Fisher information matrix of the penalised model with $\bm{\lambda}=(10^5,10^4,10^2)$ (following the approach of \citealp{gra92}; see also \citealp{lan18}).
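The blockwise partitioning used in step (i) can be sketched as follows (illustrative Python; only the fold assignment is shown, while the per-fold refitting and the grid search over $\bm{\lambda}$ are omitted):

```python
import numpy as np

def block_folds(T, K=10):
    """Assign each of T consecutive observations to one of K
    contiguous blocks, preserving the time-series dependence
    within each validation block."""
    edges = np.linspace(0, T, K + 1).astype(int)
    folds = np.empty(T, dtype=int)
    for k in range(K):
        folds[edges[k]:edges[k + 1]] = k
    return folds
```

Each grid point $\bm{\lambda}$ is then scored by the log-likelihood of the held-out blocks, averaged over the $K$ folds.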
For estimation, the 3-state HSMM with negative binomial distribution was approximated by an HMM as proposed by \citet{lan11} with state aggregates of dimension $30$ per HSMM state. The resulting AIC values are displayed in Table \ref{tab:AIC}. The PML-HSMM is clearly preferred over both the HMM and the negative binomial HSMM. According to the AIC, the best model among the candidate models is the PML-HSMM with $\bm{\lambda}=(10^5,10^4,10^2)$.
The corresponding dwell-time distributions are displayed in Figure \ref{fig:dwell_time_cv}. The results suggest that the tracked muskox tends to forage and travel for several hours before switching to a different state, with modal values being $r=4$ and $r=3$, respectively. However, $\hat{d}_3(r)$ seems to be almost bimodal, indicating that there might be different types of travelling periods, i.e.\ long and short travelling phases. This distributional shape would not have been captured by standard parametric HSMMs. The modal dwell time for state 1 (resting) is $r=1$, but with a rather slow decay compared to the geometric distribution. Thus, the resting periods tend to be slightly shorter than the foraging and relocation periods and tend to last only a few hours. A pseudo-residual analysis is provided in Section 2 of the Supplementary Material, indicating a good model fit for the selected PML-HSMM.
\begin{table}[t!]
\centering
\begin{tabular}{l|cccc}
\toprule
model & no.\ par.\ / df & $\ell$ & AIC & $\Delta$ AIC \\\midrule
HMM & 21 & -44964.04 & 89970.07 & 231.31 \\
nbHSMM & 24 & -44897.09 & 89842.18 & 103.41 \\
PML-HSMM$_{(0,0,0)}$ & 48 & -44823.71 & 89743.43 & 4.66\\
PML-HSMM$_{(10^5,10^4,10^2)}$ & 32.70 & -44835.96 & \textbf{89737.32} & 0\\
\bottomrule
\end{tabular}
\caption{Number of parameters/effective degrees of freedom, log-likelihood values, AIC values and $\Delta$ AIC for the 3-state models considered.}
\label{tab:AIC}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{d_i_selected_diff4.pdf}
\caption{Estimated dwell-time distributions of the 3-state PML-HSMM selected by cross validation with smoothing parameter vector $\bm{\lambda}=(10^5,10^4,10^2)$.}
\label{fig:dwell_time_cv}
\end{figure}
\section{Discussion and conclusions}\label{Sec4}
As the state process is unobserved, it is often unclear how to select a model that appropriately reflects the underlying state dynamics. We introduced a penalised estimation approach which combines PMFs with an unstructured start and higher-order difference penalties to derive flexible yet smooth estimates for the states' dwell-time distributions. While HSMMs with standard parametric distributions are in general more parsimonious than PML-HSMMs, they are restricted in their distributional shapes and therefore might fail to capture the underlying dwell-time patterns. For instance, consider the negative binomial distribution shifted by one, which comprises the geometric distribution as a special case (with shape parameter equal to one). Thus, to some extent, negative binomial HSMMs actually allow for different shapes, can identify states for which geometric dwell-time distributions suffice \citep{gue05}, and can be tested against the nested HMMs \citep{bul06}. However, they are not able to identify more complex patterns like bimodal dwell-time distributions. Avoiding strong distributional assumptions, our penalised estimation approach can be used as an exploratory tool to investigate the unknown shapes of the states' dwell-time distributions. The method can either serve for direct modelling purposes, or as a basis for subsequent modelling choices, for example, in order to decide whether an HMM would be appropriate for the data at hand, or what distributional assumption may be adequate within a conventional HSMM (in the spirit of \citealp{san01}). Thereby, it could also indicate if different states require different families of parametric distributions.
Due to the HMM representation, inference is straightforward and can completely rely on well-known HMM techniques \citep{lan11}. This is in line with \citet{joh05} who, based on a comparison of different algorithms and HSMM-like model formulations, argues that the use of standard models with special state topologies is practically more reasonable than the use of more complex and expensive algorithms. The HMM representation makes it fairly easy to change the distributional assumption in the state-dependent process and to adapt the model to the application at hand. Only when the number of states or the number of sub-states in the state aggregates becomes large might the likelihood evaluation suffer from the size of the matrices involved and the memory required. An alternative approach would be the implementation of an EM algorithm with a roughness penalty term, which is briefly discussed by \citet{gue03} for HSMMs with non-parametric dwell-time distributions.
The PML-HSMM approach allows for a straightforward incorporation of covariates into the state-dependent process \citep{lan11}. However, as for HSMMs in general, it is conceptually unclear how to integrate covariates into the state process of the model. Especially in movement ecology, the interest often lies in the influence of environmental variables on the animal's movement behaviours (for example, \citealp{vbe19,beu20,pho20}). Within HMMs, the transition probabilities and covariates can be linked via (multinomial) logit link functions \citep{zuc16}. Thus, depending on the covariate values, the transition probabilities change over time. This also affects the probability of remaining in the current state and consequently the implicit states' dwell-time distributions. While in principle the conditional transition probabilities of an HSMM can be linked to covariates in the same way, this would not directly affect the dwell-time distributions of the model, as within an HSMM the dwell-time distributions are modelled separately from the conditional transition probabilities. Alternatively, the HSMM parameters defining the dwell-time distributions could be linked to covariates. But as the time at which the state process enters a new state is unknown, it is unclear which covariate observations the dwell-time parameters should depend on. Therefore, if the interest of the analysis lies in the influence of time-varying covariates on the state process, HMMs provide a more convenient framework. However, in cases where covariates are not assumed to influence the state process, or where no covariates are available, the proposed PML-HSMM approach can provide new insights into the states' dwell-time distributions and the underlying latent state dynamics. For univariate time series and common state-dependent distributions, the PML-HSMM approach is implemented in the \texttt{R} package \texttt{PHSMM} \citep{poh21} on CRAN.
\section*{Acknowledgements}
The authors are very grateful to Roland Langrock for inspiring and valuable discussions and helpful advice that considerably improved the paper. They also thank Niels Martin Schmidt for providing the muskox tracking data.
\section{Additional PML-HSMM results for the muskox case study}
\subsection{Main dwell-time support}
In this section, we provide supplementary results for the muskox movement data discussed in Section 3 in the main manuscript. Figure \ref{fig:d_i_l0} displays the estimated 3-state PML-HSMM dwell-time distributions using $\lambda_{1}=\lambda_{2}=\lambda_{3}=\lambda=0$ and $R_{1}=R_{2}=R_{3}=R=30$. It indicates that the core dwell-time support for the muskox movement data is $\{1,\ldots,10\}$. For $r > 10$, most dwell-time probabilities are estimated very close to zero. This result is the basis for setting $R=10$ for the subsequent analysis.
\vspace{5em}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{d_i_HSMM_l0.pdf}
\caption{Estimated dwell-time distributions of the 3-state PML-HSMM with $\lambda=0$ and an unstructured start of length $R=30$ for each state.}
\label{fig:d_i_l0}
\end{figure}
\newpage
\subsection{Influence of the difference order}
To illustrate the influence of the choice of the difference order $m$ on the resulting dwell-time distributions, for $\lambda \in \{10^1,10^2,10^5\}$ and $R=10$, we additionally fitted 3-state PML-HSMMs with $m=1,2,3$ to the muskox movement data. The resulting probability mass functions (PMFs) are displayed in Figures \ref{fig:d_i_df1}, \ref{fig:d_i_df2}, and \ref{fig:d_i_df3}, respectively. For $\lambda=10^1$ and $\lambda=10^2$, the fitted dwell-time distributions only differ slightly across the choice of $m$. However, using a strong difference penalisation, i.e.\ $\lambda=10^5$, the difference order $m$ clearly affects the shape of the estimated distributions: For $m=1$, the PMFs approach uniform distributions on $r \le 10$ (with a geometric tail for $r >10$; Figure \ref{fig:d_i_df1}, bottom panel); for $m=2$, a linearly decreasing distribution is approached (with a geometric tail for $r >10$; Figure \ref{fig:d_i_df2}, bottom panel). When penalising third-order differences ($m=3$), the estimated PML-HSMM is able to capture more complex patterns. For instance, the estimated distributions $\hat{d}_1(r)$ and $\hat{d}_2(r)$ differ in their shapes (Figure \ref{fig:d_i_df3}, bottom panel).
\vspace{5em}
\begin{figure}[hb]
\centering
\includegraphics[width=\textwidth]{d_i_HSMM_diff1.pdf}
\caption{Estimated dwell-time distributions of the 3-state PML-HSMMs with penalisation of first-order differences ($m=1$) and an unstructured start of length $R=10$ for each state.}
\label{fig:d_i_df1}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{d_i_HSMM_diff2.pdf}
\caption{Estimated dwell-time distributions of the 3-state PML-HSMMs with penalisation of second-order differences ($m=2$) and an unstructured start of length $R=10$ for each state.}
\label{fig:d_i_df2}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{d_i_HSMM_diff3.pdf}
\caption{Estimated dwell-time distributions of the 3-state PML-HSMMs with penalisation of third-order differences ($m=3$) and an unstructured start of length $R=10$ for each state.}
\label{fig:d_i_df3}
\end{figure}
\clearpage
\subsection{Length of the unstructured start}
To illustrate the influence of the length of the unstructured start, for $\lambda \in \{10^1,10^2,10^5\}$ and $m=4$, we further estimated 3-state PML-HSMMs using $R=5$ and $R=20$, respectively. The resulting PMFs are displayed in Figures \ref{fig:d_i_5df4} and \ref{fig:d_i_20df4}. With $R=5$, the penalisation has hardly any effect on the estimation. This can partly be explained by the fact that the unpenalised estimation ($\lambda=0$; Figure \ref{fig:d_i_5df4}, top panels) already results in smooth PMFs and the geometric tails carry a considerable share of the probability masses, namely $19.33\%$, $26.46\%$, and $30.50\%$ in states 1, 2, and 3, respectively. Furthermore, with only $R=5$ freely estimated probabilities, the difference order $m=4$ is too large.
The PMFs resulting from $R=20$ are comparable to the ones arising from setting $R=10$ (Figure 5 in the main manuscript). Hence, the choice of $R=10$ seems suitable for the muskox case study, as it captures the main patterns, is more parsimonious than $R=20$ and computationally less expensive.
\vspace{5em}
\begin{figure}[hb]
\centering
\includegraphics[width=\textwidth]{d_i_HSMM5_diff4.pdf}
\caption{Estimated dwell-time distributions of the 3-state PML-HSMMs with penalisation of fourth-order differences ($m=4$) and an unstructured start of length $R=5$ for each state.}
\label{fig:d_i_5df4}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{d_i_HSMM20_diff4.pdf}
\caption{Estimated dwell-time distributions of the 3-state PML-HSMMs with penalisation of fourth-order differences ($m=4$) and an unstructured start of length $R=20$ for each state.}
\label{fig:d_i_20df4}
\end{figure}
\clearpage
\section{Step length pseudo-residuals for the selected PML-HSMM}
For model checking, we consider ordinary pseudo-residuals as described in \citet{zuc16}. We focus on the pseudo-residuals for the step length observations since, due to their cyclic nature, a residual analysis for turning angles is less straightforward. A good model fit is indicated by standard normally distributed pseudo-residuals. Figure \ref{fig:pr} displays the histogram, qq-plot, autocorrelation function, and a time series sequence of the ordinary pseudo-residuals corresponding to the 3-state PML-HSMM with $\bm{\lambda}=(10^{5},10^{4},10^{2})$ as selected via cross validation (see main manuscript, Section 3). Overall, the model provides a reasonable fit to the data.
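Concretely, an ordinary pseudo-residual is obtained by evaluating the one-step-ahead forecast CDF of the fitted model at the observation and mapping the result through the standard normal quantile function. A generic sketch (Python; the forecast CDF values `u` are assumed to have been computed from the fitted model beforehand and are not derived here):

```python
import numpy as np
from statistics import NormalDist

def pseudo_residuals(u, eps=1e-12):
    """Map forecast CDF values u_t = F(y_t | y_1, ..., y_{t-1}) to
    normal pseudo-residuals z_t = Phi^{-1}(u_t); under a correctly
    specified model the z_t are standard normally distributed."""
    nd = NormalDist()
    u = np.clip(np.asarray(u, dtype=float), eps, 1 - eps)  # guard against 0/1
    return np.array([nd.inv_cdf(x) for x in u])
```

Departures from standard normality in the histogram or qq-plot of the $z_t$ then point to model misfit.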
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{pseudo_res.pdf}
\caption{Ordinary step length pseudo-residuals of the PML-HSMM with $\bm{\lambda}=(10^{5},10^{4},10^{2})$ which was selected for the muskox movement data. The top left panel shows the histogram of the pseudo-residuals overlaid with the standard normal density function. The top right panel displays the qq-plot comparing the quantiles of the empirical pseudo-residual distribution and the standard normal distribution. On the bottom, the left panel shows the empirical autocorrelation function and the right panel a sequence of the calculated ordinary pseudo-residuals.}
\label{fig:pr}
\end{figure}
Top quark physics is among the central physical topics at the
Tevatron and will continue to be so at the Large Hadron Collider
(LHC) in the next few years. Unlike the other, lighter SM fermions,
the top quark has a mass at the electroweak symmetry breaking scale,
so it is widely speculated that its properties are sensitive to new
physics. Among
various top quark processes at present and future colliders, the
flavor changing neutral current (FCNC) processes are often utilized
to probe new physics (NP) because in the SM, the FCNC processes are
highly suppressed \cite{1}, while in NP models, there may be no such
suppression. Therefore, searching for top FCNC at colliders can
serve as an effective way to hunt for NP.
The two-body FCNC decays of the top quark, such as $t\rightarrow cg,
c\gamma, cZ, cH$, have received much attention in the past. In the
SM, the rates of these decays are less than $10^{-11}$ \cite{sm-2t},
which is far below the reaches of the LHC
\cite{limit-LHC1,limit-LHC2} and the International Linear Collider
(ILC) \cite{limit-ILC}, while in many NP models these decays may be
enhanced to detectable levels \cite{detectable}. By now, the
two-body processes $ t\rightarrow cg, c\gamma, cZ, cH$ have been
extensively investigated in the minimal supersymmetric standard
model (MSSM) \cite{MSSM-2t}, the left-right supersymmetric models
\cite{LR-SUSY}, the supersymmetric model with R-parity violation
\cite{SUSY-RV}, the two-Higgs doublet model (2HDM) \cite{2HDM-2t},
the topcolor-assisted technicolor model (TC2) \cite{TC2-2t}, as well
as models with extra singlet quarks \cite{Extra-quark}. Besides this,
some three-body FCNC decays of the top quark, such as $t\rightarrow
cVV~(V=\gamma,Z,g)$ and $t\rightarrow cf\bar{f}~(f=b, \tau, \mu,
e)$, were also studied in the framework of the SM \cite{SM-3t,
SM-3t-domin.,SM-2HDM-3t,SM-MSSM-2HDM-3t}, 2HDM
\cite{SM-2HDM-3t,2HDM-3t,SM-MSSM-2HDM-3t}, MSSM
\cite{MSSM-3t,SM-MSSM-2HDM-3t,RPV-MSSM,Eff.-vert.-MSSM-3t}, TC2
\cite{TC2-tcVV,TC2-tcgg,tcll-TC2,tcbb-TC2}, or in a
model-independent way \cite{ind-3t}.
The aim of this work is to perform a comprehensive analysis of the
FCNC top quark decays $t\rightarrow cVV~(V=\gamma,Z,g)$ and
$t\rightarrow cf\bar{f}~(f=b, \tau, \mu, e)$ in the little Higgs
model with T-parity (LHT) \cite{LHT}. In the LHT model, the related
two-body decays $t\rightarrow cg, c\gamma, cZ, cH$ and the
three-body decays $t\rightarrow cl\bar{l}~(l=\tau,\mu,e)$ have been
studied in \cite{LHT-2t,LHT-3t}, respectively, and these studies show
that, compared with the SM, the rates of these decays can be greatly
enhanced. So, for completeness and in view of the observation that
higher-order (three-body) modes can dominate \cite{SM-3t-domin.}, it
is necessary to consider all the three-body decays, which we do in
this work.
\indent This paper is organized as follows. In Sec.\ II a brief
review of the LHT model is given. In Sec.\ III we present the details
of our calculation of the decays $t\rightarrow cVV$ and $t\rightarrow
cf\bar{f}$ and show some numerical results. Finally, we give a
short conclusion in Sec.\ IV.
\section{ A brief review of the LHT}
One of the major motivations for the little Higgs model
\cite{LH1,LH2} is to resolve the little hierarchy problem
\cite{Hierarchy}, in which the quadratic divergence of the Higgs
mass term at one-loop level was canceled by the new diagrams with
additional gauge bosons and a heavy top-quark partner. It was soon
recognized that the scale of the new particles should be in the
multi-TeV range in order to satisfy the constraints from electroweak
precision measurements, which in turn reintroduces the little
hierarchy problem \cite{Reintr.}. This problem is eased in the LHT
model, where a new $\mathbb{Z}_2$ discrete symmetry called
``T-parity'' is introduced; in this way, all dangerous tree-level
contributions to the precision observables are forbidden \cite{LHT}.
Just like the little Higgs model, in the LHT model the assumed
global symmetry $SU(5)$ is spontaneously broken down to $SO(5)$ at a
scale $f\sim\mathcal {O}(TeV)$, and the embedded $[SU(2)\otimes
U(1)]^2$ gauge symmetry is simultaneously broken at $f$ to the
diagonal subgroup $SU(2)_{L}\otimes U(1)_{Y}$, which is identified
with the SM gauge group. From the $SU(5)/SO(5)$ breaking, there
arise 14 Goldstone bosons which are described by the ``pion" matrix
$\Pi$, given explicitly by
\begin {equation}
\Pi=
\begin{pmatrix}
-\frac{\omega^0}{2}-\frac{\eta}{\sqrt{20}}&-\frac{\omega^+}{\sqrt{2}}
&-i\frac{\pi^+}{\sqrt{2}}&-i\phi^{++}&-i\frac{\phi^+}{\sqrt{2}}\\
-\frac{\omega^-}{\sqrt{2}}&\frac{\omega^0}{2}-\frac{\eta}{\sqrt{20}}
&\frac{v+h+i\pi^0}{2}&-i\frac{\phi^+}{\sqrt{2}}&\frac{-i\phi^0+\phi^P}{\sqrt{2}}\\
i\frac{\pi^-}{\sqrt{2}}&\frac{v+h-i\pi^0}{2}&\sqrt{4/5}\eta&-i\frac{\pi^+}{\sqrt{2}}&
\frac{v+h+i\pi^0}{2}\\
i\phi^{--}&i\frac{\phi^-}{\sqrt{2}}&i\frac{\pi^-}{\sqrt{2}}&
-\frac{\omega^0}{2}-\frac{\eta}{\sqrt{20}}&-\frac{\omega^-}{\sqrt{2}}\\
i\frac{\phi^-}{\sqrt{2}}&\frac{i\phi^0+\phi^P}{\sqrt{2}}&\frac{v+h-i\pi^0}{2}&-\frac{\omega^+}{\sqrt{2}}&
\frac{\omega^0}{2}-\frac{\eta}{\sqrt{20}}
\end{pmatrix}
\end{equation}
Among the Goldstone bosons, the fields $\omega^0, \omega^\pm$ and
$\eta$ are eaten by the new heavy gauge bosons $Z_H$, $W^{\pm}_H$
and $A_H$ so that the gauge bosons acquire following masses:
\begin{equation}
M_{W_{H}^{\pm
}}=M_{Z_{H}}=gf(1-\frac{v^2}{8f^2}),~~~~~~~~M_{A_H}=\frac{g'}{\sqrt{5}}f(1-\frac{5v^2}{8f^2}).
\end{equation}
Likewise, the fields $\pi^0$ and $\pi^\pm$ are eaten by the SM gauge
bosons $Z$ and $W^{\pm}$; one minor difference from the SM is that
the masses of these bosons, up to $\mathcal{O}(v^2/f^2)$, are given
by
\begin{equation}
M_{W_{L}}=\frac{gv}{2}(1-\frac{v^2}{12f^2}),~~~~~~~~M_{Z_L}=\frac{gv}{2\cos\theta_W}(1-\frac{5v^2}{12f^2}),
\end{equation}
where $g$ and $g'$ are the SM $SU(2)$ and $U(1)$ gauge couplings
respectively, and $v = 246 {\rm GeV}$.
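For orientation, the mass formulas above can be evaluated numerically; the sketch below (Python) uses illustrative input values $g \approx 0.65$, $g' \approx 0.35$ and $f = 1$ TeV, which are assumptions made for the sake of the example rather than values fixed in the text:

```python
# Gauge boson masses in the LHT model to order v^2/f^2.
g, gp = 0.65, 0.35     # SU(2) and U(1) gauge couplings (assumed values)
v, f = 246.0, 1000.0   # EWSB scale and symmetry-breaking scale in GeV

M_WH = g * f * (1 - v**2 / (8 * f**2))                # = M_ZH
M_AH = gp / 5**0.5 * f * (1 - 5 * v**2 / (8 * f**2))  # lightest new gauge boson
M_WL = g * v / 2 * (1 - v**2 / (12 * f**2))           # SM-like W mass

print(round(M_WH, 1), round(M_AH, 1), round(M_WL, 1))  # prints: 645.1 150.6 79.5
```

Note that $A_H$ comes out considerably lighter than $W_H^\pm$ and $Z_H$, while the SM-like $W$ mass is only mildly shifted from its tree-level value.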
In the framework of the LHT model, all the SM particles are assigned
even T-parity, while the other particles, such as the new gauge
bosons, are assigned odd T-parity. In particular, in order to
implement T-parity, each SM fermion must be accompanied
by one heavy fermion called the mirror fermion. In the following,
we denote the mirror fermions by $u_{H}^{i}$ and $d_{H}^{i}$ with $i
=1, 2, 3$ being the generation index. At the order of $\mathcal
{O}(v^2/f^2)$, their masses are given by
\begin{eqnarray}
m^i_{d_H}=\sqrt{2}\kappa_if,~~~~~~
m^i_{u_H}=m^i_{d_H}(1-\frac{v^2}{8f^2}),
\end{eqnarray}
where the Yukawa couplings $\kappa_i$ generally depend on the
fermion species $i$.
Since the T-parity is conserved in the LHT model, the fermion pairs
interacting with the T-odd gauge boson must contain one SM fermion
and one mirror fermion. In this case, due to the misalignment of the
mass matrices for the SM fermions and for the mirror fermions, new
gauge bosons can mediate flavor changing interactions. As pointed
out in \cite{FCNC-LHT,Feyn.-rules}, these interactions can be
described by two correlated CKM-like unitary mixing matrices
$V_{H_u}$ and $V_{H_d}$ satisfying
$V_{H_u}^{\dagger}V_{H_d}=V_{CKM}$ with the subscripts $u$ and $d$
denoting which type of the SM fermion is involved in the
interaction. The details of the Feynman rules for such interactions
were given in Ref. \cite{Feyn.-rules}, and in order to clarify our
results, we list some of them:
\begin{eqnarray}
&& \bar{u}^i_H\eta u^j:~-\frac{ig'}{10m_{A_H}}(m^i_{u_H}P_L-m^j_{u}P_R)(V_{H_u})_{ij}, \\
&& \bar{u}^i_H\omega^0u^j:~\frac{ig'}{2m_{Z_H}}(m^i_{u_H}P_L-m^j_{u}P_R)(V_{H_u})_{ij},\\
&& \bar{d}^i_H\omega^-u^j:~\frac{g}{\sqrt{2}m_{W_H}}(m^i_{d_H}P_L-m^j_{u}P_R)(V_{H_u})_{ij},\\
&& \bar{u}^i_HA_H u^j:~-\frac{ig'}{10}(V_{H_u})_{ij}\gamma^\mu P_L,\\
&& \bar{u}^i_HZ_H u^j:~-\frac{ig}{2}(V_{H_u})_{ij}\gamma^\mu P_L,\\
&& \bar{d}^i_HW^{-\mu}_Hu^j:~\frac{ig}{\sqrt{2}}(V_{H_u})_{ij}\gamma^\mu P_L.
\end{eqnarray}
The unitary matrix $V_{H_d}$ is usually parameterized with three
angles $\theta^d_{12},~\theta^d_{23},~\theta^d_{13}$ and three
phases $\delta^d_{12},~\delta^d_{23},~\delta^d_{13}$ \cite{FC-LHT0}:
\begin{eqnarray}
V_{H_d}=
\begin{pmatrix}
c^d_{12}c^d_{13}&s^d_{12}c^d_{13}e^{-i\delta^d_{12}}&s^d_{13}e^{-i\delta^d_{13}}\\
-s^d_{12}c^d_{23}e^{i\delta^d_{12}}-c^d_{12}s^d_{23}s^d_{13}e^{i(\delta^d_{13}-\delta^d_{23})}&
c^d_{12}c^d_{23}-s^d_{12}s^d_{23}s^d_{13}e^{i(\delta^d_{13}-\delta^d_{12}-\delta^d_{23})}&
s^d_{23}c^d_{13}e^{-i\delta^d_{23}}\\
s^d_{12}s^d_{23}e^{i(\delta^d_{12}+\delta^d_{23})}-c^d_{12}c^d_{23}s^d_{13}e^{i\delta^d_{13}}&
-c^d_{12}s^d_{23}e^{i\delta^d_{23}}-s^d_{12}c^d_{23}s^d_{13}e^{i(\delta^d_{13}-\delta^d_{12})}&
c^d_{23}c^d_{13} \label{mixing}
\end{pmatrix}
\end{eqnarray}
and with the relation $V_{H_u}^{\dagger}V_{H_d}=V_{CKM}$, one can
determine the expression of $V_{H_u}$.
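As a consistency check, the parameterization in Eq.~(\ref{mixing}) can be coded directly and verified to be unitary for arbitrary angles and phases (the test values below are arbitrary); the relation $V_{H_u}^{\dagger}V_{H_d}=V_{CKM}$ then gives $V_{H_u}=V_{H_d}V_{CKM}^{\dagger}$.

```python
import numpy as np

def V_Hd(t12, t23, t13, d12, d23, d13):
    """Mixing matrix of Eq. (4), built from three angles and three phases."""
    c12, s12 = np.cos(t12), np.sin(t12)
    c23, s23 = np.cos(t23), np.sin(t23)
    c13, s13 = np.cos(t13), np.sin(t13)
    e = lambda x: np.exp(1j * x)
    return np.array([
        [c12 * c13, s12 * c13 * e(-d12), s13 * e(-d13)],
        [-s12 * c23 * e(d12) - c12 * s23 * s13 * e(d13 - d23),
         c12 * c23 - s12 * s23 * s13 * e(d13 - d12 - d23),
         s23 * c13 * e(-d23)],
        [s12 * s23 * e(d12 + d23) - c12 * c23 * s13 * e(d13),
         -c12 * s23 * e(d23) - s12 * c23 * s13 * e(d13 - d12),
         c23 * c13],
    ])

V = V_Hd(0.2, np.pi / 4, 0.1, 0.3, 0.5, 0.7)   # arbitrary test point
# V_Hd is unitary for any inputs; given V_CKM, setting
# V_Hu = V_Hd @ V_CKM.conj().T satisfies V_Hu^dagger V_Hd = V_CKM.
```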
\begin{figure}
\begin{center}
\includegraphics [scale=0.5] {1.eps}
\caption{The Feynman diagrams of the LHT model contributing to the
FCNC couplings $t\bar{c}V~(V=\gamma,Z,g)$.} \label{fig:fig2}
\end{center}
\end{figure}
\section{Calculations}
\subsection{The loop-level FC couplings $t\bar{c}V~(V=\gamma,Z,g)$
in the LHT model}
As introduced above, in the LHT model new contributions to the FCNC
top quark coupling $t\bar{c}V$ come from the new gauge interactions
mediated by ($A_H,Z_H,W^{\pm}_H$), which are shown in Fig. 1. Since
we use Feynman gauge in our calculation, the Goldstone bosons
$\eta$, $\omega^0$ and $\omega^{\pm}$ also appear in the diagrams.
The heavy scalar triplet $\Phi$, in principle, may also contribute
to the FCNC coupling, but since such a contribution is suppressed
by the factor ${v^2}/{f^2}$, we neglect it hereafter. It should be
noted that the rules in (5)-(10) imply that the form factors of the
loop-induced $t\bar{c}V$ interaction, $F$, must take the following
form
\begin{eqnarray}
F &\propto& \sum_{i=1}^3 \left ({V^\dag_{H_u}}\right)_{ti} f(m_{Hi})
\left({V_{H_u}}\right)_{ic} \label{formfactor}
\end{eqnarray}
where $f(m_{Hi})$ is a universal function for the three generations
of mirror quarks, whose value depends on the mass of the
$i$th-generation mirror quark, $m_{Hi}$. Obviously, if the three
generations of mirror quarks are degenerate, $F$ vanishes exactly
due to the unitarity of $V_{H_u}$, while if only the first two
generations are degenerate, as discussed below, the factor behaves
like $( V^\dag_{H_u} )_{t3} \left ( f(m_{H3}) - f(m_H) \right )
\left({V_{H_u}}\right)_{3c}$ with $m_H$ being the common mass of the
first two generations. In the case of very heavy third-generation
mirror quarks, $f(m_{H3})$ vanishes, i.e., its effect decouples, and
then $F$ is proportional to $( V^\dag_{H_u} )_{t3} f(m_H)
\left({V_{H_u}}\right)_{3c}$, which is independent of $m_{H3}$.
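This GIM-like structure of Eq.~(\ref{formfactor}) is easy to verify numerically. In the sketch below, a random unitary matrix stands in for $V_{H_u}$ and a logarithm stands in for the actual loop function $f$; both are illustrative assumptions, not the physical quantities.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
V_Hu, _ = np.linalg.qr(M)          # random 3x3 unitary stand-in

def form_factor(masses, t=2, c=1, f=np.log):
    # sum_i (V_Hu^dagger)_{ti} f(m_i) (V_Hu)_{ic}
    return sum(np.conj(V_Hu[i, t]) * f(m) * V_Hu[i, c]
               for i, m in enumerate(masses))

F_deg   = form_factor([500.0, 500.0, 500.0])    # fully degenerate
F_split = form_factor([500.0, 500.0, 1200.0])   # third generation split
```

For degenerate masses the sum collapses to $f(m)\,(V_{H_u}^\dagger V_{H_u})_{tc}=0$ for $t\neq c$, while the split case reduces to $(V_{H_u}^\dagger)_{t3}\,(f(m_{H3})-f(m_H))\,(V_{H_u})_{3c}$.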
The Feynman diagrams for the top quark decays $t\rightarrow cVV$ and
$t\rightarrow cf\bar{f}$ are shown in Fig. 2 with the black square
denoting the loop-induced $t\bar{c}V$ vertex. One important
difference of the effective $t\bar{c}V$ vertices in Figs. 2(a, d,
e) from those in Figs. 2(b, c) is that in the former cases both the
top and charm quarks are on-shell, while in the latter cases either
the top or the charm quark is off-shell. In order to simplify our
calculation, we adopt the calculation method introduced in
\cite{Eff.-vert.-MSSM-3t}, which uses a universal form of the
effective $t\bar{c}V$ vertices that is valid for all the cases. In
Appendix A we give the analytical expressions of the effective
$t\bar{c}V$ vertices and use the code LoopTools \cite{LoopTools} to
obtain the numerical results of the relevant loop functions. To
verify the correctness of our results, we recalculated the two-body
decay $t\rightarrow cV$ and found that our results agree with those
in Ref. \cite{LHT-2t}.
\begin{figure}
\begin{center}
\includegraphics [scale=0.5] {2.eps}
\caption{The Feynman diagrams for the decays $t\rightarrow cVV$
and $t\rightarrow cf\bar{f}(f=b,\tau,\mu,e)$ in the LHT.}
\label{fig:fig1}
\end{center}
\end{figure}
\subsection{ Amplitude for $t\rightarrow cVV$
in the LHT model}
Since the expressions of the amplitudes for $t\rightarrow cgg,
cg\gamma,cgZ,c\gamma\gamma$ are quite similar, we only list the
result for $t\rightarrow cgg$, which is given by
\begin{eqnarray}
\mathcal {M} (t\rightarrow cgg)=\mathcal {M}^g_{a}+\mathcal
{M}^g_{b}+\mathcal {M}^g_{c}
\end{eqnarray}
with
\begin{eqnarray}
\mathcal{M}^g_{a}&=&-ig_sf^{abc}G(p_t-p_c,0)\bar{u}^i_{c}(p_c)\Gamma^{\mu
cji}_{tcg}
[(p_1-p_2)_\mu\varepsilon^{a}(p_1)\cdot\varepsilon^{b}(p_2)
+2p_2\cdot\varepsilon^{a}(p_1)\varepsilon^b_\mu(p_2)\nonumber~~~~\\&&
-2p_1\cdot\varepsilon^{b}(p_2)\varepsilon^{a}_\mu(p_1)]{u_{t}^j}(p_t)
\\
\mathcal {M}^g_{b}&=&g_sT^{aki}G(p_t-p_2,m_c)
\bar{u}^i_{c}(p_c)\rlap/\varepsilon^{a}_1(p_1)(\pslash_t-\pslash_2+m_c)
\Gamma^{\mu
bjk}_{tcg}(p_t-p_2,p_c)\varepsilon^{b}_\mu(p_2){u_{t}^j}(p_t) \\
\mathcal {M}^g_{c}&=&g_sT^{bjk}G(p_t-p_2,m_t)
\bar{u}^i_{c}(p_c)\Gamma^{\mu aki}_{tcg}(p_c,p_t-p_1)
\varepsilon^{a}_\mu(p_1)(\pslash_t-\pslash_2+m_t)\rlap/\varepsilon^{b}_2(p_2)
{u_{t}^j}(p_t)~~~~
\end{eqnarray}
In the above expressions, $P_{L,R}=\frac{1}{2}(1\mp\gamma_5)$ are the
left and right chirality projectors, $p_t$ is the top quark
momentum, $p_c, p_1, p_2$ are the momenta of the charm quark and the
gluons, respectively, the $\varepsilon$'s are the wave functions of
the gluons, and $G(p, m)$ is defined as $\frac{1}{p^2-m^2}$. In the
actual calculation, we compute the amplitudes numerically by using
the method of Ref. \cite{Eff.-vert.-MSSM-3t}, instead of calculating
the amplitude squared analytically. This greatly simplifies our
calculations.
\subsection{Numerical results for $t\rightarrow cVV$ and $t\rightarrow cf\bar{f}$
in the LHT model}
In this work, we take the SM parameters as: $m_t$ = 172.0 GeV, $m_c$
= 1.27 GeV, $m_{e}=$ 0.00051 GeV, $m_{\mu}=$ 0.106 GeV,
$m_\tau=$ 1.777 GeV, $m_b$ = 4.2 GeV, $m_Z$ = 91.2 GeV,
$\sin^{2}\theta_{W}$ = 0.231, $\alpha_e$ = 1/128.8,
$\alpha_s(m_t)$ = 0.107 \cite{SM-paramet.}. For the parameters in the
LHT model, the breaking scale $f$, the three generation mirror quark
masses $m_{H_i}(i=1,2,3)$ and six mixing parameters ($\theta^d_{ij}$
and $\delta^d_{ij}$ with $i,j=1,2,3$ and $i\neq j$)
in Eq. (\ref{mixing}) are involved.
The breaking scale $f$ determines the new gauge boson masses, and it
has been proven that, as long as $f\geq500$ GeV, the LHT model can
be consistent with the precision electroweak data \cite{EW
constraint}. So we set $f=500~{\rm GeV}$ and $1000~{\rm GeV}$ as two
representative cases. The matrix elements of $V_{H_d}$ have been
severely constrained by the FCNC processes in $K$, $B$ and $D$ meson
systems \cite{Feyn.-rules,KB}. To simplify our discussion, we
consider two scenarios which can easily escape the constraints
\cite{LHT-2t,LHT-3t,eeppeq-LHT}:
\begin{eqnarray}
{\rm Case~I}:~~V_{H_d} &=& {\rm I},~
V_{H_u}=V^{\dag}_{\rm CKM}~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\\
{\rm Case~II}:~~s_{23}^{d}&=&1/{\sqrt{2}},~
s_{12}^{d}=s_{13}^{d}=0, ~\delta
_{12}^{d}=\delta_{23}^{d}=\delta_{13}^{d}=0~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
\end{eqnarray}
As for the mirror quark masses, it has been shown that the
experimental bounds on four-fermion interactions require
$m_{H_i}\leq 4.8f^2/{\rm TeV}$ \cite{EW constraint}. In our
discussion, we adopt this bound. We also assume a common mass for
the first two generation up-type mirror quarks, i.e.,
$m_{H_1}=m_{H_2}=500$ GeV, and let the third generation quark mass
$m_{H_3}$ vary from $600~{\rm GeV}$ to $1200~{\rm GeV}$ for
$f=500~{\rm GeV}$ and from $600~{\rm GeV}$ to $4800~{\rm GeV}$ for
$f=1000~{\rm GeV}$. To make our predictions more realistic, we apply
kinematic cuts as done in Ref. \cite{t-cggcut}, that is, we require
the energy of each decay product to be larger than 15 GeV in the top
quark rest frame.
\begin{figure}
\setlength\subfigcapskip{-15pt} \vspace{-0.6cm}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{3a.eps}}
\hspace{-0.1in}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{3b.eps}}
\hspace{-0.1in}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{3c.eps}}
\hspace{-0.1in}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{3d.eps}}
\vspace{0.01cm} \caption{\small The rates for
$t\rightarrow cgg, cgZ, cg\gamma,c\gamma\gamma$ as a function of $m_{H_3}$
for different values of $f$ and $V_{H_d}$.
We take a common mass for the first two generation mirror quarks, i.e., $m_{H_1}=m_{H_2}=500$ GeV. \label{fig3}}
\end{figure}
\begin{figure}
\setlength\subfigcapskip{-15pt} \vspace{-0.0cm}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{4a.eps}}
\hspace{-0.1in}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{4b.eps}}
\hspace{-0.1in}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{4c.eps}}
\hspace{-0.1in}
\subfigure[][]{\includegraphics[width=3.0in,height=2.4in]{4d.eps}}
\vspace{0.01cm} \caption{\small Same as Fig.\ref{fig3}, but for the rates of
$t\rightarrow cf\bar{f}~(f=b,e,\mu,\tau)$. }
\end{figure}
In Fig.~3 we show the dependence of the rates for $t\rightarrow cgg,
cgZ, cg\gamma,c\gamma\gamma$ on $m_{H_3}$. This figure indicates
that the dependence is quite strong, i.e., the rates change by more
than one order of magnitude when $m_{H_3}$ varies from 600 GeV to
1200 GeV in Figs.~3(a) and 3(c) and from 600 GeV to 4800 GeV in
Figs.~3(b) and 3(d). The reason is that, as explained in Eq.
(\ref{formfactor}) and below it, the cancellation between the third
generation mirror quark contribution and the first two generation
mirror quark contribution is alleviated with the increase of
$m_{H_3}$. This figure also indicates that the rates of
$t\rightarrow cgg, cgZ, cg\gamma,c\gamma\gamma$ are sensitive to the
parametrization scenarios of $V_{H_d}$, as one can see by comparing
the results in Figs.~3(a) and 3(b) with those in Figs.~3(c) and
3(d). This feature can be easily understood from the expression in
Eq. (\ref{formfactor}). From Fig.~3, one may conclude that in the
LHT model the rate of the decay $t\rightarrow cgg$ is much larger
than the others, reaching $10^{-3}$ in optimal cases, while the
rates of the decays $t\rightarrow cgZ, cg\gamma, c\gamma\gamma$ are
all below $10^{-5}$.
We investigate the same dependence for the decays $t\rightarrow
cf\bar{f}~(f=b,e,\mu,\tau)$ in Fig.~4. Since the lepton masses are
small compared with the top quark mass, the rates of the decays
$t\rightarrow c l\bar{l}$ with $l=e, \mu, \tau$ are approximately
equal. This figure shows that the dependence of $t\rightarrow
cf\bar{f}$ on $m_{H_3}$ is similar to that of $t\rightarrow cVV$
shown in Fig.~3. This figure also shows that the rate of the decay
$t\rightarrow cb\bar{b}$ can reach $10^{-3}$ in the optimum case,
while the rate of the decay $t\rightarrow cl\bar{l}$ is usually
less than $10^{-6}$.
The authors of \cite{collider} have roughly estimated the discovery
potentials of the high energy colliders in probing top quark FCNC
decays for $100~{\rm fb}^{-1}$ of integrated luminosity, and they
obtained
\begin{eqnarray}
{\rm LHC}:Br(t\rightarrow cX)\geq5\times10^{-5}\\
{\rm ILC}:Br(t\rightarrow cX)\geq5\times10^{-4}\\
{\rm TEV33}:Br(t\rightarrow cX)\geq5\times10^{-3}
\end{eqnarray}
Then, from the results presented in Figs.~3 and 4, one can learn that
the LHT model can enhance the decays $t\rightarrow cgg(b\bar{b})$ to
the observable level of the LHC. So we may conclude that the LHC is
capable of testing the flavor structure of the LHT model.
\begin{center}
{\bf Table I: Optimum predictions for the decays $t\rightarrow cgg, cb\bar{b}, cl\bar{l}$ in different models.}\\
\doublerulesep 0.8pt \tabcolsep 0.008in
\begin{tabular}
{|c|c|c|c|c|c|c|}
\hline
& SM & MSSM & TC2 & 2HDM & LHT Case I/Case II\\
\hline
$Br(t\rightarrow cgg)$
&$\mathcal {O}(10^{-9})$\cite{SM-3t}
&$\mathcal{O}(10^{-4})$\cite{Eff.-vert.-MSSM-3t}
&$\mathcal{O}(10^{-3})$\cite{TC2-tcgg}
&$\mathcal {O}(10^{-3})$\cite{SM-2HDM-3t}
&$\mathcal {O}(10^{-5})$ /$\mathcal {O}(10^{-3})$ \\
\hline
$Br(t\rightarrow cl\bar{l})$
&$10^{-14}$\cite{SM-MSSM-2HDM-3t}
&$\mathcal{O}(10^{-7})$\cite{RPV-MSSM}
&$\mathcal{O}(10^{-6})$\cite{tcll-TC2}
&$\mathcal {O}(10^{-8})$\cite{2HDM-3t}
&$\mathcal {O}(10^{-8})$ /$\mathcal {O}(10^{-6})$ \\
\hline
$Br(t\rightarrow cb\bar{b})$
&$\mathcal{O}(10^{-5})$\cite{tcbb-TC2}
&$\mathcal{O}(10^{-7})$\cite{tcbb-TC2}
&$\mathcal {O}(10^{-3})$\cite{tcbb-TC2}
&---
&$\mathcal {O}(10^{-5})$ /$\mathcal {O}(10^{-3})$ \\
\hline
\end{tabular}
\end{center}
Finally, we summarize the LHT model predictions for the FCNC
three-body decays $t\rightarrow cgg, cb\bar{b},cl\bar{l}$ in
comparison with the predictions of the SM, the MSSM, the TC2, and
the 2HDM in Table I. This table indicates that the optimum rates of
the decays in the LHT model are comparable with those in the TC2
model, and the predictions of the two models are significantly
larger than the corresponding predictions of the SM and the MSSM. As
far as the decay $t\rightarrow cb\bar{b}$ is concerned, its
branching ratio may reach $10^{-3}$. So if the decays $t\rightarrow
cgg$ and $t\rightarrow cb\bar{b}$ are observed at the LHC, a more
careful theoretical analysis and more precise measurements are
needed to distinguish the models; if, on the other hand, these
decays are not observed, one can constrain the parameter space of
the LHT model. This table also indicates that, even in the optimum
cases, the rate of $t\rightarrow cl\bar{l}$ is only $10^{-6}$,
which implies that it is difficult to detect such a decay.
\section{Conclusion}
In this work, we investigate the FCNC three-body decays
$t\rightarrow cVV ~(V=\gamma, Z, g)$ and $t\rightarrow
cf\bar{f}~(f=b, e, \mu, \tau)$ in the LHT model. We conclude that:
i) The rates of these decays strongly depend on the mirror quark
mass splitting. ii) The rates rely significantly on the flavor
structure of the mirror quarks, namely $V_{H_u}$ and $V_{H_d}$.
iii) In the optimum case of the LHT model, the rates of the decays
$t\rightarrow cgg$ and $t\rightarrow cb\bar{b}$ are large enough to
be observed at present or future colliders, and with the running of
the LHC, one may get some useful information about the flavor
structure of the LHT model by detecting these decays.
\begin{acknowledgments}
We would like to thank Junjie Cao and Lei Wu for helpful discussions
and suggestions. This work is supported by the National Natural
Science Foundation of China under Grant Nos. 10775039 and 11075045,
by the Specialized Research Fund for the Doctoral Program of Higher
Education under Grant Nos. 20094104110001 and 20104104110001, and by
HASTIT under Grant No. 2009HASTIT004.
\end{acknowledgments}
\section{Introduction}
\label{sec:Introduction}
The success of deep neural networks (DNNs) \cite{Krizhevsky,lenet,googlenet,vgg,resnet}
has led to them being used in many real world applications.
However, these models are also known to be susceptible to adversarial attacks, i.e.,
minimal patterns crafted by attackers who try to fool learning machines
\cite{Goodfellow, Papernotb, szegedy, Nguyena, Eykholt,athalye3d}. Such adversarial patterns
do not affect human perception much, while they can manipulate learning machines, e.g., to give wrong classification outputs.
A DNN's complex interactions between different layers enable high accuracy under controlled settings, while they make the outputs unpredictable in \emph{untrained spots} where training samples are sparse.
If attackers can find such a spot close to a normal data sample,
they can manipulate DNNs by adding a very small (optimally invisible in computer vision applications) perturbation to the original sample,
leading to fatal errors,
e.g., manipulating an autonomous driving system can cause serious accidents.
Two attacking scenarios are considered in general---whitebox and blackbox.
The whitebox scenario assumes that the attacker has access to the complete target system, including the architecture and the weights of the DNN, as well as the defense strategy if the system is equipped with any.
Typical whitebox attacks optimize the classification output with respect to the input by backpropagating through the defended classifier
\cite{Carlinib, chen2017ead, sharma2017ead, moosavi2016deepfool}.
On the other hand, the blackbox scenario assumes that the attacker has only access to the output. Under this scenario, the attacker has to rely on blackbox optimization, where the objective can be computed for arbitrary inputs, but the gradient information is not directly accessible.
Although the whitebox attack
is more powerful,
it is much less likely
that attackers can get full knowledge of the target system in reality.
Accordingly, the blackbox scenario
is considered to be a more realistic
threat.
Existing blackbox attacks can be classified into two types---the transfer attack and the decision based attack.
In the transfer attack, the attacker trains a student network which mimics the output of the target classifier.
The trained student network is then used to get the gradient information for optimizing the adversarial input.
In the decision based attack,
the attacker simply
performs random walk exploration.
In the \emph{boundary attack} \cite{brendel2017decision}, a state-of-the-art method in this category,
the attacker first generates an initial adversarial sample from a given original sample
by drawing a uniformly distributed random pattern multiple times
until it happens to lead to misclassification.
Initial patterns generated in this way typically have too large amplitudes to be hidden from human perception.
The attacker therefore polishes the initial adversarial pattern by Gaussian random walk in order to minimize the amplitude, keeping the classification output constant.%
\footnote{In the case of the untargeted attack, the classification output is kept \emph{wrong}, i.e., random walk can go through the areas of any label except the true one.}
Here our question arises: is the Gaussian appropriate for driving the adversarial pattern to minimize the amplitude?
It could be a reasonable choice if we only considered the attacker's goal of minimizing the $L_2$ norm of the adversarial pattern.
However, it is also required to keep the classification output constant through the whole random walk sequence.
Provided that the decision boundary of the classifier has a complicated structure, reflecting the real-world data distribution,
we expect that a more efficient random walk can exist.
In this paper, we pursue this possibility, and
investigate how statistics of random variables affect the performance of attacking strategies.
To this end,
we generalize the boundary attack,
and propose the L\'evy-Attack{} where
the random walk exploration is
driven by symmetric $\alpha$-stable random variables.
We expect that
the impulsive characteristic of the $\alpha$-stable distribution induces sparsity in random walk steps,
which would drive adversarial patterns along the complicated decision boundary structure efficiently.
Naturally, our expectation is reasonable only if the decision boundary has some structure aligned to the coordinate system defined in the data space,
so that moving along canonical directions is more likely to preserve the classification output than moving in isotropic directions.
In our experiments
on MNIST and CIFAR10 datasets,
L\'evy-Attack{} with $\alpha \sim 1.0$ or less shows significantly better performance than
the original boundary attack with Gaussian random walk.
This implies that our hypothesis on the decision boundary holds at least in those two popular image benchmark datasets.
Our results also give an insight into the recently found fact in the whitebox attacking scenario that the choice of the norm for measuring the amplitude of the adversarial patterns is essential.
\section{Proposed Method}
In this section, we first introduce the $\alpha$-stable distribution,
and propose the L\'evy-Attack{} as a generalization of the boundary attack.
\subsection{Symmetric $\alpha$-stable Distribution}
\label{sec:alpha_stable}
The symmetric $\alpha$-stable distribution is a generalization of the Gaussian distribution which can model characteristics too impulsive for the Gaussian model. This family of distributions
is most conveniently defined by their characteristic functions \cite{samorodnitsky94} due to the lack of an analytical expression for the probability density function.
The characteristic function is given as
\begin{equation}
\phi (s) = \exp [i\mu s - \left| {\gamma s} \right|^\alpha ],
\end{equation}
where $\alpha \in (0,2], \mu \in (-\infty,\infty)$, and $\gamma \in (0,\infty)$ are parameters.
We denote the $D$-dimensional symmetric $\alpha$-stable distribution by $\mathcal{SA}_D(\alpha, \mu, \gamma)$.
$\alpha $ is the characteristic exponent expressing
the degree of \emph{impulsiveness} of the distribution---%
the smaller $\alpha $ is, the more impulsive the distribution is.
The symmetric $\alpha$-stable distribution reduces to the Gaussian distribution for $\alpha = 2$,
and to the Cauchy distribution for $\alpha = 1$.
$\mu$ is the location parameter, which corresponds to the mean in the Gaussian case,
while
$\gamma$ is the scale parameter, measuring the spread of the samples around the mean, which corresponds to the variance in the Gaussian case.
For more details on $\alpha$-stable distributions, readers are referred to \cite{samorodnitsky94}.
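Since the density has no closed form, samples are usually drawn via the Chambers--Mallows--Stuck construction. The sketch below is our own minimal implementation for the symmetric case ($\mu=0$, $\gamma=1$), illustrating how smaller $\alpha$ yields heavier tails.

```python
import numpy as np

def sym_alpha_stable(alpha, size, rng):
    """Draw symmetric alpha-stable samples (Chambers-Mallows-Stuck)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if np.isclose(alpha, 1.0):
        return np.tan(U)                      # Cauchy special case
    return (np.sin(alpha * U) / np.cos(U) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * U) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(1)
x_heavy = sym_alpha_stable(0.5, 10000, rng)   # strongly impulsive
x_gauss = sym_alpha_stable(2.0, 10000, rng)   # reduces to a Gaussian
```

For $\alpha=2$ the construction collapses to $2\sin(U)\sqrt{W}$, i.e., a Gaussian draw, while for small $\alpha$ a small fraction of steps become very large---the sparsity in random walk steps that L\'evy-Attack{} exploits.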
\begin{algorithm}[t]
\begin{algorithmic}[1]
\begin{flushleft}
\textbf{Input:}
Classifier $\bff(\cdot)$, original image $\bfx$ with label $\bfy$, \\
max. number $T$ of iterations, termination threshold $\psi$ \\
\textbf{Output:} Adversarial sample $\bfx^-$
\end{flushleft}
\REPEAT
\STATE $\bfx^{-}_{0} \leftarrow \bfx + {\bf\Delta}$ for ${\bf\Delta} \sim \mathcal{U}_D (0,255)$
\UNTIL{$\bfy \neq \bff(\bfx^{-}_{0}) $}
\FOR {$t = 0$ to $T-1$}
\STATE $(\bfx^{-}_{t+1}, \epsilon) \leftarrow \textit{$\alpha$-stable random update}(\bfx^{-}_{t})$
\IF{$\bfy = \bff(\bfx^{-}_{t+1})$}
\STATE $\bfx^{-}_{t+1} \leftarrow \bfx^{-}_{t}$
\ENDIF
\IF { $\epsilon < \psi$}
\STATE \textbf{break}
\ENDIF
\ENDFOR
\end{algorithmic}
\caption{(Untargeted) L\'evy-Attack{} }
\label{alg:LevyAttack}
\vspace{0mm}
\end{algorithm}
\subsection{L\'evy-Attack{}}
\label{sec:boundary_attack}
Now, we propose our L\'evy-Attack{} as a generalization of the boundary attack \cite{brendel2017decision},
a simple yet effective attack under the blackbox scenario, where the attacker has access only to the classification output.
We denote the classifier output by $\bfy = \bff(\bfx)$, where
$\bfx$ is a data point, and $\bff$ is the target (blackbox) classifier.
The L\'evy-Attack{} performs the procedure as described in Algorithm~\ref{alg:LevyAttack}, which reduces to the original boundary attack if we set $\alpha=2$.
While the attack is very simple, it can be a very effective blackbox attack \cite{brendel2017decision}.
Naturally,
the success of the L\'evy-Attack{} relies on the effectiveness of the proposal distribution.
In the proposal distribution, we accommodate sampling from a symmetric $\alpha$-stable distribution, $\bfeta_t \sim \mathcal{SA}_D(\alpha,0,1)$.
The perturbation is rescaled such that $\| \bfeta_t \|_2 = \delta \cdot d(\bfx^{-}_{t}, \bfx)$, where $d(\bfx^{-}_{t}, \bfx) = \| \bfx^{-}_{t} - \bfx \|^2_2$ and $\delta$ is the relative size of the perturbation.
An orthogonal step is then taken, where $\bfeta_t$ is projected onto a sphere around the original image such that $d(\bfx^{-}_{t}, \bfx) = d(\bfx^{-}_{t+1}, \bfx)$.
Finally, a step is taken towards the original image so that the adversarial perturbation is reduced by a small amount $\epsilon$, i.e., $d(\bfx^{-}_{t}, \bfx) - d(\bfx^{-}_{t+1}, \bfx) = \epsilon \cdot d(\bfx^{-}_{t}, \bfx)$.
$\delta$ and $\epsilon$ are the two hyper-parameters, and they are dynamically tuned to adjust to the local geometry of the decision boundary:
$\delta$ is tuned so that around $50\%$ of the orthogonal perturbations remain adversarial,
while the length $\epsilon$ of the step towards the original image is adjusted according to the success rate of the step---if the success rate is too high, $\epsilon$ is increased, and vice versa.
As the random walk moves closer to the original image, the success rate of the attack decreases. The attack gives up further exploration as $\epsilon$ converges to $0$.
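To make the geometry concrete, the following sketch reimplements a single random-walk step in simplified form (our own illustration, not the released boundary-attack code): rescale the draw to relative size $\delta$, project it onto the sphere around the original image, then contract towards the original by the factor $\epsilon$.

```python
import numpy as np

def levy_step(x_adv, x_orig, eta, delta=0.1, eps=0.01):
    """One simplified random-walk step of Algorithm 1."""
    d = x_adv - x_orig
    dist = np.linalg.norm(d)
    eta = eta * delta * dist / np.linalg.norm(eta)   # relative size delta
    # Orthogonal step: remove the radial component of eta, then rescale
    # so the candidate stays on the sphere of radius dist around x_orig.
    cand = x_adv + eta - (eta @ d) / dist**2 * d
    cand = x_orig + (cand - x_orig) * dist / np.linalg.norm(cand - x_orig)
    # Step towards the original image: shrink the distance by factor eps.
    return x_orig + (1.0 - eps) * (cand - x_orig)

rng = np.random.default_rng(0)
x_orig = np.zeros(784)
x_adv = rng.normal(size=784)
x_next = levy_step(x_adv, x_orig, rng.standard_cauchy(784))  # alpha=1 draw
```

In the actual attack, the candidate is only accepted if it is still misclassified, and $\delta$ and $\epsilon$ are adapted online as described above.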
\begin{table}[t]
\caption{The mean $ \mathcal{S}_{m}$ and the median $ \mathcal{S}_{d}$ of the $L_\infty$, $L_1$, and $L_2$ norms of the adversarial patterns generated by L\'evy-Attack{} for the MNIST dataset.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Attack} & \multicolumn{2}{c}{\textbf{$L_\infty$}} & \multicolumn{2}{c}{\textbf{$L_1$}} & \multicolumn{2}{c}{\textbf{$L_2$}} \\
\cline{2-7}
& $\mathcal{S}_{m}$ & $\mathcal{S}_{d}$ & $\mathcal{S}_{m}$ & $\mathcal{S}_{d}$ & $\mathcal{S}_{m}$ & $\mathcal{S}_{d}$ \\
\hline
Gaussian & \textbf{0.56} & \textbf{0.56} & 11.36 & 10.73 & 1.38 & 1.39 \\
\hline
$\alpha = 1.5$ & 0.57 & 0.58 & 9.62 & 9.16 & 1.31 & 1.31\\
\hline
$\alpha = 1.0$ & 0.57 & 0.58 & 8.89 & \textbf{8.54} & \textbf{1.29} & \textbf{1.30} \\
\hline
$\alpha = 0.5$ & 0.58 & 0.58 & \textbf{8.84} & 8.71 & 1.30 & 1.32 \\
\hline
\end{tabular}
\label{table:mnist}
\end{center}
\end{table}
\begin{figure}[t]
\centering
\centering
\includegraphics[width=\linewidth]{images/mnist_v2/701.png}
\centering
\includegraphics[width=\linewidth]{images/mnist_v2/906.png}
\caption{Adversarial samples generated by L\'evy-Attack{} on the MNIST dataset for ``7'' and ``9''.
``Gaussian'' corresponds to $\alpha = 2$, with which L\'evy-Attack{} reduces to the original boundary attack \cite{brendel2017decision}.
The classification output is shown at the top right corner of each image. In each block (for ``7'' as well as ``9''), the top row displays adversarial samples generated with different $\alpha$, while the bottom row displays the corresponding adversarial patterns (the differences from the original image). }
\label{fig:mnist}
\end{figure}
\section{Experiment}
We report on experiments performed using our L\'evy-Attack{} on the following datasets:
\begin{itemize}
\item MNIST: The MNIST dataset consists of $60,000$ images in total, with $50,000$ images for training and $10,000$ images for testing. It has $10$ different classes each corresponding to the $10$ numerical digits. The image size is $28 \times 28$.
\item CIFAR10: This dataset also contains $50,000$ training images and $10,000$ test images. The images are of resolution $32 \times 32 \times 3$ with $10$ different classes in total.
\end{itemize}
In the MNIST experiment, we target the state-of-the-art robust classifier proposed by Madry et al. \cite{madry2017towards},%
\footnote{https://github.com/MadryLab/mnist\_challenge}
where the classifier is trained, in addition to the original training set, on adversarial samples generated by the PGD attack with $L_\infty$ distortion bounded by $0.3$.
The classification accuracy on the original test samples is $98.68\%$.
In the CIFAR10 experiment, we trained a state-of-the-art ResNet model \cite{resnet}.
The classification accuracy on the original test samples is $92.93\%$.
To generate adversarial samples by L\'evy-Attack{},
we modify the code
provided by \cite{brendel2017decision} for the boundary attack,
so that the random walk is performed with
the symmetric $\alpha$-stable distribution instead of the Gaussian distribution.
We evaluated adversarial samples for
$\alpha = 2.0, 1.5, 1.0$, and $0.5$.
The other parameters specifying the $\alpha$-stable distribution are set to $\mu = 0$ and $\gamma = 1$.
We limit the number of random walk steps to $5,000$.
Having such an upper-bound is reasonable
because it is not realistic to assume that
the attacker can access the classifier output an unlimited number of times.
\begin{table}[t]
\caption{The mean $ \mathcal{S}_{m}$ and the median $ \mathcal{S}_{d}$ of the $L_\infty$, $L_1$, and $L_2$ norms of the adversarial patterns generated by L\'evy-Attack{} for the CIFAR10 dataset.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Attack} & \multicolumn{2}{c}{\textbf{$L_\infty$}} & \multicolumn{2}{c}{\textbf{$L_1$}} & \multicolumn{2}{c}{\textbf{$L_2$}} \\
\cline{2-7}
& $\mathcal{S}_{m}$ & $\mathcal{S}_{d}$ & $\mathcal{S}_{m}$ & $\mathcal{S}_{d}$ & $\mathcal{S}_{m}$ & $\mathcal{S}_{d}$ \\
\hline
Gaussian & \textbf{2.92} & 2.47 & 895.22 & 755.06 & 23.72 & 20.45 \\
\hline
$\alpha = 1.5$ & 2.99 & 2.44 & 859.49 & 708.54 & 23.15 & 19.49 \\
\hline
$\alpha = 1.0$ & 2.97 & 2.427 & 847.20 & 700.42 & 23.06 & 19.39 \\
\hline
$\alpha = 0.5$ & 2.94 & \textbf{2.421} & \textbf{826.29} & \textbf{685.76} & \textbf{22.78} & \textbf{19.28} \\
\hline
\end{tabular}
\label{table:cifar}
\end{center}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/cifar_v2/304.png}
\includegraphics[width=\linewidth]{images/cifar_v2/406.png}
\caption{Adversarial samples generated by L\'evy-Attack{} on the CIFAR10 dataset for ``cat'' and ``deer''.
}
\label{fig:cifar}
\end{figure}
\begin{table}
\caption{The average number of iterations that L\'evy-Attack{} performed to generate adversarial samples.}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Attack} & MNIST & CIFAR10 \\
\hline
Gaussian & 2700.22 & 4996.49 \\
\hline
$\alpha = 1.5$ & 2629.04 & 4995.96 \\
\hline
$\alpha = 1.0$ & 2792.52 & 4987.04 \\
\hline
$\alpha = 0.5$ & 3407.54 & 4997.37 \\
\hline
\end{tabular}
\label{table:iterations}
\end{center}
\end{table}
For both datasets, we randomly sample $N=1,000$ images from the test set,
and evaluate the quality of the adversarial patterns.
As evaluation scores,
we use the mean and the median of $3$ different $L_p$-norms for $p = \infty, 1$, and $2$, over the 1,000 samples:
\begin{align}
\mathcal{S}_{m} &= \frac{1}{N} \sum^{N}_{i=1} ( \| \bftau_i \|_p ) ,
& \mathcal{S}_{d} &= \mathop{\mathrm{median}}_{i=1}^{N} ( \| \bftau_i \|_p ) ,
\notag
\end{align}
where $\{\bftau_i\}$ are the adversarial patterns.
Smaller norms indicate that the adversarial pattern is less visible,
and therefore a better attack.
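Concretely, with the adversarial patterns flattened into the rows of an array, the two scores can be computed as in the following sketch (our own helper, assuming only NumPy):

```python
import numpy as np

def norm_scores(taus, p):
    """Mean S_m and median S_d of the L_p norms of the adversarial
    patterns, where taus has one flattened pattern per row."""
    norms = np.linalg.norm(taus, ord=p, axis=1)
    return norms.mean(), np.median(norms)
```

Passing `p = np.inf`, `1`, or `2` reproduces the three columns of the tables.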
Table~\ref{table:mnist} shows the results
on the MNIST dataset, where we see that the L\'evy-Attack{} with $\alpha$ smaller than 2 (Gaussian)
gives significantly smaller $L_1$ and $L_2$ norms with the $L_{\infty}$ norm almost unchanged.
Similar results are obtained on the CIFAR10 dataset (Table~\ref{table:cifar}),
where $\alpha < 2$ gives better $L_1$ and $L_2$ norms with the $L_{\infty}$ norm almost unchanged,
although the performance difference is smaller than the MNIST results.
Table~\ref{table:iterations} summarizes the average number of iterations the L\'evy-Attack{} performs.
We see the tendency that smaller $\alpha$ leads to more iterations,
which implies that the $\alpha$-stable random walk continues exploring after the Gaussian random walk has already been terminated due to a low success rate in further adversarial exploration.
Also, Tables~\ref{table:mnist} and \ref{table:cifar} show that the L\'evy-Attack{} for $\alpha < 2$ reaches a point closer to the original sample than the Gaussian ($\alpha = 2$)
random walk.
These results imply that the impulsive random walk is well suited to exploring the data space without crossing decision boundaries,
and
indirectly support our hypothesis in Section~\ref{sec:Introduction}---that decision boundaries have some structure aligned with the coordinate system.
Figs.~\ref{fig:mnist} and \ref{fig:cifar} show a few illustrative examples of adversarial samples and adversarial patterns generated by L\'evy-Attack{}.
For each block of examples ("7" and "9" in MNIST, and "cat" and "deer" in CIFAR10),
the top row shows the generated adversarial samples, while the bottom row shows the corresponding adversarial patterns (the differences from the original image).
In the "7" example of MNIST (Fig.~\ref{fig:mnist}), L\'evy-Attack{} with $\alpha \geq 1.0$ consistently tries to modify the sample close to "2",
while L\'evy-Attack{} for $\alpha = 0.5$ tries to modify the sample close to "7".
Apparently, the latter is more efficient, i.e., it requires fewer pixels to make "7" to "9" than to make "7" to "2".
The same applies to the "9" example---it seems more efficient to make "9" close to "7" than to make "9" to "0" or "3".
However, only L\'evy-Attack{} with a very small $\alpha$ can find those efficient solution,
because it is little likely to get a sparse random walk step if it is driven by non-sparse distributions like Gaussian.
The CIFAR10 examples, although less obvious than the MNIST examples, also show a similar tendency---the $\alpha$-stable random walk with smaller $\alpha$
provides sparser adversarial patterns.
\section{Discussion}
Many defense strategies have been proposed to counter adversarial attacks \cite{madry2017towards, Pouya, SriArXiv18}.
However, it has happened many times that a new defense strategy is broken by a newer attacking strategy only a few months after its proposal.
Thus, the adversarial defense problem has not been solved even on the toy MNIST data set,
and defense is considered much harder for larger data sets.
One recent finding in this ensuing arms race between new defense and attacking strategies
is the importance of the metric of the distortion, i.e., how to measure the distance from the original sample.
In whitebox attacks,
the $L_\infty$, $L_1$, and $L_2$ norms are often used to measure the distortion
\cite{Goodfellow, madry2017towards, Carlinib, chen2017ead}.
Interestingly, the state-of-the-art defense method proposed by Madry et al. \cite{madry2017towards} has been shown to be robust against attacks with $L_\infty$-bounded perturbations,
while it has been found to be vulnerable to attacks with elastic net ($L_1$ plus $L_2$) bounded perturbations \cite{chen2017ead, sharma2017ead}.
Naturally, the choice of the perturbation metric impacts human perception, e.g.,
limiting the $L_2$ norm improves the visual quality of the image, while limiting the $L_1$ norm assures sparsity of the distortion.
However, the finding above implies that sparser regularization might help gradient-based whitebox optimization find stronger adversarial samples.
Our results in this paper might imply something similar or at least related---sparser random walk steps help exploration move along the decision boundaries,
and produce stronger adversarial samples under the blackbox scenario.
Further investigation is left as future work.
\section{Conclusion}
In this paper, we investigated how the statistics of the driving random variables affect a random walk based blackbox attacking strategy.
Specifically, we proposed L\'evy-Attack{}, a generalization of the state-of-the-art {boundary attack},
where random walk is driven by symmetric $\alpha$-stable random variables.
Our experiments showed that
the impulsive characteristics of the $\alpha$-stable distribution enable efficient exploration in the data space without crossing decision boundaries,
producing stronger adversarial samples.
In our future work, we will investigate the use of explanation methods \cite{bach2015pixel, MonDSP18, SamITU18b, LapNCOMM19} for adversarial attack detection and further study the relation between norm bounds, sparse exploration, and the quality of adversarial samples.
\bibliographystyle{unsrt}
\section{Introduction}
\addtolength{\belowdisplayskip}{-3mm}
\addtolength{\abovedisplayskip}{-3mm}
Recently, the enlarged Krylov subspace methods \cite{EKS} were introduced with the aim of obtaining methods that converge faster than classical Krylov methods and are parallelizable with less communication, whereby communication is the data movement between different levels of the memory hierarchy (sequential case) and between different processors (parallel case).
Different methods and techniques have been previously introduced for reducing communication in Krylov subspace methods such as Conjugate Gradient (CG) \cite{cgor}, Generalized Minimal Residual (GMRES) \cite{ssch}, bi-Conjugate Gradient \cite{bicg1,bicg2}, and bi-Conjugate Gradient Stabilized \cite{bicgstab}. The interest in such methods is due to the communication bottleneck on modern-day computers and the fact that the Krylov subspace methods are governed by BLAS 1 and BLAS 2 operations that are communication-bound.
These methods and techniques can be categorized depending on how the communication reduction is achieved. There are three main categories, where the reduction is achieved at the mathematical/theoretical level, the algorithmic level, or the implementation level.
The first category is introducing methods based on different Krylov subspaces
such as the augmented Krylov methods \cite{augmKSM, augmKSM2}, and the Block Krylov methods \cite{bcg} that are based on the augmented and block Krylov subspaces respectively. The recently introduced enlarged Krylov subspace methods fall into this category since the methods search for the approximate solution in the enlarged Krylov subspace. The second category is to restructure the algorithms such as the s-step methods that compute $s$ basis vectors per iteration \cite{sstepcg1, chronop,walker, erhel, Carson} and the communication avoiding methods that further reduce the communication \cite{cagmres, hoemmen, grigori}. The third category is to hide the cost of communication by overlapping it with other computation, like pipelined CG \cite{hidecg,hidecg2} and pipelined GMRES \cite{hide}.
In this paper, we introduce the s-step enlarged Krylov subspace methods, whereby $s$ iterations of the enlarged Krylov subspace methods are merged in one iteration. The idea of s-step methods is not new, as mentioned previously. However, the aim of this work is the introduction of methods that reduce communication with respect to the classical Krylov methods, at the three aforementioned levels (mathematical/theoretical, algorithmic, and implementation level).
Similarly to the enlarged Krylov subspace methods, the s-step enlarged Krylov subspace methods have a faster convergence than Krylov methods, in terms of iterations. In addition, computing $st$ basis vectors of the enlarged Krylov subspace $\mathscr{K}_{k,t}(A,r_0)$ at the beginning of each s-step enlarged Krylov subspace iteration reduces the number of sent messages in a distributed memory architecture.
We introduce several s-step enlarged Conjugate Gradient versions, based on the short recurrence enlarged CG methods (SRE-CG and SRE-CG2) and MSDO-CG presented in \cite{EKS}. After briefly introducing CG, a Krylov projection method for symmetric (Hermitian) positive definite (SPD) matrices, and the enlarged CG methods in section \ref{sec:overview}, we discuss the new s-step enlarged CG versions (section \ref{sec:sstep}) in terms of numerical stability (section \ref{sec:SRNum}), preconditioning (section \ref{sec:precCG}), and communication reduction in parallel (section \ref{sec:par}). Although we only consider a distributed memory system in this article, the introduced methods reduce communication even on shared memory systems. Finally, we conclude in section \ref{sec:conc}.
\section{From Conjugate Gradient (CG) to Enlarged Conjugate Gradient Methods}\label{sec:overview}
The Conjugate Gradient method of Hestenes and Stiefel \cite{cgor} was introduced in 1952. Since then, different CG versions have been introduced for different purposes. In 1980, Dianne O'Leary introduced the block CG method \cite{bcg} for solving an SPD system with multiple right-hand sides. Block CG performs less work than solving each system separately using CG. In addition, it may converge faster in terms of iterations and time in some cases discussed in \cite{bcg}. In 1989, Chronopoulos and Gear introduced the s-step CG method, which performs $s$ CG iterations simultaneously with the goal of reducing communication by performing more flops on the data in fast memory. Several CG versions were introduced for solving successive linear systems with different right-hand sides by recycling the computed Krylov subspace, such as \cite{erhelASCG}. Moreover, several preconditioned and parallelizable CG versions were introduced, such as deflated CA-CG \cite{Carson2}, MSD-CG \cite{msdcg}, and augmented CG \cite{augmKSM, augmKSM2}. Recently, enlarged Conjugate Gradient methods such as SRE-CG, SRE-CG2, and MSDO-CG were introduced \cite{EKS}. In this section, we briefly discuss CG, s-step CG, and the enlarged CG versions. For a brief overview of other related CG versions, such as block CG, coop-CG, and MSD-CG, refer to \cite{sophiethesis}.
The CG method is a Krylov projection method that finds a sequence of approximate solutions $x_k \in x_0 + \mathcal{K}_k(A,r_0)$ ($k>0$) of the system $Ax = b$, by imposing the Petrov-Galerkin condition, $\; r_k \perp \mathcal{K}_k$, where $\mathcal{K}_k (A, r_0) = span\{ r_0, Ar_0, A^2 r_0,..., A^{k-1} r_0 \}$ is the Krylov subspace of dimension $k$, $x_0$ is the initial iterate, and $r_0$ is the initial residual. At the $k^{th}$ iteration, CG computes the new approximate solution $x_k = x_{k-1} +\alpha_k p_k$ that minimizes $\phi (x) = \frac12 (x)^t A x - b^t x$ over the corresponding space $x_0 + \mathcal{K}_k(A, r_0)$, where $k>0$, $p_k= r_{k-1}+\beta_k p_{k-1} \in \mathcal{K}_k(A, r_0)$ is the $k^{th}$ search direction, $p_1 = r_0$, and $\alpha_{k} = \frac{(p_{k})^t r_{k-1}}{(p_{k})^t A p_{k}} = \frac{||r_{k-1}||^2_2} {||p_{k}||_A^2}$ is the step along the search direction. As for $\beta_k = - \frac{(r_{k-1})^t A p_{k-1}} {(p_{k-1})^t A p_{k-1}} = \frac{||r_{k-1}||^2_2}{||r_{k-2}||^2_2}$, it is defined so that the search directions are A-orthogonal ($p_k^tAp_i = 0$ for all $i\neq k$), since otherwise the Petrov-Galerkin condition is not guaranteed.
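For reference, these recurrences translate into the following minimal dense CG sketch (our own illustrative version, using the convention $r = b - Ax$ and starting from $x_0 = 0$):

```python
import numpy as np

def cg(A, b, tol=1e-12, kmax=1000):
    """Textbook CG: x_k minimizes phi(x) over x_0 + K_k(A, r_0)."""
    x = np.zeros(len(b))
    r = b - A @ x                      # initial residual (x_0 = 0)
    p = r.copy()                       # first search direction p_1 = r_0
    rs_old = r @ r
    for _ in range(kmax):
        if np.sqrt(rs_old) <= tol:
            break
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length ||r||^2 / ||p||_A^2
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs_old) * p  # beta_k = ||r_{k-1}||^2 / ||r_{k-2}||^2
        rs_old = rs_new
    return x
```

Each iteration requires one sparse matrix-vector product and two dot products, which are the communication-bound BLAS 1 and BLAS 2 kernels mentioned in the introduction.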
The s-step CG method \cite{chronop} introduced by Chronopoulos and Gear in 1989 is also a Krylov projection method that solves the system $Ax = b$ by imposing the Petrov-Galerkin condition. However, it finds a sequence of approximate solutions $x_{k} \in x_0 + \mathcal{K}_{sk}(A,r_0)$, where $k>0$, $s>0$, and $\mathcal{K}_{sk}(A,r_0) = span\{ r_0, Ar_0, A^2 r_0,..., A^{sk-2} r_0, A^{sk-1} r_0 \}$. At the $k^{th}$ iteration, $x_{k} = x_{k-1} + P_k\alpha_k$ is obtained by minimizing $\phi (x)$, where $P_k$ is a matrix containing the $s$ search directions and $\alpha_k = ((P_{k})^t A P_{k})^{-1} P_k^tr_{k-1} $ is a vector containing the $s$ corresponding step lengths. Initially, $P_1 = R_0 = [r_0 \; Ar_0 \;... \; A^{s-1}r_0]$ is defined as the first $s$ basis vectors of the Krylov subspace. Then $P_k = R_{k-1}+P_{k-1}\beta_k$ for $k>1$, where $R_{k-1} = [r_{k-1} \;Ar_{k-1}\; ... \;A^{s-1}r_{k-1}]$, and $\beta_k = - (P_{k-1}^tAP_{k-1})^{-1}(P_{k-1}^tAR_{k-1})$ is an $s \times s$ matrix chosen so that $P_k^tAP_{k-1} = 0$.
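The s-step iteration can likewise be sketched densely. The version below is our own minimal illustration (with the convention $r = b - Ax$, so the step $P_k\alpha_k$ is added, and with $\beta_k$ chosen to enforce $P_k^tAP_{k-1}=0$); it is not Chronopoulos and Gear's exact implementation:

```python
import numpy as np

def sstep_cg(A, b, s, tol=1e-10, kmax=500):
    """Minimal dense sketch of an s-step CG iteration: build s Krylov
    directions R from the current residual, A-conjugate them against the
    previous block P, then take all s steps at once."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()                           # r_0 = b - A x_0 with x_0 = 0
    rho0 = np.linalg.norm(r)
    P = None
    for _ in range(kmax):
        if np.linalg.norm(r) <= tol * rho0:
            break
        R = np.empty((n, s))               # R = [r, A r, ..., A^{s-1} r]
        R[:, 0] = r
        for i in range(1, s):
            R[:, i] = A @ R[:, i - 1]
        if P is None:
            P = R                          # P_1 = R_0
        else:
            # beta enforces A-conjugacy of P_k with P_{k-1}
            beta = -np.linalg.solve(P.T @ A @ P, P.T @ (A @ R))
            P = R + P @ beta
        alpha = np.linalg.solve(P.T @ A @ P, P.T @ r)
        x = x + P @ alpha                  # s step lengths applied at once
        r = r - A @ (P @ alpha)
    return x
```

In exact arithmetic, conjugacy to the immediately previous block suffices, since the residuals are automatically orthogonal to the older blocks.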
On the other hand, the enlarged CG methods are enlarged Krylov projection methods that find a sequence of approximate solutions $x_k \in x_0 + \mathscr{K}_{k,t}(A,r_0)$ ($k>0$) of the system $Ax = b$, by imposing the Petrov-Galerkin condition, $\; r_k \perp \mathscr{K}_{k,t}(A,r_0)$, where $\mathscr{K}_{k,t} (A, r_0) = span\{ T(r_0), AT(r_0), A^2 T(r_0),..., A^{k-1} T(r_0) \}$ is the enlarged Krylov subspace of dimension at most $tk$, $x_0$ is the initial iterate, $r_0$ is the initial residual, and $T(r_0)$ is an operator that splits $r_0$ into $t$ vectors based on a domain decomposition of the matrix $A$. Several enlarged CG versions were introduced in \cite{EKS}, such as MSDO-CG, LRE-CG, SRE-CG, SRE-CG2, and the truncated SRE-CG2.
Moreover, in \cite{ECG} a block variant of SRE-CG is proposed, whereby the number of search directions per iteration is reduced using deflation.
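The splitting operator $T(\cdot)$ has a simple realization: given a partition of the unknowns induced by the domain decomposition of $A$, each vector of $T(r_0)$ keeps the entries of $r_0$ on one domain and is zero elsewhere. A sketch, where `domains` is the assumed partition:

```python
import numpy as np

def split_residual(r0, domains):
    """T(r0): split r0 into t vectors along a partition of the unknowns.
    Column j carries the entries of r0 on domain j and is zero elsewhere,
    so the columns sum back to r0."""
    T = np.zeros((len(r0), len(domains)))
    for j, idx in enumerate(domains):
        T[idx, j] = r0[idx]
    return T
```

By construction, the $t$ columns are disjointly supported and span a subspace containing $r_0$, which is what makes the enlarged Krylov subspace contain the classical one.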
\section{s-step Enlarged CG versions}\label{sec:sstep}
The aim of s-step enlarged CG methods is to merge $s$ iterations of the enlarged CG methods, and perform more flops per communication, in order to reduce communication.
In the case of the SRE-CG and SRE-CG2 versions, the reformulation into s-step versions is straightforward, since these methods build an A-orthonormal basis $\{T(r_0), AT(r_0),...,A^kT(r_0)\}$ and update the approximate solutions $x_k$. The basis construction is independent from the computation of the consecutive approximate solutions. The challenge, however, is in constructing a numerically stable A-orthonormal basis of $st$ vectors, where $t$ is the number of domains and $s$ is the number of merged iterations.
As for MSDO-CG, at each iteration $k$, $t$ search directions are built, A-orthonormalized, and used to update the approximate solution. Moreover, the construction of the search directions depends on the previously computed approximate solution. Thus, merging $s$ iterations of the MSDO-CG algorithm requires more work, since it is not possible to separate the search directions' construction from the solution's update. Hence, a new version will be proposed, where a modified enlarged Krylov subspace is built.
\subsection{s-step SRE-CG and SRE-CG2} \label{sec:sstepS}
The short recurrence enlarged CG (SRE-CG) and SRE-CG2 methods, are iterative enlarged Krylov subspace projection methods that build at the $k^{th}$ iteration, an A-orthonormal ``basis" $Q_k$ ($Q_k^tAQ_k = I$) for the enlarged Krylov subspace \vspace{-5mm}$$\mathscr{K}_{k,t} = span\{T(r_0), AT(r_0),...,A^{k-1}T(r_0)\},$$ and approximate the solution, $x_k = x_{k-1}+Q_k\alpha_k$, by imposing the orthogonality condition on $r_k = r_{k-1} - AQ_k\alpha_k$, ($r_k \perp \mathscr{K}_{k,t}$), and minimizing $$\phi(x) = \frac{1}{2}x^tAx - x^tb,$$ where $Q_k$ is an $n\times kt$ matrix and $T(r_0)$ is the set of $t$ vectors obtained by projecting $r_0$ on the $t$ distinct domains of $A$.
There are 2 phases in these methods, building the ``basis'' and updating the approximate solution. The difference between SRE-CG and SRE-CG2 is in the ``basis'' construction. After A-orthonormalizing $W_1 = \mathscr{T}_0$, where $\mathscr{T}_0$ is the matrix containing the $t$ vectors of $T(r_0)$, it is shown in \cite{EKS} that at each iteration $k\geq 3$, $W_k = AW_{k-1}$ has to be A-orthonormalized only against $W_{k-1}$ and $W_{k-2}$ and then against itself. Finally, the approximate solution $x_k$ and the residual $r_k$ are updated, $x_k = x_{k-1}+W_k\alpha_k$ and $r_k = r_{k-1} - AW_k\alpha_k$, where $\alpha_k = W_k^tr_{k-1}$. This is the SRE-CG method.
However, in finite arithmetic there might be a loss of A-orthogonality at the $k^{th}$ iteration between the vectors of $Q_k = [W_1, W_2, ..., W_k]$. Hence, in SRE-CG2 $W_k = AW_{k-1}$ is A-orthonormalized against all $W_i$'s for $i=1,2,..,k-1$.
The construction of $W_k$ matrix is independent from updating the approximate solution $x_k$. Thus it is possible to restructure the SRE-CG and SRE-CG2 algorithms by first computing $W_1$, $W_2,..., W_s$, and then updating $x_1$, $x_2,...,x_s$ as shown in Algorithm \ref{alg:restSRE-CG} and Algorithm \ref{alg:restSRE-CG2}.
The advantage of such reformulations (Algorithm \ref{alg:restSRE-CG} and Algorithm \ref{alg:restSRE-CG2}) is that the matrix $A$ is fetched from memory once per construction of $st$ A-orthonormal vectors, as opposed to being fetched $s$ times in the SRE-CG and SRE-CG2 algorithms. However, the number of messages and words sent in parallel is unchanged, since the 2 corresponding algorithms perform the same operations but in a different order.
\begin{algorithm}[h!]
\centering
\caption{ Restructured SRE-CG2 }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $s$, s-step}
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $Ax=b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$ , ${\rho} = \rho_0$, $k = 1$;
\While {( ${\rho} > \epsilon {\rho_0}$ and $k < k_{max}$ )}
\If {($k==1$)}
\State A-orthonormalize $W_k = \mathscr{T}_0$, and let $Q = W_k$
\Else
\State A-orthonormalize $W_k = AW_{k-1}$ against $Q$
\State A-orthonormalize $W_k$ and let $Q = [Q \; W_k]$
\EndIf
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{k+i} = AW_{k+i-1}$ against $Q$
\State A-orthonormalize $W_{k+i}$ and let $Q = [Q \; W_{k+i}]$
\EndFor
\For {($i=k:k+s-1$)}
\State $\tilde{\alpha} = (W_i^t r_{i-1})$, \;\; $x_i = x_{i-1} + W_i\tilde{\alpha} $
\State $r_i = r_{i-1} - AW_i\tilde{\alpha} $
\EndFor
\State $k = k+s$, \;\; $\rho = ||r_{k-1}||_2$
\EndWhile
\end{algorithmic}}
\label{alg:restSRE-CG2}
\end{algorithm}
\begin{algorithm}[h!]
\centering
\caption{ Restructured SRE-CG }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $s$, s-step}
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $Ax=b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$ , ${\rho} = \rho_0$, $k = 1$;
\While {( ${\rho} > \epsilon {\rho_0}$ and $k < k_{max}$ )}
\If {($k==1$)}
\State A-orthonormalize $W_k = \mathscr{T}_0$
\Else
\State A-orthonormalize $W_{k} = AW_{k-1}$ against $W_{k-2}$ and $W_{k-1}$
\State A-orthonormalize $W_{k}$
\EndIf
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{k+i} = AW_{k+i-1}$ against $W_{k+i-2}$ and $W_{k+i-1}$
\State A-orthonormalize $W_{k+i}$
\EndFor
\For {($i=k:k+s-1$)}
\State $\tilde{\alpha} = (W_i^t r_{i-1})$, \;\; $x_i = x_{i-1} + W_i\tilde{\alpha} $
\State $r_i = r_{i-1} - AW_i\tilde{\alpha} $
\EndFor
\State $k = k+s$, \;\; $\rho = ||r_{k-1}||_2$
\EndWhile
\end{algorithmic}}
\label{alg:restSRE-CG}
\end{algorithm}
To reduce communication the inner for loops have to be replaced with a set of denser operations. Lines 3 till 12 of Algorithm \ref{alg:restSRE-CG2} can be viewed as a block Arnoldi A-orthonormalization procedure, whereas lines 4 till 13 of Algorithm \ref{alg:restSRE-CG} can be viewed as a truncated block Arnoldi A-orthonormalization procedure. As for the second loop, by updating $x_k$ and $r_k$ once, we obtain an s-step version.
At the $k^{th}$ iteration of an s-step enlarged CG method, $st$ new basis vectors of $ \mathscr{K}_{ks,t}= span\{T(r_0), AT(r_0),...,A^{sk-1}T(r_0)\}$, are computed and stored in $V_{k}$, an $n \times st$ matrix. Since, $$ \mathscr{K}_{ks,t}= \mathscr{K}_{(k-1)s,t} + span\{A^{s(k-1)}T(r_0), A^{s(k-1)+1}T(r_0)...,A^{sk-1}T(r_0)\},$$ then $Q_{ks} = [Q_{(k-1)s}, V_{k}]$, where $Q_{(k-1)s}$ is an $n \times (k-1)st$ matrix that contains the $(k-1)st$ vectors of $\mathscr{K}_{(k-1)s,t}$, and $Q_{ks}$ is $n \times kst$ matrix.
Then, $x_k = x_{k-1} + Q_{ks}\alpha_k \in \mathscr{K}_{ks,t}$, where $\alpha_k = (Q_{ks}^tAQ_{ks})^{-1}(Q_{ks}^tr_{k-1})$ is defined by minimizing $\phi(x) = \frac{1}{2}x^tAx - b^tx$ over $x_0 + \mathscr{K}_{ks,t}$. As a consequence, $r_k = b-Ax_k = r_{k-1}-AQ_{ks}\alpha_k \in \mathscr{K}_{(k+1)s,t}$ satisfies the Petrov-Galerkin condition $r_k \perp \mathscr{K}_{ks,t}$, i.e. $r_k^ty = 0$ for all $y \in \mathscr{K}_{ks,t}$.
In the s-step SRE-CG2 version, $Q_{ks}$ is A-orthonormalized ($Q_{ks}^tAQ_{ks} = I$), then \vspace{+1mm}
$$\alpha_k = (Q_{ks}^tAQ_{ks})^{-1}(Q_{ks}^tr_{k-1}) = Q_{ks}^tr_{k-1}.$$\vspace{+2mm}
But $r_{k-1} \perp \mathscr{K}_{(k-1)s,t}$, i.e. $r_{k-1}^ty = 0$ for all $y \in \mathscr{K}_{(k-1)s,t}$. Thus,
\begin{eqnarray}
\alpha_k &=& Q_{ks}^tr_{k-1} = [Q_{(k-1)s}, V_{k}]^tr_{k-1} = [0_{(k-1)st \times 1}; V_{k}^tr_{k-1}]\\
x_k &=& x_{k-1} + Q_{ks}\alpha_k = x_{k-1} +[Q_{(k-1)s}, V_{k}][0_{(k-1)st \times 1}; V_{k}^tr_{k-1}]\\
&=& x_{k-1} + V_{k}V_{k}^tr_{k-1} = x_{k-1} + V_{k}\tilde{\alpha}_k,
\end{eqnarray} where $\tilde{\alpha}_k=V_{k}^tr_{k-1}$. Then, $r_k = r_{k-1} - AV_{k}\tilde{\alpha}_k$.
\begin{algorithm}[h!]
\centering
\caption{ s-step SRE-CG2 }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $s$, s-step }
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $Ax=b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$ , $\rho = \rho_0$, $k = 1$;
\While {( ${\rho} > \epsilon {\rho_0}$ and $k < k_{max}$ )}
\State Let $j = (k-1)s+1$
\If {($k==1$)}
\State A-orthonormalize $W_j = \mathscr{T}_0$, and let $Q = W_j$
\Else
\State A-orthonormalize $W_j = AW_{j-1}$ against $Q$
\State A-orthonormalize $W_j$, and let $Q = [Q, \; W_j]$
\EndIf
\State Let $V = W_j$
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{j+i} = AW_{j+i-1}$ against $Q$
\State A-orthonormalize $W_{j+i}$, let $V = [V, \; W_{j+i}]$ and $Q = [Q, \; W_{j+i}]$
\EndFor
\State $\tilde{\alpha} = V^t r_{k-1}$
\State $x_k = x_{k-1} + V\tilde{\alpha} $
\State $r_k = r_{k-1} - AV\tilde{\alpha} $
\State $\rho = ||r_{k}||_2$, $k = k+1$
\EndWhile
\end{algorithmic}}
\label{alg:sstepSRE-CG2}
\end{algorithm}
\begin{algorithm}[h!]
\centering
\caption{ s-step SRE-CG }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $s$, s-step }
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $Ax=b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$ , $\rho = \rho_0$, $k = 1$;
\While {( ${\rho} > \epsilon {\rho_0} $ and $k < k_{max}$ )}
\State Let $j = (k-1)s+1$
\If {($k==1$)}
\State A-orthonormalize $W_j = \mathscr{T}_0$, and let $V = W_j$
\Else
\State A-orthonormalize $W_{j} = AW_{j-1}$ against $W_{j-2}$ and $W_{j-1}$
\State A-orthonormalize $W_{j}$ and let $V = W_{j}$
\EndIf
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{j+i} = AW_{j+i-1}$ against $W_{j+i-2}$ and $W_{j+i-1}$
\State A-orthonormalize $W_{j+i}$ and let $V = [V \; W_{j+i}]$
\EndFor
\State $\tilde{\alpha} = (V^t r_{k-1})$
\State $x_k = x_{k-1} + V\tilde{\alpha} $
\State $r_k = r_{k-1} - AV\tilde{\alpha} $
\State $\rho = ||r_{k}||_2$, $k = k+1$
\EndWhile
\end{algorithmic}}
\label{alg:sstepSRE-CG1}
\end{algorithm}
In Algorithm \ref{alg:sstepSRE-CG2}, the $st$ new vectors are computed similarly to Algorithm \ref{alg:restSRE-CG2}, where $t$ vectors are computed at a time ($W_j$), A-orthonormalized against all the previously computed vectors using the CGS2 A-orthonormalization method \cite{sophiethesis}, and finally A-orthonormalized against themselves using A-CholQR \cite{A-ortho} or Pre-CholQR \cite{A-ortho2, sophiethesis}. At the $k^{th}$ s-step iteration, all $kst$ vectors have to be stored in $Q_{ks}$.
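The A-CholQR kernel used here admits a short dense sketch: factor the Gram matrix $M = V^tAV = LL^t$ and set $Q = VL^{-t}$, so that $Q^tAQ = I$. The version below is our simplified illustration (without the preprocessing step of Pre-CholQR):

```python
import numpy as np

def a_cholqr(V, A):
    """A-CholQR: A-orthonormalize the columns of V.
    M = V^T A V is SPD when V has full column rank; with M = L L^T,
    Q = V L^{-T} satisfies Q^T A Q = I and span(Q) = span(V)."""
    M = V.T @ A @ V
    L = np.linalg.cholesky(M)
    # Q = V L^{-T}: solve L X = V^T for X = L^{-1} V^T, then Q = X^T
    return np.linalg.solve(L, V.T).T
```

The kernel needs a single reduction to form $M$, which is what makes it attractive for communication reduction compared to vector-by-vector A-orthonormalization.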
Note that in exact arithmetic, at the $k^{th}$ s-step iteration, the A-orthonormalization of $W_j$ for $j\geq (k-1)s+1$, against $\tilde{Q} = [W_1, W_2, W_3, ... W_{j-1} ]$ can be summarized as follows,
where $Q_{(k-1)s} = [W_1, W_2, W_3, ... W_{(k-1)s}]$,
\begin{eqnarray*}
W_j &=& AW_{j-1} - \tilde{Q}\tilde{Q}^tA(AW_{j-1}) \\
&=& AW_{j-1} - \tilde{Q}[W_1^tA(AW_{j-1});\, W_2^tA(AW_{j-1});\, ... \,;\, W_{j-1}^tA(AW_{j-1})] \\
&=& AW_{j-1} - \tilde{Q}[0;\, 0;\, ... \,;\,0;\, W_{j-2}^tA(AW_{j-1});\,W_{j-1}^tA(AW_{j-1})] \\
&=& AW_{j-1} - W_{j-1}W_{j-1}^tA(AW_{j-1}) - W_{j-2}W_{j-2}^tA(AW_{j-1}),\\ \vspace{-2mm}
\end{eqnarray*}
\noindent since $(AW_{i})^tAW_{j-1} = 0$ for all $i<j-2$ by the A-orthonormalization process.
This version (Algorithm \ref{alg:sstepSRE-CG1}) is called the s-step short recurrence enlarged conjugate gradient (s-step SRE-CG), where only the last $zt$ computed vectors ($z = max(s,3)$) are stored, and every $t$ vectors $W_j$ are A-orthonormalized against the previous $2t$ vectors $W_{j-2}$ and $W_{j-1}$ for $j>2$. As for $x_{k}$ and $r_{k}$, they are defined as in the s-step SRE-CG2 method.
For $s=1$, Algorithms \ref{alg:sstepSRE-CG2} and \ref{alg:sstepSRE-CG1} are reduced to the SRE-CG2 and SRE-CG methods, where the total number of messages sent in parallel is $6klog(t)$, assuming that the number of processors is set to $t$ and that the methods converge in $k$ iterations. Note that, more words are sent in the SRE-CG2 Algorithm \ref{alg:sstepSRE-CG2}, than in SRE-CG Algorithm \ref{alg:sstepSRE-CG1}, due to the A-orthonormalization procedure \cite{EKS}.
For $s>1$, Algorithms \ref{alg:sstepSRE-CG2} and \ref{alg:sstepSRE-CG1} send $(s-1)log(t)$ less messages and words per s-step iteration, than Algorithm \ref{alg:restSRE-CG2} and \ref{alg:restSRE-CG}, assuming we have $t$ processors with distributed memory. This communication reduction is due to the computation of one $\alpha$ which consists of an $n\times st$ matrix vector multiplication, rather than $s$ computations of $n\times t$ matrix vector multiplications. Thus the total number of messages sent in parallel in Algorithms \ref{alg:sstepSRE-CG2} and \ref{alg:sstepSRE-CG1} is $5s{k}_{s}log(t) + {k}_{s}log(t)$, where $k_s$ is the number of $s-step$ iterations needed till convergence. Similarly to the case of $s=1$, the s-step SRE-CG2 Algorithm \ref{alg:sstepSRE-CG2} sends more words than the s-step SRE-CG Algorithm \ref{alg:sstepSRE-CG1}.
Algorithms \ref{alg:sstepSRE-CG2} and \ref{alg:sstepSRE-CG1} will converge in $k_s$ iterations, where $k_s \geq \ceil{\frac{k}{s}}$, and $k$ is the number of iterations needed for convergence for $s=1$. In exact arithmetic, every s-step iteration of Algorithms \ref{alg:sstepSRE-CG2} and \ref{alg:sstepSRE-CG1} is equivalent to $s$ iterations of the SRE-CG2 and SRE-CG Algorithms, respectively. However, they might not be equivalent in finite arithmetic due to the loss of A-orthogonality of the $Q_{ks}$ matrix.
At the first iteration of Algorithms \ref{alg:sstepSRE-CG2} and \ref{alg:sstepSRE-CG1},
\begin{eqnarray}
x_1 &=& x_{0} + V_1\tilde{\alpha}_1 = x_{0} + V_1(V_1^tr_0) \nonumber \\
&=& x_{0} + [W_1 W_2 ... \; Ws][W_1 W_2 ... \; Ws]^t r_0 \nonumber \\
&=& x_{0} + \sum_{i=1}^{s} W_iW_i^t r_0 \label{x1}
\end{eqnarray}
For $s=3$, \begin{eqnarray} x_1 &=& x_{0} + W_1W_1^t r_0 + W_2W_2^t r_0 + W_3W_3^t r_0.\nonumber \end{eqnarray}
On the other hand, after 3 iterations of the SRE-CG2 and SRE-CG Algorithms, the solution $x_3$ is:
\begin{eqnarray}
x_1 &=& x_{0} + W_1(W_1^tr_0) \nonumber \\
r_1 &=& r_0 - AW_1W_1^tr_0 \nonumber \\
x_2 &=& x_1 + W_2(W_2^tr_1) = x_{0} + W_1(W_1^tr_0) + W_2W_2^t(r_{0} - AW_1W_1^tr_0) \nonumber\\
&=& x_{0} + W_1W_1^tr_0 + W_2W_2^tr_{0} - W_2(W_2^tAW_1)W_1^tr_0 \nonumber\\
r_2 &=& r_0 - AW_1W_1^tr_0 - AW_2W_2^tr_{0} + AW_2(W_2^tAW_1)W_1^tr_0\nonumber \\
x_3 &=& x_2 + W_3W_3^tr_2 = x_{0} + W_1W_1^tr_0 + W_2W_2^tr_{0} - W_2(W_2^tAW_1)W_1^tr_0 \nonumber \\
&& + W_3W_3^t( r_0 - AW_1W_1^tr_0 - AW_2W_2^tr_{0} + AW_2W_2^tAW_1W_1^tr_0 ) \nonumber \\
&=& x_{0} + W_1W_1^tr_0 + W_2W_2^tr_{0} + W_3W_3^tr_0 - W_2(W_2^tAW_1)W_1^tr_0 \nonumber \\
&& - W_3(W_3^tAW_1)W_1^tr_0 - W_3(W_3^tAW_2)W_2^tr_{0} + W_3(W_3^tAW_2)(W_2^tAW_1)W_1^tr_0 \nonumber
\end{eqnarray}
For $s>3$, more terms with $W_j^tAW_i$ will be added. Assuming that $W_j^tAW_i = 0$ for all $j \neq i$, the obtained $x_s$ in the SRE-CG2 and SRE-CG Algorithms is equivalent to $x_1$ \eqref{x1} in the s-step SRE-CG2 and s-step SRE-CG Algorithms. Similarly, under the same assumption, $x_{is}$ in the SRE-CG2 and SRE-CG Algorithms is equivalent to $x_i$ in the s-step SRE-CG2 and s-step SRE-CG Algorithms. In case $W_j^tAW_i \neq 0$ for some $j \neq i$, all the subsequent s-step solutions will differ from the corresponding SRE-CG2 and SRE-CG solutions.
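Putting the pieces together, one s-step SRE-CG2 iteration can be sketched in dense linear algebra as follows. This is our own minimal illustration of Algorithm \ref{alg:sstepSRE-CG2}: the splitting $T(r_0)$, the CGS-applied-twice projection against $Q$, and the A-CholQR step are inlined, and breakdown handling is reduced to a simple dimension guard:

```python
import numpy as np

def sstep_sre_cg2(A, b, domains, s, tol=1e-8, kmax=50):
    """Dense sketch of s-step SRE-CG2: per outer iteration, build st new
    enlarged-Krylov basis vectors, A-orthonormalize them against all
    previous ones (classical Gram-Schmidt applied twice) and against
    themselves (A-CholQR), then update x and r once."""
    n = len(b)
    x = np.zeros(n)
    r = b.copy()                           # r_0 = b - A x_0 with x_0 = 0
    rho0 = np.linalg.norm(r)
    Q = np.zeros((n, 0))                   # all A-orthonormal vectors so far
    W = None
    for k in range(kmax):
        if np.linalg.norm(r) <= tol * rho0 or Q.shape[1] >= n:
            break                          # converged or basis exhausted
        V = np.zeros((n, 0))               # the st new vectors of this step
        for i in range(s):
            if k == 0 and i == 0:          # W_1 = T(r_0): split by domain
                W = np.zeros((n, len(domains)))
                for j, idx in enumerate(domains):
                    W[idx, j] = r[idx]
            else:                          # W_{j+1} = A W_j
                W = A @ W
            for _ in range(2):             # A-orthonormalize against Q (CGS2)
                W = W - Q @ (Q.T @ (A @ W))
            L = np.linalg.cholesky(W.T @ A @ W)   # A-CholQR on the block
            W = np.linalg.solve(L, W.T).T
            Q = np.hstack([Q, W])
            V = np.hstack([V, W])
        alpha = V.T @ r                    # since Q^t A Q = I and r is
        x = x + V @ alpha                  # orthogonal to the old basis
        r = r - A @ (V @ alpha)
    return x
```

In this sketch the solution and residual are updated once per $st$ basis vectors, which is exactly the merging of $s$ iterations described above.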
Assuming that the s-step versions converge in $k_s = \ceil{\frac{k}{s}}$ iterations,
$5{k}log(t) + \frac{k}{s}log(t)$ messages are sent in parallel.
Hence, by merging $s$ iterations of the enlarged CG methods for a given value of $t$, communication is reduced by a total of at most $(s-1)log(t)k_s = \frac{s-1}{s} log(t) k$ messages and words.
Theoretically, it is possible to further reduce communication by replacing the block Arnoldi A-orthonormalization (Algorithm \ref{alg:sstepSRE-CG2} lines 3-12) and the truncated block Arnoldi A-orthonormalization (Algorithm \ref{alg:sstepSRE-CG1} lines 4-13) with a communication avoiding kernel that first computes the $st$ vectors and then A-orthonormalizes them against the previous vectors and against themselves, as summarized in Algorithm \ref{alg:CA-Arnoldi}. These methods are called communication avoiding SRE-CG2 (CA SRE-CG2) and communication avoiding SRE-CG (CA SRE-CG), respectively. For the first iteration ($k=1$), $W_{j-1} = [T(r_0)] = \mathscr{T}_0$ in Algorithm \ref{alg:CA-Arnoldi}.
\begin{algorithm}[h!]
\centering
\caption{CA-Arnoldi A-orthonormalization}
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $W_{j-1}$, $n \times t$ matrix; $k$, iteration}
\Statex{\qquad \quad $Q$, $n \times m$ matrix, $m = (s+1)t$ in CA SRE-CG and $m = kst$ in CA SRE-CG2}
\Statex{\textbf{Output:} $V$, the $n \times st$ matrix containing the A-orthonormalized $st$ computed vectors}
\State \textbf{if}{ ($k==1$)} \textbf{then} $W_j = W_{j-1}$, and $V = W_j$
\State \textbf{else} $W_j = AW_{j-1}$, and $V = W_j$
\State \textbf{end if}
\For {($i=1:s-1$)}
\State Let $W_{j+i} = AW_{j+i-1}$
\State Let $V = [V \; W_{j+i}]$
\EndFor
\State \textbf{if}{ ($k>1$)} \textbf{then} A-orthonormalize $V$ against $Q$ \textbf{end if}
\State A-orthonormalize $V$
\end{algorithmic}}
\label{alg:CA-Arnoldi}
\end{algorithm}
In s-step SRE-CG, the $st$ vectors are computed and A-orthonormalized against the previous $2t$ vectors, $t$ vectors at a time.
But in the case of CA SRE-CG, the $st$ vectors are all computed before being A-orthonormalized.
Thus, it is not sufficient to just A-orthonormalize the $st$ computed vectors against the last $2t$ vectors. Instead, the $st$ computed vectors should be A-orthonormalized against the last $(s+1)t$ vectors.
Assuming that $Q = Q_{(k-1)s} = [W_1, W_2, W_3, ... W_{(k-1)s}]$ is A-orthonormal, then for all $i+l<j$ where $j=(k-1)s$, we have that $$(A^iW_{l})^tAW_{j}=W_{l}^tA(A^iW_{j}) = 0.$$ After computing $W_{j+i} = A^i W_{(k-1)s} = A^iW_j$ for $i = 1, .., s$, the A-orthonormalization is summarized as follows:
\begin{eqnarray*}
W_{j+i} &=& A^iW_{j} - {Q}{Q}^tA(A^iW_{j}) \\
&=& A^iW_{j} - {Q}[W_1^tA(A^iW_{j});\, W_2^tA(A^iW_{j});\, ... \,;\, W_{j}^tA(A^iW_{j})] \\
&=& A^iW_{j} - \sum_{l=1}^{j}W_l W_{l}^tA(A^iW_{j}) = A^iW_{j} - \sum_{l=j-i}^{j}W_l W_{l}^tA(A^iW_{j})
\end{eqnarray*}
This implies that $W_{j+1}$ should be A-orthonormalized against the last $2t$ vectors $W_{j-1}$ and $W_{j}$, whereas $W_{j+s}$ should be A-orthonormalized against the last $(s+1)t$ vectors $W_{j-s},W_{j-s+1},..., W_{j}$. In general, $W_{j+i}$ should be A-orthonormalized against the last $(i+1)t$ vectors. To reduce communication, in CA SRE-CG all of the $st$ computed vectors, $W_{j+1}, W_{j+2}, ..., W_{j+s}$, are A-orthonormalized against the previous $(s+1)t$ vectors.
Given that we are computing the monomial basis, the $st$ computed vectors might be linearly dependent, which leads to a numerically unstable basis. The numerical stability and convergence of such communication avoiding and s-step versions is discussed in section \ref{sec:SRNum}.
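As an illustration of the kernel described above, the following pure-Python sketch first computes a small monomial block $w, Aw, A^2w$ and only then A-orthonormalizes it, in the CA style. The matrix, the starting vector, and the use of modified Gram-Schmidt in the A-inner product (instead of the CGS2 and A-CholQR kernels used in the paper) are illustrative assumptions:

```python
# Toy A-orthonormalization in the A-inner product <u, v>_A = u^T A v,
# in the CA style: the monomial block is fully computed first, then
# A-orthonormalized in one pass. A, w, and s below are illustrative.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def a_dot(A, u, v):
    return sum(ui * avi for ui, avi in zip(u, matvec(A, v)))

def a_orthonormalize(A, vectors):
    # Modified Gram-Schmidt in the A-inner product (for clarity only;
    # the paper uses CGS2 combined with A-CholQR or Pre-CholQR).
    basis = []
    for v in vectors:
        w = v[:]
        for q in basis:  # subtract A-projections on previous vectors
            c = a_dot(A, q, w)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = a_dot(A, w, w) ** 0.5
        basis.append([wi / norm for wi in w])
    return basis

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]       # small SPD matrix (illustrative)
w = [1.0, 0.0, 0.0]
s = 3
monomial = [w]
for _ in range(s - 1):      # compute w, Aw, A^2 w before orthonormalizing
    monomial.append(matvec(A, monomial[-1]))
Q = a_orthonormalize(A, monomial)
# Check that the resulting basis satisfies Q^T A Q = I.
for i, qi in enumerate(Q):
    for j, qj in enumerate(Q):
        target = 1.0 if i == j else 0.0
        assert abs(a_dot(A, qi, qj) - target) < 1e-8
```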
\subsection{s-step MSDO-CG}
The MSDO-CG method \cite{EKS} computes $t$ search directions at each iteration $k$, $P_k = \mathscr{T}_{k-1} + P_{k-1}diag(\beta_k)$, where $P_0 = \mathscr{T}_{0}$ and $\mathscr{T}_{i}$ is the matrix containing the $t$ vectors of $T(r_i)$. Then, $P_k$ is A-orthonormalized against all $P_i$'s ($i<k$), and used to update $x_k = x_{k-1} + P_k\alpha_k$ and $r_k = r_{k-1} - AP_k\alpha_k$, where $\alpha_k = P_k^tr_{k-1}$. This procedure is interdependent, since we cannot update $P_k$ without $r_{k-1}$, and we cannot update $r_{k-1}$ without $P_{k-1}$. Thus, to build an s-step version we would need to split the computation of $P_k$ and the update of $x_k$, which is not possible. For that purpose, we introduce a modified version of MSDO-CG where we build a modified Enlarged Krylov basis rather than computing search directions.
As discussed in \cite{EKS}, the vectors of $P_k$ belong to the Enlarged Krylov subspace $$ \mathscr{K}_{k,t} = span \{ T(r_0), AT(r_0), .., A^{k-1}T(r_0)\}.$$ Moreover, the vectors of $P_k$ belong to the modified Enlarged Krylov subspace \begin{equation}
\overline{\mathscr{K}}_{k,t} = span\{ T(r_0), T(r_1), T(r_2), ..., T(r_{k-1}) \}. \label{modeks}\end{equation}
In general, we define the modified Enlarged Krylov subspace for a given $s$ value as follows
\begin{eqnarray}
\overline{\mathscr{K}}_{k,t,s} = span & \{& T(r_0), AT(r_0), ..., A^{s-1}T(r_0), \nonumber \\
& &T(r_1), AT(r_1), ..., A^{s-1}T(r_1), \nonumber \\
& &T(r_2), AT(r_2), ..., A^{s-1}T(r_2), \nonumber \\
& & \vdots \nonumber \\
& & T(r_{k-1}), AT(r_{k-1}), ..., A^{s-1}T(r_{k-1}) \nonumber \}
\end{eqnarray}
Note that for $s=1$, the modified Enlarged Krylov subspace becomes $\overline{\mathscr{K}}_{k,t}$ defined in \eqref{modeks}.
Moreover, for $t=s=1$, the modified Enlarged Krylov subspace becomes $$ \overline{\mathscr{K}}_{k,1,1} = span\{ r_0, r_1, r_2, ..., r_{k-1} \}, $$ which is exactly the Krylov subspace, since $\mathcal{K}_k = span\{ r_0, Ar_0, A^2r_0, ..., A^{k-1}r_{0} \} = span\{ r_0, r_1, r_2, ..., r_{k-1}\}$.
Similarly to the Enlarged Krylov subspace $\mathscr{K}_{ks,t} $, the modified Enlarged Krylov subspace $\overline{\mathscr{K}}_{k,t,s}$ is of dimension at most $kst$.
\begin{theorem}\label{subk22}
The Krylov subspace $\mathcal{K}_k$ is a subset of the modified enlarged Krylov subspace $ \overline{\mathscr{K}}_{k,t,1}$ ($\mathcal{K}_k \subset \overline{\mathscr{K}}_{k,t,1}$).
\end{theorem}
\begin{proof}
Let $y \in \mathcal{K}_k$ where $\mathcal{K}_k = span\{ r_0, Ar_0,.., A^{k-1}r_0\} = span\{ r_0, r_1, r_2, ..., r_{k-1}\}$. Then,
\begin{eqnarray}y&=& \sum_{j=0}^{k-1} a_jr_j = \sum_{j=0}^{k-1} a_j\mathscr{T}_j*\mathbbm{1}_t = \sum_{j=0}^{k-1} \sum_{i=1}^t a_jT_i(r_j) \in \overline{\mathscr{K}}_{k,t,1} \nonumber \end{eqnarray}
since $r_j = \mathscr{T}_j*\mathbbm{1}_t = [T_1(r_j) \,T_2(r_j)\, ....\, T_t(r_j)]*\mathbbm{1}_t$ ,
where $\mathbbm{1}_t$ is a $t \times 1$ vector of ones, and $\mathscr{T}_j = [T_1(r_j) \,T_2(r_j)\, ....\, T_t(r_j)] = [T(r_j)]$ is the matrix containing the $t$ vectors of $T(r_j)$.
\end{proof}\vspace{+2mm}
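The identity $r_j = \mathscr{T}_j*\mathbbm{1}_t$ used in the proof simply states that the $t$ domain projections of $r_j$ sum back to $r_j$. A small numeric check, with an illustrative vector and contiguous equal-size domains:

```python
# Numeric check of r = T * 1_t: the t domain projections T_i(r) equal r
# restricted to domain i (zeros elsewhere), so they sum back to r.
# The vector and the contiguous equal-size domains are illustrative.

def projections(r, t):
    n = len(r)
    size = n // t
    T = []
    for i in range(t):
        v = [0.0] * n
        v[i * size:(i + 1) * size] = r[i * size:(i + 1) * size]
        T.append(v)
    return T

r = [2.0, -1.0, 0.5, 3.0, 4.0, -2.5]
t = 3
T = projections(r, t)
# Column-wise sum of the projections recomposes r exactly.
recomposed = [sum(col) for col in zip(*T)]
assert recomposed == r
```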
Then one possible s-step reformulation of MSDO-CG would be to compute basis vectors of $\overline{\mathscr{K}}_{k,t,s}$ and use them to update the solution and the residual, similarly to the s-step SRE-CG versions. At iteration $k$ of the s-step MSDO-CG method (Algorithm \ref{alg:sstepMSDO-CG}), the $st$ vectors \vspace{+1mm}
$$T(r_{k-1}), AT(r_{k-1}), ..., A^{s-1}T(r_{k-1}) \vspace{+1mm}$$
are computed and A-orthonormalized similarly to the s-step SRE-CG2 method, and stored in the $n \times st$ matrix $V_{k}$. Then, these $st$ A-orthonormalized vectors are used to define $\tilde{\alpha}_k = V_{k}^t r_{k-1}$ and update $x_k = x_{k-1} + V_{k}\tilde{\alpha}_k$ and $r_k = r_{k-1} - AV_{k}\tilde{\alpha}_k$.
\begin{algorithm}[h!]
\centering
\caption{ s-step MSDO-CG }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $s$, s-step }
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $Ax=b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$ , ${\rho} = \rho_0$, $k = 1$;
\While {( ${\rho} > \epsilon {\rho_0} $ and $k < k_{max}$ )}
\If {($k==1$)}
\State A-orthonormalize $W_1 = \mathscr{T}_0$, let $V = W_1$ and $Q = W_1$
\Else
\State A-orthonormalize $W_1 = \mathscr{T}_{k-1}$ against $Q$
\State A-orthonormalize $W_1$, let $V = W_1$ and $Q = [Q \; W_1]$
\EndIf
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{i+1} = AW_i$ against $Q$
\State A-orthonormalize $W_{i+1}$, let $V = [V \; W_{i+1}]$ and $Q = [Q \; W_{i+1}]$
\EndFor
\State $\tilde{\alpha} = (V^t r_{k-1})$
\State $x_k = x_{k-1} + V\tilde{\alpha} $
\State $r_k = r_{k-1} - AV\tilde{\alpha} $
\State $\rho = ||r_{k}||_2$, $k = k+1$
\EndWhile
\end{algorithmic}}
\label{alg:sstepMSDO-CG}
\end{algorithm}
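The update steps of the algorithm (lines 13-15) keep the residual consistent with the approximate solution, i.e. $r_k = b - Ax_k$ holds after each iteration. A minimal Python sketch of one such update, with an illustrative $2 \times 2$ system and an arbitrary basis block $V$ (not A-orthonormalized, since only the residual invariant is checked):

```python
# Sketch of the solution/residual update in s-step MSDO-CG:
# alpha = V^T r, x = x + V alpha, r = r - A V alpha.
# A, b, and V below are illustrative, not from the paper's test set.

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def mat_t_vec(V, r):  # V^T r, with V stored as a list of columns
    return [sum(vi * ri for vi, ri in zip(col, r)) for col in V]

def mat_vec_cols(V, a):  # V a, with V stored as a list of columns
    n = len(V[0])
    return [sum(V[j][i] * a[j] for j in range(len(V))) for i in range(n)]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = [0.0, 0.0]
r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
V = [[1.0, 0.0], [0.0, 1.0]]  # columns of an illustrative basis block

alpha = mat_t_vec(V, r)
x = [xi + vi for xi, vi in zip(x, mat_vec_cols(V, alpha))]
r = [ri - avi for ri, avi in zip(r, matvec(A, mat_vec_cols(V, alpha)))]
# Invariant: the recurrence keeps r equal to the true residual b - A x.
true_r = [bi - ai for bi, ai in zip(b, matvec(A, x))]
assert all(abs(ri - ti) < 1e-12 for ri, ti in zip(r, true_r))
```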
Note that for $s=1$, the s-step MSDO-CG method reduces to a modified version of MSDO-CG. Although the s-step MSDO-CG method for $s=1$ differs algorithmically from the MSDO-CG method, they converge in the same number of iterations, as shown in section \ref{sec:SRNum}, due to their theoretical equivalence. Moreover, each iteration of the s-step MSDO-CG method with $s>1$ is not equivalent to $s$ iterations of the modified version of MSDO-CG, since the constructed bases of $\overline{\mathscr{K}}_{k,t,s}$ and $\overline{\mathscr{K}}_{ks,t,1}$ are different. For example, in the second iteration of s-step MSDO-CG ($k = 2$), $T(r_1), AT(r_1),...,A^{s-1}T(r_1)$ are computed, whereas in the second set of $s$ iterations of the modified version of MSDO-CG ($ks = 2s$), the vectors $T(r_s), T(r_{s+1}), ..., T(r_{2s-1})$ are computed.
The communication avoiding MSDO-CG differs from the s-step version (Algorithm \ref{alg:sstepMSDO-CG}) in the basis construction, where at the $k^{th}$ iteration the $st$ vectors $T(r_{k-1}), AT(r_{k-1}),...,A^{s-1}T(r_{k-1})$ are first computed, and then A-orthonormalized against previous vectors and against themselves. Thus, the communication avoiding MSDO-CG algorithm is Algorithm \ref{alg:sstepMSDO-CG} with lines 3-12 replaced by the CA-Arnoldi A-orthonormalization (Algorithm \ref{alg:CA-Arnoldi}). However, Algorithm \ref{alg:CA-Arnoldi} is slightly modified, where in line 2 $W_j = W_{j-1}$ rather than $W_j = AW_{j-1}$, with $W_{j-1} =\mathscr{T}_{k-1} = [T(r_{k-1})]$ for $k\geq 1$.
The advantage of building the modified Enlarged Krylov subspace basis is that at iteration $k$, each of the $t$ processors can compute the $s$ basis vectors
$$T_i(r_{k-1}), AT_i(r_{k-1}), A^2T_i(r_{k-1}),..., A^{s-1}T_i(r_{k-1}) $$
independently, where $T_i(r_{k-1})$ is the projection of the vector $r_{k-1}$ on the $i^{th}$ domain of the matrix $A$, i.e. a vector of all zeros except at the $n/t$ entries that correspond to the $i^{th}$ domain. Thus, there is no need for communication avoiding kernels, since processor $i$ needs only a part of the matrix $A$ and a part of the vector $r_{k-1}$ to compute the $s$ vectors. As a consequence, assuming that enough memory is available, any preconditioner can be applied to CA MSDO-CG, since the Matrix Powers Kernel is not used to compute the basis vectors, as discussed in section \ref{sec:pm}.
\section{Numerical Stability and Convergence}\label{sec:SRNum}
We compare the convergence behavior of the different introduced s-step enlarged CG versions and their communication avoiding versions for solving the system $Ax = b$, using different numbers of partitions ($t = 2, 4, 8, 16, 32$, and $64$) and different $s$ values ($s = 1, 2, 3, 4, 5, 8, 10$). Similarly to the enlarged CG methods \cite{EKS}, the matrix $A$ is first reordered using Metis's kway partitioning \cite{metis}, which defines the $t$ subdomains. Then $x$ is chosen randomly using MATLAB's rand function, and the right-hand side is defined as $b = Ax$. The initial iterate is set to $x_0 = 0$, and the stopping criterion tolerance is set to $tol = 10^{-8}$ for all the matrices, except {\poisso} ($tol = 10^{-6}$).
The characteristics of the test matrices are summarized in Table \ref{tab:testmatrices}. The {\poisso} matrix is a block tridiagonal matrix obtained from Poisson's equation using MATLAB's ``gallery(`poisson',100)''.
The remaining matrices, referred to as {\nho}, {\skyo}, {\skyto}, and {\anio}, arise from different boundary value problems of convection diffusion equations, and are generated using FreeFem++ \cite{freefem}. For a detailed description of the test matrices, refer to \cite{EKS}.
\begin{table}[h!]
\centering
{\renewcommand{\arraystretch}{1.4}\footnotesize
\caption{The test matrices}
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Matrix} & \textbf{Size} & \textbf{Nonzeros } & \textbf{2D/3D} & \textbf{Problem}\\ \hline
{\poiss} & 10000 & 49600 & 2D & Poisson equations \\% &Yes&Achdou \cite{achdou} \\
{\nh} & 10000 &49600 & 2D & Boundary value \\% &Yes&Achdou \cite{achdou} \\
{\sky} & 10000 & 49600 & 2D & Boundary value \\% &Yes&Achdou \cite{achdou} \\
{\skyt} & 8000 & 53600 & 3D & Skyscraper \\% &Yes&Achdou \cite{achdou} \\
{\ani} & 8000& 53600 & 3D & Anisotropic Layers \\
\hline
\end{tabular}\label{tab:testmatrices}}
\end{table}
The first phase in all the discussed algorithms is building the A-orthonormal basis, by A-orthonormalizing a set of vectors against previous vectors using Classical Gram Schmidt A-orthonormalization (CGS), CGS2, or MGS, and then against themselves using A-CholQR \cite{A-ortho} or Pre-CholQR \cite{A-ortho2}.
As discussed in \cite{sophiethesis}, the combinations CGS2+A-CholQR and CGS2+Pre-CholQR are both numerically stable and require less communication. In this paper, we test the introduced methods using CGS2 (Algorithm 18 in \cite{sophiethesis}), A-CholQR (Algorithm 21 in \cite{sophiethesis}), and Pre-CholQR (Algorithm 23 in \cite{sophiethesis}). Based on the performed testing, Pre-CholQR is numerically more stable than A-CholQR. However, for most of the tested cases, the versions with CGS2+A-CholQR or CGS2+Pre-CholQR A-orthonormalization converge in the same number of iterations.
In Table \ref{tab:SRECG}, we compare the convergence behavior of the different SRE-CG versions with respect to the number of partitions $t$ and the $s$ values. The restructured SRE-CG is a reordered version of SRE-CG, where the same operations of $s$ iterations are performed, but in a different order. In addition, the check for convergence is done once every $s$ iterations.
Thus the restructured SRE-CG Algorithm \ref{alg:restSRE-CG} converges in $s * k_s$ iterations. In Table \ref{tab:SRECG}, $k_s$ is shown rather than $s*k_s$, for comparison purposes with the s-step versions. Moreover, for $s=1$, the restructured SRE-CG is reduced to SRE-CG, and it converges in $k$ iterations.
If, in Algorithm \ref{alg:restSRE-CG}, the second inner for loop is replaced by a while loop with a check for convergence ($||r_{i-1}||_2> \epsilon ||r_0||_2$), then the algorithm converges in exactly $s*\ceil{\frac{k}{s}}$ iterations, where SRE-CG converges in $k$ iterations and $k \leq s*\ceil{\frac{k}{s}} \leq k + s-1$.
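The bound $k \leq s*\ceil{\frac{k}{s}} \leq k + s-1$ is elementary; a brute-force Python check over a range of illustrative $k$ and $s$ values:

```python
import math

# If SRE-CG converges in k iterations, the restructured version with a
# per-iteration convergence check runs s*ceil(k/s) iterations, and
# k <= s*ceil(k/s) <= k + s - 1 always holds.
for k in range(1, 200):
    for s in range(1, 11):
        ks = s * math.ceil(k / s)
        assert k <= ks <= k + s - 1
```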
\begin{table}[h!]
\setlength{\tabcolsep}{2pt}
\caption{\label{tab:SRECG} Comparison of the convergence of different SRE-CG versions (restructured SRE-CG, s-step SRE-CG, and CA SRE-CG with Algorithm \ref{alg:CA-Arnoldi2}), with respect to number of partitions $t$ and $s$ values. }
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{||c||c||c||c|c|c|c|c|c|c||c|c|c|c|c|c||c|c|c||}
\cline{4-19}
\cline{4-19}
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{7}{c||}{\multirow{2}{*}{\bf Restructured SRE-CG }} & \multicolumn{6}{c||}{\multirow{2}{*}{\bf s-step SRE-CG }} & \multicolumn{3}{c||}{\multirow{2}{*}{\bf CA SRE-CG}}\\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{7}{c||}{\multirow{2}{*}{}} & \multicolumn{6}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c||}{{\bf }}\\
\cline{2-19}
\multicolumn{1}{c||}{} & \multicolumn{1}{c||}{\bf CG} & \backslashbox{$\bf t$}{$\bf s$} & \bf 1 &\bf 2 &\bf 3 &\bf 4 &\bf 5 &\bf 8 &\bf 10 &\bf 2 &\bf 3 &\bf 4 &\bf 5 &\bf 8 &\bf 10 &\bf 2 &\bf 3 & \bf 4 \\
\hline \hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\bf \poiss}}
&\multirow{6}{*}{195} &\bf 2 &193 & 97&65 &49 & 39& 25& 20& 97&65 &49 & 39 &25& 20& 97 &65&49 \\
\cline{3-19}
& & \bf 4 &153& 77& 51& 39& 31& 20& 16& 77 &51&39&31& 20& 16& 77&51&39 \\
\cline{3-19}
&&\bf 8 &123& 62 &41 &31& 25&16& 13& 62 &41 &31 &25&16& 13& 62& 41 &31 \\
\cline{3-19}
&&\bf 16 &95& 48& 32 &24& 19&12& 10& 48& 32 &24& 19&12& 10& 48& 32 &24 \\
\cline{3-19}
& &\bf 32 &70& 35& 24& 18& 14&9& 7& 35& 24& 18& 14& 9& 7&35& 24& 18 \\
\cline{3-19}
&&\bf 64 &52& 26& 18& 13& 11&7 &6& 26& 18& 13& 11& 7 &6& 26& 18& 13
\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\nh} }
&\multirow{6}{*}{259} &\bf 2 & 245& 123& 82& 62& 49& 31& 25& 123& 82& 62& 49& 31& 25& 123& 82& 62 \\
\cline{3-19}
& &\bf 4 & 188& 94& 63& 47& 38& 24& 19& 94& 63 &47 &38& 24& 19& 94 &63 &47\\
\cline{3-19}
&&\bf 8 &149 &75& 50& 38& 30& 19& 15& 75& 50& 38& 30& 19& 15& 75& 50& 38 \\
\cline{3-19}
&&\bf 16 & 112& 56& 38& 28& 23&14& 12& 56& 38& 28& 23& 14& 12& 56& 38& 28 \\
\cline{3-19}
& &\bf 32 & 82& 41& 28& 21& 17& 11& 9& 41& 28& 21& 17& 11& 9& 41& 28& 21 \\
\cline{3-19}
&&\bf 64 &60& 30& 20& 15& 12& 8& 6& 30& 20& 15& 12& 8& 6& 30& 20& 15
\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\sky}}
&\multirow{6}{*}{5951} &\bf 2 & 5526& 2763& 1842& 1395& 1116& 708 & 558& 2763& 1842& 1395& 1116& 708 & 558& 2793& 1854& x \\
\cline{3-19}
& &\bf 4 & 4526& 2263& 1521& 1141& 913&571&457& 2263& 1521& 1141& 913&571&457& 2328& 1575 & x\\
\cline{3-19}
&&\bf 8 &2843& 1423& 949 &712& 575& 356&288& 1423& 949 &712& 575& 356&334 & 1405& 973& x \\
\cline{3-19}
&&\bf 16 & 1770& 885&590& 450& 354&225&177&885&590& 450& 354&225&183& 910&605 &x \\
\cline{3-19}
& &\bf 32 & 999& 500& 333& 250& 200&125&100& 500& 333& 250& 200&125&x& 492& 340 &x \\
\cline{3-19}
&&\bf 64 &507& 255& 169& 128& 102&64&51&255&169&128&102&x&x& 257&179&x
\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\skyt }}
&\multirow{6}{*}{902} &\bf 2&829&435&290&218&174&109&87&435&290&218&174&109&87& 426& 285& x \\
\cline{3-19}
& &\bf 4 & 745& 382&255& 191& 149&98&78&382&255&191&149&142&473& 373&251&x\\
\cline{3-19}
&&\bf 8&590&295&197&148&118&74&59&295&197&148&118&110&199& 294&198& x \\
\cline{3-19}
&&\bf 16 & 436 &218&146&109&89&56&45& 218&146&109&89&58&74& 223&150&x\\
\cline{3-19}
& &\bf 32 & 279 &142&93& 71&57&36&29&142&93&71&57&x&x& 140&97&x \\
\cline{3-19}
&&\bf 64 &157 &79& 53& 40& 32&20&16 &79& 53& 40& 32&314&250& 78& 54&x\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\ani}}
&\multirow{6}{*}{4146} &\bf 2 &4005&2030&1335&1015&801&510&406&2030&1335&1015&801&510&406& 1985&1346&x\\
\cline{3-19}
& &\bf 4 & 3570& 1785& 1190& 909& 714&464&357&1785&1190&909&714&464&357& 1776& 1201&x\\
\cline{3-19}
&&\bf 8 &3089&1612& 1075&806&645&403&325&1612&1075&x&x&x&x&1548&1070& x \\
\cline{3-19}
&&\bf 16 & 2357&1219&815& 610& 488&305&244& 1219&815&x&x&x&x& 1169&800&x\\
\cline{3-19}
& &\bf 32 & 1640&820&552&410&328&205&164&820&552&1686&2729&1804&1499&816&549&x \\
\cline{3-19}
&&\bf 64 &928 &464&315&232&189&116&95&464&315& 792& 777& 492&501& 453& 316&x\\
\hline
\hline
\end{tabular}\vspace{-5mm}
\end{table}
Thus, it is expected that the restructured SRE-CG (Algorithm \ref{alg:restSRE-CG}) converges in $k_1$ iterations, where $k_1 \geq s*\ceil{\frac{k}{s}}$. In case SRE-CG converges in $k$ iterations and $k$ is divisible by $s$, Algorithm \ref{alg:restSRE-CG} converges in exactly $k_1 = s*\ceil{\frac{k}{s}} = k$ iterations. On the other hand, if $k$ is not divisible by $s$, then Algorithm \ref{alg:restSRE-CG} either converges in $k_1 = s*\ceil{\frac{k}{s}} \leq k + s-1$ iterations, or in $k_1 \geq k+s$ iterations. The first case occurs when the norm of the residual at the $s*\ceil{\frac{k}{s}}$ iteration remains less than $tol*||r_0||_2$. Otherwise, if the L2 norm of the residual fluctuates, Algorithm \ref{alg:restSRE-CG} requires slightly more iterations to converge.
For the matrices {\poisso} and {\nho}, the restructured SRE-CG (Algorithm \ref{alg:restSRE-CG}) converges in exactly $k_1 = s*\ceil{\frac{k}{s}}$ iterations. For the other matrices, the three discussed cases are observed, i.e. the restructured SRE-CG converges in $k_1$ iterations where $ k_1 \geq s*\ceil{\frac{k}{s}}$. For example, for the matrix {\skyo} with $t = 32$, Algorithm \ref{alg:restSRE-CG} converges in $k_1 = s*\ceil{\frac{k}{s}}$ iterations for all $2 \leq s \leq 10$. For $s = 4$, it converges in $k_1 = s*\ceil{\frac{k}{s}} + j$ iterations, where $j=0 $ for $t=32$, $j = 4$ for $t = 8$ or $64$, $j=28$ for $t = 16$, $j=36$ for $t=4$, and $j= 52$ for $t=2$.
The s-step SRE-CG method (Algorithm \ref{alg:sstepSRE-CG1}) differs from the restructured version in the update of the approximate solutions $x_k$. As discussed in section \ref{sec:sstepS}, if there is no loss of A-orthogonality of the basis, then the s-step SRE-CG method should converge in $k_s$ iterations, where the restructured SRE-CG method converges in $k_1 = s*k_s$ iterations. This is the case for the matrices {\poisso } and {\nho} for all the tested $t$ and $s$ values ($2\leq t \leq 64$ and $2\leq s \leq 10$).
On the other hand, for the remaining 3 matrices and some values of $s$ and $t$, the s-step SRE-CG method converges in $k_s+j$ iterations due to loss of A-orthogonality of the basis. For example, for the {\skyo} matrix with $t=2,4$ and $2\leq s \leq 10$, the s-step SRE-CG method converges in exactly $k_s$ iterations. Similarly for $t=8,16,32$ with $2\leq s \leq 8$, and for $t=64$ with $2\leq s \leq 5$. But for $t=8,16$ with $8\leq s \leq 10$, s-step SRE-CG converges in $k_s+j$ iterations. However, for $t=32$ with $s=10$ and $t=64$ with $8\leq s \leq 10$, the s-step SRE-CG requires more iterations to converge than the SRE-CG does for the corresponding $t$, which is why an $\times$ is placed in Table \ref{tab:SRECG}. A similar convergence behavior is observed for the matrix {\skyto}.
As expected, the communication avoiding SRE-CG method (Algorithm \ref{alg:sstepSRE-CG1} with the CA-Arnoldi A-orthonormalization, Algorithm \ref{alg:CA-Arnoldi}) is numerically unstable due to the enlarged monomial basis construction. Unlike in the s-step version, at the $i^{th}$ iteration the $st$ vectors $AW, A^{2}W, ...,A^{s}W$ are first computed and stored in $V$, and then A-orthonormalized with respect to the $(s+1)t$ previous vectors and against themselves, where $W$ is an $n \times t$ matrix containing the A-orthonormalized $A^{s(i-1) - 1}{T}(r_0)$ vectors.
To stabilize the CA-Arnoldi A-orthonormalization (Algorithm \ref{alg:CA-Arnoldi}), the first $t$ vectors $AW$ are A-orthonormalized with respect to the previous vectors and against themselves, and then the $(s-1)t$ vectors $A(AW), A^{2}(AW),...,A^{s-1}(AW)$ are computed, as shown in Algorithm \ref{alg:CA-Arnoldi2}.
In Table \ref{tab:SRECG}, we test the CA SRE-CG method, where the $st$ vectors are A-orthonormalized against the previous $(s+1)t$ vectors using Algorithm \ref{alg:CA-Arnoldi2} for $k>1$. The CA SRE-CG with Algorithm \ref{alg:CA-Arnoldi2} converges at a rate similar to that of the s-step version for ill-conditioned matrices, such as {\skyo}, {\skyto}, and {\anio}, only for $s=2$ and $3$.
However, for the matrices {\nho} and {\poisso} CA SRE-CG converges in the same number of iterations as s-step SRE-CG, even for $s\geq 4$ (not shown in the table). This implies that CA SRE-CG should converge in approximately $\ceil{\frac{k}{s}}$ iterations for $s \geq 4$, once the ill-conditioned systems are preconditioned.
\begin{algorithm}[h!]
\centering
\caption{Modified CA-Arnoldi A-orthonormalization}
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $W_{j-1}$, $n \times t$ matrix; $k$, iteration }
\Statex{\qquad \quad $Q$, $n \times m$ matrix, $m = (s+1)t$ in CA SRE-CG and $m = kst$ in CA SRE-CG2}
\Statex{\textbf{Output:} $V$, the $n \times st$ matrix containing the A-orthonormalized $st$ computed vectors}
\State \textbf{if}{ ($k==1$)} \textbf{then} Let $W_j = W_{j-1}$
\State \textbf{else} Let $W_j = AW_{j-1}$, A-orthonormalize $W_j$ against $Q$.
\State \textbf{end if}
\State A-orthonormalize $W_j$, let $V = W_j$.
\For {($i=1:s-1$)}
\State Let $W_{j+i} = AW_{j+i-1}$
\State Let $V = [V \; W_{j+i}]$
\EndFor
\State \textbf{if}{ ($k>1$)} \textbf{then} A-orthonormalize $V$ against $Q$ \textbf{end if}
\State A-orthonormalize $V$
\end{algorithmic}}
\label{alg:CA-Arnoldi2}
\end{algorithm}
In Table \ref{tab:SRECG2}, we compare the convergence behavior of the different SRE-CG2 versions with respect to the number of partitions $t$ and the $s$ values. In general, a behavior similar to that of the corresponding SRE-CG versions in Table \ref{tab:SRECG} is observed.
Yet, the SRE-CG2 versions converge faster than their corresponding SRE-CG versions and are numerically more stable.
For $s=1$, the restructured SRE-CG2 method is equivalent to the SRE-CG2 method and converges in $k$ iterations. For $s>1$, the restructured SRE-CG2 method converges in $k_1$ iterations, where $k_1 = s*k_s \geq s*\ceil{\frac{k}{s}}$, similarly to the restructured SRE-CG method. The s-step SRE-CG2 converges in $k_s$ iterations for $s\geq 2$ for all the tested matrices. As for the communication avoiding version (Algorithm \ref{alg:sstepSRE-CG2}) with the modified CA-Arnoldi A-orthonormalization (Algorithm \ref{alg:CA-Arnoldi2}), it does not converge as fast as the s-step version for ill-conditioned matrices, such as \skyo, \skyto, and \anio, with large $s$-values ($s \geq 4$). Yet, CA SRE-CG2 converges in the same number of iterations as s-step SRE-CG2, for the matrices {\nho} and {\poisso}, even with $s\geq 4$ (not shown in the table).
\begin{table}[h!]
\setlength{\tabcolsep}{2pt}
\caption{\label{tab:SRECG2} Comparison of convergence of different SRE-CG2 versions (restructured SRE-CG2, s-step SRE-CG2, and CA SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi2}) with respect to number of partitions $t$ and $s$ values. }
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{||c||c||c||c|c|c|c|c|c|c||c|c|c|c|c|c||c|c|c||}
\cline{4-19}
\cline{4-19}
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{7}{c||}{\multirow{2}{*}{\bf Restructured }} & \multicolumn{6}{c||}{\multirow{2}{*}{\bf s-step }} & \multicolumn{3}{c||}{\multirow{2}{*}{\bf CA}}\\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{7}{c||}{\multirow{2}{*}{\bf SRE-CG2 }} & \multicolumn{6}{c||}{\multirow{2}{*}{\bf SRE-CG2}} & \multicolumn{3}{c||}{\multirow{2}{*}{\bf SRE-CG2}}\\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{7}{c||}{\multirow{2}{*}{}} & \multicolumn{6}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c||}{\multirow{2}{*}{}}\\
\cline{2-19}
\multicolumn{1}{c||}{} & \multicolumn{1}{c||}{\bf CG} & \backslashbox{$\bf t$}{$\bf s$} & \bf 1 &\bf 2 &\bf 3 &\bf 4 &\bf 5 &\bf 8 &\bf 10 &\bf 2 &\bf 3 &\bf 4 &\bf 5 &\bf 8 &\bf 10 &\bf 2 &\bf 3 & \bf 4 \\
\hline \hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\bf \poiss}}
&\multirow{6}{*}{195} &\bf 2 &193& 97 &65& 49& 39 &25 & 20& 97& 65& 49& 39& 25 & 20& 97& 65& 49 \\
\cline{3-19}
& & \bf 4 &153& 77& 51& 39& 31& 20 & 16& 77 &51&39&31& 20 &16 & 77&51&39 \\
\cline{3-19}
&&\bf 8 &123& 62 &41 &31& 25& 16 &13 & 62 &41 &31 &25& 16 & 13& 62& 41 &31 \\
\cline{3-19}
&&\bf 16 &95& 48& 32 &24& 19&12 &10 & 48& 32 &24& 19&12 & 10& 48& 32 &24 \\
\cline{3-19}
& &\bf 32 &70& 36& 24& 18& 14& 9 & 8& 35& 24& 18& 14&9 &8 & 35& 24& 18 \\
\cline{3-19}
&&\bf 64 &52& 26& 18& 13& 11& 7 &6 & 26& 18& 13& 11& 7 &6 & 26& 18& 13
\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\nh}}
&\multirow{6}{*}{259} &\bf 2 & 243& 122& 81& 61& 49&31 &25 & 122& 81& 61& 49&31 &25 & 123& 81& 61 \\
\cline{3-19}
& &\bf 4 & 194& 97& 65& 49& 39& 25 & 20& 97& 65 &49 &39& 25 &20 & 94 &65&47\\
\cline{3-19}
&&\bf 8 &150 &75& 50& 38& 30& 19 & 15& 75& 50& 38& 30& 19 &15 & 75& 50& 38 \\
\cline{3-19}
&&\bf 16 & 113& 57& 38& 29& 23&15 & 12& 57& 38& 29& 23& 15 & 12& 56& 38& 29 \\
\cline{3-19}
& &\bf 32 & 84& 42& 28& 21& 17&11 & 9& 42& 28& 21& 17& 11 &9 & 41& 28& 21 \\
\cline{3-19}
&&\bf 64 &60& 30& 20& 15& 12& 8 &6 & 30& 20& 15& 12&8 & 6& 30& 20& 15
\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\sky}}
&\multirow{6}{*}{5951} &\bf 2 &1415& 708& 472& 354& 283& 177 & 142 & 708& 472& 354& 283& 177 &142 & 708& 472& 365 \\
\cline{3-19}
& &\bf 4 & 756& 378& 252& 189& 152& 95&76 & 378& 252& 189& 152& 95 &76 & 378& 252& 576\\
\cline{3-19}
&&\bf 8 &399& 200& 133& 100& 80& 50 &40 & 200& 133& 100& 80& 50&40 & 199& 133& 295 \\
\cline{3-19}
&&\bf 16 & 219& 110& 73& 55& 44&28 &22 & 110& 73& 55& 44&28 &22 & 109& 73& 147\\
\cline{3-19}
& &\bf 32 & 125& 63& 42& 32& 25& 16 &13 & 63& 42& 32& 25& 16 &13 & 63& 42& 77 \\
\cline{3-19}
&&\bf 64 &74& 37& 25& 19& 15&10 &8 & 37& 25& 19& 15& 10 &8 & 37& 25& 39
\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\skyt}}
&\multirow{6}{*}{902} &\bf 2 & 570& 285& 190& 143& 114&72 &57 & 285& 190& 144& 114 & 72& 57& 285& 190& 155\\
\cline{3-19}
& &\bf 4 & 375& 190& 125& 95& 75&48 & 38& 190& 125& 95& 75& 48 & 38& 190 &127& 101\\
\cline{3-19}
&&\bf 8 &213& 107& 71& 54& 43&27 &22 & 107& 71& 54& 43&27 & 22& 107& 71& 224 \\
\cline{3-19}
&&\bf 16 & 117& 59& 39& 30& 24&15 & 12& 59& 39& 30& 24& 15 &12 & 59& 39& 124\\
\cline{3-19}
& &\bf 32 & 69& 35& 23& 18& 14&9 & 7& 35& 23& 18& 14&9 & 7& 35& 23& 69 \\
\cline{3-19}
&&\bf 64 &43& 22& 15& 11& 9&6 &5 & 22& 15& 11& 9& 6& 5& 22& 15& x \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\ani}}
&\multirow{6}{*}{4146} &\bf 2 & 875& 438& 292& 219& 175& 110 &88 & 438& 292& 219& 175& 110 &88 & 438& 292& 219 \\
\cline{3-19}
& &\bf 4 &673& 340& 229& 170& 131&81 &66 & 340& 229& 170& 131&81 &66 & 340& 229& 170\\
\cline{3-19}
&&\bf 8 &449& 225& 150& 113& 90& 57 &45 & 225& 150& 113& 90&57 &45 & 225& 150& 113 \\
\cline{3-19}
&&\bf 16 & 253& 127& 85& 64& 51& 32& 26& 127& 85& 64& 51&32 & 26& 127& 85& 78\\
\cline{3-19}
& &\bf 32 & 148& 74& 49& 37& 30&19 &15 & 74& 50& 37& 30&19 & 15& 74& 50& 58 \\
\cline{3-19}
&&\bf 64 &92& 46& 31& 23& 19&12 & 10& 46& 31& 23& 19&12 & 10& 46& 31& 31\\
\hline
\hline
\end{tabular}
\end{table}
\begin{table}[h!]
\setlength{\tabcolsep}{2pt}
\caption{\label{tab:MSDOCG} Comparison of the convergence of different MSDO-CG versions (s-step MSDO-CG, CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi}, and CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi2}) with respect to number of partitions $t$ and $s$ values. }
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{||c||c||c||c||c|c|c|c|c|c|c|c|c|||c|c|c|c|||c|c|c|c|c|c||}
\cline{4-23}
\cline{4-23}
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} &\multicolumn{1}{c||}{\multirow{2}{*}{\bf MSD }} & \multicolumn{9}{c||}{\multirow{2}{*}{\bf s-step }} & \multicolumn{4}{c||}{\multirow{2}{*}{\bf CA MSDO-CG}}& \multicolumn{6}{c||}{\multirow{2}{*}{\bf CA MSDO-CG}}\\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{1}{c||}{\multirow{2}{*}{\bf OCG }} & \multicolumn{9}{c||}{\multirow{2}{*}{\bf MSDO-CG }} & \multicolumn{4}{c||}{\multirow{2}{*}{\bf with Algorithm\ref{alg:CA-Arnoldi}}}& \multicolumn{6}{c||}{\multirow{2}{*}{\bf with Algorithm\ref{alg:CA-Arnoldi2}}}\\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{1}{c||}{} & \multicolumn{9}{c||}{\multirow{2}{*}{}} & \multicolumn{4}{c||}{\multirow{2}{*}{}} & \multicolumn{6}{c||}{\multirow{2}{*}{}} \\
\cline{2-23}
\multicolumn{1}{c||}{} & \multicolumn{1}{c||}{\bf CG} & \backslashbox{$\bf t$}{$\bf s$} & \bf 1 & \bf 1 &\bf 2 &\bf 3 &\bf 4 &\bf 5 & \bf 6 & \bf 7 &\bf 8 &\bf 10 &\bf 2 &\bf 3 &\bf 4 &\bf 5 &\bf 2 &\bf 3 &\bf 4 &\bf 5 &\bf 6 &\bf 7 \\
\hline \hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\bf \poiss}}
&\multirow{6}{*}{195} &\bf 2 &198 &198 & 99 &66& 49& 40 & 33& 28& 23& 18 &
99& 66& 50& 40 &99& 66 & 49& 34& 33& 28 \\
\cline{3-23}
& & \bf 4 &166 &166 & 83& 56& 42& 34 & 28& 24 &21&17& 83& 56& 42& 34& 83& 55 &42 & 33&28&25 \\
\cline{3-23}
&&\bf 8 &137 &137 & 68 &46 &34& 27& 23 &19 & 17 &13 & 69& 46& 36& 28&
68& 46 & 35& 28& 24 &21 \\
\cline{3-23}
&&\bf 16 &121 &121 & 59& 39 &29& 23&18&16 & 14& 11
&61& 41& 31& 25&
59&38 & 28&23&19 &16 \\
\cline{3-23}
& &\bf 32 &95 &95 & 45& 29& 22& 17& 14 & 12& 10& 8& 48& 32& 25& 20&
45&30 &22 & 18& 15& 13 \\
\cline{3-23}
&&\bf 64 &69 &69 & 33& 21& 16& 12& 10 &9 & 8& 6& 37& 25& 20& 16&
33& 22 &16 & 14& 12& 10
\\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\nh}}
&\multirow{6}{*}{259} &\bf 2 & 255& 255 &127&84& 63& 51& 42& 36& 32& 26& 127& 85& 64& 51& 127& 84& 63& 51& 42& 37 \\
\cline{3-23}
& &\bf 4 & 210& 210 &104 &69 &52& 42& 34& 30& 26& 20& 105& 71& 53& 42&
104& 68& 52& 42& 35& 31\\
\cline{3-23}
&&\bf 8 &170& 170 &84 &56 &42& 33& 27& 23& 20& 16&85& 57& 43& 34&
84& 56& 42& 33& 29& 25 \\
\cline{3-23}
&&\bf 16 & 138& 138 &68 &44 &33& 26& 21& 18& 16& 12&70& 47& 36& 28&
68& 44& 33& 27& 23& 20 \\
\cline{3-23}
& &\bf 32 & 106 &106&51 &33& 25& 19& 16& 13& 12& 10& 54& 36& 28& 23&
51& 33& 25& 21& 18& 16 \\
\cline{3-23}
&&\bf 64 &76 &76& 37 &24& 17& 14& 11& 10& 9& 7&41& 28& 22& 17&
37& 24& 19& 16& 14& 12 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\sky}}
&\multirow{6}{*}{5951} &\bf 2 &1539&1539&719&480&360&288&240&206& 180& 144&778& 527& 401& 327& 720&493&374&307&259&224 \\
\cline{3-23}
& &\bf 4 &916& 916& 397& 259& 194& 154& 129& 110& 96& 77&466& 312&x&x& 395& 271& 207& 170& 143& x\\
\cline{3-23}
&&\bf 8 &517& 517& 214& 141& 105& 84& 70& 60& 52& 42&260& 167& 129& x& 214& 149& 114& 95& 80& x \\
\cline{3-23}
&&\bf 16 & 277& 277& 122& 81& 60& 47& 40& 34& 30& 24&141& 95& x& x& 122& 85& 66& 54& x& x\\
\cline{3-23}
& &\bf 32 & 192& 192& 74& 48& 36& 28& 23& 20& 17& 14&92& 61& x& x& 73& 51& 40& 33& x& x \\
\cline{3-23}
&&\bf 64 &123& 123& 47& 29& 22& 17& 14& 12& 11& 8&60& x& x& x& 47& 32& 25& 23& x& x \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\skyt}}
&\multirow{6}{*}{902} &\bf 2 & 637& 633& 334& 211& 169& 139& 111& 89& 81& 66& 339& 235& 180& 153&333& 242& 193& 160& 134& 119\\
\cline{3-23}
& &\bf 4 & 374& 373& 205& 137& 103& 81& 68& 58& 51& 41& 204& 138& 107& 86&
206& 140& 109& 89& 75& 66\\
\cline{3-23}
&&\bf 8 &224& 224& 112& 74& 56& 44& 37& 32& 28& 22&117& 82& 63& 50&
112& 78& 61& 49& 42& 37 \\
\cline{3-23}
&&\bf 16 &137& 137& 63& 42& 31& 25& 21& 18& 16& 13& 73& 50& 38& 32&
63& 45& 35& 29& 25& 22\\
\cline{3-23}
& &\bf 32 & 89& 89& 38& 25& 19& 15& 13& 11& 9& 8& 45& 30& 23& x&
38& 27& 21& 18& 19& x\\
\cline{3-23}
&&\bf 64 &50& 50& 24& 16& 12& 10& 8& 7& 6& 5&27& 19& 16& x&
24& 17& 14& 12& x& x \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\ani}}
&\multirow{6}{*}{4146} &\bf 2 & 896& 896& 456& 301& 227& 181& 152& 130& 114& 91&458& 312& 237& 195& 452& 309& 242& 201& 168& 148 \\
\cline{3-23}
& &\bf 4 &796& 796& 362& 238& 177& 140& 115& 99& 88& 69&413& 281& 216& 176& 362& 253& 194& 159& 135& 117\\
\cline{3-23}
&&\bf 8 &473& 473& 231& 154& 115& 92& 77& 66& 58& 46& 238& 163& 126& x&231& 158& 120& 98& 83& x \\
\cline{3-23}
&&\bf 16 & 292& 292& 130& 86& 64& 51& 43& 37& 32& 26&140& 95& x& x& 130& 91& 70& 57& 55& x\\
\cline{3-23}
& &\bf 32 &213& 213& 77& 51& 38& 30& 25& 22& 19& 15&97& 61& x& x& 76& 55& 42& 35& x& x \\
\cline{3-23}
&&\bf 64 &115& 115& 48& 31& 24& 19& 16& 14& 12& 10& 56& x& x& x&48& 34& 27& 24& x& x\\
\hline
\hline
\end{tabular}
\end{table}
In Table \ref{tab:MSDOCG}, we compare the convergence behavior of the MSDO-CG, s-step MSDO-CG, CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi}, and CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi2} (where $W_{j-1} = [T(r_{k-1})]$ and $W_j = W_{j-1}$ for $k \geq 1$) versions with respect to the number of partitions $t$ and the $s$ values. We do not test a restructured MSDO-CG since the s-step version is not exactly equivalent to the merging of $s$ iterations of MSDO-CG. The s-step MSDO-CG with $s=1$ is equivalent to a modified version of MSDO-CG which differs algorithmically from MSDO-CG but is theoretically equivalent. Moreover, MSDO-CG and s-step MSDO-CG with $s=1$ converge in the same number of iterations for all $t$ values and matrices. For $s\geq 2$, s-step MSDO-CG converges in $m$ iterations, where in most cases $m \leq \ceil{\frac{k}{s}}$, whereas MSDO-CG converges in $k$ iterations. Moreover, for all the matrices, the s-step MSDO-CG converges for $s = 10$ and all values of $t$.
Unlike CA SRE-CG and CA SRE-CG2 with the CA-Arnoldi Algorithm \ref{alg:CA-Arnoldi}, the CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} converges for $s= 2$ and $3$, as shown in Table \ref{tab:MSDOCG}. The difference is that in SRE-CG and SRE-CG2 we compute a modified block version of the power method, where $t$ vectors ($T(r_0)$) are multiplied by powers of $A$ and are A-orthonormalized. Thus there is a higher chance that these vectors converge to the dominant eigenvector at a very fast rate, leading to a numerically linearly dependent basis. Whereas in CA MSDO-CG, at every iteration we compute a block version of the power method starting with a new set of $t$ vectors, i.e. $T(r_{k-1})$ at the $k^{th}$ iteration. For the matrices {\nho} and {\poisso}, CA MSDO-CG scales even for $s>5$. But for the other matrices, as $s$ grows, CA MSDO-CG requires much more than $\ceil{\frac{k}{s}}$ and $\ceil{\frac{k}{s-1}}$ iterations to converge, due to the stagnation of the relative error.
For the matrices {\nho} and {\poisso}, CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi2} converges in exactly the same number of iterations as CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} and s-step MSDO-CG up to $s=10$.
On the other hand, CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi2} converges faster than CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} for the corresponding $s$ and $t$ values, for the matrices {\skyto} (except for $t = 2,4$), {\skyo}, and {\anio} (except for $t = 2,4$).
More importantly, CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi2} is numerically more stable than CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} and CA SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi2}, as it scales up to at least $s=5$ or $6$, whereas CA SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi2} and CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} scale up to $s=3$ or $4$, as shown in Tables \ref{tab:SRECG2} and \ref{tab:MSDOCG}.
As a summary, for well-conditioned matrices such as {\nho} and {\poisso}, the s-step and communication avoiding with Algorithm \ref{alg:CA-Arnoldi2} versions of SRE-CG and SRE-CG2 converge in the same number of iterations and scale up to at least $s=10$. But the communication avoiding with Algorithm \ref{alg:CA-Arnoldi} versions of SRE-CG and SRE-CG2 do not converge, due to the instability in the basis construction, specifically the A-orthonormalization process.
On the other hand, the s-step, communication avoiding with Algorithm \ref{alg:CA-Arnoldi} and communication avoiding with Algorithm \ref{alg:CA-Arnoldi2} versions of MSDO-CG for the matrices {\nho} and {\poisso} converge in the same number of iterations and scale up to at least $s=10$. Moreover, the corresponding versions of SRE-CG, SRE-CG2, and MSDO-CG converge in approximately the same number of iterations.
For the other matrices, the s-step versions of SRE-CG, SRE-CG2, and MSDO-CG converge and scale up to at least $s=10$, as expected. The communication avoiding with Algorithm \ref{alg:CA-Arnoldi} versions of SRE-CG and SRE-CG2 do not converge. But the communication avoiding MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} converges. Moreover, the communication avoiding MSDO-CG with Algorithm \ref{alg:CA-Arnoldi2} scales better than the communication avoiding SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi2}, even though it might require more iterations.
\section{The Preconditioned Versions} \label{sec:precCG}
Krylov subspace methods are rarely used without preconditioning. Moreover, Conjugate Gradient is a method for solving symmetric positive definite systems. For this purpose, split preconditioned versions of the above-mentioned s-step methods are introduced for solving the system $L^{-1}AL^{-t}(L^{t}x) = L^{-1}b$, where the preconditioner is $M = LL^t$. Then, the numerical stability of the preconditioned methods is briefly discussed.
\subsection{Preconditioned Algorithms}
One possible way of preconditioning the s-step versions is to simply replace $A$ by $L^{-1}AL^{-t}$ and $b$ by $L^{-1}b$ in the algorithms, where $L^{-1}AL^{-t}y = L^{-1}b$ is first solved and then the solution $x$ is obtained by solving $y = L^{t}x$. In \cite{sophiethesis}, MSDO-CG is preconditioned in this manner (Algorithm 40), where the vectors are $L^{-1}AL^{-t}$-orthonormalized (Algorithms 19 and 22) rather than $A$-orthonormalized. In this paper we precondition the s-step and communication avoiding methods while avoiding the use of $L^{-1}AL^{-t}$-orthonormalization.
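As a sanity check, the split-preconditioned approach can be illustrated on a small dense example. The sketch below (the matrix, right-hand side, and the Jacobi-type preconditioner are our own illustrative choices, not the paper's test cases) verifies that solving $L^{-1}AL^{-t}y = L^{-1}b$ and then $x = L^{-t}y$ recovers the solution of $Ax = b$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
# Build a random SPD matrix A and a right-hand side b.
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)

# Preconditioner M = L L^t; here L is the Cholesky factor of diag(A) (Jacobi).
L = np.diag(np.sqrt(np.diag(A)))

# Split-preconditioned system: L^{-1} A L^{-t} y = L^{-1} b, then x = L^{-t} y.
Ahat = np.linalg.solve(L, np.linalg.solve(L, A).T).T  # L^{-1} A L^{-t}
bhat = np.linalg.solve(L, b)
y = np.linalg.solve(Ahat, bhat)
x = np.linalg.solve(L.T, y)

# The split-preconditioned solution matches the unpreconditioned one.
assert np.allclose(x, np.linalg.solve(A, b))
```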
Consider the system $\widehat{A}\widehat{x} = \widehat{b}$, where $\widehat{A} = L^{-1}AL^{-t}$, $\widehat{x} = L^{t}x$, and $\widehat{b} = L^{-1}b$. The following relations summarize the SRE-CG, SRE-CG2, and modified MSDO-CG methods for this system:
\begin{eqnarray}
\widehat{\alpha}_k &=& \widehat{V}^t_k \widehat{r}_{k-1} \nonumber \\
\widehat{x}_k &=& \widehat{x}_{k-1} + \widehat{V}_k\widehat{\alpha}_k \nonumber\\
\widehat{r}_k &=& \widehat{r}_{k-1} - \widehat{A}\widehat{V}_k\widehat{\alpha}_k \nonumber
\end{eqnarray}
The difference is in how the $\widehat{V}_k$ vectors are constructed. In the modified MSDO-CG, $\widehat{V}_k$ is set to $[T(\widehat{r}_{k-1})]$, and then $\widehat{A}$-orthonormalized against all previous vectors. In SRE-CG and SRE-CG2 methods,
\begin{eqnarray}\widehat{V}_k = \begin{cases}
[T(\widehat{r}_0)], & \mbox{ if $k = 1$ }\\
\widehat{A}\widehat{V}_{k-1}, & \mbox{ if $k \geq 2$ }
\end{cases} \nonumber \end{eqnarray}
and then $\widehat{V}_k$ is $\widehat{A}$-orthonormalized against the previous $2t$ vectors (SRE-CG) or against all previous vectors (SRE-CG2). In the three methods, $\widehat{V}_i^t \widehat{A} \widehat{V}_i = I$ and $\widehat{V}_k^t \widehat{A} \widehat{V}_i = 0$, where $i = k-2,k-1$ for SRE-CG and $i < k$ for SRE-CG2 and modified MSDO-CG.
Note that $\widehat{r}_{k} = \widehat{b} - \widehat{A}\widehat{x}_{k} = L^{-1}b - L^{-1}AL^{-t}L^{t}x_{k} = L^{-1}(b - Ax_{k}) = L^{-1}r_{k}$. Thus, we derive the corresponding equations for $\widehat{\alpha}_k$, $x_k$, and $r_k$. \vspace{2mm}
\begin{eqnarray}
\widehat{\alpha}_k &=& \widehat{V}^t_k \widehat{r}_{k-1} = \widehat{V}^t_k L^{-1}r_{k-1} = (L^{-t}\widehat{V}_k)^t r_{k-1} \nonumber \\
\widehat{x}_k &=& L^{t}x_k = \widehat{x}_{k-1} + \widehat{V}_k\widehat{\alpha}_k = L^{t}{x}_{k-1} + \widehat{V}_k\widehat{\alpha}_k \;\;\;
\implies x_k = {x}_{k-1} + (L^{-t}\widehat{V}_k)\widehat{\alpha}_k \nonumber \\
\widehat{r}_k &=& L^{-1}r_{k} = \widehat{r}_{k-1} - \widehat{A}\widehat{V}_k\widehat{\alpha}_k = L^{-1}{r}_{k-1} - L^{-1}AL^{-t}\widehat{V}_k\widehat{\alpha}_k \nonumber \\
\implies r_{k} &=& {r}_{k-1} - A(L^{-t}\widehat{V}_k)\widehat{\alpha}_k \nonumber
\end{eqnarray}
Let $V_k = L^{-t}\widehat{V}_k$, then
\begin{eqnarray}
\widehat{\alpha}_k &=& V_k^t r_{k-1},\nonumber \\
x_k &=& {x}_{k-1} + {V}_k\widehat{\alpha}_k,\nonumber\\
r_{k} &=& {r}_{k-1} - A{V}_k\widehat{\alpha}_k.\nonumber
\end{eqnarray}
Moreover, $T(\widehat{r}_{k}) = T(L^{-1}r_{k})$ and $\widehat{A}\widehat{V}_{k-1} = L^{-1}AL^{-t}\widehat{V}_{k-1} = L^{-1}A{V}_{k-1}$. As for the $\widehat{A}$-orthonormalization, we require that $\widehat{V}_k^t \widehat{A} \widehat{V}_i = 0$ for some values of $i\neq k$. But $$\widehat{V}_k^t \widehat{A} \widehat{V}_i = \widehat{V}_k^t L^{-1}AL^{-t} \widehat{V}_i = (L^{-t}\widehat{V}_k)^tA(L^{-t} \widehat{V}_i) = {V}_k^tA{V}_i.$$ Thus, it is sufficient to A-orthonormalize $V_k = L^{-t}\widehat{V}_k$ instead of $\widehat{A}$-orthonormalizing $\widehat{V}_k$, where in modified MSDO-CG, $$V_k = L^{-t}[T(\widehat{r}_{k-1})] = L^{-t}[T(L^{-1}r_{k-1})],$$ and in SRE-CG and SRE-CG2 $$V_k = L^{-t}\widehat{V}_k = \begin{cases}
L^{-t}[T(\widehat{r}_0)], & \mbox{ if } k = 1\\
L^{-t} L^{-1}{A}{V}_{k-1} = M^{-1}A{V}_{k-1}, & \mbox{ if $k \geq 2$ }
\end{cases}.$$
This summarizes the three methods for $s=1$. In general, for $s>1$ the s-step methods are described in Algorithms \ref{alg:psstepSRE-CG1}, \ref{alg:psstepSRE-CG2}, and \ref{alg:psstepMSDO-CG}.
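The identity $\widehat{V}_k^t \widehat{A} \widehat{V}_i = V_k^t A V_i$ that underlies this derivation is easy to verify numerically. A minimal sketch with a diagonal (Jacobi) preconditioner; all names and the test matrices are our own:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 10, 3
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # SPD matrix
L = np.diag(np.sqrt(np.diag(A)))     # Cholesky factor of M = diag(A)
Ahat = np.linalg.solve(L, np.linalg.solve(L, A).T).T  # L^{-1} A L^{-t}

Vhat_k = rng.standard_normal((n, t))
Vhat_i = rng.standard_normal((n, t))
V_k = np.linalg.solve(L.T, Vhat_k)   # V = L^{-t} Vhat
V_i = np.linalg.solve(L.T, Vhat_i)

# Vhat_k^t Ahat Vhat_i = V_k^t A V_i: A-orthonormalizing the V's is
# equivalent to Ahat-orthonormalizing the Vhat's.
assert np.allclose(Vhat_k.T @ Ahat @ Vhat_i, V_k.T @ A @ V_i)
```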
As for the communication avoiding versions, in Algorithms \ref{alg:CA-Arnoldi} and \ref{alg:CA-Arnoldi2}, $AW_{j+i-1}$ is replaced by $M^{-1}AW_{j+i-1}$, and $W_{j-1} = L^{-t}[T(L^{-1}r_{k-1})]$, for $k\geq 1$ in CA MSDO-CG and for $k = 1$ in CA SRE-CG and CA SRE-CG2.
If the preconditioner is a block diagonal preconditioner, with $t$ blocks that correspond to the $t$ partitions of the matrix $A$, then $[T(L^{-1}r_k)] = L^{-1}[T(r_k)]$ and $L^{-t}[{T}(L^{-1}r_k)] = M^{-1}[T(r_k)]$. In this case, there is no need for split preconditioning, similarly to CG. \vspace{-5mm}
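The block diagonal case can also be checked numerically. The sketch below (our simplification: $t$ equal consecutive domains and a random block-diagonal factor) verifies that splitting commutes with the block-diagonal solve, $[T(L^{-1}r)] = L^{-1}[T(r)]$, and hence $L^{-t}[T(L^{-1}r)] = M^{-1}[T(r)]$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 12, 3
bs = n // t  # equal consecutive domains, for simplicity

def T(r):
    """Split r into t column vectors, each keeping only its domain's entries."""
    V = np.zeros((n, t))
    for i in range(t):
        V[i * bs:(i + 1) * bs, i] = r[i * bs:(i + 1) * bs]
    return V

# Block-diagonal lower-triangular L conformal with the t domains.
L = np.zeros((n, n))
for i in range(t):
    Bi = rng.standard_normal((bs, bs))
    L[i * bs:(i + 1) * bs, i * bs:(i + 1) * bs] = np.linalg.cholesky(
        Bi @ Bi.T + bs * np.eye(bs))

r = rng.standard_normal(n)
M = L @ L.T
# Splitting commutes with the block-diagonal triangular solve ...
assert np.allclose(T(np.linalg.solve(L, r)), np.linalg.solve(L, T(r)))
# ... hence L^{-t}[T(L^{-1} r)] = M^{-1}[T(r)].
assert np.allclose(np.linalg.solve(L.T, T(np.linalg.solve(L, r))),
                   np.linalg.solve(M, T(r)))
```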
\begin{algorithm}[H]
\centering
\caption{Split preconditioned s-step SRE-CG }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $M = LL^t$; $s$ }
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $L^{-1}AL^{-t}(L^{t}x)=L^{-1}b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$ , $\rho =\rho_0$, $\widehat{r}_0 = L^{-1}r_{0}$, $k = 1$;
\While {( ${\rho} > \epsilon \rho_0$ and $k < k_{max}$ )}
\State Let $j = (k-1)s+1$
\If {($k==1$)}
\State A-orthonormalize $W_j = L^{-t}[{T}(\widehat{r}_0)]$, and let $V = W_j$
\Else
\State A-orthonormalize $W_{j} = M^{-1}AW_{j-1}$ against $W_{j-2}$ and $W_{j-1}$
\State A-orthonormalize $W_{j}$ and let $V = W_{j}$
\EndIf
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{j+i} = M^{-1}AW_{j+i-1}$ against $W_{j+i-2}$ and $W_{j+i-1}$
\State A-orthonormalize $W_{j+i}$ and let $V = [V \; W_{j+i}]$
\EndFor
\State $\widehat{\alpha} = V^t r_{k-1}$, \;\;\; $x_k = x_{k-1} + V \widehat{\alpha} $
\State $r_k = r_{k-1} - AV \widehat{\alpha} $, \;\;\; $\rho = ||r_{k}||_2$, \;\;\; $k = k+1$
\EndWhile
\end{algorithmic}}
\label{alg:psstepSRE-CG1}
\end{algorithm}\vspace{-15mm}
\begin{algorithm}[H]
\centering
\caption{Split preconditioned s-step SRE-CG2 }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $M = LL^t$; $s$ }
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $L^{-1}AL^{-t}(L^{t}x)=L^{-1}b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$ , $\rho = \rho_0 $, $\widehat{r}_0 = L^{-1}r_{0}$, $k = 1$;
\While {( ${\rho} > \epsilon \rho_0 $ and $k < k_{max}$ )}
\State Let $j = (k-1)s+1$
\If {($k==1$)}
\State A-orthonormalize $W_j = L^{-t}[{T}(\widehat{r}_0)]$, let $Q = W_j$, and $V = W_j$
\Else
\State A-orthonormalize $W_j = M^{-1}AW_{j-1}$ against $Q$
\State A-orthonormalize $W_j$, let $Q = [Q, \; W_j]$, and $V = W_j$
\EndIf
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{j+i} = M^{-1}AW_{j+i-1}$ against $Q$
\State A-orthonormalize $W_{j+i}$, let $V = [V, \; W_{j+i}]$ and $Q = [Q, \; W_{j+i}]$
\EndFor
\State $\widehat{\alpha} = V^t r_{k-1}$, \;\;\; $x_k = x_{k-1} + V \widehat{\alpha} $
\State $r_k = r_{k-1} - AV \widehat{\alpha} $, \;\;\; $\rho = ||r_{k}||_2$, \;\;\; $k = k+1$
\EndWhile
\end{algorithmic}}
\label{alg:psstepSRE-CG2}
\end{algorithm}
\begin{algorithm}[H]
\centering
\caption{Split preconditioned s-step MSDO-CG }
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $n \times n$ symmetric positive definite matrix; $k_{max}$, maximum allowed iterations}
\Statex{\qquad \quad $b$, $n \times 1$ right-hand side; $x_0$, initial guess; $\epsilon$, stopping tolerance; $M = LL^t$; $s$ }
\Statex{\textbf{Output:} $x_k$, approximate solution of the system $L^{-1}AL^{-t}(L^{t}x)=L^{-1}b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||_2$, $\rho =\rho_0$, $k = 1$;
\While {( ${\rho} > \epsilon \rho_0 $ and $k < k_{max}$ )}
\State$\widehat{r}_{k-1} = L^{-1}r_{k-1}$ and $W_1 = L^{-t}[{T}(\widehat{r}_{k-1})]$
\If {($k==1$)}
\State A-orthonormalize $W_1$, let $V = W_1$ and $Q = W_1$
\Else
\State A-orthonormalize $W_1$ against $Q$
\State A-orthonormalize $W_1$, let $V = W_1$ and $Q = [Q \; W_1]$
\EndIf
\For {($i=1:s-1$)}
\State A-orthonormalize $W_{i+1} = M^{-1}AW_i$ against $Q$
\State A-orthonormalize $W_{i+1}$, let $V = [V \; W_{i+1}]$ and $Q = [Q \; W_{i+1}]$
\EndFor
\State $\widehat{\alpha} = V^t r_{k-1}$, \;\;\; $x_k = x_{k-1} + V \widehat{\alpha} $
\State $r_k = r_{k-1} - AV\widehat{\alpha} $, \;\;\; $\rho = ||r_{k}||_2$, \;\;\; $k = k+1$
\EndWhile
\end{algorithmic}}
\label{alg:psstepMSDO-CG}
\end{algorithm}
\subsection{Convergence}\label{sec:precconv}
We test the preconditioned versions using block Jacobi preconditioner. First, the graphs of the matrices are partitioned into 64 domains using Metis Kway dissection \cite{metis}. Each of the 64 diagonal blocks is factorized using Cholesky decomposition (Table \ref{tab:precECG}) or Incomplete Cholesky zero fill-in decomposition (Table \ref{tab:precECG2}).
Then for a given $t$, each of the $t$ domains is the union of $64/t$ consecutive domains. The preconditioner is $M = LL^t$, where the $L_i$'s are the lower triangular blocks for $i=1,2,\dots,64$, and
$$L = \begin{bmatrix}
L_1 &0&0&0&\hdots&0\\
0&L_2&0&0&\hdots&0\\
0&0&\ddots&0&\hdots&0\\
0&0&0&L_i&\hdots&0\\
0&0&0&0&\ddots&0\\
0&0&0&\hdots&0&L_{64}
\end{bmatrix}. $$
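A simplified sketch of this construction (dense matrices and equal consecutive blocks instead of Metis domains; the function name is ours):

```python
import numpy as np

def block_jacobi_factor(A, nblocks):
    """Cholesky-factor each diagonal block of A; return the block-diagonal L
    so that M = L @ L.T is the block Jacobi preconditioner."""
    n = A.shape[0]
    bs = n // nblocks  # assume n divisible by nblocks, for simplicity
    L = np.zeros_like(A)
    for i in range(nblocks):
        s = slice(i * bs, (i + 1) * bs)
        # Diagonal blocks of an SPD matrix are SPD, so Cholesky succeeds.
        L[s, s] = np.linalg.cholesky(A[s, s])
    return L

rng = np.random.default_rng(3)
n = 16
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
L = block_jacobi_factor(A, 4)
# Each diagonal block of L L^t reproduces the corresponding block of A.
assert np.allclose((L @ L.T)[0:4, 0:4], A[0:4, 0:4])
```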
In Table \ref{tab:precECG2}, we test the convergence of Incomplete Cholesky block Jacobi preconditioned s-step and CA versions of SRE-CG, SRE-CG2 and MSDO-CG for $t = 2,4,8,16,32,64$ and $s = 1,2,4,8$. For the matrices {\poisso}, {\nho}, {\skyo}, and {\anio}, and for all the $s$ and $t$ values, the preconditioned s-step versions and their corresponding CA versions with Algorithm \ref{alg:CA-Arnoldi2} converge in the same number of iterations and scale up to $s = 8$.
CA SRE-CG with Algorithm \ref{alg:CA-Arnoldi} stagnates, whereas CA SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi} converges in exactly the same number of iterations as s-step SRE-CG2 and CA SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi2}. Moreover, the corresponding preconditioned SRE-CG and SRE-CG2 versions converge in a similar number of iterations.
As for {\skyto}, CA SRE-CG stagnates for $s=8$ and $t=4,8$ only. CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi2} converges as fast as s-step MSDO-CG, whereas CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} requires more iterations in some cases ({\skyo}, {\skyto}).
A similar convergence behavior is observed for the Complete Cholesky block Jacobi preconditioned s-step and CA versions of SRE-CG, SRE-CG2 and MSDO-CG, in Table \ref{tab:precECG}, where the only difference is that the methods converge faster than the corresponding Incomplete Cholesky block Jacobi preconditioned versions.
\begin{table}[H]
\setlength{\tabcolsep}{2pt}
\caption{\label{tab:precECG2} Comparison of the convergence of different Block Jacobi with Incomplete Cholesky preconditioned E-CG versions (s-step SRE-CG, CA SRE-CG with Algorithm \ref{alg:CA-Arnoldi2}, s-step SRE-CG2, CA SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi} or \ref{alg:CA-Arnoldi2}, s-step MSDO-CG, CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} and Algorithm \ref{alg:CA-Arnoldi2}) with respect to number of partitions $t$ and $s$ values.}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{||c||c||c||c|c|c|c||c|c|c|||c|c|c|c||c|c|c|||c|c|c|c||c|c|c||c|c|c||}
\cline{4-27}
\cline{4-27}
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{7}{c|||}{\multirow{1}{*}{\bf SRE-CG }} & \multicolumn{7}{c|||}{\multirow{1}{*}{\bf SRE-CG2}}& \multicolumn{10}{c||}{\multirow{1}{*}{\bf MSDO-CG}}\\ \cline{4-27}
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{4}{c||}{\multirow{2}{*}{\bf s-step }} & \multicolumn{3}{c|||}{\multirow{2}{*}{\bf CA Alg7}}& \multicolumn{4}{c||}{\multirow{2}{*}{\bf s-step }}& \multicolumn{3}{c|||}{\multirow{2}{*}{\bf CA Alg5/7}}& \multicolumn{4}{c||}{\multirow{2}{*}{\bf s-step }}& \multicolumn{3}{c||}{\multirow{2}{*}{\bf CA Alg5}}& \multicolumn{3}{c||}{\multirow{2}{*}{\bf CA Alg7}}\\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{}& \multicolumn{4}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c|||}{\multirow{2}{*}{}} & \multicolumn{4}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c|||}{\multirow{2}{*}{}} & \multicolumn{4}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c||}{\multirow{2}{*}{}}& \multicolumn{3}{c||}{\multirow{2}{*}{}}\\
\cline{2-27}
\multicolumn{1}{c||}{} & \multicolumn{1}{c||}{\bf PCG} & \backslashbox{$\bf t$}{$\bf s$} & \bf 1 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 &\bf 1 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 &\bf 1 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 \\
\hline \hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\bf \poiss}}
&\multirow{6}{*}{86} &\bf 2 &82& 41& 21& 11& 41& 21& 11& 82& 41& 21& 11& 41& 21& 11& 79& 40& 21& 11& 40& 20& 11& 40& 21& 12 \\
\cline{3-27}
& & \bf 4 &65& 33& 17& 9& 33& 17& 9& 65& 33& 17& 9& 33& 17& 9& 71& 36& 18& 9& 35& 18& 9& 36& 18& 10 \\
\cline{3-27}
&&\bf 8 &55& 28& 14& 7& 28& 14& 7& 55& 28& 14& 7& 28& 14& 7& 59& 30& 14& 7& 32& 16& 8& 30& 15& 9 \\
\cline{3-27}
&&\bf 16 &41& 21& 11& 6& 21& 11& 6& 41& 21& 11& 6& 21& 11& 6& 51& 26& 12& 6& 27& 14& 7& 26& 12& 7 \\
\cline{3-27}
& &\bf 32 &30& 15& 8& 4& 15& 8& 4& 30& 15& 8& 4& 15& 8& 4& 40& 21& 9& 5& 23& 12& 5& 21& 10& 6 \\
\cline{3-27}
&&\bf 64 &23& 12& 6& 3& 12& 6& 3& 23& 12& 6& 3& 12& 6& 3& 30& 16& 7& 3& 19& 9& 4& 16& 8& 5 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\nh}}
&\multirow{6}{*}{115} &\bf 2 & 105& 53& 27& 14& 53& 27& 14& 105& 53& 27& 14& 53& 27& 14& 106& 53& 27& 13& 54& 27& 14& 53& 27& 15 \\
\cline{3-27}
& &\bf 4 & 82& 41& 21& 11& 41& 21& 11& 82& 41& 21& 11& 41& 21& 11& 87& 45& 22& 11& 46& 22& 12& 45& 22& 13 \\
\cline{3-27}
&&\bf 8 &65& 33& 17& 9& 33& 17& 9& 65& 33& 17& 9& 33& 17& 9& 71& 36& 17& 9& 38& 19& 10& 36& 18& 11 \\
\cline{3-27}
&&\bf 16 & 49& 25& 13& 7& 25& 13& 7& 49& 25& 13& 7& 25& 13& 7& 60& 30& 13& 7& 32& 16& 8& 30& 15& 9 \\
\cline{3-27}
& &\bf 32 & 36& 18& 9& 5& 18& 9& 5& 36& 18& 9& 5& 18& 9& 5& 45& 24& 11& 5& 27& 13& 6& 24& 12& 7 \\
\cline{3-27}
&&\bf 64 &27& 14& 7& 4& 14& 7& 4& 27& 14& 7& 4& 14& 7& 4& 34& 18& 8& 4& 22& 10& 5& 18& 9& 6 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\sky}}
&\multirow{6}{*}{305} &\bf 2 &237& 119& 60& 30& 119& 60& 31& 233& 112& 56& 28& 112& 56& 28& 233& 143& 75& 34& 160& 81& 41& 143& 75& 35 \\
\cline{3-27}
& &\bf 4 &135& 68& 34& 17& 68& 34& 18& 131& 66& 33& 17& 66& 33& 17& 193& 124& 49& 20& 135& 68& 36& 124& 47& 22 \\
\cline{3-27}
&&\bf 8 &83& 42& 21& 11& 42& 21& 11& 83& 42& 21& 11& 42& 21& 11& 127& 84& 31& 12& 106& 54& 27& 84& 32& 14 \\
\cline{3-27}
&&\bf 16 &54& 27& 14& 7& 27& 14& 7& 54& 27& 14& 7& 27& 14& 7& 94& 56& 20& 8& 80& 41& 20& 56& 20& 10 \\
\cline{3-27}
& &\bf 32 & 39& 20& 10& 5& 20& 10& 5& 39& 20& 10& 5& 20& 10& 5& 62& 37& 13& 6& 58& 28& 12& 37& 15& 7 \\
\cline{3-27}
&&\bf 64 &29& 15& 8& 4& 15& 8& 4& 29& 15& 8& 4& 15& 8& 4& 43& 25& 9& 4& 41& 21& 8& 25& 11& 6 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\skyt}}
&\multirow{6}{*}{245} &\bf 2 &216& 108& 56& 28& 108& 56& 29& 201& 103& 52& 26& 103& 52& 25& 203& 116&50& 26& 117& 69& 30& 116& 60& 34 \\
\cline{3-27}
& &\bf 4 &170& 85& 43& 22& 84& 43& x& 149& 77& 39& 20& 77& 39& 20& 170& 134& 53& 18& 123& 68& 23& 130& 58& 29 \\
\cline{3-27}
&&\bf 8 &108& 54& 27& 14& 55& 28& x& 101& 51& 26& 13& 51& 26& 13& 140& 99& 32& 14& 124& 55& 16& 99& 36& 18 \\
\cline{3-27}
&&\bf 16 &61& 31& 15& 8& 30& 16& 9& 58& 29& 15& 8& 29& 15& 8& 105& 63& 19& 8& 101& 37& 11& 63& 21& 10 \\
\cline{3-27}
& &\bf 32 & 34& 17& 9& 5& 18& 9& 5& 34& 17& 9& 5& 17& 9& 5& 77& 38& 11& 5& 71& 23& 7& 38& 13& 7 \\
\cline{3-27}
&&\bf 64 &23& 12& 6& 3& 12& 6& 3& 23& 12& 6& 3& 12& 6& 3& 53& 24& 7& 4& 48& 16& 5& 24& 9& 5 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\ani}}
&\multirow{6}{*}{73} &\bf 2 &70& 35& 18& 9& 35& 18& 9& 70& 35& 18& 9& 35& 18& 9& 70& 35& 18& 9& 35& 18& 9& 35& 18& 9 \\
\cline{3-27}
& &\bf 4 &63& 32& 16& 8& 32& 16& 8& 63& 32& 16& 8& 32& 16& 8& 66& 33& 16& 9& 33& 17& 9& 33& 17& 10 \\
\cline{3-27}
&&\bf 8 &57& 29& 15& 8& 29& 15& 8& 57& 29& 15& 8& 29& 15& 8& 59& 30& 15& 8& 30& 15& 8& 30& 17& 11\\
\cline{3-27}
&&\bf 16 &50& 25& 13& 7& 25& 13& 7& 50& 25& 13& 7& 25& 13& 7& 54& 27& 14& 7& 28& 14& 7& 27& 16& 10 \\
\cline{3-27}
& &\bf 32 &43& 22& 11& 6& 22& 11& 6& 43& 22& 11& 6& 22& 11& 6& 51& 25& 12& 6& 25& 13& 7& 25& 15& 9\\
\cline{3-27}
&&\bf 64 &35& 18& 9& 5& 18& 9& 5& 35& 18& 9& 5& 18& 9& 5& 44& 21& 10& 5& 23& 11& 6& 21& 13& 7\\
\hline
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\setlength{\tabcolsep}{2pt}
\caption{\label{tab:precECG} Comparison of the convergence of different Block Jacobi with Complete Cholesky preconditioned E-CG versions (s-step SRE-CG, CA SRE-CG with Algorithm \ref{alg:CA-Arnoldi2}, s-step SRE-CG2, CA SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi} or \ref{alg:CA-Arnoldi2}, s-step MSDO-CG, CA MSDO-CG with Algorithm \ref{alg:CA-Arnoldi} and Algorithm \ref{alg:CA-Arnoldi2}) with respect to number of partitions $t$ and $s$ values.}
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{||c||c||c||c|c|c|c||c|c|c|||c|c|c|c||c|c|c|||c|c|c|c||c|c|c||c|c|c||}
\cline{4-27}
\cline{4-27}
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{7}{c|||}{\multirow{1}{*}{\bf SRE-CG }} & \multicolumn{7}{c|||}{\multirow{1}{*}{\bf SRE-CG2}}& \multicolumn{10}{c||}{\multirow{1}{*}{\bf MSDO-CG}}\\ \cline{4-27}
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{} & \multicolumn{4}{c||}{\multirow{2}{*}{\bf s-step }} & \multicolumn{3}{c|||}{\multirow{2}{*}{\bf CA Alg7}}& \multicolumn{4}{c||}{\multirow{2}{*}{\bf s-step }}& \multicolumn{3}{c|||}{\multirow{2}{*}{\bf CA Alg5/7}}& \multicolumn{4}{c||}{\multirow{2}{*}{\bf s-step }}& \multicolumn{3}{c||}{\multirow{2}{*}{\bf CA Alg5}}& \multicolumn{3}{c||}{\multirow{2}{*}{\bf CA Alg7}}\\
\multicolumn{1}{c}{}& \multicolumn{1}{c}{}& \multicolumn{1}{c||}{}& \multicolumn{4}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c|||}{\multirow{2}{*}{}} & \multicolumn{4}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c|||}{\multirow{2}{*}{}} & \multicolumn{4}{c||}{\multirow{2}{*}{}} & \multicolumn{3}{c||}{\multirow{2}{*}{}}& \multicolumn{3}{c||}{\multirow{2}{*}{}}\\
\cline{2-27}
\multicolumn{1}{c||}{} & \multicolumn{1}{c||}{\bf PCG} & \backslashbox{$\bf t$}{$\bf s$} & \bf 1 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 &\bf 1 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 &\bf 1 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 &\bf 2 &\bf 4 &\bf 8 \\
\hline \hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\bf \poiss}}
&\multirow{6}{*}{67} &\bf 2 &60& 30& 15& 8& 30& 15& 8& 60& 30& 15& 8& 30& 15& 8& 61& 31& 15& 8& 32& 16& 8& 31& 16& 9 \\
\cline{3-27}
& & \bf 4 &51& 26& 13& 7& 26& 13& 7& 51& 26& 13& 7& 26& 13& 7& 53& 27& 14& 7& 27& 14& 7& 27& 14& 8 \\
\cline{3-27}
&&\bf 8 &42& 21& 11& 6& 21& 11& 6& 42& 21& 11& 6& 21& 11& 6& 45& 23& 11& 6&24& 12& 6& 23& 12& 7 \\
\cline{3-27}
&&\bf 16 &33& 17& 9& 5& 17& 9& 5& 33& 17& 9& 5& 17& 9& 5& 37& 20& 9& 5&21& 10& 5& 20& 10& 6 \\
\cline{3-27}
& &\bf 32 &25& 13& 7& 4& 13& 7& 4& 25& 13& 7& 4& 13& 7& 4& 30& 16& 7& 4& 17& 9& 4& 16& 8& 5 \\
\cline{3-27}
&&\bf 64 &20& 10& 5& 3& 10& 5& 3& 20& 10& 5& 3& 10& 5& 3& 23& 12& 6& 3& 14& 7& 3& 12& 6& 4 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\nh}}
&\multirow{6}{*}{92} &\bf 2 & 74& 37& 19& 10& 37& 19& 10& 74& 37& 19& 10& 37& 19& 10& 81& 40& 19& 10&41& 21& 11& 40& 21& 12 \\
\cline{3-27}
& &\bf 4 & 61& 31& 16& 8& 31& 16& 8& 61& 31& 16& 8& 31& 16& 8& 66& 33& 17& 8& 33& 17& 9& 33& 17& 10\\
\cline{3-27}
&&\bf 8 &51& 26& 13& 7& 26& 13& 7& 51& 26& 13& 7& 26& 13& 7& 55& 28& 13& 7&29& 14& 7& 28& 14& 9 \\
\cline{3-27}
&&\bf 16 & 39& 20& 10& 5& 20& 10& 5& 39& 20& 10& 5& 20& 10& 5& 44& 23& 11& 5& 24& 12& 6&23& 12& 8 \\
\cline{3-27}
& &\bf 32 & 30& 15& 8& 4& 15& 8& 4& 30& 15& 8& 4& 15& 8& 4& 34& 19& 8& 4&20& 10& 5& 19& 9& 6 \\
\cline{3-27}
&&\bf 64 &23& 12& 6& 3& 12& 6& 3& 23& 12& 6& 3& 12& 6& 3& 26& 14& 6& 3&16& 8& 4& 14& 8& 5 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\sky}}
&\multirow{6}{*}{264} &\bf 2 &193& 97& 48& 24& 97& 48& 27& 183& 92& 46& 23& 92& 46& 23& 189& 121& 56& 25& 118& 60& 33& 121& 58& 27 \\
\cline{3-27}
& &\bf 4 &105& 53& 27& 14& 53& 27& 14& 105& 53& 27& 14& 53& 27& 14& 146& 91& 37& 16& 106& 54& 27& 91& 38& 17\\
\cline{3-27}
&&\bf 8 &66& 33& 17& 9& 33& 17& 9& 66& 33& 17& 9& 33& 17& 9& 98& 64& 23& 10& 80& 39& 20& 64& 23& 12\\
\cline{3-27}
&&\bf 16 &44& 22& 11& 6& 22& 11& 6& 44& 22& 11& 6& 22& 11& 6& 70& 41& 15& 7&58& 28& 13& 41& 17& 8\\
\cline{3-27}
& &\bf 32 & 31& 16& 8& 4& 16& 8& 4& 31& 16& 8& 4& 16& 8& 4& 48& 26& 10& 5&37& 18& 7& 26& 11& 6\\
\cline{3-27}
&&\bf 64 &19& 10& 5& 3& 10& 5& 3& 19& 10& 5& 3& 10& 5& 3& 30& 17& 6& 3&21& 10& 4& 17& 7& 4 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\skyt}}
&\multirow{6}{*}{225} &\bf 2 & 181& 91& 48& 24& 94& 48& 26& 173& 87& 46& 23& 87& 46& 23& 186& 106& 48& 25&114& 61& 28& 106& 49& 32\\
\cline{3-27}
& &\bf 4 & 139& 70& 37& 19& 72& 38& x& 130& 65& 34& 17& 65& 34& 18& 154& 113& 43& 19& 112& 60& 22& 113& 47& 24\\
\cline{3-27}
&&\bf 8 &80& 40& 20& 10& 40& 20& 14& 77& 39& 20& 10& 39& 20& 10& 117& 76& 26& 11&99& 48& 15& 76& 28& 14 \\
\cline{3-27}
&&\bf 16 &45& 23& 12& 6& 23& 12& 6& 45& 23& 12& 6& 23& 12& 6& 91& 52& 15& 6&87& 32& 10& 52& 17& 8\\
\cline{3-27}
& &\bf 32 & 29& 15& 8& 4& 15& 8& 4& 29& 15& 8& 4& 15& 8& 4& 62& 29& 9& 4&57& 20& 6& 29& 10& 6\\
\cline{3-27}
&&\bf 64 &20& 10& 5& 3& 10& 5& 3& 20& 10& 5& 3& 10& 5& 3& 44& 18& 7& 3&38& 13& 4& 18& 7& 4 \\
\hline
\hline
\multirow{6}{*}{\rotatebox[origin=c]{90}{\ani}}
&\multirow{6}{*}{69} &\bf 2 & 66& 33& 17& 9& 33& 17& 9& 66& 33& 17& 9& 33& 17& 9& 66& 33& 17& 9& 33& 17& 9& 33& 17& 9 \\
\cline{3-27}
& &\bf 4 &61& 31& 16& 8& 31& 16& 8& 61& 31& 16& 8& 31& 16& 8& 61& 31& 15& 8&31& 16& 8& 31& 16& 10\\
\cline{3-27}
&&\bf 8 &56& 28& 14& 7& 28& 14& 7& 56& 28& 14& 7& 28& 14& 7& 58& 29& 15& 8&29& 15& 8& 29& 16& 11
\\
\cline{3-27}
&&\bf 16 &49& 25& 13& 7& 25& 13& 7& 49& 25& 13& 7& 25& 13& 7& 54& 27& 13& 7& 28& 14& 7&27& 16& 10 \\
\cline{3-27}
& &\bf 32 &42& 21& 11& 6& 21& 11& 6& 42& 21& 11& 6& 21& 11& 6& 50& 24& 12& 6& 25& 13& 6&24& 14& 10\\
\cline{3-27}
&&\bf 64 &35& 18& 9& 5& 18& 9& 5& 35& 18& 9& 5& 18& 9& 5& 44& 20& 10& 5& 21& 11& 5&20& 12& 7\\
\hline
\hline
\end{tabular}
\end{table}
\section{Parallelization and Expected performance} \label{sec:par}
In this section, we briefly describe the parallelization of the unpreconditioned and preconditioned, s-step and CA SRE-CG, SRE-CG2, and MSDO-CG methods, assuming that the algorithms are executed on a distributed memory machine with $t$ processors. Then, we compare the performance of the s-step and CA methods with respect to the SRE-CG, SRE-CG2, and MSDO-CG methods. Finally, we compare the expected performance of the CA enlarged CG versions with respect to the classical CG, in terms of memory, flops and communication.
In what follows, we assume that the estimated runtime of an algorithm with a total of $z$ computed flops and $s$ sent messages, each of size $k$, is $\gamma_c z+ \alpha_c s + \beta_c sk$, where $\gamma_c $ is the inverse floating-point rate (seconds per floating-point operation), $\alpha_c$ is the latency (seconds), and $\beta_c$ is the inverse bandwidth (seconds per word). Moreover, unless specified otherwise, we assume that the number of processors is equal to the number of partitions $t$.
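This cost model can be written down directly; a tiny helper (ours, not from the paper) for plugging in machine parameters:

```python
def estimated_runtime(z, s, k, gamma_c, alpha_c, beta_c):
    """Estimated runtime of an algorithm that computes z flops and sends
    s messages of k words each: gamma_c*z + alpha_c*s + beta_c*s*k."""
    return gamma_c * z + alpha_c * s + beta_c * s * k

# Example: 10 flops, 2 messages of 3 words, with illustrative machine constants.
print(estimated_runtime(10, 2, 3, 1.0, 2.0, 0.5))  # 17.0
```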
\subsection{Unpreconditioned Methods} \label{sec:unpm}
The unpreconditioned s-step SRE-CG and s-step SRE-CG2 parallelization is similar to that of SRE-CG and SRE-CG2 described in \cite{EKS}, with the difference that the s-step versions send $(s-1)log(t)$ fewer messages and words per s-step iteration. Moreover, the s-step MSDO-CG algorithm is similar in structure to that of s-step SRE-CG; thus the number of messages sent in parallel is the same as that of s-step SRE-CG2. We assume that SRE-CG, SRE-CG2, and MSDO-CG converge in $k$ iterations and the corresponding s-step versions converge in $k_s = \ceil{\frac{k}{s}}$ iterations. Thus, $ 5s{k_s}log(t) + k_slog(t)\approx 5{k}log(t) + \frac{k}{s}log(t)$ messages are sent in parallel in the s-step versions, compared to $6{k}log(t)$ messages. This leads to a $\frac{(s-1)100}{6s} \%$ reduction in communication, without increasing the number of computed flops. For example, an $11.11\%$ reduction is achieved in the s-step versions for $s=3$, and a $15\%$ reduction for $s=10$.
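The reduction formula above is easily tabulated; a small helper (the function name is ours) reproduces the quoted percentages:

```python
def sstep_reduction_percent(s):
    """Communication reduction of the s-step versions relative to the
    6*k*log(t) messages of the enlarged methods: (s-1)*100/(6*s) percent."""
    return (s - 1) * 100 / (6 * s)

assert round(sstep_reduction_percent(3), 2) == 11.11   # s = 3
assert sstep_reduction_percent(10) == 15.0             # s = 10
```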
The difference between the parallelization of unpreconditioned CA MSDO-CG and unpreconditioned CA SRE-CG2 is in the basis construction. In CA MSDO-CG each of the $t$ processors can compute the $s$ basis vectors
$$T_i(r_{k-1}), AT_i(r_{k-1}), A^2T_i(r_{k-1}),..., A^{s-1}T_i(r_{k-1})$$
independently from other processors, where $T_i(r_{k-1})$ is a vector of all zeros except at $n/t$ entries that correspond to the $i^{th}$ domain of the matrix $A$. Thus, there is no need for communication avoiding kernels. To compute the $s$ vectors without any communication, processor $i$ needs $T_i(r_{k-1})$, the row-wise part of the vector $r_{k-1}$ corresponding to the $i^{th}$ domain $D_i$, and a part of the matrix $A$ depending on $s$ and the sparsity pattern of $A$. Specifically, processor $i$ needs the column-wise part of $A$ corresponding to $R(G(A), D_i ,s)$, the set of vertices in the graph of $A$ reachable
by paths of length at most $s$ from any vertex in $D_i$.
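The reachable set $R(G(A), D_i, s)$ can be computed by a breadth-first sweep of $s$ levels; a small sketch (the graph, an unweighted chain, and all names are ours):

```python
def reachable(adj, domain, s):
    """Vertices reachable by paths of length at most s from any vertex in
    `domain`, in the graph given by adjacency lists `adj` -- R(G(A), D_i, s)."""
    frontier = set(domain)
    seen = set(domain)
    for _ in range(s):
        frontier = {w for v in frontier for w in adj[v]} - seen
        seen |= frontier
    return seen

# 1D chain 0-1-2-3-4-5: from domain {2, 3}, paths of length <= 1 reach 1..4.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
assert reachable(adj, {2, 3}, 1) == {1, 2, 3, 4}
assert reachable(adj, {2, 3}, 2) == {0, 1, 2, 3, 4, 5}
```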
On the other hand, in CA SRE-CG2 at iteration $k$, the $st$ basis vectors $$AW_{(k-1)s}, A^2W_{(k-1)s},..,A^sW_{(k-1)s}$$ have to be computed, where $W_{(k-1)s}$ is a block of $t$ dense vectors. Similarly to CA MSDO-CG, each of the $t$ processors can compute the $s$ basis vectors
$$AW_{(k-1)s}(:,i), A^2W_{(k-1)s}(:,i),..,A^sW_{(k-1)s}(:,i)$$
independently from other processors. But processor $i$ needs the full matrix $A$ and the vector $W_{(k-1)s}(:,i)$. Another alternative is to use a block version of the matrix powers kernel, where processor $i$ computes a row-wise part of the $s$ blocks without any communication, by performing some redundant computations. Moreover, as discussed in section \ref{sec:SRNum}, for numerical stability purposes, $AW_{(k-1)s}$ has to be A-orthonormalized before proceeding in the basis construction. This increases the number of messages sent.
Then, all of the computed $st$ vectors in CA MSDO-CG and CA SRE-CG2, are A-orthonormalized against the previous $st(k-1)$ vectors using CGS2 (Algorithm 18 in \cite{sophiethesis}), and against themselves using A-CholQR (Algorithm
21 in \cite{sophiethesis}) or Pre-CholQR (Algorithm 23 in \cite{sophiethesis}). The parallelization of the A-orthonormalization algorithms is described in detail for a block of $t$ vectors in \cite{sophiethesis}. For a block of $st$ vectors, the same number of messages is sent but with more words. Thus, $ 5{k_s}log(t) + k_slog(t)\approx 6\frac{k}{s}log(t)$ messages are sent in parallel in CA MSDO-CG, leading to a $\frac{(s-1)100}{s} \%$ reduction in communication as compared to MSDO-CG (for $s=3 \implies 66.6\%$ reduction). Whereas, in CA SRE-CG2, $ 2*5{k_s}log(t) + k_slog(t)\approx 11\frac{k}{s}log(t)$ messages are sent, leading to a $\frac{(6s-11)100}{6s} \%$ reduction in communication (for $s=3 \implies 17.4\%$ reduction).
As for the unpreconditioned CA SRE-CG, its parallelization is exactly the same as that of CA SRE-CG2, where the same number of messages is sent in parallel. However, less words are sent per message, since in CA SRE-CG the $st$ computed vectors are A-orthonormalized against the previous $(s+1)t$ vectors.
Thus the s-step and CA versions of the enlarged CG methods reduce communication as compared to their corresponding enlarged versions for the same number of processors and the same number of partitions $t$. All the s-step versions are comparable in terms of numerical stability and communication reduction. However, CA MSDO-CG is a better choice since it reduces communication the most. But a question poses itself: ``Is it better, in terms of communication, to double $t$ or to merge two iterations of MSDO-CG?''
The number of flops performed per iteration in the s-step and CA MSDO-CG for $s = 2^i$ and a given $t$, is comparable to that of the MSDO-CG algorithms where we have $2^i t$ partitions. This is due to the fact that in both versions, we are constructing and A-orthonormalizing $2^i t$ basis vectors per iteration, for $\overline{\mathscr{K}}_{k,t,2^i}$ (s-step and CA MSDO-CG versions) and $\overline{\mathscr{K}}_{k,2^it,1}$ (MSDO-CG). On the other hand, based on the observed results in sections \ref{sec:SRNum} and \ref{sec:precconv}, by doubling $t$ in any of the enlarged CG methods, the number of iterations needed for convergence is not halved, but on average it is $25\%$ less. Whereas, in s-step MSDO-CG and CA MSDO-CG, by doubling $s$ the number of iterations is halved (up to some value of $s$).
The number of processors in MSDO-CG, s-step MSDO-CG, and CA MSDO-CG could be equal to, a multiple of, or a divisor of the number of partitions $t$. In the first case, we assume that the number of processors is fixed and equal to $t$, even when the number of partitions is doubled. Let $k$ be the number of iterations needed for convergence of MSDO-CG, where $t$ basis vectors are computed per iteration. Then, the number of messages sent in parallel in MSDO-CG where we have $2^i t$ partitions, is $6(0.75)^i k log(t)$.
Whereas, for $s=2^i$, $[5 + (0.5)^i]klog(t)$ messages are sent in parallel in s-step MSDO-CG, and $6(0.5)^i k log(t)$ messages are sent in CA MSDO-CG. But, for $i \geq 1$, we have that
$$6(0.5)^i k log(t) < 6(0.75)^i k log(t) < [5 + (0.5)^i]klog(t). $$
Thus, in this case it is clear that doubling $s$ and using the CA version is better than doubling the number of partitions in MSDO-CG, which is better than using the s-step version.
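The displayed ordering can be verified numerically over a range of $i$ (a minimal check, not from the paper):

```python
# Check of the ordering for a fixed number of processors t: per k*log2(t)
# messages, CA MSDO-CG with s = 2^i sends 6(0.5)^i, MSDO-CG with 2^i t
# partitions sends 6(0.75)^i, and s-step MSDO-CG sends 5 + (0.5)^i.
for i in range(1, 11):
    ca = 6 * 0.5**i
    msdo = 6 * 0.75**i
    sstep = 5 + 0.5**i
    assert ca < msdo < sstep, i
print("ordering holds for i = 1..10")
```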
In the second case, we assume that the number of processors is equal to the number of partitions, so that doubling the partitions also doubles the processors. Then, the number of messages sent in parallel in MSDO-CG where we have $2^i t$ processors and partitions, is $6(0.75)^i k log(2^i t)$. However, in s-step MSDO-CG and CA MSDO-CG we assume that we have $t$ processors and $s=2^i$.
In this case, s-step MSDO-CG sends less messages than MSDO-CG, if and only if,
\begin{eqnarray}
6(0.75)^i(i+log(t)) &>& [5 + (0.5)^i]log(t) \nonumber\\
\iff 6i(0.75)^i &>& [5-6(0.75)^i + (0.5)^i] log(t). \label{2i}
\end{eqnarray}
The inequality \eqref{2i} is valid for $i=1$ and $t=2,4,8,16$. This means that for $s=2$ s-step MSDO-CG requires less communication with $t=2,4,8,16$ processors/partitions than MSDO-CG with $2t$ processors/partitions. Hence, assuming that communication is much more expensive than flops, it is better to merge 2 iterations of MSDO-CG and compute a basis for $\overline{\mathscr{K}}_{k,t,2}$, than double the $t$ value and compute a basis for $\overline{\mathscr{K}}_{k,2t,1}$. Moreover, \eqref{2i} is valid for $i=2$ ($s=4$) and $t=2,4,8$, and for $i=3,4$ ($s=8,16$) and $t=2,4$.
On the other hand, CA MSDO-CG sends less messages than MSDO-CG for all values of $i \geq 1$ and $t \geq 2$, since
\begin{eqnarray}
6(0.75)^i(i+log(t)) &>& 6(0.5)^i log(t) \nonumber\\
\iff i(0.75)^i &>& ((0.5)^i -(0.75)^i) log(t). \label{2i2}
\end{eqnarray}
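Both threshold inequalities \eqref{2i} and \eqref{2i2} are easy to check numerically (a minimal sketch; the ranges of $i$ and $t$ are chosen to match the cases discussed above):

```python
import math

# Inequality (2i): s-step MSDO-CG (t processors, s = 2^i) beats MSDO-CG with
# 2^i t processors/partitions iff 6i(0.75)^i > [5 - 6(0.75)^i + (0.5)^i] log2(t).
def sstep_wins(i, t):
    return 6 * i * 0.75**i > (5 - 6 * 0.75**i + 0.5**i) * math.log2(t)

# Inequality (2i2): CA MSDO-CG always wins, since (0.5)^i < (0.75)^i makes the
# right-hand side negative.
def ca_wins(i, t):
    return i * 0.75**i > (0.5**i - 0.75**i) * math.log2(t)

print([t for t in (2, 4, 8, 16, 32) if sstep_wins(1, t)])  # → [2, 4, 8, 16]
print(all(ca_wins(i, t) for i in range(1, 8) for t in (2, 4, 8, 16, 32)))  # → True
```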
\subsection{Preconditioned Methods} \label{sec:pm}
The only difference between the preconditioned and unpreconditioned algorithms is in the matrix - block of vectors multiplication where the preconditioner is applied. Thus, if the preconditioned matrix - block of vectors multiplication can be performed without communication, then the same number of messages will be sent per iteration.
In the s-step versions, the preconditioner is applied in two places, in $L^{-t}[{T}(\widehat{r}_{k-1})]$ and in $W_{i+1} = M^{-1}AW_i$, where $\widehat{r}_{k-1} = L^{-1}r_{k-1}$, $M = LL^t$, and $W_i$ is a dense $n\times t$ matrix. The parallelization of these ``multiplications'' depends on the type of the preconditioner and the sparsity pattern of $A$.
For example, let $L$ be a block diagonal lower triangular matrix with $t$ blocks, $L_i$, for $i=1,..,t$. Then, ${T}_i(\widehat{r}_{k-1}) = {T}_i(L^{-1}{r}_{k-1}) = L^{-1}{T}_i({r}_{k-1})$ is an all zero vector except the entries corresponding to domain $D_i$, which are obtained by solving $z_i = L_i^{-1}{r}_{k-1}(D_i)$ using forward substitution. Thus, processor $i$ needs the $i^{th}$ diagonal block $L_i$ and $T_i(r_{k-1})$ to compute ${T}_i(\widehat{r}_{k-1})$.
Similarly, computing $L^{-t}{T}_i(\widehat{r}_{k-1})$ is reduced to computing $L_i^{-t}z_i$ using backward substitution. Thus, processor $i$ computes the vector $L^{-t}[{T}_i(\widehat{r}_{k-1})]$ without any communication. As for $W_{j+1} = M^{-1}AW_j$, processor $i$ computes the row-wise part of $W_{j+1}$ corresponding to domain $D_i$, using the row-wise part of $A$, $L_i$, $L_i^t$, and the row-wise part of $W_j$ corresponding to $\delta_i = R(G(A),D_i,1)$. Thus, processor $i$ computes $Z_i = A(D_i,\delta_i)W_j(\delta_i,:)$ without any communication, and solves for $W_{j+1}(D_i,:) = (L_iL_i^t)^{-1}Z_i$ using a backward and forward substitution.
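The key point, that a block-diagonal $M = LL^t$ can be applied without communication, can be illustrated with a small NumPy sketch (the sizes and matrices are arbitrary toy choices): the per-block forward/backward solves agree with the global solve.

```python
import numpy as np

# Applying a block-diagonal preconditioner M = L L^t with t diagonal blocks
# L_i: each "processor" i only needs its own block L_i to solve M z = r on
# its rows, so no communication is required.
rng = np.random.default_rng(0)
t, nb = 4, 5                                  # t blocks of size nb (n = t*nb)
blocks = []
for _ in range(t):
    G = rng.standard_normal((nb, nb))
    A_i = G @ G.T + nb * np.eye(nb)           # SPD block
    blocks.append(np.linalg.cholesky(A_i))    # L_i, lower triangular
L = np.zeros((t * nb, t * nb))
for i, L_i in enumerate(blocks):
    L[i*nb:(i+1)*nb, i*nb:(i+1)*nb] = L_i

r = rng.standard_normal(t * nb)
z_global = np.linalg.solve(L @ L.T, r)        # M^{-1} r, global solve
z_local = np.concatenate([                    # per-block solves, no communication
    np.linalg.solve(L_i @ L_i.T, r[i*nb:(i+1)*nb])
    for i, L_i in enumerate(blocks)])
print(np.allclose(z_global, z_local))  # → True
```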
In preconditioned CA MSDO-CG, at the $k^{th}$ iteration the $st$ vectors
$$L^{-t}T(\hat{r}_{k-1}), M^{-1}AL^{-t}T(\hat{r}_{k-1}), (M^{-1}A)^2L^{-t}T(\hat{r}_{k-1}),..., (M^{-1}A)^{s-1}L^{-t}T(\hat{r}_{k-1})$$
are computed.
Assuming that $L$ is a block diagonal lower triangular matrix, then each of the $t$ processors can compute the $s$ basis vectors
$$L^{-t}T_i(\hat{r}_{k-1}), M^{-1}AL^{-t}T_i(\hat{r}_{k-1}), (M^{-1}A)^2L^{-t}T_i(\hat{r}_{k-1}),..., (M^{-1}A)^{s-1}L^{-t}T_i(\hat{r}_{k-1})$$
independently from other processors, but using a relatively big column-wise part of $A$ and $M$, depending on $s$ and the sparsity patterns of $A$ and $L$.
Another alternative is that each of the $t$ processors computes the row-wise part of the $st$ vectors corresponding to $D_i$ without communication using a preconditioned block version of the matrix powers kernel. However, the same ``relatively big'' column-wise part of $A$ and $M$ is needed.
To reduce the memory storage needed per processor, one option is to overlap communication with computation in the preconditioner's application. Let $W_1 = L^{-t}T(\hat{r}_{k-1})$, and $W_{j+1} = M^{-1}A W_{j}$ for $j\geq 1$. Each processor $i$ can compute $W_1(D_i,:)$ independently, since $W_1(D_i,:)$ is all zeros except the $i^{th}$ column, which is obtained by solving $L_i^{-t}T_i(\hat{r}_{k-1})$. To compute $W_{j+1}(D_i,:) = L_i^{-t}L_i^{-1} Z_i$, where $Z_i = A(D_i,\delta_i)W_j(\delta_i,:)$, processor $i$ needs part of $W_j(\delta_i,:)$ from neighboring processors. This local communication occurs once $W_j$ is computed, and it is overlapped with the computation of $A(D_i,D_i)W_j(D_i,:)$. Then the remaining part of the multiplication is performed once the messages are received from the neighboring processors. In this case, processor $i$ only needs $A(D_i,\delta_i)$ and $L_i$. There are $s-1$ local communication phases, one before each of the last $s-1$ preconditioned matrix multiplications. Even though these local communications are overlapped with computations, they might require some additional time. However, the gain from replacing $s$ A-orthonormalization procedures by just one outweighs this ``possible'' additional communication, since the A-orthonormalization requires global communication.
In preconditioned CA SRE-CG and CA SRE-CG2, at the first iteration $$L^{-t}T(\hat{r}_{0}), M^{-1}AL^{-t}T(\hat{r}_{0}), (M^{-1}A)^2L^{-t}T(\hat{r}_{0}),..., (M^{-1}A)^{s-1}L^{-t}T(\hat{r}_{0})$$ are computed, but at the $k^{th}$ iteration the $st$ vectors $$M^{-1}AW, (M^{-1}A)^2W,..., (M^{-1}A)^{s}W$$ are computed, where $W = W_{(k-1)s}$ is a dense $n \times t$ matrix. The communication pattern and parallelization of the preconditioned matrix multiplication is the same as that of CA MSDO-CG, with the exception that for $k>1$ an additional local communication is required for $M^{-1}AW$. Moreover, the communication reduction in the preconditioned CA SRE-CG2 is comparable to that of CA MSDO-CG, since once preconditioned, SRE-CG2 with Algorithm \ref{alg:CA-Arnoldi} converges and scales even for $s=8$, as discussed in section \ref{sec:precconv}.
\subsection{Expected Performance}\label{expperf}
By merging $s$ iterations of the enlarged CG versions, communication is reduced in the corresponding s-step and CA versions as discussed in sections \ref{sec:unpm} and \ref{sec:pm}. Moreover, the enlarged CG versions converge faster in terms of iterations than CG by enlarging the Krylov Subspace. However, are these reductions enough to obtain a method that converges faster than CG in terms of runtime, using comparable resources?
Conjugate Gradient is known for its short recurrence formulae and its limited memory requirements.
In preconditioned CG (Algorithm \ref{alg:pCG}), if processor $i$ computes part of the vectors $p_k(D_i), w(D_i), x_k(D_i), r_k(D_i), \widehat{r}_k(D_i)$, then it needs $A(D_i,:)$, $L_i$ and $b(D_i)$, assuming that $M = LL^t$ is a block diagonal matrix. Moreover, two global communications are needed per iteration to perform the dot products $p^tw, {\rho}_k, \widehat{\rho}_k$, and local communication with neighboring processors is needed to compute $w(D_i) = A(D_i,\delta_i)p(\delta_i)$, where $\delta_i = R(G(A),D_i,1)$. Given that there is a total of $m$ processors, $2log(m)$ messages are sent per CG iteration without considering local communication.
\begin{algorithm}[h!]
\centering
\caption{Preconditioned CG}
{\renewcommand{\arraystretch}{1.3}
\begin{algorithmic}[1]
\Statex{\textbf{Input:} $A$, $M = LL^t$, $b$, $x_0$, $\epsilon$, $k_{max}$ }
\Statex{\textbf{Output:} $x_k$, the approximate solution of the system $L^{-1}AL^{-t}(L^{t}x)=L^{-1}b$}
\State$r_0 = b - Ax_0$, $\rho_0 = ||r_0||^2_2$, $\widehat{r}_0 = M^{-1}r_{0}$, $\widehat{\rho}_0 = r_0^t \widehat{r}_0$, $k = 1$;
\While {( $\sqrt{\rho_{k-1}} > \epsilon \sqrt{\rho_{0}}$ and $k \leq k_{max}$ )}
\If {($k==1$)} $p = \widehat{r}_0$
\Else $\;\;\; \beta = \frac{\widehat{\rho}_{k-1}}{\widehat{\rho}_{k-2}}$, $p = \widehat{r}_{k-1} + \beta p$
\EndIf
\State $w = Ap$, \;\; $\alpha = \dfrac{\widehat{\rho}_{k-1}}{p^tw}$
\State $x_k = x_{k-1} + \alpha p $, \;\; $r_k = r_{k-1} - \alpha w $, \;\; $\widehat{r}_k = M^{-1}r_{k}$
\State $\rho_k = ||r_{k}||^2_2$, \;\; $\widehat{\rho}_k = r_k^t \widehat{r}_k$, \;\; $k = k+1$
\EndWhile
\end{algorithmic}}
\label{alg:pCG}
\end{algorithm}
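For reference, a sequential NumPy transcription of Algorithm \ref{alg:pCG} might look as follows (a sketch, not the paper's implementation; the Jacobi-style preconditioner and matrix sizes are illustrative assumptions):

```python
import numpy as np

# Sequential sketch of preconditioned CG; M^{-1} is applied through the
# Cholesky factor L via a forward and a backward substitution.
def pcg(A, L, b, x0, eps=1e-10, k_max=500):
    x = x0.copy()
    r = b - A @ x
    rho = r @ r
    rho0 = rho
    M_solve = lambda v: np.linalg.solve(L.T, np.linalg.solve(L, v))
    r_hat = M_solve(r)
    rho_hat_prev = r @ r_hat
    p = r_hat.copy()
    k = 1
    while np.sqrt(rho) > eps * np.sqrt(rho0) and k <= k_max:
        w = A @ p
        alpha = rho_hat_prev / (p @ w)
        x += alpha * p
        r -= alpha * w
        r_hat = M_solve(r)
        rho = r @ r
        rho_hat = r @ r_hat
        p = r_hat + (rho_hat / rho_hat_prev) * p   # beta = rho_hat_k / rho_hat_{k-1}
        rho_hat_prev = rho_hat
        k += 1
    return x, k - 1

rng = np.random.default_rng(1)
n = 30
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                    # SPD test matrix
L = np.linalg.cholesky(np.diag(np.diag(A)))    # Jacobi-style preconditioner
b = rng.standard_normal(n)
x, iters = pcg(A, L, b, np.zeros(n))
print(np.allclose(A @ x, b, atol=1e-6))        # → True
```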
Similarly to CG, the SRE-CG, SRE-CG2, and MSDO-CG versions have short recurrence formulae.
However, in terms of memory, the SRE-CG versions are the best choice since a limited number of vectors, depending only on $t$ and $s$, needs to be stored. In SRE-CG, s-step SRE-CG, and CA SRE-CG, $(3t)$ vectors, $(s+2)t$ vectors, and $(2s+1)t$ vectors are stored, respectively. Whereas, in the SRE-CG2 and MSDO-CG versions, $stk$ vectors need to be stored, where $k$ is the number of iterations needed for convergence, which is not known a priori.
Given that $s = 2^i$, the number of partitions is $t = 2^j$, and a total of $m$ processors run the algorithms, where $m$ is a multiple of, a divisor of, or equal to $t = 2^j$, for $j,i \geq 1$; then, $2klog(m)$ messages are sent in CG with a total of $k$ iterations till convergence. As for SRE-CG, a total of $6(0.75)^{j}klog(m)$ messages are sent, assuming that as $t$ is doubled the number of iterations is reduced by $25\%$ on average. Whereas in s-step and CA SRE-CG, a total of $[5+(0.5)^i](0.75)^{j}klog(m)$ and $11(0.5)^i(0.75)^{j}klog(m)$ messages are sent respectively.
As compared to CG, SRE-CG, s-step SRE-CG, and CA SRE-CG communicate less in total when
\begin{eqnarray}
2klog(m) &>& 6(0.75)^{j}klog(m) \label{sre}\\
2klog(m) &>& [5+(0.5)^i](0.75)^{j}klog(m)\label{sstepsre}\\
2klog(m) &>& 11(0.5)^i(0.75)^{j}klog(m) \label{casre}
\end{eqnarray}
respectively. For $j\geq 4$, inequality \eqref{sre} is satisfied, i.e. SRE-CG reduces communication with respect to CG for number of partitions $t \geq 16$. Similarly, s-step SRE-CG further reduces communication with respect to CG for $s=2,4,8$ and $t \geq 16$. Whereas CA SRE-CG reduces communication for $s=2$ and $j\geq 4$ ($t \geq 16$), $s = 4$ and $j\geq 2$ ( $t \geq 4$), and $s = 8$ and $j\geq 1$ ( $t \geq 2$).
Hence, for $s = 2^i$ and $t=2^j$ in SRE-CG, s-step SRE-CG, and CA SRE-CG, the reduction in communication with respect to CG is, respectively (evaluated at $j = 5$ and $i = 2$):
\begin{eqnarray}
\left(1-3(0.75)^j\right)100\% &\qquad = \qquad& (28.8\%), \nonumber \\
\left(1-(2.5+(0.5)^{i+1})(0.75)^j\right)100\% &\qquad = \qquad& (37.7\%),\nonumber \\
\left(1-5.5(0.5)^i(0.75)^{j}\right)100\% & \qquad = \qquad & (67.37 \%). \nonumber
\end{eqnarray}
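The thresholds and the quoted percentages can be reproduced in a few lines (a minimal check of the expressions above, not from the paper):

```python
# Totals per k*log2(m) messages, with t = 2^j partitions and s = 2^i:
# CG sends 2, SRE-CG sends 6(0.75)^j, s-step SRE-CG sends [5+(0.5)^i](0.75)^j,
# and CA SRE-CG sends 11(0.5)^i(0.75)^j.
sre = lambda j: 6 * 0.75**j
sstep = lambda i, j: (5 + 0.5**i) * 0.75**j
ca = lambda i, j: 11 * 0.5**i * 0.75**j

assert sre(4) < 2 < sre(3)                        # SRE-CG needs t >= 16
assert all(sstep(i, 4) < 2 for i in (1, 2, 3))    # s = 2, 4, 8 with t >= 16
assert ca(1, 4) < 2 < ca(1, 3)                    # s = 2 needs t >= 16
assert ca(2, 2) < 2 < ca(2, 1)                    # s = 4 needs t >= 4
assert ca(3, 1) < 2                               # s = 8 works for any t >= 2

red = lambda x: round(100 * (1 - x / 2), 2)       # reduction vs. CG, in percent
print(red(sre(5)), red(sstep(2, 5)), red(ca(2, 5)))  # → 28.81 37.71 67.37
```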
Thus, it is expected that SRE-CG, s-step SRE-CG, and CA SRE-CG will converge faster than CG in parallel considering the $s$ and $t$ values discussed above and that communication is much more expensive than flops. Moreover, even the s-step and CA versions of the SRE-CG2 and MSDO-CG are expected to converge faster than CG, but require much more memory storage per processor.
\section{Conclusion} \label{sec:conc}
In this paper, we introduced the s-step and communication avoiding versions of SRE-CG, and SRE-CG2, which are based on the enlarged Krylov subspace. We have also introduced a modified MSDO-CG version that is equivalent to MSDO-CG theoretically and numerically, but based on a modified enlarged Krylov subspace which allows the s-step and CA formulations. The split preconditioned s-step and CA versions are also presented in section \ref{sec:precCG}.
The s-step and communication avoiding versions merge $s$ iterations of the enlarged CG methods into one iteration where denser operations are performed for less communication. Numerical stability of the s-step and CA versions is tested in section \ref{sec:SRNum}, where as $s$ is doubled, the number of iterations needed for convergence in the s-step methods is roughly divided by two, even for $s\geq 10$. As for the CA methods, once the system is preconditioned, a similar scaling behavior is observed in section \ref{sec:precconv}. Accordingly, it is shown in sections \ref{sec:unpm} and \ref{sec:pm} that the s-step and CA versions reduce communication with respect to the corresponding enlarged methods for $s \geq 2$.
Although the number of messages per iteration of the enlarged CG methods and their s-step and CA versions is more than that of CG, due to the reduction in the number of iterations in the enlarged versions the total number of messages sent is less, as discussed in section \ref{expperf}. This implies that all the enlarged CG variants should require less time to converge than CG. However, the SRE-CG variants are the most feasible candidates due to their limited memory storage requirements.
Future work will focus on implementing, testing, and comparing the runtime of the introduced enlarged CG versions on CPU's and GPU's with respect to existing similar methods.
\section{Acknowledgements}
We thank Luca Carlone and Konstantinos Gatsis for inspiring discussions, and Luca Carlone for sharing code that enabled the numerical experiments in this paper.
\bibliographystyle{IEEEtran}
\section{Notation}\label{app:notation}
In the appendices below we use the following notation: given a finite ground set $\calV$, and a set function $f:2^\mathcal{V}\mapsto \mathbb{R}$, then, for any sets $\mathcal{X}\subseteq \mathcal{V}$ and $\mathcal{X}'\subseteq \mathcal{V}$:
\begin{equation*}\label{notation:marginal}
f(\mathcal{X}|\mathcal{X}')\triangleq f(\mathcal{X}\cup\mathcal{X}')-f(\mathcal{X}').
\end{equation*}
Moreover, let the set $\mathcal{A}^\star$ denote an (optimal) solution to Problem~\ref{pr:robust_sub_max}; formally:
\begin{equation*}
\mathcal{A}^\star\in \arg\underset{\mathcal{A}\subseteq \calV, \calA\in \calI}{\max} \; \; \underset{\;\mathcal{B}\subseteq \calA, \calB\in \calI'(\calA)}{\min} \; \;\; f(\mathcal{A}\setminus \mathcal{B}).
\end{equation*}
\section{Preliminary lemmas}\label{app:prelim}
We list lemmas that support the proof of Theorem~\ref{th:alg_rob_sub_max_performance}.\footnote{The proof of Lemmas~\ref{lem:D3}-\ref{lem:subratio} and of Corollary~\ref{cor:ineq_from_lemmata} is also found in~\cite{tzoumas2017resilient} and~\cite{tzoumas2018resilientSequential}.}
\begin{mylemma}\label{lem:D3}
Consider any finite ground set $\mathcal{V}$, a non-decreasing submodular function $f:2^\mathcal{V}\mapsto \mathbb{R}$, and non-empty sets $\calY, \calP \subseteq \calV$ such that for all elements $y \in \calY$, and all elements $p \in \calP$, it is $f(y)\geq f(p)$. Then:
\belowdisplayskip=-12pt\begin{equation*}
f(\calP|\calY)\leq |\calP|f(\calY).
\end{equation*}
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:D3}} Consider any element $y \in \calY$; then:
\begin{align}
f(\calP|\calY)&= f(\calP\cup\calY)-f(\calY)\label{aux1:1}\\
&\leq f(\calP)+f(\calY)-f(\calY)\label{aux1:2}\\
&= f(\calP)\nonumber\\
&\leq \sum_{p\in\calP}f(p)\label{aux1:4}\\
&\leq |\calP| \max_{p\in\calP} f(p)\nonumber\\
&\leq |\calP| f(y)\label{aux1:7}\\
&\leq |\calP| f(\calY),\label{aux1:5}
\end{align}
where eqs.~\eqref{aux1:1}-\eqref{aux1:5} hold for the following reasons: eq.~\eqref{aux1:1} holds since for any sets $\mathcal{X}\subseteq \mathcal{V}$ and $\mathcal{Y}\subseteq \mathcal{V}$, it is $f(\mathcal{X}|\mathcal{Y})=f(\mathcal{X}\cup \mathcal{Y})-f(\mathcal{Y})$; ineq.~\eqref{aux1:2} holds since $f$ is submodular and, as a result, the submodularity Definition~\ref{def:sub} implies that for any set $\calA\subseteq\calV$ and $\calA'\subseteq\calV$, it is $f(\calA\cup \calA')\leq f(\calA)+f(\calA')$~\cite[Proposition 2.1]{nemhauser78analysis}; ineq.~\eqref{aux1:4} holds for the same reason as ineq.~\eqref{aux1:2}; ineq.~\eqref{aux1:7} holds since for all elements $y \in \calY$, and for all elements $p \in \calP$, it is $f(y)\geq f(p)$; finally, ineq.~\eqref{aux1:5} holds since $f$ is monotone, and since $y\in\calY$.
\hfill $\blacksquare$
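Lemma \ref{lem:D3} can also be exercised by brute force on a small coverage function, which is non-decreasing and submodular (a toy instance, not from the paper):

```python
from itertools import chain, combinations

# Toy coverage function (non-decreasing and submodular): f(S) is the size of
# the union of the sets indexed by S.
cover = {1: {0, 1}, 2: {1, 2}, 3: {2, 3}, 4: {3}}
V = set(cover)
f = lambda S: len(set().union(*(cover[v] for v in S))) if S else 0

def subsets(X):
    X = list(X)
    return chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))

ok = True
for Y in subsets(V):
    for P in subsets(V - set(Y)):
        if not Y or not P:
            continue
        if min(f({y}) for y in Y) >= max(f({p}) for p in P):  # lemma's premise
            ok &= f(set(P) | set(Y)) - f(set(Y)) <= len(P) * f(set(Y))
print(ok)  # → True
```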
\begin{mylemma}\label{lem:non_total_curvature}
Consider a finite ground set $\mathcal{V}$, and a non-decreasing submodular set function $f:2^\mathcal{V}\mapsto \mathbb{R}$ such that $f$ is non-negative and $f(\emptyset)=0$. Then, for any $\mathcal{A}\subseteq \mathcal{V}$, it~holds:
\begin{equation*}
f(\mathcal{A})\geq (1-\kappa_f)\sum_{a \in \mathcal{A}}f(a).
\end{equation*}
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:non_total_curvature}} Let $\mathcal{A}=\{a_1,a_2,\ldots, a_{|{\cal A}|}\}$. We prove Lemma~\ref{lem:non_total_curvature} by proving the following two inequalities:
\begin{align}
f(\mathcal{A})&\geq \sum_{i=1}^{|{\cal A}|} f(a_i|\mathcal{V}\setminus\{a_i\}),\label{ineq5:aux_5}\\
\sum_{i=1}^{|{\cal A}|} f(a_i|\mathcal{V}\setminus\{a_i\})&\geq (1-\kappa_f)\sum_{i=1}^{|{\cal A}|} f(a_i)\label{ineq5:aux_6}.
\end{align}
We begin with the proof of ineq.~\eqref{ineq5:aux_5}:
\begin{align}
f(\mathcal{A})&=f(\mathcal{A}|\emptyset)\label{ineq5:aux_9}\\
&\geq f(\mathcal{A}|\mathcal{V}\setminus \mathcal{A})\label{ineq5:aux_10}\\
&= \sum_{i=1}^{|{\cal A}|}f(a_i|\mathcal{V}\setminus\{a_i,a_{i+1},\ldots,a_{|{\cal A}|}\})\label{ineq5:aux_11}\\
&\geq \sum_{i=1}^{|{\cal A}|}f(a_i|\mathcal{V}\setminus\{a_i\}),\label{ineq5:aux_12}
\end{align}
where ineqs.~\eqref{ineq5:aux_10}-\eqref{ineq5:aux_12} hold for the following reasons: ineq.~\eqref{ineq5:aux_10} is implied by eq.~\eqref{ineq5:aux_9} because $f$ is submodular and $\emptyset\subseteq \mathcal{V}\setminus \mathcal{A}$; eq.~\eqref{ineq5:aux_11} holds since for any sets $\mathcal{X}\subseteq \mathcal{V}$ and $\mathcal{Y}\subseteq \mathcal{V}$ it is $f(\mathcal{X}|\mathcal{Y})=f(\mathcal{X}\cup \mathcal{Y})-f(\mathcal{Y})$, and since $\{a_1,a_2,\ldots, a_{|{\cal A}|}\}$ denotes the set $\mathcal{A}$; and ineq.~\eqref{ineq5:aux_12} holds since $f$ is submodular, and since $\mathcal{V}\setminus\{a_i,a_{i+1},\ldots,a_{|{\cal A}|}\} \subseteq \mathcal{V}\setminus\{a_i\}$. These observations complete the proof of ineq.~\eqref{ineq5:aux_5}.
We now prove ineq.~\eqref{ineq5:aux_6} using the Definition~\ref{def:curvature} of $\kappa_f$, as follows: since $\kappa_f=1-\min_{v\in \mathcal{V}}\frac{f(v|\mathcal{V}\setminus\{v\})}{f(v)}$, it is implied that for all elements $v\in \mathcal{V}$ it is $ f(v|\mathcal{V}\setminus\{v\})\geq (1-\kappa_f)f(v)$. Therefore, by adding the latter inequality across all elements $a \in \calA$, we complete the proof of ineq.~\eqref{ineq5:aux_6}.
\hfill $\blacksquare$
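As with Lemma \ref{lem:D3}, Lemma \ref{lem:non_total_curvature} can be checked exhaustively on a toy coverage function, with $\kappa_f$ computed directly from Definition \ref{def:curvature} (an illustrative script, not part of the proof):

```python
from itertools import chain, combinations

# Toy coverage function; each set has a private element so that kappa_f < 1.
cover = {1: {0, 1}, 2: {1, 2, 5}, 3: {2, 3, 6}, 4: {3, 4}}
V = set(cover)
f = lambda S: len(set().union(*(cover[v] for v in S))) if S else 0

# kappa_f = 1 - min_v f(v | V \ {v}) / f(v), straight from the definition.
kappa = 1 - min((f(V) - f(V - {v})) / f({v}) for v in V)

nonempty_subsets = chain.from_iterable(
    combinations(sorted(V), r) for r in range(1, len(V) + 1))
assert all(f(set(S)) >= (1 - kappa) * sum(f({v}) for v in S) - 1e-12
           for S in nonempty_subsets)
print("holds; kappa_f =", round(kappa, 4))  # → holds; kappa_f = 0.6667
```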
\begin{mylemma}\label{lem:curvature2}
Consider a finite ground set $\mathcal{V}$, and a monotone set function $f:2^\mathcal{V}\mapsto \mathbb{R}$ such that $f$ is non-negative and $f(\emptyset)=0$. Then, for any sets $\mathcal{A}\subseteq \mathcal{V}$ and $\mathcal{B}\subseteq \mathcal{V}$ such that $\calA \cap \calB=\emptyset$, it holds:
\begin{equation*}
f(\mathcal{A}\cup \mathcal{B})\geq (1-c_f)\left(f(\mathcal{A})+f(\mathcal{B})\right).
\end{equation*}
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:curvature2}}
Let $\mathcal{B}=\{b_1, b_2, \ldots, b_{|\mathcal{B}|}\}$. Then,
\begin{equation}
f(\mathcal{A}\cup \mathcal{B})=f(\mathcal{A})+\sum_{i=1}^{|\mathcal{B}|}f(b_i|\mathcal{A}\cup \{b_1, b_2, \ldots, b_{i-1}\}). \label{eq1:lemma_curvature2}
\end{equation}
The definition of total curvature in Definition~\ref{def:total_curvature} implies:
\begin{align}
&\!\!\!f(b_i|\mathcal{A}\cup \{b_1, b_2, \ldots, b_{i-1}\})\geq\nonumber\\
& (1-c_f)f(b_i|\{b_1, b_2, \ldots, b_{i-1}\}). \label{eq2:lemma_curvature2}
\end{align}
The proof is completed by substituting ineq.~\eqref{eq2:lemma_curvature2} in eq.~\eqref{eq1:lemma_curvature2} and then by taking into account that it holds $f(\mathcal{A})\geq (1-c_f)f(\mathcal{A})$, since $0\leq c_f\leq 1$.
\hfill $\blacksquare$
\begin{mylemma}\label{lem:curvature}
Consider a finite ground set $\mathcal{V}$, and a non-decreasing set function $f:2^\mathcal{V}\mapsto \mathbb{R}$ such that $f$ is non-negative and $f(\emptyset)=0$. Then, for any set $\mathcal{A}\subseteq \mathcal{V}$ and any set $\mathcal{B}\subseteq \mathcal{V}$ such that $\calA \cap \calB=\emptyset$, it holds:
\begin{equation*}
f(\mathcal{A}\cup \mathcal{B})\geq (1-c_f)\left(f(\mathcal{A})+\sum_{b \in \mathcal{B}}f(b)\right).
\end{equation*}
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:curvature}}
Let $\mathcal{B}=\{b_1, b_2, \ldots, b_{|\mathcal{B}|}\}$. Then,
\begin{equation}
f(\mathcal{A}\cup \mathcal{B})=f(\mathcal{A})+\sum_{i=1}^{|\mathcal{B}|}f(b_i|\mathcal{A}\cup \{b_1, b_2, \ldots, b_{i-1}\}). \label{eq1:lemma_curvature}
\end{equation}
In addition, Definition~\ref{def:total_curvature} of total curvature implies:
\begin{align}
f(b_i|\mathcal{A}\cup \{b_1, b_2, \ldots, b_{i-1}\})&\geq (1-c_f)f(b_i|\emptyset)\nonumber\\
&=(1-c_f)f(b_i), \label{eq2:lemma_curvature}
\end{align}
where the latter equation holds since $f(\emptyset)=0$.
The proof is completed by substituting~\eqref{eq2:lemma_curvature} in~\eqref{eq1:lemma_curvature} and then taking into account that $f(\mathcal{A})\geq (1-c_f)f(\mathcal{A})$ since $0\leq c_f\leq 1$. \hfill $\blacksquare$
\begin{mylemma}\label{lem:subratio}
Consider a finite ground set $\mathcal{V}$ and a non-decreasing set function $f:2^\mathcal{V}\mapsto \mathbb{R}$ such that $f$ is non-negative and $f(\emptyset)=0$. Then, for any set $\mathcal{A}\subseteq \mathcal{V}$ and any set $\mathcal{B}\subseteq \mathcal{V}$ such that $\mathcal{A}\setminus\mathcal{B}\neq \emptyset$, it holds:
\begin{equation*}
f(\mathcal{A})+(1-c_f) f(\mathcal{B})\geq (1-c_f) f(\mathcal{A}\cup \mathcal{B})+f(\mathcal{A}\cap \mathcal{B}).
\end{equation*}
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:subratio}}
Let $\mathcal{A}\setminus\mathcal{B}=\{i_1,i_2,\ldots, i_r\}$, where $r=|\mathcal{A}\setminus\mathcal{B}|$. From Definition~\ref{def:total_curvature} of total curvature $c_f$, for any $j=1,2, \ldots, r$, it is $f(i_j|(\mathcal{A} \cap \mathcal{B}) \cup \{i_1, i_2, \ldots, i_{j-1}\})\geq (1-c_f) f(i_j|\mathcal{B} \cup \{i_1, i_2, \ldots, i_{j-1}\})$. Summing these $r$ inequalities:
$$f(\mathcal{A})-f(\mathcal{A}\cap \mathcal{B})\geq (1-c_f) \left(f(\mathcal{A}\cup \mathcal{B})-f(\mathcal{B})\right),$$
which implies the lemma. \hfill $\blacksquare$
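Lemma \ref{lem:subratio} can likewise be validated by brute force on a toy coverage function; since the function below is submodular, its total curvature coincides with $\kappa_f$ and can be computed from the extreme marginals (an assumption we rely on here; the instance itself is illustrative, not from the paper):

```python
from itertools import chain, combinations

# Toy coverage function; for a submodular function the total curvature c_f
# reduces to kappa_f = 1 - min_v f(v | V \ {v}) / f(v).
cover = {1: {0, 1}, 2: {1, 2, 5}, 3: {2, 3, 6}, 4: {3, 4}}
V = set(cover)
f = lambda S: len(set().union(*(cover[v] for v in S))) if S else 0
c = 1 - min((f(V) - f(V - {v})) / f({v}) for v in V)

def subsets(X):
    X = list(X)
    return chain.from_iterable(combinations(X, r) for r in range(len(X) + 1))

ok = all(f(set(A)) + (1 - c) * f(set(B))
         >= (1 - c) * f(set(A) | set(B)) + f(set(A) & set(B)) - 1e-12
         for A in subsets(V) for B in subsets(V) if set(A) - set(B))
print(ok)  # → True
```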
\begin{mycorollary}\label{cor:ineq_from_lemmata}
Consider a finite ground set $\mathcal{V}$, and a non-decreasing set function $f:2^\mathcal{V}\mapsto \mathbb{R}$ such that $f$ is non-negative and $f(\emptyset)=0$. Then, for any set $\mathcal{A}\subseteq \mathcal{V}$ and any set $\mathcal{B}\subseteq \mathcal{V}$ such that $\mathcal{A}\cap\mathcal{B}=\emptyset$, it holds:
\begin{equation*}
f(\mathcal{A})+\sum_{b \in \mathcal{B}}f(b) \geq (1-c_f) f(\mathcal{A}\cup \mathcal{B}).
\end{equation*}
\end{mycorollary}
\paragraph*{Proof of Corollary~\ref{cor:ineq_from_lemmata}}
Let $\mathcal{B}=\{b_1,b_2,\ldots,b_{|\mathcal{B}|}\}$.
\begin{align}
f(\mathcal{A})+\sum_{i=1}^{|\mathcal{B}|}f(b_i) &\geq (1-c_f) f(\mathcal{A})+\sum_{i=1}^{|\mathcal{B}|}f(b_i)\label{ineq:cor_aux1} \\
& \geq (1-c_f) f(\mathcal{A}\cup \{b_1\})+\sum_{i=2}^{|\mathcal{B}|}f(b_i)\nonumber\\
& \geq (1-c_f) f(\mathcal{A}\cup \{b_1,b_2\})+\sum_{i=3}^{|\mathcal{B}|}f(b_i)\nonumber\\
& \;\;\vdots \nonumber\\
& \geq (1-c_f) f(\mathcal{A}\cup \mathcal{B}),\nonumber
\end{align}
where~\eqref{ineq:cor_aux1} holds since $0\leq c_f\leq 1$, and the remaining inequalities hold due to Lemma~\ref{lem:subratio}, applied each time with the singleton $\{b_i\}$ in the role of $\mathcal{A}$; the lemma's condition is satisfied since $\mathcal{A}\cap\mathcal{B}=\emptyset$ implies $\{b_1\}\setminus \mathcal{A}\neq \emptyset$, $\{b_2\}\setminus (\mathcal{A}\cup \{b_1\})\neq \emptyset$, $\ldots$, $\{b_{|\mathcal{B}|}\}\setminus (\mathcal{A}\cup \{b_1,b_2,\ldots, b_{|\mathcal{B}|-1}\})\neq \emptyset$.
\hfill $\blacksquare$
\begin{mylemma}\label{lem:dominance}
Recall the notation in Algorithm~\ref{alg:rob_sub_max}, and consider the sets $\calA_1$ and $\calA_2$ constructed by Algorithm~\ref{alg:rob_sub_max}'s lines~\ref{line:begin_while_1}-\ref{line:end_while_1} and lines~\ref{line:begin_while_2}-\ref{line:end_while_2}, respectively. Then, for all elements $v\in \calA_1$ and all elements $v'\in \calA_2$, it holds $f(v)\geq f(v')$.
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:dominance}}
Let $v_1,\ldots,v_{|\calA_1|}$ be the elements in~$\calA_{1}$, ---i.e., $\calA_1\equiv \{v_1,\ldots,v_{|\calA_1|}\}$,--- and be such that for each $i=1,\ldots,|\calA_1|$ the element~$v_i$ is the $i$-th element added in $\calA_1$ per Algorithm~\ref{alg:rob_sub_max}'s lines~\ref{line:begin_while_1}-\ref{line:end_while_1}; similarly, let $v_1',\ldots,v_{|\calA_2|}'$ be the elements in $\calA_{2}$, ---i.e., $\calA_2\equiv \{v_1',\ldots,v_{|\calA_2|}'\}$,--- and be such that for each $i=1,\ldots,|\calA_2|$ the element $v_i'$ is the $i$-th element added in $\calA_2$ per Algorithm~\ref{alg:rob_sub_max}'s lines~\ref{line:begin_while_2}-\ref{line:end_while_2}.
We prove Lemma~\ref{lem:dominance} by the method of contradiction; specifically, we focus on the case where $(\calV,\calI')$ is a uniform matroid; the case where $(\calV,\calI')$ is a partition matroid with the same partition as $(\calV,\calI)$ follows the same steps by focusing on each partition separately. In~particular, assume that there exist an index $i\in\{1,\ldots,|\calA_1|\}$ and an index $j\in\{1,\ldots,|\calA_2|\}$ such that $f(v'_j)>f(v_i)$, and, in particular, assume that $i,j$ are the smallest such indexes. Since Algorithm~\ref{alg:rob_sub_max} constructs $\calA_1$ and~$\calA_2$ such that $\calA_1\cup\calA_2\in \calI$, and since it also is that $(\calV,\calI)$ is a matroid and $\{v_1,\ldots,v_{i-1}, v'_{j}\}\subseteq \calA_1\cup\calA_2$, we have that $\{v_1,\ldots,v_{i-1}, v'_{j}\}\in \calI$. In addition, we have that $\{v_1,\ldots,v_{i-1}, v'_{j}\}\in \calI'$\!\!, since $\calI'$ is either a uniform or a partition matroid and, as a result, if $\{v_1,\ldots,v_{i-1}, v_{i}\}\in \calI'$ then it also is $\{v_1,\ldots,v_{i-1}, v\}\in \calI'$ for any $v\in\calV\setminus\{v_1,\ldots,v_{i-1}\}$. Overall, $\{v_1,\ldots,v_{i-1}, v'_{j}\}\in \calI,\calI'$\!\!. Now, consider the ``while loop'' in Algorithm~\ref{alg:rob_sub_max}'s lines~\ref{line:begin_while_1}-\ref{line:end_while_1} at the beginning of its $i$-th iteration, that is, when Algorithm~\ref{alg:rob_sub_max} has chosen only the elements $\{v_1,\ldots,v_{i-1}\}$ among the elements in $\calA_1$. Then, per Algorithm~\ref{alg:rob_sub_max}'s lines~\ref{line:select_element_bait}-\ref{line:build_of_bait}, the next element $v$ that is added in $\{v_1,\ldots,v_{i-1}\}$ is the one that achieves the highest value of $f(v')$ among all elements $v'\in\calV\setminus\{v_1,\ldots,v_{i-1}\}$ that satisfy $\{v_1,\ldots,v_{i-1}, v'\}\in \calI,\calI'$\!\!. Therefore, the next element $v$ that is added in $\{v_1,\ldots,v_{i-1}\}$ cannot be $v_i$, since $f(v'_j)>f(v_i)$ and $\{v_1,\ldots,v_{i-1}, v'_{j}\}\in \calI,\calI'$; this contradicts the definition of $v_i$, and the proof is complete.
\hfill $\blacksquare$
\begin{mylemma}\label{lem:its_a_matroid}
Consider a matroid $(\calV,\calI)$, and a set $\calY\subseteq \calV$ such that $\calY \in \calI$. Moreover, define the following collection of subsets of $\calV\setminus \calY$: $\calI'\triangleq\{\calX: \calX \subseteq \calV\setminus\calY, \calX\cup \calY \in \calI\}$. Then, $(\calV\setminus\calY, \calI')$ is a matroid.
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:its_a_matroid}}
We validate that $(\calV\setminus\calY, \calI')$ satisfies the conditions in Definition~\ref{def:matroid} of a matroid. In particular:
\begin{itemize}
\item to validate the first condition in Definition~\ref{def:matroid}, assume a set $\calX \subseteq \calV\setminus\calY$ such that $\calX\in \calI'$; moreover, assume a set $\calZ\subseteq \calX$; we need to show that $\calZ\in\calI'$\!\!. To this end, observe that the definition of $\calI'$ implies $\calX\cup\calY\in\calI$, since we assumed $\calX\in\calI'$\!\!. In~addition, the assumption $\calZ\subseteq \calX$ implies $\calZ\cup \calY\subseteq \calX\cup \calY$, and, as a result, $\calZ\cup \calY\in \calI$, since $(\calV,\calI)$ is a matroid. Overall, $\calZ\subseteq \calV\setminus\calY$ (since $\calZ\subseteq \calX$, by assumption, and $\calX\subseteq \calV\setminus \calY$) and $\calZ\cup \calY\in \calI$; hence, $\calZ\in \calI'$\!\!, by the definition of $\calI'$\!\!, and now the first condition in Definition~\ref{def:matroid} is validated;
\item to validate the second condition in Definition~\ref{def:matroid}, assume sets $\calX,\calZ\subseteq\calV\setminus\calY$ such that $\calX,\calZ\in\calI'$ and $|\calX|<|\calZ|$; we need to show that there exists an element $z\in\calZ\setminus\calX$ such that $\calX\cup\{z\}\in\calI'$\!\!. To this end, observe that since $\calX,\calZ\in\calI'$\!\!, the definition of $\calI'$ implies that $\calX\cup\calY,\calZ\cup\calY\in\calI$. Moreover, since $|\calX|<|\calZ|$, it also is $|\calX\cup\calY|<|\calZ\cup\calY|$. Therefore, since $(\calV,\calI)$ is a matroid, there exists an element $z\in (\calZ\cup\calY)\setminus(\calX\cup\calY)=\calZ\setminus\calX$ such that $(\calX\cup\calY)\cup\{z\}\in\calI$;
as a result, $\calX\cup\{z\}\in \calI'$\!\!, by the definition of $\calI'$\!\!.
In sum, $z\in \calZ\setminus\calX$ and $\calX\cup\{z\}\in \calI'$\!\!, and the second condition in Definition~\ref{def:matroid} is validated too.
\hfill $\blacksquare$
\end{itemize}
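The contraction construction in Lemma~\ref{lem:its_a_matroid} can also be sanity-checked by brute force on a small instance. The following Python sketch is illustrative only (it is not part of the paper's development); the choice of a uniform matroid, the set $\calY$, and all names are made-up examples:

```python
from itertools import combinations

def uniform_matroid(ground, rank):
    """All subsets of `ground` with at most `rank` elements."""
    return {frozenset(c) for r in range(rank + 1)
            for c in combinations(ground, r)}

def is_matroid(indep):
    """Brute-force check of the two matroid axioms."""
    # Axiom 1 (downward closure): subsets of independent sets are independent.
    for X in indep:
        for r in range(len(X) + 1):
            if any(frozenset(Z) not in indep for Z in combinations(X, r)):
                return False
    # Axiom 2 (exchange): if |X| < |Z|, some z in Z - X keeps X + z independent.
    for X in indep:
        for Z in indep:
            if len(X) < len(Z) and not any(X | {z} in indep for z in Z - X):
                return False
    return True

V = frozenset(range(4))
I = uniform_matroid(V, 2)                # the matroid (V, I)
Y = frozenset({0})                       # a set Y with Y in I
Vc = V - Y
Ic = {X for X in uniform_matroid(Vc, len(Vc)) if X | Y in I}  # the lemma's I'

print(is_matroid(I), is_matroid(Ic))     # True True
```

Here $\calI'$ collapses to the rank-$1$ uniform matroid on $\calV\setminus\calY$, as the lemma predicts.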
\begin{mylemma}\label{lem:greedy_perfromance}
Recall the notation in Algorithm~\ref{alg:rob_sub_max}, and consider the sets $\calA_1$ and $\calA_2$ constructed by Algorithm~\ref{alg:rob_sub_max}'s lines~\ref{line:begin_while_1}-\ref{line:end_while_1} and lines~\ref{line:begin_while_2}-\ref{line:end_while_2}, respectively. Then, for the set $\calA_2$, it holds:
\begin{itemize}
\item if the function $f$ is non-decreasing submodular and:
\begin{itemize}
\item if $(\calV,\calI)$ is a uniform matroid, then:
\begin{equation}\label{eq:greedy_perfromance_uniform}
f(\calA_{2})\geq \frac{1}{\kappa_f}(1-e^{-\kappa_f})\;\;\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max} \; \;\; f(\calX).
\end{equation}
\item if $(\calV,\calI)$ is a matroid, then:
\begin{equation}\label{eq:greedy_perfromance_matroid}
f(\calA_{2})\geq \frac{1}{1+\kappa_f}\;\;\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max} \; \;\; f(\calX).
\end{equation}
\end{itemize}
\item if the function $f$ is non-decreasing, then:
\begin{equation}\label{eq:greedy_perfromance}
f(\calA_{2})\geq (1-c_f)\;\;\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max} \; \;\; f(\calX).
\end{equation}
\end{itemize}
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:greedy_perfromance}}
We first prove ineq.~\eqref{eq:greedy_perfromance}, then ineq.~\eqref{eq:greedy_perfromance_matroid}, and, finally, ineq.~\eqref{eq:greedy_perfromance_uniform}.
In particular, Algorithm~\ref{alg:rob_sub_max} constructs the set~$\calA_2$ greedily, by replicating the steps of the greedy algorithm introduced in~\cite[Section~2]{fisher1978analysis}, to solve the following optimization problem:
\begin{equation}
\label{eq:auxxxx}
\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max} \; \;\; f(\calX);
\end{equation}
let in the latter problem $\calI'\triangleq\{\calX: \calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI\}$. Lemma~\ref{lem:its_a_matroid} implies that $(\calV\setminus\calA_1, \calI')$ is a matroid, and, as a result, the previous optimization problem is a matroid-constrained set function maximization problem. Now, to prove ineq.~\eqref{eq:greedy_perfromance}, ineq.~\eqref{eq:greedy_perfromance_matroid}, and ineq.~\eqref{eq:greedy_perfromance_uniform}, we make the following observations, respectively: when the function $f$ is merely non-decreasing, then \cite[Theorem~8.1]{sviridenko2017optimal} implies that the greedy algorithm introduced in~\cite[Section~2]{fisher1978analysis} returns for the optimization problem in eq.~\eqref{eq:auxxxx} a solution $\calS$ such that $f(\calS)\geq (1-c_f)\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max}\; f(\calX)$; this proves ineq.~\eqref{eq:greedy_perfromance}. Similarly, when the function $f$ is non-decreasing and submodular, then~\cite[Theorem~2.3]{conforti1984curvature} implies that the greedy algorithm introduced in~\cite[Section~2]{fisher1978analysis} returns for the optimization problem in eq.~\eqref{eq:auxxxx} a solution $\calS$ such that $f(\calS)\geq 1/(1+\kappa_f)\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max}\; f(\calX)$; this proves ineq.~\eqref{eq:greedy_perfromance_matroid}. Finally, when the objective function $f$ is non-decreasing submodular, and when $\calI$ is a uniform matroid, then~\cite[Theorem~5.4]{conforti1984curvature} implies that the greedy algorithm introduced in~\cite[Section~2]{fisher1978analysis} returns for the optimization problem in eq.~\eqref{eq:auxxxx} a solution~$\calS$ such that $f(\calS)\geq 1/\kappa_f(1-e^{-\kappa_f})\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max} \; f(\calX)$; this proves ineq.~\eqref{eq:greedy_perfromance_uniform}, and concludes the proof of the lemma.
\hfill $\blacksquare$
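The greedy algorithm of~\cite[Section~2]{fisher1978analysis}, whose steps Algorithm~\ref{alg:rob_sub_max} replicates, can be sketched in a few lines. The Python below is a hypothetical toy instance (the partition matroid, the weights, and all names are made up); since the objective is modular (curvature $0$), the greedy solution happens to be optimal here:

```python
from itertools import combinations

def greedy(V, f, indep):
    """Matroid-constrained greedy: at each step add the feasible element
    with the largest marginal gain f(S + v) - f(S)."""
    S = frozenset()
    while True:
        candidates = [v for v in V - S if indep(S | {v})]
        if not candidates:
            return S
        S = S | {max(candidates, key=lambda v: f(S | {v}) - f(S))}

# Toy instance: partition matroid (at most one element per part) and a
# modular (hence submodular) objective with made-up weights.
parts = [frozenset({0, 1}), frozenset({2, 3})]
w = {0: 4.0, 1: 1.0, 2: 3.0, 3: 2.0}
V = frozenset(w)
f = lambda S: sum(w[v] for v in S)
indep = lambda S: all(len(S & p) <= 1 for p in parts)

S = greedy(V, f, indep)
best = max(f(frozenset(c)) for r in range(len(V) + 1)
           for c in combinations(V, r) if indep(frozenset(c)))
print(f(S), best)  # 7.0 7.0
```

For a general non-decreasing submodular $f$\!, the lemma only guarantees the factor $1/(1+\kappa_f)$ relative to `best`.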
\begin{mylemma}\label{lem:from_max_to_minmax}
Recall the notation in Theorem~\ref{th:alg_rob_sub_max_performance} and Appendix~\ref{app:notation}. Also, consider a uniform or partition matroid $(\calV,\calI')$. Then, for any set ${\calY}\subseteq \calV$ such that $\calY\in \calI$ and $\calY\in \calI'$\!\!, it holds:
\begin{equation}\label{eq:toprovefrom_max_to_minmax}
\underset{{\mathcal{X}}\subseteq \calV\setminus\calY, \calX\cup{{\mathcal{Y}}}\in \calI}{\max} \; \;\; f({\mathcal{X}})\geq f(\calA^\star\setminus \calB^\star(\calA^\star)).
\end{equation}
\end{mylemma}
\paragraph*{Proof of Lemma~\ref{lem:from_max_to_minmax}}
We start from the left-hand-side of ineq.~\eqref{eq:toprovefrom_max_to_minmax}, and make the following observations:
\begin{align}
\underset{{\mathcal{X}}\subseteq \calV\setminus\calY, \calX\cup{{\mathcal{Y}}}\in \calI}{\max} \;\; \; f({\mathcal{X}})&\geq \min_{\bar{\calY} \subseteq \calV, \bar{\calY}\in \calI,\calI'}\;\; \underset{\;{\mathcal{X}}\subseteq \calV\setminus\bar{\calY}, \calX\cup{\bar{\calY}}\in \calI}{\max} \;\;\; f({\mathcal{X}})\nonumber\\
&=\min_{\bar{\calY} \subseteq \calV, \bar{\calY}\in \calI,\calI'}\;\; \underset{\bar{\mathcal{A}}\subseteq \calV, \bar{\mathcal{A}}\in \calI}{\max} \;\;\; f({\bar{\mathcal{A}}\setminus \bar{\calY}})\nonumber\\
&\triangleq h.\nonumber
\end{align}
We next complete the proof of Lemma~\ref{lem:from_max_to_minmax} by proving that $h\geq f(\calA^\star\setminus \calB^\star(\calA^\star))$. To this end, observe that for any set ${\calA}\subseteq \calV$ such that ${\calA}\in \calI$, and for any set ${\calY}\subseteq \calV$ such that $\calY\in \calI$ and $\calY\in \calI'$\!\!, it holds:
\begin{align}
\underset{\bar{\mathcal{A}}\subseteq \calV, \bar{\mathcal{A}}\in \calI}{\max} \;\;\; f({\bar{\mathcal{A}}\setminus {\calY}})&\geq f({\mathcal{A}}\setminus {\calY}),\nonumber
\end{align}
which implies the following observations:
\begin{align}
h&\geq \min_{\bar{\calY} \subseteq \calV, \bar{\calY}\in \calI,\calI'}\;\; f({\mathcal{A}}\setminus \bar{\calY})\nonumber\\
&\geq \min_{\bar{\calY} \subseteq \calV, \bar{\calY}\in \calI'}\;\; f({\mathcal{A}}\setminus \bar{\calY})\nonumber\\
&=\min_{\bar{\calY} \subseteq \calA, \bar{\calY}\in \calI'}\;\; f({\mathcal{A}}\setminus \bar{\calY}),\nonumber
\end{align}
and, as a result, it holds:
\belowdisplayskip=-11pt\begin{align}
h&\geq \underset{\bar{\mathcal{A}}\subseteq \calV, \bar{\mathcal{A}}\in \calI}{\max} \;\;\min_{\bar{\calY} \subseteq \bar{\calA}, \bar{\calY}\in \calI'}\;\; f(\bar{\mathcal{A}}\setminus \bar{\calY})\nonumber\\
&= f(\calA^\star\setminus \calB^\star(\calA^\star)).\nonumber
\end{align}
\hfill $\blacksquare$
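Lemma~\ref{lem:from_max_to_minmax} can be verified exhaustively on toy instances. The Python sketch below is illustrative only (the weights and matroid ranks are made up); it checks ineq.~\eqref{eq:toprovefrom_max_to_minmax} by brute force for every feasible $\calY$:

```python
from itertools import combinations

def subsets(ground):
    for r in range(len(ground) + 1):
        for c in combinations(sorted(ground), r):
            yield frozenset(c)

# Toy instance: (V, I) uniform of rank 2, (V, I') uniform of rank 1,
# and a made-up modular (monotone) objective.
V = frozenset(range(4))
w = {0: 3.0, 1: 2.0, 2: 2.0, 3: 1.0}
f = lambda S: sum(w[v] for v in S)
inI  = lambda S: len(S) <= 2   # the matroid I
inIp = lambda S: len(S) <= 1   # the removal matroid I'

# f(A* \ B*(A*)): the max-min value on the right-hand side of the lemma.
maxmin = max(min(f(A - B) for B in subsets(A) if inIp(B))
             for A in subsets(V) if inI(A))

# The lemma: for every Y in both matroids, the second-stage max dominates it.
for Y in subsets(V):
    if inI(Y) and inIp(Y):
        second = max(f(X) for X in subsets(V - Y) if inI(X | Y))
        assert second >= maxmin
print(maxmin)  # 2.0 on this instance
```

On this instance the bound is tight for $\calY=\{0\}$, where the second-stage maximum equals the max-min value.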
\section{Proof of Theorem~\ref{th:alg_rob_sub_max_performance}}
We first prove Theorem~\ref{th:alg_rob_sub_max_performance}'s part 1 (approximation performance), and then, Theorem~\ref{th:alg_rob_sub_max_performance}'s part 2 (running time).
\subsection{Proof of Theorem~\ref{th:alg_rob_sub_max_performance}'s part 1 (approximation performance)}
We first prove ineq.~\eqref{ineq:bound_non_sub}, and, then, ineq.~\eqref{ineq:bound_sub} and ineq.~\eqref{ineq:bound_sub_uniform}.
To the above ends, we use the following notation (along with the notation in Algorithm~\ref{alg:rob_sub_max}, Theorem~\ref{th:alg_rob_sub_max_performance}, and Appendix~\ref{app:notation}):
\begin{itemize}
\item let $\calA_{1}^+\triangleq \calA_{1}\setminus \calB^\star(\calA)$, i.e., $\calA_{1}^+$ is the set of remaining elements in the set $\calA_{1}$ after the removal from $\calA_{1}$ of the elements in the optimal (worst-case) removal $\calB^\star(\calA)$;
\item let $\calA_{2}^+\triangleq \calA_{2}\setminus \calB^\star(\calA)$, i.e., $\calA_{2}^+$ is the set of remaining elements in the set $\calA_{2}$ after the removal from $\calA_{2}$ of the elements in the optimal (worst-case) removal $\calB^\star(\calA)$.
\end{itemize}
\bigskip
\paragraph*{Proof of ineq.~\eqref{ineq:bound_non_sub}} Consider that the objective function~$f$ is non-decreasing and such that (without loss of generality) $f$ is non-negative and $f(\emptyset)=0$. Then, the proof of ineq.~\eqref{ineq:bound_non_sub} follows by making the following observations:
\belowdisplayskip=8pt\begin{align}
&\!\!\!f(\calA\setminus \calB^\star(\calA))\nonumber\\
&=f(\calA_{1}^+\cup \calA_{2}^+)\label{ineq2:aux_14}\\
&\geq (1-c_f)\sum_{v\in\calA_{1}^+\cup \calA_{2}^+}f(v)\label{ineq2:aux_15}\\
&\geq (1-c_f)\sum_{v\in\calA_{2}}f(v)\label{ineq2:aux_16}\\
&\geq (1-c_f)^2f(\calA_{2})\label{ineq2:aux_17}\\
&\geq (1-c_f)^3\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max} \; \;\; f(\calX)\label{ineq2:aux_18}\\
&\geq (1-c_f)^3f(\calA^\star\setminus \calB^\star(\calA^\star)),\label{ineq2:aux_19}
\end{align}
where eqs.~\eqref{ineq2:aux_14}-\eqref{ineq2:aux_19} hold for the following reasons: eq.~\eqref{ineq2:aux_14} follows from the definitions of the sets~$\calA_{1}^+$ and $\calA_{2}^+$; ineq.~\eqref{ineq2:aux_15} follows from eq.~\eqref{ineq2:aux_14} due to Lemma~\ref{lem:curvature}; ineq.~\eqref{ineq2:aux_16} follows from ineq.~\eqref{ineq2:aux_15} due to Lemma~\ref{lem:dominance}, which implies that for any element $v\in\calA_1^+$ and any element $v'\in \calA_2\setminus\calA_2^+$ it is $f(v)\geq f(v')$
---note that due to the definitions of $\calA_{1}^+$ and $\calA_{2}^+$ it is $|\calA_{1}^+|=|\calA_{2}\setminus \calA_{2}^+|$, that is, the number of non-removed elements in~$\calA_{1}$ is equal to the number of removed elements in $\calA_{2}$,---
and the fact $\calA_{2}=(\calA_{2}\setminus \calA_{2}^+)\cup \calA_{2}^+$;
ineq.~\eqref{ineq2:aux_17} follows from ineq.~\eqref{ineq2:aux_16} due to Corollary~\ref{cor:ineq_from_lemmata}; ineq.~\eqref{ineq2:aux_18} follows from ineq.~\eqref{ineq2:aux_17} due to Lemma~\ref{lem:greedy_perfromance}'s ineq.~\eqref{eq:greedy_perfromance}; finally, ineq.~\eqref{ineq2:aux_19} follows from ineq.~\eqref{ineq2:aux_18} due to Lemma~\ref{lem:from_max_to_minmax}.
\hfill $\blacksquare$
\bigskip
In what follows, we first prove ineq.~\eqref{ineq:bound_sub}, and then ineq.~\eqref{ineq:bound_sub_uniform}: we first prove the part $\frac{1-\kappa_f}{1+\kappa_f}$ and $\frac{1-\kappa_f}{\kappa_f}(1-e^{-\kappa_f})$ of ineq.~\eqref{ineq:bound_sub} and of ineq.~\eqref{ineq:bound_sub_uniform}, respectively, and then, the part $\frac{h_f(\alpha,\beta)}{1+\kappa_f}$ and $\frac{h_f(\alpha,\beta)}{\kappa_f}(1-e^{-\kappa_f})$ of ineq.~\eqref{ineq:bound_sub} and of ineq.~\eqref{ineq:bound_sub_uniform}, respectively.
\medskip
\paragraph*{Proof of part $(1-\kappa_f)/(1+\kappa_f)$ of ineq.~\eqref{ineq:bound_sub}}
Consider that the objective function~$f$ is non-decreasing submodular and such that (without loss of generality) $f$ is non-negative and $f(\emptyset)=0$.
To prove the part $(1-\kappa_f)/(1+\kappa_f)$ of
ineq.~\eqref{ineq:bound_sub} we follow similar observations to the ones we followed in the proof of ineq.~\eqref{ineq:bound_non_sub}; in particular:
\begin{align}
&\!\!\!f(\calA\setminus \calB^\star(\calA))\nonumber\\
&=f(\calA_{1}^+\cup \calA_{2}^+)\label{ineq5:aux_14}\\
&\geq (1-\kappa_f)\sum_{v\in\calA_{1}^+\cup \calA_{2}^+}f(v)\label{ineq5:aux_15}\\
&\geq (1-\kappa_f)\sum_{v\in\calA_{2}}f(v)\label{ineq5:aux_16}\\
&\geq (1-\kappa_f)f(\calA_{2})\label{ineq5:aux_17}\\
&\geq \frac{1-\kappa_f}{1+\kappa_f}\underset{\calX\subseteq \calV\setminus\calA_1, \calX\cup\calA_1\in \calI}{\max} \; \;\; f(\calX)\label{ineq5:aux_18}\\
&\geq \frac{1-\kappa_f}{1+\kappa_f}f(\calA^\star\setminus \calB^\star(\calA^\star)),\label{ineq5:aux_19}
\end{align}
where eqs.~\eqref{ineq5:aux_14}-\eqref{ineq5:aux_19} hold for the following reasons: eq.~\eqref{ineq5:aux_14} follows from the definitions of the sets~$\calA_{1}^+$ and $\calA_{2}^+$; ineq.~\eqref{ineq5:aux_15} follows from eq.~\eqref{ineq5:aux_14} due to Lemma~\ref{lem:non_total_curvature}; ineq.~\eqref{ineq5:aux_16} follows from ineq.~\eqref{ineq5:aux_15} due to Lemma~\ref{lem:dominance}, which implies that for any element $v\in\calA_1^+$ and any element $v'\in \calA_2\setminus\calA_2^+$ it is $f(v)\geq f(v')$
---note that due to the definitions of the sets~$\calA_{1}^+$ and $\calA_{2}^+$ it is $|\calA_{1}^+|=|\calA_{2}\setminus \calA_{2}^+|$, that is, the number of non-removed elements in $\calA_{1}$ is equal to the number of removed elements in~$\calA_{2}$,--- and because $\calA_{2}=(\calA_{2}\setminus \calA_{2}^+)\cup \calA_{2}^+$; ineq.~\eqref{ineq5:aux_17} follows from ineq.~\eqref{ineq5:aux_16} because the set function $f$ is submodular, and as~a result, the~submodularity Definition~\ref{def:sub} implies that for any sets $\mathcal{S}\subseteq \mathcal{V}$ and $\mathcal{S}'\subseteq \mathcal{V}$, it is $f(\mathcal{S})+f(\mathcal{S}')\geq f(\mathcal{S}\cup \mathcal{S}')$~\cite[Proposition 2.1]{nemhauser78analysis}; ineq.~\eqref{ineq5:aux_18} follows from ineq.~\eqref{ineq5:aux_17} due to Lemma~\ref{lem:greedy_perfromance}'s ineq.~\eqref{eq:greedy_perfromance_matroid}; finally, ineq.~\eqref{ineq5:aux_19} follows from ineq.~\eqref{ineq5:aux_18} due to Lemma~\ref{lem:from_max_to_minmax}.
\hfill $\blacksquare$
\medskip
\paragraph*{Proof of part $(1-\kappa_f)/\kappa_f(1-e^{-\kappa_f})$ of ineq.~\eqref{ineq:bound_sub_uniform}}
Consider that the objective function~$f$ is non-decreasing submodular and such that (without loss of generality) $f$ is non-negative and $f(\emptyset)=0$. Moreover, consider that the pair $(\calV,\calI)$ is a uniform matroid.
To prove the part $(1-\kappa_f)/\kappa_f(1-e^{-\kappa_f})$ of ineq.~\eqref{ineq:bound_sub_uniform} we follow similar steps to the ones we followed in the proof of ineq.~\eqref{ineq:bound_sub} via the ineqs.~\eqref{ineq5:aux_14}-\eqref{ineq5:aux_19}. We explain next where these steps differ: if instead of using Lemma~\ref{lem:greedy_perfromance}'s ineq.~\eqref{eq:greedy_perfromance_matroid} to get ineq.~\eqref{ineq5:aux_18} from ineq.~\eqref{ineq5:aux_17}, we use Lemma~\ref{lem:greedy_perfromance}'s ineq.~\eqref{eq:greedy_perfromance_uniform}, and afterwards apply Lemma~\ref{lem:from_max_to_minmax}, then, we derive ineq.~\eqref{ineq:bound_sub_uniform}.
\hfill $\blacksquare$
\medskip
\paragraph*{Proof of parts $h_f(\alpha,\beta)/(1+\kappa_f)$ and ${h_f(\alpha,\beta)}/{\kappa_f}(1-e^{-\kappa_f})$ of ineq.~\eqref{ineq:bound_sub} and ineq.~\eqref{ineq:bound_sub_uniform}, respectively}
\begin{figure}[t]
\def (0,0) circle (1cm) { (0,0) circle (1cm) }
\def (.5,0) circle (0.4cm){ (.5,0) circle (0.4cm)}
\def (2.5,0) circle (1cm) { (2.5,0) circle (1cm) }
\def (3.0,0) circle (0.4cm){ (3.0,0) circle (0.4cm)}
\def (-1.5, -1.5) rectangle (4, 1.5) { (-1.5, -1.5) rectangle (4, 1.5) }
\begin{center}
\begin{tikzpicture}
\draw (-1.5, -1.5) rectangle (4, 1.5) node[below left]{$\mathcal{V}$};
\draw (0,0) circle (1cm) node[left]{$\mathcal{A}_{1}$};
\draw (.5,0) circle (0.4cm) node[]{$\mathcal{B}_{1}^\star$};
\draw (2.5,0) circle (1cm) node[left]{$\mathcal{A}_{2}$};
\draw (3.0,0) circle (0.4cm) node[]{$\mathcal{B}_{2}^\star$};
\end{tikzpicture}
\end{center}
\caption{\small Venn diagram, where the sets $\mathcal{A}_{1}, \mathcal{A}_{2},\mathcal{B}_{1}^\star, \mathcal{B}_{2}^\star$ are as follows: per Algorithm~\ref{alg:rob_sub_max}, $\mathcal{A}_{1}$ and $\mathcal{A}_{2}$ are such that $\mathcal{A}=\mathcal{A}_{1}\cup \mathcal{A}_{2}$. Due to their construction, it holds $\mathcal{A}_{1}\cap \mathcal{A}_{2}=\emptyset$. Next,
$\mathcal{B}_{1}^\star$ and $\mathcal{B}_{2}^\star$ are such that $\mathcal{B}_{1}^\star=\mathcal{B}^\star(\mathcal{A})\cap\mathcal{A}_{1}$, and $\mathcal{B}_2^\star=\mathcal{B}^\star(\mathcal{A})\cap\mathcal{A}_{2}$; therefore, it is $\mathcal{B}_{1}^\star\cap \mathcal{B}_{2}^\star=\emptyset$ and $\mathcal{B}^\star(\mathcal{A})=(\mathcal{B}_{1}^\star\cup \mathcal{B}_{2}^\star)$.
}\label{fig:venn_diagram_for_proof}
\end{figure}
We complete the proof by first proving that:
\begin{equation}\label{ineq1:aux_1}
f(\calA\setminus\calB^\star(\calA))\geq \frac{1}{1+\beta}f(\calA_2),
\end{equation}
and, then, proving that:
\begin{equation}\label{ineq1:aux_2}
f(\calA\setminus\calB^\star(\calA))\geq \frac{1}{\alpha-\beta}f(\calA_2).
\end{equation}
The combination of ineq.~\eqref{ineq1:aux_1} and ineq.~\eqref{ineq1:aux_2} proves the part $\frac{h_f(\alpha,\beta)}{1+\kappa_f}$ and $\frac{h_f(\alpha,\beta)}{\kappa_f}(1-e^{-\kappa_f})$ of ineq.~\eqref{ineq:bound_sub} and of ineq.~\eqref{ineq:bound_sub_uniform}, respectively, after also applying Lemma~\ref{lem:greedy_perfromance}'s ineq.~\eqref{eq:greedy_perfromance_matroid} and ineq.~\eqref{eq:greedy_perfromance_uniform}, respectively, and then Lemma~\ref{lem:from_max_to_minmax}.
\medskip
\textit{To prove ineq.~\eqref{ineq1:aux_1}}, we follow the steps of the proof of~\cite[Theorem~1]{tzoumas2017resilient}, and use the notation introduced in Fig.~\ref{fig:venn_diagram_for_proof}, along with the following notation:
\begin{align}
\eta = \frac{f(\mathcal{B}_2^\star|\mathcal{A}\setminus \mathcal{B}^\star(\mathcal{A}))}{f(\mathcal{A}_2)}.
\end{align}
In particular, to prove ineq.~\eqref{ineq1:aux_1} we focus on the worst case where $\calB^\star_2\neq \emptyset$; the reason is that if instead $\calB^\star_2= \emptyset$, then $f(\calA\setminus\calB^\star(\calA))=f(\calA_2)$, which is a tighter inequality than ineq.~\eqref{ineq1:aux_1}. Hence, considering $\calB^\star_2\neq \emptyset$,
we prove ineq.~\eqref{ineq1:aux_1} by first observing that:
\begin{equation}\label{ineq:aux_1}
f(\mathcal{A}\setminus\mathcal{B}^\star(\mathcal{A}))\geq\max\{f(\mathcal{A}\setminus\mathcal{B}^\star(\mathcal{A})),f(\mathcal{A}_1^+)\},
\end{equation}
and then proving the following three inequalities:
\begin{align}
f(\mathcal{A}\setminus\mathcal{B}^\star(\mathcal{A}))&\geq(1-\eta)f(\mathcal{A}_2)\label{ineq:aux_2},\\
f(\mathcal{A}_1^+)&\geq \eta \frac{1}{\beta}f(\mathcal{A}_2),\label{ineq:aux_3}\\
\max\{(1-\eta),\eta\frac{1}{\beta}\}&\geq \frac{1}{\beta+1}.\label{ineq:aux_4}
\end{align}
Specifically, if we substitute ineqs.~\eqref{ineq:aux_2}-\eqref{ineq:aux_4} into ineq.~\eqref{ineq:aux_1}, and take into account that $f(\mathcal{A}_2)\geq 0$, then:
\begin{equation*}
f(\mathcal{A}\setminus\mathcal{B}^\star(\mathcal{A}))\geq \frac{1}{\beta+1}f(\mathcal{A}_2),
\end{equation*}
which implies ineq.~\eqref{ineq1:aux_1}.
We complete the proof of ineq.~\eqref{ineq1:aux_1} by proving $0\leq \eta\leq 1$, and ineqs.~\eqref{ineq:aux_2}-\eqref{ineq:aux_4}, respectively.
\paragraph{Proof of ineq.~$0\leq \eta\leq 1$} We first prove that $\eta\geq 0$, and then, that $\eta\leq 1$: it holds~$\eta\geq 0$, since by definition $\eta=f(\mathcal{B}_2^\star|\mathcal{A}\setminus \mathcal{B}^\star(\mathcal{A}))/f(\mathcal{A}_2)$, and since $f$ is non-negative; and it holds~$\eta\leq 1$, since $f(\mathcal{A}_2)\geq f(\mathcal{B}^\star_2)$, due to monotonicity of $f$ and that $\mathcal{B}^\star_2 \subseteq \mathcal{A}_2$, and since $f(\mathcal{B}^\star_2)\geq f(\mathcal{B}_2^\star|\mathcal{A}\setminus \mathcal{B}^\star(\mathcal{A}))$, due to submodularity of $f$ and that $\emptyset \subseteq \mathcal{A}\setminus \mathcal{B}^\star(\mathcal{A})$.
\paragraph{Proof of ineq.~\eqref{ineq:aux_2}} We complete the proof of ineq.~\eqref{ineq:aux_2} in two steps. First, it can be verified that:
\begin{align}\label{eq:aux_1}
& f(\mathcal{A}\setminus\mathcal{B}^\star(\mathcal{A}))=f(\mathcal{A}_2)-\nonumber\\ & f(\mathcal{B}^\star_2|\mathcal{A}\setminus\mathcal{B}^\star(\mathcal{A}))+f(\mathcal{A}_1|\mathcal{A}_2)-f(\mathcal{B}^\star_1|\mathcal{A}\setminus\mathcal{B}^\star_1),
\end{align}
since for any $\mathcal{X}\subseteq \mathcal{V}$ and $\mathcal{Y}\subseteq \mathcal{V}$, it holds $f(\mathcal{X}|\mathcal{Y})=f(\mathcal{X}\cup \mathcal{Y})-f(\mathcal{Y})$. Second, eq.~\eqref{eq:aux_1} implies ineq.~\eqref{ineq:aux_2}, since $f(\mathcal{B}^\star_2|\mathcal{A}\setminus\mathcal{B}^\star(\mathcal{A}))=\eta f(\mathcal{A}_2)$, and $f(\mathcal{A}_1|\mathcal{A}_2)-f(\mathcal{B}^\star_1|\mathcal{A}\setminus\mathcal{B}^\star_1)\geq 0$.
The latter is true due to the following two observations:~$f(\mathcal{A}_1|\mathcal{A}_2)\geq f(\mathcal{B}_1^\star|\mathcal{A}_2)$, since $f$ is monotone and $\mathcal{B}_1^\star \subseteq \mathcal{A}_1$; and~$f(\mathcal{B}_1^\star|\mathcal{A}_2)\geq f(\mathcal{B}^\star_1|\mathcal{A}\setminus\mathcal{B}^\star_1)$, since $f$ is submodular and $\mathcal{A}_2\subseteq \mathcal{A}\setminus\mathcal{B}^\star_1$ (see also Fig.~\ref{fig:venn_diagram_for_proof}).
\paragraph{Proof of ineq.~\eqref{ineq:aux_3}}
Since it is $\calB^\star_2\neq \emptyset$ (and as a result, it also is $\calA^+_1\neq \emptyset$), and since for all elements $a \in \calA^+_1$ and all elements $b\in \calB^\star_2$ it is $f(a)\geq f(b)$, from Lemma~\ref{lem:D3} we have:
\begin{align}
f(\calB^\star_2|\calA^+_1)&\leq |\calB^\star_2|f(\calA^+_1)\nonumber\\
&\leq \beta f(\calA^+_1),\label{aux:111}
\end{align}
since $|\calB^\star_2|\leq \beta$. Overall,
\begin{align}
f(\calA^+_1)&\geq \frac{1}{\beta}f(\calB^\star_2|\calA^+_1)\label{aux5:1}\\
&\geq \frac{1}{\beta}f(\calB^\star_2|\calA^+_1\cup \calA^+_2)\label{aux5:2}\\
&=\frac{1}{\beta}f(\mathcal{B}_2^\star|\mathcal{A}\setminus \mathcal{B}^\star(\mathcal{A}))\label{aux5:3}\\
&=\eta\frac{1}{\beta}f(\calA_2),\label{aux5:4}
\end{align}
where ineqs.~\eqref{aux5:1}-\eqref{aux5:4} hold for the following reasons: ineq.~\eqref{aux5:1} follows from ineq.~\eqref{aux:111}; ineq.~\eqref{aux5:2} holds since $f$ is submodular and $\calA_1^+\subseteq \calA_1^+\cup \calA_2^+$; eq.~\eqref{aux5:3} holds due to the definitions of the sets $\calA_1^+$, $\calA_2^+$ and $\mathcal{B}^\star(\mathcal{A})$; finally, eq.~\eqref{aux5:4} holds due to the definition of $\eta$.
\paragraph{Proof of ineq.~\eqref{ineq:aux_4}} Let $b=1/\beta$. We complete the proof first for the case where
$(1-\eta)\geq \eta b$, and then for the case $(1-\eta)<\eta b$: when $(1-\eta)\geq \eta b$, $\max\{(1-\eta),\eta b\}= 1-\eta$ and $\eta \leq 1/(1+b)$; due to the latter, $1-\eta \geq b/(1+b)=1/(\beta+1)$ and, as a result,~\eqref{ineq:aux_4} holds. Finally, when $(1-\eta)< \eta b$, $\max\{(1-\eta),\eta b\}= \eta b$ and $\eta > 1/(1+b)$; due to the latter, $\eta b > b/(1+b)$ and, as a result,~\eqref{ineq:aux_4} holds.
We completed the proof of~$0\leq \eta\leq 1$, and of ineqs.~\eqref{ineq:aux_2}-\eqref{ineq:aux_4}. Thus, we also completed the proof of ineq.~\eqref{ineq1:aux_1}.
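The case analysis behind ineq.~\eqref{ineq:aux_4} can also be spot-checked numerically. The short Python sketch below is a sanity check, not part of the proof; it verifies the bound on a grid and the equality at the crossing point $\eta=\beta/(\beta+1)$:

```python
# Spot-check of max{1 - eta, eta/beta} >= 1/(beta + 1): the two branches
# cross at eta = beta/(beta + 1), where both equal 1/(beta + 1);
# everywhere else the max is strictly larger.
for beta in range(1, 6):
    for k in range(101):
        eta = k / 100
        assert max(1 - eta, eta / beta) >= 1 / (beta + 1) - 1e-12
    eta_star = beta / (beta + 1)
    assert abs(max(1 - eta_star, eta_star / beta) - 1 / (beta + 1)) < 1e-12
print("bound holds on the grid")
```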
\medskip
\textit{To prove ineq.~\eqref{ineq1:aux_2}}, we consider the following mutually exclusive and collectively exhaustive cases:
\begin{itemize}
\item consider $\calB^\star_2= \emptyset$, i.e., all elements in $\calA_1$ are removed and, as a result, none of the elements in $\calA_2$ is removed. Then, $f(\calA\setminus \calB^\star(\calA))=f(\calA_2)$, and ineq.~\eqref{ineq1:aux_2} holds.
\item Consider $\calB^\star_2\neq \emptyset$, i.e., at least one of the elements in $\calA_1$ is \textit{not} removed; call any of these elements $s$. Then:
\begin{equation}\label{eq1:aux1}
f(\calA\setminus\calB^\star(\calA))\geq f(s),
\end{equation}
since $f$ is non-decreasing.
In addition:
\begin{equation}\label{eq1:aux2}
f(\calA_2)\leq \sum_{v\in \calA_2}f(v)\leq (\alpha-\beta) f(s),
\end{equation}
where the first inequality holds since $f$ is submodular~\cite[Proposition~2.1]{nemhauser78analysis}, and the second holds due to Lemma~\ref{lem:dominance} and the fact that $\calA_2$ is constructed by Algorithm~\ref{alg:rob_sub_max} such that $\calA_1\cup\calA_2\subseteq \calV$ and $\calA_1\cup\calA_2\in\calI$, where $|\calA_1|=\beta$ (since $\calA_1$ is constructed by Algorithm~\ref{alg:rob_sub_max} such that $\calA_1\subseteq \calV$ and $\calA_1\in\calI'$, where $(\calV,\calI')$ is a matroid with rank $\beta$) and $(\calV,\calI)$ is a matroid that has rank $\alpha$; the combination of ineq.~\eqref{eq1:aux1} and ineq.~\eqref{eq1:aux2} implies ineq.~\eqref{ineq1:aux_2}.
\end{itemize}
Overall, the proof of ineq.~\eqref{ineq1:aux_2} is complete. \hfill $\blacksquare$
\subsection{Proof of Theorem~\ref{th:alg_rob_sub_max_performance}'s part 2 (running time)}
We complete the proof in two steps, where we denote the time for each evaluation of the objective function $f$ as $\tau_f$. In particular, we first compute the running time of lines~\ref{line:begin_while_1}-\ref{line:end_while_1} and, then, of lines~\ref{line:begin_while_2}-\ref{line:end_while_2}: lines~\ref{line:begin_while_1}-\ref{line:end_while_1} need at most $|\calV|[|\calV|\tau_f+|\calV|\log(|\calV|)+|\calV|+O(\log(|\calV|))]$ time, since they are repeated at most $|\calV|$ times, and at each repetition line~\ref{line:select_element_bait} asks for at most $|\calV|$ evaluations of $f$\!, and for their sorting, which takes $|\calV|\log(|\calV|)+|\calV|+O(\log(|\calV|))$ time, using, e.g., the merge sort algorithm. Similarly, lines~\ref{line:begin_while_2}-\ref{line:end_while_2} need $|\calV|[|\calV|\tau_f+|\calV|\log(|\calV|)+|\calV|+O(\log(|\calV|))]$.
Overall, Algorithm~\ref{alg:rob_sub_max} runs in $2|\calV|[|\calV|\tau_f+|\calV|\log(|\calV|)+|\calV|+O(\log(|\calV|))]=O(|\calV|^2\tau_f)$ time.
\hfill $\blacksquare$
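The quadratic count of function evaluations can be illustrated with a short sketch. The Python below is illustrative (the stand-in objective and the ground-set size are arbitrary); it counts the oracle calls of a single unconstrained greedy phase:

```python
# Counting oracle calls in one greedy phase: each of the at most |V|
# iterations evaluates the marginal gain of every remaining element,
# so the number of f-evaluations grows as O(|V|^2).
calls = 0
def f(S):
    global calls
    calls += 1
    return len(S)          # stand-in objective; any monotone f works

V = set(range(30))
S = set()
while len(S) < len(V):
    v = max(V - S, key=lambda u: f(S | {u}) - f(S))  # one greedy step
    S.add(v)

print(calls)  # n(n + 1) = 930 evaluations for n = 30
```

Each greedy step with $k$ remaining candidates issues $2k$ calls, hence $n(n+1)$ calls in total, matching the $O(|\calV|^2\tau_f)$ bound.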
\section{Concluding remarks \& Future work} \label{sec:con}
We took a first step toward ensuring the success of critical missions in control, robotics, and optimization that involve the design of systems subject to complex optimization constraints of heterogeneity and interdependency ---called matroid constraints--- against worst-case denial-of-service attacks or failures. In particular, we provided the first algorithm for Problem~\ref{pr:robust_sub_max}, which, with minimal running time, guarantees
a close-to-optimal performance against system-wide attacks and failures. To quantify the algorithm's approximation performance, we exploited a notion of curvature for monotone (not necessarily submodular) set functions, and contributed a first step towards characterizing the curvature's effect on the approximability of resilient \textit{matroid-constrained} maximization. Our curvature-dependent characterizations complement the current knowledge on the curvature's effect on the approximability of simpler problems, such as of {non-matroid-constrained} resilient maximization~\cite{tzoumas2017resilient,bogunovic2018robust,tzoumas2018resilientSequential}, and of {non-resilient} maximization~\cite{conforti1984curvature,iyer2013curvature,bian2017guarantees}. Finally, we supported our theoretical analyses with numerical experiments.
This paper opens several avenues for future research, both in theory and in applications.
Future work in theory includes the extension of our results to sequential (multi-step) maximization, per the recent developments in~\cite{tzoumas2018resilientSequential}, to enable applications of sensor scheduling and of path planning in online optimization that \textit{adapts} against persistent attacks and failures~\cite{tokekar2014multi,tzoumas2016scheduling}.
Future work in applications includes the experimental testing of the proposed algorithm in applications of motion-planning for multi-target covering with mobile vehicles~\cite{tokekar2014multi},
to~enable resiliency in critical scenarios of surveillance.
\section{Introduction}\label{sec:Intro}
Applications in control, robotics, and optimization require system designs in problems such as:
\begin{itemize}
\item (\textit{Control}) \textit{Sensor placement}: In a linear time-invariant system, which few state variables should we measure to ensure observability?~\cite{PEQUITO2017261}
\item (\textit{Robotics}) \textit{Motion scheduling}: In a team of flying robots, how should we schedule the robots' motions to ensure they can track a maximal number of mobile targets?~\cite{tokekar2014multi}
\item (\textit{Optimization}) \textit{Data selection}: Given a flood of driving data collected from the smart-phones of several types of drivers (e.g., truck or commercial vehicle drivers), which
{few} data should we process from each driver-type to maximize the prediction accuracy of car traffic?~\cite{calinescu2007maximizing}
\end{itemize}
In particular, all the above applications motivate the selection of sensors and actuators subject to complex combinatorial constraints, called \textit{matroids}~\cite{fisher1978analysis}, which require the sensors and actuators not only to be few in number but also to satisfy
\textit{heterogeneity} restrictions (e.g., few driver data across each driver-type), or \textit{interdependency} restrictions (e.g., system observability). Other applications in control, robotics, and optimization that involve matroid constraints are:
\begin{itemize}
\item (\textit{Control}) Sparse actuator and sensor scheduling~\cite{summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman,jawaid2015submodularity}; stabilization and voltage control in power grids~\cite{liu2017submodular,liu2018submodular}; and synchronization in complex networks~\cite{clark2017toward};
\item (\textit{Robotics}) Task allocation in collaborative multi-robot systems~\cite{williams2017matroid}; and agile autonomous robot navigation and sparse visual-cue selection~\cite{carlone2016attention};
\item (\textit{Optimization}) Sparse signal
recovery and subset column selection~\cite{candes2006stable,boutsidis2009improved,elenberg2016restricted}; and sparse approximation, dictionary and feature selection~\cite{cevher2011greedy,das2011spectral,khanna2017scalable}.
\end{itemize}
In more detail, all the aforementioned applications~\cite{PEQUITO2017261,tokekar2014multi,calinescu2007maximizing,summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman,jawaid2015submodularity,liu2017submodular,liu2018submodular,clark2017toward,williams2017matroid,carlone2016attention,candes2006stable,boutsidis2009improved,elenberg2016restricted,cevher2011greedy,das2011spectral,khanna2017scalable} require the solution to an optimization problem of the form:
\begin{equation}\label{eq:non_res}
\underset{\!\mathcal{A}\subseteq \calV, \;\calA\in \calI}{\max} \; \;\; f(\mathcal{A}),
\end{equation}
where the set $\calV$ represents the available elements to choose from (e.g., the available sensors); the set $\calI$ represents a matroid constraint (e.g., a cardinality constraint on the number of sensors to be used, and/or a requirement for system observability); and the function~$f$ is a monotone and \textit{possibly} submodular objective function (submodularity is a diminishing returns property). In particular, $f$ can capture a performance objective (e.g., estimation accuracy).
Notably, the problem in eq.~\eqref{eq:non_res} is combinatorial, and, specifically, is NP-hard~\cite{Feige:1998:TLN:285055.285059}; notwithstanding, approximation algorithms have been proposed for its solution, such as the greedy algorithm~\cite{iyer2013curvature,bian2017guarantees,Feige:1998:TLN:285055.285059,fisher1978analysis,conforti1984curvature}.
But in all above critical applications, actuators can fail~\cite{willsky1976survey}; sensors can get (cyber-)attacked~\cite{wood2002dos}; and data can get deleted~\cite{mirzasoleiman2017deletion}. Hence, in such failure-prone and adversarial scenarios, \textit{resilient} matroid-constrained designs against denial-of-service attacks, failures, or deletions become important.
In this paper, we formalize for the first time a problem of \textit{resilient non-submodular maximization}, that goes beyond the traditional problem in eq.~\eqref{eq:non_res}, and guards against attacks, failures, and deletions. In~particular, we introduce the following resilient re-formulation of the problem in eq.~\eqref{eq:non_res}:
\begin{equation}\label{eq:res}
\underset{\!\mathcal{A}\subseteq \calV, \;\calA\in \calI}{\max} \; \; \underset{\;\mathcal{B}\subseteq \calA, \;\calB\in \calI'}{\min} \; \;\; f(\mathcal{A}\setminus \mathcal{B}),
\end{equation}
where the set $\calI'$ represents the collection of possible set-removals $\calB$ ---attacks, failures, or deletions--- from $\calA$, each of some specified cardinality. Overall, the problem in eq.~\eqref{eq:res}
maximizes~$f$ despite \textit{worst-case} failures that compromise the maximization in eq.~\eqref{eq:non_res}. Therefore, the problem formulation in eq.~\eqref{eq:res} is suitable in scenarios where there is no prior on the removal mechanism, as well as in scenarios where protection against worst-case removals is essential, such as in expensive experiment designs, or missions of adversarial-target tracking.
Particularly, the optimization problem in eq.~\eqref{eq:res} may be interpreted as a $2$-stage perfect information sequential game between two players~\cite[Chapter~4]{myerson2013game}, namely, a ``maximization'' player (designer), and a ``minimization'' player (attacker), where the designer plays first, by selecting $\calA$ to maximize the objective function $f$\!, and, in contrast, the attacker plays second, by selecting $\calB$ to minimize the objective function~$f$\!. In more detail, \emph{the attacker first observes the selection~$\calA$}, and then, selects $\calB$ such that $\calB$ is a worst-case set removal from~$\calA$.
In sum, the optimization problem in eq.~\eqref{eq:res} goes beyond traditional (non-resilient) optimization~\cite{Feige:1998:TLN:285055.285059,iyer2013curvature,bian2017guarantees,fisher1978analysis,conforti1984curvature} by proposing \textit{resilient} optimization;
beyond merely cardinality-constrained resilient optimization~\cite{orlin2015robust,tzoumas2017resilient,bogunovic2018robust} by proposing \textit{matroid-constrained} resilient optimization;
and beyond protection against {non}-adversarial set-removals~\cite{mirzasoleiman2017deletion,kazemi2017deletion} by proposing protection against \textit{worst-case} set-removals. Hence, the problem in eq.~\eqref{eq:res} aims to protect the complex design of systems, per {heterogeneity} or {interdependency} constraints, against attacks, failures, or deletions, which is a vital objective both for the safety of critical infrastructures, such as power grids~\cite{liu2017submodular,liu2018submodular}, and for the safety of critical missions, such as multi-target surveillance with teams of mobile robots~\cite{tokekar2014multi}.
\medskip
\myParagraph{Contributions} In this paper, we make the contributions:
\begin{itemize}
\item (\textit{Problem}) We formalize the problem of \textit{resilient maximization over matroid constraints} against denial-of-service removals, per eq.~\eqref{eq:res}. This is the first work to formalize, address, and motivate this problem.
\item (\textit{Solution}) We develop the first algorithm for the problem of resilient maximization over matroid constraints in eq.~\eqref{eq:res}, and prove it enjoys the following properties:
\begin{itemize}
\item \textit{system-wide resiliency}: the algorithm is valid for any number of removals;
\item \textit{minimal running time}: the algorithm terminates with the same running time as state-of-the-art algorithms for (non-resilient) matroid-constrained optimization;
\item \textit{provable approximation performance}: for functions $f$ that are monotone and (possibly) submodular
---as it holds true in all above applications~\cite{PEQUITO2017261,tokekar2014multi,calinescu2007maximizing,carlone2016attention,summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman,candes2006stable,boutsidis2009improved,elenberg2016restricted,liu2018submodular,jawaid2015submodularity,clark2017toward,cevher2011greedy,das2011spectral,khanna2017scalable,liu2017submodular,williams2017matroid},--- the algorithm ensures a solution close-to-optimal.
To quantify the algorithm's approximation performance, we use a notion of curvature for monotone (not necessarily submodular) set functions.
\end{itemize}
\item (\textit{Simulations}) We demonstrate the necessity for the resilient re-formulation of the problem in eq.~\eqref{eq:non_res} by conducting numerical experiments in various scenarios of sensing-constrained autonomous robot navigation, varying the number of sensor failures. In addition, via these experiments we demonstrate the benefits of our approach.
\end{itemize}
Overall, the proposed algorithm herein enables the resilient re-formulation and solution of all aforementioned matroid-constrained applications in control, robotics, and optimization~\cite{PEQUITO2017261,tokekar2014multi,calinescu2007maximizing,carlone2016attention,summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman,candes2006stable,boutsidis2009improved,elenberg2016restricted,liu2018submodular,jawaid2015submodularity,clark2017toward,cevher2011greedy,das2011spectral,khanna2017scalable,liu2017submodular,williams2017matroid}; we describe in detail the matroid constraints involved in all aforementioned applications in Section~\ref{sec:problem_statement}. Moreover, the proposed algorithm enjoys minimal running time, and provable approximation guarantees.
\medskip
\myParagraph{Organization of the rest of the paper}
Section~\ref{sec:problem_statement} formulates the problem of resilient maximization over matroid constraints (Problem~\ref{pr:robust_sub_max}), and describes types of matroid constraints in control, robotics, and optimization. Section~\ref{sec:algorithm} presents the first scalable, near-optimal algorithm for Problem~\ref{pr:robust_sub_max}. Section~\ref{sec:performance} presents the main result in this paper, which characterizes the scalability and performance guarantees of the proposed algorithm. Section~\ref{sec:simulations} presents numerical experiments over a control-aware sensor selection scenario. Section~\ref{sec:con} concludes the paper. All proofs are found in the Appendix.
\medskip
\myParagraph{Notation}
Calligraphic fonts denote sets (e.g., $\calA$). Given a set $\calA$, then $2^\calA$ denotes the power set of $\calA$, and $|\calA|$ denotes $\calA$'s cardinality; given also a set $\calB$, then $\calA\setminus\calB$ denotes the set of elements in $\calA$ that are not in~$\calB$, and $(\calA,\calB)$ is a shorthand for $\calA\cup\calB$. Given a ground set $\mathcal{V}$, a set function $f:2^\mathcal{V}\mapsto \mathbb{R}$, and an element $x\in \mathcal{V}$, then $f(x)$ is a shorthand for $f(\{x\})$.
\section{Resilient Non-Submodular Maximization over Matroid Constraints}\label{sec:problem_statement}
We formally define \emph{resilient non-submodular maximization over matroid constraints}.
We start with some basic definitions.
\begin{mydef}[Monotonicity]\label{def:mon}
Consider a finite ground set~$\mathcal{V}$. Then, a set function $f:2^\mathcal{V}\mapsto \mathbb{R}$ is \emph{non-decreasing} if and only if for any sets $\mathcal{A}\subseteq \mathcal{A}'\subseteq\mathcal{V}$, it holds $f(\mathcal{A})\leq f(\mathcal{A}')$.
\end{mydef}
\begin{mydef}[Matroid{~\cite[Section~39.1]{schrijver2003combinatorial}}]\label{def:matroid}
Consider a finite ground set~$\mathcal{V}$, and a non-empty collection of subsets of $\calV$, denoted by~$\calI$. Then, the pair $(\calV,\calI)$ is called a \emph{matroid} if and only if the following conditions hold:
\begin{itemize}
\item for any set $\calX\subseteq \calV$ such that $\calX\in \calI$, and for any set $\calZ$ such that $\calZ \subseteq \calX$, it holds $\calZ\in \calI$;
\item for any sets $\calX,\calZ\subseteq \calV$ such that $\calX,\calZ\in \calI$ and $|\calX|<|\calZ|$, it holds that there exists an element $z \in \calZ\setminus \calX$ such that $\calX\cup\{z\}\in \calI$.
\end{itemize}
\end{mydef}
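To make Definition~\ref{def:matroid} concrete, the two matroid axioms can be verified by brute force on a small ground set. The following Python sketch is purely illustrative (the independence oracle \texttt{indep} and the example instance are our own, not part of the paper), and checks the uniform matroid discussed next:

```python
from itertools import combinations

def is_matroid(V, indep):
    """Brute-force check of the two axioms in Definition 2 on a small
    ground set V; indep(S) is an independence oracle for the family I."""
    subsets = [frozenset(c) for r in range(len(V) + 1)
               for c in combinations(V, r)]
    family = [S for S in subsets if indep(S)]
    if not family:
        return False  # the collection I must be non-empty
    # Axiom 1 (downward closure): every subset of an independent set is independent.
    for X in family:
        if any(not indep(Z) for Z in subsets if Z <= X):
            return False
    # Axiom 2 (exchange): any smaller independent set can be grown from a larger one.
    for X in family:
        for Z in family:
            if len(X) < len(Z) and not any(indep(X | {z}) for z in Z - X):
                return False
    return True

# The uniform matroid with alpha = 2 over V = {1, 2, 3, 4}.
print(is_matroid({1, 2, 3, 4}, lambda S: len(S) <= 2))  # -> True
```

Such exhaustive checking is exponential in $|\calV|$, so it serves only as a sanity check on toy instances.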
We next motivate Definition~\ref{def:matroid} by presenting three matroid examples ---uniform, partition, and transversal matroid--- that appear in applications in control, robotics, and optimization.
\myParagraph{Uniform matroid, and applications}
A matroid $(\calV,\calI)$ is a \emph{uniform matroid} if for a positive integer~$\alpha$ it holds $\calI\equiv\{\calA: \calA\subseteq \calV, |\calA|\leq \alpha\}$.
Thus, the uniform matroid treats all elements in $\calV$ \textit{uniformly} (that is, as being the same), by only limiting their number in each set that is feasible in~$\calI$.
Applications of the uniform matroid in control, robotics, and optimization, arise when one cannot use an arbitrary number of system elements, e.g., sensors or actuators, to achieve a desired system performance; for example, such sparse element-selection scenarios are necessitated in resource constrained environments of, e.g., limited battery, communication bandwidth, or data processing time~\cite{carlone2016attention}. In more detail, applications of such sparse uniform selection in control, robotics, and optimization include the following:
\begin{itemize}
\item (\textit{Control}) Actuator and sensor placement, e.g., for system controllability with minimal control effort~\cite{summers2016actuator,tzoumas2016minimal}, and for optimal smoothing or Kalman filtering~\cite{tzoumas2016near,zhang2017kalman};
\item (\textit{Robotics}) Sparse visual-cue selection, e.g., for agile autonomous robot navigation~\cite{carlone2016attention};
\item (\textit{Optimization})
Sparse
recovery and column subset selection, e.g., for experiment design~\cite{candes2006stable,boutsidis2009improved,elenberg2016restricted}.
\end{itemize}
\myParagraph{Partition matroid, and applications} A matroid $(\calV,\calI)$ is a \emph{partition matroid} if for a positive integer $n$, disjoint sets $\calV_1,\ldots,\calV_n$, and positive integers $\alpha_1,\ldots,\alpha_n$, it holds $\calV\equiv \calV_1\cup\cdots\cup\calV_n$ and $\calI\equiv\{\calA: \calA \subseteq \calV,|\calA\cap \calV_i|\leq \alpha_i,\text{ for all } i=1,\ldots,n\}$. Hence, the partition matroid goes beyond the uniform matroid by allowing for \textit{heterogeneity} in the elements included in each set that is feasible in $\calI$. We~give two interpretations of the disjoint sets $\calV_1,\ldots,\calV_n$: the first interpretation considers that $\calV_1,\ldots,\calV_n$ correspond to the available elements across $n$ different \textit{types} (buckets) of elements, and correspondingly, the positive integers $\alpha_1,\ldots,\alpha_n$ constrain uniformly the number of elements one can use from each type $1,\ldots,n$ towards a system design goal; the second interpretation considers that $\calV_1,\ldots,\calV_n$ correspond to the available elements across $n$ different \textit{times}, and correspondingly, the positive integers $\alpha_1,\ldots,\alpha_n$ constrain uniformly the number of elements that one can use at each time $1,\ldots,n$.
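As a small illustration (the sensor names, types, and budgets below are hypothetical), the independence test for a partition matroid reduces to per-type cardinality checks:

```python
# Hypothetical instance: sensors of two types with per-type budgets.
V1, V2 = {"s1", "s2", "s3"}, {"s4", "s5"}  # disjoint blocks V_1, V_2
alpha1, alpha2 = 2, 1                      # budgets alpha_1, alpha_2

def in_partition_matroid(A):
    """Independence oracle: at most alpha_i elements from each block V_i."""
    return len(A & V1) <= alpha1 and len(A & V2) <= alpha2

print(in_partition_matroid({"s1", "s2", "s4"}))  # -> True
print(in_partition_matroid({"s1", "s4", "s5"}))  # -> False: two elements of V2
```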
Applications of the partition matroid in control, robotics, and optimization include all the aforementioned applications in scenarios where heterogeneity in the element-selection enhances the system performance; for example, to guarantee voltage control in power grids, one needs to (possibly) actuate different types of actuators~\cite{liu2018submodular}, and to guarantee active target tracking, one needs to activate different sensors at each time step~\cite{jawaid2015submodularity}. Additional applications of the partition matroid in control and robotics include the following:
\begin{itemize}
\item (\textit{Control}) Synchronization in complex dynamical networks, e.g., for missions of motion coordination~\cite{clark2017toward};
\item (\textit{Robotics}) Robot motion planning, e.g., for multi-target tracking with mobile robots~\cite{tokekar2014multi};
\item (\textit{Optimization})
Sparse approximation and feature selection, e.g., for sparse dictionary selection~\cite{cevher2011greedy,das2011spectral,khanna2017scalable}.
\end{itemize}
\myParagraph{Transversal matroid, and applications} A matroid $(\calV,\calI)$ is a \emph{transversal matroid} if for a positive integer~$n$, and a collection of subsets $\calS_1,\ldots,\calS_n$ of $\calV$, it holds that~$\calI$ is the collection of all partial transversals of $(\calS_1,\ldots,\calS_n)$ ---a {partial transversal} is defined as follows: for a finite set~$\calV$, a positive integer~$n$, and a collection of subsets $\calS_1,\ldots,\calS_n$ of $\calV$, a \emph{partial transversal} of $(\calS_1,\ldots,\calS_n)$ is a subset $\calP$ of $\calV$ such that there exists a one-to-one map $\phi: \calP\mapsto \{1,\ldots,n\}$ so that for all $p\in\calP$ it holds $p\in \calS_{\phi(p)}$; i.e., each element in $\calP$ intersects with one ---and only one--- set among the sets $\calS_1,\ldots,\calS_n$.
An application of the transversal matroid in control is that of actuation selection for optimal control performance subject to structural controllability constraints~\cite{PEQUITO2017261}.
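The partial-transversal condition above can be checked directly on small instances by searching over injective assignments of elements to sets. The sketch below is illustrative and exponential (it is not an efficient bipartite-matching algorithm), with a made-up example instance:

```python
from itertools import permutations

def is_partial_transversal(P, S):
    """Check whether P is a partial transversal of S = [S_1, ..., S_n]:
    does an injective map phi exist with p in S_{phi(p)} for every p in P?
    Brute force over injective assignments; fine only for tiny instances."""
    P = list(P)
    if len(P) > len(S):
        return False
    return any(all(p in S[i] for p, i in zip(P, assign))
               for assign in permutations(range(len(S)), len(P)))

S = [{"a", "b"}, {"b", "c"}]
print(is_partial_transversal({"a", "c"}, S))           # -> True
print(is_partial_transversal({"a"}, [{"b"}, {"c"}]))   # -> False
```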
\myParagraph{Additional examples} Other matroid constraints in control, robotics, and optimization are found in the following papers:
\begin{itemize}
\item (\textit{Control}) \cite{liu2017submodular}, for the stabilization of power grids;
\item (\textit{Robotics}) \cite{williams2017matroid}, for task allocation in multi-robot systems;
\item (\textit{Optimization}) \cite{calinescu2007maximizing}, for general task assignments.
\end{itemize}
Given the aforementioned matroid-constrained application-examples, we now define the main problem in this paper.
\begin{myproblem}\label{pr:robust_sub_max}
\emph{\textbf{(Resilient Non-Submodular Maximization over Matroid Constraints)}}
Consider the problem parameters:
\begin{itemize}
\item a matroid $(\calV,\calI)$;
\item a matroid $(\calV,\calI')$ such that $(\calV,\calI')$ is either a uniform matroid, or a partition matroid with the same partition as $(\calV,\calI)$;
\item a non-decreasing set function $f:2^{\mathcal{V}} \mapsto \mathbb{R}$
such that (without loss of generality) it holds $f(\emptyset)=0$, and for any set $\calA\subseteq \calV$, it also holds $f(\calA)\geq 0$.
\end{itemize}
The problem of \emph{resilient non-submodular maximization over matroid constraints} is to
maximize the function~$f$ by selecting a set $\calA\subseteq \calV$ such that $\calA\in \calI$, and accounting for any worst-case set removal $\calB\subseteq \calA$ from $\calA$ such that $\calB\in \calI'$\!\!.
Formally:\footnote{Given a matroid $(\calV,\calI')$, and any subset $\calA\subseteq \calV$, then, the $(\calA,\{\calB: \mathcal{B}\subseteq \calA, \calB\in \calI'\})$ is also a matroid~\cite[Section~39.3]{schrijver2003combinatorial}.}
\vspace*{1mm}
\begin{equation*}
\underset{\mathcal{A}\subseteq \calV,\; \calA\in \calI}{\max} \; \; \underset{\;\mathcal{B}\subseteq \calA,\; \calB\in \calI'}{\min} \; \;\; f(\mathcal{A}\setminus \mathcal{B}).
\end{equation*}
\end{myproblem}
\medskip
As we mentioned in this paper's Introduction, Problem~\ref{pr:robust_sub_max} may be interpreted as a $2$-stage perfect information sequential game between two players~\cite[Chapter~4]{myerson2013game}, namely, a ``maximization'' player, and a ``minimization'' player,
where the ``maximization'' player plays first by selecting the set~$\calA$, and, then, \emph{the ``minimization'' player observes $\calA$}, and plays second by selecting a worst-case set removal $\calB$ from $\calA$.
In sum, Problem~\ref{pr:robust_sub_max} aims to guard all the aforementioned applications in control, robotics, and optimization~\cite{PEQUITO2017261,tokekar2014multi,calinescu2007maximizing,carlone2016attention, summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman,candes2006stable,boutsidis2009improved,elenberg2016restricted,liu2018submodular,jawaid2015submodularity,clark2017toward,cevher2011greedy,das2011spectral,khanna2017scalable,liu2017submodular,williams2017matroid} against sensor and actuator attacks or failures, by proposing their resilient re-formulation, since all involve the maximization of non-decreasing functions subject to matroid constraints.
For example, we discuss the resilient re-formulation of two among the aforementioned applications~\cite{PEQUITO2017261,tokekar2014multi,calinescu2007maximizing,carlone2016attention, summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman,candes2006stable,boutsidis2009improved,elenberg2016restricted,liu2018submodular,jawaid2015submodularity,clark2017toward,cevher2011greedy,das2011spectral,khanna2017scalable,liu2017submodular,williams2017matroid}:
\setcounter{paragraph}{0}
\paragraph{Actuator placement for minimal control effort~\cite{summers2016actuator,tzoumas2016minimal}} Given a dynamical system, the design objective is to select a few actuators to place in the system to achieve controllability with minimal control effort~\cite{tzoumas2016minimal}. In particular, the actuator-selection framework is as follows: given a set $\calV$ of available actuators to choose from, then, up to $\alpha$ actuators can be placed in the system. In more detail, the aforementioned actuator-selection problem can be captured by a uniform matroid $(\calV,\calI)$ where $\calI\triangleq\{\calA:\calA\subseteq \calV,|\calA|\leq \alpha\}$. However, in the case of a failure-prone environment where up to~$\beta$ actuators may fail, then a resilient re-formulation of the aforementioned problem formulation is necessary: Problem~\ref{pr:robust_sub_max} suggests that such a resilient re-formulation can be achieved by modelling any set of $\beta$ actuator-failures in $\calA$ by a set $\calB$ in the uniform matroid on $\calA$ where $\calB\subseteq \calA$ and $|\calB|\leq \beta$.
\paragraph{Multi-target coverage with mobile robots~\cite{tokekar2014multi}} A number of adversarial targets are deployed in the environment, and a team of mobile robots~$\calR$ is tasked to cover them using on-board cameras. To this end, at each time step the robots in~$\calR$ need to jointly choose their motion. In particular, the movement-selection framework is as follows: given a finite set of possible moves~$\calM_i$ for each robot $i\in\calR$, then, at each time step each robot selects a move to make so that the team $\calR$ covers collectively as many targets as possible. In more detail, since each robot in~$\calR$ can make only one move per time, if we denote by~$\calA$ the set of moves to be made by each robot in~$\calR$, then the aforementioned movement-selection problem can be captured by a partition matroid $(\calV,\calI)$ such that $\calV=\cup_{i\in\calR} \calM_i$ and $\calI=\{\calA: \calA\subseteq \calV, |\calM_i\cap\calA|\leq 1, \text{ for all } i\in\calR\}$~\cite{tokekar2014multi}. However, in the case of an adversarial scenario where the targets can attack up to~$\beta$ robots, then a resilient re-formulation of the aforementioned problem formulation is necessary: Problem~\ref{pr:robust_sub_max} suggests that such a resilient re-formulation can be achieved by modelling any set of $\beta$ attacks to the robots in $\calR$ by a set $\calB$ in the uniform matroid on $\calA$ where $\calB\subseteq \calA$ and $|\calB|\leq \beta$.
\section{Algorithm for Problem~\ref{pr:robust_sub_max}} \label{sec:algorithm}
We present the first scalable algorithm for Problem~\ref{pr:robust_sub_max}.
The pseudo-code of the algorithm is described in Algorithm~\ref{alg:rob_sub_max}.
\subsection{Intuition behind Algorithm~\ref{alg:rob_sub_max}}\label{subsec:intuition}
The goal of Problem~\ref{pr:robust_sub_max} is to ensure a maximal value for an objective function $f$ through a single maximization step, despite compromises to the solution of the maximization step. In~particular, Problem~\ref{pr:robust_sub_max} aims to select a set $\calA$ towards a maximal value of $f$\!,
despite that $\calA$ is later compromised by a worst-case set removal $\calB$, resulting in $f$ being finally evaluated at the set $\calA\setminus \calB$ instead of the set $\calA$.
In~this~context, Algorithm~\ref{alg:rob_sub_max} aims to fulfil the goal of Problem~\ref{pr:robust_sub_max} by constructing the set $\calA$ as the union of two sets, namely, the $\calA_{1}$ and $\calA_{2}$ (line~\ref{line:selection} of Algorithm~\ref{alg:rob_sub_max}), whose role we describe in more detail below:
\setcounter{paragraph}{0}
\paragraph{Set $\calA_{1}$ approximates worst-case set removal from~$\calA$} Algorithm~\ref{alg:rob_sub_max} aims with the set $\calA_{1}$ to capture a worst-case set-removal of elements ---per the matroid $(\calV,\calI')$--- from the elements Algorithm~\ref{alg:rob_sub_max} is going to select in the set~$\calA$; equivalently, the set~$\calA_{1}$ is aimed to act as a ``bait'' to an attacker that selects to remove the \textit{best} set of elements from~$\calA$ per the matroid $(\calV,\calI')$ (\textit{best} with respect to the elements' contribution towards the goal of Problem~\ref{pr:robust_sub_max}). However, the problem of selecting the \textit{best} elements in~$\calV$ per a matroid constraint is a combinatorial and, in general, intractable problem~\cite{Feige:1998:TLN:285055.285059}.
For this reason, Algorithm~\ref{alg:rob_sub_max} aims to \textit{approximate} the best set of elements in $\calI'$\!\!, by letting $\calA_{1}$ be the set of elements with the largest marginal contributions to the value of the objective function~$f$ (lines~\ref{line:begin_while_1}-\ref{line:end_while_1} of Algorithm~\ref{alg:rob_sub_max}). In addition, since per Problem~\ref{pr:robust_sub_max} the set $\calA$ needs to be in the matroid $(\calV,\calI)$, Algorithm~\ref{alg:rob_sub_max} constructs $\calA_1$ so that not only $\calA_1\in\calI'$\!\!, as we described before, but also $\calA_1\in \calI$ (lines~\ref{line:begin_if_1}-\ref{line:end_if_1} of Algorithm~\ref{alg:rob_sub_max}).
\paragraph{Set $\calA_{2}$ is such that the set $\calA_{1}\cup \calA_{2}$ approximates optimal solution to Problem~\ref{pr:robust_sub_max}}
Assuming that~$\calA_{1}$ is the set that is going to be removed from Algorithm~\ref{alg:rob_sub_max}'s set selection~$\calA$,
Algorithm~\ref{alg:rob_sub_max} needs to select a set of elements $\calA_{2}$ to complete the construction of~$\calA$ so that~$\calA=\calA_1\cup\calA_2$ is in the matroid $(\calV,\calI)$, per Problem~\ref{pr:robust_sub_max}. In~particular, for $\calA=\calA_{1}\cup \calA_{2}$ to be an optimal solution to Problem~\ref{pr:robust_sub_max} (assuming the removal of~$\calA_{1}$ from $\calA$), Algorithm~\ref{alg:rob_sub_max} needs to select $\calA_{2}$ as a \textit{best} set of elements from $\calV\setminus\calA_{1}$ subject to the constraint that $\calA_1\cup\calA_2$ is in $(\calV,\calI)$ (lines~\ref{line:begin_if_2}-\ref{line:end_if_2} of Algorithm~\ref{alg:rob_sub_max}).
Nevertheless, the problem of selecting a \textit{best} set
of elements subject to such a constraint is a combinatorial and, in~general, intractable problem~\cite{Feige:1998:TLN:285055.285059}. Hence, Algorithm~\ref{alg:rob_sub_max} aims to \textit{approximate} such a best set,
using the greedy procedure in the lines~\ref{line:begin_while_2}-\ref{line:end_while_2} of Algorithm~\ref{alg:rob_sub_max}.
Overall, Algorithm~\ref{alg:rob_sub_max} constructs the sets $\calA_{1}$ and $\calA_{2}$ to approximate with their union $\calA$ an optimal solution to Problem~\ref{pr:robust_sub_max}.
We next describe the steps in Algorithm~\ref{alg:rob_sub_max} in more detail.
\begin{algorithm}[t]
\caption{Scalable algorithm for Problem~\ref{pr:robust_sub_max}.}
\begin{algorithmic}[1]
\REQUIRE Per Problem~\ref{pr:robust_sub_max}, Algorithm~\ref{alg:rob_sub_max} receives the inputs:
\begin{itemize}
\item a matroid $(\calV,\calI)$;
\item an either uniform or partition matroid $(\calV,\calI')$;
\item a non-decreasing set function $f:2^{\mathcal{V}} \mapsto \mathbb{R}$
such that it is $f(\emptyset)=0$, and for any set $\calA\subseteq \calV$, it also is $f(\calA)\geq 0$.
\end{itemize}
\ENSURE Set $\mathcal{A}$.
\medskip
\STATE $\mathcal{A}_{1}\leftarrow\emptyset$;~~~$\mathcal{R}_{1}\leftarrow\emptyset$;~~~$\mathcal{A}_{2}\leftarrow\emptyset$;~~~$\mathcal{R}_{2}\leftarrow\emptyset$;\label{line:initiliaze}
\WHILE {$\mathcal{R}_{1}\neq \calV$}\label{line:begin_while_1}
\STATE $x\in \arg\max_{y \in \calV\setminus\mathcal{R}_{1}} f(y)$;\label{line:select_element_bait}
\IF {$\mathcal{A}_{1}\cup\{x\}\in \calI$ and $\mathcal{A}_{1}\cup\{x\}\in \calI'$\!} \label{line:begin_if_1}
\STATE $\mathcal{A}_{1}\leftarrow\mathcal{A}_{1}\cup\{x\}$;\label{line:build_of_bait}
\ENDIF \label{line:end_if_1}
\STATE {$\mathcal{R}_{1}\leftarrow \mathcal{R}_{1}\cup \{x\}$}; \label{line:increase_removed_set_1}
\ENDWHILE \label{line:end_while_1}
\WHILE {$\mathcal{R}_{2}\neq \calV\setminus \calA_1$} \label{line:begin_while_2}
\STATE $x\in \arg\max_{y \in \mathcal{V}\setminus (\mathcal{A}_{1}\cup\mathcal{R}_{2})}f(\calA_2\cup \{y\})$; \label{line:greedy_selection}
\IF {$\mathcal{A}_{1}\cup\mathcal{A}_{2}\cup\{x\}\in \calI$} \label{line:begin_if_2}
\STATE $\mathcal{A}_{2}\leftarrow\mathcal{A}_{2}\cup\{x\}$;\label{line:build_of_greedy}
\ENDIF \label{line:end_if_2}
\STATE {$\mathcal{R}_{2}\leftarrow \mathcal{R}_{2}\cup \{x\}$}; \label{line:increase_removed_set_2}
\ENDWHILE \label{line:end_while_2}
\STATE $\mathcal{A}\leftarrow \mathcal{A}_{1} \cup \mathcal{A}_{2}$; \label{line:selection}
\end{algorithmic}\label{alg:rob_sub_max}
\end{algorithm}
\subsection{Description of steps in Algorithm~\ref{alg:rob_sub_max}}
Algorithm~\ref{alg:rob_sub_max} executes four steps:
\setcounter{paragraph}{0}
\paragraph{Initialization (line~\ref{line:initiliaze} of Algorithm~\ref{alg:rob_sub_max})} Algorithm~\ref{alg:rob_sub_max} defines four auxiliary sets, namely, the $\calA_{1}$, $\calR_1$, $\calA_2$, and $\calR_2$, and initializes each of them with the empty set (line~\ref{line:initiliaze} of Algorithm~\ref{alg:rob_sub_max}). \textit{The~purpose of $\calA_{1}$ and $\calA_2$} is to construct the set $\calA$, which is the set Algorithm~\ref{alg:rob_sub_max} selects as a solution to Problem~\ref{pr:robust_sub_max}; in particular,
the union of $\calA_{1}$ and of~$\calA_2$ constructs~$\calA$ by the end of Algorithm~\ref{alg:rob_sub_max} (line~\ref{line:selection} of Algorithm~\ref{alg:rob_sub_max}). \textit{The~purpose of $\calR_{1}$ and of $\calR_2$} is to support the construction of $\calA_{1}$ and $\calA_2$, respectively; in particular, during the construction of $\calA_1$, Algorithm~\ref{alg:rob_sub_max} stores in $\calR_1$ the elements of $\calV$ that have either been included already or cannot be included in $\calA_1$ (line~\ref{line:increase_removed_set_1} of Algorithm~\ref{alg:rob_sub_max}), and that way, Algorithm~\ref{alg:rob_sub_max} keeps track of which elements remain to be checked whether they could be added in $\calA_1$ (line~\ref{line:build_of_bait} of Algorithm~\ref{alg:rob_sub_max}). Similarly, during the construction of $\calA_2$, Algorithm~\ref{alg:rob_sub_max} stores in $\calR_2$ the elements of $\calV\setminus\calA_1$ that have either been included already or cannot be included in $\calA_2$ (line~\ref{line:increase_removed_set_2} of Algorithm~\ref{alg:rob_sub_max}), and that way, Algorithm~\ref{alg:rob_sub_max} keeps track of which elements remain to be checked whether they could be added in $\calA_2$ (line~\ref{line:build_of_greedy} of Algorithm~\ref{alg:rob_sub_max}).
\paragraph{Construction of set $\calA_{1}$ (lines~\ref{line:begin_while_1}-\ref{line:end_while_1} of Algorithm~\ref{alg:rob_sub_max})} Algorithm~\ref{alg:rob_sub_max} constructs the set $\calA_{1}$ sequentially ---by adding one element at a time from $\calV$ to $\calA_{1}$, over a sequence of multiple time-steps--- such that $\calA_{1}$ is contained in both the matroid $(\calV,\calI)$ and the matroid $(\calV,\calI')$ (line~\ref{line:begin_if_1} of Algorithm~\ref{alg:rob_sub_max}), and such that each element $v\in \calV$ that is chosen to be added in~$\calA_1$ achieves the highest marginal value of $f(v)$ among all the elements in $\calV$ that have not been yet added in~$\calA_1$ and can be added in $\calA_1$ (line~\ref{line:build_of_bait} of Algorithm~\ref{alg:rob_sub_max}).
\paragraph{Construction of set $\calA_{2}$ (lines~\ref{line:begin_while_2}-\ref{line:end_while_2} of Algorithm~\ref{alg:rob_sub_max})} Algorithm~\ref{alg:rob_sub_max} constructs the set $\calA_{2}$ sequentially, by picking greedily elements from the set $\calV\setminus \calA_{1}$ such that $\calA_1\cup\calA_2$ is contained in the matroid $(\calV,\calI)$.
Specifically, the greedy procedure in Algorithm~\ref{alg:rob_sub_max}'s ``while loop'' (lines~\ref{line:begin_while_2}-\ref{line:end_while_2} of Algorithm~\ref{alg:rob_sub_max}) selects an element $y\in\mathcal{V}\setminus (\mathcal{A}_{1}\cup\calR_2)$ to add in $\calA_{2}$ only if $y$ maximizes the value of $f(\calA_2\cup \{y\})$, where the set~$\calR_2$ stores the elements that either have already been added to $\calA_2$, or have been considered but were not added because the resultant set $\calA_1\cup\calA_2$ would not be in the matroid $(\calV,\calI)$.
\paragraph{Construction of set $\calA$ (line~\ref{line:selection} of Algorithm~\ref{alg:rob_sub_max})}
Algorithm~\ref{alg:rob_sub_max} constructs the set $\calA$ as the union of the previously constructed sets $\calA_{1}$ and~$\calA_2$ \mbox{(line~\ref{line:selection} of Algorithm~\ref{alg:rob_sub_max}).}
In sum, Algorithm~\ref{alg:rob_sub_max} proposes a set $\calA$ as solution to Problem~\ref{pr:robust_sub_max}, and in particular, Algorithm~\ref{alg:rob_sub_max} constructs the set $\calA$ so it can withstand any compromising set removal from~it.
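The steps above can be sketched in code as follows. This is an illustrative Python rendering of Algorithm~\ref{alg:rob_sub_max}, not the authors' implementation; the oracle signatures and the toy modular objective in the example are our own assumptions:

```python
def resilient_matroid_max(V, f, in_I, in_Iprime):
    """Illustrative sketch of Algorithm 1.
    in_I, in_Iprime: independence oracles for (V, I) and (V, I');
    f: non-decreasing set function on subsets of V with f(set()) = 0."""
    A1, R1 = set(), set()
    # First while-loop: build the "bait" set A1 greedily by singleton
    # value f({y}), keeping A1 independent in BOTH matroids.
    while R1 != V:
        x = max(V - R1, key=lambda y: f({y}))
        if in_I(A1 | {x}) and in_Iprime(A1 | {x}):
            A1 |= {x}
        R1 |= {x}
    A2, R2 = set(), set()
    # Second while-loop: greedily complete A = A1 | A2 inside (V, I),
    # ranking each candidate y by f(A2 | {y}).
    while R2 != V - A1:
        x = max(V - (A1 | R2), key=lambda y: f(A2 | {y}))
        if in_I(A1 | A2 | {x}):
            A2 |= {x}
        R2 |= {x}
    return A1 | A2

# Toy run: modular f, uniform matroid |A| <= 3, removals |B| <= 1.
w = {"a": 4, "b": 3, "c": 2, "d": 1}
f = lambda S: sum(w[e] for e in S)
A = resilient_matroid_max(set(w), f,
                          lambda S: len(S) <= 3,   # (V, I)
                          lambda S: len(S) <= 1)   # (V, I')
print(sorted(A))  # -> ['a', 'b', 'c']
```

In the toy run, the bait set is $\{a\}$ (the single best element, matching the removal budget), and the greedy completion adds $b$ and $c$, so a worst-case removal of one element still leaves value $3+2=5$.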
\section{Performance Guarantees for Algorithm~\ref{alg:rob_sub_max}}\label{sec:performance}
We quantify Algorithm~\ref{alg:rob_sub_max}'s performance, by bounding its running time, and its approximation performance. To this end, we use the following two notions of curvature for set functions, as well as, a notion of rank for a matroid.
\subsection{Curvature and total curvature of non-decreasing functions}\label{sec:total_curvature}
We present the notions of \emph{curvature} and of \emph{total curvature} for non-decreasing set functions. We start by describing the notions of \textit{modularity} and
\textit{submodularity} for set functions.
\begin{mydef}[Modularity]\label{def:modular}
Consider any finite set~$\mathcal{V}$. The set function $g:2^\mathcal{V}\mapsto \mathbb{R}$ is modular if and only if for any set $\mathcal{A}\subseteq \mathcal{V}$, it holds $g(\mathcal{A})=\sum_{\elem\in \mathcal{A}}g(\elem)$.
\end{mydef}
In words, a set function $g:2^\mathcal{V}\mapsto \mathbb{R}$ is modular if through~$g$ all elements in $\mathcal{V}$ cannot substitute each other; in particular, Definition~\ref{def:modular} of modularity implies that for any set $\mathcal{A}\subseteq\mathcal{V}$, and for any element $\elem\in \mathcal{V}\setminus\mathcal{A}$, it holds $g(\{\elem\}\cup\mathcal{A})-g(\mathcal{A})= g(\elem)$.
\begin{mydef}[Submodularity~{\cite[Proposition 2.1]{nemhauser78analysis}}]\label{def:sub}
Consider any finite set $\calV$. Then, the set function $g:2^\calV\mapsto \mathbb{R}$ is \emph{submodular} if and only if
for any sets $\mathcal{A}\subseteq \mathcal{A}'\subseteq\calV$, and any element $\elem\in \calV$, it holds
$g(\mathcal{A}\cup \{\elem\})\!-\!g(\mathcal{A})\geq g(\mathcal{A}'\cup \{\elem\})\!-\!g(\mathcal{A}')$.
\end{mydef}
Definition~\ref{def:sub} implies that a set function $g:2^\calV\mapsto \mathbb{R}$ is submodular if and only if it satisfies a diminishing returns property where
for any set $\mathcal{A}\subseteq \mathcal{V}$, and for any element $\elem\in \mathcal{V}$, the marginal gain $g(\mathcal{A}\cup \{\elem\})-g(\mathcal{A})$ is~non-increasing in $\calA$ (with respect to set inclusion).
In contrast to modularity, submodularity implies that the elements in $\mathcal{V}$ \emph{can} substitute each other, since Definition~\ref{def:sub} of submodularity implies the inequality $g(\{\elem\}\cup\mathcal{A})-g(\mathcal{A})\leq g(\elem)$; that is, in the presence of the set $\mathcal{A}$, the element $\elem$ may lose part of its contribution to the value of $g(\{\elem\}\cup\mathcal{A})$.
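For instance, a coverage function ---a standard monotone submodular example; the elements and targets below are arbitrary--- exhibits exactly this diminishing-returns behavior:

```python
# Coverage function: g(A) = number of targets covered by the elements of A.
cover = {"x": {1, 2}, "y": {2, 3}, "z": {3, 4}}
g = lambda A: len(set().union(*(cover[e] for e in A))) if A else 0

A, A2 = {"x"}, {"x", "y"}           # A is a subset of A2
gain_small = g(A | {"z"}) - g(A)    # z contributes targets 3 and 4
gain_large = g(A2 | {"z"}) - g(A2)  # z contributes only target 4
print(gain_small, gain_large)       # -> 2 1
assert gain_small >= gain_large     # the diminishing-returns inequality
```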
\begin{mydef}\label{def:curvature}
\emph{\textbf{(Curvature of monotone submodular functions~\cite{conforti1984curvature})}}
Consider a finite set $\mathcal{V}$, and a non-decreasing submodular set function $g:2^\mathcal{V}\mapsto\mathbb{R}$ such that (without loss of generality) for any element $\elem \in \mathcal{V}$, it is $g(\elem)\neq 0$. Then, the curvature of $g$ is defined as follows: \begin{equation}\label{eq:curvature}
\kappa_g\triangleq 1-\min_{\elem\in\mathcal{V}}\frac{g(\mathcal{V})-g(\mathcal{V}\setminus\{\elem\})}{g(\elem)}.
\end{equation}
\end{mydef}
Definition~\ref{def:curvature} of curvature implies that for any non-decreasing submodular set function $g:2^\mathcal{V}\mapsto\mathbb{R}$, it holds $0 \leq \kappa_g \leq 1$. In particular, the value of $\kappa_g$ measures how far~$g$ is from modularity, as we explain next: if $\kappa_g=0$, then for all elements $v\in\mathcal{V}$, it holds $g(\mathcal{V})-g(\mathcal{V}\setminus\{v\})=g(v)$, that is, $g$ is modular. In~contrast, if $\kappa_g=1$, then there exists an element $v\in\mathcal{V}$ such that $g(\mathcal{V})=g(\mathcal{V}\setminus\{v\})$, that is, in the presence of $\mathcal{V}\setminus\{v\}$, $v$~loses all its contribution to the value of $g(\mathcal{V})$.
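Eq.~\eqref{eq:curvature} is straightforward to evaluate on small instances. In the sketch below (our own illustrative coverage function), the element covering a target no one else covers contributes modularly, while overlapping elements drive the curvature up:

```python
# kappa_g = 1 - min over v of [g(V) - g(V \ {v})] / g({v}).
# Hypothetical submodular example: coverage with overlapping targets.
covers = {"s1": {1, 2}, "s2": {2, 3}, "s3": {4}}

def g(A):
    return len(set().union(*(covers[v] for v in A)) if A else set())

V = set(covers)

def curvature(f, V):
    return 1 - min((f(V) - f(V - {v})) / f({v}) for v in V)

print(curvature(g, V))  # 0.5: s1 and s2 overlap on target 2, s3 is independent
```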
\begin{mydef}\label{def:total_curvature}
\emph{\textbf{(Total curvature of non-decreasing functions~\cite[Section~8]{sviridenko2017optimal})}}
Consider a finite set $\mathcal{V}$, and a monotone set function $g:2^\mathcal{V}\mapsto\mathbb{R}$. Then, the total curvature of $g$ is defined as follows:
\begin{equation}\label{eq:total_curvature}
c_g\triangleq 1-\min_{v\in\mathcal{V}}\min_{\mathcal{A}, \mathcal{B}\subseteq \mathcal{V}\setminus \{v\}}\frac{g(\{v\}\cup\mathcal{A})-g(\mathcal{A})}{g(\{v\}\cup\mathcal{B})-g(\mathcal{B})}.
\end{equation}
\end{mydef}
Definition~\ref{def:total_curvature} of total curvature implies that for any non-decreasing set function $g:2^\mathcal{V}\mapsto\mathbb{R}$, it holds $0 \leq c_g \leq 1$. To connect the notion of total curvature with that of curvature, we note that when the function $g$ is non-decreasing and submodular, the two notions coincide, i.e., it holds $c_g=\kappa_g$; the reason is that if $g$ is non-decreasing and submodular, then the inner minimum in eq.~\eqref{eq:total_curvature} is attained for $\calA=\calV\setminus\{v\}$ and $\calB=\emptyset$.
In addition, to connect the above notion of total curvature with the notion of modularity, we note that
if $c_g=0$, then $g$ is modular, since eq.~\eqref{eq:total_curvature} implies that for any element $\elem\in\calV$, and for any sets $\calA,\calB\subseteq \calV\setminus \{\elem\}$, it holds:
\begin{equation}\label{eq:ineq_total_curvature}
(1-c_g)\left[g(\{\elem\}\cup\calB)-g(\calB)\right]\leq g(\{\elem\}\cup\calA)-g(\calA),
\end{equation}
which for $c_g=0$ implies the modularity of $g$. Finally,
to connect the above notion of total curvature with the notion of monotonicity, we mention that
if $c_g=1$, then eq.~\eqref{eq:ineq_total_curvature} implies that $g$ is merely non-decreasing (as already assumed in Definition~\ref{def:total_curvature} of total curvature).
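Eq.~\eqref{eq:total_curvature} can also be brute-forced on small ground sets; for a non-decreasing submodular example the result matches the curvature of Definition~\ref{def:curvature}, as noted above (example data and helper are ours):

```python
from itertools import combinations

# c_g = 1 - min over v, and over A, B subsets of V \ {v}, of the marginal ratio.
covers = {"s1": {1, 2}, "s2": {2, 3}, "s3": {4}}

def g(A):
    return len(set().union(*(covers[v] for v in A)) if A else set())

V = set(covers)

def total_curvature(f, V):
    best = 1.0  # taking A = B gives ratio 1, so the minimum is at most 1
    for v in V:
        rest = list(V - {v})
        subsets = [set(c) for r in range(len(rest) + 1) for c in combinations(rest, r)]
        for A in subsets:
            for B in subsets:
                den = f(B | {v}) - f(B)
                if den > 0:
                    best = min(best, (f(A | {v}) - f(A)) / den)
    return 1 - best

print(total_curvature(g, V))  # 0.5, equal to the curvature of this submodular g
```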
\subsection{Rank of a matroid}\label{subsec:rank_matroid}
We present a notion of rank for a matroid.
\begin{mydef}[Rank of a matroid{~\cite[Section~39.1]{schrijver2003combinatorial}}]\label{def:rank_matroid}
Consider a matroid $(\calV,\calI)$. Then, the rank of $(\calV,\calI)$ is the cardinality of a set $\calX\in\calI$ of maximum cardinality among all sets in $\calI$.
\end{mydef}
For example, per the discussions in Section~\ref{sec:problem_statement}, for a uniform matroid $(\calV,\calI)$ of the form $\calI\equiv\{\calA: \calA\subseteq \calV, |\calA|\leq \alpha\}$, the rank is equal to $\alpha$; and for a partition matroid $(\calV,\calI)$ of the form $\calV\equiv \calV_1\cup\cdots\cup\calV_n$ and $\calI\equiv\{\calA: \calA \subseteq \calV,|\calA\cap \calV_i|\leq \alpha_i,\text{ for all } i=1,\ldots,n\}$, the rank is equal to $\alpha_1+\ldots+\alpha_n$.
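Since in a matroid every maximal independent set is maximum, the rank can be computed greedily; a sketch for a hypothetical partition matroid (the sets and budgets below are ours):

```python
# Partition matroid: V = V1 | V2, at most a1 elements from V1 and a2 from V2.
V1, V2 = {"a", "b", "c"}, {"d", "e"}
a1, a2 = 2, 1

def independent(A):
    return len(A & V1) <= a1 and len(A & V2) <= a2

def rank(V, independent):
    """Greedily grow an independent set; in a matroid this attains the rank."""
    A = set()
    for v in V:
        if independent(A | {v}):
            A.add(v)
    return len(A)

print(rank(V1 | V2, independent))  # 3 = a1 + a2
```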
\subsection{Performance analysis for Algorithm~\ref{alg:rob_sub_max}}
We quantify Algorithm~\ref{alg:rob_sub_max}'s approximation performance, as well as its running time per maximization step in Problem~\ref{pr:robust_sub_max}.
\begin{mytheorem}[Performance of Algorithm~\ref{alg:rob_sub_max}]\label{th:alg_rob_sub_max_performance}
Consider an instance of Problem~\ref{pr:robust_sub_max}, the notation therein, the notation in Algorithm~\ref{alg:rob_sub_max}, and the definitions:
\begin{itemize}
\item let the number $f^\star$ be the (optimal) value to Problem~\ref{pr:robust_sub_max};
\item given a set $\mathcal{A}$ as solution to Problem~\ref{pr:robust_sub_max}, let $\mathcal{B}^\star(\mathcal{A})$ be an optimal (worst-case) set removal from $\mathcal{A}$, per Problem~\ref{pr:robust_sub_max}, that is:
$\mathcal{B}^\star(\mathcal{A})\in\arg\underset{\;\mathcal{B}\subseteq \calA, \calB\in \calI'(\calA)}{\min} \; \;\; f(\mathcal{A}\setminus \mathcal{B})$;
\item let the numbers $\alpha$ and $\beta$ be such that $\alpha$ is the rank of the matroid $(\calV,\calI)$; and $\beta$ is the rank of the matroid $(\calV,\calI')$;
\item define $h(\alpha,\beta)\triangleq \max [1/(1+\beta), 1/(\alpha-\beta)]$.\footnote{A plot of $h(\alpha,\beta)$ is found in Fig.~\ref{fig:bounds2}.}
\end{itemize}
The performance of Algorithm~\ref{alg:rob_sub_max} is bounded as follows:
\begin{enumerate}[leftmargin=*]
\item \emph{(Approximation performance)}~Algorithm~\ref{alg:rob_sub_max} returns a set $\calA$ such that $\mathcal{A}\subseteq \calV$, $\mathcal{A}\in \calI$, and:
\begin{itemize}
\item if $f$ is \emph{non-decreasing} and \emph{submodular}, and:
\begin{itemize}
\item if $(\calV,\calI)$ is a \emph{uniform} matroid, then:
\begin{equation}\label{ineq:bound_sub_uniform}
\frac{f(\mathcal{A}\setminus \mathcal{B}^\star(\calA))}{f^\star}\geq
\frac{\max\left[1-\kappa_f,h(\alpha,\beta)\right]}{\kappa_f}(1-e^{-\kappa_f});
\end{equation}
\item if $(\calV,\calI)$ is \emph{any} matroid, then:
\begin{equation}\label{ineq:bound_sub}
\frac{f(\mathcal{A}\setminus \mathcal{B}^\star(\calA))}{f^\star}\geq
\frac{\max\left[1-\kappa_f,h(\alpha,\beta)\right]}{1+\kappa_f};
\end{equation}
\end{itemize}
\item if $f$ is \emph{non-decreasing}, then:
\begin{equation}\label{ineq:bound_non_sub}
\frac{f(\mathcal{A}\setminus \mathcal{B}^\star(\calA))}{f^\star}\geq (1-c_f)^3\!\!.
\end{equation}
\end{itemize}
\item \emph{(Running time)}~Algorithm~\ref{alg:rob_sub_max} constructs the set $\calA$ as a solution to Problem~\ref{pr:robust_sub_max} with $O(|\mathcal{V}|^2)$ evaluations of $f$\!.
\end{enumerate}
\end{mytheorem}
\myParagraph{Provable approximation performance}
Theorem~\ref{th:alg_rob_sub_max_performance} implies the following on the approximation performance of Algorithm~\ref{alg:rob_sub_max}:
\setcounter{paragraph}{0}
\paragraph{{Near-optimality}} Both for any monotone submodular objective function $f$\!, and for any merely monotone objective function~$f$ with total curvature $c_f<1$, Algorithm~\ref{alg:rob_sub_max} guarantees a value for Problem~\ref{pr:robust_sub_max} finitely close to the optimal. In~particular, per ineq.~\eqref{ineq:bound_sub_uniform} and ineq.~\eqref{ineq:bound_sub} (\textit{case of submodular functions}), the~approximation factor of Algorithm~\ref{alg:rob_sub_max}
is bounded below by $\frac{h(\alpha,\beta)}{\kappa_f}(1-e^{-\kappa_f})$ and $\frac{h(\alpha,\beta)}{1+\kappa_f}$, respectively, which for any finite number~$\alpha$ are both
non-zero (see also Fig.~\ref{fig:bounds2}); in addition, per ineq.~\eqref{ineq:bound_sub_uniform} and ineq.~\eqref{ineq:bound_sub}, the~approximation factor of Algorithm~\ref{alg:rob_sub_max} is also bounded below by $\frac{1-\kappa_f}{\kappa_f}(1-e^{-\kappa_f})$ and $\frac{1-\kappa_f}{1+\kappa_f}$, respectively, which are also non-zero for any monotone submodular function~$f$ with $\kappa_f<1$ (see also Fig.~\ref{fig:bounds}).
Similarly, per ineq.~\eqref{ineq:bound_non_sub} (\textit{case of monotone functions}), the approximation factor of Algorithm~\ref{alg:rob_sub_max} is bounded by $(1-c_f)^3$\!, which is non-zero for any monotone function~$f$ with $c_f<1$ ---notably, although it is known for the problem of (non-resilient) set function maximization that the approximation bound $(1-c_f)$ is tight~\cite[Theorem~8.6]{sviridenko2017optimal}, the tightness of the bound $(1-c_f)^3$ in ineq.~\eqref{ineq:bound_non_sub} for Problem~\ref{pr:robust_sub_max} is an open problem.
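To get a feel for these guarantees, the bounds can be evaluated numerically; the sketch below assumes $h(\alpha,\beta)=\max\left[1/(1+\beta),\,1/(\alpha-\beta)\right]$, per the legend of Fig.~\ref{fig:bounds2} (function names are ours):

```python
import math

def h(alpha, beta):
    return max(1 / (1 + beta), 1 / (alpha - beta))

def bound_uniform(kappa, alpha, beta):
    """Lower bound of the uniform-matroid inequality: max[1-k, h]/k * (1 - e^-k)."""
    return max(1 - kappa, h(alpha, beta)) / kappa * (1 - math.exp(-kappa))

def bound_matroid(kappa, alpha, beta):
    """Lower bound of the general-matroid inequality: max[1-k, h] / (1+k)."""
    return max(1 - kappa, h(alpha, beta)) / (1 + kappa)

# With no removals (beta = 0), h = 1 and the non-resilient guarantees are recovered.
print(bound_uniform(0.5, 10, 0))  # = 2 * (1 - e^{-0.5})
print(bound_matroid(0.5, 10, 0))  # = 1 / 1.5
```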
We discuss classes of functions $f$ with curvatures $\kappa_f<1$ or $c_f<1$, along with relevant applications, in the remark below.
\begin{myremark}\emph{\textbf{(Classes of functions $f$ with $\kappa_f<1$ or $c_f<1$, and applications)}}
\emph{Classes of functions $f$ with $\kappa_f<1$} are the concave over modular functions~\cite[Section~2.1]{iyer2013curvature}, and the $\log\det$ of positive-definite matrices~\cite{sviridenko2015optimal,sharma2015greedy}. \emph{Classes of functions $f$ with $c_f<1$} are support selection functions~\cite{elenberg2016restricted},
and estimation error metrics such as the average minimum square error of the Kalman filter~\cite[Theorem~4]{tzoumas2018codesign}.
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\begin{axis}[
axis lines = left,
xtick = {0,1,...,9,10},
xticklabels={\Large $0$,$1$,,,$\alpha/2$,,,,,$\!\!\!\!\!\alpha-1$},
xlabel = $\beta$,
ytick = {0.2,0.5,1},
yticklabels={${2}/(\alpha+2)$,$0.5$,$1$},
ylabel = {$h(\alpha,\beta)$},
ymajorgrids=true,
xmajorgrids=true,
grid style=dashed,
legend style={at={(0.88,1)}},
ymin=0, ymax=1,
line width=0.8pt,
]
\addplot [
domain=0:10,
samples=11,
color=orange,
mark=triangle,
]
{max(1/(1+x),1/(10-x))};
\addlegendentry{$h(\alpha,\beta)\triangleq\max\left(\frac{1}{1+\beta},\frac{1}{\alpha-\beta}\right)$}
\end{axis}
\end{tikzpicture}
\caption{\small Given a natural number $\alpha$,
plot of $h(\alpha,\beta)$ versus~$\beta$. For any finite~$\alpha$, $h(\alpha,\beta)$ is non-zero, with minimum value $2/(\alpha+2)$ and maximum value $1$.
}\label{fig:bounds2}
\end{center}
\end{figure}
\definecolor{OliveGreen}{rgb}{0,0.6,0}
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[scale=0.8]
\begin{axis}[
axis lines = left,
xlabel = $\kappa_f$,
ylabel = {$g(\kappa_f)$},
ymajorgrids=true,
grid style=dashed,
legend style={at={(1.01,1)}},
ymin=0, ymax=1,
line width=0.8pt,
]
\addplot [
domain=0.01:1,
samples=30,
color=OliveGreen,
mark=o,
]
{1/x*(1-exp(-x))+0.012};
\addlegendentry{\!\!\!\!\!\!$g(\kappa_f)=\frac{1}{\kappa_f}\left(1-e^{-\kappa_f}\right)$}
\addplot [
domain=0.01:1,
samples=30,
color=magenta,
mark=star,
]
{1.03*(1-x)/x*(1-exp(-x))+0.00};
\addlegendentry{$g(\kappa_f)=\frac{1-\kappa_f}{\kappa_f}\left(1-e^{-\kappa_f}\right)$}
\addplot [
domain=0.01:1,
samples=30,
color=blue,
mark=+,
]
{(1-x)/(1+x)};
\addlegendentry{\hspace{-1.65cm}$g(\kappa_f)=\frac{1-\kappa_f}{1+\kappa_f}$}
\end{axis}
\end{tikzpicture}
\caption{Plot of $g(\kappa_f)$ versus curvature $\kappa_f$ of a monotone submodular function $f$. By definition, the curvature $\kappa_f$ of a monotone submodular function $f$ takes values between $0$ and $1$. $g(\kappa_f)$ increases from $0$ to $1$ as $\kappa_f$ decreases from $1$ to $0$.
}\label{fig:bounds}
\end{center}
\end{figure}
The aforementioned classes of functions $f$ with $\kappa_f<1$ or $c_f<1$ appear in applications of control, robotics, and optimization, such as actuator and sensor placement~\cite{summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman}, sparse approximation and feature selection~\cite{das2011spectral,khanna2017scalable}, and sparse
recovery and column subset selection~\cite{candes2006stable,boutsidis2009improved}; as a result, Problem~\ref{pr:robust_sub_max} enables critical applications such as resilient actuator placement for minimal control effort, resilient multi-robot navigation with minimal sensing and communication, and resilient experiment design; see, for example,~\cite{brent2018resilient}.
\end{myremark}
\paragraph{Approximation performance for low curvature}
For both monotone submodular and merely monotone functions $f$\!, when the curvature $\kappa_f$ and the total curvature~$c_f$, respectively, tend to zero, Algorithm~\ref{alg:rob_sub_max} becomes exact,
since for $\kappa_f\rightarrow 0$ and $c_f\rightarrow 0$ the terms $\frac{1-\kappa_f}{\kappa_f}(1-e^{-\kappa_f})$, $\frac{1-\kappa_f}{1+\kappa_f}$, and $(1-c_f)^3$ in ineqs.~\eqref{ineq:bound_sub_uniform}-\eqref{ineq:bound_non_sub} respectively, tend to $1$.
Overall, Algorithm~\ref{alg:rob_sub_max}'s curvature-dependent
approximation bounds make a first step towards separating
the classes of monotone submodular and merely monotone functions into
functions for which Problem~\ref{pr:robust_sub_max}
can be approximated well (low curvature functions), and functions for which it cannot \mbox{(high curvature functions).}
A machine learning problem where Algorithm~\ref{alg:rob_sub_max} guarantees an approximation performance close to $100\%$ the optimal is that of Gaussian process regression for processes with RBF kernels~\cite{krause2008near,bishop2006pattern}; this problem emerges in applications of sensor deployment and scheduling for temperature monitoring. The reason that in this class of regression problems Algorithm~\ref{alg:rob_sub_max} performs almost optimally is that the involved objective function is the entropy of the selected sensor measurements, which for Gaussian processes with RBF kernels has curvature value close to zero~\cite[Theorem~5]{sharma2015greedy}.
\paragraph{Approximation performance for no attacks or failures}
Both for monotone submodular functions~$f$\!, and for merely monotone functions $f$\!, when the number of set removals is zero, ---i.e., when $\calI'=\emptyset$ in Problem~\ref{pr:robust_sub_max}, which implies $\beta=0$ in Theorem~\ref{th:alg_rob_sub_max_performance},--- Algorithm~\ref{alg:rob_sub_max}'s approximation performance is the same as that of the state-of-the-art algorithms for (non-resilient) set function maximization. In~particular, \textit{for monotone submodular functions}, scalable algorithms for (non-resilient) set function maximization have approximation performance at least $\frac{1}{\kappa_f}(1-e^{-\kappa_f})$ the optimal for any uniform matroid constraint~\cite[Theorem~5.4]{conforti1984curvature}, and $\frac{1}{1+\kappa_f}$ the optimal for any matroid constraint~\cite[Theorem~2.3]{conforti1984curvature}; at the same time, per Theorem~\ref{th:alg_rob_sub_max_performance}, when $\beta=0$, then Algorithm~\ref{alg:rob_sub_max} also has approximation performance at least $\frac{1}{\kappa_f}(1-e^{-\kappa_f})$ the optimal for any uniform matroid constraint, and $\frac{1}{1+\kappa_f}$ the optimal for any matroid constraint, since for $\beta=0$ it is $h(\alpha,\beta)=1$ in ineq.~\eqref{ineq:bound_sub_uniform} and ineq.~\eqref{ineq:bound_sub}. Finally, \textit{for monotone functions~$f$\!}, and for $\calI'=\emptyset$, Algorithm~\ref{alg:rob_sub_max} is the same as the algorithm proposed in~\cite[Section~2]{fisher1978analysis} for (non-resilient) set function maximization, whose performance is optimal~\cite[Theorem~8.6]{sviridenko2017optimal}.
\myParagraph{Minimal running time}
Theorem~\ref{th:alg_rob_sub_max_performance} implies that Algorithm~\ref{alg:rob_sub_max}, even though it goes beyond the objective of (non-resilient) set function optimization by accounting for attacks and failures, has the same order of running time as state-of-the-art algorithms for (non-resilient) set function optimization. In particular, such algorithms for (non-resilient) set function optimization~\cite{nemhauser78analysis,fisher1978analysis,sviridenko2017optimal} terminate with $O(|\calV|^2)$ evaluations of the function~$f$\!, and Algorithm~\ref{alg:rob_sub_max} also terminates with $O(|\calV|^2)$ evaluations of the function~$f$\!.
\myParagraph{Summary of theoretical results} In sum, Algorithm~\ref{alg:rob_sub_max} is the first algorithm for the problem of resilient maximization over matroid constraints (Problem~\ref{pr:robust_sub_max}), and it enjoys:
\begin{itemize}
\item \textit{system-wide resiliency}: Algorithm~\ref{alg:rob_sub_max} is valid for any number of denial-of-service attacks and failures;
\item \textit{minimal running time}: Algorithm~\ref{alg:rob_sub_max} terminates with the same running time as state-of-the-art algorithms for (non-resilient) matroid-constrained optimization;
\item \textit{provable approximation performance}: for all monotone objective functions $f$ that are either submodular or merely non-decreasing with total curvature $c_f<1$, Algorithm~\ref{alg:rob_sub_max} ensures a solution finitely close to the optimal.
\end{itemize}
Overall, Algorithm~\ref{alg:rob_sub_max} makes the first step to ensure the success of critical applications in control, robotics, and optimization~\cite{candes2006stable,boutsidis2009improved,summers2016actuator,tzoumas2016minimal,tzoumas2016near,zhang2017kalman,carlone2016attention,liu2018submodular,jawaid2015submodularity,clark2017toward,tokekar2014multi,cevher2011greedy,das2011spectral,elenberg2016restricted,khanna2017scalable,PEQUITO2017261,liu2017submodular,williams2017matroid,calinescu2007maximizing}, despite compromising worst-case attacks or failures, and with minimal running time.
\section{Numerical Experiments on Control-Aware Sensor Selection}\label{sec:simulations}
In this section, we demonstrate the performance of Algorithm~\ref{alg:rob_sub_max} in numerical experiments. In particular, we consider a control-aware sensor selection scenario, namely, \textit{sensing-constrained robot navigation}, where the robot's localization for navigation is supported both by sensors on-board the robot and by sensors deployed in the environment.\footnote{The scenario of {sensing-constrained robot navigation} with on-board sensors is introduced and motivated in~\cite[Section~V]{tzoumas2018codesign}; see also~\cite{vitus2011sensor} for the case of autonomous robot navigation with deployed sensors in the environment.}
Specifically, we consider an unmanned aerial vehicle (UAV) whose objective is to land but whose battery and measurement-processing power are limited. As a result, the UAV can activate only a subset of its available sensors so as to
localize itself, and thereby enable the generation of a control input
for landing. In particular, we consider that the UAV generates its control input via an LQG controller, given the measurements from the activated sensor set~\cite{bertsekas2005dynamic}.
In more detail, herein we present a Monte Carlo analysis of the above sensing-constrained robot navigation scenario for instances where sensor failures are present, and observe that Algorithm~\ref{alg:rob_sub_max} results in a near-optimal sensor selection; that is, the resulting navigation performance of the UAV matches the optimal in all tested instances where the optimal sensor selection could be computed via a brute-force algorithm.
\myParagraph{Simulation setup} We consider a UAV that moves in a 3D space, starting from a
randomly selected initial location.
The objective of the UAV is to land at
position $[0,\;0,\;0]$ with zero velocity.
The UAV is modelled as a double-integrator
with state $x_t = [p_t \; v_t]^\top \in \Real{6}$ at each time $t=1,2,\ldots$
($p_t$ is the 3D position of the UAV, and $v_t$ is its velocity), and can control its own acceleration
$u_t \in \Real{3}$; the process noise is chosen as $W_t = \eye_6$.
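Concretely, the double-integrator update decouples per axis; a minimal sketch (the sampling period \texttt{dt} is our assumption, as the text does not specify it):

```python
dt = 1.0  # hypothetical sampling period

def step(p, v, u):
    """One discrete double-integrator step per 3D axis: state x = [p; v], input u."""
    p_next = [pi + dt * vi + 0.5 * dt ** 2 * ui for pi, vi, ui in zip(p, v, u)]
    v_next = [vi + dt * ui for vi, ui in zip(v, u)]
    return p_next, v_next

p, v = [10.0, -5.0, 20.0], [0.0, 0.0, 0.0]  # initial position and velocity
u = [0.0, 0.0, -1.0]                        # constant downward acceleration
p, v = step(p, v, u)
print(p, v)  # [10.0, -5.0, 19.5] [0.0, 0.0, -1.0]
```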
The UAV may support its localization by utilizing $2$ on-board sensors and $12$ sensors deployed on the ground. The on-board sensors are one GPS receiver, measuring the
UAV position $p_t$ with a covariance~$2 \cdot\eye_3$,
and one altimeter, measuring only the last component of $p_t$ (altitude) with standard deviation $0.5\rm{m}$. The ground sensors vary with each Monte Carlo run, and are generated randomly; we consider them to provide linear measurements of the UAV's state.
Among the aforementioned $14$ sensors available to the UAV, we assume that the UAV can use only $\alpha$ of them.
In particular, the UAV chooses the $\alpha$ sensors to activate so as to minimize an LQG cost of the form:
\begin{equation}\label{eq:lqg_cost}
\sum_{t=1}^{T}[x_t^\top Qx_t+u_t^\top Ru_t],
\end{equation}
per the problem formulation in~\cite[Section~II]{tzoumas2018codesign}, where the cost matrix $Q$ penalizes the deviation of the state vector from the zero state (since the UAV's objective is to land at position $[0,\;0,\;0]$ with zero velocity), and the cost matrix $R$ penalizes the control input vector;
specifically, in the simulation setup herein we consider $Q = \diag{[10^{-3},\; 10^{-3},\;10,\; 10^{-3},\; 10^{-3},\; 10]}$
and $R = \eye_3$. Note that the structure of $Q$ reflects the fact that during landing
we are particularly interested in controlling the vertical direction and the vertical velocity
(entries with larger weight in $Q$), while we are less interested in controlling accurately the
horizontal position and velocity (assuming a sufficiently large landing site). In~\cite[Section~III]{tzoumas2018codesign} it is proven that the UAV selects an optimal sensor set $\calS$, and enables the generation of an optimal LQG control input with cost matrices $Q$ and $R$, if it selects $\calS$ by minimizing an objective function of the form:
\begin{equation}\label{eq:opt_sensors}
\sum_{t=1}^{T}\text{trace}[M_t\Sigma_{t|t}(\calS)],
\end{equation}
where $M_t$ is a positive semi-definite matrix that depends on the LQG cost matrices $Q$ and $R$, as well as, on the UAV's system dynamics; and $\Sigma_{t|t}(\calS)$ is the error covariance of the Kalman filter given the sensor set selection $\calS$.
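For intuition on the objective in eq.~\eqref{eq:opt_sensors}, the sketch below evaluates an analogous cost for a scalar state (all constants and sensor noise variances are hypothetical; the paper's setting is 6-dimensional with time-varying $M_t$):

```python
def covariance_objective(selected, r, q=1.0, sigma0=1.0, T=5, m=1.0):
    """Scalar Kalman recursion: predict sigma <- sigma + q, then fuse the
    selected sensors' precisions 1/r[s]; the cost adds m * sigma per step,
    mimicking sum_t trace(M_t * Sigma_{t|t}(S))."""
    sigma, cost = sigma0, 0.0
    for _ in range(T):
        sigma = sigma + q                                  # prediction
        info = 1.0 / sigma + sum(1.0 / r[s] for s in selected)
        sigma = 1.0 / info                                 # measurement update
        cost += m * sigma
    return cost

r = {"gps": 2.0, "alt": 0.25, "ground1": 1.0}  # hypothetical noise variances
print(covariance_objective({"gps", "alt"}, r))  # lower cost than fewer sensors
print(covariance_objective(set(), r))           # no sensors: covariance grows
```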
\myParagraph{Compared algorithms}
We compare four algorithms; all algorithms
only differ in how they select the sensors used.
The~first algorithm is the optimal sensor selection algorithm, denoted as \toptimal, which
attains the minimum of the cost function in eq.~\eqref{eq:opt_sensors}; this brute-force approach is viable since the number of available sensors is small.
The second approach is a random sensor selection, denoted as {\tt random$^*$}\!.
The third approach, denoted as \tlogdet, selects sensors to greedily minimize the cost function in eq.~\eqref{eq:opt_sensors}, \textit{ignoring the possibility of sensor failures}, per the problem formulation in eq.~\eqref{eq:non_res}.
The fourth approach uses Algorithm~\ref{alg:rob_sub_max} to solve the resilient re-formulation of eq.~\eqref{eq:opt_sensors} per Problem~\ref{pr:robust_sub_max}, and is denoted as \tslqg. From each of the selected sensor sets, by each of the above four algorithms respectively, we consider an optimal sensor removal, which we compute via brute-force.
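The brute-force worst-case removal used in this evaluation can be sketched as follows (with a hypothetical coverage-style value in place of the cost of eq.~\eqref{eq:opt_sensors}; here the value is maximized, so the worst-case removal minimizes it):

```python
from itertools import combinations

# Hypothetical sensors, each covering some targets; f(A) = targets covered.
covers = {1: {"x"}, 2: {"x", "y"}, 3: {"y", "z"}, 4: {"w"}}

def f(A):
    return len(set().union(*(covers[s] for s in A)) if A else set())

def worst_case_value(A, beta):
    """Minimum objective value after removing the most damaging beta sensors."""
    return min(f(set(A) - set(B)) for B in combinations(A, min(beta, len(A))))

A = {1, 2, 3, 4}
print(worst_case_value(A, 1))  # 3: losing sensor 3 or 4 hurts most
print(worst_case_value(A, 2))  # 2
```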
\myParagraph{Results} We next present our simulation results averaged over 20 Monte Carlo runs of the above simulation setup, where we vary the number~$\alpha$ of sensor selections from $2$ up to $12$ with step $1$, and the number~$\beta$ of sensors failures from $1$ to $10$ with step $3$, and where we~randomize the
sensor matrices of the $12$ ground sensors. In particular, the results of our numerical analysis are reported in Fig.~\ref{fig:formationControlStats}.
In more detail, Fig.~\ref{fig:formationControlStats} shows the attained \LQG cost for all the combinations of $\alpha$ and $\beta$ values where $\beta\leq \alpha$ (for $\beta>\alpha$ the LQG cost is considered $+\infty$, since $\beta>\alpha$ implies that all $\alpha$ selected sensors fail). The following observations from Fig.~\ref{fig:formationControlStats} are in order:
\begin{itemize}
\item \textit{Near-optimality of the \tslqg algorithm (Algorithm~\ref{alg:rob_sub_max})}: Algorithm~\ref{alg:rob_sub_max} ---blue colour in Fig.~\ref{fig:formationControlStats}--- performs close to the optimal algorithm \toptimal ---green colour in Fig.~\ref{fig:formationControlStats}. In particular, across all but two scenarios in Fig.~\ref{fig:formationControlStats}, Algorithm~\ref{alg:rob_sub_max} achieves an approximation performance at least 97\% the optimal; and in the remaining two scenarios (see Fig.~\ref{fig:formationControlStats}-(a) for $\alpha$ equal to $3$ or $4$, and $\beta$ equal to~$1$), Algorithm~\ref{alg:rob_sub_max} achieves an approximation performance at least 90\% the optimal.
\item \textit{Performance of the \tlogdet algorithm}: The \tlogdet algorithm ---red colour in Fig.~\ref{fig:formationControlStats}--- performs poorly as the number~$\beta$ of sensor failures increases, which is expected given that the \tlogdet algorithm minimizes the cost function in eq.~\eqref{eq:opt_sensors} {ignoring the possibility of sensor failures}. Notably, for some of the cases the \tlogdet performs worse or equally poor as the {\tt random$^*$}: for example, see Fig.~\ref{fig:formationControlStats}-(c) for $\alpha\geq 9$, and Fig.~\ref{fig:formationControlStats}-(d).
\item \textit{Performance of the {\tt random$^*$} algorithm}: Expectedly, the performance of the {\tt random$^*$} algorithm ---black colour in Fig.~\ref{fig:formationControlStats}--- is also poor across all scenarios in Fig.~\ref{fig:formationControlStats}.
\end{itemize}
\definecolor{OliveGreen}{rgb}{0,0.6,0}
\begin{figure*}[t]
\begin{center}
\begin{minipage}{\textwidth}
\centering
\hspace{-5mm}
\begin{tabular}{cc}%
\begin{minipage}{4.5cm}%
\centering
\begin{tikzpicture}[scale=0.7]
\begin{axis}[
axis lines = left,
ymin=2300,
xlabel = {\large$\alpha$},
xticklabels={$2$,$3$,$4$,$5$,$6$,$7$,$8$,$9$,$10$,$11$,$12$},
xtick = {2,...,12},
ylabel = {\large\text{LQG cost per eq.~\eqref{eq:lqg_cost}}},
legend pos=south east,
ymajorgrids=true,
grid style=dashed,
line width=1.1pt,
legend style={at={(1,.7)}},
]
\addplot[
color=black,
mark=cross,
style={solid},mark=star,
]
coordinates {
(2,4.429e+04)(3,9763)(4,2.28e+04
)(5,6033)(6,7847)(7,6630)(8,5142)(9,3311)(10,4990)(11,4163)(12,3396)
};
\addlegendentry{{\tt random$^*$}}
\addplot[
color=red,
mark=cross,
style={densely dotted}, mark=otimes*
]
coordinates {
(2,1.3e+04)(3,6750)(4,5700
)(5,5401)(6,4506)(7,4876)(8,3793)(9,2469)(10,3128)(11,4063)(12,3382)
};
\addlegendentry{\tlogdet}
\addplot[
color=blue,
mark=cross,
style={solid},mark=square
]
coordinates {
(2,8874)(3,6909)(4,6038
)(5,4964)(6,4298)(7,4876)(8,3793)(9,2469)(10,3128)(11,4063)(12,3382)
};
\addlegendentry{\tslqg}
\addplot[
color=OliveGreen,
style=solid,
]
coordinates {
(2,8874)(3,5983)(4,4907
)(5,4964)(6,4298)(7,4876)(8,3793)(9,2469)(10,3128)(11,4063)(12,3382)
};
\addlegendentry{\toptimal}
\end{axis}
\end{tikzpicture} \\
(a) \hspace{1.9cm}$\beta=1$
\end{minipage}
& \hspace{20mm}
\begin{minipage}{4.5cm}%
\centering%
\begin{tikzpicture}[scale=0.7]
\begin{axis}[
axis lines = left,
ymin=3000,
xlabel = {\large$\alpha$},
xticklabels={$2$,$3$,$4$,$5$,$6$,$7$,$8$,$9$,$10$,$11$,$12$},
xtick = {2,...,12},
ylabel = {\large\text{LQG cost per eq.~\eqref{eq:lqg_cost}}},
legend pos=south east,
ymajorgrids=true,
grid style=dashed,
line width=1.1pt,
legend style={at={(1,.7)}},
]
\addplot[
color=black,
mark=cross,
style={solid},mark=star,
]
coordinates {
(5,3.008e+05)(6,5.906e+04)(7,4.099e+04
)(8,1.974e+04)(9,1.217e+04)(10,1.392e+04)(11,1.414e+04)(12,1.258e+04)
};
\addlegendentry{{\tt random$^*$}}
\addplot[
color=red,
mark=cross,
style={densely dotted}, mark=otimes*
]
coordinates {
(5,1.066e+05)(6,3.244e+04)(7,9947
)(8,9485)(9,1.063e+04)(10,1.251e+04)(11,1.181e+04)(12,1.021e+04)
};
\addlegendentry{\tlogdet}
\addplot[
color=blue,
mark=cross,
style={solid},mark=square
]
coordinates {
(5,4.285e+04)(6,1.671e+04)(7,9354
)(8,7411)(9,1.063e+04)(10,1.251e+04)(11,1.072e+04)(12,9991)
};
\addlegendentry{\tslqg}
\addplot[
color=OliveGreen,
style=solid,
]
coordinates {
(5,4.285e+04)(6,1.671e+04)(7,9354
)(8,7411)(9,1.063e+04)(10,1.251e+04)(11,1.072e+04)(12,9991)
};
\addlegendentry{\toptimal}
\end{axis}
\end{tikzpicture} \\
(b) \hspace{1.9cm}$\beta=4$
\end{minipage}\\
\hspace{2mm}
\\
\begin{minipage}{4.5cm}%
\centering
\begin{tikzpicture}[scale=0.7]
\begin{axis}[
axis lines = left,
ymin=2300,
xlabel = {\large$\alpha$},
xticklabels={$2$,$3$,$4$,$5$,$6$,$7$,$8$,$9$,$10$,$11$,$12$},
xtick = {2,...,12},
ylabel = {\large\text{LQG cost per eq.~\eqref{eq:lqg_cost}}},
legend pos=south east,
ymajorgrids=true,
grid style=dashed,
line width=1.1pt,
legend style={at={(1,.7)}},
]
\addplot[
color=black,
mark=cross,
style={solid},mark=star,
]
coordinates {
(8,3.554e+05)(9,4.797e+04)(10,4.166e+04)(11,3.007e+04)(12,2.578e+04)
};
\addlegendentry{{\tt random$^*$}}
\addplot[
color=red,
mark=cross,
style={densely dotted}, mark=otimes*
]
coordinates {
(8,1.853e+05)(9,5.837e+04)(10,4.465e+04)(11,3.176e+04)(12,2.326e+04)
};
\addlegendentry{\tlogdet}
\addplot[
color=blue,
mark=cross,
style={solid},mark=square
]
coordinates {
(8,8.034e+04)(9,3.527e+04)(10,3.545e+04)(11,2.033e+04)(12,2.326e+04)
};
\addlegendentry{\tslqg}
\addplot[
color=OliveGreen,
style=solid,
]
coordinates {
(8,8.034e+04)(9,3.488e+04)(10,3.545e+04)(11,2.033e+04)(12,2.326e+04)
};
\addlegendentry{\toptimal}
\end{axis}
\end{tikzpicture} \\
(c) \hspace{1.9cm}$\beta=7$
\end{minipage}
& \hspace{20mm}
\begin{minipage}{4.5cm}%
\centering%
\begin{tikzpicture}[scale=0.7]
\begin{axis}[
axis lines = left,
ymin=2300,
xlabel = {\large$\alpha$},
xticklabels={$2$,$3$,$4$,$5$,$6$,$7$,$8$,$9$,$10$,$11$,$12$},
xtick = {2,...,12},
ylabel = {\large\text{LQG cost per eq.~\eqref{eq:lqg_cost}}},
legend pos=south east,
ymajorgrids=true,
grid style=dashed,
line width=1.1pt,
legend style={at={(1,.7)}},
]
\addplot[
color=black,
mark=cross,
style={solid},mark=star,
]
coordinates {
(11,7.352e+05)(12,6.833e+04)
};
\addlegendentry{{\tt random$^*$}}
\addplot[
color=red,
mark=cross,
style={densely dotted}, mark=otimes*
]
coordinates {
(11,7.352e+05)(12,5.36e+04)
};
\addlegendentry{\tlogdet}
\addplot[
color=blue,
mark=cross,
style={solid},mark=square
]
coordinates {
(11,1.624e+05)(12,5.36e+04)
};
\addlegendentry{\tslqg}
\addplot[
color=OliveGreen,
style=solid,
]
coordinates {
(11,1.624e+05)(12,5.36e+04)
};
\addlegendentry{\toptimal}
\end{axis}
\end{tikzpicture} \\
(d) \hspace{2.05cm}$\beta=10$
\end{minipage}
\end{tabular}
\end{minipage}%
\vspace{-1mm}
\caption{\label{fig:formationControlStats}\small
\LQG cost for increasing number of sensor selections $\alpha$ (from $2$ up to $12$ with step $1$), and for $4$ values of $\beta$ (number of sensor failures among the $\alpha$ selected sensors); in particular, the value of $\beta$ varies across the sub-figures as follows: $\beta=1$ in sub-figure (a); $\beta=4$ in sub-figure (b); $\beta=7$ in sub-figure (c); and $\beta=10$ in sub-figure (d).
}\vspace{-5mm}
\end{center}
\end{figure*}
Overall, in the above numerical experiments, Algorithm~\ref{alg:rob_sub_max} demonstrates a close-to-optimal approximation performance, and the necessity for a resilient re-formulation of the optimization problem in eq.~\eqref{eq:non_res}, e.g., per Problem~\ref{pr:robust_sub_max}, is exemplified.
\section{Introduction}
Recently, deep learning approaches have achieved tremendous
success in classification problems~\cite{krizhevsky2012imagenet} as well as low-level computer vision problems such as segmentation~\cite{ronneberger2015u}, denoising~\cite{zhang2016beyond}, super resolution~\cite{kim2015accurate, shi2016real}, etc.
The theoretical origin of their success has been investigated by a few authors \cite{poole2016exponential,telgarsky2016benefits},
who attribute it to the exponential expressivity achievable under a given network complexity (in terms of VC dimension \cite{anthony2009neural} or Rademacher complexity \cite{bartlett2002rademacher}).
In the medical imaging area, there have also been extensive research activities applying deep learning. However, most of these works are
focused on image-based diagnostics, and
their applications to image reconstruction problems such as X-ray computed tomography (CT) reconstruction are relatively less investigated.
In X-ray CT,
due to the potential risk of radiation exposure,
the main research thrust is to reduce the radiation dose. Among various approaches for low-dose CT, sparse view CT is a recent proposal that reduces the
radiation dose by reducing the number of projection views \cite{sidky2008image}.
However, due to the insufficient
projection views, standard reconstruction using the
filtered back-projection (FBP) algorithm exhibits severe streaking artifacts. Accordingly, researchers have extensively employed
compressed sensing approaches \cite{donoho2006compressed} that minimize the total variation (TV) or other sparsity-inducing penalties under the data fidelity \cite{sidky2008image}.
These approaches are, however, computationally very expensive due to the repeated applications of projection and back-projection during iterative update steps.
\begin{figure*}[t]
\centerline{\includegraphics[width=0.95\linewidth]{proposed_network.jpg}}
\caption{The proposed deep residual learning architecture for sparse view CT reconstruction. }
\label{fig:proposed_network}
\end{figure*}
Therefore, the main goal of this paper is to develop a novel deep CNN architecture for sparse view CT reconstruction that outperforms the
existing approaches in its computational speed as well as reconstruction quality.
However, a direct application of a conventional CNN architecture turns out to be inferior, because X-ray CT images have high texture details that are often difficult to estimate from sparse view reconstructions.
To address this, we propose a novel {\em deep residual learning} architecture to learn the streaking artifacts.
Once the streaking artifacts are estimated, an artifact-free image is then obtained by subtracting the estimated streaking artifacts, as shown in Fig.~\ref{fig:proposed_network}.
The proposed deep residual learning is based on our conjecture that streaking artifacts from sparse view CT reconstruction may have a simpler topological structure, so that learning the streaking artifacts
is easier than learning the
original artifact-free images.
To prove this conjecture, we employ a recent computational topology tool called the {\em persistent homology} \cite{edelsbrunner2008persistent} to
show that the residual manifold is much simpler than the original one.
For practical implementation, we investigate several architectures of residual learning, all of which consistently show that residual learning is
better than image learning. In addition, among the various residual learning architectures, we show
that a multi-scale deconvolution network with contracting path (often called
the U-net structure~\cite{ronneberger2015u} in image segmentation)
is the most effective in removing streaking artifacts, especially for a very small number of projection views.
\noindent {\bf Contribution: }
In summary, our contributions are as follows. First, a computational topology tool called persistent homology is proposed as a novel
tool to analyze the manifold of the label data. The analysis clearly shows the advantages of residual learning in sparse view CT reconstruction.
Second, among various types of residual learning architectures, the multi-scale architecture known as U-net is shown to be the most effective. We show that the advantage of this architecture originates from its enlarged receptive fields
that can easily capture globally distributed artifact patterns. Finally, to the best of our knowledge, the proposed algorithm is the first
deep learning architecture that successfully reconstructs high-resolution images from a very small number of projection views.
Moreover, the proposed method significantly outperforms the existing compressed sensing CT approach in both image quality and reconstruction speed.
\section{Related works}
In CT reconstruction, only a few deep learning architectures are available.
Kang \etal\cite{kang2016deep} provided the first systematic study of deep CNNs
in low-dose CT from reduced X-ray tube currents and showed that a deep CNN using directional wavelets is more efficient in removing
low-dose related CT noises.
Unlike these low-dose artifacts originating from reduced tube current,
the streaking artifacts from sparse projection views exhibit globalized artifact patterns, which are difficult to remove
using conventional denoising CNNs \cite{chen2015learning, mao2016image, xie2012image}.
Therefore, to the best of our knowledge, there exists no deep learning architecture for sparse view CT reconstruction.
The residual learning concept was first introduced by He \etal\cite{he2015deep} for image recognition.
In low-level computer vision problems, Kim \etal\cite{kim2015accurate} employed residual learning for a super-resolution (SR) method.
In these approaches, residual learning was implemented by a skip connection corresponding to an identity mapping.
Unlike these architectures, Zhang \etal\cite{zhang2016beyond} proposed a direct residual learning architecture for image denoising and super-resolution, which has inspired our method.
The proposed architecture in Fig.~\ref{fig:proposed_network} originates from U-Net, developed by Ronneberger \etal\cite{ronneberger2015u} for image segmentation.
This architecture was motivated by another deconvolution network for image segmentation by Noh \etal\cite{noh2015learning}, with the addition of a contracting path and pooling/unpooling layers.
However, we are not aware of any prior work that employed this architecture beyond image segmentation.
\section{Theoretical backgrounds}
Before we explain the proposed deep residual learning architecture, this section provides
theoretical backgrounds.
\subsection{Generalization bound}
In a learning problem, based on a random observation (input)
$X \in {\mathcal X}$ and a label $Y \in {\mathcal Y}$ generated by a distribution $D$, we are interested in estimating a regression function $f: {\mathcal X} \rightarrow {\mathcal Y}$ in a functional space ${\mathcal F}$
that minimizes the risk
$L(f) = E_D \|Y-f(X)\|^2.$
A major technical issue
is that the associated probability distribution $D$ is unknown.
Moreover, we only have a finite
sequence of independent and identically distributed training data
$S=\{(X_1,Y_1),\cdots, (X_n, Y_n)\}$
such that only
an empirical risk
$\hat L_n(f) = \frac{1}{n}\sum_{i=1}^n \|Y_i - f(X_i)\|^2$
is available.
Direct minimization of the empirical risk is, however, problematic due to overfitting.
To address these issues, statistical learning theory \cite{anthony2009neural} has been developed to
bound the risk of a learning algorithm in terms of complexity measures (e.g.\ the VC dimension and shatter coefficients) and the empirical
risk. Rademacher complexity \cite{bartlett2002rademacher} is one of the most modern notions of complexity; it is distribution dependent
and defined for any class of real-valued functions.
Specifically, with probability $\geq 1-\delta$, for every function $f\in {\mathcal F}$,
\begin{equation}\label{eq:L}
L(f) \leq \underbrace{\hat L_n(f)}_{\text{empirical risk}} + \underbrace{2 \hat R_n({\mathcal F})}_{\text{complexity penalty}}+ 3 \sqrt{\frac{\ln(2/\delta)}{n}}
\end{equation}
where the empirical Rademacher complexity $\hat R_n({\mathcal F})$ is defined to be
$$\hat R_n({\mathcal F})= E_\sigma \left[\sup_{f\in {\mathcal F}} \left(\frac{1}{n}\sum_{i=1}^n \sigma_i f(X_i) \right) \right],$$
where $\sigma_1,\cdots, \sigma_n$ are independent random variables uniformly chosen from $\{-1,1\}$.
Therefore, to reduce the risk, we need to minimize both the empirical risk (i.e. data fidelity) and the complexity term in \eqref{eq:L} simultaneously.
In a neural network, the empirical risk is determined by the representation power of the network \cite{telgarsky2016benefits}, whereas the complexity term is determined by
the structure of the network.
Furthermore, it was shown that the capacity of implementable functions grows exponentially with respect to the number of hidden units \cite{poole2016exponential,telgarsky2016benefits}.
Once the network architecture is determined, its capacity is fixed. Therefore,
the performance of the network is now dependent on the complexity of the manifold of label $Y$ that a given deep network tries to approximate.
In the following, using the persistent homology analysis, we show that the residual manifold composed of X-ray CT streaking artifacts is topologically simpler than the original one.
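The empirical Rademacher complexity $\hat R_n({\mathcal F})$ above can be estimated by Monte Carlo sampling of the sign variables $\sigma_i$. The following sketch is only a toy illustration, with a small hypothetical class of threshold functions rather than the networks studied in this paper; for a finite class, the supremum is just a direct maximum:

```python
import random

def empirical_rademacher(functions, xs, trials=2000, seed=0):
    """Monte-Carlo estimate of the empirical Rademacher complexity
    R_n(F) = E_sigma[ sup_{f in F} (1/n) sum_i sigma_i f(x_i) ]."""
    rng = random.Random(seed)
    n = len(xs)
    total = 0.0
    for _ in range(trials):
        sigma = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        # Finite class: the supremum is a plain maximum over F.
        total += max(sum(s * f(x) for s, x in zip(sigma, xs)) / n
                     for f in functions)
    return total / trials

# Toy class of threshold functions f_t(x) = 1{x >= t}, values in {0, 1}.
xs = [i / 10.0 for i in range(10)]
thresholds = [0.0, 0.25, 0.5, 0.75, 1.0]
F = [lambda x, t=t: 1.0 if x >= t else 0.0 for t in thresholds]
r_hat = empirical_rademacher(F, xs)
```

For richer (infinite) classes the supremum must be computed or bounded analytically; the Monte Carlo average only replaces the expectation over $\sigma$.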
\subsection{Manifold of CT streaking artifacts}
\begin{figure}[t]
\centerline{\includegraphics[width=0.95\linewidth]{ct_system.pdf}}
\caption{CT projection and back-projection operation.}
\label{fig:ct_system}
\end{figure}
In order to describe the manifold of CT streaking artifacts,
this section starts with a brief introduction of CT physics and its analytic reconstruction method.
For simplicity,
a parallel-beam CT system is described. In CT, an X-ray photon undergoes attenuation according to the
Beer-Lambert law while it passes through the body. Mathematically, this can be described by a Radon transform.
Specifically, the projection
measurement at the detector distance $t$ in the projection angle $\theta$ is
described by
\begin{eqnarray*}\label{eq:proj}
P_\theta(t)
&=&\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{f(x,y)\delta(t - x\cos\theta - y\sin\theta)}dxdy,
\end{eqnarray*}
where $f(x,y)$ denotes the underlying image, and $t = x\cos\theta + y\sin\theta$ denotes the X-ray propagation path as shown in Fig.~\ref{fig:ct_system}(a).
If densely sampled projection measurements are available,
the filtered back-projection (FBP)
\begin{eqnarray}\label{eq:flt_back_proj}
f(x,y)
&=&\int_{0}^{\pi}d\theta\int_{-\infty}^{\infty}{|\omega|P_\theta(\omega)e^{j2\pi\omega t}}d\omega,
\end{eqnarray}
becomes the inverse Radon transform,
where $|\omega|$ denotes the ramp filter and $P_\theta(\omega)$ is the 1-D Fourier transform of the projection along the detector coordinate $t$.
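The appearance of the ramp filter $|\omega|$ in \eqref{eq:flt_back_proj} can be made explicit by writing the 2-D inverse Fourier transform of $f$ in polar coordinates and invoking the Fourier slice theorem, which states that the 1-D Fourier transform of a projection equals a radial slice of the 2-D spectrum, i.e.\ $P_\theta(\omega)=F(\omega\cos\theta,\omega\sin\theta)$:
\begin{eqnarray*}
f(x,y) &=& \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(u,v)\,e^{j2\pi(ux+vy)}\,du\,dv \\
&=& \int_{0}^{\pi}d\theta\int_{-\infty}^{\infty} |\omega| P_\theta(\omega)\,e^{j2\pi\omega t}\,d\omega,
\end{eqnarray*}
where $t=x\cos\theta+y\sin\theta$, and $|\omega|$ arises as the Jacobian of the change of variables $(u,v)=(\omega\cos\theta,\omega\sin\theta)$ with $\omega\in(-\infty,\infty)$ and $\theta\in[0,\pi)$.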
\begin{figure}[t]
\centerline{\includegraphics[width=0.95\linewidth]{general_streaking_pattern.jpg}}
\caption{CT streaking artifact patterns for (a) three point targets from 8 view projection measurements and
(b)(c) reconstruction images from 48 projections. }
\label{fig:streaking_pattern}
\end{figure}
In \eqref{eq:flt_back_proj},
the outer integral corresponds to the back-projection that projects the filtered sinogram back along the original
X-ray beam propagation direction (see Fig.~\ref{fig:ct_system}(b)).
Accordingly, if the number of projection views is not sufficient, this introduces streaking artifacts.
Fig. \ref{fig:streaking_pattern}(a) shows an FBP reconstruction result from eight-view projection measurements for three point targets.
There are many streaking artifacts radiating from each point target.
Fig. \ref{fig:streaking_pattern}(b)(c) show two reconstruction images and their artifact-only images when only 48 projection views are available.
Even though the underlying images are very different from the point targets and from each other, similar streaking artifacts radiating from objects are consistently observed.
This suggests that the streaking artifacts from different objects may have similar topological structures.
\begin{figure}[!hbt]
\centerline{\includegraphics[width=1.0\linewidth]{persistent_homology_concept.pdf}}
\caption{(a) Point cloud data $K$ of true space $Y$ and its configuration over $\epsilon$ distance filtration. $Y_1$ is a doughnut and $Y_2$ is a sphere shaped space each of which represents a complicated space and a simpler space, respectively. (b) Zero and one dimensional barcodes of $K1$ and $K2$. Betti number can be easily calculated by counting the number of barcodes at each filtration value $\epsilon$. }
\label{fig:topo}
\end{figure}
The complexity of a manifold is a topological concept. Thus, it should be analyzed using topological tools.
In algebraic topology, Betti numbers ($\beta_m$) represent the number of $m$-dimensional holes of a manifold. For example, $\beta_0$ and $\beta_1$ are the number of connected components and cycles, respectively. They are frequently used to investigate the characteristic of underlying data manifold \cite{edelsbrunner2008persistent}.
Specifically, we can infer the topology of a data manifold by varying the similarity measure between the data points and tracking the changes of Betti numbers.
As allowable distance $\epsilon$ increases, point clouds merge together and finally become a single cluster.
Therefore, the point clouds with high diversity will merge slowly and this will be represented as a slow decrease in Betti numbers.
For example, in Fig.~\ref{fig:topo}(a), the space $Y_1$ is a doughnut with a hole (i.e. $\beta_0=1$ and $\beta_1=1$) whereas
$Y_2$ is a sphere-like cluster (i.e. $\beta_0=1$ and $\beta_1=0$).
Accordingly, $Y_1$ has longer zero dimensional \emph{barcodes} persisting over $\epsilon$ in Fig.~\ref{fig:topo}(b). It also has a persisting one dimensional barcode, reflecting a spread-out configuration of point clouds whose cycle is not filled in until $\epsilon$ becomes large.
This persistence of Betti numbers is an important topological characteristic, and the
recent {\em persistent homology} analysis utilizes it to investigate the topology of data \cite{edelsbrunner2008persistent}.
In Bianchini \etal \cite{bianchini2014complexity}, the Betti numbers of
the set of inputs scored with a nonnegative
response were used as a capacity measure of a deep neural network.
However, our approach is novel, because we are interested in investigating the
complexity of the label manifold.
As will be shown in the experiments, the persistent homology analysis clearly shows
that the residual manifold has a much simpler topology than the original one.
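The cluster-merging process described above can be computed directly in dimension zero: tracking $\beta_0$ over the distance filtration is equivalent to Kruskal's minimum-spanning-tree construction with a union-find structure. The following self-contained sketch uses toy point clouds, not our CT data, and is not the toolbox used in our experiments:

```python
import math
from itertools import combinations

def betti0_barcodes(points):
    """Zero-dimensional persistence of a distance filtration: every point
    is born at eps = 0, and one bar dies each time two clusters merge.
    This is Kruskal's MST algorithm with union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    edges = sorted((math.dist(p, q), i, j)
                   for (i, p), (j, q) in combinations(enumerate(points), 2))
    deaths = []
    for eps, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(eps)  # one connected component dies at this eps
    return deaths  # n - 1 finite bars; one bar persists to infinity

# Toy clouds: a tight cluster merges early, a spread-out square merges late.
tight = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
spread = [(math.cos(k * math.pi / 2), math.sin(k * math.pi / 2)) for k in range(4)]
d_tight = betti0_barcodes(tight)
d_spread = betti0_barcodes(spread)
```

A cloud whose bars die early collapses quickly to a single cluster, which is exactly the "fast decrease of $\beta_0$" behavior used to argue that one manifold is simpler than another.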
\begin{figure}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth]{receptive_field.jpg}}
\caption{Effective receptive field comparison.}
\label{fig:artifact_pattern}
\end{figure}
\section{Residual Learning Architecture}
As shown in Fig. \ref{fig:proposed_network},
the proposed residual network consists of convolution layer, batch normalization~\cite{ioffe2015batch}, rectified linear unit (ReLU) \cite{krizhevsky2012imagenet}, and contracting path connection with concatenation~\cite{ronneberger2015u}.
Specifically,
each stage contains four sequential layers composed of convolution with $3\times3$ kernels, batch normalization, and ReLU layers.
Finally, the last stage has two sequential layers and the last layer contains only one convolution layer with $1\times1$ kernel.
In the first half of the network, each stage is followed by a max pooling layer, whereas an average unpooling layer
is used in
the later half of the network.
Scale-by-scale contracting paths are used to concatenate the results from the front part of the network to the later part of network.
The number of channels for each convolution layer is illustrated in Fig. \ref{fig:proposed_network}.
Note that the number of channels is doubled after each pooling layer.
Fig. \ref{fig:artifact_pattern} compares the depth-wise effective receptive field for a simplified form of the proposed network and a reference network without pooling layers.
With the same convolutional filters, the effective receptive field is enlarged in the proposed architecture.
Considering that the streaking artifacts have a globally distributed pattern, as illustrated in Fig.~\ref{fig:streaking_pattern},
the enlarged effective receptive field from the multi-scale residual learning is more advantageous for removing the streaking artifacts.
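The enlargement of the receptive field by pooling can be quantified with standard receptive-field arithmetic. The layer stacks below are illustrative simplifications, not the exact stage and channel configuration of the proposed network:

```python
def receptive_field(layers):
    """Standard receptive-field arithmetic: each (kernel, stride) layer grows
    the field by (kernel - 1) * jump, and the jump is multiplied by the stride."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Illustrative stacks: eight plain 3x3 convolutions (stride 1) versus the
# same eight convolutions interleaved with three 2x2 max-pooling layers.
plain = [(3, 1)] * 8
with_pooling = ([(3, 1)] * 2 + [(2, 2)]) * 3 + [(3, 1)] * 2
rf_plain = receptive_field(plain)
rf_pool = receptive_field(with_pooling)
```

With the same number of convolutions, the pooled stack sees a several-times-larger input region, which matches the intuition behind Fig.~\ref{fig:artifact_pattern}.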
\section{Experimental Results}
\subsection{Data Set}
As training data, we used the nine patient data sets provided by the AAPM Low Dose CT Grand Challenge
(http://www.aapm.org/GrandChallenge/LowDoseCT/).
The data are composed of 3-D CT projection data from 2304 views.
Artifact-free original images were generated by FBP using all 2304 projection views.
Sparse view CT reconstruction input images $X$ were generated using FBP from 48, 64, 96, and 192 projection views, respectively.
For the proposed residual learning, the label data $Y$ were defined as the difference between the sparse view reconstruction and the full-view reconstruction.
Among the nine patient data sets, eight were used for training, whereas the test was conducted using the remaining one.
This corresponds to 3602 slices of $512\times 512$ images for the training data, and 488 slices of $512\times 512$ images for the test data.
The training data were augmented by horizontal and vertical flipping.
For the training data set, we used the FBP reconstructions from both 48 and 96 projection views as input $X$, and the differences between the full-view (2304 views) reconstruction and the sparse view reconstructions were used as labels $Y$.
\begin{figure}[!b]
\centerline{\includegraphics[width=1.0\linewidth]{./persistent_homology_exp.pdf}}
\caption{Zero and one dimensional barcodes of the artifact-free original CT images (blue) and streaking artifacts (red). }
\label{fig:topo_exp}
\end{figure}
\subsection{Persistent homology analysis}
To compare the topology of the original and residual image spaces, we calculated Betti numbers using a toolbox called JAVAPLEX (http://appliedtopology.github.io/javaplex/). Each label image of size $512\times 512$ was regarded as a point in the ${\mathbb R}^{512^2}$ vector space to generate a point cloud.
We calculated the Euclidean distance between each pair of points and normalized it by the maximum distance.
The topological complexity of both image spaces is compared by the change of Betti numbers in Fig.~\ref{fig:topo_exp}, which
clearly shows that the manifold of the residual images is topologically simpler.
Indeed, $\beta_0$ of the residual image manifold decreased faster to a single cluster.
Moreover, there exists no $\beta_1$ barcode for the residual manifold, which indicates closely distributed point clouds, as in the spherical example in Fig.~\ref{fig:topo}(a)(b).
These results clearly indicate that the residual image manifold has a simpler topology
than the original one.
\begin{figure}[!hbt]
\centerline{\includegraphics[width=0.8\linewidth]{./proposed_result_cutview.jpg}}
\caption{Reconstruction results by TV based compressed sensing CT, and the proposed method.}
\label{fig:cutview_result}
\end{figure}
\begin{figure*}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth]{./proposed_result.pdf}}
\caption{Axial view reconstruction results by TV based compressed sensing CT, and the proposed method.}\label{fig:proposed_result}
\end{figure*}
\subsection{Network training}
The proposed network was trained by stochastic gradient descent (SGD). The regularization parameter was $\lambda = 10^{-4}$. The learning rate was set from $10^{-3}$ to $10^{-5}$, being gradually reduced at each epoch. The number of epochs was 150. Mini-batch data using image patches were used, and the size of each image patch was $256\times256$.
The network was implemented using the MatConvNet toolbox (ver.20) \cite{vedaldi2015matconvnet} in the MATLAB 2015a environment (MathWorks, Natick). We used a GTX 1080 graphic processor and an i7-4770 CPU (3.40GHz). Training the network took about one day.
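The exact decay rule for the learning rate is not specified above; one plausible reading, sketched below purely for illustration, interpolates geometrically (log-linearly) from $10^{-3}$ down to $10^{-5}$ over the 150 epochs:

```python
def lr_schedule(lr_start=1e-3, lr_end=1e-5, epochs=150):
    """Geometrically interpolate the learning rate from lr_start to lr_end,
    one value per epoch (a hypothetical reading of 'gradually reduced')."""
    ratio = (lr_end / lr_start) ** (1.0 / (epochs - 1))
    return [lr_start * ratio ** e for e in range(epochs)]

lrs = lr_schedule()
```

A geometric schedule spends comparable numbers of epochs at each order of magnitude of the learning rate, which is a common choice when the endpoints differ by several decades.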
\subsection{Reconstruction results}
Fig. \ref{fig:cutview_result} shows reconstruction results from the coronal and sagittal directions. Accurate reconstruction was obtained using the proposed method, whereas there exist remaining patterned artifacts in the TV reconstruction.
Fig. \ref{fig:proposed_result}(a)-(c) shows the axial-view reconstruction results of the proposed method from
48, 64, and 96 projection views, respectively.
Note that the same network was used for all these cases to verify the universality of the proposed method.
The results in Fig. \ref{fig:proposed_result}(a)-(c) clearly show that the proposed network removes most of the streaking artifact patterns and preserves the detailed structures of the underlying images.
The magnified views in Fig. \ref{fig:proposed_result}(a)-(c) confirm that the detailed structures are very well reconstructed using the proposed method.
Moreover, compared to the standard compressed sensing CT approach with a TV penalty, the proposed method in Fig.~\ref{fig:cutview_result} and Fig.~\ref{fig:proposed_result}
provides significantly improved image reconstruction, while its computational time is only 123 ms/slice.
This is about 30 times faster than the TV approach, which took
about 3 $\sim$ 4 sec/slice for reconstruction.
\section{Discussion}
\subsection{Residual learning vs. Image learning}
Here, we conduct various comparative studies.
First, we investigate the importance of the residual learning.
As a reference, the image learning network in Fig.~\ref{fig:compared_network}(a) was used.
Although this has the same U-net structure as the proposed residual learning network in Fig.~\ref{fig:proposed_network}, the full view reconstruction results
were used as labels and the network was trained to learn the artifact-free images.
According to our persistent homology analysis, the manifold of full-view reconstructions is topologically more complex, so learning the full
view reconstruction images is more difficult.
\begin{figure}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth]{./compared_network.jpg}}
\caption{Reference networks.}
\label{fig:compared_network}
\end{figure}
\begin{figure}[!hbt]
\centering
\begin{minipage}[b]{0.45\linewidth}
\centerline{\includegraphics[width=\linewidth]{./plot_cost.pdf}}
\centerline{(a) COST}\medskip
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\centerline{\includegraphics[width=\linewidth]{./plot_psnr.pdf}}
\centerline{(b) PSNR}\medskip
\end{minipage}
\caption{Convergence plots for (a) the cost function, and (b) the peak signal-to-noise ratio (PSNR) with respect to
each epoch.
\label{fig:err_plot}}
\end{figure}
The convergence plots in Fig.~\ref{fig:err_plot} and the reconstruction results in Fig. \ref{fig:image_result} clearly show the strength of residual learning over image learning.
The proposed residual learning network exhibits faster convergence during training,
and its final performance outperforms the image learning network.
The magnified view of a reconstructed image in Fig. \ref{fig:image_result} clearly shows that
the detailed structure of an internal organ was not fully recovered using image learning, whereas the proposed residual learning can recover it.
\begin{figure*}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth]{./image_result.pdf}}
\caption{Comparison results for residual learning and image learning from 64 and 192 view reconstruction input data. }
\label{fig:image_result}
\end{figure*}
\subsection{Single-scale vs. Multi-scale residual learning}
Next, we investigated the importance of the multi-scale nature of residual learning using U-net.
As a reference, the single-scale residual learning network shown in Fig.~\ref{fig:compared_network}(b) was used.
Similar to the proposed method,
the streaking artifact images were used as the labels. However, the residual network was constructed without pooling and unpooling layers.
For a fair comparison, we set the number of network parameters similar to the proposed method by fixing the number of channels at each layer across all the stages.
Fig. \ref{fig:err_plot} clearly shows the advantages of multi-scale residual learning over
single-scale residual learning.
The proposed residual learning network exhibits faster convergence, and its final performance is better.
In Fig.~\ref{fig:single_scale_result}, the image reconstruction quality of the multi-scale learning is much improved compared to the
single-resolution one.
The PSNR values are summarized in Table \ref{tlb:err_table}.
For extremely sparse projection views, the multi-scale structures always outperform
single-scale residual learning.
At 192 projection views, the global streaking artifacts become less dominant compared to the localized artifacts,
so single-scale residual learning starts to become better than the multi-scale {\em image} learning approach.
However, by combining the advantages of residual learning and the multi-scale network,
the proposed multi-scale residual learning outperforms all the reference architectures over a wide range of view downsampling.
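The PSNR values in Table~\ref{tlb:err_table} follow the standard definition $10\log_{10}(\mathrm{peak}^2/\mathrm{MSE})$. The sketch below uses flattened toy signals in place of $512\times512$ slices, and the peak value is an assumption, since the normalization used for the table is not stated:

```python
import math

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    n = len(reference)
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / n
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

# Toy 1-D "images": a reference signal and a noisy estimate of it.
ref = [0.0, 0.5, 1.0, 0.5]
est = [0.1, 0.5, 0.9, 0.5]
```

Note that PSNR depends on the assumed peak value; for CT images given in Hounsfield units, the peak (or display window) must be fixed consistently across methods for the comparison to be meaningful.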
\subsection{Diversity of training set}
Fig. \ref{fig:training_case_result} shows the reconstructed results by the proposed approach
when the network was trained with sparse view reconstructions from 48, 96, or 48 and 96 views, respectively.
The streaking artifacts were removed very well in all cases; however, the detailed structure of the underlying image was better maintained when the network was trained
using 96-view reconstructions, especially when the network was used to reconstruct images from a denser-view data set (192 views in this case).
On the other hand, the network trained with 96 views could not be used for 48-view sparse CT reconstruction.
Fig. \ref{fig:training_case_result} clearly shows the remaining streaking artifacts in this case.
To address this issue and make the network universal across wide ranges of view down-sampling,
the proposed network was, therefore, trained using a data set combining sparse-view CT reconstructions from 48 and 96 views.
As shown in Fig. \ref{fig:training_case_result}, the proposed approach with the combined training provided the best reconstruction across wide
ranges of view down-sampling.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
No. of views & Single-scale image learning & Single-scale residual learning & Multi-scale image learning & Proposed \\
\hline\hline
48 view & 31.0027 & 31.7550 & $\color{blue}{32.5525}$ & $\color{red}{\mathbf{33.3916}}$ \\
64 view & 32.1380 & 32.4456 & $\color{blue}{32.9748}$ & $\color{red}{\mathbf{33.8680}}$ \\
96 view & 33.2983 & 33.3569 & $\color{blue}{33.4728}$ & $\color{red}{\mathbf{34.5898}}$ \\
192 view & 33.7693 & $\color{blue}{33.8390}$ & 33.7101 & $\color{red}{\mathbf{34.9028}}$ \\
\hline
\end{tabular}
\end{center}
\caption{Average PSNR results for various learning architectures.}
\label{tlb:err_table}
\end{table*}
\begin{figure*}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth]{./single_scale_result.pdf}}
\caption{Comparison results for single-scale versus multi-scale residual learning from 48 and 96 view reconstruction input data. }
\label{fig:single_scale_result}
\end{figure*}
\begin{figure*}[!hbt]
\centerline{\includegraphics[width=0.95\linewidth]{./training_case_result.pdf}}
\caption{Comparison results for various training data configuration. Each column represents reconstructed images by proposed network which was trained with sparse view reconstruction from 48, 96, or 48/96 views, respectively.}
\label{fig:training_case_result}
\end{figure*}
\section{Conclusion}
In this paper, we developed a novel deep residual learning approach for sparse view CT reconstruction.
We conjectured that the residual manifold composed of streaking artifacts is topologically simpler than the original one.
This claim was confirmed by persistent homology analysis
and experimental results, which clearly showed the advantages of residual learning over image learning.
Among the various residual learning networks, this paper showed that the multi-scale residual learning using the U-net structure was the most effective, especially when the
number of views was extremely small.
We showed that this is due to the enlarged receptive field in the U-net structure, which can easily capture the globally distributed
streaking artifacts.
Using extensive experiments, we showed that the proposed deep residual learning is significantly better than the conventional compressed sensing CT approaches.
Moreover, its computational speed was much faster than that of compressed sensing CT.
Although this work was mainly developed for sparse view CT reconstruction problems, the proposed residual learning network may be universally used for removing various image
noises and artifacts that are globally distributed.
\section{Acknowledgement}
The authors would like to thank Dr. Cynthia McCollough, the Mayo Clinic, the American Association of Physicists in Medicine (AAPM), and grants EB01705 and EB01785 from the National
Institute of Biomedical Imaging and Bioengineering for providing the Low-Dose CT Grand Challenge data set.
This work is supported by Korea Science and Engineering Foundation, Grant number
NRF-2016R1A2B3008104.
\section{Introduction}
Fix $1\le p\le q<\infty$ and let $T\colon X\to E$ be a Banach space
valued linear operator defined on a saturated order semi-continuous
Banach function space $X$ related to a $\sigma$-finite measure
$\mu$. In this paper we prove an extension theorem for $T$ in the
case when $T$ is $q$-summing and $X$ is $p$-convex. In order to do
this, we first define and analyze a new class of Banach function
spaces denoted by $S_{X_p}^{\,q}(\xi)$ which have some good
properties, mainly order continuity and $p$-convexity. The space
$S_{X_p}^{\,q}(\xi)$ is constructed by using the spaces $L^p(\mu)$
and $L^q(\xi)$, where $\xi$ is a finite positive Radon measure on a
certain compact set associated to $X$.
Corollary \ref{COR: q-summing-extension} states the desired
extension for $T$. Namely, if $T$ is $q$-summing and $X$ is
$p$-convex then $T$ can be strongly extended continuously to a space of the
type $S_{X_p}^{\,q}(\xi)$. Here we use the term ``strongly'' for this extension to remark that the map carrying $X$ into $S_{X_p}^{\,q}(\xi)$ is actually injective; as the reader will notice (Proposition \ref{PROP: SXpq(xi)-space}), this is one of the goals of our result. In order to develop our arguments, we introduce
a new geometric tool which we call the family of $p$-strongly
$q$-concave operators. The inclusion of $X$ into
$S_{X_p}^{\,q}(\xi)$ turns out to belong to this family, in
particular, it is $q$-concave.
If $T$ is $q$-summing then it is $p$-strongly $q$-concave
(Proposition \ref{PROP: q-Summing}). Actually, in Theorem \ref{THM:
SXpqExtension} we show that in the case when $X$ is $p$-convex, $T$
can be continuously extended to a space $S_{X_p}^{\,q}(\xi)$ if and
only if $T$ is $p$-strongly $q$-concave. This result can be
understood as an extension of some well-known relevant
factorizations of the operator theory:
\begin{itemize}\setlength{\leftskip}{-3ex}
\item[(I)] Maurey-Rosenthal factorization theorem: If $T$ is
$q$-concave and $X$ is $q$-convex and order continuous, then $T$ can
be extended to a weighted $L^q$-space related to $\mu$, see for
instance \cite[Corollary 5]{defant}. Several generalizations and
applications of the ideas behind this fundamental factorization
theorem have been recently obtained, see
\cite{calabuig-delgado-sanchezperez,
calabuig-rodriguez-sanchezperez,defant-sanchezperez,delgado-sanchezperez,sanchezperez}.
\item[(II)] Pietsch factorization theorem:
If $T$ is $q$-summing then it factors through a closed subspace
of $L^q(\xi)$, where $\xi$ is a probability Radon measure on a
certain compact set associated to $X$, see for instance
\cite[Theorem 2.13]{diestel-jarchow-tonge}.
\end{itemize}
In Theorem \ref{THM: SXpqExtension}, the extreme case $p=q$ gives a
Maurey-Rosenthal type factorization, while the other extreme case
$p=1$ gives a Pietsch type factorization. We must also say that our generalization will allow us to face the problem of the factorization of several $p$-summing types of multilinear operators from products of Banach function spaces ---a topic of current interest---, since it allows us to understand the factorization of $q$-summing operators from $p$-convex function lattices from a unified point of view, not depending on the order relation between $p$ and $q$.
As a consequence of Theorem \ref{THM: SXpqExtension}, we also prove
a kind of Kakutani representation theorem (see for instance
\cite[Theorem 1.b.2]{lindenstrauss-tzafriri}) through the spaces
$S_{X_p}^{\,q}(\xi)$ for $p$-convex Banach function spaces which are
$p$-strongly $q$-concave (Corollary \ref{COR: i-isomorphism}).
\section{Preliminaries}
Let $(\Omega,\Sigma,\mu)$ be a $\sigma$-finite measure space and
denote by $L^0(\mu)$ the space of all measurable real functions on
$\Omega$, where functions which are equal $\mu$-a.e.\ are
identified. By a \emph{Banach function space} (briefly B.f.s.) we
mean a Banach space $X\subset L^0(\mu)$ with norm
$\Vert\cdot\Vert_X$, such that if $f\in L^0(\mu)$, $g\in X$ and
$|f|\le|g|$ $\mu$-a.e.\ then $f\in X$ and $\Vert f\Vert_X\le\Vert
g\Vert_X$. In particular, $X$ is a Banach lattice with the
$\mu$-a.e.\ pointwise order, in which the convergence in norm of a
sequence implies the convergence $\mu$-a.e.\ for some subsequence. A
B.f.s.\ $X$ is said to be \emph{saturated} if there exists no
$A\in\Sigma$ with $\mu(A)>0$ such that $f\chi_A=0$ $\mu$-a.e.\ for
all $f\in X$, or equivalently, if $X$ has a \emph{weak unit} (i.e.\
$g\in X$ such that $g>0$ $\mu$-a.e.).
\begin{lemma}\label{LEM: saturatedBfs}
Let $X$ be a saturated B.f.s. For every $f\in L^0(\mu)$, there
exists $(f_n)_{n\ge1}\subset X$ such that $0\le f_n\uparrow |f|$
$\mu$-a.e.
\end{lemma}
\begin{proof}
Consider a weak unit $g\in X$ and take $g_n=ng/(1+ng)$. Note that
$0< g_n< ng$ $\mu$-a.e., so $g_n$ is a weak unit in $X$. Moreover,
$(g_n)_{n\ge1}$ increases $\mu$-a.e.\ to the constant function equal
to $1$. Now, take $f_n=g_n|f|\chi_{\{\omega\in\Omega:\,|f|\le n\}}$.
Since $0\le f_n\le ng_n$ $\mu$-a.e., we have that $f_n\in X$, and
$f_n\uparrow |f|$ $\mu$-a.e.
\end{proof}
The \emph{K\"{o}the dual} of a B.f.s.\ $X$ is the space $X'$ given
by the functions $h\in L^0(\mu)$ such that $\int|hf|\,d\mu<\infty$
for all $f\in X$. If $X$ is saturated then $X'$ is a saturated
B.f.s.\ with norm $\Vert h\Vert_{X'}=\sup_{f\in B_X}\int|hf|\,d\mu$
for $h\in X'$. Here, as usual, $B_X$ denotes the closed unit ball of
$X$. Each function $h\in X'$ defines a functional $\zeta(h)$ on $X$
by $\langle\zeta(h),f\rangle=\int hf\,d\mu$ for all $f\in X$. In
fact, $X'$ is isometrically order isomorphic (via $\zeta$) to a
closed subspace of the topological dual $X^*$ of $X$.
From now on, a B.f.s.\ $X$ will be assumed to be saturated. If
for every $f,f_n\in X$ such that $0\le f_n\uparrow f$ $\mu$-a.e.\ it
follows that $\Vert f_n\Vert_X\uparrow \Vert f\Vert_X$, then $X$ is
said to be \emph{order semi-continuous}. This is equivalent to
$\zeta(X')$ being a \emph{norming subspace} of $X^*$, i.e.\ $\Vert
f\Vert_X=\sup_{h\in B_{X'}}\int |fh|\,d\mu$ for all $f\in X$. A
B.f.s.\ $X$ is \emph{order continuous} if for every $f,f_n\in X$
such that $0\le f_n\uparrow f$ $\mu$-a.e., it follows that $f_n\to
f$ in norm. In this case, $X'$ can be identified with $X^*$.
For general issues related to B.f.s.'\ see
\cite{lindenstrauss-tzafriri}, \cite{okada-ricker-sanchezperez} and
\cite[Ch.\,15]{zaanen} considering the function norm $\rho$ defined
as $\rho(f)=\Vert f\Vert_X$ if $f\in X$ and $\rho(f)=\infty$
otherwise.
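For orientation, we recall a standard example. For $1\le r<\infty$,
the space $X=L^r(\mu)$ is a saturated order continuous B.f.s.\ whose
K\"{o}the dual is $X'=L^{r'}(\mu)$, where
$\frac{1}{r}+\frac{1}{r'}=1$; in this case the norming property
reads
$$
\Vert f\Vert_{L^r(\mu)}=\sup_{h\in B_{L^{r'}(\mu)}}\int|fh|\,d\mu,
$$
which is H\"{o}lder's inequality together with its converse. The
space $L^\infty(\mu)$ has K\"{o}the dual $L^1(\mu)$ and is order
semi-continuous but, in general, not order continuous.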
Let $1\le p<\infty$. A B.f.s.\ $X$ is said to be \emph{$p$-convex}
if there exists a constant $C>0$ such that
$$
\Big\Vert\Big(\sum_{i=1}^n |f_i|^p\Big)^{1/p}\,\Big\Vert_X \le C
\Big(\sum_{i=1}^n\Vert f_i\Vert_X^p\Big)^{1/p}
$$
for every finite subset $(f_i)_{i=1}^n\subset X$. In this case,
$M^p(X)$ will denote the smallest constant $C$ satisfying the above
inequality. Note that $M^p(X)\ge1$. A relevant fact is that every
$p$-convex B.f.s.\ $X$ has an equivalent norm for which $X$ is
$p$-convex with constant $M^p(X)=1$, see \cite[Proposition
1.d.8]{lindenstrauss-tzafriri}.
The \emph{$p$-th power} of a B.f.s.\ $X$ is the space defined as
$$
X_p=\{f\in L^0(\mu): |f|^{1/p}\in X\},
$$
endowed with the quasi-norm $\Vert f \Vert_{X_p}= \Vert
\,|f|^{1/p}\,\Vert_X^p$, for $f\in X_p$. Note that $X_p$ is always
complete, see the proof of \cite[Proposition
2.22]{okada-ricker-sanchezperez}. If $X$ is $p$-convex with constant
$M^p(X)=1$, from \cite[Lemma 3]{defant}, $\Vert \cdot\Vert_{X_p}$ is
a norm and so $X_p$ is a B.f.s. Note that $X_p$ is saturated if and
only if $X$ is so. The same holds for the properties of being order
continuous and order semi-continuous.
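A routine computation with these definitions may help to fix ideas.
If $X=L^r(\mu)$ with $p\le r<\infty$, then $X$ is $p$-convex with
constant $M^p(X)=1$, since the triangle inequality in
$L^{r/p}(\mu)$ gives
$$
\Big\Vert\Big(\sum_{i=1}^n|f_i|^p\Big)^{1/p}\Big\Vert_{L^r}
=\Big\Vert\sum_{i=1}^n|f_i|^p\Big\Vert_{L^{r/p}}^{1/p}
\le\Big(\sum_{i=1}^n\Vert f_i\Vert_{L^r}^p\Big)^{1/p},
$$
and its $p$-th power is $(L^r(\mu))_p=L^{r/p}(\mu)$ with equal
norms, as $\Vert\,|f|^{1/p}\Vert_{L^r}^p=\Vert f\Vert_{L^{r/p}}$.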
\section{The space $S_{X_p}^{\,q}(\xi)$}
Let $1\le p\le q<\infty$ and let $X$ be a saturated $p$-convex
B.f.s. We can assume without loss of generality that the
$p$-convexity constant $M^p(X)$ is equal to $1$. Then, $X_p$ and
$(X_p)'$ are saturated B.f.s.'. Consider the topology
$\sigma\big((X_p)',X_p\big)$ on $(X_p)'$ defined by the elements of
$X_p$. Note that the subset $B_{(X_p)'}^+$ of all positive elements
of the closed unit ball of $(X_p)'$ is compact for this topology.
Let $\xi$ be a finite positive Radon measure on $B_{(X_p)'}^+$. For
$f\in L^0(\mu)$, consider the map $\phi_f\colon
B_{(X_p)'}^+\to[0,\infty]$ defined by
$$
\phi_f(h)=\Big(\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}
$$
for all $h\in B_{(X_p)'}^+$. In the case when $f\in X$, since
$|f|^p\in X_p$, it follows that $\phi_f$ is continuous and so
measurable. For a general $f\in L^0(\mu)$, by Lemma \ref{LEM:
saturatedBfs} we can take a sequence $(f_n)_{n\ge1}\subset X$ such
that $0\le f_n\uparrow|f|$ $\mu$-a.e. Applying monotone convergence
theorem, we have that $\phi_{f_n}\uparrow\phi_f$ pointwise and so
$\phi_f$ is measurable. Then, we can consider the integral
$\int_{B_{(X_p)'}^+}\phi_f(h)d\xi(h)\in[0,\infty]$ and define the
following space:
$$
S_{X_p}^{\,q}(\xi)=\left\{f\in L^0(\mu):\,
\int_{B_{(X_p)'}^+}\Big(\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)<\infty\right\}.
$$
Let us endow $S_{X_p}^{\,q}(\xi)$ with the seminorm
\begin{eqnarray*}
\Vert f\Vert_{S_{X_p}^{\,q}(\xi)} & = &
\left(\int_{B_{(X_p)'}^+}\Big(\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)\right)^{1/q}
\\ & = & \Big\Vert \,h\to\big\Vert f|h|^{1/p}\,\big\Vert_{L^p(\mu)}\,\Big\Vert_{L^q(\xi)}.
\end{eqnarray*}
In general, $\Vert \cdot\Vert_{S_{X_p}^{\,q}(\xi)}$ is not a norm.
For instance, if $\xi$ is the Dirac measure at some $h_0\in
B_{(X_p)'}^+$ such that $A=\{\omega\in\Omega:\, h_0(\omega)=0\}$
satisfies $\mu(A)>0$, taking $f=g\chi_A\in X$ with $g$ being a weak
unit of $X$, we have that
$$
\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}=\Big(\int_A|g(\omega)|^ph_0(\omega)\,d\mu(\omega)\Big)^{1/p}=0
$$
and
$$
\mu(\{\omega\in\Omega:\, f(\omega)\not=0\})=
\mu(A\cap\{\omega\in\Omega:\, g(\omega)\not=0\})=\mu(A)>0.
$$
\begin{proposition}\label{PROP: SXpq(xi)-space}
If the Radon measure $\xi$ satisfies
\begin{equation}\label{EQ: xiProperty}
\int_{B_{(X_p)'}^+}\Big(\int_Ah(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)=0
\ \ \Rightarrow \ \ \mu(A)=0
\end{equation}
then, $S_{X_p}^{\,q}(\xi)$ is a saturated B.f.s. Moreover,
$S_{X_p}^{\,q}(\xi)$ is order continuous, $p$-convex (with constant
$1$) and $X\subset S_{X_p}^{\,q}(\xi)$ continuously.
\end{proposition}
\begin{proof}
It is clear that if $f\in L^0(\mu)$, $g\in S_{X_p}^{\,q}(\xi)$ and
$|f|\le|g|$ $\mu$-a.e.\ then $f\in S_{X_p}^{\,q}(\xi)$ and $\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}\le\Vert g\Vert_{S_{X_p}^{\,q}(\xi)}$.
Let us see that $\Vert \cdot\Vert_{S_{X_p}^{\,q}(\xi)}$ is a norm.
Suppose that $\Vert f\Vert_{S_{X_p}^{\,q}(\xi)}=0$ and set
$A_n=\{\omega\in\Omega:\, |f(\omega)|>\frac{1}{n}\}$ for every
$n\ge1$. Since $\chi_{A_n}\le n|f|$ and
$$
\int_{B_{(X_p)'}^+}\Big(\int_{A_n}h(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)=\big\Vert
\chi_{A_n}\big\Vert_{S_{X_p}^{\,q}(\xi)}^q\le n^q\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}^q=0,
$$
from \eqref{EQ: xiProperty} we have that $\mu(A_n)=0$ and so
$$
\mu(\{\omega\in\Omega:\,
f(\omega)\not=0\})=\lim_{n\to\infty}\mu(A_n)=0.
$$
Now we will see that $S_{X_p}^{\,q}(\xi)$ is complete by showing
that $\sum_{n\ge1}f_n\in S_{X_p}^{\,q}(\xi)$ whenever
$(f_n)_{n\ge1}\subset S_{X_p}^{\,q}(\xi)$ with $C=\sum \Vert
f_n\Vert_{ S_{X_p}^{\,q}(\xi)}<\infty$. First let us prove that
$\sum_{n\ge1}|f_n|<\infty$ $\mu$-a.e. For every $N,n\ge1$, taking
$A_n^N=\{\omega\in\Omega:\, \sum_{j=1}^n|f_j(\omega)|>N\}$, since
$\chi_{A_n^N}\le \frac{1}{N}\sum_{j=1}^n|f_j|$, we have that
\begin{eqnarray*}
\int_{B_{(X_p)'}^+}\Big(\int_{A_n^N}h(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)
& = & \Vert\chi_{A_n^N}\Vert_{S_{X_p}^{\,q}(\xi)}^q \\ & \le &
\frac{1}{N^q}\,\Big\Vert\sum_{j=1}^n|f_j|\,\Big\Vert_{S_{X_p}^{\,q}(\xi)}^q
\le\frac{C^q}{N^q}.
\end{eqnarray*}
Note that, for fixed $N$, $(A_n^N)_{n\ge1}$ increases. Taking the limit
as $n\to\infty$ and applying twice the monotone convergence theorem,
it follows that
$$
\int_{B_{(X_p)'}^+}\Big(\int_{\cup_{n\ge1}A_n^N}h(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)\le
\frac{C^q}{N^q}.
$$
Then,
$$
\int_{B_{(X_p)'}^+}\Big(\int_{\cap_{N\ge1}\cup_{n\ge1}A_n^N}h(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)
\le\lim_{N\to\infty}\frac{C^q}{N^q}=0,
$$
and so, from \eqref{EQ: xiProperty},
$$
\mu\Big(\Big\{\omega\in\Omega:\,\sum_{n\ge1}|f_n(\omega)|=\infty\Big\}\Big)=
\mu\Big(\bigcap_{N\ge1}\bigcup_{n\ge1}A_n^N\Big)=0.
$$
Hence, $\sum_{n\ge1}f_n\in L^0(\mu)$. Again applying the monotone
convergence theorem, it follows that
\begin{eqnarray*}
\int_{B_{(X_p)'}^+}\Big(\int_\Omega\Big|\sum_{n\ge1}f_n(\omega)\Big|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)
& \le & \\
\int_{B_{(X_p)'}^+}\Big(\int_\Omega\big(\sum_{n\ge1}|f_n(\omega)|\big)^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)
& = & \\
\lim_{n\to\infty}\int_{B_{(X_p)'}^+}\Big(\int_\Omega\big(\sum_{j=1}^n|f_j(\omega)|\big)^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)
& = & \\ \lim_{n\to\infty}\Big\Vert
\sum_{j=1}^n|f_j|\Big\Vert_{S_{X_p}^{\,q}(\xi)}^q & \le & C^q
\end{eqnarray*}
and thus $\sum_{n\ge1}f_n\in S_{X_p}^{\,q}(\xi)$.
Note that if $f\in X$, for every $h\in B_{(X_p)'}^+$ we have that
$$
\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\le\Vert\,|f|^p\,\Vert_{X_p}\Vert
h\Vert_{(X_p)'}\le\Vert f\Vert_X^p
$$
and so
$$
\int_{B_{(X_p)'}^+}\Big(\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)
\le\Vert f\Vert_X^q\,\xi\big(B_{(X_p)'}^+\big).
$$
Then, $X\subset S_{X_p}^{\,q}(\xi)$ and $\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}\le\xi\big(B_{(X_p)'}^+\big)^{1/q}\,\Vert
f\Vert_X$ for all $f\in X$. In particular, $S_{X_p}^{\,q}(\xi)$ is
saturated, as a weak unit in $X$ is a weak unit in
$S_{X_p}^{\,q}(\xi)$.
Let us show that $S_{X_p}^{\,q}(\xi)$ is order continuous. Consider
$f,f_n\in S_{X_p}^{\,q}(\xi)$ such that $0\le f_n\uparrow f$
$\mu$-a.e. Note that, since
$$
\int_{B_{(X_p)'}^+}\Big(\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)<\infty,
$$
there exists a $\xi$-measurable set $B$ with
$\xi(B_{(X_p)'}^+\backslash B)=0$ such that
$\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)<\infty\,$ for all
$h\in B$. For fixed $h\in B$, we have that $|f-f_n|^ph\downarrow0$
$\mu$-a.e.\ and $|f-f_n|^ph\le |f|^ph$ $\mu$-a.e. Then, applying the
dominated convergence theorem,
$\int_\Omega|f(\omega)-f_n(\omega)|^ph(\omega)\,d\mu(\omega)\downarrow0$.
Consider the measurable functions $\phi,\phi_n\colon
B_{(X_p)'}^+\to[0,\infty]$ given by
\begin{eqnarray*}
\phi(h) & = &
\Big(\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p} \\
\phi_n(h) & = &
\Big(\int_\Omega|f(\omega)-f_n(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}
\end{eqnarray*}
for all $h\in B_{(X_p)'}^+$. It follows that $\phi_n\downarrow0$
$\xi$-a.e.\ and $\phi_n\le \phi$ $\xi$-a.e. Again by the dominated
convergence theorem, we obtain
$$
\Vert f-f_n\Vert_{S_{X_p}^{\,q}(\xi)}^q=
\int_{B_{(X_p)'}^+}\phi_n(h)d\xi(h)\downarrow0.
$$
Finally, let us see that $S_{X_p}^{\,q}(\xi)$ is $p$-convex. Fix
$(f_i)_{i=1}^n\subset S_{X_p}^{\,q}(\xi)$ and consider the
measurable functions $\phi_i\colon B_{(X_p)'}^+\to[0,\infty]$ (for
$1\le i\le n$) defined by
$$
\phi_i(h)=\int_\Omega|f_i(\omega)|^ph(\omega)\,d\mu(\omega)
$$
for all $h\in B_{(X_p)'}^+$. Then,
\begin{eqnarray*}
\Big\Vert\Big(\sum_{i=1}^n
|f_i|^p\Big)^{1/p}\,\Big\Vert_{S_{X_p}^{\,q}(\xi)}^q & = &
\int_{B_{(X_p)'}^+}\Big(\int_\Omega\sum_{i=1}^n|f_i(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)
\\ & = &
\int_{B_{(X_p)'}^+}\Big(\sum_{i=1}^n\phi_i(h)\Big)^{q/p}d\xi(h)
\\ & \le & \Big(\sum_{i=1}^n\Vert
\phi_i\Vert_{L^{q/p}(\xi)}\Big)^{q/p}.
\end{eqnarray*}
Since $\Vert \phi_i\Vert_{L^{q/p}(\xi)}=\Vert
f_i\Vert_{S_{X_p}^{\,q}(\xi)}^p$ for all $1\le i\le n$, we have that
$$
\Big\Vert\Big(\sum_{i=1}^n
|f_i|^p\Big)^{1/p}\,\Big\Vert_{S_{X_p}^{\,q}(\xi)}
\le\Big(\sum_{i=1}^n\Vert
f_i\Vert_{S_{X_p}^{\,q}(\xi)}^p\Big)^{1/p}.
$$
\end{proof}
\begin{example}
Take a weak unit $g\in(X_p)'$ and consider the Radon measure $\xi$
as the Dirac measure at $g$. If $A\in\Sigma$ is such that
$$
0=
\int_{B_{(X_p)'}^+}\Big(\int_Ah(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)
=\Big(\int_Ag(\omega)\,d\mu(\omega)\Big)^{q/p}
$$
then, $g\chi_A=0$ $\mu$-a.e.\ and so, since $g>0$ $\mu$-a.e.,
$\mu(A)=0$. That is, $\xi$ satisfies \eqref{EQ: xiProperty}. In this
case, $S_{X_p}^{\,q}(\xi)=L^p(gd\mu)$ with equal norms, as
$$
\int_{B_{(X_p)'}^+}\Big(\int_\Omega
|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)=
\Big(\int_\Omega|f(\omega)|^pg(\omega)\,d\mu(\omega)\Big)^{q/p}
$$
for all $f\in L^0(\mu)$.
\end{example}
\begin{example}
Write $\Omega=\cup_{n\ge1}\Omega_n$ with $(\Omega_n)_{n\ge1}$ being
a disjoint sequence of measurable sets and take a sequence of
strictly positive elements $(\alpha_n)_{n\ge1}\in\ell^1$. Let us
consider the Radon measure
$\xi=\sum_{n\ge1}\alpha_n\delta_{g\chi_{\Omega_n}}$ on
$B_{(X_p)'}^+$, where $\delta_{g\chi_{\Omega_n}}$ is the Dirac
measure at $g\chi_{\Omega_n}$ with $g\in(X_p)'$ being a weak unit.
Note that for every positive function $\phi\in L^0(\xi)$, it follows
that
$\int_{B_{(X_p)'}^+}\phi\,d\xi=\sum_{n\ge1}\alpha_n\phi(g\chi_{\Omega_n})$.
If $A\in\Sigma$ is such that
$$
0=\int_{B_{(X_p)'}^+}\Big(\int_Ah(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)
=\sum_{n\ge1}\alpha_n\Big(\int_{A\cap\Omega_n}g(\omega)\,d\mu(\omega)\Big)^{q/p}
$$
then, $\int_{A\cap\Omega_n}g(\omega)\,d\mu(\omega)=0$ for all
$n\ge1$. Hence,
$$
\int_Ag(\omega)\,d\mu(\omega)=\sum_{n\ge1}\int_{A\cap\Omega_n}g(\omega)\,d\mu(\omega)=0
$$
and so $g\chi_A=0$ $\mu$-a.e., from which $\mu(A)=0$. That is, $\xi$
satisfies \eqref{EQ: xiProperty}. For every $f\in L^0(\mu)$ we have
that
\begin{eqnarray*}
\int_{B_{(X_p)'}^+}\Big(\int_\Omega
|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)= \\
\sum_{n\ge1}\alpha_n\Big(\int_{\Omega_n}|f(\omega)|^pg(\omega)\,d\mu(\omega)\Big)^{q/p}.
\end{eqnarray*}
Then, the B.f.s.\ $S_{X_p}^{\,q}(\xi)$ can be described as the space
of functions $f\in \cap_{n\ge1}L^p(g\chi_{\Omega_n}d\mu)$ such that
$\big(\alpha_n^{1/q}\Vert
f\Vert_{L^p(g\chi_{\Omega_n}d\mu)}\big)_{n\ge1}\in\ell^q$. Moreover,
$\Vert f\Vert_{S_{X_p}^{\,q}(\xi)}=\Big(\sum_{n\ge1}\alpha_n\,\Vert
f\Vert_{L^p(g\chi_{\Omega_n}d\mu)}^q\Big)^{1/q}$ for all $f\in
S_{X_p}^{\,q}(\xi)$.
\end{example}
\section{$p$-strongly $q$-concave operators}
Let $1\le p\le q<\infty$ and let $T\colon X\to E$ be a linear
operator from a saturated B.f.s.\ $X$ into a Banach space $E$.
Recall that $T$ is said to be \emph{$q$-concave} if there exists a
constant $C>0$ such that
$$
\Big(\sum_{i=1}^n\Vert T(f_i) \Vert_E^q\Big)^{1/q} \le C
\Big\Vert\Big(\sum_{i=1}^n|f_i|^q\Big)^{1/q}\,\Big\Vert_X
$$
for every finite subset $(f_i)_{i=1}^n\subset X$. The smallest
possible value of $C$ will be denoted by $M_q(T)$. For issues
related to $q$-concavity see for instance
\cite[Ch.\,1.d]{lindenstrauss-tzafriri}. We introduce a slightly
stronger notion than $q$-concavity: $T$ will be called
\emph{$p$-strongly $q$-concave} if there exists $C>0$ such that
$$
\Big(\sum_{i=1}^n\Vert T(f_i)\Vert_E^q\Big)^{1/q}\le C
\sup_{(\beta_i)_{i\ge1} \in B_{\ell^r}}\Big\Vert\Big(\sum_{i=1}^n
|\beta_if_i|^p\Big)^{1/p}\,\Big\Vert_X
$$
for every finite subset $(f_i)_{i=1}^n\subset X$, where $1<
r\le\infty$ is such that $\frac{1}{r}=\frac{1}{p}-\frac{1}{q}$. In
this case, $M_{p,q}(T)$ will denote the smallest constant $C$
satisfying the above inequality. Noting that $\frac{r}{p}$ and
$\frac{q}{p}$ are conjugate exponents, it is clear that every
$p$-strongly $q$-concave operator is $q$-concave and so continuous,
and moreover $\Vert T\Vert\le M_q(T)\le M_{p,q}(T)$. As usual, we
will say that $X$ is \emph{$p$-strongly $q$-concave} if the identity
map $I\colon X\to X$ is so, and in this case, we denote
$M_{p,q}(X)=M_{p,q}(I)$.
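To see how this notion connects the two extreme cases discussed
above, consider the case $p=q$: then $\frac{1}{r}=0$, so $r=\infty$,
and by the lattice monotonicity of the norm
$$
\sup_{(\beta_i)_{i\ge1}\in B_{\ell^\infty}}\Big\Vert\Big(\sum_{i=1}^n
|\beta_if_i|^q\Big)^{1/q}\Big\Vert_X=
\Big\Vert\Big(\sum_{i=1}^n|f_i|^q\Big)^{1/q}\Big\Vert_X,
$$
the supremum being attained at $\beta_i=1$ for all $i$. Hence
$q$-strongly $q$-concave means exactly $q$-concave, with
$M_{q,q}(T)=M_q(T)$.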
Our goal is to get a continuous extension of $T$ to a space of the
type $S_{X_p}^{\,q}(\xi)$ in the case when $T$ is $p$-strongly
$q$-concave and $X$ is $p$-convex. To this end we will need to
describe the supremum on the right-hand side of the $p$-strongly
$q$-concave inequality in terms of the K\"{o}the dual of $X_p$.
\begin{lemma}\label{LEM: Sup-lr-Xp*}
If $X$ is $p$-convex and order semi-continuous then
$$
\sup_{(\beta_i)_{i\ge1}\in B_{\ell^r}}\Big\Vert\Big(\sum_{i=1}^n
|\beta_if_i|^p\Big)^{1/p}\,\Big\Vert_X=\sup_{h\in B_{(X_p)'}^+}
\Big(\sum_{i=1}^n\Big(\int|f_i|^ph\,d\mu\Big)^{q/p}\,\Big)^{1/q}
$$
for every finite subset $(f_i)_{i=1}^n\subset X$, where $1<
r\le\infty$ is such that $\frac{1}{r}=\frac{1}{p}-\frac{1}{q}$ and
$B_{(X_p)'}^+$ is the subset of all positive elements of the closed
unit ball $B_{(X_p)'}$ of $(X_p)'$.
\end{lemma}
\begin{proof}
Given $(f_i)_{i=1}^n\subset X$, since $X_p$ is order
semi-continuous, as $X$ is so, and $(\ell^{q/p})^*=\ell^{r/p}$, as
$\frac{r}{p}$ is the conjugate exponent of $\frac{q}{p}$, we have
that
\begin{eqnarray*}
\sup_{(\beta_i)\in B_{\ell^r}}\Big\Vert\Big(\sum_{i=1}^n
|\beta_if_i|^p\Big)^{1/p}\,\Big\Vert_X^p & = & \sup_{(\beta_i)\in
B_{\ell^r}}\Big\Vert\sum_{i=1}^n |\beta_if_i|^p\,\Big\Vert_{X_p}
\\ & = & \sup_{(\beta_i)\in
B_{\ell^r}}\sup_{h\in B_{(X_p)'}}\int
\sum_{i=1}^n|\beta_if_i|^p|h|\,d\mu \\ & = & \sup_{(\beta_i)\in
B_{\ell^r}}\sup_{h\in B_{(X_p)'}^+}\int
\sum_{i=1}^n|\beta_if_i|^ph\,d\mu \\ & = & \sup_{h\in
B_{(X_p)'}^+}\,\sup_{(\beta_i)\in B_{\ell^r}}
\sum_{i=1}^n|\beta_i|^p\int |f_i|^ph\,d\mu
\\ & = &
\sup_{h\in B_{(X_p)'}^+}\,\sup_{(\alpha_i)\in B_{\ell^{r/p}}^+}
\sum_{i=1}^n\alpha_i\int |f_i|^ph\,d\mu \\ & = & \sup_{h\in
B_{(X_p)'}^+}
\Big(\sum_{i=1}^n\Big(\int|f_i|^ph\,d\mu\Big)^{q/p}\,\Big)^{p/q}.
\end{eqnarray*}
\end{proof}
In the following remark, from Lemma \ref{LEM: Sup-lr-Xp*}, we
easily obtain an example of a $p$-strongly $q$-concave operator.
\begin{remark}\label{REM: i-pq-concave}
Suppose that $X$ is $p$-convex and order semi-continuous. For every
finite positive Radon measure $\xi$ on $B_{(X_p)'}^+$ satisfying
\eqref{EQ: xiProperty}, it follows that the inclusion map $i\colon X
\to S_{X_p}^{\,q}(\xi)$ is $p$-strongly $q$-concave. Indeed, for
each $(f_i)_{i=1}^n \subset X$, we have that
\begin{eqnarray*}
\sum_{i=1}^n\Vert f_i\Vert_{S_{X_p}^{\,q}(\xi)}^q & = &
\sum_{i=1}^n\int_{B_{(X_p)'}^+}\Big(\int_\Omega|f_i(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)
\\ & \le & \xi\big(B_{(X_p)'}^+\big)\sup_{h\in
B_{(X_p)'}^+}\sum_{i=1}^n\Big(\int_\Omega|f_i(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}
\end{eqnarray*}
and so, Lemma \ref{LEM: Sup-lr-Xp*} gives the conclusion for
$M_{p,q}(i)\le\xi\big(B_{(X_p)'}^+\big)^{1/q}$.
\end{remark}
Now let us prove our main result.
\begin{theorem}\label{THM: xiDomination}
If $T$ is $p$-strongly $q$-concave and $X$ is $p$-convex and order
semi-continuous, then there exists a probability Radon measure $\xi$
on $B_{(X_p)'}^+$ satisfying \eqref{EQ: xiProperty} such that
\begin{equation}\label{EQ: xiDomination}
\Vert T(f)\Vert_E\le
M_{p,q}(T)\Big(\int_{B_{(X_p)'}^+}\Big(\int_\Omega|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}\,d\xi(h)\Big)^{1/q}
\end{equation}
for all $f\in X$.
\end{theorem}
\begin{proof}
Recall that the stated topology on $(X_p)'$ is $\sigma((X_p)',X_p)$,
the one which is defined by the elements of $X_p$. For each finite
subset (with possibly repeated elements) $M=(f_i)_{i=1}^m\subset X$,
consider the map $\psi_M\colon B_{(X_p)'}^+\to [0,\infty)$ defined
by
$\psi_M(h)=\sum_{i=1}^m\big(\int_{\Omega}|f_i|^p\,h\,d\mu\big)^{q/p}$
for $h\in B_{(X_p)'}^+$. Note that $\psi_M$ attains its supremum as
it is continuous on a compact set, so there exists $h_M\in
B_{(X_p)'}^+$ such that $\sup_{h\in
B_{(X_p)'}^+}\psi_M(h)=\psi_M(h_M)$. Then, the $p$-strong
$q$-concavity of $T$, together with Lemma \ref{LEM: Sup-lr-Xp*},
gives
\begin{eqnarray}\label{EQ: xM*-inequality}
\sum_{i=1}^m\Vert T(f_i)\Vert_E^q & \le & M_{p,q}(T)^q\sup_{h\in
B_{(X_p)'}^+}\sum_{i=1}^m\Big(\int_{\Omega}|f_i|^ph\,d\mu\Big)^{q/p}
\nonumber
\\ & \le & M_{p,q}(T)^q\sup_{h\in B_{(X_p)'}^+}\psi_M(h) \nonumber \\ & = &
M_{p,q}(T)^q\,\psi_M(h_M).
\end{eqnarray}
Consider now the continuous map $\phi_M\colon B_{(X_p)'}^+\to
\mathbb{R}$ defined by
$$
\phi_M(h)=M_{p,q}(T)^q\,\psi_M(h)-\sum_{i=1}^m\Vert T(f_i)\Vert_E^q
$$
for $h\in B_{(X_p)'}^+$. Take $B=\{\phi_M:\, M \textnormal{ is a
finite subset of } X\}$. Since for every $M=(f_i)_{i=1}^m,\,
M'=(f'_i)_{i=1}^k\subset X$ and $0<t<1$, it follows that
$t\phi_M+(1-t)\phi_{M'}=\phi_{M''}$ where
$M''=\big(t^{1/q}f_i\big)_{i=1}^m\cup\big((1-t)^{1/q}f'_i\big)_{i=1}^k$,
we have that $B$ is convex. Denote by $\mathcal{C}(B_{(X_p)'}^+)$
the space of continuous real functions on $B_{(X_p)'}^+$, endowed
with the supremum norm, and by $A$ the open convex subset $\{\phi\in
\mathcal{C}(B_{(X_p)'}^+):\, \phi(h)<0 \, \textnormal{ for all }
h\in B_{(X_p)'}^+\}$. By \eqref{EQ: xM*-inequality} we have that
$A\cap B=\emptyset$. From the Hahn-Banach separation theorem, there
exist $\xi\in \mathcal{C}(B_{(X_p)'}^+)^*$ and $\alpha\in\mathbb{R}$
such that $\langle\xi,\phi\rangle<\alpha\le\langle\xi,\phi_M\rangle$
for all $\phi\in A$ and $\phi_M\in B$. Since every negative constant
function is in $A$, it follows that $0 \le\alpha$. Even more,
$\alpha=0$ as the constant function equal to $0$ is just
$\phi_{\{0\}}\in B$. It is routine to see that
$\langle\xi,\phi\rangle\ge0$ whenever
$\phi\in\mathcal{C}(B_{(X_p)'}^+)$ is such that $\phi(h)\ge0$ for
all $h\in B_{(X_p)'}^+$. Then, $\xi$ is a positive linear functional
on $\mathcal{C}(B_{(X_p)'}^+)$ and so it can be interpreted as a
finite positive Radon measure on $B_{(X_p)'}^+$. Hence, we have that
$$
0\le\int_{B_{(X_p)'}^+}\phi_M\,d\xi
$$
for every finite subset $M\subset X$. Dividing by $\xi(B_{(X_p)'}^+)$,
we can suppose that $\xi$ is a probability measure. Then, for
$M=\{f\}$ with $f\in X$, we obtain that
$$
\Vert T(f)\Vert_E^q\le
M_{p,q}(T)^q\int_{B_{(X_p)'}^+}\Big(\int_{\Omega}|f(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}\,
d\xi(h)
$$
and so \eqref{EQ: xiDomination} holds.
\end{proof}
Actually, Theorem \ref{THM: xiDomination} says that we can find a
probability Radon measure $\xi$ on $B_{(X_p)'}^+$ such that $T\colon
X\to E$ is continuous when $X$ is considered with the norm of the
space $S_{X_p}^{\,q}(\xi)$. In the next result we will see how to
extend $T$ continuously to $S_{X_p}^{\,q}(\xi)$. Even more, we will
show that this extension is possible if and only if $T$ is
$p$-strongly $q$-concave.
\begin{theorem}\label{THM: SXpqExtension}
Suppose that $X$ is $p$-convex and order semi-continuous. The
following statements are equivalent:
\begin{itemize}\setlength{\leftskip}{-3ex}
\item[(a)] $T$ is $p$-strongly $q$-concave.
\item[(b)] There exists a probability Radon
measure $\xi$ on $B_{(X_p)'}^+$ satisfying \eqref{EQ: xiProperty}
such that $T$ can be extended continuously to $S_{X_p}^{\,q}(\xi)$,
i.e.\ there is a factorization for $T$ as
$$
\xymatrix{
X \ar[rr]^T \ar@{.>}[dr]_(.4){i} & & E \\
& S_{X_p}^{\,q}(\xi) \ar@{.>}[ur]_(.6){\widetilde{T}} & }
$$
where $\widetilde{T}$ is a continuous linear operator and $i$ is the
inclusion map.
\end{itemize}
If (a) and (b) hold, then $M_{p,q}(T)=\Vert\widetilde{T}\Vert$.
\end{theorem}
\begin{proof}
(a) $\Rightarrow$ (b) From Theorem \ref{THM: xiDomination}, there is
a probability Radon measure $\xi$ on $B_{(X_p)'}^+$ satisfying
\eqref{EQ: xiProperty} such that $\Vert T(f)\Vert_E\le
M_{p,q}(T)\Vert f\Vert_{S_{X_p}^{\,q}(\xi)}$ for all $f\in X$. Given
$0\le f\in S_{X_p}^{\,q}(\xi)$, from Lemma \ref{LEM: saturatedBfs},
we can take $(f_n)_{n\ge1}\subset X$ such that $0\le f_n\uparrow f$
$\mu$-a.e. Then, since $S_{X_p}^{\,q}(\xi)$ is order continuous, we
have that $f_n\to f$ in $S_{X_p}^{\,q}(\xi)$ and so
$\big(T(f_n)\big)_{n\ge1}$ converges to some element $e$ of $E$.
Define $\widetilde{T}(f)=e$. Note that $\widetilde{T}$ is well
defined, since if $(g_n)_{n\ge1}\subset X$ is such that $0\le
g_n\uparrow f$ $\mu$-a.e., then
$$
\Vert T(f_n)-T(g_n)\Vert_E \le M_{p,q}(T)\Vert
f_n-g_n\Vert_{S_{X_p}^{\,q}(\xi)}\to0.
$$
Moreover,
\begin{eqnarray*}
\Vert \widetilde{T}(f)\Vert_E & = & \lim_{n\to\infty}\Vert
T(f_n)\Vert_E \\ & \le & M_{p,q}(T)\lim_{n\to\infty}\Vert
f_n\Vert_{S_{X_p}^{\,q}(\xi)} \\ & = & M_{p,q}(T)\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}.
\end{eqnarray*}
For a general $f\in S_{X_p}^{\,q}(\xi)$, writing $f=f^+-f^-$ where
$f^+$ and $f^-$ are the positive and negative parts of $f$
respectively, we define
$\widetilde{T}(f)=\widetilde{T}(f^+)-\widetilde{T}(f^-)$. Then,
$\widetilde{T}\colon S_{X_p}^{\,q}(\xi)\to E$ is a continuous linear
operator extending $T$. Moreover $\Vert \widetilde{T}\Vert\le
M_{p,q}(T)$. Indeed, let $f\in S_{X_p}^{\,q}(\xi)$ and take
$(f_n^+)_{n\ge1},\,(f_n^-)_{n\ge1}\subset X$ such that $0\le
f_n^+\uparrow f^+$ and $0\le f_n^-\uparrow f^-$ $\mu$-a.e. Then,
$f_n^+-f_n^-\to f$ in $S_{X_p}^{\,q}(\xi)$ and
$$
T(f_n^+-f_n^-)=T(f_n^+)-T(f_n^-)\to
\widetilde{T}(f^+)-\widetilde{T}(f^-)=\widetilde{T}(f)
$$
in $E$. Hence,
\begin{eqnarray*}
\Vert \widetilde{T}(f)\Vert_E & = & \lim_{n\to\infty}\Vert
T(f_n^+-f_n^-)\Vert_E \\ & \le & M_{p,q}(T)\lim_{n\to\infty}\Vert
f_n^+-f_n^-\Vert_{S_{X_p}^{\,q}(\xi)} \\ & = & M_{p,q}(T)\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}.
\end{eqnarray*}
(b) $\Rightarrow$ (a) Given $(f_i)_{i=1}^n\subset X$, we have that
\begin{eqnarray*}
\sum_{i=1}^n\Vert T(f_i)\Vert_E^q & = & \sum_{i=1}^n\Vert
\widetilde{T}(f_i)\Vert_E^q \le
\Vert\widetilde{T}\Vert^q\sum_{i=1}^n\Vert
f_i\Vert_{S_{X_p}^{\,q}(\xi)}^q \\ & = & \Vert\widetilde{T}\Vert^q
\sum_{i=1}^n\int_{B_{(X_p)'}^+}\Big(\int_\Omega|f_i(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}d\xi(h)
\\ & \le & \Vert\widetilde{T}\Vert^q
\sup_{h\in
B_{(X_p)'}^+}\sum_{i=1}^n\Big(\int_\Omega|f_i(\omega)|^ph(\omega)\,d\mu(\omega)\Big)^{q/p}.
\end{eqnarray*}
That is, from Lemma \ref{LEM: Sup-lr-Xp*}, $T$ is $p$-strongly
$q$-concave with $M_{p,q}(T)\le\Vert\widetilde{T}\Vert$.
\end{proof}
A first application of Theorem \ref{THM: SXpqExtension} is the
following Kakutani type representation theorem (see for instance
\cite[Theorem 1.b.2]{lindenstrauss-tzafriri}) for B.f.s.' being
order semi-continuous, $p$-convex and $p$-strongly $q$-concave.
\begin{corollary}\label{COR: i-isomorphism}
Suppose that $X$ is $p$-convex and order semi-continuous. The
following statements are equivalent:
\begin{itemize}\setlength{\leftskip}{-3ex}
\item[(a)] $X$ is $p$-strongly $q$-concave.
\item[(b)] There exists a probability Radon measure $\xi$ on
$B_{(X_p)'}^+$ satisfying \eqref{EQ: xiProperty}, such that
$X=S_{X_p}^{\,q}(\xi)$ with equivalent norms.
\end{itemize}
\end{corollary}
\begin{proof}
(a) $\Rightarrow$ (b) The identity map $I\colon X\to X$ is
$p$-strongly $q$-concave as $X$ is so. Then, from Theorem \ref{THM:
SXpqExtension}, there exists a probability Radon measure $\xi$ on
$B_{(X_p)'}^+$ satisfying \eqref{EQ: xiProperty}, such that $I$
factors as
$$
\xymatrix{
X \ar[rr]^I \ar@{.>}[dr]_(.4){i} & & X \\
& S_{X_p}^{\,q}(\xi) \ar@{.>}[ur]_(.6){\widetilde{I}} & }
$$
where $\widetilde{I}$ is a continuous linear operator with $\Vert
\widetilde{I}\Vert=M_{p,q}(X)$ and $i$ is the inclusion map. Since
$\xi$ is a probability measure, we have that $\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}\le\Vert f\Vert_X$ for all $f\in X$, see
the proof of Proposition \ref{PROP: SXpq(xi)-space}. Let $0\le f\in
S_{X_p}^{\,q}(\xi)$. By Lemma \ref{LEM: saturatedBfs}, we can take
$(f_n)_{n\ge1}\subset X$ such that $0\le f_n\uparrow f$ $\mu$-a.e.
Since $S_{X_p}^{\,q}(\xi)$ is order continuous, it follows that
$f_n\to f$ in $S_{X_p}^{\,q}(\xi)$ and so $f_n=\widetilde{I}(f_n)\to
\widetilde{I}(f)$ in $X$. Then, there is a subsequence of
$(f_n)_{n\ge1}$ converging $\mu$-a.e.\ to $\widetilde{I}(f)$ and
hence $f=\widetilde{I}(f)\in X$. For a general $f\in
S_{X_p}^{\,q}(\xi)$, writing $f=f^+-f^-$ where $f^+$ and $f^-$ are
the positive and negative parts of $f$ respectively, we have that
$f=\widetilde{I}(f^+)-\widetilde{I}(f^-)=\widetilde{I}(f)\in X$.
Therefore, $X=S_{X_p}^{\,q}(\xi)$ and $\widetilde{I}$ is the identity
map. Moreover, $\Vert f\Vert_X=\Vert \widetilde{I}(f)\Vert_X\le
\Vert \widetilde{I}\Vert\,\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}=M_{p,q}(X)\Vert
f\Vert_{S_{X_p}^{\,q}(\xi)}$ for all $f\in X$.
(b) $\Rightarrow$ (a) From Remark \ref{REM: i-pq-concave} it follows
that the identity map $I\colon X \to X$ is $p$-strongly $q$-concave.
\end{proof}
Note that, under the conditions of Corollary \ref{COR: i-isomorphism}, if
$X$ is $p$-strongly $q$-concave with constant $M_{p,q}(X)=1$, then
$X=S_{X_p}^{\,q}(\xi)$ with equal norms.
\section{$q$-summing operators on a $p$-convex B.f.s.}
Recall that a linear operator $T\colon X\to E$ between Banach spaces
is said to be \emph{$q$-summing} ($1\le q<\infty$) if there exists a
constant $C>0$ such that
$$
\Big(\sum_{i=1}^n\Vert Tx_i\Vert_E^q\Big)^{1/q}\le C\sup_{x^*\in
B_{X^*}}\Big(\sum_{i=1}^n|\langle x^*,x_i\rangle|^q\Big)^{1/q}
$$
for every finite subset $(x_i)_{i=1}^n\subset X$. Denote by
$\pi_q(T)$ the smallest possible value of $C$. Information about
$q$-summing operators can be found in \cite{diestel-jarchow-tonge}.
One of the main relations between summability and concavity for
operators defined on a B.f.s.\ $X$, is that every $q$-summing
operator is $q$-concave. This is a consequence of a direct
calculation which shows that for every $(f_i)_{i=1}^n\subset X$ and
$x^*\in X^*$ it follows that
\begin{equation}\label{EQ: q-norm}
\Big(\sum_{i=1}^n|\langle x^*,f_i\rangle|^q\Big)^{1/q}\le\Vert
x^*\Vert_{X^*}\Big\Vert\Big(\sum_{i=1}^n|f_i|^q\Big)^{1/q}\Big\Vert_X,
\end{equation}
see for instance \cite[Proposition 1.d.9]{lindenstrauss-tzafriri}
and the comments below. However, this calculation can be slightly
improved to obtain the following result.
\begin{proposition}\label{PROP: q-Summing}
Let $1\le p\le q<\infty$. Every $q$-summing linear operator $T\colon
X \to E$ from a B.f.s.\ $X$ into a Banach space $E$, is $p$-strongly
$q$-concave with $M_{p,q}(T)\le\pi_q(T)$.
\end{proposition}
\begin{proof}
Let $1<r\le\infty$ be such that
$\frac{1}{r}=\frac{1}{p}-\frac{1}{q}$ and consider a finite subset
$(f_i)_{i=1}^n\subset X$. We only have to prove
$$
\sup_{x^*\in B_{X^*}}\Big(\sum_{i=1}^n|\langle
x^*,f_i\rangle|^q\Big)^{1/q}\le\sup_{(\beta_i)_{i\ge1}\in
B_{\ell^r}}\Big\Vert\Big(\sum_{i=1}^n|\beta_if_i|^p\Big)^{1/p}\Big\Vert_X.
$$
Fix $x^*\in B_{X^*}$. Noting that $\frac{q}{p}$ and $\frac{r}{p}$
are conjugate exponents and using the inequality \eqref{EQ: q-norm},
we have
\begin{eqnarray*}
\Big(\sum_{i=1}^n|\langle x^*,f_i\rangle|^q\Big)^{1/q} & = &
\sup_{(\alpha_i)_{i\ge1}\in
B_{\ell^{r/p}}}\Big(\sum_{i=1}^n|\alpha_i||\langle
x^*,f_i\rangle|^p\Big)^{1/p} \\ & = & \sup_{(\beta_i)_{i\ge1}\in
B_{\ell^r}}\Big(\sum_{i=1}^n|\langle
x^*,\beta_if_i\rangle|^p\Big)^{1/p} \\ & \le &
\sup_{(\beta_i)_{i\ge1}\in B_{\ell^r}}\Big\Vert\Big(\sum_{i=1}^n
|\beta_if_i|^p\Big)^{1/p}\Big\Vert_X.
\end{eqnarray*}
Taking supremum in $x^*\in B_{X^*}$ we get the conclusion.
\end{proof}
From Proposition \ref{PROP: q-Summing}, Theorem \ref{THM:
SXpqExtension} and Remark \ref{REM: i-pq-concave}, we obtain the
final result.
\begin{corollary}\label{COR: q-summing-extension}
Set $1\le p\le q<\infty$. Let $X$ be a saturated order
semi-continuous $p$-convex B.f.s.\ and consider a $q$-summing linear
operator $T\colon X \to E$ with values in a Banach space $E$. Then,
there exists a probability Radon measure $\xi$ on $B_{(X_p)'}^+$
satisfying \eqref{EQ: xiProperty} such that $T$ can be factored as
$$
\xymatrix{
X \ar[rr]^T \ar@{.>}[dr]_(.4){i} & & E \\
& S_{X_p}^{\,q}(\xi) \ar@{.>}[ur]_(.6){\widetilde{T}} & }
$$
where $\widetilde{T}$ is a continuous linear operator with
$\Vert\widetilde{T}\Vert\le\pi_q(T)$ and $i$ is the inclusion map
which turns out to be $p$-strongly $q$-concave, and so $q$-concave.
\end{corollary}
Observe that what we obtain in Corollary \ref{COR:
q-summing-extension} is a proper extension of $T$, and not just a
factorization like the one obtained in the Pietsch theorem, where
$q$-summing operators factor through a subspace of an $L^q$-space.
\section{Introduction}
Galaxy clusters, the most massive gravitational bound systems, continually accrete material from their surroundings. In fact, a substantial fraction of their accretion is recent; clusters increase their mass by a factor of two between $z \sim 0.5$ and the present (e.g., \citealp{Zhao09, Fakhouri10, vandenBosch14, Haines18}). Thus a dense redshift survey that explores this epoch can place interesting constraints on the co-evolution of clusters and their members (e.g., \citealp{Dressler84, Blanton09, Peng10, Haines13, Wetzel14, Gullieuszik15, Sohn20}).
Catalogs of clusters are the necessary foundation for studying clusters and their members. Several techniques yield galaxy cluster catalogs. For example, X-ray observations reveal large samples of galaxy clusters by tracing the X-ray emitting hot intracluster medium (e.g., \citealp{Edge90, Gioia90, Ebeling98, Ebeling10, Bohringer00, Bohringer01, Bohringer17, Pacaud16}). The intergalactic medium in rich clusters also distorts the cosmic microwave background spectrum (the Sunyaev-Zel'dovich (SZ) effect) providing another route to cluster identification (e.g., \citealp{Melin06, Vanderlinde10, Marriage11, Bleem15, PlanckCollaboration15, PlanckCollaboration16}). X-ray and the SZ observations of clusters not only detect clusters, but also provide a measure of the cluster mass.
Identifying galaxy over-densities in optical and infrared (IR) imaging is a long-standing technique for obtaining large samples of clusters. Since the first systematic survey of clusters by \citet{Abell58}, many surveys have identified galaxy clusters photometrically based on various optical and infrared imaging surveys (e.g., \citealp{Zwicky68, Abell89, Gladders00, Koester07, Wen09, Hao10, Rykoff14, Oguri18, Gonzalez19}).
Dense spectroscopic surveys enable a robust identification of cluster members. Redshift measurements of the individual galaxies in the cluster field clearly separate the cluster members and interlopers. Previous studies compile spectroscopic redshift measurements of galaxies in clusters identified by other methods (e.g., X-ray, optical, and IR imaging) to refine these cluster catalogs (e.g., \citealp{Rozo15, Clerc16, Sohn18b, Sohn18a, Rines18, Myles20, Kirkpatrick21}). Other studies identify galaxy over-densities or, equivalently, clusters in redshift space (e.g., \citealp{Huchra82, Eke04, Berlind06, Robotham11, Tago10, Tempel14}). These catalogs generally provide an estimate of the cluster velocity dispersion, a mass proxy that complements other estimates.
HectoMAP \citep{Geller15, Hwang16, Sohn21} is a large-scale redshift survey designed to study galaxy cluster evolution at intermediate redshift, where clusters grow in mass by a factor of two. HectoMAP covers $\sim55$ deg$^{2}$ of the sky with $\sim 2000$ redshifts deg$^{-2}$. This high density survey enables robust identification of galaxy clusters based only on spectroscopy. Here, we apply a friends-of-friends (FoF) algorithm to identify galaxy clusters in HectoMAP based purely on the spectroscopy. The resulting cluster catalog includes 346 systems at $z \leq 0.6$, each with 10 or more members (5992 members in total).
The HectoMAP region is included in the Subaru/Hyper Suprime-Cam (HSC) Strategic Survey Program (SSP) project \citep{Miyazaki12, Aihara18}. The exquisite imaging combined with the dense spectroscopy provides a platform for exploring the co-evolution of the 346 FoF clusters and their BCGs. In addition to the redshifts, the HectoMAP survey provides central velocity dispersions for all of the BCGs in the FoF catalog. The redshift coverage and the mass range of the HectoMAP FoF clusters enable a clean exploration of the relationship between the cluster velocity dispersion and the central velocity dispersion of the BCG as a function of redshift (e.g., \citealp{Sohn20}). This relationship is a test of current simulations of the growth of structure in $\Lambda$CDM.
We first introduce the HectoMAP redshift survey in Section \ref{sec:data}. We describe the cluster identification algorithm in Section \ref{sec:identification}. In Section \ref{sec:cat}, we introduce the HectoMAP cluster catalog, and we also explore the physical properties of the HectoMAP clusters. We then investigate the connection between HectoMAP clusters and their BCGs as a test of simulations (Section \ref{sec:connection} and Section \ref{sec:discussion}). We conclude in Section \ref{sec:conclusion}. We use the standard $\Lambda$CDM cosmology with $H_{0} = 70~\rm km~s^{-1}~Mpc^{-1}$, $\Omega_{m} = 0.3$, $\Omega_{\Lambda} = 0.7$, and $\Omega_{k} = 0.0$ throughout.
\section{HectoMAP}\label{sec:data}
HectoMAP is a dense redshift survey of the intermediate-age universe with a median redshift $z \sim 0.31$ \citep{Geller11,Geller15,Hwang16,Sohn21}. The survey field is located at $200 < $ R.A. (deg) $< 250$ and $42.5 < $ Decl. (deg) $< 44.0$, covering 54.64 deg$^{2}$ of the sky. The full survey includes $\sim 110,000$ spectroscopic redshifts and the typical galaxy number density is $\sim 2000$ deg$^{-2}$.
HectoMAP is included in the Subaru/HSC SSP fields \citep{Miyazaki12, Aihara18}. \citet{Sohn21} published the spectroscopic data within 8.7 deg$^{2}$ that includes the HSC/SSP Data Release (DR) 1 coverage. \citet{Sohn21} described the details of the HectoMAP survey. Here we briefly review the photometric and spectroscopic data.
\subsection{Photometry}\label{sec:phot}
SDSS DR16 \citep{SDSSDR16} is the photometric basis of HectoMAP. We select galaxies with SDSS $probPSF = 0$, where $probPSF$ indicates the probability that the object is a star. Following \citet{Sohn21}, we use Petrosian magnitudes for the galaxies and we compute galaxy colors based on model magnitudes.
Because the HectoMAP survey covers a wide redshift range, we apply the $K-$correction to the galaxy photometry. We use the $kcorrect$ code \citep{Blanton07} to derive the $K-$correction at $z = 0.35$, close to the median redshift of the HectoMAP survey. Hereafter, we use galaxy magnitudes and colors after both foreground-extinction and K-correction.
\subsection{Spectroscopy}\label{sec:spec}
The HectoMAP spectroscopy comes from two major spectroscopic surveys: SDSS/BOSS and our own MMT/Hectospec survey. We first compiled the SDSS DR16 spectroscopy which includes 25524 SDSS and BOSS redshifts within the HectoMAP field. The typical redshift uncertainty of these SDSS/BOSS measurements is $\sim 36~\rm km~s^{-1}$.
The majority of HectoMAP spectroscopy is from the Multiple Mirror Telescope (MMT)/Hectospec survey. Hectospec is a multi-object fiber-fed spectrograph mounted on the MMT 6.5m telescope \citep{Fabricant98, Fabricant05}. Hectospec has 300 fibers deployable over a 1 degree diameter field. A Hectospec spectrum, obtained through a 270 mm$^{-1}$ grating, covers the wavelength range 3700 - 9100 {\rm \AA~} with an average resolution of 6.2 {\rm \AA}. The Hectospec survey was carried out from 2009 to 2019. The primary targets of HectoMAP are galaxies with $r < 20.5$ and $(g-r) > 1$, and galaxies with $20.5 \leq r < 21.3$, $g-r > 1$, and $r-i > 0.5$.
We reduce the Hectospec spectra using the standard HSRED v2.0 package\footnote{http://mmto.org/$\sim$rcool/hsred/}. We measure the redshifts using cross-correlation (RVSAO, \citealp{Kurtz98}). We visually inspect the cross-correlation results and classify them into three categories: `Q' for high quality fits, `?' for ambiguous fits, and `X' for poor fits. We use only redshifts with `Q' for further analysis. We note that the typical offset between the Hectospec and SDSS/BOSS redshifts is $\sim 26~\rm km~s^{-1}$ \citep{Sohn21}, less than the typical uncertainty in the Hectospec redshifts ($\sim 40~\rm km~s^{-1}$).
Figure \ref{complete} shows the HectoMAP spectroscopic survey completeness as a function of $r-$band magnitude. The survey integral completeness for the main targets with $(g-r) > 1.0$ is 80\% at $r = 20.5$ and 62\% at $r = 21.3$. The survey is much less complete for bluer objects outside the target range.
\begin{figure}
\centering
\includegraphics[scale=0.47]{fig1.pdf}
\caption{Spectroscopic survey completeness of HectoMAP as a function of $r-$band magnitude. Black, red, and blue lines show the integral completeness of the entire, red ($g-r \geq 1.0$) and blue ($g-r < 1.0$) subsamples, respectively. }
\label{complete}
\end{figure}
We derive two additional spectroscopic properties of the HectoMAP galaxies: $D_{n}4000$ and the central stellar velocity dispersion. We first measure the $D_{n}4000$ index, a stellar population age indicator (e.g., \citealp{Kauffmann03}). Following the definition from \citet{Balogh99}, we compute the flux ratio between $4000 - 4100$ \AA~ and $3850 - 3950$ \AA: $D_{n}4000 = F_{\lambda} (4000 - 4100) / F_{\lambda}(3850 - 3950)$. We use the $D_{n}4000$ index for characterizing brightest cluster galaxies in Section \ref{sec:connection}.
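As an illustration of the $D_{n}4000$ measurement above, the following minimal Python sketch computes the flux ratio between the two rest-frame windows. It is not the survey pipeline; the wavelength grid (in \AA), the simple mean over each window, and the function name are our own simplifying assumptions:

```python
import numpy as np

def dn4000(wavelength, flux):
    """Ratio of the mean flux in 4000-4100 A to that in 3850-3950 A.

    `wavelength` is a rest-frame grid in Angstroms; `flux` is the
    spectrum sampled on that grid (arbitrary units cancel in the ratio).
    """
    red = (wavelength >= 4000.0) & (wavelength <= 4100.0)
    blue = (wavelength >= 3850.0) & (wavelength <= 3950.0)
    return np.mean(flux[red]) / np.mean(flux[blue])
```

For a flat spectrum the ratio is 1 by construction; older, redder stellar populations yield $D_{n}4000 > 1$.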
We also derive the central stellar velocity dispersion of HectoMAP galaxies. For SDSS/BOSS spectra, we obtain the stellar velocity dispersion from the Portsmouth data reduction. The Portsmouth data reduction \citep{Thomas13} measures the velocity dispersion using the Penalized Pixel-Fitting (pPXF) code \citep{Cappellari04}. There are 10,992 HectoMAP galaxies with Portsmouth velocity dispersion measurements.
For HectoMAP galaxies with MMT/Hectospec spectra, we estimate the velocity dispersion using the University of Lyon Spectroscopic Analysis Software (ULySS, \citealp{Koleva09}). ULySS derives the velocity dispersion by comparing the observed spectra with stellar population templates based on the PEGASE-HR code and the MILES stellar library. We use the rest-frame spectral range $4100 - 5500$ \AA~ for deriving the stellar velocity dispersion to minimize the velocity dispersion uncertainty. A total of 91\% of quiescent galaxies in HectoMAP have a measured velocity dispersion.
Because the fiber sizes of Hectospec ($0.75\arcsec$ radius) and SDSS ($1.5\arcsec$ radius) differ, we apply an aperture correction. The aperture correction is defined as $\sigma_{A}/\sigma_{B} = (R_{A} / R_{B})^{\beta}$, where $\sigma$ is the stellar velocity dispersion, $R$ is the fiber aperture. We use the aperture correction coefficient $\beta = -0.054 \pm 0.005$ following \citet{Sohn17}. We correct the velocity dispersion to a fiducial physical aperture 3 kpc \citep{Zahid17, Sohn17, Sohn20}: $\sigma_{3 {\rm kpc}} / \sigma_{\rm SDSS/Hecto} = (3 {\rm kpc} / R_{SDSS/Hecto})^{\beta}$, where $R_{SDSS/Hecto}$ is the physical scale corresponding to SDSS/Hectospec aperture. We note that the median difference between the raw and aperture corrected velocity dispersions is small ($\sim 3\%$). In Section \ref{sec:connection}, we use these velocity dispersions to explore the relationship between the BCGs and their host clusters. Essentially all of the 346 BCGs have a measured velocity dispersion.
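A minimal sketch of the aperture correction above, assuming the physical fiber-aperture scale $R$ (in kpc) at the galaxy redshift has already been computed from the angular diameter distance; the function name and interface are our own:

```python
def aperture_corrected_sigma(sigma_obs, r_fiber_kpc, beta=-0.054):
    """Rescale an observed velocity dispersion to a fiducial 3 kpc aperture.

    Implements sigma_3kpc / sigma_obs = (3 kpc / R_obs)^beta with the
    coefficient beta = -0.054 quoted in the text (Sohn et al. 2017).
    """
    return sigma_obs * (3.0 / r_fiber_kpc) ** beta
```

Because $\beta < 0$ and the velocity dispersion profile declines outward, a measurement through an aperture smaller than 3 kpc is corrected downward, and one through a larger aperture is corrected upward; the correction is a few percent for typical fiber scales.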
\section{Cluster Identification}\label{sec:identification}
Our first goal is to identify galaxy clusters and their members based on spectroscopy. We describe the friends-of-friends (FoF) algorithm we use for identifying galaxy systems (Section \ref{sec:fof}). We then elucidate the empirical determination of linking lengths for the FoF algorithm (Section \ref{sec:ll}). We describe the construction of the full HectoMAP FoF catalog in Section \ref{sec:iden}, and we explore the properties of the cluster catalog in Section \ref{sec:explore}.
\subsection{Friends-of-Friends Algorithm}\label{sec:fof}
The FoF algorithm \citep{Huchra82} has a long history as a tool for identifying clusters of galaxies. Starting from a galaxy, the algorithm finds neighboring galaxies (friends) within a given linking length and repeats this search for the neighbors of the neighbors (friends of friends). The set of connected galaxies constitutes a single galaxy system.
The FoF algorithm is straightforward to apply to large surveys. Furthermore, the algorithm does not require any {\it a priori} physical assumptions about the galaxy systems including, but not limited to, their three dimensional geometry or their number density profile \citep{Duarte14}. Many previous studies build catalogs of galaxy systems using the FoF algorithm (e.g., \citealp{Huchra82, Barton96, Eke04, Berlind06, Tago10, Robotham11, Tempel12, Tempel14, Tempel16, Hwang16, Sohn16, Sohn18b}); these catalogs include galaxy systems on various scales from groups (e.g., \citealp{Ramella97, Sohn16}) to the large scale features in the cosmic web (e.g., \citealp{Hwang16}).
We apply the FoF algorithm in redshift space. The standard FoF algorithm \citep{Huchra82} in redshift space requires two linking lengths: one in the projected spatial direction ($\Delta D$) and one in the radial direction ($\Delta V$). We connect two galaxies if the separation between them in both the projected spatial and radial directions are smaller than the relevant linking lengths. We define the linking lengths as:
\begin{equation}
\Delta D = b_{proj} \times \bar{n}_{g} (z)^{-1/3},
\end{equation}
and
\begin{equation}
\Delta V = b_{radial} \times \bar{n}_{g} (z)^{-1/3},
\end{equation}
where $\bar{n}_{g} (z)$ is the mean galaxy volume number density of the survey (generally a function of redshift $z$), and $b_{proj}$ and $b_{radial}$ are the projected spatial and radial linking lengths in units of the mean galaxy separation within the survey at redshift $z$.
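A toy implementation may clarify the two-linking-length criterion. This sketch is not the survey code: it assumes galaxy positions are already expressed as projected coordinates in Mpc and line-of-sight velocities in km/s (converting RA/Dec/$z$ to these quantities is omitted), and it links pairs by a simple stack-based search:

```python
import numpy as np

def fof_groups(xy_mpc, v_kms, d_link_mpc, v_link_kms):
    """Redshift-space friends-of-friends: return one index array per system.

    Two galaxies are friends if BOTH their projected separation and their
    line-of-sight velocity difference fall below the linking lengths.
    """
    n = len(v_kms)
    labels = np.full(n, -1)          # -1 means "not yet assigned"
    current = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:                 # grow the group friend-by-friend
            i = stack.pop()
            d_proj = np.hypot(xy_mpc[:, 0] - xy_mpc[i, 0],
                              xy_mpc[:, 1] - xy_mpc[i, 1])
            dv = np.abs(v_kms - v_kms[i])
            friends = (d_proj < d_link_mpc) & (dv < v_link_kms) & (labels == -1)
            for j in np.where(friends)[0]:
                labels[j] = current
                stack.append(j)
        current += 1
    return [np.where(labels == g)[0] for g in range(current)]
```

Note that membership is transitive: a chain of close pairs is bundled into one system even if its endpoints are widely separated, which is exactly why the choice of linking lengths (next subsection) matters.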
The choice of linking length determines the nature of galaxy systems that the FoF algorithm identifies. For example, if the linking length is too large, galaxies that are not physically connected can be bundled into a galaxy system. In contrast, the FoF algorithm with a tight linking length breaks galaxy systems into smaller fragments and thus the algorithm detects only dense, compact systems. Despite its importance, the determination of optimal linking lengths is not straightforward \citep{Duarte14}.
The projected linking length determines the density contrast of systems identified by the FoF algorithm. \citet{Huchra82} demonstrate that the minimum galaxy overdensity of the FoF systems depends on the projected linking length:
\begin{equation}
\frac{\delta n}{n} = \frac{3}{4\pi b_{proj}^{3}} - 1.
\end{equation}
\citet{Duarte14} compare the FoF linking lengths used in various catalogs (see their Table 1). The minimum overdensity of previous FoF cluster surveys varies from 80 to 1100, corresponding to $0.06 < b_{proj} < 0.14$. A smaller projected linking length identifies denser systems. We use a projected linking length within this range; $b_{proj} \sim 0.13$ corresponds to a minimum overdensity of 110 (see Section \ref{sec:ll}).
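The minimum-overdensity relation above can be checked directly; for $b_{proj} = 0.13$ it gives $\delta n / n \approx 108$, close to the $\sim 110$ quoted above. A one-line helper (our own naming) suffices:

```python
import math

def min_overdensity(b_proj):
    """Minimum density contrast of FoF systems (Huchra & Geller 1982):
    delta n / n = 3 / (4 pi b_proj^3) - 1."""
    return 3.0 / (4.0 * math.pi * b_proj ** 3) - 1.0
```

Smaller $b_{proj}$ implies a higher minimum overdensity: the quoted range $0.06 < b_{proj} < 0.14$ maps onto roughly $80$ to $1100$.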
Many previous cluster surveys based on large redshift surveys and the FoF algorithm use variable linking lengths to cover the survey redshift range (e.g., \citealp{Huchra82, Eke04, Robotham11, Duarte14, Tempel16}). In general, the galaxy number density ($\bar{n}_{g} (z)$) varies as a function of redshift in a magnitude-limited redshift survey. Thus, the FoF algorithm with a fixed $b_{proj}$ and $b_{radial}$ identifies neighboring galaxies with different densities and density contrasts at different redshifts. Varying the linking length identifies systems with similar over-densities over the redshift survey range. One issue with this approach is that at the limiting redshift of the survey, where the mean galaxy density drops, the FoF algorithm bundles large numbers of unrelated galaxies into single extended systems.
Figure \ref{dmean} displays the mean separation ($\overline{D}_{mean} (z)$) of HectoMAP galaxies as a function of redshift. We compute $\overline{D}_{mean} (z)$ from the mean number density ($\bar{n}_{g} (z)$) in each redshift bin: $\overline{D}_{mean} (z) = \bar{n}_{g} (z)^{-1/3}$. As a result of the survey selection, which is not purely magnitude limited, the mean separation of HectoMAP galaxies remains constant over the range $0.1 \lesssim z \lesssim 0.45$ and increases only at $z \gtrsim 0.45$. The HectoMAP survey density drops rapidly at $z > 0.45$; this decrease occurs where the $20.5 < r < 21.3$ sample dominates the survey.
\begin{figure}
\centering
\includegraphics[scale=0.47]{fig2.pdf}
\caption{Mean separation of all HectoMAP galaxies (black circles) and HectoMAP galaxies in the volume-limited sample (red squares) as a function of redshift. }
\label{dmean}
\end{figure}
Applying the FoF algorithm to a volume-limited subsample is insensitive to selection biases introduced by the change in survey number density. Figure \ref{rmmagz} displays the foreground extinction- and K-corrected $r-$band magnitude of HectoMAP galaxies as a function of redshift. We derive the survey limit that corresponds to $r = 21.3$ based on the median foreground extinction- and K-correction as a function of redshift (the solid line). We then define a volume-limited sample with $z < 0.35$ and $M_{r} < -19.72$ (the dashed lines). In Figure \ref{dmean}, red squares show the mean separation of galaxies in the volume-limited sample. Indeed, the mean separation is constant within the redshift range of the volume-limited sample. In particular, the mean number density of the volume-limited sample does not decrease at $z < 0.1$, unlike the mean density of the full sample. Thus the linking length in the volume-limited sample is constant throughout its redshift range.
Because the volume limited sample covers most of the survey redshift range, we extend the FoF algorithm with a fixed linking length from the volume-limited sample throughout the survey (see Section \ref{sec:ll}). This approach enables identification of galaxy systems with similar physical properties \citep{Barton96, Sohn16}. Section \ref{sec:explore} discusses the systematics introduced by this choice.
\begin{figure}
\centering
\includegraphics[scale=0.47]{fig3.pdf}
\caption{Foreground extinction- and $K-$corrected $r-$band absolute magnitudes of HectoMAP galaxies (gray points) as a function of redshift. We plot only 25\% of galaxies for clarity. Red crosses indicate the spectroscopically identified redMaPPer members in HectoMAP. The solid line indicates the survey magnitude limit, $r = 21.3$. Dashed lines mark the boundary of the volume-limited sample ($M_{r} \leq -19.72$ and $z \leq 0.35$). }
\label{rmmagz}
\end{figure}
\subsection{Empirical Determination of the Linking Length}\label{sec:ll}
We use the redMaPPer clusters \citep{Rykoff14, Rykoff16} as a training set for the empirical determination of the linking lengths. redMaPPer (hereafter RM) is a photometric cluster finding algorithm based on the red-sequence. The RM catalog includes a large number of systems over a wide mass range and it is unbiased by selection of the BCG. The RM catalog (v6.3) based on the SDSS DR8 \citep{Rykoff16} lists 104 systems in the HectoMAP region. These HectoMAP RM systems are a sufficient basis for an empirical test of the success rate of the FoF algorithm as a function of the linking lengths.
We previously tested the fidelity of the HectoMAP RM clusters based on our redshift survey \citep{Sohn18b, Sohn21}. Over $90\%$ of the HectoMAP RM clusters are genuine clusters with 10 or more spectroscopic members. The typical number of spectroscopically identified members of these RM systems is $\sim 20$ \citep{Sohn18b}. Thus, we can ask which set of linking lengths recovers these populous clusters.
The RM catalog also allows us to find the proper linking lengths for identifying low mass clusters. Figure \ref{rmz} shows the mass distribution ($M_{200}$, the mass enclosed within the radius where the density equals 200 times the critical density) of the HectoMAP RM clusters as a function of redshift. We compute $M_{200}$ using the relation between mass and RM richness \citep{Rines18}. The relation is based on 27 RM clusters with large richness ($\lambda > 64$) and with dense spectroscopy. In Figure \ref{rmz}, red circles show RM clusters with 10 or more spectroscopic members, and black squares indicate less populous systems.
The HectoMAP RM sample includes clusters with M$_{200} \gtrsim 6 \times 10^{13}$ M$_{\odot}$ at $0.08 < z < 0.35$ where the redshift range corresponds to our volume-limited sample. An empirical test based on the HectoMAP RM clusters will find linking lengths that identify systems with mass larger than $6 \times 10^{13}$ M$_{\odot}$. The final sample we use for the empirical test includes 57 RM systems at $z < 0.35$ with 10 or more spectroscopic members.
\begin{figure}
\centering
\includegraphics[scale=0.47]{fig4.pdf}
\caption{Mass vs. redshift of redMaPPer clusters in HectoMAP with 10 or more members (red circles) and with less than 10 members (black squares). }
\label{rmz}
\end{figure}
For the empirical test, we generate a set of linking lengths by varying the projected linking lengths from 100 kpc to 1 Mpc in steps of 100 kpc. We explore radial linking lengths in the range $100~\rm km~s^{-1}$ to $1000~\rm km~s^{-1}$ in steps of $100~\rm km~s^{-1}$. We thus test 100 combinations of linking lengths to find the linking lengths that recover the largest number of HectoMAP RM clusters.
\begin{figure}
\centering
\includegraphics[scale=0.47]{fig5.pdf}
\caption{Recovery rate of the HectoMAP redMaPPer systems by the FoF algorithm with various linking lengths. The x- and y-axes show the projected spatial and the radial linking lengths respectively. Darker colors indicate that more redMaPPer clusters are recovered. }
\label{RM_recovery}
\end{figure}
Figure \ref{RM_recovery} illustrates the result of the empirical test. The axes indicate the projected and radial linking lengths we test. In each pixel, we list the number of RM clusters recovered. With tighter linking lengths, the FoF algorithm misses many RM systems. The number of recovered RM systems also decreases slightly at the largest linking lengths (e.g., $\Delta D = 1000$ kpc or $\Delta V \geq 800~\rm km~s^{-1}$), because the algorithm bundles independent RM clusters into a single system.
Based on the empirical test, we use linking lengths of 900 kpc and $500~\rm km~s^{-1}$ for identifying HectoMAP galaxy systems. We use the smallest radial linking length that recovers more than 90\% of the RM clusters. This catalog contains 248 systems with 10 or more spectroscopic members. These systems include all of the RM clusters except one with low galaxy number density; this missing RM cluster has an FoF counterpart with 6 members. The projected linking length corresponds to $b_{proj} \simeq 0.13$ (i.e., $\delta n / n \sim 110$), similar to linking lengths in a previous search for galaxy clusters based on 2dFGRS \citep{Eke04} or SDSS \citep{Berlind06}.
The cluster identification based on a volume-limited sample omits fainter cluster members. We remedy this drawback by selecting additional spectroscopic members within a cylindrical volume around the FoF cluster center (see Section \ref{sec:iden}).
We also test the empirical linking lengths based on the HectoMAP X-ray clusters. \citet{Sohn18b} used ROSAT All-Sky survey data to identify 15 X-ray clusters in HectoMAP complete to a limiting flux of $f_{X} = 3 \times 10^{-13}$ erg s$^{-1}$ cm$^{-2}$. All 15 X-ray clusters are successfully recovered with this choice of linking lengths.
Interestingly, five of the HectoMAP X-ray clusters are not included in the RM cluster catalog; one of them at $z = 0.03$ is out of the RM cluster survey redshift range. Figure \ref{xray_RM_missing} displays phase-space diagrams of the 4 X-ray clusters missing from RM. These phase-space diagrams, often referred to as the R-v diagram, show the relative rest-frame line-of-sight velocity difference versus the projected distances from the cluster center. In Figure \ref{xray_RM_missing}, gray and red circles show the spectroscopic galaxies around the X-ray cluster center and the FoF cluster members, respectively. The FoF algorithm identifies the spectroscopic members of the X-ray clusters successfully.
\begin{figure*}
\centering
\includegraphics[scale=0.60]{fig6.pdf}
\caption{Phase-space diagrams of the 4 HectoMAP X-ray clusters identified by the FoF algorithm, but not by the redMaPPer algorithm. Gray circles are the spectroscopic galaxies around the X-ray cluster center. Red circles are the members of the FoF clusters matched with the X-ray cluster. }
\label{xray_RM_missing}
\end{figure*}
Based on the Subaru/HSC SSP dataset, \citet{Jaelani20} identify a large sample of strong gravitational lens candidates including 13 candidates in HectoMAP that are within the magnitude range of the volume-limited redshift survey. They visually identify strong lensing arcs around the center of photometrically identified CAMIRA clusters \citep{Oguri14, Oguri18}. We cross-match these 13 HectoMAP strong lensing candidates with the FoF cluster catalog. Ten of these strong lens candidates have HectoMAP FoF cluster counterparts. The other three systems have FoF group counterparts with 4, 6, and 9 FoF members, respectively. These systems provide an additional test of the efficacy of the FoF cluster identification.
\subsection{Construction of the Full HectoMAP FoF Catalog}\label{sec:iden}
We extend the FoF catalog to cover the full redshift range of HectoMAP by applying the linking lengths determined from the volume-limited subset to higher redshifts $0.35 < z < 0.6$ where essentially all of the galaxies are intrinsically brighter than the limit for the volume limited sample of Section \ref{sec:ll}. The mean survey density remains constant for $0.35 < z < 0.45$ (Figure \ref{dmean}), slightly higher than the redshift limit of the volume-limited sample. At higher redshift $0.45 < z < 0.6$, the FoF clusters we identify with the fiducial linking lengths tend to be denser than their counterparts at lower redshift, an expected systematic. Increasing the linking lengths at the largest redshifts would lead to a large number of false positives because of the steep decline in the survey density. The FoF cluster catalog we construct for $z > 0.45$ still contains robust massive systems.
The FoF algorithm identifies a total of 12195 systems with two or more members in the full HectoMAP survey. Most of these systems are pairs (59\%), triplets (20\%), or groups ($4 \leq N < 10$, 18\%). Following previous approaches \citep{Lee04, Sohn16, Sohn18b}, we further explore 346 systems with 10 or more FoF members (hereafter FoF clusters); 248 of these systems are within the volume-limited subsample. The typical number of FoF members in these clusters is $\sim 17$. FoF systems with 10 or more members potentially contain many more faint members below the magnitude limit (see below).
We determine the center of each FoF cluster based on the center of light method \citep{Robotham11}. The center of light is analogous to the center of mass, but it is based on galaxy luminosity rather than galaxy mass. We compute the center of light among the FoF members. We then iterate, excluding the FoF member most distant from the center at each step, until only two members remain. Finally, we select the brighter galaxy as the system center; hereafter we define this central galaxy as the brightest cluster galaxy (BCG). For a majority ($\sim 75\%$) of the HectoMAP FoF clusters, the center corresponds to the location of the brightest cluster member. We discuss the properties of systems where the central galaxy is not the brightest cluster member in Section \ref{sec:connection}. Hereafter, we refer to the center of light as the cluster center.
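The iterative center-of-light procedure above can be sketched as follows. This is illustrative only; we assume projected member positions in Mpc and $r$-band luminosities as inputs, and the function name is our own:

```python
import numpy as np

def center_of_light(xy_mpc, luminosity):
    """Iterative center of light: luminosity-weighted centroid, dropping
    the member farthest from it until two remain; the brighter of the
    final pair is adopted as the central galaxy (BCG). Returns its index.
    """
    idx = np.arange(len(luminosity))
    while len(idx) > 2:
        centroid = np.average(xy_mpc[idx], axis=0, weights=luminosity[idx])
        dist = np.hypot(*(xy_mpc[idx] - centroid).T)
        idx = np.delete(idx, np.argmax(dist))   # drop the most distant member
    return idx[np.argmax(luminosity[idx])]
```

Because the centroid is luminosity weighted, a dominant bright galaxy anchors the center and distant faint members are pruned first.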
For the clusters identified in the volume-limited sample, there are members fainter than the magnitude limit ($M_{r} = -19.72$) that are not included by the FoF algorithm. We identify these faint members within $R_{cl} < max (R_{proj, FoF})$ and $|c (z_{galaxy} - z_{cl}) / (1 + z_{cl})| < max(|\Delta V_{FoF}|)$. Here, $max (R_{proj, FoF})$ is the largest projected distance of the FoF members, and $max (|\Delta V_{FoF}|)$ is the maximum radial velocity difference between the FoF members and the cluster center. This procedure adds $\sim 5$ faint members per cluster. We include these additional faint members in our analysis (e.g., to determine the cluster velocity dispersion).
\subsection{Exploring the HectoMAP FoF Clusters}\label{sec:explore}
Taking advantage of the dense spectroscopy, we identify galaxy overdensities in redshift space as galaxy clusters. Like any method, the FoF algorithm does produce some false positives (e.g., \citealp{Ramella97, Diaferio99}). The algorithm may identify weak concentrations of galaxies or cut through extended filamentary structures where the line-of-sight velocity dispersion is comparable to the value in the surrounding region. The inclusion of these features is unavoidable in constructing a cluster catalog based on the FoF algorithm. We thus explore the FoF cluster identification based on additional physical parameters.
We use the galaxy number density to test the cluster identification because a high galaxy number density within the central region is a key characteristic of galaxy clusters. Additionally, we use deep Subaru/HSC imaging as a guide to the nature of the system. The HSC images also allow us to examine the morphology of the BCGs. The presence of extended quiescent early-type BCGs is characteristic of galaxy clusters.
We compute the central galaxy number density ($\rho_{VL}$) for each cluster: $\rho_{VL} = N_{\rm galaxy} (M_{r} < -19.72) / V$. Here, $N_{\rm galaxy} (M_{r} < -19.72)$ is the number of galaxies brighter than $M_{r} = -19.72$, the magnitude limit of the volume-limited sample (this absolute magnitude limit is fainter than the survey limit for $z \gtrsim 0.35$), and $V$ is the cylindrical volume defined by $R_{proj} < 1$ Mpc and $|c(z_{\rm galaxy} - z_{cl})| / (1 + z_{cl}) < 1000~\rm km~s^{-1}$. The radius $R_{proj} < 1$ Mpc corresponds to the typical $R_{200}$ of galaxy clusters with $M_{200} > 10^{14} M_{\odot}$. The radial length of the cylindrical volume is also sufficient to encompass most spectroscopic members of the clusters. To compute the density contrast, we also derive the galaxy number density of the entire HectoMAP survey as a function of redshift.
\begin{figure*}
\centering
\includegraphics[scale=0.65]{fig7.pdf}
\caption{(a) Galaxy number density of the FoF clusters (black circles) as a function of redshift. The red dashed line shows the mean number density of galaxies in the HectoMAP volume-limited sample. The gray shaded area indicates the redshift range where the survey limit is brighter than $M_{r} = -19.72$. (b) The density contrast of the FoF clusters as a function of redshift. }
\label{fof_density}
\end{figure*}
Figure \ref{fof_density} (a) displays the central galaxy number density of the FoF clusters as a function of cluster redshift. The dashed line shows the average galaxy number density in the HectoMAP survey. Figure \ref{fof_density} (b) shows the density contrast between the clusters and the HectoMAP survey density at the cluster redshift: $\Delta = \rho_{VL} / \rho_{HectoMAP} (z_{cl})$. Indeed, the FoF clusters have high density contrast ($\Delta > 10$) as expected. The density of the high-z ($z > 0.35$) clusters generally exceeds the low-z ($z < 0.35$) cluster densities because the FoF algorithm preferentially identifies higher density and higher density contrast clusters at high-z, where the survey density decreases.
In Figure \ref{fof_density}, red circles show FoF clusters with a RM counterpart. The RM clusters generally have higher number density although they are distributed over a wide density range. It is interesting that even at the highest galaxy number densities, there are FoF clusters (open circles) that are not identified by RM. We discuss these systems further below.
We compute the velocity dispersion of the FoF members as a cluster mass proxy. We use the bi-weight technique \citep{Beers90}, which yields a robust velocity dispersion measurement with a small number of members. The uncertainty in the velocity dispersion corresponds to the $1\sigma$ standard deviation derived from 1000 bootstrap resamplings. The typical uncertainty in the cluster velocity dispersion is $\sim 80~\rm km~s^{-1}$.
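A hedged sketch of this measurement is shown below: the biweight scale estimator of \citet{Beers90} with a bootstrap uncertainty. This is an illustration, not the authors' code; the tuning constant $c = 9$ follows the standard prescription, and the function names are our own:

```python
import numpy as np

def biweight_scale(v, c=9.0):
    """Biweight scale (Beers et al. 1990): a robust velocity dispersion.

    u_i = (v_i - median) / (c * MAD); points with |u_i| >= 1 are ignored.
    """
    v = np.asarray(v, float)
    m = np.median(v)
    mad = np.median(np.abs(v - m))
    u = (v - m) / (c * mad)
    good = np.abs(u) < 1.0
    num = np.sum((v[good] - m) ** 2 * (1 - u[good] ** 2) ** 4)
    den = np.sum((1 - u[good] ** 2) * (1 - 5 * u[good] ** 2))
    return np.sqrt(len(v) * num) / abs(den)

def bootstrap_sigma_error(v, n_boot=1000, seed=0):
    """1-sigma uncertainty from bootstrap resampling of the members."""
    rng = np.random.default_rng(seed)
    draws = [biweight_scale(rng.choice(v, size=len(v), replace=True))
             for _ in range(n_boot)]
    return np.std(draws)
```

For a Gaussian velocity distribution the biweight scale is consistent with the ordinary standard deviation, but it is far less sensitive to interlopers in the tails, which is why it performs well for systems with few members.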
Figure \ref{fof_density_sigma} (a) displays the galaxy number density as a function of the cluster velocity dispersion. Figure \ref{fof_density_sigma} (b) and (c) show the distributions of the number density and the cluster velocity dispersion, respectively. In general, the larger velocity dispersion (more massive) systems have higher galaxy number density. The Spearman's rank correlation coefficient is 0.45 with a significance of $1.13 \times 10^{-18}$. The solid line in Figure \ref{fof_density_sigma} shows the best-fit linear relation: $\rho_{VL} = (-0.044 \pm 0.014) + (0.091 \pm 0.008) \times (\sigma / 200 [\rm km~s^{-1}])$. According to this relation, a galaxy number density of 0.15 Mpc$^{-3}$ corresponds to a cluster velocity dispersion of $\sim 450~\rm km~s^{-1}$ and thus a cluster mass $\sim 10^{14} M_{\odot}$ \citep{Rines13}. This cluster mass is the approximate redMaPPer completeness limit and we thus use it for further exploration of the catalogs.
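As a quick consistency check, the quoted linear relation can be inverted for the velocity dispersion. The sketch below uses the best-fit coefficients from the text; `sigma_from_density` is a hypothetical helper name, not part of any published pipeline.

```python
# Best-fit relation from the text: rho_VL = a + b * (sigma / 200 km/s)
a, b = -0.044, 0.091

def sigma_from_density(rho):
    """Invert the linear relation: velocity dispersion (km/s) at density rho (Mpc^-3)."""
    return 200.0 * (rho - a) / b

# A number density of 0.15 Mpc^-3 maps to sigma ~ 426 km/s,
# consistent with the ~450 km/s quoted in the text.
print(round(sigma_from_density(0.15)))  # prints 426
```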
\begin{figure}
\centering
\includegraphics[scale=0.43]{fig8.pdf}
\caption{(a) Galaxy number density within the FoF clusters as a function of the cluster velocity dispersion. The solid line shows the best-fit linear relation. The gray shaded area indicates the low density range which may include false positives. (b) The distribution of the cluster velocity dispersion and (c) the distribution of the galaxy number density. }
\label{fof_density_sigma}
\end{figure}
Figure \ref{fof_regime} shows the cumulative distribution of the FoF cluster galaxy number density $\rho_{VL}$ (black solid line). The red solid line displays the cumulative distribution for the FoF clusters with a RM counterpart. We compute the fraction of FoF clusters with RM counterparts in three broad galaxy number density bins (the blue symbols in Figure \ref{fof_regime}). The three bins are high-density ($\rho_{VL} > 0.24$), intermediate-density ($0.15 < \rho_{VL} < 0.24$), and low-density ($\rho_{VL} < 0.15$). The clusters in the high-density regime generally have $\sigma \gtrsim 625~\rm km~s^{-1}$ ($M_{200} \gtrsim 2.5 \times 10^{14} M_{\odot}$) and those in the intermediate-density regime have $\sigma \gtrsim 450~\rm km~s^{-1}$ (Figure \ref{fof_density_sigma}).
\begin{figure}
\centering
\includegraphics[scale=0.49]{fig9.pdf}
\caption{Cumulative distribution of FoF clusters as a function of galaxy number density (black solid line). The red solid line shows the same distribution for FoF clusters with redMaPPer counterparts. Blue circles and the blue dashed line show the fraction of FoF clusters with redMaPPer counterparts binned in galaxy number density. In the high-density regime, every FoF cluster has a redMaPPer counterpart. In the intermediate-density regime, $\sim 55\%$ of FoF clusters have redMaPPer counterparts. Only $\sim10\%$ of FoF clusters have redMaPPer counterparts in the low-density regime. }
\label{fof_regime}
\end{figure}
There are 10, 70, and 266 FoF clusters in the high-, intermediate-, and low-density regimes, respectively. All of the clusters in the high-density regime have a RM counterpart, supporting both approaches to cluster identification. In the intermediate-density regime, 38 (54\%) FoF clusters have a RM counterpart. Within the intermediate density range, the fraction of FoF clusters with RM counterparts increases with density.
Figure \ref{noRM_ID} shows Subaru/HSC images and R-v diagrams of two example FoF clusters with intermediate density and without a RM counterpart. The cluster members show a strong concentration in the HSC images. In the R-v diagrams, there is clear elongation of the cluster members along the line-of-sight, the signature of a massive cluster. All of the other clusters within the intermediate density range show a similarly strong concentration in the HSC images and elongation in the R-v diagram. Differences between the catalogs at the fiducial mass limit of the RM catalog probably reflect the error in the velocity dispersion (FoF catalog) and/or the error in the richness (RM).
\begin{figure}
\centering
\includegraphics[scale=0.40]{fig10.pdf}
\caption{(Left) Subaru/HSC images of two FoF clusters without a RM counterpart in the intermediate density regime. (Right) The R-v diagrams of the two clusters. Gray circles show galaxies with a spectroscopic redshift and the red circles mark the FoF members. }
\label{noRM_ID}
\end{figure}
In the low-density regime, only 32 clusters have a RM counterpart. This result is not surprising because these systems may have masses well below the richness limit of the redMaPPer catalog. Even in this regime, a large fraction of the systems appear to be genuine clusters. For example, Figure \ref{noRM_LD} (a) and (b) display the HSC image and the R-v diagram of an FoF system with $\rho_{VL} = 0.11$. This FoF system has a dominant BCG at the center surrounded by many quiescent galaxies. The FoF members cluster around the BCG and there is the expected elongation in the radial direction. However, some systems with low number density are apparent false positives. Figure \ref{noRM_LD} (c) and (d) show an FoF system with $\rho_{VL} = 0.12$. Although this system consists of 13 spectroscopic members, clustering around the central galaxy is weak. In the R-v diagram, the members extend to a large projected distance, but only a few members are within the central region. There is no elongation.
\begin{figure}
\centering
\includegraphics[scale=0.40]{fig11.pdf}
\caption{Same as Figure \ref{noRM_ID}, but for two FoF clusters in the low-density regime without a redMaPPer counterpart. Panels (a) and (b) show a genuine cluster we identify based on the FoF algorithm. Panels (c) and (d) display an example of a false positive where clustering around the central galaxy is weak. }
\label{noRM_LD}
\end{figure}
\section{HectoMAP FoF Cluster Catalog}\label{sec:cat}
The HectoMAP FoF cluster catalog includes 346 clusters with 10 or more spectroscopic members. We list the properties of the HectoMAP FoF clusters including R.A., Decl., central redshift, the number of FoF members, the cluster velocity dispersion, the galaxy number density, the density flag, and the BCG flag in Table \ref{cat:cl}. The density flag indicates the density regime that includes the FoF cluster. In the high- and intermediate-density regimes, there are no false positive FoF clusters. In the low-density regime, there are some likely false positives among the FoF systems (see Section \ref{sec:explore}). We also list the FoF cluster members in Table \ref{cat:mem} including the FoF cluster ID, SDSS object ID, R.A., Decl., and redshift of the individual FoF members.
\begin{deluxetable*}{lccccccccc}
\label{cat:cl}
\tablecaption{FoF Clusters in HectoMAP}
\tablecolumns{10}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablehead{
\multirow{2}{*}{ID} & \colhead{R.A.} & \colhead{Decl.} & \multirow{2}{*}{z} &
\multirow{2}{*}{$N_{FoF, mem}$\tablenotemark{$^{a}$}} &
\multirow{2}{*}{$N_{spe, mem}$\tablenotemark{$^{b}$}} &
\colhead{$\sigma$\tablenotemark{$^{c}$}} & \colhead{$\rho_{VL}$} &
\multirow{2}{*}{$\rho_{VL}$ Flag\tablenotemark{$^{d}$}} &
\multirow{2}{*}{BCG Flag\tablenotemark{$^{e}$}} \\
& \colhead{(deg)} & \colhead{(deg)} & &
&
&
\colhead{($\rm km~s^{-1}$)} & \colhead{(Mpc$^{-3}$)} &
&
}
\startdata
HMRM001 & 200.633513 & 43.008366 & 0.281784 & 17 & 24 & $223 \pm 50$ & 0.15 & I & Y \\
HMRM002 & 203.101095 & 42.595151 & 0.304925 & 24 & 27 & $310 \pm 39$ & 0.12 & L & N \\
HMRM003 & 201.696117 & 43.188272 & 0.143471 & 10 & 26 & $386 \pm 74$ & 0.07 & L & Y \\
HMRM004 & 200.307439 & 43.506075 & 0.316042 & 12 & 14 & $378 \pm 60$ & 0.10 & L & Y \\
HMRM005 & 201.970415 & 43.264792 & 0.372521 & 10 & 10 & $288 \pm 71$ & 0.11 & L & Y \\
HMRM006 & 202.311345 & 43.232747 & 0.332000 & 17 & 20 & $514 \pm 110$ & 0.14 & L & Y \\
HMRM007 & 204.656461 & 42.817525 & 0.432667 & 11 & 11 & $400 \pm 154$ & 0.09 & L & Y \\
HMRM008 & 200.496818 & 43.173660 & 0.326603 & 11 & 16 & $166 \pm 31$ & 0.10 & L & Y \\
HMRM009 & 204.491698 & 42.824992 & 0.303717 & 25 & 32 & $422 \pm 72$ & 0.17 & I & Y \\
HMRM010 & 201.870106 & 43.083619 & 0.373789 & 11 & 11 & $514 \pm 146$ & 0.12 & L & Y \\
\enddata
\tablenotetext{a}{Number of FoF members.}
\tablenotetext{b}{Number of spectroscopic members including galaxies fainter than $M_{r} = -19.72$.}
\tablenotetext{c}{Velocity dispersion. }
\tablenotetext{d}{The flag indicates the density regime: `H' is high-density regime, `I' is intermediate-density regime, and `L' is low-density regime. }
\tablenotetext{e}{The flag indicates whether the central galaxy is the brightest cluster member. }
\end{deluxetable*}
\begin{deluxetable*}{lcccc}
\label{cat:mem}
\tablecaption{HectoMAP FoF Cluster Members}
\tablecolumns{5}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablehead{
\colhead{Cluster ID} & \colhead{Object ID} & \colhead{R.A.} & \colhead{Decl.} & \colhead{z}}
\startdata
HMRM001 & 1237661849863782556 & 200.633513 & 43.008366 & $0.28178 \pm 0.00007$ \\
HMRM001 & 1237661849863782610 & 200.607216 & 43.005065 & $0.28273 \pm 0.00011$ \\
HMRM001 & 1237661849863782898 & 200.683745 & 42.999933 & $0.28211 \pm 0.00009$ \\
HMRM001 & 1237661849863782555 & 200.638060 & 43.005394 & $0.28275 \pm 0.00009$ \\
HMRM001 & 1237661849863782459 & 200.582142 & 43.033193 & $0.28387 \pm 0.00010$ \\
HMRM001 & 1237661849863782755 & 200.605014 & 43.015250 & $0.28312 \pm 0.00017$ \\
HMRM001 & 1237661849863782558 & 200.648308 & 42.990242 & $0.28222 \pm 0.00009$ \\
HMRM001 & 1237661849863782826 & 200.654731 & 43.042562 & $0.28258 \pm 0.00022$ \\
HMRM001 & 1237661849863782753 & 200.605855 & 43.020831 & $0.28133 \pm 0.00012$ \\
HMRM001 & 1237661849863782894 & 200.680854 & 42.998204 & $0.28245 \pm 0.00021$ \\
\enddata
\end{deluxetable*}
The cone diagram in Figure \ref{cone} shows the distribution of spectroscopic objects and FoF clusters. Squares mark the location of the FoF clusters; the darker and larger symbols indicate higher density. The FoF systems follow the large scale structure defined by all of the galaxies in HectoMAP. The inset image shows the FoF cluster redshift distribution (red histogram). For comparison, we also plot the redshift distribution of the entire HectoMAP survey. At $z > 0.35$, the sampling in HectoMAP only enables identification of dense, relatively massive systems.
\begin{figure*}[ht]
\centering
\includegraphics[scale=0.44]{fig12.pdf}
\caption{HectoMAP cone diagram projected in R.A.. Black points show all HectoMAP galaxies with spectroscopic redshifts. Squares mark the HectoMAP FoF clusters; darker and larger symbols indicate higher density. The inset image shows the normalized redshift distribution of the FoF clusters (red open histogram) and the full HectoMAP survey (black filled histogram).}
\label{cone}
\end{figure*}
We next explore the BCG properties of the HectoMAP FoF clusters. The BCG is a distinctive galaxy often located at the bottom of the cluster potential well. Because we identify the BCGs based on the center of light method, which takes the luminosity density around the central galaxy into account, the BCGs of the FoF clusters are generally close to the center of the cluster and identification is obvious. However, in this process the BCG identification can be confused because of uncertainties in galaxy photometry in the crowded central region or because of contamination by other bright galaxies in the outskirts of the cluster (e.g., \citealp{Sohn19}). Among 346 FoF clusters, there are 86 systems ($\sim 25\%$) where the brightest cluster member is not identical to the object identified by the center of light method.
Figure \ref{bcg_nocentral} (a) shows the velocity dispersion of these 86 FoF systems as a function of redshift. A Kolmogorov-Smirnov test suggests that the redshift and velocity dispersion distributions of these 86 systems are not significantly different from those of the full sample (p-values of 0.05 and 0.47, respectively).
Figure \ref{bcg_nocentral} (b) shows the magnitude difference between the central galaxy and the brightest cluster member ($\Delta r = r_{0, Central} - r_{0, Brightest}$) in the 86 FoF clusters where the choice of the BCG is not obvious. In 30 systems ($\sim35\%$), the magnitude difference is less than the $3\sigma$ uncertainty in the BCG magnitude. In other words, the BCG identification can be confused because of the large uncertainty in the photometry. In these cases, the central and the brightest galaxies often have nearby companions that affect the galaxy photometry. In the other 56 systems, the brightest members are brighter than the central galaxies by $0.1 - 1.3$ mag.
Figure \ref{bcg_nocentral} (c) shows the relative velocity difference and the projected distance between the brightest member and the central galaxies in 86 problematic FoF systems. The brightest members are located at $0.1 < R_{cl} ({\rm Mpc}) < 1.0$ and $|\Delta cz / (1 + z_{cl})| < 1000~\rm km~s^{-1}$. The stacked R-v diagram shows that the brightest members in these cases are actually in the cluster outskirts.
Figure \ref{bcg_nocentral} (d) displays the difference between the local number density around the central galaxy and around the brightest member as a function of cluster redshift. Here, the local density is the galaxy number count within a cylindrical volume with $R_{proj} < 200$ kpc and $|\Delta cz /(1 + z_{cl})| < 1300~\rm km~s^{-1}$. A positive local density difference indicates that the local density around the central galaxy exceeds that around the brightest cluster member. In a majority of the systems ($\sim 83\%$), the local density difference is positive, suggesting that the central galaxy is a better BCG choice because it sits nearer to the potential minimum. There are only 8 systems where the local density around the brightest cluster member exceeds the density around the central galaxy and where the magnitude difference is significant.
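The local density comparison above reduces to a galaxy count inside a cylinder around each candidate galaxy. A minimal sketch, assuming the projected distances have already been computed (this is an illustration of the selection cuts, not the authors' implementation):

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def local_density_count(z_gal, r_proj_kpc, z_cl,
                        r_max_kpc=200.0, dv_max_kms=1300.0):
    """Count neighbors inside the cylinder used in the text:
    R_proj < 200 kpc and |c (z_gal - z_cl) / (1 + z_cl)| < 1300 km/s."""
    z_gal = np.asarray(z_gal, dtype=float)
    r_proj_kpc = np.asarray(r_proj_kpc, dtype=float)
    dv = C_KMS * (z_gal - z_cl) / (1.0 + z_cl)   # rest-frame velocity offset
    inside = (r_proj_kpc < r_max_kpc) & (np.abs(dv) < dv_max_kms)
    return int(np.sum(inside))
```

Evaluating this count around the central galaxy and around the brightest member, and taking the difference, gives the quantity plotted in panel (d).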
In conclusion, the central galaxies we identify are indeed brightest cluster galaxies (BCGs) in a majority ($\sim 75\%$) of the HectoMAP FoF clusters. In 86 systems, the BCG identification is confused by brighter galaxies located in the cluster outskirts. We mark these 86 systems in Table \ref{cat:cl}. For further discussion, we include the central galaxies in these 86 clusters. Excluding these clusters does not impact the results of the analysis.
\begin{figure}
\centering
\includegraphics[scale=0.35]{fig13.pdf}
\caption{(a) Velocity dispersion vs. redshift for 86 FoF clusters where the brightest cluster member is not identical to the central galaxy identified by the center of light method. Blue (and red) circles indicate FoF systems where the magnitude difference is less (and more) than $3\sigma$ uncertainty in the BCG photometry (see panel (b)). (b) The magnitude difference between the central galaxy and the brightest cluster member as a function of cluster redshift. (c) The stacked R-v diagram of the 86 brightest members in FoF systems where the brightest cluster member is not the central galaxy. The projected distance and the relative radial velocity of the brightest cluster members are computed with respect to the central galaxy. (d) The difference between the local density around the central galaxy and the brightest cluster member as a function of cluster redshift. }
\label{bcg_nocentral}
\end{figure}
We next examine the internal physical properties of the BCGs in all 346 HectoMAP FoF clusters. Figure \ref{bcg_prop} displays the distributions of the physical properties of the BCGs, including (a) foreground extinction and K-corrected $r-$band absolute magnitudes, (b) $(g-r)$ color, (c) $D_{n}4000$, (d) stellar mass, and (e) stellar velocity dispersion. For comparison, we also plot the same distributions for all of the FoF cluster members (black histograms). Figure \ref{bcg_prop} demonstrates that the BCGs are a distinctive population. The $D_{n}4000$ distribution shows that most of the BCGs ($\sim 96\%$) are quiescent ($D_{n}4000 > 1.5$). The HectoMAP BCGs are very massive; $\sim 90\%$ of the BCGs have $\log (M_{*} / M_{\odot}) > 11$. The stellar velocity dispersions of the BCGs are also generally large compared to those of other cluster galaxies, although the range is quite broad ($100 < \sigma~(\rm km~s^{-1}) < 450$).
\begin{figure*}
\centering
\includegraphics[scale=0.36]{fig14.pdf}
\caption{Distributions of the physical properties of the BCGs (red open histograms) including (a) foreground extinction and K-corrected $r-$band absolute magnitudes, (b) $(g-r)$ color, (c) $D_{n}4000$, (d) stellar mass, and (e) stellar velocity dispersion. For comparison, black filled histograms display the distributions of all FoF cluster members.}
\label{bcg_prop}
\end{figure*}
We list the physical properties of the BCGs in Table \ref{cat:bcg}. We include the SDSS object ID, foreground extinction and K-corrected $r-$band absolute magnitude, $(g-r)$ color, $D_{n}4000$, and the central stellar velocity dispersion of the BCGs ($\sigma_{*, BCG}$). Here, inclusion of $\sigma_{*, BCG}$ is a unique feature of the HectoMAP FoF cluster sample. We explore the relation between the physical properties of the BCGs and the FoF clusters in Section \ref{sec:connection}.
\begin{deluxetable*}{llcccc}
\label{cat:bcg}
\tablecaption{BCGs of the HectoMAP FoF Clusters}
\tablecolumns{6}
\tabletypesize{\scriptsize}
\tablewidth{0pt}
\tablehead{
\colhead{Cluster ID} & \colhead{BCG Object ID} & \colhead{$M_{r}$} & \colhead{$(g-r)$} & \colhead{$D_{n}4000$} & \colhead{$\sigma_{*, BCG}$}}
\startdata
HMRM001 & 1237661849863782556 & $-22.30 \pm 0.02$ & 1.71 & $2.13 \pm 0.05$ & $273 \pm 13$ \\
HMRM002 & 1237661849864568984 & $-21.97 \pm 0.06$ & 1.64 & $2.12 \pm 0.11$ & $182 \pm 24$ \\
HMRM003 & 1237661850400981101 & $-21.98 \pm 0.02$ & 1.63 & $1.72 \pm 0.03$ & $199 \pm 12$ \\
HMRM004 & 1237661850400522409 & $-22.40 \pm 0.03$ & 1.72 & $1.87 \pm 0.08$ & $273 \pm 23$ \\
HMRM005 & 1237661850401046742 & $-22.31 \pm 0.05$ & 1.65 & $1.91 \pm 0.05$ & $233 \pm 20$ \\
HMRM006 & 1237661850401177769 & $-22.10 \pm 0.07$ & 1.78 & $2.06 \pm 0.06$ & $332 \pm 23$ \\
HMRM007 & 1237661850401964328 & $-22.42 \pm 0.08$ & 1.76 & $1.96 \pm 0.13$ & $266 \pm 32$ \\
HMRM008 & 1237661871871623348 & $-22.18 \pm 0.04$ & 1.71 & $1.75 \pm 0.03$ & $184 \pm 13$ \\
HMRM009 & 1237661850401898642 & $-22.54 \pm 0.03$ & 1.69 & $2.23 \pm 0.06$ & $277 \pm 17$ \\
HMRM010 & 1237661871872082077 & $-23.26 \pm 0.05$ & 1.85 & $2.13 \pm 0.13$ & $339 \pm 24$ \\
\enddata
\end{deluxetable*}
\section{CONNECTION BETWEEN BRIGHTEST CLUSTER GALAXIES AND CLUSTERS}\label{sec:connection}
Dense spectroscopy of galaxy clusters enables interesting dynamical analyses that connect clusters with their BCGs. \citet{Sohn20} demonstrate the application of dense spectroscopy to explore the connection between clusters and their BCGs.
The large HectoMAP cluster catalog not only doubles the sample size of \citet{Sohn20} for exploring this relation, but it also provides a sample that covers wider redshift ($0.1 < z < 0.6$) and mass ranges ($100 < \sigma_{cl} (\rm km~s^{-1}) < 1000$). This redshift range is important because clusters double their mass from a redshift $0.5 - 0.6$ to the present (e.g., \citealp{Fakhouri10, Haines18, Pizzardo21}). BCGs develop in tandem with their host clusters (e.g., \citealp{DeLucia07}). Exploring lower mass systems also provides a more extensive picture of the relationship between clusters and their central galaxies.
In \citet{Sohn20}, we investigate the relationship between the BCG stellar velocity dispersion (hereafter $\sigma_{*, BCG}$) and the cluster velocity dispersion (hereafter $\sigma_{cl}$). \citet{Sohn20} use the HeCS-omnibus sample that compiles spectroscopic data for 223 massive clusters. HeCS-omnibus includes clusters at $0.02 < z < 0.29$ with a median redshift of $0.10$. The masses of the HeCS-omnibus clusters range from $2.5 \times 10^{14} M_{\odot}$ to $1.8 \times 10^{15} M_{\odot}$ with a median mass of $3.0 \times 10^{14} M_{\odot}$, corresponding to $210 < \sigma_{cl}~(\rm km~s^{-1}) < 1350$ with a median $\sigma_{cl}$ of $\sim 700~\rm km~s^{-1}$. The HeCS-omnibus clusters are the most massive clusters selected from a large volume that covers almost half of the sky (i.e., the northern hemisphere). The HectoMAP cluster sample thus potentially probes the evolution of this relation over the interval between the median redshifts of HeCS-omnibus and HectoMAP.
\begin{figure}
\centering
\includegraphics[scale=0.35]{fig15.pdf}
\caption{Ratio between the stellar velocity dispersion of BCGs and the cluster velocity dispersion ($\sigma_{*, BCG}/\sigma_{cl}$) vs. cluster velocity dispersion ($\sigma_{cl}$) for (a) 80 HectoMAP FoF clusters with $\rho_{VL} > 0.15$, (b) the 80 HectoMAP FoF clusters and 223 HeCS-omnibus clusters, (c) the full HectoMAP FoF sample, and (d) the full HectoMAP and HeCS-omnibus samples. Black circles and red squares are HectoMAP FoF and HeCS-omnibus clusters. The blue solid line shows the best-fit relation derived from the HeCS-omnibus clusters \citep{Sohn20}. The blue dashed line shows the best-fit relation we derive for each subsample in the panel. The purple horizontal solid line and the shaded region show the theoretical prediction and its $1\sigma$ boundary from \citet{Dolag10}. The horizontal dashed line indicates a similar prediction from \citet{Remus17}. }
\label{bcg_ratio}
\end{figure}
Figure \ref{bcg_ratio} (a) shows the ratio between $\sigma_{*, BCG}$ and $\sigma_{cl}$ for 80 HectoMAP FoF clusters as a function of $\sigma_{cl}$. Here, we plot only FoF clusters within the high- and intermediate-density regimes; all of these systems are genuine massive clusters. We note that all 80 clusters have quiescent BCGs (i.e., $D_{n}4000 > 1.5$). Thus, the stellar velocity dispersion of the BCGs is a good mass proxy. These 80 HectoMAP clusters show a remarkably tight relation; the ratio declines as a function of $\sigma_{cl}$. We use a Markov Chain Monte Carlo (MCMC) approach to derive the best-fit relation for these clusters (the blue dashed line):
\begin{equation}
\sigma_{*, BCG} / \sigma_{cl} = (-1.12 \pm 0.14) \log \sigma_{cl} + (3.60 \pm 0.37).
\end{equation}
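The fit above was derived with an MCMC approach. As an illustration only, a simple least-squares fit to synthetic clusters drawn from the quoted relation recovers the same slope and intercept; this stand-in sketch (with hypothetical function and variable names) is not the MCMC machinery used in the text.

```python
import numpy as np

def fit_ratio_relation(sigma_cl, ratio):
    """Least-squares fit of ratio = m * log10(sigma_cl) + b
    (a stand-in for the MCMC fit described in the text)."""
    m, b = np.polyfit(np.log10(sigma_cl), ratio, 1)
    return m, b

# Synthetic demonstration: draw clusters obeying the quoted relation
# with Gaussian scatter, then recover the coefficients.
rng = np.random.default_rng(1)
sigma_cl = rng.uniform(200.0, 900.0, 300)
ratio = -1.12 * np.log10(sigma_cl) + 3.60 + rng.normal(0.0, 0.05, 300)
m, b = fit_ratio_relation(sigma_cl, ratio)
```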
We compare the relation from HectoMAP clusters with the best-fit relation for HeCS-omnibus clusters (the blue solid line in Figure \ref{bcg_ratio}, \citealp{Sohn20}):
\begin{equation}
\sigma_{*, BCG} / \sigma_{cl} = (-0.82 \pm 0.17) \log \sigma_{cl} + (2.77 \pm 3.93).
\end{equation}
In Figure \ref{bcg_ratio} (b), red squares show 223 HeCS-omnibus clusters. Because HeCS-omnibus includes $\sim180$ spectroscopic members in each cluster, individual cluster velocity dispersions have much smaller uncertainties ($\lesssim 50~\rm km~s^{-1}$). The relation derived for the HeCS-omnibus sample is slightly shallower than the relation of the HectoMAP dense clusters, but the difference is not significant ($< 2.1\sigma$).
Figure \ref{bcg_ratio} (c) displays the same relation for the full HectoMAP FoF sample. In this plot, we use 331 FoF clusters with quiescent BCGs. We exclude 15 clusters that host non-quiescent BCGs, because the velocity dispersion of these BCGs could be dominated by ordered rotation of the disk. Remarkably, the relation between $\sigma_{*, BCG}/\sigma_{cl}$ and $\sigma_{cl}$ is tight for $\sigma_{cl} \gtrsim 250~\rm km~s^{-1}$:
\begin{equation}
\sigma_{*, BCG} / \sigma_{cl} = (-1.06 \pm 0.07) \log \sigma_{cl} + (3.39 \pm 0.19).
\end{equation}
We compare the full HectoMAP FoF and HeCS-omnibus samples in Figure \ref{bcg_ratio} (d). In general, the relationship derived from the two independent cluster samples at different redshifts and covering different mass ranges is striking. We derive the best-fit relation based on the combined HectoMAP and HeCS-omnibus sample:
\begin{equation}
\sigma_{*, BCG} / \sigma_{cl} = (-0.82 \pm 0.03) \log \sigma_{cl} + (2.77 \pm 0.09).
\end{equation}
This relation is essentially identical to the relation based only on the HeCS-omnibus sample because of the small uncertainties in the HeCS-omnibus velocity dispersions.
\section{DISCUSSION}\label{sec:discussion}
The tight observed relation between $\sigma_{*, BCG}$ and $\sigma_{cl}$ suggests an interesting evolutionary scenario for BCGs and their host clusters. The relation indicates that the mass fraction associated with the BCG changes as a function of cluster mass. In massive clusters, the mass associated with the BCGs decreases steadily relative to the cluster mass. In low mass systems, the BCG mass is comparable with the cluster mass.
Here, we discuss the implications of the $\sigma_{*,BCG}/\sigma_{cl} - \sigma_{cl}$ relation. We compare the observed relation with the prediction from numerical simulations in Section \ref{sec:sim}. We then explore the plausible redshift evolution of the relation in Section \ref{sec:evolution}. We discuss a possible evolutionary scenario of the BCGs based on this relation in Section \ref{sec:scenario}.
\subsection{Comparison with Numerical Simulations}\label{sec:sim}
We compare the observed relation between $\sigma_{*, BCG}/\sigma_{cl}$ and $\sigma_{cl}$ with predictions from numerical simulations. We use the results from \citet{Dolag10} because they include stellar velocity dispersion measurements that can be compared with the observations. \citet{Dolag10} explore the relation between the BCG and cluster velocity dispersions based on numerical simulations that include 44 clusters with $M_{200} > 5 \times 10^{13} M_{\odot}$ (or $\sigma_{cl} \gtrsim 300~\rm km~s^{-1}$). They identify star particles that are not bound to any subhalos within the cluster potential. These star particles show a two-component velocity distribution; one component belongs to the BCG (cD galaxy) central potential and the other is associated with the diffuse stellar component (DSC). They compute the velocity dispersions of these two components as $\sigma_{BCG}$ and $\sigma_{DSC}$ (i.e., $\sim \sigma_{cl}$).
Interestingly, the ratio between $\sigma_{BCG}$ and $\sigma_{cl}$ measured from the simulations is independent of $\sigma_{cl}$. \citet{Dolag10} show that both $\sigma_{BCG}$ and $\sigma_{cl}$ are well correlated with the cluster halo mass ($M_{halo} \sim \sigma^{3}$), and thus the ratio between $\sigma_{BCG}$ and $\sigma_{cl}$ remains constant: $\sigma_{*, BCG} \simeq (0.45 \pm 0.11) \sigma_{cl}$. In Figure \ref{bcg_ratio}, the horizontal solid line and the shaded region mark the relation derived from \citet{Dolag10} and the $1\sigma$ range. \citet{Remus17} derived a similar relation based on simulations with higher resolution and a larger box size: $\sigma_{*, BCG} = 0.5 \sigma_{cl}$ (the dashed line).
The discrepancy between the observed and theoretical relations for $\sigma_{*, BCG}$ and $\sigma_{cl}$ offers an intriguing test of BCG and cluster formation models. Many previous studies explore the evolution of BCGs based primarily on BCG stellar mass, which is sensitive to complex baryonic physics (e.g., feedback models) in numerical simulations. Observed stellar mass estimates are affected by photometric uncertainties in the crowded cluster core and by systematic biases introduced by the choice of stellar population model. The central stellar velocity dispersion is insensitive to systematic observational biases and is relatively straightforward to measure. In future simulations, the velocity dispersion of the BCG could be measured based on particles within a cylindrical region that penetrates the central region of the BCG for more direct comparison with the spectroscopic observations (e.g., \citealp{Zahid18}).
\subsection{Tracing the Coevolution of BCGs and Their Host Clusters}\label{sec:evolution}
We next explore BCGs and their host clusters at different redshifts. We select 78 HeCS-omnibus clusters with $z < 0.1$ and 97 HectoMAP FoF clusters with $0.3 < z < 0.4$. The age difference between these two redshift epochs is $\sim 3$ Gyr. The HeCS-omnibus subsample includes very massive systems with $210 < \sigma_{cl} (\rm km~s^{-1}) < 963$ and a median $\sigma_{cl} = 622~\rm km~s^{-1}$. In contrast, the HectoMAP subsample includes generally lower velocity dispersion systems ($126 < \sigma_{cl} (\rm km~s^{-1}) < 797$ with a median $\sigma_{cl} = 353~\rm km~s^{-1}$). The HeCS-omnibus clusters at low redshift tend to be more evolved systems with large mass. The selection of the HectoMAP and HeCS-omnibus samples differs substantially: HectoMAP is a comprehensive FoF sample in its redshift range; HeCS-omnibus collects available data from the literature. In spite of the differences in catalog construction, the two samples provide a baseline for comparing sets of clusters at different epochs.
Figure \ref{comp_ratio} shows the $\sigma_{*, BCG}/\sigma_{cl} - \sigma_{cl}$ relation for the HectoMAP (black circles) and HeCS-omnibus (red squares) subsamples. The slopes of the best-fit relations are consistent: $(-1.04 \pm 0.10)$ for the HeCS-omnibus and $(-1.26 \pm 0.25)$ for the HectoMAP. These slopes are based on subsamples with $\sigma_{cl} < 800~\rm km~s^{-1}$, the maximum $\sigma_{cl}$ of the HectoMAP subsample. The slope for the full HeCS-omnibus subsample is slightly shallower ($-0.84 \pm 0.09$), but within $2\sigma$ of the HectoMAP sample.
The remarkable consistency in slope indicates that the ratio between the BCG and the cluster mass evolves along the relation in Figure \ref{comp_ratio} as the universe ages over the last $\sim 3$ Gyr. The HectoMAP systems at $z \sim 0.35$ presumably evolve into more massive clusters (e.g., \citealp{Zhao09, Fakhouri10, Haines18}). More specifically, once a cluster halo reaches a velocity dispersion of $\sigma_{cl} \sim 300~\rm km~s^{-1}$, the BCG mass growth is slower than the growth of the cluster halo.
The direction of the arrow in Figure \ref{comp_ratio} assumes the cluster growth rate from \citet{Haines18} and a negligible change in the BCG velocity dispersion, as might be expected if BCG growth is dominated by minor mergers at these epochs (e.g., \citealp{Edwards20}). The arrow essentially parallels the observations. The length of the arrow indicates the expected magnitude of the evolution; the predicted length spanning these two epochs corresponds to a change in cluster velocity dispersion of $\sim 80~\rm km~s^{-1}$. Analysis based on shells at large radius around the HectoMAP cluster centers (Pizzardo et al. 2021, in preparation) shows that cluster growth in the HectoMAP sample is consistent with the prediction of these simulations.
\begin{figure}
\centering
\includegraphics[scale=0.5]{fig16.pdf}
\caption{Same as Figure \ref{bcg_ratio}, but for subsamples from the HeCS-omnibus (red squares) and HectoMAP FoF cluster catalogs (black circles). The solid and dashed lines show the best-fit for HeCS and HectoMAP, respectively. The shaded region indicates where the clusters deviate from the relation. The magenta arrow marks the expected evolutionary direction based on simulated clusters combined with minor mergers (e.g., \citealp{Haines18}). }
\label{comp_ratio}
\end{figure}
\subsection{Growth Mechanisms for the BCG }\label{sec:scenario}
The tight relation in Figure \ref{bcg_ratio} and Figure \ref{comp_ratio} indicates that the mass fraction associated with BCG decreases continuously as a function of cluster mass (at $\sigma_{cl} > 300~\rm km~s^{-1}$). The slope of the relation suggests that the BCG mass growth is slow over the redshift range we explore. BCG growth in massive clusters seems to be slower than in less massive systems.
The apparently slow growth of BCGs supports the idea that at late times minor mergers and/or accretion of stripped material (e.g., \citealp{Contini18, RagoneFigueroa18}) are the dominant mechanism for BCG growth. High mass clusters have a large velocity dispersion that precludes major mergers.
We note that the relation for HectoMAP clusters in the shaded region in Figure \ref{comp_ratio} steepens at $\sigma_{cl} < 300~\rm km~s^{-1}$. The apparent steepening of the relation is even more evident for the full HectoMAP sample (Figure \ref{bcg_ratio}). Major mergers can affect BCG growth in this velocity dispersion range where the cluster and BCG dispersions are similar. Major mergers lead to significant increases in the central velocity dispersion of the resultant object (e.g., \citealp{Hilz12}). Interestingly, some HectoMAP FoF systems with $\sigma_{cl} < 300~\rm km~s^{-1}$ host BCGs that show signs of recent mergers (e.g., shell structures) in the HSC images. The presence of mergers in the lower dispersion systems may be evidence of the role of pre-processing (e.g., \citealp{Balogh02, Fujita04}) in the development of BCGs. A systematic study of these BCGs will provide more insights into BCG evolution (Sohn et al. in preparation). A general picture of BCG growth is emerging where major mergers, minor mergers, and accretion of stripped material all play a role (e.g., \citealp{Diaferio01, Lin04, DeLucia07, Laporte13, RagoneFigueroa18, Spavone21}), but the timing for each process is probably restricted by local cluster dynamics.
\section{Conclusion}\label{sec:conclusion}
HectoMAP is a dense spectroscopic survey covering 54.64 deg$^{2}$ of the sky. A central goal of the HectoMAP redshift survey is the spectroscopic identification of galaxy clusters and the exploration of the coevolution of the clusters and their members. In \citet{Sohn18a}, we use the HectoMAP survey to test the photometrically identified redMaPPer clusters. In \citet{Sohn18b}, we identify 15 X-ray clusters based on ROSAT all-sky X-ray data that are associated with the spectroscopic overdensities. Ultimately we plan to use the Subaru HSC imaging to measure weak lensing masses for the systems identified spectroscopically. eROSITA should soon provide X-ray masses throughout the mass and redshift range \citep{Merloni12, Predehl21}.
To build the catalog we apply a Friends-of-Friends (FoF) algorithm in redshift space. We use galaxies brighter than $M_{r} = -19.72$ in a volume-limited sample to $z = 0.35$ to determine the linking lengths. We then extend these fiducial linking lengths throughout the survey range. At redshifts $z > 0.35$ the FoF catalog is dominated by denser, more massive systems.
The properties of FoF systems depend on the choice of linking lengths. We determine the linking lengths empirically based on comparison with redMaPPer clusters in the HectoMAP region. We test a set of projected and radial linking lengths, and find the optimal set of linking lengths (900 kpc and $500~\rm km~s^{-1}$) that recovers redMaPPer clusters (56/57 in the test sample). These linking lengths identify systems with a density larger than $\sim 110$ times the typical density of the universe at the cluster redshift.
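The redshift-space FoF pairing condition summarized above can be sketched as follows. This is a hedged illustration rather than the survey pipeline: the function name, the tuple input format, the small-angle approximation, and the caller-supplied angular diameter distance are all assumptions made for the example.

```python
import math

# Fiducial linking lengths quoted in the text (illustrative implementation).
D_PROJ_KPC = 900.0      # projected linking length
DV_KMS = 500.0          # radial (line-of-sight) linking length
C_KMS = 299792.458      # speed of light

def are_friends(g1, g2, ang_diam_dist_kpc):
    """Two galaxies are linked if BOTH their projected separation and their
    rest-frame velocity difference fall below the linking lengths.
    Each galaxy is (ra_deg, dec_deg, z)."""
    ra1, dec1, z1 = g1
    ra2, dec2, z2 = g2
    # small-angle projected separation, scaled by an assumed angular
    # diameter distance to the pair
    dra = math.radians(ra1 - ra2) * math.cos(math.radians(0.5 * (dec1 + dec2)))
    ddec = math.radians(dec1 - dec2)
    d_proj = math.hypot(dra, ddec) * ang_diam_dist_kpc
    # rest-frame line-of-sight velocity difference
    dv = C_KMS * abs(z1 - z2) / (1.0 + 0.5 * (z1 + z2))
    return d_proj < D_PROJ_KPC and dv < DV_KMS
```

A full FoF group finder would then take the transitive closure of this pairing relation (friends of friends) to assemble systems.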
The final HectoMAP FoF cluster catalog includes 346 systems with 10 or more spectroscopic members. We provide the FoF catalog including the membership, the BCG identification, and the BCG central stellar velocity dispersion. We divide the sample into three categories based on the galaxy number density around the cluster center. We investigate Subaru/HSC images and R-v diagrams of these systems. Systems in the high- and intermediate-density regimes are all genuine clusters with a strong concentration in the images and an elongation in the R-v diagram. In the high-density regime all of the FoF clusters have RM counterparts; in the intermediate-density regime, the FoF algorithm finds 45\% more clusters than redMaPPer. In the low-density regime the FoF catalog naturally includes some probable false positives ($\sim 30\%$) with no elongation in the R-v diagram.
Based on the 346 FoF clusters, we explore the connection between the BCGs and their host clusters. Following \citet{Sohn20}, we investigate the relation between cluster velocity dispersion ($\sigma_{cl}$) and the stellar velocity dispersion of the BCGs ($\sigma_{*, BCG}$). The ratio between $\sigma_{*, BCG}$ and $\sigma_{cl}$ decreases as a function of $\sigma_{cl}$. This trend is consistent with the one for the HeCS-omnibus cluster sample \citep{Sohn20}. The slope of the relation is remarkably tight for $\sigma_{cl} > 300~\rm km~s^{-1}$ in both the HectoMAP (especially the high- and intermediate-density samples) and the HeCS-omnibus samples.
In contrast with the data, numerical simulations predict a constant $\sigma_{*, BCG}/\sigma_{cl}$ ratio over a large $\sigma_{cl}$ range \citep{Dolag10, Remus17}. This discrepancy between the observed relation and the theoretical prediction offers an interesting test of coordinated BCG and cluster evolution.
As a probe of the synergy between BCG and cluster evolution, we compare the $\sigma_{*, BCG}/\sigma_{cl} - \sigma_{cl}$ relation at two different redshifts based on HeCS-omnibus and HectoMAP. The relations from the two subsamples have the same slope, suggesting that BCGs evolve along the relation as clusters accrete surrounding material. BCG evolution must be slow in massive clusters over the redshift range explored by HectoMAP. The data suggest that at late times BCGs in massive clusters ($\sigma_{cl} > 300~\rm km~s^{-1}$) grow mainly by minor mergers that produce a negligible increase in the BCG velocity dispersion. For systems with low velocity dispersion, the apparent steepening of the $\sigma_{*, BCG}/\sigma_{cl} - \sigma_{cl}$ relation may result from major mergers.
The observational indications of the changing role of various BCG growth processes with velocity dispersion and possibly cosmic time can be tested with current high resolution simulations (e.g., \citealp{Springel18}). Additional observational constraints will obviously come from larger surveys and from multiple observational approaches to the HectoMAP FoF catalog including strong lensing, weak lensing, and X-ray observations.
\acknowledgements
We thank Perry Berlind, Michael Calkins, and Nelson Caldwell for operating Hectospec. We thank Susan Tokarz, Jaehyon Rhee and Sean Moran for their significant contributions to the data reduction. We also thank Scott Kenyon, Ivana Damjanov, Adi Zitrin, and Mark Vogelsberger for discussions that clarified the paper. J.S. is supported by the CfA Fellowship. M.J.G. acknowledges the Smithsonian Institution for support. H.S.H. is supported by the New Faculty Startup Fund from Seoul National University. A.D. acknowledges partial support from the INFN grant InDark and the Italian Ministry of Education, University and Research (MIUR) under the {\it Departments of Excellence} grant L.232/2016. This research has made use of NASA’s Astrophysics Data System Bibliographic Services.
Funding for the SDSS-IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High Performance Computing at the University of Utah. The SDSS website is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, Center for Astrophysics | Harvard and Smithsonian, the Chilean Participation Group, the French Participation Group, Instituto de Astrofísica de Canarias, Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU)/University of Tokyo, the Korean Participation Group, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Astrophysik (MPA Garching), Max-Planck-Institut für Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional/MCTI, Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan as well as Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University.
Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This paper makes use of software developed for the Large Synoptic Survey Telescope (LSST). We thank the LSST Project for making their code available as free software at http://dm.lsst.org. This paper is based [in part] on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by Subaru Telescope and Astronomy Data Center (ADC) at National Astronomical Observatory of Japan. Data analysis was in part carried out with the cooperation of Center for Computational Astrophysics (CfCA), National Astronomical Observatory of Japan.
\facilities{MMT Hectospec, Subaru Hyper Suprime Cam}
\bibliographystyle{aasjournal}
\section{Introduction}
\label{sec:introduction}
Every day people around the world leave more and more of their digital traces at places they visit. There is an impressive number of papers leveraging such data for studying human behavior, including mobile phone records \cite{ratti2006mlu, calabrese2006real, girardin2008digital, quercia2010rse}, vehicle GPS traces \cite{santi2013taxi, kang2013exploring}, smart cards usage \cite{bagchi2005, lathia2012}, social media posts \cite{java2007we, szell2013, frank2013happiness} and bank card transactions \cite{sobolevsky2014mining, sobolevsky2014money}. Results of these studies can be applied to a wide range of policy and decision-making challenges, such as regional delineation \cite{ratti2010redrawing, sobolevsky2013delineating} or land use classification \cite{pei2014new, grauwin2014towards} for instance. Many works focus specifically on studying human mobility at urban \cite{gonzalez2008uih, kung2014exploring, hoteit2014estimating}, country \cite{amini2014impact} or even global scale \cite{hawelka2014, paldino2015flickr}.
In this paper we consider international human mobility, using the example of Spain, by quantifying and analyzing the ability of different cities to attract foreign visitors (for various purposes) and proposing a mechanism that can explain the observed pattern. For the purpose of the study we use information about interactions between people and businesses registered through bank card transaction records, and between people and urban spaces using geotagged photographs and tweets. By utilizing the Flickr and Twitter datasets, as well as a bank card transaction dataset provided by Banco Bilbao Vizcaya Argentaria (BBVA) bank, our goal is to investigate how a city's ability to attract foreign visitors depends on the city size. City attractiveness is defined as the absolute number of photographs, tweets or economic transactions made in the city by foreign visitors.
Of course this overall measure by itself does not tell the whole story: for example, it cannot explain the reason for the attractiveness --- is the city really a touristic destination, or does it attract a lot of business or a more special category of visitors? And what actually makes it attractive? One might also want to zoom into much more detail here, analyzing where those visitors come from and what specific places across the city they visit. Obviously this kind of analysis would also require an in-depth consideration of the city's historical, cultural, demographic, economic and infrastructural context. In this paper, however, we focus on a general overall attractiveness estimate as an initial step in this direction.
Discovering how to improve city attractiveness, which is seen differently by residents and tourists, can be used in several fields such as planning, forecasting flows, tourism, economics and transportation \cite{sinkiene2014concept}. In the past, photography was already considered a good means of inquiry in architecture and urban planning, being used for understanding landscapes \cite{spirn1998language}. Girardin et al.\ showed that it was possible to define a measure of city attractiveness by exploring big data from photo sharing websites \cite{girardin2009quantifying}. Moreover, shopping patterns of tourists, including their specific preferences and satisfaction level, were analyzed with the overall purpose of accurate planning, marketing and management of sales strategies \cite{lehto2004, oh2004, yuksel2004, rosenbaum2005, uncles1995}. However, little work has been done to show how city attractiveness can be quantified and explained from a many-sided perspective of diverse datasets created by different aspects of individual activity.
The novelty of this paper is twofold: applying new multi-layered data (i.e., Flickr, Twitter and bank card transactions) to quantify urban attractiveness for foreign visitors, and detecting a strong and robust pattern. More specifically, our study looks at how the attractiveness of a city depends on its size. To validate the robustness of the findings, we consider different city definitions and different datasets used to quantify the attractiveness.
\section{Datasets}
\label{data_sets}
In our analysis we combine three different datasets: the first one containing more than 100 million publicly shared geotagged photographs taken on Flickr around the world over a period of several years, the second containing geotagged tweets posted on Twitter worldwide during 2012, and the last one containing a set of bank card transactions of domestic and foreign users recorded by Banco Bilbao Vizcaya Argentaria (BBVA) during 2011, all over Spain. The aforementioned data allow us to analyze the activity of foreign tourists in Spain from three different aspects~-- making purchases, taking photographs and sharing impressions of interesting places they visited.
\subsection{Flickr dataset}
By merging two Flickr datasets \cite{flickr1, flickr2} we created a new dataset containing more than 130 million photographs/videos. Both datasets are available upon request to interested researchers~-- one comes from a research project, the other from Yahoo. Each dataset consists of over 100 million photographs or videos taken by more than one million users. The records in the two datasets partially overlap, but since each photograph/video in both datasets has its id, we were able to merge the datasets by omitting duplicates and choosing only those entries that were made within a 10 year time window, i.e., from 2005 until 2014.
In order to determine which of the users acting in a certain location are actually foreign visitors, for each user in the merged dataset we define his/her home country by using the following criteria: a person is considered to be a resident of a certain country if this is the country where he/she took the highest number of photographs/videos over the longest timespan (calculated as the time between the first and last photograph taken within the country) compared to all other countries for the considered person. For over 500 thousand users we were able to determine their home country using our criteria. Those users took almost 80\% of all the photographs/videos in the dataset (i.e., more than 90 million in total), while the rest of the users, for which a home country could not be defined, mostly belong to a low-activity group taking photographs only occasionally.
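The home-country criterion above can be sketched as a small function. This is an illustrative reading of the criterion (names and input layout are our assumptions): a country must win on both the photo count and the timespan, otherwise no home country is assigned.

```python
from collections import defaultdict

def home_country(photos):
    """Assign a user's home country per the criterion in the text: the
    country with the most photographs AND the longest timespan (time
    between the first and last photo taken there).  Returns None when no
    single country wins on both counts.
    `photos` is a list of (country, timestamp) pairs."""
    counts = defaultdict(int)
    first, last = {}, {}
    for country, t in photos:
        counts[country] += 1
        first[country] = min(first.get(country, t), t)
        last[country] = max(last.get(country, t), t)
    by_count = max(counts, key=lambda c: counts[c])
    by_span = max(counts, key=lambda c: last[c] - first[c])
    return by_count if by_count == by_span else None
```

Users for whom the two winners disagree would fall into the undefined group, consistent with the text's report that a home country could not be assigned for some low-activity users.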
For the purpose of our study we only consider the users with a defined home country. Of the total of over 3.5 million pictures taken in Spain, over 400 thousand were taken by over 16 thousand foreign visitors coming from 112 countries all over the world.
\subsection{Twitter dataset}
The second dataset consists of geotagged messages posted during 2012 and collected from the digital microblogging and social media platform Twitter. Data was collected with the Twitter Streaming API \cite{twitterapi} and cleansed from potential errors and artificial tweeting noise as previously described in \cite{hawelka2014}. Globally, the dataset covers 944M tweets sent by 13M users \cite{hawelka2014}.
The final number of tweets posted in Spain in 2012 reached almost 35 million messages sent by 641 thousand Twitter users.
To differentiate between Spanish residents and foreign visitors, we used a technique similar to the one used in the case of the Flickr dataset. We found that 2\% of the total number of tweets posted in Spain in 2012 was sent by 80 thousand foreign visitors from 180 countries.
\subsection{BBVA dataset}
The third dataset used in this study is a complete set of bank card transactions registered by the Spanish bank BBVA during 2011. Those transactions are of two types: i) made using debit or credit cards issued by BBVA, or ii) made using cards issued by other banks in any of over 300 thousand BBVA card terminals. For the transactions of the second group, the dataset includes the country of origin where the card was issued. For our study we focus mainly on this second group, in particular on 17 million transactions made by 8.6 million foreign visitors from 175 different countries.
Due to the sensitive nature of bank data, our dataset was anonymized by BBVA prior to sharing, in accordance to all local privacy protection laws and regulations. Therefore, cards are identified only by randomly generated IDs, and all the details that could allow re-identifying a card holder were removed. The raw dataset is protected by the appropriate non-disclosure agreement and is not publicly available.
\section{Scaling of city attractiveness in Spain}
\label{section_3}
Cities are known not only to be the places where people live, but also the environment transforming human life. A bigger city boosts up human activity: intensity of interactions \cite{schlapfer2012scaling}, creativity \cite{bettencourt2010urbscaling}, economic efficiency (e.g., measured in GDP \cite{bettencourt2013origins}), as well as certain negative aspects: crime \cite{bettencourt2010urbscaling} or infectious diseases \cite{bettencourt2007growth}. Due to agglomeration effects and intensified human interactions, many aggregated socioeconomic quantities are known to depend on the city size in the form of superlinear scaling laws, meaning that those quantities do not simply grow with the city size, but actually grow faster than it; on the other hand, urban infrastructure dimensions (e.g., total road surface) reveal a sublinear relation to the city size \cite{batty2008size, bettencourt2007growth, brockmann2006scaling, bettencourt2013origins}.
However, all of the above-mentioned quantities are mainly related to processes happening inside the city. In this paper we pose another closely related, but slightly different question about a characteristic of the city's external appearance: its attractiveness for foreign visitors. Worth mentioning is that by ``attractiveness'' we do not necessarily mean that a place is a touristic destination; we are just looking at its ability to attract visitors for whatever reason~-- touristic, business, or any other personal matter-related ones.
In this study we focus on Spain as it is a country whose economy largely depends on international tourism, giving paramount importance to the ability of its cities to attract foreign visitors (even if they are coming for a primary purpose other than tourism, visitors still often act like tourists do, making their contribution to the touristic sector). Many tourist rankings only consider the number of people visiting a city, consequently often leading to a fairly obvious conclusion that larger cities are more attractive as they can accommodate more tourists. The question we are interested in is how the total amount of visitor activity typically scales with the city size. Understanding this kind of scaling allows one to predict the expected performance for a particular city and to estimate if it is actually under- or over-performing compared to the average expectation for cities of that size.
The first step in analyzing city attractiveness is to decide what should be considered a city. There are a number of ways to define a city, and selecting an appropriate city definition is important for analyzing aggregated urban performance \cite{batty2008size, roth2011structure, bettencourt2013origins}. For the purpose of our study we utilized definitions proposed by the European Urban Audit Survey (EUAS) \cite{urbauditweb}, the European Spatial Planning Observation Network (ESPON) \cite{espon2007} and the \'Areas Urbanas de Espa\~na (AUDES) \cite{audes} project. On the most fine-grained level, the AUDES project defines 211 conurbations (CONs) in Spain. On a more aggregate level, ESPON defines 40 Functional Urban Areas (FUAs), and finally, on the most aggregated level, EUAS defines 24 Large Urban Zones (LUZs). Population figures for LUZs and FUAs were obtained from Eurostat \cite{eurostat} and the National Statistics Institute of Spain \cite{ine}, and for CONs from the AUDES project. In addition to those three city definitions, we also perform our analysis for the 52 Spanish provinces in order to see if our conclusions can actually be extrapolated from the level of cities to more general consistent geographical entities.
\subsection{Aggregated city attractiveness}
\label{section_3.1}
In order to explore the overall city attractiveness for foreign visitors we focus on three different aspects of visitor activity~-- economic transactions, taking photographs and tweeting during their visit. We use the total amount of the described activity to quantify the attractiveness measure instead of a simple count of the users, since, unlike the number of people who visited the city at least once, the amount of activity also takes into account the length of visitors' stay and the intensity of exploring the city. We believe this is a more relevant proxy for the average visitor activity in the observed city at every single moment of time.
\begin{figure*}[t!]
\centering
\includegraphics[width=.49\textwidth]{CON-3.eps}
\includegraphics[width=.49\textwidth]{FUA-3.eps}
\includegraphics[width=.49\textwidth]{LUZ-3.eps}
\includegraphics[width=.49\textwidth]{Provinces-3.eps}
\caption{\label{fig::city_scaling}Scaling of city attractiveness with population size observed through three different datasets for different Spanish city definitions as well as for the provinces. X-axis represents the number of people living in the city, while Y-axis represents the fraction of the number of photographs/videos, tweets or bank card transactions registered in the city versus the total amount registered in all the cities.}
\end{figure*}
Figure \ref{fig::city_scaling} reports the results of fitting a power-law dependence $A\sim a p^b$ to the observed pairs of attractiveness $A$ and population $p$. Fitting is performed on the log-log scale, where it becomes a simple linear regression problem $log(A)\sim log(a)+b\cdot log(p)$. The fitted scaling trends are substantially superlinear for all three datasets - BBVA, Flickr and Twitter - and for all types of city definitions (i.e., CONs, FUAs, LUZs) as well as for the provinces. Regardless of the spatial scale used in the analysis, the scaling exponent $b$ remains approximately the same for all three datasets considered (i.e., around $1.5$, with the highest level of fluctuation for provinces and the lowest one for LUZs), confirming the robustness of the pattern. Moreover, this pattern is quite significant~-- such a high exponent indicates, for example, that a city $3$ times larger than another one is expected to be not $3$ times, but on average $5$ times more attractive.
In order to double-check that the average scaling trend is really consistent with the fitted power law, we perform a binning of the data by considering the average attractiveness of all the cities falling into each of five population ranges, evenly splitting the entire sample on the log scale. As one can see from Figure \ref{fig::city_scaling}, the binned trend in all the cases goes pretty much along the fitted power law, confirming the scaling shape.
Finally, the analysis of the fit statistics confirms the statistical significance of the trends --- $R^2$ values for the binned trends fitted by power laws are $97.6\pm 1.8\%$ across all three ways of city definition and all three datasets, quantitatively confirming the observed visual similarity between the trends and the superlinear power laws. Computing $R^2$ for the power-law fits to the scatter plots, which include the entire variation of the individual city data, still keeps $R^2$ high enough: $57.5\pm 12.9\%$ of the total data variation is explained by the superlinear trends. At the same time, $p$-values are always below $1\%$ (usually much lower) in all the cases considered, serving as ultimate evidence of the trend significance.
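The log-log fit described above amounts to ordinary least squares on $\log A$ versus $\log p$. A minimal self-contained sketch follows; the function name and the pure-Python implementation are illustrative assumptions (the paper does not specify its fitting tool):

```python
import math

def fit_power_law(populations, attractiveness):
    """Fit A = a * p**b by ordinary least squares on the log-log scale:
    log A = log a + b * log p.  Returns (a, b, r_squared)."""
    xs = [math.log(p) for p in populations]
    ys = [math.log(A) for A in attractiveness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx                      # scaling exponent
    log_a = my - b * mx                # intercept on the log scale
    ss_res = sum((y - (log_a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return math.exp(log_a), b, r2
```

On data generated exactly as $A = a\,p^{1.5}$ the routine recovers $b = 1.5$ and $R^2 \approx 1$; on real scattered city data the same call yields the trend and the $R^2$ values quoted in the text.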
\subsection{Temporal aspects of city attractiveness}
In the analysis presented in Section \ref{section_3.1}, we considered the overall aggregated city attractiveness over the entire time frame of data availability (i.e., one year in the case of the Twitter and BBVA datasets or ten years in the case of the Flickr data). However, one should be aware of the noticeable seasonality of visitation patterns. Therefore, in this section we investigate whether or not this seasonality affects the observed attractiveness scaling. For that purpose we consider an aggregation of the activity over a moving window of three-month seasons, shifting it month-by-month through the entire year.
\begin{figure*}[t!]
\centering
\includegraphics[width=.32\textwidth]{flickr_exponent_by_monthes.eps}
\includegraphics[width=.32\textwidth]{twitter_exponent_by_monthes.eps}
\includegraphics[width=.32\textwidth]{bbva_exponent_by_monthes.eps}
\caption{\label{fig::exponent_by_monthes}Variation of the relative value of the scaling exponent over the year, normalized by the yearly average.}
\end{figure*}
\begin{figure*}[b!]
\centering
\includegraphics[width=.8\textwidth]{residuals_LUZ.eps}
\caption{\label{fig::fig_residuals}LUZ scale-independent attractiveness through three data sets.}
\end{figure*}
Figure~\ref{fig::exponent_by_monthes} shows a substantial dependence of the observed exponent on the time of the year (i.e., month). The trend appears to be mostly consistent across different ways of city definition, although it varies slightly depending on the dataset. The main pattern, however, is always the same and is confirmed by all trends considered~-- the exponent always drops over the summer, when people seem to explore more, and more diverse, destinations in Spain. This could be easily explained by a higher touristic activity over the summer, especially beach tourism spread over a number of smaller destinations along the coast, while during the rest of the year, especially in spring and autumn, a larger number of foreign business visitors are primarily attracted to the major cities.
\subsection{Learning from deviations - scale-free city attractiveness}
The above superlinear power-law trends describe the way city attractiveness scales with the city size on average. However, each particular city's performance can differ from the trend prediction. This actually opens up a possibility for a scale-independent scoring of city attractiveness by considering the log-scale residual of the actual attractiveness value vs.\ the trend prediction: ${\rm res}=log(A)-b\cdot log(p) - log(a)$. A positive residual indicates that the city over-performs relative to the average trend, while a negative value indicates that the city under-performs.
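The residual scoring follows directly from the formula above. A hedged sketch (function name and input layout are illustrative assumptions) that computes the residual for each city and ranks cities from most over- to most under-performing:

```python
import math

def residual_scores(cities, a, b):
    """Scale-independent attractiveness: the log-scale residual of each
    city's observed attractiveness A versus the trend prediction a * p**b.
    Positive = over-performing, negative = under-performing.
    cities: {name: (population, attractiveness)}.
    Returns a list of (name, residual) sorted from over- to under-performing."""
    scores = {
        name: math.log(A) - (math.log(a) + b * math.log(p))
        for name, (p, A) in cities.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

Applying this separately per dataset gives the per-dataset residuals whose pairwise correlations are reported in Table \ref{tab:Correlations}.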
Just to give an example, Figure \ref{fig::fig_residuals} visualizes the residuals for the LUZs, ordering the cities from the most over- to the most under-performing ones according to the bank card transaction data. One can notice that although the residuals from different datasets are different, the patterns are generally consistent~-- cities strongly over/under-performing according to one dataset usually do the same according to the others. However, there are some interesting exceptions, e.g., in the cases of Malaga and Alicante. Although foreign people do visit those cities, as one can see from the bank card and Twitter data, the visitors seem to have relatively little motivation for posting pictures of them.
This pattern has a probable explanation: those places are known to be typical retirement destinations for senior people from northern Europe \cite{munoz2012atractivo}~-- some of them move there, while many keep visiting these places, especially over the winter. And it seems quite likely that visitors from this category might typically be much less active users of Flickr.
Similarly, the two island cities, Santa Cruz de Tenerife and Las Palmas, attract foreign visitors to spend their money, but do not seem to encourage them enough to perform both online activities, Twitter and Flickr. Another interesting outlier is Badajoz, which seems to be particularly under-performing in terms of Flickr and Twitter activity of foreign visitors. A possible reason might be the context of this city: it is not really a major touristic attractor but, because of its proximity to the border, mainly serves nearby Portuguese people as a shopping center and service provider.
The outliers highlight the importance of a more in-depth analysis which could address not only the very fact of foreign visitor activity but also its reasons, as well as its structure by visitor origin and type of activity. Nevertheless, the overall consistency of the residual values defined through different datasets, also observed on the scales of FUAs, CONs and the provinces, highlights the robustness of the general patterns. Table \ref{tab:Correlations} presents the pairwise correlation values between those residuals, which happen to be high enough, typically falling in the range between 50 and 80\%.
\begin{table}[h]
\caption{\label{tab:Correlations}Correlations (\%) between city/provinces residuals defined through different datasets.}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& bank/twitter & bank/flickr & twitter/flickr \\
\hline
Provinces & 84.45 & 46.77 & 52.89 \\
LUZs & 62.95 & 52.72 & 77.09 \\
FUAs & 73.68 & 59.98 & 77.90 \\
CONs & 80.07 & 56.90 & 58.56 \\
\hline
\end{tabular}
\end{center}
\end{table}
\section*{Conclusions}
\label{conclusion}
In this study we leveraged three types of big data created by human activity to quantify the ability of cities in Spain to attract foreign visitors. In general, city attractiveness was found to demonstrate a strong superlinear scaling with the city size. The high consistency of the scaling exponents across different ways of defining cities as well as across all three datasets used in the study provides evidence for the robustness of our finding and also serves as an indirect proof of the applicability of the selected datasets for that purpose.
Moreover, we analyzed the temporal variation of the scaling exponent during a year, which was found to reveal a very intuitive pattern quantified by a noticeable drop of the exponent value over the summer~-- visitor activity seems to be more spread across different smaller destinations within the country over more touristic summer time, while more concentrated at major destinations in autumn and spring, presumably because of having more business visitors in the country. Again the pattern appears to be pretty robust and consistent as being confirmed by all the datasets for all different city definitions.
\section*{Acknowledgments}
The authors would like to thank Banco Bilbao Vizcaya Argentaria (BBVA) for providing the anonymized bank dataset as well as Eric Fisher and Yahoo! Webscope program for providing the Flickr datasets for this research. Special thanks to Assaf Biderman, Marco Bressan, Elena Alfaro Martinez and Mar\'ia Hern\'andez Rubio for organizational support of the project and stimulating discussions. We further thank BBVA, MIT SMART Program, Center for Complex Engineering Systems (CCES) at KACST and MIT, Accenture, Air Liquide, The Coca Cola Company, Emirates Integrated Telecommunications Company, The ENEL foundation, Ericsson, Expo 2015, Ferrovial, Liberty Mutual, The Regional Municipality of Wood Buffalo, Volkswagen Electronics Research Lab, and all the members of the MIT Senseable City Lab Consortium for supporting the research. Part of this research was also funded by the Austrian Science Fund (FWF) through the Doctoral College GIScience (DK W 1237-N23), Department of Geoinformatics - Z\_GIS, University of Salzburg, Austria.
\section{INTRODUCTION}\label{sec:intro}
Cosmic rays (CRs) are high-energy radiation produced outside the solar system. The accelerators of CRs with energies below $10^{15}$ eV (PeV, the knee) are believed to be located inside the Galaxy. Identification of the accelerators, especially those which can accelerate CRs to PeV energies (called PeVatrons), is a prime objective towards understanding the origin of cosmic rays in the Galaxy. In particular, supernova remnants (SNRs) have been proposed as potential sources of Galactic cosmic rays \citep{2013APh....43...56B}. The detection of the characteristic pion-decay feature provides direct evidence that protons can be accelerated in SNRs \citep{2013Sci...339..807A}. However, the very-high-energy (VHE; E$\geq$0.1 TeV) gamma-ray spectra of more than ten young SNRs appear to be steep or contain breaks at energies below 10 TeV. This has raised doubts about the ability of SNRs to operate as PeVatrons \citep{2019NatAs...3..561A}. Other possible hadronic PeVatron candidates include the Galactic center \citep{2016Natur.531..476H}, young massive star clusters \citep{2019NatAs...3..561A}, and so on. The identity of the hadronic PeVatrons remains unclear and more observations are needed.
Gamma rays typically carry about 1/10 of the energy of their parent CRs. Thus, the observation of ultra-high-energy (UHE; E$\geq$0.1 PeV) gamma rays is the most effective method to search for PeVatrons. The first UHE gamma-ray source, the Crab Nebula, was reported in 2019 by Tibet AS$\gamma$ \citep{2019PhRvL.123e1101A}, and four more UHE sources have been revealed by the Tibet AS$\gamma$ and HAWC collaborations \citep{HAWC100TeV2019} in the past two years. Recently, the Large High Altitude Air Shower Observatory (LHAASO) reported the detection of 12 UHE gamma-ray sources with statistical significance over $7\sigma$ \citep{2021LHAASONature}. The photons detected by LHAASO far beyond 100 TeV prove the existence of Galactic PeVatrons. It is likely that the Milky Way is full of these particle accelerators. The large field of view (FOV) of LHAASO, together with its high duty cycle, allows the discovery of VHE and UHE gamma-ray sources by surveying a large fraction of the sky in the declination range from -15$^{\circ}$ to 75$^{\circ}$. The one square kilometer array (KM2A), a key sub-array of LHAASO, has a sensitivity at least 10 times higher than that of current instruments at energies above 30 TeV \citep{He2018Design}. KM2A is therefore a suitable tool to detect and study PeVatrons within our Galaxy. Half of the KM2A array began operation at the end of 2019 and the whole array will be completed in 2021. The achieved sensitivity in the UHE band already exceeds that of all previous observations.
In this paper, we report the discovery of a new UHE gamma-ray source, LHAASO J2108+5157, based on LHAASO-KM2A observations in Section 2. It is the first source revealed in the UHE band without a VHE gamma-ray counterpart reported by other detectors. A discussion of the plausible counterparts and the possible emission mechanisms of this source is presented in Section 3. Finally, Section 4 summarizes the conclusions.
\section{LHAASO OBSERVATIONS AND RESULTS}
\subsection{The LHAASO Detector Array}
LHAASO, located at 4410 m a.s.l. in Sichuan province, China, is a new-generation complex extensive air shower (EAS) array \citep{2010A}. It consists of three sub-arrays: the KM2A, the Water Cherenkov Detector Array (WCDA), and the Wide-Field Air Cherenkov Telescope Array (WFCTA) \citep{He2018Design}. As a major part of LHAASO, KM2A is composed of 5195 electromagnetic particle detectors (EDs) and 1188 muon detectors (MDs), which are distributed over an area of 1.3 $\rm km^{2}$. Each ED \citep{2018APh...100...22L} consists of 4 plastic scintillation tiles covered by a 0.5-cm-thick lead plate to convert the gamma rays to electron-positron pairs and improve the angular resolution of the array. The EDs detect the electromagnetic particles of the shower, which are used to reconstruct the properties of the air shower, such as the primary direction, core location, and energy. Each MD consists of a cylindrical water tank with a diameter of 6.8 m and a height of 1.2 m. The tank is buried under 2.5 m of soil to shield against the high-energy electrons/positrons and photons of the showers. The MDs are designed to detect the muon component of showers, which is used to discriminate between gamma-ray and hadron induced showers.
Half of the KM2A array, equipped with 2365 EDs and 578 MDs, has been in service since December 2019. For the half-array, a trigger is generated when 20 EDs are fired within a 400 ns window, resulting in a 1 kHz event trigger rate. For each triggered event, the parameters of the air shower, such as the direction, core location, and gamma/hadron separation variables, are derived from the recorded hit times and charges. The core resolution and angular resolution (68\% containment) of the half-array are energy- and zenith-dependent \citep{2020KM2ACrab}. The core resolution ranges from 4$-$9 m for events at 20 TeV to 2$-$4 m for events at 100 TeV, and the angular resolution ranges from 0.5$^\circ-$0.8$^\circ$ to 0.2$^\circ-$0.3$^\circ$. The energy resolution is about 24\% at 20 TeV and 13\% at 100 TeV for showers with zenith angle less than 20$^\circ$. Thanks to the good measurement of the muon component, KM2A has achieved a very high rejection power against hadron-induced showers: about $10^3$ at 25 TeV and greater than $4\times10^3$ at energies above 100 TeV.
The same simulation data presented in \citet{2020KM2ACrab} are adopted here to model the detector response, re-weighting the shower zenith angle distribution to trace the trajectory of LHAASO J2108+5157. For this sample, CORSIKA \citep[version 7.6400;][]{1998cmcc.book.....H} is used to simulate air showers and the dedicated software G4KM2A \citep{2019ICRC...36..219C} is used to simulate the detector response. The energy of the gamma rays is sampled from 1 TeV to 10 PeV and the zenith angle from 0$^\circ$ to 70$^{\circ}$.
\subsection{Analysis Methods}
The pipeline of KM2A data analysis presented in \citet{2020KM2ACrab} is designed for surveying the whole sky in the range of declination from -15$^{\circ}$ to 75$^{\circ}$ and the corresponding measurements for the source morphology and energy spectrum. The same pipeline is directly adopted in this analysis. The LHAASO-KM2A data were collected from 27th December 2019 to 24th November 2020. The final livetime used for the analysis is 308.33 days, corresponding to 94\% duty cycle. The effective observation time on source LHAASO J2108+5157 is 2525.8 hours. After the pipeline cuts presented in \citet{2020KM2ACrab}, the number of events used in this analysis is 1.2$\times$10$^{8}$.
The sky map in celestial coordinates (right ascension and declination) is divided into a grid of $0.1^{\circ} \times 0.1^{\circ}$ filled with the number of detected events according to their reconstructed arrival directions (event map). The ``direct integration method" \citep{2004ApJ...603..355F} is adopted to estimate the number of cosmic ray background events in each grid (background map). The background map is then subtracted from the event map to obtain the source map.
The significance levels of sources are computed using a likelihood analysis given a specific source geometry. It takes into account a given source model, the data and background maps, and the expected detector response and calculates a binned Poisson log-likelihood value. A likelihood ratio test is built to compute the test statistic ($TS$):
\begin{equation}
TS = 2\ln\frac{L_{s+b}}{L_{b}}
\label{eq:ts}
\end{equation}
Here, $L_{b}$ is the maximum likelihood of the background-only hypothesis and $L_{s+b}$ is the maximum likelihood of the signal-plus-background hypothesis. According to Wilks' theorem \citep{wilks1938}, the test statistic is distributed as a chi-square distribution with the number of degrees of freedom equal to the difference in the number of free parameters between the hypotheses. The usual source-discovery test takes ``there is no source'' as the null hypothesis and ``there is a source with flux normalization X'' as the alternative, so there is one degree of freedom and the significance is simply $\sqrt{TS}$.
We estimate the spectral energy distribution (SED) of LHAASO J2108+5157 with the forward-folding method described in \citet{2020KM2ACrab}. The SED of this source is assumed to follow a power-law spectrum ${dN}/{dE}=\phi_0({E}/{20\rm\ TeV})^{-\alpha}$. The best-fit values of $\phi_0$ and $\alpha$ are obtained by the least-squares fitting method.
\subsection{Results}
The significance maps around LHAASO J2108+5157 in the energy ranges 25$-$100 TeV and $>$100 TeV are shown in Figure \ref{fig1}; they are smoothed with the point spread function (PSF) of KM2A in the corresponding energy range. The source is detected with a statistical significance of 9.6$\sigma$ and 8.5$\sigma$, respectively. Given the maximum number of trials, $3.24 \times 10^6$, over the whole sky at -15$^{\circ}<$Dec.$<$75$^{\circ}$, the post-trial significance at $>$100 TeV is $6.4 \sigma$. The position of LHAASO J2108+5157 is determined by fitting a two-dimensional symmetrical Gaussian model, taking into account the KM2A PSF, using events with $E_{rec}>$ 25 TeV. The centroid of the Gaussian, corresponding to the location of LHAASO J2108+5157, is found to be $\rm R.A. = 317.22^{\circ} \pm 0.07^{\circ}_{stat}$, $\rm Dec. = 51.95^{\circ} \pm 0.05^{\circ}_{stat}$ (J2000), which is coincident with the location of events with $E_{rec}>$ 100 TeV \citep{2021LHAASONature}.
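The trial correction can be reproduced approximately with a short sketch, assuming a one-sided Gaussian tail convention and independent trials (the exact post-trial value depends on these conventions; the analysis quotes 6.4$\sigma$):

```python
import math

def p_value(sigma):
    """One-sided Gaussian tail probability."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

def post_trial_sigma(sigma_pre, n_trials):
    # For p_pre << 1/n_trials, 1-(1-p)^N underflows in floats, so use
    # the standard approximation p_post ~ N * p_pre
    p_post = n_trials * p_value(sigma_pre)
    lo, hi = 0.0, sigma_pre          # invert p_value by bisection
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if p_value(mid) > p_post:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma_post = post_trial_sigma(8.5, 3.24e6)   # ~6.5 sigma, close to the quoted value
```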
\begin{figure}[h]
\centering
\includegraphics[width=0.47\textwidth]{Map-J2108-1-eps-converted-to.pdf}%
\includegraphics[width=0.50\textwidth]{Map-J2108-2-eps-converted-to.pdf}
\caption{Left: significance map around LHAASO J2108+5157 as observed by KM2A for reconstructed energies from 25 TeV to 100 TeV. Right: significance map for energies above 100 TeV. The red cross and circle denote the best-fit position and the 95\% position uncertainty of the LHAASO source. The white circle at the bottom-right corner shows the size of the PSF (containing 68\% of the events).
\label{fig1}}
\end{figure}
To study the morphology of the source, a two-dimensional symmetrical Gaussian template convolved with the KM2A PSF is used to fit the data with $E_{rec}>$ 25 TeV using Equation \ref{eq:ts}. The source is found to be point-like ($TS\rm =118.79$), but a slightly extended morphology ($TS=121.48$) cannot be ruled out due to the limited statistics and the uncertainty on the KM2A PSF. An upper limit on the extension of the source is calculated to be $0.26^\circ$ at 95\% confidence level (CL). Figure \ref{fig2} shows the measured angular distribution of KM2A events around LHAASO J2108+5157 using events with $E_{\rm rec}>$ 25 TeV. The distribution is generally consistent with the PSF obtained from MC simulations ($\chi^2/ndf$= 9.1/10).
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Profile-J2108-eps-converted-to.pdf}
\caption{Distribution of events as a function of the square of the angle between the event arrival direction and LHAASO J2108+5157 direction. The blue histogram is the expected event distribution by our Monte Carlo simulation assuming a point-like gamma-ray source.
\label{fig2}}
\end{figure}
The data sets are divided into eight energy bins (Appendix \ref{sec:KM2A_data}, Table \ref{TableKM2A}). The total number of photon-like events detected above 25 TeV and 100 TeV is 140 and 18, respectively. There is one photon-like event with energy of 434 $\pm$ 49 TeV in the last bin. Assuming a single power-law form, the differential energy spectrum of gamma-ray emission from LHAASO J2108+5157 is derived. It can be described by a single power law from 20 TeV to 500 TeV as:
\begin{equation}
\frac{dN}{dE}=(1.59\pm0.35_{\rm stat})\times10^{-15}(\frac{E}{20 \rm\ TeV})^{-2.83\pm0.18} (\rm\ TeV^{-1}cm^{-2}s^{-1})
\end{equation}
The chi-square goodness of fit ($\chi^2/ndf$) is $4.26/5$ for the pure power law and $3.8/4$ for a power law with an exponential cutoff. The improvement is not significant, amounting to only about $0.7\sigma$ compared to the pure power-law fit. The systematic uncertainty comes largely from the atmospheric model used in the simulation. According to the variation of the event rate during operation, the overall systematic uncertainty is about 7\% on the flux and 0.02 on the spectral index \citep{2020KM2ACrab}.
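The quoted $\sim0.7\sigma$ follows from Wilks' theorem applied to the $\chi^2$ improvement for one extra free parameter (the cutoff energy); as a one-line check using the fit values above:

```python
import math

chi2_pl, chi2_cutoff = 4.26, 3.8   # chi^2 of the two fits quoted in the text
# One extra parameter: Delta chi^2 is ~chi^2(1 dof), so significance ~ sqrt(Delta chi^2)
sigma_improvement = math.sqrt(chi2_pl - chi2_cutoff)   # ~0.7 sigma
```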
\begin{figure}[ht]
\centering
\includegraphics[width=0.78\textwidth]{SED-J2108-eps-converted-to.pdf}
\caption{The SED of LHAASO J2108+5157. The solid red line shows the best fit power-law function.
\label{fig3}}
\end{figure}
\section{DISCUSSION}
\subsection{Searching for counterparts at other wavelengths}
In this analysis, LHAASO J2108+5157 is a point-like source with a $95\%$ position uncertainty of $0.14^\circ$. We first searched for a gamma-ray counterpart within $0.14^\circ$ of the center of LHAASO J2108+5157 in the VHE catalog TeVCat\footnote{\url{http://tevcat.uchicago.edu/}}. No VHE counterpart of LHAASO J2108+5157 is found, even when enlarging the search radius to $0.5^\circ$. In the fourth {\em Fermi}-LAT Source Catalog \citep[4FGL,][] {2020ApJS..247...33A}, the HE point source 4FGL J2108.0+5155 is spatially coincident with LHAASO J2108+5157 at an angular distance of $\sim 0.13^\circ$. The chance probability of an HE point source being found in the region of LHAASO J2108+5157 is about 0.006. We performed a dedicated analysis of 4FGL J2108.0+5155 using $\sim 12.2$ years of $\textsl{Fermi}$-LAT data (see Appendix \ref{sec:Fermi_data} for details). At a high significance level ($7.8\sigma$), 4FGL J2108.0+5155 is spatially extended (namely 4FGL J2108.0+5155e), with a 2D-Gaussian extension of $\sim 0.48^\circ$. The extrapolation of the spectrum of 4FGL J2108.0+5155e predicts a differential flux of $ 4.4\times 10^{-13} \rm\ erg\ cm^{-2}\ s^{-1}$ at 10 TeV, a factor of 10 lower than that of LHAASO J2108+5157. Considering that the angular size of 4FGL J2108.0+5155e is about two times larger than the 95\% upper limit on the extension ($UL_{ext,95\%}$ $=0.26^\circ$) of LHAASO J2108+5157, a physical association between the two sources cannot be clearly established. Therefore, we derived the upper limit flux above 10 GeV with the same spatial template as LHAASO J2108+5157 ($\sigma=0.26^\circ$) centered at the position of LHAASO J2108+5157, which is used to limit the 10 GeV$-$1 TeV emission associated with LHAASO J2108+5157 (as shown in Figure \ref{fig5}).
In the X-ray band, Swift-XRT surveyed this region with an exposure of 4.7 ks \citep{2013ApJS..207...28S}; however, no X-ray counterpart is found within $0.26^\circ$ of the center of LHAASO J2108+5157. The closest X-ray source is the eclipsing binary RX J2107.3+5202, with a separation of $0.3^\circ$. At radio wavelengths, the Canadian Galactic Plane Survey (CGPS) carried out a high-resolution survey of the 408 MHz and 1420 MHz continuum emission covering the LHAASO J2108+5157 region. As shown in Figure \ref{fig4}, an extended radio source lies within the $UL_{\rm ext,95\%}$ region of LHAASO J2108+5157; it is plausibly associated with the nearby star-forming region. The radio upper limits are derived from the $0.26^\circ$ region centered at the position of LHAASO J2108+5157.
We use the 2.6 mm CO-line survey \citep{2001ApJ...547..792D} to search for molecular cloud clumps in the direction of LHAASO J2108+5157. Two peaks, at $\sim \rm -13\ km/s$ and $\sim \rm -2\ km/s$, are identified with the molecular clouds [MML2017]4607 and [MML2017]2870 \citep{2017ApJ...834...57M}, respectively (see Appendix \ref{sec:CO_line} for details). As shown in Figure \ref{fig4}, LHAASO J2108+5157 lies near the center of the molecular cloud [MML2017]4607 \citep{2017ApJ...834...57M}, and this cloud is within the upper limit on the extension of LHAASO J2108+5157. The average angular radius of the cloud [MML2017]4607 is 0.236$^{\circ}$, with a mass of 8469 M$_{\bigodot}$ at a distance of $\sim$3.28 kpc; the number density is estimated to be $\sim 30\rm\ cm^{-3}$. The chance probability of a molecular cloud being found in the region of LHAASO J2108+5157 is about 0.04.
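The quoted number density can be checked with a back-of-the-envelope sketch, assuming the cloud is a uniform sphere of hydrogen at the quoted mass, angular radius and distance (helium and geometry corrections are ignored, so only the order of magnitude is meaningful):

```python
import math

PC_CM, MSUN_G, MH_G = 3.086e18, 1.989e33, 1.674e-24  # parsec, solar mass, H mass (cgs)

theta = math.radians(0.236)             # angular radius of [MML2017]4607
D_pc = 3.28e3                           # distance in pc
R_cm = D_pc * theta * PC_CM             # physical radius, ~13.5 pc
V_cm3 = 4.0 / 3.0 * math.pi * R_cm**3   # spherical volume
n = 8469 * MSUN_G / MH_G / V_cm3        # ~33 cm^-3, consistent with the quoted ~30
```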
The most plausible candidates to accelerate particles up to hundreds of TeV include SNRs, pulsar wind nebulae (PWNe), and young stellar clusters. Based on the catalogs collected by SIMBAD\footnote{\url{http://simbad.u-strasbg.fr/simbad/sim-fcoo}}, we searched for possible accelerators within $0.8^\circ$ of the center of the LHAASO source. Although no SNRs or PWNe are found, there are two young star clusters, Kronberger 82 \citep{2006A&A...447..921K} and Kronberger 80 \citep{2016A&A...585A.101K}.
\begin{figure}[h]
\centering
\includegraphics[width=0.78\textwidth]{CO_map_1-eps-converted-to.pdf}
\caption{ Brightness temperature distribution of the ${}^{12}\rm CO(1-0)$ line survey integrated over the velocity interval between -14.3 and -9.1 $\rm km\ s^{-1}$, corresponding to a distance for the molecular gas of $\sim 3.26$ kpc \citep{2001ApJ...547..792D}. The white contours indicate the 1420 MHz continuum emission survey \citep{2003AJ....125.3145T}. The best-fit location of LHAASO J2108+5157 and the upper limit on its extension (68\% containment radius) are indicated with a red cross and solid circle, respectively. The locations of the young star cluster Kronberger 80 \citep{2016A&A...585A.101K} and the open cluster candidate Kronberger 82 \citep{2006A&A...447..921K} are marked with magenta stars. The pink diamond represents the position of the binary RX J2107.3+5202. The light blue solid ellipse represents the molecular cloud [MML2017]4607. The cyan dashed circle shows the extent of 4FGL J2108.0+5155e (68\% containment radius).
\label{fig4}}
\end{figure}
\subsection{Scenarios for the origin of UHE emission}
Because LHAASO J2108+5157 is spatially coincident with the molecular cloud [MML2017]4607, a hadronic origin is preferred. The UHE emission could be produced by protons accelerated up to PeV energies colliding with the ambient dense gas. We use the NAIMA package \citep{2015ICRC...34..922Z} to estimate the parent particle spectrum that best reproduces the observed gamma-ray spectrum of the LHAASO J2108+5157 region. The observed gamma rays are attributed to the decay of $\pi^0$ mesons produced in inelastic collisions between accelerated protons and the target gas in [MML2017]4607. For the energy distribution of the parent particles, we assume an exponential cutoff power-law form with the index fixed to 2, as predicted by standard diffusive shock acceleration. We obtain a cutoff energy of about 600 TeV. The total energy of the cosmic-ray protons is $ 2 \times 10^{48}{(\frac{n}{30 \rm\ cm^{-3}})}^{-1}{(\frac{D}{3.28 \rm\ kpc})}^{2}$ erg, where $n$ is the gas density and $D$ is the distance to the source; this is less than 10\% of the typical kinetic energy of a supernova explosion and consistent with the energy content of cosmic rays escaping from an SNR \citep{2018ApJ...860...69C}.
Considering the soft GeV spectrum of 4FGL J2108.0+5155e at lower energies and the hard TeV spectrum of LHAASO J2108+5157, together with their different spatial extensions, we suggest that the much more extended gamma-ray emission of 4FGL J2108.0+5155e may be associated with an old SNR, like W28, while the point-like gamma-ray emission of LHAASO J2108+5157 may be produced by the interaction between CRs escaping from the SNR and the molecular cloud clumps, like the three TeV sources south of W28 detected by HESS \citep{2008A&A...481..401A}. In addition, young massive stars may also operate as proton PeVatrons with a dominant contribution to the Galactic cosmic rays \citep{2019NatAs...3..561A}. Two young star clusters, Kronberger 80 and Kronberger 82, indicated in Figure \ref{fig4}, are located near LHAASO J2108+5157, at angular distances of $0.62^\circ$ and $0.45^\circ$, respectively. Kronberger 80, at a distance of 4.9 kpc \citep{2016A&A...585A.101K}, is about 1.6 kpc away from the cloud [MML2017]4607, implying that Kronberger 80 is unlikely to interact with [MML2017]4607 to produce the UHE gamma-ray photons. In the absence of a distance measurement for Kronberger 82, we cannot conclude whether or not Kronberger 82 is a source near [MML2017]4607 that could explain the UHE gamma-ray emission.
\begin{figure}[h]
\centering
\includegraphics[width=0.78\textwidth]{Mult-Model-eps-converted-to.pdf}
\caption{The multiwavelength SED of LHAASO J2108+5157 with hadronic and leptonic models. The red points and arrows are the LHAASO-KM2A observations. The blue triangles are the radio fluxes. The grey points and blue arrows are the {\em Fermi}-LAT spectral points and upper limits.}
\label{fig5}
\end{figure}
Nevertheless, UHE gamma rays can also be generated by high-energy electrons, as shown in Figure \ref{fig5}. According to previous observations, a large fraction of the VHE sources are identified as PWNe. Recent LHAASO observations also found that most of the sources detected at energies above 100 TeV are spatially coincident with identified pulsars \citep{2021LHAASONature}. A likely scenario for LHAASO J2108+5157 would be that the UHE gamma-ray emission stems from a PWN powered by a yet unknown pulsar. According to the spectrum of LHAASO J2108+5157, the gamma-ray luminosity at 20$-$500 TeV is estimated to be 1.4$\rm \times 10^{32}(\frac{D}{1\rm\ kpc})^{2}$ $\rm erg$ $\rm s^{-1}$. It could reasonably be powered by a typical gamma-ray pulsar, which usually has a spin-down luminosity ranging from $10^{35}$ to $10^{39}$ erg s$^{-1}$ \citep{2009ApJ...694...12M}. The PWN scenario for LHAASO J2108+5157 suggests that the UHE gamma-ray emission is produced by the inverse Compton scattering of electrons. We assume that the primary electron spectrum follows a power law with an exponential cutoff, $dN/dE \propto E^{-\Gamma} \exp(-E/E_c)$. In the absence of multi-wavelength data, the spectral index of the electrons is adopted to be 2.2 and the magnetic field strength to be 3 $\mu$G, which are reasonable for typical PWNe. The cutoff energy of the electrons is set to 200 TeV to explain the UHE gamma-ray spectrum of LHAASO J2108+5157; it corresponds to a synchrotron energy-loss time-scale of $10^{4}$ yrs for a magnetic field strength of 3 $\mu$G. The total energy of electrons above 1 GeV is estimated to be $1 \times10^{46}{(\frac{D}{1 \rm\ kpc})}^{2} $ erg. Because of the absence of a pulsar counterpart, the PWN scenario remains uncertain.
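The 20$-$500 TeV luminosity quoted above follows from integrating $E\,dN/dE$ of the best-fit power law over the band and scaling by $4\pi D^2$; a sketch with $D = 1$ kpc assumed, as in the text:

```python
import math

# Best-fit power law from Section 2: dN/dE = phi0 * (E/E0)^(-alpha)
phi0, E0, alpha = 1.59e-15, 20.0, 2.83   # TeV^-1 cm^-2 s^-1, pivot in TeV
E1, E2 = 20.0, 500.0                     # integration band in TeV
TEV_ERG = 1.602                          # 1 TeV in erg
KPC_CM = 3.086e21                        # 1 kpc in cm

# Analytic integral of E * dN/dE between E1 and E2 (alpha != 2)
flux_tev = phi0 * E0**alpha * (E2**(2 - alpha) - E1**(2 - alpha)) / (2 - alpha)
L = 4 * math.pi * KPC_CM**2 * flux_tev * TEV_ERG   # ~1.4e32 erg/s at 1 kpc
```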
\section{SUMMARY}
Using the first 11 months of data from the KM2A half-array, we report the discovery of a new UHE gamma-ray source, LHAASO J2108+5157. The statistical significance of the gamma-ray signal reaches 8.5$\sigma$ above 100 TeV. In the absence of any VHE gamma-ray counterpart, this is the first gamma-ray source discovered directly in the UHE band. The source is point-like, with an extension less than $0.26^\circ$ at 95\% confidence level. The power-law spectral index of LHAASO J2108+5157 is $ -2.83 \pm 0.18_{\rm stat}$. In addition, the source is correlated with the molecular cloud [MML2017]4607; collisions between cosmic-ray protons and the ambient gas may produce the observed radiation. Other scenarios, such as a PWN, can also be invoked to explain the gamma-ray emission. So far, no conclusion about the origin of its UHE emission can be reached. Forthcoming observations by LHAASO will reduce the uncertainties of the spectral points and allow us to extend the measured energy range.
~
This work is supported in China by National Key R\&D program of China under the grants 2018YFA0404201, 2018YFA0404202, 2018YFA0404203, 2018YFA0404204, by NSFC (No.12022502, No.11905227, No.11635011, No.U1931112, No.11905043, No.U1931111), and in Thailand by RTA6280002 from Thailand Science Research and Innovation. The authors would like to thank all staff members who work at the LHAASO site above 4410 meters above sea level year-round to maintain the detector and keep the electrical power supply and other components of the experiment operating smoothly. We are grateful to the Chengdu Management Committee of Tianfu New Area for their constant financial support of research with LHAASO data.
The research presented in this paper has used data from the Canadian Galactic Plane Survey, a Canadian project with international partners, supported by the Natural Sciences and Engineering Research Council.
\software{CORSIKA \citep[version 7.6400;][]{1998cmcc.book.....H}, G4KM2A \citep{2019ICRC...36..219C}, NAIMA \citep{2015ICRC...34..922Z}, FermiPy \citep{2017ICRC...35..824W}}
\section{Introduction}
The coming decade will see transformative dark energy science done by both ground-based surveys (DESI \cite{DESI}, LSST \cite{LSST}) and space-based missions (Euclid \cite{EUCLID18}, WFIRST \cite{WFIRST18}). We expect that large swaths of the universe will be sampled close to the sample-variance limit on very large scales, using galaxies as tracers of the density field, as sources for measuring the weak gravitational lensing shear, or as cosmic backlights illuminating the cosmic hydrogen for Lyman-$\alpha$ forest studies. At the same time, these fields will offer a multitude of cross-correlation opportunities, through which more and more robust science will be derived.
Looking beyond the current decade, several experimental options are being considered that will allow us to continue on the path of mapping an ever increasing volume of the Universe. Photometric experiments, such as LSST, will likely be limited by systematics in the quality of the photometric redshifts. Spectroscopic instruments are more attractive, especially using LSST as the targeting survey, but will require major investments in a dedicated new telescope and more aggressive spectrograph multiplexing to be truly interesting when compared to DESI. At higher redshift, traditional galaxy spectroscopy in the optical becomes ever more difficult, since the sources are sparser, fainter and further redshifted. On the other hand, turning to radio and relying on the 21-cm signal from neutral hydrogen could offer a cost-effective way of reaching deep sampling of the universe, especially in the redshift range $2 \lesssim z \lesssim 6$, which remains largely unexplored on cosmological scales.
A number of upcoming (CHIME \cite{CHIME}, HIRAX \cite{HIRAX}, Tianlai \cite{Tianlai}, SKA\cite{SKACosmo}) or planned (PUMA, the proposed Stage {\sc ii} experiment \cite{CVDE-21cm}) interferometric instruments will use the 21-cm signal to probe this volume of the universe. In the post-reionization era most of the hydrogen in the Universe is ionized, and the 21-cm signal comes from self-shielded regions largely within halos. Such `intensity mapping' surveys therefore measure the unresolved emission from halos tracing the cosmic web in redshift space.
The 21-cm observations suffer from one fundamental problem: they are completely insensitive to modes which vary slowly along the line of sight (that is, low $k_{\parallel}$ modes), because these are perfectly degenerate with foregrounds which are orders of magnitude brighter than the signal \cite{Shaw14,Shaw15,Pober15,Byrne18}. This effect alone severely limits the usefulness of 21-cm observations for cross-correlations with tracers that have wide kernels in the radial direction. This is particularly true for cross-correlations with cosmic microwave background (CMB) lensing reconstructions and the shear field measured by photometric galaxy surveys, such as that coming from LSST. Cross-correlations with the CMB would allow us to use the 21-cm and CMB observations in conjunction to put the strongest limits on modified gravity by measuring growth. Cross-correlations with spectroscopic galaxies would allow us to characterize the source redshifts of galaxy samples used for weak-lensing measurements, one of the main optical weak-lensing systematics. Moreover, the foreground wedge (discussed below) can render further regions of $k$-space unusable for cosmological analysis \cite{Datta10,Morales12,Parsons12,Shaw14,Shaw15,Liu14,Pober15,SeoHir16,Cohn16,Byrne18}. The foreground wedge is not as fundamental as the low $k_{\parallel}$ contamination, because it results from imperfect instrument calibration rather than being a fundamental astrophysical limitation. Nevertheless, it is a data cut that will likely be necessary at least in the first iterations of data analysis from the upcoming surveys.
Can this lost information be recovered? The answer is yes, although the extent to which this is possible depends on both the resolution and noise properties of the instrument. Imagine the following thought experiment. At infinite resolution in real space (and ignoring the thermal broadening of the signal for the moment), the underlying radio intensity is composed of individual objects that appear as distinct peaks. A high-pass filter in $k$ space will not fundamentally alter one's ability to count these objects. While the filtering might introduce `wings' in individual profiles, the peaks in the density field are still there. In this case, we can recover the lost large-scale modes perfectly. A somewhat different way to look at the same physics is to realize that non-linear mode coupling propagates information from large-scale to small-scale modes, while the initial conditions of the small-scale modes are forgotten. Since the total number of modes scales as $k^2\,\delta k$, there are always many more small-scale modes than large-scale modes, and in a sense the system is over-constrained if one wants to recover large-scale modes for which there are no direct measurements. Non-linear evolution erases the primordial phase information on small scales and encodes the primordial large-scale field over the entire $k$-space volume. So it is clear that backing out the large-scale information from the small scales is possible, at least in the limit of sufficiently low noise and sufficiently high resolution.
In this paper we approach this problem by means of forward modelling the final density field. Starting with an initial density field and a suitable bias parameterization, we reformulate the problem of recovering the large scales as a problem of non-linear inversion. In the past few years, several reconstruction methods have been developed to solve this problem \cite{Jasche13,Seljak17,Schmittfull17,Modi18,Schmidt18,Schmittfull19}. The solution of this non-linear problem is the linear, 3D field that evolves under the given forward model into the observed final field. Thus one not only recovers the linear large scales where the information was lost, but also automatically performs an optimal reconstruction of the linear field \emph{on scales where we had non-linear information to start with}. This is often the product that one ultimately desires. It allows not only an optimal BAO measurement for the 21-cm survey, but also increases the range of scales over which one can model cross-correlations with other tracers, thus enabling cross-correlations with CMB lensing reconstructions and photometric galaxy samples.
The full implications of this reconstruction exceed the scope of this paper. Instead we focus on the basic questions:
i) Does the forward modeling approach to reconstruction work at all in the case where we lose linear modes to foregrounds? ii) What is the complexity of the forward model required? iii) How does the result depend on the noise and angular resolution of the experiment? iv) How does the performance vary with scale, direction, redshift and real vs.\ redshift space data? v) What are the gains one expects for different science objectives such as BAO reconstruction, cross-correlations with the CMB and cross-correlations with different LSS surveys such as LSST?
The outline of the paper is as follows. We begin by discussing the observational constraints we are likely to face in \S\ref{sec:instruments} and the simulation suite we will be using as mock data to demonstrate our reconstruction algorithm in \S\ref{sec:sims}. Next, we review our forward model and the method we use to reconstruct the field from the observations in \S\ref{sec:recon} (building upon refs.~\cite{Seljak17,Modi18}). In \S\ref{sec:results} we show the results for our forward model on the `observed' 21-cm data, as well as gauge the performance of our reconstruction algorithm on different metrics for multiple experimental setups. In \S\ref{sec:implications} we show the improvements expected by using our reconstructed field for different science objectives such as BAO reconstruction, photometric redshift estimation and CMB cross correlations. We conclude in \S\ref{sec:conclusions}.
\section{Observational constraints}
\label{sec:instruments}
The instruments of interest for this investigation are interferometers measuring the redshifted 21-cm line. Such instruments work in the Fourier domain with the correlation between every pair of feeds, $i$ and $j$, measuring the Fourier transform of the sky emission times the primary beam at a wavenumber, $k_\perp = 2\pi \vec{u}_{ij}/ \chi(z)$, set by the spacing of the two feeds in units of the observing wavelength ($\vec{u}_{ij}$) \cite{TMS17}.
The visibility noise is inversely proportional to the number (density) of such baselines \cite{ZalFurHer04,McQuinn06,Seo2010,Bull2015,SeoHir16,Cohn16,Wol17,Alonso17,White17,Obuljen18,CVDE-21cm,Chen19}.
Where necessary, we take the noise parameters from Refs.~\cite{CVDE-21cm,Chen19}.
We shall investigate how our reconstruction depends upon the noise (thermal plus shot noise) and the $k$-space sampling of the instrument.
Radio foregrounds, primarily free-free and synchrotron emission from the Galaxy and unresolved point sources, are several orders of magnitude brighter than the signal of interest and present a major problem for 21-cm measurements \cite{Furlanetto06,Shaw14,Pober15}. However, due to their emission mechanisms, they are intrinsically very spectrally smooth, and this is the property that allows them to be separated from the signal of interest, which varies along the line of sight due to variation in the underlying cosmic density field. This separation naturally becomes increasingly difficult as we seek to recover very low $k_\parallel$ modes, i.e.\ modes close to transverse to the line of sight. The precise value below which recovery becomes impossible is currently unknown (see Refs.~\cite{Shaw14,Shaw15,Pober15,Byrne18} for a range of opinions). To be conservative, we will assume that we lose all the modes below $k_\parallel=0.03 \,h\,{\rm Mpc}^{-1}$; however, we will also study how sensitive our results are to this cut-off value.
In addition to low $k_\parallel$, non-idealities in the instrument lead to leakage of foreground information into higher $k_\parallel$ modes. This arises because, for a single baseline, a monochromatic source (i.e.\ a bright foreground) at non-zero path-length difference is perfectly degenerate with signal at zero path-length difference (i.e.\ zenith for transiting arrays) but appropriately non-flat spectrum, such as that arising from 21-cm fluctuations. This is usually phrased in terms of a foreground ``wedge'' which renders modes with low $k_\parallel/k_\perp$ unusable \cite{Datta10,Morales12,Parsons12,Shaw14,Shaw15,Liu14,Pober15,SeoHir16,Cohn16,Byrne18}. Due to the variation of the Hubble parameter with redshift the wedge becomes progressively larger at higher redshift. Information in this wedge is not irretrievably lost, because using multiple baselines can break the degeneracy, but it requires progressively better phase calibration the deeper into the wedge one pushes. The better the instrument can be calibrated and characterized the smaller the impact of the wedge. The most pessimistic wedge assumption is that all sources to the horizon contribute to the contamination -- we will not consider this case as it makes a 21-cm survey largely ineffective for large-scale structure. The most optimistic assumption is that the wedge has been subtracted perfectly. We regard this as unrealistic. A middle-of-the-road assumption is that the wedge matters to an angle set by the primary field of view. We take our `optimistic' choice to be the `primary beam' wedge defined as $\theta_w = 1.22\lambda/2D_e$, where $D_e$ is the effective diameter of the dish after factoring in the aperture efficiency ($\eta_\alpha = 0.7$) and the factor of two in the denominator gives an approximate conversion between the first null of the Airy disk and its full width at half maximum (FWHM). We shall contrast this with the `pessimistic' case $\theta_w = 3\times 1.22\lambda/2D_e$.
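As an illustration, the wedge cut plus the low-$k_\parallel$ cut can be expressed as a simple mask over the Fourier modes of a periodic mesh. The sketch below (Python/NumPy) is ours, not from the text: in particular the \texttt{wedge\_slope} parameter stands in for the cosmology- and angle-dependent wedge boundary, and its default value is purely illustrative.

```python
import numpy as np

def wedge_mask(nmesh, boxsize, kpar_min=0.03, wedge_slope=0.3):
    """Boolean mask of k-modes *retained* after removing the foreground
    wedge (|k_par| < wedge_slope * k_perp) and modes with |k_par| < kpar_min.
    `wedge_slope` is an illustrative stand-in for the wedge angle."""
    kf = 2 * np.pi / boxsize                            # fundamental mode
    k1 = np.fft.fftfreq(nmesh, d=1.0 / nmesh) * kf      # transverse axes
    kz = np.fft.rfftfreq(nmesh, d=1.0 / nmesh) * kf     # line of sight
    kx, ky, kpar = np.meshgrid(k1, k1, kz, indexing="ij")
    kperp = np.sqrt(kx**2 + ky**2)
    keep = (np.abs(kpar) >= kpar_min) & (np.abs(kpar) >= wedge_slope * kperp)
    return keep
```

In a reconstruction code such a mask would simply zero out (or exclude from the likelihood) the contaminated modes.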
Fig.~\ref{fig:knoise} shows this information in graphical form, following refs.~\cite{Chen19,HiddenValley19}. The color scale shows the fraction of the total power which is signal, as a function of $k_\perp$ and $k_\parallel$. The modes lost to foregrounds in the wedge are illustrated by the gray dashed (optimistic) and dotted (pessimistic) lines. Modes below and to the right of these lines would be contaminated by foregrounds. In addition we expect to lose modes with $k_\parallel<k_\parallel^{\rm min}$ where $k_\parallel^{\rm min}=0.01-0.1\,h\,{\rm Mpc}^{-1}$. At $z=2$ and low $k$ the signal dominates, at intermediate $k$ the shot-noise starts to become important and at high $k_\parallel$ the thermal noise from the instrument dominates.
\begin{figure}
\centering
\resizebox{\columnwidth}{!}{\includegraphics{figs/puma_scales}}
\caption{The fraction of the total power that is signal, $S/(S+N)$, vs.~$k_\perp$ and $k_\parallel$ for a $5\,$yr PUMA-like survey at $z=2$ (left), 4 (middle) and 6 (right). The dotted and dashed lines in each panel show a pessimistic and optimistic forecast for the foreground wedge (see text). The loss of low $k_\perp$ modes, most visible at $z=2$, is due to the constraint that dishes must be further apart than their diameters, leading to a minimum baseline length.}
\label{fig:knoise}
\end{figure}
\section{Data: Hidden Valley simulations}
\label{sec:sims}
To test the efficacy of our method we make use of the Hidden Valley\footnote{http://cyril.astro.berkeley.edu/HiddenValley} simulations \cite{HiddenValley19}, a set of trillion-particle N-body simulations in gigaparsec volumes aimed at intensity mapping science. Our workhorse simulation will be {\tt HV10240/R}, which evolved $10240^3$ particles in a periodic, $1024\,h^{-1}$Mpc box from Gaussian initial conditions using a particle-mesh code \cite{FengChuEtAl16} with a $20480^3$ force mesh. At this resolution, one is able to resolve halos down to $M\sim 10^9\,h^{-1}{\rm M}_\odot$, which host the majority ($>95\%$) of the cosmological HI signal, while the volume allows robust measurement of observables such as baryon acoustic oscillations.
The halos in the simulation were assigned neutral hydrogen with a semi-analytic recipe, outlined in more detail in ref.~\cite{HiddenValley19}. We make use of their fiducial model, `Model A', at $z=2$, $4$ and $6$. Briefly, this model populates halos with centrals and satellites following a halo occupation description and these galaxies are then assigned HI mass following a $M_h-M_{\rm HI}$ relation. Our mock HI data live in redshift space and capture the small-scale non-linear redshift space distortion effects due to satellite motion.
We also caution the reader that \rsp{while we have made use of a particular semi-analytic model,
the manner in which HI traces the matter field at high $z$ is currently poorly constrained
observationally. While our model is consistent with our best current knowledge, the particular values of various modeling parameters we have assumed may not be correct.
However as long as this field can be modeled with a flexible bias framework and the scatter between the HI mass and halo mass (or underlying dark matter density) is similar to other tracers, such as stellar mass,} we believe our \rsp{qualitative} results do not depend critically upon these details.
\rsp{
For our analysis, we will use two Cartesian meshes (discussed further in Section \ref{sec:annealing})
with 256 and 512 cells along each dimension, which have resolutions of $4$ and $2\,h^{-1}{\rm Mpc}$ respectively.
To generate the HI data, we deposit the galaxies, weighted by their HI mass, on these meshes with a cloud-in-cell (CIC) interpolation scheme and then estimate the HI overdensity field ($\delta_{\rm HI}$) in the usual way.
Similarly, to generate the final, Eulerian matter field at the redshifts of interest, we use a $4\%$ subsampled snapshot of the particle positions and paint them with the same CIC scheme (and equal weights per particle).
The initial, Lagrangian field ($\delta_L$), is generated on the meshes using the same initial seed (hence same initial conditions) as the simulation itself.
Once we generate the `clean HI data' on a mesh, we need to simulate the foreground wedge and thermal noise to construct mock observed data for the purpose of reconstruction.
To include the foreground wedge, $w$, we simply omit from our calculations all of the modes that are within the wedge, i.e.\ below a certain cut-off $k_\parallel/k_\perp$, as shown in Fig.~\ref{fig:knoise}.
We simulate thermal noise by drawing a zero-mean, Gaussian realization, $n(\mathbf{k})$, from the 2D thermal noise power spectrum, $P_{\rm th}(k, \mu)$.
Then, we corrupt the `clean' data by adding this noise realization and use it as the `observed' data for reconstruction
\begin{equation}
\delta_{\rm HI}^n(\mathbf{k}) = \delta_{\rm HI}(\mathbf{k}) + n(\mathbf{k})
\label{eq:addnoise}
\end{equation}
}
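The corruption step of Eq.~\ref{eq:addnoise} can be sketched as follows. For brevity we assume an isotropic noise spectrum (the paper's $P_{\rm th}$ also depends on $\mu$), and the function name and the white-noise normalization convention are ours.

```python
import numpy as np

def add_thermal_noise(delta_hi, boxsize, pk_noise, seed=0):
    """Add a zero-mean Gaussian noise realization, drawn from the power
    spectrum pk_noise(k), to a real-space overdensity mesh (Eq. addnoise).
    Isotropic P(k) for simplicity; the text uses an anisotropic P_th(k,mu)."""
    nmesh = delta_hi.shape[0]
    rng = np.random.default_rng(seed)
    # unit-variance white noise per cell has flat P(k) = cell volume
    white = rng.standard_normal(delta_hi.shape)
    wk = np.fft.rfftn(white)
    kf = 2 * np.pi / boxsize
    k1 = np.fft.fftfreq(nmesh, d=1.0 / nmesh) * kf
    kz = np.fft.rfftfreq(nmesh, d=1.0 / nmesh) * kf
    kx, ky, kzz = np.meshgrid(k1, k1, kz, indexing="ij")
    kk = np.sqrt(kx**2 + ky**2 + kzz**2)
    cellvol = (boxsize / nmesh)**3
    amp = np.sqrt(np.maximum(pk_noise(kk), 0.0) / cellvol)
    noise = np.fft.irfftn(wk * amp, s=delta_hi.shape)
    return delta_hi + noise
```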
\section{Reconstruction Method}
\label{sec:recon}
There have been many approaches to determining the density field from noisy or incomplete observations in cosmology, from early work like refs.~\cite{Peebles89,Bertschinger89,Rybicki92,Nusser92,Dekel94} through ref.~\cite{Narayanan99} and references therein. Forward model reconstructions similar to our approach have also been developed previously and applied in other contexts \cite{Jasche10,Jasche13,Kitaura13,Wang14,Jasche15,Wang16}.
Other authors have also investigated how one could reconstruct the long-wavelength modes which are lost to foregrounds in intensity mapping \cite{Karacay19,Others}. Ref.~\cite{Karacay19}, in particular, investigates a very similar problem with a related approach.
Our method to reconstruct the long-wavelength modes puts into practice the intuition developed in the introduction, that the non-linear evolution of matter under gravity propagates information from large scales in the initial conditions to the small scales in the final matter field. We reconstruct the initial density field by optimizing its posterior, conditioned on the observed data (HI density in $k$-space), assuming Gaussian initial conditions. Evolving this initial field allows us to reconstruct the observed data on all scales, thus reconstructing the sought-after long-wavelength modes. To solve this problem of reconstructing the initial conditions, we follow the approach developed by refs.~\cite{Seljak17,Modi18}. In this first application of the method we will hold the cosmology fixed, though in future one could imagine jointly fitting the cosmology and the long-wavelength modes.
\rsp{Varying the cosmology, and hence the prior power spectrum, is especially important when quantifying uncertainties on the reconstructed fields, but that is beyond the scope of this work and we defer it to future work.}
\rsp{In this section, we begin by outlining our forward model with a focus on the bias model. Next, we use this forward model to set up the likelihood function for reconstruction and modify it to account for the additional noise and foreground wedge present in the observed data. Lastly, we also outline annealing schemes that we develop to assist convergence of our algorithm and hence improve our reconstructions.}
\subsection{Forward Model}
\label{sec:biasmodel}
To use the framework of refs.~\cite{Seljak17,Modi18}, we require a forward model [$\mathcal{F}(\mathbf{s})$] to connect the observed data $(\mathbf{d})$ with the Gaussian initial conditions (ICs; $\mathbf{s}$) in a differentiable manner. This forward model is typically composed of two parts: non-linear dynamics to evolve the dark-matter field from the Gaussian ICs to Eulerian space, and a mapping from the underlying matter field to the observed tracers, \rsp{i.e.\ a bias model}.
We try two different models to shift the particles from Lagrangian space to their Eulerian space: a simple Zeldovich displacement \cite{Zel70} and full N-body evolution \cite{FengChuEtAl16}, albeit at relatively low resolution.
For the mapping from the matter field to the tracers we use a Lagrangian bias model (as for example in refs.~\cite{Matsubara08a,Matsubara08b,Carlson13,White14,Vlah16,Modi16,Schmittfull19}) including terms up to quadratic order.
At this order, the Lagrangian fields are $\delta_L(\mathbf{q})$, $\delta_L^2(\mathbf{q})$ and the scalar shear field $s^2_L(\mathbf{q}) \equiv \sum_{ij}s_{ij}^2(\mathbf{q})$, where $s_{ij}(\mathbf{q}) = (\partial_i \partial_j\partial^{-2} - [1/3]\delta_{ij}^{D})\,\delta_L(\mathbf{q})$ -- we discuss derivative bias below.
We use these fields from the ICs of the simulation itself (\rsp{as described in Section \ref{sec:sims}}),
but evolved to $z=0$ using linear theory (due to this evolution, care must be taken in interpreting our bias parameters).
Since $\delta_L^2(\mathbf{q})$ and $s^2_L(\mathbf{q})$ are correlated, we define a new field $g_L^2(\mathbf{q}) = \delta_L^2(\mathbf{q}) - s^2_L(\mathbf{q})$, which does not correlate with $\delta_L^2(\mathbf{q})$ on large scales, and use it in place of the shear field. In addition, we subtract the zero-lag terms to make these fields have zero mean.
\rsp{Then, to generate our model field as a combination of these Lagrangian fields,
we use the approach developed by ref.~\cite{Schmittfull19}, which is itself an implementation
of the ideas of ref.~\cite{Matsubara08b} (e.g.~Eq.~8) and its extensions \cite{Vlah16}.
Specifically we match the model with the observations at the level of the field instead of only matching the two-point functions.
We use the $512^3$ particle mesh for these fields, with particle/mesh spacing $2\,h^{-1}$Mpc for our box of $1024\,h^{-1}{\rm Mpc}$ (and no further smoothing).
Each particle is `shifted' to its Eulerian position, $\mathbf{x}$, with the non-linear dynamics of choice and then binned onto the grid with cloud-in-cell (CIC) interpolation and weight
\begin{equation}
{\rm weight} = 1 + b_1\delta_L(\mathbf{q}) + b_2 (\delta^2_L(\mathbf{q}) - \langle \delta^2_L(\mathbf{q})\rangle) + b_g \left(g_L(\mathbf{q}) - \left\langle g_L(\mathbf{q}) \right\rangle\right)
\end{equation}
Note the contributions of $\delta_L$, $\delta_L^2$ and $g_L^2$ assigned to each particle are based on its initial location ($\mathbf{q}$).
This procedure is equivalent to building separate fields with each particle weighted by $1$, $\delta_L$, $\delta_L^2$ and $g_L$ and then taking the linear combination of those fields after shifting, which is how the model was actually implemented.
Thus our modeled tracer field is:
\begin{equation}
\delta_{\rm HI}^b(\mathbf{x}) = \delta_{[1]}(\mathbf{x}) + b_1\delta_{[\delta_L]}(\mathbf{x}) + b_2\delta_{[\delta^2_L]}(\mathbf{x}) + b_g \delta_{[g_L]}(\mathbf{x}) \qquad .
\label{eq:biasmodel1}
\end{equation}
where $\delta_{[W]}(\mathbf{x})$ refers to the field generated by shifting the particles weighted with `$W$' field.
}
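The weighted-shift construction of Eq.~\ref{eq:biasmodel1} can be sketched as follows. This is our own illustrative implementation, not the paper's code: we use nearest-grid-point deposit instead of CIC to keep the example short, and all function names are made up.

```python
import numpy as np

def paint_ngp(pos, weights, nmesh, boxsize):
    """Deposit weighted particles on a periodic mesh with nearest-grid-point
    assignment (the paper uses CIC; NGP keeps this sketch short)."""
    idx = np.floor(pos / boxsize * nmesh).astype(int) % nmesh
    mesh = np.zeros((nmesh,) * 3)
    np.add.at(mesh, (idx[:, 0], idx[:, 1], idx[:, 2]), weights)
    return mesh

def biased_field(pos_eul, d1, d2, g2, b1, b2, bg, nmesh, boxsize):
    """Eq. (biasmodel1): paint each Lagrangian weight field at the shifted
    Eulerian positions, then take the linear combination afterwards."""
    weight_fields = (np.ones(len(pos_eul)), d1, d2, g2)
    fields = [paint_ngp(pos_eul, w, nmesh, boxsize) for w in weight_fields]
    coeff = [1.0, b1, b2, bg]
    return sum(c * f for c, f in zip(coeff, fields))
```

This makes explicit the equivalence noted above: the combination can be taken after shifting because each weight is carried from the particle's Lagrangian position $\mathbf{q}$.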
To fit for the bias parameters, we minimize the mean square model error between the data and the model fields which is equivalent to minimizing the error power spectrum of the residuals, $\mathbf{r}(k) = \delta_{\rm HI}^b(\mathbf{k}) - \delta_{\rm HI}(\mathbf{k})$, in Fourier space:
\begin{equation}
P_{\rm err}(k) = \frac{1}{N_{\rm modes}(k)}\sum_{\mathbf{k}, |\mathbf{k}|\sim k}\left|\delta_{\rm HI}^b(\mathbf{k}) - \delta_{\rm HI}(\mathbf{k})\right|^2
\label{eq:error}
\end{equation}
where the sum is over half of the $\mathbf{k}$ plane since $\delta^\star(\mathbf{k})=\delta(-\mathbf{k})$ for the Fourier transform of a real field, and the `data' correspond to our `clean' field with no noise at this stage.
In principle the bias parameters can be made scale dependent, $b(k)$, and treated as transfer functions \cite{Schmittfull19}. To get these transfer functions, we simply minimize Eq.~\ref{eq:error} for every $k$-bin independently. However we will find that the best-fit parameters are scale independent to a very good degree (see below). Thus to minimize the number of fitted parameters we use scalar bias parameters and fit for them by minimizing the error power spectrum only on large scales, $k < 0.3 \,h\,{\rm Mpc}^{-1}$. We will show below that the fit is quite insensitive to this $k$-range (chosen reasonably) used for fitting the bias parameters.
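Because the model field of Eq.~\ref{eq:biasmodel1} is linear in the bias parameters, minimizing Eq.~\ref{eq:error} over a $k$-range reduces to ordinary least squares on the complex Fourier modes. A sketch of such a fit (our own, with illustrative names):

```python
import numpy as np

def fit_bias(delta_data_k, comp_k, kmag, kmax=0.3):
    """Least-squares fit of (b1, b2, bg) minimizing |delta_HI^b - delta_HI|^2
    over modes with k < kmax.  `comp_k` holds the four shifted component
    fields (delta_[1], delta_[dL], delta_[dL^2], delta_[gL]) in k-space.
    Real and imaginary parts are stacked so standard lstsq applies."""
    sel = kmag < kmax
    y = (delta_data_k - comp_k[0])[sel]            # subtract the unit-weight term
    A = np.stack([c[sel] for c in comp_k[1:]], axis=-1)
    Ar = np.concatenate([A.real, A.imag])
    yr = np.concatenate([y.real, y.imag])
    coef, *_ = np.linalg.lstsq(Ar, yr, rcond=None)
    return coef                                    # (b1, b2, bg)
```

Fitting each $k$-bin separately with the same machinery yields the transfer-function version, $b(k)$.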
Traditionally, bias parameters are defined such that the bias model field matches the observations in real space. However the observations of 21-cm surveys are done in redshift space. To model these observations, we `shift' the Lagrangian field directly into redshift space and minimize the error power spectrum (Eq.~\ref{eq:error}) directly in redshift space instead of real space.
Finally, we shall assume throughout that the cosmological parameters will be well known, and in fact use the values assumed in the simulation. In principle one could iterate over cosmological parameters, but the 21-cm data themselves (and external data) will provide us with extremely tight constraints on cosmological parameters even before reconstruction \cite{CVDE-21cm}.
\subsection{Reconstruction}
\label{sec:recon2}
Once the bias parameters are fixed the procedure above provides a differentiable `forward model', $\mathcal{F}(\mathbf{s}) (= \delta_{\rm HI}^b(\mathbf{x}))$, from the Gaussian initial conditions ($\mathbf{s}(\mathbf{k})$) to the observations, $\mathbf{d}(\mathbf{k}) = \delta_{\rm HI}(\mathbf{k})$. To reconstruct the initial conditions we need a likelihood model for the data. Since the error power spectrum \rsp{was minimized to fit for the model bias parameters, it measures the remaining} disagreement between the observations and our model and hence provides a natural likelihood function. \rsp{When using this form of likelihood, we have made the assumption that the residuals in Fourier space between our model and data are drawn from a diagonal Gaussian with variance given by the error power spectrum. The diagonal assumption is valid due to the translational invariance of both the data and the model. The Gaussian assumption is motivated on large scales by the fact that the dynamics are linear, and on small scales by the central limit theorem when averaging over the large number of modes on these scales. Moreover, while this likelihood might not be completely accurate or optimal on all scales, it does provide a well-motivated and simple loss function, minimizing which should reconstruct the data. This is sufficient for our purpose here, where we are only interested in a point estimate of the reconstructed field.} Thus we can write down the negative log-likelihood:
\begin{equation}
\mathcal{L} = \frac{1}{2}\chi^2 = \sum_k \frac{1}{N_{\rm modes}(k)}\sum_{\mathbf{k}, |\mathbf{k}|\sim k} \frac{|\delta_{\rm HI}^b(\mathbf{k}) - \delta_{\rm HI}(\mathbf{k})|^2}{P_{\rm err}(k)}
\label{eq:likelihood}
\end{equation}
where the sum is over half of the $\mathbf{k}$ plane since $\delta^\star(\mathbf{k})=\delta(-\mathbf{k})$ for the Fourier transform of a real field.
This is combined with the prior over the initial modes, which are assumed to be Gaussian and uncorrelated in Fourier space. Hence the negative log-likelihood of the Gaussian prior can be combined with the negative log-likelihood of the data to get the negative log-posterior
\begin{equation}
\mathcal{P} = \sum_k \frac{1}{N_{\rm modes}(k)}\sum_{\mathbf{k}, |\mathbf{k}|\sim k} \Big( \frac{|\delta_{\rm HI}^b(\mathbf{k}) - \delta_{\rm HI}(\mathbf{k})|^2}{P_{\rm err}(k)} + \frac{|\mathbf{s}(\mathbf{k})|^2}{P_{\rm s}(k)} \Big)
\label{eq:posterior}
\end{equation}
where $P_{\rm s}$ is the initial prior power spectrum. To reconstruct the initial modes $\mathbf{s}$, we minimize this posterior with respect to them using L-BFGS\footnote{https://en.wikipedia.org/wiki/Limited-memory\_BFGS} \cite{nocedal06}, as in refs.~\cite{Seljak17,Modi18}, to get a maximum-a-posteriori (MAP) estimate.
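A toy version of this MAP optimization can be written with SciPy's L-BFGS implementation. The sketch below is ours and works on a real-valued, element-wise forward model rather than the full field-level problem; the function names and the toy model are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

def map_reconstruct(d, forward, grad_forward, p_err, p_s, s0):
    """MAP estimate of the initial modes s by minimizing
    chi^2/2 + Gaussian prior (cf. Eq. posterior) with L-BFGS-B."""
    def negpost(s):
        r = forward(s) - d
        return 0.5 * np.sum(r**2 / p_err) + 0.5 * np.sum(s**2 / p_s)
    def grad(s):
        r = forward(s) - d
        # chain rule: J^T (r / P_err) for the likelihood, s / P_s for the prior
        return grad_forward(s).T @ (r / p_err) + s / p_s
    res = minimize(negpost, s0, jac=grad, method="L-BFGS-B")
    return res.x
```

With a mildly non-linear toy forward model, e.g.\ $F(s) = s + 0.1\,s^2$ applied element-wise (diagonal Jacobian), low noise recovers the input modes, mirroring the field-level case.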
\rsp{We note that while, in principle, one would measure the (modeling) error power spectrum as an average of different simulations, due to the computational requirements inherent in simulating the HI field we are forced to use the same single simulation to fit for the bias parameters and measure error spectra, and then use it as mock data for reconstruction.
This could potentially lead to overfitting, improving the reconstruction by underestimating the error spectra and ignoring cosmic variance.
To check this, we estimate the error power spectra for modeling the halo mass field on a set of (cheaper) simulations with smaller boxes and poorer resolution, where the problem of cosmic variance should, if anything, be worse.
We find that, using the bias parameters fit for one simulation and estimating the error spectra on other simulations, the error power spectrum varies. However its ratio with the error power spectrum of the `fit' simulation is not consistently greater than 1, as one would have expected in the case of overfitting, but instead has a distribution around 1 on all scales of interest.
Hence, while one would need to quantify the distribution of this error spectrum to get uncertainties on the reconstructed field, we do not find any evidence that using the same simulation as mock data and to estimate the error spectra leads to any overfitting or an artificially good reconstruction.}
\subsection{Noise}
\label{sec:noise}
So far we have ignored the presence of noise and the foreground wedge in our data. While shot noise is included automatically if we use the HI realization in the simulations, we must handle foregrounds explicitly. Due to the foreground wedge, $w$, we lose all the information in the modes below a certain cut-off $k_\parallel/k_\perp$, as shown in Fig.~\ref{fig:knoise}. To take this into account in our reconstruction, we simply drop these modes from our likelihood term.
In addition to the wedge, 21-cm surveys also suffer from thermal noise that dominates on small scales and has angular dependence with respect to the line of sight.
\rsp{As outlined previously in Eq.~\ref{eq:addnoise}, this is incorporated by drawing a Gaussian noise realization, $n(\mathbf{k})$, with zero mean and noise power spectrum, $P_{\rm th}(k,\mu)$, that is then added to our simulated data ($\delta_{\rm HI}$) to generate our mock data $\delta_{\rm HI}^n$.}
\rsp{Including both effects, our final posterior is:}
\begin{equation}
\mathcal{P}_w = \sum_k \frac{1}{N_{\rm modes}(k)} \left( \sum_{\substack{\mathbf{k}, |\mathbf{k}|\sim k, \\ {\mathbf{k} \not\in w}}} \frac{|\delta_{\rm HI}^b(\mathbf{k}) - \delta_{\rm HI}^n(\mathbf{k})|^2}{P_{\rm err}(k,\mu)} + \sum_{\mathbf{k}, |\mathbf{k}|\sim k} \frac{|\mathbf{s}(\mathbf{k})|^2}{P_{\rm s}(k)} \right)
\end{equation}
\rsp{The error power spectrum, $P_{\rm err}$, is now a combination of the modeling error (as before) and the noise power spectrum. This changes the amplitude of $P_{\rm err}$, especially on small scales, and also introduces an angular dependence.} We have indicated this by the additional $\mu$ dependence in $P_{\rm err}$ in the likelihood term of $\mathcal{P}_w$. Note the data automatically include shot-noise, since we have a single realization of the halo field in the simulation.
\subsection{Annealing}
\label{sec:annealing}
Reconstructing the initial modes by minimizing Eq.~\ref{eq:posterior} is an optimization problem in a high-dimensional space, with both the number of underlying features (initial modes) and the number of data points (grid cells) being in the millions. Despite using gradient and approximate Hessian information, it is a hard problem to solve. Since we are aware of the underlying physics driving our model, as well as its performance, we use our domain knowledge to assist the convergence of the optimizer by modifying the loss function over iterations rather than simply brute-forcing the optimization with the vanilla loss function. A more detailed discussion of these schemes is provided in refs.~\cite{Seljak17, Feng18, Modi18}. Here we briefly summarize the two annealing schemes that we use to improve our performance:
\begin{itemize}
\item Residual smoothing: Both the dynamics and the bias model are more linear on large scales than on smaller scales. Hence the posterior surface is more convex on these scales and convergence is easier. However, since the number of modes scales as $k^3$, the large-scale modes are a small fraction of the total and are harder for the optimizer to reconstruct in practice. To mitigate this we smooth the residual term of the loss function on small scales with a Gaussian kernel. Thus on these scales, the prior pulls the small-scale power down to zero and we force the optimizer to get the large scales correct first.
\item Upsampling: To minimize the cost of reconstruction, we begin our optimization on a low-resolution grid and reconstruct all the modes following the residual smoothing. Upon convergence, we upsample the converged initial field to a higher-resolution grid and paint our HI data on this grid as well. Since the higher resolution has information down to smaller scales, this allows us to leverage these scales to improve our reconstruction. Further, since the largest scales have already converged at the lower resolution, they remain mostly unchanged and we do not need to repeat the residual smoothing on all the scales at this higher resolution.
\end{itemize}
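The residual-smoothing scheme above can be sketched as a Gaussian kernel applied to the $k$-space residual in the likelihood term, with the smoothing scale annealed toward zero over the optimization. The function name and the annealing schedule shown are illustrative, not from the text.

```python
import numpy as np

def smoothed_chi2(resid_k, kmag, p_err, r_smooth):
    """Residual-smoothing annealing: damp the k-space residual with a
    Gaussian kernel exp(-k^2 r^2 / 2), so that for large r_smooth only
    large scales contribute to the likelihood and the prior pulls the
    unconstrained small-scale power to zero."""
    kern = np.exp(-0.5 * (kmag * r_smooth)**2)
    return np.sum(np.abs(kern * resid_k)**2 / p_err)

# schematic annealing loop: re-run the optimizer as r_smooth shrinks
# for r_smooth in [4.0, 2.0, 1.0, 0.0]:
#     s_map = optimize(loss=lambda s: smoothed_chi2(...) + prior(s), x0=s_map)
```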
\section{Results}
\label{sec:results}
We present the results for our reconstruction in this section. Our primary metrics to gauge the performance of our model, as well as of the reconstruction, are the cross-correlation function, $r_{cc}(k)$, and transfer function, $T_f(k)$, defined as
\begin{equation}
r_{cc}(k) = \frac{P_{XY}(k)}{\sqrt{P_{X}(k) P_{Y}(k)}} \qquad , \qquad
T_f(k) = \sqrt{\frac{P_{Y}(k)}{P_{X}(k)}} \quad ,
\label{eq:rt-def}
\end{equation}
and the error power spectrum, $P_{\rm err}$, defined in Eq.~\ref{eq:error}.
These metrics will always be defined between either the model or the reconstructed fields as $Y$ and the corresponding true field as $X$ unless explicitly specified otherwise.
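These binned statistics can be computed directly from two meshes. The sketch below uses NumPy FFTs with simple spherical $k$-bins; the binning details and function name are illustrative.

```python
import numpy as np

def cross_and_transfer(x, y, boxsize, nbins=10):
    """Binned r_cc(k) and T_f(k) (Eq. rt-def) between a 'true' mesh x
    and a model/reconstructed mesh y."""
    nmesh = x.shape[0]
    xk, yk = np.fft.rfftn(x), np.fft.rfftn(y)
    kf = 2 * np.pi / boxsize
    k1 = np.fft.fftfreq(nmesh, d=1.0 / nmesh) * kf
    kz = np.fft.rfftfreq(nmesh, d=1.0 / nmesh) * kf
    kx, ky, kzz = np.meshgrid(k1, k1, kz, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kzz**2).ravel()
    pxx = (np.abs(xk)**2).ravel()
    pyy = (np.abs(yk)**2).ravel()
    pxy = (xk * np.conj(yk)).real.ravel()
    edges = np.linspace(kf, kmag.max(), nbins + 1)
    idx = np.digitize(kmag, edges) - 1          # DC mode falls outside the bins
    def binned(p):
        return np.array([p[idx == i].mean() for i in range(nbins)])
    Px, Py, Pxy = binned(pxx), binned(pyy), binned(pxy)
    return Pxy / np.sqrt(Px * Py), np.sqrt(Py / Px)
```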
\newcommand{{\rm S}}{{\rm S}}
\newcommand{{\rm N}}{{\rm N}}
To gain some intuition for these functions it may be helpful to recall the results for the linear case. For Gaussian signal, $\mathbf{s}$, with covariance ${\rm S}$, and \rsp{Gaussian} noise, $\mathbf{n}$, with covariance ${\rm N}$ and a data vector $\mathbf{d}=\mathbf{s}+\mathbf{n}$, the posterior is given by $P(s|d) \propto P(d|s) P(s)$ which is the product of two Gaussians. The MAP solution is given by the well-known Wiener filter\footnote{https://en.wikipedia.org/wiki/Wiener\_filter} \rsp{\cite{wiener64}}
\begin{equation}
\tilde{s}= {\rm S} \left( {\rm S} + {\rm N} \right)^{-1} \mathbf{d} \equiv W\mathbf{d} \qquad .
\end{equation}
For stationary problems both ${\rm S}$ and ${\rm N}$ are diagonal in Fourier space. In this simple case $r_{cc}(k)=T_f(k)=W^{1/2}$. In the limit of very small noise, $P_N/P_S\ll 1$, the measurements are a faithful representation of the true field and $W^{1/2}=r_{cc}=T_f\simeq 1$. In the limit of very large noise, $P_N/P_S\gg 1$, the filter becomes prior dominated and the most-likely value of $\mathbf{s}$ is zero. In this limit $W^{1/2}=r_{cc}=T_f\rightarrow 0$.
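This linear-case relation is easy to verify with a Monte Carlo over a single Fourier mode: the sketch below (ours, with illustrative values of $S$ and $N$) checks that the Wiener-filtered estimate obeys $r_{cc}=T_f=W^{1/2}$ for Gaussian signal and noise.

```python
import numpy as np

# Many independent realizations of one mode with signal power S, noise power N.
rng = np.random.default_rng(0)
S, N, n = 4.0, 1.0, 200_000
s = rng.normal(0, np.sqrt(S), n)
d = s + rng.normal(0, np.sqrt(N), n)

W = S / (S + N)
shat = W * d                      # Wiener-filtered estimate

# sample estimates of r_cc and T_f between shat and the true s
r = np.mean(shat * s) / np.sqrt(np.mean(shat**2) * np.mean(s**2))
T = np.sqrt(np.mean(shat**2) / np.mean(s**2))
```

Both sample statistics land on $\sqrt{W}=\sqrt{0.8}\approx 0.894$ up to Monte Carlo noise.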
In our case, there is a non-linear transformation at the heart of the model, i.e.\ $\mathbf{d} = \mathcal{F}(\mathbf{s}) + \mathbf{n}$. Therefore the Wiener filter is no longer the solution, but much of the same intuition applies. In particular $r_{cc}$ measures how faithfully the reconstructed map describes
the input map, up to rescalings of the output map amplitude. The transfer function, on the other hand, tells us about the amplitude of the output map as a function of scale, with $r\,T_f = P_{XY}/P_X$.
\subsection{Bias model}
\begin{table}
\centering
\begin{tabular}{c|cccc||cccc}
& \multicolumn{4}{c||}{Model A} & \multicolumn{4}{c}{Model B} \\
$z$ & $b_1$ & $b_2$ & $b_g$ & $T_f(k)$ & $b_1$ & $b_2$ & $b_g$ & $T_f(k)$ \\ \hline
2 & 0.528 & 0.006 & -0.023 & $0.98-0.197\,k^2$ & 0.55 & -0.025 & -0.012 & $0.97-0.222\,k^2$ \\
4 & 0.434 & 0.046 & -0.011 & $0.993-0.110\,k^2$ & 0.446 & 0.102 & -0.012 & $0.99-0.151\,k^2$ \\
6 & 0.399 & 0.094 & -0.009 & $0.995-0.112\,k^2$ & 0.378 & 0.16 & -0.011 & $0.984-0.187\,k^2$
\end{tabular}
\caption{The best-fit bias parameters (and transfer function) to the HiddenValley simulation HI fields for models A and B of ref.~\cite{HiddenValley19} at $z=2$, 4 and 6. The bias parameters are defined on a $2\,h^{-1}$Mpc grid (see text).}
\label{tab:biasparams}
\end{table}
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/bsum_L1024_N0256_3333_ik50.pdf}}
\caption{Comparison of different bias models through the real-space cross-correlation (top left) and transfer function (top right) at $z=2$. Blue and orange lines correspond to the Lagrangian bias model with Zeldovich and N-body dynamics respectively. The green line corresponds to an Eulerian bias model through quadratic order, while the dashed red line is a simple linear bias model in Eulerian space. In both cases the Eulerian fields are smoothed with a Gaussian kernel of $3\,h^{-1}{\rm Mpc}$. The vertical dashed-black line corresponds to the $k_{\rm max}$ up to which the error is minimized. (Bottom left) Comparison of different bias models at the level of the error power spectrum. The dashed black line is the `Poisson' shot noise for the HI-mass-weighted field. (Bottom right) The scale dependence of the bias parameters for the Lagrangian bias model with Zeldovich dynamics. The dashed lines correspond to our default assumption: the scale-independent fits to $k<0.3\,h\,{\rm Mpc}^{-1}$ data.
}
\label{fig:biasperf}
\end{figure}
We begin by evaluating the performance of our bias model using the known initial conditions in the simulation and the aforementioned three metrics, and compare the error power spectrum with the HI-mass-weighted shot noise. The statistics for the best fit to the HI field at $z=2$ are shown in Fig.~\ref{fig:biasperf}. Though not shown in this figure, the results for $z=4$ and 6 are very similar. Fig.~\ref{fig:biasperf} compares three different bias models: two Lagrangian bias models (as described in \S\ref{sec:biasmodel}), with Zeldovich dynamics and PM dynamics\footnote{Here, our PM dynamics corresponds to a FastPM simulation on a $512^3$ grid with 5 time steps and force resolution $B=1$, i.e.\ a $512^3$ mesh for the force computation. For comparison, the HI data was generated on a $10240^3$ grid with 40 steps to $z=2$ and force resolution $B=2$.} respectively, as well as an Eulerian bias model where the three bias parameters are defined with respect to the fields generated from the Eulerian matter field smoothed at $3\,h^{-1}{\rm Mpc}$ with a Gaussian smoothing kernel. In addition, we show the simple case of linear Eulerian bias ($b_1^E$) to contrast with our other biasing schemes.
Firstly, comparing the Eulerian and Lagrangian bias models, we find that Lagrangian bias outperforms Eulerian bias at the level of cross-correlation and transfer function on all scales. Lagrangian bias models also lead to quite scale independent error power spectra and much lower noise than the Poisson shot-noise level, while this is not the case for Eulerian models. This implies that we can do analysis with higher fidelity than one would expect from the simplest Poisson prediction and are able to access information to smaller scales. This is in general agreement with the findings of ref.~\cite{Schmittfull19} for halos at different number densities and weightings. Note that while the linear-Eulerian model is a subset of the quadratic-Eulerian model, it performs worse at the level of cross-correlation. This is because the metric for fitting bias parameters is minimizing the noise power spectrum up to $k\simeq 0.3 \,h\,{\rm Mpc}^{-1}$ where the quadratic Eulerian model is better than the linear bias model.
Amongst Lagrangian bias models, we find that for our observed HI data field and resolution the Zeldovich displacements outperform the PM dynamics in terms of both the cross-correlation and the transfer function. One can increase the accuracy of the PM displacements by increasing the number of time steps or using force resolution $B=2$ for the PM simulations, and we find that this improves the model slightly over ZA but makes the modeling much more expensive. Moreover, the difference between the performance of the two dynamics also depends on the resolution of the meshes used in the simulations. Currently, we are modeling the data from a much higher resolution simulation ($20480^3$ force mesh) with a rather `low' resolution ($256^3$ or $512^3$ mesh) PM or ZA dynamics, so it is not obvious how the two should compare, but we do find the performance of PM improving as we increase the resolution of the model, as might be expected.
Another way to improve the bias model is to add a term corresponding to $b_\nabla^2$ which comes with $k^2$ dependence motivated by peaks bias as well as effective field theory counter-terms. We find that adding such a term does not change the two models at the level of cross-correlation, but significantly improves the PM model over ZA at the level of the transfer function.
Going one step further, one can also make the bias parameters scale dependent and use them as transfer functions. To assess the scale dependence of the bias parameters, in the lower-right panel of Fig.~\ref{fig:biasperf} we also show the scale dependent bias parameters for the Zeldovich bias model which are fit for by minimizing the error power spectrum independently for every $k$-bin. The best-fit bias parameters still do not have any significant scale dependence up to intermediate scales. As mentioned in previous section, this explains why we find that the fit of bias parameters is quite insensitive to the $k-$range used for fitting the bias model, as long as we do not fit up to highly non-linear scales.
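Since the model is linear in the bias parameters, minimizing the error power spectrum reduces to a linear least-squares problem over Fourier modes. The sketch below is a toy illustration of this fit (function and variable names are ours for illustration, not our pipeline): restricting the sum to $|k|<k_{\rm max}$ mirrors the fitting range used above, and solving the same normal equations independently in each $k$-bin yields scale-dependent parameters of the kind shown in Fig.~\ref{fig:biasperf}.

```python
import numpy as np

def fit_bias(data_k, components_k, kmag, kmax=0.3):
    """Least-squares fit of scale-independent bias parameters.

    Minimizing |delta_d - sum_i b_i O_i|^2 over Fourier modes with
    |k| < kmax is linear in the b_i, so the fit reduces to the normal
    equations  A b = c  with  A_ij = sum_k Re[O_i O_j*]  and
    c_i = sum_k Re[O_i delta_d*].
    """
    mask = kmag < kmax
    O = np.array([c[mask] for c in components_k])   # (n_comp, n_modes)
    d = data_k[mask]
    A = np.real(O @ O.conj().T)
    c = np.real(O @ d.conj())
    return np.linalg.solve(A, c)

# Toy check on a 1D grid: data built from known coefficients is recovered.
rng = np.random.default_rng(0)
n = 256
k = np.fft.fftfreq(n) * 2 * np.pi
O1 = np.fft.fft(rng.standard_normal(n))
O2 = np.fft.fft(rng.standard_normal(n))
data = 1.5 * O1 - 0.7 * O2
b = fit_bias(data, [O1, O2], np.abs(k), kmax=np.pi)
```

In practice the component fields $O_i$ would be the shifted Lagrangian bias operators painted to the grid; the linear-algebra structure of the fit is unchanged.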
Given the differences in performance of ZA and PM dynamics, how scale-dependent the biases are on the small scales that begin to get increasingly noise dominated in the data and the cost of the ZA vs.~PM forward models, we find the performance of a scale-independent bias model with ZA dynamics to be sufficient for reconstruction. Henceforth, we present results for the reconstruction with this model. However we suspect that it would be worth studying the performance of different bias models in more detail. Our comparison is also likely to change for different number densities and weightings (such as position, mass, HI mass etc.) of the biased tracers. We leave such a detailed study to future work.
\subsection{Reconstruction}
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/noise2d_L1024_mu3_up.pdf}}
\caption{Ratio of signal to total signal and noise in our HI data at different redshifts as a function of scale and angle ($\mu$ bins) for the three different thermal noise cases considered for reconstruction. Here we have neglected any loss of modes due to foregrounds.
}
\label{fig:snr}
\end{figure}
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/map_L1024_2000-pess-Oranges-up.pdf}}
\caption{Different projections for the true HI field, data corrupted with thermal noise and a foreground wedge and our reconstructed HI field at redshift $z=4$ for our fiducial thermal noise corresponding to 5 years of observing with a half-filled array but for a pessimistic wedge ($k_\parallel = 0.03 \,h\,{\rm Mpc}^{-1},\, \theta_w=38^\circ$). Different rows show $20 \,h^{-1}{\rm Mpc}$ thick slices projected along the axis specified by the $y$-axis label. The horizontal and vertical image dimensions are $200 \,h^{-1}{\rm Mpc}$, along the directions indicated by the arrows at the top left of every row. The line of sight of the data is along the $Z$ axis. We use a log-scale color scheme and the color scale is the same for all the panels in the same row.
}
\label{fig:image}
\end{figure}
\subsubsection{Configurations}
In this section, we show the results for the reconstruction of the initial and HI field. We do reconstruction on our $1024 \,h^{-1}{\rm Mpc}$ box at redshifts $z=2$, 4 and $6$. The line of sight is always assumed to be along the $z$-axis. Unless otherwise specified, we will show the reconstruction for our fiducial setup, which is reconstruction in redshift space, with thermal noise corresponding to $5$ years with a half-filled array, $k_\parallel^{\rm min} = 0.03 \,h\,{\rm Mpc}^{-1}$ and an optimistic foreground wedge ($\theta_w=5^\circ$, $15^\circ$ and $26^\circ$ at $z=2$, 4 and $6$). To gauge the impact of our assumptions regarding this fiducial setup, we will show comparisons with other setups which include a pessimistic foreground wedge (dashed) corresponding to $\theta_w=15^\circ$, $38^\circ$ and \rsp{$55^\circ$} at $z=2$, 4 and 6 respectively, and different thermal noise configurations corresponding to a fully-filled array (optimistic case) and a quarter-filled array (pessimistic case).
To gain some intuition about how these noise configurations compare with the signal, we also show the ratio of signal to signal plus noise in Fig.~\ref{fig:snr} as a function of scale and angle for different redshifts. Since the impact of the foreground wedge is binary with respect to scale and angle, we have ignored the loss of modes due to it in plotting Fig.~\ref{fig:snr}.
\rsp{However, to emphasize how severe the loss of information due to the wedge is:
at redshift $z=2$, 4 and 6 we completely lose $21(7)\%$, $60(21)\%$ and $88(40)\%$ of the modes due to the pessimistic (optimistic) foreground wedge. When we include the thermal noise, even in the fiducial case, a further $30(43)\%$, $24(51)\%$ and $9(37)\%$ of the modes outside the wedge
become noise dominated. Thus we preface our results by re-iterating that reconstructing the large-scale modes is a non-trivial problem in these situations.}
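The quoted fractions depend on the survey geometry, but the counting itself is straightforward. A minimal sketch, assuming the wedge removes modes with $k_\parallel < k_\perp\tan\theta_w$ together with all $k_\parallel < k_\parallel^{\rm min}$ (this parametrization, and the illustrative grid and box size, are our assumptions, not output of the paper's code):

```python
import numpy as np

def lost_mode_fraction(theta_w_deg, kpar_min=0.03, n=64, boxsize=1024.0):
    """Fraction of grid Fourier modes lost to the foreground wedge.

    theta_w_deg : wedge angle in degrees
    kpar_min    : minimum usable line-of-sight wavenumber [h/Mpc]
    n, boxsize  : illustrative grid size and box length [Mpc/h]
    """
    kf = 2 * np.pi / boxsize                      # fundamental mode
    k1 = np.fft.fftfreq(n, d=1.0 / n) * kf        # integer multiples of kf
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")  # kz = line of sight
    kperp = np.sqrt(kx**2 + ky**2)
    kpar = np.abs(kz)
    lost = (kpar < kperp * np.tan(np.radians(theta_w_deg))) | (kpar < kpar_min)
    return lost.mean()
```

Larger wedge angles remove a strictly larger set of modes, which is the monotonic trend quoted above for the optimistic versus pessimistic configurations.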
\subsubsection{Annealing: Implementation}
To implement our annealing scheme we begin with a $256^3$ mesh and anneal by smoothing the loss-function at smoothing scales corresponding to $4$, 2, 1, 0.5 and $0$ cells. \rsp{For the given resolution, this roughly corresponds to $k\sim 0.2$, 0.4, 0.8 and $1.6\,h\,{\rm Mpc}^{-1}$. Thus even in the case of the largest smoothing, we still have enough small-scale modes outside the wedge to inform reconstruction.} We find that the statistics of interest stop changing roughly after $\sim$ 100 iterations, thus we do this many iterations for every step of annealing before declaring it `converged'. After converging on this mesh, we upsample our reconstructed initial field to a $512^3$ grid and repeat our reconstruction exercise starting from this point while comparing to data now painted on this higher resolution grid. At this stage, we do residual smoothing only corresponding to $1$ and 0 cells, 100 iterations each.
We have tried other annealing methods, \rsp{such as different smoothing scales and other upsampling schemes.
In addition, to study convergence, we have also let the optimizer run for longer, increasing the number of iterations.
We find that none of these choices have a significant impact, and while running more iterations can improve results marginally, our aforementioned implementation provides a good balance between computational cost and performance for this work.
Moreover, given the heuristic nature of our annealing scheme, it is not obvious whether we will converge to a unique solution. To study this, we run simulations with different initial conditions and find that the reconstructed fields are sufficiently well correlated, and hence effectively identical, on the scales of interest here. We establish both of these quantitatively in appendix \ref{app:validanneal}.}
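Schematically, the annealing loop can be illustrated on a toy problem in which the forward model is the identity (so the physics is trivial and only the coarse-to-fine structure of the schedule matters); the smoothing scales in cells mirror those described above, while the loss and optimizer are deliberately simplified assumptions:

```python
import numpy as np

def smooth(field, sigma_cells):
    """Gaussian smoothing in Fourier space (sigma measured in grid cells)."""
    n = field.size
    k = np.fft.fftfreq(n) * 2 * np.pi              # per-cell wavenumber
    kern = np.exp(-0.5 * (k * sigma_cells) ** 2)
    return np.fft.ifft(np.fft.fft(field) * kern).real

def annealed_fit(data, schedule=(4, 2, 1, 0.5, 0), n_iter=100, lr=0.3):
    """Toy annealing: minimize |G_s * (x - d)|^2 coarse to fine.

    For each smoothing scale s in the schedule, run gradient descent on the
    smoothed loss; its gradient is G_s^2 * (x - d), applied here as two
    smoothing passes on the residual.
    """
    x = np.zeros_like(data)
    for sigma in schedule:
        for _ in range(n_iter):
            resid = smooth(x - data, sigma)
            x -= lr * smooth(resid, sigma)
    return x
```

With a non-trivial forward model the inner loop would be replaced by the actual optimizer acting on the initial modes, but the schedule structure is the same.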
\subsubsection{Reconstructed field : Visual}
Before gauging our reconstruction quantitatively, we first inspect the impact of noise and reconstruction visually at the level of the field to develop some intuition. Fig.~\ref{fig:image} shows different projections of the true HI field, the data (corrupted with thermal noise in addition to the foreground wedge) and our reconstructed HI field for our fiducial thermal noise corresponding to 5 years of observing with a half-filled array but for a pessimistic wedge ($k_\parallel = 0.03 \,h\,{\rm Mpc}^{-1},\, \theta_w=38^\circ$) at $z=4$. The line-of-sight direction is $Z$. As expected from Fig.~\ref{fig:snr}, where the signal is close to zero on small scales, the data is heavily corrupted with small-scale noise, and the noise is higher when we project along a transverse direction than along the line of sight. In fact, visually, one can only faintly distinguish the biggest structure peaks in the corrupted data from the noise, and it is impressive how well we are able to reconstruct smaller structures in the HI field despite this. Due to the foreground wedge, the structures appear stretched in the transverse direction in the data. This can also be seen by comparing the first two rows with the last, where structures are more isotropic since the stretching is the same in both transverse directions. For the reconstructed field, we have visibly reduced this stretching by reconstructing modes in the wedge. Overall the reconstructed field is smoother than the input data, since we do not fully reconstruct the small-scale modes.
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/allcomparewd_L1024-hexup.pdf}}
\caption{We show the cross-correlation (left) and transfer function (right) of the reconstructed HI field at $z=2$, 4 and $6$ for three different thermal noise levels as well as two wedge configurations (optimistic and pessimistic; see text for more details). For comparison, we also show these quantities for the noisy and masked data which was used as the input for reconstruction with light colors.
}
\label{fig:allcompare}
\end{figure}
\subsubsection{Reconstructed field : Two point function}
Next, to quantitatively see how the foreground wedge and thermal noise combined affect our data, we estimate the cross-correlation and transfer function of this noisy input data with the true HI field. This is shown in Fig.~\ref{fig:allcompare} for $z=2$, $4$ and 6 as thin lines. In addition to our fiducial setup, we also consider other configurations for the wedge and thermal noise as outlined at the beginning of this section. This figure contrasts with Fig.~\ref{fig:snr}, since there we neglected the loss of modes due to the foreground wedge and only focused on thermal noise. For the combined noisy data, the transfer function with respect to the true HI field is zero on the largest scales ($k< 0.03\,h\,{\rm Mpc}^{-1}$) since these modes are completely lost to the foregrounds and we have no information on these scales. On intermediate scales, the cross correlation and transfer function increase but are still well below unity since some modes are still lost to the wedge in every $k-$bin, and as per our expectations, this loss is greater in the pessimistic wedge case. On the smallest scales, the transfer function exceeds one since these modes are dominated by thermal noise. However the cross-correlation on these scales drops rapidly since this noise is uncorrelated to the data and contains no information.
For comparison, we show as thick lines in Fig.~\ref{fig:allcompare} the cross-correlation and transfer function of the reconstructed HI field with the true HI field. This highlights how our reconstruction helps in recovering the information lost due to foregrounds and thermal noise. The gains in cross-correlation coefficient over noisy data are quite impressive, reaching $\sim 0.8$, 0.9, 0.96 for the optimistic wedge on the largest scales for $z=2$, 4 and $6$ respectively, where we have access to no modes in the data. The modes in this regime are constructed only out of mode coupling due to non-linear evolution. On intermediate scales, where the data have the most information, the cross correlation reaches $1$ for all redshifts while it drops again (to below $0.8$) on the smallest scales which are thermal noise dominated. A similar trend is observed in the transfer function, where we recover $\sim 40$, $50$ and $60\%$ of the largest scales lost completely to the foregrounds for $z=2$, $4$ and $6$ respectively. We recover more power as we move to smaller scales, with the transfer function reaching close to unity on intermediate scales $k\simeq 0.1\,h\,{\rm Mpc}^{-1}$ before starting to decrease at the smallest, noise dominated scales.
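For concreteness, the two metrics used throughout this section, the cross-correlation coefficient $r(k)=P_\times/\sqrt{P_{\rm rec}P_{\rm true}}$ and the transfer function $T_f(k)=\sqrt{P_{\rm rec}/P_{\rm true}}$ (cf.\ Eq.~\ref{eq:rt-def}), can be estimated from two gridded fields as in the following sketch (unit box and unnormalized power conventions, which cancel in both ratios):

```python
import numpy as np

def r_and_tf(field, truth, nbins=16):
    """Cross-correlation r(k) and transfer function T_f(k), binned in |k|."""
    fk, tk = np.fft.fftn(field), np.fft.fftn(truth)
    k1 = np.fft.fftfreq(field.shape[0])
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    bins = np.linspace(0, kmag.max(), nbins + 1)
    idx = np.digitize(kmag, bins) - 1

    def binned(power):
        return np.bincount(idx, weights=power.ravel(), minlength=nbins)[:nbins]

    p_x = binned((fk * tk.conj()).real)     # cross power
    p_f = binned(np.abs(fk) ** 2)           # reconstructed auto power
    p_t = binned(np.abs(tk) ** 2)           # true auto power
    r = p_x / np.sqrt(p_f * p_t)
    tf = np.sqrt(p_f / p_t)
    return r, tf
```

Uncorrelated noise lowers $r$ without necessarily lowering $T_f$, which is why the noisy data can have a transfer function above unity while its cross-correlation drops.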
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/sdmap_L1024_2000.pdf}}
\caption{The cross-correlation (left) and transfer function (right) at $z=4$ for the reconstructed initial matter, final matter and HI field with their corresponding true fields. We have assumed 5 years of observing with a half-filled array, $k_\parallel^{\rm min} = 0.03 \,h\,{\rm Mpc}^{-1}$ and $\theta_w=15^\circ$. The dashed lines are for reconstruction on the fiducial $256^3$ mesh while the solid lines are reconstruction after upsampling to a $512^3$ mesh, which leads to significant gains.
}
\label{fig:allfields}
\end{figure}
Reconstruction is slightly worse in the case of pessimistic wedge, with higher redshifts paying a higher penalty simply due to the larger difference in the two configurations. However the cross-correlation is still $\simeq 80\%$ at $z=6$ on the largest scales, even in the most pessimistic setup. While the foreground wedge affects reconstruction on all scales, thermal noise does not affect reconstruction on the large scales. On small scales, reconstruction is slightly worse with increasing noise, again penalizing higher redshifts more than lower redshifts.
With our procedure, along with reconstructing the observed data, we also reconstruct the initial and final matter field. It is instructive to see how close these are to the true fields, since they have different science applications. The initial (Lagrangian) field can be used to reconstruct Baryon Acoustic Oscillations (BAOs), while the final matter field across redshifts has applications in CMB and weak lensing science. Here we briefly look at the recovery of these fields. In Fig.~\ref{fig:allfields}, we show the cross-correlation and transfer function of the reconstructed initial matter, final matter and HI field for our fiducial setup at $z=4$. As for the HI data field, the cross correlation and transfer function increase as we go from large to intermediate scales, since the large scales are completely absent in the data due to the foreground wedge. Since information moves from larger to smaller scales during non-linear evolution, the cross correlation of the reconstructed final-matter and HI fields is much better on smaller scales than that of the initial field. While the transfer function for the initial and HI fields drops on small scales due to thermal noise, that of the final matter field increases over $1$. This is simply because our dynamic model for reconstruction is the Zeldovich approximation while the true data was generated by particle-mesh simulations.
In Fig.~\ref{fig:allfields} we also show how upsampling improves the performance of reconstruction. The dashed lines show the reconstruction on the fiducial 256$^3$ grid, without any upsampling, while the solid lines show the results when continuing reconstruction after upsampling the reconstructed field to a 512$^3$ mesh. Since the higher resolution allows us to push to smaller scales it increases the number of modes not dominated by thermal noise while also recovering some of the signal that was earlier lost due to grid-smoothing on these scales. We find large gains in our reconstruction for all three fields. While we have not pushed to even higher resolution due to CPU limitations, we suspect it would yield diminishing returns since the smaller scales are dominated by thermal noise.
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/kmin_L1024-hex.pdf}}
\caption{The cross-correlation (left) and transfer function (right) at $z=4$ with an optimistic wedge ($\theta_w=15^\circ$) and fiducial thermal noise (corresponding to 5 years of observing with a half-filled array) for different $k_\parallel^{\rm min}$ (in $\,h\,{\rm Mpc}^{-1}$), without upsampling. For comparison, we also show the case where we do not lose any modes to foregrounds (i.e. no wedge and $k_{\parallel}^{\rm min}=0$) but have the same fiducial thermal noise (dashed line). We have checked that as the thermal noise is reduced the cross correlation moves closer to 1 as expected.
}
\label{fig:kmin}
\end{figure}
\subsubsection{Impact of $k_\parallel^{\rm min}$}
While changing the thermal noise and wedge configurations, we have so far kept $k_\parallel^{\rm min}$ fixed at $k_\parallel^{\rm min} = 0.03 \,h\,{\rm Mpc}^{-1}$. In Fig.~\ref{fig:kmin}, we show how the reconstruction performs for different values of $k_\parallel^{\rm min}$, and compare it to the hypothetical case where we lose no modes to foregrounds. Again, we consider our fiducial setup at $z=4$ with an optimistic wedge ($\theta_w=15^\circ$) and thermal noise corresponding to 5 years of observing with a half-filled array, without the upsampling stage of annealing. Reconstruction on all scales is slightly worse than the case when we lose no modes to the foregrounds, but not by much. We find that for a given wedge, small scales are quite insensitive to the $k_\parallel^{\rm min}$ threshold. On large scales, reconstruction gets progressively worse towards smaller $k$ as we increase $k_\parallel^{\rm min}$. However, even for the most pessimistic case, $k_\parallel^{\rm min} = 0.05\,h\,{\rm Mpc}^{-1}$, the cross correlation is better than $0.8$ on the largest scales. Furthermore, the upsampling stage of annealing would improve the reconstruction on all scales.
\subsubsection{Real vs.~redshift space}
It is also instructive to compare reconstruction in real and redshift space, since redshift space gives us access to new signal (through the velocity field) but often at the cost of power in Fourier modes along the radial direction. However, for the 21-cm signal finger-of-god effects are subdominant and most of the redshift space signal can be modeled with perturbation theory \cite{VN18,HiddenValley19}. As a result, we find that doing reconstruction from the redshift space data improves our results over real space data. We show this in Fig.~\ref{fig:realrsd}, where we compare the statistics for reconstruction in real space (dashed) and redshift space (solid) for $z=2$, $4$ and $6$. We model the redshift space data by moving the Lagrangian fields to Eulerian space with Zeldovich dynamics, as before, and then using the Zeldovich velocity component along the line of sight. The velocity field information residing in the anisotropic clustering provides additional information which improves our performance in redshift space over real space. The gains are largest at $z=6$ and decrease with decreasing redshift. This is likely because we model only the linear dynamics while non-linear RSD, as well as the finger-of-god effects, increase at lower redshifts. Using higher order perturbation theory, or the particle mesh dynamics, should improve the performance at lower $z$.
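A minimal sketch of this redshift-space mapping, i.e.\ displacing the Lagrangian positions with the Zeldovich displacement and adding its line-of-sight component scaled by the linear growth rate $f$ (in the Zeldovich approximation the peculiar velocity is $v = aHf\,\psi$, so the redshift-space shift is $f(\psi\cdot\hat{n})\hat{n}$; array shapes and names are illustrative):

```python
import numpy as np

def redshift_space_positions(q, psi, f, los=np.array([0.0, 0.0, 1.0])):
    """Zeldovich positions in redshift space.

    q   : (N, 3) Lagrangian grid positions
    psi : (N, 3) Zeldovich displacements
    f   : linear growth rate
    los : unit vector along the line of sight
    """
    x = q + psi                                   # Eulerian (real space)
    s = x + f * (psi @ los)[:, None] * los        # add RSD shift along los
    return s
```

The anisotropy induced by the $f(\psi\cdot\hat{n})\hat{n}$ term is the extra signal that makes redshift-space reconstruction outperform real space here.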
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/realrsd_L1024-hex.pdf}}
\caption{We show the cross-correlation (left) and transfer function (right) of the reconstructed HI field at $z=2$, 4 and $6$ for the fiducial noise setup and optimistic wedge in real and redshift space. The additional information available in the redshift-space field enhances the recovery of the signal.
}
\label{fig:realrsd}
\end{figure}
\subsubsection{Angular cross-correlation}
Given that the line-of-sight and transverse modes are affected differently by the foreground wedge, thermal noise and redshift space distortions, we conclude this section by looking at the cross-correlation coefficient for the reconstructed data as a function of $\mu$. This is shown in Fig.~\ref{fig:compare2dmu} for our fiducial thermal noise model (after annealing). We recover the largest scales almost perfectly, even though we do not have any modes along the line of sight in the data. On the other hand, small-scale modes along the line of sight are significantly better reconstructed than the transverse modes. We find that the reconstruction of line-of-sight modes is less sensitive to the loss of modes in the foreground wedge than that of transverse modes. However, in all cases, the better the foregrounds are controlled and the smaller the wedge can be made, the better reconstruction proceeds.
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/rep2dmu_L1024_mu8_up.pdf}}
\caption{We show the cross-correlation of the reconstructed HI field with the corresponding true field as a function of the angle $\mu$ with the line of sight in different $k$ bins for fiducial thermal noise and two wedge configurations (with corresponding $\mu_{\rm wedge}$ shown as thin vertical lines) at $z=2$, 4 and $6$. All the modes are reconstructed well along the line of sight, while the large scales are reconstructed better than small scales in the transverse direction.
}
\label{fig:compare2dmu}
\end{figure}
\section{Implications}
\label{sec:implications}
Our ability to reconstruct long wavelength modes of the HI field has several implications, and we explore some of them here.
The first two subsections discuss cross-correlation opportunities afforded by reconstruction while the last describes the improvement in BAO distance measures.
The utility of large 21-cm intensity mapping arrays is highest in the high-redshift regime, where there is a lot of cosmic volume to be explored and which is the most difficult to access using other methods. While there are some fields that will cross-correlate straightforwardly with the 21-cm data, notably the Lyman-$\alpha$ forest and sparse galaxy and quasar samples which entail true three-dimensional correlations, any field that is projected in the radial direction occupies the region of Fourier space that is most contaminated by the foregrounds. This technique allows us to re-enable these cross-correlations and thus significantly broadens the appeal of high-redshift 21-cm experimentation. We discuss two important examples below.
The first question that needs to be addressed is whether it is preferable to cross-correlate with the reconstructed HI field, the evolved matter field, or even the linear field. While there are some arguments against this choice, we opted to use the reconstructed HI field, which we refer to simply as the ``reconstructed'' field. Most importantly, on scales where data are available, this field will resemble the measured data regardless of modeling imperfections. In particular, Figure \ref{fig:allfields} shows that the HI field has a higher cross-correlation coefficient than the initial field, indicating that the modeling is imperfect but nevertheless sufficiently flexible to account for these deficiencies.
Finally we note that low-$k$ modes are extremely important in the quest for Primordial Non Gaussianities (PNG), with constraints on PNG from 21\,cm survey severely hampered by the loss of long wavelength modes due to foregrounds \cite{CVDE-21cm}. PNG of the local-type would benefit enormously from reconstruction, which appears to be most robust way to recover the true signal at large scales free of residual foregrounds \cite{2004PhR...402..103B,2014PTEP.2014fB105T}. The equilateral and squeezed triangle configurations could potentially also benefit, because their sensitivity is normally limited by non-linear gravitational evolution that produces similarly shaped bispectra. We intend to return to this specific case in future work.
\subsection{Redshift distribution reconstruction}
A common problem facing future photometric surveys is to determine the redshift distribution of the objects, many of which may be too faint or too numerous to obtain redshifts of directly \cite{Newman15}. One approach is to use `clustering redshifts', wherein the photometric sample is cross-correlated with a spectroscopic sample in order to determine $dN/dz$ of the former \cite{Ho08,Newman08,Erben09,Menard13,McQuinn13}.
One difficulty with this approach for intensity mapping surveys is that, for broad $dN/dz$, the photometric sample only probes $k_\parallel\approx 0$. In linear theory the cross-correlation between a high-pass filtered 21-cm field and the photometric sample is highly suppressed.
Translating a redshift uncertainty of $\delta z$ into a comoving distance uncertainty of $\sigma_\chi=c\,\delta z/H(z)$, to probe $k_\parallel=0.03\,h\,{\rm Mpc}^{-1}$ requires $\delta z/(1+z)<0.01-0.015$ at $z=2-6$. Such photo-$z$ precision is in principle achievable, given enough filters, but primarily at lower redshift and for brighter galaxies \cite{Gorecki14,Laigle16}. The more common assumption of $\delta z/(1+z)=0.05$ corresponds to
\begin{equation}
\sigma_\chi \simeq 120\,h^{-1}{\rm Mpc} \left(\frac{1+z}{5}\right)^{-1/2}
\label{eq:scatter}
\end{equation}
for $2\le z\le 6$. Modes with $k_\parallel=0.03\,h\,{\rm Mpc}^{-1}$ are almost entirely unconstrained by such measurements.
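As a quick numerical check of Eq.~\ref{eq:scatter}, one can evaluate $\sigma_\chi = c\,\delta z/H(z)$ in the matter-dominated approximation, which is accurate at these redshifts (the value $\Omega_m=0.31$ is an assumption for illustration):

```python
import numpy as np

def sigma_chi(z, dz_over_1pz=0.05, omega_m=0.31):
    """Comoving-distance scatter c*dz/H(z) in h^-1 Mpc.

    Uses the matter-dominated approximation H(z) ~ H0 sqrt(Om) (1+z)^1.5,
    good at z >= 2, with c/H0 = 2998 h^-1 Mpc.
    """
    c_over_H0 = 2998.0                      # h^-1 Mpc
    dz = dz_over_1pz * (1 + z)
    return c_over_H0 * dz / (np.sqrt(omega_m) * (1 + z) ** 1.5)
```

Evaluating at $z=4$ reproduces the $\simeq 120\,h^{-1}{\rm Mpc}$ of Eq.~\ref{eq:scatter}, and the $(1+z)^{-1/2}$ scaling follows because $\delta z\propto(1+z)$ while $H\propto(1+z)^{3/2}$.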
To be more quantitative we note that such a photometric redshift uncertainty would smooth the galaxy field in the redshift direction. At very low $k$, and for the purposes of exploration, we can assume scale-independent linear bias so that on large scales we have
\begin{equation}
\delta_{\rm photo}(k,\mu) = \left( b_p + f\mu^2\right) \mathcal{D}
\, \delta_m(\vec{k}) + {\rm noise} \quad , \quad \mathcal{D}(k,\mu)\equiv \exp[-k^2\mu^2\sigma_\chi^2/2]
\label{eq:photo}
\end{equation}
where $\delta_m(\vec{k})$ is the matter overdensity and $\mathcal{D}$ encompasses the effect of photometric redshift uncertainty which we have approximated as Gaussian. Similarly, for the true HI field, assuming scale independent bias on these scales,
\begin{equation}
\delta_{\rm HI}(k,\mu) = \left( b_{\rm HI} + f\mu^2\right) \, \delta_m(\vec{k}) + {\rm noise}
\end{equation}
Using Eq.~\ref{eq:rt-def} the cross-power spectrum with $\delta_{\rm rec}$ is then given by
\begin{equation}
P_\times(k,\mu)\equiv
\left\langle\delta_{\rm rec} \delta_{\rm photo}\right\rangle
= \frac{b_p + f\mu^2}{b_{\rm HI} + f\mu^2}
\,\mathcal{D}(k,\mu)\ r(k,\mu)T_f(k,\mu)\ P_{HI}(k,\mu)
\label{eq:cross-with-recon}
\end{equation}
At high redshift both the galaxies and HI are highly biased and we are interested in $\mu\approx 0$ modes so the first factor becomes $b_p/b_{HI}$ which simply rescales the amplitude of $T_f$. Eq.~\ref{eq:cross-with-recon} shows why we need to reconstruct the low $k$ modes in order to achieve a significant signal in cross-correlation: $\mathcal{D}$ highly damps the signal at high $k_\parallel$ and without reconstruction $rT_f\to 0$ at low $k_\parallel$.
For Gaussian fields the variance of the cross-correlation is
${\rm Var}\left[\delta P_\times\right] = N_{\rm modes}^{-1} \left( P_\times^2 + P_{\rm photo}P_{\rm rec}\right)$
where $N_{\rm modes}$ is the number of independent modes in the bin and $P_{\rm rec} = T_f^2 P_{HI}$ is the auto-spectrum of the reconstructed HI field, including the reconstruction noise. Additionally, the auto-spectrum $P_{\rm photo}$ includes a contribution from the noise auto-spectrum, which we assume to be shot-noise: $\bar{n}^{-1}$. In this limit
\begin{equation}
P_{\rm photo} = \left( b_p + f\mu^2\right)^2 \mathcal{D}^2 P_m + \frac{1}{\bar{n}}
\end{equation}
where $P_m$ is the matter power spectra. Thus,
\begin{equation}
\frac{{\rm Var}[P_\times]}{P_\times^2} \simeq N_{\rm modes}^{-1}
\left( 1 + \rho^{-2} \right)
\end{equation}
where
\begin{equation}
\rho^2 = \frac{P^2_\times}{P_{\rm photo}P_{\rm rec}} = r^2 \frac{b_p^2\mathcal{D}^2\,\bar{n}P_m}{1+b_p^2\mathcal{D}^2\,\bar{n}P_m}
\label{eq:rho}
\end{equation}
quantifies the effective signal and plays the role of the more familiar $\bar{n}P$ that frequently appears in power spectrum errors \cite{FKP94}. An important caveat is that to estimate $\rho$ as a function of $\mu$, one must integrate over the $\mu$-bin rather than simply evaluate at the bin center. Since the photometric damping kernel scales as $\mu^2$, evaluating at the bin center causes over-damping, and the impact is especially severe for the smallest $\mu$-bin, as modes with $\mu \rightarrow 0$ should effectively be undamped.
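A small numerical example of this effect: for $k=0.2\,h\,{\rm Mpc}^{-1}$ and $\sigma_\chi=120\,h^{-1}{\rm Mpc}$, the damping $\mathcal{D}^2$ averaged over the lowest $\mu$-bin is several times larger than its value at the bin center, because the nearly undamped $\mu\approx 0$ modes dominate the average (bin edges chosen to match an 8-bin $\mu$ grid):

```python
import numpy as np

def damping_center_vs_average(k, sigma_chi, mu_lo=0.0, mu_hi=0.125, npts=2048):
    """Compare D^2 = exp(-k^2 mu^2 sigma^2) at the mu-bin center
    with its average over the bin."""
    mu = np.linspace(mu_lo, mu_hi, npts)
    d2 = np.exp(-(k * mu * sigma_chi) ** 2)
    center = np.exp(-(k * 0.5 * (mu_lo + mu_hi) * sigma_chi) ** 2)
    return center, d2.mean()
```

Here $k\sigma_\chi\simeq 24$, so even within the lowest bin the kernel varies by orders of magnitude, which is why bin-center evaluation badly over-damps the prediction.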
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/photod_z40_L1024-Nmu8-up.pdf}}
\caption{We show the effective cross-correlation ($\rho^2$, left) as well as the variance of the cross-spectra (right) of the reconstructed HI field (fiducial setup) with the photometric field at $z=4$ and number density $\bar{n}\simeq 10^{-2.5}(\,h\,{\rm Mpc}^{-1})^3$, for different $\mu$-bins. The solid lines show the measurement from the simulations while the dashed lines are the theoretical prediction based on Eq.~\ref{eq:rho}. The dotted lines on the left show the photometric damping kernel of Eq.~\ref{eq:photo} while on the right they show the variance for the input HI data with thermal noise and wedge (see text for discussion).
}
\label{fig:photo}
\end{figure}
For clustering redshifts with dense spectroscopic samples most of the weight comes from transverse modes near the non-linear scale. For example, the optimal quadratic estimator for $dN/dz$ (\cite{McQuinn13}; \S\,5.1) weights $P_\times(k_\perp,k_\parallel\approx 0)$ with $P_{\rm photo}^{-1}(k_\perp,0)$ which typically peaks at several Mpc scales. Similarly the estimator of ref.~\cite{Menard13} has weights proportional to $J_0(kr)$ integrated between $r_{\rm min}$ and $r_{\rm max}$. Taking $r_{\rm min}$ to be large enough that 1-halo terms are small leads to similar scales being important. As shown in Fig.~\ref{fig:compare2dmu} the transverse modes at intermediate scales are quite well reconstructed by our procedure, suggesting that 21-cm experiments would provide highly constraining clustering redshifts even for very high $z$ populations.
As an example to show how well we are able to cross-correlate photometric surveys with the reconstructed HI field, we consider a strawman survey at redshift $z \approx 4$, as outlined in ref.~\cite{Wilson19}, that detects Lyman break galaxies (LBG) with the $g$-band dropout technique. Based on Table 5 of ref.~\cite{Wilson19} we take $\bar{n}\simeq 10^{-2.5}(\,h\,{\rm Mpc}^{-1})^3$ and $b\simeq 3.2$. Given these numbers, we show $\rho^2$ as well as the noise-to-signal ratio for the cross-spectra as a function of scale and angle in Fig.~\ref{fig:photo}. The solid lines are the estimates from the simulations while the dashed lines are the `theoretical' predictions. To generate the photometric data in the simulation, we simply select the heaviest halos up to the given number density. To implement photometric uncertainties, we smooth the data with the Gaussian kernel of Eq.~\ref{eq:photo}. An alternative way to implement photometric redshifts would be to scatter the positions of halos along the line of sight with the standard deviation given by Eq.~\ref{eq:scatter}; we find that this leads to similar results on scales where ${\rm Var}[P_\times]/P_\times^2 < 1$, but becomes noisier on smaller scales. We therefore stick with the smoothing implementation. Given a photometric field, we then estimate its auto-spectra (with shot-noise) and cross-spectra with the reconstructed HI field in $\mu$-bins and use Eq.~\ref{eq:rho} to estimate $\rho$ and correspondingly the variance. Similarly, to get the `theoretical' prediction, we linearly interpolate the estimated $r_c$ for every $k$-bin as a function of $\mu$ and then integrate the last term of Eq.~\ref{eq:rho} in every $\mu$-bin.
For the reconstructed HI field, we find in Fig.~\ref{fig:photo} that the signal-to-noise in both the predicted and measured cross-spectra for the smallest $\mu$-bin is of order 10 on all scales, reaching $\sim 100$ on the intermediate scales that are reconstructed best. For higher $\mu$-bins, the signal-to-noise is still of order $\sim 10$ on the largest scales but deteriorates rapidly due to the photometric damping kernel. For comparison, we also show as dotted lines the signal-to-noise for the cross-spectra with the input noisy HI data (with the wedge and the thermal noise) and see that it does not reach unity on any scale. This clearly demonstrates the gains made by using the reconstructed HI field for estimating photometric redshifts from clustering measurements.
\subsection{Cross-correlation with CMB weak lensing}
In CMB lensing we are attempting to cross-correlate our HI field with the reconstructed convergence field, $\kappa$, given by
\begin{equation}
\kappa(\theta) = \int_0^{\chi_s} d\chi\ W_\kappa (\chi) \delta_m(\chi,\theta)
\quad{\rm with}\quad
W_\kappa = \frac{3 \Omega_m (1+z)}{2} \left(\frac{H_0}{c}\right)^2 \frac{\chi(z) (\chi_s-\chi)}{\chi_s}
\quad .
\end{equation}
with $\chi_s\simeq \chi (z=1150)$. The lensing kernel varies very slowly in the radial direction and hence is insensitive to parallel wavenumbers larger than $k_{\parallel}\sim 10^{-3} h\,{\rm Mpc}^{-1}$. Therefore, the suppression is even stronger than in the case of photometric redshifts, and the cross-correlation is completely hopeless unless one recovers the $k_{\parallel}\approx 0$ modes.
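The kernel above is straightforward to evaluate numerically. The sketch below assumes a flat matter-plus-$\Lambda$ background (radiation neglected, so the distance to $z=1150$ is only approximate) with illustrative parameter values; it is a check of the shape of $W_\kappa$, not a production calculation.

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def _trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hubble(z, omega_m=0.31, h=0.677):
    """H(z) in km/s/Mpc for a flat matter+Lambda background."""
    return 100.0 * h * np.sqrt(omega_m * (1.0 + z) ** 3 + 1.0 - omega_m)

def chi(z, omega_m=0.31, h=0.677):
    """Comoving distance in Mpc, integrating c/H(z') from 0 to z."""
    zz = np.linspace(0.0, z, 4096)
    return _trapz(C_KMS / hubble(zz, omega_m, h), zz)

def lensing_kernel(z, z_source=1150.0, omega_m=0.31, h=0.677):
    """W_kappa(z) of the equation above, in 1/Mpc, for a CMB source plane."""
    chi_z = chi(z, omega_m, h)
    chi_s = chi(z_source, omega_m, h)
    h0 = 100.0 * h
    return (1.5 * omega_m * (1.0 + z) * (h0 / C_KMS) ** 2
            * chi_z * (chi_s - chi_z) / chi_s)
```

The kernel vanishes at $z=0$ (where $\chi=0$) and is broad and slowly varying at the redshifts of interest here, which is exactly why it filters out all but the lowest $k_\parallel$ modes.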
When cross-correlating with a tomographic bin of the reconstructed 21-cm density, the angular cross power spectrum is given by
\begin{equation}
C_\ell^\times = \int_{z_{\rm min}}^{z_{\rm max}} \frac{c\, dz}{\chi^2(z) H(z)} W_\kappa(z)
\ r(z,k_{\perp}(z),k_{\parallel}=0) T_f(z,k_{\perp}(z),k_{\parallel}=0) b_{\rm HI} (z) P_{mm}(k_{\perp}(z),z)
\end{equation}
where the transverse wavenumber is evaluated at $k_{\perp}=\ell/\chi(z)$ and we have neglected any magnification bias. For slices that are thin ($\Delta \chi \ll \chi$, but thick enough that the Limber approximation is valid) this simplifies to
\begin{equation}
C_\ell^\times = W_\kappa\ b\ r T_f \frac{c\,\Delta z}{\chi^2 H(z)} P_{mm}
\end{equation}
Similarly, the HI auto power is given by
\begin{equation}
C_\ell^{HI} = b^2 T_f^2 \frac{c\,\Delta z}{\chi^2 H(z)} P_{mm}
\end{equation}
while the $\kappa$ autospectrum remains an integral over the line of sight.
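The two thin-slice expressions can be transcribed directly, which is useful for checking that their ratio is $C_\ell^\times / C_\ell^{HI} = W_\kappa\, r/(b\,T_f)$, independent of $P_{mm}$ and the slice geometry. All argument values in the test are illustrative, not drawn from the paper.

```python
def cl_cross(w_kappa, b, r, t_f, p_mm, dz, chi_z, h_z, c=299792.458):
    """Thin-slice Limber cross-spectrum of kappa and the reconstructed HI."""
    return w_kappa * b * r * t_f * c * dz / (chi_z**2 * h_z) * p_mm

def cl_hi(b, t_f, p_mm, dz, chi_z, h_z, c=299792.458):
    """Thin-slice HI auto-spectrum (shot noise neglected)."""
    return b**2 * t_f**2 * c * dz / (chi_z**2 * h_z) * p_mm
```

Note that the reconstruction enters the cross-spectrum only through the product $r\,T_f$, while the HI auto-spectrum carries $T_f^2$; this is why the recoverable signal-to-noise ends up proportional to $r$ alone.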
As before, the noise is ${\rm Var}[C_\ell^\times] = N_{\rm modes}^{-1}((C_\ell^\times)^2 + C_\ell^{\kappa\kappa} C_\ell^{{\rm HI}})$, so that, assuming the second term dominates, the squared signal-to-noise $(C_\ell^\times)^2/{\rm Var}[C_\ell^\times] \propto r^2$. The amount of signal-to-noise available for extraction is thus proportional to $r$. Our method therefore allows us to recover over 80\% of all the signal available (at fixed CMB map noise), compared to none in the case of the purely foreground-cleaned 21-cm signal.
\subsection{BAO reconstruction}
\label{sec:BAOrecon}
A major goal of high $z$, 21-cm cosmology is the measurement of distances using the baryon acoustic oscillation (BAO) method \cite{Weinberg13}. In ref.~\cite{HiddenValley19} we found that future interferometers would be able to make high-precision measurements of the BAO scale out to $z\simeq 6$ with scale-dependent biasing and redshift-space distortions having only a small effect on the signal. Non-linear structure formation damps the BAO peaks and reduces the signal-to-noise ratio for measuring the acoustic scale \cite{Bharadwaj98,Meiksin99,ESW07,Smith08,Crocce08,Matsubara08a,Seo08,White14,White15}. Since the non-linear scale shifts to smaller scales at high-redshifts, this damping is modest. We find that only the fourth BAO peak is slightly damped (compared to linear theory) at $z=2$ and no damping is visible at $z=6$.
Galaxy surveys which measure BAO typically apply a process known as reconstruction \cite{ES3, Obuljen17} to their data in order to restore some of the signal lost to non-linear evolution (e.g.~see refs.~\cite{BOSS_DR12,Carter18,Bautista18} and references therein). It is known that the absence of low $k_\parallel$ modes and the presence of the foreground wedge make reconstruction much less effective \cite{SeoHir16}, though some of the lost signal can be recovered with other surveys \cite{Cohn16}. Our approach provides another route to BAO reconstruction \cite{Modi18}, so it is interesting to ask how the performance of the algorithm compares to standard reconstruction and how it is impacted by loss of data due to foregrounds. For standard reconstruction, we follow the algorithm outlined in ref.~\cite{SeoHir16}. This differs slightly from traditional reconstruction using galaxies in that one removes the modes lost in the wedge before estimating the Zeldovich displacements and, instead of shifting point objects like galaxies, one shifts the HI field using this displacement. Since BAO reconstruction is most effective at $z=2$, we focus our attention on this case -- which is also the epoch at which our knowledge of the manner in which HI inhabits halos is most secure. At this redshift an instrument like PUMA is shot-noise limited.
\begin{figure}
\centering
\resizebox{1\columnwidth}{!}{\includegraphics{./figs/rccstd12d_L1024-Nmu4-hexup.pdf}}
\caption{We show the cross-correlation of the reconstructed initial/linear field with the true Lagrangian field for our method of reconstruction (iterative, top row) and standard reconstruction (bottom row), for the three redshifts and two wedge configurations (solid and dashed) and the fiducial thermal noise setup corresponding to 5 years with a half-filled array. The first column shows the cross-correlation monopole, while the second and third columns show the cross-correlation in bins nearly perpendicular to and along the line of sight, respectively.
}
\label{fig:rccstd}
\end{figure}
To gauge how well we are able to recover the BAO signal we closely follow ref.~\cite{Modi18}, which in turn builds upon refs.~\cite{Seo08, Seo16}. Specifically, we employ a simple Fisher analysis to estimate the uncertainty in locating the BAO features, which allows us to measure the sound horizon at the drag epoch, $s_0$. This parameter is only sensitive to the BAO component of the power spectrum, which is damped due to Silk damping and non-linear evolution. The information lost to the latter is recovered with reconstruction, and we quantify its success by measuring the linear information in the reconstructed field, $\delta_r$. This is done by estimating its projection onto the linear field, $\delta_{\rm lin}$, in the form of the `propagator'
\begin{equation}
G(k, \mu) = \frac{\langle \delta_r \delta_{\rm lin} \rangle}{b\langle \delta_{\rm lin} \delta_{\rm lin} \rangle} = \frac{r_c T_f }{b}
\end{equation}
where $b$ is the bias of the field and $r_c$ and $T_f$ are the corresponding cross-correlation coefficient and transfer function of the reconstructed field with respect to the linear field. Thus, the linear `signal' in the field is $S = b^2 G^2 P_{\rm lin}$, while under the Gaussian assumption the total variance is given by the square of the power spectrum of the field itself. As a result, the total signal-to-noise for the linear information is:
\begin{equation}
\frac{S}{N} = \frac{b^2 G^2 P_{\rm lin} }{\langle \delta_{r} \delta_{r} \rangle} = r_c^2
\end{equation}
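In practice the propagator and cross-correlation coefficient are estimated from gridded fields with FFTs. The following is a minimal sketch for periodic boxes in grid units (no window deconvolution or shot-noise subtraction), with binning choices and function names of our own.

```python
import numpy as np

def binned_stats(delta_r, delta_lin, nbins=8, b=1.0):
    """Propagator G(k) = <d_r d_lin> / (b <d_lin d_lin>) and cross-correlation
    r_c(k) = <d_r d_lin> / sqrt(<d_r d_r><d_lin d_lin>) in spherical |k| bins,
    for two real fields on the same periodic grid (grid-unit wavenumbers)."""
    fr = np.fft.rfftn(delta_r)
    fl = np.fft.rfftn(delta_lin)
    shape = delta_r.shape
    freqs = [np.fft.fftfreq(n) for n in shape[:-1]] + [np.fft.rfftfreq(shape[-1])]
    grids = np.meshgrid(*freqs, indexing="ij")
    kmag = np.sqrt(sum(g ** 2 for g in grids))
    bins = np.linspace(0.0, kmag.max() + 1e-12, nbins + 1)
    idx = np.digitize(kmag.ravel(), bins) - 1
    counts = np.maximum(np.bincount(idx, minlength=nbins)[:nbins], 1)

    def binavg(spec):
        return np.bincount(idx, weights=spec.ravel(), minlength=nbins)[:nbins] / counts

    p_x = binavg((fr * np.conj(fl)).real)   # cross-spectrum
    p_r = binavg(np.abs(fr) ** 2)           # reconstructed auto-spectrum
    p_l = binavg(np.abs(fl) ** 2)           # linear auto-spectrum
    g_k = p_x / (b * np.maximum(p_l, 1e-30))
    r_c = p_x / np.maximum(np.sqrt(p_r * p_l), 1e-30)
    return g_k, r_c
```

As a sanity check, a field that is an exact linear rescaling of $\delta_{\rm lin}$ has $r_c = 1$ in every populated bin, with the rescaling factor absorbed into $G$.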
We compare the performance of the two methods in Fig.~\ref{fig:rccstd}, where we show the cross-correlation of the two reconstructed fields with the true linear field for different redshifts and wedge configurations. Since the comparison for other noise levels is qualitatively similar, we show only the fiducial setup corresponding to 5 years of observation with a half-filled array. Overall, our method outperforms standard reconstruction significantly, with higher cross-correlation on all scales and extending to smaller scales. More importantly, as shown in the second column of Fig.~\ref{fig:rccstd}, standard reconstruction fails to reconstruct modes inside the foreground region while our method is able to do so. These modes provide information complementary to the line-of-sight modes, constraining the angular diameter distance while the line-of-sight modes constrain $H(z)$. Thus, unlike the standard method, our reconstruction should be able to constrain the angular diameter distance as well as improve measurements of the Hubble parameter.
To be more quantitative, we can use the Fisher formalism to estimate the error on $s_0$ as (for a derivation, see refs.~\cite{Seo08,Seo16})
\begin{equation}
F_{{\rm ln}\, s_0} = \bigg(\frac{s_0}{\sigma_{s_0}}\bigg)^2 = V_{\rm{survey}} A_0^2 \int_{k_{\rm min}}^{k_{\rm max}}k^2 dk \ \frac{b^4 G^4(k)\,\exp[-2(k \Sigma_s)^{1.4}]}{\left[P_{\rm lin}(k)/P_{0.2}\right]^2}
\label{eq:Fishers0}
\end{equation}
where we have integrated over the angles assuming isotropy, $V_{\rm survey}$ is the survey volume, $A_0 = 0.4529$ for the WMAP1 cosmology, $\Sigma_s\simeq 7.76\,h^{-1}$Mpc is the Silk damping scale and $P_{0.2}$ is the linear power spectrum at $k=0.2\,h\,{\rm Mpc}^{-1}$. While this calculation is rather crude, our aim here is not to perform an accurate Fisher forecast for measuring BAO, but simply to compare our reconstruction broadly with the standard reconstruction under a sensible metric, and to that end this procedure should suffice. To be yet more conservative, instead of quoting the relative Fisher errors individually, we estimate the ratio of predicted errors for the two methods. We find that for the BAO peak, $s_0$, under the fiducial thermal noise setup, our reconstruction reduces errors over the standard method by a factor of $\sim 2$ ($2$) and $\sim 2.8$ ($3.5$) for the optimistic (pessimistic) wedge at $z=2$ and $6$. The conclusions remain relatively unchanged for other thermal noise configurations. While this is for the angle-averaged BAO, as mentioned previously, we expect gains to be comparable for the Hubble parameter and larger for the angular diameter distance. We leave a more accurate calculation for future work.
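A minimal numerical transcription of Eq.~\ref{eq:Fishers0} (isotropic, trapezoidal integration, arbitrary survey-volume units) makes the scaling of the error ratio explicit: since $F_{{\rm ln}\,s_0} \propto G^4$, doubling the propagator everywhere increases the Fisher information by a factor of 16 and hence shrinks $\sigma_{s_0}$ by a factor of 4. The input spectrum in the test is an illustrative power law, not the linear spectrum used in the paper.

```python
import numpy as np

def _trapz(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def fisher_s0(k, g, p_lin, b=1.0, v_survey=1.0, a0=0.4529, sigma_s=7.76):
    """Isotropic Fisher information for ln(s0), following the equation above.
    k in h/Mpc, g = G(k) the propagator, p_lin the linear power spectrum."""
    p02 = np.interp(0.2, k, p_lin)  # P_{0.2}
    integrand = (k**2 * b**4 * g**4 * np.exp(-2.0 * (k * sigma_s)**1.4)
                 / (p_lin / p02)**2)
    return v_survey * a0**2 * _trapz(integrand, k)
```

The ratio of $\sigma_{s_0}$ between two reconstruction methods then follows from $\sigma_{s_0} \propto F^{-1/2}$ evaluated with each method's propagator.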
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have applied recently developed field reconstruction methods to the case of future 21-cm cosmology experiments. In many respects this is an optimal target for such methods. First, we are dealing with continuous fields, which are naturally more suited to these methods since the problem of non-analytic object creation does not apply. \rsp{
In galaxy clustering, while it is possible to circumvent the discreteness of the data by averaging in pixels large enough that the galaxy counts in cells become effectively continuous, this leads to information loss. However, for a 21-cm intensity mapping experiment, the finite resolution of the experiment naturally leads to a continuous version of the problem. We simply do not have information on scales small enough to even contemplate separating individual objects, even though the signal is dominated by them.}
Second, the 21-cm intensity field is dominated by numerous contributions from relatively low mass dark matter halos \cite{Castorina17,VN18,HiddenValley19}, which are well described by a low-order bias expansion to relatively high wavenumbers. \rsp{This allows us to start from the underlying dark matter field and model the observed data at the field level more simply and accurately than for discrete and more biased tracers such as galaxies.}
Third, we are missing some regions of $k$-space (\rsp{$\sim 20(7)-80(40)\%$ of the modes for our pessimistic (optimistic) case}) but, on the other hand, measuring a very large number of modes over other regions of $k$-space. As we have demonstrated, the measured modes \rsp{and the couplings introduced by non-linear evolution} more than make up for the missing ones.
Our method proceeds by maximizing \rsp{the posterior for the initial conditions, which is constructed by combining a Gaussian prior on the initial field with the likelihood of a forward model matching the observations}. In our case the observations are the mock, redshift space $\delta_{HI}$ in $k$-space, as would be measured by a 21-cm interferometric survey. The forward model consists of a quadratic, Lagrangian bias scheme paired with non-linear dynamics which can be either perturbative dynamics or a particle mesh simulation. We find that simple Zeldovich dynamics for such a model do a good job of fitting our mock data, with errors well below the shot noise level in the field (Fig.~\ref{fig:biasperf}).
For $2<z<6$ we are able to recover modes down to $k\simeq 10^{-2}\,h\,{\rm Mpc}^{-1}$ with cross-correlation coefficients larger than 0.8 for both optimistic and pessimistic assumptions about foreground contamination (Fig.~\ref{fig:allcompare}). Our reconstruction is relatively insensitive to loss of line-of-sight modes, up to $k_\parallel^{\rm min}=0.05\,h\,{\rm Mpc}^{-1}$, but more sensitive to missing modes in the `wedge' (\S\ref{sec:instruments}) as shown in Fig.~\ref{fig:kmin}. For our fiducial thermal noise assumptions we recover the $k\simeq 10^{-2}\,h\,{\rm Mpc}^{-1}$ modes with cross-correlation coefficient greater than $0.9$ in all directions, and the line-of-sight modes almost perfectly, even though we do not have any of these modes in the data. On the other hand, small-scale modes along the line of sight are significantly better reconstructed than the transverse modes. Thus, as shown in Fig.~\ref{fig:compare2dmu}, we find that the reconstruction of line-of-sight modes is less sensitive to the loss of modes in the foreground wedge than that of transverse modes. However, in all cases, the better the foregrounds are controlled and the smaller the wedge can be made, the better reconstruction proceeds. At $z\simeq 2$ our reconstructions are relatively insensitive to thermal noise over the range we have tested. However, at higher $z$ increasingly noisy data lead to a steadily lower cross-correlation coefficient, as shown in Fig.~\ref{fig:allcompare}.
Our method also provides a technique for density field reconstruction for baryon acoustic oscillations (BAO) \cite{Modi18}. It is known that the absence of low $k_\parallel$ modes and the presence of the foreground wedge make reconstruction much less effective \cite{SeoHir16}, though some of the lost signal can be recovered with other surveys \cite{Cohn16}. By constraining the reconstructed initial field, as well as the evolved HI field, we can use the measured modes to `undo' some of the non-linear evolution and increase the signal-to-noise ratio on the acoustic peaks (see \S\ref{sec:BAOrecon}). In particular, we find that while the standard methods of reconstruction recover almost no modes in the foreground wedge, our method reconstructs modes with cross-correlation better than $80\%$. This can lead to improvements of a factor of 2-3 in isotropic BAO analysis, and more in the transverse directions. We note that \rsp{these gains over standard methods are significantly larger than in the case of galaxy surveys, since standard reconstruction is already quite efficient for the latter \cite{SeoHir16, Schmittfull17, Modi18}}.
We have focused primarily on the reconstruction technique in this paper, but the ability to reconstruct low $k$ modes opens up many scientific opportunities. We described (\S\ref{sec:implications}) how 21-cm fluctuations could provide superb clustering redshift estimation at high $z$, where it is otherwise extremely difficult to obtain dense spectroscopic samples (Fig. \ref{fig:photo}). This could be extremely important for high $z$ samples coming from LSST, Euclid and WFIRST. The recovery of low $k_\parallel$ modes also opens up a wealth of cross-correlation opportunities with projected fields (e.g.\ lensing) which are restricted to modes transverse to the line of sight. The measurement of long wavelength fluctuations should also enhance the mapping of the cosmic web at these redshifts, helping to find protoclusters or voids for example \cite{White17}.
In this work we have assumed that we measure the 21-cm field in an unperturbed redshift-space field. In practice, what we observe is the field that has been displaced on large scales by the effect of weak gravitational lensing by the intervening matter along the line of sight. It is clear that the two effects must be degenerate at some level. A location of a given halo in observed redshift-space is obtained by adding the effect of the Lagrangian displacement from the proto-halo region and the effect of weak-lensing by the lower-redshift structures. The latter can be replaced, at least approximately, by a distribution of matter whose Lagrangian displacement absorbs both effects. However, the degeneracy is not perfect. In particular, weak lensing effect is almost exclusively limited to moving the structures in transverse direction by a displacement field that changes slowly with distance. These issues have been studied in the context of traditional lensing estimators in ref.~\cite{1803.04975}, which find encouraging results. Similar lessons should apply to our method, which should, if anything, perform better since it naturally extracts more signal. We leave this for future investigation.
For the purpose of reconstruction, we have assumed a fixed cosmology and fixed bias parameters. Ideally, one would like to estimate the model and cosmological parameters at the same time as reconstructing linear field. The impact of keeping these parameters fixed at their fiducial values varies with the science objective, for instance it will differently affect BAO reconstruction vs.\ photometric redshift estimation. In some cases, since the non-linear dynamics of our forward model are quite cheap, one can imagine doing this reconstruction recursively, updating model parameters if they vary significantly. A detailed analysis of this is beyond the scope of current work and we leave this for the future. For completeness, we point out that some recent papers (e.g.\ ref.~\cite{Elsner19}) have explored \rsp{ways of taking into account the model
and cosmology parameters simultaneously} for other biased tracers.
\rsp{We have also made simplifying assumptions to generate our observed mock data to test the reconstruction algorithm.} Since HI assignment to halos is still poorly understood, we have assumed a simple semi-analytic model. We have neglected any effect of UV background fluctuations \cite{Sanderbeck18, HiddenValley19} or assembly bias \cite{VN18} on the HI distribution in halos and galaxies. However, a flexible bias model, and our forward modeling framework, should be able to include these. In the same vein, it is worth exploring the impact of stochasticity (scatter) between HI mass and dark matter mass on reconstruction, since it adds noise on all scales. However, we leave this for future work, when the required observations can calibrate the amplitude of the effect.
The success of a quadratic, Lagrangian bias model plus perturbative dynamics (Eq.~\ref{eq:biasmodel1}) in describing the HI field also motivates a new route for making mock catalogs. As we have demonstrated (Fig.~\ref{fig:biasperf}) our forward model using a function of the linear density and shear field to weight particles which are moved using Zeldovich dynamics (or $2^{\rm nd}$ order perturbation theory) generates a good realization of an HI field in redshift space. The agreement can be improved even further by dividing by the transfer function at the field level. Since the initial grid can be relatively coarse (Mpc scale), very large volumes are achievable with reasonable computing resources. To the HI field generated in this manner one can add light-cone effects, foregrounds, ultra-violet background fluctuations, etc. While it is beyond the scope of our paper, our results also suggest that such mock catalogs could be used in the modeling of reionization (see also ref.~\cite{McQuinn18}) where the dynamics should be even more linear. To our model of the HI could be added a simple model for ionization fluctuations (e.g.~those developed in ref.~\cite{Zahn11} or similar). The model already predicts the velocity field, so the redshift-space clustering of HI on the lightcone or statistics like the kinetic Sunyaev-Zeldovich effect (kSZ; \cite{Sunyaev70}) can be forecast (e.g.~ref.~\cite{Alvarez16}).
While we have proceeded numerically in this paper, Fig.~\ref{fig:biasperf} shows that a Lagrangian bias model with Zeldovich dynamics does quite a good job of describing the low $k$ modes of the HI field. This suggests it may be possible to develop a fully analytic understanding of our reconstruction process, and the statistics that are being used for clustering redshifts or lensing cross-correlations. In principle the analytic models could be extended to reionization and kSZ. We defer development of such models to future work.
\section*{Acknowledgments}
We would like to thank the Cosmic Visions 21-cm Collaboration for stimulating discussion on the challenges for 21-cm surveys. C.M. would also like to thank Marcel Schmittfull for useful discussions.
M.W.~is supported by the U.S.~Department of Energy and by NSF grant number 1713791.
This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility operated under Contract No. DE-AC02-05CH11231.
This work made extensive use of the NASA Astrophysics Data System and of the {\tt astro-ph} preprint archive at {\tt arXiv.org}.
\section{Introduction}
Falls are common in people with Parkinson's disease (PD), and can have an enormous impact on physical and psychological health, including injury \cite{Bloem2001, Genever2005}, reduced activity levels \cite{Bloem2001} and poor quality of life \cite{Bloem2004, Romero2003}. A review of prospective studies reported that 45\% to 68\% of patients fell at least once within a 6- to 12-month period \cite{Latt2009, Paul2013,Wood2002}. Moreover, it was also reported that patients experiencing falls are more likely to fall again \cite{Pickering2007, Wood2002}. These findings highlight the importance of identifying fall risk factors to further aid clinicians in designing tailored treatment options to reduce falls.
Many studies have been conducted to model falls in PD, with the attempt to discriminate fallers from non-fallers based on various measurements (see for example \cite{Catala2015, Duncan2015, Gazibara2015, Lindholm2015, Mak2010}). However, PD is progressive. Movement control becomes more debilitating over time, as the patient experiences more tremors, rigid muscles, slow movement and difficulty balancing \cite{McCoy2016}. It is plausible to hypothesize that patients might experience falls more frequently as the disease progresses. While previous studies have looked at fallers/non-fallers, this study focuses on falls frequency, with the aim to identify risk factors associated with falls frequency.
There have also been many studies aimed at classifying PD into several subtypes. Using data-driven techniques, researchers have recommended several ways of subgrouping (subtyping), with the number of subtypes ranging from 2 to 5 \citep{Dujardin2004, Gasparoli2002, Graham1999, Lewis2005, Liu2011,Post2008, Reijnders2009,Schrag2006,Rooden2011, White2012}. The methods proposed include discriminant analysis \citep{Rooden2011,Gasparoli2002}, $K$-means clustering \citep{Liu2011}, and empirical assignment, mostly based on tremor dominance versus non-tremor dominance. Using these various methods, patients were assigned to a fixed subgroup (subtype).
While there have been many such studies, the implementation of PD subtypes in clinical research studies has been very limited \citep{Marras2013}. It is argued that in order to be useful, the subtypes should be able to explain the disease aetiology, prognosis or treatment responsiveness \citep{Marras2013}, and should be associated with the disease progression.
In response to this, we consider fall frequency as a measure of disease progression in addition to other clinical assessments for profiling subgroups of PD patients. Furthermore, considering the clinically highly heterogeneous characteristics of PD \citep{Lewis2005, Kehagia2010, FlensborgDamholdt2012, Marras2013}, a stochastic assignment of individuals into subgroups seems more suitable than a fixed subgroup assignment, as was done in the studies previously described. This can be addressed using finite mixture models (FMMs). Subgrouping subjects based on some covariates via the FMMs, in tandem with the associated risk factor, is known as profile regression \citep{Molitor2010, Liverani2015}.
A mixture model is an effective method of analysis in order to gain insight into patient groupings as it facilitates the identification of different sub-populations (subgroups) and their characteristics. The application of mixture models for subgrouping has a number of advantages: it quantifies the uncertainty of a patient's assignment into a given subgroup, profiles for the subgroups found can be generated using the estimated model, and it provides a statistically rigorous way of determining the number of subgroups \citep{McLachlan2004}. Examples of FMM implementation in PD studies are as demonstrated in \citep{Rolfe2012} and \citep{White2012} for the identification of PD phenotype based on symptoms.
As our main interest is to find subgroups of patients that have similar profiles with regard to disease progression, incorporating fall frequency eases the interpretation of the subgroups produced. For example, the subgroup with a high frequency of falls is considered to be a high risk group. The characteristics of each subgroup are generated by important factors governing the subgroups and provide information on the fall risk factors. Further, upon examining these characteristics, it may be possible to develop interventions to slow the progression of PD. To the best of our knowledge, this is the first study in PD that has implemented FMMs for patient subgrouping incorporating fall frequency.
The aim of this paper is to identify risk factors related to falls frequency in people with early-stage PD using a profile regression model. The focus is twofold: (i) generating profiles of clusters of patients based on disease-specific and functional test measures, from which the optimal combination of those measures can be inferred to explain falls frequency, and (ii) comparing the roles of disease-specific tests and functional tests in assigning patients to clusters.
The remainder of this paper is organized as follows. A description of the data and methodology is provided in Section \ref{methods}. Key results are presented in Section \ref{Sec:result}, including a comparison between disease-specific measures and functional test measures in determining the patient clustering, and the generation of the patient cluster profiles. A discussion of the results and limitations is presented in Section \ref{Sec:discussion}. Finally, a summary of the overall findings is given in Section \ref{Sec:summary}.
\section{Data and Methods}
\label{methods}
\subsection{Data}
\justify
\textbf{Participants.} 101 people diagnosed with idiopathic PD participated in a prospective study conducted from March 2002 until December 2006. Participants were recruited from community support groups and neurology clinics in southeast Queensland, as part of a larger research project conducted by the Institute of Health and Biomedical Innovation in Brisbane, Australia \cite{Kerr2010}. All participants were classified as early-stage PD, determined by a Hoehn and Yahr (HY) score of 3 or less. Two patients with extremely high frequencies of falls were excluded, giving a total of 99 patients' data for the analysis.
\justify
\textbf{Assessments.} Each participant was followed up for a consecutive six-month period with a monthly record of falls. Participants were classified as fallers if they recorded any falls during follow-up. Successful completion of the falls diary was monitored by phone calls and mail correspondence.
A series of clinical and functional tests was conducted at baseline. Participants were assessed based on two types of tests: disease-specific tests and functional tests. The disease-specific tests consist of the Unified Parkinson's Disease Rating Scale (UPDRS) and the Schwab and England activities of daily living scale (SE ADL). The UPDRS was assessed for three subscales: I (mentation, behaviour, mood), II (activities of daily living), and III (motor function). A measure of postural instability and gait disability (PIGD) was derived from the UPDRS (sum of items 13 - 15, 27 - 30). Sums of items in each of the UPDRS subscales yielded subtotals 1, 2, and 3, correspondingly. The functional tests consist of the Tinetti (comprising two subscales relating to clinical balance and gait), Berg Balance Scale (BBS), Timed Up and Go (TUG), Functional Reach (FR) and Freezing of Gait (FOG) for balance and gait, the Mini Mental State Examination (MMSE) for cognitive impairment, and the Melbourne Edge Test (MET) for visual acuity.
\subsection{Statistical methods} \label{sec:statmethodsch5}
\subsubsection{Profile regression}
Data on $n$ patients are denoted by $D=\{d_1, ..., d_n\}$, where $d_i$, the data of patient $i$, consists of $P$ measurements based on assessments related to the disease, $\boldsymbol{x_i}=(x_{i1},...,x_{iP})$, and fall frequency $y_i$. Each of the assessments $j$, $x_{.j}$, is assumed to follow a Gaussian distribution with mean $\mu_j$ and variance $\sigma^2_j$, that is $x_{.j} \sim N(x_j|\mu_j, \sigma_j^2)$, for $j=1, ..., P$. Fall frequency, $y_i$, is assumed to follow a Poisson distribution with mean $\theta$, $y_i \sim Po(\theta)$. Assume that patients belong to $K$ sub-populations, hereafter called subgroups.
For patient $i$, the mixture model for the covariates is given by
\begin{equation}
f(\boldsymbol{x_i}|\pi,\boldsymbol{\mu}, \Sigma)=\sum_{k=1}^K \pi_k N_P(\boldsymbol{x_i}|\boldsymbol{\mu}_k, \Sigma_k)\,\quad i=1,...,n,
\end{equation}
where $N_P(.)$ is the $P$-dimensional Gaussian density, $\pi=\left\lbrace \pi_k\right\rbrace_{k=1}^K$ are the mixing proportions, interpreted as the probability of assigning patient $i$ with the specified criteria $\boldsymbol{x}_i$ into subgroup $k$. As proportions, each $\pi_k$ lies between 0 and 1 and $\sum_k \pi_k=1$. If patient $i$ belongs to subgroup $k$, then $\boldsymbol{\mu}_k=(\mu_{1k}, ..., \mu_{Pk})^T$ is the mean vector of $\boldsymbol{x}_i$ with covariance matrix $\Sigma_k$. We assume the variables are independent with a constant variance across the subgroups, that is, $\Sigma_k=\text{diag} (\sigma_1^2, ..., \sigma_P^2)$.
The assignment of each patient to one of the subgroups is of interest. For this purpose, a latent variable $z_i$ such that
\begin{equation}
\boldsymbol{x_i}|z_i=k \sim N(\boldsymbol{x_i}|\mu_k, \Sigma_k),
\end{equation}
is introduced to identify the subgroup from which each patient has been generated. This latent variable is considered as missing data, and is to be estimated as part of the model. By incorporating this latent variable, it provides an alternative interpretation of the component weights, $\pi_k=Pr(z_i=k)$, that is $\pi_k$ is the probability that object $i$ is assigned to subgroup $k$. This implies that a multinomial distribution with parameter $\pi=(\pi_1, ...,\pi_K)$ is specified for $z_i$.
Given the response data (i.e. fall frequency), the joint covariate and response model for patient $i$ is then given by,
\begin{equation}
f(\boldsymbol{x_i},y_i|\pi,\boldsymbol{\mu}, \Sigma, \theta)=\sum_{k=1}^K \pi_k N(\boldsymbol{x_i}|\boldsymbol{\mu}_k, \Sigma_k)\,Po(y_i|\theta_k) \quad i=1,...,n,
\end{equation}
where the response $y_i$ follows a Poisson distribution with mean $\theta$ (which takes value $\theta=\theta_k$ if patient $i$ is assigned to subgroup $k$).
The association of the profiles and the response is characterized by
\begin{equation}
\label{eq:ch5_gamma0}
\text{log}(\theta_i|z_i)=\gamma_{z_i}+\boldsymbol{\beta}\textbf{u}_i,
\end{equation}
where $\boldsymbol{\beta}=(\beta_1,...,\beta_H)$ denotes the regression parameter coefficients associated with the covariates $\textbf{u}_i=(u_{i1}, ..., u_{iH})$, $\gamma_{z_i}$ being an individual-level intercept. In this paper, no covariates $\textbf{u}_i$ are assumed to affect fall count, and thus the model in Equation (\ref{eq:ch5_gamma0}) reduces to
\begin{equation}
\label{eq:ch5_gamma}
\text{log}(\theta_i|z_i)=\gamma_{z_i},
\end{equation}
and thus $\gamma_{z_i}$ denotes the mean fall count (in a logarithm scale) for patient $i$ if he or she belongs to subgroup $z_i$.
Hence, the likelihood for all patients is%
\begin{equation}
f(\boldsymbol{x},y|\pi, \boldsymbol{\mu}, \Sigma, \theta)=\prod_{i=1}^n{\sum_{k=1}^K \pi_k N(\boldsymbol{x_i}|\boldsymbol{\mu}_k, \Sigma_k)\, Po(y_i|\theta_k)}.
\end{equation}
If the subgroup assignment is known, then the complete data likelihood is given by,
\begin{equation}
\label{eq:ch5_complete_lik}
f(\boldsymbol{x},y|\pi, \boldsymbol{\mu}, \Sigma, \theta)=\prod_{i=1}^n \pi_{z_i} N(\boldsymbol{x_i}|\boldsymbol{\mu}_{z_i}, \Sigma_{z_i})\, Po(y_i|\theta_{z_i}).
\end{equation}
The model states that the patients, based on their symptoms represented by clinical measurements $\boldsymbol{x}$ and fall frequency $y$, are generated from $K$ distinct random processes representing the $K$ subgroups. Each of the processes is described by its own distributions, $N(\boldsymbol{x}|\boldsymbol{\mu}_k, \Sigma_k)$ and $Po(y|\theta_k)$.
We wish to make Bayesian inference for the model parameters $\boldsymbol{\Theta}$, characterized by the uncertainty in parameter values. This uncertainty is addressed through the specification of prior probability distributions, as follows:
\begin{itemize}
\item $\mu_{1,j},...,\mu_{K,j}|\lambda_j,\Sigma_k \sim \prod_{k=1}^K {N(\mu_j, \sigma^2_j \lambda_j)}$ for the means of $x_j$ in clusters $\{1,...,K\}$, $j=1, ...,p$
\item $\mu_j \sim N(0, \infty)$ (a flat, improper prior) for the common mean of $x_j$, $j=1, ...,p$
\item $\lambda_j \sim Ga(c,d)$ for the shrinkage parameter $\lambda_j$, $j=1, ...,p$
\item $\sigma_j^2 \sim IG (r,s)$ for the variance of $x_j$, $j=1, ...,p$
\item $\pi \sim \text{Dirichlet} (\alpha_1, ..., \alpha_K)$ for the mixture weights,
\end{itemize}
where $Ga(.)$ and $IG(.)$ denote the Gamma and Inverse-Gamma distributions, respectively.
The shrinkage parameter $\lambda_j$ is introduced following \cite{Yau2011}, in order to facilitate the selection of variables contributing to the clustering. Variables whose group means $\mu_{1,j},...,\mu_{K,j}$ shrink towards a common value $\mu_j$ are considered less relevant for forming the clusters than other variables, so insight into the role of each variable can be obtained by comparing these parameters.
To avoid the issue of label switching \cite{Stephens2000}, the prior for the mean fall frequency in each cluster $k$, $\theta_k=\text{exp}(\gamma_k)$, is specified as
\begin{equation*}
\gamma_1 \sim N(0,\sigma_0^2),
\end{equation*}
and
\begin{equation*}
\gamma_k = \gamma_{k-1} + \eta_{k-1},\, k=2, ...,K
\end{equation*}
where
$\eta_k, \, k=1,...,K-1$, are assumed to follow a normal distribution truncated at 0, i.e. $\eta_k \sim N(0,\sigma_0^2)I_{(0,\infty)}$. The indicator function $I_{(0,\infty)}$ takes the value 1 over the interval $(0,\infty)$ and 0 elsewhere. With this specification, the labelling is in line with fall frequency: the cluster with the smaller label is associated with the group of patients with the lower fall frequency. The model is visualized in Figure \ref{fig:ch5_DAGmodel}.
\begin{figure}[htbp]
\centering\includegraphics[width=0.7\linewidth]{ch5_DAGmodelv2.png}
\caption{Directed Acyclic Graph representation of the FMM model. $\Sigma=\text{diag}(\sigma_1^2, ...,\sigma_p^2)$.}
\label{fig:ch5_DAGmodel}
\end{figure}
Having specified the likelihood and the prior distributions, using the Bayes rule, the complete data posterior is given by
\begin{align}
p(\boldsymbol{\pi}, \Phi, \theta|\boldsymbol{x},y) & \propto \text{likelihood} \times \text{prior} \nonumber \\
&= f(\boldsymbol{x},y|\boldsymbol{\pi}, \Phi) \times p(\Phi)\,p(\theta)\,p(\boldsymbol{\pi}) \nonumber \\
&= f(\boldsymbol{x},y|\boldsymbol{\pi}, \Phi)\times p(\boldsymbol{\mu}|\lambda,\Sigma)\,p(\lambda)\,p(\Sigma)\,p(\theta)\,p(\boldsymbol{\pi}) \nonumber \\
&=f(\boldsymbol{x},y|\boldsymbol{\pi}, \Phi) \times \prod_{j=1}^p \left[N(\mu_j, \sigma^2_j \lambda_j)\,p(\lambda_j)\,p(\sigma_j^2)\right]p(\theta)\,p(\boldsymbol{\pi})
\end{align}
where $f(\boldsymbol{x},y|\boldsymbol{\pi}, \Phi)$ is the complete-data likelihood given in Equation \ref{eq:ch5_complete_lik} and the corresponding prior densities are as listed above. Since this posterior density has no closed form, samples are drawn from it via MCMC.
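A building block of any such sampler is the conditional posterior probability of subgroup membership, $Pr(z_i=k|\cdot) \propto \pi_k\, N(\boldsymbol{x}_i|\boldsymbol{\mu}_k,\Sigma_k)\,Po(y_i|\theta_k)$. A minimal sketch of this computation, with hypothetical parameter values and diagonal $\Sigma_k=\sigma_k^2 I$, is:

```python
import numpy as np
from math import lgamma

def log_normal_pdf(x, mu, sigma):
    # Elementwise log N(x | mu, sigma^2), summed over the covariate axis
    return np.sum(-0.5 * np.log(2 * np.pi * sigma ** 2)
                  - 0.5 * ((x - mu) / sigma) ** 2, axis=-1)

def log_poisson_pmf(y, theta):
    # log Po(y | theta) for a scalar count y
    return y * np.log(theta) - theta - lgamma(y + 1)

def membership_probs(x_i, y_i, pi, mu, sigma, theta):
    # Pr(z_i = k | .) proportional to pi_k * N(x_i|mu_k, Sigma_k) * Po(y_i|theta_k)
    log_w = (np.log(pi)
             + log_normal_pdf(x_i, mu, sigma[:, None])
             + log_poisson_pmf(y_i, theta))
    log_w -= log_w.max()          # stabilise before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

# Hypothetical parameters: K = 3 subgroups, p = 2 covariates
pi = np.array([0.63, 0.27, 0.08])
mu = np.array([[0.0, 0.0], [2.0, 1.0], [4.0, 3.0]])
sigma = np.array([1.0, 1.0, 1.5])
theta = np.array([0.5, 1.5, 10.0])

p = membership_probs(np.array([0.1, -0.2]), 0, pi, mu, sigma, theta)
```

A Gibbs-type sampler would draw a new label $z_i$ from the resulting probability vector at each iteration; the same quantity also underlies the posterior membership probabilities examined later in the paper.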
We ran this model in WinBUGS \citep{Thomas1994}, discarding the first 10,000 burn-in iterations and sampling from the last 100,000 iterations. Exploratory data analysis prior to fitting the mixture model, and all post-processing, were conducted in R \citep{RCoreTeam2015}. Since the number of subgroups, $K$, is unknown, we fitted models starting with $K=2$ and iteratively increased $K$ until a stopping criterion was met. The criterion is described in the following section.
\subsection{Model diagnostics}
Questions about the number of clusters and the quality of the representation of the data often arise when implementing an FMM. When the number of components is too large (i.e. an overfitted mixture model), the superfluous latent clusters will asymptotically become empty under specific conditions on the prior of the class proportions \cite{Nasserinejad2017,Rousseau2011}.
Various approaches have been proposed in the literature for choosing the number of latent classes in mixture models. However, no consensus has emerged regarding which of these methods performs best \cite{Nasserinejad2017}. In this study, we chose two commonly used criteria, Akaike's Information Criterion (AIC) \cite{Akaike1987}
\begin{equation*}
AIC_K=-2\text{log}f(\boldsymbol{x},y|\boldsymbol{\Theta})+2\nu_K,
\end{equation*}
and the Bayesian Information Criterion (BIC) \cite{Schwarz1978}
\begin{equation*}
BIC_K=-2\text{log}f(\boldsymbol{x},y|\boldsymbol{\Theta})+\nu_K\text{log}(n).
\end{equation*}
The deviance component, $-2\text{log}f(\boldsymbol{x},y|\boldsymbol{\Theta})$, measures the fit of the model to the data $D=\{\boldsymbol{x},y\}$ given parameters $\boldsymbol{\Theta}$. The penalty term involves $\nu_K$, the number of parameters required to fully specify the $K$-component model, and controls the model's complexity. The model with the smaller values of $AIC_K$ and $BIC_K$ is preferred.
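Both criteria are simple functions of a plugged-in log likelihood; the sketch below uses hypothetical values for the log likelihood, parameter count, and sample size, not those of the fitted models:

```python
import numpy as np

def aic_bic(log_lik, nu_K, n):
    """AIC_K and BIC_K for a K-component model.

    log_lik -- plugged-in log likelihood, log f(x, y | Theta)
    nu_K    -- number of parameters needed to fully specify the model
    n       -- number of patients
    """
    aic = -2.0 * log_lik + 2.0 * nu_K
    bic = -2.0 * log_lik + nu_K * np.log(n)
    return aic, bic

# Hypothetical values for illustration
aic, bic = aic_bic(log_lik=-3800.0, nu_K=64, n=99)
```

Since $\log(99) > 2$, BIC penalizes each extra parameter more heavily than AIC here, which is why BIC tends to favor more parsimonious models at this sample size.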
\subsection{Model inference}
Once the number of subgroups is selected, the aim is to make an inference on the unknown subgroup indicator, $z_i$, and subgroup parameters $\mu_k, \sigma_k^2$ representing the mean and variance of the corresponding symptoms in subgroup $k$, and $\theta_k$ representing the associated fall frequency for patients in subgroup $k$. These parameters provide information for further analysis such as profile generation for each subgroup. Moreover, assessments on the relevance and contribution of variables on the subgroup memberships will also be undertaken.
\subsubsection{Profile generation}
Each subgroup was characterized by distinctive trends in the subgroup attributes (variables). Subgroup profiles were generated by describing the trend of each variable through graphs of the posterior distributions of the variable's mean in each subgroup. Furthermore, a summary of each variable, say $x_{.j}$, in each subgroup $k$, in the form of its mean $\mu_{jk}$ and standard deviation $\sigma_{jk}$, will also be inspected to characterize the subgroups.
\subsubsection{Variable influence on subgroups membership}
Given the subgroup profiles, it is of interest to assess the role of each variable in assigning subjects into subgroups.
Variable $x_{.j}$ is said to be relevant to the subgrouping if its realizations are relatively homogeneous for subjects in the same subgroup, and are different between subgroups. This relevance measurement can be inferred through the shrinkage parameter $\lambda_j$ introduced in the model in Section \ref{sec:statmethodsch5}. A high value of $\lambda_j$ indicates that $x_{.j}$ is relevant for the subgrouping, and vice versa. Thus, relevance can be interpreted as the relative importance of variables governing the subgroups.
More insight into the role of each variable can be obtained through the credible interval for the mean of each subgroup. The presence of non-overlapping intervals between subgroups implies there are distinct characteristics with respect to the corresponding variable.
In addition, there will be uncertainty about this mean which should be considered when making comparisons. This uncertainty can be evaluated by calculating the following:
\begin{equation}
D^*_{kk'}=\frac{\sum_{t=1}^T I\{\mu_k^{(t)}>\mu_{k'}^{(t)}\}}{T},
\end{equation}
where $T$ is the total number of MCMC iterations and $I$ is the indicator function, equal to $1$ when $\mu_k^{(t)}>\mu_{k'}^{(t)}$ and 0 otherwise. $D^*_{kk'}$ measures the consistency of the distribution of $x_{.j}$ across subgroups $k$ and $k'$. A value of $D^*$ near 0 or 1 implies homogeneity within subgroups and good separation between subgroups.
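Computing $D^*_{kk'}$ from stored MCMC draws is a one-line comparison of the two chains; the chains below are hypothetical, with $T=5$ for illustration:

```python
import numpy as np

def d_star(mu_k_draws, mu_kprime_draws):
    # Fraction of MCMC iterations in which mu_k^{(t)} exceeds mu_k'^{(t)}
    return float(np.mean(np.asarray(mu_k_draws) > np.asarray(mu_kprime_draws)))

# Hypothetical chains of length T = 5
d = d_star([1.2, 0.9, 1.5, 1.1, 1.3], [1.0, 1.0, 1.0, 1.0, 1.0])
print(d)  # 0.8
```

In practice $T$ would be the full retained chain length (100,000 here), and the comparison would be repeated for every variable and every pair of subgroups.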
To further assess a variable's influence in forming subgroups, the sensitivity of the posterior probabilities of subgroup membership with respect to changes of a given variable is examined. The presence of noticeable changes implies that the variable of interest is associated with the posterior probability of subgroup membership. Such sensitivities can also be examined via the following odds ratio
\begin{equation}
Odds_{kk'}=\frac{Pr(z_i=k|x_{ij})}{Pr(z_i=k'|x_{ij})},
\end{equation}
which states the ratio between the posterior probability of being in subgroup $k$ and the posterior probability of being in subgroup $k'$ for patient $i$ with the characteristic $x_{ij}$. Odds $>1$ suggests that patients with characteristic $x_{.j}$ are more likely to be in subgroup $k$ than in subgroup $k'$.
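To illustrate how such odds behave, the sketch below uses a deliberately simplified model: a single covariate, two subgroups with equal variances, and the Poisson response term ignored. The function name and all numerical values are hypothetical (loosely inspired by the Tinetti scale), not model estimates:

```python
import numpy as np

def posterior_odds(x, pi1, pi2, mu1, mu2, sigma):
    # Odds of subgroup 1 vs subgroup 2 given a single covariate value x,
    # for two normal components with equal variance (response term ignored)
    log_odds = (np.log(pi1) - np.log(pi2)
                - 0.5 * ((x - mu1) / sigma) ** 2
                + 0.5 * ((x - mu2) / sigma) ** 2)
    return np.exp(log_odds)

# As x moves from the subgroup 2 mean (24.6) towards the subgroup 1 mean (26.8),
# the odds of subgroup 1 membership rise monotonically.
vals = [posterior_odds(x, 0.63, 0.27, 26.8, 24.6, 1.0) for x in (24.6, 25.7, 26.8)]
```

Midway between the two means the covariate is uninformative, and the odds reduce to the prior ratio $\pi_1/\pi_2$; beyond that point the covariate dominates.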
\section{Results}
\label{Sec:result}
\subsection{Description of participants and exploratory data}
A summary of patients' measurements, classified into fallers and non-fallers, is given in Table \ref{tab:Ch5desc}. Data from 99 patients were used in the analysis (66 males, 33 females, mean age 66.3 years). There was no significant association between the demographic measurements and fall/non-fall occurrences. Clear differences between fallers and non-fallers can be observed in the disease-specific and functional-test measurements. We do not elaborate further on these results; they are presented to show that there are differences between fallers and non-fallers, and thus motivate further examination of the corresponding measurements alongside fall frequency.
It is apparent that fall frequency increases over time, on average, as its frequency in the 6-month follow-up is higher (mean 1.4) than that in the previous year (mean 0.8). There were also patients who fell prior to participating in the study (mean 0.3), yet did not fall during the follow-up. Fall frequency for fallers in the previous year was almost 5 times higher than that of non-fallers. This indicates that once patients fell, they were prone to falling again.
\begin{table}[htbp]
\centering
\caption{Summary of variables for the study cohort at 6 months, for all patients and classified by fallers and non-fallers. Variables in the first part of the table (top) are for patient description only (except Age). Variables used for subgroup profiling are in the second part of the table (bottom). Categorical variables are summarized by frequency (\%); numerical variables by their means. p-values are from between-group tests (fallers vs.\ non-fallers).}
\scalebox{1}{
\begin{tabular}{lcccc}
\toprule
\rowcolor[rgb]{ .949, .949, .949} & \multicolumn{1}{l}{All patients} & Fallers & Non-fallers & p-value \\
\midrule
\midrule
Fall frequency & & & & \\
\quad In the previous year & 0.8 & 1.4 & 0.3 & $<.01$ \\
\quad In 6 months of follow-up & 1.4 & 3 & 0 & $<.01$ \\
Gender & & & & 0.5 \\
\quad Male & 66 (67\%) & 38 (58\%) & 28 (42\%) & \\
\quad Female & 33 (33\%) & 15 (45\%) & 18 (55\%) & \\
Age & 66.3 & 65.7 & 66.8 & 0.5 \\
Height & 168.7 & 168.6 & 168.7 & 0.98 \\
Weight & 72.8 & 74.5 & 71.4 & 0.31 \\
Body mass index & 25.6 & 26.2 & 25 & 0.24 \\
MMSE & 28 & 27.8 & 28.1 & 0.53 \\
Schwab \& England ADL & 82.2 & 79.7 & 84.5 & 0.03 \\
Hoehn \& Yahr & & & & 0.19 \\
\quad 1 & 27 (27\%) & 22 (81\%) & 5 (19\%) & \\
\quad 2 & 44 (44\%) & 20 (45\%) & 24 (55\%) & \\
\quad 3 & 28 (29\%) & 11 (39\%) & 17 (61\%) & \\
\hline\hline
\textbf{Disease specifics} & & & & \\
Duration & 6.2 & 7.3 & 5.3 & 0.05 \\
UPDRS &&&&\\
\quad Subtotal 1 & 2.4 & 2.8 & 2.1 & 0.16 \\
\quad Subtotal 2 & 10 & 11.5 & 8.8 & 0.01 \\
\quad Subtotal 3 & 18.9 & 21.3 & 16.8 & 0.03 \\
PIGD & 3.9 & 4.8 & 3 & $<.01$ \\
Freezing of gait & 4.5 & 6.1 & 3.2 & $<.01$ \\
\textbf{Functional tests} & & & & \\
Tinetti total & 25.9 & 24.8 & 26.8 & $<.01$ \\
Berg balance score & 53.6 & 52.9 & 54.2 & 0.05 \\
Timed up and go & 9.9 & 10.5 & 9.3 & 0.03 \\
Functional reach (best) & 27.4 & 27.3 & 27.5 & 0.85 \\
\bottomrule
\end{tabular}%
}
\label{tab:Ch5desc}%
\end{table}%
The fall counts were examined and the results are depicted in Figure \ref{fig:falls_eda1_v2}. The plot shows a very high frequency for counts of 0 and 1, a peak in the middle, then low frequencies beyond a count of 5. The empirical density reveals multiple peaks. Under the assumption that the data follow a Poisson distribution with mean equal to the average fall count, a QQ-plot was produced (Figure \ref{fig:falls_eda1_v2}). The plot, however, does not support this assumption, as many points lie off the line.
\begin{figure}
\centering\includegraphics[width=0.9\linewidth]{falls_eda1_v2.png}
\caption{Fall count bar plot, empirical density, and QQ-plot for all data.}
\label{fig:falls_eda1_v2}
\end{figure}
There is an excess of zero counts: 52\% of the cohort did not fall. Excluding these zeros, the same process was repeated and the results are shown in Figure \ref{fig:falls_eda2_v2}. The problem of multimodality persists, so the assumption of a single Poisson distribution for the data does not appear valid. This motivates exploring whether there are underlying subgroups in the PD population, and thus motivates the profile regression models presented in the following section.
\begin{figure}
\centering\includegraphics[width=0.9\linewidth]{falls_eda2_v2.png}
\caption{Fall count bar plot, empirical density, and QQ-plot for data without zero fall count.}
\label{fig:falls_eda2_v2}
\end{figure}
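A quick numerical companion to these plots is the variance-to-mean ratio (dispersion index), which should be close to 1 for a single Poisson distribution. The counts below are a hypothetical sample mimicking the excess zeros and heavy right tail, not the study data:

```python
import numpy as np

def dispersion_index(counts):
    # Variance-to-mean ratio; approximately 1 under a single Poisson model
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical fall counts with excess zeros and a heavy right tail (n = 99)
falls = np.array([0] * 52 + [1] * 20 + [2] * 10 + [3] * 7 + [5] * 5 + [10] * 5)
di = dispersion_index(falls)   # well above 1: a single Poisson fits poorly
```

A dispersion index far above 1 is exactly the overdispersion that a mixture of Poisson components with different means can absorb.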
\subsection{Subgrouping via profile regression models}
\label{sec:ch5_results}
\subsubsection{Model choice}
Profile regression models were fitted with the number of mixture components (subgroups) for the covariates ranging from 2 to 4. Based on the AIC$_K$ and BIC$_K$ criteria presented in Table~\ref{tab:aic}, the model with three subgroups produced the lowest values, so further interpretation was based on this model. Figure \ref{fig:ch5_fallscount} presents the fall counts for subjects in the three formed subgroups, together with the predicted counts (with $95\%$ credible intervals) for the replicated data (Figure \ref{fig:ppc}). As indicated by the credible intervals, the fall counts fall into three groups: very low (0 or 1), low (less than 5), and high (greater than 5). This agrees with the fall counts in Figure \ref{fig:falls_eda2_v2}, supporting the goodness of fit of the model.
\begin{table}[htbp]
\centering
\caption{Information criteria for the models with $K$ subgroups.}
\begin{tabular}{crrr}
\toprule
\multicolumn{1}{l}{Criteria} & \multicolumn{3}{c}{Number of subgroups} \\
\cmidrule{2-4} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} \\
\midrule
\midrule
AIC$_K$ & 7854 & 7728 & 7784 \\
BIC$_K$ & 8007 & 7827 & 7961 \\
\bottomrule
\end{tabular}%
\label{tab:aic}%
\end{table}%
\subsubsection{Subgroup description}
Three subgroups were formed, as shown in Figure~\ref{fig:ch5_fallscount}, with the probabilities of being in Subgroups 1 to 3 equal to 0.63, 0.27, and 0.08, respectively. Subgroup 1 consists of patients who never fell or experienced just one fall (mean fall count 0.5). The second subgroup is dominated by patients experiencing one or two falls (mean fall count 1.48). Finally, the more frequent fallers were in Subgroup 3, with a mean fall count of 10.61.
\begin{figure}[t!]
\centering
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.62]{clusty_v2.png}
\caption{}
\label{fig:ch5_fallscount}
\end{subfigure}%
~
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.325]{PPC_v2.png}
\caption{}
\label{fig:ppc}
\end{subfigure}
\caption{Fall counts in the three subgroups (a); predicted fall counts with 95\% credible intervals for the replicated data (b).}
\end{figure}
The relevance parameter measuring the relative importance of each variable in assigning patients into the three subgroups is depicted in Figure~\ref{fig:relevanceLambda_v3}. According to this parameter, the length of time being diagnosed with the disease (Duration) was least relevant for the subgrouping, and the Tinetti total was the most relevant. Freezing of gait (FOG), timed up and go (TUG), postural instability and gait difficulty (PIGD) and balance measured by Berg balance score (BBS) shared a similar contribution to the patients' subgroupings as the relevance of these variables is about the same.
\begin{figure}
\centering\includegraphics[width=0.5\linewidth]{relevanceLambda_v3.png}
\caption{Box plots of relative relevance of covariates used in subgrouping. Variables are ordered in ascending relevance.}
\label{fig:relevanceLambda_v3}
\end{figure}
The posterior distributions of the variable means were used to generate the subgroup profiles (depicted in Figures~\ref{fig:profile1} and \ref{fig:profile2}). Table \ref{tab:postmeanCI} provides more insight into the contribution of each variable to the subgrouping, in the form of credible intervals for the posterior means and the degree of overlap of these distributions. It can be inferred that Subgroup 1 and Subgroup 2 have quite distinctive profiles based on several variables. Subgroup 3 on average is quite different from the other subgroups; however, the variation in its means is large, resulting in substantial overlap with the other subgroups across the variables.
According to the disease-specific measures (Figure~\ref{fig:profile1}), UPDRS subscales II and III and the PIGD can classify patients into Subgroup 1 and Subgroup 2 clearly. For UPDRS subscale I and FOG, there are slight overlaps between the two subgroups. Interestingly, the distributions are more compact in the subgroups with lower fall frequency (shown by more peaked densities), implying more certainty in describing the non- or single-fallers than the frequent fallers. It is evident from the figure that Subgroup 3, the frequent fallers, has a wide range of measures and thus cannot be clearly differentiated from the other subgroups. On the other hand, despite some overlap, FOG could differentiate between Subgroup 2 and Subgroup 3 better than the other disease-specific measures. The overall trend is that high scores on UPDRS subscales I, II, and III and on the PIGD, together with worsening freezing of gait, indicate deterioration of the patients' condition, reflected in an increase in fall frequency. Disease duration, by contrast, does not appear to be associated with fall frequency.
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{updrs1clustdist_v2.png}
\caption{}
\label{fig:updrs1clustdist}
\end{subfigure}
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{updrs2clustdist_v2.png}
\caption{}
\label{updrs2clustdist}
\end{subfigure}
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{updrs3clustdist_v2.png}
\caption{}
\label{fig:updrs3clustdist}
\end{subfigure}
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{fogclustdist_v2.png}
\caption{}
\label{fig:fogclustdist}
\end{subfigure}%
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{pigdclustdist_v2.png}
\caption{}
\label{fig:pigdclustdist}
\end{subfigure}
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{durationclustdist_v2.png}
\caption{}
\label{fig:durationclustdist}
\end{subfigure}
\caption{Profiles for the three subgroups, Subgroup 1 (dotted lines), Subgroup 2 (dashed lines), and Subgroup 3 (solid lines), based on disease-specific measures.}
\label{fig:profile1}
\end{figure}
As for the profiles based on the functional test measures (Figure~\ref{fig:profile2}), a pattern similar to that of the disease-specific measures is shown: more compact distributions of the posterior means for Subgroup 1, a clearer distinction between Subgroup 1 and Subgroup 2, and a wide range for the variables' posterior means for patients in Subgroup 3. Among these variables, Tinetti total, TUG, and BBS best differentiated the first two subgroups. Moreover, the Tinetti total can also differentiate Subgroup 3 from the other subgroups relatively clearly. It can be inferred from the figure that better balance and gait (i.e. high scores on the Tinetti and Berg balance tests) and ease of movement (lower TUG, higher FRB) were associated with lower fall frequency, and vice versa.
\begin{figure}[htp]
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.32]{Ttotalclustdist_v2.png}
\caption{}
\label{fig:Ttotalclustdist_v2}
\end{subfigure}
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{bergclustdist_v2.png}
\caption{}
\label{fig:bergclustdist_v2}
\end{subfigure}
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{frbclustdist_v2.png}
\caption{}
\label{fig:frbclustdist_v2}
\end{subfigure}
~
\centering
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{tugclustdist_v2.png}
\caption{}
\label{fig:tugclustdist_v2}
\end{subfigure}
~
\begin{subfigure}[b]{.3\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.35]{ageclustdist_v2.png}
\caption{}
\label{fig:ageclustdist_v2}
\end{subfigure}
\caption{Profiles for the three subgroups, Subgroup 1 (dotted lines), Subgroup 2 (dashed lines), and Subgroup 3 (solid lines), based on functional tests and age.}
\label{fig:profile2}
\end{figure}
As indicated in Table \ref{tab:Ch5desc}, there was no significant difference in age (on average) between fallers and non-fallers. Further insight is provided by the mixture model, as depicted in Figure~\ref{fig:ageclustdist_v2}. Non- or single-fallers were relatively younger than low-frequency fallers. The earlier non-association between age and faller status was due to the contribution of the recurrent fallers, whose age range covered the ranges of the first two subgroups. Thus, when patients were classified only as fallers or non-fallers, age did not differ greatly between the two groups.
The profile for each subgroup is summarized in Table \ref{tab:postmeanCI}. The disease-specific measurements were able to differentiate Subgroup 1 and Subgroup 2 clearly, as shown by the non-overlapping credible intervals of the posterior means of these variables (except for Duration). UPDRS subscales II and III, PIGD, Tinetti total, Berg balance, and timed up and go scores were completely different between the two subgroups. The uncertainty measure leads to a similar conclusion, as many values of $D^*_{12}$ are either close to 0 or to 1; that is, the proportion of iterations in which the Subgroup 1 mean of the corresponding variable exceeds the Subgroup 2 mean is close to 0 (or, conversely, to 1). However, these variables cannot clearly differentiate between patients with low fall frequency (Subgroup 2) and patients with high fall frequency (Subgroup 3).
Functional test variables also play a similar role to the disease-specific variables in the subgrouping: they were very different between Subgroup 1 and Subgroup 2, and not greatly different between Subgroup 2 and Subgroup 3.
However, further insight from $D^*_{23}$ shows that the Tinetti total, Berg balance score and freezing of gait could identify Subgroups 2 and 3 relatively well compared to the other variables. Furthermore, the Tinetti total and Berg balance score have a consistent, negative association with fall frequency $(D^*_{12}= 1, D^*_{23}\approx 1)$, meaning that good balance (a high Tinetti total or BBS score) is associated with fewer falls. As for FOG, $D^*_{23}\approx 0$ means that patients in Subgroup 2 have a lower freezing of gait score than patients in Subgroup 3. Combined with $D^*_{12} \approx 0$, this shows that freezing of gait consistently increases with fall frequency.
\begin{table}
\label{tab:postmeansch5}
\centering
\caption{Posterior means and 95\% credible intervals and comparison of mean differences between subgroups for each variable. Disjoint intervals are in bold. $D_{ij}^*$ is for comparing Subgroup $i$ and Subgroup $j$.}
\label{tab:postmeanCI}
\scalebox{0.8}{%
\begin{tabular}{@{}lccccc@{}}\toprule
& Subgroup 1 & Subgroup 2 & Subgroup 3&$D_{12}^*$&$D_{23}^*$\\ \midrule
Age & \textbf{64.3 (62.3, 66.3)} & \textbf{70.1 (67.2, 73)}& 65.8 (58.3, 74)&.0023&.7972\\
Disease-specifics\\
\quad UPDRS I & \textbf{1.7 (1.2, 2.3)} & \textbf{3.5 (2.6, 4.3)} & 4 (1.4, 6.4) &.0006 & .4695\\
\quad UPDRS II & \textbf{7.8 (6.7, 8.9)} & \textbf{13.7 (11.9, 15.4)} & 13.5 (8.4, 18.2)& 0& .584\\
\quad UPDRS III & \textbf{15 (12.9, 17.2)} & \textbf{25.3 (22.1, 28.6)} & 26.8 (17.9, 36.6)&0 & .334 \\
\quad PIGD & \textbf{2.4 (1.9, 3)} & \textbf{6.3 (5.4, 7.1)} & 6.1 (3.8, 8.6)&0 & .5052 \\
\quad Freezing of gait & \textbf{3 (2, 4.1)} & \textbf{6.6 (5.1, 8.2)} & 11.1 (6.6, 16)&.0004 & .0238 \\
\quad Duration & 6 (4.8, 7.2) & 6.4 (4.8, 8.1) & 6.7 (2.1, 11.1)&.3211 & .4852\\
Functional tests\\
\quad Tinetti Total & \textbf{26.8 (26.1, 27.4)} &\textbf{24.6(23.7, 25.6)} & 22.7 (19.8, 25.3)& 1 & .9816 \\
\quad Berg balance score & \textbf{54.6 (54, 55.3)} & \textbf{52.1 (51.1, 53)} & 49.4 (45.9, 52.9)&1 & .9227 \\
\quad Functional reach (best) & \textbf{29.1 (27.4, 30.7)} & \textbf{24.6 (22.3, 26.8)} & 25.2 (19, 31.7)&.9974&.4278\\
\quad Timed up \& go & \textbf{8.7 (8.2, 9.2)} & \textbf{11.8 (11, 12.6)} & 12 (9.9, 14.4)&0 & .3898 \\
\bottomrule
\end{tabular}%
}
\end{table}
\subsubsection{Uncertainty in subgroup membership}
Given the subgroup profiles and the relative importance of each variable, it is of interest to assess how the variables determine subgroup membership. Therefore, some variables were modified (by changing particular values, as summarized in Table~\ref{tab:modif}), and the changes in subgroup membership were evaluated. Five modifications were implemented, and the results are shown in Figure~\ref{fig:pp}.
\begin{table}[htbp]
\centering
\caption{Variable modification to assess the uncertainty in subgroup membership. Patient 1 is the reference.}
\scalebox{0.8}{
\begin{tabular}{cl}
\toprule
\rowcolor[rgb]{ .949, .949, .949} Patient & Modified variables \\
\midrule
\midrule
1 & Values for age and duration diagnosed, were set to the mean for all data \\
& Other variables were set to the corresponding means for Subgroup 1. \\
2 & Change values for Functional test variables to their Subgroup 2 means. \\
3 & Change values for disease-specific variables to their Subgroup 2 means.\\
4 & Change values for Functional tests and disease-specific variables to their Subgroup 2 means.\\
5 & Selected variables from Patient 4.\\
\bottomrule
\end{tabular}%
}
\label{tab:modif}%
\end{table}%
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{.18\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.2]{postprob1.png}
\caption{}
\label{fig:pp1}
\end{subfigure}%
~
\begin{subfigure}[b]{.18\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.2]{postprob2.png}
\caption{}
\label{fig:pp2}
\end{subfigure}
~
\begin{subfigure}[b]{.18\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.2]{postprob3.png}
\caption{}
\label{fig:pp3}
\end{subfigure}
~
\begin{subfigure}[b]{.18\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.2]{postprob4.png}
\caption{}
\label{fig:pp4}
\end{subfigure}
~
\begin{subfigure}[b]{.18\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.2]{postprob5.png}
\caption{}
\label{fig:pp5}
\end{subfigure}
\caption{Empirical densities of posterior probability membership for each patient described in Table~\ref{tab:modif} (solid line = subgroup 1, dashed line = subgroup 2). Subgroup 3 is not presented as the probability is almost zero for all modifications. }
\label{fig:pp}
\end{figure}
Considering the distinct separation of the functional test values for Subgroup 1 and Subgroup 2 described in the previous subsection, it is surprising that Patient 2 is assigned to Subgroup 1 (Figure~\ref{fig:pp2}). However, there is only a slight change in the density of the posterior membership probability from Patient 1 to Patient 2 (Figures~\ref{fig:pp1}, \ref{fig:pp2}), indicating a subtle effect of the functional tests on subgroup membership. A more noticeable result is obtained when the disease-specific measures are changed (Patient 1 to Patient 3), as shown by the posterior probability of assigning Patient 3 to Subgroup 2 (Figures~\ref{fig:pp1}, \ref{fig:pp3}). When both the functional tests and disease-specific measures are changed (Patient 4), a more certain subgroup assignment results (Figure~\ref{fig:pp4}). Given the overlap in the distributions of some measures for Subgroup 1 and Subgroup 2, omission of these variables (UPDRS I and FRB) does not change the subgroup assignment, as shown for Patient 5.
It can be inferred from Figure~\ref{fig:pp} that the contribution of the disease-specific measures to the subgrouping (and thus to predicting fall frequency) is stronger than that of the functional tests, as the change in the empirical density from Patient 1 (Figure~\ref{fig:pp1}) to Patient 3 (Figure~\ref{fig:pp3}) is greater than the change for Patient 2 (Figure~\ref{fig:pp2}). The effect of the functional tests is noticeable when they are adjusted together with the disease-specific measures (with a more compact density in Figure~\ref{fig:pp4} compared to Figure~\ref{fig:pp3}).
As for the individual instrument measures, changing the value of one measure did not greatly change the posterior probability of subgroup membership (graphs not shown). However, to assess the effect of the functional tests, the odds of being in Subgroup 1 or in Subgroup 2 were calculated as the selected variable changed relative to the reference value. Balance and gait, represented by the Tinetti total score, were modified over all possible values (1 to 28), and the odds are presented in Figure~\ref{fig:oddsclust12}.
It is shown that a patient having good balance and gait (high Tinetti total score) is more likely to be in Subgroup 1 than Subgroup 2, and vice versa. As the Tinetti total score increases, the odds of being in Subgroup 1 increases faster when other measures are set to Subgroup 2 means (Figure~\ref{fig:pp12c}) than when they are set to Subgroup 1 means (Figure~\ref{fig:pp12a}). The opposite trend occurs for the odds of being in Subgroup 2 (Figures~\ref{fig:pp12b} and \ref{fig:pp12d}). This demonstrates the association between the Tinetti and other variables used in this model.
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{.23\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.5]{oddsclust12a.png}
\caption{}
\label{fig:pp12a}
\end{subfigure}
~
\begin{subfigure}[b]{.23\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.5]{oddsclust12b.png}
\caption{}
\label{fig:pp12b}
\end{subfigure}
~
\begin{subfigure}[b]{.23\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.5]{oddsclust12c.png}
\caption{}
\label{fig:pp12c}
\end{subfigure}
~
\begin{subfigure}[b]{.23\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.5]{oddsclust12d.png}
\caption{}
\label{fig:pp12d}
\end{subfigure}
\caption{Odds of being in Subgroup 1 (a) and Subgroup 2 (b) when Tinetti total score is increased and all other variables were set to Subgroup 1 means. When other variables were set to Subgroup 2 means, the corresponding odds are as in (c) and (d).}
\label{fig:oddsclust12}
\end{figure}
When changing more variables, adding the timed up and go (TUG) score, the changes in subgroup membership are more noticeable, as represented by the odds depicted in Figure~\ref{fig:oddsTinettiTUG}. As the Tinetti total and TUG scores reflect a better condition for the patient (higher Tinetti total, lower TUG), the odds of being in Subgroup 1 increased greatly even though the other variables were set to Subgroup 2 means, representing a worse condition than Subgroup 1 (Figure \ref{fig:oddsTinettiTUG}c compared to Figure \ref{fig:oddsTinettiTUG}a). Under the same scenario, the odds of being in Subgroup 2 also diminished rapidly (Figure \ref{fig:oddsTinettiTUG}b compared to Figure \ref{fig:oddsTinettiTUG}d).
\begin{figure}[h]
\centering\includegraphics[width=0.9\linewidth]{fig8oddstinettitug.png}
\caption{Odds of being in Subgroup 1 (a) and Subgroup 2 (b) when the Tinetti total score is increased and TUG score is decreased, all other variables were set to Subgroup 1 means. When other variables were set to Subgroup 2 means, the corresponding odds are as in (c) and (d).}
\label{fig:oddsTinettiTUG}
\end{figure}
\subsubsection{Sensitivity analysis}
Two distributions were chosen for the prior of the mixture weight $\pi$: a Dirichlet distribution and a multinomial logit model with parameters following a Normal distribution.
For an FMM with three classes, an uninformative Dirichlet distribution is given by $\alpha=(1,1,1)$. Denote this as Prior 1. For Prior 2, the following multinomial logit model was considered
\begin{equation}
\pi_k=\frac{\exp(\gamma_k)}{\sum_{l=1}^{3} \exp(\gamma_l)},
\end{equation}
with $\gamma_1=0, \gamma_k \sim N(0,1)$ for $k=2,3$. A condition of $\gamma_1 \leq \gamma_2 \leq \gamma_3$ was set to address the problem of label switching. The posterior distribution obtained given each prior is shown in Figure \ref{fig:SensAnalysis}.
Both priors produced similar posterior estimates for the mixture weights, indicated by the overlapping graphs of the posterior density of the mixture weights. This shows that the results are robust to the choice of prior for the mixture weight.
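As a concrete illustration of Prior 2, the mapping from the Normal parameters $\gamma_k$ to the mixture weights can be sketched in a few lines of numpy; the sampling scheme and random seed below are purely illustrative, not those used in the analysis:

```python
import numpy as np

def mixture_weights(gamma):
    """Multinomial-logit mixture weights: pi_k = exp(gamma_k) / sum_l exp(gamma_l)."""
    g = np.asarray(gamma, dtype=float)
    e = np.exp(g - g.max())          # subtract max for numerical stability
    return e / e.sum()

# Prior 2 setup: gamma_1 fixed at 0, gamma_2 and gamma_3 drawn from N(0, 1),
# with the ordering gamma_1 <= gamma_2 <= gamma_3 to avoid label switching
# (here enforced by sorting absolute draws, for illustration only).
rng = np.random.default_rng(0)
gamma = np.concatenate(([0.0], np.sort(np.abs(rng.normal(size=2)))))
pi = mixture_weights(gamma)
```

Because the logit map is monotone, the ordering constraint on $\gamma$ carries over to the weights, which is what resolves the label-switching ambiguity.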
\begin{figure}[htp]
\centering
\begin{subfigure}[b]{.45\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.36]{ch5_sensAnalysis_piCl1v4.png}
\caption{}
\label{fig:ch5_sensAnalysis_piCl1}
\end{subfigure}
~
\begin{subfigure}[b]{.45\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.36]{ch5_sensAnalysis_piCl2v4.png}
\caption{}
\label{fig:ch5_sensAnalysis_piCl2}
\end{subfigure}
~
\begin{subfigure}[b]{.5\textwidth}
\centering
\includegraphics[keepaspectratio=true,scale=0.4]{ch5_sensAnalysis_piCl3v4.png}
\caption{}
\label{fig:ch5_sensAnalysis_piCl3}
\end{subfigure}
\caption{Posterior density of mixture weights for FMM with three subgroups: (a) Subgroup 1, (b) Subgroup 2, and (c) Subgroup 3. The red lines represent the Dirichlet prior and the green lines represent the multinomial logit prior.}
\label{fig:SensAnalysis}
\end{figure}
\section{Discussion}
\label{Sec:discussion}
In this paper, we have demonstrated generating profiles for subgroups of patients with early stages of PD via profile regressions. Variables measuring functional and disease specific assessments were assumed to follow a Gaussian distribution, while a Poisson distribution was assumed for fall frequency.
Three subgroups representing non- or single-fallers, low frequency fallers, and high frequency fallers were formed. Profiles characterizing each subgroup were also generated. Distinctive characteristics were identified between non- or single-fallers and low frequency fallers, while this was not the case for the high frequency fallers. For the high frequency fallers, this result needs cautious interpretation due to the small number of cases assigned to this subgroup.
The subgroups with a higher fall frequency have wide coverage for each of the measures. This indicates that, in the early stage of PD, patients with recurrent falls cannot be differentiated from those with a low frequency of falls. Even for the duration diagnosed, there are patients who were diagnosed for a shorter time but who experienced more falls (being in Subgroup 3) than other patients. This suggests the possibility of unknown factors that need to be examined to shed light on this problem. Further, it might imply that once a patient has experienced a fall, they are prone to experience recurrent falls, regardless of other conditions represented by the above measures.
Further, this suggests that it might be useful to identify risk factors associated with the first fall, as potentially it may be more beneficial to try to prevent that first fall rather than repeated falls. Variables that differentiate Subgroup 1 from the other subgroups offer potential to be associated with the occurrence of the first fall. The posterior density of the subgroup means reveals that the FOG, PIGD, Tinetti total and BBS are the potential risk factors for the first fall, with the Tinetti total as the strongest associated factor.
Modifying only one variable did not change the subgroup assignment greatly. Thus, to assess the contribution of measures to subgroup membership, we modified groups of variables based on the disease-specific or functional test measures. Functional test variables can differentiate well between non- or single-fallers and low frequency fallers, showing the usefulness of these measures in subgrouping the patients. However, changing only these variables had little effect on the subgroup assignment without a corresponding change in the disease-specific measures, as has been demonstrated in Section \ref{sec:ch5_results}. This confirmed the order of importance of the variables, where functional tests are needed to enhance the information from disease-specific measures.
Furthermore, functional tests can differentiate low frequency fallers from high frequency fallers better than disease-specific measures, as the posterior probability distribution of subgroup membership for the Tinetti total and Berg balance score had the least overlap between the two subgroups. The model with Tinetti balance and Tinetti gait as replacements to the Tinetti total was also fitted (results not shown). Tinetti balance separated Subgroup 1 and Subgroup 2 very well, but did not differentiate Subgroup 3 from the other two subgroups. Interestingly, Tinetti gait does not separate Subgroup 1 and Subgroup 2 as well as Tinetti balance (there was an overlap between the posterior probability distribution between the two subgroups). Instead, it had the least overlap between Subgroup 2 and Subgroup 3 compared to all other measures, indicating the ability of Tinetti gait to identify high frequency fallers more accurately than other variables.
Upon examining the contribution of individual variables towards subgroup membership, the Tinetti was chosen as the measure to modify. Different rates of change on the odds of being in either Subgroup 1 or Subgroup 2 when other variables were set differently indicates the dependency of Tinetti's effect on other variables. When the patient's condition was relatively healthy (as other variable values were set to Subgroup 1 means), a worsening in balance and gait did not change the subgroup membership. However, when the condition becomes worse (changing the values to Subgroup 2 means), the change in the Tinetti produced a noticeable effect on the subgroup membership.
This might imply that the variables change simultaneously, that is, a change in one variable would also imply a change in other variables. Thus we cannot infer the impact of one variable alone in assessing the patients' conditions. Another implication is that as the patients' conditions degenerate, slight changes of one variable could impact on their subgroup membership (which implies an increased risk of falls).
For the Tinetti, the change in subgroup membership is explained more by the balance test than the gait test for the subgroups of non- or single-fallers and low frequency fallers. The gait test gave a better explanation than the balance test when comparing low and high frequency fallers. Adding the TUG to the modification supported these results.
An early study by \cite{Janvin2003} generated profiles for PD patients based on neuropsychological measurements, while subgroups based on the tendency towards delusions and hallucinations were examined in \cite{Amar2014}. A recent profiling study by \cite{Adwani2016} focused on the cognitive aspect of the disease. The results from this study provide insight into the composition of the PD population. The inclusion of fall frequency in tandem with other clinical measurements allows profiles to be generated and assessed against trends and characteristics observed in other variables. This provides additional insight into understanding the variability within a PD population.
\section{Summary}
\label{Sec:summary}
Through this research study, we have identified three subgroups of patients with early stage PD, based on fall frequency, disease-specific measurements, and functional test measurements. Profiles for each subgroup were generated. Inclusion of fall frequency provides insight into each subgroup, namely non- or single-fallers (Subgroup 1), low frequency fallers (Subgroup 2), and high frequency fallers (Subgroup 3). Thus, a tailored treatment could be recommended based on these profiles to help prevent the deterioration of a patient's condition (i.e. further falls).
Using disease-specific variables, a clear differentiation of Subgroup 1 and Subgroup 2 is observed. However, these variables could not differentiate Subgroup 2 and Subgroup 3 very well. On the other hand, functional test variables were able to differentiate the three subgroups clearly. However, a comparison of disease-specific variables and functional test variables in affecting the subgroup assignment of patients showed that the former have a higher contribution to the subgrouping than the latter. Thus, it is inferred that disease-specific measures are significant and sensitive enough to differentiate PD patients with no or single falls from patients with a low fall frequency. Once patients have experienced at least one fall, functional tests complement the disease-specific measures to distinguish low frequency fallers from high frequency fallers. Thus, a tailored treatment focusing on disease-specific factors could be designed to prevent the first fall, or to prevent further falls for patients who have just had one fall. For patients with recurrent falls, fall-preventive treatment could be based on functional test factors.
\section{Introduction}
Person re-identification (ReID) aims to match people at different places (and times) across non-overlapping cameras, and has become increasingly popular in recent years due to its wide range of applications, such as public security, criminal investigation, and surveillance. Deep learning approaches have been shown to be more effective than traditional methods \cite{Ahmed,Chen,Nips_huang,PR_ding}, but many challenging problems remain because of variations in human pose, lighting, background, occluded body regions and camera viewpoints.
Video-based person ReID approaches consist of feature representation and feature aggregation, and feature aggregation has attracted more attention in recent works. Although most methods \cite{jiyanggao} (see Fig. 1(A)) propose to use average or maximum temporal pooling to aggregate features, they do not take full advantage of temporal dependency information. To this end, RNN-based methods \cite{McLaughlin_2016_CVPR} (see Fig. 1(B)) were proposed to aggregate the temporal information among video frames. However, RNN-based methods treat all frames equally and therefore cannot focus on the most discriminative frames. Temporal attention methods \cite{Liu_2017_CVPR}, as shown in Fig. 1(C), were then proposed to extract the discriminative frames. In conclusion, the methods mentioned above cannot tackle temporal dependency, attention and spatial misalignment simultaneously. Although a few methods \cite{Xu_2017_ICCV} use a jointly attentive spatial-temporal scheme, such networks are hard to optimize under severe occlusion.
\begin{figure}[t]
\centering
\includegraphics[width=8.5cm]{methods.pdf}
\caption{Three temporal modeling methods (A: temporal pooling, B: RNN, C: temporal attention) based on an image-level feature extractor (typically a 2D CNN). For temporal pooling, average or maximum pooling is used. For RNN, hidden state is used as the aggregated feature. For attention, spatial conv + FC is shown.}
\label{fig:example}
\end{figure}
In this paper, we propose a method to aggregate temporal-dependency features and tackle spatial misalignment problems using attention simultaneously as illustrated in Fig. 2. Inspired by the recent success of 3D convolutional neural networks on video action recognition \cite{3dconv,i3d}, we directly use it to extract spatial-temporal features in a sequence of video frames. It can integrate feature extraction and temporal modeling into one step. In order to capture long-range dependency, we embed the non-local block \cite{Wang_2018_CVPR,Dai_2017_ICCV} into the model to obtain an aggregate spatial-temporal representation. We summarize the contributions of this work in three-folds.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{allwork.pdf}
\caption{The overall architecture of the proposed method. 3D convolutions are used for track-level feature extractor. Non-local blocks are embedded into aggregate spatial-temporal features.}
\label{fig:example}
\end{figure}
\begin{enumerate}
\item
We first propose to use 3D convolutional neural network to extract the aggregate representation of spatial-temporal features, which is capable of discovering pixel-level information and relevance among video tracks.
\item
Non-local block, as a spatial-temporal attention strategy, explicitly solves the misalignment problem of deformed images. Simultaneously, the aggregative feature can be learned from video tracks by the temporal attentive scheme.
\item
Spatial attention and temporal attention are incorporated into an end-to-end 3D convolution model, which achieves significant performance compared to the existing state-of-the-art approaches on three challenging video-based ReID datasets.
\end{enumerate}
The rest of this paper is organized as follows. In Section 2, we discuss some related works. Section 3 introduces the details of the proposed approach. Experimental results on three public datasets will be given in Section 4. At last, we conclude this paper in Section 5.
\section{Related Work}
In this section, we first review some related works in person ReID, especially those video-based methods. Then we will discuss some related works about 3D convolution neural networks and non-local methods.
\subsection{Person Re-ID}
\noindent\textbf{Image-based person ReID} mainly focuses on feature fusion and alignment with some external information such as mask, pose, and skeleton, etc. Zhao \textit{et al.} \cite{Zhao_2017_CVPR} proposed a novel Spindle Net based on human body region guided multi-stage feature decomposition and tree-structured competitive feature fusion. Song \textit{et al.} \cite{Song_2018_CVPR} introduced the binary segmentation masks to construct synthetic RGB-Mask pairs as inputs, as well as a mask-guided contrastive attention model (MGCAM) to learn features separately from body and background regions. Suh \textit{et al.} \cite{sun_part_aligned} proposed a two-stream network that consists of appearance map extraction stream and body part map extraction stream, additionally a part-aligned feature map is obtained by a bilinear mapping of the corresponding local appearance and body part descriptors. These models all actually solve the person misalignment problem.
\noindent\textbf{Video-based person ReID} is an extension of image-based methods. Instead of pairs of images, the learning algorithm is given pairs of video sequences. The most important part is how to fuse temporal features from video tracks. Wang \textit{et al.} \cite{wang} aimed at selecting discriminative spatial-temporal feature representations; they first chose the frames with the maximum or minimum flow energy, which is computed from optical flow fields. In order to make full use of temporal information, McLaughlin \textit{et al.} \cite{McLaughlin_2016_CVPR} built a CNN to extract features of each frame and then used an RNN to integrate the temporal information between frames; the average of the RNN cell outputs is adopted to summarize the output feature. Similar to \cite{McLaughlin_2016_CVPR}, Yan \textit{et al.} \cite{Yan_rnn} also used RNNs to encode video tracks into sequence features, with the final hidden state used as the video representation. RNN-based methods treat all frames equally, and so cannot focus on the more discriminative frames. Liu \textit{et al.} \cite{Liu_2017_CVPR} designed a Quality Aware Network (QAN), which is essentially an attention-weighted average, to aggregate temporal features; the attention scores are generated from frame-level feature maps. In 2016, Zheng \textit{et al.} \cite{zheng2016mars} built a new dataset, MARS, for video-based person ReID, which has become the standard benchmark for this task.
\subsection{3D ConvNets}
3D CNNs are well-suited for spatial-temporal feature learning. Ji \textit{et al.} \cite{3dconv} first proposed a 3D CNN model for action recognition. Tran \textit{et al.} \cite{Tran_2015_ICCV} proposed the C3D network, which has been applied to various video analysis tasks. Despite 3D CNNs' ability to effectively capture the appearance and motion information encoded in multiple adjacent frames, they are difficult to train because of their large number of parameters. More recently, Carreira \textit{et al.} \cite{i3d} proposed the Inflated 3D (I3D) architecture, which initializes the model weights by inflating the pre-trained weights from ImageNet \cite{ILSVRC15} over the temporal dimension; this significantly improves the performance of 3D CNNs and is the current state-of-the-art on the Kinetics dataset \cite{kinetics}.
\begin{figure*}[t]
\centering
\includegraphics[width=17cm]{framework.pdf}
\caption{Illustration of the networks we propose in this paper; (a) illustrates the overall architecture, which consists of 3D convolutions, 3D pooling, 3D residual blocks, a bottleneck and non-local blocks; (b) shows the bottleneck; (c) illustrates the residual blocks; separable convolutions are shown in (d).}
\label{fig:example}
\end{figure*}
\subsection{Self-attention and Non-local}
Non-local technique \cite{non-local-image} is a classical digital image denoising algorithm that computes a weighted average of all pixels in an image. As attention models grew in popularity, Vaswani \textit{et al.} \cite{NIPS2017_attention} proposed a self-attention method for machine translation that computes the response at a position in a sequence (\textit{e.g.,} a sentence) by attending to all positions and taking their weighted average in an embedding space. Moreover, Wang \textit{et al.} \cite{Wang_2018_CVPR} proposed a non-local architecture to bridge self-attention in machine translation to the more general class of non-local filtering operations. Inspired by these works, we embed non-local blocks into the I3D model to capture long-range dependencies in space and time for video-based ReID. Our method demonstrates better performance by aggregating the discriminative spatial-temporal features.
\section{The Proposed Approach}
In this section, we introduce the overall system pipeline and detailed configurations of the spatial-temporal modeling methods. The whole system could be divided into two important parts: extracting spatial-temporal features from video tracks through 3D ResNet, and integrating spatial-temporal features by the non-local blocks.
A video sequence is first divided into consecutive non-overlapping tracks $ \{ c_k \} $, and each track contains $N$ frames. Each track is represented as
\begin{equation}
c_k = \{x_t | x_t \in \mathbb{R}^{H \times W} \}_{t=1}^N ,
\end{equation}
where $N$ is the length of $c_k$, and $H$, $W$ are the height and width of the images, respectively. As shown in Fig. 3(a), the proposed method directly accepts a whole video track as the input and outputs a $d$-dimensional feature vector $f_{c_k}$. At the same time, non-local blocks are embedded into the 3D residual blocks (Fig. 3(c)) to integrate spatial and temporal features, which can effectively learn the pixel-level relevance between frames and learn hierarchical feature representations.
Finally, average pooling is followed by a bottleneck block (Fig. 3(b)) to speed up training and improve performance. A fully-connected layer is added on top to learn the identity features. A Softmax cross-entropy with label smoothing, proposed by Szegedy \textit{et al.} \cite{Szegedy_2016_CVPR}, is built on top of the fully connected layer to supervise the training of the whole network in an end-to-end fashion. At the same time, the Batch Hard triplet loss \cite{triplet} is employed in the metric learning step. During testing, the final similarity between $ c_i $ and $ c_j $ can be measured by the L2 distance or any other distance function.
In the next parts, we will explain each important component in more detail.
\subsection{Temporally Separable Inflated 3D Convolution}
In 2D CNNs, convolutions are applied on 2D feature maps to compute features from the spatial dimensions only. When applied to video-based problems, it is desirable to capture the temporal information encoded in multiple contiguous frames. 3D convolutions are achieved by convolving a 3D kernel on the cube formed by stacking multiple consecutive frames together. In other words, 3D convolutions can directly extract a whole representation for a video track, while 2D convolutions first extract a sequence of image-level features which are then aggregated into a single feature vector. Formally, the value at position $ (x, y, z) $ on the $j$-th feature map in the $i$-th layer, $V_{ij}^{xyz}$, is given by
\begin{equation}
V_{ij}^{xyz} = b_{ij} + \sum_m \sum_{p=0}^{P_i - 1} \sum_{q=0}^{Q_i - 1} \sum_{r=0}^{R_i - 1} W_{ijm}^{pqr} V_{(i - 1)m}^{(x + p) (y+q)(z+r)},
\end{equation}
where $P_i$ and $Q_i$ are the height and width of the kernel, $R_i$ is the size of the 3D kernel along the temporal dimension, $W_{ijm}^{pqr}$ is the $(p, q, r)$-th value of the kernel connected to the $m$-th feature map in the previous layer $V_{i - 1}$, and $b_{ij}$ is the bias.
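To make the formula above concrete, the following numpy sketch implements the 3D convolution directly for a single input feature map ($m = 1$), with no stride and a ``valid'' output region; the shapes and values are toy choices for illustration only:

```python
import numpy as np

def conv3d_naive(V_prev, W, b):
    """Direct implementation of the 3D convolution sum for one input
    feature map: out[x, y, z] = b + sum_{p,q,r} W[p,q,r] * V[x+p, y+q, z+r]."""
    P, Q, R = W.shape                       # kernel height, width, temporal size
    H, Wd, T = V_prev.shape
    out = np.zeros((H - P + 1, Wd - Q + 1, T - R + 1))
    for x in range(out.shape[0]):
        for y in range(out.shape[1]):
            for z in range(out.shape[2]):
                out[x, y, z] = b + np.sum(
                    W * V_prev[x:x + P, y:y + Q, z:z + R])
    return out

# A 3x3x3 all-ones kernel over an all-ones 8x8x4 volume: every valid
# output position sums 27 input values.
V = np.ones((8, 8, 4))
K = np.ones((3, 3, 3))
out = conv3d_naive(V, K, b=0.0)
```

The triple loop is written for clarity, not speed; a real implementation would use an optimized 3D convolution primitive.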
We adopt 3D ResNet-50 \cite{resnet}, which uses 3D convolution kernels with the ResNet architecture, to extract spatial-temporal features. However, C3D-like 3D ConvNets \cite{DuTran} are hard to optimize because of their large number of parameters. In order to address this problem, we inflate all the 2D ResNet-50 convolution filters with an additional temporal dimension. For example, a 2D $k \times k$ kernel can be inflated as a 3D $t \times k \times k$ kernel that spans $t$ frames. We initialize all 3D kernels with 2D kernels (pre-trained on ImageNet): each of the $t$ planes in the $t \times k \times k$ kernel is initialized by the pre-trained $k \times k$ weights, rescaled by $1 / t$. According to the experiments of Xie \textit{et al.} \cite{xie2017rethinking}, temporally separable convolution is a simple way to boost performance on a variety of video understanding tasks. We therefore replace each 3D convolution with two consecutive convolution layers: a 1D convolution purely on the temporal axis, followed by a 2D convolution to learn spatial features in the residual block, as shown in Fig. 3(d). Meanwhile, we pre-train the 3D ResNet-50 on Kinetics \cite{kinetics} to enhance the generalization performance of the model. We replace the final classification layer with person identity outputs. The model takes $T$ consecutive frames (\textit{i.e.} a video track) as the input, and the layer output before the final classification layer is used as the video track identity representation.
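The inflation step can be sketched with a toy numpy example (the kernel values below are illustrative, not actual ResNet weights): each of the $t$ temporal planes is a copy of the 2D kernel rescaled by $1/t$, so a temporally constant input produces exactly the same response as the original 2D filter:

```python
import numpy as np

def inflate_2d_kernel(w2d, t):
    """Inflate a pre-trained 2D k x k kernel to a t x k x k 3D kernel by
    stacking it t times along the temporal axis and rescaling by 1/t."""
    return np.stack([w2d / t] * t, axis=0)

w2d = np.arange(9, dtype=float).reshape(3, 3)   # toy 3x3 spatial kernel
w3d = inflate_2d_kernel(w2d, t=3)

# Sanity check on a static clip: the 3D response at one position equals
# the 2D response on a single frame.
patch2d = np.ones((3, 3))
clip3d = np.ones((3, 3, 3))                     # same frame repeated 3 times
resp2d = np.sum(w2d * patch2d)
resp3d = np.sum(w3d * clip3d)
```

This preservation of responses on static inputs is what makes the ImageNet pre-trained weights a sensible initialization for the 3D model.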
\subsection{Non-local Attention Block}
A non-local attention block is used to capture long-range dependency in space and time dealing with occlusion and misalignment. We first give a general definition of non-local operations and then provide the 3D non-local block instantiations embedded into the I3D model.
Following the non-local methods \cite{non-local-image} and \cite{Wang_2018_CVPR}, the generic non-local operation in deep neural networks can be given by
\begin{equation}
\begin{array}{l}
\displaystyle y_i = \frac{1}{\mathcal{C}(x)} \sum_{\forall j} f(x_i, x_j) g(x_j).
\end{array}
\label{eq:6}
\end{equation}
Here $i$ is the index of an output position (in space-time) whose response is to be computed, and $j$ enumerates all possible positions in the input signal $x$ (image, sequence, video; often their features); the output signal $y$ has the same size as $x$. A pairwise function $f$ computes a scalar between $i$ and all $j$, representing the attention score between position $i$ in the output and every position $j$ in the input. The unary function $g$ computes a representation of the input signal at position $j$ in an embedded space. Finally, the response is normalized by a factor $\mathcal{C}(x)$, as shown in Fig. 4.
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{1.pdf}
\caption{An illustration of generic non-local methods. $x$ is input signal (image, sequence, video or their features). $y$ is output signal.}
\label{fig:example}
\end{figure}
Because all positions $( \forall j)$ are considered in the operation in Eq.(3), this is called non-local. In comparison, a standard 1D convolutional operation sums up the weighted input in a \textit{local} neighborhood (\textit{e.g.}, $i-1 \leq j \leq i+1$ with kernel size 3), and a recurrent operation at time $i$ is often based only on the current and the latest time step (\textit{e.g.}, $j=i$ or $i - 1$).
There are several versions of $f$, such as the Gaussian, embedded Gaussian, and dot product. According to the experiments in \cite{Wang_2018_CVPR}, the non-local operation is not sensitive to these choices. We choose the embedded Gaussian as the function $f$, which is given by
\begin{equation}
f(x_i, x_j) = e^{\theta (x_i)^T \phi (x_j)}
\end{equation}
Here $x_i$ and $x_j$ are as in Eq. (3), and $\theta (x_i) = W_{\theta} x_i$ and $\phi (x_j) = W_{\phi} x_j$ are two linear embeddings. Setting $\mathcal{C} (x) = \sum_{\forall j} f(x_i, x_j)$ turns the normalization into a softmax along dimension $j$, so we obtain the self-attention form
\begin{equation}
y_i = \sum_{\forall j} \frac{e^{\theta (x_i)^T \phi (x_j)}}{\sum_{\forall m} e^{\theta (x_i)^T \phi (x_m)}} g(x_j)
\end{equation}
A non-local operation is very flexible, which can be easily incorporated into any existing architecture. The non-local operation can be wrapped into a non-local block that can be embedded into the earlier or later part of the deep neural network. We define a non-local block as:
\begin{equation}
z_i = W_z y_i + x_i
\end{equation}
where $y_i$ is given in Eq.(3) and "$+x_i$" means a residual connection \cite{resnet}. We can plug a new non-local block into any pre-trained model, without breaking its initial behavior (\textit{e.g.}, if $W_z$ is initialized as zero) which can build a richer hierarchy architecture combining both global and local information.
In ResNet3D-50, we use the 3D spacetime non-local block illustrated in Fig. 5. The pairwise computation in Eq.(4) can simply be done by matrix multiplication. We discuss the detailed implementation of non-local blocks in the next section.
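As a hedged sketch of how the pairwise computation reduces to matrix multiplication, the following numpy code implements an embedded-Gaussian non-local block over a flattened spacetime signal; the $1 \times 1 \times 1$ convolutions are modeled as plain projection matrices and all dimensions are toy values. Zero-initializing $W_z$ makes the block an identity mapping:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_block(x, W_theta, W_phi, W_g, W_z):
    """Embedded-Gaussian non-local block.
    x: (N, C) with N = T*H*W flattened spacetime positions; the W_* matrices
    stand in for the 1x1x1 convolutions. Returns z = W_z y + x (residual)."""
    theta, phi, g = x @ W_theta, x @ W_phi, x @ W_g
    attn = softmax(theta @ phi.T, axis=1)   # row i: attention of i over all j
    y = attn @ g                            # weighted average of g(x_j)
    return y @ W_z.T + x

rng = np.random.default_rng(0)
N, C, Ce = 6, 4, 2                          # 6 spacetime positions, 4 channels
x = rng.normal(size=(N, C))
W_theta, W_phi, W_g = (rng.normal(size=(C, Ce)) for _ in range(3))
W_z = np.zeros((C, Ce))                     # zero-init W_z => identity mapping
z = nonlocal_block(x, W_theta, W_phi, W_g, W_z)
```

With $W_z$ initialized to zero the output equals the input, which is exactly the property that allows the block to be inserted into a pre-trained network without disturbing its initial behavior.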
\begin{figure}
\centering
\includegraphics[width=8cm]{2.pdf}
\caption{The 3D spacetime non-local block. The feature maps are shown as the shape of their tensors, \textit{e.g.}, $1024 \times T \times H \times W$ for 1024 channels (it can be different depending on networks). "$\otimes$" denotes matrix multiplication, and "$\oplus$" denotes element-wise sum. The softmax operation is performed on each row. The blue boxes denote $1 \times 1 \times 1$ convolutions. We show the Embedded Gaussian version, with a bottleneck of 512 channels.}
\label{fig:example}
\end{figure}
\subsection{Loss Functions}
We use triplet loss function with hard mining \cite{triplet} and a Softmax cross-entropy loss function with label smoothing regularization \cite{Szegedy_2016_CVPR}.
The triplet loss function we use was originally proposed in \cite{triplet} and named the Batch Hard triplet loss function. To form a batch, we randomly sample $P$ identities and randomly sample $K$ tracks for each identity (each track contains $T$ frames); in total there are $P \times K$ tracks in a batch. For each anchor sample $a$ in the batch, the hardest positive and the hardest negative samples within the batch are selected when forming the triplets for computing the loss $L_{triplet}$:
\begin{equation}
\begin{aligned}
L_{triplet} = \overbrace{\sum_{i=1}^P \sum_{a=1}^K}^{all\ anchors} [m + \overbrace{\max \limits_{p=1 \cdots K} D(f_{a}^i, f_{p}^i)}^{hardest\ positive} \\ - \underbrace{\min \limits_{\substack{j = 1 \cdots P \\ n = 1\cdots K \\ j \neq i}} D(f_{a}^i, f_{n}^j) }_{hardest \ negative}]_{+}
\end{aligned}
\end{equation}
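A minimal numpy sketch of the batch-hard selection above; the margin value and toy features are illustrative only:

```python
import numpy as np

def batch_hard_triplet_loss(feats, labels, margin=0.3):
    """Batch-hard triplet loss: for every anchor, take the hardest (farthest)
    positive and the hardest (closest) negative inside the batch."""
    # Pairwise Euclidean distance matrix D[a, b].
    D = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=2)
    same = labels[:, None] == labels[None, :]
    loss = 0.0
    for a in range(len(feats)):
        hardest_pos = D[a][same[a]].max()        # includes D[a, a] = 0
        hardest_neg = D[a][~same[a]].min()
        loss += max(margin + hardest_pos - hardest_neg, 0.0)
    return loss / len(feats)

# Toy batch: P = 2 identities, K = 2 tracks each, 2-D features.
feats = np.array([[0.0, 0.0], [0.1, 0.0],    # identity 0
                  [5.0, 0.0], [5.1, 0.0]])   # identity 1
labels = np.array([0, 0, 1, 1])
loss = batch_hard_triplet_loss(feats, labels, margin=0.3)
```

With the two identities well separated as above, every anchor's hardest negative is farther than its hardest positive plus the margin, so the hinge is inactive and the loss is zero.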
The original Softmax cross-entropy loss function is given by:
\begin{equation}
L_{softmax} = -\frac{1}{P \times K} \sum_{i=1}^P \sum_{a=1}^K p_{i,a} \log q_{i, a}
\end{equation}
where $p_{i,a}$ is the ground truth identity and $q_{i,a}$ is the prediction of sample $\{ i, a\}$. \textit{Label-smoothing regularization} is applied to regularize the model and improve its generalization:
\begin{equation}
L^{'}_{softmax} = -\frac{1}{P \times K} \sum_{i=1}^P \sum_{a=1}^K p_{i,a} \log ( (1 - \epsilon) q_{i, a} + \frac{\epsilon}{N})
\end{equation}
where $N$ is the number of classes. This can be considered as a mixture of the predicted distribution $ q_{i, a}$ and the uniform distribution $u(x) = \frac{1}{N}$.
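The smoothed loss for a single sample can be sketched as follows (the logits and $\epsilon$ are illustrative values); with $\epsilon = 0$ it reduces to the ordinary cross-entropy:

```python
import numpy as np

def smoothed_ce(logits, target, eps=0.1):
    """Softmax cross-entropy with label smoothing: the predicted
    distribution is mixed with the uniform distribution u = 1/N."""
    N = logits.shape[-1]
    z = logits - logits.max()                    # numerically stable softmax
    q = np.exp(z) / np.exp(z).sum()
    q_smooth = (1 - eps) * q + eps / N
    return -np.log(q_smooth[target])

logits = np.array([4.0, 1.0, 0.0])
loss_plain = smoothed_ce(logits, target=0, eps=0.0)   # ordinary cross-entropy
loss_smooth = smoothed_ce(logits, target=0, eps=0.1)
```

For a confident, correct prediction the smoothed loss is slightly larger than the plain cross-entropy, which is the regularizing effect: the model is discouraged from becoming overconfident.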
The total loss L is the combination of these two losses.
\begin{equation}
L = L^{'}_{softmax} + L_{triplet}
\end{equation}
\section{Experiments}
We evaluate our proposed method on three public video datasets, including iLIDS-VID \cite{wang}, PRID-2011 \cite{prid} and MARS \cite{zheng2016mars}. We compare our method with the state-of-the-art methods, and the experimental results demonstrate that our proposed method can enhance the performance of both feature learning and metric learning and outperforms previous methods.
\subsection{Datasets}
The basic information of the three datasets is listed in Table 1 and some samples are displayed in Figure [3].
\renewcommand{\arraystretch}{1.1}
\begin{table}
\centering
\footnotesize
\caption{The basic information of three datasets to be used in our experiments.}
\begin{tabular}{llll}
\hline
Datasets & iLIDS-VID & PRID2011 & MARS\\
\hline
$\#$identities & 300 & 200 & 1,261 \\
$\#$track-lets & 600 & 400 & 21K \\
$\#$boxes & 44K & 40K & 1M \\
$\#$distractors & 0 & 0 & 3K \\
$\#$cameras & 2 & 2 & 6 \\
$\#$resolution & $64 \times 128$ & $64 \times 128$ & $128 \times 256$ \\
$\#$detection & hand & hand & algorithm \\
$\#$evaluation & CMC & CMC & CMC $\&$ mAP \\
\hline
\end{tabular}
\end{table}
\noindent\textbf{iLIDS-VID} dataset consists of 600 video sequences of 300 persons. Each image sequence has a variable length ranging from 23 to 192 frames, with an average of 73. This dataset is challenging due to clothing similarities among people and random occlusions.
\noindent\textbf{PRID-2011} dataset contains 385 persons in camera A and 749 in camera B. 200 identities appear in both cameras, constituting 400 image sequences. The length of each image sequence varies from 5 to 675 frames. Following \cite{zheng2016mars}, sequences with more than 21 frames are selected, leading to 178 identities.
\noindent\textbf{MARS} dataset is a newly released dataset consisting of 1,261 pedestrians captured by at least 2 cameras. The bounding boxes are generated by classic detection and tracking algorithms (DPM detector) \cite{dpm}, yielding 20,715 person sequences. Among them, 3,248 sequences are of quite poor quality due to the failure of detection or tracking, significantly increasing the difficulty of person ReID.
\subsection{Implementation Details and Evaluation Metrics}
\textbf{Training.} We use ResNet3D-50 \cite{resnet3d} as our backbone network. Following the experiments in \cite{Wang_2018_CVPR}, five non-local blocks are inserted right before the last residual block of a stage: three blocks into $res_4$ and two blocks into $res_3$, at every other residual block. Our models are pre-trained on Kinetics \cite{kinetics}; we also compare models with different pre-trained weights, and the details are described in the next section.
Our implementation is based on the publicly available PyTorch \cite{pytorch} framework. All person ReID models in this paper are trained and tested on Linux with GTX TITAN X GPUs. During training, eight-frame input tracks are formed by sampling every eighth frame from 64 consecutive frames at a random starting point. The spatial size is $256 \times 128$ pixels, randomly cropped from scaled videos whose size is randomly enlarged by up to 1/8. The model is trained on an eight-GPU machine and each GPU holds 16 tracks in a mini-batch (in total a mini-batch of 128 tracks). To train with the hard-mining triplet loss, each mini-batch contains 32 identities with 4 tracks per identity, and one epoch iterates over all identities. The bottleneck consists of a fully connected layer, batch normalization, leaky ReLU with $\alpha=0.1$ and dropout with a $0.5$ drop ratio. The model is trained for 300 epochs in total, starting with a learning rate of 0.0003 and reducing it by exponential decay with decay rate 0.001 from 150 epochs. Adaptive Moment Estimation (Adam) \cite{adam} is adopted with a weight decay of 0.0005.
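The batch composition above is designed for batch-hard triplet mining. A minimal NumPy sketch of that mining step is given below; it is illustrative only, and the margin value and feature dimensions are assumptions rather than our training configuration.

```python
import numpy as np

def batch_hard_triplet_loss(features, labels, margin=0.3):
    """For each anchor track, take the farthest same-identity track as
    the hardest positive and the closest different-identity track as
    the hardest negative, then apply the hinge with a margin."""
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)  # pairwise Euclidean distances
    same = labels[:, None] == labels[None, :]
    losses = []
    for i in range(len(labels)):
        hardest_pos = dist[i][same[i]].max()
        hardest_neg = dist[i][~same[i]].min()
        losses.append(max(0.0, margin + hardest_pos - hardest_neg))
    return float(np.mean(losses))

# A toy mini-batch of 4 identities x 4 tracks (the paper uses 32 x 4).
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(4), 4)
features = rng.normal(size=(16, 8)) + 3.0 * labels[:, None]
loss = batch_hard_triplet_loss(features, labels)
```

With well-separated identity clusters the loss approaches zero; hard mining keeps only the most informative triplet per anchor.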
The method in \cite{kaiming} is adopted to initialize the weight layers introduced in the non-local blocks. A BatchNorm layer is added right after the last $1 \times 1 \times 1$ layer that represents $W_z$; we do not add BatchNorm to other layers in a non-local block. The scale parameter of this BatchNorm layer is initialized to zero, following \cite{large_minibatch}. This ensures that the initial state of the entire non-local block is an identity mapping, so it can be inserted into any pre-trained network while maintaining its initial behavior.
\renewcommand{\arraystretch}{1.1}
\begin{table}
\centering
\footnotesize
\caption{Component analysis of the proposed method: rank-1, rank-5, rank-10 accuracies and mAP are reported on the MARS dataset. \textbf{ResNet3D-50} is ResNet3D-50 pre-trained on Kinetics; \textbf{ResNet3D-50 NL} is the same model with non-local blocks added.}
\vspace{0.5em}
\label{table:headings}
\begin{tabular}{lllll}
Methods & CMC-1 & CMC-5 & CMC-10 & mAP\\
\hline
\textbf{Baseline} & 77.9 & 90.0 & 92.5 & 69.0 \\
\textbf{ResNet3D-50} & 80.0 & 92.2 & 94.5 & 72.6 \\
\textbf{ResNet3D-50 NL} & 84.3 & 94.6 & 96.2 & 77.0 \\
\hline
\end{tabular}
\end{table}
\textbf{Testing.} We follow the standard experimental protocols for testing on the datasets. For iLIDS-VID, the 600 video sequences of 300 persons are randomly split by identity into two halves for training and testing. For PRID2011, only the 400 video sequences of the first 200 persons, who appear in both cameras, are used, according to the experimental setup in previous methods \cite{McLaughlin_2016_CVPR}. For MARS, the predefined 8,298 sequences of 625 persons are used for training, while the 12,180 sequences of 636 persons are used for testing, including the 3,248 low-quality sequences in the gallery set.
We employ Cumulated Matching Characteristics (CMC) curve and mean average precision (mAP) to evaluate the performance for all the datasets. For ease of comparison, we only report the cumulated re-identification accuracy at selected ranks.
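Concretely, for a single query the two metrics can be computed from the ranked gallery as follows. This is a schematic fragment; the full protocol averages the per-query results over all queries (mAP being the mean of the per-query average precisions) and applies the standard cross-camera filtering.

```python
import numpy as np

def cmc_and_ap(ranked_matches):
    """Given a boolean array over the ranked gallery (True where the
    gallery identity matches the query), return the CMC curve and the
    average precision for this single query."""
    ranked_matches = np.asarray(ranked_matches, dtype=bool)
    # CMC: the rank-k entry is 1 if a correct match appears in the top k.
    cmc = np.zeros(len(ranked_matches))
    if ranked_matches.any():
        cmc[np.argmax(ranked_matches):] = 1.0
    # AP: mean of the precision values at the positions of correct matches.
    hits = np.flatnonzero(ranked_matches)
    precisions = [(i + 1) / (pos + 1) for i, pos in enumerate(hits)]
    ap = float(np.mean(precisions)) if precisions else 0.0
    return cmc, ap

# Example: correct matches at ranks 2 and 4 of a 5-element gallery.
cmc, ap = cmc_and_ap([False, True, False, True, False])
```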
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{figure6}
\caption{Example of the behavior of a non-local block to tackle misalignment problems. The starting point of arrows represents one $x_i$, and the ending points represent $x_j$. This visualization shows how the model finds related part on different frames.}
\label{fig:example}
\end{figure}
\subsection{Component Analysis of the Proposed Model}
In this part, we report the performance of different components in our models.
\subsubsection{3D CNN and Non-local.} Results for the baseline method, ResNet3D-50 and ResNet3D-50 with non-local blocks on the MARS dataset are shown in Table 2. \textbf{Baseline} corresponds to ResNet-50 trained with softmax cross-entropy loss and hard-mining triplet loss for image-based person ReID; the representation of an image sequence is obtained by average temporal pooling. \textbf{ResNet3D-50} corresponds to ResNet3D-50 pre-trained on Kinetics as discussed above. \textbf{ResNet3D-50 NL} corresponds to ResNet3D-50 with non-local blocks pre-trained on Kinetics. The gap between our results and the baseline method is significant, and it is noted that: (1) ResNet3D increases rank-1 accuracy from $77.9\%$ to $80.0\%$ under single query, which suggests that ResNet3D-50 effectively aggregates spatial-temporal features; (2) ResNet3D with non-local blocks increases it from $80.0\%$ to $84.3\%$, which indicates that non-local blocks perform well at integrating spatial-temporal features and tackling the misalignment problem. Qualitative results are shown in Fig.~6.
\renewcommand{\arraystretch}{1.1}
\begin{table}
\centering
\footnotesize
\caption{
Effect of different initialization methods: rank-1, rank-5, rank-10 accuracies and mAP are reported for the MARS dataset. \textbf{ImageNet} corresponds to the model pre-trained on ImageNet, \textbf{Kinetics} to the model pre-trained on Kinetics and \textbf{ReID} to the model pre-trained on ReID datasets.
}
\label{table:headings}
\begin{tabular}{lllll}
\hline\noalign{\smallskip}
Init Methods & CMC-1 & CMC-5 & CMC-10 & mAP\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\textbf{ImageNet} & 78.4 &91.5 & 93.9 & 69.8 \\
\textbf{ReID} & 79.9 & 92.6 & 94.5 & 71.3 \\
\textbf{Kinetics} & 84.3 & 94.6 & 96.2 & 77.0 \\
\hline
\end{tabular}
\end{table}
\renewcommand{\arraystretch}{1.1}
\begin{table}
\centering
\footnotesize
\caption{
Comparisons of our proposed approach to the state-of-the-art on the PRID2011, iLIDS-VID and MARS datasets. The rank-1 accuracies are reported, and for MARS we provide mAP in brackets. The best and second best results are marked in {\color{red}{red}} and {\color{blue}{blue}}, respectively.
}
\label{table:headings}
\begin{tabular}{llll}
Methods \qquad\qquad & PRID2011 & iLIDS-VID & MARS \\
\hline
STA \cite{Liu_ICCV} & 64.1 & 44.3 & - \\
DVDL \cite{dvdl} & 40.6 & 25.9 & - \\
TDL \cite{tdl} & 56.7 & 56.3 & - \\
SI2DL \cite{si2dl} & 76.7 & 48.7 & - \\
mvRMLLC+Alignment \cite{mvrm} & 66.8 & 69.1 & - \\
AMOC+EpicFlow \cite{amoc} & 82.0 & 65.5 & - \\
RNN \cite{McLaughlin_2016_CVPR} & 40.6 & 58.0 & - \\
IDE \cite{ide} + XQDA \cite{xqda} & - & - & 65.3(47.3) \\
end AMOC+epicFlow \cite{amoc} & 83.7 & 68.7 & 68.3(52.9) \\
Mars \cite{zheng2016mars} & 77.3 & 53.0 & 68.3(49.3) \\
SeeForest \cite{Zhou} & 79.4 & 55.2 & 70.6(50.7) \\
QAN \cite{Liu_2017_CVPR} & 90.3 & 68.0 & - \\
Spatialtemporal \cite{Li_2018_CVPR} & {\color{red}{93.2}} & {\color{blue}{80.2}} & \color{blue}{82.3(65.8)} \\ \hline
\textbf{ours} & {\color{blue}91.2} & {\color{red} 81.3} & {\color{red}{84.3(77.0)}} \\
\hline
\end{tabular}
\end{table}
\subsubsection{Different Initialization Methods.} We also carry out experiments to investigate the effect of different initialization methods, shown in Table 3. \textbf{ImageNet} and \textbf{ReID} correspond to ResNet3D-50 with non-local blocks whose weights are inflated from a 2D ResNet-50 pre-trained on ImageNet, or on CUHK03 \cite{cuhk03}, VIPeR \cite{viper} and DukeMTMC-reID \cite{duke}, respectively. \textbf{Kinetics} corresponds to ResNet3D-50 with non-local blocks pre-trained on Kinetics. The results show that the model pre-trained on Kinetics performs best. 3D models are hard to train because of their large number of parameters, and they need more data for pre-training. Besides, a model pre-trained on Kinetics (a video action recognition dataset) is more suitable for video-based problems.
\subsection{Comparison with State-of-the-art Methods}
Table 4 reports the performance of our approach with other state-of-the-art techniques.
\subsubsection{Results on MARS.} MARS is the most challenging dataset (it contains distractor sequences and has a substantially larger gallery set), and our method achieves a significant increase in mAP and rank-1 accuracy. Our method improves the state-of-the-art by $2.0\%$ over the previous best reported result of $82.3\%$ from Li \textit{et al.} \cite{Li_2018_CVPR} (which uses spatial-temporal attention). SeeForest \cite{Zhou} combines six spatial RNNs and temporal attention followed by a temporal RNN to encode the input video, achieving $70.6\%$. In contrast, our network architecture is straightforward to train for the video-based problem. This result suggests that our ResNet3D with non-local blocks is very effective for video-based person ReID in challenging scenarios.
\subsubsection{Results on iLIDS-VID and PRID.} The results on iLIDS-VID and PRID2011 are obtained by fine-tuning the model pre-trained on MARS. Li \textit{et al.} use spatial-temporal attention to automatically discover a diverse set of distinctive body parts, achieving $93.2\%$ on PRID2011 and $80.2\%$ on iLIDS-VID. Our proposed method achieves comparable results: $91.2\%$ on PRID2011 and $81.3\%$ on iLIDS-VID. The 3D model cannot achieve a significant improvement here because of the size of these datasets: both are small video person ReID datasets, which leads to overfitting on the training set.
\section{Conclusion}
In this paper, we have proposed an end-to-end 3D ConvNet with a non-local architecture, which integrates spatial-temporal attention to aggregate a discriminative representation from a video track. We carefully design experiments to demonstrate the effectiveness of each component of the proposed method. In order to discover pixel-level information and the relevance between frames, we employ a 3D ConvNet; this encourages the network to extract spatial-temporal features. We then insert non-local blocks into the model to explicitly address the misalignment problem in space and time. The proposed method with ResNet3D and non-local blocks outperforms the state-of-the-art methods on many metrics.
{\small
\bibliographystyle{ieee}
}

\section{Introduction}
In many models of electroweak-scale dark matter (DM), achieving the correct thermal relic density while avoiding direct and indirect detection, collider and precision constraints requires mass relations between particles in the dark sector.
As first explored in \cite{Griest:1990kh}, the dark matter may be close in mass to another state, permitting coannihilation with or phase-space suppressed annihilation to the other state.
Alternatively, the dark matter mass may be approximately half that of a resonance.
Such relations can enhance the dark matter annihilation rate, allowing the correct relic density to be achieved with smaller couplings, and hence without large detection or production cross sections (see \cite{Cohen:2011ec,Profumo:2013hqa} for recent discussions).
But why should such mass relations exist?
Moreover, as masses and couplings vary with energy scale, one can ask why dark sector masses happen to exhibit the required relations at the appropriate scale (i.e. around the dark matter mass).
In this note, we explore the idea that dark sector mass relations arise from infrared (IR)-attractive ratios. Though GUT scale parameters may be \emph{a priori} unrelated, renormalization group (RG) running focuses the parameters to particular ratios at the electroweak scale.
The mass relations thus emerge dynamically due to the interactions and quantum numbers of the dark sector particles.
For instance, consider a fermion (which we imagine to be the DM) and a vector boson that both acquire mass via coupling to a scalar field that attains a vacuum expectation value (vev).
If $y$ represents the relevant Yukawa coupling, $g$ the gauge coupling and $V$ the vev, then the fermion and vector boson masses go as $m_f \propto y V$ and $m_V \propto g V$, such that the mass ratio $m_f/m_V \propto y/g$ is entirely determined by the ratio of the couplings. At one-loop order, the RG equations for the couplings are of the form
\begin{eqnarray}
\label{eq:simplegbeta} (4 \pi)^2 \frac{d g}{d t} & = & b g^3, \\
\label{eq:simpleybeta} (4 \pi)^2 \frac{d y}{d t} & = & y (c y^2 - k g^2),
\end{eqnarray}
where $t \equiv \ln \mu$ is the logarithm of the renormalization scale $\mu$. This system of equations exhibits an IR-attractive ratio, which can be found by solving
\begin{equation}
\label{eq:generalfixedratio}
\frac{d}{dt} \ln \left(\frac{y}{g}\right) = 0 \quad \Rightarrow \quad \left(\frac{y}{g}\right)_{IR} = \pm\sqrt{\frac{k + b}{c}}.
\end{equation}
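To make the intermediate step explicit, the two one-loop equations above give
\begin{equation*}
(4 \pi)^2 \frac{d}{dt} \ln \left(\frac{y}{g}\right) = \frac{(4 \pi)^2}{y} \frac{d y}{d t} - \frac{(4 \pi)^2}{g} \frac{d g}{d t} = c y^2 - (k + b) g^2,
\end{equation*}
which vanishes exactly when $c y^2 = (k + b) g^2$, reproducing the fixed ratio above. Since the right-hand side is positive for $y/g$ above the fixed ratio and negative below it, the ratio is driven toward its fixed value as one runs toward the infrared.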
Certain choices of quantum numbers and couplings (i.e. of $b,c$ and $k$) will lead to mass relations such as $m_f \approx m_V$ or $m_f \approx \frac{m_V}{2}$.
A toy example of the focusing of $\sqrt{2} y/g$ to the fixed ratio (of $1$) is shown in Fig.~\ref{fig:intro_simplifiedfocusing} for $c = 5, b = 1$ and $k = \frac{3}{2}$. Clearly, a particular coupling (and hence mass) ratio can be achieved at the weak scale without significant numerical coincidence at the GUT scale.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{intro_simplifiedfocusing.png}
\caption{\label{fig:intro_simplifiedfocusing} Evolution of the ratio $\sqrt{2} y/g$ as a function of scale $\mu$ in the simplified example of RG focusing based on \eqs{eq:simplegbeta}{eq:simpleybeta} with $c = 5, b = 1$ and $k = \frac{3}{2}$.
We fix $g_{GUT} = 2$ and take $y_{GUT} = 3$ (solid) or $y_{GUT} = 1$ (dashed).}
\end{figure}
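The focusing in Fig.~\ref{fig:intro_simplifiedfocusing} is simple to reproduce numerically. The fragment below (illustrative only; a forward-Euler integration of the one-loop system) runs \eqs{eq:simplegbeta}{eq:simpleybeta} down 33 $e$-folds with the same parameters as the figure:

```python
import numpy as np

def ir_ratio(y_gut, g_gut, b=1.0, c=5.0, k=1.5, efolds=33.0, steps=20000):
    """Euler-integrate dg/dt = b g^3 / (4 pi)^2 and
    dy/dt = y (c y^2 - k g^2) / (4 pi)^2 toward the infrared,
    returning the coupling ratio y/g after the requested e-folds."""
    y, g = y_gut, g_gut
    dt = -efolds / steps  # negative step: running from the GUT scale down
    for _ in range(steps):
        dy = y * (c * y**2 - k * g**2) / (4 * np.pi) ** 2
        dg = b * g**3 / (4 * np.pi) ** 2
        y, g = y + dy * dt, g + dg * dt
    return y / g

fixed_ratio = np.sqrt((1.5 + 1.0) / 5.0)        # (y/g)_IR = sqrt((k+b)/c)
ratio_from_above = ir_ratio(y_gut=3.0, g_gut=2.0)
ratio_from_below = ir_ratio(y_gut=1.0, g_gut=2.0)
```

Both boundary conditions land within a few percent of the fixed ratio, approaching it from above and below respectively, mirroring the solid and dashed curves of Fig.~\ref{fig:intro_simplifiedfocusing}.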
This idea shares some intellectual ancestry with earlier attempts to predict masses and mass relations for the top quark and Higgs boson using IR fixed points in the Standard Model (SM) \cite{Pendleton:1980as,Hill:1980sq,Wetterich:1987az,Schrempp:1992zi}.
Other recent attempts to understand dark sector masses using RG properties include \cite{Hertzberg:2012zc,Hambye:2013dgv,Bai:2013xga}.
In the next section, we will explore RG focusing in the context of models in which the dark matter is charged under a new $U(1)_X$ gauge group that kinetically mixes with the hypercharge $U(1)_Y$ of the SM. We will demonstrate how particular mass relations can be achieved and will discuss the phenomenological implications. Then, we will discuss possible extensions and alternative applications of this idea and conclude.
\section{Kinetic Mixing Examples}
\label{sec:models}
A simple model of dark matter involves a fermion $\Psi$ charged under a new $U(1)_X$ gauge group,
\begin{equation}
\mathcal{L} \supset i \overline{\Psi} \gamma^\mu (\partial_\mu + i g_X (q_L P_L - q_R P_R) X_\mu) \Psi,
\end{equation}
where $X$ is the $U(1)_X$ gauge boson and $q_{L, R}$ are the $U(1)_X$ charges of the left- and right-handed components of $\Psi$.
The $X$ boson mixes with the Standard Model hypercharge boson $Y$ via kinetic mixing \cite{Holdom:1985ag,Baumgart:2009tn},
\begin{equation}
\mathcal{L} \supset - \frac{\sin \epsilon}{2} F_X^{\mu \nu} F_{Y \mu \nu}.
\end{equation}
We assume that $X$ acquires mass due to the vev of a scalar field $\Phi$ (with charge normalized to $-1$),
\begin{equation}
\mathcal{L} \supset \abs{D_\mu \Phi}^2 = \abs{(\partial_\mu - i g_X X_\mu) \Phi}^2,
\end{equation}
such that for $\vev{\Phi} = \frac{V}{\sqrt{2}}$, $m_X = g_X V$. Diagonalizing the kinetic and mass terms gives rise to three mass eigenstates $(A, Z, Z^\prime)$, where $A$ is the SM photon and $(Z, Z^\prime)$ are admixtures of the SM $Z$-boson and $X$. This mixing allows the correct dark matter thermal relic density $\Omega h^2 = 0.1199 \pm 0.0027$ \cite{Ade:2013zuv} to be achieved, as $\Psi$ will annihilate to SM states via the $Z$ and $Z^\prime$ bosons.
Throughout this paper, we assume that the Higgs boson associated with the $U(1)_X$ breaking, $\varphi$, does not significantly impact the phenomenology.
This type of model provides a particularly nice framework for studying RG focusing.
First, the relative simplicity permits the construction of straightforward yet instructive examples.
Second, both theoretical and experimental considerations tend to require small $\sin \epsilon$, which makes it difficult to achieve the correct relic density without invoking particular mass relations \cite{Mambrini:2011dw}.
On the theoretical side, the value of $\sin \epsilon$ generated by loops of heavy particles charged under both $U(1)_X$ and $U(1)_Y$ is expected to be $\sin \epsilon \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$<$}}} 0.1$ \cite{Dienes:1996zr,Baumgart:2009tn}.
On the experimental side, LHC searches for resonances decaying to lepton pairs \cite{CMS-PAS-EXO-12-061} and electroweak precision measurements \cite{Kumar:2006gm,Hook:2010tw} place limits on $\sin \epsilon$ for a wide range of $m_{Z^\prime}$. Moreover, if the dark matter exhibits vectorial couplings to $X$, direct detection constraints on spin-independent (SI) scattering with nucleons from XENON100 \cite{Aprile:2012nq} can be significant.\footnote{For weak-scale thermal dark matter, bounds from indirect detection experiments are not currently constraining \cite{Mambrini:2011dw}. For lighter DM ($m_{DM} \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$<$}}} 10 \text{ GeV}$), limits from BaBar \cite{Aubert:2009af} can also be relevant \cite{Hook:2010tw}.}
The relevant experimental bounds, which tightly constrain $\sin \epsilon$, are shown in Fig.~\ref{fig:sineps_mzp_constraints}.\footnote{Relic densities and SI scattering cross sections are computed in \texttt{micrOMEGAs3.1} \cite{Belanger:2013oya} using expressions from \cite{Chun:2010ve}. Approximate projections for the 14 TeV LHC with $\mathcal{L} = 300 \text{ fb}^{-1}$ are derived based on hadronic structure functions \cite{Carena:2004xs,Accomando:2010fz} calculated using \texttt{CalcHEP 3.4} \cite{Belyaev:2012qa}, dilepton invariant mass resolution estimates from \cite{Feldman:2006wb,Beringer:1900zz}, and variation in background between $\sqrt{s} = 8 \text{ TeV}$ and $14 \text{ TeV}$ estimated using \texttt{PYTHIA 8.1} \cite{Sjostrand:2007gs}.}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{sineps_mzp_constraints_qL15qRm3.png}
\vspace{0.3cm}
\includegraphics[width=0.8\linewidth]{sineps_mzp_constraints_qL9qR3.png}
\vspace{0.3cm}
\includegraphics[width=0.8\linewidth]{sineps_mzp_constraints_qL6qR6.png}
\caption{\label{fig:sineps_mzp_constraints}
Regions in the $(m_{Z^\prime}, \sin \epsilon)$ plane yielding the correct relic density (red), taken to be the $5 \sigma$ range from PLANCK \cite{Ade:2013zuv}, fixing $m_{DM} = 500 \text{ GeV}$ and $g_X = 1$.
Also shown are constraints from the LHC (black, solid) \cite{CMS-PAS-EXO-12-061}, electroweak precision tests (gray shaded) \cite{Kumar:2006gm,Hook:2010tw} and XENON100 (blue, solid) \cite{Aprile:2012nq}.
In addition, we include projections for XENON1T (blue, dotted) \cite{Aprile:2009yh} and the 14 TeV LHC with $\mathcal{L} = 300 \text{ fb}^{-1}$ (black, dotted).
The three plots correspond to $q_L = \frac{5}{4}, q_R = -\frac{1}{4}$ (top), $q_L = \frac{3}{4}, q_R = \frac{1}{4}$ (middle) and $q_L = q_R = \frac{1}{2}$ (bottom); for $q_L = q_R = \frac{1}{2}$ the purely axial DM couplings yield velocity-suppressed SI scattering, so no XENON limits appear.}
\end{figure}
Consequently, for approximately weak-scale DM, achieving the correct thermal relic density with sufficiently small values of $\sin \epsilon$ requires either
\begin{enumerate}
\item $m_{DM} \approx m_{Z^\prime}$, such that the efficient annihilation process $\Psi \overline{\Psi} \rightarrow Z^\prime Z^\prime$ (which for $m_{DM} > m_{Z^\prime}$ would yield a very small relic density even if $\sin \epsilon \approx 0$) can occur, but Boltzmann and phase-space suppression prevent over-annihilation, or
\item $m_{\text{DM}} \approx \frac{1}{2} m_{Z^\prime}$, in which case annihilation $\Psi \overline{\Psi} \rightarrow Z^\prime \rightarrow \text{SM} \; \overline{\text{SM}}$ is enhanced in the early universe due to a small $s$-channel propagator, permitting smaller values of $\sin \epsilon$.
\end{enumerate}
The necessity of these mass relations makes kinetic mixing models with weak-scale DM prime candidates for benefitting from RG focusing.
We now present two models with basic structure as outlined in the introduction, one of which exhibits $m_f \approx m_{Z^\prime}$ and one of which exhibits $m_f \approx \frac{1}{2} m_{Z^\prime}$.
\subsection{(1) $m_{DM} \approx m_{Z^\prime}$}
Consider $\chi_\pm$, $\eta_\pm$ to be left-handed Weyl fermions with $U(1)_X$ charges $\pm q$ and $\pm (1-q)$ respectively. We introduce Yukawa couplings of the form
\begin{equation}
\label{eq:model1}
\mathcal{L} \supset - y_+ \Phi \chi_+ \eta_+ - y_- \Phi^\ast \chi_- \eta_- + \text{h.c.}
\end{equation}
As the fermions come in pairs with opposite charges, this model is anomaly free.
We assume separate $\mathbb{Z}_2$ symmetries, which ensure that the new fermions are stable (and hence DM candidates) and also forbid vector-like masses of the form $\chi_+ \chi_-$.
After spontaneous symmetry breaking of the $U(1)_X$, the $\chi_\pm$ and $\eta_\pm$ are married to yield two Dirac fermions with masses $m_\pm = \frac{y_\pm V}{\sqrt{2}}$. The ratio of $m_\pm$ to $m_X$ is given by
\begin{equation}
\label{eq:m1massratio}
\frac{m_\pm}{m_X} = \frac{y_\pm}{\sqrt{2} g_X}.
\end{equation}
The one-loop beta functions for the couplings are
\begin{eqnarray}
\label{eq:m1yukawabeta}
(4 \pi)^2 \frac{d y_\pm}{d t} & = & y_\pm \left(2 y_\pm^2 + y_\mp^2 - 3 (q^2 + (1-q)^2) g_X^2\right), \\
\label{eq:m1gaugebeta}
(4 \pi)^2 \frac{d g_X}{d t} & = & b_X g_X^3,
\end{eqnarray}
where $b_X = \frac{4}{3} (q^2 + (1-q)^2) + \frac{1}{3}$. This system of equations exhibits IR-attractive fixed ratios
\begin{equation}
\label{eq:m1couplingratio}
\left.\frac{y_+}{y_-}\right|_0 = 1, \quad \left.\frac{y_\pm}{g_X}\right|_0 =
\frac{1}{3} \sqrt{13 (q^2 + (1-q)^2) + 1}.
\end{equation}
The subscript ``$0$'' denotes that these ratios are RG invariant -- in other words, for couplings fixed to these ratios, the ratios will be preserved by RG running.
We now imagine that the couplings take some generic values at the unification scale $M_{GUT}$. Then, as the couplings are run to the dark matter scale (taken to be on the order of $m_Z$), they evolve such that they are attracted towards these ratios. By \eq{eq:m1massratio}, this leads to particular relations between the fermion masses and the $Z^\prime$ mass -- different choices of $q$ will yield different mass ratios.
By examining \eqs{eq:m1massratio}{eq:m1couplingratio}, we see that we can approximately achieve the desired mass relation if $q = \frac{5}{4}$, for which
\begin{equation}
\left.\frac{m_\pm}{m_X}\right|_0 = \left.\frac{y_\pm}{\sqrt{2} g_X}\right|_0 \approx 1.1.
\end{equation}
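The dependence of this prediction on the charge assignment is easy to scan. The short fragment below (illustrative only) evaluates the fixed-ratio mass prediction of \eq{eq:m1massratio} with the couplings set to the ratio of \eq{eq:m1couplingratio}:

```python
import numpy as np

def fixed_mass_ratio(q):
    """IR-attractive prediction for m_pm / m_X = y_pm / (sqrt(2) g_X),
    using (y_pm / g_X)_0 = sqrt(13 (q^2 + (1 - q)^2) + 1) / 3."""
    Q = q**2 + (1 - q) ** 2
    return np.sqrt(13 * Q + 1) / (3 * np.sqrt(2))

ratio_q54 = fixed_mass_ratio(5 / 4)      # ~ 1.1: the near-threshold case
ratio_min = 2 * fixed_mass_ratio(1 / 2)  # minimum of 2 m_pm / m_X over q, ~ 1.3
```

The combination $2 m_\pm / m_X$ is minimized at $q = \tfrac{1}{2}$, which is why the resonant relation discussed in the next subsection is harder to arrange in this minimal model.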
Provided that the couplings converge to this ratio sufficiently quickly, the dark matter will have mass $m_\chi \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$>$}}} m_{Z^\prime}$ and so will undergo phase-space suppressed annihilation to $Z^\prime Z^\prime$ in the early universe, conceivably yielding the correct relic density even for very small values of $\sin \epsilon$.
Both of the new fermions are stable, so they will each constitute a component of the dark matter -- however, the heavier state will annihilate more efficiently and so the lighter state will comprise the majority of the dark matter.
How quickly do the couplings converge to the fixed ratio? Consider the variable $\delta_\pm$, defined by
\begin{equation}
\frac{y_\pm}{g_X} = \left(\frac{y_\pm}{g_X}\right)_0 (1 + \delta_\pm),
\end{equation}
which measures the deviation of the coupling ratio from the fixed ratio. From \eqs{eq:m1yukawabeta}{eq:m1gaugebeta}, we can derive a differential equation for $\delta_\pm$, assuming $\delta_+ = \delta_-$ for simplicity\footnote{As this is a point of enhanced symmetry, $y_+ = y_-$ could perhaps be enforced as a GUT scale relation.},
\begin{equation}
\label{eq:m1delta}
\frac{d \delta_\pm}{dt} = \frac{3 g_X^2}{(4 \pi)^2} \left(\frac{y_\pm}{g_X}\right)^2_0 \delta_\pm (\delta_\pm + 1) (\delta_\pm + 2).
\end{equation}
This demonstrates that, for $\delta_\pm > 0$ or $-1 < \delta_\pm < 0$, $\delta_\pm \rightarrow 0$ as $t \rightarrow -\infty$; the fixed ratio is IR attractive. The other fixed points of the equation are $\delta_\pm = -1$, corresponding to turning off the Yukawas (and indicating no Yukawas are generated by RG running), and $\delta_\pm = -2$, which is analogous to the fixed point at $\delta_\pm = 0$ up to re-phasing of the fermion fields.
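As a rough numerical check (illustrative only, at one loop, assuming $\delta_+ = \delta_-$ and $q = 5/4$), \eq{eq:m1delta} can be integrated together with the gauge running of \eq{eq:m1gaugebeta}:

```python
import numpy as np

def delta_at_ew(delta_gut, g_gut=1.4, q=5 / 4, efolds=33.0, steps=20000):
    """Evolve the fractional deviation delta of y/g_X from the fixed
    ratio from the GUT scale toward the infrared, together with the
    one-loop running of the U(1)_X gauge coupling."""
    Q = q**2 + (1 - q) ** 2
    b_x = 4 * Q / 3 + 1 / 3        # U(1)_X beta-function coefficient
    ratio0_sq = (13 * Q + 1) / 9   # (y / g_X)_0^2
    d, g = delta_gut, g_gut
    dt = -efolds / steps           # negative step: run toward the infrared
    for _ in range(steps):
        dd = 3 * g**2 * ratio0_sq * d * (d + 1) * (d + 2) / (4 * np.pi) ** 2
        dg = b_x * g**3 / (4 * np.pi) ** 2
        d, g = d + dd * dt, g + dg * dt
    return d

delta_pos = delta_at_ew(1.0)    # O(1) GUT-scale misalignment above the ratio
delta_neg = delta_at_ew(-0.5)   # misalignment below the ratio
```

For $g_{X, GUT} = 1.4$, an $\mathcal{O}(1)$ GUT-scale misalignment shrinks to the few-percent level after 33 $e$-folds, with faster convergence for $\delta_\pm > 0$, consistent with Fig.~\ref{fig:threshfoceff}(a).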
\begin{figure*}
\centering
\subfigure{\includegraphics[width=0.46\linewidth]{threshfoceff_d1eqd2.png}} \hspace{0.05\linewidth}
\subfigure{\includegraphics[width=0.46\linewidth]{threshfoceff_d1neqd2.png}}
\caption{\label{fig:threshfoceff}
$\delta_\pm$ at the electroweak scale after 33 $e$-folds of RG evolution as a function of $g_{X, GUT}$ assuming (a) $\delta_+ = \delta_-$ and (b) $\delta_+ \neq \delta_-$.
Black lines in (a) represent $\delta_{\pm, GUT} = 2$ (dot-dashed), $1$ (solid), $-1/2$ (dotted) and $-2/3$ (dashed).
In (b), we take $(\delta_+, \delta_-)_{GUT} = (1/2,1)$ (black) and $(\delta_+, \delta_-)_{GUT} = (-1/4,-1/2)$ (red, dotted). In both plots, the gray dashed line represents the value of $\delta_{\pm, EW}$ for which $m_\pm = m_X$.}
\end{figure*}
The values of $\delta_{+, EW} = \delta_{-, EW}$ at the electroweak scale after $\sim 33$ $e$-folds of running
(corresponding to running from $\mu = M_{GUT}$ to $\mu \sim \mathcal{O}(m_Z)$\footnote{As the gauge couplings do not unify in this minimal model, it is not obvious what value one should take for $M_{GUT}$.
Potential candidates range from the scale at which $g_1$ and $g_2$ unify all the way up to the Planck scale, and depend on the UV completion.
We remain agnostic, and simply take $\ln(M_{GUT}/m_{DM}) \approx 33$ as a representative value.})
are shown in Fig.~\ref{fig:threshfoceff}(a) as a function of $g_{X, GUT}$ for a variety of GUT-scale deviations $\delta_{+, GUT} = \delta_{-, GUT}$. It is clear that, for reasonable values of $g_{X, GUT} \approx \mathcal{O}(1)$, the couplings come very close to the fixed ratio even if there is significant misalignment at the GUT scale, demonstrating the efficacy of the focusing. Thus, this mechanism is capable of generating dark sector mass relations without substantial coincidence of parameters. As expected from \eq{eq:m1delta}, the couplings approach the fixed ratio faster for $\delta_\pm > 0$ than for $-1 < \delta_\pm < 0$.
It is also interesting to consider what happens if the Yukawa couplings are not aligned at the GUT scale ($\delta_{+, GUT} \neq \delta_{-, GUT}$). The results are shown in Fig.~\ref{fig:threshfoceff}(b). Although the Yukawas do not end up equal, they are driven to similar values near the IR-attractive ratio. This gives rise to the situation described above wherein the dark matter is multi-component, but dominated by the (slightly) lighter component. In Fig.~\ref{fig:correctrelic_d1d2}, we show the regions in the $(\delta_+, \delta_-)_{GUT}$ plane for which the correct relic density is achieved for two different choices of $g_{X, GUT}$.
As a result of the RG focusing, a significant region of the GUT scale parameter space yields the correct relic density.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{correctrelic_d1d2.png}
\caption{\label{fig:correctrelic_d1d2} Values of $\delta_{\pm, GUT}$ yielding the correct relic density for $g_{X, GUT} = 1.2$ (hatched) or 1.4 (red). We fix $m_{Z^\prime} = 500 \text{ GeV}$ and $\sin \epsilon = 0.01$. For $\sin \epsilon \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$<$}}} 0.015$ (chosen to satisfy the LHC limit shown in the top panel of Fig.~\ref{fig:sineps_mzp_constraints} -- here $q_{L,\pm} = \pm q = \pm \frac{5}{4}$ and $q_{R,\pm} = \pm(1-q) = \mp\frac{1}{4}$), the precise value of $\sin \epsilon$ does not affect the cosmology provided that it is large enough that the $Z^\prime$ decays prior to BBN.}
\end{figure}
Our analysis has thus far considered the RG evolution of the couplings only at one-loop. Given the large GUT scale values for the couplings, a reasonable concern is whether our conclusions are greatly affected by higher-order terms.
For instance, in Fig.~\ref{fig:threshfoceff}(a), $y_{\pm, GUT} = 7.0$ for $\delta_{\pm, GUT} = 2.0$ and $g_{X, GUT} = 1.5$, so in this region the plot should be taken as indicative of the power of one-loop focusing as opposed to an exact result.
Performing a full analysis of higher-loop effects is more complicated, in part because the (so far unspecified) scalar quartic coupling enters at the two-loop level. However, we have confirmed that higher-loop corrections of the size expected from \cite{Machacek:1983fi,Luo:2002ti} do not significantly alter our results or the rate of convergence to the fixed ratio. This is partly because the couplings become smaller in the IR, such that the perturbative expansion is under control in the region where the couplings are approaching the fixed ratio. As a result, the one-loop terms dominate.
Finally, it is interesting to explore how efficient the focusing would be over fewer $e$-folds. For instance, one could imagine a scenario in which the RG equations attain the correct form to yield the desired IR-attractive ratio after crossing some heavy mass threshold $M_H < M_{GUT}$. In this case, we can take $(\delta_\pm, g_X)_H$ to be boundary conditions at the threshold scale $\mu = M_{H}$. In Fig.~\ref{fig:focus_func_efolds}, we show how $\delta_+ = \delta_-$ evolves as a function of $\ln(M_{H}/\mu)$, fixing $g_{X, H} = 1.4$. It is evident that $\delta_+ = \delta_-$ approaches zero quite rapidly, particularly for $\delta_{\pm, H} > 0$. Even for $\delta_{+, H} = \delta_{-, H} = 1.5$, $\delta_+ = \delta_- \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$<$}}} 0.05$ by $\log_{10}(M_{H}/\mu) = 8$. Thus, this mechanism could be used to generate mass relations in models with mass thresholds as low as $M_{H} \approx 10^{10} \text{ GeV}$.
In the context of kinetic mixing models, this mass threshold could perhaps correspond to the mass of heavy states charged under $U(1)_X$ and $U(1)_Y$ responsible for generating $\sin \epsilon$.\footnote{The rapidity of the focusing also implies that such a model could give rise to dark sector mass relations at a significantly higher scale than the weak scale -- of course, such a scenario is phenomenologically dismal.}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{focus_func_efolds.png}
\caption{\label{fig:focus_func_efolds} $\delta_+ = \delta_-$ as a function of $\log_{10}(M_H/\mu)$ for $g_{X, H} = 1.4$ (the gauge coupling at $\mu = M_H$). The gray dashed line represents the value of $\delta_{\pm, EW}$ for which $m_\pm = m_X$.}
\end{figure}
The main phenomenological implication of this model is that the dark matter $U(1)_X$ charges must be $q_L \approx \frac{5}{4}$, $q_R \approx - \frac{1}{4}$ in order to achieve the desired mass ratio.
For these charge assignments, the dark matter exhibits a significant vectorial coupling to the $Z$ and $Z^\prime$ gauge bosons, giving rise to appreciable SI scattering cross sections and enabling direct detection experiments to probe smaller values of $\sin \epsilon$.
Depending on the value of $m_\pm$, the strongest constraints in the near threshold region come from either LHC dilepton resonance searches or XENON100 and require $\sin \epsilon \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$<$}}} (1-2) \times 10^{-2}$ (see the top panel of Fig.~\ref{fig:sineps_mzp_constraints}). The DM could likely be observed by a one-ton Xenon experiment for $\sin \epsilon \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$>$}}} 10^{-3}$, and for $\sin \epsilon \!\mathrel{\hbox{\rlap{\lower.55ex \hbox{$\sim$}} \kern-.34em \raise.4ex \hbox{$>$}}} 5 \times 10^{-3}$ the concurrent observation of the DM and a $Z^\prime$ with $m_{Z^\prime} \approx m_{DM}$ may be possible.
\subsection{(2) $m_{DM} \approx \frac{1}{2} m_{Z^\prime}$}
As shown in Fig.~\ref{fig:sineps_mzp_constraints}, the other region of interest exhibiting the correct relic density and small $\sin \epsilon$ has $m_{DM}/m_{Z^\prime} \approx \frac{1}{2}$, such that annihilation in the early universe is approximately on resonance. Thus, one might also wish to explain this mass ratio via a similar mechanism.
However, for the model above,
\begin{equation}
\left.\frac{2 m_\pm}{m_X}\right|_0 = \left.\frac{\sqrt{2} y_\pm}{g_X}\right|_0 \geq 1.3
\end{equation}
with the minimum occurring for $q = \frac{1}{2}$. Thus, we are limited in how close we can get to $m_{DM}/m_{Z^\prime} \approx \frac{1}{2}$, at least in this simple model.
Gauge couplings drive $y_\pm$ up towards the IR, whereas Yukawas drive $y_\pm$ down, so to achieve $y_\pm$ sufficiently small with respect to $g_X$ requires the introduction of additional Yukawa couplings.
Said another way, in terms of \eq{eq:generalfixedratio}, to get closer to resonance requires additional Yukawa contributions that increase $c$ without a correspondingly large increase in $b$ ($k$ is fixed by the DM charges).
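To make the roles of $b$, $c$, and $k$ concrete, consider one-loop equations of the schematic form (our illustrative normalization, with $t = \ln \mu$; the precise coefficients are those of \eq{eq:generalfixedratio})
\begin{equation}
16\pi^2 \frac{dg_X}{dt} = b\, g_X^3, \qquad 16\pi^2 \frac{dy}{dt} = y\left(c\, y^2 - k\, g_X^2\right),
\end{equation}
for which $\frac{d}{dt}\ln(y/g_X) = 0$ gives the IR-attractive value
\begin{equation}
\left(\frac{y}{g_X}\right)_0^2 = \frac{k+b}{c}.
\end{equation}
Increasing $c$ thus lowers the attractive ratio, while $b$ enters additively alongside $k$.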
This can be accomplished by introducing new fermions with Yukawa couplings to $\Phi$. The additional fermions will contribute to the scalar wave function renormalization, increasing $c$.\footnote{A similar alternative, which we do not elaborate on here, would be to introduce new ``inert'' scalars coupling to $\chi_\pm, \eta_\pm$, which would increase $c$ by contributing to the fermion wave function renormalization.}
Moreover, if these states have larger $U(1)_X$ charges than the dark matter or are charged under additional gauge groups, their Yukawa couplings will tend to larger values than the DM Yukawa couplings, further enhancing $c$ and making it easier to achieve the ratio $m_{DM}/m_{Z^\prime} \approx \frac{1}{2}$.
However, the introduction of additional couplings can somewhat reduce the efficacy of the focusing relative to the $m_{DM} \approx m_{Z^\prime}$ case above.
Perhaps the simplest way to introduce new states is to augment \eq{eq:model1} to respect an $SU(N_F)^2$ symmetry. For $N_F = 4$ and $q = \frac{1}{2}$, $(2m_\pm/m_X)_0 = (\sqrt{2} y_\pm/g_X)_0 \approx 1.0$. However, as there are more DM components, the dark matter must annihilate more efficiently to achieve the correct relic density.
This requires either a larger value of $\sin \epsilon$ (in tension with the constraints mentioned above) or that $2 m_\pm$ is particularly close to $m_{Z^\prime}$, which would imply a significant numerical coincidence in GUT scale parameters even with RG focusing (largely neutralizing the benefits of the focusing).
Consequently, we instead introduce new fermions $X_\pm, N_\pm$ (in addition to $\chi_\pm, \eta_\pm$) that couple to $\Phi$, but decay such that they do not contribute to the dark matter relic density. $X_\pm$ and $N_\pm$ have $U(1)_X$ and $U(1)_Y$ charges $\pm Q_X, \pm (1-Q_X)$ and $\pm Q_Y, \mp Q_Y$ respectively. We add to \eq{eq:model1} the Yukawa couplings
\begin{equation}
\mathcal{L} \supset - Y_+ \Phi X_+ N_+ - Y_- \Phi^\ast X_- N_- + \text{h.c.}
\end{equation}
When $\Phi$ takes on its vev, $X_\pm$ and $N_\pm$ marry to form two Dirac fermions with $M_\pm = \frac{Y_\pm V}{\sqrt{2}}$. As the $X_\pm, N_\pm$ states decay, in principle we do not need to relate their masses to that of the $U(1)_X$ gauge boson as for $\chi_\pm, \eta_\pm$.
However, since the interactions of the new states are vital for producing the desired IR-attractive ratio, we want these states to contribute to the RG evolution all the way to the dark scale.
In light of this, it is logical that these states acquire all of their mass from $U(1)_X$ breaking such that $M_\pm \sim m_\pm$ -- for this reason, we assume additional $\mathbb{Z}_2$ symmetries forbidding vector-like mass terms. This also leads to a particular prediction of these models, namely the existence of additional dark sector states
with masses comparable to the dark matter mass.
The choices of $Q_X$ and $Q_Y$ determine how $X_\pm, N_\pm$ can decay -- one choice that readily permits decay is $Q_X = q$ and $Q_Y = 1$. We introduce a new scalar $\tilde{e}$ with $SU(2)_L \times U(1)_Y$ quantum numbers $(\mathbf{1},-1)$ and interactions of the form
\begin{equation}
- \Delta \mathcal{L} = \kappa_+ \tilde{e} X_+ \chi_- + \kappa_- \tilde{e} N_- \eta_+ + \kappa \tilde{e}^\dagger \ell \ell + \text{h.c.},
\end{equation}
permitting decays such as $X^- \rightarrow \chi \ell^- \overline{\nu}_\ell$ (assuming $m_{\tilde{e}} > M_\pm > m_\pm$ -- superscripts denote $U(1)_{\text{EM}}$ charges).\footnote{Another choice permitting decay is $Q_X = 1$, $Q_Y = 0$. The $N_\pm$ would be gauge singlets, and could decay via the higher-dimension operator $\mathcal{L} = \frac{1}{\Lambda} N_\pm u^c d^c d^c$. The attractive ratio in this model is $(2m_\pm/m_X)_0 \approx 1.0$ for $q = \frac{4}{5}$.}
Note that the $U(1)_Y$ interactions will tend to drive $Y_\pm > y_\pm \Rightarrow M_\pm > m_\pm$. $\kappa_\pm$ and $\kappa$ are taken to be sufficiently small that they have a negligible effect on the dark sector RG evolution, but sufficiently large that the $X_\pm, N_\pm$ states decay prior to DM freeze-out to avoid repopulating the dark matter. For approximately TeV scale particles, fast enough decay occurs if $\kappa_\pm \approx \kappa \gtrsim 10^{-4}$, such that both of these conditions can indeed be satisfied.
In this model, the ratios that the couplings approach in the IR are somewhat more complicated due to the effect of the hypercharge on the RG equations.
Symmetry between $+$ and $-$ states implies
\begin{equation}
\left.\frac{y_+}{y_-}\right|_0 = \left.\frac{Y_+}{Y_-}\right|_0 = 1.
\end{equation}
However, solving the equations
\begin{equation}
\frac{d}{dt} \ln\left(\frac{y_\pm}{g_X}\right) = 0, \quad \frac{d}{dt} \ln\left(\frac{Y_\pm}{g_X}\right) = 0
\end{equation}
yields
\begin{eqnarray}
\left.\frac{y_\pm}{g_X}\right|_0 & = & \sqrt{\frac{17 (q^2 + (1-q)^2) + 1 - 36 (g_Y/g_X)^2}{15}}, \\
\left.\frac{Y_\pm}{g_X}\right|_0 & = & \sqrt{\frac{17 (q^2 + (1-q)^2) + 1 + 54 (g_Y/g_X)^2}{15}}.
\end{eqnarray}
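The attractive ratios above are straightforward to evaluate numerically. The snippet below simply tabulates the two displayed expressions for sample charges; $g_Y/g_X = 0.35$ is an illustrative (not fitted) value.

```python
# Evaluate the IR-attractive ratios y/g_X and Y/g_X from the displayed
# formulas, for sample charges q and sample values of g_Y/g_X.
import math

def attractive_ratios(q, gY_over_gX):
    common = 17 * (q**2 + (1 - q)**2) + 1
    y_over_g = math.sqrt((common - 36 * gY_over_gX**2) / 15)
    Y_over_g = math.sqrt((common + 54 * gY_over_gX**2) / 15)
    return y_over_g, Y_over_g

for q in (0.5, 0.75):
    for r in (0.0, 0.35):  # 0.35 is an illustrative value of g_Y/g_X
        y, Y = attractive_ratios(q, r)
        print(f"q={q}, gY/gX={r}: y/gX={y:.3f}, Y/gX={Y:.3f}")
```

Note that for nonzero $g_Y/g_X$ one finds $Y_\pm/g_X > y_\pm/g_X$, consistent with the hypercharge interactions driving $Y_\pm > y_\pm$ as remarked below.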
The attractive ratios evolve as a function of scale (or as a function of the values of $g_{X, Y}$).
Fig.~\ref{fig:resfoc_yY_g2} shows regions of GUT parameter space for which $\frac{2 m_\pm}{m_X} \in [0.95,1.05]$ at the weak scale for two charge assignments $q = \frac{3}{4}$ and $q = \frac{1}{2}$, taking $g_{X, GUT} = 2$. For simplicity, we set $y_+ = y_-$ and $Y_+ = Y_-$.
The chosen range for $2m_\pm/m_X$ provides a rough guideline as to where the correct thermal relic density is achieved, consistent with experimental constraints, for dark matter masses $\mathcal{O}(100 \text{ GeV} - 1 \text{ TeV})$ and $\sin \epsilon \lesssim \mathcal{O}(0.1)$.
However, valid regions of parameter space do exist for smaller or larger values of $2m_\pm/m_X$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{resfoc_yY_g2.png}
\caption{\label{fig:resfoc_yY_g2} Regions in the $(y_\pm, Y_\pm)_{GUT}$ plane for which $2m_\pm/m_X \in [0.95,1.05]$ (left- and right-boundaries, respectively) for $q = \frac{3}{4}$ (blue) and $q = \frac{1}{2}$ (red), fixing $g_{X, GUT} = 2$. The dotted contours give the value of $M_\pm/m_\pm$ at the weak scale for $q = \frac{1}{2}$, with the shaded gray region forbidden as $M_\pm < m_\pm$ -- contours for $q = \frac{3}{4}$ are not shown but are largely similar.}
\end{figure}
In Fig.~\ref{fig:resfoc_y_g2_Y2}, we show the value of $2 m_{\pm}/m_X$ at the weak scale as a function of $y_{+, GUT} = y_{-, GUT}$ both with and without the $X_\pm, N_\pm$ states.
If these states are present, the slope of the lines is much shallower in the region of $2m_\pm/m_X = 1$, such that a wider range of $y_{+, GUT} = y_{-, GUT}$ will give rise to $\frac{2 m_\pm}{m_X} \in [0.95,1.05]$.
Without the additional states, a more significant conspiracy of GUT scale parameters is needed to achieve $m_\pm \approx \frac{1}{2} m_X$.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{resfoc_y_g2_Y2.png}
\caption{\label{fig:resfoc_y_g2_Y2} The distance from resonance at the weak scale (parameterized by $2 m_\pm/m_X$), fixing $g_{X, GUT} = 2$, as a function of $y_{+, GUT} = y_{-, GUT}$ for $q = \frac{3}{4}$ (solid) and $q = \frac{1}{2}$ (dashed) in the model without (black) and with (red) the $X_\pm, N_\pm$ states and $Y_{+, GUT} = Y_{-, GUT} = 2$. The presence of the extra states with reasonable GUT-scale Yukawas reduces the numerical coincidence required to achieve $m_{\pm} \approx \frac{1}{2} m_X$. Gray dotted lines demarcate the region $2 m_\pm/m_X \in [0.95,1.05]$.}
\end{figure}
Again, these results are based on one-loop beta functions only, neglecting the small kinetic mixing, but we have checked that approximate corrections due to two-loop effects and kinetic mixing \cite{delAguila:1988jz,Luo:2002iq} do not significantly alter our results. However, because the spectrum contains states charged under both $U(1)_X$ and $U(1)_Y$, a related consideration is how $\sin \epsilon$ evolves. In particular, one might wonder what values of $(\sin \epsilon)_{GUT}$ yield the desired $(\sin \epsilon)_{EW} \sim \mathcal{O}(0.1)$. Generally, depending on the precise choices of $q$ and $(\sin \epsilon)_{EW}$, either $(\sin \epsilon)_{GUT} \sim \mathcal{O}(0.01)$ or $(\sin \epsilon)_{GUT} \sim \mathcal{O}(0.5)$ for $g_{X, GUT} = 2$. $(\sin \epsilon)_{GUT}$ is expected to be $\mathcal{O}(1)$ if the operator $F_X F_Y$ is permitted at the GUT scale or $\sim 0$ if it is forbidden (by, e.g., gauge invariance of the unification group). Notably, the GUT boundary conditions required to give $(\sin \epsilon)_{EW} \sim \mathcal{O}(0.1)$ in this model are approximately consistent with one of these two scenarios.
If the dark matter relic density is set by near-resonant annihilation in the early universe, it may well imply the existence of additional states close in mass to the dark matter, resulting in novel phenomenology beyond the dark matter direct detection prospects. For the model above, we predict new charged particles with masses $M_\pm \sim (1.2-1.7) m_\pm$. As these particles decay prior to dark matter freeze-out, their lifetimes satisfy
\begin{equation}
\tau \lesssim H^{-1}(T_{fo}) \; \Rightarrow \; \tau \lesssim 10^{-9} \left(\frac{500 \text{ GeV}}{m_\pm}\right)^2 \text{ s}.
\end{equation}
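This bound can be checked at the order-of-magnitude level by evaluating $H^{-1}(T_{fo})$ directly, assuming $T_{fo} \approx m_\pm/20$ and radiation domination; the choice $g_* \approx 90$ below is our illustrative assumption.

```python
# Order-of-magnitude check of the decay-lifetime bound tau < 1/H(T_fo),
# assuming T_fo ~ m/20 and radiation domination with g* ~ 90 (assumptions).
import math

M_PL = 1.22e19        # Planck mass in GeV
HBAR = 6.582e-25      # GeV * s
G_STAR = 90.0         # effective relativistic dof near freeze-out (assumed)

def hubble(T):
    """Radiation-era Hubble rate H = 1.66 sqrt(g*) T^2 / M_Pl, in GeV."""
    return 1.66 * math.sqrt(G_STAR) * T**2 / M_PL

def tau_max_seconds(m):
    """Lifetime bound 1/H(T_fo) in seconds, taking T_fo = m/20."""
    return HBAR / hubble(m / 20.0)

print(f"tau_max(500 GeV) ~ {tau_max_seconds(500.0):.1e} s")
```

For $m_\pm = 500$ GeV this gives $\tau \sim 10^{-9}$ s, matching the quoted bound, with the expected $m_\pm^{-2}$ scaling.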
For $\tau$ close to saturating this bound, the additional particles would be relatively long-lived and could produce disappearing tracks at the LHC. Such signals have been searched for, and limits of $M_\pm \gtrsim \mathcal{O}(400-500 \text{ GeV})$ have been placed \cite{ATLAS-CONF-2013-069}. For shorter lifetimes, the heavier states will decay to yield opposite-sign dilepton plus missing energy signatures, such that they could potentially be observed in SUSY chargino searches \cite{ATLAS-CONF-2013-049,CMS-PAS-SUS-13-006}. However, the current reach of such searches is relatively limited (only requiring $M_\pm \gtrsim \mathcal{O}(100-200 \text{ GeV})$) due to the somewhat small $M_\pm-m_\pm$ splitting predicted.
Depending on the $U(1)_X$ charge of the dark matter, there can be interesting interplay between direct detection and LHC dilepton resonance searches. For instance, if $q = \frac{3}{4}$, there are regions of parameter space exhibiting the correct relic density that are not yet excluded by current constraints, but which will be probed by both XENON1T and the LHC with $\sqrt{s} = 14 \text{ TeV}$ and $\mathcal{L} = 300 \text{ fb}^{-1}$. These regions present the exciting possibility of the concurrent observation of the dark matter, a $Z^\prime$ boson with $m_{Z^\prime} \approx 2 m_{DM}$, and a long-lived charged particle with mass $m_{DM} < M_\pm < m_{Z^\prime}$. In other regions of parameter space (or for $q = \frac{1}{2}$) the dark matter will evade direct detection, but this could be mitigated by the imminent observation of a $Z^\prime$ and perhaps also of a long-lived charged particle with mass $\frac{m_{Z^\prime}}{2} < M_\pm < m_{Z^\prime}$. Of course, for $2 m_{DM}$ very close to $m_{Z^\prime}$, the small values of $\sin \epsilon$ that yield the correct relic density preclude both dark matter direct detection and $Z^\prime$ observation. The charged states may still be observed, although this would of course depend on their masses and lifetimes.
\section{Discussion and Conclusions}
\label{sec:conc}
In this paper, we have proposed that dark sector mass relations may arise due to IR-attractive ratios in the dark sector RG equations. We have discussed this in the context of two simple models consisting of new dark sector fermions charged under a gauged $U(1)_X$, which kinetically mixes with the SM $U(1)_Y$.
We have focused on this class of model in part because it permits a straightforward introduction to this application of RG focusing, but a wide variety of alternative implementations can be imagined. Throughout this paper, we have assumed that the Higgs boson $\varphi$ associated with the $U(1)_X$ breaking by the vev of $\Phi$ ($V$) does not impact the phenomenology. However, the mass of $\varphi$ will also be related to $V$ by $m_\varphi \sim \sqrt{\lambda} V$, where $\lambda$ represents the $\Phi$ quartic coupling. RG focusing could yield $m_{DM} \approx \frac{1}{2} m_\varphi$ -- in the presence of a mixed $\Phi$, SM Higgs quartic $\lambda_{\Phi H} \abs{\Phi}^2 \abs{H}^2$, this would lead to a realization of resonant Higgs portal dark matter \cite{LopezHonorez:2012kv}.
The fact that $b_X > 0$ for an Abelian gauge group meant that achieving $m_{DM} \approx \frac{1}{2} m_X$ required additional Yukawa couplings. A non-Abelian theory could potentially allow this fixed ratio to be achieved more readily. Of course, in this case, it is less trivial to communicate between the dark and SM sectors as gauge invariance now forbids kinetic mixing terms.
Alternatively, one could construct a coannihilation model in which $(M_\pm/m_\pm)_0 \gtrsim 1$, for instance if the heavier states had additional gauge interactions. This is somewhat similar to the small mass splittings between charged and neutral states within multiplets that arise from electromagnetic interactions.
Aside from the new model-building possibilities, it may also be the case that pre-existing models of weak-scale dark matter exhibit RG focusing.
We have also made several simplifying assumptions regarding the structure of the theory.
For instance, we assumed no dynamics affect the RG equations between $\mu = M_{GUT}$ and $\mu \sim \mathcal{O}(m_Z)$. As alluded to earlier, however, one could imagine more complicated scenarios with additional mass thresholds that alter the relative running of the couplings.
Furthermore, we considered only a single $U(1)_X$ Higgs field -- in models with multiple Higgs fields, the presence of additional ``$\tan \beta$'' parameters will affect the masses of the various particles. While this provides more model-building freedom, it requires an explanation as to why ratios of vevs would take particular values as well.
In addition, we have remained agnostic as to why the dark scale and electroweak scale might be related (in other words, why $V \approx v_{EW}$).
This represents a second hierarchy problem, and could be addressed in a UV-complete model. On a related note, one could attempt to realize a supersymmetric version of this mechanism. In practice, the additional states present in a supersymmetric theory (which contribute to $b$, must acquire masses etc.) make such examples more complicated.
In models that generate mass relations via RG focusing, achieving specific attractive ratios requires certain charge assignments or the introduction of additional states and interactions. While new states need not contribute to the dark matter relic density, they may still have properties (such as masses or charges) related to the dark matter properties. Thus, although dark sector mass relations may make direct detection more difficult, they may also point to rich alternative phenomenology.
Furthermore, RG focusing can make tightly-constrained models (such as the kinetic mixing models explored here) more palatable by relating masses without requiring serious numerical coincidences.
RG focusing in the dark sector offers a wide variety of possibilities and is worthy of future study.
\vspace{-0.2cm}
\subsection*{Acknowledgments}
\vspace{-0.3cm}
We would like to thank Kathryn Zurek, Michele Papucci and James Wells for useful conversations. AP
is supported by DoE grant DE-SC0007859 and CAREER grant NSF-PHY 0743315. JK is supported by CAREER grant NSF-PHY 0743315.
\section{Introduction}
\label{sec:intro}
Describing a given scene is considered an essential ability for humans to perceive the world. It is thus important to develop intelligent agents with such an ability, a task formally known as image captioning, which generates natural language descriptions of given images~\cite{kulkarni2013babytalk,vinyals2015show, cornia2020meshed,guo2020normalized}.
A further pursuit in the image captioning task has emerged: not only should the captions semantically reflect the content of the image, but the words in the generated captions should also be \textit{grounded} in the image, i.e., associated with visual regions in the image that correspond to them. This way, more interpretable captions are expected to be generated.
\input{figs/local_grounding_att}
To this end, the mainstream works leverage attention mechanisms that focus on certain regions of images to generate more grounded captions.
Pioneering works \cite{liu2017attention, yu2017supervising, zhou2019grounded} have explored using annotated bounding boxes as attention supervision for each visually groundable word or noun phrase in the training stage, leading to a desirable improvement in the interpretability of the generated captions. However, acquiring such region-word alignment annotations is expensive and time-consuming.
Recently, several attempts have been made to generate more grounded attention with weak supervision \cite{zhou2020more, liu2020prophet, ma2020learning}. These methods mainly focus on designing various regularization schemes so that the attended regions enclose the visual entities corresponding to the words in the generated captions. Despite the promising results achieved by the weakly supervised methods, their performance is still far from that of the fully supervised baselines. One main issue is that, without ground-truth grounding supervision, the attention for generating a visually groundable word may focus only on the most discriminative regions, failing to cover the entire object.
See Fig.~\ref{fig:vis_local_grounding_att} for an example: the grounded region for the word `woman' generated by our baseline network covers only the facial part.
While such a grounding region identifies the correct person (\textit{`the woman in blue shirt'}), the network largely ignores the body parts present in the image, which means that it fails to visually distinguish between the word `woman' and the word `face'. This problem is important, especially when the input images are not accessible to the users (\textit{e.g., a voice assistant system for blind users}). In such cases, grounding on a partial region versus the whole body may affect follow-up decisions.
Such a problem, termed the \textit{partial grounding problem} in this paper, is commonly seen in the grounded image captioning task with weak supervision. This is primarily because the objects of interest in the images are usually only partially covered by each of many redundant region proposals, so choosing the most salient proposal inevitably leads to the described problem.
In Fig.~\ref{fig:vis_error_stat}, we statistically analyze the grounding error of the baseline architecture on the Flickr30k Entities \cite{plummer2015flickr30k} dataset by casting it into three categories: partial grounding, enlarged grounding, and deviated grounding. As we can see, the partial grounding problem contributes to a large portion of the grounding error. This further confirms the necessity of solving the partial grounding problem.
\input{figs/vis_error_stat}
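For concreteness, one plausible way to assign a predicted grounding box to such error categories is sketched below; the containment criteria and thresholds are our own illustrative choices and not necessarily those used to produce the statistics in Fig.~\ref{fig:vis_error_stat}.

```python
# Hypothetical categorization of a grounding error by comparing a predicted
# box with the ground-truth box. Thresholds and criteria are illustrative.
def box_area(b):
    x1, y1, x2, y2 = b
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def intersection(a, b):
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    return box_area((x1, y1, x2, y2))

def categorize(pred, gt, iou_thr=0.5, cover_thr=0.8):
    inter = intersection(pred, gt)
    union = box_area(pred) + box_area(gt) - inter
    if union > 0 and inter / union >= iou_thr:
        return "correct"
    if box_area(pred) > 0 and inter / box_area(pred) >= cover_thr:
        return "partial"    # prediction mostly inside the object, but too small
    if box_area(gt) > 0 and inter / box_area(gt) >= cover_thr:
        return "enlarged"   # object mostly inside the prediction, but too big
    return "deviated"

gt = (100, 100, 300, 400)                    # whole-person ground truth
print(categorize((120, 110, 220, 200), gt))  # face-only box
```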
Based on this observation, we propose a simple yet effective method to alleviate the partial grounding problem. The key ingredient of our method is a distributed attention mechanism that enforces the network to attend to multiple semantically consistent regions. In this way, the union of the attended regions should form a visual region that encloses the object completely.
We follow the previous method~\cite{zhou2019grounded} and build our baseline network based on the widely used up-down architecture \cite{anderson2018bottom}. The attention module in the baseline network is augmented with multiple branches, and region proposal elimination is used to enforce different attention branches to focus on different regions.
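As a rough sketch of the idea (a deliberate simplification; in the actual network the branches are learned jointly and attend via soft weights), each branch can be viewed as attending over the proposals that remain after the previously top-attended proposal has been eliminated:

```python
# Minimal, hypothetical sketch of the distributed attention idea:
# K branches each softmax over the remaining proposals, and the top-attended
# proposal is eliminated before the next branch attends.
import math

def softmax(scores):
    m = max(scores.values())
    exps = {i: math.exp(s - m) for i, s in scores.items()}
    z = sum(exps.values())
    return {i: e / z for i, e in exps.items()}

def distributed_attention(logits, num_branches):
    """logits: {proposal_id: attention logit}. Returns the union of per-branch
    argmax proposals and the per-branch attention weights."""
    remaining = dict(logits)
    attended, per_branch = [], []
    for _ in range(min(num_branches, len(remaining))):
        alpha = softmax(remaining)
        top = max(alpha, key=alpha.get)
        attended.append(top)
        per_branch.append(alpha)
        del remaining[top]          # region proposal elimination
    return attended, per_branch

# Toy proposals: 'face' is most salient, but 'body' overlaps the same person.
logits = {"face": 2.5, "body": 2.1, "shirt": 1.9, "background": -1.0}
attended, _ = distributed_attention(logits, num_branches=2)
print("union of attended proposals:", attended)
```

In this toy example, the first branch picks the most salient `face' proposal, and elimination forces the second branch onto the complementary `body' proposal, so their union covers the whole person.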
We conduct various experiments to verify the effectiveness of our method.
From our experiments, we show that the partial grounding problem is a crucial issue behind inferior grounding results, and that adding our distributed attention to the baseline method achieves a larger performance gain in grounding accuracy than that of state-of-the-art methods relying on sophisticated network designs.
In summary, our technical contributions are as follows:
\begin{itemize}
\item {We find that alleviating the partial grounding issue, which has been overlooked by all previous methods, is critical to promoting grounding performance.}
\item {We propose a novel solution to the partial grounding issue by introducing a distributed attention mechanism that enforces the network to aggregate the semantically consistent regions across the entire image.}
\item {By testing our method on the Flickr30k Entities dataset and the ActivityNet Entities dataset, we achieve significant gains in grounding accuracy compared with state-of-the-art methods.}
\end{itemize}
\section{Related Work}
\label{sec:related_work}
\paragraph{\bf Grounded Image Captioning}
Image captioning is an important task at the juncture of computer vision and natural language processing.
Traditional methods \cite{das2013thousand,kulkarni2013babytalk,mitchell2012midge,zhang2020relational} mainly adopt pre-defined templates with slots that are filled with visual cues detected in the images. The performance of these methods is limited.
With the advent of deep neural networks, the caption quality has been greatly improved.
State-of-the-art methods \cite{yang2019auto,you2016image,lu2017knowing,anderson2018bottom,huang2019attention,guo2020normalized,cornia2020meshed} make use of various forms of the attention mechanism, attending to different regions of the images while generating captions. The attention mechanism can be applied to various types of data, including spatial grids \cite{lu2017knowing}, semantic metadata \cite{yang2019auto,you2016image}, and object region proposals \cite{anderson2018bottom,huang2019attention}. Despite the great success of these methods in terms of captioning performance, their grounding accuracy is still not satisfactory.
Several attempts have been made to improve the grounding accuracy. Some pioneering works \cite{liu2017attention, yu2017supervising, zhou2019grounded,zhang2020relational} obtained improved grounding results with the help of the attention mechanism in a fully supervised manner. However, labeling the word-region correspondences is time-consuming and difficult to scale up to large datasets. Recently, researchers have started to use different kinds of regularization schemes to improve the grounding accuracy in a weakly supervised manner.
Zhou et al. \cite{zhou2020more} adopted the teacher-student scheme and distilled the knowledge learned by a Part-of-Speech enhanced image-text matching network \cite{lee2018stacked} to improve the grounding accuracy of the image captioning model.
Ma et al. \cite{ma2020learning} employed the cyclical training regimen as a regularization, in which a generator and a localizer are trained jointly to regularize the attention.
Liu et al. \cite{liu2020prophet} proposed the \emph{Prophet Attention}, which uses future information to compute the ideal attention and then takes the ideal attention as regularization to alleviate the deviated attention. The abovementioned methods utilized different regularization schemes to improve the grounding accuracy, however, they all neglect the partial grounding issue as described previously.
To this end, we propose a distributed attention mechanism for aggregating semantically consistent regions to alleviate the partial grounding problem.
\paragraph{\bf Visual Grounding}
The visual grounding task aims at learning fine-grained correspondences between image regions and visually groundable noun phrases.
Existing works can be roughly divided into two categories: supervised and weakly supervised.
The supervised methods \cite{plummer2018conditional, liu2020learning, dogan2019neural, yu2018mattnet} took bounding boxes as supervision to enforce the alignment between image regions and noun phrases, and have achieved remarkable success.
However, annotating the region-word alignments is expensive and time-consuming, which makes it difficult to scale to large datasets.
Weakly supervised methods \cite{liu2019knowledge,liu2021relation, rohrbach2016grounding,chen2018knowledge,gupta2020contrastive,wang2020improving} aimed at learning the correspondence with only image-caption pairs.
Some studies utilize reconstruction regularization~\cite{rohrbach2016grounding,liu2019knowledge} to learn visual grounding.
Rohrbach et al.~\cite{rohrbach2016grounding} first generate a candidate bounding box by the attention mechanism and then reconstruct the phrase based on the selected bounding box.
Liu et al.~\cite{liu2019knowledge} further boost the grounding accuracy by introducing a contextual entity and modeling the relationship between the target entity (subject) and the contextual entity (object).
Contrastive learning is also exploited in several methods~\cite{wang2020improving,gupta2020contrastive}.
Wang et al.~\cite{wang2020improving} learn a score function between the region-phrase pairs for distilling knowledge from a generic object detector for weakly supervised grounding.
Gupta et al.~\cite{gupta2020contrastive} maximize a lower bound on mutual information between sets of the region features extracted from an image and contextualized word representations.
Chen et al.~\cite{chen2018knowledge} facilitate weakly supervised grounding by considering both visual and language consistency and leveraging complementary knowledge from the feature extractor.
Our task is different from visual grounding in that our caption is to be generated rather than given for grounding.
\paragraph{\bf Weakly Supervised Object Detection}
WSOD aims to learn the localization of objects with only image-level labels (e.g., ones given for classification) and can be roughly divided into methods based on region proposals or class activation maps (CAMs)~\cite{zhou2016learning}. Proposal-based approaches~\cite{song2014learning,gokberk2014multi,bilen2015weakly} formulate this task as multiple instance learning.
Bilen \textit{et al.}~\cite{bilen2016weakly} select proposals by parallel detection and classification branches in deep convolutional networks.
Contextual information~\cite{kantorov2016contextlocnet}, attention mechanism~\cite{teh2016attention}, gradient map~\cite{shen2019category} and semantic segmentation~\cite{wei2018ts2c} are leveraged to learn accurate object proposals.
CAM-based methods~\cite{singh2017hide, zhang2018adversarial, zhang2020inter, pan2021unveiling} produce localization maps by aggregating deep feature maps using a class-specific fully connected layer.
Despite the simplicity and effectiveness of CAM-based methods, they tend to identify only small discriminative parts of objects. To improve the activation coverage of CAMs, several methods~\cite{singh2017hide, yun2019cutmix, choe2019attention, zhang2018adversarial} adopted adversarial erasing on input images or feature maps to drive localization models to focus on extended object parts.
SPG~\cite{zhang2018self} and I$^{2}$C~\cite{zhang2020inter} increased the quality of localization maps by introducing the constraint of pixel-level correlations into the network.
SPA~\cite{pan2021unveiling} and TS-CAM~\cite{gao2021ts} obtain accurate localization maps with the help of long-range structural information. Different from the WSOD task, our task leverages the correctly attended regions for caption generation.
\section{Methodology}
\label{sec:method}
\input{figs/overview}
In this section, we first give the notations of image captioning (Sec~\ref{sec:method_nota}).
We then briefly introduce the attention based encoder-decoder baseline network as our image captioning baseline (Sec~\ref{sec:method_review}).
In Sec~\ref{sec:method_da}, we describe in detail the proposed distributed attention mechanism for aggregating semantically consistent partial regions for more grounded image captioning.
\subsection{Notations}
\label{sec:method_nota}
The generic image captioning task aims at generating a sentence $Y$ to describe the content of a given image $I$. We follow the pioneering works~\cite{zhou2019grounded,ma2020learning} on weakly supervised grounded image captioning and represent the input image as a set of region proposals $R=\{r_1, r_2,...,r_N\} (r_i \in \mathbb{R}^d)$ and an image feature map $f_c$, where $N$ and $d$ are the number and the feature dimension of the region proposals, respectively. We generate $r_i$ with Faster RCNN \cite{ren2015faster}, while $f_c$ is extracted from a pretrained ResNet~\cite{he2016deep} model.
The generated sentence is represented as a sequence of one-hot vectors $Y=\{y_1, y_2,...,y_T\} (y_t \in \mathbb{R}^s)$, where $T$ is the length of the sequence and $s$ is the size of the vocabulary.
\subsection{Revisit the Baseline Method}
\label{sec:method_review}
Our method is built upon GVD \cite{zhou2019grounded}, except that we do not use its self-attention loss for region feature embedding.
Fig.~\ref{fig:pipeline}(b) shows the core part of GVD, the language generating module, which is an extension of the widely used bottom-up and top-down attention network \cite{anderson2018bottom} for the image captioning task.
Specifically, the language generating module consists of two LSTM layers: the attention LSTM layer and the language LSTM layer. The attention LSTM is used to identify the importance of different regions in the input image for generating the next word. It takes as input the global image feature vector $v_g$, the previous word embedding vector $e_{t-1}=W_e y_{t-1}$, and the previous hidden state $h_{t-1}^1$, and encodes them into the current hidden state $h_t^1$:
\begin{equation}\label{eq:lstm1}
h_t^1 = LSTM_1 ([v_g;e_{t-1}], h_{t-1}^1),
\end{equation}
where $W_e$ is the learnable word embedding matrix, $[\cdot;\cdot]$ denotes the concatenation operation, and we obtain $v_g$ by applying global average pooling to the image feature map $f_c$. Note that the description of the cell states for both LSTM layers is omitted for notational clarity.
To obtain the attention weight $\alpha_t$ over the image regions, the hidden state $h_{t}^1$ produced by the above-mentioned attention LSTM layer is further fed into an attention module followed by a softmax layer as:
\begin{equation}\label{eq:attention_w}
\begin{aligned}
z_{i,t} &= W_a tanh(W_r r_i \oplus W_h h_{t}^1),\\
\alpha_t &= softmax(z_t),
\end{aligned}
\end{equation}
\noindent where $W_a$, $W_r$ and $W_h$ are the learnable weight matrices and $\oplus$ is the element-wise addition operation. Given the attention weight $\alpha_t$, the attended region feature $\hat{r}_t$ can be calculated as the weighted combination of the regions features:
\begin{equation}\label{eq:attention_feat}
\hat{r}_t = \alpha_t^\top R
\end{equation}
where $R \in \mathbb{R}^{N \times d}$ is a feature matrix whose rows are the features of the region proposals. Following \cite{zhou2019grounded,ma2020learning}, we also enhance the attended region feature $\hat{r}_t$ with the image features $f_c$ to get the attended image feature $\hat{f}_t$:
\begin{equation}\label{eq:convfeat}
\begin{aligned}
\hat{f}_t = \hat{r}_t + att_{img}(f_c)
\end{aligned}
\end{equation}
where $att_{img}$ is the attention block for aggregating the image feature map $f_c$ into a feature vector.
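As an illustration (not part of the original implementation), the attention step of Equ. \ref{eq:attention_w} and Equ. \ref{eq:attention_feat} reduces to a softmax over per-region scores followed by a weighted sum of region features. In this plain-Python sketch the raw scores $z_{i,t}$ are taken as given, standing in for the learned projections $W_a$, $W_r$ and $W_h$:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw attention scores z_t.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(region_feats, scores):
    """Attention weights alpha_t = softmax(z_t) and the attended feature
    r_hat_t = alpha_t^T R, i.e. a weighted sum over region features."""
    alpha = softmax(scores)
    dim = len(region_feats[0])
    r_hat = [sum(a * r[j] for a, r in zip(alpha, region_feats))
             for j in range(dim)]
    return alpha, r_hat
```

With a strongly dominant score, the attended feature collapses to the corresponding region's feature, which is the regime in which a single region is effectively selected as the grounding result.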
Finally, the attended image feature $\hat{f}_t$ is fed into the language LSTM for producing the conditional probability distribution $p(y_t|y_{1:t-1})$ of the next word $y_t$ over all possible outputs as:
\begin{equation}\label{eq:lstm2}
\begin{aligned}
h_{t}^2 = LSTM_2 ([h_t^1; \hat{f}_t], h_{t-1}^2),\\
p(y_t|y_{1:t-1}) = softmax(W_o h_{t}^2),
\end{aligned}
\end{equation}
where $y_{1:t-1}$ denotes the sequence of the previously predicted words, $p(y_t|y_{1:t-1})$ denotes the conditional probability of the word $y_t$ given the previous words, $h^2_t$ is the hidden state of the language LSTM and $W_o$ is the learnable weight.
In the training phase, the network is optimized by adopting the teacher forcing strategy given the ground truth caption sequence $Y^* = \{y_1^*, y_2^*,...,y_T^*\}$.
The training objective is to minimize the following cross-entropy loss $L_{CE}(\theta)$:
\begin{equation}\label{eq:celoss}
\begin{aligned}
L_{CE}(\theta) = -\sum_{t=1}^{T}{\log(p_{\theta}(y^*_t|y^*_{1:t-1}))}
\end{aligned}
\end{equation}
where $\theta$ denotes the trainable parameters. The region with the maximum attention weight will be selected as the grounding result.
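For concreteness, the teacher-forcing objective of Equ. \ref{eq:celoss} is a sum of negative log-likelihoods of the ground-truth words under the per-step output distributions. The sketch below assumes the distributions are given rather than produced by the language LSTM:

```python
import math

def teacher_forcing_ce(step_probs, gt_indices):
    """L_CE = -sum_t log p(y*_t | y*_{1:t-1}); step_probs[t] is the predicted
    distribution at step t, conditioned on the ground-truth prefix
    (teacher forcing), and gt_indices[t] is the gold word index."""
    return -sum(math.log(p[i]) for p, i in zip(step_probs, gt_indices))
```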
\input{figs/vis_bbox}
\subsection{Region Aggregation with Distributed Attention}
\label{sec:method_da}
In this section, we describe in detail our distributed attention mechanism for aggregating semantically consistent regions to resolve the partial grounding problem.
As shown in Fig. \ref{fig:vis_bbox}, the region proposals extracted by a pre-trained Faster-RCNN are redundant, with each object partially covered by many region proposals. It is likely that the attention mechanism used in the baseline network will only identify the most discriminative parts of the objects.
Based on the above discussion, we propose the distributed attention mechanism for aggregating semantically consistent regions for generating the words. Fig. \ref{fig:pipeline}(a) illustrates the overall pipeline of our proposed method. Specifically, to make the attention focus on different regions, we augment the attention module in the baseline network with $K$ attention branches, and enforce the focused regions of all the attention branches to be mutually exclusive by iteratively applying the attention branches on the region proposals:
\begin{equation}\label{eq:mattn}
\begin{aligned}
a_t^{k} &= Attention^k(\hat{R}^k, h^1_{t-1}),\\
\hat{R}^k &= \{r_i \in R, r_i \notin M^{k-1}\}
\end{aligned}
\end{equation}
where $Attention^k()$ is the function for computing the attention weights of the $k$-th attention branch over the region proposals as described in Equ. \ref{eq:attention_w}, $k \in [1,K]$ is the branch index, $\hat{R}^k$ denotes the set of region proposals used in attention branch $k$, $a_t^k$ is the attention weights predicted by the $k$-th attention branch, and $M^{k-1}$ denotes the set of $k-1$ region proposals selected by the previous $k-1$ attention branches. Note that these attention branches do not share the same weights. The attended features of all $K$ attention branches are further passed through the language LSTM (Equ. \ref{eq:attention_feat}, \ref{eq:convfeat}, \ref{eq:lstm2}) to get $K$ outputs.
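The iterative masking of Equ. \ref{eq:mattn} can be sketched in plain Python as follows. Hard selection of each branch's top region stands in for the soft attention weights, and the per-branch scoring functions are hypothetical stand-ins for the learned branches:

```python
def distributed_attention(regions, branch_scorers):
    """Each branch attends only to the regions not yet selected by previous
    branches (M^{k-1} in the equation above) and contributes its top region."""
    selected = []                        # indices picked by earlier branches
    remaining = set(range(len(regions)))
    for scorer in branch_scorers:        # one scorer per attention branch
        best = max(remaining, key=lambda i: scorer(regions[i]))
        selected.append(best)
        remaining.discard(best)
    return selected
```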
At the training stage, we use the teacher-forcing strategy and apply the cross-entropy loss to all the $K$ outputs. During testing, at each time step, we select the word predicted by the most number of attention branches and combine the corresponding regions together into a single region and use it as the grounding output.
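The test-time fusion just described, namely majority voting over the $K$ branch predictions followed by a union of the agreeing branches' boxes, can be sketched as follows (boxes as $(x_1, y_1, x_2, y_2)$; a minimal illustration rather than the actual implementation):

```python
def vote_and_union(branch_words, branch_boxes):
    """Select the word predicted by the most attention branches, then merge
    the boxes of the agreeing branches into a single grounding region."""
    counts = {}
    for w in branch_words:
        counts[w] = counts.get(w, 0) + 1
    word = max(counts, key=counts.get)
    boxes = [b for w, b in zip(branch_words, branch_boxes) if w == word]
    x1 = min(b[0] for b in boxes)
    y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes)
    y2 = max(b[3] for b in boxes)
    return word, (x1, y1, x2, y2)
```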
Through our method, the distributed attention model will learn to aggregate semantically consistent regions when generating the word to alleviate the partial grounding problem. The union of the attended regions should form a visual region that encloses the object of interest completely.
\section{Experiments}
\label{sec:exp}
\subsection{Experimental Settings}
\noindent\textbf{Datasets.}
We conducted the main experiments on the widely used Flickr30k-Entities dataset \cite{plummer2015flickr30k}. It contains $31k$ images in total, and each image is annotated with $5$ sentences. In addition, it also contains $275k$ bounding box annotations and each bounding box corresponds to a visually groundable noun phrase in the caption. Following GVD \cite{zhou2019grounded}, we used the data split setting from Karpathy et al. \cite{karpathy2015deep}, which has $29k$ images for training, $1k$ images for validation, and $1k$ for testing. The vocabulary size of the dataset is $8639$.
\noindent\textbf{Evaluation Metrics.}
We used the standard captioning evaluation toolkit \cite{chen2015microsoft} to measure the captioning quality. Four commonly used language evaluation metrics are reported, i.e., BLEU \cite{papineni2002bleu}, METEOR \cite{denkowski2014meteor},
CIDEr \cite{vedantam2015cider} and SPICE \cite{anderson2016spice}.
To evaluate the grounding performance, we used the metrics $F1_{all}$ and $F1_{loc}$ defined in GVD \cite{zhou2019grounded}. For $F1_{all}$, a prediction is considered as correct if the object word is correctly generated and the Intersection-over-Union (IOU) of the predicted bounding box and the ground truth bounding box is larger than $0.5$. In $F1_{loc}$, only the correctly generated words are considered. Both metrics were averaged over classes.
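For reference, the IOU test underlying both metrics can be written down directly (boxes as $(x_1, y_1, x_2, y_2)$; a sketch, not the evaluation toolkit itself):

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def grounding_correct(pred_box, gt_box, word_correct):
    """F1_all counts a prediction only if the word is correctly generated
    and the boxes overlap with IOU > 0.5."""
    return word_correct and iou(pred_box, gt_box) > 0.5
```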
\noindent\textbf{Implementation Details.}
We implemented our method with PyTorch, and all the experiments were conducted on two V100 GPUs. Following the previous works \cite{zhou2019grounded, ma2020learning, liu2020prophet}, we adopted the widely used Faster R-CNN network \cite{ren2015faster} pre-trained on Visual Genome \cite{krishna2017visual} by GVD \cite{zhou2019grounded} to extract $100$ region proposals for each image. The word embedding dimension in the captions was set to $512$, and the word embedding layer was trained from scratch. The dimension of the hidden states for both Attention and Language LSTM was set to $1024$.
The network was optimized with the Adam Optimizer \cite{kingma2014adam}, with an initial learning rate set to $5e^{-4}$ and decayed by a factor of $0.8$ every three epochs. The batch size was set to $64$. We trained the network with a single attention branch for $20$ epochs and included the other branches in the training loop afterward. The whole training process takes less than one day.
\input{figs/local_grounding_cmp}
\subsection{Comparison with Prior Work}
\noindent \textbf{Quantitative Comparison.}
We compared our method with state-of-the-art weakly supervised grounded image captioning methods, including GVD \cite{zhou2019grounded}, Prophet \cite{liu2020prophet}, Zhou et al. \cite{zhou2020more} and Cyclical \cite{ma2020learning}, to see the effectiveness of our method on the grounding accuracy.
Note that, in Zhou et al. \cite{zhou2020more}, various techniques are used to improve the captioning performance including SCST \cite{rennie2017self} and scheduled sampling \cite{bengio2015scheduled}, while we only employ the standard cross-entropy loss used in the baseline network during training.
As shown in Table \ref{tab:comp_sota}, by applying our proposed distributed attention module to the baseline method, we achieved a significant improvement in grounding accuracy ($F1_{all}$ and $F1_{loc}$) compared with the other weakly supervised grounded image captioning methods. Compared with the supervised baseline method in Cyclical, the grounding performance of our method is only slightly inferior. This further confirms the necessity of solving the partial grounding problem in this task.
\input{tables/main_results}
\noindent \textbf{Qualitative Comparison.} We qualitatively compare our method with the baseline method to see how the partial grounding problem can be alleviated. As shown in Fig. \ref{fig:vis_local_grounding_cmp}, the predicted grounding regions of our method are more accurate and can cover the whole objects, while the selected regions of the baseline method may only cover the most salient parts.
\noindent \textbf{Performance on ActivityNet-Entities.}
We further tested the performance of our method on the ActivityNet-Entities dataset \cite{zhou2019grounded} to see whether our method also works on the grounded video description task. By extending GVD with our distributed attention module, we achieved significant improvement on the validation set for both $F1_{all}$ (from $3.70$ to $5.32$) and $F1_{loc}$ (from $12.70$ to $21.06$) while keeping the captioning performance comparable. This further confirms the effectiveness of our proposed method.
\subsection{Qualitative Results.}
\noindent \textbf{Example Results.} In Fig. \ref{fig:vis_res}, we present some example results including the generated captions as well as the corresponding grounding results. As we can see, our method is able to generate fine-grained image captions with accurate groundings for both foreground objects (e.g., 'man') and the background (e.g., 'beach').
\noindent \textbf{Visualization of the Distributed Attention.} We visualize the distributed attention in Fig. \ref{fig:vis_matt} to see how the region proposals are selected and fused. When generating a word, the distributed attention will attend to multiple semantically consistent regions with different locations, and the attended region proposals with the same semantic meanings will be fused to get the grounding results. We can see that our distributed attention has the ability to aggregate semantically consistent regions to enclose the object of interest completely and thus alleviate the partial grounding problem.
\input{figs/vis_res}
\subsection{Ablation Study}
\noindent\textbf{Effect of $K$.}
We tested the performance of our method with a varied number of attention branches $K$. As shown in Table \ref{tab:num_k}, the grounding performance of our method outperformed the state-of-the-art methods by a large margin when the number of attention branches $K$ is larger than $3$. When $K$ is smaller than $3$, the grounding accuracy is degraded. The main reason is that, due to the redundant region proposals extracted with Faster-RCNN, the distributed attention module is not able to discover enough regions when $K$ is too small.
\noindent\textbf{Effect of Region Proposal Elimination.}
As shown in Table \ref{tab:abl_erasing}, we tested the performance of our method without region proposal elimination. Our method can still improve the grounding accuracy in this setting, mainly because different initializations of the attention branches lead to slightly different attention results, so the partial grounding problem is also partly alleviated. Region proposal elimination explicitly enforces the distributed attention module to focus on different semantically consistent regions, which further boosts the grounding performance.
\noindent\textbf{Relationship with Multi-head Attention.}
Our method has a close relationship with the multi-head attention proposed in \cite{vaswani2017attention}. Here, we compared our distributed attention with multi-head attention by replacing the attention module in our baseline network with multi-head attention and varying the number of attention heads from $2$ to $6$. The best result is achieved with the number of attention heads set to $4$, that is, $4.75$ for $F1_{all}$ and $13.19$ for $F1_{loc}$, which is lower than our result shown in Table \ref{tab:comp_sota}. One possible reason is that, in our work, we explicitly enforce the attention branches to focus on multiple regions with consistent semantics, which is not guaranteed for multi-head attention.
\input{tables/abl_k}
\input{tables/abl_erasing}
\subsection{Error Analysis}
In this section, to better understand the grounding performance, we analyzed the grounding accuracy in detail by classifying all the predictions on the Flickr30k-Entities test set into five categories:
\input{figs/vis_matt}
\textbf{Mis Cls}: Missing Class computes the ratio of noun words that are in the ground truth reference captions but not in the corresponding predicted captions.
\textbf{Hallu Cls}: Hallucinated Class computes the ratio of noun words that are in the predicted captions but not in the corresponding ground truth reference captions.
\textbf{Corr Grd}: Correct Grounding computes the ratio of noun words that are correctly generated and grounded ($IOU > 0.5$).
\textbf{Part Grd}: Partial Grounding computes the ratio of noun words that are correctly generated but partially grounded (the predicted bounding box is contained in the ground truth bounding boxes, but the $IOU$ of them is less than $0.5$).
\textbf{Other Err}: Other Error Grounding computes the ratio of noun words that are correctly generated but fall in neither Part Grd nor Corr Grd.
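Given the ground truth for a single generated noun word, the per-prediction categories above can be assigned as in the following sketch (Mis Cls is computed over reference words absent from the prediction, so it is not covered by this per-word function):

```python
def classify_prediction(word_in_reference, iou_val, contained):
    """Assign a generated noun word to one of the error-analysis categories.
    `contained` means the predicted box lies inside the ground-truth box."""
    if not word_in_reference:
        return "Hallu Cls"   # word generated but absent from the references
    if iou_val > 0.5:
        return "Corr Grd"    # word correct and well grounded
    if contained:
        return "Part Grd"    # box inside the ground truth but IOU <= 0.5
    return "Other Err"
```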
\input{tables/ground_err_stat}
As shown in Table \ref{tab:ground_err_stat}, the classification error (missing class and hallucinated class) accounts for the largest portion of the error when computing the grounding accuracy. One main reason is that, for a given image, there exist various ways of describing its content, and directly comparing the generated captions with the corresponding references may lead to errors in computing the classification accuracy. This is why the metric $F1_{all}$, which considers the classification accuracy, is always much lower than $F1_{loc}$. This is also a limitation of the current evaluation metrics for this task.
When only considering the predictions whose classification labels are correct, our proposed method achieved many more correctly grounded predictions than the baseline method. This further confirms the importance of solving the partial grounding issue and the superiority of our proposed method.
We also measured the upper-bound grounding accuracy of the baseline method by regarding all partially grounded predictions as correct. In this case, $F1_{all}$ and $F1_{loc}$ are $10.65$ and $30.28$, respectively. Comparing this with our results ($F1_{all}:7.91$, $F1_{loc}:21.5$), we can see that there is still large room for improvement.
\subsection{Limitations}
The main limitation of our proposed method is that the predicted grounding regions might be larger than the target objects, as shown in Fig. \ref{fig:vis_lim}. One important reason might be that the attention weights are calculated based on contextual information from the previous words rather than the word to be generated. Thus, the attention weights predicted by some attention branches might suffer from a 'deviated focus' problem, and fusing the deviated regions together may lead to enlarged grounding regions. One possible solution is to combine our work with other state-of-the-art methods (e.g., \cite{liu2020prophet,ma2020learning}). As this is not the main focus of our work, we leave it for future work.
\input{figs/vis_limitation}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we study the problem of generating image captions with accurate grounding regions without any grounding annotations. We show that alleviating the partial grounding issue is critical to improving the grounding performance, a point that has been overlooked in many previous works. To this end, we propose the distributed attention mechanism to enforce the network to aggregate different regions with consistent semantics while generating the words. Extensive experiments have shown the effectiveness of our proposed method. By incorporating the proposed module into the baseline network, we achieved a significant improvement in grounding accuracy compared with the state of the art.
\section{Introduction}
Blazars -- the AGNs dominated by the Doppler boosted radiation of the
relativistic jets -- provide an exceptional opportunity for studies of
the physics of the innermost parts of AGN jets. However, to take advantage of
this opportunity the radiative mechanisms of the jet emission must be
identified. While a low-energy component of blazar spectra is
uniquely identified as due to the synchrotron process, the nature of a
high-energy component is still uncertain. It can be contributed by
inverse-Compton (IC) radiation of directly accelerated electrons
(hereafter by electrons we mean both electrons and positrons), by
synchrotron radiation of pair cascades powered by hadronic processes
and accompanying pair cascades, and, finally, by synchrotron emission
of protons and muons \citep[see review by][]{sm01}. Seed photons for
the IC process are provided by locally operating synchrotron mechanism
as well as by external sources, like broad line region (BLR) and/or
dusty tori. The IC models involving Comptonization of local
synchrotron radiation are called synchrotron-self-Compton (SSC)
models, and those involving Comptonization of external radiation are
usually coined external-Compton (EC) models.
In this paper we focus on blazars with a dense radiative environment,
i.e. those hosted by quasars. Their $\gamma$-ray fluxes usually
exceed fluxes of a low-energy, synchrotron spectral component by a
large factor, and their X-ray spectra are often very hard. About 30\%
of them have X-ray spectral index $\alpha_X < 0.5$
\citep{cap97,rt00,dsg05}. Such spectra cannot be produced in a fast
cooling regime. This practically eliminates models which predict
production of X- and $\gamma$-rays by synchrotron emission of
ultrarelativistic electrons, and also puts severe constraints on SSC
models. We demonstrate that spectra of such blazars can be explained
in terms of the SSC models only for magnetic fields much below the
equipartition with electrons and that no such constraint applies for
EC models provided the bulk Lorentz factor of the source is sufficiently
high.
\section{Magnetic fields in SSC and ERC models}
We investigate here broad-line blazars (those hosted by quasars) with
$q \equiv F_{\gamma}/F_{syn}\gg 1$ and assume that their $\gamma$-ray
component is produced by IC process. Both components are assumed to
be produced by the same population of electrons which in the source
co-moving frame have the isotropic distribution. Since in the
broad-line blazars the Klein-Nishina (KN) and pair-production effects are likely to be
insignificant \citep{mod05}, we ignore them in our considerations
below. With all these assumptions
\begin{equation}
u_{\gamma}'/u_{seed}' = u_{SSC}'/u_{syn}' = u_{syn}'/u_B' =
u_{EC}'/u_{ext}' = A \,
\label{eq:ugus}
\end{equation}
where $A$ is the Thomson amplification factor, $u_B'$ is the magnetic
energy density, and $u_{\gamma}'$, $u_{seed}'$, $u_{syn}'$,
$u_{SSC}'$, $u_{ext}'$, $u_{EC}'$, are energy densities of
$\gamma$-ray, seed photon, synchrotron, SSC, external and EC radiation
fields, respectively, all as measured in the source co-moving frame.
\subsection{Magnetic vs. radiation energy density}
In the SSC model $u_{seed}'=u_{syn}'$ and $u_{\gamma}' = u_{SSC}'$.
Therefore,
\begin{equation}
q = {F_{\gamma} \over F_{syn}} = {u_{SSC}'\over u_{syn}'} = A \, ,
\label{eq:qssc}
\end{equation}
and
\begin{equation}
{u_B' \over u_{\gamma}'} = {u_{syn}' \over u_{SSC}'} {u_B' \over
u_{syn}'} = {1 \over A^2} = {1 \over q^2} \, .
\label{eq:ussc}
\end{equation}
In the EC model $u_{seed}'=u_{ext}'$ and $u_{\gamma}' = u_{EC}'$, and,
therefore,
\begin{equation}
q = {F_{\gamma} \over F_{syn}} = {f u_{EC}'\over u_{syn}'} \, ,
\label{eq:qerc}
\end{equation}
and
\begin{equation}
{u_B' \over u_{\gamma}'} = {u_{syn}'/A \over u_{EC}'} = {f \over
Aq} \, ,
\label{eq:uerc}
\end{equation}
where the factor $f \simeq ({\cal D}/\Gamma)^2$ accounts for
anisotropy of IC radiation in the source comoving frame \citep{msb03}
and ${\cal D} \equiv 1/[\Gamma(1-\beta \cos \theta_{obs})]$ is the
Doppler factor. Note that here, in general, $q \ne A$.
Hence, in SSC models $q \gg 1$ is possible only for $u_B' \ll
u_{\gamma}'$, while EC can give $q \gg 1$ even for $u_B' >
u_{\gamma}'$, provided $A < f/q$.
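The contrast between equations (3) and (5) can be made numerically explicit; the following sketch simply evaluates the two ratios for illustrative parameter values:

```python
def ub_over_ugamma_ssc(q):
    """SSC: u_B'/u_gamma' = 1/q^2 (equation 3)."""
    return 1.0 / q ** 2

def ub_over_ugamma_ec(q, A, f):
    """EC: u_B'/u_gamma' = f/(A q) (equation 5)."""
    return f / (A * q)
```

For $q = 10$, the SSC ratio is fixed at $10^{-2}$, while in the EC case any amplification factor $A < f/q$ already gives $u_B' > u_{\gamma}'$.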
\subsection{Electron vs. radiation energy density}
Let us approximate the geometry of the source as a piece of a
cylinder, of a cross-sectional radius $R$ and height $\lambda'$ as
measured in the source co-moving frame. The amount of energy
accumulated in relativistic electrons during the time period of the
injection of relativistic electrons is
$$ E_e' = (1-\eta_{rad}) L_{e,inj}' t_{inj}' \, , \eqno(6) $$
and energy density of the relativistic electrons is
$$ u_e' = {E_e' \over \pi R^2 \lambda'} = {(1-\eta_{rad}) L_{e,inj}'
t_{inj}' \over \pi R^2 \lambda'} \, , \eqno(7) $$
where $L_{e,inj}'$ is the rate of the energy injection via
acceleration of relativistic electrons, $t_{inj}'$ is the injection
time scale, and $\eta_{rad}$ is the fraction of the total electron
energy converted via radiative processes to photons. Energy of the
radiation produced during the flare is $E_{rad}' =
\eta_{rad}L_{e,inj}' t_{fl}' $, and energy density of the emitted
radiation is
$$ u_{rad}' = {\eta_{rad} L_{e,inj}' \over 2 \pi R^2 c} \, , \eqno(8)
$$
provided $\lambda' < R$. Hence,
$$ {u_{rad}' \over u_e'} = {\kappa \, \eta_{rad} \over 1-\eta_{rad}}
\, . \eqno(9) $$
where $\kappa = \lambda'/ (2 t_{inj}' c)$.
\subsection{Magnetic vs. electron energy density}
Noting that for $q \gg 1$, $u_{rad}' \simeq u_{\gamma}'$, and
combining equations (3), (5), and (9), we obtain for SSC model
$$ {u_B' \over u_e'} = {1 \over q^2} \, {\kappa \, \eta_{rad} \over
1-\eta_{rad}} \, ,\eqno(10) $$
and for EC model
$$ {u_B' \over u_e'} = {f \over Aq} \, {\kappa \, \eta_{rad} \over
1-\eta_{rad}} \, . \eqno(11) $$
In the broad-line blazars radiation efficiency $\eta_{rad}$ is
typically of the order of $0.5$, and then, for $\lambda' < 2 c
t_{inj}'$, the SSC models of the high-q blazars require magnetic
fields much below equipartition, $u_B'/u_e' < 1/q^2$.
In the case of EC models, equipartition of magnetic fields with
electrons is possible for any $q$, but requires
$$ A \sim {f \over q} \, {\kappa \, \eta_{rad} \over 1-\eta_{rad}} \,
. \eqno(12) $$
Noting that
$$ u_{EC}' = {L_{\gamma}' \over 2 \pi R^2 c} = {2 F_{\gamma} \over c}
\left(d_L \over R \right)^2 {1 \over f {\cal D}^4} \, \eqno(13) $$
and
$$ u_{ext}' = \Gamma^2 \, {\xi L_d \over 4 \pi r^2 c} = \Gamma^2 {\xi
F_d \over c} \left(d_L \over r\right)^2 \, , \eqno(14) $$
we obtain
$$ A={u_{EC}' \over u_{ext}'} = {2 F_{\gamma} \over \xi F_d} \, {1
\over f {\cal D}^4 (\Gamma \theta_j)^2} \, , \eqno(15) $$
where $\xi$ is the fraction of the disk radiation reprocessed into
broad lines and/or IR dust radiation at a distance $r$ at which the
flare is produced, $\theta_j = R/r$, and $d_L$ is the luminosity
distance of the blazar. Combining equations (12) and (15) gives the
condition for the Doppler factor ${\cal D}$ to have $u_B' = u_e'$,
$$ {\cal D} = \left( 2 q (1 -\eta_{rad}) F_{\gamma} \over f^2
\eta_{rad} \kappa (\Gamma \theta_j)^2 \xi F_d \right)^{1/4} \,
. \eqno(16) $$
For ${\cal D} \sim \Gamma \sim 1/\theta_j$, $\eta_{rad} \sim 0.5$, and
noting that $F_{\gamma}/F_d = (F_{\gamma}/F_{syn})(F_{syn}/F_d)=q
(F_{syn}/F_d)$, we obtain
$$ \Gamma \sim 16 \, \left(q \over 10\right)^{1/2} \, \left( {\kappa
\over (\xi/0.3)}{F_{syn} \over F_d} \right)^{1/4} \, . \eqno(17) $$
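As a sanity check, equation (17) can be evaluated directly; the sketch below reproduces the quoted normalization, $\Gamma \sim 16$ for $q=10$, $\kappa=1$, $\xi=0.3$ and $F_{syn}/F_d=1$:

```python
def bulk_lorentz_factor(q, kappa, xi, fsyn_over_fd):
    """Equipartition estimate of equation (17):
    Gamma ~ 16 (q/10)^(1/2) [kappa/(xi/0.3) * F_syn/F_d]^(1/4)."""
    return 16.0 * (q / 10.0) ** 0.5 * (kappa / (xi / 0.3) * fsyn_over_fd) ** 0.25
```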
\section{Conclusion}
Large $\gamma$-ray excesses ($q \gg 1$), observed in many broad-line
blazars can be produced by SSC process only if $B \ll B_{equip}$. No
such constraint applies to the ERC model, provided a jet Lorentz
factor is respectively high: $B \sim B_{equip}$ fits the model if
$\Gamma \sim (q/10)^{1/2} (F_{syn}/F_d)^{1/4}$.
\acknowledgements This work was partially supported by Polish MNiSW
grant 1P03D00928.
\section{Introduction}
One of the simplest and most frequently studied versions of the general three-body problem is the planar circular restricted three-body problem (henceforth CRTBP), which can be stated as follows:
\begin{itemize}
\item Two primaries, $\mathcal{M}_1$ and $\mathcal{M}_2$ at positions $X_1$ and $X_2$, respectively, follow a circular orbit around their common center of mass keeping a fixed distance $r$, while moving at constant angular velocity $\omega_0$.
\item A third body $\mathcal{M}$, which is much smaller than either $\mathcal{M}_1$ or $\mathcal{M}_2$, remains in the orbital plane of the primaries.
\item The equations of motion are derived only for the test particle $\mathcal{M}$, whose motion does not affect the primaries.
\end{itemize}
The basic formulation of the CRTBP dates back to Euler, who proposed the use of synodical coordinates $(x,y)$ instead of the inertial coordinate system $(X,Y)$ in order to simplify this problem \cite{Euler67}. The transformation between these two systems can be performed by means of the rotation matrix\footnote{Throughout the paper, $G=\mathcal{M}=\omega_{0}=r=1$ is understood.},
\begin{equation}\label{eq:rotmatrix}
\left( \begin{array}{c} x \\ y \end{array}\right)=
\left( \begin{array} {cc}
\cos\omega_0 t & -\sin\omega_0 t \\ \sin\omega_0 t & \cos\omega_0 t
\end{array} \right)
\left( \begin{array}{c} X \\ Y \end{array} \right).
\end{equation}
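As written, the transformation of equation (\ref{eq:rotmatrix}) can be evaluated numerically as in the following plain-Python sketch, with $\omega_0=1$ as adopted throughout:

```python
import math

def inertial_to_synodic(X, Y, t, omega0=1.0):
    """Apply the rotation matrix of the equation above to the inertial
    coordinates (X, Y) at time t, returning synodic coordinates (x, y)."""
    c, s = math.cos(omega0 * t), math.sin(omega0 * t)
    return c * X - s * Y, s * X + c * Y
```

At $t=0$ the map is the identity, and being a rotation it preserves distances from the origin.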
Using this transformation, Lagrange proved the existence of five equilibrium points for the system, named Lagrangian points. The subject of equilibrium points in the CRTBP has been studied extensively in the literature (see {\it e.g.} \cite{Henon-book} and references therein). The discovery of the Trojan asteroids around the Lagrangian points $L_4$ and $L_5$ in the Sun-Jupiter system \cite{Murray-book}, and the recent observations of asteroids around $L_4$ for the Sun-Earth system \cite{Connors2011}, increased theoretical research on the subject (see {\it e.g.} \cite{Morbidelli2005}). It should be noted that, in spite of the fact that the CRTBP is much simpler than the general three-body problem, it is non-integrable, which opened the possibility of analyzing the orbits systematically \cite{Henon-book,Stephani-Book}.
Under the assumption of weak fields and low velocities, and as a first approximation to the relativistic CRTBP, in 1967 \cite{Krefetz1967} Krefetz considered for the first time the post-Newtonian equations of motion for the CRTBP, using the Einstein-Infeld-Hoffmann (EIH) formalism \cite{Einstein1937}. The Lagrangian for this system was explicitly presented by Contopoulos in 1976 \cite{Contopoulos1976}, and some typos in the Jacobian constant were corrected by Maindl \etal \cite{Maindl1994}, who also studied the deviations due to the post-Newtonian corrections on the Lagrangian points \cite{Maindl1996}. Concerning analytical solutions to the general relativistic three-body problem, Yamada \etal \cite{Yamada2010} obtained a collinear solution by using the EIH approximation up to the first order. In a later paper, they studied the post-Newtonian triangular solution for three finite masses, showing that such a configuration is not always equilateral \cite{Yamada2012}. Recently, as a first study of chaos in the post-Newtonian CRTBP, Huang and Wu \cite{Huang2014} studied the influence of the separation between the primaries, concluding that if the separation is small enough, the post-Newtonian dynamics is qualitatively different. In particular, some Newtonian bounded orbits become unstable.
To avoid the cumbersome equations of motion that take place in the post-Newtonian formalism, Steklain and Letelier used the Paczy\'nski--Wiita pseudo-Newtonian potential to study the dynamics of the CRTBP in the Hill's approximation \cite{Steklain2006}, finding that some pseudo-Newtonian systems are more stable than their Newtonian counterparts. Following this idea, and considering that the Jacobian constant is not preserved in the post-Newtonian approximation (which limits the dynamical studies), in the present paper we shall use an alternative approach to studying the dynamics of the pseudo-Newtonian CRTBP. To do so, we derive an approximate potential for the gravitational field of two uncharged spinless particles modeled as sources with multipole moment $m$, by using the Fodor-Hoenselaers-Perj\'es (FHP) procedure \cite{Fodor1989} (taking into account the corrections made by Sotiriou and Apostolatos \cite{Sotiriou2004}). Abusing astrophysical terminology, we call the new potential pseudo-Newtonian, due to the fact that in this kind of approaches the common Newtonian formulas are used even when the resulting potentials do not satisfy the Laplace equation. Unlike other pseudo-Newtonian approaches, the final expressions are not ad-hoc proposals but are derived directly from the multipole structure of the source.
The paper is organized as follows: In section \ref{sec:PNMEEM}, by means of the FHP procedure, we calculate the gravitational pseudo-potential for each primary; then we write down the Lagrangian of the CRTBP with their respective equations of motion for a test particle under the influence of this potential. In section \ref{sec:APND}, we analyze the gradual transition of the dynamics for the FHP pseudo-Newtonian approximation to the classical regime. The analysis is made using Poincar\'e surfaces of section and the variational method for the calculation of the largest Lyapunov exponent \cite{Contopoulos1978}. Finally, in section \ref{sec:CR} we summarize our main conclusions.
\section{Pseudo-Newtonian Equations of Motion}
\label{sec:PNMEEM}
The Fodor-Hoenselaers-Perj\'es procedure is an algorithm to determine the multipole moments of stationary axisymmetric electrovacuum space-times \cite{Fodor1989}. The method can be stated as follows:
In the Ernst formalism \cite{Ernst1968-1,Ernst1968-2}, Einstein field equations are reduced to a pair of complex equations through the introduction of the complex potentials $\mathcal{E}$ and $\Psi$, which can be defined in terms of the new potentials $\xi$ and $\varsigma$, through the definitions
\begin{equation}
\mathcal{E}= \frac{1-\xi}{1+\xi}, \quad \Psi=\frac{\varsigma}{1+\xi},
\end{equation}
satisfying the alternative representation of the Einstein-Maxwell field
equations,
\begin{eqnarray}
\label{eq:GenerLaplaceXI}
(\xi \xi^{*} - \varsigma \varsigma^{*} -1)\nabla^2\xi =
2( \xi^{*} \nabla\xi - \varsigma^{*} \nabla\varsigma)\cdot \nabla\xi,
\\
\label{eq:GenerLaplaceSIGMA}
(\xi \xi^{*} - \varsigma \varsigma^{*} -1)\nabla^2\varsigma =
2( \xi^{*} \nabla\xi - \varsigma^{*} \nabla\varsigma)\cdot \nabla\varsigma.
\end{eqnarray}
The fields $\xi$ and $\varsigma$ are related to the gravitational and electromagnetic potentials in a very direct way,
\begin{equation}
\xi=\phi_{M}+i \phi_{J}, \quad \varsigma=\phi_{E}+i \phi_{H},
\end{equation}
where $\phi_{\alpha}$ with $\alpha=M,J,E,H$, are analogous to the Newtonian mass, angular momentum, electrostatic and magnetic potentials, respectively (see {\it e.g.} \cite{Hansen1974} and \cite{Sotiriou2004}). Hereafter, for the sake of convenience, we consider $\phi_{E}=\phi_{H}=0$, which implies $\varsigma=\Psi=0$, {\it i.e} the absence of electromagnetic fields.
Now, according to Geroch and Hansen \cite{Hansen1974,Geroch1970}, the multipole moments of a given space-time are defined by measuring the deviation from flatness at infinity. Following this idea, the initial 3-metric $h_{ij}$ is mapped to a conformal one, that is $h_{ij} \rightarrow
\tilde{h_{ij}} = \Omega^{2}h_{ij}$. The conformal factor $\Omega$ transforms the potential $\xi$ into $\tilde{\xi} = \Omega^{-1/2}\xi$, with
$\Omega = \bar{r}^{2} = \bar{\rho}^{2} + \bar{z}^{2}$, and
\begin{eqnarray}\label{eq:coordtrans}
\bar{\rho}&=&\frac{\rho}{\rho^{2}+z^{2}},\quad \bar{z}=\frac{z}{\rho^{2}+z^{2}}, \quad \bar{\varphi}=\varphi.
\end{eqnarray}
On the other hand, the potential $\tilde{\xi}$ can be written in a power series of $\bar{\rho}$ and
$\bar{z}$ as
\begin{equation}
\label{eq:xi-q}
\tilde{\xi} = \sum_{i,j=0}^{\infty} a_{ij}\bar{\rho}^{i}\bar{z}^{j},
\end{equation}
and the coefficients $a_{ij}$ are calculated by the
recursive relations \cite{Sotiriou2004}
\begin{eqnarray}\label{eq:rra}
(r + 2)^{2} a_{r+2,s} &=& -(s + 2)(s + 1)a_{r,s+2} + \sum_{k,l,m,n,p,g}(a_{kl}a^{*}_{mn}
- b_{kl}b^{*}_{mn}) [a_{pg}\nonumber\\
&\times& (p^{2} + g^{2} - 4p - 5g - 2pk - 2gl- 2) + a_{p+2,g-2}(p + 2)\nonumber\\
&\times&(p + 2 - 2k)+ a_{p-2,g+2}(g + 2)(g + 1 - 2l)],
\end{eqnarray}
where $m = r - k - p, 0 \leq k \leq r, 0 \leq p \leq r - k$, with $k$ and $p$ even, and $n = s - l - g$,
$0 \leq l \leq s + 1$, and $-1 \leq g \leq s - l$. Finally, the gravitational multipole moments $P_i$ of the source are computed from their values on the symmetry axis $m_{i}\equiv a_{0i}$, by means of the following relationships
\begin{eqnarray}\label{eq:gpot}
P_0&=&m_0,\quad P_1=m_1,\quad P_2=m_2,\quad P_3=m_3,\quad P_4=m_4-\frac{1}{7}m_0^{*}M_{20},\nonumber\\
P_5&=&m_5-\frac{1}{3} m_{0}^{*}M_{30}-\frac{1}{21} m_{1}^{*}M_{20}
\end{eqnarray}
where $M_{ij}=m_i m_j-m_{i-1}m_{j+1}$.
From the above, it can be inferred that once we know the gravitational multipole moments, following the inverse procedure it is possible to determine approximate expressions for the gravitational potential $\xi$ in terms of the physical parameters of the source (see {\it e.g.} \cite{Pachon2011}). Hence, let us apply the outlined procedure to a concrete example, a source whose multipole structure is given by
\begin{equation}
P_{0}=m, P_{i}=0\quad {\rm for}\quad i\geq 1,
\end{equation}
where $m$ denotes the mass of the source. From Eq. (\ref{eq:rra}) and the seed $a_{00}=m$, it can be noted that the only non-vanishing coefficients are $a_{2n,2m}$ with $n, m \in \mathbb{N}$. Thus the potential
$\xi$ is reconstructed from (\ref{eq:xi-q}), (\ref{eq:coordtrans}) and $ \xi= \Omega^{1/2}\tilde{\xi}$, and is explicitly given by
\begin{equation}\label{xiapp}
\xi(\rho,z)= \frac{m}{\sqrt{\rho^2+z^2}}-\frac{m^3 \rho^2}{2 \left(\rho^2+z^2\right)^{5/2}}+\frac{m^5 \rho^2 \left(3 \rho^2-4 z^2\right)}{8 \left(\rho^2+z^2\right)^{9/2}}+\ldots
\end{equation}
It is important to note that the infinite sum of terms of the potential $\xi$ corresponds to the Schwarzschild solution, while the lowest order of approximation, {\it i.e.,} keeping only the first term, corresponds to the Newtonian potential. In order to stress that approximations beyond the lowest order do not satisfy the Laplace equation, we call the potential $\xi=\phi_{M}$ pseudo-Newtonian.
On the other hand, aiming to set up the planar circular restricted three-body problem in our model, we make the following assumptions:
{\it i)} The primaries are sufficiently far apart to keep moving in a circle; {\it ii)} the superposition principle holds, such that the total gravitational potential $V$ can be expressed as $V=\phi_{M1}+\phi_{M2}$; {\it iii)} we consider only the first correction to the Newtonian potential, and
{\it iv)} the motion takes place in the plane $z=0$.
Accordingly, the total potential energy of a test particle with mass $\mathcal{M}=1$ in the presence of two pseudo-Newtonian sources, in Cartesian coordinates, can be written as \footnote{The speed of light $c$ is explicitly presented in the pseudo-Newtonian potential in order to show its contributions; however, we set $c = 1$ in the numerical simulations.}
\begin{eqnarray}
V(x,y) &=& -\sum_{i=1}^2\frac{\mathcal{M}_i}{r_i} +\frac{1}{2 c^4}\sum_{i=1}^2\frac{\mathcal{M}_i^3}{r_i^3} \label{eq:Uf}
\end{eqnarray}
where $\mathcal{M}_{1}$ and $\mathcal{M}_{2}$ are the masses of each primary at positions $(x_1,0)$ and $(x_2,0)$, respectively, and $r_{1,2}=\sqrt{(x-x_{1,2})^2+y^2}$. Thus, the Lagrangian for the test particle moving in a non-inertial rotating frame, whose origin coincides with the center of mass of the system, in the presence of the potential (\ref{eq:Uf}) is expressed as
\begin{equation}\label{eq:LagF}
\mathcal{L}= \frac{U^2+2 A+R^2}{2} +\sum_{i=1}^2\frac{\mathcal{M}_i}{r_i}-\frac{1}{2c^4}\sum_{i=1}^2\frac{\mathcal{M}_i^3}{r_i^3},
\end{equation}
with $R=(x^2+y^2)^{1/2}$ the position of the test particle with respect to the center of mass, $U=(U_x^2+U_y^2)^{1/2}$ the magnitude of the velocity of the test particle in the rotating frame and $A=U_y x-U_x y$. Consequently, the Euler-Lagrange equations of motion are
\begin{eqnarray}
&\ddot{x}=2 U_{y} + x - \sum_{i=1}^2\bigg[\frac{\mathcal{M}_{i}(x-x_{i})}{r_{i}^3}
- \frac{3}{2c^4}
\frac{\mathcal{M}_{i}^3(x-x_{i})}{r_{i}^5}\bigg],\label{ecFHPx}\,\,\\
&\ddot{y}=-2 U_{x}+y \bigg[1-\sum_{i=1}^2\bigg(\frac{\mathcal{M}_{i}}{r_{i}^3}- \frac{3}{2c^4}\frac{\mathcal{M}_{i}^3}{r_{i}^5}\bigg)\bigg].\label{ecFHPy}
\end{eqnarray}
Finally, the Jacobian integral of motion, for this approximation, can be calculated as
\begin{equation}\label{eq:JacF}
C_{J}= R^2-U^2+ 2 \sum_{i=1}^2\frac{\mathcal{M}_i}{r_i}-\frac{1}{c^4}\sum_{i=1}^2\frac{\mathcal{M}_i^3}{r_i^3}.
\end{equation}
It can be seen that equations (\ref{eq:LagF})-(\ref{eq:JacF}) reduce to the Newtonian case in the limit $1/c \to 0$. With some straightforward algebra, it can easily be shown that the Jacobian constant is conserved, that is to say, $d C_{J}/dt=0$.
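This conservation law is easy to verify numerically by integrating the equations of motion (\ref{ecFHPx})-(\ref{ecFHPy}) directly. The following Python sketch is our own illustration (not the authors' code): the RK4 integrator, the starting point, and the step size are arbitrary choices, with $\mu=10^{-3}$, $c=1$ and the pseudo-Newtonian correction included.

```python
import numpy as np

# Pseudo-Newtonian CRTBP in the rotating frame, Szebehely units
# (mu = 1e-3, c = 1).  Variable names are ours.
MU = 1.0e-3
M = np.array([1.0 - MU, MU])         # primary masses
X = np.array([-MU, 1.0 - MU])        # primary positions on the x-axis

def rhs(s):
    """Right-hand side of Eqs. (ecFHPx)-(ecFHPy); s = (x, y, Ux, Uy)."""
    x, y, vx, vy = s
    r = np.sqrt((x - X)**2 + y**2)
    f = M/r**3 - 1.5*M**3/r**5       # Newtonian term minus 3/(2c^4) correction
    ax = 2.0*vy + x - np.sum(f*(x - X))
    ay = -2.0*vx + y*(1.0 - np.sum(f))
    return np.array([vx, vy, ax, ay])

def jacobi(s):
    """Jacobian integral, Eq. (eq:JacF)."""
    x, y, vx, vy = s
    r = np.sqrt((x - X)**2 + y**2)
    return x*x + y*y - (vx*vx + vy*vy) + 2.0*np.sum(M/r) - np.sum(M**3/r**3)

def rk4(s, dt, nsteps):
    for _ in range(nsteps):
        k1 = rhs(s); k2 = rhs(s + 0.5*dt*k1)
        k3 = rhs(s + 0.5*dt*k2); k4 = rhs(s + dt*k3)
        s = s + (dt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4)
    return s

s0 = np.array([2.0, 0.0, 0.0, 0.3])  # an arbitrary starting point
s1 = rk4(s0, 1.0e-3, 10000)          # integrate up to t = 10
drift = abs(jacobi(s1) - jacobi(s0))
```

Because the $1/c^4$ correction enters the accelerations and $C_J$ consistently, the residual drift is limited only by the integrator's truncation error.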
\section{Analysis of the pseudo-Newtonian dynamics}\label{sec:APND}
In order to simplify the numerical calculations and to nondimensionalise the problem, we use the Szebehely convention \cite{Szebehely-book},
\begin{equation}\label{eq:parameters}
\mathcal{M}_1=1-\mu,\,\mathcal{M}_2=\mu, \,
x_1=-\mu,\,x_2=1-\mu,
\end{equation}
where $\mu \in [0, 1/2]$ is the only control parameter for the system, and the center of mass always lies at the origin. The dynamics of the system will be studied through the Poincar\'e sections method and the Lyapunov exponents. From now on, we set $\mu=10^{-3}$, $c=1$, and units of time such that the angular velocity of the primaries around their common center of mass is $\omega=1$. Moreover, with the aim of observing the transition from the classical to the pseudo-Newtonian regime, we introduce the following transformation
$$\frac{1}{c^4}\to \epsilon \frac{1}{c^4}, $$
where $\epsilon \in [0,1]$, taking the value $\epsilon=0$ in the classical limit and $\epsilon=1$ in the pseudo-Newtonian case.
\subsection{Dynamics of the pseudo-Newtonian equations}
In the case under consideration, we follow the evolution of the system while keeping the Jacobian constant fixed to the value $C_{J}=3.07$, see Figs. \ref{fig1} and \ref{fig2}. Given the initial conditions for $x_0, y_0$ and $U_{x0}$, the initial condition for $U_{y0}$ is determined by Eq. (\ref{eq:JacF}). It should be noted that the orbits in the Poincar\'e sections must not cross one another, because the integral of motion is the same for the set of initial conditions considered in each phase space. Orbits for the sets 1, 2, 3, and 4 have initial values $y_0=U_{x0}=0$ and $x_{0}= 1.6, 2.0, 2.5$, and $3.0$, respectively. The convention in Figs. \ref{fig1} and \ref{fig2} is the following: set of initial conditions 1 is plotted in red color, set 2 in blue, set 3 in black and set 4 in green.
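The determination of $U_{y0}$ from Eq. (\ref{eq:JacF}) can be sketched as follows; this is an illustrative fragment with our own variable names, in which $\epsilon$ multiplies the pseudo-Newtonian term as introduced above.

```python
import numpy as np

# Initial Uy0 from the Jacobian constant, Eq. (eq:JacF), for the four
# sets of Fig. 1 (y0 = Ux0 = 0, C_J = 3.07); names are ours.
MU, CJ = 1.0e-3, 3.07
M = np.array([1.0 - MU, MU])
X = np.array([-MU, 1.0 - MU])

def uy0(x0, eps, y0=0.0, ux0=0.0):
    r = np.sqrt((x0 - X)**2 + y0**2)
    arg = (x0*x0 + y0*y0) + 2.0*np.sum(M/r) \
        - eps*np.sum(M**3/r**3) - CJ - ux0*ux0
    return np.sqrt(arg)              # real only where the motion is allowed

sets = [1.6, 2.0, 2.5, 3.0]
v_newt   = [uy0(x0, eps=0.0) for x0 in sets]   # Newtonian limit
v_pseudo = [uy0(x0, eps=1.0) for x0 in sets]   # pseudo-Newtonian limit
```

Since the pseudo-Newtonian term is positive and enters $C_J$ with a minus sign, the initial speed $U_{y0}$ at fixed $C_J$ is slightly smaller for $\epsilon=1$ than in the Newtonian limit.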
As an additional tool to determine the existence of chaos or regularity of the orbits, we measure the average Largest Lyapunov exponent $\langle {\lambda_{max}} \rangle$, for each trajectory (Fig. \ref{fig2}). To do so, we use the variational method (which is very accurate for many classes of dynamical systems) instead of the two-particle approach, because it has been previously shown that the last one could lead to inconsistent values of the $\lambda_{max}$, in particular when using arbitrary values of the renormalization time and the initial separation between trajectories \cite{Dubeibe2014}. The Lyapunov exponents larger than the threshold (dashed black line) can be considered chaotic, while the ones below the threshold are considered regular.
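A schematic version of such a largest-Lyapunov-exponent computation is sketched below. For brevity, the Jacobian of the flow is approximated here by central finite differences and a plain Euler update is used, whereas the results reported in this paper rely on the exact variational equations and an accurate integrator; the sketch only illustrates the evolve-renormalize-average structure of the variational method.

```python
import numpy as np

# Schematic largest-Lyapunov-exponent estimate: a deviation vector w is
# evolved with the linearized flow and renormalized at fixed intervals.
# Shortcuts (ours): finite-difference Jacobian instead of the exact
# variational equations, and a simple Euler update.
MU = 1.0e-3
M = np.array([1.0 - MU, MU]); X = np.array([-MU, 1.0 - MU])

def rhs(s):
    x, y, vx, vy = s
    r = np.sqrt((x - X)**2 + y**2)
    f = M/r**3 - 1.5*M**3/r**5
    return np.array([vx, vy,
                     2.0*vy + x - np.sum(f*(x - X)),
                     -2.0*vx + y*(1.0 - np.sum(f))])

def jac(s, h=1.0e-7):
    """Central-difference Jacobian of rhs (stand-in for the exact
    variational equations)."""
    J = np.zeros((4, 4))
    for k in range(4):
        e = np.zeros(4); e[k] = h
        J[:, k] = (rhs(s + e) - rhs(s - e))/(2.0*h)
    return J

def lyap_max(s, dt=1.0e-3, nsteps=10000, renorm=100):
    w, acc = np.array([1.0, 0.0, 0.0, 0.0]), 0.0
    for i in range(1, nsteps + 1):
        w = w + dt*(jac(s) @ w)      # tangent-space step
        s = s + dt*rhs(s)            # phase-space step
        if i % renorm == 0:
            nw = np.linalg.norm(w)
            acc += np.log(nw); w /= nw
    return acc/(nsteps*dt)

lam = lyap_max(np.array([3.0, 0.0, 0.0, 0.5]))
```

Production runs, of course, require far longer integration times than this short illustration before $\langle\lambda_{max}\rangle$ converges.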
\begin{figure}[h!]
\centering
\begin{tabular}{ccc}
\includegraphics[width=4.0 cm,angle=0]{C2e0.eps}&
\includegraphics[width=4.0 cm,angle=0]{C2e0125.eps}&
\includegraphics[width=4.0 cm,angle=0]{C2e025.eps}\\
\includegraphics[width=4.0 cm,angle=0]{C2e0375.eps}&
\includegraphics[width=4.0 cm,angle=0]{C2e05.eps}&
\includegraphics[width=4.0 cm,angle=0]{C2e0625.eps}\\
\includegraphics[width=4.0 cm,angle=0]{C2e075.eps}&
\includegraphics[width=4.0 cm,angle=0]{C2e0875.eps}&
\includegraphics[width=4.0 cm,angle=0]{C2e1.eps}
\end{tabular}
\caption{(Color online) Poincar\'e surface of sections for the CRTBP for fixed Jacobian constant $C_{J}= 3.07$, in terms of the parameter $\epsilon$. Orbits for the sets 1 (red), 2 (blue), 3 (black), and 4 (green) have initial values $y_0=U_{x0}=0$ and $x_{0}= 1.6, 2.0, 2.5$, and $3.0$, respectively.}
\label{fig1}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=7.5cm,angle=0]{LyapC2.eps}
\caption{(Color online) Average Largest Lyapunov exponent $\langle {\lambda_{max}} \rangle$ over an ensemble of $10^6$ nearby trajectories calculated with the aid of the variational method for the set of initial conditions given in Figure \ref{fig1}, for different values of $\epsilon$. The inset shows the average value of $\lambda_{max}$ for small $\epsilon$.}
\label{fig2}
\end{figure}
From Fig. \ref{fig1}, it can be noted that in the Newtonian problem ($\epsilon=0$) there exist chaotic orbits (green and red) for the sets of initial conditions 1 and 4, coexisting with regular orbits for sets 2 and 3. Two special cases can be considered in this figure: the orbit for set 1 (red) corresponds to chaos in a very narrow zone (weakly perturbed KAM tori) and should correspond to a small value of $\lambda_{max}$; the other case corresponds to orbit 3 (black), in which three small regular islands are present (surviving tori). If the $\epsilon$-parameter is slightly increased, the main dynamical changes are observed for these two orbits, as expected. For values of $\epsilon$ larger than 0.4, all the sets of orbits become regular. However, in the pseudo-Newtonian limit ($\epsilon = 1$), the narrow chaotic zone appears again for set 1, while all the other sets remain regular.
The Lyapunov exponent gives us a more detailed description of the system dynamics. From Fig. \ref{fig2}, the large chaotic zone corresponding to the set of initial conditions 4 gradually reduces its chaoticity and becomes regular for larger values of $\epsilon$. A very different behavior is observed for the set of initial conditions 1. In the classical limit, the orbit shows a small value of $\lambda_{max}$; for intermediate values of $\epsilon$ the orbit becomes regular, and in the pseudo-Newtonian limit the small value of $\lambda_{max}$ emerges again. On the other hand, some sporadic appearances of chaos occur for the sets of initial conditions 2 and 3, but in the limits $\epsilon=0$ and $\epsilon = 1$ they are regular.
\section{Concluding Remarks}\label{sec:CR}
In the present paper, we propose a new pseudo-Newtonian formulation of the planar circular restricted three-body problem, by using the Fodor-Hoenselaers-Perj\'es procedure. For this new approximation, the Jacobian constant is strictly conserved (unlike the Jacobian obtained in the first-order post-Newtonian approximation, see {\it e.g.,} \cite{Huang2014}), and its equations of motion reduce to their classical counterparts in the limit $1/c \to 0$. Through the Poincar\'e section method, validated with the largest Lyapunov exponent, we have shown that for the selected set of initial conditions and Jacobian constant, the Newtonian and pseudo-Newtonian CRTBP exhibit a mixed phase space, {\it i.e.,} regular and chaotic orbits coexist.
The introduction of an arbitrary parameter $\epsilon$ allowed us to explore the transition from the Newtonian to the pseudo-Newtonian regime. If we track the evolution of the system keeping the Jacobian constant fixed, a chaotic orbit in the Newtonian system can be either chaotic or regular in the pseudo-Newtonian limit. In the transition, the phase space can be filled by periodic orbits even if some of the orbits are chaotic in the limits $\epsilon = 0$ and $\epsilon = 1$. In accordance with previous studies \cite{Huang2014}, in most of the cases we found that a given set of initial conditions, whose phase space is bounded in the classical regime, corresponds to unbounded trajectories in the non-Newtonian regime, that is, the system becomes unstable. The instability of the orbits is a result of the lack of stable fixed points in the system. In fact, the number of real roots of the system, when $\ddot{x}=\ddot{y}=\dot{x}=\dot{y}=0$ in Eqs.~(\ref{ecFHPx}) and (\ref{ecFHPy}), varies as the mass parameter $\mu$ is varied in the interval $[0,1/2]$.
In conclusion, we may say that even the smallest corrections to the Newtonian circular restricted three-body problem could drastically change the dynamics of the system. In addition, it is important to note that the procedure outlined in the present paper can be used to model different kinds of sources, for example, a pair of massive spinning primaries in circular orbits, or a binary system formed by two non-spherical spinning sources, just to name a few. Results in this direction will be reported soon.
\section*{Acknowledgments}
FLD acknowledges financial support from the University of the Llanos, under Grants Commission: Postdoctoral Fellowship Scheme. FDLC and GAG gratefully acknowledge the financial support provided by VIE-UIS, under grant numbers 1822, 1785 and 1838, and COLCIENCIAS, Colombia, under Grant No. 8840.
\section{Introduction}
Despite its difficulty, quantization of the gravitational field has attracted considerable attention because of its fundamental importance. Currently favored approaches
include string theory and loops, both important schemes; see, e.g., \cite{str,ash}.
A comparatively new effort is the {\it affine quantum gravity program}. Although the principles involved are conservative and fairly natural, this program nevertheless involves a somewhat unconventional approach when compared with more traditional techniques. This article offers an overview of the affine quantum gravity program; detailed discussions of this program appear in \cite{kla1, kla2, kla3}.
\subsubsection*{Basic principles of affine quantum gravity}
The program of affine quantum gravity is founded on {\it four basic
principles} which we briefly review here. {\it First}, like the corresponding classical variables, the 6 components of the spatial metric field operators $\hg_{ab}(x)\,[=\hg_{ba}(x)]$, $a,b=1,2,3$, form a {\it positive-definite $3\!\times\! 3$ matrix for all $x$}. {\it Second}, to ensure self-adjoint kinematical variables when smeared, it is necessary to adopt the {\it affine commutation relations} (with $\hbar=1$)
\bn &&[\hp^a_b(x),\hp^c_d(y)]=i\half[\d^c_b\,\hp^a_d(x)-\d^a_d\,\hp^c_b(x)]\,\d(x,y)\;,\no\\
&& \hskip-.08cm[\hg_{ab}(x),\hp^c_d(y)]=i\half[\d^c_a\,\hg_{db}(x)+\d^c_b\,\hg_{ad}(x)]\,\d(x,y)\;,\\
&& \hskip-.18cm[\hg_{ab}(x),\hg_{cd}(y)]=0\;\no \label{e1}\en
between the metric and the 9 components of the mixed-index momentum field operator $\hp^a_b(x)$, the
quantum version of the classical variable $\pi^a_b(x)\equiv \pi^{ac}(x)\s g_{cb}(x)$; these commutation
relations are direct transcriptions of Poisson brackets for the classical fields $g_{ab}(x)$ and $\pi^c_d(x)$. The affine commutation relations are like {\it current commutation relations} and their representations are
quite different from those for
canonical commutation relations; indeed, the present program is called `affine quantum gravity' because these commutation relations are analogous to
the Lie algebra belonging to affine transformations $(A,\s b)$, where $x\ra x'=A\s x+b$, $x,\s x'$, and $b$ are $n$-vectors, and $A$ are real, invertible $n\times n$ matrices. {\it Third}, the principle of {\it quantization first and reduce second}, favored by Dirac, requires that the basic fields $\hg_{ab}$ and $\hp^c_d$ are initially realized by an {\it ultralocal representation}, which is explained below. {\it Fourth}, and last, introduction and enforcement of the gravitational constraints not only leads to the physical Hilbert space but it has the added virtue that all vestiges of the temporary ultralocal operator representation are replaced by physically acceptable alternatives.
In attacking these basic issues full use of {\it coherent state methods} and the {\it projection operator method} for constrained system quantization is made.
\subsubsection*{Affine coherent states}
The affine coherent states are defined (for $\hbar=1$) by
\bn |\pi,g\>\equiv e^{\t i\tint \pi^{ab}\hg_{ab}\,d^3\!x}\,e^{\t-i\tint\gamma^a_b\hp^b_a\,d^3\!x}\,|\eta\> \label{e2}\en
for general, smooth, $c$-number fields $\pi^{ab}\,[=\pi^{ba}]$ and $\gamma^c_d$ of compact support, and the
fiducial vector $|\eta\>$ is chosen so that the coherent state overlap functional becomes
\bn &&\hskip-.5cm\<\pi'',g''|\pi',g'\>= \exp\bigg(\!-\!2\int b(x)\,d^3\!x\, \no\\
&&\hskip.1cm\times\ln\bigg\{ \frac{
\det\{\half[g''^{kl}(x) +g'^{kl}(x)]+i\half b(x)^{-1}[\pi''^{kl}(x)-
\pi'^{kl}(x)]\}} {(\det[g''^{kl}(x)])^{1/2}\,(\det[g'^{kl}(x)])^{1/2}}
\bigg\}\bigg) \;.\label{e3}\en
Observe that the matrices $\gamma''$ and $\gamma'$ do {\it not} explicitly appear in (\ref{e3}); the
choice of $|\eta\>$ is such that each $\gamma=\{\gamma^a_b\}$ has been replaced by $g=\{g_{ab}\}$, where
\bn g_{ab}(x)\equiv [e^{\t\gamma(x)/2}]_a^c\,\<\eta|\hg_{cd}(x)|\eta\>\,[e^{\t\gamma(x)^T/2}]_b^d \;.\en
Note that the functional expression in (\ref{e3}) is ultralocal, i.e., specifically of the form
\bn \exp\{-\tint b(x)\,d^3\!x\,L[\pi''(x),g''(x);\pi'(x),g'(x)]\,\}\;, \en
and thus, {\it by design}, there are no correlations between spatially separated field values, a neutral position adopted towards correlations before any constraints are introduced. On invariance grounds, (\ref{e3}) necessarily involves a {\it scalar density} $b(x)$, $0< b(x)<\infty$, for all $x$; this arbitrary and nondynamical auxiliary function $b(x)$ will disappear when the gravitational constraints are fully enforced, at which point proper field correlations will arise; see below. In addition, note that the coherent state overlap functional is {\it invariant} under general spatial coordinate transformations. Finally, we emphasize that the expression $\<\pi'',g''|\pi',g'\>$ is a {\it continuous functional of positive type} and thus may be used as a {\it reproducing kernel} to define a {\it reproducing kernel Hilbert space}
(see \cite{mesh}) composed of continuous phase-space functionals $\psi(\pi,g)$ on which the initial, ultralocal representation of the affine field operators acts in a natural fashion.
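For a single spatial cell, formally taking $b\equiv1$ and unit cell volume (a simplifying assumption of ours), the overlap (\ref{e3}) reduces to a ratio of $3\times3$ determinants raised to the power $-2$. The Python fragment below checks, for randomly generated positive-definite inverse metrics and symmetric momenta, three properties expected of such a reproducing kernel: unit normalization, Hermitian symmetry, and modulus bounded by one.

```python
import numpy as np

# One-cell version of the overlap (e3): b(x) = 1 and unit cell volume
# (our simplifying assumptions), so the kernel is {...}**(-2).
# G stands for the inverse-metric matrix {g^{kl}}, P for {pi^{kl}}.
rng = np.random.default_rng(7)

def rand_state():
    A = rng.normal(size=(3, 3))
    G = A @ A.T + 0.5*np.eye(3)      # positive-definite inverse metric
    B = rng.normal(size=(3, 3))
    return G, 0.5*(B + B.T)          # symmetric momentum matrix

def overlap(z2, z1):
    G2, P2 = z2; G1, P1 = z1
    num = np.linalg.det(0.5*(G2 + G1) + 0.5j*(P2 - P1))
    den = np.sqrt(np.linalg.det(G2)*np.linalg.det(G1))
    return (num/den)**(-2)

z1, z2 = rand_state(), rand_state()
k12, k21 = overlap(z1, z2), overlap(z2, z1)
```

The bound $|K|\le1$ follows from $\det[\half(G_1+G_2)]\ge\sqrt{\det G_1\det G_2}$ together with the fact that adding $i$ times a real symmetric matrix can only increase the modulus of the determinant.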
\subsubsection*{Functional integral representation}
A functional integral formulation has been developed \cite{kla2} that, in effect, {\it within a single formula} captures the essence of {\it all four of the basic principles} described above. This ``Master Formula'' takes the form
\bn && \<\pi'',g''|\s\E\s|\pi',g'\> \no\\
&&\hskip1cm=\lim_{\nu\ra\infty}{\o{\cal N}}_\nu\s\int
e^{\t-i\tint[g_{ab}{\dot\pi}^{ab}+N^aH_a+NH]\,d^3\!x\,dt}\no\\
&&\hskip1.5cm\times\exp\{-(1/2\nu)\tint[b(x)^{-1}g_{ab}g_{cd}
{\dot\pi}^{bc}{\dot\pi}^{da}+b(x)g^{ab}g^{cd}{\dot g}_{bc}{\dot g}_{da}]\,
d^3\!x\,dt\}\no\\
&&\hskip2cm\times[\Pi_{x,t}\,\Pi_{a\le b}\,d\pi^{ab}(x,t)\,
dg_{ab}(x,t)]\,\D R(N^a,N)\;. \label{e8} \en
Let us explain the meaning of (\ref{e8}).
As an initial remark, let us
artificially set $H_a=H=0$, and use the fact that $\int{\cal D}R(N^a,N)=1$. Then the result is that $\E=\one$, and the remaining functional integral yields the coherent state overlap $\<\pi'',g''|\pi',g'\>$ as given in (\ref{e3}). This is the state of affairs {\it before} the constraints are imposed, and remarks below regarding the properties of the functional integral on the right-hand side of (\ref{e8}) apply in this case as well. We next turn to the full content of (\ref{e8}).
The expression $\<\pi'',g''|\s\E\s|\pi',g'\>$ denotes the coherent state matrix element of a projection operator $\E$ which projects onto a subspace of the original Hilbert space on which the quantum constraints are fulfilled in a
regularized fashion. Furthermore, the expression $\<\pi'',g''|\s\E\s|\pi',g'\>$ is another continuous functional of positive type that can be used as a reproducing kernel to generate the reproducing kernel physical Hilbert space on which the quantum constraints are fulfilled in a regularized manner. The right-hand side of equation (\ref{e8}) denotes an essentially well-defined functional integral over fields $\pi^{ab}(x,t)$ and $g_{ab}(x,t)$, $0<t<T$, designed to calculate this important reproducing kernel for the regularized physical Hilbert space and which entails functional arguments defined by their smooth initial values $\pi^{ab}(x,0)=\pi'^{ab}(x)$ and $g_{ab}(x,0)=g'_{ab}(x)$ as well as their smooth final values $\pi^{ab}(x,T)=\pi''^{ab}(x)$ and $g_{ab}(x,T)=g''_{ab}(x)$, for all $x$ and all $a,b$. Up to a surface term, the phase factor in the functional integral represents the canonical action for general relativity, and specifically $N^a$ and $N$ denote Lagrange multiplier fields (classically interpreted as the shift and lapse), while $H_a$ and $H$ denote phase-space symbols (since $\hbar\ne0$) associated with the quantum diffeomorphism and Hamiltonian constraint field operators, respectively. The $\nu$-dependent factor in the integrand formally tends to unity in the limit $\nu\ra\infty$; but prior to that limit, the given expression {\it regularizes and essentially gives genuine meaning} to the heuristic, formal functional integral that would otherwise arise if such a factor were missing altogether \cite{kla2}. The functional form of the given regularizing factor ensures that the metric variables of integration {\it strictly fulfill} the positive-definite domain requirement. 
The given form, and in particular the need for the nondynamical, nonvanishing, arbitrarily chosen scalar density $b(x)$, is very welcome since this form---{\it and quite possibly only this form}---leads to a reproducing kernel Hilbert space for gravity having the needed infinite dimensionality; a seemingly natural alternative \cite{kla5} using $\sqrt{\det[g_{ab}(x)]}$ in place of
$b(x)$ fails to lead to a reproducing kernel Hilbert space with the required dimensionality \cite{wat2}. The choice of $b(x)$ determines a specific ultralocal representation for the basic affine field variables, but this unphysical and temporary representation {\it disappears} after the gravitational constraints are fully enforced (as soluble examples explicitly demonstrate \cite{kla3}). The integration over the Lagrange multiplier fields ($N^a$ and $N$) involves a {\it specific measure} $R(N^a,N)$ (described in \cite{kla6}), which is normalized such that $\tint\D R(N^a,N)=1$. This measure is designed to enforce (a regularized version of) the
{\it quantum constraints$\s$}; it is manifestly {\bf not} chosen to enforce the classical constraints, even in a regularized form. The consequences of this choice are {\it profound} in that no gauge fixing is needed, no ghosts are required, no Dirac brackets are necessary, etc. In short, {\it no auxiliary structure of any kind is introduced}. (These facts are general properties of the projection operator method of dealing with constraints \cite{kla6,sch} and are not limited to gravity. A sketch of this method appears below.)
It is fundamentally important to make clear how Eq.~(\ref{e8}) was derived and how it is to be used \cite{kla2}. The left-hand side of (\ref{e8}) is an abstract operator construct in its entirety that came {\it first} and corresponds to one of the basic expressions one would like to calculate. The functional integral on the right-hand side of (\ref{e8}) came {\it second} and is a valid representation of the desired expression; its validity derives from the fact that the affine coherent state representation enjoys a complex polarization that is used to formulate a kind of Feynman-Kac realization of the coherent state matrix elements of the regularized projection operator \cite{kla2}.
However, the final goal is to turn that order around and to use the functional integral to {\it define and evaluate} (at least approximately) the desired operator-defined expression on the left-hand side. In no way should it be thought that the functional integral (\ref{e8}) was ``simply postulated as a guess as how one might represent the proper expression''.
A major goal in the general analysis of (\ref{e8}) involves reducing the regularization imposed on the quantum constraints to its appropriate minimum value, and, in particular, for constraint operators that are partially second class, such as those of gravity, the proper minimum of the regularization parameter is {\it non\/}zero;
see below. Achieving this minimization involves {\it fundamental changes} of the representation of the basic kinematical operators, which, as models show \cite{kla3}, are so significant that any unphysical aspect of the original, ultralocal representation disappears completely. When the appropriate minimum regularization is achieved, then the quantum constraints are properly satisfied. The result is the reproducing kernel for the physical Hilbert space, which then permits a variety of physical questions to be studied.
We next offer some additional details.
\section{Quantum Constraints and their Treatment}
The quantum gravitational constraints, $\H_a(x)$, $a=1,2,3$, and $\H(x)$, formally satisfy the commutation relations
\bn &&[\H_a(x),\H_b(y)]=i\s\half\s[\delta_{,a}(x,y)\s\H_b(y)+\delta_{,b}(x,y)\s\H_a(x)]\;,\no\\
&&\hskip.15cm[\H_a(x),\H(y)]=i\s\delta_{,a}(x,y)\s\H(y) \;,\\
&&\hskip.31cm[\H(x),\H(y)]=i\s\half\s\delta_{,a}(x,y)\s[\s g^{ab}(x)\s\H_b(x)+\H_b(x)\s g^{ab}(x) \no\\
&&\hskip3.6cm +g^{ab}(y)\s\H_b(y)+\H_b(y)\s g^{ab}(y)\s] \;. \no \en
Following Dirac, we first suppose that $\H_a(x)\s|\psi\>_{phys}=0$ and $\H(x)\s|\psi\>_{phys}=0$ for all $x$ and $a$, where $|\psi\>_{phys}$ denotes a vector in the proposed physical Hilbert space ${\frak H}_{phys}$. However, these conditions are {\it incompatible} since, generally, $[\H(x),\H(y)]\s|\psi\>_{phys}\ne0$ because $[\H_b(x),g^{ab}(x)]\ne0$ and $g^{ab}(x)\s|\psi\>_{phys}\not\in{\frak H}_{phys}$, even when smeared. This means that the quantum gravity constraints are {\it partially second class}. While others may resist this conclusion, we accept it for what it is.
One advantage of the projection operator method is that it treats first- and second-class constraints on an {\it equal footing$\s$}; see \cite{kla6,sch}. The essence of the projection operator method is the following. If $\{\Phi_a\}$ denotes a set of self-adjoint quantum constraint operators, then
\bn \E=\E(\!\!(\Sigma\s\Phi_a^2\le\s\delta(\hbar)^2\s)\!\!) =\int {\sf T}\s e^{\t-i\tint\lambda^a(t)\s\Phi_a\,dt}\,\D R(\lambda)\;, \label{e10}\en
where ${\sf T}$ enforces time ordering,
defines a projection operator onto a regularized physical Hilbert space, ${\frak{H}}_{phys}\equiv\E\s\frak{H}$,
with $\frak{H}$ the original Hilbert space before the constraints are imposed.
It is noteworthy that there
is a {\it universal form} for the weak measure $R$ \cite{kla6} that depends only on the
number of constraints, the
time interval involved, and the regularization parameter $\delta(\hbar)^2$; $R$ does {\it not} depend in any
way on the constraint operators themselves! Sometimes, just by reducing the regularization parameter $\delta(\hbar)^2$ to its appropriate size, the proper physical Hilbert space arises. Thus, e.g., if $\Sigma\s\Phi_a^2=J_1^2+J_2^2+J_3^2$, the Casimir operator of $su(2)$, then $0\le\delta(\hbar)^2<3\hbar^2/4$ works for this first class example. If $\Sigma\s\Phi_a^2=P^2+Q^2$, where $[Q,P]=i\hbar\one$, then $\hbar\le\delta(\hbar)^2<3\hbar$ covers this second class example. Sometimes, one needs to take the limit when
$\delta\ra0$. The example $\Sigma\s\Phi_a^2=Q^2$ involves
a case where $\Sigma\Phi^2_a=0$ lies in the continuous spectrum. To deal with this case it is appropriate to introduce
\bn \<\<p'',q''|p',q'\>\>\equiv \lim_{\delta\ra0}\<p'',q''|\s\E\s|p',q'\>/\<\eta|\s\E\s|\eta\>\;, \en
where $\{|p,q\>\}$ are traditional coherent states, as a reproducing kernel for the physical Hilbert space in which ``$Q=0$'' holds. It is interesting to observe that the projection operator for {\it reducible} constraints, e.g., $\E(Q^2+Q^2\le\delta^2)$, or for {\it irregular} constraints, $\E(Q^{2\Omega}\le\delta^2)$, $0<\Omega\ne1$, leads to the {\it same reproducing kernel} that arose from the case $\E(Q^2\le\delta^2)$. No gauge fixing is ever needed, and thus no global consistency conditions arise that may be violated; see, e.g., \cite{litkla}.
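The second class example $\Sigma\s\Phi_a^2=P^2+Q^2$ is easy to check numerically in a truncated oscillator basis: with $\hbar=1$ the spectrum of $P^2+Q^2$ is $1,3,5,\dots$, so any $\delta^2\in[1,3)$ leaves a rank-one projection operator onto the oscillator ground state. A minimal sketch (our own illustration; the truncation size $N$ is arbitrary):

```python
import numpy as np

# E(P^2 + Q^2 <= delta^2) in a truncated oscillator basis (hbar = 1,
# truncation N chosen by us).  The spectrum of P^2 + Q^2 is 1, 3, 5,
# ..., so delta^2 = 2 projects onto the single state |0>.
N = 60
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)   # annihilation operator
Q = (a + a.T)/np.sqrt(2.0)
P = 1j*(a.T - a)/np.sqrt(2.0)
H = (P @ P + Q @ Q).real                     # cross terms cancel exactly

delta2 = 2.0
evals, evecs = np.linalg.eigh(H)
V = evecs[:, evals <= delta2]
E = V @ V.conj().T                           # the projection operator
rank = int(round(np.trace(E).real))
```

In the truncated basis $P^2+Q^2=a^\dagger a+a\s a^\dagger$ exactly (up to a single anomalous edge eigenvalue well above the cut), so the projector reproduces $|0\>\<0|$ to machine precision.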
Other cases may be more involved but the principles are similar. The time-ordered integral representation for $\E$ given in (\ref{e10}) is useful in path integral representations, and this application explains the origin of $R(N^a,N)$ in (\ref{e8}).
\section{Nonrenormalizability and Symbols}
Viewed perturbatively, gravity is nonrenormalizable. However, the (nonperturbative) {\it hard-core picture of nonrenormalizability} \cite{kla11,book} holds that the nonlinearities in such theories are so strong that, from a functional integral point of view, a nonzero set of functional histories that were allowed in the support of the linear theory is now forbidden by the nonlinear interaction.
\subsubsection*{Elementary example of hard-core behavior}
An elementary example that illustrates the basic concepts is the following. Consider a one-dimensional harmonic
oscillator with the Euclidean-time action functional
\bn I_0\equiv\int_0^T \{\half[{\dot x}(t)^2+x(t)^2]\s\}\,dt \en
defined for a set of functions $W_0\equiv\{x(t): \tint_0^T[\s{\dot x}^2+x^2]\s dt<\infty\}$. Observe that the set $W_0$ includes many functions for which $\tint_0^T x^{-4}\s dt=\infty$. As a consequence, the
Euclidean-time action functional
\bn I_g\equiv\int_0^T\{\half[{\dot x}(t)^2+x(t)^2] +g\s x(t)^{-4}\s\}\,dt \;,\en
for $g>0$, defined for the set $W_g\equiv\{x(t): \tint_0^T[\s{\dot x}^2+x^2+x^{-4}]\s dt<\infty\}$ does
{\it not} pass to the set $W_0$ as $g\ra0$. Stated otherwise -- now using a real time formulation -- the set of solutions of the interacting
model with $g>0$ does {\it not} pass to the set of solutions of the free theory as $g\ra0$. Instead,
the set of solutions passes to that of a {\it pseudofree} theory, for which $x(t)=0$ is forbidden!
This change in character of the classical theory has its image in the quantum theory as well. If
$\{h_n(x)\}_{n=0}^\infty$ denotes the set of Hermite functions (eigenfunctions of the free harmonic oscillator), then
the free quantum theory is determined by the Euclidean-time propagator
\bn {\cal N}_0\int_{x(0)=x'}^{x(T)=x''} e^{\t-I_0}\;{\cal D}x=\sum_{n=0}^\infty h_n(x'')\s h_n(x')\, e^{\t-(n+1/2)\s T}\;.\en
In contrast, the Euclidean-time propagator for the quantum pseudofree theory is determined by
[using $\theta(y)=1$, for $y>0$, and $\theta(y)=0$, for $y<0$]
\bn &&\lim_{g\ra0} {\cal N}_g\int_{x(0)=x'}^{x(T)=x''} e^{\t-I_g}\,{\cal D}x=\theta(x''\s x')\sum_{n=0}^\infty\no\\
&&\hskip2cm\times\{\s h_n(x'')[\s h_n(x')-h_n(-x')\s]\s\}\,e^{\t-(n+1/2)\s T} \en
due to the hard-core nature of the potential that remains even after $g\ra0$. In brief, as $g\ra0$, the
interacting theory is continuously connected to the pseudofree theory and not to the usual free
theory; if a perturbation analysis of the interacting model is made, it must be made about
the pseudofree theory and not about the free theory!
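As a consistency check (an added illustration, not from the original text; units $\hbar=m=1$ and the series truncated at $n_{\max}=60$ are assumptions), the eigenfunction expansion of the free Euclidean propagator can be compared with Mehler's closed form for the oscillator heat kernel, and the pseudofree kernel can be seen to vanish whenever $x''x'<0$:

```python
import math
import numpy as np

def hermite_functions(x, nmax):
    """Return [h_0(x), ..., h_nmax(x)], the normalized Hermite functions."""
    H = [1.0, 2.0 * x]                      # physicists' Hermite polynomials
    for n in range(1, nmax):
        H.append(2.0 * x * H[n] - 2.0 * n * H[n - 1])
    norm = lambda n: math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return np.array([H[n] * math.exp(-x * x / 2.0) / norm(n)
                     for n in range(nmax + 1)])

def free_kernel(x2, x1, T, nmax=60):
    """Truncated eigenfunction expansion of the free Euclidean propagator."""
    w = np.exp(-(np.arange(nmax + 1) + 0.5) * T)
    return float(np.sum(hermite_functions(x2, nmax)
                        * hermite_functions(x1, nmax) * w))

def pseudofree_kernel(x2, x1, T, nmax=60):
    """Hard-core kernel: theta(x2*x1) * sum_n h_n(x2)[h_n(x1)-h_n(-x1)] e^{-(n+1/2)T}."""
    if x2 * x1 <= 0.0:
        return 0.0
    return free_kernel(x2, x1, T, nmax) - free_kernel(x2, -x1, T, nmax)

def mehler(x2, x1, T):
    """Closed form of the oscillator heat kernel (Mehler's formula)."""
    s, c = math.sinh(T), math.cosh(T)
    return math.exp(-((x2**2 + x1**2) * c - 2.0 * x2 * x1)
                    / (2.0 * s)) / math.sqrt(2.0 * math.pi * s)
```

The truncated sum reproduces the closed form to high accuracy for moderate $T$, while the hard-core factor $\theta(x''x')$ simply removes all amplitudes connecting points on opposite sides of $x=0$.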
\subsubsection*{Hard-core behavior for field theories}
Various, highly specialized, nonrenormalizable quantum field theory models exhibit entirely analogous hard-core behavior, and nevertheless possess suitable nonperturbative solutions \cite{book}. It is believed that gravity and also $\phi^4$ field theories in high enough spacetime dimensions can be understood in similar terms. A computer study to analyze the $\phi^4$ theory has begun, and there is hope to clarify that particular theory. Any progress in the scalar field case could strengthen a similar argument for the gravitational case as well.
The expectation that nonrenormalizable, self-interacting scalar fields will exhibit hard-core behavior follows from a so-called multiplicative inequality \cite{lady,book}. In particular, for smooth
functions $\phi(x)$, $x\in{\mathbb{R}}^n,$ it follows that
\bn \{\tint \phi(x)^4\,d^n\!x\s\}^{1/2}\le C\s\tint\{\s[\s{\nabla\phi}(x)\s]^2+m^2\s\phi(x)^2\s\}\,d^n\!x\;,\en
where for $n\le 4$ (the renormalizable models) one may choose $C=4/3$, while for $n\ge5$ (the nonrenormalizable models) one must choose $C=\infty$ meaning, in the latter case,
that there are field functions, e.g.,
$\phi_{singular}(x)=|x|^{-p}\s \exp(-x^2)$, with $n/4\le p<n/2-1$, for which the left side diverges
while the right side is finite. This is exactly the signal that the interaction acts at the
classical level partially as a hard core, and it is not too much to expect that the quantum theory
would also reflect that fact as well. Additional arguments favoring a hard-core understanding
for $\phi^4$ models in five and more spacetime dimensions appear in \cite{kla17}.
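The quoted exponent range for $\phi_{singular}$ follows from simple power counting near the origin (a check added here for completeness):

```latex
\bn \tint \phi_{singular}(x)^4\,d^n\!x \sim \int_0 |x|^{-4p}\,|x|^{n-1}\,d|x|\;, \en
which diverges for $4p\ge n$, i.e., $p\ge n/4$, while
\bn \tint [\s\nabla\phi_{singular}(x)\s]^2\,d^n\!x \sim
    \int_0 |x|^{-2p-2}\,|x|^{n-1}\,d|x| \en
converges for $2p+2<n$, i.e., $p<n/2-1$ (the mass term, with the milder
singularity $|x|^{-2p}$, converges as well, and the factor $\exp(-x^2)$
controls the behavior at infinity). Both conditions are compatible only when
$n/4<n/2-1$, i.e., $n\ge5$, which is exactly the nonrenormalizable regime in
which one must take $C=\infty$.
```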
Evidence from soluble examples points to the appearance of a nontraditional and nonclassical (proportional to $\hbar^2$) counterterm in the functional integral representing the irremovable effects of the hard core.
For the proposed quantization of gravity, these counterterms would have an important role to play in conjunction with the symbols representing the diffeomorphism and Hamiltonian constraints in the functional integral since for them $\hbar\ne0$ as well. In brief, the form taken by the symbols $H_a$ and $H$ in (\ref{e8}) is closely related to a proper understanding of how to handle the perturbative nonrenormalizability and the concomitant hard-core nature of the overall theory. These are clearly difficult issues, but it is equally clear that they may be illuminated by studies of other nonrenormalizable models such as $\phi^4$ in five and more spacetime dimensions.
\section{Classical Limit}
Suppose one starts with a classical theory, quantizes it, and then takes the classical limit. It seems obvious that the classical theory obtained at the end should coincide with the classical theory one started with. However, there are counterexamples to this simple wisdom! For example, the $\phi^4$ theory in five spacetime dimensions has a {\it nontrivial} classical behavior. But, if one quantizes it as the continuum limit of a natural lattice formulation, allowing for mass, field strength, and coupling constant renormalization, the result is a free (or generalized free) quantum theory whose classical limit is also free and thus differs from the original theory \cite{fro}. This unsatisfactory behavior is yet another facet of the nonrenormalizability puzzle. However, those nonrenormalizable models for which the quantum hard-core behavior has been accounted for do have satisfactory classical limits \cite{book}. The conjectured hard-core nature of $\phi^4$ models is under present investigation, and it is anticipated that a proper classical limit should arise. It is further conjectured that a favorable consequence of clarifying and including the hard-core behavior in gravity will ensure that the resultant quantum theory enjoys the correct classical limit.
An additional remark may be useful. It is a frequent misconception that passage to the classical limit requires that the parameter $\hbar\ra0$. To argue against this view, recall that the macroscopic world we know and describe so well by classical mechanics is the same real world in which $\hbar\ne0$. In point of fact, classical and quantum formalisms must {\it coexist}, and this coexistence is very well expressed with the help of coherent states. It is characteristic of coherent state formalisms that classical and quantum ``generators'', loosely speaking, are related to each other through the {\it weak correspondence principle} \cite{corr}. In the case of the gravitational field, prior to the introduction of constraints, this connection takes the general form
\bn \<\pi,g|\s{\cal W}\s|\pi,g\>=W(\pi,g) \;, \en
where $\cal W$ denotes a quantum generator and $W(\pi,g)$ the corresponding classical generator (which is generally still a ``symbol'' since $\hbar\ne0$). The simplest examples of this kind are given by $\<\pi,g|\s\hg_{ab}(x)\s|\pi,g\>=g_{ab}(x)$ and $\<\pi,g|\s\hp_a^b(x)\s|\pi,g\>=\pi^{bc}(x)g_{ca}(x)\equiv \pi_a^b(x)$. Moreover, these two examples also establish that the {\it physical meaning of the $c$-number labels
is that of mean values} of the respective quantum field operators in the affine coherent states.
In soluble models where the appropriate classical limit has been obtained \cite{book}, coherent state methods were heavily used. It is expected that they will prove equally useful in the case of gravity.
\section{Going Beyond the Ultralocal\\ Representation}
We started our discussion by choosing an ultralocal representation of the basic affine quantum field operators.
Before the constraints were introduced, an ultralocal representation is the proper choice because all the
proper spatial connections are contained in the constraints themselves. Moreover, the chosen ultralocal representation is based on an extremal weight vector of the underlying affine algebra, which has the virtue of
leading to affine coherent states that fulfill a complex polarization condition enabling us to obtain
a fairly well defined functional integral representation for coherent state matrix elements of the
regularized projection operator. To complete the story, one only needs to eliminate the regularizations!
Of course, this is an enormous task. But it should not be regarded as impossible because there is a model
problem in which just that issue has been successfully dealt with. In \cite{kla3} the quantization of a
free field of mass $m$ (among other examples) was discussed starting with a reparametrization invariant formulation. In particular, by elevating the time variable to a dynamical one, the original dynamics is transformed to the imposition of a constraint. Thus, in the constrained form, the Hamiltonian vanishes, and
the choice of the original representation of the field operators is taken as an ultralocal one. Subsequent imposition of the constraint---by the projection operator method---not only eliminated the ultralocal
representation but allowed us to focus the final reproducing kernel for the physical Hilbert space on any
value of the mass parameter $m$ one desired! It is the kind of procedures used for this relatively simple example
of free field quantization that we have in mind to be used to transform the original ultralocal representation
of the quantum gravity story into its final and physically relevant version.
\section{Dedication} I am pleased to dedicate this article to Andrei Alekseevich Slavnov, a scholar and gentleman of the first rank. I hope he enjoys many more years of productive research!
\section{Acknowledgements}
Thanks are extended to the Center for Applied Mathematics of the Mathematics Department, University of Florida,
for partial travel support to attend the ``Slavnov70'' conference.
\section{Introduction}
Cold atoms in optical lattices exhibit phenomena typical of solid-state physics, such as the formation of energy bands, Josephson effects, Bloch oscillations, and strongly correlated phases. Many of these phenomena have already been the object of experimental investigations; for a recent review see \cite{Morsch06}. Standard methods to observe quantum properties of ultracold atoms are based on destructive matter-wave interference between atoms released from traps \cite{Greiner}. Recently, a new approach was proposed which is based on all-optical measurements that conserve the number of atoms. It was shown that atomic quantum statistics can be mapped onto the transmission spectra of high-Q cavities, where the atoms create a quantum refractive index. This was shown to be useful for studying phase transitions between Mott-insulator and superfluid states, since the various phases show qualitatively distinct spectra \cite{Mekhov07}.
Experimental implementation of a combination of cold atoms and cavity QED (quantum electrodynamics) has made significant progress \cite{Nagorny03,Sauer04,Anton05}. Theoretically, there has been some interesting work on the correlated atom-field dynamics in a cavity. It has been shown that the strong coupling of the condensed atoms to the cavity mode changes the resonance frequency of the cavity \cite{Horak00}. Finite cavity response times lead to damping of the coupled atom-field excitations \cite{Horak01}. The driving field in the cavity can significantly enhance the localization and the cooling properties of the system \cite{Griessner04,Maschler04}. It has been shown that in a cavity the atomic back action on the field introduces atom-field entanglement which modifies the associated quantum phase transition \cite{Maschler05,Mekhov07}. The light field and the atoms become strongly entangled if the latter are in a superfluid state, in which case the photon statistics typically exhibits complicated multimodal structures \cite{Chen07}. A coherent control over the superfluid properties of the BEC can also be achieved with the cavity and pump \cite{Bhattacherjee}.
In this work, we show that a nonlinear two-photon interaction between the cavity field and a Bose-Einstein condensate (BEC) confined in an optical lattice yields qualitatively different transmission spectra, both in the superfluid (SF) phase and in the Mott-insulating (MI) phase, compared to the one-photon interaction. Two-photon spectroscopy played a very important role in the studies of the BEC of atomic hydrogen \cite{fried}. Two-photon excitation of $^{87}$Rb atoms to a Rydberg state was also achieved recently \cite{low}.
\section{The effective two-photon transition Hamiltonian}
The system we consider here is an ensemble of $N$ two-level atoms with upper and lower states denoted by $|1>$ and $|0>$, respectively, in an optical lattice with $M$ sites formed by far-off-resonance standing-wave laser beams inside a cavity. A region of $K\eqslantless M$ sites is coupled to two light modes, as shown in Fig.1, which depicts two cavities containing the two modes $a_{1}$ and $a_{2}$ crossed by a one-dimensional optical lattice confining the BEC. In the two-photon process an intermediate level $|i>$ is involved, which is assumed to be coupled to $|1>$ and $|0>$ by dipole-allowed transitions.
The many-body Hamiltonian in second-quantized form is given by
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[scale=0.35]{fig1_2.eps}
\end{tabular}
\caption{Schematic diagram of the setup. The BEC atoms are periodically confined in an optical lattice and are made to interact with two laser modes $a_{1}$ and $a_{2}$ which are confined in two intersecting cavities. Mode $a_{2}$ is transmitted and measured by a detector.}
\label{fig.1}
\end{figure}
\begin{subequations}\label{1}
\begin{eqnarray}
H=H_f +H_a, \\
H_f=\sum_{l=1}^{2}{\hbar\omega_l a^\dag_l a_l} -i\hbar\sum_{l=1}^{2}{(\eta^*_l a_l -
\eta_l a^\dag_l)}, \\
H_a=\int{d^3{\bf r}\Psi^\dag({\bf r})H_{a1}\Psi({\bf r})} \nonumber\\
+\frac{2\pi a_s \hbar^2}{m}\int{d^3{\bf r}\Psi^\dag({\bf
r})\Psi^\dag({\bf r})\Psi({\bf r})\Psi({\bf r})}.
\end{eqnarray}
\end{subequations}
In the field part of the Hamiltonian $H_f$, $a_l$ are the annihilation operators of light modes with the frequencies $\omega_l$, wave vectors ${\bf k}_l$, and mode functions $u_l({\bf r})$, which can be pumped by coherent fields with amplitudes $\eta_l$. In the atom part, $H_a$, $\Psi({\bf r})$ is the atomic matter-field operator, $a_s$ is the $s$-wave scattering length characterizing the direct interatomic interaction, and $H_{a1}$ is the atomic part of the single-particle Hamiltonian $H_1$. The detuning between the atomic transition frequency and any one of the two modes is nonzero. Under these circumstances, the intermediate state can be adiabatically eliminated and the effective Hamiltonian of the two-level atom can be written in the rotating-wave and dipole approximation as
\begin{subequations}\label{2}
\begin{eqnarray}
H_1=H_f +H_{a1}, \\
H_{a1}=\frac{{\bf p}^2}{2m_a}+\frac{\hbar\omega_a}{2} \sigma_z -
i\hbar g_0{[\sigma^+ a_1 u_1({\bf r}) a_2 u_2({{\bf r}})-\text{H. c.}}]
\end{eqnarray}
\end{subequations}
Here, ${\bf p}$ and ${\bf r}$ are the momentum and position operators of an atom of mass $m_a$ and resonance frequency $\omega_a$, $\sigma^+$, $\sigma^-$, and $\sigma_z$ are the raising, lowering, and population difference operators, $g_0$ is the atom--light coupling constant assumed to be same for both the modes.
We will consider nonresonant interaction where the light-atom detunings $\Delta = \omega_1+\omega_2 - \omega_a$ are much
larger than the spontaneous emission rate and Rabi frequencies $g_0 a_1 a_2$. Thus, in the Heisenberg equations obtained from the
single-atom Hamiltonian $H_1$ (\ref{2}), $\sigma_z$ can be set to $-1$ (approximation of linear dipoles). Moreover, the polarization
$\sigma^-$ can be adiabatically eliminated and expressed via the fields $a_1$ and $a_{2}$. An effective single-particle Hamiltonian that gives the corresponding Heisenberg equations for $a_1$ and $a_{2}$ can be written as $H_{1\text{eff}}=H_f +H_{a1}$ with
\begin{equation}\label{3}
H_{a1}=\frac{{\bf p}^2}{2m_a}+V_{\text {cl}}({\bf r})+ \dfrac{2 \hbar g^2_0}{\Delta}a^{\dagger}_{2} a^{\dagger}_{1} a_{1} a_{2} |u_{1}({\bf r})|^{2} |u_{2}({\bf r})|^{2}.
\end{equation}
Here we have also added the classical trapping potential of the lattice, $V_{\text {cl}}({\bf r})$, which corresponds to a strong classical standing wave. Interestingly, unlike in \cite{Mekhov07}, the Hamiltonian $H_{a1}$ does not contain terms like $u^{*}_{1}({\bf r})u_{2}({\bf r})$ or $u^{*}_{2}({\bf r})u_{1}({\bf r})$, which would give rise to an optical grating.
To derive the generalized Bose--Hubbard Hamiltonian we expand the field operator $\Psi({\bf r})$ in Eq.~(\ref{1}), using localized Wannier functions corresponding to $V_{\text {cl}}({\bf r})$ and keeping only the lowest vibrational state at each site: $\Psi({\bf
r})=\sum_{i=1}^{M}{b_i w({\bf r}-{\bf r}_i)}$, where $b_i$ is the annihilation operator of an atom at the site $i$ with the coordinate
${\bf r}_i$. Substituting this expansion in Eq.~(\ref{1}) with $H_{a1}$ (\ref{3}), we get
\begin{eqnarray}\label{4}
H=H_f+\sum_{i,j=1}^M{J_{i,j}^{\text {cl}}b_i^\dag b_j} + \dfrac{2 \hbar g^2_0}{\Delta} a^{\dagger}_{2}a^{\dagger}_{1}a_{1}a_{2} \sum_{i,j=1}^{K} J_{i,j}b^{\dagger}_{i} b_{j} \nonumber \\
+\frac{U}{2}\sum_{i=1}^M{b_i^\dag b_i(b_i^\dag b_i-1)},
\end{eqnarray}
where
\begin{equation}\label{5}
J_{i,j}^{\text {cl}}=\int{d{\bf r}}w({\bf r}-{\bf
r}_i)\left(-\frac{\hbar^2\nabla^2}{2m}+V_{\text {cl}}({\bf
r})\right)w({\bf r}-{\bf r}_j),
\end{equation}
\begin{equation}\label{6}
J_{i,j}=\int{d{\bf r}}w({\bf r}-{\bf r}_i) |u_{1}({\bf r})|^{2} |u_{2}({\bf r})|^{2} w({\bf r}-{\bf r}_j),
\end{equation}
\begin{equation}
U=4\pi a_s\hbar^2/m_a \int{d{\bf r}|w({\bf
r})|^4}.
\end{equation}
The BH Hamiltonian derived above is valid only for weak atom-field nonlinearity \cite{larson}. We assume that atomic tunneling is possible only to the nearest neighbor sites. Thus, coefficients (\ref{5}) do
not depend on the site indices ($J_{i,i}^{\text {cl}}=J_0^{\text {cl}}$ and $J_{i,i\pm 1}^{\text {cl}}=J^{\text {cl}}$), while
coefficients (\ref{6}) are still index-dependent. The Hamiltonian (\ref{4}) then reads
\begin{eqnarray}\label{7}
H=H_f+J_0^{\text {cl}}\hat{N}+J^{\text {cl}}\hat{B}+ \dfrac{2 \hbar g^2_0}{\Delta}
a^{\dagger}_{2} a^{\dagger}_{1} a_{1} a_{2} \left(\sum_{i=1}^K{J_{i,i} \hat{n}_i}\right) \nonumber \\
+ \dfrac{2 \hbar g^2_0}{\Delta} a^{\dagger}_{2} a^{\dagger}_{1} a_{1} a_{2} \left(\sum_{<i,j>}^K{J_{i,j} b_i^\dag
b_j}\right) +\frac{U}{2}\sum_{i=1}^M{\hat{n}_i(\hat{n}_i-1)},
\end{eqnarray}
where $<i,j>$ denotes the sum over neighboring pairs, $\hat{n}_i=b_i^\dag b_i$ is the atom number operator at the $i$-th site, and $\hat{B}=\sum_{i=1}^M{b^\dag_i b_{i+1}}+{\text {H.c.}}$ While the total atom number determined by $\hat{N}=\sum_{i=1}^M{\hat{n}_i}$ is conserved, the atom number at the illuminated sites, determined by $\hat{N}_K=\sum_{i=1}^K{\hat{n}_i}$, is not necessarily a conserved
quantity.
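As a rough illustration of the overlap integrals entering $U$ and $J_{i,i}$, one may adopt a Gaussian ansatz for the Wannier function, $w(r)=(\pi\sigma^2)^{-3/4}e^{-r^2/2\sigma^2}$, a standard deep-lattice approximation (the width $\sigma$ below is an assumed illustrative parameter, not a value from the text). A minimal numerical sketch checks the corresponding closed form $\int|w|^4\,d^3r=(2\pi\sigma^2)^{-3/2}$:

```python
import numpy as np

def wannier_gaussian(r, sigma):
    """Normalized Gaussian ansatz w(r) = (pi sigma^2)^(-3/4) exp(-r^2/(2 sigma^2))."""
    return (np.pi * sigma**2) ** (-0.75) * np.exp(-r**2 / (2.0 * sigma**2))

def radial_integral(f, rmax, npts=20001):
    """Trapezoidal rule for int_0^rmax 4 pi r^2 f(r) dr (spherical symmetry)."""
    r = np.linspace(0.0, rmax, npts)
    y = 4.0 * np.pi * r**2 * f(r)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(r)))

sigma = 0.1   # assumed localization length, in lattice units
overlap = radial_integral(lambda r: wannier_gaussian(r, sigma) ** 4, 10 * sigma)
closed_form = (2.0 * np.pi * sigma**2) ** (-1.5)   # analytic int |w|^4 d^3r
```

Since $U\propto\int|w|^4\,d^3r\propto\sigma^{-3}$ for this ansatz, the on-site repulsion grows rapidly as the lattice deepens and the Wannier functions localize.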
\section{The Transmission Spectra}
The Heisenberg equations for $a_1$, $a_2$ and $b_i$ can be obtained from
the Hamiltonian (\ref{7}) as
\begin{equation} \label{8}
\dot a_{1}=-i\left\lbrace \omega_{1} +2 \dfrac{g_{0}^{2}}{\Delta}(\sum_{i=1}^{K}J_{i,i}\hat{n}_{i}+\sum_{<i,j>}^{K} J_{i,j}b^{\dagger}_{i} b_{j}) a^{\dagger}_{2} a_{2}\right\rbrace a_{1}+\eta_{1}
\end{equation}
\begin{equation} \label{9}
\dot a_{2}=-i\left\lbrace \omega_{2} +2 \dfrac{g_{0}^{2}}{\Delta}(\sum_{i=1}^{K}J_{i,i}\hat{n}_{i}+\sum_{<i,j>}^{K} J_{i,j}b^{\dagger}_{i} b_{j}) a^{\dagger}_{1} a_{1}\right\rbrace a_{2}+\eta_{2}
\end{equation}
\begin{eqnarray}\label{10}
\dot{b}_i=-\frac{i}{\hbar}\left( J_0^{\text {cl}}+ \dfrac{2 \hbar g^2_0}{\Delta} a^{\dagger}_{2} a^{\dagger}_{1} a_{1} a_{2} J_{i,i}+U\hat{n}_i\right) b_i \nonumber \\
-\frac{i}{\hbar}\left( J^{\text {cl}}+\dfrac{2 \hbar g^2_0}{\Delta} a^{\dagger}_{2} a^{\dagger}_{1} a_{1} a_{2} J_{i,i+1}\right)b_{i+1} \nonumber \\
-\frac{i}{\hbar}\left( J^{\text {cl}}+ \dfrac{2 \hbar g^2_0}{\Delta} a^{\dagger}_{2} a^{\dagger}_{1} a_{1} a_{2} J_{i,i-1}\right)b_{i-1}.
\end{eqnarray}
Equations (\ref{8})--(\ref{10}) represent the coupled light-matter wave equations which completely determine the dynamics of the present system. In Eqs.~(\ref{8}) and (\ref{9}), the second and third terms in the curly brackets correspond to the phase shift of the light modes $a_{1}$ and $a_{2}$ due to the two-photon coherence. Note that the term that describes scattering of one mode into the other is absent. We consider a deep lattice formed by a strong classical potential $V_{\text {cl}}({\bf r})$, so that the overlap between Wannier functions in Eqs.~(\ref{5}) and (\ref{6}) is small. Thus, we can neglect the contribution of tunneling by putting $J^{\text{cl}}=0$ and $J_{i,j}=0$ for $i \ne j$. Under this approximation, the matter-wave dynamics is not essential for light scattering. In experiments, such a situation can be realized because the time scale of light measurements can be much faster than the time scale of atomic tunneling. One of the well-known advantages of optical lattices is their extremely high tunability; thus, by tuning the lattice potential, tunneling can be made very slow~\cite{jaksch}.
In a deep lattice, the on-site coefficients $J_{i,i}$ (\ref{6}) can be approximated as $J_{i,i}=|u_1({\bf r}_i)|^{2} |u_2({\bf r}_i)|^{2}$, neglecting details of the atomic localization. Using $a_{i}(t)=a_{i} \exp(-i\omega_{1}t)$, we obtain the stationary solutions of Eqs.~(\ref{8}) and (\ref{9}) as
\begin{equation}\label{11}
a^{\dagger}_{1} a_{1}=\dfrac{\eta_{1}^{2}}{\left\lbrace \dfrac {2 g_{0}^{2}}{\Delta} \sum_{i=1}^{K} |u_{1}({\bf r}_i)|^{2} |u_{2}({\bf r}_i)|^{2} \hat {n}_{i} a^{\dagger}_{2} a_{2}\right\rbrace^{2}+\kappa^{2}}
\end{equation}
\begin{equation}\label{12}
a^{\dagger}_{2} a_{2}=\dfrac{\eta_{2}^{2}}{\left\lbrace \Delta_{p}-\dfrac{ 2 g_{0}^{2}}{\Delta} \sum_{i=1}^{K} |u_{1}({\bf r}_i)|^{2} |u_{2}({\bf r}_i)|^{2} \hat {n}_{i} a^{\dagger}_{1} a_{1}\right\rbrace^{2}+\kappa^{2}}
\end{equation}
where $\Delta_{p}=\omega_{1}-\omega_{2}$. We now assume the mode $a_{1}$ to be in a coherent state, which enables us to treat the quantity $a_{1}$ as a $c$ number. The intensity of the mode $a_{1}$ is then large and undepleted compared to that of the weak mode $a_{2}$. As a result, in Eq.~(\ref{11}) one can neglect the influence of the mode $a_{2}$ on $a_{1}$ by ignoring the quantity $\dfrac{g_{0}^{2}}{\Delta} \sum_{i=1}^{K} |u_{1}({\bf r}_i)|^{2} |u_{2}({\bf r}_i)|^{2} \hat {n}_{i} a^{\dagger}_{2} a_{2}$ with respect to $\kappa$. This amounts to ignoring the phase shift of the mode $a_{1}$, which can be achieved by keeping the detuning $\Delta$ large and the probe amplitude $\eta_{2}$ small. This yields
\begin{equation}\label{13}
a^{\dagger}_{2} a_{2}=\dfrac{\eta_{2}^{2}}{\left\lbrace \Delta_{p}-\dfrac{2 g_{0}^{2} \eta_{1}^{2}}{\kappa^{2} \Delta } \sum_{i=1}^{K} |u_{1}({\bf r}_i)|^{2} |u_{2}({\bf r}_i)|^{2} \hat {n}_{i} \right\rbrace^{2}+\kappa^{2}}
\end{equation}
Following \cite{Mekhov07}, Eq.~(\ref{13}) allows one to express $a^{\dagger}_{2} a_{2}$ as a function $f(\hat{n}_1,...,\hat{n}_M)$ of the atomic occupation number operators and to calculate its expectation value for prescribed atomic states $|\Psi\rangle$.
For the Mott state $\langle\hat{n}_i\rangle_\text{MI}=q_i$ atoms are well localized at the $i$th site with no number fluctuations. It is represented by a product of Fock states, i.e. $|\Psi\rangle_\text{MI}=\prod_{i=1}^M |q_i\rangle_i\equiv |q_1,...,q_M\rangle$, with expectation values
\begin{eqnarray}\label{14}
\langle f(\hat{n}_1,...,\hat{n}_M)\rangle_\text{MI}=f(q_1,...,q_M),
\end{eqnarray}
For simplicity we consider equal average densities $\langle\hat{n}_i\rangle_\text{MI}=N/M\equiv n$ ($\langle\hat{N}_K\rangle_\text{MI}=nK\equiv N_K$).
In the SF state, each atom is delocalized over all sites, leading to local number fluctuations. It is represented by a superposition of Fock states corresponding to all possible distributions of $N$ atoms over $M$ sites: $|\Psi\rangle_\text{SF} =\sum_{q_1,...,q_M}\sqrt{N!/M^N}/\sqrt{q_1!...q_M!} |q_1,...,q_M\rangle$. Expectation values of light operators can be calculated from
\begin{eqnarray}\label{15}
\langle f(\hat{n}_1,...,\hat{n}_M)\rangle_\text{SF}=\frac{1}{M^N}
\sum_{q_1,...,q_M}\frac{N!} {q_1!...q_M!}f(q_1,...,q_M),
\end{eqnarray}
representing a sum of all possible ``classical'' terms. Thus, all these distributions contribute to scattering from a SF, which is
obviously different from $\langle f(\hat{n}_1,...,\hat{n}_M)\rangle_\text{MI}$ with only a single contributing term.
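The difference between the MI and SF expectation values in Eqs.~(\ref{14}) and (\ref{15}) can be illustrated with a small numerical sketch (the parameters below are assumed toy values, not those of the figures). For travelling waves, the function $f$ obtained from Eq.~(\ref{13}) depends on the occupations only through $\hat{N}_K=\sum_{i=1}^{K}\hat{n}_i$, and in the SF state the number of atoms in the $K$ illuminated sites is binomially distributed, so Eq.~(\ref{15}) collapses to a weighted sum of Lorentzians:

```python
import math

def lorentzian(dp, q, eta2=0.6, kappa=0.3, shift=2.0):
    """Eq. (13) with |u1|^2 |u2|^2 = 1 and N_K = q atoms in the illuminated
    sites; `shift` stands for 2 g0^2 eta1^2 / (kappa^2 Delta) in scaled
    units (assumed illustrative value)."""
    return eta2**2 / ((dp - shift * q) ** 2 + kappa**2)

def spectrum_mott(dp, n=1, K=3):
    # Mott state: exactly n atoms per site, N_K = n*K with no fluctuations.
    return lorentzian(dp, n * K)

def spectrum_sf(dp, N=5, M=5, K=3):
    # Superfluid state: N_K is binomial with success probability K/M
    # (Eq. (15) summed over configurations with fixed N_K).
    p = K / M
    total = 0.0
    for q in range(N + 1):
        weight = math.comb(N, q) * p**q * (1.0 - p) ** (N - q)
        total += weight * lorentzian(dp, q)
    return total
```

The MI spectrum is thus a single Lorentzian centered at $\Delta_p=\text{shift}\times nK$, while the SF spectrum is a comb of $N+1$ Lorentzians weighted by the binomial distribution.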
We now apply this formalism to compare the two-photon spectra with the one-photon spectra calculated in Eq.~(5) of \cite{Mekhov07}. For travelling waves, we take $|u_{1}({\bf r_{i}})|^{2}=|u_{2}({\bf r_{i}})|^{2}=1$ and $\sum_{i=1}^{K}|u_{1}({\bf r_{i}})|^{2} |u_{2}({\bf r_{i}})|^{2}\hat{n}_{i}=\sum_{i=1}^{K} \hat{n}_{i}$.
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[scale=0.8]{lorentzian_sf_1.eps}& \includegraphics[scale=0.8]{lorentzian_mott_1.eps}
\end{tabular}
\caption{Photon number in the mode $a_{2}$ for the SF state (a) and the MI state (b) due to a one-photon transition (thin line) and a two-photon transition (thick line). The separation between the Lorentzians in the SF state is $11 \delta$ in the two-photon case and $2 \delta$ in the one-photon case. In the Mott state, a two-satellite contour is observed in the one-photon case, reflecting the normal-mode splitting of the two oscillators $<a_{0,1}>$ coupled through the atoms. This splitting is absent in the two-photon case. All frequencies are scaled with respect to $\delta$, where $\delta=g_{0}^{2}/\Delta$. Also $N=M=K=15$ and $\kappa=0.3 \delta$. The mode amplitudes are in scaled units $\eta_{1}=1$ and $\eta_{2}=0.6$. }
\label{fig.2}
\end{figure}
\begin{figure}[t]
\begin{tabular}{cc}
\includegraphics[scale=0.8]{lorentzian_sf_2.eps}& \includegraphics[scale=0.8]{lorentzian_mott_2.eps}
\end{tabular}
\caption{Photon number in the mode $a_{2}$ for the SF state (a) and the MI state (b) due to a one-photon transition (thin line) and a two-photon transition (thick line). The parameters are the same as in Fig.~2, except $\kappa=1.2 \delta$. The comb-like structure for the SF case (a) is replaced by a broad Lorentzian for both the one-photon and the two-photon case. For the Mott case (b), the peaks are broadened and the two-photon single peak shifts, i.e., the dispersion is reduced. }
\label{fig.3}
\end{figure}
We present the transmission spectra for the one-photon case (thin line) and the two-photon case (thick line) in the SF state in Fig.~2(a) and in the Mott state in Fig.~2(b), for $\kappa=0.3 g^{2}_{0}/\Delta$. Clearly, for both cases the transmission spectrum is a sum of Lorentzians with different dispersion shifts. A comb-like structure is seen if each Lorentzian is resolved. However, the separation between the Lorentzians in the two-photon case ($11 \delta$) is much larger than in the one-photon case ($2 \delta$). Such a spectrum can be reproduced experimentally by repeated measurements over a long time scale, so that a superposition of different atomic distributions (obtained by tunneling) contributes to the spectrum. In the Mott state, the difference between the one-photon interaction (thin line) and the two-photon interaction (thick line) is striking. In the one-photon case, a two-satellite contour is observed, reflecting the normal-mode splitting of the two oscillators $<a_{0,1}>$ coupled through the atoms. This splitting is absent in the two-photon case. A probable explanation for this observation can be traced back to the Hamiltonian $H_{a1}$ of Eq.~(\ref{2}). In the absence of an optical grating, no atomic grating is formed, and as a result the diffraction of one mode into the other is absent (absence of Bragg scattering). In the one-photon case, the two modes behave like two independent oscillators exchanging energy via the atoms. In the two-photon transition, there is no energy exchange between the two modes; instead, energy exchange occurs between the atoms and the two modes taken together. The system then behaves effectively as a two-level atom interacting with a single mode via a one-photon transition.
The results for the case $\kappa > g^{2}_{0}/\Delta$ are shown in Fig.~3. As found in \cite{Mekhov07}, the comb-like structure for the SF case is replaced by a broad Lorentzian for both the one-photon and the two-photon case [Fig.~3(a)]. For the Mott case, however, the peaks are broadened, and the two-photon single peak shifts, i.e., the dispersion is reduced.
From the experimental point of view, such a two-photon excitation has already been achieved in $^{87}$Rb atoms \cite{low} and in an atomic-hydrogen BEC \cite{fried}. In the setup described in \cite{low}, $^{87}$Rb atoms are magnetically trapped in the $5S_{1/2}, F=2, m_{F}=2$ state, and samples ranging from thermal clouds to BECs are produced by means of forced rf evaporation. After this preparation, the atoms are subject to a two-photon Rydberg excitation (using $780.246\,nm$ and $480.6\,nm$ lasers) via the $5P_{3/2}$ state to the $43S_{1/2}$ state, with square light pulses of duration between $170\,ns$ and $2\,\mu s$. To reduce spontaneous photon scattering, the light is blue-detuned by $2 \pi\times 483\,MHz$ from the $5P_{3/2}, F=3$ level; thus only one photon per 100 atoms is scattered for the longest excitation time. However, in such experiments the excitation is locally blocked by the van der Waals interaction between Rydberg atoms.
\section{Conclusion}
We have investigated off-resonant collective light scattering from a Bose-Einstein condensate trapped in an optical lattice and interacting with two cavity modes via a nonlinear two-photon transition. Measuring the transmission spectra allows one to distinguish between a nonlinear two-photon interaction and a one-photon interaction. Depending on the state of the BEC, we found that the two-photon spectra differ from the one-photon spectra. When the BEC is in the Mott state, the normal-mode splitting present for the one-photon transition is missing for the two-photon interaction. When the BEC is in the superfluid state, the transmission spectrum shows the usual multiple-Lorentzian structure; however, the separation between the Lorentzians in the two-photon case is much larger than in the one-photon case. This method could be useful for non-destructive high-resolution Rydberg spectroscopy of ultracold atoms or for two-photon spectroscopy of a hydrogen BEC.
\section{Acknowledgements}
One of the authors Tarun Kumar acknowledges the Council of University Grants Commission, New Delhi for the financial support under the Junior Research Fellowship scheme Sch/JRF/AA/30/2008-2009.
\chapter{Introduction}\label{ch:introduction}
\section{\label{sec:pre}Preliminaries}
The central physical phenomenon behind all of this work is called \textit{autoionization}. The very first sighting of this phenomenon dates back to 1935, by~\citeauthor{Beutler1935}, in the context of the photoabsorption of rare-gas atoms, and it was observed again later in the photoabsorption of atomic helium~\citep{Madden1963}. The modern understanding of autoionization considers a class of discrete states embedded in the continuum; these states are called resonances, and they are genuine eigenstates of the interelectronic repulsion $1/r_{12}$. Even though each resonance may be characterized and described by three parameters (the energy position $E_r$, the shape parameter $q$, and the resonance width), a completely satisfactory classification scheme of resonances is still unavailable. The subject of the classification of \ac{DES}, or resonances, has been characterized by controversy and general disagreement. This work aims to study the so-called $(K,T)$ classification scheme of \ac{DES}~\citep{Herrick1975,Lin1983} by analysing the topological features of the electronic density by means of measures of information theory; see figure~\ref{fig:clasification}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{gfx/clasification}
\caption[Approximate $(K,T)$ ``quantum numbers'' for doubly excited states.]{\label{fig:clasification}The alternative and approximate $(K,T)$ ``quantum numbers'' that were proposed in~\citep{Herrick1975,Lin1983} as an attempt to classify the \ac{DES} of the helium atom.}
\end{figure}
\subsection{\label{sec:resonanceselem}Resonances in elementary quantum mechanics: Resonance scattering from a double $\delta$-function potential}
In order to obtain a deeper understanding of the concept of a resonant state, we introduce a straightforward example commonly encountered in almost all introductory courses of quantum mechanics. Under certain conditions, the scattering of a particle by a one-dimensional square potential barrier exhibits resonant behavior. It is also well known that the transmission coefficient of a one-dimensional double $\delta$-function potential exhibits resonances at a series of energy values~\citep{Lapidus1982}. The problem involves the \ac{TISE}
\begin{equation}\label{eq:schroeq}
-\frac{\hbar^2}{2m}\frac{\partial^2 \psi(x)}{\partial x^2}+V(x)\psi(x)=E\psi(x),
\end{equation}
where $V(x)=V_0(\delta(x+a)+\delta(x-a))$ with $V_0$ an appropriate constant and $2a$ the separation between the barriers. The solution of the \ac{TISE} with this potential can be written as
\begin{subequations}
\begin{align}\label{eq:ressol}
\psi_-&=e^{ikx}+re^{-ikx}\hspace{24pt}x<-a \\
\psi_0&=Ae^{ikx}+Be^{-ikx}\hspace{11pt}|x|\le a \\
\psi_+&=te^{ikx}\hspace{60pt}x> a
\end{align}
\end{subequations}
where the constants $A$, $B$, $r$, and $t$ are determined by the boundary conditions
\begin{subequations}\label{eq:boundarycond}
\begin{align}
\psi_0(a)&=\psi_+(a), \\
\psi_-(-a)&=\psi_0(-a),
\end{align}
\end{subequations}
and
\begin{subequations}\label{eq:derboundary}
\begin{align}
\psi_+^{\prime}(a)-\psi_0^{\prime}(a)=&(-\frac{2}{a_0})\psi_+(a), \\
\psi_0^{\prime}(-a)-\psi_-^{\prime}(-a)=&(-\frac{2}{a_0})\psi_-(-a),
\end{align}
\end{subequations}
where $a_0=\hbar^2/mV_0$. By using equations~\eqref{eq:boundarycond} and~\eqref{eq:derboundary} we can eliminate the constants $A$ and $B$ and then solve for the transmission and reflection amplitudes $t$ and $r$. The transmission and reflection coefficients are $T=|t|^2$ and $R=|r|^2$, with $R+T=1$. Finally, the transmission amplitude can be written as
\begin{equation}\label{eq:transmission}
t=\frac{a_0^2k^2}{a_0^2k^2-2ia_0k+e^{4iak}-1}.
\end{equation}
Figure~\ref{fig:elemres} plots the transmission coefficient $T$ as a function of the energy $E=\hbar^2k^2/2m$. This short illustrative discussion provides an intuitive picture of the meaning of a resonance state in quantum mechanics.
\begin{figure}[h]
\includegraphics[width=\textwidth]{gfx/resonances}
\caption[Resonances in a double $\delta$-function potential.]{\label{fig:elemres}An elementary example of resonances in the scattering from a double $\delta$-function potential. The transmission coefficient $T$ exhibits resonant behavior for some values of energy.}
\end{figure}
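The resonance structure of the transmission coefficient is easy to reproduce numerically from the amplitude $t$ of equation~\eqref{eq:transmission}. The following short Python sketch (the values of $a$ and $a_0$ are arbitrary illustrative choices, not taken from the text) evaluates $T=|t|^2$ on a grid of wave numbers; the unitarity bound $T\le 1$ holds everywhere, and the sharp peaks with $T\simeq 1$ are the resonances of figure~\ref{fig:elemres}.

```python
import numpy as np

def transmission(k, a, a0):
    """T = |t|^2 from the double delta-function transmission amplitude."""
    t = a0**2 * k**2 / (a0**2 * k**2 - 2j * a0 * k + np.exp(4j * a * k) - 1.0)
    return np.abs(t) ** 2

# illustrative (assumed) parameters: half-separation a, and a0 = hbar^2/(m V0)
a, a0 = 1.0, 0.2
k = np.linspace(0.05, 12.0, 20000)
T = transmission(k, a, a0)

print(T.max())  # close to 1: perfect transmission at the resonance energies
```

At the resonances the transmission becomes complete ($T=1$), while between them it is strongly suppressed, which is the one-dimensional analogue of the quasi-localization of the atomic resonances discussed above.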
\section{\label{sec:outlook}Outlook}
This work is organized in three parts, five chapters and three appendices. The first part, composed of one chapter, is dedicated to the stationary calculation of the electronic structure, i.e., the calculation of the energies and the two-electron \ac{CI}-\ac{WF} of two-electron atoms using the \ac{TISE}. In chapter~\ref{ch:doublyhelium} we describe the two-electron atoms and the methods that we have implemented to calculate their electronic structure, based on the construction of the two-electron \ac{CI}-\ac{WF} with the help of one-particle \ac{WF} which are obtained in appendix~\ref{ch:hybsplines}; in particular, we review the application of the \ac{FM} to obtain the \ac{DES} of the helium atom.
The second part, which is composed of two chapters, is dedicated to the topological description of the electronic density by means of measures of information theory. In chapter~\ref{ch:tim} we calculate the two-dimensional electronic density for two-electron atoms and, by integration over one radial dimension, we obtain the one-particle density. Additionally, we introduce the Shannon entropy and the Fisher information as two measures of information theory in order to explore their topological implications for the densities of \ac{DES}. Chapter~\ref{ch:entanglement} is dedicated to the study of quantum entanglement in two-electron systems. There we introduce its definition and the quantities which measure the amount of entanglement in helium. Finally, we discuss the possible implications of the amount of entanglement in \ac{DES} and their classification.
The third part, chapter~\ref{ch:conclusions}, summarizes our conclusions and perspectives for future work.
Finally, the appendices contain a detailed description of the numerical basis implemented in this work, B-splines, as well as some details of the analytical and numerical calculation of the electronic structure of one-electron atoms, focusing only on bound states. Some supplementary figures are shown in appendix~\ref{ch:supgraphics}.
\chapter{Doubly Excited States of Helium}\label{ch:doublyhelium}
The description of the electronic eigenspectrum of a two-electron atom involves, in addition to the bound and continuum states, another kind of quasibound eigenstates of the Hamiltonian. Similar to the infinite series of singly excited states located below the first ionization threshold, Rydberg series of discrete \ac{DES} appear below the upper continuum thresholds associated with excited target configurations (for instance, He$^+$ $(n=2,3,4,\dots)+ e^-$ in the case of \ac{DES} in helium, see figure~\ref{fig:hestates}), i.e., these states are genuinely immersed in the continuum but share properties similar to those of the bound states (discrete energies and spatial quasi-localization). Since these quasibound states are coupled to the underlying continuum, they are in fact metastable states characterized by a finite lifetime~\citep{Friedrich2005}. In principle, there is no external perturbation responsible for the decay of these metastable states into the degenerate continua (for instance, in helium, the decay from a \ac{DES} produces a final ionizing state, i.e., He$^{**} \to$ He$^+ + e^-$, a process which is called autoionization); the decay is intrinsic to the two-electron Hamiltonian, in particular to the electron correlation term $1/r_{12}$.
Thus a good account of the properties of doubly excited states in two-electron atoms, also called resonances or autoionizing states, depends thoroughly on the proper description of the electron correlation. Consequently, these states and their properties cannot be described using simple models, like the independent particle model. The actual existence of \ac{DES} was put forward experimentally by~\citet{Madden1963}, and the first comprehensive theory of resonance phenomena was proposed by~\citet{Fano1961,Fano1983}. Since then, a vast amount of effort has been expended in the understanding of atomic resonance phenomena and, in particular, in the characterization of the resonance states (energy positions, lifetimes, time-dependent decay and its contribution to the photoionization cross section, electron distributions, classification using quantum labels, etc.).
Determining resonance energies and widths (or lifetimes) in atoms has been the subject of intense study and research for decades. Nevertheless, the classification of resonances within the same Rydberg series in terms of labels corresponding to approximate quantum numbers has been a matter of great controversy~\citep{Cooper1963, Nikitin1976, Lin1983, Lin1984, Lin1984a}. Indeed, at variance with the ground state, labeled $(1s,1s)$, and the singly excited states, easily labeled $(1s,nl)$, any attempt to describe the nature of \ac{DES} using simple configurations $(n_1l_1, n_2l_2)$, where $n_1, n_2 > 1$ and $l_i=0,\dots,n_i-1$, has been ill-fated due to the generally strong mixing of two-electron configurations in the description of each \ac{DES}.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/helium_kt.pdf}
\caption[The energy level spectrum of helium.]{\label{fig:hestates}The energy level spectrum of helium. The ground state $(1s^2)$ and the Rydberg series of singly excited states $(1s,nl)$ lie below the first ionization threshold (He$^+ (n=1) + e^-$) with energy $E= -2.0$ a.u. Above this threshold and below the second ionization threshold (He$^+ (n=2) + e^-$) with energy $E= -0.5$ a.u. lies a Rydberg series of doubly excited states (naively denoted as $(2 l_1, n_2 l_2)$ and more properly with the~\citet{Lin1983, Lin1984} set of labels $_{n_1}(K,T)_{n_2}^A$). These \ac{DES} or resonances are immersed in the electronic continuum $(1s, \epsilon l)$ and degenerate with it, which ultimately causes their irreversible decay into the continuum once they become populated. A full Rydberg series of resonances appears just below each ionization threshold of the parent ion He$^+ (n),\hspace{5pt}2<n<\infty$.}
\end{figure}
Nowadays, the most successful proposal for the taxonomy of the different types of resonances is adopted from the work of~\citet{Lin1983, Lin1984}, after the pioneering work of~\citet{Herrick1975}. These approaches define a new set of approximate quantum numbers known as the $(K,T)$ numbers. As a result, these quasi-quantum numbers can describe, to a rather good approximation, the characteristics of the series of doubly excited states. In the following sections we shall discuss the \ac{CI} method to calculate the stationary eigenspectrum of helium-like ions. Additionally, we shall introduce the \ac{FM}, which is one of the most sophisticated theoretical tools to adequately deal with autoionizing states or resonances. Finally, we shall conclude with a review of the \ac{HSL} classification scheme.
For the sake of clarity, figure~\ref{fig:hestates} shows a semi-quantitative energy spectrum of the He atom, indicating specifically the location of resonance states as Rydberg series below any ionization threshold.
\section{\label{sec:qmd}Stationary quantum-mechanical description of Helium-like atoms or ions}
Helium-like atoms cannot be solved analytically. The Hamiltonian of the system involves an inter-electronic interaction term which depends only upon the spatial separation between the electrons, and it does not allow us to obtain a solution in terms of any known analytical function. Naturally, since the foundation of wave mechanics, many diverse approaches have been devised to approximate the \ac{WF} for the ground state, singly excited states, and resonances. Among the most emblematic methods we can list the following: the Hartree-Fock and multiconfigurational Hartree-Fock methods~\citep{FroeseFischer1973,FroeseFischer1977,FroeseFischer1978}, and the general \ac{CI} methods~\citep{Shavitt1977,Friedrich2005}; the latter is the method we use to obtain the eigenspectrum of helium.
\subsection{Hamiltonian of helium-like atoms}
Two-electron atoms, consisting of a nucleus of mass $M$ and charge $Ze$ and two electrons with mass $m$ and charge $e$, can be described in terms of the Coulomb interactions between the three charged particles. As in the case of one-electron atoms (see section~\ref{sec:fullhy}), we can separate the motion of the centre of mass. Actually, for helium-like ions this is a slightly more complicated procedure, which can be followed in~\citep{Bransden2003}. Therefore, denoting by $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$ the relative coordinates of the two electrons with respect to the nucleus, we can write the following two-particle Hamiltonian
\begin{align}\label{eq:Hham1}
H &=-\frac{\hbar^2}{2\mu}\nabla^2_{\boldsymbol{r}_1}-\frac{\hbar^2}{2\mu}\nabla^2_{\boldsymbol{r}_2}-\frac{\hbar^2}{M}\nabla_{\boldsymbol{r}_1}\cdot\nabla_{\boldsymbol{r}_2}-\frac{Ze^2}{(4\pi\epsilon_0)r_1}-\frac{Ze^2}{(4\pi\epsilon_0)r_2}\\ \nonumber
&+\frac{e^2}{(4\pi\epsilon_0)r_{12}},
\end{align}
where $\mu=mM/(m+M)$ is the reduced mass of an electron with respect to the nucleus and $r_{12}=|\boldsymbol{r}_1-\boldsymbol{r}_2|$.
We shall consider in our calculation an \textit{infinitely heavy} nucleus, since in this work we are not in pursuit of high-precision calculations, which must include all corrections due to the finite mass. As a result, $\mu=m$ and the \textit{mass polarisation} term $(\hbar^2/M)\nabla_{\boldsymbol{r}_1}\cdot\nabla_{\boldsymbol{r}_2}$ can be neglected. Consequently, we can rewrite the expression~\eqref{eq:Hham1}, in \ac{a.u.}, as
\begin{equation}\label{eq:Hham2}
H = h(1) + h(2) + \frac{1}{r_{12}},
\end{equation}
where $h(i)= -\frac{\nabla^2_i}{2} -\frac{Z}{r_i}$. The Schr\"odinger equation reads
\begin{equation}\label{eq:schrotwo}
\left [h(1) + h(2) + \frac{1}{r_{12}}\right] \psi(\boldsymbol{r}_1,\boldsymbol{r}_2)=E\psi(\boldsymbol{r}_1,\boldsymbol{r}_2),
\end{equation}
and it cannot be separated due to the presence of the $1/r_{12}$ term. For this reason, we cannot write the total two-particle \ac{WF} as a direct product of one-particle \ac{WF}s, i.e., the system is not separable or, in more modern terminology, it is \textit{entangled}.
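This non-separability can be made concrete with a small numerical toy model (not the atomic Hamiltonian: two levels per electron and a product-form coupling standing in for $1/r_{12}$, with an arbitrary assumed strength). Diagonalizing the coupled Hamiltonian and performing a Schmidt (singular value) decomposition of the ground state shows that more than one Schmidt coefficient survives, i.e., the eigenstate is no longer a single product of one-particle states.

```python
import numpy as np

# two-level "orbitals" per electron; V (x) V mimics the two-body coupling
h = np.diag([0.0, 1.0])                  # one-particle energies
I = np.eye(2)
V = np.array([[0.0, 0.3], [0.3, 0.0]])   # toy interaction (assumed strength)
H = np.kron(h, I) + np.kron(I, h) + np.kron(V, V)

E, U = np.linalg.eigh(H)
psi = U[:, 0].reshape(2, 2)              # ground state as an amplitude matrix
schmidt = np.linalg.svd(psi, compute_uv=False)

print(schmidt)  # more than one nonzero Schmidt coefficient: entangled state
```

Switching the coupling off ($V=0$) collapses the Schmidt spectrum to a single coefficient, recovering a separable product state; this is the quantity explored in more depth in chapter~\ref{ch:entanglement}.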
\subsection{Two-fermion antisymmetric wave function }\label{sec:symmetrizationpos}
The \textit{Pauli exclusion principle} asserts that in a system of identical fermions no more than one particle can have exactly the same single-particle quantum numbers. This statement requires that the \ac{WF} of a two-electron system be antisymmetric as a whole, i.e., the \ac{WF} must change its sign under a single permutation of the global electron coordinates (spatial plus spin), that is
\begin{equation}\label{eq;antisym}
\ket{\psi(2,1)} =-\ket{\psi(1,2)}.
\end{equation}
This is the so-called \textit{Symmetrization Postulate}, which states that: (a)~particles whose spin is an \textit{integer} multiple of $\hbar$ have only symmetric states (these particles are called \textit{bosons}); (b)~particles whose spin is a \textit{half odd-integer} multiple of $\hbar$ have only antisymmetric states (these particles are called \textit{fermions}); (c)~partially symmetric states do not exist~\citep{Messiah1966, Ballentine1998}. Besides, this postulate is another form of the \textit{principle of indistinguishability of identical particles}. Following~\citet{Messiah1964}, the principle states: \textit{``Dynamical states that differ only by a permutation of identical particles cannot be distinguished by any observation whatsoever''}.
Formally, in an \textit{independent particle model}, i.e., neglecting the term $1/r_{12}$ in the Hamiltonian~\eqref{eq:Hham2}, we can build up the two-electron \ac{WF} from the one-particle orbitals by means of the antisymmetrizing operator $\mathcal{\hat{A}}$, which may be defined as
\begin{equation}\label{eq;antisymket}
\ket{\psi(1,2)} =\mathcal{\hat{A}}[\ket{\phi(1)}\ket{\phi(2)}].
\end{equation}
For the case of two fermions, in which the \textit{Pauli exclusion principle} has a central role, the total \ac{WF} can be factorized into a spatial part (symmetric or antisymmetric) and a spin part (singlet or triplet):
\begin{equation}\label{eq:wf1}
\ket{\psi(1,2)} = \ket{\phi(1, 2)}_{symmetric, antisymmetric} \otimes\ket{ \chi (1,2)}_{singlet,triplet}
\end{equation}
Hydrogen-like functions, or one-particle functions, which are introduced in Appendix~\ref{sec:fullhy}, may be written using the so-called \textit{Dirac notation}, namely
\begin{align}\label{eq:hylike}
\ket{\psi_{n,l,m_l,s, m_s}}&=\ket{\phi_{nlm_l}}\otimes\ket{\chi_{sm_s}}, \nonumber \\
&=\ket{n,l,m_l,s, m_s}
\end{align}
where $\{n,l,m_l,s, m_s\}$ represent good quantum numbers corresponding to the commuting operators $(\hat{H},\hat{L}^2, \hat{L}_z, \hat{S}^2, \hat{S}_z)$ which completely describe the one-particle state. Since we deal with electrons, the spin quantum number has the definite constant \textit{half-integer (fermion)} value $s=1/2$, which is assumed hereafter in the formulas. Now, $\ket{\chi_{sm_s}}$ is the spin vector and $\ket{\phi_{nlm_l}}$ is the orbital eigenstate, which may be projected onto the space representation to obtain the corresponding \ac{WF} as
\begin{equation}\label{eq:orbitalfunc}
\braket{\boldsymbol{r}|\phi_{n,l,m_l}}=\phi_{n,l,m_l}(r,\theta,\phi)=\frac{\mathcal{U}_{n,l}({r})}{r}\mathcal{Y}^l_{m_l}(\theta,\phi),
\end{equation}
where $\mathcal{Y}^l_{m_l}(\theta,\phi)$ is a spherical harmonic corresponding to the orbital angular momentum quantum number $l$ and to the magnetic quantum number $m_l$. The functions $\mathcal{U}_{n,l}({r})=rR_{n,l}({r})$ satisfy the reduced radial equation~\eqref{eq:radschr} quoted in Appendix~\ref{ch:hybsplines}. So, the separation of equation~\eqref{eq:relativeeqn1} enables us to write the full state vector in a partially mixed Dirac notation as
\begin{equation}\label{eq:hyket}
\braket{r|n,l,m_l,m_s}=\braket{r|\phi_{n,l,m_l},\chi_{m_s}}=\frac{\mathcal{U}_{n,l}({r})}{r}\ket{l,m_l,m_s}.
\end{equation}
Even though we abuse the standard Dirac notation in equation~\eqref{eq:hyket}, it will prove useful in the following. In addition, it is important to note that any \textit{inner product} involves an integration over the radial coordinate $r$, for $0\leq r<\infty$. Note that we drop the label $s$ from the spin ket $\ket{\chi}$ since $s=1/2$ always for electrons. The spin projection $m_s$ can take the values $+\frac{1}{2}$ and $-\frac{1}{2}$, which correspond to the two state vectors $\ket{\chi_\frac{1}{2}}\equiv\ket{\alpha}$ and $\ket{\chi_{-\frac{1}{2}}}\equiv\ket{\beta}$.
As stated above, the whole \ac{WF} of a two-electron atom must be antisymmetric under the permutation $1 \leftrightarrow 2$. Then, using the expression~\eqref{eq:wf1}, we may build up the following two antisymmetrized \ac{WF} by means of the eigenstates of the global spin operators $\hat{S}^2$ and $\hat{S}_z$, with $\hat{S}=\hat{s}_1+\hat{s}_2$ and $\hat{S}_z=(\hat{s}_1)_z+(\hat{s}_2)_z$ (spin \textit{singlet} function $S=0$ with $M_S=0$ and spin \textit{triplet} function $S=1$ with $M_S=1,0,-1$).
\begin{align}
\ket{\psi(1,2)}^{para}& =\ket{\Phi(1,2)}_{symmetric}\otimes\frac{1}{\sqrt{2}}[\ket{\alpha\beta}-\ket{\beta\alpha}], \label{eq:funpara}\\
\ket{\psi(1,2)}^{orto}& =\ket{\Phi(1,2)}_{antisymmetric}\otimes\begin{cases}
\ket{\alpha\alpha}, \\
\frac{1}{\sqrt{2}}[\ket{\alpha\beta}+\ket{\beta\alpha}], \\
\ket{\beta\beta}.
\end{cases} \label{eq:funorto}
\end{align}
In the expression~\eqref{eq:funpara} we use a spatially symmetric state vector (commonly called \textit{para}) together with the singlet spin state vector, whereas in equation~\eqref{eq:funorto} we use an antisymmetric state vector (commonly called \textit{orto}) together with the symmetric spin triplet state.
Actually, in this work we only consider two-electron atoms within $LS$ coupling; this angular momentum coupling is adequate to describe atoms with small nuclear charge $Ze$. In particular, the goal is to get an antisymmetric \ac{WF} with total angular momentum $L$ with projection $M_L$ and total spin $S$ with projection $M_S$. In order to accomplish this goal, we shall follow the graphical method described by~\citet{Lindgren1986}, although the same result may be readily obtained algebraically using Clebsch-Gordan angular momentum coupling coefficients, e.g.,~\citep{Edmonds1957}. First we couple the angular momenta of the two separated electrons, to build up the states of total orbital and spin angular momentum, as follows
\begin{align}
\ket{(l_al_b)LM_L}&=\sum_{m_l^am_l^b}\ket{l_am^a_l,l_bm^b_l}\braket{l_am^a_l,l_bm^b_l|LM_L},\label{eq:orbcoup} \\
\ket{SM_S}&=\sum_{m_s^am_s^b}\ket{m^a_s,m^b_s}\braket{s_am^a_s,s_bm^b_s|SM_S},\label{eq:spncoup}
\end{align}
where $\braket{l_am^a_l,l_bm^b_l|LM_L}$ and $\braket{s_am^a_s,s_bm^b_s|SM_S}$ are the \textit{vector-coupling coefficients} or \ac{CGC}~\citep{Ballentine1998,Lindgren1986}. Indeed, we may use, instead of the \ac{CGC}, a more symmetrical quantity called the Wigner $3j$-symbol, which is defined as follows
\begin{equation}\label{eq:3jwigner}
\tj{j_1}{j_2}{j_3}{m_1}{m_2}{m_3}=(-1)^{j_1-j_2-m_3}(2j_3+1)^{-\frac{1}{2}}\braket{j_1m_1,j_2m_2|j_3-m_3}.
\end{equation}
It is easy to show that the $3j$-symbol vanishes unless $m_1+m_2+m_3=0$. In addition, a non-vanishing $3j$-symbol must satisfy the triangular condition $|j_1-j_2|\leq j_3\leq j_1+j_2$.
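For integer angular momenta the $3j$-symbol can be evaluated directly from the Racah closed-form sum. The Python routine below is a minimal sketch restricted to integer $j$'s (so half-integer spins are excluded); it reproduces the known special value $(-1)^{l}(2l+1)^{-1/2}$ for the symbol with $j_1=j_2=l$, $j_3=0$ and all projections zero, together with the two selection rules just stated.

```python
from math import factorial, sqrt

def three_j(j1, j2, j3, m1, m2, m3):
    """Wigner 3j-symbol via the Racah formula (integer arguments only)."""
    if m1 + m2 + m3 != 0:                       # projection selection rule
        return 0.0
    if not abs(j1 - j2) <= j3 <= j1 + j2:       # triangular condition
        return 0.0
    if abs(m1) > j1 or abs(m2) > j2 or abs(m3) > j3:
        return 0.0
    delta = sqrt(factorial(j1 + j2 - j3) * factorial(j1 - j2 + j3) *
                 factorial(-j1 + j2 + j3) / factorial(j1 + j2 + j3 + 1))
    pref = sqrt(factorial(j1 + m1) * factorial(j1 - m1) *
                factorial(j2 + m2) * factorial(j2 - m2) *
                factorial(j3 + m3) * factorial(j3 - m3))
    total = 0.0
    for k in range(j1 + j2 - j3 + 1):
        args = [k, j1 + j2 - j3 - k, j1 - m1 - k, j2 + m2 - k,
                j3 - j2 + m1 + k, j3 - j1 - m2 + k]
        if any(a < 0 for a in args):            # factorial of a negative: skip
            continue
        term = 1.0
        for a in args:
            term *= factorial(a)
        total += (-1) ** k / term
    return (-1) ** (j1 - j2 - m3) * delta * pref * total
```

For instance, `three_j(1, 1, 0, 0, 0, 0)` gives $-1/\sqrt{3}$ and `three_j(2, 2, 0, 0, 0, 0)` gives $1/\sqrt{5}$, while any symbol violating the triangular or projection conditions evaluates to zero.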
Finally, using equations~\eqref{eq:orbcoup},~\eqref{eq:spncoup}, together with~\eqref{eq:hylike}, and~\eqref{eq;antisymket}, we can write the antisymmetric state of the two-electron atom in $LS$ coupling as
\begin{align}\label{eq:ansymm1}
&\ket{\{n_al_a\hspace{4pt}n_bl_b\}LM_LSM_S}= \\ \nonumber
&\hspace{25pt}F\left[\ket{(n_al_a)_1(n_bl_b)_2LM_LSM_S}-\ket{(n_al_a)_2(n_bl_b)_1LM_LSM_S}\right]
\end{align}
where $F$ is a normalization factor to be calculated. The subscripts $1$ and $2$ refer to the individual electrons. The \textit{curly brackets} denote the antisymmetric combination, i.e., they imply the action of the antisymmetrizing projection operator
\begin{equation}\label{eq:antiop}
\hat{\mathcal{A}}=\frac{1}{N_t!}\sum_P(-1)^PP,
\end{equation}
where $N_t$ is the total number of particles and $P$ denotes one of the $N_t!$ permutations of the $N_t$ particle indexes. By means of the symmetry properties of the \ac{CGC} or the $3j$-symbols, we may permute any two columns; an even permutation leaves the $3j$-symbol value invariant, but an odd permutation introduces the additional phase $(-1)^{j_1+j_2+j_3}$. For this reason, we get two phase factors in our case: $(-1)^{l_a+l_b+L}$ for the orbital part and $(-1)^{S+1}$ for the spin part. Consequently, equation~\eqref{eq:ansymm1} can be recast in the form:
\begin{align}\label{eq:ansymm2}
&\ket{\{n_al_a\hspace{4pt}n_bl_b\}LM_LSM_S}= \\ \nonumber
&F\left[\ket{(n_al_a)_1(n_bl_b)_2LM_LSM_S}+(-1)^{l_a+l_b+L+S}\ket{(n_bl_b)_1(n_al_a)_2LM_LSM_S}\right].
\end{align}
If $n_a=n_b$ and $l_a=l_b$, i.e., the electrons are said to be \textit{equivalent}, the normalization factor $F$ is equal to $1/2$ and the normalized antisymmetric function becomes
$\ket{\{(nl)^2\}LM_LSM_S}=\ket{(nl)_1(nl)_2LM_LSM_S}$ for $S+L$ even. On the other hand, if $n_a\neq n_b$ or $l_a\neq l_b$, i.e., the electrons are \textit{non-equivalent}, the normalization factor $F$ is equal to $1/\sqrt{2}$.
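The distinction between equivalent and non-equivalent electrons can be illustrated with a small numerical toy (a sketch, not the atomic calculation: orbitals are represented as orthonormal vectors and the spatial two-electron function as an outer product). The antisymmetric (orto) combination of two identical orbitals vanishes identically, so only the symmetric (para) spatial part survives, in accordance with the Pauli principle.

```python
import numpy as np

# toy orthonormal one-particle orbitals
phi_a = np.array([1.0, 0.0, 0.0])
phi_b = np.array([0.0, 1.0, 0.0])

def two_electron(phi1, phi2, sign):
    """Spatial part: sign = +1 (para, symmetric), -1 (orto, antisymmetric)."""
    return np.outer(phi1, phi2) + sign * np.outer(phi2, phi1)

# non-equivalent electrons: both combinations exist, each with norm sqrt(2),
# hence the normalization factor F = 1/sqrt(2)
sym  = two_electron(phi_a, phi_b, +1)
anti = two_electron(phi_a, phi_b, -1)

# equivalent electrons: the antisymmetric spatial part vanishes identically
anti_eq = two_electron(phi_a, phi_a, -1)
```

The vanishing of `anti_eq` is the finite-dimensional analogue of the statement that equivalent-electron configurations only exist for $L+S$ even.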
\subsection{Configuration interaction (CI) method}\label{sec:cimethod}
Our chosen method to get successful and accurate solutions, both for eigenstates and eigenenergies, of the \ac{TISE}, equation~\eqref{eq:schrotwo}, is the so-called \textit{configuration interaction} (\ac{CI}) method~\citep{Shavitt1977,Szabo1989}. As mentioned above, approximate solutions to the $N$-electron problem may be achieved using different methods, for instance, the \ac{HF} method. The \ac{HF} method retains the simplicity of solving the total \ac{WF} in terms of a single Slater determinant in which each orbital is optimized by solving the one-particle Fock operator, which averages the interaction with the other electrons. Nevertheless,
the \ac{HF} method is not able to describe the full electron correlation~\citep{Szabo1989,Friedrich2005}. The \ac{CI} method is in essence a variational many-electron method which builds up the \ac{WF} as a huge linear combination of antisymmetrized configurations constructed with \ac{HF} or hydrogenic orbitals (appropriately coupled to yield the correct total angular momenta $L$ and $S$).
A general calculation using the \ac{CI} scheme can be understood as an optimization of a trial \ac{CI}-\ac{WF} constructed with a large combination of $N$ different configurations, i.e., a \textit{linear combination or superposition} of a large number of antisymmetrized two-electron functions based on products of spin-orbitals; the configurations are built up in the form of~\eqref{eq:ansymm2}~\citep{Sherrill1999,Cramer2004,Friedrich2005}. Therefore, the general \ac{CI}-\ac{WF} can be written as
\begin{equation}\label{eq:ciwfunc}
\ket{\Psi^{CI}}=\sum_{i=1}^N C_i\ket{\psi_i},
\end{equation}
where $C_i$ are the variational expansion coefficients, for the $i$-th configuration, subject to optimization. As previously mentioned, each member of the expansion is defined by
\begin{equation}\label{eq:ansymm3}
\ket{\psi_i}=\ket{\{n_a^il_a^i\hspace{4pt}n_b^il_b^i\}LM_LSM_S}.
\end{equation}
Afterwards, the variational method asserts that the eigenvalues and eigenfunctions of the Hamiltonian~\eqref{eq:Hham2} can be approximated by seeking the conditions under which the following functional $E[\Psi^{CI}]$ becomes stationary, that is,
\begin{equation}\label{eq:varstat}
\delta E[\Psi^{CI}]=0,
\end{equation}
with
\begin{align}\label{eq:varstat1}
E[\Psi^{CI}]&=\frac{\braket{\Psi^{CI}|H|\Psi^{CI}}}{\braket{\Psi^{CI}|\Psi^{CI}}},\\ \nonumber
&=\frac{\sum_{ij}^NC_iC^{*}_j\braket{\psi_i|H|\psi_j}}{\sum_i^NC_iC^{*}_i}.
\end{align}
The \ac{CI} method is conceptually the most straightforward method to solve the \ac{TISE}. It is said that \ac{CI} constitutes an ``exact theory'' in the limit of an infinite basis of configurations. In practice, however, the matrix equations are not exact because the expansion in equation~\eqref{eq:ciwfunc} must be truncated to a finite number $N$ of terms. Therefore, if we include a large enough number of configurations, the diagonalization of the Hamiltonian in the truncated subspace can give a very good approximation to the exact eigenenergies and eigenstates, since the electron correlation due to the Coulomb term $1/r_{ij}$ is better described using this kind of many-particle method. However, our \ac{CI}-\ac{WF} does not contain terms including the inter-electronic coordinate $r_{12}$, i.e., the trial \ac{WF} is not explicitly correlated. Instead, the proper description of the correlation term $1/r_{12}$ is achieved by the angular mixing of configurations in the \ac{CI}-\ac{WF}. Other \ac{CI} schemes might include trial \ac{WF} which are explicitly correlated, i.e., containing functions of the coordinate $r_{12}$. These explicitly correlated schemes have in general a faster radial and angular convergence with the number of configurations. However, their computational implementation is much more involved, and lacks the simplicity of configurations based on direct products of orbitals.
The classical (explicitly uncorrelated) \ac{CI} method has been implemented in a vast range of applications and calculations, both in atomic and molecular physics. For instance, in the H$^-$ ion by~\citet{Chang1991}, in the He atom by~\citet{Bachau1984,Castro-Granados2012, vanderHart1992a}, in Li and Li-like atoms by~\citet{Cardona2010}, in the Be atom by~\citet{Chang1989}, in N$^{3+}$ and N$^{5+}$ by~\citet{vanderHart1992a,vanderHart1992b}, in the Mg atom by~\citet{Tang1990}, and in Mg$^-$ and Ca$^-$ by~\citet{Sanz-Vicario2008}, to mention only a few.
To summarize the whole procedure: we first solve the one-electron problem in the parent ion in order to obtain the basis set of orbitals for the different angular momenta $l=0(s),1({p}),2(d),3(f),\dots$; the orbitals themselves can be expanded in terms of a basis set, which in our case consists of B-splines (see Appendix~\ref{ch:hybsplines}). Secondly, we construct the two-electron variational \ac{CI}-\ac{WF} with antisymmetrized configurations out of the set of orbitals, according to the $LS$ coupling. Once the matrix elements of the total two-electron Hamiltonian are calculated, we solve the generalized eigenvalue problem to obtain the eigenvalues and the corresponding eigenvectors.
\subsection{Hamiltonian matrix elements}\label{sec:rmatxelem}
As suggested above, the variational theorem requires the optimization of the average value of the Hamiltonian operator $\braket{\Psi^{CI}|H|\Psi^{CI}}$ with respect to the expansion coefficients $C_i$ in equation~\eqref{eq:ciwfunc}. This variational optimization is equivalent to solve the matrix eigenvalue problem in an algebraic subspace spanned by the basis of configurations, see \citep{Levine2008}. In order to solve this eigenvalue problem, the Hamiltonian matrix elements $H_{ij}=\braket{\psi_{i}|H|\psi_{j}}$ with $i,j=1,\dots,N$ must be calculated. Using the equations~\eqref{eq:Hham2} and~\eqref{eq:ansymm3} they read
\begin{align}\label{eq:hammatrix1}
H_{ij}&=\braket{\psi_i|H|\psi_j}, \\ \nonumber
&=\bra{\{n_al_a\hspace{4pt}n_bl_b\}_iLM_LSM_S}\left(h(1) + h(2) + \frac{1}{r_{12}}\right)\ket{\{n_cl_c\hspace{4pt}n_dl_d\}_jLM_LSM_S}, \\ \nonumber
&=\left(E_{a}+E_{b}\right)\delta_{ij}+\left(\frac{1}{r_{12}}\right)_{ij},
\end{align}
where $i\equiv\{n_al_a,n_bl_b\}$, $j\equiv\{n_cl_c,n_dl_d\}$, and $\left(1/r_{12}\right)_{ij}$ is the matrix element of the inter-electronic Coulomb operator. The Kronecker deltas in equation~\eqref{eq:hammatrix1} reflect the fact that we are dealing with orthogonal orbitals. The Hamiltonian matrix $\boldsymbol{H}$ that we must diagonalize is a dense (i.e., not sparse) matrix; many optimised algorithms are available for this task. We accomplish the diagonalization with the help of the routine \texttt{DSYEV} included in the \texttt{LAPACK} library~\citep{Anderson1999}.
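The numerical core of the method is thus the diagonalization of a real symmetric matrix. The sketch below uses \texttt{numpy.linalg.eigh} (which relies on the same class of \texttt{LAPACK} symmetric eigensolvers as \texttt{DSYEV}) on an arbitrary symmetric matrix standing in for $\boldsymbol{H}$, and verifies numerically the Hylleraas-Undheim/MacDonald property that underlies the variational \ac{CI} expansion: enlarging the configuration basis can only lower (or keep) the lowest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N))
H = 0.5 * (A + A.T)          # stand-in for the dense symmetric CI matrix

# lowest eigenvalue of every truncated basis (leading n x n block);
# eigh returns eigenvalues in ascending order, like DSYEV
ground = [np.linalg.eigh(H[:n, :n])[0][0] for n in range(1, N + 1)]

print(ground)                # non-increasing sequence of ground-state estimates
```

The monotone decrease of the lowest eigenvalue with basis size is a direct consequence of the Cauchy interlacing theorem for principal submatrices, and it is the reason why truncated \ac{CI} energies converge from above to the exact values.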
\subsubsection{Matrix elements for the interelectronic Coulomb repulsion.}
Since the trial \ac{CI}-\ac{WF} is built up as antisymmetric products of spin orbitals, once we have the one-particle energies, we only need to calculate the matrix elements for the electron-electron Coulomb interaction, that read
\begin{align}\label{eq:intermatrix}
\left(\frac{1}{r_{12}}\right)_{ij}&=\braket{\psi_i|\left(\frac{1}{r_{12}}\right)|\psi_j},\\ \nonumber
&=\bra{\{n_a^il_a^i\hspace{4pt}n_b^il_b^i\}LM_LSM_S}\left(\frac{1}{r_{12}}\right)\ket{\{n_c^jl_c^j\hspace{4pt}n_d^jl_d^j\}LM_LSM_S}.
\end{align}
Even though the indices $i$ and $j$ are somewhat redundant, we have kept them in order to emphasize our \ac{CI} approach. In this context they denote two different specific configurations. Hereafter, these indices will be removed from the notation.
For a two-electron system with two non-equivalent electrons, following equation~\eqref{eq:ansymm2}, the antisymmetric \ac{WF} with quantum numbers $n_al_a$ and $n_bl_b$ reads
\begin{align}\label{eq:ansymm33}
&\ket{\{n_al_a\hspace{4pt}n_bl_b\}LM_LSM_S}= \\ \nonumber
&\frac{1}{\sqrt{2}}\left[\ket{(n_al_a)_1(n_bl_b)_2LM_LSM_S}+(-1)^{l_a+l_b+L+S}\ket{(n_bl_b)_1(n_al_a)_2LM_LSM_S}\right],
\end{align}
and the antisymmetric \ac{WF} for the case of two-equivalent electrons is
\begin{equation}\label{eq:ansymm34}
\ket{\{(nl)^2\}LM_LSM_S}=\ket{(nl)_1(nl)_2LM_LSM_S}\hspace{15pt}L+S\hspace{5pt}even.
\end{equation}
Accordingly, the electron-electron Coulomb interaction matrix elements between
antisymmetric \ac{WF} will be expressed in terms of basic integrals of the form:
\begin{equation}\label{eq:matrixnonanty}
(r_{12}^{-1})^{\aleph}=\bra{(n_al_a)_1(n_bl_b)_2LM_LSM_S}r_{12}^{-1}\ket{(n_cl_c)_1(n_dl_d)_2LM_LSM_S},
\end{equation}
where the superscript $\aleph$ denotes that we are using non-antisymmetrized functions.
To begin with the calculation of the matrix elements~\eqref{eq:matrixnonanty}, which in principle involve integrations over $\boldsymbol{r}_1$ and $\boldsymbol{r}_2$, we must introduce the commonly used multipole expansion of the inter-electronic repulsion term
\begin{equation}\label{eq:intermulti}
r_{12}^{-1}=\sum_l\frac{r^l_{<}}{r^{l+1}_{>}}P_l(\cos\theta_{12}),
\end{equation}
where $P_l(\cos\theta_{12})$ is the Legendre polynomial of order $l$ whose argument is the cosine of the inter-electronic angle~\citep{Abramowitz1965}; $r_{<}$ is the lesser of $r_1$ and $r_2$, and $r_>$ the greater.
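As a numerical sanity check of this expansion, the monopole ($l=0$) radial integral for two $1s$ electrons can be computed by direct two-dimensional quadrature and compared with the classic analytic result $\braket{1s^2|r_{12}^{-1}|1s^2}=5Z/8$ a.u. The sketch below uses the hydrogenic reduced orbital $\mathcal{U}_{1s}(r)=2Z^{3/2}re^{-Zr}$ on a finite radial grid (grid size and cutoff are illustrative choices).

```python
import numpy as np

Z = 1.0
r = np.linspace(1e-6, 30.0, 4000)
U = 2.0 * Z**1.5 * r * np.exp(-Z * r)     # reduced radial 1s orbital, U = r R_1s
w = U**2                                  # radial probability density

# monopole kernel r_<^0 / r_>^1 = 1 / max(r1, r2)
K = 1.0 / np.maximum.outer(r, r)

# two-dimensional trapezoidal quadrature of the l = 0 Slater integral
R0 = np.trapz(np.trapz(w[:, None] * w[None, :] * K, r, axis=1), r)

print(R0)  # should be close to the analytic value 5Z/8 = 0.625 a.u.
```

This $l=0$ term is precisely the first-order perturbative estimate of the electron repulsion energy in the helium ground state.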
As previously introduced, we use a mixed Dirac notation to separate the radial part from the orbital and spin angular momentum part in the matrix elements as follows
\begin{align}\label{eq:matrixnonanty2}
(r_{12}^{-1})_{ab,cd}^{\aleph}&=\sum_l\int_0^\infty dr_1\int_0^\infty dr_2\mathcal{U}_{n_{a}l_{a}}(r_{1})\mathcal{U}_{n_{b}l_{b}}(r_{2})\frac{r^l_{<}}{r^{l+1}_{>}}\mathcal{U}_{n_{c}l_{c}}(r_{1})\mathcal{U}_{n_{d}l_{d}}(r_{2})\\ \nonumber
&\times\bra{(l_al_b)LM_LSM_S}P_l(\cos\theta_{12})\ket{(l_cl_d)LM_LSM_S},\\ \nonumber
&=\sum_lR^{l}(ab,cd)\braket{(LS)_{ab}|P_l(\cos\theta_{12})|(LS)_{cd}}.
\end{align}
First, in order to calculate the orbital-spin part of the matrix element, now written as $\braket{(LS)_{ab}|P_l(\cos\theta_{12})|(LS)_{cd}}$, we shall use the graphical method described in~\citep{Lindgren1986}; this method is based on the theory of \textit{spherical tensor operators}. Now, using the addition theorem for \ac{SH}, we write the Legendre polynomial $P_{l}(\cos\theta_{12})$ in terms of products of \ac{SH}:
\begin{equation}\label{eq:addthe1}
P_{l}(\cos\theta_{12})=\frac{4\pi}{2l+1}\sum_{m}\mathcal{Y}_{m}^{l}(\theta_{1},\phi_{1})\mathcal{Y}_{m}^{*l}(\theta_{2},\phi_{2}).
\end{equation}
In addition, the definition of the ``$\mathbf{C}$ tensor'', having components
\begin{subequations}\label{eq:ccomp144}
\begin{align}
C_{q}^{k}&=\sqrt{\frac{4\pi}{2k+1}}\mathcal{Y}_{q}^{k}(\theta,\phi), \label{eq:ccomp14} \\
C_{-q}^{k}&=\sqrt{\frac{4\pi}{2k+1}}\mathcal{Y}_{-q}^{k}(\theta,\phi), \label{eq:ccomp15}
\end{align}
\end{subequations}
the relation of parity symmetry for the \ac{SH}, $\mathcal{Y}_{q}^{*k}(\theta,\phi)=(-1)^{q}\mathcal{Y}_{-q}^{k}(\theta,\phi)$, and, finally, the canonical form of the \textit{tensor scalar product} of two tensors, defined by $\mathbf{t}^{k}(1)\cdot\mathbf{u}^{k}(2)=\sum_{q}(-1)^{q}t_{q}^{k}(1)u_{-q}^{k}(2)$, allow us to rewrite equation~\eqref{eq:addthe1} as
\begin{equation}\label{eq:addthe3}
P_{l}(\cos\theta_{12})=\sum_{m}(-1)^{m}C_{m}^{l}(1)C_{-m}^{l}(2)=\mathbf{C}^{l}(1)\cdot\mathbf{C}^{l}(2).
\end{equation}
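This tensor form can be verified numerically for $l=1$, where the components of $\mathbf{C}^{1}$ (with the standard $\sqrt{4\pi/(2k+1)}$ normalization of the $\mathbf{C}$ tensor) are simply $C^1_0=\cos\theta$ and $C^1_{\pm 1}=\mp\sin\theta\,e^{\pm i\phi}/\sqrt{2}$: for random directions, the scalar product $\mathbf{C}^{1}(1)\cdot\mathbf{C}^{1}(2)$ reproduces $P_1(\cos\theta_{12})=\cos\theta_{12}$ computed directly from the two unit vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

def C1(theta, phi):
    """Components C^1_q (q = -1, 0, +1) of the rank-1 C tensor, k = 1."""
    return {-1:  np.sin(theta) * np.exp(-1j * phi) / np.sqrt(2),
             0:  np.cos(theta) + 0j,
            +1: -np.sin(theta) * np.exp(1j * phi) / np.sqrt(2)}

ok = True
for _ in range(200):
    t1, t2 = rng.uniform(0, np.pi, 2)
    p1, p2 = rng.uniform(0, 2 * np.pi, 2)
    c1, c2 = C1(t1, p1), C1(t2, p2)
    # scalar product: sum_q (-1)^q C^1_q(1) C^1_{-q}(2)
    scalar = sum((-1) ** q * c1[q] * c2[-q] for q in (-1, 0, 1))
    # cos(theta_12) from the spherical law of cosines
    cos12 = np.cos(t1) * np.cos(t2) + np.sin(t1) * np.sin(t2) * np.cos(p1 - p2)
    ok = ok and abs(scalar - cos12) < 1e-12
```

The imaginary parts of the $q=\pm 1$ terms cancel pairwise, leaving the real scalar $\cos\theta_{12}$, as required for the rank-zero character of the Coulomb operator.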
Consequently, we are able to write the inter-electronic interaction operator in a rather useful form
\begin{equation}\label{eq:intermulti2}
r_{12}^{-1}=\sum_l(-1)^l(2l+1)^{\frac{1}{2}}\frac{r^l_{<}}{r^{l+1}_{>}}\{\mathbf{C}^{l}(1)\mathbf{C}^{l}(2)\}^{0}_{0},
\end{equation}
where we have also used the expression
\begin{equation}\label{eq:tensororank}
\{\mathbf{t}^{k}(1)\mathbf{u}^{k}(2)\}^{0}_{0}=(-1)^{k}(2k+1)^{-\frac{1}{2}}\mathbf{t}^{k}(1)\cdot\mathbf{u}^{k}(2),
\end{equation}
and here $\{\mathbf{t}^{k}(1)\mathbf{u}^{k}(2)\}^{0}_{0}$ is a scalar operator (or a tensor of rank zero) as expected for the Coulomb repulsion $1/r_{12}$. Finally, for a two-electron atom (helium isoelectronic series) the matrix element of equation~\eqref{eq:matrixnonanty2} may be written as
\begin{equation}\label{eq:nonantmatrix}
(r_{12}^{-1})_{ab,cd}^{\aleph}=\sum_l(-1)^l(2l+1)^{\frac{1}{2}}R^{l}(ab,cd)\braket{(LS)_{ab}|\{\mathbf{C}^{l}(1)\mathbf{C}^{l}(2)\}^{0}_{0}|(LS)_{cd}}.
\end{equation}
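As a numerical aside, the multipole expansion behind these expressions, $r_{12}^{-1}=\sum_l (r_<^l/r_>^{l+1})P_l(\cos\theta_{12})$, is easy to verify directly. The following Python sketch (the radii, angle, and truncation order are arbitrary test values of our choosing) compares a truncated series against the exact inverse distance:

```python
import numpy as np
from scipy.special import eval_legendre

# Arbitrary test geometry for the two electrons (assumed values)
r1, r2, theta12 = 1.3, 0.7, 0.9

# Direct evaluation of 1/r12 from the law of cosines
direct = 1.0 / np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(theta12))

# Truncated multipole series: sum_l r_<^l / r_>^{l+1} P_l(cos theta12)
r_less, r_greater = min(r1, r2), max(r1, r2)
series = sum((r_less**l / r_greater**(l + 1)) * eval_legendre(l, np.cos(theta12))
             for l in range(60))
```

Since $r_</r_>\approx 0.54$ here, the series converges geometrically and 60 terms already reach double precision.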
\subsubsection{\label{sec:orbitalspinopelem}Two-particle orbital-spin angular momentum matrix element}
In the first place, we shall evaluate the general matrix element of the compound tensor of rank $K$, defined as $\hat{g}_{12}=\gamma(r_1,r_2)\{\mathbf{t}^{k_{1}}(1)\mathbf{u}^{k_{2}}(2)\}^{K}_{Q}$, which reads
\begin{equation}\label{eq:maelemten12}
\braket{ab|\gamma(r_1,r_2)\{\mathbf{t}^{k_{1}}(1)\mathbf{u}^{k_{2}}(2)\}^{K}_{Q}|cd},
\end{equation}
\noindent where $a, b, c, d$ denote the uncoupled one-electron states of equation~\eqref{eq:hyket}, this is
\[\ket{a}=\ket{n_al_am^{a}_{l}m^{a}_{s}}.\]
The arbitrary function $\gamma(r_1,r_2)$ depends on the radial coordinates of the two electrons. The matrix element~\eqref{eq:maelemten12} may be calculated using the \textit{Wigner--Eckart theorem}~\citep{Edmonds1957,Lindgren1986,Ballentine1998} to represent integrals over the angular coordinates in the following way:
\tikzset{
paint/.style={draw=#1!50!black, fill=#1!60!black}
}
\begin{align}\label{eq:maelemten13}
\braket{ab|\hat{g}_{12}|cd}&=R(ab,cd)\braket{ab|\{\mathbf{t}^{k_{1}}(1)\mathbf{u}^{k_{2}}(2)\}^{K}_{Q}|cd},\\ \nonumber
&=\int_{0}^{\infty}\int_{0}^{\infty} \mathcal{U}_{a}(r_{1})\mathcal{U}_{b}(r_{2})\gamma(r_{1},r_{2})\mathcal{U}_{c}(r_{1})\mathcal{U}_{d}(r_{2})dr_{1}dr_{2} \\ \nonumber
&\times\braket{l_a||\mathbf{t}^{k_{1}}||l_c}\braket{l_b||\mathbf{u}^{k_{2}}||l_d}\\ \nonumber
&\begin{tikzpicture}[decoration={
shape width=.45cm, shape height=.25cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (2,3) node[left] {-} -- (6,3) node[right] {+};
\draw (2,0) node[below] {$l_{c}m^{c}_{l}$} -- (2,5) node[above] {$l_{a}m^{a}_{l}$};
\draw (6,0) node[below] {$l_{d}m^{d}_{l}$} -- (6,5) node[above] {$l_{b}m^{b}_{l}$};
\draw [line width=3pt] (4,1.5) -- (4,3) node[above] {+};
\draw (4,0) node[below] {KQ} -- (4,1.5);
\draw [paint=purple,decorate] (3,3) node[above] {$k_{1}$} -- (3.5,3);
\draw [paint=purple,decorate] (5,3) node[above] {$k_{2}$} -- (5.5,3);
\draw [paint=purple,decorate] (2,4) -- (2,4.5);
\draw [paint=purple,decorate] (6,4) -- (6,4.5);
\draw (7.5,0) node[below] {$s_{c}m^{c}_{s}$} -- (7.5,5) node[above] {$s_{a}m^{a}_{s}$};
\draw (9.5,0) node[below] {$s_{d}m^{d}_{s},$} -- (9.5,5) node[above] {$s_{b}m^{b}_{s}$};
\node (a) at (1,2.5) {$\times$};
\end{tikzpicture}
\end{align}
here $\braket{l_a||\mathbf{t}^{k}||l_c}$ is the reduced matrix element, which is independent of the magnetic quantum numbers, and
$R(ab,cd)=\int_{0}^{\infty}\int_{0}^{\infty} \mathcal{U}_{a}(r_{1})\mathcal{U}_{b}(r_{2})\gamma(r_{1},r_{2})\mathcal{U}_{c}(r_{1})\mathcal{U}_{d}(r_{2})dr_{1}dr_{2}$.
Note that the operator~\eqref{eq:intermulti2} is a tensor of rank zero, i.e., we must set $K=0$ and $Q=0$. The corresponding $\gamma$ function for the electron-electron interaction is
\begin{equation}\label{eq:gamma13}
\gamma^k(r_1,r_2)=\frac{r^k_{<}}{r^{k+1}_{>}},
\end{equation}
where $r_<$ is the lesser of $r_1$ and $r_2$, and $r_>$ is the greater. Using the graphical identity
\begin{equation}\label{eq:graph1}
\begin{tikzpicture}[decoration={
shape width=.45cm, shape height=.25cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (0,2) node[left] {$l_{2}m_{2}$} -- (3,2) node[right] {$l_{1}m_{1}$};
\draw (1.5,0) node[below] {$00$} -- (1.5,2);
\draw [paint=purple,decorate] (0.75,2) -- (1.25,2);
\draw [paint=purple,decorate] (2.25,2) -- (2.75,2);
\node (a) at (4.5,1) {$= [l_{1}]^{-\frac{1}{2}}$};
\draw (6.5,1) node[left] {$l_{2}m_{2}$} -- (9.6,1) node[right] {$l_{1}m_{1},$};
\draw [paint=purple,decorate] (8,1) -- (7.5,1);
\end{tikzpicture}
\end{equation}
where $[l_1]^{-\frac{1}{2}}=(2l_1+1)^{-\frac{1}{2}}$, the uncoupled matrix element of the operator $r_{12}^{-1}$ may be written as
\begin{align}\label{eq:maelemten23}
\braket{ab|r_{12}^{-1}|cd} &= \sum_{k}(-1)^{k}\braket{ab|\gamma^k(r_1,r_2)\{\mathbf{C}^{k}(1)\mathbf{C}^{k}(2)\}^{0}_{0}|cd}\\ \nonumber
&= \sum_{k}(-1)^{k}R^k(ab,cd)\braket{l_a||\mathbf{C}^{k}||l_c}\braket{l_b||\mathbf{C}^{k}||l_d}\\ \nonumber
&\begin{tikzpicture}[decoration={
shape width=.45cm, shape height=.25cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (1,3) node[left] {$l_{a}m^{a}_{l}$} -- (4,3) node[right] {$l_{c}m^{c}_{l}$};
\draw (1,0) node[left] {$l_{b}m^{b}_{l}$} -- (4,0) node[right] {$l_{d}m^{d}_{l}$};
\draw (2.5,0) node[below]{-} -- (2.5,3) node[above]{+} ;
\draw [paint=purple,decorate] (1.75,3) -- (1.25,3);
\draw [paint=purple,decorate] (1.75,0) -- (1.25,0);
\draw [paint=purple,decorate] (2.5,1.5) -- (2.5,2);
\node (a) at (0,1.5) {$\times$};
\node (b) at (2.75,1.65) {$k$};
\end{tikzpicture} \\ \nonumber
&\begin{tikzpicture}
\draw (1,1.5) node[left] {$s_{a}m^{a}_{s}$} -- (4,1.5) node[right] {$s_{c}m^{c}_{s}$};
\draw (1,0) node[left] {$s_{b}m^{b}_{s}$} -- (4,0) node[right] {$s_{d}m^{d}_{s}.$};
\node (a) at (0,0.75) {$\times$};
\end{tikzpicture}
\end{align}
This is the matrix element of the inter-electronic operator for uncoupled states $a, b, c, d$. Now we shall calculate the general coupled matrix element. The coupled states may be written as
\begin{align}\label{eq:coupled13}
&\hspace{-1cm}\ket{(LS)_{cd}}=\ket{(l_cl_d)LM_LSM_S}= \ket{(l_cl_d)LM_L}\otimes\ket{(s_cs_d)SM_S}\\ \nonumber
&\hspace{-1cm}\begin{tikzpicture}[decoration={
shape width=.45cm, shape height=.25cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\node (a) at (0,1) {$=\sum_{m^{c}_{l}m^{d}_{l}}$};
\draw [line width=3pt] (3,1) node[above] {-} -- (3.5,1);
\draw (3.5,1) -- (4,1) node[right] {$LM_{L}$} ;
\draw (3,1) -- (2,2) node[left] {$l_{c}m^{c}_{l}$};
\draw (3,1) -- (2,0) node[left] {$l_{d}m^{d}_{l}$};
\draw [paint=purple,decorate] (2.5,1.5) -- (2.75,1.25);
\draw [paint=purple,decorate] (2.5,0.5) -- (2.25,0.25);
\node (a) at (6,1) {$\sum_{m^{c}_{s}m^{d}_{s}}$};
\draw [line width=3pt] (9,1) node[above] {-} -- (9.5,1);
\draw (9.5,1) -- (10,1) node[right] {$SM_{S}$} ;
\draw (9,1) -- (8,2) node[left] {$s_{c}m^{c}_{s}$};
\draw (9,1) -- (8,0) node[left] {$s_{d}m^{d}_{s}$};
\draw [paint=purple,decorate] (8.5,1.5) -- (8.75,1.25);
\draw [paint=purple,decorate] (8.5,0.5) -- (8.25,0.25);
\node ({c}) at (11.5,1) {$\ket{cd},$};
\end{tikzpicture}
\end{align}
\noindent and
\begin{align}\label{eq:coupled23}
&\hspace{-1cm}\bra{(LS)_{ab}}=\bra{(l_al_b)LM_LSM_S}= \bra{(l_al_b)LM_L}\otimes\bra{(s_as_b)SM_S}\\ \nonumber
&\hspace{-1cm}\begin{tikzpicture}[decoration={
shape width=.45cm, shape height=.25cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\node (a) at (0,1) {$=\sum_{m^{a}_{l}m^{b}_{l}}$};
\draw (2,1) node[left] {$LM_{L}$} -- (2.5,1);
\draw [line width=3pt] (2.5,1) -- (3,1) node[above] {+};
\draw (3,1) -- (4,2) node[right] {$l_{a}m^{a}_{l}$};
\draw (3,1) -- (4,0) node[right] {$l_{b}m^{b}_{l}$};
\draw [paint=purple,decorate] (3.5,1.5) -- (3.25,1.25);
\draw [paint=purple,decorate] (3.5,0.5) -- (3.75,0.25);
\node (a) at (6,1) {$\sum_{m^{a}_{s}m^{b}_{s}}$};
\draw (8,1) node[left] {$SM_{S}$} -- (8.5,1);
\draw [line width=3pt] (8.5,1) -- (9,1) node[above] {+};
\draw (9,1) -- (10,2) node[right] {$s_{a}m^{a}_{s}$};
\draw (9,1) -- (10,0) node[right] {$s_{b}m^{b}_{s}$};
\draw [paint=purple,decorate] (9.5,1.5) -- (9.25,1.25);
\draw [paint=purple,decorate] (9.5,0.5) -- (9.75,0.25);
\node ({c}) at (11.25,1) {$\bra{ab},$};
\end{tikzpicture}
\end{align}
\noindent At this point, we can combine expressions~\eqref{eq:maelemten23},~\eqref{eq:coupled13}, and~\eqref{eq:coupled23} to obtain the coupled matrix elements
\begin{align}\label{eq:maelemten33}
(r_{12}^{-1})_{ab,cd}^{\aleph}&=\sum_k(-1)^kR^k(ab,cd)\bra{(LS)_{ab}}\{\mathbf{C}^{k}(1)\mathbf{C}^{k}(2)\}^{0}_{0}\ket{(LS)_{cd}}\\ \nonumber
&= \sum_{k}(-1)^{k}R^k(ab,cd)\braket{l_a||\mathbf{C}^{k}||l_c}\braket{l_b||\mathbf{C}^{k}||l_d}\\ \nonumber
&\begin{tikzpicture}[decoration={
shape width=.35cm, shape height=.20cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (0,1) node[left] {$LM_{L}$} -- (0.5,1);
\draw [line width=3pt] (0.5,1) -- (1,1) node[above] {+};
\draw (1,1) -- (2,2);
\draw (1,1) -- (2,0);
\draw (2,0) node[below] {-} -- (2,2) node[above] {+} ;
\draw [paint=purple,decorate] (1.75,1.75) -- (1.25,1.25);
\draw [paint=purple,decorate] (1.25,0.75) -- (1.5,0.5);
\draw [paint=purple,decorate] (1.75,0.25) -- (1.5,0.5);
\draw [line width=3pt] (3,1) node[above] {-} -- (3.5,1);
\draw (3.5,1) -- (4,1) node[right] {$LM_{L}$};
\draw (3,1) -- (2,2);
\draw (3,1) -- (2,0);
\draw [paint=purple,decorate] (2.5,1.5) -- (2.75,1.25);
\draw [paint=purple,decorate] (2.5,0.5) -- (2.25,0.25);
\draw [paint=purple,decorate] (2,1) -- (2,1.5);
\node (aa) at (2.75,1.65) {$l_c$};
\node (bb) at (2.75,0.35) {$l_d$};
\node (cc) at (1.2,1.65) {$l_a$};
\node (dd) at (1.2,0.35) {$l_b$};
\node (dd) at (2.25,1) {$k$};
\node (dd) at (-1.5,1) {$\times$};
\end{tikzpicture} \\ \nonumber
&\begin{tikzpicture}[decoration={
shape width=.35cm, shape height=.20cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (0,1) node[left] {$SM_{S}$} -- (0.5,1);
\draw [line width=3pt] (0.5,1) -- (1,1) node[above] {+};
\draw (1,1) -- (2,2);
\draw (1,1) -- (2,0);
\draw [paint=purple,decorate] (1.5,1.5) -- (1.25,1.25);
\draw [paint=purple,decorate] (1.5,0.5) -- (1.75,0.25);
\draw [line width=3pt] (3,1) node[above] {-} -- (3.5,1);
\draw (3.5,1) -- (4,1) node[right] {$SM_{S}.$} ;
\draw (3,1) -- (2,2);
\draw (3,1) -- (2,0);
\draw [paint=purple,decorate] (2.5,1.5) -- (2.75,1.25);
\draw [paint=purple,decorate] (2.5,0.5) -- (2.25,0.25);
\node (aa) at (2.75,1.65) {$s_c$};
\node (bb) at (2.75,0.35) {$s_d$};
\node (cc) at (1.2,1.65) {$s_a$};
\node (dd) at (1.2,0.35) {$s_b$};
\node (dd) at (-1.5,1) {$\times$};
\end{tikzpicture}
\end{align}
The spin and the orbital angular momentum of the electrons are coupled separately and their graphical diagrams obey the following identities:
\begin{equation}\label{eq:coupled355}
\begin{tikzpicture}[decoration={
shape width=.35cm, shape height=.20cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (0,1) node[left] {$SM_{S}$} -- (0.5,1);
\draw [line width=3pt] (0.5,1) -- (1,1) node[above] {+};
\draw (1,1) -- (2,2);
\draw (1,1) -- (2,0);
\draw [paint=purple,decorate] (1.5,1.5) -- (1.25,1.25);
\draw [paint=purple,decorate] (1.5,0.5) -- (1.75,0.25);
\draw [line width=3pt] (3,1) node[above] {-} -- (3.5,1);
\draw (3.5,1) -- (4,1) node[right] {$SM_{S}$} ;
\draw (3,1) -- (2,2);
\draw (3,1) -- (2,0);
\draw [paint=purple,decorate] (2.5,1.5) -- (2.75,1.25);
\draw [paint=purple,decorate] (2.5,0.5) -- (2.25,0.25);
\node (aa) at (2.75,1.65) {$s_c$};
\node (bb) at (2.75,0.35) {$s_d$};
\node (cc) at (1.2,1.65) {$s_a$};
\node (dd) at (1.2,0.35) {$s_b$};
\node (dd) at (5.5,1) {$= 1,$};
\end{tikzpicture}
\end{equation}
\begin{equation}\label{eq:coupled45}
\begin{tikzpicture}[decoration={
shape width=.35cm, shape height=.20cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (0,1) node[left] {$LM_{L}$} -- (0.5,1);
\draw [line width=3pt] (0.5,1) -- (1,1) node[above] {+};
\draw (1,1) -- (2,2);
\draw (1,1) -- (2,0);
\draw (2,0) node[below] {-} -- (2,2) node[above] {+} ;
\draw [paint=purple,decorate] (1.75,1.75) -- (1.25,1.25);
\draw [paint=purple,decorate] (1.25,0.75) -- (1.5,0.5);
\draw [paint=purple,decorate] (1.75,0.25) -- (1.5,0.5);
\draw [line width=3pt] (3,1) node[above] {-} -- (3.5,1);
\draw (3.5,1) -- (4,1) node[right] {$LM_{L}$};
\draw (3,1) -- (2,2);
\draw (3,1) -- (2,0);
\draw [paint=purple,decorate] (2.5,1.5) -- (2.75,1.25);
\draw [paint=purple,decorate] (2.5,0.5) -- (2.25,0.25);
\draw [paint=purple,decorate] (2,1) -- (2,1.5);
\node (aa) at (2.75,1.65) {$l_c$};
\node (bb) at (2.75,0.35) {$l_d$};
\node (cc) at (1.2,1.65) {$l_a$};
\node (dd) at (1.2,0.35) {$l_b$};
\node (dd) at (2.25,1) {$k$};
\node (ee) at (5.5,1) {$=$};
\draw (6,0) node[left] {+} -- (9.464,0) node[right] {-};
\draw (6,0) -- (7.732,1);
\draw (6,0) -- (7.732,3);
\draw (7.732,3) -- (9.464,0);
\draw (7.732,1) -- (9.464,0);
\draw (7.732,1) node[below] {-} -- (7.732,3) node[above] {+};
\draw [paint=purple,decorate] (6.866,1.5) -- (7.066,1.846) ;
\draw [paint=purple,decorate] (8.598,1.5) -- (8.798,1.154) ;
\draw [paint=purple,decorate] (6.866,0.5) -- (6.966,0.557) ;
\draw [paint=purple,decorate] (8.598,0.5) -- (8.498,0.557) ;
\draw [paint=purple,decorate] (7.732,2) -- (7.732,2.5) ;
\draw [paint=purple,decorate] (7.732,0) -- (7.832,0) ;
\node (ff) at (8.85,1.65) {$l_c$};
\node (gg) at (8.25,0.35) {$l_d$};
\node (hh) at (6.6,1.65) {$l_a$};
\node (ii) at (7.2,0.35) {$l_b$};
\node (jj) at (7.95,2.1) {$k$};
\node (kk) at (7.732,-0.3) {$L$};
\end{tikzpicture}
\end{equation}
\begin{equation}\label{eq:coupled55}
\begin{tikzpicture}[decoration={
shape width=.35cm, shape height=.20cm,
shape=isosceles triangle, shape sep=.55cm,
shape backgrounds}]
\draw (0,0) node[left] {+} -- (3.464,0) node[right] {-};
\draw (0,0) -- (1.732,1);
\draw (0,0) -- (1.732,3);
\draw (1.732,3) -- (3.464,0);
\draw (1.732,1) -- (3.464,0);
\draw (1.732,1) node[below] {-} -- (1.732,3) node[above] {+};
\draw [paint=purple,decorate] (0.866,1.5) -- (1.066,1.846) ;
\draw [paint=purple,decorate] (2.598,1.5) -- (2.798,1.154) ;
\draw [paint=purple,decorate] (0.866,0.5) -- (0.966,0.557) ;
\draw [paint=purple,decorate] (2.598,0.5) -- (2.498,0.557) ;
\draw [paint=purple,decorate] (1.732,2) -- (1.732,2.5) ;
\draw [paint=purple,decorate] (1.732,0) -- (1.832,0) ;
\node (aa) at (2.85,1.65) {$l_c$};
\node (bb) at (2.25,0.35) {$l_d$};
\node (cc) at (0.6,1.65) {$l_a$};
\node (dd) at (1.2,0.35) {$l_b$};
\node (dd) at (1.95,2.1) {$k$};
\node (dd) at (1.732,-0.3) {$L$};
\node (ff) at (6,1) {$=(-1)^{l_b+l_c+L+k}\Gj{l_a}{l_b}{L}{l_d}{l_c}{k}$,};
\end{tikzpicture}
\end{equation}
where $\Gj{l_a}{l_b}{L}{l_d}{l_c}{k}$ is the $6j$-symbol~\citep{Lindgren1986,Edmonds1957,Landau1977}. Then the basic formula for the Coulomb matrix elements for unsymmetrized configurations in the bra and ket is
\begin{align}\label{eq:maelemten45}
(r_{12}^{-1})_{ab,cd}^{\aleph}&= \sum_{k}R^{k}(ab,cd)\braket{a||\mathbf{C}^{k}||c}\braket{b||\mathbf{C}^{k}||d}\\ \nonumber
&\times(-1)^{l_b+l_c+L}\Gj{l_a}{l_b}{L}{l_d}{l_c}{k}.
\end{align}
\noindent Now, using the following expression for the reduced matrix element
\begin{equation}
\braket{l||\mathbf{C}^{k}||l^{\prime}}= (-1)^{l}\left[(2l+1)(2l^{\prime}+1)\right]^{\frac{1}{2}}\tj{l}{k}{l^{\prime}}{0}{0}{0},
\end{equation}
we can rewrite the matrix element as
\begin{align}\label{eq:maelemten55}
(r_{12}^{-1})_{ab,cd}^{\aleph}&= \sum_{k}R^k(ab,cd)(-1)^{l_a+l_c+L}\\ \nonumber
&\times\left[(2l_a+1)(2l_b+1)(2l_c+1)(2l_d+1)\right]^{\frac{1}{2}}\\ \nonumber
&\times\tj{l_a}{k}{l_c}{0}{0}{0}\tj{l_b}{k}{l_d}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_b},
\end{align}
by means of $\tj{j_2}{j_1}{j_3}{m_2}{m_1}{m_3}=(-1)^{j_1+j_2+j_3}\tj{j_1}{j_2}{j_3}{m_1}{m_2}{m_3}$, we have
\begin{align}\label{eq:maelemten65}
(r_{12}^{-1})_{ab,cd}^{\aleph}&= \sum_{k}R^k(ab,cd)(-1)^{l_b+l_d+L}\\ \nonumber
&\times\left[(2l_a+1)(2l_b+1)(2l_c+1)(2l_d+1)\right]^{\frac{1}{2}}\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_b}{l_d}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_b}.
\end{align}
At this point, we can use $\tj{j_1}{j_2}{j_3}{-m_1}{-m_2}{-m_3}=(-1)^{j_1+j_2+j_3}\tj{j_1}{j_2}{j_3}{m_1}{m_2}{m_3}$ and $(-1)^k=(-1)^{-k}$, for integer $k$, to obtain
\begin{align}\label{eq:maelemten75}
(r_{12}^{-1})_{ab,cd}^{\aleph}&= \sum_{k}R^k(ab,cd)(-1)^{L-k}\\ \nonumber
&\times \left[(2l_a+1)(2l_b+1)(2l_c+1)(2l_d+1)\right]^{\frac{1}{2}}\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_b}{l_d}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_b}.
\end{align}
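For numerical work, the angular coefficient multiplying $R^k(ab,cd)$ in equation~\eqref{eq:maelemten75} can be evaluated with standard $3j$ and $6j$ routines, for instance those of SymPy. A minimal sketch (the function name \texttt{angular\_factor} is our own):

```python
from sympy.physics.wigner import wigner_3j, wigner_6j

def angular_factor(la, lb, lc, ld, L, k):
    """Coefficient multiplying R^k(ab,cd) in the coupled Coulomb matrix element."""
    prefactor = (-1)**(L - k) * ((2*la + 1) * (2*lb + 1)
                                 * (2*lc + 1) * (2*ld + 1))**0.5
    # Two 3j symbols with zero projections and the 6j recoupling coefficient
    return prefactor * float(wigner_3j(la, lc, k, 0, 0, 0)
                             * wigner_3j(lb, ld, k, 0, 0, 0)
                             * wigner_6j(la, k, lc, ld, L, lb))
```

For the $1s^2\,{}^1S^e$ configuration only $k=0$ survives and the coefficient is $1$, so the matrix element reduces to $R^0$; the direct $k=0$ term of a $1s2p\,{}^1P^o$ configuration carries the same unit coefficient.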
\subsubsection{Radial Matrix Elements}
The radial integral which we have denoted $R^k(ab,cd)$ as a factorized term in the inter-electronic Coulomb matrix element is given by
\begin{equation}\label{eq:radialelem}
R^k(ab,cd)=\int_{0}^{\infty}\int_{0}^{\infty} \mathcal{U}_{a}(r_{1})\mathcal{U}_{b}(r_{2})\frac{r^k_<}{r^{k+1}_>}\mathcal{U}_{c}(r_{1})\mathcal{U}_{d}(r_{2})dr_{1}dr_{2},
\end{equation}
where we have used the relation $\mathcal{U}_{i}({r})=rR_i({r})$. Now, in order to solve equation~\eqref{eq:radialelem}, we may define the function $Y_{bd}^k({r})$~\citep{Bachau2001,McCurdy2004,Castro-Granados2012}:
\begin{align}\label{eq:functionary}
Y_{bd}^k({r})&=r\int_{0}^{\infty} \mathcal{U}_{b}(r^{\prime})\frac{r^k_<}{r^{k+1}_>}\mathcal{U}_{d}(r^{\prime})dr^{\prime}, \\ \nonumber
&=\int_{0}^{r} \mathcal{U}_{b}(r^{\prime})\left(\frac{r^{\prime}}{r}\right)^k\mathcal{U}_{d}(r^{\prime})dr^{\prime}+\int_{r}^{\infty} \mathcal{U}_{b}(r^{\prime})\left(\frac{r}{r^{\prime}}\right)^{k+1}\mathcal{U}_{d}(r^{\prime})dr^{\prime},
\end{align}
with this definition we can rewrite the radial integral~\eqref{eq:radialelem} as
\begin{equation}\label{eq:radialelem2}
R^k(ab,cd)=\int_{0}^{\infty} \mathcal{U}_{a}({r})\frac{Y^{k}_{bd}({r})}{r}\mathcal{U}_{c}({r})dr.
\end{equation}
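Before turning to the numerical method, note that for hydrogen $1s$ orbitals ($Z=1$, so $\mathcal{U}_{1s}(r)=2re^{-r}$) the $k=0$ screening function has the closed form $Y^{0}_{1s,1s}(r)=1-e^{-2r}(1+r)$, which makes a convenient test case. A direct-quadrature Python sketch (independent of the B-spline machinery used below):

```python
import numpy as np
from scipy.integrate import quad

# Hydrogen 1s radial function U(r) = r R_{1s}(r) for Z = 1 (assumed test case)
U = lambda r: 2.0 * r * np.exp(-r)

def Y0(r):
    """Screening function Y^0_{1s,1s}(r) by direct quadrature of its definition."""
    inner, _ = quad(lambda rp: U(rp)**2, 0.0, r)              # (r'/r)^0 piece
    outer, _ = quad(lambda rp: U(rp)**2 * r / rp, r, np.inf)  # (r/r')^1 piece
    return inner + outer

# Analytic reference: Y^0(r) = 1 - exp(-2r)(1 + r)
```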
The radial integral~\eqref{eq:radialelem} is thus calculated in two steps: first we compute the function $Y^{k}_{bd}({r})$, and then we insert it into equation~\eqref{eq:radialelem2}. This function may be computed by different methods; an efficient one is to solve the associated Poisson equation. Indeed, we can rewrite the integral form of equation~\eqref{eq:functionary} as a differential equation for $Y^{k}_{bd}({r})$ using the \textit{Leibniz integral rule}~\citep{Abramowitz1965}
\begin{align}\label{eq:leibnizrule}
\text{if}\hspace{20pt}F(x)&=\int_{a(x)}^{b(x)}f(x,t)dt,\\ \nonumber
\Longrightarrow\hspace{10pt}\frac{dF(x)}{dx}&=f(x,b(x))\frac{db(x)}{dx}-f(x,a(x))\frac{da(x)}{dx}+\int_{a(x)}^{b(x)}\frac{\partial}{\partial x}f(x,t)dt.
\end{align}
First, we calculate the first and second derivatives of $Y^{k}_{bd}({r})$:
\begin{align}
\frac{d}{dr}Y^{k}_{bd}({r})&=-\frac{k}{r}\int_{0}^{r} \mathcal{U}_{b}(r^{\prime})\left(\frac{r^{\prime}}{r}\right)^k\mathcal{U}_{d}(r^{\prime})dr^{\prime}\label{eq:firstfuny}\\ \nonumber
&+\frac{k+1}{r}\int_{r}^{\infty} \mathcal{U}_{b}(r^{\prime})\left(\frac{r}{r^{\prime}}\right)^{k+1}\mathcal{U}_{d}(r^{\prime})dr^{\prime},\\
\frac{d^2}{dr^2}Y^{k}_{bd}({r})&=-\frac{2k+1}{r}\mathcal{U}_b({r})\mathcal{U}_d({r}) \label{eq:secfuny} \\ \nonumber
&+\frac{k(k+1)}{r^2}\int_{0}^{r} \mathcal{U}_{b}(r^{\prime})\left(\frac{r^{\prime}}{r}\right)^k\mathcal{U}_{d}(r^{\prime})dr^{\prime}\\ \nonumber
&+\frac{k(k+1)}{r^2}\int_{r}^{\infty} \mathcal{U}_{b}(r^{\prime})\left(\frac{r}{r^{\prime}}\right)^{k+1}\mathcal{U}_{d}(r^{\prime})dr^{\prime},
\end{align}
combining equations~\eqref{eq:functionary} and~\eqref{eq:secfuny}, we then obtain the ordinary differential equation
\begin{equation}\label{eq:poissonyy}
\frac{d^2}{dr^2}Y^{k}_{bd}({r})=\frac{k(k+1)}{r^2}Y^{k}_{bd}({r})-\frac{2k+1}{r}\mathcal{U}_b({r})\mathcal{U}_d({r}).
\end{equation}
This is a non-homogeneous one-dimensional Poisson's equation which must satisfy the following boundary conditions
\begin{subequations}\label{eq:condiniy}
\begin{align}
Y^{k}_{bd}(0)&=0,\label{eq:condiniy1}\\
Y^{k}_{bd}(L)&=\frac{1}{L^k}\int_0^L\mathcal{U}_b(r^\prime)r^{\prime k}\mathcal{U}_d(r^\prime)dr^\prime.\label{eq:condiniy2}
\end{align}
\end{subequations}
Given that we are solving the problem within a finite box, the upper limit of integration is $L$ instead of infinity. Moreover, our numerical implementation to obtain the solution of this differential equation is to expand $Y^{k}_{bd}({r})$ in the same basis of B-splines used to solve the one-electron Schr\"odinger equation, see section~\ref{sec:hybsplines}. Nevertheless, our basis of B-splines satisfies only the first of these boundary conditions. To solve equation~\eqref{eq:poissonyy} with both boundary conditions~\eqref{eq:condiniy}, we may use the Green's function~\citep{McCurdy2004,Jackson1998} for two-point boundary conditions in the interval $(0,L)$
\begin{equation}\label{eq:greenfnps}
G(r,r^\prime)=\frac{r^k_<}{r^{k+1}_>}-\frac{r^kr^{\prime k}}{L^{2k+1}},
\end{equation}
which satisfies the equation
\begin{equation}\label{eq:greeneqps}
\left(\frac{d^2}{dr^2}-\frac{k(k+1)}{r^2}\right)G(r,r^\prime)=-\frac{2k+1}{r}\delta(r-r^\prime).
\end{equation}
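Both properties that make this Green's function useful, its vanishing at the box boundary $r=L$ and its symmetry under $r\leftrightarrow r^\prime$, are immediate to check numerically (the box length and multipole order below are arbitrary choices):

```python
L, k = 20.0, 2   # box length and multipole order (arbitrary test values)

def G(r, rp):
    """Green's function r_<^k / r_>^{k+1} - (r r')^k / L^{2k+1}."""
    r_less, r_greater = min(r, rp), max(r, rp)
    return r_less**k / r_greater**(k + 1) - (r * rp)**k / L**(2*k + 1)
```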
In the first place, we seek a function $Y^{k}_{bd}({r})^{(0)}$ that satisfies the boundary condition~\eqref{eq:condiniy1} but, instead of condition~\eqref{eq:condiniy2}, vanishes at $r=L$, i.e., $Y^{k}_{bd}({L})^{(0)}=0$. We expand this function in the basis of B-splines, which vanish at $0$ and at $L$,
\begin{equation}\label{eq:ybsplines}
Y^{k}_{bd}({r})^{(0)}=\sum_{i}C^k_iB_i({r}).
\end{equation}
Replacing this expansion into the equation~\eqref{eq:poissonyy}, we obtain
\begin{equation}\label{eq:eqexpandbs}
\left(\frac{d^2}{dr^2}-\frac{k(k+1)}{r^2}\right)\sum_{j}C^k_jB_j({r})=-\frac{2k+1}{r}\mathcal{U}_b({r})\mathcal{U}_d({r}).
\end{equation}
Multiplying by one of the B-splines from the left and integrating over $r$ gives the following algebraic matrix equation for the coefficients $C^k_j$
\begin{align}\label{eq:poissonM}
\sum_jT^k_{ij}C^k_j=(2k+1)U_i^{bd},\\ \nonumber
\boldsymbol{T}^k\boldsymbol{C}^k=(2k+1)\boldsymbol{U}^{bd},
\end{align}
the latter written in compact matrix form, where
\begin{equation}\label{eq:poissonM1}
T^k_{ij}=-\int_0^LB_i({r})\left(\frac{d^2}{dr^2}-\frac{k(k+1)}{r^2}\right)B_j({r})dr
\end{equation}
and
\begin{equation}\label{eq:poissonM2}
U_i^{bd}=\int_0^LB_i({r})\frac{1}{r}\mathcal{U}_b({r})\mathcal{U}_d({r})dr.
\end{equation}
Equation~\eqref{eq:poissonM} has the solution (the second line in compact matrix form)
\begin{align}\label{eq:poissonM3}
C^k_i&=(2k+1)\sum_j(T^k)^{-1}_{ij}U_j^{bd},\\ \nonumber
\boldsymbol{C}^k&=(2k+1)(\boldsymbol{T}^k)^{-1}\boldsymbol{U}^{bd},
\end{align}
in this way we have the solution for $Y^{k}_{bd}({r})^{(0)}$. In order to calculate the actual solution $Y^{k}_{bd}({r})$ of Poisson's equation~\eqref{eq:poissonyy}, with the proper boundary conditions~\eqref{eq:condiniy}, we need to add a term which is a solution of the homogeneous equation. Using the Green's function~\eqref{eq:greenfnps}, we may add an exact expression analogous to its second term, that is
\begin{align}\label{eq:soltoty}
Y^{k}_{bd}({r})&=Y^{k}_{bd}({r})^{(0)}+\frac{r^{k+1}}{L^{2k+1}}\int_0^L \mathcal{U}_b({r^\prime})r^{\prime k}\mathcal{U}_d({r^\prime})dr^\prime,\\ \nonumber
&=(2k+1)\sum_{ij}B_i({r})(T^k)^{-1}_{ij}U_j^{bd}+\frac{r^{k+1}}{L^{2k+1}}Q^k_{bd},\\ \nonumber
&=(2k+1)\boldsymbol{B}^T({r})(\boldsymbol{T}^k)^{-1}\boldsymbol{U}_{bd}+\frac{r^{k+1}}{L^{2k+1}}Q^k_{bd},
\end{align}
the latter written in compact matrix form, and where
\begin{equation}\label{eq:qintegral}
Q^k_{bd}=\int_0^L \mathcal{U}_b({r^\prime})r^{\prime k}\mathcal{U}_d({r^\prime})dr^\prime.
\end{equation}
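The same two-step construction can be illustrated with a simple finite-difference stand-in for the B-spline basis: solve the Poisson equation with homogeneous boundary conditions, then add the $r^{k+1}$ correction. In the Python sketch below the grid, box length, and hydrogen $1s$ test orbital (for which $Y^0(r)=1-e^{-2r}(1+r)$) are our own choices:

```python
import numpy as np

L, N, k = 20.0, 1500, 0
r = np.linspace(0.0, L, N + 1)[1:-1]      # interior grid points, Y(0)=Y(L)=0
h = L / N
U = 2.0 * r * np.exp(-r)                  # hydrogen 1s: U(r) = r R_{1s}(r), Z=1

# Discretize d^2/dr^2 - k(k+1)/r^2 on the interior points
A = (np.diag(-2.0 * np.ones(N - 1)) +
     np.diag(np.ones(N - 2), 1) +
     np.diag(np.ones(N - 2), -1)) / h**2
A -= np.diag(k * (k + 1) / r**2)

b = -(2*k + 1) / r * U**2
Y0 = np.linalg.solve(A, b)                # Y^{k(0)}: vanishes at both ends

# Homogeneous correction restoring the boundary condition at r = L
Q = h * np.sum(U**2 * r**k)               # Q^k_{bd} by a simple Riemann sum
Y = Y0 + r**(k + 1) / L**(2*k + 1) * Q
```

The corrected solution agrees with the analytic screening function to the $O(h^2)$ accuracy of the finite-difference operator.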
Therefore, we have now a solution to the function $Y^{k}_{bd}({r})$ satisfying the correct boundary conditions~\eqref{eq:condiniy}. Finally, we substitute it back into the original expression for the radial two-electron integral~\eqref{eq:radialelem2}, in order to obtain its solution
\begin{align}\label{eq:radialelem3}
R^k(ab,cd)&=\int_{0}^{L} \mathcal{U}_{a}({r})\frac{Y^{k}_{bd}({r})}{r}\mathcal{U}_{c}({r})dr, \\ \nonumber
&=\int_{0}^{L} \mathcal{U}_{a}({r})\frac{1}{r}\left[(2k+1)\sum_{ij}B_i({r})(T^k)^{-1}_{ij}U_j^{bd}+\frac{r^{k+1}}{L^{2k+1}}Q^k_{bd}\right] \mathcal{U}_{c}({r})dr,\\ \nonumber
&=(2k+1)\sum_{ij}\left[\int_{0}^{L}B_i({r})\frac{1}{r}\mathcal{U}_{a}({r})\mathcal{U}_{c}({r})dr\right](T^k)^{-1}_{ij}U_j^{bd}\\ \nonumber
&+\frac{1}{L^{2k+1}}Q^k_{bd}\left[\int_{0}^{L}\mathcal{U}_{a}({r})r^k\mathcal{U}_{c}({r})dr\right].
\end{align}
Using equations~\eqref{eq:poissonM2} and~\eqref{eq:qintegral}, we finally arrive at the expression
\begin{align}\label{eq:radialelem4}
R^k(ab,cd)&=(2k+1)\sum_{ij}U_i^{ac}(T^k)^{-1}_{ij}U_j^{bd}+\frac{1}{L^{2k+1}}Q^k_{ac}Q^k_{bd}, \\ \nonumber
&=(2k+1)\boldsymbol{U}^T_{ac}(\boldsymbol{T}^k)^{-1}\boldsymbol{U}_{bd}+\frac{1}{L^{2k+1}}Q^k_{ac}Q^k_{bd}.
\end{align}
A comparative table showing the computational validity of equation~\eqref{eq:radialelem4} can be found in references~\citep{Bachau2001,Castro-Granados2012}.
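A further consistency check: for hydrogenic $1s$ orbitals the $k=0$ radial integral has the closed form $R^0(1s1s,1s1s)=\tfrac{5}{8}Z$, the well-known Slater integral $F^0$. The sketch below, a plain nested quadrature rather than the B-spline evaluation of equation~\eqref{eq:radialelem4}, reproduces it for $Z=2$:

```python
import numpy as np
from scipy.integrate import quad

Z = 2.0
U = lambda r: 2.0 * Z**1.5 * r * np.exp(-Z * r)   # hydrogenic 1s: U = r R_{1s}

def R0():
    """R^0(1s1s,1s1s) via the screening-function form of the radial integral."""
    def Y0_over_r(r1):
        inner, _ = quad(lambda r2: U(r2)**2, 0.0, r1)
        outer, _ = quad(lambda r2: U(r2)**2 / r2, r1, np.inf)
        return inner / r1 + outer
    val, _ = quad(lambda r1: U(r1)**2 * Y0_over_r(r1), 0.0, np.inf, limit=200)
    return val
```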
\subsubsection{Inter-electronic Coulomb matrix elements between antisymmetric configurations}
To summarize, we have calculated the matrix elements of $1/r_{12}$ between non-antisymmetric configurations. Now we must take into account the antisymmetrization of the configurations in the bra and ket states, along with the property of \textit{equivalent} or \textit{non-equivalent} electrons in the configurations, which affects the form of the \ac{WF}.
\subsubsection*{Equivalent---Equivalent Electrons}
\begin{align}\label{eq:eqeqcoul}
(r^{-1}_{12})_{aa,cc}&=\braket{\{(n_al_a)^2\}LS|r^{-1}_{12}|\{(n_cl_c)^2\}LS},\\ \nonumber
&=(r_{12}^{-1})_{aa,cc}^{\aleph}=\braket{(n_al_a)_1(n_al_a)_2LS|r^{-1}_{12}|(n_cl_c)_1(n_cl_c)_2LS},\\ \nonumber
&=\sum_{k}R^k(aa,cc)(-1)^{L-k}(2l_a+1)(2l_c+1)\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}^2\Gj{l_a}{k}{l_c}{l_c}{L}{l_a}.
\end{align}
In a similar manner, all combinations of \textit{equivalent} and \textit{non-equivalent} two-electron \ac{WF}s for the general antisymmetrized matrix element may be written, as a result, in terms of the non-antisymmetrized matrix elements~\eqref{eq:maelemten75}.
\subsubsection*{Equivalent---Non-equivalent Electrons}
\begin{align}\label{eq:eqeqcoul1}
(r^{-1}_{12})_{aa,cd}&=\braket{\{(n_al_a)^2\}LS|r^{-1}_{12}|\{n_cl_c\hspace{4pt}n_dl_d\}LS},\\ \nonumber
&=\bra{(n_al_a)_1(n_al_a)_2LS}r^{-1}_{12}\frac{1}{\sqrt{2}}\left[\ket{(n_cl_c)_1(n_dl_d)_2LS}\right. \\ \nonumber
&+\left.(-1)^{l_c+l_d+L+S}\ket{(n_dl_d)_1(n_cl_c)_2LS}\right],\\ \nonumber
&=\frac{1}{\sqrt{2}}(r_{12}^{-1})_{aa,cd}^{\aleph}+\frac{1}{\sqrt{2}}(-1)^{l_c+l_d+L+S}(r_{12}^{-1})_{aa,dc}^{\aleph},\\ \nonumber
&=\frac{1}{\sqrt{2}}(2l_a+1)[(2l_c+1)(2l_d+1)]^{\frac{1}{2}}\sum_{k}\left[R^k(aa,cd)(-1)^{L-k}\right.\\ \nonumber
&+(-1)^{l_c+l_d-k+S}\left.R^k(aa,dc)\right]\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_a}{l_d}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_a}.\\ \nonumber
\end{align}
\subsubsection*{Non-equivalent---Equivalent Electrons}
\begin{align}\label{eq:eqeqcoul2}
(r^{-1}_{12})_{ab,cc}&=\braket{\{n_al_a\hspace{4pt}n_bl_b\}LS|r^{-1}_{12}|\{(n_cl_c)^2\}LS},\\ \nonumber
&=\frac{1}{\sqrt{2}}\left[\bra{(n_al_a)_1(n_bl_b)_2LS}+(-1)^{l_a+l_b+L+S}\bra{(n_bl_b)_1(n_al_a)_2LS}\right] \\ \nonumber
&\times r^{-1}_{12}\ket{(n_cl_c)_1(n_cl_c)_2LS},\\ \nonumber
&=\frac{1}{\sqrt{2}}(r_{12}^{-1})_{ab,cc}^{\aleph}+\frac{1}{\sqrt{2}}(-1)^{l_a+l_b+L+S}(r_{12}^{-1})_{ba,cc}^{\aleph},\\ \nonumber
&=\frac{1}{\sqrt{2}}(2l_c+1)[(2l_a+1)(2l_b+1)]^{\frac{1}{2}}\sum_{k}\left[R^k(ab,cc)(-1)^{L-k}\right.\\ \nonumber
&+(-1)^{l_a+l_b-k+S}\left.R^k(ba,cc)\right]\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_b}{l_c}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_c}{L}{l_b}.\\ \nonumber
\end{align}
\subsubsection*{Non-equivalent---Non-equivalent Electrons}
\begin{align}\label{eq:eqeqcoul3}
(r^{-1}_{12})_{ab,cd}&=\braket{\{n_al_a\hspace{4pt}n_bl_b\}LS|r^{-1}_{12}|\{n_cl_c\hspace{4pt}n_dl_d\}LS},\\ \nonumber
&=\frac{1}{2}\left[\bra{(n_al_a)_1(n_bl_b)_2LS}+(-1)^{l_a+l_b+L+S}\bra{(n_bl_b)_1(n_al_a)_2LS}\right] \\ \nonumber
&\times r^{-1}_{12}\left[\ket{(n_cl_c)_1(n_dl_d)_2LS}+(-1)^{l_c+l_d+L+S}\ket{(n_dl_d)_1(n_cl_c)_2LS}\right] ,\\ \nonumber
&=\frac{1}{2}\left[(r_{12}^{-1})_{ab,cd}^{\aleph}+(-1)^{l_c+l_d+L+S}(r_{12}^{-1})_{ab,dc}^{\aleph}\right.\\ \nonumber
&+\left.(-1)^{l_a+l_b+L+S}(r_{12}^{-1})_{ba,cd}^{\aleph}+(-1)^{l_a+l_b+l_c+l_d}(r_{12}^{-1})_{ba,dc}^{\aleph}\right],\\ \nonumber
&=\frac{1}{2}[(2l_a+1)(2l_b+1)(2l_c+1)(2l_d+1)]^{\frac{1}{2}}\\ \nonumber
&\times\sum_{k}\left[\left[R^k(ab,cd)(-1)^{L-k}+(-1)^{l_a+l_b+l_c+l_d+L-k}R^k(ba,dc)\right]\right.\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_b}{l_d}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_b}\\ \nonumber
&+\left[R^k(ab,dc)(-1)^{l_c+l_d+S-k}+(-1)^{l_a+l_b+S-k}R^k(ba,cd)\right]\\ \nonumber
&\left.\times\tj{l_a}{l_d}{k}{0}{0}{0}\tj{l_b}{l_c}{k}{0}{0}{0}\Gj{l_a}{k}{l_d}{l_c}{L}{l_b}\right].
\end{align}
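All four cases above follow one pattern: the antisymmetrized element is a phase-weighted combination of unsymmetrized elements $(r_{12}^{-1})^{\aleph}$ with exchange phases $(-1)^{l+l^{\prime}+L+S}$ for the bra and the ket. A schematic Python helper for the non-equivalent/non-equivalent case (the callable \texttt{aleph} stands in for any implementation of the unsymmetrized matrix element):

```python
def antisym_nonequiv(aleph, a, b, c, d, la, lb, lc, ld, L, S):
    """Antisymmetrized Coulomb element from unsymmetrized ones (schematic)."""
    bra_phase = (-1)**(la + lb + L + S)   # exchange phase in the bra
    ket_phase = (-1)**(lc + ld + L + S)   # exchange phase in the ket
    return 0.5 * (aleph(a, b, c, d)
                  + ket_phase * aleph(a, b, d, c)
                  + bra_phase * aleph(b, a, c, d)
                  + bra_phase * ket_phase * aleph(b, a, d, c))
```

With a constant dummy \texttt{aleph} and all quantum numbers zero the four terms add coherently, giving twice the unsymmetrized value.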
\newpage
\begin{table}[h]
\centering
\begin{tabular}{ccccccc}
\hline\hline
&$^{1}S^{e}$ & $^{3}S^{e}$ & $^{1}P^{o}$ & $^{3}P^{o}$ & $^{1}D^{e}$ & $^{3}D^{e}$\\ \hline
$1$ & $-2.903509$ & $-2.175228$ & $-2.123797$ & $-2.133156$ & $-2.055619$ & $-2.055635$ \\
$2$ & $-2.145961$ & $-2.068689$ & $-2.055131$ & $-2.058078$ & $-2.031278$ & $-2.031287$ \\
$3$ & $-2.061268$ & $-2.036510$ & $-2.031061$ & $-2.032321$ & $-2.020011$ & $-2.020016$ \\
$4$ & $-2.033582$ & $-2.022607$ & $-2.019896$ & $-2.020543$ & $-2.013884$ & $-2.013888$ \\
$5$ & $-2.021165$ & $-2.015345$ & $-2.013818$ & $-2.014187$ & $-2.010177$ & $-2.010179$ \\
$6$ & $-2.014535$ & $-2.011064$ & $-2.010137$ & $-2.010361$ & $-2.007701$ & $-2.007702$ \\
$7$ & $-2.010570$ & $-2.008295$ & $-2.007659$ & $-2.007815$ & $-2.005394$ & $-2.005396$ \\
$8$ & $-2.007951$ & $-2.005958$ & $-2.005263$ & $-2.005437$ & $-2.002107$ & $-2.002112$ \\
$9$ & $-2.005547$ & $-2.002974$ & $-2.001502$ & $-2.001865$ & & \\
$10$ & & & $-2.001948$ & & & \\ \hline\hline
\end{tabular}
\caption[Energies of singly excited states of symmetries $^{1,3}P^{o}$, $^{1,3}S^{e}$, $^{1,3}D^{e}$ of helium atom]{\label{tab:singly1}Energies (in \ac{a.u.}) for the bound states in helium, below the first ionization threshold He$^+(n=1)$, $E=-2.0$ \ac{a.u.}, for the symmetries $^{1,3}S^e$, $^{1,3}P^o$ and $^{1,3}D^e$, computed with a variational CI method using two-electron configurations in terms of hydrogenic orbitals computed with a basis of B-splines. The basis of configurations is described in tables~\ref{tab:config23} and~\ref{tab:config24}.}
\end{table}
\subsection{Calculations for bound states of helium atom}
In our work the \ac{CI} approach is based on the expansion in terms of antisymmetrized products of atomic orbitals, the latter expanded in B-spline polynomials defined within a finite box of length $L$. B-splines have been widely used in recent years, and for a fuller description the reader is referred to~\citep{Bachau2001}. A nearly exact ground-state energy for the helium atom may be obtained using B-splines with an exponential \ac{bp} (knot) sequence, see section~\ref{sec:expseq}, with $25$ B-splines of order $k=7$, generating one-electron orbitals with $l\le4$, and a full \ac{CI}-\ac{WF} of 2500 configurations; this yields the energy $-2.903509$~\ac{a.u.}, to be compared with the result reported by~\citep{Pekeris1958}, $-2.903742$~\ac{a.u.}
In table~\ref{tab:singly1} we include all the calculated eigenenergies for the ground state $^1S^e$ and the singly excited states of helium located below the first ionization threshold for
the spectroscopic symmetries $^{1,3}S^e$, $^{1,3}P^o$ and
$^{1,3}D^e$. Tables~\ref{tab:config23} and~\ref{tab:config24} show the configurations, and their number, used in all calculations for all symmetries of bound states of the helium atom.
\begin{table}
\centering
\begin{tabular}{ccccc}
\hline\hline
Symmetry & Configurations & $n_1^{max}$ & $n_2^{max}$ & Number of Conf. \\ \hline
\multirow{5}{*}{ $^{1}S^{e}$} &ss&25&25&325\\
&pp&26&26&325\\
&dd&27&27&325\\
&ff&28&28&325\\
&gg&29&29&325\\ \hline
Total&&&&1625 \\ \hline
\multirow{4}{*}{ $^{1}P^{o}$} &sp&25&26&625\\
&pd&26&27&625\\
&df&27&28&625\\
&fg&28&29&625\\ \hline
Total&&&&2500 \\ \hline
\multirow{7}{*}{ $^{1}D^{e}$} &sd&25&27&625\\
&pp&26&26&325\\
&pf&26&28&625\\
&dd&27&27&325\\
&dg&27&29&625\\
&ff&28&28&325\\
&gg&29&29&325\\ \hline
Total&&&&3175 \\ \hline\hline
\end{tabular}
\caption[Number of configurations used in each calculation for the symmetries $^{1}S^{e}$, $^{1}P^{o}$, $^{1}D^{e}$.]{\label{tab:config23}Number of configurations of the type $(n_1 l_1, n_2 l_2)$ included in CI calculations to obtain variational energies quoted in table~\ref{tab:singly1} for spectroscopic states $^1S^e$, $^1P^o$ and $^1D^e$. The second column refers to angular configurations ($l_1,l_2$) (compatible with the total symmetry) and $n_i^{max}$ refers to the highest {\em effective} principal quantum number of the hydrogenic orbitals, i.e., for $s$ orbitals $n$=$1,...,n_s^{max}$; for $p$ orbitals, $n$=$2,...,n_p^{max}$ and so on.}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccccc}
\hline\hline
Symmetry & Configurations & $n_1^{max}$ & $n_2^{max}$ & Number of Conf. \\ \hline
\multirow{5}{*}{ $^{3}S^{e}$} &ss&24&25&300\\
&pp&25&26&300\\
&dd&26&27&300\\
&ff&27&28&300\\
&gg&28&29&300\\ \hline
Total&&&&1500 \\ \hline
\multirow{4}{*}{ $^{3}P^{o}$} &sp&25&26&625\\
&pd&26&27&625\\
&df&27&28&625\\
&fg&28&29&625\\ \hline
Total&&&&2500 \\ \hline
\multirow{7}{*}{ $^{3}D^{e}$} &sd&25&27&625\\
&pp&25&26&300\\
&pf&26&28&625\\
&dd&26&27&300\\
&dg&27&29&625\\
&ff&27&28&300\\
&gg&28&29&300\\ \hline
Total&&&&3075 \\ \hline\hline
\end{tabular}
\caption[Number of configurations used in each calculation for the symmetries $^{3}S^{e}$, $^{3}P^{o}$, $^{3}D^{e}$.]{\label{tab:config24}Number of configurations of the type $(n_1 l_1, n_2 l_2)$ included in CI calculations to obtain variational energies quoted in table~\ref{tab:singly1} for spectroscopic states $^3S^e$, $^3P^o$ and $^3D^e$. The second column refers to angular configurations ($l_1,l_2$) (compatible with the total symmetry) and $n_i^{max}$ refers to the highest {\em effective} principal quantum number of the hydrogenic orbitals, i.e., for $s$ orbitals $n$=$1,...,n_s^{max}$; for
$p$ orbitals, $n$=$2,...,n_p^{max}$ and so on. }
\end{table}
\section{\label{sec:feshbach}The projection operator formalism}
Autoionization is a dynamical decay process that occurs in the continuum spectra of atoms and molecules. It belongs to a general class of phenomena known as the \textit{Auger effect}, where a quantum system ``seemingly'' spontaneously decays into a partition of its constituent parts~\citep{Drake2006}. The Auger effect in two-electron atoms has three variations: \textit{autoionization}, where a neutral or positively charged composite system decays into an electron and a residual ion; \textit{autodetachment}, where the original system is a negative ion; and \textit{radiative stabilization} or \textit{radiative decay}, where the system decays to an autoionization state of lower energy, or to a true bound state. It is worth noting that even though the autoionization process is rigorously part of the scattering continuum, a formalism elaborated by~\citeauthor{Feshbach1962} can be introduced whereby the resonant case is transformed into a bound-like problem with the scattering elements built around it. Feshbach's projection operator formalism~\citep{Feshbach1962} has been widely used to describe resonance phenomena, and a vast literature exists on its application to atomic and molecular electronic structure; see~\citep{Temkin1985a} and references therein. Nevertheless, its practical implementation has mostly been restricted to atomic systems with two and three electrons. A detailed study of the application of the stationary Feshbach method in helium has been performed by~\citep{Sanchez1995}. Also, after the pioneering work of Temkin and Bathia on three-electron systems~\citep{Temkin1985b}, the Feshbach formalism has recently been revisited and applied to the Li and He atoms in our group~\citep{Cardona2010, Castro-Granados2013}, with a complete formal implementation of the method.
\subsection{Implementation of the Feshbach formalism}
The basic idea of the Feshbach projection operator formalism is the definition of projection operators $\mathcal{P}$ and $\mathcal{Q}$ which separate $\Psi$ into a scattering-like part $\mathcal{P}\Psi$ and a quadratically integrable or bound-like part $\mathcal{Q}\Psi$, yielding $\Psi=\mathcal{Q}\Psi+\mathcal{P}\Psi$, and satisfying the projection operator conditions
\begin{subequations}\label{eq:projprop}
\begin{align}
\mathcal{P}+\mathcal{Q}=1&\hspace{15pt}\text{completeness,}\label{eq:projpropa}\\
\mathcal{P}^2=\mathcal{P},\hspace{5pt}\mathcal{Q}^2=\mathcal{Q}&\hspace{15pt}\text{idempotency, and} \\
\mathcal{P}\mathcal{Q}=\mathcal{Q}\mathcal{P}=0&\hspace{15pt}\text{orthogonality.}
\end{align}
\end{subequations}
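As a minimal numerical illustration (a toy sketch with \texttt{numpy} in an arbitrary discrete basis of dimension 8, not part of the actual formalism), a rank-one projector and its complement satisfy the three conditions above:

```python
import numpy as np

# Build P = |phi><phi| from a normalized random vector and check the
# Feshbach projector conditions: completeness, idempotency, orthogonality.
rng = np.random.default_rng(0)
phi = rng.normal(size=8)
phi /= np.linalg.norm(phi)

P = np.outer(phi, phi)        # projector onto the scattering-like channel
Q = np.eye(8) - P             # complementary (bound-like) projector

assert np.allclose(P + Q, np.eye(8))     # P + Q = 1
assert np.allclose(P @ P, P)             # P^2 = P
assert np.allclose(Q @ Q, Q)             # Q^2 = Q
assert np.allclose(P @ Q, 0)             # PQ = QP = 0
assert np.allclose(Q @ P, 0)
print("projector conditions satisfied")
```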
Additionally, the projected wave functions must also satisfy the asymptotic boundary conditions
\begin{subequations}\label{eq:projprop1}
\begin{align}
\lim_{r_i \to \infty}\mathcal{P}\Psi=\Psi \\
\lim_{r_i \to \infty}\mathcal{Q}\Psi=0,
\end{align}
\end{subequations}
where the latter expression indicates the confined nature of the localized part of the resonance. This Feshbach splitting of the continuum resonance wave function can be drawn in schematic form as in figure~\ref{fig:functionFB}.
\begin{figure}[h]
\includegraphics[width=\textwidth]{gfx/function2}
\caption[Schematic wave function of a doubly excited state.]{\label{fig:functionFB}Schematic form of the wave function corresponding to an atomic Feshbach resonance. The total resonance wave function splits into an inner radial localized part $\mathcal{Q}\Psi$ and an outer scattering-like part $\mathcal{P}\Psi$, and the latter part does not vanish asymptotically for $r \to \infty$. The localized part carries most of the distinguishable topological information that allows us to discriminate properties among different resonances in helium. }
\end{figure}
By replacing the splitting form of the total wave function $\Psi$=$\mathcal{Q}\Psi+\mathcal{P}\Psi$ into the time independent Schr\"odinger equation $H \Psi$=$E\Psi$, it is straightforward to obtain the following equations for the bound-like and the {\em non-resonant} scattering-like parts~\citep{Cardona2010}
\begin{subequations}\label{eq:qpfunction}
\begin{eqnarray}
(\mathcal{Q}H\mathcal{Q}-\mathcal{E}_n ) \mathcal{Q}\Phi_n=0 \label{eq:qfunction}\\
( \mathcal{P} H' \mathcal{P}-E) \mathcal{P} \Psi^{0} = 0 \label{eq:pfunction},
\end{eqnarray}
\end{subequations}
where $H'$ is the operator containing the atomic Hamiltonian plus an optical potential devoid of any resonant contribution from the state $\mathcal{Q}\Phi_s$ with energy $\mathcal{E}_s$, i.e., $H'$=$H+ V^{n\ne s}_{opt}$ where
\begin{equation}
V^{n\ne s}_{opt}=\IntSum_{n \ne s} \mathcal{P}H\mathcal{Q} \frac{|\Phi_n\rangle \langle \Phi_n|}{E-\mathcal{E}_n} \mathcal{Q}H\mathcal{P}.
\end{equation}
In a similar manner the Hamiltonian splits into four different terms by means of the projection operators (by using the completeness
identity~\eqref{eq:projpropa})
\begin{equation}\label{eq:splitH}
H=\mathcal{Q}H\mathcal{Q}+\mathcal{P}H\mathcal{P}+\mathcal{Q}H\mathcal{P}+\mathcal{P}H\mathcal{Q},
\end{equation}
where the last two terms are responsible for the coupling between both halfspaces, which ultimately causes the resonant decay into the continuum. In practice one starts by solving equation~\eqref{eq:qfunction} for the $\mathcal{Q}$ space with a \ac{CI} method to obtain a first approximation to the location of the resonant states, which implies using
\begin{equation}
\mathcal{Q} = \mathcal{Q}_1 \mathcal{Q}_2 = (1 - \mathcal{P}_1) (1 -\mathcal{P}_2) = 1 - \mathcal{P}_1 - \mathcal{P}_2 + \mathcal{P}_1 \mathcal{P}_2=1-\mathcal{P}, \label{eq:opoone1}
\end{equation}
so that
\begin{equation}
\mathcal{P}=\mathcal{P}_1+\mathcal{P}_2-\mathcal{P}_1\mathcal{P}_2.\label{eq:opoone}
\end{equation}
where $\mathcal{Q}_i$ and $\mathcal{P}_i$ are one-particle projection operators. In this work we are restricted to doubly excited states lying below the second ionization threshold of the He atom, so that
\begin{equation}\label{eq:popertwo}
\mathcal{P}_i =| \phi_{1s} ({\textbf r}_i) \rangle \langle \phi_{1s} ({\textbf r}_i)|.
\end{equation}
Therefore, the $\mathcal{Q}$ operator removes all configurations containing the $1s$ orbital, thus avoiding the variational collapse to the ground state ($1s^2$) and to singly excited states ($1s n\ell$), and also removing the single ionization continuum ($1s\epsilon \ell$). As a result, the lowest variational energies of the $\mathcal{Q}H\mathcal{Q}$ eigenvalue problem correspond to the doubly excited states or resonances that were immersed in the single ionization continuum.
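The algebra of equations~\eqref{eq:opoone1}--\eqref{eq:popertwo} can be checked directly in a small tensor-product basis. The sketch below (assuming \texttt{numpy}; the one-electron basis dimension is arbitrary) represents $\mathcal{P}_i$ as a rank-one projector onto a stand-in $1s$ vector and verifies both $\mathcal{Q}=1-\mathcal{P}$ and the removal of configurations containing $1s$:

```python
import numpy as np

# One-electron basis of (illustrative) dimension d; the first basis
# vector stands in for |phi_1s>.
d  = 4
e1 = np.zeros(d); e1[0] = 1.0
P1 = np.outer(e1, e1)                    # one-particle projector P_i
I  = np.eye(d)

# Two-particle operators via tensor (Kronecker) products:
# P = P_1 + P_2 - P_1 P_2 and Q = (1 - P_1)(1 - P_2).
P = np.kron(P1, I) + np.kron(I, P1) - np.kron(P1, P1)
Q = np.kron(I - P1, I - P1)

assert np.allclose(Q, np.eye(d * d) - P)   # Q = 1 - P

# Q annihilates any configuration with 1s in either slot:
conf_1s_n = np.kron(e1, np.eye(d)[2])      # a |1s, nl> configuration
assert np.allclose(Q @ conf_1s_n, 0)
print("Q removes 1s configurations")
```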
\subsection{Resonant $\mathcal{Q}$-halfspace}
We have performed \ac{CI} calculations for the $\mathcal{Q}H\mathcal{Q}$ doubly excited resonant space using the same configurational basis set as for the bound states, but now removing the $1s$ orbital as a direct effect of the projection operator $\mathcal{Q}$, using equations~\eqref{eq:opoone1},~\eqref{eq:opoone} and~\eqref{eq:popertwo}. We thus obtain 19 \ac{DES} of symmetry $^1S^e$, 26 states for $^1P^o$, and 25 states for $^1D^e$, using 1360, 1634 and 1909 configurations, respectively. On the other hand, we get 17 \ac{DES} of symmetry $^3S^e$, 27 states for $^3P^o$, and 24 states for $^3D^e$, using 1246, 1634 and 1840 configurations, respectively. In order to illustrate the accuracy of our computation, we show in tables~\ref{tab:se},~\ref{tab:po1} and~\ref{tab:de1} a comparison of our calculated energies of \ac{DES} below the $N_{th}=2$ threshold with those previously calculated by~\citep{Chen1997} using the saddle-point complex rotation method. From these tables we can conclude that our results are in close agreement with those reported by~\citeauthor{Chen1997}. We have also calculated the \ac{CI}-\ac{WF} of these \ac{DES}; these \ac{WF} will be used as the starting point to build up the two-dimensional two-particle density in the next chapter. We postpone the analysis of the accuracy of these \ac{WF}, by comparing the density with others previously reported in the literature, until then.
\begin{table}
\centering
\begin{tabular}{ccccccc}
\hline\hline
& &\multicolumn{2}{c}{$^{1}S^{e}$} && \multicolumn{2}{c}{$^{3}S^{e}$}\\
\cline{3-4}\cline{6-7}
& & This work & \citep{Chen1997} & & This work & \citep{Chen1997} \\ \hline
$_{2}(1,0)^{+,-}_{2}$ && $-0.7787708$ & $-0.777870$ && \\
$_{2}(1,0)^{+,-}_{3}$ && $-0.5900951$ & $-0.589896$ && $-0.6026003$ & $-0.602577$\\
$_{2}(1,0)^{+,-}_{4}$ && $-0.5449365$ & $-0.544882$ && $-0.5488467$ & $-0.548841$\\
$_{2}(1,0)^{+,-}_{5}$ && $-0.5267032$ & $-0.526687$ && $-0.5284149$ & $-0.528414$\\
$_{2}(1,0)^{+,-}_{6}$ && $-0.5176390$ & $-0.517641$ && $-0.5185441$ & $-0.518546$\\ \hline
$_{2}(-1,0)^{+,-}_{2}$ && $-0.6223959$ & $-0.621810$ && \\
$_{2}(-1,0)^{+,-}_{3}$ && $-0.5481972$ & $-0.548070$ && $-0.5597603$ & $-0.559745$ \\
$_{2}(-1,0)^{+,-}_{4}$ && $-0.5277724$ & $-0.527707$ && $-0.5325090$ & $-0.532505$ \\
$_{2}(-1,0)^{+,-}_{5}$ && $-0.5181340$ & $-0.518100$ && $-0.5205454$ & $-0.520549$ \\
$_{2}(-1,0)^{+,-}_{6}$ && $-0.5127799$ & $-0.512762$ && $-0.5141647$ & $-0.514180$ \\
\hline\hline
\end{tabular}
\caption[Energy positions of resonant doubly excited states of helium symmetries $^{1,3}S^{e}$]{\label{tab:se}Energy positions (in \ac{a.u.}) of resonant doubly excited states of helium located below the second ionization threshold He$^+$ $(n_1=2)$ for the total symmetries $^{1,3}S^e$. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$; $^{2s+1}L^\pi$. The notation must be understood as $A=1$ for the symmetry $^{1}S^{e}$, and $A=-1$ for symmetry $^{3}S^{e}$.}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccccccc}
\hline\hline
& & \multicolumn{2}{c}{$^{1}P^{o}$} & & \multicolumn{2}{c}{$^{3}P^{o}$}\\
\cline{3-4}\cline{6-7}
& & This work & \citep{Chen1997} & & This work & \citep{Chen1997} \\ \hline
& &\multicolumn{2}{c}{ $_{2}(0,1)^{+}_{n}$} & &\multicolumn{2}{c}{$_{2}(1,0)^{+}_{n}$} \\ \cline{3-4}\cline{6-7}
$n=2$ && $-0.6927496$ &$-0.693069 $ && $-0.7614841$ &$-0.760489$\\
$n=3$ && $-0.5640090$ &$-0.564074$ && $-0.5849286$ &$-0.584671$\\
$n=4$ && $-0.5343290$ &$-0.534358$ && $-0.5429314$ &$-0.542837$ \\
$n=5$ && $-0.5214789$ &$-0.521501$ && $-0.5257518$ &$-0.525711$ \\
$n=6$ && $-0.5146989$ &$-0.514732$ && $-0.5171160$ &$-0.517107$\\ \hline
&& \multicolumn{2}{c}{$_{2}(1,0)^{-}_{n}$} && \multicolumn{2}{c}{$_{2}(0,1)^{+}_{n}$} \\ \cline{3-4}\cline{6-7}
$n=3$ && $-0.5970953$ &$-0.597074$ && $-0.5790297$& $-0.579030$\\
$n=4$ && $-0.5464858$ &$-0.546490$ && $-0.5395578$& $-0.539558$ \\
$n=5$ && $-0.5272924$ &$-0.527295$ && $-0.5239415$& $-0.523946$\\
$n=6$ && $-0.5179307$ &$-0.517939$ && $-0.5160618$ & $-0.516079$\\
$n=7$ && $-0.5126711$ &$-0.512679$ && $-0.5115092$& $-0.511547$\\ \hline
&&\multicolumn{2}{c}{ $_{2}(-1,0)^{0}_{n}$} && \multicolumn{2}{c}{$_{2}(-1,0)^{+}_{n}$} \\ \cline{3-4}\cline{6-7}
$n=3$ && $-0.5470914$ &$-0.547087$ && $-0.5488529$&$-0.548841$\\
$n=4$ && $-0.5276130$ &$-0.527613$ && $-0.5286420$ &$-0.528637$\\
$n=5$ && $-0.5181138$ &$-0.518115$ && $-0.5187098$ &$-0.518708$ \\
$n=6$ && $-0.5127857$ &$-0.512789$ && $-0.5131517$ &$-0.513155$\\ \hline\hline
\end{tabular}
\caption[Energy positions of resonant doubly excited states of helium symmetries $^{1,3}P^{o}$]{\label{tab:po1}Energy positions (in \ac{a.u.}) of resonant doubly excited states of helium located below the second ionization threshold He$^+$ $(n_1=2)$ for the total symmetries $^{1,3}P^o$. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$; $^{2s+1}L^\pi$.}
\end{table}
\begin{table}
\centering
\begin{tabular}{ccccccc}
\hline\hline
&&\multicolumn{2}{c}{$^{1}D^{e}$} && \multicolumn{2}{c}{$^{3}D^{e}$}\\
\cline{3-4}\cline{6-7}
&& This work & \citep{Chen1997} && This work & \citep{Chen1997} \\ \hline
$_{2}(1,0)^{+,-}_{2}$ && $-0.7026974$ & $-0.701830$ && \\
$_{2}(1,0)^{+,-}_{3}$ && $-0.5693682$ & $-0.569193$ && $-0.5838054$ & $-0.583784$\\
$_{2}(1,0)^{+,-}_{4}$ && $-0.5367840$ & $-0.539715$ && $-0.5416857$ & $-0.541679$ \\
$_{2}(1,0)^{+,-}_{5}$ && $-0.5227632$ & $-0.522737$ && $-0.5250186$ & $-0.525018$\\
$_{2}(1,0)^{+,-}_{6}$ && $-0.5154448$ & $-0.515451$ && $-0.5166775$ & $-0.516687$\\ \hline
$_{2}(0,1)^{0}_{3}$ && $-0.5564146$ & $-0.556417$&& $-0.5606819$ & $-0.560684$\\
$_{2}(0,1)^{0}_{4}$ && $-0.5315042$ & $-0.531506$&& $-0.5334602$ & $-0.533462$\\
$_{2}(0,1)^{0}_{5}$ && $-0.5201091$ & $-0.520114$&& $-0.5211252$ & $-0.521130$\\
$_{2}(0,1)^{0}_{6}$ && $-0.5139367$ & $-0.513950$&& $-0.5145241$ & $-0.514540$\\ \hline
$_{2}(-1,0)^{0}_{4}$ && $-0.5292883$ & $-0.529292$&& $-0.5293086$ & $-0.529312$\\
$_{2}(-1,0)^{0}_{5}$ && $-0.5189966$ & $-0.519000$&& $-0.5190130$ & $-0.519016$\\
$_{2}(-1,0)^{0}_{6}$ && $-0.5133034$ & $-0.513310$&& $-0.5133149$ & $-0.513322$\\\hline\hline
\end{tabular}
\caption[Energy positions of resonant doubly excited states of helium symmetries $^{1,3}D^{e}$]{\label{tab:de1}Energy positions (in \ac{a.u.}) of resonant doubly excited states of helium located below the second ionization threshold He$^+$ $(n_1=2)$ for the total symmetries $^{1,3}D^e$. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$; $^{2s+1}L^\pi$. The notation must be understood as $A=1$ for the symmetry $^{1}D^{e}$, and $A=-1$ for symmetry $^{3}D^{e}$.}
\end{table}
\chapter{Measures of Information theory based on the electron density}\label{ch:tim}
\section{\label{sec:twodensity}Two-electron distribution function}
The two-electron density function, or distribution function, $\rho(\mathbf{r}_{1},\mathbf{r}_{2})$ is defined as the probability of finding one electron at point $\mathbf{r}_{1}$ and another at point $\mathbf{r}_{2}$. The two-electron density carries almost all the information about the quantum correlations of a compound system~\citep{Ezra1982,Ezra1983}. In the following sections we calculate this distribution function by means of a two-electron operator.
\subsection{\label{sec:densityop}Two-electron density function operator}
The two-electron distribution function $\rho(\mathbf{r}_{1},\mathbf{r}_{2})$ is the expectation value of the operator $\hat{G}(\mathbf{r}_{1},\mathbf{r}_{2})$, which has the following form in the position representation for an atom or ion with $N$ electrons~\citep{Ellis1996}
\begin{equation}\label{eq:densityop}
\hat{G}(\mathbf{r}_{1},\mathbf{r}_{2})=\sum_{i=1}^{N}\sum_{j=1}^{i-1}\frac{1}{2}\left[\delta^{3}(\mathbf{u}_{i}-\mathbf{r}_{1})\delta^{3}(\mathbf{u}_{j}-\mathbf{r}_{2})+\delta^{3}(\mathbf{u}_{i}-\mathbf{r}_{2})\delta^{3}(\mathbf{u}_{j}-\mathbf{r}_{1})\right].
\end{equation}
The integral of the two-electron distribution function $\rho$ gives the number of electron pairs, i.e., $\int \bra{ \Psi} \hat{G}({\textbf r}_1,{\textbf r}_2) \ket{ \Psi}d{\textbf r}_1d{\textbf r}_2 = N(N-1)/2$. In our approach, if we consider the atom prepared in a \ac{CI} pure state $\ket{\Psi^{CI}}$, then $\rho(\mathbf{r}_{1},\mathbf{r}_{2})=\braket{\Psi^{CI}|\hat{G}(\mathbf{r}_{1},\mathbf{r}_{2})|\Psi^{CI}}$ is a rather complicated function of six coordinates, three for each electron. On the other hand, the three coordinates that specify the orientation of the atom or ion in space are irrelevant for our present purpose. Hence, we can average or integrate over three of the coordinates, i.e., the Euler angles. Consequently, we obtain a two-particle operator which only depends on the three relevant variables $r_{1}$, $r_{2}$, and the interelectronic angle $\theta$
\begin{align}\label {eq:redop}
\hat{G}(r_{1},r_{2},\theta)&=\sum_{j<i}\frac{1}{2}\left[\delta(u_{i}-r_{1})\delta(u_{j}-r_{2})+\delta(u_{i}-r_{2})\delta(u_{j}-r_{1})\right] \\ \nonumber
&\times \delta(\cos\theta_{ij}-\cos\theta).
\end{align}
Then, the two-electron density in terms of the internal variables can be recast in the form of the expectation value of the operator~\eqref{eq:redop}
\begin{equation}\label{eq:twodens1}
\rho(r_{1},r_{2},\theta)= \braket{\Psi^{CI}|\hat{G}(r_{1},r_{2},\theta)|\Psi^{CI}},
\end{equation}
and the density in equation~\eqref{eq:twodens1} is normalized to the number of electron pairs
\begin{equation}\label{eq:rednorm}
\int_{0}^{\infty}dr_{1}\int_{0}^{\infty}dr_{2}\int_{-1}^{1}d(\cos\theta)\rho(r_{1},r_{2},\theta) = N(N-1)/2.
\end{equation}
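The pair normalization~\eqref{eq:rednorm} is easy to verify for a simple model. The sketch below (assuming \texttt{scipy}; an uncorrelated $1s^2$ density of a helium-like ion is used as a stand-in for the \ac{CI} density, and the reduced orbital $u(r)=2Z^{3/2}re^{-Zr}$ already carries the $r^2$ Jacobians) integrates the angle-independent model density to $N(N-1)/2=1$:

```python
import numpy as np
from scipy.integrate import quad

# Uncorrelated 1s^2 model of a helium-like ion (Z = 2): the model
# density rho(r1, r2, theta) = (1/2) u(r1)^2 u(r2)^2 is independent of
# the interelectronic angle, so the cos(theta) integral contributes 2.
Z = 2.0
u2 = lambda r: (2.0 * Z**1.5 * r * np.exp(-Z * r))**2

norm_r, _ = quad(u2, 0.0, np.inf)   # one-electron radial norm, = 1
# Separable integral: int d(cos t) dr1 dr2 rho = 2 * (1/2) * norm_r^2
pairs = 2.0 * 0.5 * norm_r**2
print(round(pairs, 8))              # N(N-1)/2 = 1 for two electrons
```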
In this form, we have a rotational-invariant two-electron density with no dependence on the total azimuthal quantum numbers $M_L$ and $M_S$. This procedure to obtain the two-electron density is equivalent to that proposed by~\citep{Ezra1982,Ezra1983}, where a formula for the density is derived following a different procedure.
With the aim of evaluating $\rho$ using a general numerical \ac{CI}-\ac{WF}, it is convenient to express $\hat{G}$ in the language of \textit{spherical-tensor operators}. For this purpose, the two-particle operator can be written as
\begin{align}\label{eq:redop2}
\hat{G}(r_{1},r_{2},\theta)&=\sum_{j<i}\sum_{k=0}^{\infty}\frac{1}{4}(2k+1)P_{k}(\cos\theta)P_{k}(\cos\theta_{ij}) \\ \nonumber
&\times \left[\delta(u_{i}-r_{1})\delta(u_{j}-r_{2})+\delta(u_{i}-r_{2})\delta(u_{j}-r_{1})\right],
\end{align}
where we have used the completeness relation for Legendre polynomials \citep{Sepulveda2009}
\begin{equation}\label{eq:comleg}
\delta(\cos\theta_{ij}-\cos\theta)=\sum_{k=0}^{\infty}\frac{1}{2}(2k+1)P_{k}(\cos\theta)P_{k}(\cos\theta_{ij}).
\end{equation}
Now, the addition theorem for \ac{SH}, see equation~\eqref{eq:addthe1}, enables us to write $P_{k}(cos\theta_{ij})$ in terms of products of \ac{SH}
\begin{equation}\label{eq:addthe}
P_{k}(\cos\theta_{ij})=\frac{2\pi}{2k+1}\sum_{q}\mathcal{Y}_{q}^{k}(\theta_{i},\phi_{i})\mathcal{Y}_{q}^{*k}(\theta_{j},\phi_{j}).
\end{equation}
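The addition theorem can be checked numerically. The sketch below (using \texttt{scipy}) works in the standard orthonormal convention, $P_{k}(\cos\theta_{ij})=\frac{4\pi}{2k+1}\sum_{q}Y_{kq}Y^{*}_{kq}$; the $2\pi$ prefactor above presumably reflects a different normalization of the calligraphic $\mathcal{Y}$:

```python
import numpy as np

# scipy renamed sph_harm -> sph_harm_y (>= 1.15); the wrapper
# Y(m, l, azimuth, polar) hides the two angle conventions.
try:
    from scipy.special import sph_harm_y      # args: (l, m, polar, azimuth)
    Y = lambda m, l, az, pol: sph_harm_y(l, m, pol, az)
except ImportError:
    from scipy.special import sph_harm        # args: (m, l, azimuth, polar)
    Y = lambda m, l, az, pol: sph_harm(m, l, az, pol)
from scipy.special import eval_legendre

k = 3
az1, pol1 = 0.4, 1.1      # angles of electron 1 (arbitrary)
az2, pol2 = 2.3, 0.7      # angles of electron 2

# Interelectronic angle from the two directions:
cos12 = (np.cos(pol1) * np.cos(pol2)
         + np.sin(pol1) * np.sin(pol2) * np.cos(az1 - az2))
lhs = eval_legendre(k, cos12)
rhs = (4.0 * np.pi / (2 * k + 1)) * sum(
    Y(q, k, az1, pol1) * np.conj(Y(q, k, az2, pol2))
    for q in range(-k, k + 1))
print(np.isclose(lhs, rhs.real), abs(rhs.imag) < 1e-10)
```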
Using the definition of the ``$\mathbf{C}$ tensor'', with components~\eqref{eq:ccomp144}, and the definition of the \textit{tensor scalar product}, we can rewrite~\eqref{eq:addthe} as
\begin{equation}\label{eq:addthe2}
P_{k}(\cos\theta_{ij})=\mathbf{C}^{k}(i)\cdot\mathbf{C}^{k}(j),
\end{equation}
and we are able to write the two-electron density function in a rather useful form as follows
\begin{align}\label {eq:redop3}
\hat{G}(r_{1},r_{2},\theta)&=\sum_{j<i}\sum_{k=0}^{\infty}\frac{1}{4}(2k+1)^{\frac{3}{2}}(-1)^{k}\{\mathbf{C}^{k}(i)\cdot\mathbf{C}^{k}(j)\}^{0}_{0} \\ \nonumber
&\times \left[\delta(u_{i}-r_{1})\delta(u_{j}-r_{2})+\delta(u_{i}-r_{2})\delta(u_{j}-r_{1})\right]P_{k}(\cos\theta),
\end{align}
where we have also used the equation~\eqref{eq:tensororank}, in particular for the ${\textbf C}$ tensor in the form
\[\{\mathbf{C}^{k}(i)\cdot\mathbf{C}^{k}(j)\}^{0}_{0}=(-1)^{k}(2k+1)^{-\frac{1}{2}}\mathbf{C}^{k}(i)\cdot\mathbf{C}^{k}(j),\]
\noindent where $\{\mathbf{C}^{k}(i)\cdot\mathbf{C}^{k}(j)\}^{0}_{0}$ is a scalar operator (or a tensor of rank zero).
Finally, for a two-electron atom (helium isoelectronic series) equation~\eqref{eq:redop3} reduces to
\begin{align}\label {eq:redop4}
\hat{G}(r_{1},r_{2},\theta)&=\sum_{k}\frac{1}{4}(2k+1)^{\frac{3}{2}}(-1)^{k}\{\mathbf{C}^{k}(1)\cdot\mathbf{C}^{k}(2)\}^{0}_{0} \\ \nonumber
&\times \left[\delta(u_{1}-r_{1})\delta(u_{2}-r_{2})+\delta(u_{1}-r_{2})\delta(u_{2}-r_{1})\right]P_{k}(\cos\theta).
\end{align}
\subsection{\label{sec:densityopelem}Two-particle density operator matrix elements}
The two-electron density function can be computed at different levels of approximation. In our case, in terms of the \ac{CI} method and using its variational \ac{WF}, it can be written as
\begin{equation}\label{eq:twodens2}
\rho(r_{1},r_{2},\theta)=\braket{\Psi^{CI}|\hat{G}(r_{1},r_{2},\theta)|\Psi^{CI}}=\sum_{ij}C_iC_j^*\braket{\psi_i|\hat{G}(r_{1},r_{2},\theta)|\psi_j},
\end{equation}
where the integrations involved in the expectation value must be performed over the coordinates $\{u_1,u_2,\theta\}$ in equation~\eqref{eq:redop4}.
Now we follow the same procedure used to compute the inter-electronic Coulomb operator~\eqref{eq:intermulti}, taking as a reference the non-antisymmetrized matrix elements of the operator $\hat{G}(r_1,r_2,\theta)$
\begin{equation}\label{eq:Gmatrixnonanty}
(\hat{G}(r_1,r_2,\theta))_{ab,cd}^{\aleph}=\bra{(n_al_a)_1(n_bl_b)_2LS}\hat{G}(r_1,r_2,\theta)\ket{(n_cl_c)_1(n_dl_d)_2LS}.
\end{equation}
Inserting equation~\eqref{eq:redop4} into equation~\eqref{eq:Gmatrixnonanty} we obtain
\begin{align}\label{eq:nonantGmatrix}
(\hat{G}(r_1,r_2,\theta))_{ab,cd}^{\aleph}&=\sum_k\frac{1}{4}(-1)^k(2k+1)^{\frac{3}{2}}R(ab,cd)P_k(\cos\theta)\\ \nonumber
&\times\braket{(LS)_{ab}|\{\mathbf{C}^{k}(1)\cdot\mathbf{C}^{k}(2)\}^{0}_{0}|(LS)_{cd}},
\end{align}
where the two-electron radial integral (which, at variance with the Coulomb case, does not depend on the summation index $k$) is
\begin{equation}\label{eq:funRG}
R(ab,cd)=\int_{0}^{\infty}\int_{0}^{\infty} \mathcal{U}_{a}(u_{1})\mathcal{U}_{b}(u_{2})\gamma(u_{1},u_{2})\mathcal{U}_{c}(u_{1})\mathcal{U}_{d}(u_{2})du_{1}du_{2},
\end{equation}
with
\begin{equation}\label{eq:gamma1}
\gamma(u_1,u_2)=\delta(u_{1}-r_{1})\delta(u_{2}-r_{2})+\delta(u_{1}-r_{2})\delta(u_{2}-r_{1}).
\end{equation}
This integral is straightforwardly calculated yielding
\begin{equation}\label{eq:funRG1}
R(ab,cd)= \mathcal{U}_{a}(r_{1})\mathcal{U}_{b}(r_{2})\mathcal{U}_{c}(r_{1})\mathcal{U}_{d}(r_{2})+\mathcal{U}_{a}(r_{2})\mathcal{U}_{b}(r_{1})\mathcal{U}_{c}(r_{2})\mathcal{U}_{d}(r_{1}),
\end{equation}
which corresponds to a function of the two radial variables $r_1$ and $r_2$.
In any case, if we compare equation~\eqref{eq:nonantGmatrix} with the corresponding expression for the inter-electronic Coulomb matrix elements, equation~\eqref{eq:nonantmatrix}, we realize that both the orbital and spin angular momentum integrals are formally equivalent in the two cases. Consequently, we may write
\begin{align}\label{eq:maelemten87}
(\hat{G}(r_1,r_2,\theta))_{ab,cd}^{\aleph}&= \sum_{k}\frac{1}{4}(-1)^{L-k}(2k+1)\\ \nonumber
&\times R(ab,cd)\left[(2l_a+1)(2l_b+1)(2l_c+1)(2l_d+1)\right]^{\frac{1}{2}}\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_b}{l_d}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_b}P_k(\cos\theta).
\end{align}
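The angular factors in this expression can be evaluated symbolically. The sketch below (assuming \texttt{sympy}; the orbital values are illustrative) shows how the $3j$ symbols with zero projections restrict the multipole index $k$ to values of the same parity as $l_a+l_c$ within the triangle rule, and evaluates one of the $6j$ recoupling coefficients:

```python
from sympy.physics.wigner import wigner_3j, wigner_6j

# The 3j symbol (l_a l_c k; 0 0 0) vanishes unless l_a + l_c + k is
# even and |l_a - l_c| <= k <= l_a + l_c. Illustrative values:
# l_a = 1 (p), l_c = 2 (d), so only odd k within the triangle survive.
la, lc = 1, 2
allowed = [k for k in range(6) if wigner_3j(la, lc, k, 0, 0, 0) != 0]
print(allowed)                      # -> [1, 3]

# One of the 6j recoupling coefficients {l_a k l_c; l_d L l_b} entering
# the sum over k (arguments are illustrative, not a specific block):
six_j = wigner_6j(1, 2, 1, 2, 2, 1)
print(float(six_j))
```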
\subsubsection{Density operator matrix elements with antisymmetric configurations}
Previously we calculated the Coulomb $1/r_{12}$ matrix elements with both non-antisymmetric, equation~\eqref{eq:maelemten75}, and antisymmetric configurations, see equations~\eqref{eq:eqeqcoul}--\eqref{eq:eqeqcoul3}. In the same way, the non-antisymmetric density operator matrix elements were calculated in expressions~\eqref{eq:funRG1} and~\eqref{eq:maelemten87}. Therefore, using the antisymmetrized \ac{WF}, equations~\eqref{eq:ansymm33} and~\eqref{eq:ansymm34}, we can write the matrix element of the operator $\hat{G}(r_1,r_2,\theta)$ between two-electron configurations consisting of equivalent or non-equivalent electrons. We proceed as follows:
\subsubsection*{Equivalent---Equivalent Electrons}
\begin{align}\label{eq:eqeqdens}
(\hat{G}(r_1,r_2,\theta))_{aa,cc}&=\braket{\{(n_al_a)^2\}LS|\hat{G}(r_1,r_2,\theta)|\{(n_cl_c)^2\}LS},\\ \nonumber
&=\braket{(n_al_a)_1(n_al_a)_2LS|\hat{G}(r_1,r_2,\theta)|(n_cl_c)_1(n_cl_c)_2LS},\\ \nonumber
&=(\hat{G}(r_1,r_2,\theta))_{aa,cc}^{\aleph}\\ \nonumber
&=\sum_{k}\frac{1}{4}(-1)^{L-k}(2k+1)R(aa,cc)(2l_a+1)(2l_c+1)\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}^2\Gj{l_a}{k}{l_c}{l_c}{L}{l_a}P_k(\cos\theta).
\end{align}
In a similar way as for the two-electron Coulomb integrals, the general antisymmetrized matrix elements $(\hat{G}(r_1,r_2,\theta))_{ab,cd}$ of the density are summarized below for the remaining cases of two-electron configurations.
\subsubsection*{Equivalent---Non-equivalent Electrons}
\begin{align}\label{eq:eqeqdens1}
&(\hat{G}(r_1,r_2,\theta))_{aa,cd}\\ \nonumber
&=\braket{\{(n_al_a)^2\}LS|\hat{G}(r_1,r_2,\theta)|\{n_cl_c\hspace{4pt}n_dl_d\}LS},\\ \nonumber
&=\bra{(n_al_a)_1(n_al_a)_2LS}\hat{G}(r_1,r_2,\theta)\frac{1}{\sqrt{2}}\left[\ket{(n_cl_c)_1(n_dl_d)_2LS}\right. \\ \nonumber
&+\left.(-1)^{l_c+l_d+L+S}\ket{(n_dl_d)_1(n_cl_c)_2LS}\right],\\ \nonumber
&=\frac{1}{\sqrt{2}}(\hat{G}(r_1,r_2,\theta))_{aa,cd}^{\aleph}+\frac{1}{\sqrt{2}}(-1)^{l_c+l_d+L+S}(\hat{G}(r_1,r_2,\theta))_{aa,dc}^{\aleph},\\ \nonumber
&=\frac{1}{\sqrt{32}}(2l_a+1)[(2l_c+1)(2l_d+1)]^{\frac{1}{2}}\sum_{k}(2k+1)\left[R(aa,cd)(-1)^{L-k}\right.\\ \nonumber
&+(-1)^{l_c+l_d-k+S}\left.R(aa,dc)\right]\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_a}{l_d}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_a}P_k(\cos\theta).\\ \nonumber
\end{align}
\subsubsection*{Non-equivalent---Equivalent Electrons}
\begin{align}\label{eq:eqeqdens2}
&(\hat{G}(r_1,r_2,\theta))_{ab,cc}\\ \nonumber
&=\braket{\{n_al_a\hspace{4pt}n_bl_b\}LS|\hat{G}(r_1,r_2,\theta)|\{(n_cl_c)^2\}LS},\\ \nonumber
&=\frac{1}{\sqrt{2}}\left[\bra{(n_al_a)_1(n_bl_b)_2LS}+(-1)^{l_a+l_b+L+S}\bra{(n_bl_b)_1(n_al_a)_2LS}\right] \\ \nonumber
&\times \hat{G}(r_1,r_2,\theta)\ket{(n_cl_c)_1(n_cl_c)_2LS},\\ \nonumber
&=\frac{1}{\sqrt{2}}(\hat{G}(r_1,r_2,\theta))_{ab,cc}^{\aleph}+\frac{1}{\sqrt{2}}(-1)^{l_a+l_b+L+S}(\hat{G}(r_1,r_2,\theta))_{ba,cc}^{\aleph},\\ \nonumber
&=\frac{1}{\sqrt{32}}(2l_c+1)[(2l_a+1)(2l_b+1)]^{\frac{1}{2}}\sum_{k}(2k+1)\left[R(ab,cc)(-1)^{L-k}\right.\\ \nonumber
&+(-1)^{l_a+l_b-k+S}\left.R(ba,cc)\right]\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_b}{l_c}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_c}{L}{l_b}P_k(\cos\theta).\\ \nonumber
\end{align}
\subsubsection*{Non-equivalent---Non-equivalent Electrons}
\begin{align}\label{eq:eqeqdens3}
&(\hat{G}(r_1,r_2,\theta))_{ab,cd} \\ \nonumber
&=\braket{\{n_al_a\hspace{4pt}n_bl_b\}LS|\hat{G}(r_1,r_2,\theta)|\{n_cl_c\hspace{4pt}n_dl_d\}LS},\\ \nonumber
&=\frac{1}{2}\left[\bra{(n_al_a)_1(n_bl_b)_2LS}+(-1)^{l_a+l_b+L+S}\bra{(n_bl_b)_1(n_al_a)_2LS}\right] \\ \nonumber
&\times\hat{G}(r_1,r_2,\theta)\left[\ket{(n_cl_c)_1(n_dl_d)_2LS}+(-1)^{l_c+l_d+L+S}\ket{(n_dl_d)_1(n_cl_c)_2LS}\right] ,\\ \nonumber
&=\frac{1}{2}\left[(\hat{G}(r_1,r_2,\theta))_{ab,cd}^{\aleph}+(-1)^{l_c+l_d+L+S}(\hat{G}(r_1,r_2,\theta))_{ab,dc}^{\aleph}\right.\\ \nonumber
&+\left.(-1)^{l_a+l_b+L+S}(\hat{G}(r_1,r_2,\theta))_{ba,cd}^{\aleph}+(-1)^{l_a+l_b+l_c+l_d}(\hat{G}(r_1,r_2,\theta))_{ba,dc}^{\aleph}\right],\\ \nonumber
&=\frac{1}{8}[(2l_a+1)(2l_b+1)(2l_c+1)(2l_d+1)]^{\frac{1}{2}}\\ \nonumber
&\times\sum_{k}(2k+1)\left[\left[R(ab,cd)(-1)^{L-k}+(-1)^{l_a+l_b+l_c+l_d+L-k}R(ba,dc)\right]\right.\\ \nonumber
&\times\tj{l_a}{l_c}{k}{0}{0}{0}\tj{l_b}{l_d}{k}{0}{0}{0}\Gj{l_a}{k}{l_c}{l_d}{L}{l_b}P_k(\cos\theta)\\ \nonumber
&+\left[R(ab,dc)(-1)^{l_c+l_d+S-k}+(-1)^{l_a+l_b+S-k}R(ba,cd)\right]\\ \nonumber
&\left.\times\tj{l_a}{l_d}{k}{0}{0}{0}\tj{l_b}{l_c}{k}{0}{0}{0}\Gj{l_a}{k}{l_d}{l_c}{L}{l_b}P_k(\cos\theta)\right].
\end{align}
\subsection{Two-particle and one-particle electronic density functions of helium-like atoms}\label{sec:densityfunc12}
The two-particle electronic density function of a helium-like atom may be obtained from the rotational trace of the diagonal two-electron density matrix~\citep{Ezra1983}, equation~\eqref{eq:twodens2}. By integrating over the angular coordinate $\theta$ we obtain the two-electron radial density function, which reads
\begin{align}\label{eq:twoparticle}
\rho(r_{1},r_{2})=\int_{-1}^1d(\cos\theta)\rho(r_1,r_2,\theta).
\end{align}
We finally obtain the one-particle probability density by integrating equation~\eqref{eq:twoparticle} over the radial coordinate $r_2$
\begin{align}\label{eq:oneparticle}
\rho({r})=\int_0^\infty r_2^2 dr_2\int_{-1}^1d(\cos\theta)\rho(r_1=r,r_2,\theta).
\end{align}
\subsection{One-particle electronic density function for bound states of Helium atom}
After these computational details, we are now able to explore the one- and two-electron radial densities in helium which, being mathematical distribution functions, can be subjected to topological scrutiny by means of information entropic measures.
\begin{table}[h]
\centering
\begin{tabular}{cccccccc}
\hline\hline
& & &\multicolumn{2}{c}{$\rho(0)$}& &\multicolumn{2}{c}{$E(a.u.)$} \\
\cline{4-5}\cline{7-8}
& & & \citeauthor{Saavedra1995} & Present& & \citeauthor{Saavedra1995} & Present\\ \hline
\multirow{3}{*}{ $^{1}S^{e}$}&1 & & $3.620858$ & $3.620790$ & & $-2.903724$ & $ -2.903508$ \\
&2 & & $2.618920$ & $2.618926$ & & $-2.145974$ & $ -2.145960$ \\
&3& & $2.566253$ & $2.566259$ & & $-2.061272$ & $-2.061267$\\ \hline
\multirow{2}{*}{ $^{3}S^{e}$}&1& & $2.640710$ & $2.640708$ & & $-2.175229$ & $ -2.175228$ \\
&2 & & $2.570120$ & $2.570117$ & & $-2.068689$ & $ -2.068688$ \\ \hline\hline
\end{tabular}
\caption[Values for the one-electron density at the nucleus $\rho(r=0)$ and energies for the
lowest three $L=0$ bound states (singlets and triplets)]{\label{tab:table3}Values for the one-electron density at the nucleus $\rho(r=0)$ and energies for the
lowest three $L=0$ bound states (singlets and triplets). Our results are compared with previous work by~\citep{Saavedra1995}.}
\end{table}
\begin{figure}
\centering
\hspace{-0.2cm}\includegraphics[width=0.63\textwidth]{gfx/one_densities_bound_1Se.pdf}\\
\includegraphics[width=0.62\textwidth]{gfx/one_densities_bound_1Po.pdf}\\
\includegraphics[width=0.61\textwidth]{gfx/one_densities_bound_1De.pdf}\\
\caption[Electronic one-particle density $\rho({r})$ for the lowest five bound states in the spectroscopic symmetries $^{1}S^{e}$, $^{1}P^{o}$ and $^{1}D^{e}$]{\label{fig:onedens12}Electronic one-particle radial density $\rho({r})$ for the lowest five bound states in the spectroscopic symmetries $^{1}S^{e}$, $^{1}P^{o}$ and $^{1}D^{e}$. Note that the y-axis is in logarithmic scale.}
\end{figure}
We begin by assessing the quality of our results by comparing them with previous ones in the literature. Our energies and values of the one-electron radial density at $r=0$ for the lowest $^1S^e$ and $^3S^e$ states in helium are included in table~\ref{tab:table3} and compared with those available from~\citep{Saavedra1995}. The values reported by the latter authors were obtained using explicitly correlated \ac{WF}, following the work by~\citep{Pekeris1958,Pekeris1959,Pekeris1962} with perimetric coordinates. Evidently, our \ac{CI}-\ac{WF} does not reach such a degree of sophistication but, by increasing the angular correlation, we can reproduce up to six figures in the energies and up to five in the radial densities. This close agreement endorses our computational procedure, which is aimed not at obtaining precise numerical results for bound states but for high-lying \ac{DES}, for which explicitly correlated methods are less suited. In any case, we are mostly interested in the qualitative behavior. The one-particle density calculated at different levels of theory should present differences probing the effects of electron correlation. For instance, clear effects of electron correlation are visible when comparing the one-electron densities obtained with explicitly correlated configurations and with an uncorrelated Hartree-Fock method (see figure 2 in~\citep{Saavedra1995}). Figure~\ref{fig:onedens12} depicts the one-particle densities $\rho({r})$ obtained here with our \ac{CI} method.
Finally, we want to mention the following interesting qualitative result: only the ground state of helium, $1^1S^e$, and the excited state $1^1P^o$ have a monotonically decreasing density; a non-monotonically decreasing behavior is observed for all of the remaining excited states. This phenomenon was previously observed by~\citep{Rigier1984} and~\citep{Saavedra1995}, who incidentally report a monotonically decreasing \ac{HF} density function $\rho$ for the excited state $2^1S^e$. This difference with respect to the present \ac{CI}-\ac{WF} density is fundamentally due to the lack of a properly described electron correlation in the \ac{HF} method.
\subsection{Two-particle electronic density function for doubly excited states of Helium atom}
The two-electron radial (two-dimensional) probability density is calculated by means of equation~\eqref{eq:twoparticle}. It only involves the spatial radial coordinates, i.e., we have traced over the angular degrees of freedom. Since we deal with indistinguishable particles, we expect $\rho(r_1,r_2)$ to be symmetric about the bisector line of the plane $(r_1,r_2)$ (i.e., the line $r_1=r_2$) under the permutation of the particle indices.
The properties of electron correlation in \ac{DES} of two-electron atoms are a problem of considerable theoretical interest~\citep{Cooper1963, Lin1974, Sinanoglu1974, Herrick1975, Ezra1982, Ezra1983}. In order to analyze the electron correlation in the density distribution,~\citeauthor{Ezra1982} have undertaken a detailed study of the two-electron density $\rho(r_1,r_2,\theta_{12})$ via the associated conditional probability $\rho(r_1,\theta_{12}\,|\,r_2=\alpha)$, which is the probability of finding an electron at a distance $r_1$ from the nucleus with interelectronic angle $\theta_{12}$, given that the other electron is at distance $\alpha$ from the nucleus. They conclude that a qualitative examination of the conditional density of two-electron atoms, calculated via a \ac{CI} approach using Sturmian functions, reveals a remarkable degree of collective rotor-vibrator behavior in the $N=2$ shell, showing that the molecular interpretation of the doubly excited spectrum due to~\citep{Kellman1980} is a useful qualitative picture.
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part1.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part2.pdf}\\
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part3.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part4.pdf}\\
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part5.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part6.pdf}\\
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part7.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_1Se_Part8.pdf}
\caption[Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest eight resonant $^1S^e$ states in He]{\label{fig:wide1}Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest eight resonant $^1S^e$ states in He, located below the second ionization threshold. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$. The energy ordering of the resonances is indicated by the alphabet labels inside the plots.}
\end{figure}
In order to establish a deeper qualitative understanding of the structure and classification of \ac{DES} of two-electron atoms, we will focus on studying the two-dimensional electronic density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$, where detailed information on the structure of resonant states can be found. In an earlier work~\citep{Cortes1993}, the authors introduced a multipole expansion of the density of the resonances. They could obtain a description of the electron correlations from the \ac{WF}, using the so-called ``correlation $Z$ diagrams''~\citep{Macias1989,Macias1991}. Consequently, they concluded that, in general, the electronic density plots of the $^1P^o$ \ac{DES} are roughly scaled pictures of each other and their classification offers no difficulty, e.g., the $(K,T)$ labels may be used throughout the whole $Z$ diagram. Here, we have calculated the two-electron radial density $\rho(r_1,r_2)r_1^2r_2^2$ with the aim of establishing a qualitative and comparative understanding of the~\citep{Herrick1975,Lin1983} classification scheme of \ac{DES} in the He atom.
In figures~\ref{fig:wide1},~\ref{fig:wide2},~\ref{fig:wide3},~\ref{fig:wide4},~\ref{fig:wide5}, and~\ref{fig:wide6} we show the electronic probability density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ of resonant \ac{DES} of helium located below the second ionization threshold He$^+$ $(n_1=2)$ for the total symmetries $^{1,3}S^e$, $^{1,3}P^o$, and $^{1,3}D^e$, respectively. The resonances are organized by increasing energy and are labelled according to the classification proposed by~\citep{Lin1983} and~\citep{Herrick1975}.
Notably, the radial correlation described by the $A$ quantum number introduced by~\citep{Lin1984} is evidenced in these two-dimensional electronic densities. From the figures it is clearly seen that the density has an anti-node at the line $r_1=r_2$ for $A=+1$ and a node for $A=-1$, i.e., the quantum number $A$ describes the even or odd symmetry of the \ac{WF} with respect to the line $r_1 = r_2$ and reflects the Pauli principle~\citep{Brandefelt1996}. In figures~\ref{fig:wide1} and~\ref{fig:wide2} for $^{1,3}S^e$ states, where only $T = 0$ is allowed, it is shown that $A=+1$ corresponds to the spin singlet states, which show an anti-node at $r_1=r_2$; on the other hand, $A=-1$ labels the spin triplet states, which have a node at the line $r_1=r_2$, as can be expected. The symmetries $^{1,3}P^o$ have a more complicated behavior that is pictured in figures~\ref{fig:wide3} and~\ref{fig:wide4}. The singlet $^1P^o$ states alternate between the values of the quantum number $A=1,-1,0$, evidencing again anti-node ($A=+1$) and node ($A=-1$) behaviors. The $A=0$ value was also predicted by~\citep{Lin1993} in order to accommodate the fact that the third series in figure~\ref{fig:wide3} cannot be classified with a label $A=\pm 1$. The triplet $^3P^o$ states alternate only between the values $A=\pm1$, showing again the anti-node ($+1$) and node ($-1$) behavior, as expected. Finally, the states of symmetry $^1D^e$ shown in figure~\ref{fig:wide5} only admit the values $A=+1,0$, and the states of symmetry $^3D^e$ in figure~\ref{fig:wide6} only involve the values $A=-1,0$.
In conclusion, there is a strong relationship between the topological behavior of the electronic density distribution and the quantum label $A$, which ultimately describes the radial correlation of the \ac{DES} (symmetric or asymmetric stretching vibration, as a correlated motion of the electron pair with respect to the nucleus).
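The node/anti-node dichotomy encoded by $A=\pm1$ can be illustrated with a minimal numerical sketch. The orbitals below are hypothetical hydrogenic stand-ins, not our \ac{CI} wave functions: the antisymmetrized radial pair function vanishes identically on the line $r_1=r_2$, while the symmetrized one does not.

```python
import math

# Minimal sketch: two hydrogenic-like radial orbitals (arbitrary stand-ins,
# NOT the CI wave functions used in the text) combined with even/odd symmetry.
def phi_a(r):
    return (2.0 - r) * math.exp(-r / 2.0)   # 2s-like radial orbital

def phi_b(r):
    return r * math.exp(-r / 2.0)           # 2p-like radial orbital

def pair_density(r1, r2, sign):
    """|phi_a(r1)phi_b(r2) + sign*phi_b(r1)phi_a(r2)|^2 with sign = +1 or -1,
    mimicking the A = +1 (anti-node) and A = -1 (node) radial symmetries."""
    f = phi_a(r1) * phi_b(r2) + sign * phi_b(r1) * phi_a(r2)
    return f * f
```

On the line $r_1=r_2$ the odd combination gives a node and the even one an anti-node, and both densities are symmetric under $r_1\leftrightarrow r_2$, which are the qualitative patterns seen in figures~\ref{fig:wide1}-\ref{fig:wide6}.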
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part1.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part2.pdf}\\
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part3.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part4.pdf}\\
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part5.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part6.pdf}\\
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part7.pdf}
\includegraphics[width=0.35\textwidth]{gfx/densitytwo_3Se_Part8.pdf}
\caption[Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest eight resonant $^3S^e$ states in He]{\label{fig:wide2}Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest eight resonant $^3S^e$ states in He, located below the second ionization threshold. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$. The energy ordering of the resonances is indicated by the alphabet labels inside the plots.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part1.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part2.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part3.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part4.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part5.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part6.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part7.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part8.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part9.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1Po_Part10.pdf}
\caption[Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^1P^o$ states in He]{\label{fig:wide3}Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^1P^o$ states in He, located below the second ionization threshold. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$. The energy ordering of the resonances is indicated by the alphabet labels inside the plots.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part1.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part2.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part3.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part4.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part5.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part6.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part7.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part8.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part9.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3Po_Part10.pdf}
\caption[Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^3P^o$ states in He]{\label{fig:wide4}Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^3P^o$ states in He, located below the second ionization threshold. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$. The energy ordering of the resonances is indicated by the alphabet labels inside the plots.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part1.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part2.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part3.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part4.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part5.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part6.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part7.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part8.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part9.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_1De_Part10.pdf}
\caption[Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^1D^e$ states in He]{\label{fig:wide5}Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^1D^e$ states in He, located below the second ionization threshold. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$. The energy ordering of the resonances is indicated by the alphabet labels inside the plots.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part1.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part2.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part3.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part4.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part5.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part6.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part7.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part8.pdf}\\
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part9.pdf}
\includegraphics[width=0.28\textwidth]{gfx/densitytwo_3De_Part10.pdf}
\caption[Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^3D^e$ states in He]{\label{fig:wide6}Two-electron radial density $\rho(r_{1},r_{2})r_{1}^{2}r_{2}^{2}$ for the lowest ten resonant $^3D^e$ states in He, located below the second ionization threshold. Resonances are labelled according to the classification proposed by~\citep{Lin1983} using $_{n_1}(K,T)_{n_2}^A$. The energy ordering of the resonances is indicated by the alphabet labels inside the plots.}
\end{figure}
\section{\label{sec:infortheo}Information-theoretic measures}
The physical and chemical properties of atoms and molecules strongly depend on the topological properties of the electronic density function. This function characterizes the probability structure of quantum-mechanical states. The structural (topological) properties are, for instance, the spreading, uncertainty, randomness, disorder, and localization of the probability distribution, as well as its small and strong changes. In order to analyze and quantify the topological properties of a system we can use the widely known measures of modern information theory described by~\citep{MacKay2003,Cover2006,Lopez-Rosa2010} and the references therein. The pioneering work of~\citep{Bialynicki-Birula1975} pointed out the importance of applying the methods and concepts of classical information theory~\citep{Shannon1949,Fisher1972} to wave mechanics.
In this section we consider two information-theoretic measures which complementarily describe the spreading of a probability density function in a box. The first is a measure of the global character of the distribution, able to quantify the total extent of the probability density through a logarithmic functional known as the Shannon entropy. The second is a quantity called the Fisher information, a functional of the gradient of the probability density. In contrast to the Shannon entropy, this measure is very sensitive to the point-wise analytic behavior of the density; it has a local character.
Our goal is to apply these two quantifiers to the one-electron radial densities, see equation~\eqref{eq:oneparticle}, of the resonant \ac{DES} in helium, in order to obtain information on their classification via the topological structure of the density.
The application of these two quantifiers to the two-electron density functions $\rho(r_1,r_2)$ is possible but cumbersome. The integration over one of the radial coordinates to obtain the one-electron radial density projects all the rich subtleties of the two-particle distributions included in figures~\ref{fig:wide1}-\ref{fig:wide6} onto one axis ($r_1$ or $r_2$). Nevertheless, we find that the one-particle density may still have enough information content to discriminate qualitatively different states within a Rydberg series.
\subsection{\label{sec:entropies}Global measure: Shannon entropy}
The birth of modern information theory was due to the pioneering paper of~\citep{Shannon1949}. In the 1940s, Claude Shannon was investigating, in addition to how to make communication procedures safer, how much people could communicate with each other through a physical system (e.g.\ a telephone network). Shannon was seeking a way to send two or more calls down a single wire. In order to achieve this he needed to provide a precise mathematical definition of the concept of information. Shannon came up with the definition that the information content of an event is proportional to the logarithm of the inverse of its probability $p$ of occurrence~\citep{Vedral2010}:
\begin{equation}\label{eq:information}
I_S=\ln\frac{1}{p}.
\end{equation}
This definition of information expresses two relevant properties: (1) less likely events, those whose probability of occurrence is very small, are the ones that carry more information; (2) the total information in two independent events should be the sum of the two individual amounts of information. For this reason, and because the joint probability of two independent events is the product of the individual probabilities, the definition of information involves a logarithmic function. Shannon originally named his measure of information ``entropy'' at the direct suggestion of John von Neumann. We often write the Shannon entropy as a function of a probability distribution $p_1,\dots,p_n$, i.e., as the expectation value of the expression~\eqref{eq:information} with this probability distribution
\begin{equation}\label{eq:shannondisc}
S(p_1,\dots,p_n)=-\sum_x p_x\ln p_x.
\end{equation}
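The two defining properties above, that rare events carry more information and that information is additive for independent events, can be verified directly on the discrete form~\eqref{eq:shannondisc}; the distributions below are arbitrary illustrative choices.

```python
import math

def shannon(p):
    """Discrete Shannon entropy S = -sum_x p_x ln p_x (natural logarithm)."""
    return -sum(x * math.log(x) for x in p if x > 0.0)

p = [0.5, 0.25, 0.25]                    # distribution of a first event
q = [0.9, 0.1]                           # distribution of an independent second event
joint = [a * b for a in p for b in q]    # joint distribution of the independent pair

# Rare events carry more information: -ln(0.1) > -ln(0.9); and for the
# independent pair, additivity holds: S(joint) = S(p) + S(q).
```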
Finally, the Shannon entropy is generalized, for an arbitrary continuous probability distribution function, as~\citep{Catalan2002,Cover2006,Lopez-Rosa2009,Lopez-Rosa2010,Angulo2011,Antolin2011}
\begin{equation}\label{eq:shannonentropy}
S[\rho]=-\int\rho(\mathbf{r})\ln\rho(\mathbf{r})\,d\mathbf{r}.
\end{equation}
The Shannon entropy is a direct measure of the uncertainty of a probability distribution: it quantifies the ignorance, or lack of information, concerning an experimental event or outcome. Equivalently, as we have noted before, the Shannon entropy is a measure of the amount of information that we expect to gain on performing a probabilistic experiment.
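As a numerical illustration of the functional~\eqref{eq:shannonentropy}, the following sketch (a simple trapezoidal rule, not the Gauss-Legendre scheme used for our helium results) reproduces the analytical entropy $S=3+\ln\pi$ of the hydrogen ground-state density $\rho=e^{-2r}/\pi$.

```python
import math

def entropy_3d(rho, rmax=40.0, n=40000):
    """S[rho] = -int rho ln(rho) d^3r for a spherically symmetric rho(r),
    evaluated as -4*pi * int rho ln(rho) r^2 dr with the trapezoidal rule."""
    h = rmax / n
    total = 0.0
    for i in range(1, n):                 # endpoint contributions are ~0 here
        r = i * h
        d = rho(r)
        if d > 0.0:
            total += -d * math.log(d) * 4.0 * math.pi * r * r
    return total * h

rho_1s = lambda r: math.exp(-2.0 * r) / math.pi   # hydrogen 1s density (Z = 1)
S_1s = entropy_3d(rho_1s)                          # analytically 3 + ln(pi)
```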
\subsection{\label{sec:inffisher}Local measure: Fisher information}
Instead of providing information on the global structure of the probability distribution function, other functionals are more sensitive to the local topology; one instance is the Fisher information. It is a better indicator of the local irregularities or the oscillatory nature of the density, as well as a witness of the disorder of the system. This functional is therefore able to detect local changes of the density and to provide a better description of the system in terms of the measure of information in the outcome of an experiment. The Fisher information is defined as~\citep{MacKay2003,Cover2006,Lopez-Rosa2010}
\begin{eqnarray}\label{eq:fisherinformation}
I[\rho] & = & \int\left|\mathbf{\nabla}\ln\rho(\mathbf{r})\right|^{2}\rho(\mathbf{r})\,d\mathbf{r}\nonumber\\
& = & \int\frac{\left|\mathbf{\nabla}\rho(\mathbf{r})\right|^{2}}{\rho(\mathbf{r})}\,d\mathbf{r}.
\end{eqnarray}
The Fisher information measure has been successfully used to identify, characterize and interpret numerous phenomena and physical processes, such as correlation properties in atoms, or the periodicity and shell structure of the periodic table of chemical elements. It has been used for the variational characterization of quantum equations of motion, and also to re-derive classical thermodynamics without requiring the usual concept of Boltzmann's entropy, as well as in a large variety of other applications; see~\citep{Lopez-Rosa2010} and the references therein.
One of the more remarkable applications of the Fisher information is its deep relationship with \ac{DFT}, where it plays a central role. The relevance of the Fisher information in quantum mechanics and \ac{DFT} was first emphasized more than thirty years ago: the quantum mechanical kinetic energy can be considered a measure of the information distribution, see for instance~\citep{Nagy2003} and the references therein. This well established relationship between the quantum mechanical kinetic energy functional and the Fisher information is known in the literature as the Weizs\"acker kinetic energy functional.
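The equivalence of the two integrands in equation~\eqref{eq:fisherinformation} can be checked numerically on the hydrogen ground-state density, for which the exact value is $I=4$ in atomic units. The sketch below (finite differences and a trapezoidal rule, purely illustrative and unrelated to our production code) evaluates both forms.

```python
import math

rho = lambda r: math.exp(-2.0 * r) / math.pi    # hydrogen 1s density (Z = 1)

def fisher_both_forms(rmax=40.0, n=40000, eps=1e-6):
    """Evaluate |grad ln(rho)|^2 rho and |grad rho|^2 / rho over 4*pi r^2 dr;
    the two forms of the Fisher information integrand must coincide."""
    h = rmax / n
    f_log, f_ratio = 0.0, 0.0
    for i in range(1, n):
        r = i * h
        d = rho(r)
        grad = (rho(r + eps) - rho(r - eps)) / (2.0 * eps)   # central difference
        f_log += (grad / d) ** 2 * d * 4.0 * math.pi * r * r
        f_ratio += (grad * grad / d) * 4.0 * math.pi * r * r
    return f_log * h, f_ratio * h
```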
\section{Results and discussions}\label{sec:resultanddiscussions}
In this section we show and discuss the results obtained for both measures introduced above, the Shannon entropy and the Fisher information, for the electronic density function of \ac{DES} of the helium atom. Additionally, we also analyze the behavior of the one-particle electronic density itself and of the differential entropies, i.e., the integrands of both measures, for the resonant states of symmetries $^{1,3}S^e$, $^{1,3}P^o$, and $^{1,3}D^e$ of the same atom. In order to obtain an intuitive picture of the meaning of each of the entropic measures, we first present the same analysis, as an illustration, for the bound states of the hydrogen atom.
\subsection{Information measures of Hydrogen atom}\label{sec:hydrogenresults}
The hydrogen atom has a central role in quantum physics and chemistry. Its analysis is basic not only to gain full insight into the intimate structure of matter but also for numerous other phenomena like light-matter interaction~\citep{Bransden2003}, the behavior of heterostructures like quantum dots, and so on. On the whole, since the birth of quantum mechanics, the hydrogen atom has become a paradigm, mainly because its Schr\"odinger equation can be solved analytically.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{gfx/hydrogenentropies}
\caption{\label{fig:shannfisherhyd}Shannon entropy and Fisher information for the lowest bound states (for $l=0,1$ and $2$) of the Rydberg series below the ionization threshold in the hydrogen atom.}
\end{figure}
In this section we obtain the information measures, i.e., the Shannon entropy and the Fisher information for the hydrogen atom. Let us now deal with the analytical expression~\eqref{eq:solhyd1} for the \ac{WF} of this atom
\begin{equation}\label{eq:solhyd12}
\psi_{E,l,m}(r,\theta,\phi)=R_{E,l}({r})\mathcal{Y}^l_{m_l}(\theta,\phi),
\end{equation}
where $R_{E,l}({r})$ is the radial function and $\mathcal{Y}^l_{m_l}(\theta,\phi)$ is the spherical harmonic which describes the angular dependence. We are only interested in calculating the entropic measures on the radial part of the \ac{WF}, therefore we trace over the angular degrees of freedom. Then, the radial one-particle electronic density can be written, in terms of the equation~\eqref{eq:radialf}, as
\begin{align}\label{eq:solhyd13}
\rho({r})&=|R_{E,l}({r})|^2\nonumber\\
&=\left|\left\{\left(\frac{2Z}{n}\right)^{3}\frac{(n-l-1)!}{2n[(n+l)!]^3}\right\}^{\frac{1}{2}}e^{-\frac{\rho}{2}}\rho^l L^{2l+1}_{n+l}(\rho)\right|^2,
\end{align}
where $\rho=\frac{2Z}{n}r$ and the $L^i_k(\rho)$ are the associated Laguerre polynomials.
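For the lowest states the radial functions reduce to simple closed forms. The following sketch (explicit formulas for $Z=1$ in atomic units, written out instead of using Laguerre polynomials) verifies their normalization $\int |R_{n,l}|^2 r^2\,dr = 1$ and the orthogonality of $R_{10}$ and $R_{20}$.

```python
import math

# Closed-form hydrogen radial functions for Z = 1 in atomic units.
def R10(r):
    return 2.0 * math.exp(-r)

def R20(r):
    return (2.0 - r) * math.exp(-r / 2.0) / (2.0 * math.sqrt(2.0))

def R21(r):
    return r * math.exp(-r / 2.0) / (2.0 * math.sqrt(6.0))

def radial_overlap(Ra, Rb, rmax=60.0, n=60000):
    """int Ra(r) Rb(r) r^2 dr by the trapezoidal rule (endpoints ~ 0)."""
    h = rmax / n
    return sum(Ra(i * h) * Rb(i * h) * (i * h) ** 2 for i in range(1, n)) * h
```

Each diagonal overlap returns 1 within the quadrature error, while `radial_overlap(R10, R20)` vanishes.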
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannonarg_l0}
\caption[Components of the integrand in the Shannon entropy integral formula for the six lowest bound states of hydrogen with $l=0$]{\label{fig:sharg_l0}Components of the integrand in the Shannon entropy integral formula for the six lowest bound states of hydrogen with $l=0$. Panel (a) shows the electronic density times the angular factor $4\pi$ and the Jacobian volume factor $r^2$. Panel (b) shows the logarithm of the density at two different scales and panel (c) shows the full integrand of the Shannon entropy (the differential Shannon entropy). The inset is a blow-up of the inner radial region.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannonarg_l1}
\caption[Components of the integrand in the Shannon entropy integral formula for the six lowest bound states of hydrogen with $l=1$]{\label{fig:sharg_l1}Components of the integrand in the Shannon entropy integral formula for the six lowest bound states of hydrogen with $l=1$. Panel (a) shows the electronic density times the angular factor $4\pi$ and the Jacobian volume factor $r^2$. Panel (b) shows the logarithm of the density at two different scales and panel (c) shows the full integrand of the Shannon entropy (the differential Shannon entropy). The inset is a blow-up of the inner radial region.}
\end{figure}
The Shannon entropy for the lowest bound states in the Rydberg series with angular momentum $l=0,1$ and $2$ in H is shown in figure~\ref{fig:shannfisherhyd}. This quantity is a monotonically increasing function of the energy of the bound states, and the curve seems to reach an asymptotic behavior towards the location of the ionization threshold at $E=0$ \ac{a.u.}, regardless of the value of the angular momentum $l$. This behavior evidences the fact that the density becomes more and more spread out with the excitation energy. Even though the ground state and the low-energy states have different values of the Shannon entropy for different values of the angular momentum $l$, these values tend to converge to the same one for highly excited manifolds ($n \to \infty$) in the Rydberg series, for which all electron densities become highly oscillatory and spread out, regardless of the details at short distances for different angular momenta. Previous results for the ground state of hydrogen are reported by~\citep{Sen2005} and some analytical expressions are provided by~\citep{Lopez-Rosa2005}. In addition, figures~\ref{fig:sharg_l0}, \ref{fig:sharg_l1}, and \ref{fig:sharg_l2} show the components of the integrand of the Shannon entropy, according to equation~\eqref{eq:shannonentropy}, for the lowest eigenstates of the hydrogen atom with angular momentum $l=0,1$ and $2$, respectively. In each figure, panel (a) shows the electronic density, panel (b) its logarithm, and panel (c) the complete differential Shannon entropy (the full integrand). In figure~\ref{fig:sharg_l0}(a) the electronic density of the ground state of the hydrogen atom is seen to be extremely localized close to the nucleus. By inserting the logarithm of the electron density in the definition of the Shannon entropy, many details of the density distribution at large radial distances are incorporated into the entropy.
This behavior of the differential Shannon entropy as the energy increases (the height of the maximum decreases but the distribution spreads out) explains the monotonically increasing character of the Shannon entropy. Incidentally, those states which are degenerate in energy (same $n$ but different $l$, like $2s$ and $2p$, or $3s$, $3p$ and $3d$), in spite of having different spreading of the density, have similar values of the Shannon entropy, a behavior that can be understood from the differential probabilities in figures~\ref{fig:sharg_l0},~\ref{fig:sharg_l1}, and~\ref{fig:sharg_l2}. Finally, it is clear from direct comparison that the densities of highly excited states with the same energy and different angular momentum are almost indistinguishable for the Shannon entropy measure.
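The monotonic growth of the Shannon entropy along the $l=0$ Rydberg series can be reproduced with a short numerical sketch (closed-form $R_{n,0}$ for $Z=1$ and a plain trapezoidal integration, an illustrative assumption rather than our production quadrature):

```python
import math

def R_s(n0, r):
    """Closed-form hydrogen radial functions R_{n,0}(r), Z = 1, atomic units."""
    if n0 == 1:
        return 2.0 * math.exp(-r)
    if n0 == 2:
        return (2.0 - r) * math.exp(-r / 2.0) / (2.0 * math.sqrt(2.0))
    if n0 == 3:
        return (2.0 / (3.0 * math.sqrt(3.0))) * (1.0 - 2.0 * r / 3.0
                + 2.0 * r * r / 27.0) * math.exp(-r / 3.0)
    raise ValueError(n0)

def shannon_s(n0, rmax=80.0, npts=80000):
    """S = -int rho ln(rho) d^3r with rho = |R_{n,0}|^2 / (4*pi)."""
    h = rmax / npts
    s = 0.0
    for i in range(1, npts):
        r = i * h
        d = R_s(n0, r) ** 2 / (4.0 * math.pi)
        if d > 0.0:                      # skip the exact radial nodes
            s += -d * math.log(d) * 4.0 * math.pi * r * r
    return s * h
```

The values obey `shannon_s(1) < shannon_s(2) < shannon_s(3)`, mirroring the increase seen in figure~\ref{fig:shannfisherhyd}.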
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannonarg_l2}
\caption[Components of the integrand in the Shannon entropy integral formula for the six lowest bound states of hydrogen with $l=2$]{\label{fig:sharg_l2}Components of the integrand in the Shannon entropy integral formula for the six lowest bound states of hydrogen with $l=2$. Panel (a) shows the electronic density times the angular factor $4\pi$ and the Jacobian volume factor $r^2$. Panel (b) shows the logarithm of the density at two different scales and panel (c) shows the full integrand of the Shannon entropy (the differential Shannon entropy). The inset is a blow-up of the inner radial region.}
\end{figure}
On the other hand, the Fisher information plot in figure~\ref{fig:shannfisherhyd} for hydrogen shows that this quantity decreases monotonically to the limit value zero at the ionization threshold ($n\to \infty$) but, at variance with the Shannon entropy, the Fisher information shows a distinctive trend for each angular momentum value $l$ (see figure~\ref{fig:shannfisherhyd}). Therefore, it is possible to conclude that the density becomes almost homogeneous, i.e., it reaches a highly oscillatory, nearly homogeneous behavior, for highly excited states. The Fisher information is higher for the ground state, which is more localized and has a smaller uncertainty, i.e., the accuracy in estimating the localization of the particle is higher. This behavior is depicted in figures~\ref{fig:sharg_l0}(a),~\ref{fig:sharg_l1}(a) and~\ref{fig:sharg_l2}(a), evidencing the strong localization of the ground state. Moreover, as shown in the figures, the integrand of the Fisher information, or the differential Fisher information, shows that the major contribution to this local measure comes from the regions of the electronic density close to the nucleus for the ground state, while for the excited states this contribution becomes less and less important. Some analytical results can be found in~\citep{Lopez-Rosa2005,Lopez-Rosa2010}.
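The monotonic decrease of the Fisher information along the $l=0$ series can be checked against the known analytical values $I(ns)=4Z^2/n^2$ (for $m=0$). The sketch below uses the identity $I=4\int (dR/dr)^2 r^2\,dr$, valid for $\rho=|R|^2/4\pi$, with closed-form hydrogen radial functions for $Z=1$ (an illustrative finite-difference computation, not our production code):

```python
import math

def R_s(n0, r):
    """Closed-form hydrogen radial functions R_{n,0}(r), Z = 1, atomic units."""
    if n0 == 1:
        return 2.0 * math.exp(-r)
    if n0 == 2:
        return (2.0 - r) * math.exp(-r / 2.0) / (2.0 * math.sqrt(2.0))
    return (2.0 / (3.0 * math.sqrt(3.0))) * (1.0 - 2.0 * r / 3.0
            + 2.0 * r * r / 27.0) * math.exp(-r / 3.0)

def fisher_s(n0, rmax=100.0, npts=100000, eps=1e-6):
    """I[rho] = 4 int (dR/dr)^2 r^2 dr, equivalent to int |grad rho|^2/rho d^3r
    for the spherically symmetric density rho = |R_{n,0}|^2 / (4*pi)."""
    h = rmax / npts
    s = 0.0
    for i in range(1, npts):
        r = i * h
        dR = (R_s(n0, r + eps) - R_s(n0, r - eps)) / (2.0 * eps)
        s += dR * dR * r * r
    return 4.0 * s * h
```

One obtains $I\simeq 4$, $1$ and $4/9$ for $n=1,2,3$, a monotonic decrease analogous to the trend in figure~\ref{fig:shannfisherhyd}.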
\newpage
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannon1Se_new}
\caption{\label{fig:shannon1Se}Shannon entropy $S(\rho)$ for the ground state and singly excited $^1S^e$ states in helium (left panel) and doubly excited states belonging to the two series $_2(1,0)^+_n$ and $_2(-1,0)^+_n$ within the $^1S^e$ symmetry of helium (right panel).}
\end{figure}
\subsection{Results of information-theoretic measures for doubly excited states of helium}\label{sec:hydrogen results}
We calculate, using a \ac{CI}-\ac{FM} approach, the eigenenergies and eigenfunctions of the Rydberg series of He \ac{DES} below the second ionization threshold for the symmetries $^{1,3}S^e$, $^{1,3}P^o$ and $^{1,3}D^e$. However, as said before, we are not interested in reproducing the highly precise values for the energies already reported in the literature. Instead, we focus our effort on obtaining a reasonably good description of the \ac{WF} itself, since our workhorse is the radial density and our final results are analyzed more qualitatively than quantitatively. In addition, we have also calculated the information-theoretic measures for the ground state and the singly excited states of the helium atom, as we did in the previous section for the bound states of the hydrogen atom, and we present the corresponding analysis. In the following sections we present the results of our numerical studies of the Shannon entropy and the Fisher information integrals for each of the symmetries named above. In order to obtain all the entropic measures of helium presented throughout this work, we have used a numerical integration scheme based on the Gauss-Legendre quadrature~\citep{Abramowitz1965,Press2007}, a very suitable approximation of the definite integral of a function, stated as a weighted sum of function values at specified points within the domain of integration.
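For reference, the Gauss-Legendre rule can be sketched in a few lines. The following self-contained Newton-iteration construction of the nodes and weights is shown only as an illustration of the method; it is not the library routine used in our actual computations.

```python
import math

def gauss_legendre(n):
    """n-point Gauss-Legendre nodes and weights on [-1, 1], obtained by
    Newton iteration on the Legendre polynomial P_n (three-term recurrence)."""
    xs, ws = [], []
    for i in range(1, n + 1):
        x = math.cos(math.pi * (i - 0.25) / (n + 0.5))    # standard root estimate
        for _ in range(100):
            p0, p1 = 1.0, x                # p1 ends up as P_n(x)
            for k in range(2, n + 1):
                p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
            dp = n * (x * p1 - p0) / (x * x - 1.0)        # P_n'(x)
            dx = p1 / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        xs.append(x)
        ws.append(2.0 / ((1.0 - x * x) * dp * dp))
    return xs, ws
```

An $n$-point rule integrates polynomials up to degree $2n-1$ exactly; for instance, five points reproduce $\int_{-1}^{1}x^8\,dx=2/9$ to machine precision, and a linear change of variables maps the rule onto any finite interval.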
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannon3Se_new}
\caption{\label{fig:shannon3Se}Shannon entropy $S(\rho)$ for singly excited $^3S^e$ states in helium (left panel) and doubly excited states belonging to the two series $_2(1,0)^-_n$ and $_2(-1,0)^-_n$ within the $^3S^e$ symmetry of helium (right panel).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannon1Po_new}
\caption{\label{fig:shannon1Po}Shannon entropy $S(\rho)$ for the singly excited $^1P^o$ states in helium (left panel) and doubly excited states belonging to the three series $_2(0,1)^+_n$, $_2(1,0)^-_n$ and $_2(-1,0)^0_n$ within the $^1P^o$ symmetry of helium (right panel).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannon3Po_new}
\caption{\label{fig:shannon3Po}Shannon entropy $S(\rho)$ for the singly excited $^3P^o$ states in helium (left panel) and doubly excited states belonging to the three series $_2(1,0)^+_n$, $_2(0,1)^-_n$ and $_2(-1,0)^-_n$ within the $^3P^o$ symmetry of helium (right panel).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannon1De_new}
\caption{\label{fig:shannon1De}Shannon entropy $S(\rho)$ for the singly excited $^1D^e$ states in helium (left panel) and doubly excited states belonging to the three series $_2(1,0)^+_n$, $_2(0,1)^0_n$ and $_2(-1,0)^0_n$ within the $^1D^e$ symmetry of helium (right panel).}
\end{figure}
\subsubsection{Shannon entropy of the $^{1,3}S^e$, $^{1,3}P^o$, and $^{1,3}D^e$ doubly excited states of helium atom}
In this section we analyse the Shannon entropy calculated via the equation~\eqref{eq:shannonentropy} using the one-particle radial density
$\rho({r})$ for the singlet and triplet resonant states of the helium atom. We show this quantity in figure~\ref{fig:shannon1Se} for the symmetry $^{1}S^e$, where we depict the results for the ground state and the singly excited states in the left panel. The behavior of this quantity for these states is very close to that obtained for the bound states of the hydrogen atom, as evidenced by a comparison with the left panel of figure~\ref{fig:shannfisherhyd}. The Shannon entropy increases monotonically to reach an asymptotic behavior as the energy of the Rydberg series approaches the first ionization threshold. In fact, it seems reasonable that the Shannon entropy must diverge to infinity once the ionization threshold is crossed, since the corresponding continuum \ac{WF} becomes fully delocalized in configuration space. In the same way, as shown in the figure, there is a strong localization of the density of the ground state close to the nucleus. A previous result for the Shannon entropy of the ground state of He is reported in~\citep{Sen2005}.
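As a sanity check of the entropy computation (a hydrogenic sketch with our own variable names, not the He \ac{CI} density of this work), the Shannon entropy of the hydrogen $1s$ density can be evaluated with the same radial quadrature and compared with its analytic value $S=3+\ln\pi$:

```python
import numpy as np

# Hydrogen 1s density in atomic units: rho(r) = e^{-2r} / pi.
rho = lambda r: np.exp(-2.0 * r) / np.pi

# Gauss-Legendre nodes/weights mapped onto the radial box [0, Lbox].
Lbox = 40.0
x, w = np.polynomial.legendre.leggauss(80)
r  = 0.5 * Lbox * (x + 1.0)
wr = 0.5 * Lbox * w

# S[rho] = -\int rho ln(rho) d^3r, reduced to a 1D radial integral
# for a spherically symmetric density.
S = -np.sum(wr * 4.0 * np.pi * r**2 * rho(r) * np.log(rho(r)))
# Analytic value: S = 3 + ln(pi), since -ln rho = ln(pi) + 2r and <r> = 3/2.
```

The same one-line reduction to a radial integral applies to the one-particle densities of helium discussed in this section.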
On the other hand, in the right panel of figure~\ref{fig:shannon1Se} we show the Shannon entropy values for the two $(K,T)$ series of $^1S^e$ \ac{DES} in helium. Since these states also form Rydberg series in the continuum above the first ionization threshold but below the second one, and we only analyze the contribution of the localized $\mathcal{Q}$ part of the resonance to the Shannon entropy, the behavior of the entropy for \ac{DES} resembles that of the Rydberg series of the bound states. It is important to note that even though we have plotted the two $(K,T)$ series belonging to this symmetry using different colours (i.e., $_2(1,0)^+_n$ in red and $_2(-1,0)^+_n$ in blue), the Shannon entropy is hardly capable of distinguishing them, i.e., it is impossible to say, based on the values of the Shannon entropy and without previous knowledge of the classification, which state belongs to which series. Moreover, although the Shannon entropy seems to converge to a constant value as the energy approaches the second ionization threshold, this is an apparent behavior due to the finite-box approximation in our computations, i.e., all our \ac{WF} are set to zero at the box boundary $r$=$L$, and this edge condition also affects the inner part of the density. This fact produces an inaccurate description of the one-particle electronic density for the high-lying resonances within each Rydberg series. Some figures concerning the differential properties of the Shannon entropy are relegated to Appendix~\ref{ch:supgraphics}; they include the electronic radial density, its logarithm and the radially differential Shannon entropy for the two series $_2(1,0)^+_n$ and $_2(-1,0)^+_n$ of the $^1S^e$ symmetry. In conclusion, the behavior of the Shannon entropy for the He resonances with increasing energy reflects the spreading of the density to longer radial distances in configuration space. 
The loss of compactness of the electronic density with higher excitation naturally increases the entropy content. However, the Shannon entropy, as an integral measure, is not able to discriminate (after integration) the differential subtleties associated with the several $(K,T)$ series within the same total $^{2S+1}L^\pi$ spectroscopic symmetry. It is also evident that the densities of highly excited resonances are truncated at $r=L$. There are several ways to deal with this issue, inter alia, extending the box size and increasing the number of basis functions in the \ac{CI} approach, but this is beyond the scope of the present work. We leave this convergence analysis for future work, which should also include the analysis of the isoelectronic series of the helium atom. There are no reported values of the Shannon entropy for \ac{DES} of the helium atom in the literature, and this work is a first approach to the subject.
In a similar way we have calculated the Shannon entropy of the ground state, the singly excited states and the \ac{DES} of the symmetries $^{3}S^e$, plotted in figure~\ref{fig:shannon3Se}, $^{1,3}P^o$, plotted in figures~\ref{fig:shannon1Po} and~\ref{fig:shannon3Po}, and $^{1,3}D^e$, depicted in figures~\ref{fig:shannon1De} and~\ref{fig:shannon3De}. From these figures it is possible to conclude that the behavior of the Shannon entropy calculated for the states of these symmetries has qualitatively the same characteristics discussed for the symmetry $^{1}S^e$. Both bound states and \ac{DES} increase their Shannon entropy content in a similar trend towards their corresponding upper ionization threshold. Notice that in the resonant case we are dealing only with the bound-like part of the \ac{DES} according to the Feshbach partitioning. Therefore, the Shannon entropy is no more than a witness for the compactness or diffuseness of the inner part of the total resonance \ac{WF}. Physically, this means that the outer indistinguishable electron becomes more and more delocalized according to its state of excitation. This fact is evidenced in the figures depicting the one-particle density and the Shannon entropy argument for the bound states of the $^{1}P^o$ symmetry, and in those depicting the same functions for the series $_2(0,1)^+_n$, $_2(1,0)^-_n$, and $_2(-1,0)^0_n$ belonging to the $^{1}P^o$ symmetry. Finally, it is possible to conclude that the Shannon entropy does not provide additional information about \ac{DES} of helium in any particular symmetry. Consequently, this specific measure is unable to extract crucial information about the topological features of the density relevant to the problem of classification of resonances.
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{gfx/shannon3De_new}
\caption{\label{fig:shannon3De}Shannon entropy $S(\rho)$ for the singly excited $^3D^e$ states in helium (left panel) and doubly excited states belonging to the three series $_2(1,0)^-_n$, $_2(0,1)^0_n$ and $_2(-1,0)^0_n$ within the $^3D^e$ symmetry of helium (right panel).}
\end{figure}
\subsubsection{Fisher information of the $^{1,3}S^e$, $^{1,3}P^o$, and $^{1,3}D^e$ doubly excited states of helium atom}\label{sec:fisherinf}
The Fisher information, $I[\rho]$, is another important information quantity~\citep{Fisher1972, Cover2006}. It is a measure of the gradient content of a distribution function and, for this reason, is a local measure which probes more deeply the changes in the electronic distribution. For our present purposes, i.e., the analysis and characterization of the topological properties of \ac{DES} of helium, we have calculated this quantity using the expression~\eqref{eq:fisherinformation} in terms of the one-particle densities. In addition, we have also calculated the values of this information measure for the singly excited states of helium for the sake of comparison. Let us start with the Fisher information for the bound states with total symmetry $^1S^e$: in the left panel of figure~\ref{fig:fisher1Se} we show the Fisher information values for the lowest bound $^1S^e$ states against their energy. The Fisher information decreases monotonically with increasing excitation energy, similarly to the hydrogen bound states in figure~\ref{fig:shannfisherhyd}. The same decreasing behavior is observed for the bound $^3S^e$ states in figure~\ref{fig:fisher3Se}. For hydrogen the limiting value of the Fisher information at the ionization threshold is zero (see figure~\ref{fig:shannfisherhyd}), but in the case of the He atom the limiting value is around $8$. Then, at variance with the Shannon entropy, whose value diverges at threshold, the Fisher information or gradient content seems to provide a discriminating limiting value at threshold. From the analysis of the Fisher information integral argument we can conclude that the Fisher information of the highly excited bound states does not differ significantly, yielding a limiting saturation value.
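The gradient-content integral of equation~\eqref{eq:fisherinformation} can likewise be checked against a hydrogenic closed form (a sketch with our own variable names, not the He \ac{CI} densities of this work): for a $1s$ density with nuclear charge $Z$ one has $\nabla\rho=-2Z\rho\,\hat{r}$, so $I[\rho]=4Z^2$ exactly.

```python
import numpy as np

Z = 2.0                                    # hydrogenic 1s with nuclear charge Z
rho  = lambda r: Z**3 * np.exp(-2.0 * Z * r) / np.pi
drho = lambda r: -2.0 * Z * rho(r)         # analytic radial derivative

Lbox = 30.0
x, w = np.polynomial.legendre.leggauss(80)
r  = 0.5 * Lbox * (x + 1.0)                # Gauss-Legendre nodes on [0, Lbox]
wr = 0.5 * Lbox * w

# I[rho] = \int |grad rho|^2 / rho d^3r for a spherically symmetric density.
I = np.sum(wr * 4.0 * np.pi * r**2 * drho(r)**2 / rho(r))
# Analytic hydrogenic value: I = 4 Z^2 (= 16 here)
```

The quadratic growth with $Z$ illustrates why the Fisher information rewards densities that are sharply localized near the nucleus, in contrast to the Shannon entropy.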
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\textwidth]{gfx/fisher1Se_new}
\caption{\label{fig:fisher1Se}Fisher Information $I(\rho)$ for the ground and singly excited states (left panel) and for the doubly excited states for the two series $_2(1,0)^+_n$ and $_2(-1,0)^+_n$ belonging to the total symmetry $^1S^e$.}
\end{figure}
Similarly, the values of the Fisher information for the $^1S^e$ resonances in helium are included in the right panel of figure~\ref{fig:fisher1Se}. In contrast, this measure, as applied to the He resonances, has neither a decreasing nor a monotonic behavior against the increasing energy. Actually, the Fisher information seems to split into two different paths (one, corresponding to the series $_2(1,0)^+_n$, decreases its gradient content, while the other, associated with the series $_2(-1,0)^+_n$, augments it). The two resonance series seem to tend to different limiting values at the second ionization threshold, but ultimately they collapse to the same final value around $0.8$.
Figure~\ref{fig:fisher1Po} corresponds to the Fisher information calculated for the bound states and resonances of helium belonging to the symmetry $^1P^o$. At variance with the $^1S^e$ symmetry in figure~\ref{fig:fisher1Se}, the Fisher information increases with the excitation energy to reach a limiting value $\sim 16$ at the ionization threshold. This behavior is common to the bound states of the other symmetries of helium, $^3S^e$, $^3P^o$ and $^{1,3}D^e$, as can be observed in the left panels of figures~\ref{fig:fisher3Se},~\ref{fig:fisher3Po},~\ref{fig:fisher1De}, and~\ref{fig:fisher3De}, respectively. The right panel of figure~\ref{fig:fisher1Po} shows the Fisher information values for the three $(K,T)$ series within the symmetry $^1P^o$. It is clearly noticeable that the three $(K,T)$ series follow paths with completely different behavior, to finally collapse to the same point at the second ionization threshold. Although the assignment of individual resonances to a given path relied on our previous knowledge of the existence of the $(K,T)$ series, the Fisher information is clearly able to distinguish local topological properties of the one-particle radial density among the different series. This result is important because it puts in evidence the existence of different resonant series by simply analyzing the electronic density with a selected tool like the Fisher information.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/fisher3Se_new}
\caption{\label{fig:fisher3Se}Fisher Information $I(\rho)$ for the singly excited states (left panel) and for the doubly excited states for the two series $_2(1,0)^-_n$ and $_2(-1,0)^-_n$ belonging to the total symmetry $^3S^e$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/fisher1Po_new}
\caption{\label{fig:fisher1Po}Fisher Information $I(\rho)$ for the singly excited states (left panel) and for the doubly excited states for the two series $_2(0,1)^+_n$, $_2(1,0)^-_n$ and $_2(-1,0)^0_n$ belonging to the total symmetry $^1P^o$.}
\end{figure}
Furthermore, if we analyse other symmetries we find the same behavior. In the right panels of figures~\ref{fig:fisher3Se},~\ref{fig:fisher3Po},~\ref{fig:fisher1De}, and~\ref{fig:fisher3De} we have plotted the Fisher information of \ac{DES} for the symmetries $^3S^e$, $^3P^o$, $^1D^e$, and $^3D^e$, respectively. The splitting is particularly evident and strong in the symmetry $^3D^e$, where the resonances belonging to each of the three series $_2(1,0)^-_n$, $_2(0,1)^0_n$, and $_2(-1,0)^0_n$ can be identified without ambiguity within a given set of points (see figure~\ref{fig:fisher3De}). We may conclude that the Fisher information provides a deeper insight into the quantum correlations (interelectronic correlations) that characterize the behavior of the helium atom prepared in an autoionizing state. This measure can discriminate in the one-particle density the two-electron correlations which are at the root of the quantum properties of \ac{DES} of helium. On the other hand, in the figures
we have depicted: a) the gradient part of the Fisher integral argument $r^2|\partial_r\rho({r})|^2$ in two different intervals, b) the one-particle electronic density $4\pi\rho({r})$, c) the derivative of the density $4\pi\partial_r\rho({r})$, and d) the complete Fisher information argument or differential Fisher information. From parts a) and d) of these figures we may conclude that the contribution to the total Fisher information divides itself into two regions just at $r=1$ \ac{a.u.}. The main contribution to this quantity seems to come from the first of these regions, i.e., $0\le r\le1$. Parts b) and c) of all these figures show a very peculiar feature of the one-electron density of the \ac{DES}. This density function has a critical point, i.e., a local minimum or maximum, or an inflection point, just at $r=1$ \ac{a.u.}. Consequently, the derivative of the density, which is shown in c), vanishes at this critical point. This behavior is common to all the one-particle electronic densities of \ac{DES} regardless of the symmetry. Actually, we do not yet know the reason for this peculiar property of the density. Moreover, it seems that there is an underlying universality in the topological structure of the resonances which emerges from the electronic density, yet to be uncovered.
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/fisher3Po_new}
\caption{\label{fig:fisher3Po}Fisher Information $I(\rho)$ for the singly excited states (left panel) and for the doubly excited states for the two series $_2(1,0)^+_n$, $_2(0,1)^-_n$ and $_2(-1,0)^-_n$ belonging to the total symmetry $^3P^o$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/fisher1De_new}
\caption{\label{fig:fisher1De}Fisher Information $I(\rho)$ for the singly excited states (left panel) and for the doubly excited states for the two series $_2(1,0)^+_n$, $_2(0,1)^0_n$ and $_2(-1,0)^0_n$ belonging to the total symmetry $^1D^e$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/fisher3De_new}
\caption{\label{fig:fisher3De}Fisher Information $I(\rho)$ for the singly excited states (left panel) and for the doubly excited states for the two series $_2(1,0)^-_n$, $_2(0,1)^0_n$ and $_2(-1,0)^0_n$ belonging to the total symmetry $^3D^e$.}
\end{figure}
\chapter{Entanglement on doubly excited states of Helium atom}\label{ch:entanglement}
Entanglement is understood as a fundamental physical characteristic of compound quantum systems that inexorably implies their non-separability into constituent parts. A physical system $S$, composed of two subsystems $S_1$ and $S_2$ and described by a state operator $\rho_S$ ($S$ is called a bipartite quantum system), is defined as entangled with respect to $S_1$ and $S_2$ if we cannot write the state operator as a convex sum
\begin{equation}{\label{eq:densityopsep}}
\rho_S=\sum_lw_l\hspace{3pt}\rho_{S_1}^l\otimes\rho_{S_2}^l,
\end{equation}
where the weights $w_l$ satisfy the conditions $w_l\ge0$ and $\sum_lw_l=1$. The singlet state of a pair of two-level systems (e.g., a system of two particles with spin $s=1/2$) is one of the simplest examples of an entangled state
\begin{equation}{\label{eq:densityopsep1}}
\ket{\Psi}=\frac{1}{\sqrt{2}}\left(\ket{\alpha}_1\otimes\ket{\beta}_2-\ket{\beta}_1\otimes\ket{\alpha}_2\right).
\end{equation}
If the compound (bipartite) system was prepared in an entangled state it cannot be expressed as a factorized state in terms of the individual states of each subsystem, i.e., $\ket{\Psi}\neq\ket{\psi_1}\otimes\ket{\psi_2}$. Consequently, it is not possible to assign a vector state to each subsystem individually.
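The non-factorizability of the state~\eqref{eq:densityopsep1} can be verified numerically (a minimal sketch with a helper of our own naming, not part of the numerical machinery of this work): writing the expansion coefficients in the product basis as a matrix, the number of nonzero singular values is the Schmidt rank, and a pure state factorizes if and only if that rank equals one.

```python
import numpy as np

def schmidt_rank(M, tol=1e-12):
    """Schmidt rank of a pure bipartite state with coefficient matrix
    M[i, j], where |Psi> = sum_ij M[i, j] |i>_1 (x) |j>_2."""
    return int(np.sum(np.linalg.svd(M, compute_uv=False) > tol))

# Singlet state: coefficients of |alpha beta> and |beta alpha>.
singlet = np.array([[0.0,              1.0 / np.sqrt(2)],
                    [-1.0 / np.sqrt(2), 0.0            ]])
# A factorized state |alpha>_1 (x) |beta>_2 for comparison.
product = np.outer([1.0, 0.0], [0.0, 1.0])
# schmidt_rank(singlet) -> 2 (entangled); schmidt_rank(product) -> 1
```

The singular-value decomposition here plays exactly the role of the Schmidt decomposition discussed below for distinguishable subsystems.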
The concept of \textit{entanglement} (\textit{entrelazamiento} in Spanish or \textit{Verschr\"ankung} in German) was coined by E. Schr\"odinger in a sequel to the \ac{EPR} paper. Schr\"odinger pointed out the striking implications of the entanglement concept~\citep{Schrodinger1935,Schrodinger1936,Wheeler1984}:\\
\\
\textit{When two systems, of which we know the states by their respective representatives, enter into temporary physical interaction due to known forces between them, and when after a time of mutual influence the systems separate again, then they can no longer be described in the same way as before, viz.\ by endowing each of them with a representative of its own. I would not call that one but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought.}
\section{\label{sec:entanglement}Quantum entanglement of indistinguishable particles}
The emergence of new ideas from classical and quantum information theory provides an alternative perspective on the study of the electronic structure of atoms and, particularly, on the measurement of the degree of quantum entanglement for both fermionic and bosonic indistinguishable particles. Entanglement is a fundamental feature of compound quantum systems~\citep{Horodecki2009}. Moreover, it has become an important and useful resource for quantum information processing~\citep{Bengtsson2006,Nielsen2000}. It remains to be clarified what role entanglement plays in phenomena like quantum phase transitions~\citep{Zanardi2006,Zanardi2007} and what its relationship is with the ionization processes in atoms and molecules, understood as phase transitions. The fundamental question is how to establish whether a general multipartite quantum state is entangled. For the case of pure states, the Schmidt decomposition is a successful and widely accepted measure of entanglement~\citep{Peres1995}. Unfortunately, most proposed measures of entanglement for general (mixed) states involve highly demanding extremizations which are difficult to handle analytically. Only for a few physical systems have analytical measures of entanglement been found, e.g., for the general case of a pair of binary quantum objects (qubits) there is a formula for the entanglement of formation, called the \textit{concurrence}, as a function of the density matrix~\citep{Wootters1998, Horodecki2009}. There are some additional criteria or witnesses for entanglement based, particularly, on the separability of quantum states~\citep{Horodecki2009,Peres1996,Horodecki1996,Huber2010}. In our case of fermionic systems in atoms and molecules we are interested in characterizing the quantum correlations, at short distances from the nucleus, of pure states of a two-particle fermionic system, taking into account the indistinguishability of the two electrons. 
A pure state of indistinguishable particles must be written in terms of Slater determinants, which introduce the indistinguishability by means of the \textit{symmetrization postulate}, see section~\ref{sec:symmetrizationpos}. However, this procedure does not introduce the correlation necessary to define a fermionic system as entangled. This is commonly known as \textit{statistical entanglement}, which is a non-distillable kind of entanglement, i.e., it is not useful as a resource in quantum information processing and quantum computation. The symmetrization postulate is applied to factorized configurations of distinguishable electrons, leading to a Slater determinant. This is the case of the \ac{HF} method, which is based on the best possible description of the \ac{WF} with a single Slater determinant. The additional quantum correlations arise when the description of the quantum correlations is done with a \ac{WF} that comprises more than one Slater determinant. In general, this scheme is called \ac{CI}, see section~\ref{sec:cimethod}. Moreover, the correlation energy is defined as the difference between the limit \ac{HF} energy and the limit \ac{CI} energy (or the exact energy if it exists). Consequently, in the case of fermions a concept analogous to the Schmidt decomposition is introduced, called the Slater decomposition, together with the Slater rank~\citep{Schliemann2001}.
Following~\citep{Ghirardi2004}, the general criterion to measure the entanglement of a system of two indistinguishable particles can be jointly established: i) via the Slater-Schmidt decomposition and the determination of the Slater rank, and ii) via the analysis of the von Neumann entropy or the linear entropy of the one-particle reduced density operator. These procedures enable us to elucidate whether the fermionic correlations of the states are elementary effects of the indistinguishability of the particles or direct evidence of entanglement. The entanglement notion for a compound system of two fermions is discussed widely in~\citep{Schliemann2001}.
\subsection{\label{sec:entanglementhe}Entanglement measure of the helium atom}
The quantum formalism describes the total state of a compound system in a Hilbert space built as a \textit{tensor} product of the subsystem spaces, i.e., $\otimes^{n}_{l=1}H_{l}$~\citep{Osenda2007,Osenda2008}. The \textit{superposition principle} allows us to write the total state of the system as a sum of antisymmetrized products of spin-orbitals, which constitutes the \ac{CI} method~\citep{Yanez2009, Dehesa2012a, Dehesa2012b}. With a basis of spin-orbitals of size $N$ it is possible to write the total \ac{WF} in the particular form required to determine the Slater rank, as follows~\citep{Schliemann2001}:
\begin{equation} \label{expansionslater}
\ket{\Psi (1,2)} = \sum_i a_i \frac{1}{\sqrt{2}} [ |2i-1\rangle_1 \otimes |2i\rangle_2 - |2i\rangle_1 \otimes |2i-1\rangle_2 ]
\end{equation}
where the index $i$ runs over all the spin-orbitals of the one-electron basis and the coefficients $a_i$ must satisfy the normalization condition $\sum_i|a_i|^2=1$. This is the condition of Theorem $3.2$ of reference~\citep{Ghirardi2004}. The number of coefficients $a_i\neq0$ appearing in the expansion~\eqref{expansionslater} is called the Slater number or rank of the state $\ket{\Psi (1,2)}$. The relationship between the Slater rank and the concept of entanglement can be stated as follows: a state is entangled if its Slater number satisfies $N_S>1$. Therefore, a state whose description is based on a single Slater determinant is not entangled, i.e., the only correlation present in the state is due to the symmetrization postulate. For this reason, a state described in terms of the \ac{CI} method is, in general, a fermionic entangled state. In any case, the entanglement information for a bipartite system of two electrons is contained in the reduced density operator $\hat{\rho}_1=Tr_2\hat{\rho}$. This means that we must average over all relevant coordinates of subsystem $2$ by taking the partial trace. The reduced density matrix is calculated by taking a partial trace over the second electron in the full density matrix
\begin{align}\label{eq:trace}
\hat{\rho}(\mathbf{r}_1,\mathbf{r}_1^\prime)&=Tr_2\hat{\rho}(\mathbf{r}_1,\mathbf{r}_2;\mathbf{r}_1^\prime,\mathbf{r}_2^\prime) \\ \nonumber
&=\int d\mathbf{r}_2 \Psi^{CI}(\mathbf{r}_1,\mathbf{r}_2)\Psi^{*CI}(\mathbf{r}_1^\prime,\mathbf{r}_2).
\end{align}
Now, we can use the following two quantities to measure the amount of entanglement between the particles of a two-electron system: the linear entropy which is also a measure of the purity of the reduced system
\begin{align}\label{eq:linearentropy}
S_L&=1-Tr[\hat{\rho}(\mathbf{r}_1,\mathbf{r}_1^\prime)^2] \\ \nonumber
&=1-\int d\mathbf{r}\,d\mathbf{r}^\prime\,\hat{\rho}(\mathbf{r},\mathbf{r}^\prime)\hat{\rho}(\mathbf{r}^\prime,\mathbf{r}),
\end{align}
and the von Neumann entropy, expressed in terms of the eigenvalues $\lambda_i$ of the reduced density operator,
\begin{align}\label{eq:vnentropy}
S_{VN}&=-Tr[\hat{\rho}(\mathbf{r}_1,\mathbf{r}_1^\prime)\log_2\hat{\rho}(\mathbf{r}_1,\mathbf{r}_1^\prime)] \\ \nonumber
&=-\sum_i \lambda_i \log_2 \lambda_i.
\end{align}
Since the~\ac{CI} method is based on the use of (spin and angular momentum) symmetry-adapted two-electron configurations (see equation~\eqref{eq:ciwfunc}), the reduced density matrix $\rho(\mathbf{r},\mathbf{r}^\prime)$ can be calculated in a very simple algebraic form in terms of the \ac{CI} expansion coefficients in equation~\eqref{eq:ciwfunc}, thus avoiding the very demanding numerical evaluation of multidimensional integrals of the density matrix~\citep{Coe2008, Abdullah2009,Dehesa2012a,Dehesa2012b}. With these considerations and within the \ac{CI} method, the partial trace, the linear entropy and the von Neumann entropy take the following straightforward forms
\begin{equation}
\hat{\rho}_{ n_{1}l_{1};n_{1}^{\prime}l_{1}^{\prime}}=\sum_{n l}C_{n_{1}l_{1};n l}C^{*}_{n_{1}^{\prime}l_{1}^{\prime};n l}, \label{eq:reduceddden2}
\end{equation}
and
\begin{subequations}
\begin{align}
S_L&= 1-\sum_{nl,n^\prime l^\prime} \hat{\rho}_{ nl;n^{\prime}l^{\prime}} \hat{\rho}_{ n^\prime l^\prime;nl}, \label{eq:linearen2}\\
S_{VN}&= -\sum_i \lambda_i \log_2 \lambda_i, \label{eq:vonnewman2}
\end{align}
\end{subequations}
where the $\lambda_i$ are the eigenvalues of the one-electron reduced density matrix $\hat{\rho}_{ n_{1}l_{1};n_{1}^{\prime}l_{1}^{\prime}}$.
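The algebra of equations~\eqref{eq:reduceddden2}--\eqref{eq:vonnewman2} can be sketched in a few lines (a schematic illustration with a generic coefficient matrix and our own function name, not the \ac{CI}-\ac{FM} code of this work; spin and angular-momentum structure is omitted):

```python
import numpy as np

def entropies_from_ci(C):
    """Build rho_1 = C C^dagger from CI-like coefficients C[k, l] and
    return the linear entropy S_L and the von Neumann entropy S_VN."""
    C = C / np.linalg.norm(C)              # normalize the CI vector
    rho1 = C @ C.conj().T                  # one-particle reduced density matrix
    lam = np.linalg.eigvalsh(rho1)
    lam = lam[lam > 1e-14]                 # discard numerical zeros
    return 1.0 - np.sum(lam**2), -np.sum(lam * np.log2(lam))

# A single configuration carries only statistical correlation:
C1 = np.zeros((4, 4)); C1[0, 0] = 1.0              # S_L = 0, S_VN = 0
# An equal-weight two-configuration superposition is entangled:
C2 = np.zeros((4, 4)); C2[0, 0] = C2[1, 1] = 1.0   # S_L = 1/2, S_VN = 1
```

Note that the equal-weight two-configuration example yields exactly $S_L=1/2$ and $S_{VN}=1$, the saturation values approached by the highly excited states in table~\ref{tab:table1079}.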
Our goal is to extend these measures of entanglement to the general analysis of the resonant states of the helium atom in order to obtain a deeper insight into their electronic correlation structure, as well as to examine the classification of the resonant Rydberg series below the second ionization threshold under the scrutiny of entanglement witnesses. Some studies have recently been performed on the analysis of quantum entanglement in two-electron systems~\citep{Yanez2010, Manzano2010}, particularly in two-electron toy models which can be solved analytically (see also~\citep{Amovilli2003,Amovilli2004}), e.g., Moshinsky's atom~\citep{Moshinsky1968}, Hooke's atom~\citep{Taut1993}, or Crandall's atom~\citep{Crandall1984}. In Moshinsky's atom all the interactions between particles are harmonic. In Hooke's atom, the interelectronic interaction is replaced by a Coulomb interaction, and in Crandall's atom the interaction between the electrons is replaced by a polarization-like interaction $1/r^2_{12}$. None of these toy models allows ionization of the electrons, i.e., there are no resonant states, which appear only when all the interactions are Coulomb-like. However, the quantum systems subject to these interaction potentials can be solved exactly, and therefore exact \ac{WF} and density operators are readily available. Consequently, the entanglement measures can be calculated exactly. Nevertheless, there are only entanglement values for the ground state and for a few excited states in these systems. In reference~\citep{Manzano2010}, in addition to the analysis of the entanglement of Hooke's atom and Crandall's atom via the linear entropy, there is a preliminary attempt at an entanglement analysis in two-electron atoms. To this purpose, the authors employ a \ac{CI} expansion in explicitly correlated Hylleraas coordinates of the $(s,t,u)$ kind. However, their analysis is focused only on the ground state.
Finally, in the following section we present our calculated measures of entanglement for both the bound states and the \ac{DES} of the helium atom via our high-quality \ac{CI}-\ac{FM} reduced density matrix.
\subsection{Results of the entanglement amount in the eigenspectrum of the helium atom.}
\begin{table}[h!]
\centering
\begin{tabular}{cccccccc}
\hline\hline
& \citeauthor{Dehesa2012a} &&\multicolumn{2}{c}{ \citeauthor{Benenti2013}}&&\multicolumn{2}{c}{Present}\\
\cline{2-2}\cline{4-5}\cline{7-8}
& $S_{L}$&&$S_{L}$&$S_{VN}$&&$S_{L}$&$S_{VN}$\\ \hline
$\ket{1s1s;^1\hspace{-3.5pt}S^e}$ &0.015914 && 0.01606 & 0.0785 && 0.011460 & 0.066475 \\
$\ket{1s2s;^1\hspace{-3.5pt}S^e}$ &0.48866 && 0.48871 & 0.991099 && 0.487222 & 0.988964 \\
$\ket{1s3s;^1\hspace{-3.5pt}S^e}$&0.49857 && 0.49724 & 0.998513 && 0.497154 & 0.998530 \\
$\ket{1s4s;^1\hspace{-3.5pt}S^e}$&0.49892 && 0.49892 & 0.999577 && 0.498909 & 0.999631 \\
$\ket{1s5s;^1\hspace{-3.5pt}S^e}$&0.4993 && 0.499565 & 0.999838 && 0.499468 & 0.999881 \\ \hline
$\ket{1s2s;^3\hspace{-3.5pt}S^e}$& 0.50038 && 0.500378 & 1.00494 && 0.500375 & 1.004924 \\
$\ket{1s3s;^3\hspace{-3.5pt}S^e}$& 0.50019 && 0.5000736 & 1.00114 && 0.500073 & 1.001136 \\
$\ket{1s4s;^3\hspace{-3.5pt}S^e}$& 0.49993 && 0.5000267 &1.000453 && 0.500026 & 1.000450\\
$\ket{1s5s;^3\hspace{-3.5pt}S^e}$& 0.50012 && 0.5000125 & 1.000091 && 0.500012 & 1.000227 \\
\hline\hline
\end{tabular}
\caption[Linear Entropy and von Neumann Entropy for bound states of helium: Symmetries $^{1}S^e$ and $^{3}S^e$. ]{\label{tab:table1079}Linear Entropy and von Neumann Entropy for bound states of helium, symmetries $^{1}S^e$ and $^{3}S^e$, from~\citep{Dehesa2012a,Dehesa2012b,Benenti2013} and the present work.}
\end{table}
\subsubsection{A comparison with previously reported values of the amount of entanglement for the $^{1,3}S^e$ bound states of helium atom and results of the entanglement in the bound states of $^{1,3}P^o$ and $^{1,3}D^e$ symmetries. }
First, we have calculated the linear entropy $S_L$ (equation~\eqref{eq:linearen2}) and the von Neumann entropy $S_{VN}$ (equation~\eqref{eq:vonnewman2}) for several bound eigenstates of the helium atom. Our results, along with previously published results by other authors, are included in table~\ref{tab:table1079}~\citep{Dehesa2012a,Dehesa2012b,Benenti2013}. We present in this table the amount of entanglement for the ground state of the helium atom and for some singly excited $^{1,3}S^e$ states. Our results are in good agreement with the reported ones regardless of the method used to calculate the integrals. \citeauthor{Dehesa2012a} calculate the integral~\eqref{eq:linearentropy} using a Monte Carlo numerical integration of a $12$-dimensional definite integral. They build the electronic density by means of the explicitly correlated Kinoshita-type~\ac{WF}s~\citep{Koga1995,Koga1996}. This highly demanding computational method provides a good description of the density, and hence of the entanglement amount, for the bound states of the helium atom. However, due to the intrinsic complexity of this method, regardless of the goodness of the Kinoshita-like expansions, we have chosen the alternative \ac{CI}-\ac{FM} to deal with \ac{DES}. On the other hand,~\citeauthor{Benenti2013} also employ a \ac{CI} method, where the radial \ac{WF} is obtained by means of a variational procedure using expansions in terms of uncorrelated orthogonal Slater-type orbitals. In this sense, our present method is similar to that of \citeauthor{Benenti2013}, although in our case we use orthogonal hydrogenic orbitals in terms
of B-splines. The important point is that by using a \ac{CI} method with an orthonormal basis, the variational expansion coefficients straightforwardly allow us to build the density matrix according to equation~\eqref{eq:reduceddden2}. Consequently, the computation of the linear and von Neumann entropies can be readily performed without explicit and complex (Monte Carlo) integration procedures. Once again, our results for the entanglement amount are in very good agreement with the values reported by the other authors. We have found that the entanglement of the bound $^1S^e$ states (see the linear and von Neumann entropies in table~\ref{tab:table1079}) increases with the excitation to reach a saturation value at the first ionization threshold. At variance, for the bound $^3S^e$ states, the entanglement content slightly decreases from its maximum, which corresponds to the lowest triplet state. Our intuition suggests that the entanglement should be stronger for states of the helium atom that keep the electrons localized close to the nucleus. This is in agreement with the result for the $^3S^e$ bound states, but not with that for the $^1S^e$ singly excited states. The linear entropy of these states seems to increase monotonically towards a constant value (the value of maximal mixing of the reduced density matrix). This behavior is shown in the left panels of figures~\ref{fig:entanglement1Se} and~\ref{fig:entanglement3Se}. Additionally, the increasing behavior of the entanglement for the $^1S^e$ bound states seems to be unique; figures~\ref{fig:entanglement1Po},~\ref{fig:entanglement1De},~\ref{fig:entanglement3Po} and~\ref{fig:entanglement3De} show that all the remaining bound states of helium exhibit a decreasing entanglement as a function of the energy of the singly excited states, and hence this fact is again in agreement with the intuitive picture of decreasing correlation for highly excited states of helium.
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_LE_1Se}
\caption{\label{fig:entanglement1Se}Linear entropy $S_L(\rho)$ for the ground and the lowest singly excited states for the $^1S^e$ symmetry below the first ionization threshold (left panel) and for the two series $_2(1,0)^+_n$ and $_2(-1,0)^+_n$ of resonances belonging to symmetry $^1S^e$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.70\textwidth]{gfx/entanglement_LE_1Po}
\caption{\label{fig:entanglement1Po}Linear entropy $S_L (\rho)$ for the lowest singly excited states of $^1P^o$ symmetry below the first ionization threshold (left panel) and for the three series $_2(0,1)_n^+$, $_2(1,0)_n^-$, and $_2(-1,0)_n^0$ of resonant doubly excited states of symmetry $^1P^o$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.70\textwidth]{gfx/entanglement_LE_1De}
\caption{\label{fig:entanglement1De}Linear entropy $S_L (\rho)$ for the lowest singly excited states of $^1D^e$ symmetry below the first ionization threshold (left panel) and for the three series $_2(1,0)_n^+$, $_2(0,1)_n^0$, and $_2(-1,0)_n^0$ of resonant doubly excited states of symmetry $^1D^e$ below the second ionization threshold (right panel).}
\end{figure}
\subsubsection{Entanglement amount of the $^{1,3}S^e$, $^{1,3}P^o$, and $^{1,3}D^e$ doubly excited states of the helium atom}
We assume that most of the entanglement content in the resonance \ac{WF} is given by the localized part of the state according to the Feshbach partitioning. This Q-space part is computed with a \ac{CI} approach, which provides a description in terms of a large combination of configurations based upon antisymmetrized products of orthogonal orbitals. This structure of the \ac{WF} corresponds to an entangled state. We provide the numerical values of the amount of entanglement for the He resonances, for all spin singlets and triplets with $L=0,1$ and $2$, below the second ionization threshold. As an entanglement measure we first use the linear entropy. Absolute values of the linear entropy in atomic systems are so far difficult to interpret, and we are more interested in its relative behavior as a function of the excitation energy in the resonance Rydberg series. In figures~\ref{fig:entanglement1Se},~\ref{fig:entanglement1Po},~\ref{fig:entanglement1De},~\ref{fig:entanglement3Se},~\ref{fig:entanglement3Po} and~\ref{fig:entanglement3De} we have plotted the linear entropy as a measure of the amount of entanglement of \ac{DES} of the helium atom. The behavior of the entanglement of these resonant states differs strongly from that of the singly excited states. The entanglement neither increases nor decreases monotonically with the energy; instead it shows, at first sight, no well-defined behavior, an almost random one. If we take no notice of the a priori colour separation in the figures, it may take a long time to pick out any underlying regularity. However, after a more careful scrutiny it is possible to identify two or more independent trends. We have assigned a colour to each of the different paths in the figures to discriminate the independent behavior of each $(K,T)$ series.
The fundamental issue is that the linear entropy seems to clearly distinguish the resonance $(K,T)$ series according to their entanglement content, whose behavior is not monotonic with increasing energy and is different for each series. This fact becomes a fundamental tool that enables us to discriminate the resonances into separate sets based only on the purely quantum correlation of non-separability of a global quantum state of the helium atom.
The von Neumann entropy can also be used as a witness of entanglement. We have therefore calculated the von Neumann entropies for the bound and resonant states in He. We find a behavior for the resonance series very close to that of the linear entropies, and these results are relegated to Appendix~\ref{ch:supgraphics} as a compilation of figures~\ref{fig:entanglementVN1Se},~\ref{fig:entanglementVN1Po},~\ref{fig:entanglementVN1De},~\ref{fig:entanglementVN3Se},~\ref{fig:entanglementVN3Po}, and~\ref{fig:entanglementVN3De}.
As we suggested above in section~\ref{sec:fisherinf}, the Fisher information also allows us to reach a similar conclusion about the classification of resonances based on a purely topological analysis of the electronic density. The very first question that arises is how a local measure can be compared with an essentially global measure like the linear entropy, and yet both provide similar qualitative trends. A first attempt to answer this question is to note that all the quantum correlation features of the \ac{DES} are hidden in the off-diagonal elements of the reduced density matrix, i.e., the so-called coherence elements of the state operator. We must remember that a measure like the Shannon entropy is calculated from the electronic density, which represents only the diagonal elements of the state operator.
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_LE_3Se}
\caption{\label{fig:entanglement3Se}Linear entropy $S_L (\rho)$ for the lowest singly excited states of $^3S^e$ symmetry below the first ionization threshold (left panel) and for the two series $_2(1,0)^-_n$ and $_2(-1,0)^-_n$ of resonant doubly excited states of symmetry $^3S^e$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{gfx/entanglement_LE_3Po}
\caption{\label{fig:entanglement3Po}Linear entropy $S_L (\rho)$ for the lowest singly excited states of $^3P^o$ symmetry below the first ionization threshold (left panel) and for the three series $_2(1,0)^-_n$, $_2(0,1)^0_n$, and $_2(-1,0)^0_n$ of resonant doubly excited states of symmetry $^3P^o$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{gfx/entanglement_LE_3De}
\caption{\label{fig:entanglement3De}Linear entropy $S_L (\rho)$ for the lowest singly excited states of $^3D^e$ symmetry below the first ionization threshold (left panel) and for the three series $_2(1,0)^+_n$, $_2(0,1)^-_n$, and $_2(-1,0)^-_n$ of resonant doubly excited states of symmetry $^3D^e$ below the second ionization threshold (right panel).}
\end{figure}
\chapter{Conclusions and Perspectives}\label{ch:conclusions}
\begin{enumerate}
\item The Shannon entropy increases monotonically for both bound states and resonances. This quantity is not able to separate the different $(K,T)$ series in the resonant manifold. The global characteristics of the density function cannot be used to classify the doubly excited states.
\item The Fisher information seems to have a trend towards a constant value in each case. This measure, as applied to He resonances, has neither a decreasing nor a monotonic behavior with increasing energy. Actually, the Fisher information seems to split into different paths. Because this quantity is sensitive to strong changes of the density function over a small-sized region, these strong local variations allow us to identify each resonance $(K,T)$ series.
\item The linear entropy measures the amount of entanglement between the two electrons in our system. We can conclude that the linear entropy may clearly distinguish the resonance $(K,T)$ series according to their entanglement content, whose behavior is not monotonic with increasing energy and is different for each series. This fact becomes a fundamental tool that enables us to discriminate the resonances into separate sets based only on the purely quantum correlation of non-separability of a global quantum state of the helium atom.
\item We leave the convergence analysis, due to the finite box approximation, for another future work which must include the analysis of the entanglement in the isoelectronic series of the helium atom.
\item We also leave the analysis of information theory measures, complexity and quantum entanglement of the resonances belonging to the upper continuum thresholds associated to excited target configurations (He$^+(n=2,3,4,\dots)+e^-$) for another future work.
\end{enumerate}
\chapter{The basis of B-Splines}
An extended method in quantum mechanics for solving the Schr\"odinger and Dirac equations has been the use of basis sets. Applying the variational method to the Schr\"odinger equation, Hylleraas obtained a value of $-2.903648$~\ac{a.u.} for the ground state energy of helium using only five trial functions~\citep{Hylleraas1929}. Slater-type orbitals, proposed in~\citep{Slater1930}, are another kind of basis functions, used as atomic orbitals in the linear combination of atomic orbitals variational method. This widely used method enables us to transform the solution of a differential equation into a generalized eigenvalue problem~\citep{Shore1973,Shore1974, Johnson1988, Cheng1994,Bachau2001}. Basis sets of many types have been routinely used in molecular physics and other branches of physics. In atomic physics this method has been systematically used together with other accurate techniques such as finite-difference methods. Nowadays, the great development of highly optimized and accurate numerical linear algebra routines for matrix diagonalization allows for the implementation of several basis solution schemes.
\begin{figure}[h]
\centering
\includegraphics[width=0.85\textwidth]{gfx/bsplines}
\caption[Set of ten B-splines of order $k=5$.]{\label{fig:biplaneso}Set of ten B-splines of order $k=5$ generated from the knot sequence $\{0,0,0,0,0,1.6667,3.3333,5,6.6667,8.3333,10,10,10,10,10\}$, within a box of length $L=10$. Multiple knots are included at the edges of the box to guarantee continuity and smoothness in the derivatives.}
\end{figure}
\section{General features of B-splines}\label{sec:bsplines}
In this appendix we present the B-spline basis set and its most important features. Sets of B-splines were originally designed to generalize polynomials for the purpose of approximating arbitrary functions~\citep{Schoenberg1946}; an exhaustive description of this particular basis set and its properties can be found in~\citep{deBoor1978,Bachau2001}. It is necessary to introduce some mathematical definitions in order to obtain a deep understanding of the B-spline set.
\begin{itemize}
\item The B-splines are piecewise polynomial functions of order $k$ (degree $k-1$) that are positive, compactly supported, and $L^2$ integrable, defined in a restricted space known as a \textit{box}.
\item The polynomials of order $k$ (maximum degree $k-1$) are
\[p(x)=a_0+a_1x+\dots+a_{k-1}x^{k-1}\]
\item Consider an interval $I=[a,b]$ divided into $l$ subintervals $I_j=[\zeta_j,\zeta_{j+1}]$ by a sequence of $l+1$ points $\{\zeta_j\}$ in strict ascending order
\[a=\zeta_1<\zeta_2<\dots<\zeta_{l+1}=b.\]
The $\zeta_j$ will be called \textit{breakpoints} (\ac{bp}s).
\item Now, with the aim of defining adequate continuity conditions at each interior \ac{bp} $\zeta_j$, let us associate with them a second sequence of non-negative integers, $\nu_j$, $j=2,\dots,l$, $\nu_j\geq0$, which define the continuity condition $C^{\nu_j-1}$ (any function which is continuous on a given interval, together with its derivatives up to order $n$, is said to be of class $C^n$) at the associated \ac{bp} $\zeta_j$. With the end \ac{bp}s $\zeta_1$ and $\zeta_{l+1}$ we associate $\nu_1=\nu_{l+1}=0$, that is, we do not require any continuity.
\item The last set of points that we need to introduce is $\{t_i\}$, where each $t_i$ is called a \textit{knot}. This sequence of points is given in ascending order, but its elements are not necessarily distinct, $\{t_i\}_{i=1,\dots,m}$, where $t_1\le t_2\le\dots\le t_m$. The $\{t_i\}$ sequence is associated with $\zeta_j$ and $\nu_j$ as follows:
\begin{eqnarray*}
t_1 &=& t_2 = \dots =t_{\mu_1}=\zeta_1;\hspace{15pt} \mu_1=k\\ \nonumber
t_{\mu_1+1} &=&\dots=t_{\mu_1+\mu_2}=\zeta_2\\ \nonumber
&\dots&\\ \nonumber
t_{p+1} &=&\dots=t_{p+\mu_i}=\zeta_i;\hspace{30pt} p=\sum_{r=1}^{i-1}\mu_r\\ \nonumber
&\dots&\\ \nonumber
t_{n+1} &=&\dots=t_{n+k}=\zeta_{l+1};\hspace{25pt}\mu_{l+1}=k\hspace{20pt}n=\sum_{r=1}^{l}\mu_r \nonumber
\end{eqnarray*}
where $\mu_j$ is the multiplicity of the knots $t_i$ at $\zeta_j$ and is given by $\mu_j=k-\nu_j$. The most common choice for knot multiplicity at inner \ac{bp}s is unity; corresponding to maximum continuity $C^{k-2}$. This is our choice in all calculations throughout the thesis. With this choice the whole number of B-spline functions is fixed and given by $n=l+k-1$.
\item Any smooth function can be expressed as a linear combination of the $B_i$ and it will be called a \textit{\ac{pp}-function} over $[a,b]$.
\[f=\sum_{i=1}^nc_iB_i\]
\end{itemize}
The following qualitative features of B-splines provide a deeper understanding of their behavior:
\begin{enumerate}
\item A single B-spline $B(x)$ is defined by its order $k>0$, and a set of $k+1$ knots, $\{t_i,\dots,t_{i+k} \}$ such that $t_i<t_{i+k}$.
\item $B(x)$ is a \ac{pp}-function of order $k$ over $[t_i,t_{i+k}]$.
\item $B(x)>0$ for $x\in\hspace{2pt}(t_i,t_{i+k})$.
\item $B(x)=0$ for $x\notin\hspace{2pt}[t_i,t_{i+k}]$.
\item The knots need not be equidistant, and the shape of $B(x)$ changes smoothly as the knots change.
\item The B-splines are normalized as $\sum_iB_i(x)=1$ over the whole $[t_i, t_{i+k}]$.
\item The set of B-splines do not form an orthonormal set of basis functions.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/bsplines1}
\caption[Recursive evaluation of B-splines.]{\label{fig:biplanes}The B-splines are generated by a recursive evaluation scheme. This figure sketches the recursive method for evaluating B-splines up to order $k=4$, relative to the knot sequence $\{0,1,2,3,4,5,6,7,8,9,10\}$.}
\end{figure}
\section{Building up B-splines function set}
Each interval $I_j=[\zeta_j,\zeta_{j+1}]=[t_i,t_{i+1}]$ is characterized by a pair of consecutive knots $t_i<t_{i+1}$. The point $t_i$ is called the left knot of the interval $I_j$, and determines the indices of the $B_i$ contributing over each interval $(t_i, t_{i+k})$. Additionally, over this interval exactly $k$ B-splines are nonzero, $B_j(x)\neq0$ for $j=i-k+1,\dots,i$; the first is $B_{i-k+1}$, which ends at $t_{i+1}$, and the last is $B_i$, which starts at $t_i$. Therefore, we have identically $B_i(x)\cdot B_j(x)=0$ for $|i-j|\geq k$.
In general a family of B-spline functions, $B_i(x)$, $i=1,\dots,n$ is completely defined given $k>0$, $n>0$ and a sequence of knots $\mathbf{t}=\{t_i\}_{i=1,\dots,n+k}$.
The B-spline functions can be generated by a recursive evaluation method. Each function satisfies the following recursion relation
\[B^k_i(x)=\frac{x-t_i}{t_{i+k-1}-t_i}B^{k-1}_{i}(x)+\frac{t_{i+k}-x}{t_{i+k}-t_{i+1}}B^{k-1}_{i+1}(x);\]
\noindent we must define the B-splines of order $k=1$
\[B_i^1(x)=1\hspace{15pt}t_i\leq x<t_{i+1}\hspace{15pt}\text{and}\hspace{15pt} B_i^1(x)=0 \hspace{15pt}\text{otherwise,}\]
\noindent this formula becomes the basis for the algorithm employed for the practical evaluation of B-splines: given an arbitrary point $x$, one may generate, by means of the recursion, the values of all the $k$ B-splines which are nonzero at $x$, as illustrated in Fig.~\ref{fig:biplanes}. The derivative of a B-spline of order $k$, being a \ac{pp}-function of order $k-1$, can also be expressed as a linear combination of B-splines of the same order
\[\partial_xB^k_i(x)=\frac{k-1}{t_{i+k-1}-t_i}B^{k-1}_{i}(x)-\frac{k-1}{t_{i+k}-t_{i+1}}B^{k-1}_{i+1}(x).\]
The practical evaluation of the $k$ B-spline functions for an arbitrary value of $x$ is done using the \texttt{FORTRAN} routines originally developed in~\citep{deBoor1978}. The routine \texttt{BSPLVP} provides a numerical implementation of the recursion relation both for B-splines and their derivatives; this routine requires as input, as one would expect, the order $k$ of the B-splines, the knot sequence $t_i$, and the desired point $x$ where we want to evaluate the functions. All these quantities, as we saw previously, define the B-spline functions. Additionally, we must specify the index of the knot $t_i$ closest to $x$ from the left, which can be calculated using the routine \texttt{INTERV}. \texttt{BSPLVP} then returns the $k$ B-spline functions that are nonzero at $x$, i.e., $B^{(k)}_{i-k+1}(x),\dots,B^{(k)}_{i}(x)$.
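The recursion relation above is straightforward to implement directly. The following Python sketch (illustrative only; the thesis calculations use de Boor's \texttt{FORTRAN} routines) evaluates a single B-spline by the Cox-de Boor recursion and checks the partition-of-unity property $\sum_iB_i(x)=1$:

```python
def bspline(i, k, t, x):
    """Evaluate B_i^k(x) on the knot sequence t via the Cox-de Boor
    recursion; order k means polynomial degree k-1."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + k - 1] > t[i]:
        left = (x - t[i]) / (t[i + k - 1] - t[i]) * bspline(i, k - 1, t, x)
    right = 0.0
    if t[i + k] > t[i + 1]:
        right = (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline(i + 1, k - 1, t, x)
    return left + right

# Uniform knot sequence {0,1,...,10}; inside the region where a full
# set of order-4 splines overlaps, the nonzero splines sum to one.
knots = list(range(11))
total = sum(bspline(i, 4, knots, 5.3) for i in range(len(knots) - 4))
print(total)  # ≈ 1.0 (partition of unity)
```

The guards on the denominators handle coincident knots (multiplicity greater than one), which appear at the box edges in the sequences used throughout this thesis.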
\section{Breakpoint sequences}\label{sec:bps}
To successfully achieve the desired behavior of the \ac{WF}s that we want to calculate, by means of a variational approach using B-splines, it is necessary to choose an adequate sequence of \ac{bp}s $\zeta_j$. Most realistic calculations in atomic and molecular physics have made use of different sequences, selected after a careful consideration of the electron density distribution within a box of length $L$. Actually, there is a collection of different sets of \ac{bp}s to choose from. Let us now review the most commonly used ones.
\subsection{Linear sequence}
This is the simplest of all the knot sequences that we might consider. It divides, by means of $l$ \ac{bp}s, an arbitrary interval $[r_{min}, r_{max}]$ into $(l-1)$ segments of \textit{equal} length. The practical implementation of this sequence is straightforward:
\begin{equation}\label{eq:linearsequence}
\zeta_j=r_{min}+\frac{r_{max}-r_{min}}{l-1}(j-1)\hspace{20pt}j=1,\dots,l.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{gfx/bpsequences.pdf}
\caption{\label{fig:bpsequences} Comparison of three sequences of breakpoints: linear sequence (circular points), linear-exponential sequence (diamond points), and the exponential sequence (square points).}
\end{figure}
This kind of \ac{bp} sequence is not able to describe appropriately the bound states of H-like atoms. These particular states are strongly localized in regions near the atomic nucleus~\citep{Castro-Granados2012}. Large radial boxes along with a finite number of equally spaced \ac{bp}s make the radial spacing at short distances inappropriate to describe the fine details of bound state wave functions. On the other hand, linear sequences prove to be superior when describing unbound states, since the equally spaced succession of \ac{bp}s offers a high flexibility for the basis to describe the highly oscillatory behavior of the scattering-like wave functions. In Fig.~\ref{fig:bpsequences} we plot the linear sequence of \ac{bp}s, together with the exponential and linear-exponential sequences.
\subsection{Exponential sequence}\label{sec:expseq}
In many calculations we need to adequately describe the \ac{WF} or the density of probability for a many-body system in which the particles are strongly localized in some region of space. Bound states of atoms have this localization very close to the atomic nucleus. For a proper description of these states we need to choose a sequence of knots that must be accumulated very close to the atomic nucleus without making strong considerations about the properties of the sequence in some other regions, as is shown in Fig.~\ref{fig:bpsequences}. The exponential sequence can be expressed as
\begin{equation}\label{eq:exponentialsequence}
\zeta_j=\delta e^{\alpha(j-1)}\hspace{20pt}\alpha=\frac{\ln\left(\frac{L}{\delta}\right)}{l-1} \hspace{20pt}j=1,\dots,l,
\end{equation}
where $\delta$ denotes an arbitrary initial non-zero value for the sequence. It is customary to choose this point to be $\delta\leq 0.1$ in the appropriate units. Even though this sequence works fine for bound states, it does not for unbound ones. The exponential spacing at large distances is not suitable for the strong oscillations of the continuum \ac{WF}.
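As a quick illustration, the two elementary sequences, equations~\eqref{eq:linearsequence} and~\eqref{eq:exponentialsequence}, can be generated in a few lines of Python (an illustrative sketch, not the thesis code; note that the last breakpoint of the exponential sequence lands exactly on the box edge $L$ by construction of $\alpha$):

```python
import numpy as np

def linear_sequence(rmin, rmax, l):
    """l equally spaced breakpoints on [rmin, rmax] (linear sequence)."""
    return rmin + (rmax - rmin) / (l - 1) * np.arange(l)

def exponential_sequence(delta, L, l):
    """Breakpoints accumulating near the origin, from delta up to the
    box edge L (exponential sequence)."""
    alpha = np.log(L / delta) / (l - 1)
    return delta * np.exp(alpha * np.arange(l))

lin_bp = linear_sequence(0.0, 10.0, 6)      # 0, 2, 4, 6, 8, 10
exp_bp = exponential_sequence(0.1, 10.0, 6)
print(lin_bp)
print(exp_bp[0], exp_bp[-1])  # first point is delta, last point ≈ L
```

A mixed (exponential-linear) sequence, as described next, simply concatenates two such arrays with a matching condition at the junction point.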
\subsection{Exponential-linear sequence}
Sometimes it is necessary to deal with problems in which we need to describe several states with radically different behaviors. Consequently, we can define combined sequences to treat these situations. Therefore, if our intention is to describe, in some specific procedure, both bound and unbound states evenly, we require a combined sequence with at least two characteristics: accumulation of \ac{bp}s near the nucleus, and a flexible distribution of them in order to get an adequate description of highly oscillatory functions. With adequate conditions of continuity for the function that defines the whole sequence and for its derivative, we can define a piecewise sequence which changes smoothly between elementary sequences (like the linear or exponential ones), as shown in Fig.~\ref{fig:bpsequences}.
\chapter{Hydrogen \textit{\`a la} B-splines}\label{ch:hybsplines}
Before we study the hydrogen atom using a B-spline basis, we briefly review its general full quantum solution. This topic is treated in more detail in almost every introductory textbook of quantum mechanics and atomic physics~\citep{Cohen-tannoudji1977,Bransden2003,Friedrich2005}.
\section{Full quantum solution for hydrogen atom}\label{sec:fullhy}
The Hamiltonian for a system of two particles, of masses $m_e$ and $m_p$, which are interacting through a time-independent central potential $V({r})$ with $r=|\mathbf{r}_e-\mathbf{r}_p|$ is given by
\begin{equation}\label{eq:hydhamil}
H=\frac{\hat{\boldsymbol{p}}^{2}_e}{2m_{e}}+\frac{\hat{\boldsymbol{p}}^{2}_p}{2m_{p}}+V({r}),
\end{equation}
where $\boldsymbol{\hat{p}}_j=-i\hbar\nabla_{\boldsymbol{r}_j}$ is the momentum operator for each particle.
The Schr\"odinger equation for a hydrogen atom can be written as
\begin{equation}\label{eq:schro1}
i\hbar\frac{\partial}{\partial t}\psi(\boldsymbol{r}_{e},\boldsymbol{r}_{p},t)=
\left[-\frac{\hbar^2}{2m_{e}}\nabla^2_{\boldsymbol{r}_{e}}-\frac{\hbar^2}{2m_{p}}
\nabla^2_{\boldsymbol{r}_{p}}+V({r})
\right]\psi(\boldsymbol{r}_{e},\boldsymbol{r}_{p},t),
\end{equation}
where $m_p$ ($\boldsymbol{r}_{p}$) is the mass (coordinate) of the proton, $m_e$ ($\boldsymbol{r}_{e}$) is the mass (coordinate) of the electron, and $V({r})$ is the Coulomb potential that depends only on the distance between the proton and the electron. As done in every textbook, we introduce the relative coordinate $\boldsymbol{r}$ and the coordinate of the center of mass $\boldsymbol{R}$,
\begin{equation}
\boldsymbol{r}=\boldsymbol{r}_{e}-\boldsymbol{r}_{p},~~\boldsymbol{R}=\frac{m_{e}\boldsymbol{r}_{e}+m_{p}\boldsymbol{r}_{
p}}{M},\label{eq:coor_trans}
\end{equation}
here, $M=m_{e}+m_{p}$ is the total mass of the hydrogen atom. In these new coordinates, the Schr\"odinger equation~\ref{eq:schro1} becomes
\begin{equation}\label{eq:schro2}
i\hbar\frac{\partial}{\partial
t}\psi(\boldsymbol{R},\boldsymbol{r},t)=\left[-\frac{\hbar^2}{2M}\nabla^2_{\boldsymbol{R}}-\frac{\hbar^2}{2\mu}\nabla^2_{
\boldsymbol{r}}+V({r})\right]\psi(\boldsymbol{R},\boldsymbol{r},t),
\end{equation}
where $\mu=m_{e}m_{p}/(m_{e}+m_{p})$ is the reduced mass. Two separations of equation~\ref{eq:schro2} can then be made. In the first place, the time dependence can be separated, since the potential is time-independent. After that, the spatial part of the \ac{WF} can be separated into a product of functions, one depending on the centre of mass coordinate $\boldsymbol{R}$ and the other on the relative coordinate $\boldsymbol{r}$. Therefore, the whole \ac{WF} can be written as
\begin{equation}
\label{eq:sep}
\psi(\boldsymbol{R},\boldsymbol{r},t)=\psi_{CM}(\boldsymbol{R})\psi_{r}(\boldsymbol{r})e^{-\frac{i}{\hbar}(E_{CM}+E)t}\,,
\end{equation}
where $\psi_{CM}(\boldsymbol{R})$ and $\psi_{r}(\boldsymbol{r})$ satisfy, respectively,
\begin{align}
-\frac{\hbar^2}{2M}\nabla^2_{\boldsymbol{R}}\psi_{CM}(\boldsymbol{R}) &= E_{CM}\psi_{CM}(\boldsymbol{R}),\label{eq:centermasseqn}\\
-\frac{\hbar^2}{2\mu}\nabla^2_{\boldsymbol{r}}\psi_{r}(\boldsymbol{r})+V({r})\psi_{r}(\boldsymbol{r}) &= E_{r}\psi_r(\boldsymbol{r}).\label{eq:relativeeqn}
\end{align}
This shows that the motion of a hydrogen atom can be separated into two independent parts: $\psi_{CM}(\boldsymbol{R})$ describes the motion of a free particle with mass $M$ and energy $E_{CM}$, while $\psi_r(\boldsymbol{r})$ describes the motion of a particle of reduced mass $\mu$ in a Coulomb potential $V({r})$. However, this by no means implies that the motions of the electron and the proton can be described by two independent \ac{WF}s. Note that both equations~\ref{eq:centermasseqn} and~\ref{eq:relativeeqn} are time-independent. The total energy of the system is clearly $E_{tot}=E_{CM}+E_r$.
As suggested above, we are considering a hydrogen-like atom with an atomic nucleus of total charge $Ze$ and an electron of charge $-e$. The Coulomb interaction between the positive nucleus and the electron is expressed by the Coulomb potential
\begin{equation}
\label{eq:coulomb}
V({r})=-\frac{Ze^2}{(4\pi\epsilon_0)r},
\end{equation}
where $r$ is the distance between the two particles and $\epsilon_0$ is the vacuum permittivity. Consequently, inserting this potential into equation~\ref{eq:relativeeqn} and working in the centre of mass system, we obtain the following Schr\"odinger equation for the relative motion
\begin{equation}\label{eq:relativeeqn1}
\left [-\frac{\hbar^2}{2\mu}\nabla^2-\frac{Ze^2}{(4\pi\epsilon_0)r}\right]\psi(\boldsymbol{r}) = E\psi(\boldsymbol{r}).
\end{equation}
Since the interaction potential~\ref{eq:coulomb} is central, the Schr\"odinger equation~\ref{eq:relativeeqn1} may be separated in spherical coordinates. In other words, we may write the \ac{WF}, as a particular solution of this equation, in the following way
\begin{equation}\label{eq:solhyd1}
\psi_{E,l,m}(r,\theta,\phi)=R_{E,l}({r})\mathcal{Y}^l_{m_l}(\theta,\phi),
\end{equation}
where $\mathcal{Y}^l_{m_l}(\theta,\phi)$ is a spherical harmonic corresponding to the orbital angular momentum quantum number $l$ and to the magnetic quantum number $m_l$ (with $m_l=-l,-l+1,\dots,l$). In this way, the functions $\mathcal{U}_{E,l}({r})=rR_{E,l}({r})$ satisfy the radial equation, written in \ac{a.u.},
\begin{equation}\label{eq:radschr}
\left[-\frac{1}{2}\frac{d^2}{dr^2}+V_{eff}({r})\right]\mathcal{U}_{E,l}({r})=E\mathcal{U}_{E,l}({r}),
\end{equation}
where
\begin{equation}
\label{eq:veff}
V_{eff}({r})=-\frac{Z}{r}+\frac{l(l+1)}{2r^2}
\end{equation}
is called the effective potential. Consequently, the problem of solving the Schr\"odinger equation~\ref{eq:schro2} reduces to that of solving the radial one-dimensional equation~\ref{eq:radschr} which corresponds to a particle of mass $\mu$ moving in an effective potential $V_{eff}$.
The analytic procedure to obtain a particular solution of equation~\ref{eq:schro2} is shown in almost every textbook of quantum mechanics and atomic physics. The energies and the normalised radial functions for the bound states of hydrogenic atoms may therefore be written, in \ac{a.u.}, as
\begin{equation}
\label{eq:energy}
E_n=-\frac{Z^2}{2n^2}
\end{equation}
\begin{equation}
\label{eq:radialf}
R_{nl}({r})=-\left\{\left(\frac{2Z}{n}\right)^{3}\frac{(n-l-1)!}{2n[(n+l)!]^3}\right\}^{\frac{1}{2}}e^{-\frac{\rho}{2}}\rho^l L^{2l+1}_{n+l}(\rho),
\end{equation}
with
\begin{equation}
\label{eq:rho}
\rho=\frac{2Z}{n}r,
\end{equation}
and where the $L^i_k(\rho)$ are the associated Laguerre polynomials.
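As a minimal numerical check of equation~\ref{eq:energy} (illustrative Python, working in \ac{a.u.}; the function name is ours):

```python
def hydrogenic_energy(n, Z=1):
    """Bound-state energy E_n = -Z^2 / (2 n^2) in atomic units."""
    return -Z ** 2 / (2.0 * n ** 2)

# Ground state of hydrogen and of the He+ ion
print(hydrogenic_energy(1))        # -0.5
print(hydrogenic_energy(1, Z=2))   # -2.0
```

These exact values are the natural reference against which the B-spline eigenvalues of the next section can be compared.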
\newpage
\section{Numerical approach to hydrogen atom using the B-spline basis}\label{sec:hybsplines}
In this section we introduce the fundamental scheme for the description of the atomic structure of one-electron systems using B-spline functions~\citep{deBoor1978, Bachau2001}. Even though this scheme has a wide range of applicability, for instance to one-valence-electron atoms and closed-shell systems such as rare gases, we only take the case of the hydrogen atom, the most elemental atom with only one proton and one electron, and of hydrogen-like ions characterised by the Coulomb potential. One fundamental feature of the B-spline basis is its flexibility; the basis allows us to calculate both the energies and the \ac{WF}s in a given central potential. This calculation may be performed very efficiently, with arbitrary accuracy up to the machine precision, on a present-day laptop computer.
The radial equation~\ref{eq:radschr} may be solved numerically using the B-spline basis in a subspace, defined by the basis itself, making the assumption that the radial function $\mathcal{U}_{E,l}({r})$, with the initial condition $\mathcal{U}_{E,l}(0)=0$, can be approximated, in a given box, by B-spline functions. Consequently, we may naturally expand the solution of the radial equation belonging to that subspace in terms of the B-spline basis set as
\begin{equation}
\label{eq:radialbs}
\mathcal{U}_{E,l}({r})=\sum_{i=1}^{N_{max}}c_i^{E,l}B_i({r}),
\end{equation}
where $B_i({r})$ is the $i^{\text{th}}$ B-spline of order $k$ as defined above. The B-spline method requires the definition of a knot sequence which depends, as we have said before, on the following parameters: a set of mesh points called the \ac{bp}s defined in $[0,L]$, the order $k$ of the \ac{pp}, and the continuity conditions at each \ac{bp}. We commonly choose these parameters according to the physical problem at hand. In fact, for hydrogen-like atoms, we must divide the box in which we solve the problem into segments whose endpoints form the \ac{bp} sequence. As suggested above, we may freely select any \ac{bp} sequence; the optimal choice will depend on the type of result we are particularly interested in. Take the case of bound states, in which the \ac{WF} is strongly drawn towards the nucleus and vanishes at some finite distance; an optimal choice for this particular problem has a suitable accumulation of \ac{bp}s in this specific region of the box. The exponential sequence, as can be seen in section~\ref{sec:bps}, may describe with accuracy the behavior of this kind of \ac{WF} due to its strong accumulation of \ac{bp}s as $r\rightarrow0$. On the contrary, the linear sequence, in which the segments between two consecutive \ac{bp}s have the same constant length, can properly describe the \ac{WF} of continuum states of hydrogen-like ions, which oscillates indefinitely in its radial part. If, in contrast, our main interest is to describe simultaneously both bound and continuum states, we must choose a mixed \ac{bp} sequence. Actually, a mixed sequence, made up of two elementary sequences such as an exponential sequence up to some distance from the nucleus and a linear sequence afterwards, might successfully describe both types of \ac{WF}s.
Finally, we may establish the maximum continuity conditions, $\nu_j=k-1$, at each \ac{bp} in the box without any restriction; with this choice the knot multiplicity is just $\mu_j=1$, corresponding to continuity $C^{k-2}$. Nevertheless, in order to satisfy the boundary conditions, we require the solution to vanish at the endpoints, that is $\mathcal{U}_{E,l}(0)=\mathcal{U}_{E,l}(L)=0$. These conditions are met either by removing the first and the last B-splines from the basis set or by setting the continuity conditions $\nu_j=1$ at the boundaries.
The hydrogen energies and \ac{WF}s corresponding to bound states, which satisfy equation~\ref{eq:radschr}, are calculated, for a given fixed angular momentum $l$, by solving the system of $N_{max}$ (the size of the B-spline basis) linear equations obtained after substituting~\ref{eq:radialbs} into~\ref{eq:radschr} and then projecting onto $B_j({r})$. This procedure is formally equivalent, when written in matrix form, to solving the following generalized eigenvalue problem:
\begin{equation}
\label{eq:radialeigen}
\mathbf{H}_l\cdot\mathbf{c}=E\mathbf{S}\cdot\mathbf{c},
\end{equation}
for $E_{n,l}$, $\boldsymbol{c}^{nl}=\{c^{nl}_i\}$ with $i=1,\dots,N_{max}$, and where
\begin{align}
\label{eq:melemham}
(\mathbf{H}_l)_{ij}&=-\frac{1}{2}\int_0^LB_i({r})\frac{d^2}{dr^2}B_j({r})dr - Z\int_0^L\frac{B_i({r})B_j({r})}{r}dr\\ \nonumber
& +\frac{l(l+1)}{2}\int_0^L\frac{B_i({r})B_j({r})}{r^2}dr,
\end{align}
\begin{equation}
\label{eq:radialeigen1}
(\mathbf{S})_{ij}=\int_0^LB_i({r})B_j({r})dr.
\end{equation}
The matrix $\mathbf{S}$ is called the overlap matrix. It originates from the fact that B-splines do not form an orthonormal set of basis functions. The hydrogenic orbitals obtained in terms of B-splines are then used in the configuration interaction method to solve the helium electronic structure, as described in chapter~\ref{sec:qmd} of this thesis.
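A minimal numerical sketch of such a generalized eigenvalue problem (with a toy symmetric $\mathbf{H}$ and a toy positive-definite overlap $\mathbf{S}$, not the actual hydrogenic matrix elements of this thesis) can be solved with \texttt{scipy}:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 8

A = rng.standard_normal((n, n))
H = (A + A.T) / 2                 # toy symmetric "Hamiltonian"
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)       # toy symmetric positive-definite "overlap"

# eigh solves H c = E S c directly when the overlap matrix is passed as b=.
E, C = eigh(H, b=S)

# The eigenvectors come out S-orthonormal, C^T S C = I, and each column
# satisfies the generalized eigenvalue relation H C = S C diag(E).
assert np.allclose(C.T @ S @ C, np.eye(n))
assert np.allclose(H @ C, S @ C @ np.diag(E))
```

The $S$-orthonormality of the eigenvectors mirrors the fact that the B-spline expansion coefficients diagonalize the Hamiltonian with respect to the overlap metric rather than the identity.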
\chapter{Supplementary Graphics }\label{ch:supgraphics}
\section{Arguments of the integrals of the information-theoretic measures}
In this appendix we present some additional figures which emphasise the conclusions established in earlier chapters in relation to the bound and doubly excited states of the helium atom.
\section{Figures of the von Neumann entropy}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_VN_1Se}
\caption{\label{fig:entanglementVN1Se}von Neumann entropy $S_{VN}(\rho)$ for the ground and the lowest singly excited states for the $^1S^e$ symmetry below the first ionization threshold (left panel) and for the two series $_2(1,0)^+_n$ and $_2(1,0)^+_n$ of resonances belonging to symmetry $^1S^e$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_VN_1Po}
\caption{\label{fig:entanglementVN1Po}von Neumann entropy $S_{VN}(\rho)$ for the lowest singly excited states of $^1P^o$ symmetry below the first ionization threshold (left panel) and for the three series $_2(0,1)_n^+$, $_2(1,0)_n^-$, and $_2(-1,0)_n^0$ of resonant doubly excited states of symmetry $^1P^o$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_VN_1De}
\caption{\label{fig:entanglementVN1De}von Neumann entropy $S_{VN}(\rho)$ for the lowest singly excited states of $^1D^e$ symmetry below the first ionization threshold (left panel) and for the three series $_2(1,0)_n^+$, $_2(0,1)_n^0$, and $_2(-1,0)_n^0$ of resonant doubly excited states of symmetry $^1D^e$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_VN_3Se}
\caption{\label{fig:entanglementVN3Se}von Neumann entropy $S_{VN}(\rho)$ for the lowest singly excited states of $^3S^e$ symmetry below the first ionization threshold (left panel) and for the two series $_2(1,0)^-_n$ and $_2(-1,0)^-_n$ of resonant doubly excited states of symmetry $^3S^e$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_VN_3Po}
\caption{\label{fig:entanglementVN3Po}von Neumann entropy $S_{VN}(\rho)$ for the lowest singly excited states of $^3P^o$ symmetry below the first ionization threshold (left panel) and for the three series $_2(1,0)^-_n$, $_2(0,1)^0_n$, and $_2(-1,0)^0_n$ of resonant doubly excited states of symmetry $^3P^o$ below the second ionization threshold (right panel).}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.75\textwidth]{gfx/entanglement_VN_3De}
\caption{\label{fig:entanglementVN3De}von Neumann entropy $S_{VN}(\rho)$ for the lowest singly excited states of $^3D^e$ symmetry below the first ionization threshold (left panel) and for the three series $_2(1,0)^+_n$, $_2(0,1)^-_n$, and $_2(-1,0)^-_n$ of resonant doubly excited states of symmetry $^3D^e$ below the second ionization threshold (right panel).}
\end{figure}
\part{Two-electron systems: Stationary Approach}
\include{Chapters-v4/Chapter02}
\cleardoublepage
\part{Measures of Information theory applied to the analysis of doubly excited states of two-electron atoms or ions}
\include{Chapters-v4/Chapter03}
\include{Chapters-v4/Chapter04}
\part{Summary and Perspectives}
\include{Chapters-v4/Chapter05}
\chapter*{Abstract}
The electronic density $\rho({r})$ in atoms, molecules and solids is, in general, a distribution that can be observed experimentally, containing
spatial information projected from the total wave function. These density distributions can be thought of as probability distributions subject to the scrutiny of the analytical methods of information theory, namely, entropy measures, quantifiers of complexity, or entanglement measures. Resonant
states in atoms have special properties in their wave functions: although they pertain to the scattering continuum spectrum, they show
a strong localization of the density in regions close to the nuclei. Although the classification of resonant doubly excited states of He-like atoms in terms of labels of approximate quantum numbers has not been exempt from controversy, a well-known proposal follows from the works of~\citep{Herrick1975,Lin1983}, with a labeling based on the $K$, $T$, and $A$ numbers in the form $_{n_1}(K, T)^A_{n_2}$ for the Rydberg series of increasing $n_2$ and for a given ionization threshold He$^+$ ($N=n_1$).
In this work we intend to justify this kind of classification from the topological analysis of the one-particle $\rho({r})$ and two-particle $\rho(r_1, r_2)$ density distributions of the localized part of the resonances (computed with a Feshbach projection formalism and configuration interaction wave functions in terms of B-spline bases), using global quantifiers (Shannon) as well as local ones (Fisher information)~\citep{Lopez-Rosa2005,Lopez-Rosa2009,Lopez-Rosa2010}. For instance, the Shannon entropy is obtained after global integration of the density, whereas the Fisher information contains local information on the gradient of the distribution. In addition, we also studied measures of entanglement using the von Neumann and linear entropies~\citep{Manzano2010,Dehesa2012a,Dehesa2012b}, computed from the reduced one-particle density
matrix within our correlated configuration interaction approach.
We find in this study that global measures like the Shannon entropy hardly distinguish among resonances in the whole Rydberg series. On the contrary, measures like the Fisher information and the von Neumann and linear entropies are able to qualitatively discriminate the resonances, grouping them according to their $(K, T)^A$ labels.
\endgroup
\vfill
\chapter*{Declaration}
\begin{flushright}{\slshape
We want to stand upon our own feet and look fair and square at the world -- its good facts, its bad facts, its beauties, and its ugliness; see the world as it is and be not afraid of it. Conquer the world by intelligence and not merely by being slavishly subdued by the terror that comes from it. The whole conception of God is a conception derived from the ancient Oriental despotisms. It is a conception quite unworthy of free men. When you hear people in church debasing themselves and saying that they are miserable sinners, and all the rest of it, it seems contemptible and not worthy of self-respecting human beings. We ought to stand up and look the world frankly in the face. We ought to make the best we can of the world, and if it is not so good as we wish, after all it will still be better than what these others have made of it in all these ages. A good world needs knowledge, kindliness, and courage; it does not need a regretful hankering after the past or a fettering of the free intelligence by the words uttered long ago by ignorant men. It needs a fearless outlook and a free intelligence. It needs hope for the future, not looking back all the time toward a past that is dead, which we trust will be far surpassed by the future that our intelligence can create.} \medskip
--- \defcitealias{Russel1957}{Bertrand Russell}\citetalias{Russel1957} \citep{Russel1957}
\end{flushright}
\pdfbookmark[1]{Acknowledgments}{acknowledgments}
\bigskip
\begingroup
\let\clearpage\relax
\let\cleardoublepage\relax
\chapter*{Acknowledgments}
Foremost, I would like to express my sincere gratitude to my advisor Prof. Ph.D. Jos\'e Luis Sanz Vicario for the continuous support of my master's studies and research, for his patience, motivation, immense knowledge, and humanity. His guidance helped me throughout the research and writing of this thesis. In addition to my advisor, I would like to thank the rest of my thesis committee: Prof. Ph.D. Julio C\'esar Arce Clavijo and Prof. Ph.D. Karen Milena Fonseca Romero, for their insightful comments and hard questions.
Thanks a lot to my friends in the UdeA Atomic and Molecular Physics Group: Fabiola G\'omez, Guillermo Guirales, Boris Rodr\'iguez, Alvaro Valdez, Leonardo Pachon, Andr\'es Estrada, Carlos Florez, Melisa Dom\'inguez, Juliana Restrepo, Sebasti\'an Duque, Juan David Botero, Johan Tirana, and Jairo David Garcia, for the stimulating discussions and for all the fun we have had over the last years. Many thanks to Herbert Vinck and Ligia Salazar for their selfless friendship.
My sincere thanks also go to Ph.D. Juan Carlos Angulo Ib\'a\~nes of the University of Granada (Spain), for the internship opportunity in his group and for introducing me to working on information theory measures.
I would like to express my sincere gratitude to the sponsors that made the internship in Spain possible: the Cooperativa de Ahorro y Cr\'edito G\'omez Plata, the AUIP (Asociaci\'on Iberoamericana de Posgrados), and the GFAM Group.
My deepest gratitude goes to Sergio Palacio. I greatly value his close friendship and I deeply appreciate his belief in me. I am also grateful to Natalia Bedoya, his girlfriend. Their constant support and care helped me overcome setbacks and stay focused on the truly relevant things in life.
Finally, I would like to thank all my family: Fernando, David, M\'onica, Astrid, Rogelio, Daniela, and Maria Fernanda; especially my parents Pedro Restrepo and Marina Cuartas, for giving birth to me in the first place and supporting me materially and spiritually throughout my life.
\endgroup
\section*{Colophon}
This document was typeset using the typographical look-and-feel \texttt{classicthesis} developed by Andr\'e Miede.
The style was inspired by Robert Bringhurst's seminal book on typography ``\emph{The Elements of Typographic Style}''.
\texttt{classicthesis} is available for both \LaTeX\ and \mLyX:
\begin{center}
\url{http://code.google.com/p/classicthesis/}
\end{center}
Happy users of \texttt{classicthesis} usually send a real postcard to the author, a collection of postcards received so far is featured here:
\begin{center}
\url{http://postcards.miede.de/}
\end{center}
\bigskip
\noindent\finalVersionString
\chapter*{Acronyms}
\begin{acronym}[HE]
\acro{DES}{\textit{Doubly Excited States}}
\acro{CI}{\textit{Con\-fi\-gu\-ra\-tion Interaction}}
\acro{SH}{\textit{Spherical Harmonics}}
\acro{UML}{\textit{Unified Modeling Language}}
\acro{bp}{\textit{Breakpoint}}
\acro{pp}{\textit{Piecewise Polynomial}}
\acro{a.u.}{\textit{Atomic Units}}
\acro{WF}{\textit{Wave Function}}
\acro{FM}{\textit{Feshbach Method}}
\acro{HSL}{\textit{Herrick-Sinano\u{g}lu-Lin}}
\acro{CGC}{\textit{Clebsch-Gordan Coefficients}}
\acro{TISE}{\textit{Time Independent Schr\"odinger Equation}}
\acro{HF}{\textit{Hartree-Fock}}
\acro{DFT}{\textit{Density Functional Theory}}
\acro{EPR}{\textit{Einstein-Podolsky-Rosen}}
\end{acronym}
\endgroup
\cleardoublepage
\chapter*{Declaration}
\thispagestyle{empty}
Put your declaration here.
\bigskip
\noindent\textit{Medell\'in, Colombia\xspace, November -- 2013\xspace}
\smallskip
\begin{flushright}
\begin{tabular}{m{5cm}}
\\ \hline
\centering Juan Pablo Restrepo Cuartas\xspace \\
\end{tabular}
\end{flushright}
\chapter*{Publications}
Some ideas and figures have appeared previously in the following publications:
\bigskip
\noindent Put your publications from the thesis here. The packages \texttt{multibib} or \texttt{bibtopic} etc. can be used to handle multiple different bibliographies in your document.
\section*{Proof Without Words}
\begin{minipage}[t]{3in}
Japheth Wood\\
Bard College\\
Annandale-on-Hudson, NY 12504
\end{minipage}
\begin{minipage}[c]{3in} \centering
\includegraphics[scale=1.5,trim=31 24 35 32,clip]{PetersenGraphBW3.pdf}
The Petersen Graph
\end{minipage}
\begin{center}
\includegraphics[scale=2.3,trim=30 26 74 26,clip]{PetersenGraphV7.pdf}
The Automorphism Group of the Petersen Graph is isomorphic to $S_5$.
\end{center}
\subsection*{Abstract}
The automorphism group of the Petersen Graph is shown to be isomorphic to the symmetric group on 5 elements. The image represents the Petersen Graph with the ten 3-element subsets of $\{1, 2, 3, 4, 5\}$ as vertices. Two vertices are adjacent when they have precisely one element in common.
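As a complementary check (not part of the proof without words itself), a short script can verify that the stated construction yields a 10-vertex, 15-edge, 3-regular, triangle-free graph, as the Petersen graph requires:

```python
from itertools import combinations

verts = list(combinations(range(1, 6), 3))    # ten 3-element subsets of {1,...,5}
edges = [(a, b) for a, b in combinations(verts, 2)
         if len(set(a) & set(b)) == 1]        # adjacent: share exactly one element

assert len(verts) == 10 and len(edges) == 15
deg = {v: sum(v in e for e in edges) for v in verts}
assert all(d == 3 for d in deg.values())      # 3-regular

# The Petersen graph is triangle-free (its girth is 5):
eset = {frozenset(e) for e in edges}
assert not any({frozenset((a, b)), frozenset((b, c)), frozenset((a, c))} <= eset
               for a, b, c in combinations(verts, 3))
```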
\end{document}
\section{Introduction}
Numerical methods to solve partial differential equations (PDEs) have become
ubiquitous in science and industry.
Many approaches subdivide the domain of the PDE into a mesh of cells that
constitute the computational elements.
The finite/spectral element/volume methods are among the most prevalent
techniques and establish mathematical links between nearest-neighbor elements;
see e.g.\ \cite{StrangFix88, BabuskaGuo92, AinsworthOden00,
CockburnKarniadakisShu00, LeVeque02, FischerKruseLoth02}.
This concept is tremendously useful to realize parallel computing, where each
process works on a subregion of the mesh, and their coupling is implemented by
communicating data only between processes that hold adjacent elements.
For some applications, however, nearest-neighbor-only communication is an
artificial restriction.
This applies to element-based particle tracking, such as the
particle/marker-in-cell methods \cite{Harlow64, HarlowWelch65}, used for
example in plasma physics \cite{Dawson83} or viscoelasticity
\cite{MoresiDufourMuhlhaus03}, to semi-Lagrangian methods such as
\cite{DrakeFosterMichalakesEtAl95}, and to smoothed particle hydrodynamics
\cite{GingoldMonaghan77} and molecular dynamics
\cite{EckhardtHeineckeBaderEtAl13}.
Here the mathematical design allows for moving numerical information by more
than one element per time step.
If this is attempted in practice, new ideas are needed to locate points on
non-neighbor remote elements and to find their assigned process.
If the ``points'' are extended geometric objects that can stretch across more
than one element/process, such as in rigid body dynamics
\cite{PreparataShamos85}, an algorithm must cope with multivalued results.
Another extension in reference to the above-men\-tioned classic methods is the
association of variably sized data to elements.
An obvious example is the $hp$-adaptive finite element method
\cite{SuliSchwabHouston00}, where the data size for an element depends on its
degree of approximation.
More generally, we may think of multiple phases or sub-processes, say physical
or chemical, that differ locally in their data usage.
We may also think of selecting a subset of elements for processing while
ignoring the rest, which can be useful for visualization (visible vs.\
non-visible \cite{FoleyDamFeinerEtAl90}) or file-based output (relevant vs.\
irrelevant according to a user).
Efficiently and adaptively managing such data and repartitioning it between
processes is nontrivial.
In this paper, we present several low-level algorithm building blocks that
facilitate the operations motivated above.
Our focus is on (a) highly scalable methods that (b) operate on dynamically
adaptive meshes.
Targeting simulations that run on present-day supercomputers, allowing for
meshes that adapt every few, each, or even several times per time step,
requires a carefully designed logical organization of the elements.
To support efficient searches and to aid in creation and partitioning of data,
we choose a combination of a distributed tree hierarchy and a linear ordering
of elements via a space filling curve (SFC) \cite{TuOHallaronGhattas05,
SundarSampathBiros08}.
We do not use hashing of SFC indices, which has been an early successful
approach \cite{SalmonWarrenWinckelmans94, GriebelZumbusch00}, and realize
sort-free algorithms by maintaining the elements in ascending SFC order.
We assume individually accessible elements, which prohibits the use of
incremental tree encodings \cite{BaderBockSchwaigerEtAl10}.
To allow for general geometries, we generalize to a forest of one or more
trees \cite{StewartEdwards04, BangerthHartmannKanschat07,
BursteddeWilcoxGhattas11}.
\paragraph{Sparse construction}
The first algorithm, presented in \secref{build}, serves to derive a worker
mesh with an only weakly related refinement pattern.
This can be useful in materials simulations to create an independent forest
adapted to one subsystem (say a fracture zone).
Another use is to postprocess geology data in a certain subdomain or for a
certain soil type.
Last but not least, we may create a worker forest for just the visible elements
to support in-situ visualization.
These worker forests can be partitioned independently of the source forest
to run the sub-tasks concurrently while preserving overall load balance.%
---
Our approach avoids repeated cycles of refinement and coarsening.
The use of a source forest is one difference to the bottom-up construction of
an octree described in \cite{SundarSampathBiros08}.
We require no sorting, and we let each process work on its own partition
without communication.
\paragraph{Remote process search}
In \secref{traverse}, we propose a top-down algorithm to find non-local
generalized points.
It has the following key features.
1.\
We search for multiple points at the same time to reduce mesh memory access,
and we enable multiple match elements/processes per point.
2.\
We enable both optimistic matching (for example to use fast bounding-box checks
closer to the root) and early discarding (to prune search subtrees as quickly
as possible).
3.\
We match points to subtrees on remote processes, even though we do not have
access to any remote element (ghost or otherwise).
This becomes possible due to our lightweight encoding of the forest's
partition.---
Prior work most closely related
uses a sorting of the points by their SFC index
\cite{RahimianLashukVeerapaneniEtAl10}.
This is highly effective but precludes multiple and optimistic matching and
early pruning.
In addition, our approach is a modular library implementation, which eliminates
custom programming and duplicate data structures used previously
\cite{MirzadehGuittetBursteddeEtAl16} and accelerates third-party application
development \cite{Albrecht16}.
\paragraph{Partition-independent I/O}
When writing data to permanent storage, it is an advantage for testing and
general reproducibility if the output format is independent of the number of
processes and the partition of elements that has been used to compute it.
Devising such a format for a single tree is not hard.
For a forest encoded with minimal metadata, however, such a logic becomes
surprisingly involved.
We devote \secref{partindep} to develop a parallel algorithm just to obtain the
global number of elements per tree, which is then sufficient to write
partition-independent meta information.---
This feature enables loading and decoding the mesh on arbitrary process counts.
We discuss extensions to save and load not just the mesh, but also fixed-size
and variable-size per-element application data in a partition-independent way.
\paragraph{Variably sized data}
We touch on
transferring application data during partitioning in \secref{auxcomm}.
While many applications use the built-in \texttt{p4est}\xspace feature to transparently
repartition a fixed-size per-element payload \cite{Burstedde15}, this does not
allow holding the data in a linear array, which is often preferred.
To lift this limitation, and to allow for repartitioning variable-size
per-element data as well, we include algorithm schematics aligned to our
specification of forest metadata.
\paragraph{Particle tracking example}
\secref{particles} presents a technology demonstration that executes all
algorithms put forth in this paper.
We use an element-based scheme that solves Newton's equations of motion for
each particle.
The elements are refined, coarsened, and partitioned dynamically to keep the
number of particles per element near a specified number.
Especially the non-local search of points is crucial to redistribute the
particles to the elements/processes after their positions are updated.---
The algorithms presented here share the property that the forest need not be
2:1 balanced and that they do not depend on a ghost layer.
Their functionality relies not on neighbor relations but on the SFC logic that
defines and encodes the parallel partition.
This is a distinction from typical parallel PDE solvers and allows for a wide
range of applications in simulation and data management and processing.
We hint at various examples and use cases in the respective sections of the
paper.
While we maintain the notion of elements, they need not necessarily refer to a
classical finite element or a numerical solver context at all.
We allow for arbitrary-size application data to be redistributed in parallel in
the same optimized way that is used for the adaptive mesh, which opens up the
performance and scalability established for managing meshes
\cite{BursteddeGhattasGurnisEtAl10, IsaacBursteddeWilcoxEtAl15}
to many sorts of data.
\section{Principles and conventions}
\seclab{principles}
Throughout the paper, we will be dealing with integers exclusively.
When referring to integer intervals $[ a, b ) \cap \mathbb{Z}$, we omit the
intersection for brevity.
All arrays are 0-based.
Cumulative arrays (i.e., arrays storing partial sums) are typeset in uppercase
fraktur.
We denote the number of parallel processes (MPI ranks) by $P$, the number of
trees in the forest of octrees by $K$, and the global number of elements
(leaves of the forest) by $N$.
Thus, a process number reads $p \in [ 0, P )$ and a tree number $k \in [ 0, K
)$.
\subsection{Cycles of adaptation}
\seclab{cycles}
In a typical adaptive numerical simulation, the mesh evolves between time steps
in cycles of mesh refinement and coarsening (RC), mesh balancing and/or
smoothing (B), and repartition (P) for load balancing.
Not to be confused with the latter, mesh balancing may refer to establishing a
2:1 size condition between direct neighbor elements
\cite{TuOHallaronGhattas05, SundarSampathBiros08,
BursteddeWilcoxGhattas11, IsaacBursteddeGhattas12} and mesh smoothing to
establishing a graded transition in the sizes of more or less nearby elements.
After RC+B\xspace, the new mesh exists in the same partition boundaries as the
previous one, while families of four (2D) or eight (3D) sibling elements have
been replaced by their parent, or vice versa.
We note that refinement and coarsening are usually not applied recursively,
except possibly during the initialization phase of a simulation.
Since RC+B\xspace changes the number of elements independently on each process, load
balance is lost, and P redistributes the elements in parallel to reinstate it.
To guarantee that one cycle of coarsening is always possible, the partition
algorithm may be modified to place every sibling of one family on the same
process \cite{SundarBirosBursteddeEtAl12}.
In some applications it may be beneficial to partition before refinement,
possibly using weights depending on refinement and coarsening indicators, in
order to avoid crashes when one process refines every local element and runs
out of memory.
P is complementary to RC+B\xspace in the sense that it changes the partition boundary
while the elements stay the same.
This design ensures modularity between and flexible combination of individual
algorithms and simplifies the projection and transfer of simulation data
\cite[Figures 3 and 4]{BursteddeGhattasStadlerEtAl08}:
\begin{principle}[Complementarity principle]
\label{principle:complementarity}
A collective mesh operation shall either change the local element sizes within
the existing partition boundary, or change the partition boundary and keep the
elements the same, but not both.
\end{principle}
It should be noted that time stepping is not the only motivation to use
adaptivity:
When utmost accuracy of a single numerical solve is required, we may use
a-posteriori error estimation to refine and solve the same problem repeatedly
at successively higher resolutions;
when setting up a geometric multigrid solver, we create a hierarchy of coarser
versions of a mesh.
In both scenarios, we may add mesh smoothing and most definitely repartitioning
at each level of resolution using the same RC+B+P\xspace algorithms, which is key for
the scalability of geometric/algebraic solvers
\cite{BursteddeGhattasStadlerEtAl09,
SundarBirosBursteddeEtAl12, RudiMalossiIsaacEtAl15}.
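The complementarity principle can be illustrated with a minimal runnable stub (an assumed toy interface, not \texttt{p4est} itself): local adaptation changes per-process element counts within fixed partition boundaries, and the subsequent partition step restores load balance while keeping the global element count fixed:

```python
# Toy model of the RC+B+P cycle. Elements are represented only by their
# per-process counts; real forests also carry leaf data and SFC order.
class ToyForest:
    def __init__(self, counts):
        self.counts = list(counts)        # elements per process

    def refine_local(self, p, extra):
        # RC+B: only process p's element count changes; the partition
        # boundary (which process owns which region) stays fixed.
        self.counts[p] += extra

    def partition(self):
        # P: equalize counts across processes; the global element set
        # (here: the total count) is unchanged.
        total = sum(self.counts)
        P = len(self.counts)
        self.counts = [total // P + (i < total % P) for i in range(P)]

f = ToyForest([8, 8, 8])
f.refine_local(1, 9)                      # local adaptation breaks balance
assert f.counts == [8, 17, 8]
f.partition()
assert sum(f.counts) == 33 and max(f.counts) - min(f.counts) <= 1
```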
\subsection{Encoding a parallel forest}
\seclab{encoding}
We briefly introduce the relevant properties of the \texttt{p4est}\xspace data structures
and algorithms \cite{BursteddeWilcoxGhattas11}, which we see in this paper as a
reference implementation of an abstract forest of octrees.
We consider a forest that is two- or three-dimensional, $d = 2$ or $3$, which
generalizes easily to arbitrary dimensions.
The topology of a forest is defined by its connectivity, i.e.\xspace, an enumeration of
tree roots viewed as cubes mapped into $\mathbb{R}^3$ together with a specification of
each one's neighbor trees across the tree faces, edges (in 3D), and corners.
Neighbor relations include the face/edge/corner number as viewed from the
neighbor and a relative orientation, since the coordinate systems of touching
trees need not align.
The mesh primitives in \texttt{p4est}\xspace are quadrilaterals in 2D and hexahedra in 3D.
They arise as leaves of a quadtree (2D) or octree (3D), where a root can be
subdivided (refined) into $2^d$ child branches (subtrees).
The subdivision can be performed recursively on the subtrees.
For simplicity, we will generally use the term quadrant for a tree node.
A quadrant is either a branch quadrant (it has child quadrants) or it is a leaf
quadrant.
The root quadrant is a leaf if the tree is not refined and a branch otherwise.
We call leaves in both 2D and 3D the elements of the adaptive mesh.
In practice, we limit the subdivision to a maximum depth or level $L$, where
the root is at level $\ell = 0$.
Accordingly, a quadrant is uniquely defined by the tree it belongs to, the
coordinates $(x_i) = (x, y, z)$ of its first corner, each an integer in
$\halfopen{0, 2^L}$, and its level $\ell \in \closedin{0, L}$.
A quadrant of level $\ell$ has integer edge length $2^{L - \ell}$, and its
coordinates are integer multiples of this length.
We assume that a space filling curve (SFC) is defined that maps all possible
quadrants of a given level bijectively into a set of curve indices
$\halfopen{0, 2^{d \ell}}$.
We may always embed this index into the space $\halfopen{0, 2^{d L}}$ by
left-shifting by $d(L - \ell)$ bits.
The level can be appended to the curve index to make the index unique across
all levels.
The SFC must satisfy a locality property:
The children of a quadrant are called a family and have indices that come after
any predecessor and before any successor of their parent quadrant.
As a consequence, two quadrants are either related as ancestor and descendant,
meaning that the latter is contained in the former, or not intersecting at
all: Partially overlapping quadrants do not exist.
Common choices of SFC are the Hilbert curve \cite{Hilbert91} and the Morton- or
$z$-curve \cite{Morton66} used in \texttt{p4est}\xspace.
In fact, the algorithms in this paper are equally fit to operate on a
forest of triangles or tetrahedra, as long as its connectivity is well defined
and it is equipped with an SFC such as the one designed for the \texttt{t8code}\xspace
\cite{BursteddeHolke16}.
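As an illustration, the following Python sketch (an assumption-level implementation of the standard 2D $z$-order encoding, not the actual \texttt{p4est} code) computes a quadrant's Morton index by interleaving coordinate bits, and checks the locality property on a family of children:

```python
L_MAX = 5  # maximum refinement level L

def morton_index(x, y, level=L_MAX, L=L_MAX):
    """Interleave the bits of (x, y) into the z-order index at level L.
    Coordinates of a level-`level` quadrant are multiples of 2^(L - level),
    so its index is already embedded in the level-L index space."""
    m = 0
    for b in range(L):
        m |= ((x >> b) & 1) << (2 * b)
        m |= ((y >> b) & 1) << (2 * b + 1)
    return m

# The four level-1 children of the root (edge length 2^(L-1)) appear in
# ascending z-order, consistent with the SFC locality property.
h = 1 << (L_MAX - 1)
children = [(0, 0), (h, 0), (0, h), (h, h)]
indices = [morton_index(x, y, level=1) for x, y in children]
assert indices == sorted(indices)
```

Appending the level to the index, as described above, would make it unique across levels; here all indices live in the embedded space $[0, 2^{2L})$.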
A forest is stored in a data object that exists on each participating process.
Most of its data members are local, that is, apply to just the process where
they are stored, while others are shared, meaning that their values are
identical between all processes.
The shared data is minimal such that it uniquely defines the parallel
partition.
We use linearized tree storage that only stores the leaves and ignores the
non-leaf nodes \cite{SundarSampathBiros08}.
The leaves are ordered in sequence of the trees, and within each tree in
sequence of the SFC.
Sometimes we reference local data for the tree with global number $k$ inside a
forest object $s$ by $\mathcal{K} = s.\mathrm{trees}[k]$.
The partition of leaves is disjoint, which allows us to speak of the owning
process of an element.
For convenience, the local data of each process includes the numbers of its
first and last non-empty trees.
The trees between and including its first and last are called its local trees.
The first and last trees of a process may be incomplete, in which case the
remaining elements belong to preceding processes for the first, and to
succeeding processes for the last local tree.
If a process has more than two trees, the middle ones must be complete.
If a process has elements of only one tree, its first and last tree are the
same.
In this case, if that tree is incomplete, its remaining elements may
be on processes both preceding and succeeding.
A process may also be empty, that is, have no elements, in which case it has no
valid first and last tree.
For each of its local trees, a process stores an offset defined by the sum of
local elements over all preceding trees, and the tree's boundaries by way of
its first and last local descendants.
The first (last) local descendant is the first (last) descendant of maximum
level $L$ of its first (last) local element in this tree.
For example, the first local descendant of a complete tree in Morton encoding
has coordinates $x_i = 0$,
while the last has coordinates $x_i = 2^L - 1$, $i \in \halfopen{0, d}$.
The local elements are stored in one flat array for each local tree.
Thus, the tree number for every local element is implicit.
Non-local elements are not stored.
The shared data of the forest is the array $\mathfrak{E}[p]$, the sum of local elements
over all preceding processes, and the array $\mathfrak{m}[p].(\mathrm{tree}, \mathrm{desc})$ of the
first local tree and local descendant, for every process $p$.
The first local descendant of a process is identical to the first local
descendant of its first tree.
Consequently, the array of first descendants is sufficient to recreate the
first and last local descendants $\mathcal{K}.f$, $\mathcal{K}.l$ of any tree local to
any process.
We call $\mathfrak{m}$ the array of partition markers, since they define the partition
boundary in its entirety (see \figref{pmarkers}).
By design of the SFC, the entries of $\mathfrak{m}$ are ascending first by tree and then
by the index of the first local descendant.
Whether a process begins
with a given tree and quadrant,
even if the quadrant is non-local and/or a branch, is trivial to check by
examining $\mathfrak{m}$; see \algref{beginsquadrant}.
\begin{figure}
\begin{center}
\includegraphics[width=.9\columnwidth]{forest_markers.pdf}
\end{center}%
\caption{Sketch of a forest of $K = 2$ quadtrees $k_i = i$ (left) and
the mesh it encodes (right).
Each tree in the mesh has its own coordinate system that determines
the order of elements along the space filling curve (black arrows).
The forest is partitioned between $P = 3$ processes $p_j \equiv j$
(color coded).
The partition markers $\mathfrak{m}[0, 1, 2]$ (orange) are quadrants of
a fixed maximum level; we do not draw $\mathfrak{m}[3]$.
They correspond to the black dotted lines on the left
that are sometimes called separators \cite{GriebelZumbusch00}.
This forest is load balanced with cumulative element counts
$\mathfrak{E} = [0, 7, 15, 23]$.}%
\label{fig:pmarkers}%
\end{figure}%
\begin{algorithm}
\caption{\fun{begins\_\-with} (process $p$, tree number $k$, quadrant $b$)}
\alglab{beginsquadrant}
\begin{algorithmic}[1]
\REQUIRE $b$ is a quadrant in tree $k$
\hfill\COMMENT{omit tree ``number'' from now on}
\RETURN $\mathfrak{m}[p] = (k, \text{first descendant of $b$})$
\hfill\COMMENT{comparison yields true or false}
\end{algorithmic}%
\end{algorithm}%
As stated above, the arrays $\mathfrak{E}$ and $\mathfrak{m}$ are available to each process,
a feature that is crucial throughout.
It has been found exceedingly convenient to store one additional element in
these zero-based arrays, namely $\mathfrak{E}[P]$ and $\mathfrak{m}[P]$.
Quite naturally, $\mathfrak{E}[P]$ is the global number of elements, and the number of
elements on process $p$ is $\mathfrak{E}[p + 1] - \mathfrak{E}[p]$ for all $p \in
\halfopen{0, P}$.
Setting $\mathfrak{m}[P]$
to the first descendant of the non-existent tree $K$
permits encoding any empty process $p$, including the last one,
by $\mathfrak{m}[p] = \mathfrak{m}[p + 1]$, that
is, by successive partition markers being equal in both tree and descendant.
If one or several successive processes are empty, we say that all of them begin
on the same tree and quadrant as the next non-empty process.
By design, \algref{beginsquadrant} returns true for all of them.
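For illustration, the marker comparison of \algref{beginsquadrant} and the empty-process convention can be sketched in a few lines of Python; the tuple encoding of quadrants and the concrete marker values below are stand-ins, not the \texttt{p4est} data types.

```python
L = 4  # maximum refinement level (an arbitrary choice for this sketch)

def first_descendant(q):
    """First descendant of quadrant q = (x, y, level): same anchor, level L."""
    x, y, _level = q
    return (x, y, L)

def begins_with(m, p, k, b):
    """Algorithm begins_with: does process p's partition start at tree k
    with the first descendant of quadrant b?"""
    return m[p] == (k, first_descendant(b))

def is_empty(m, p):
    """A process is empty iff successive markers agree in tree and descendant."""
    return m[p] == m[p + 1]

# Example markers for P = 3 processes and K = 2 trees; m[P] points to the
# first descendant of the non-existent tree K.
m = [(0, (0, 0, L)),   # process 0 begins at the root of tree 0
     (0, (8, 0, L)),   # process 1 begins mid-tree 0
     (1, (0, 0, L)),   # process 2 begins at the root of tree 1
     (2, (0, 0, L))]   # m[P]
```

Note that the comparison is purely lexicographic in (tree, descendant), which is why the ascending order of the markers suffices for all queries below.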
It follows from the above conventions that the array $\mathfrak{m}$ contains
information on the ownership of trees as well:
\begin{property}
\label{property:gfptreeownership}
Not every tree needs to occur in $\mathfrak{m}$.
If $k$ occurs and the range of processes $[p, q]$ is widest such that
\fun{begins\_\-with} ($p'$, $k$, $b$) for all $p \le p' \le q$ and the same $b$,
and $p$ is the first satisfying this condition for any $b$,
then the first descendant of tree $k$ is in the partition of either $p - 1$
or $q$.
More specifically, it is $q$ if and only if
\fun{begins\_\-with} ($q$, $k$, $\mathrm{root}$).
If $k$ does not occur in $\mathfrak{m}$, then all of its quadrants are owned by
the last process $p$ that satisfies $\mathfrak{m}[p].\mathrm{tree} < k$.
\end{property}
\section{Forest construction from sparse leaves}
\seclab{build}
In many use cases an application must construct a mesh for which only a small
subset of current elements is relevant:
\begin{itemize}
\item
To isolate elements of a given refinement level (and fill the gaps with the
coarsest possible elements to complete the mesh), for example to implement
multigrid or local time stepping.
\item
To postprocess only the mesh elements
selected by a given filter (such as for writing to disk the data of one
part of a much bigger model).
\item
A computation deals with points distributed independently of the element
partition and varying strongly in density, and we seek to create a mesh
representing the points.
\item
For parallel visualization, we want to process only the part of the mesh
inside the view angle of a virtual camera.
\end{itemize}
Repeated coarsening addresses only some of these cases and is unnecessarily
slow when it does:
We would execute multiple cycles and carefully maintain data consistency
between them.
Coarsening may also be inadequate entirely, such as in the case of
points where we might want to create a highly refined element for each one,
potentially finer than in the original mesh.
The operation we need is akin to making a copy of the existing mesh (we will
keep the original and its data to continue with the simulation eventually), and
then executing multiple cycles of RC+P\xspace on the copy, all in a fast one-pass
design.
In particular, we want to avoid creating forest metadata or element storage and
discarding it again.
While some details in this section are new, the presentation is more of a
tutorial in working with our mathematical encoding of a parallel forest of
octrees.
This section also serves to introduce some subalgorithms required later.
\subsection{Algorithmic concept}
\seclab{buildalgo}
We propose the following procedure \pforestfun{build}:
\begin{enumerate}
\item
Initialize a context object from an existing source forest.
\item
Add leaves one by one, which need not exist in the source mesh but must be
contained in the local partition, and must be non-overlapping and
their index non-decreasing relative to the ones added previously.
\item
Free the context, not before creating a new forest object as a result: It
is defined as the coarsest possible forest (a) containing the added leaves
and (b) respecting the same partition.
\end{enumerate}
The resulting forest has the same partition boundary as the source, thus the
above procedure satisfies \principleref{complementarity}.
One advantage is that the construction is communica\-tion-free, with the
caveat that the result depends on the total number of processes.
However, since the result is a valid forest object, it can be subjected
to calls to RC+B\xspace if so desired, and P in order to load balance it for its
special purpose.
Its number of elements may be smaller than that of the source, possibly by
orders of magnitude, significantly accelerating the computation downstream.
In contrast to
\cite{SundarSampathBiros08}, we use a source forest to guide the algorithm.
The monotonicity requirement, to add leaves in the order of the source index,
eliminates the linear-logarithmic runtime of a sorting step.
Monotonicity can be realized for example by iterating
through the existing leaves in the source, or by calling the top-down forest
traversal \pforestfun{search}.
The latter approach has the advantage that the traversal can be pruned early to
skip tree branches of no interest, not accessing these source elements at all.
Furthermore, inheriting the source ordering implicitly provides the map between
a leaf in the source forest that triggers an addition and the leaf added to the
result, and permits reprocessing the source element data for use with the
result on the fly.
\pforestfun{build} shares the property of \pforestfun{search} that the serial version is
useful in itself, since the tasks mentioned above may well occur in a
single-process code.
The parallel version of \pforestfun{build} is nearly identical to the serial one, with
the exception that the local number of leaves in the result, one integer, is
shared with \mpifun{Allgather}.
This is standard procedure in \texttt{p4est}\xspace for refinement, coarsening, and 2:1
balance.
Apart from that, the algorithm is communication-free.
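The ordering contract of the add step can be made concrete with a Morton (SFC) index. The following Python sketch enforces, for a single 2D tree, that each added quadrant follows the previous one along the curve without overlap, and tolerates a redundant re-add; the tuple encoding and the constant $L$ are illustrative, not the \texttt{p4est} implementation.

```python
L = 4  # maximum level (arbitrary for this sketch)

def morton(x, y):
    """SFC (Morton) index of a level-L coordinate pair by bit interleaving."""
    z = 0
    for i in range(L):
        z |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return z

def index_range(q):
    """Indices of the first and last descendant of quadrant q = (x, y, level)."""
    x, y, level = q
    s = (1 << (L - level)) - 1
    return morton(x, y), morton(x + s, y + s)

def build_add(ctx, q):
    """Append q to the leaf list, enforcing the Require of the add step:
    q must be of larger SFC index than, and not overlap, the previous leaf."""
    if ctx:
        if q == ctx[-1]:
            return  # convenient exception: a redundant add is a no-op
        if index_range(q)[0] <= index_range(ctx[-1])[1]:
            raise ValueError("leaves must be added in ascending SFC order")
    ctx.append(q)
```

Since descendant index ranges of non-overlapping quadrants are disjoint, the single comparison covers both the ordering and the ancestor check.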
\subsection{Detailed description of \pforestfun{build}}
\seclab{builddetails}
\begin{algorithm}
\caption{\fun{begin\_tree} (context $c$, tree $k$, element offset $o$)}
\alglab{begintree}
\begin{algorithmic}[1]
\REQUIRE Source forest $c.s$ and result forest $c.r$ initialized
(e.g., by \algref{buildbegin})
\REQUIRE $c.s.\mathrm{first\_local\_tree} \le k \le c.s.\mathrm{last\_local\_tree}$
\STATE $c.k \leftarrow k$
\hfill\COMMENT{remember updated tree number}
\STATE $c.r.\mathrm{trees}[k].o \leftarrow o$
\hfill\COMMENT{store offset as sum of all preceding elements}
\STATE $c.\mathrm{most\_recently\_added} \leftarrow \mathrm{is\_invalid}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{\pforestfun{build\_begin} (source forest $s$) $\rightarrow$ context $c$%
\hfill (collective call)}
\alglab{buildbegin}
\begin{algorithmic}[1]
\STATE $c \leftarrow$ new context storing reference to $s$
and new partially filled result forest $r$
\IF{$s$ has elements on this process}
\STATE \fun{begin\_tree} ($c$, $s.\mathrm{first\_local\_tree}$, $0$)
\ENDIF
\end{algorithmic}
\end{algorithm}
We use a context data structure to track the internal state of building the new
forest from an ascending (and usually sparse) set of local leaves.
It is initialized by \pforestfun{build\_begin} (\algref{buildbegin}) and contains a copy of the
variables of the source forest that stay the same, most importantly the
boundaries of local trees plus the array of partition markers.
These copies become parts of the result forest at the end of the procedure.
In practice, redundant data may be avoided by copy-on-write.
The state information contains the number of the tree currently being visited
and a copy of the most recently added element, which serves to verify that a
newly added element is of a larger SFC index and not overlapping
(\algref{buildadd}, \lineref{buildaddrequire}).
In adding elements, we pass through the local trees in order.
When adding multiple elements to one tree, we cache them and postpone the final
processing of this tree until we see an element added to a higher tree for the
first time.
If at least one element has been added to the tree, we can rely on the
functions \pforestfun{complete\_\-subtree}, originally built around a fragment of
CompleteOctree \cite[Algorithm 4, lines 16--19]{SundarSampathBiros08} and
reworked \cite{IsaacBursteddeGhattas12}, and \pforestfun{complete\_\-region}, a
reimplementation of the function CompleteRegion originally described for
\texttt{Dendro}\xspace \cite[Algorithm 3]{SundarSampathBiros08}.
Both functions are adapted to the multi-tree data structures of \texttt{p4est}\xspace and
parameterized by the number of the tree to work on.
\begin{algorithm}
\caption{\pforestfun{enlarge\_\-first} (quadrant $f$ is modified, quadrant $b$)}
\alglab{enlargefirst}
\begin{algorithmic}[1]
\REQUIRE $f$ is descendant of $b$ (i.e., equal to $b$ or a strict
descendant of it)
\STATE $w = f.x \bor f.y \bor f.z$
\hfill\COMMENT{bitwise or; omit $z$ coordinate in 2D}
\WHILE{$f.\ell > b.\ell$ \AND ($w \band 2^{L - f.\ell}) = 0$}
\linelab{firstcomp}
\STATE $f.\ell \leftarrow f.\ell - 1$
\hfill\COMMENT{turn $f$ into parent;
valid due to $= 0$ comparison in \lineref{firstcomp}}
\ENDWHILE
\ENSURE $f$ has the same first descendant as on input
and is still descendant of $b$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{\pforestfun{enlarge\_\-last} (quadrant $l$ is modified, quadrant $b$)}
\alglab{enlargelast}
\begin{algorithmic}[1]
\STATE $\ell \leftarrow l.\ell$; $w = l.x \band l.y \band l.z$
\hfill\COMMENT{bitwise and; omit $z$ coordinate in 2D}
\WHILE{$l.\ell > b.\ell$ \AND ($w \band 2^{L - l.\ell}) \ne 0$}
\STATE $l.\ell \leftarrow l.\ell - 1$
\hfill\COMMENT{turn $l$ into parent;
requires \lineref{lastfix} to become well defined}
\ENDWHILE
\STATE $l.x \leftarrow l.x \band \neg (2^{L - l.\ell} - 2^{L - \ell})$
\hfill\COMMENT{bitwise negation; repeat for $y$ (and $z$ in 3D)}
\linelab{lastfix}
\ENSURE $l$ has the same last descendant as on input
and is still descendant of $b$
\end{algorithmic}
\end{algorithm}
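Both enlargement routines are bit manipulations on the anchor coordinates. A Python sketch of \algref{enlargefirst} and \algref{enlargelast} in 2D follows; the mutable list representation of quadrants and the constant $L$ are illustrative assumptions, not the \texttt{p4est} types.

```python
L = 5  # maximum level (arbitrary for this sketch)

def last_descendant(q):
    """Last descendant of quadrant q = [x, y, level] at the maximum level."""
    s = (1 << (L - q[2])) - 1
    return (q[0] + s, q[1] + s, L)

def enlarge_first(f, b):
    """Grow f to the largest ancestor with the same first descendant
    that is still a descendant of b (Algorithm enlarge_first)."""
    w = f[0] | f[1]
    while f[2] > b[2] and (w & (1 << (L - f[2]))) == 0:
        f[2] -= 1  # f becomes its parent; the anchor is unchanged

def enlarge_last(l, b):
    """Grow l to the largest ancestor with the same last descendant
    that is still a descendant of b (Algorithm enlarge_last)."""
    ell = l[2]
    w = l[0] & l[1]
    while l[2] > b[2] and (w & (1 << (L - l[2]))) != 0:
        l[2] -= 1  # tentatively take the parent's level
    mask = ~((1 << (L - l[2])) - (1 << (L - ell)))
    l[0] &= mask   # fix the anchor to the parent's corner
    l[1] &= mask
```

The bit being tested is the one that distinguishes a quadrant's position within its parent: zero in all coordinates for a first child, one in all coordinates for a last child.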
\begin{algorithm}
\caption{\fun{end\_tree} (context $c$) $\rightarrow$ element offset $o$}
\alglab{endtree}
\begin{algorithmic}[1]
\STATE $\mathcal{K} \leftarrow c.r.\mathrm{trees}[c.k]$
\hfill\COMMENT{reference to result tree data}
\IF{$\mathcal{K}.\mathrm{elements} = \emptyset$}
\STATE $a \leftarrow$ \pforestfun{nearest\_\-common\_\-ancestor} ($\mathcal{K}.f$, $\mathcal{K}.l$)
\IF{$\mathcal{K}.f$ is the first descendant of $a$ \AND
$\mathcal{K}.l$ is its last}
\STATE $\mathcal{K}.\mathrm{elements} \leftarrow \{ a \}$
\hfill\COMMENT{tree consists of one element}
\ELSE
\STATE $f \leftarrow \mathcal{K}.f$; $l \leftarrow \mathcal{K}.l$
\hfill\COMMENT{first and last local descendants of tree}
\STATE $c \leftarrow$ child of $a$ containing $f$;
\pforestfun{enlarge\_\-first} ($f$, $c$);
\linelab{childfc}
\hfill\COMMENT{modify $f$}
\STATE $d \leftarrow$ child of $a$ containing $l$;
\pforestfun{enlarge\_\-last} ($l$, $d$);
\linelab{childld}
\hfill\COMMENT{modify $l$}
\STATE \pforestfun{complete\_\-region} ($\mathcal{K}$, $f$, $l$)
\hfill\COMMENT{fill elements in $\mathcal{K}$ from $f$ to $l$ inclusive}
\ENDIF
\ELSE
\STATE \pforestfun{complete\_\-subtree} ($\mathcal{K}$)
\hfill\COMMENT{fill gaps
with coarsest possible elements}
\ENDIF
\RETURN $\mathcal{K}.o + \# \mathcal{K}.\mathrm{elements}$
\end{algorithmic}
\end{algorithm}
In the event that no element has been added to some local tree, we fill the range
between its first and last local descendants with the coarsest possible
elements.
To this end, we first generate the smallest common ancestor of the two
descendants, which contains the local portion of the tree.
If the tree descendants are equal to the ancestor's first and last descendants,
respectively, the ancestor is the tree's only element.
Otherwise, we identify the two (necessarily distinct) children of the ancestor
that contain one of the tree descendants each, and find the descendants'
respective largest possible ancestor that (a) has the same first (last)
descendant and (b) is not larger than the child.
We do this with \algref{enlargefirst} \pforestfun{enlarge\_\-first} and \algref{enlargelast}
\pforestfun{enlarge\_\-last}, respectively.
We then call \pforestfun{complete\_\-region} with the resulting elements to fill the tree.
The finalization of a tree for the cases discussed above is listed in
\algref{endtree}.
The reader may notice that the logic in Lines~\ref{line:childfc} and
\ref{line:childld}, along with the enlargement algorithms, could be tightened
further by passing just the number $a.\ell + 1$ instead of the children $c$ and
$d$.
We omit such final optimizations in \texttt{p4est}\xspace when not harmful to its
performance, since the information on the child quadrants is valuable for
checking the consistency of the code.
In practice, we execute the \textbf{Require} and \textbf{Ensure} statements
that make use of $c$ and $d$ in every debug compile.
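The smallest common ancestor itself can be computed from the highest differing bit of the anchor coordinates. The following Python sketch of this standard bit trick uses an illustrative tuple encoding of quadrants, not the \texttt{p4est} types.

```python
L = 4  # maximum level (arbitrary for this sketch)

def nearest_common_ancestor(a, b):
    """Smallest quadrant containing both a = (x, y, level) and b:
    the ancestor's level is bounded by the highest differing coordinate bit
    and by the levels of the two quadrants themselves."""
    diff = (a[0] ^ b[0]) | (a[1] ^ b[1])
    level = min(L - diff.bit_length(), a[2], b[2])
    mask = ~((1 << (L - level)) - 1)  # truncate anchors to the ancestor's grid
    return (a[0] & mask, a[1] & mask, level)
```

Two quadrants share a level-$\ell$ cell exactly when all differing anchor bits lie below position $L - \ell$, which is what the `bit_length` computation encodes.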
\begin{algorithm}
\caption{\pforestfun{build\_add}
(context $c$, tree $k$, quadrant $b$, callback \fun{Add})}
\alglab{buildadd}
\begin{algorithmic}[1]
\REQUIRE $c.k \le k \le c.s.\mathrm{last\_local\_tree}$
\hfill\COMMENT{adding element to same or higher tree}
\WHILE{$c.k < k$}
\STATE $o \leftarrow$ \fun{end\_tree} ($c$)
\hfill\COMMENT{finalize current tree, adding its elements to offset}
\STATE \fun{begin\_tree} ($c$, $c.k + 1$, $o$)
\hfill\COMMENT{commence the next tree}
\ENDWHILE
\IF{$c.\mathrm{most\_recently\_added} \ne \mathrm{is\_invalid}$}
\REQUIRE $c.\mathrm{most\_recently\_added}$ is less than or equal to, and not an ancestor of, $b$
\linelab{buildaddrequire}
\IF{$c.\mathrm{most\_recently\_added} = b$}
\linelab{buildaddredundant}
\RETURN
\hfill\COMMENT{convenient exception allows for redundant adding}
\ENDIF
\ENDIF
\STATE $\mathcal{K}.\mathrm{elements} \leftarrow \mathcal{K}.\mathrm{elements}
\cup \lbrace b \rbrace$
\hfill\COMMENT{sparse leaves in tree structure until finalized}
\STATE $c.\mathrm{most\_recently\_added} \leftarrow b$
; \fun{Add} ($b$)
\hfill\COMMENT{optionally initialize application data}
\end{algorithmic}
\end{algorithm}
We allow calling the \pforestfun{build\_add} function repeatedly with the same element, which
is a convenience when using the feature of \pforestfun{search} to maintain a list of
multiple points to search \cite{IsaacBursteddeWilcoxEtAl15}, several of which
may trigger the addition of the current element.
A new element may just as well be finer or coarser than the one in the source,
as long as it is added in order.
The element is added once, and we provide the convenience callback \fun{Add} to
establish its application data; see
\algref{buildadd}.
\begin{algorithm}
\caption{\pforestfun{build\_end} (context $c$) $\rightarrow$ result forest $r$%
\hfill (collective call)}
\alglab{buildend}
\begin{algorithmic}[1]
\IF{$c.s$ has elements on this process (else $n \leftarrow 0$)}
\WHILE{$c.k < c.s.\mathrm{last\_local\_tree}$}
\STATE \fun{begin\_tree} ($c$, $c.k + 1$, \fun{end\_tree} ($c$))
\hfill\COMMENT{finalize and commence as above}
\ENDWHILE
\STATE local element count $n \leftarrow$ \fun{end\_tree} ($c$)
\hfill\COMMENT{we are done with the last local tree}
\ENDIF
\STATE $c.r.\mathrm{numbers} \leftarrow$ \mpifun{Allgather} ($n$)
\hfill\COMMENT{one integer per process}
\RETURN $c.r$
\hfill\COMMENT{also free $c$'s remaining members and $c$ itself}
\end{algorithmic}
\end{algorithm}
The source code for this section can be found in the files \pforestfun{search\_\-build}
\cite{Burstedde10}.
\section{Recursive partition search}
\seclab{traverse}
Frequently, points or geometrically more complex objects need to be located
relative to a mesh.
The task is to identify one or several elements touching, intersecting, or
otherwise relevant to that object.
There are varied examples of such objects and their uses:
\begin{itemize}
\item
Input/output:
\begin{itemize}
\item
Earthquake point sources to feed energy into seismology simulations
\item
Sea buoys for measuring the water level in tsunami simulations
\end{itemize}
\item
Numerical/technical:
\begin{itemize}
\item
Particle locations in tracer advection schemes
\item
Departure points in a semi-Lagrangian method
\end{itemize}
\item
Geometric shapes:
\begin{itemize}
\item
Randomly distributed grains to construct a porous med\-ium
\item
Trapezoids that represent the field of view of a virtual camera
\item
Constructive solid geometry objects for rigid body interactions
\end{itemize}
\end{itemize}
In the following, we refer to all those objects as points.
We distinguish three degrees of generality, depending on the requirements of
the application.
\begin{enumerate}
\item
Local:
When it suffices that each process shall identify strictly the points that are
inside its local partition, we may call \pforestfun{search}
\cite[Algorithm 3.1]{IsaacBursteddeWilcoxEtAl15}
to accomplish this task
economically and communication-free.
\item
Near:
The points are searched in a specified proximity around the local partition.
For example, in most numerical applications we work with direct neighbors in
the mesh.
Usually we collect one layer of ghost elements that encode the size, position,
and owner process of direct remote neighbors.
If the ghost elements are ordered by the SFC, they can be searched very much
like the local elements \cite{BursteddeWilcoxGhattas11}.
This principle can be extended to multiple layers of ghosts
\cite
[\pforestfun{ghost\_\-expand}]
{GuittetIsaacBursteddeEtAl16}.
However, the number of ghost layers must be limited, since the number of ghost
elements collected on any given process cannot be much larger than the number
of local elements due to memory constraints.
\item
Global:
Every process may potentially ask for the location of every point.
This variant is clearly the most challenging,
since a naive implementation would cause $\mathcal{O}(P^2)$ work and/or all-to-all
communication.
\end{enumerate}
This section is dedicated to developing a lean and general solution of the global
problem 3.
The main task is to
identify which points match the local partition and which do not,
and in the latter case, which process(es) they match.
It will be advantageous to follow the forest structure top-down to reduce the
number of binary searches and to tighten their ranges as much as possible.
To avoid traversing the forest more than once, we use the top-down context over
all relevant points as a whole.
Given the metadata we hold for the forest, the algorithm is communication-free.
While an all-to-all parallel search is not expected to scale,
our approach is efficient when the application requires data that is
near in a generalized sense but not accessible by Local and Near searches.
If, for example, we search through a neighborhood in space that extends to
a small multiple of the width of a process domain,
such as in a large-CFL Lagrangian method,
we prune the search for the domain outside of the
neighborhood
and the procedure scales well.
\subsection{Idea of the recursion}
\seclab{traverseprinciple}
We know that the local part of the search can be executed using \pforestfun{search}.
Assuming we remembered all points that do not match locally and run two nested
loops to search each of those points on every remote process, this would be
rather costly.
The alternative of sorting the coordinates of the points in order of the SFC
and comparing them with the partition markers is not applicable when the points
are extended geometric shapes.
Instead, we repurpose the idea behind \pforestfun{search} and apply it to the partition
markers instead of the local quadrants.
This inspires a top-down traversal of the partition of the forest without
accessing any element (which would be impossible anyway, since remote
elements are generally unknown to a given process).
To illustrate the principle, consider a branch quadrant of a given tree and
assume that we know the process that owns its first local descendant and the
one that owns its last.
These two processes define the relevant window onto the array of partition
markers.
Hence, we are done if the first and last process are identical:
This is the owner of all leaves below the branch.
Otherwise, we split the branch quadrant into its $2^d$ children and look for
them in the window of partition markers using a multi-target binary search.
This gives us for each child its first and last process, which allows us to
continue this thought recursively, using each child in turn as the current
branch.
The above procedure has several useful properties.
First, to bootstrap the recursion, we execute a loop over all (importantly, not
just the local) trees since a point may exist in any tree.
The partition markers allow us to determine for each tree which processes own
elements of it.
The ascending order of trees, processes, and partition markers inherent in the
SFC allows us to walk through this information quickly.
Furthermore, a leaf can only have one owner process, which means that the
recursion is guaranteed to terminate on a leaf, if not before, even when this
leaf is remote and thus not known to the current process.
Second, we process all points in one common recursion, which combined with
per-point user decisions of whether it intersects the branch allows us to prune
the search tree early and only follow the relevant points further down.
Both the search window and the set of relevant points shrink
with increasing depth of the branch.
Finally, it is possible to do optimistic matching, meaning returning
matches for a point and more than one branch, which may allow for cheaper
match queries in practice.
Any sharp and more costly matching can be delayed if this is advantageous.
The motivation for this is quite natural in view of searching extended
geometric shapes that may overlap with more than one process partition.
We illustrate the process in \figref{recursion}.
\begin{figure}
\begin{center}
\includegraphics[width=.4\columnwidth]{sC.pdf}
\hspace{3ex}
\includegraphics[width=.4\columnwidth]{sD.pdf}
\\[2ex]
\includegraphics[width=.4\columnwidth]{sF.pdf}
\hspace{3ex}
\includegraphics[width=.4\columnwidth]{sG.pdf}
\end{center}%
\caption{Steps of \pforestfun{search\_\-partition} on process $2$ (blue; cf.\
\figref{pmarkers}), which must locate 5 points (red), 3 of which
are not local.
Top left: recursion at level 1 after 5 search
quadrants (orange) have returned true in \fun{Match}.
Top right: The red quadrant is remote and the recursion stops
at the branch containing the point.
The green quadrant is remote as well and coincides with a leaf.
The two blue quadrants are local, one branch and one leaf.
The recursion continues with 4 level 2 quadrants (orange).
Bottom left: One search quadrant matches on the red process.
Bottom right: Alternate result obtained by an extension to the
algorithm
akin to \pforestfun{search}, continuing the recursion to the local leaves
(blue-white).
}%
\label{fig:recursion}%
\end{figure}%
\subsection{Technical description of \pforestfun{search\_\-partition}}
\seclab{traversetechnical}
As outlined in \secref{encoding}, \texttt{p4est}\xspace stores one partition marker per
process that contains the number of its first tree.
To find the processes relevant for each tree, we need to reverse this map.
In principle, we could run one binary search per tree to find the smallest
process that owns a part of it.
Instead of doing this and spending $K \log P$ time, we can exploit the
ascending order of both trees and processes, and the fact that the range
of processes for a tree is contiguous, to run the combined and optimized
multi-target search \scfun{array\_\-split} presented in \cite{IsaacBursteddeWilcoxEtAl15}.
We restate the precise convention for its input and output parameters in
\algref{scsplit}.
\begin{algorithm}
\caption{\scfun{array\_\-split} (input array $\mathfrak{a}$,
offset array $\mathfrak{O}$,
number of types $T$)}
\alglab{scsplit}
\begin{algorithmic}[1]
\REQUIRE $\mathfrak{a}$ is sorted ascending by some type $0 \le \mathfrak{a}[i].t < T$
(repetitions allowed)
\REQUIRE $\mathfrak{O}$ has $T + 1$ entries to be computed by this function
\ENSURE The positions $i$ of $\mathfrak{a}$ that hold entries of type $t$ are
$\mathfrak{O}[t] \le i < \mathfrak{O}[t + 1]$
\ENSURE If there are no entries of type $t$ in $\mathfrak{a}$, then
$\mathfrak{O}[t] = \mathfrak{O}[t + 1]$
\end{algorithmic}
\end{algorithm}
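A reference implementation of \algref{scsplit} fits in a few lines of Python; we use a linear scan for brevity, whereas the actual \scfun{array\_\-split} performs multi-target binary searches.

```python
def array_split(a, T, type_of=lambda e: e):
    """Algorithm array_split as a linear scan: a is sorted ascending by type
    0 <= t < T (repetitions allowed); returns the offset array O with T + 1
    entries such that entries of type t occupy positions O[t] <= i < O[t+1].
    Absent types yield coinciding offsets, O[t] == O[t+1]."""
    O = [0] * (T + 1)
    i = 0
    for t in range(T):
        while i < len(a) and type_of(a[i]) <= t:
            i += 1
        O[t + 1] = i
    return O
```

Splitting, for example, the tree numbers of the markers $\mathfrak{m}[0], \ldots, \mathfrak{m}[P]$ for $P = 4$ and $K = 2$ with $T = K + 1$ reproduces the offsets discussed below, including $\mathfrak{O}[0] = 0$ and $\mathfrak{O}[K + 1] = P + 1$.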
To create the map from tree to process, we use the partition markers $\mathfrak{m}$ as
input array $\mathfrak{a}$.
We exploit the fact that it has $P + 1$ entries and there is $P'$ minimal such
that $\mathfrak{m}[p'].\mathrm{tree} = K$ for all $p' \in [P' , P]$.
Usually, we have $P' = P$, but we may also encounter the case $P' < P$ if the
final range of processes $p \in [P' , P)$ has no elements and hence no trees.
Designating the tree number of the partition marker as the type for \scfun{array\_\-split},
we see that we must specify $T = K + 1$ types and the offset array $\mathfrak{O}$ must
have $K + 2$ entries.
\algref{scsplit} gives us
\begin{equation}
\mathfrak{O}[0] = 0
, \quad
\mathfrak{O}[K] = P' \le P
, \quad \text{and} \quad
\mathfrak{O}[K + 1] = P + 1
.
\end{equation}%
Now, running the loop over all trees $0 \le k < K$, we need to determine the
first and last processes $p_\mathrm{first}$, $p_\mathrm{last}$ owning elements of tree $k$.
We know for a fact that
\begin{equation}
p_\mathrm{last} = \mathfrak{O}[k + 1] - 1
.
\end{equation}%
This can be seen since $p_\mathrm{last} \ge \mathfrak{O}[k + 1]$ would mean that $p_\mathrm{last}$
could not have any elements of trees $k$ and less.
And if there were a $p'$ with $p_\mathrm{last} < p' < \mathfrak{O}[k + 1]$, then $p_\mathrm{last}$ would
not be the last process of tree $k$.
To determine $p_\mathrm{first}$, we distinguish the cases of (a) no process beginning
in this tree, (b) a process begins at its first descendant, and (c) a process
begins elsewhere in $k$.
We name this algorithm \texttt{processes} (\algref{processes}) and call it
with the type $t = k$ and the root quadrant of the tree.
\begin{algorithm}
\caption{\texttt{processes}
(offset array $\mathfrak{O}$, type $t$, quadrant $b$)
$\rightarrow$ ($p_\mathrm{first}$, $p_\mathrm{last}$)%
}
\alglab{processes}
\begin{algorithmic}[1]
\REQUIRE By context, $b$ is a quadrant in some tree $k$
\STATE $p_\mathrm{last} \leftarrow \mathfrak{O}[t + 1] - 1$
\hfill\COMMENT{this value is final}
\STATE $p_\mathrm{first} \leftarrow \mathfrak{O}[t]$
\hfill\COMMENT{initialization}
\IF{$p_\mathrm{first} \le p_\mathrm{last}$
\AND
\fun{begins\_\-with} ($p_\mathrm{first}$, $k$, $b$)}
\WHILE{$p_\mathrm{first}$ is empty}
\STATE
$p_\mathrm{first} \hspace{.1ex}\text{$++$}$
\hfill\COMMENT{empty processes use same type as their successor}
\ENDWHILE
\ELSE
\STATE
$p_\mathrm{first} \hspace{.1ex}\text{$--$}$
\hfill\COMMENT{there must be exactly one
earlier process for this type}
\ENDIF
\ENSURE Range $[p_\mathrm{first}, p_\mathrm{last}]$ is widest s.t.\ each end has at
least one item of type $t$
\end{algorithmic}
\end{algorithm}
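\algref{processes} translates directly into Python; the queries for the marker comparison and for process emptiness are passed as callables, since they stand in for the lookups in $\mathfrak{m}$ and $\mathfrak{E}$.

```python
def processes(O, t, begins_with_t, is_empty):
    """Algorithm processes: the widest range [p_first, p_last] such that each
    end holds at least one item of type t.  begins_with_t(p) and is_empty(p)
    are stand-ins for the partition marker and element count queries."""
    p_last = O[t + 1] - 1            # this value is final
    p_first = O[t]                   # initialization
    if p_first <= p_last and begins_with_t(p_first):
        while is_empty(p_first):
            p_first += 1             # empty processes use the successor's marker
    else:
        p_first -= 1                 # exactly one earlier process has type t items
    return p_first, p_last
```

With offsets $\mathfrak{O} = [0, 2, 3, 4]$, say, the three cases (a)--(c) of the discussion above can be exercised by choosing the callables accordingly.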
We show the toplevel call \pforestfun{search\_\-partition} in \algref{searchpartition}.
For clarity, we have excluded the local search of points (covered in detail in
\cite{IsaacBursteddeWilcoxEtAl15}) and reduced the presentation to the search
over the parallel partition.
Since it does not communicate, it can be called by any process at any time.
It identifies the relevant processes for each tree in turn as discussed above
and then invokes the recursion for each tree.
The recursion keeps track of the points to be searched by a user-defined
callback function \fun{Match}.
This callback is passed the range of processes relevant for the current branch
quadrant and may return false to indicate an early termination of the
recursion.
The points and the callback to query them do not need to relate to invocations
on other processes.
\begin{algorithm}
\caption{\pforestfun{search\_\-partition}
(point set $Q$,
callback \fun{Match})%
}
\alglab{searchpartition}
\begin{algorithmic}[1]
\STATE \scfun{array\_\-split} ($\mathfrak{m}$, $\mathfrak{O}$, $K + 1$)
\hfill\COMMENT{split partition markers $\mathfrak{m}$ by their tree number}
\FORALL{tree numbers $0 \le k < K$}
\STATE
$a \leftarrow \mathrm{root}$
\hfill\COMMENT{construct toplevel quadrant to begin}
\STATE ($p_\mathrm{first}$, $p_\mathrm{last}$) $\leftarrow$
\texttt{processes} ($\mathfrak{O}$, $k$, $a$)
\hfill\COMMENT{potential owners of quadrants in $k$}
\STATE \texttt{recursion} ($a$, $p_\mathrm{first}$, $p_\mathrm{last}$, $Q$, \fun{Match})
\hfill\COMMENT{bootstrap recursion for tree $k$}
\ENDFOR
\end{algorithmic}
\end{algorithm}
The recursion is detailed in \algref{searchrecursion}.
Each step takes a branch quadrant $b$ and the first and last processes that own
elements of it.
If they are the same, this is the owner of all elements below $b$ and the
recursion ends.
Otherwise,
the task is to find the first and last processes $p_{i,\mathrm{first}}$ and
$p_{i,\mathrm{last}}$ for each child $c_i$ of $b$.
Here we use \scfun{array\_\-split} with an input array that is the minimal window on the
markers, defined by
\begin{equation}
\eqnlab{Adefinerecursion}%
\mathfrak{a}[j] = \mathfrak{m}[p_\mathrm{first} + 1 + j]
\quad\text{for}\quad
0 \le j < \Delta p = p_\mathrm{last} - p_\mathrm{first}
.
\end{equation}%
This ensures that all elements of $\mathfrak{a}$ refer to processes beginning inside $b$.
We set their type to the number of the child of $b$ in which they begin, which
fixes $T = 2^d$ and yields
\begin{equation}
\mathfrak{O}[0] \ge 0
,
\quad
\mathfrak{O}[2^d] = \Delta p
,
\quad\text{and}\quad
p_{i,\mathrm{last}} = \mathfrak{O}[i + 1] + p_\mathrm{first}
.
\end{equation}%
If we want to repurpose
\texttt{processes}
to determine $p_{i,\mathrm{first}}$ and
$p_{i,\mathrm{last}}$, we need to make sure that the offset array indexes into processes,
which we accomplish by adding $p_\mathrm{first} + 1$ to each of its elements
(\lineref{Opluspfirst}) to correct for the window selection
\eqnref{Adefinerecursion}.
\begin{algorithm}
\caption{\texttt{recursion}
\newline\mbox{}\hfill
(quadrant $b$,
processes $p_\mathrm{first}$, $p_\mathrm{last}$,
point set $Q$,
callback \fun{Match})%
}
\alglab{searchrecursion}
\begin{algorithmic}[1]
\REQUIRE By context, $b$ is a quadrant in some tree $k$
\REQUIRE The first descendant of $b$ is owned by $p_\mathrm{first}$,
its last by $p_\mathrm{last}$
\STATE Set of matched points $M \leftarrow \emptyset$
\FORALL{$q \in Q$}
\IF{\fun{Match} ($b$, $p_\mathrm{first}$, $p_\mathrm{last}$, $q$)}
\STATE $M \leftarrow M \cup \{ q \}$
\hfill\COMMENT{determine the points that we keep}
\ENDIF
\ENDFOR
\IF{$M = \emptyset$ \OR $p_\mathrm{first} = p_\mathrm{last}$}
\RETURN since all matches failed and/or
all quadrants below $b$ belong to $p_\mathrm{first}$
\ENDIF
\STATE \scfun{array\_\-split} ($\mathfrak{m}[p_\mathrm{first} + 1, \ldots, p_\mathrm{last}]$, $\mathfrak{O}$, $2^d$)
\hfill\COMMENT{split by child id relative to $b$}
\FORALL{$c_i \in \texttt{Children}$ ($b$), $0 \le i < 2^d$}
\STATE ($p_{i,\mathrm{first}}$, $p_{i,\mathrm{last}}$) $\leftarrow$
\texttt{processes} ($\mathfrak{O} + p_\mathrm{first} + 1$, $i$, $c_i$)
\hfill\COMMENT{owning descendants of $c_i$}
\linelab{Opluspfirst}
\STATE \texttt{recursion} ($c_i$, $p_{i,\mathrm{first}}$, $p_{i,\mathrm{last}}$, $M$, \fun{Match})
\hfill\COMMENT{pursue remaining points to bottom}
\ENDFOR
\end{algorithmic}
\end{algorithm}
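To make the recursion concrete, the following Python sketch runs a one-dimensional analogue ($d = 1$, a single tree, no point set): quadrants are intervals over $2^L$ leaf slots, the partition marker $\mathfrak{m}[p]$ is the index of process $p$'s first leaf, and linear scans stand in for \scfun{array\_\-split}. All names and values are illustrative, not the \texttt{p4est} code. The sketch reconstructs the owner of every leaf purely from the markers, which is exactly the information the recursion exploits.

```python
L = 4
m = [0, 3, 8, 8, 13, 16]   # P = 5 processes; process 2 is empty (m[2] == m[3])

def owners(lo, level, p_first, p_last, out):
    """Descend until a branch [lo, lo + 2**(L - level)) has a single owner,
    recording (lo, level, owner); on entry, p_first and p_last own the
    first and last leaf below the branch, respectively."""
    if p_first == p_last:
        out.append((lo, level, p_first))
        return
    half = 1 << (L - level - 1)
    for c in (lo, lo + half):                # the two children
        # owner of the child's first leaf: last p whose marker is <= c
        first = p_first
        for p in range(p_first + 1, p_last + 1):
            if m[p] <= c:
                first = p
        # owner of the child's last leaf: the process just before the
        # first marker at or beyond the child's end
        last = p_last
        for p in range(p_first + 1, p_last + 1):
            if m[p] >= c + half:
                last = p - 1
                break
        owners(c, level + 1, first, last, out)
```

Bootstrapping with the root interval and the first and last non-empty processes terminates on maximal single-owner branches, never touching any element, and skips empty processes automatically since their markers coincide with their successors'.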
\section{Partition-independent file I/O}
\seclab{partindep}
This section is dedicated to a new parallel algorithm that is essential to
store a parallel forest on disk in a partition-independent format.
There are two aspects to this:
\begin{principle}[partition independence]
\label{principle:arbitrarympisize}
On writing, the organization and contents of file(s) written for a given state
of data shall be independent of the parallel partition of the simulation.
On reading, any number of processes shall be allowed to read such a file
(provided that the total memory available is sufficient).
\end{principle}
Our algorithm efficiently obtains a count of elements per tree across process
boundaries.
Its technical design is presented in \secref{pertree}.
Since the need for this algorithm is not obvious at first, we begin with a more
general discussion.
A central observation is that file I/O
can be slower than the simulation itself, and sometimes entirely
impractical, due to physical limits and the data size that needs to be written.
There are several avenues to circumvent this obstacle:
\begin{itemize}
\item
In-situ analysis refers to post-processing simulation data while it is still
in simulation memory.
The goal is often to compute image files or to infer
statistical information such that the simulation data may be discarded
afterwards.
While this approach eliminates a large part of the I/O volume, it is not
applicable to checkpointing and restart.
\item
Compression of the state, lossless or lossy, ideally by orders of magnitude,
and writing the compressed state to disk.
One idea is to apply off-the-shelf compression algorithms to the data as a
byte stream without regarding the numerical context.
Since compression usually requires header information for each chunk of data,
it is not trivial to design a partition-independent format.
\item
An application-specific compression is to use adaptivity to coarsen the mesh
and to restrict the simulation data accordingly.
Depending on the application, the CP cycle (\secref{cycles}) or the
sparse tree construction described in \secref{build} may be chosen for this
purpose.
Advantages are that code already written for error estimation and dynamic AMR
may be reused for this purpose, and that the coarse data remains in the same
format as the original.
\end{itemize}
We conclude that writing the state, either raw for small- to medium-size
simulations or compressed by adaptation, will be an indispensable operation
even when in-situ post-processing has become widely available.
There are multiple reasons for supporting partition independent I/O:
\begin{enumerate}
\item
Data is often transferred to a different computer for post-processing, having a
different number of processors and a different runtime/batch system.
\item
The scalability of post-processing algorithms is usually less than that of
simulation algorithms.
\item
We would like to make regression testing, reproduction and post-processing as
unrestrictive and convenient as possible further down the data processing chain.
\end{enumerate}
When using non-uniform meshes, writing numerical data must be accompanied by
writing the mesh, otherwise it will not be possible to load and recreate the
information necessary to analyze or continue the simulation in the future.
The least common denominator to create a partition-independent state is then to
write one file for the mesh and one file for each set of per-element numerical
data.
Let us consider the (simpler) situation of a one-tree forest first.
If we were to include $P$ and the arrays $\mathfrak{m}$ and $\mathfrak{E}$ in the mesh file,
it would not be partition independent.
Thus, the only header information permitted is the global element count $N =
\mathfrak{E}[P]$.
In practice, it is written by the first process, but any other process would be
able to write the header as well.
For each element we store its coordinates $x_i$ and the level, which are of
fixed size $s$.
The window of the mesh file to be written by process $p$ is
\begin{equation}
\eqnlab{filewindow}
\text{size of header }+ s \times \halfopen{ \mathfrak{E}[p], \mathfrak{E}[p + 1]}
,
\end{equation}
which is easily done in parallel using the MPI I/O standard.
On reading, each process learns the values $p$ and $P$ from the MPI environment
and reads the header to learn $N$.
This is sufficient to compute a new array $\mathfrak{E}$ \cite[equation
(2.5)]{BursteddeWilcoxGhattas11}, which is in turn sufficient to read the local
elements from the file by \eqnref{filewindow}.
The first element read fixes the local partition marker $\mathfrak{m}[p]$, while
an empty process sets it to an invalid state.
The partition markers are shared by one call to \mpifun{Allgather} and examined once
to repair the invalid entries due to empty processes.
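The read side of the one-tree case can be sketched as follows. Each process recomputes a fair cumulative count array $\mathfrak{E}$ from $N$ and $P$ alone and derives its byte window by \eqnref{filewindow}. The offset formula below is one common fair choice; the paper cites \cite[equation (2.5)]{BursteddeWilcoxGhattas11} for the formula actually used, which this sketch may only approximate. All names are ours.

```python
def element_offsets(N, P):
    # one fair choice of cumulative element counts over P processes:
    # process p receives elements [E[p], E[p + 1])
    return [(N * p) // P for p in range(P + 1)]

def file_window(p, E, header_size, s):
    # byte window of process p per the file-window equation:
    # header plus s bytes for each of the elements E[p] .. E[p+1]-1
    return (header_size + s * E[p], header_size + s * E[p + 1])
```

With $N = 10$, $P = 4$, a 16-byte header and $s = 8$, process 1 obtains the window $[32, 56)$, and the four windows tile the file contiguously.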
For a multi-tree forest, we encounter two additional tasks.
The first is writing the number of trees and their connectivity to the file.
In \texttt{p4est}\xspace, we exploit the fact that the connectivity is known to each
process and include it in the header.
The second task is deeper:
When reading the window of local elements, it is not known which tree(s) they
belong to.
Of course, we may store the tree number in each element, but this would be
redundant and add some dozen percent to the file size.
One way to encode the tree assignment of elements efficiently is to
postulate an array $\mathfrak{N}$ of cumulative global element counts over trees and
to include it in the header.
\subsection{Determining element counts per tree}
\seclab{pertree}
Our goal is to make the global number of elements in every tree known to every
process.
As discussed above, this operation supports loading a forest from a
partition-independent file stored to disk.
In a more abstract sense, we may think of it as the completion of shared
information in that we count the global number of elements not only per process
but also per tree.
Let $N_k > 0$ be the global number of elements in tree $k$ that is generally
not available from the distributed data structure.
Our goal is to compute these counts and encode them in a cumulative array $\mathfrak{N}$
with $K + 1$ members and increasing entries,
\begin{subequations}
\begin{gather}
\eqnlab{cumulativeN}
\mathfrak{N}[k'] = \sum_{k = 0}^{< k'} N_k
,
\quad 0 \le k' \le K
\qquad\Rightarrow
\\
\mathfrak{N}[0] = 0
,
\qquad
\mathfrak{N}[k + 1] - \mathfrak{N}[k] = N_k
\:,
\qquad
\mathfrak{N}[K] = \sum_{k = 0}^{< K} N_k = N
.
\end{gather}%
\end{subequations}%
This format is convenient in facilitating binary searches through the results.
Since any $N_k$ may be greater than or equal to $2^{32}$ and thus requires 64
bits of storage, holding cumulative instead of per-tree counts does not change
the memory footprint.
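The cumulative format and the binary search it facilitates can be sketched in a few lines (function names are ours): `cumulative` builds $\mathfrak{N}$ from the per-tree counts $N_k$ per \eqnref{cumulativeN}, and `tree_of_element` recovers the tree containing a global element index.

```python
from bisect import bisect_right

def cumulative(counts):
    # N[k'] = sum of N_k over k < k', so N[0] = 0 and N[K] = N
    N = [0]
    for n in counts:
        N.append(N[-1] + n)
    return N

def tree_of_element(Nfrak, e):
    # largest k with Nfrak[k] <= e: the tree holding global element e
    return bisect_right(Nfrak, e) - 1
```

For counts $(3, 1, 4)$ we obtain $\mathfrak{N} = (0, 3, 4, 8)$; global element 3 is the first element of tree 1, and element 7 is the last element of tree 2.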
We aim to avoid the communication and computation cost $\mathcal{O}( K P )$
of a naive implementation, i.e., one that has every process count the elements
in every tree.
Our proposal is to define a unique process responsible for computing the
element count in any given tree,
and to minimize communication by sending at most one
message per process to obtain the counts.
This shall hold even if a process is responsible for more than one tree.
Multiple conventions are conceivable for deciding on the responsible process,
where we demand that the decision is made without communication.
We also demand that all pairs of sender and receiver processes are decided
without communication.
One suitable choice is the following.
\begin{convention}
\label{convention:treecounter}
The process $p$ responsible for computing the number of elements in tree $k$,
which we denote by $p_k$, is the one that owns the first element in $k$,
unless more than one process has the first descendant of tree $k$ as their
partition marker.
In the latter case, we take $p_k$ as the first process in that set; such a
process is necessarily empty.
\end{convention}
This convention ensures that the range of trees that a process is responsible
for is contiguous (or empty).
In addition, it guarantees that $k < k'$ implies $p_k \le p_{k'}$.
Allowing for empty processes to be responsible fixes $p_0 = 0$ in all cases.
\begin{property}
\label{property:emptyonetree}
An empty process is responsible for at most one tree.
\end{property}
\begin{proof}
If an empty process were responsible for two different trees, both would have
to occur in its partition marker, which is impossible by definition.
\end{proof}
Let us proceed by listing the phases of the algorithm $\mathfrak{N} \leftarrow
\pforestfun{count\_\-pertree}$.
\begin{enumerate}
\item
Determine for each process $p$ the number of trees
that it is
responsible for,
\begin{equation}
K_p = \# \{ k : p_k = p \}
, \qquad
0 \le K_p \le K
.
\end{equation}%
We may additionally define an array $\mathfrak{K}$ of cumulative counts,
\begin{equation}
\eqnlab{treecountsoffsets}
\mathfrak{K}[p'] = \sum_{p = 0}^{< p'} K_p
\qquad\Rightarrow\qquad
\mathfrak{K}[0] = 0
, \qquad
\mathfrak{K}[P] =
K
.
\end{equation}%
Due to the design of the partition markers and \conventionref{treecounter},
every process populates these arrays identically in $\mathcal{O}(\max\{ K, P \})$
time, requiring no communication.
We provide \algref{treecount} to detail this computation.
\begin{algorithm}
\caption{\texttt{responsible} (computes arrays of tree counts $(K_p)$,
tree offsets $\mathfrak{K}$)}
\begin{algorithmic}[1]
\STATE
$p \leftarrow 0$;
$k \leftarrow 0$;
$K_0 \leftarrow 0$
\LOOP
\ENSURE Process $p$ is the minimum of all $p'$ with
$k = \mathfrak{m}[p'].\mathrm{tree}$ (cf.\ \propertyref{gfptreeownership})
\ENSURE Responsibility for $k$ has been assigned to either $p$ or $p - 1$
\REPEAT
\STATE
$p \hspace{.1ex}\text{$++$}$;
$K_p \leftarrow 0$
\hfill\COMMENT{find the first process that begins in a later tree}
\UNTIL{$\mathfrak{m}[p].\mathrm{tree} > k$}
\STATE
$k \hspace{.1ex}\text{$++$}$
\hfill\COMMENT{proceed to that tree incrementally}
\WHILE{$k < \mathfrak{m}[p].\mathrm{tree}$}
\STATE
$K_{p - 1} \hspace{.1ex}\text{$++$}$;
$k \hspace{.1ex}\text{$++$}$
\hfill\COMMENT{while assigning in-between trees}
\ENDWHILE
\IF{$k = K$}
\STATE
$K_{p'} \leftarrow 0$ \textbf{forall} $p' \in [p + 1, P)$;
\textbf{break loop}
\hfill\COMMENT{assign remaining slots}
\ELSIF{%
\fun{begins\_\-with} ($p$, $k$, $\mathrm{root}$)%
}
\STATE $K_{p} \hspace{.1ex}\text{$++$}$
\hfill\COMMENT{it is legal if $p$ is empty}
\ELSE
\STATE $K_{p - 1} \hspace{.1ex}\text{$++$}$
\hfill\COMMENT{$p - 1$ is never empty}
\ENDIF
\ENDLOOP
\STATE Compute
$\mathfrak{K}$ from $(K_p)$ by \eqnref{treecountsoffsets}
\end{algorithmic}
\alglab{treecount}
\end{algorithm}
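\conventionref{treecounter} can also be restated directly, if naively: the sketch below assigns each tree to its responsible process in $\mathcal{O}(KP)$ time, whereas \algref{treecount} achieves the same result incrementally in $\mathcal{O}(\max\{K, P\})$. Markers are modeled as lexicographically ordered (tree, position) pairs with a high sentinel in slot $P$; all names are ours.

```python
def responsible(markers, K):
    """markers: P + 1 nondecreasing (tree, pos) pairs, markers[P] a
    sentinel beyond all trees.  Returns the per-process tree counts
    K_p of Convention treecounter."""
    P = len(markers) - 1
    Kp = [0] * P
    for k in range(K):
        first = (k, 0)                      # first descendant of tree k
        tied = [p for p in range(P) if markers[p] == first]
        if tied:
            pk = tied[0]                    # first process marking (k, 0)
        else:                               # owner of k's first element
            pk = max(p for p in range(P) if markers[p] < first)
        Kp[pk] += 1
    return Kp
```

With markers $((0,0), (0,5), (2,0), (2,0), (3,0))$ and $K = 3$, process 0 is responsible for tree 0, process 1 for tree 1 (it owns its first element), and the empty process 2 for tree 2, so $(K_p) = (1, 1, 1, 0)$.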
\item
While the previous step is identical on all processes, let us now take the
perspective of an individual process $p$ with $K_p > 0$.
It must obtain the number of elements in each of the $K_p$ trees it is
responsible for and store the result, say, in an array $\mathfrak{n}$ of the same
length.
We initialize each slot with the number of process-local elements in that
tree,
\begin{equation}
\mathcal{K}_i = \mathrm{trees} \left\lbrack \mathfrak{K}[p] + i \right\rbrack
, \quad
\mathfrak{n} [i] = \# \mathcal{K}_i.\mathrm{elements}
, \quad
\text{for all $i \in \halfopen{ 0, K_p }$.}
\end{equation}%
\begin{proposition}
\label{proposition:allbutfinal}
The counts in all but the last element of $\mathfrak{n}$ are final,
\begin{equation}
\mathfrak{n}[i] = N_k
\quad\text{for all $k - \mathfrak{K}[p] = i \in \halfopen{ 0, K_p - 1 }$.}
\end{equation}%
\end{proposition}%
\begin{proof}%
If process $p$ is empty, it is responsible for at most one tree, $K_p \le
1$, so there is nothing to prove.
Otherwise, it owns the first element of every tree it is responsible for.
This means that all but the last one of these trees are complete on $p$ and
their number of local elements is also their global number of elements.
\end{proof}
\item
\label{itemmpiirecv}
It remains to determine the number of remote elements in the last tree
$k = \mathfrak{K}[p + 1] - 1$ that a process is responsible for.
They are necessarily located on higher processes.
First, we add the elements of the subsequent processes that begin and end
in this same tree.
Identifying these processes is best expressed as a C-style code snippet:
\begin{equation}
\eqnlab{incempties}
\text{\textbf{for}
($q \leftarrow p + 1$;
$q < P$ \textbf{and} $K_q = 0$;
$q \hspace{.1ex}\text{$++$}$)
\texttt{\{\}}}
\end{equation}
The addition itself is quick by using the cumulative element counts,
\begin{equation}
\mathfrak{n}_\Delta = \mathfrak{E}[q] - \mathfrak{E}[p + 1]
,
\end{equation}
where we benefit from the convention that $\mathfrak{E}[P] = N$.
If the process $q$ that the loop \eqnref{incempties} ends with begins on the
next highest tree, it does not contribute elements to $k$, and we set
$\mathfrak{n}_q = 0$.
This condition applies as well if there are no more processes, $q = P$, due
to the definition of $\mathfrak{m}[P]$.
Otherwise, $k$ is $q$'s first local tree, and we require $q$ to send a message
that contains its local count of elements in this tree, which $p$ receives as
$\mathfrak{n}_q$.
Either way, the final element count is obtained by the update
\begin{equation}
\mathfrak{n}[K_p - 1] \leftarrow \mathfrak{n}[K_p - 1] + \mathfrak{n}_\Delta + \mathfrak{n}_q
.
\end{equation}
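The loop \eqnref{incempties} and the subsequent use of the cumulative counts can be sketched together in a serial model (arrays stand in for distributed state; names are ours):

```python
def remote_count_delta(p, Kp, E):
    """Skip the processes after p that begin and end in p's last
    responsible tree (they have K_q = 0) and return the loop's final
    q together with their combined element count."""
    P = len(Kp)
    q = p + 1
    while q < P and Kp[q] == 0:
        q += 1
    # combined count, relying on the convention E[P] = N
    return q, E[q] - E[p + 1]
```

For $(K_p) = (2, 0, 0, 1)$ and $\mathfrak{E} = (0, 3, 5, 6, 9)$, process 0 skips processes 1 and 2, arriving at $q = 3$ with $\mathfrak{n}_\Delta = \mathfrak{E}[3] - \mathfrak{E}[1] = 3$.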
\item
We have seen above that some processes are required to send a message
containing the count of local elements in their first local tree to a lower
process.
By the reasoning in~\ref{itemmpiirecv}., the processes that send a message
are precisely those that are responsible for at least one tree and own at
least one element in a preceding tree.
The condition for process $p$ being a sender is thus
\begin{equation}
\eqnlab{conditionsend}
K_p > 0
\quad \wedge \quad
\mathfrak{m}[p].\mathrm{tree} < \mathfrak{K}[p]
.
\end{equation}
What is the receiving process? Again, the answer is a short loop:
\begin{equation}
\eqnlab{decempties}
\text{\textbf{for}
($q \leftarrow p - 1$;
$K_q = 0$;
$q \hspace{.1ex}\text{$--$}$)
\texttt{\{\}}}
\end{equation}
\begin{property}
\label{property:underrun}
It is guaranteed that the loop does not underrun $q = 0$.
\end{property}
\begin{proof}
The initialization is safe due to \eqnref{conditionsend}, which implies that
a sender always satisfies $p > 0$.
Furthermore, if all preceding processes had $K_{p'} = 0$, then $p$ would be
responsible for tree $k = 0$, which would contradict \eqnref{conditionsend}.
\end{proof}
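Condition \eqnref{conditionsend} and loop \eqnref{decempties} translate almost verbatim into a serial sketch (names are ours; \texttt{marker\_tree} holds $\mathfrak{m}[p].\mathrm{tree}$ per process):

```python
def is_sender(p, Kp, Kfrak, marker_tree):
    # condition (conditionsend): p is responsible for at least one
    # tree and its partition marker lies in a preceding tree
    return Kp[p] > 0 and marker_tree[p] < Kfrak[p]

def receiver(p, Kp):
    # loop (decempties): walk down past processes responsible for no
    # tree; Property underrun guarantees termination before q < 0
    q = p - 1
    while Kp[q] == 0:
        q -= 1
    return q
```

Continuing the example $(K_p) = (2, 0, 0, 1)$, hence $\mathfrak{K} = (0, 2, 2, 2, 3)$: if process 3's marker lies in tree 1, it is a sender with receiver 0, while process 0 never sends.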
\item
\label{itemallgatherv}
At this point, every process has computed $\mathfrak{n}$, the global count of
elements in every tree that it is responsible for.
If such distributed knowledge suffices for the application, we may stop here.
If it should be shared instead, we can reuse the arrays $(K_p)$ and $\mathfrak{K}$
to feed one call to \mpifun{All\-gatherv} (they have the correct format by design).
The amount of data gathered is one long integer per tree, thus the total data
size is $K$ times 8 bytes.
\end{enumerate}
Computing the cumulative counts $\mathfrak{N}$ from the freshly established values
$N_k$ is straightforward by \eqnref{cumulativeN}, assuming that the final phase
\ref{itemallgatherv} is executed to share $(N_k)$ between all processes.
The algorithm \pforestfun{count\_\-pertree} does work of the order $\mathcal{O} (\max \{ K, P \})$, where
the constant is negligible since the computations are rather minimalistic.
What is more important is that we send strictly fewer than $\min \{ K, P \}$
point-to-point messages, all of them carrying one integer, and each process
being sender and/or receiver of at most one message.
We expect such a communication to be fast.
Going back to our original motivation to store and load partition-independent
forest files, we may add that, mathematically speaking, we could skip phase 5
and delegate the writing of $\mathfrak{N}$ to parallel MPI I/O.
In practice, however, it is simpler and quite probably quicker to execute phase
5 and have rank zero write all of $\mathfrak{N}$ into the file header, since it writes
the rest of the header anyway.
\subsection{Saving and loading numerical data}
\seclab{saveloadvariable}
Considering I/O of the numerical data, the concept of linear tree storage
proves useful in the sense that the data file neither requires a header nor
distinguishes between a one-tree mesh or a forest.
All metadata required to understand the format is contained in the saved mesh
file.
Supposing the data size per element is fixed, we use \eqnref{filewindow} with
that size $s$ to identify the window onto the data file to write into.
If, on the other hand, the data size per element is variable, we may use a
fixed-size MPI write of the per-element data sizes into an additional file, or
into the header of the data file, to have this information available when
reading the data.
Before writing the data,
we would require a call to
\mpifun{Allgather} of the process-local sums over the per-element data sizes
in order to identify the window onto the data file.
This information serves just to compute the window.
It is not written to the file to preserve partition-independence.
On reading, we proceed analogously:
Each process reads the fixed-size data first to determine the local window of
the element sizes, then feeds their sum into an \mpifun{Allgather} call, which is
sufficient to identify the window onto the variable-size data that it reads.
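The window computation for variable-size data reduces to an exclusive prefix sum over the gathered per-process totals. A serial model of the \mpifun{Allgather}-based step (names are ours):

```python
def variable_windows(local_sums):
    """Model of the Allgather: every rank learns all per-process sums
    of element data sizes; the exclusive prefix sum over ranks yields
    each process's byte window onto the variable-size file."""
    offsets = [0]
    for s in local_sums:
        offsets.append(offsets[-1] + s)
    return [(offsets[p], offsets[p + 1]) for p in range(len(local_sums))]
```

For local sums $(10, 0, 6)$ the windows are $[0, 10)$, $[10, 10)$ and $[10, 16)$; the empty window of the middle process is harmless.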
\subsection{Optimizations and alternatives}
\seclab{ioalternatives}
We rely on the MPI file I/O standard for writing data in parallel to large
files.
Since we know both the overall file size and the window that a process has to
read or write, we are able to generalize to a configurable number $M \ge 1$ of
smaller files, which have identical sizes up to plus/minus one byte (or another
convenient integer unit).
An advantage of this method is that only the relevant processes write to each
file, which reduces its number of writers to roughly $P / M$.
On the other hand, both the MPI I/O layer and the parallel file system perform
data striping transparently and specifically targeted to the
architecture and layout of the storage system.
We can reasonably expect that these layers are better optimized than anything
that we can create in application space without knowing about the properties of
the drivers and hardware.
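Splitting a file of known total size into $M$ pieces of near-identical size may be sketched as follows (the cut formula is one natural choice; names are ours):

```python
def file_ranges(total_bytes, M):
    """Global byte ranges of M files whose sizes differ by at most one
    byte (or one unit, if total_bytes counts another integer unit)."""
    cuts = [(total_bytes * i) // M for i in range(M + 1)]
    return [(cuts[i], cuts[i + 1]) for i in range(M)]
```

For example, 10 bytes over $M = 3$ files yields sizes $(3, 3, 4)$, differing by at most one byte as required.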
By the current design, every process holds the information required to write the
header of the file.
A slight generalization of the $P$-to-$M$ map would be to replicate the header
for each of the smaller files to avoid a possible bottleneck when all processes
attempt to read the header from the same location on disk.
In a second pass, internal or external to the simulation, the files may be
additionally and independently compressed by standard algorithms.
Again, it
seems more economic and general to leave such additional chunking and
compression to the deeper layers of the parallel file system.
\section{Auxiliary communication routines}
\seclab{auxcomm}
Standard element-based numerical methods lead to a symmetric communication
pattern, that is, every sender also receives a message and vice versa.
The data sent per element is most often of fixed size, thus every process is
able to specify the message size in a call to, say, \mpifun{Irecv}.
In other applications,
the communication pattern may no longer be symmetric, which means that the
receiver processes have to be notified about the senders.
In addition, if the data size per element is variable, we also have to inform
the receiver about the message sizes in order to transfer them when
repartitioning the mesh.
We discuss solutions for both issues in this section.
\subsection{Reversing the communication pattern}
\seclab{notify}
Pattern reversal can be understood as the transposition of the sender-receiver
matrix, which is an operation available from parallel linear algebra packages;
see e.g.\ \cite{MirzadehGuittetBursteddeEtAl16}.
When trying to minimize code dependencies, we may ask about an efficient way
to code the reversal ourselves.
A parallel algorithm based on a binary tree has been discussed in
\cite{IsaacBursteddeGhattas12}.
Without going into detail, we propose an extension that uses an $n$-ary tree,
where the number of children at each level is configurable, to reduce the
depth and thus the latency of the operation.
We will refer to this algorithm as \pforestfun{nary\_\-notify}.
\subsection{Data transfer on repartitioning}
\seclab{transfer}
Like all \texttt{p4est}\xspace algorithms, \pforestfun{build} and \pforestfun{search\_\-partition} are agnostic of
the application.
They provide callbacks \fun{Add} and \fun{Match} as a convenient way for the application
to access and modify per-element data.
By its original design, the \texttt{p4est}\xspace implementation manages a per-element
payload of user-defined size, which is convenient for storing flags or other
application metadata.
This data is preserved during RC+B\xspace for elements that do not change,
and may be reprocessed by callbacks for elements that do.
The data is sent and received transparently during partition P, which means
that it persists throughout the simulation.
However, we do not recommend to store numerical data via the payload mechanism,
since this memory is expected to fragment progressively by adaptation.
It will be more cache efficient to allocate a contiguous block of memory that
is accessed in sequence of the local elements
\cite{BursteddeBurtscherGhattasEtAl09}, either as an array of structures or as
multiple arrays.
Such memory is allocated in application space, and so far there is no general
function to transfer it when the forest is partitioned.
In the following, we
outline algorithms to accomplish this for fixed and variable per-element
data sizes, respectively.
One standard example that would call the fixed size transfer is a discontinuous
Galerkin method using a fixed polynomial degree for discretization
\cite{FischerKruseLoth02}.
An $hp$ method, on the other hand, would call the variable size variant.
Another example for using the latter may be a finite element method that
assigns every node value on an inter-element boundary to exactly one owner
element, leading to varying numbers of owned nodes per element.
Furthermore,
\pforestfun{build} creates a forest that is composed of selected elements explicitly
added by the algorithm and coarse fill-in elements that complete the tree.
The former will be constructed from application data, while the latter might
remain empty.
When we transfer the element data of the result forest in an ensuing call to
partition, we would use the variable size transfer to safely handle data
sizes of zero.
As described in \secref{encoding}, the partition of the forest is stored by the
markers $\mathfrak{m}$ and the local element counts $\mathfrak{E}$.
If we consider a forest before and after partition (an operation that adheres
to \principleref{complementarity}), the only difference between the two forests
is in the values of the partition markers and the assignment of local elements
to processes.
To determine the MPI sender and receiver pairs, we compare the element counts
$\mathfrak{E}$ before and after but may ignore all other data fields inside the forest
objects.
The message sizes follow from $\mathfrak{E}$ as well.
Thus, the fixed size data transfer is algorithmically similar to the transfer
of elements during partitioning.
We refer to this operation as
\begin{equation*}
\text{\pforestfun{transfer\_\-fixed}
($\mathfrak{E}$ before/after, data array before/after, data size).}
\end{equation*}
Note that it is possible to split it into a begin/end pair to perform
computation while the messages are in transit.
In practice, we proceed along the lines of \algref{fixedrepartitionscheme}.
\begin{algorithm}
\caption{fixed size data transfer (forest $f$, data $d_\mathrm{before}$)
$\to$ data $d_\mathrm{after}$}
\alglab{fixedrepartitionscheme}
\begin{algorithmic}[1]
\STATE $\mathfrak{E}_\mathrm{before} \leftarrow f.\mathfrak{E}$
\hfill\COMMENT{deep copy element counts before partition}
\linelab{fixedone}
\STATE \pforestfun{partition} ($f$)
\hfill\COMMENT{modify members of forest in place}
\STATE $\mathfrak{E}_\mathrm{after} \leftarrow f.\mathfrak{E}$
\hfill\COMMENT{reference counts after partition}
\linelab{fixedtwo}
\STATE $d_\mathrm{after} \leftarrow$ allocate fixed size data ($f$)
\hfill\COMMENT{layout known from forest}
\STATE \pforestfun{transfer\_\-fixed}
($\mathfrak{E}_\mathrm{before}$,
$\mathfrak{E}_\mathrm{after}$,
$d_\mathrm{before}$,
$d_\mathrm{after}$,
size (element data)%
)
\STATE free ($d_\mathrm{before}$) ; free ($\mathfrak{E}_\mathrm{before}$)
\hfill\COMMENT{memory no longer needed}
\end{algorithmic}
\end{algorithm}
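The sender/receiver pairs of \pforestfun{transfer\_\-fixed} follow from intersecting old and new ownership ranges of global element indices, as determined by $\mathfrak{E}_\mathrm{before}$ and $\mathfrak{E}_\mathrm{after}$. The quadratic sketch below conveys the idea; an actual implementation can walk both arrays simultaneously in linear time. Names are ours.

```python
def transfer_pairs(E_before, E_after):
    """(receiver, sender, count) triples: the elements in the
    intersection of a sender's old range and a receiver's new range
    are exactly those that must be moved (or copied locally)."""
    P = len(E_before) - 1
    pairs = []
    for r in range(P):
        for s in range(P):
            lo = max(E_after[r], E_before[s])
            hi = min(E_after[r + 1], E_before[s + 1])
            if lo < hi:
                pairs.append((r, s, hi - lo))
    return pairs
```

For $\mathfrak{E}_\mathrm{before} = (0, 4, 4, 6)$ and $\mathfrak{E}_\mathrm{after} = (0, 2, 4, 6)$, process 0 keeps two elements and sends two to process 1, while process 2 keeps its two.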
When the data size varies between elements, we propose to store the sizes in an
array with one integer entry for each local element.
As with the fixed size, the data itself is contiguous in memory in ascending
order of the local elements.
A non-redundant implementation calls the fixed size transfer with the
array of sizes to make the data layout available to the destination processes.
With this information known, the memory for the data after partition is
allocated in another contiguous block and the transfer for the data of variable
size executes.
We have implemented this generalized communication routine as
\begin{equation*}
\text{\pforestfun{transfer\_\-variable}
($\mathfrak{E}$ before/after, data before/after, sizes before/after).}
\end{equation*}
Thus, we pay a second round of asynchronous point-to-point communication for
the benefit of code simplicity and reuse.
Alternatively, it would be possible to rewrite the algorithm using a polling
mechanism to minimize wait times at the expense of CPU load.
The listing for the combined partition and transfer is
\algref{variablerepartitionscheme}.
\begin{algorithm}
\caption{variable size data transfer
\newline\mbox{}\hfill
(forest $f$, data $d_\mathrm{before}$, sizes $s_\mathrm{before}$)
$\to$ (data $d_\mathrm{after}$, sizes $s_\mathrm{after}$)}
\alglab{variablerepartitionscheme}
\begin{algorithmic}[1]
\STATE
\hfill\COMMENT{%
partition as in \algref{fixedrepartitionscheme},
Lines~\ref{line:fixedone}--\ref{line:fixedtwo}}%
\hfill\mbox{}
\STATE $s_\mathrm{after} \leftarrow$ allocate array of sizes ($f$)
\hfill\COMMENT{layout known from forest}
\STATE \pforestfun{transfer\_\-fixed}
($\mathfrak{E}_\mathrm{before}$,
$\mathfrak{E}_\mathrm{after}$,
$s_\mathrm{before}$,
$s_\mathrm{after}$,
size (integer)%
)
\STATE $d_\mathrm{after} \leftarrow$ allocate variable size data ($f$, $s_\mathrm{after}$)
\STATE \pforestfun{transfer\_\-variable}
($\mathfrak{E}_\mathrm{before}$,
$\mathfrak{E}_\mathrm{after}$,
$d_\mathrm{before}$,
$d_\mathrm{after}$,
$s_\mathrm{before}$,
$s_\mathrm{after}$)
\STATE free ($s_\mathrm{before}$) ; free ($d_\mathrm{before}$) ; free ($\mathfrak{E}_\mathrm{before}$)
\hfill\COMMENT{memory no longer needed}
\end{algorithmic}
\end{algorithm}
\section{Demonstration: parallel particle tracking}
\seclab{particles}
To exercise the algorithms introduced above, we present a particle tracking
application.
The particles move independently of each other by a gravitational attraction
to several fixed-position suns, following Newton's laws.
Each particle is assigned to exactly one quadrant that contains it and, by
consequence, to exactly one process.
The mesh dynamically adapts to the particle positions by enforcing the rule
that each element may contain at most $E$ particles.
If more than this amount accumulate in any given element, it is refined.
If the combined particle count in a family of leaves drops below $E / 2$, they
are coarsened into their parent.
The features used by this example are:
\begin{itemize}
\item Explicit Runge-Kutta (RK) time integration of selectable order:
We use schemes where only the first subdiagonal of RK coefficients
is nonzero, thus we store just one preceding stage.
This applies to explicit Euler, Heun's methods of order 2 and 3 and the
classical RK method of order 4.
\item Weighted partitioning \cite{BursteddeWilcoxGhattas11}:
Each quadrant is assigned a weight approximately proportional to
the number of particles it contains.
This way the RK time integration is load balanced between the processes.
\item Partition traversal (\secref{traverse}):
In each RK stage, the next evaluated positions of the local particles
are bulk-searched in the partition.
If found on the local process, we continue
a local search to find its next local owner quadrant.
If found on a remote process, we send it to that process for the
next RK stage.
\item Reversal of the communication pattern (\secref{notify}):
A process does not know from which processes it receives new particles,
thus we call the $n$-ary notify function to determine the \mpifun{Irecv}
operations we need to post.
\item Variable-size parallel data transfer on partitioning
(\secref{transfer}):
Since the amount of particles per quadrant varies, we send
variable amounts of per-element data from the old owners to the new.
\item Construction of a sparse forest (\secref{build}):
At selected times of the simulation, we use a small subset of particles
to build a new forest, where each of the selected particles is placed
in a quadrant of a given maximal level.
The rest of this forest is filled with the coarsest possible quadrants.
Depending on the setup, it has fewer elements and is
thus better suited for offline post-processing or visualization.
\item Partition-independent I/O (\secref{pertree}):
We compute the cumulative per-tree element counts for both the
current and the sparse forests.
\end{itemize}
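The RK property exploited above, namely that only the first subdiagonal of coefficients is nonzero, means stage $i$ depends solely on stage $i - 1$, so a single preceding stage suffices in memory. A hedged sketch of one such step (for these schemes the node $c_{i+1}$ equals $a_{i+1,i}$; the function and parameter names are ours, not the application's API):

```python
def rk_subdiagonal_step(f, t, y, dt, a_sub, b):
    """One explicit RK step storing only the single preceding stage.
    a_sub[i] = a_{i+1,i} (first subdiagonal), b = final weights."""
    k = f(t, y)                       # stage 0
    acc = b[0] * k                    # running weighted sum of stages
    for i, a in enumerate(a_sub):
        k = f(t + a * dt, y + dt * a * k)   # stage i + 1 from stage i
        acc += b[i + 1] * k
    return y + dt * acc
```

With `a_sub = [0.5, 0.5, 1.0]` and `b = [1/6, 1/3, 1/3, 1/6]` this reproduces the classical fourth-order RK method; with `a_sub = []` and `b = [1.0]` it degenerates to the explicit Euler step.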
\subsection{Simulation setup}
\seclab{simsetup}
The problem is formulated in the 3D unit cube $[0, 1]^3$.
We mesh it with one tree except where explicitly stated.
If a particle leaves the domain, it is erased, thus the global number may drop
with time.
The three suns are not moving.
The initial particle distribution is Gau\ss{}-normal.
Each particle has unit mass and initial velocity $0$ and the gravitational
constant is $\gamma = 1$; see \tabref{suns} for details.
\begin{table}
\begin{center}
\begin{tabular}{ccc|c}
$x$ & $y$ & $z$ & mass \\
\hline
.48 & .58 & .59 & .049 \\
.58 & .41 & .46 & .167 \\
.51 & .52 & .42 & .060 \\
\end{tabular}
\hspace{5ex}
\begin{tabular}{c|c}
\multicolumn{2}{c}{particle distribution (Gau\ss)} \\
\hline
center & $\mu = (.3, .4, .5)$ \\
standard deviation & $\sigma = .07$ \\
\end{tabular}
\end{center}
\caption{The three suns (left)
and the parameters of the initial particle distribution (right).}%
\label{tab:suns}%
\end{table}%
The parameters of a simulation include the global number of particles, the
maximum number $E$ of particles per element, minimum and maximum levels of
refinement, the order of the RK method, the time step $\Delta t$ and the final
simulated time $T$.
The initial particle distribution and mesh are created in a setup loop.
Beginning with a minimum-level uniform mesh, we compute the integral of the
initial particle density per element and normalize by the integral over the domain.
We do this numerically using a tensor-product two-point Gau\ss{} rule.
From this, we compute the current number of particles in each element, compare
it with $E$ and refine if necessary.
After refinement, we partition and repeat the cycle until the loop terminates
by sufficient refinement or the specified maximum level is reached.
Only then do we allocate the local particles' memory and create the particles
using per-element uniform random sampling.
Thus, neither the global particle number nor their distribution is met exactly,
but both approach the ideal with increasing refinement.
To make the test on the AMR algorithms as strict as possible, the parallel
particle redistribution and the mesh refinement and partitioning occur once in
each stage of each RK step.
We choose the time step $\Delta t$ proportional to the characteristic element
length to establish a typical CFL number.
Thus, we may create a scaling series of increasing problem size (that is,
particle count and resolution) at fixed CFL.
Our non-local particle transfer is designed to support arbitrarily large CFL,
where the amount of senders and receivers for each process effectively depends
on the CFL only, even if the problem size is varied by orders of magnitude.
We run each series to a fixed final time $T$, which produces a certain
distribution of the particles in space (see \figref{plot7r4b}).
\begin{figure}
\begin{center}
\includegraphics[width=.9\columnwidth]{plot7r4b.pdf}
\end{center}
\caption{
Trajectory of seven out of 44 particles tracked to time $T = 2$ with
the fourth-order RK method and $\Delta t = .003$.
The initial positions of the particles are visible on the left hand side.}%
\label{fig:plot7r4b}%
\end{figure}%
The number of time steps required doubles with each refinement level.
To allow for a meaningful comparison between different problem sizes,
we measure the wall clock times for the RK method and all parallel algorithms
in the final time step, averaging over the RK stages.
We compute the per-tree element counts and the sparse forest at selected times
of the simulation (see \figref{plot7cut}), where we only use the timing of the
last one at $T$.
\begin{figure}
\begin{center}
\includegraphics[width=.43\columnwidth]{t000-cut.png}
\hspace{2ex}
\includegraphics[width=.43\columnwidth]{t250-cut.png}
\end{center}
\caption{Zoom into sparse forests created at time $t = 0$ (left) and $t = .5$
(right), respectively, using the same setup as for \figref{plot7r4b},
here with $\Delta t = .002$.
Of the 44 particles tracked, the same seven are added to both sparse forests
as individual level-8 elements (cf.\ \algref{buildadd}).
Elements up to level 6 are drawn as blue wireframe, level 7 as
transparent orange and level 8 as solid red.}%
\label{fig:plot7cut}%
\end{figure}%
We use three problem setups of increasing overall particle count and CFL, which
we run to $T = .4$ with the third-order RK scheme (see \tabref{setup}).
We use process counts from 16 to 65536 in multiples of eight, which matches the
multiplier of the particle counts and in consequence that of the element
counts.
\begin{sidewaystable}
\begin{center}
\begin{tabular}{c|ccc|ccc|ccc}
& \multicolumn{3}{c|}{particles}
& \multicolumn{3}{c|}{elements}
& \\
levels & \#req & \#eff & \#end &
level & \#eff & \#end &
$\Delta t$ & \#steps & \#peers
\\
\hline
3--9 & 12800 & 13318 & 12917 &
7 & 8548 & 12762 & .008 & 50 & 5.31 \\
5--11 & 819200& 852580 & 842250 &
9 & 538392 & 784848 & .002 & 200 & 6.96 \\
7--13 & 52428800 & 54513360 & 54283090 &
11 & 34418420 & 49821220 & .0005 & 800 & 7.12 \\
\hline
3--9 & 102400 & 102374 & 98359 &
6 & 1632 & 2059 & .016 & 25 & 8.06 \\
5--11 & 6553600 & 6553472 & 6424887 &
8 & 102068 & 131587 & .004 & 100 & 11.5 \\
7--13 & 419430400 & 419415934 & 416393854 &
10 & 6530210 & 8700623 & .001 & 400 & 11.1 \\
\hline
4--10 & 5120000 & 5119830 & 4935040 &
8 & 55434 & 79486 & .016 & 25 & 10.7 \\
6--12 & 327680000 & 327677582 & 321323140 &
10 & 3528876 & 6366600 & .004 & 100 & 22.6 \\
8--14 & 20971520000 & 20971146506 & 20820237439 &
12 & 225760858 & 353507330 & .001 & 400 & 19.7 \\
\end{tabular}
\end{center}
\caption{Three problem sizes, each run with process counts from 16 to 65536
in multiples of 8. We only show every other run (multiple of 64).
The problems have maximum particle counts per element $E$ of 5, 320,
and 320, respectively.
For each run, we provide the specified minimum and maximum levels, the
particle counts referring to the initial request, the count
effectively reached on initialization, and the count at $T = .4$,
respectively.
Over time, we lose some particles that leave the domain.
For the elements, we show the initial maximum level and
global count and the count at final time $T$.
Over time, we create more elements since the particles move
closer together at $t = .4$, which leads to a deeper tree.
On the right, we show the time step size, number of steps,
and the average number of communication peers for particle transfer.
The CFL number increases between the three problem sizes,
which can be seen by comparing the levels with $\Delta t$,
and is reflected in \#peers.
The overall largest run creates 20.97 billion particles.}%
\label{tab:setup}%
\end{sidewaystable}%
\subsection{Load balance}
\seclab{simload}
We know that \texttt{p4est}\xspace has a fast partitioning routine to equidistribute the
elements between the processes \cite{BursteddeHolke16b}.
Here we need to equidistribute the load of the RK time integration, which is
proportional to the local number of particles.
To this end, we assign each element a weight $w$ for partitioning that derives
from the number of particles $e$ in this element, $w = 1 + e$.
We offset the weight by 1 to bound the memory used by elements that contain
zero or very few particles.
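A minimal sketch of such weighted partitioning (hypothetical Python, not the \texttt{p4est} implementation): each element carries weight $w = 1 + e$ and is assigned to the process that owns the midpoint of its interval in the cumulative weight sum.

```python
def partition_by_weight(particle_counts, num_procs):
    # Weight per element: w = 1 + e, so that elements with zero or
    # very few particles still carry a bounded memory/work cost.
    weights = [1 + e for e in particle_counts]
    total = sum(weights)
    owner, acc = [], 0
    for w in weights:
        # Assign the element to the process owning its weight midpoint;
        # owners are nondecreasing along the linear element order.
        owner.append(min(num_procs - 1, (acc + w // 2) * num_procs // total))
        acc += w
    return owner
```

Since an element is indivisible, a single very heavy element limits the achievable balance, but otherwise the per-process weight sums are nearly equal.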
We test the load balance by measuring the RK integration times in a weak
and strong scaling experiment.
From \figref{plotrk} we see that scalability is indeed close to perfect.
\begin{figure}
\begin{center}
\includegraphics[width=.49\columnwidth]{rkprob2b.pdf}
\hfill
\includegraphics[width=.49\columnwidth]{rkpr2str.pdf}
\end{center}
\caption{Scaling of the Runge-Kutta time integration.
We use the mid-size problem from \tabref{setup} and rerun each line
with $8\times$ and $64\times$ processes (equivalently, rerun the
$8\times$ and $64\times$ smaller problems with the same process count),
hence three dots per line.
Left: The number of MPI processes is color-coded.
We confirm optimal weak scaling since the dots lie on top of each other
and optimal strong scaling by the fact that the lines lie on top of each
other and have unit slope.
Right: A typical strong scaling diagram, indicating simulation size by the
levels of refinement.
These plots indicate successful load balance by the particle-weighted
partitioning of elements.
}%
\label{fig:plotrk}%
\end{figure}%
A weight function that counts both elements and particles in some ratio has
been proposed before \cite{GassmoellerHeienPuckettEtAl16}, as has the
initialization of particles based on integrating a distribution function.
In the above reference, parallelization is based on a one-cell ghost layer.
The use of algorithms like ours for non-local particle transfer and variable
data, as we describe it below, has not yet been covered as far as we know.
\subsection{Particle search and communication}
\seclab{simcomm}
We use the top-down forest tra\-ver\-sal \algref{searchpartition},
\pforestfun{search\_\-partition}, augmented with a local search to determine for each local
particle whether it changes the local element or leaves the process domain.
In the first case, we find this element, and in the latter case, we find which
process it is sent to.
Once we know this, we reverse the communication pattern using \pforestfun{nary\_\-notify} to
inform the receivers about the senders and send the particles using
non-blocking MPI.
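The two ingredients, locating the receiving process and reversing the communication pattern, can be sketched as follows (hypothetical Python; the actual \pforestfun{search\_\-partition} and \pforestfun{nary\_\-notify} are considerably more involved and operate on the distributed forest):

```python
import bisect

def find_owner(elem_index, partition_markers):
    # partition_markers[p] is the first global element index owned by
    # process p; the list is nondecreasing. A particle leaving the local
    # domain is sent to the owner of its new element.
    return bisect.bisect_right(partition_markers, elem_index) - 1

def reverse_pattern(num_procs, dests_per_proc):
    # Invert "whom do I send to" into "whom do I receive from",
    # mimicking the information that nary_notify provides.
    receivers = {p: [] for p in range(num_procs)}
    for src, dests in dests_per_proc.items():
        for d in dests:
            receivers[d].append(src)
    return receivers
```

Once every receiver knows its senders, the particle payloads can be exchanged with non-blocking point-to-point messages.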
Moving particles between elements is followed by mesh coarsening and
refinement, which generally upsets the load balance, so we repartition the
forest.
This changes an individual element's ownership, and thus the contained
particles' ownership, from one process to another.
Thus, we transfer the particles again, this time from the old to the new
partition.
Transferring the particles after partitioning
is done by the two-stage \algref{variablerepartitionscheme}, where we first
send the number of particles for each element (fixed-size message volume per
element) and then send the particles themselves (variable-size volume).
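A sketch of the two-stage idea (hypothetical Python, ignoring MPI): stage one communicates fixed-size per-element counts, from which the receiver derives offsets by an exclusive prefix sum; stage two moves the variable-size payload itself.

```python
def two_stage_receive(counts, payload):
    # Stage 1 delivered the per-element particle counts (fixed-size
    # message volume per element). Derive offsets by a prefix sum.
    offsets, acc = [], 0
    for c in counts:
        offsets.append(acc)
        acc += c
    # Stage 2 delivered the flat particle array (variable-size volume);
    # element i's particles are payload[offsets[i] : offsets[i] + counts[i]].
    return [payload[o:o + c] for o, c in zip(offsets, counts)]
```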
According to our measurements, \pforestfun{nary\_\-notify} has runtimes well below or around
1~ms for the small- and mid-size problems.
The large problem gives rise to runtimes of about 5~ms.
The fixed-size particle transfer is a sub-millisecond call.
\begin{figure}
\begin{center}
\includegraphics[width=.49\columnwidth]{searchp2.pdf}
\hfill
\includegraphics[width=.49\columnwidth]{transfc2.pdf}
\end{center}
\caption{Combined partition and local search (left) and transfer of
variable-size element data (right) for the mid-size problem, where the
runtimes are measured in the final time step.}%
\label{fig:searchtrans}%
\end{figure}%
Runtimes of the remaining calls \pforestfun{search\_\-partition} and \pforestfun{transfer\_\-variable}
for the mid-size problem are displayed in \figref{searchtrans}.
Their scalability is generally acceptable given their small absolute runtimes.
\begin{table}
\begin{center}
\begin{tabular}{c|c|c|c}
$P$ & small & medium & large \\
\hline
16 & 9.29e-3 & 41.9e-3 & 3.12 \\
128 & 10.5e-3 & 51.6e-3 & 3.63 \\
1024 & 11.6e-3 & 60.6e-3 & 4.13 \\
8192 & 12.8e-3 & 69.4e-3 & 4.62 \\
65536 & 13.9e-3 & 77.9e-3 & 5.10
\end{tabular}
\\[2ex]
\begin{tabular}{c|cccc}
$P$ / $K$ & 1 & 8 & 64 & 512 \\
\hline
16 & 9.29e-3 & 9.05e-3 & 13.9e-3 & 58.4e-3 \\ %
1024 & 11.6e-3 & 11.4e-3 & 16.3e-3 & 61.8e-3 \\
65536 & 13.9e-3 & 13.7e-3 & 18.8e-3 & 66.2e-3 \\
\end{tabular}
\end{center}
\caption{Top:
Absolute runtimes in seconds of \pforestfun{search\_\-partition} augmented with a
local search for the three problem sizes from \tabref{setup}.
Each column presents a weak scaling exercise, where ideal times would
be constant.
The three runs have comparable rates between 60k and 82k particles per
second.
Bottom:
We use a forest with $K$ trees in a cubic brick layout,
where the refinement in each tree is reduced accordingly to make the
meshes identical (shown for the small problem).
For roughly a hundred trees and above the run times increase with $K$
while remaining largely independent of the process count $P$.
}%
\label{tab:prob3searchp}%
\end{table}%
The runtimes of \pforestfun{search\_\-partition} for all three problem sizes are compared in
\tabref{prob3searchp}.
They grow by less than a factor of 2 in weak scaling while increasing the
process and particle counts by more than three orders of magnitude.
In this test, we also experiment with forest meshes of up to $K = 2^{d \times
B}$ trees, where $B$ runs from 0 to 3 and per-tree minimum and maximum levels
decrease by $B$, which keeps the meshes identical independent of $K$.
Since the forest connectivity is unstructured, the limit of many trees loses
the hierarchic property of the mesh, which reflects in a slower search.
Up to 512 trees we see search times of less than a tenth of a second for the
small problem.
For $1/8$th of the large problem (not shown in the table), the search times
increase by a factor between 7 and 10 from 1 to 512 trees (.32 seconds on $K =
1$, $P = 16$ to 3.43 seconds on $K = 512$, $P = 64\mathrm{Ki}$).
\subsection{Sparse forest and per-tree counts}
\seclab{simbuild}
At the end of the simulation, we create a sparse forest for output and
post-processing.
We use every 100th particle for the small size problem and every 1000th particle
for the medium and large size problems; let us call this factor $R \ge 1$.
The ratio of $E$ and $R$ and the specified maximum level determine the size of
the sparse forest.
If the maximum level is high, we create a deeper forest and more elements
compared to the simulation.
If $E/R$ is one, we keep the number of elements roughly the same; if it is
less than one, the sparse forest will have fewer elements.
These two effects may offset each other.
In our examples, the sparse forest is smaller in the small-scale problem and
larger in the mid- and large size problems.
The build times of the largest run for each problem setup are
4.8~ms for the small, 20.5~ms for the medium, and 358~ms for the large size
problem, each obtained with 65536 MPI processes.
Especially for the two larger problems, we have far fewer elements than
particles, such that the number of elements per process is in the aggressive
strong scaling regime.
The global per-tree counting of elements has runtimes below or around 1~ms
except for the runs on 65536 processes, where it is 4.4~ms for all three
problem setups (using one tree).
When reproducing the same mesh with a brick forest of as many as 512 trees,
the run times do not change in any significant way.
Since the messages are sent concurrently (the algorithm avoids daisy-chaining),
this is achieved by design.
This function has been tested in even more varied situations by the community
for several years (transparently through \pforestfun{save}).
\section{Conclusion}
\seclab{conclusion}
This paper provides algorithms that support the efficient parallelization of
computational applications of increased generality.
Such generalization may refer to multiple aspects.
One concerns the location of objects in the partition beyond a one-cell ghost
layer, together with flexible criteria for matching and pruning.
Another is the fast repartitioning of variable-sized element data in linear
storage.
When considering the increased importance of scalable end-to-end simulation,
our algorithms may aid in pre-processing (setting up correlated spatial fields
in parallel, or finding physical source and receiver locations) and
post-processing and reproducibility (writing/reading partition-independent
formats of variable-size element data, optionally selecting readapted subsets).
Our algorithms are application-agnostic, that is, they do not
interpret the data or meshes they handle, and perform well-defined
tasks while hiding the complexity of their execution.
Most are fairly low-level in the sense that they reside in the parallelization
and metadata layer of an application.
They can be integrated by third-party libraries and frameworks and often do not
need to be exposed to the domain scientist.
This approach supports modularity, code reuse, and ideally the division of
responsibilities and quicker turnaround times in development.
We draw on the benefits of a distributed tree hierarchy and a linear ordering
of mesh entities.
Without such a hierarchy, the tasks we solve here would be a lot harder or even
impractical (such as the partition search).
We develop all algorithms for a multi-tree forest, noting that they apply
meaningfully to the common special case of a single tree.
We find that all algorithm runtimes range between milliseconds and a few
seconds, where one second or more occurs only for specific algorithms using the
largest setups.
All algorithms are practical and scalable to 21e9 particles and 64Ki MPI
processes on a BlueGene/Q supercomputer system.
\section*{Acknowledgments}
B.\ gratefully acknowledges travel support by the Bonn Hausdorff Center for
Mathematics (HCM) funded by the Deutsche Forschungsgemeinschaft (DFG), Germany.
The author would like to thank the Gauss Centre for Supercomputing (GCS) for
providing computing time through the John von Neumann Institute for Computing
(NIC) on the GCS share of the supercomputer JUQUEEN \cite{Juqueen} at
J{\"u}lich Supercomputing Centre (JSC).
GCS is the alliance of the three national supercomputing centres HLRS
(Universit\"at Stuttgart), JSC (Forschungszentrum J{\"u}lich), and LRZ
(Bayerische Akademie der Wissenschaften), funded by the German Federal Ministry
of Education and Research (BMBF) and the German State Ministries for Research
of Baden-W{\"u}rttemberg (MWK), Bayern (StMWFK), and Nordrhein-Westfalen (MIWF).
The \texttt{p4est}\xspace software is described on \url{http://www.p4est.org/}.
The source code of the routines discussed in this paper, including the
numerical demonstrations, is available on
\url{http://www.github.com/cburstedde/p4est/}.
I would like to thank Tobin Isaac again for inventing \scfun{array\_\-split} back in the day.
It is amazing how useful this little algorithm is.
\bibliographystyle{siam}
\section{Introduction}
In a GARCH-X type model, the variance of a generalized autoregressive conditional heteroskedasticity (GARCH) type model is augmented by a set of exogenous regressors (X). Naturally, the question arises if the more general GARCH-X type model can be reduced to the simpler GARCH type model. Statistically speaking, the problem reduces to testing whether the coefficients on the exogenous regressors are equal to zero. As noted in \cite{PedersenRahbek:19} (PR hereinafter), the testing problem is non-standard due to the presence of two nuisance parameters that could possibly be at the boundary of the parameter space. In addition, under the null hypothesis, one of the nuisance parameters, the ``GARCH parameter'', is not identified when the other, the ``ARCH parameter'', is at the boundary. In order to address this possible lack of identification, PR suggest a two-step (testing) procedure, where rejection in the first step is taken as ``evidence'' that the model is identified. In the second step, the authors then impose an ``additional assumption'', which implies that a specific entry of the inverse information equals zero, to obtain an asymptotic null distribution of their second-step test statistic that is nuisance parameter free. There are two potential problems with this two-step procedure. First, it may not control (asymptotic) size, i.e., its (asymptotic) size may exceed the nominal level, for reasons similar to those that invalidate ``naive'' post-model-selection inference \citep[see e.g.,][]{LP:05,LP:08}. In addition, the aforementioned ``additional assumption'' may not be satisfied, which may possibly aggravate the problem. 
Second, the two-step procedure may, due to its two-step nature and despite the possible lack of (asymptotic) size control, have poor power in certain parts of the parameter space, as suggested by simulations in PR.\footnote{The simulation results in Appendix D of PR show that the two-step procedure has a null rejection frequency below the nominal level for ``very small'' values of the ARCH parameter; see also Section \ref{MC}.}
In this paper, we use the results in \cite{AC1} (AC hereinafter), extended to allow for parameters to be near or at the boundary of the parameter space,\footnote{Here, the parameter space is equal to a product space; see \cite{Cox:22} for related results in the context of more general shapes of the parameter space.} to derive the asymptotic distributions of the two test statistics used in PR under weak, semi-strong, and strong identification (using the terminology in AC). These asymptotic distribution results, in turn, allow us to characterize the asymptotic size of any test for testing the null hypothesis that the coefficients on the exogenous regressors are equal to zero. We numerically establish lower bounds on the asymptotic sizes of the two-step procedure proposed by PR as well as a second testing procedure proposed by PR that assumes that the ARCH parameter is known to be in the interior of the parameter space. These bounds are given by 6.65\% and 9.48\%, respectively, for a 5\% nominal level, which implies that the two testing procedures do not control asymptotic size (at the 5\% nominal level).\footnote{These lower bounds (also) apply if the ``additional assumption'' and, in case of the second testing procedure, the assumption that the ARCH parameter is in the interior of the parameter space are satisfied; see Remark \ref{} for details.} Furthermore, we propose a new test based on the second-step test statistic of PR that uses plug-in least favorable configuration critical values and, thus by construction, controls asymptotic size.
In a small simulation study, we find that our asymptotic theory provides good approximations to the finite-sample behaviors of the tests, or testing procedures, that we consider. In particular, we find that the testing procedures proposed by PR can suffer from overrejection in finite samples. Furthermore, we find that our new test has greater power than the two-step procedure for ``very small'' values of the ARCH parameter, a presumably empirically important region of the parameter space. This finding is in line with the intuition that the two-step procedure, in some sense, ``sacrifices'' power for such parameter constellations due to its two-step nature.
The remainder of this paper is organized as follows. In Section \ref{TP}, we introduce the testing problem as well as the two testing procedures proposed by PR. In Section \ref{AT}, we present the asymptotic distribution results. Section \ref{AsySz} presents the characterization result for asymptotic size and obtains the lower bounds on the asymptotic sizes of the two testing procedures proposed by PR. It also introduces our new test. The results of our simulation study are presented in Section \ref{MC}. Additional material, including proofs, is relegated to the Appendix.
Throughout this paper, we use the following conventions. All limits are taken ``as $n \to \infty$''. $e_i$ denotes a vector of zeros (of suitable dimension) with a one in the $i^\text{th}$ position. For any matrix $A$, $A_{ij}$ denotes the entry with row index $i$ and column index $j$. Furthermore, $X_n(\pi) = o_{p \pi}(1)$ means that $\sup_{\pi \in \Pi} \| X_n(\pi) \| = o_p(1)$, where $\| \cdot \|$ denotes the Euclidean norm. Lastly, ``for all $\delta_n \to 0$'' abbreviates ``for all sequences of positive scalar constants $\{ \delta_n: n \geq 1\}$ for which $\delta_n \to 0$''.
\section{Testing problem} \label{TP}
For ease of exposition, we consider a simple version of the GARCH-X(1,1) model with a single exogenous variable (as in PR). In particular, the model is given by
\begin{equation} \label{y}
y_t = h_t(\theta)^{1/2} z_t,
\end{equation}
where $\theta = (\psi',\pi)' = (\beta',\zeta,\pi)'$ and
\begin{equation} \label{h}
h_t(\theta) = h_t(\psi,\pi) = h_t(\beta,\zeta,\pi) = \zeta(1-\pi) + \beta_1 y_{t-1}^2 + \pi h_{t-1}(\psi,\pi) + \beta_2 x^2_{t-1}
\end{equation}
with $h_0(\theta) = \zeta$.\footnote{For ease of reference, we adopt the notation in AC, where $\beta$ governs the identification strength of $\pi$ (see \eqref{verification_assumption_A} below) and $\psi$ is always identified.} Here, $\{y_t,x_t\}_{t=0}^n$ is observed and $\{z_t\}_{t=0}^n$ is unobserved. The true parameter space for $\theta$, i.e., the space of all possible true values of $\theta$, is given by $\Theta^* = \Psi^* \times \Pi^*$, where
\[
\Psi^* = \{ \psi : 0 \leq \beta_1 \leq \overline{\beta}^*_1, 0 \leq \beta_2 \leq \overline{\beta}^*_2, \underline{\zeta}^* \leq \zeta \leq \overline{\zeta}^* \} \text{ and } \Pi^* = \{ \pi : 0 \leq \pi \leq \overline{\pi}^* \}
\]
for some $0 < \overline{\beta}^*_1 < \infty$, $0 < \overline{\beta}^*_2 < \infty$, $0 < \underline{\zeta}^* < \overline{\zeta}^* < \infty $, and $0< \overline{\pi}^* < 1$.
The model is estimated by quasi-maximum likelihood. In particular, the objective function is given by (-$\frac{1}{n}$ times) the Gaussian-based conditional quasi log-likelihood function, i.e., $Q_n(\theta) = \frac{1}{n} \sum_{t=1}^n l_t(\theta)$, where
\[
l_t(\theta) = \frac{1}{2} \log(2 \tilde{\pi} ) + \frac{1}{2} \log(h_t(\theta)) + \frac{y_t^2}{2h_t(\theta)}
\]
and where $\tilde{\pi} = 3.14...$. The quasi-maximum likelihood estimator is given by
\[
\hat{\theta}_n= \operatornamewithlimits{arg\ min}_{\theta \in \Theta} Q_n(\theta),
\]
where $\Theta = \Psi \times \Pi$ denotes the optimization parameter space with
\[
\Psi = \{ \psi : 0 \leq \beta_1 \leq \overline{\beta}_1, 0 \leq \beta_2 \leq \overline{\beta}_2, \underline{\zeta} \leq \zeta \leq \overline{\zeta} \} \text{ and } \Pi = \{ \pi : 0 \leq \pi \leq \overline{\pi} \}
\]
for some $\overline{\beta}^*_1 < \overline{\beta}_1< \infty$, $\overline{\beta}^*_2 < \overline{\beta}_2< \infty$, $0 < \underline{\zeta} < \underline{\zeta}^*$, $\overline{\zeta}^* < \overline{\zeta} < \infty$, and $\overline{\pi}^*< \overline{\pi} < 1$. Note that, given the definitions of $\Theta^*$ and $\Theta$, (the true values of) $\beta_1$, $\beta_2$, and $\pi$ are allowed to be at the boundary of the optimization parameter space.
While our asymptotic distribution results are useful for analyzing a wide range of testing problems, we are mainly interested in testing
\begin{equation} \label{testing_problem}
H_0: \beta_2 = 0 \text{\ vs.\ } H_1:\beta_2 > 0.
\end{equation}
As pointed out in PR, this testing problem is non-standard in that, under $H_0$, there are two nuisance parameters that \textit{may} be at the boundary of the (optimization) parameter space, $\beta_1$ and $\pi$. Furthermore, when $\beta_1$ is at the boundary ($\beta_1 = 0$) then $\pi$ is not identified under $H_0$. To see this, note that, given $h_0(\theta) = \zeta$, we have
\begin{equation} \label{verification_assumption_A}
h_t(\theta) = \zeta + \beta_1 \sum_{i=0}^{t-1} \pi^i y_{t-i-1}^2 + \beta_2 \sum_{i=0}^{t-1} \pi^i x_{t-i-1}^2
\end{equation}
and, thus, $h_t(0,\zeta,\pi) = \zeta$ $\forall \theta = (\beta,\zeta,\pi) = (0,\zeta,\pi) \in \Theta, \forall n \geq 1$. In words, under $H_0$, $\beta_1 = 0$ implies that the distribution of the data does not depend on $\pi$, i.e., $\pi_1$ and $\pi_2$ (with $\pi_1 \neq \pi_2$) are observationally equivalent.
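The variance recursion, the objective function, and the observational equivalence in $\pi$ under $\beta = 0$ can be verified numerically with a small sketch (hypothetical Python, not part of PR's code; \texttt{gpi} denotes the GARCH parameter $\pi$ to avoid a clash with the constant $\tilde{\pi}$):

```python
import math

def garch_x_variance(y, x, beta1, beta2, zeta, gpi):
    # h_t recursion of the GARCH-X(1,1) model with h_0 = zeta;
    # gpi is the GARCH parameter (pi in the text).
    h = [zeta]
    for t in range(1, len(y)):
        h.append(zeta * (1.0 - gpi) + beta1 * y[t - 1]**2
                 + gpi * h[t - 1] + beta2 * x[t - 1]**2)
    return h

def q_n(y, x, beta1, beta2, zeta, gpi):
    # Gaussian-based conditional quasi log-likelihood, times -1/n.
    h = garch_x_variance(y, x, beta1, beta2, zeta, gpi)
    n = len(y) - 1  # the sum runs over t = 1, ..., n
    return sum(0.5 * math.log(2.0 * math.pi) + 0.5 * math.log(h[t])
               + y[t]**2 / (2.0 * h[t]) for t in range(1, n + 1)) / n
```

With $\beta_1 = \beta_2 = 0$ the recursion collapses to $h_t = \zeta$ for all $t$, so $Q_n$ takes the same value for any two distinct values of \texttt{gpi}, exactly the observational equivalence noted above.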
Let $\mathcal{T}_n$ denote a generic test statistic for testing \eqref{testing_problem} and let cv$_{n,1-\alpha}$ denote the corresponding nominal level $\alpha$ critical value, which may depend on $n$. The size of the test that rejects $H_0$ when $\mathcal{T}_n > \text{cv}_{n,1-\alpha}$ is given by Sz$_\mathcal{T} = \sup_{\gamma \in \Gamma:\beta_2 = 0} P_\gamma (\mathcal{T}_n > \text{cv}_{n,1-\alpha})$, where $\Gamma$ denotes the true parameter space for $\gamma = (\theta,\phi)$ and where $\phi$ denotes the distribution of $\{ x_t, z_t \}$. We say that a test controls size if Sz$_\mathcal{T} \leq \alpha$. We note that ``uniformity'' (over $\Gamma$) is built into the definition of Sz$_\mathcal{T}$ and that whether a test controls size crucially depends on $\Gamma$. Typically, it is infeasible to compute Sz$_\mathcal{T}$. Therefore, we rely on asymptotic approximations. In particular, we approximate the (``finite-sample'') size of a test by its asymptotic size, which is given by
\[
\text{AsySz}_\mathcal{T} = \limsup_{n\to \infty} \sup_{\gamma \in \Gamma:\beta_2 = 0} P_\gamma (\mathcal{T}_n > \text{cv}_{n,1-\alpha}).
\]
While $\text{AsySz}_\mathcal{T}$ ``still'' depends on $\Gamma$, it generally only does so through a finite-dimensional parameter, making its evaluation ``easier''. We say that a test controls asymptotic size if AsySz$_\mathcal{T} \leq \alpha$. In large samples, AsySz$_\mathcal{T}$ provides a good approximation of Sz$_\mathcal{T}$ so that a test that controls asymptotic size can be expected to ``approximately'' control size. Therefore, in what follows we focus on whether or not a given test, or testing procedure, controls asymptotic size.
\subsection{Testing procedures proposed by PR}
Given the non-standard nature of the testing problem in \eqref{testing_problem}, PR propose a two-step procedure to deal with the presence of nuisance parameters on the boundary of the parameter space as well as the lack of identification of $\pi$ when $\beta_1 = 0$. In the first step, PR propose to test $H_0^\dagger: \beta_1 = \beta_2 = 0$ using the corresponding (rescaled) quasi-likelihood ratio statistic
\[
LR^\dagger_n = 2n (Q_n(\hat{\theta}^\dagger_{n,0}) - Q_n(\hat{\theta}_n) ) /\hat{c}_n^\dagger,
\]
where $\hat{\theta}^\dagger_{n,0} = \operatornamewithlimits{arg\ min}_{\theta \in \Theta^\dagger_0}Q_n({\theta})$ with $\Theta^\dagger_0 = \{ \theta \in \Theta: \beta_1 = \beta_2 = 0\}$ and where\footnote{\label{estimation_c}Note that other estimators of $c_0 = c(\gamma_0) = E_{\gamma_0} (z_t^2 - 1)^2/2$ are available (see Section \ref{AT} for the ``definition'' of $\gamma_0$): For example, $\hat{c}_{\text{alt}}(\theta) = \left( \frac{1}{n} \sum_{t=1}^n \frac{y_t^4}{h_t(\theta)^2} - 1 \right)/2$ evaluated at $\hat{\theta}^\dagger_{n,0}$ or $\hat{\theta}_{n}$; similarly, one could evaluate $\hat{c}(\theta)$ at $\hat{\theta}_{n}$. Note that \cite{Andrews:01} also considers (rescaled) quasi-likelihood ratio statistics of the form
\[
2n (\min_{\pi \in \Pi} Q_n(\hat{\psi}^\dagger_{n,0}(\pi),\pi)/c(\hat{\psi}_n(\pi),\pi) - \min_{\pi \in \Pi}Q_n(\hat{\psi}_n(\pi),\pi)/c(\hat{\psi}_n(\pi),\pi) ),
\]
where $\hat{\theta}^\dagger_{n,0} = ((\hat{\psi}^\dagger_{n,0}(\hat{\pi}^\dagger_{n,0}))',\hat{\pi}^\dagger_{n,0})'$. We note that $Q_n(\hat{\psi}^\dagger_{n,0}(\pi),\pi)$ does not depend on $\pi$.
}
\[
\hat{c}^\dagger_n = \hat{c}(\hat{\theta}^\dagger_{n,0}) = \left( \frac{1}{n} \sum_{t=1}^n \left( \frac{y_t^2}{h_t(\hat{\theta}^\dagger_{n,0})} - 1 \right)^2 \right)/2 = \left( \frac{1}{n} \sum_{t=1}^n \left( \frac{y_t^2}{\hat{\zeta}^\dagger_{n,0}} - 1\right)^2 \right)/2 .
\]
Here, $\hat{\theta}^\dagger_{n,0} = (0,0,\hat{\zeta}^\dagger_{n,0},\hat{\pi}^\dagger_{n,0})'$ with $\hat{\pi}^\dagger_{n,0} \in \Pi$. The asymptotic null distribution of $LR^\dagger_n$ can be derived using the results in \cite{Andrews:01} (see e.g., Theorem 2.1 in PR) and, although not available in closed form, can easily be simulated from. If $H_0^\dagger$ is not rejected, then $H_0$ is not rejected. If $H_0^\dagger$ is rejected, then PR conclude that $\beta_1 > 0$. Given that $\beta_1 > 0$, the nuisance parameter $\pi$ is identified and the only remaining ``problem'' is that $\pi$ may still be at the boundary ($\pi = 0$). In the second step, PR then suggest to test \eqref{testing_problem}, under the maintained assumption that $\beta_1 > 0$, using the corresponding (rescaled) quasi-likelihood ratio statistic
\[
LR_n = 2n (Q_n({\hat{\theta}_{n,0}}) - Q_n(\hat{\theta}_n) )/\hat{c}_n,
\]
where $\hat{\theta}_{n,0} = \operatornamewithlimits{arg\ min}_{\theta \in \Theta_0}Q_n({\theta})$ with $\Theta_0 = \{ \theta \in \Theta: \beta_2 = 0\}$ and where $\hat{c}_n = \hat{c}(\hat{\theta}_{n,0})$.\footnote{Alternatively, we could evaluate $\hat{c}(\theta)$ at $\hat{\theta}_{n}$ or use $\hat{c}_{\text{alt}}(\theta)$ (see footnote \ref{estimation_c}) evaluated at $\hat{\theta}_{n,0}$ or $\hat{\theta}_{n}$.}
In this context, PR make an ``additional assumption'' on the dependence between $\{ x_t \}$ and $\{ y_t \}$ that implies that a certain entry of the inverse of the information matrix equals zero when $\pi = 0$. This, in turn, implies that the asymptotic null distribution of $LR_n$ simplifies to $\max(0,Z)^2$, where $Z\sim N(0,1)$. Formally, the two-step procedure (TS) is defined as follows: Reject $H_0$ if
\[
{TS}_n = \mathbbm{1}(LR_n^\dagger > \widetilde{LR}^\dagger_{n,1-\alpha}) \times LR_n > \text{cv}_{1-\alpha},
\]
where $\text{cv}_{1-\alpha}$ and $\widetilde{LR}^\dagger_{1-\alpha}$ denote the $1-\alpha$ quantiles of $\max(0,Z)^2$ and the asymptotic distribution given in \eqref{asy_dist_LR_star} with $b = 0$ and unknown quantities replaced by consistent estimators (see e.g., Appendix \ref{CD}), respectively.\footnote{PR do not formally define their two-step procedure in that they do not define the nominal levels that ought to be used in the two steps, say $\alpha_1$ and $\alpha_2$. Here, we take $\alpha_1 = \alpha_2 = \alpha$. Of course, given the results in this paper, it is in principle possible to choose $\alpha_1$ and $\alpha_2$ in order to ensure that AsySz$_{TS}\leq \alpha$. We refrain from doing so, however, because, given the results in this paper, the motivation for using a two-step procedure is rendered obsolete; see e.g., the new test.} While it is possible to derive the asymptotic null distribution of $LR_n$ without this ``additional assumption'', PR refrain from doing so.\footnote{If $\beta_1 > 0$, or rather $\beta_1 \geq c > 0$ for some $c \in \mathbb{R}$, then it is straightforward to test \eqref{testing_problem} using the approach in \cite{Ketz:JMP}, without any assumptions on the dependence between $\{ x_t \}$ and $\{ y_t \}$.}
PR also mention that if it is \textit{a priori} known that $\beta_1 > 0$ then one may directly test \eqref{testing_problem} using $LR_n$ and their suggested critical value, $\text{cv}_{1-\alpha}$. We refer to this test as the ``second testing procedure (of PR)''. Formally, the second testing procedure (S) is defined as follows: If $\beta_1 > 0$, reject $H_0$ if
\[
S_n = LR_n > \text{cv}_{1-\alpha}.
\]
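The decision rules of both procedures reduce to simple comparisons once the statistics and critical values are computed. Note that the 5\% critical value of $\max(0,Z)^2$ equals the squared 95\% standard normal quantile, $1.645^2 \approx 2.71$. A sketch (hypothetical Python; the statistics and the first-step critical value are taken as inputs):

```python
# 5% critical value of max(0, Z)^2: squared 95% normal quantile.
CV_SECOND_5PCT = 1.6448536269514722**2  # approx 2.7055

def two_step_reject(lr_dagger, lr, cv_first, cv_second=CV_SECOND_5PCT):
    # Step 1: test H0-dagger: beta1 = beta2 = 0. If not rejected,
    # the overall two-step procedure does not reject H0.
    if lr_dagger <= cv_first:
        return False
    # Step 2: taking beta1 > 0 as given, compare LR_n with the
    # 1 - alpha quantile of max(0, Z)^2.
    return lr > cv_second

def second_procedure_reject(lr, cv=CV_SECOND_5PCT):
    # Assumes beta1 > 0 is known a priori.
    return lr > cv
```

The sketch makes the propagation of a first-step type I error explicit: whenever $LR^\dagger_n$ exceeds its critical value under the null, the overall decision falls through to the second comparison.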
There are several potential issues with the above testing procedures. First, the intuition underlying the two-step procedure is based on the assumption that the first-step test never makes a type I error. This assumption, however, cannot hold and a type I error in the first step may very well propagate to the type I error of the overall (two-step) procedure.\footnote{The only way for the first-step test to never make a type I error is to take the nominal level of the corresponding test equal to zero. This, however, would lead the first-step test (as well as the two-step procedure) to have zero power.} Relatedly, $\beta_1$ may be close to zero relative to the sample size; see Section \ref{AT} for details. In that case, the probability of rejecting $H_0^\dagger$ exceeds the first-step nominal level and $\pi$ is only weakly identified such that the asymptotic distribution result for $LR_n$ that takes $\pi$ to be (strongly) identified may only provide a very poor approximation to its actual finite-sample distribution. The latter may also be an issue for the second testing procedure. In both cases, one may be worried that the testing procedure does not control asymptotic size. As mentioned above, asymptotic size depends on $\Gamma$. For sake of brevity, the definition of $\Gamma$ is given in Appendix \ref{ACverification}.
\section{Asymptotic distribution results} \label{AT}
As shown in the recent literature \citep[see e.g.,][AC]{AG1}, asymptotic size is intrinsically linked to the asymptotic distribution of the test statistic under (drifting) sequences of true parameters. Let $\gamma_n = (\theta_n,\phi_n) = (\beta_n,\zeta_n,\pi_n,\phi_n) \in \Gamma$ denote the true parameter for $n \geq 1$. In the context at hand, the following (sets of) sequences are key:
\begin{align*}
\Gamma(\gamma_0) =& \{ \{ \gamma_n \in \Gamma : n \geq 1 \} : \gamma_n \to \gamma_0 = (\theta_0,\phi_0) = (\beta_0,\zeta_0,\pi_0,\phi_0) \in \Gamma \},\\
\Gamma(\gamma_0,0,b) = &\{ \{ \gamma_n \} \in \Gamma(\gamma_0) : \beta_0 = 0 \text{ and } \sqrt{n} \beta_n \to b \in \mathbb{R}^2_{+,\infty} \}, \text{ and } \nonumber \\ \nonumber
\Gamma(\gamma_0,\infty,b,\omega_0,p) =& \{ \{ \gamma_n \} \in \Gamma(\gamma_0) : \sqrt{n} \beta_n \to b \in \mathbb{R}^2_{+,\infty} \text{ with } \| b \| = \infty, \\
\nonumber & \ \beta_n/\| \beta_n \| \to \omega_0 \in \mathbb{R}_+^2, \text{ and } \sqrt{n} \| \beta_n \| \pi_n \to p \in [0,\infty]\}.
\end{align*}
In what follows, we use the terminology ``under $\{\gamma_n\} \in \Gamma(\gamma_0)$'' to mean ``when the true parameters are $\{ \gamma_n \} \in \Gamma(\gamma_0)$ for any $\gamma_0 \in \Gamma$'', ``under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$'' to mean ``when the true parameters are $\{ \gamma_n \} \in \Gamma(\gamma_0,0,b)$ for any $\gamma_0 \in \Gamma$ with $\beta_0 = 0$ and any $b \in \mathbb{R}^2_{+,\infty}$'', and ``under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$'' to mean ``when the true parameters are $\{ \gamma_n \} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ for any $\gamma_0 \in \Gamma$, any $b \in \mathbb{R}^2_{+,\infty} \text{ with } \| b \| = \infty$, any $\omega_0 \in \mathbb{R}_+^2$ with $\| \omega_0\| = 1$ if $b_1=b_2=\infty$ or $\omega_0 = e_j \text{ if } b_j = \infty \text{ and } b_{j'} < \infty \text{ for } j,j' \in \{ 1,2\}$ and $j'\neq j$, and any $p \in [0,\infty]$ if $\pi_0 = 0$ or $p = \infty$ if $\pi_0 >0$''. We note that under sequences of true parameters for which $\sqrt{n} \beta_n \to b$ with $\| b \| < \infty$, identification of $\pi$ is weak, while under sequences of true parameters for which $\beta_n \to 0$ but $\sqrt{n} \beta_n \to \infty$, identification of $\pi$ is semi-strong.
All claims in this section, including the following, are verified in Appendix \ref{ACverification}. Under $\{\gamma_n\} \in \Gamma(\gamma_0)$, we have $\sup_{\pi \in \Pi} \| \hat{\psi}_n(\pi) - \psi_n \| \to_p 0$ and $\| \hat{\psi}_n - \psi_n \| \to_p 0$ when $\beta_0 = 0$, while $\hat{\theta}_n - \theta_n \to_p 0$ when $\beta_0 \neq 0$.
\subsection{Results for $\beta_n \to 0$ including asymptotic distribution for $\| b \| < \infty$} \label{Results_close_to_zero}
First, we derive results for $\beta$ close to zero, i.e., $\beta_n \to 0$. To that end, let $\hat{\theta}_n = (\hat{\psi}_n(\hat{\pi}_n), \hat{\pi}_n)$, where $\hat{\psi}_n(\pi) = (\hat{\beta}_n(\pi),\hat{\zeta}_n(\pi)) \in \Psi$ is the concentrated extremum estimator of $\psi$ for given $\pi \in \Pi$, i.e.,
\[
\hat{\psi}_n(\pi) = \operatornamewithlimits{arg\ min}_{\psi \in \Psi} Q_n(\psi,\pi),
\]
and where $\hat{\pi}_n$ is the minimizer of the concentrated objective function $Q_n^c(\pi) = Q_n(\hat{\psi}_n(\pi),\pi)$, i.e.,
\[
\hat{\pi}_n= \operatornamewithlimits{arg\ min}_{\pi \in \Pi} Q_n^c(\pi).
\]
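The two-layer structure above (an inner minimization over $\psi$ for each $\pi$, followed by an outer minimization of the concentrated objective over $\Pi$) is conveniently computed by a grid search over $\Pi$. The following is a deliberately simplified sketch with a least-squares analogue of the objective rather than the Gaussian QMLE of the paper; the data, the regressor construction, and all parameter values are hypothetical and chosen only so that the inner problem has a closed form.

```python
def concentrated_fit(y, x, pi_grid):
    """Toy concentrated extremum estimation: for each pi on a grid,
    minimize over psi = (beta, zeta) in closed form (least squares on a
    constructed regressor), then pick the pi with the smallest
    concentrated objective (here: sum of squared residuals)."""
    best = None
    for pi in pi_grid:
        # regressor r_t(pi) = sum_{i>=0} pi^i x_{t-1-i}, truncated at t = 0
        r = []
        for t in range(1, len(x)):
            s, w = 0.0, 1.0
            for i in range(t):
                s += w * x[t - 1 - i]
                w *= pi
            r.append(s)
        yy = y[1:]
        n = len(yy)
        # closed-form 2x2 normal equations for (beta, zeta)
        sr, sy = sum(r), sum(yy)
        srr = sum(v * v for v in r)
        sry = sum(v * u for v, u in zip(r, yy))
        det = n * srr - sr * sr
        beta = (n * sry - sr * sy) / det
        zeta = (sy - beta * sr) / n
        ssr = sum((u - beta * v - zeta) ** 2 for u, v in zip(yy, r))
        if best is None or ssr < best[0]:
            best = (ssr, beta, zeta, pi)
    _, beta, zeta, pi = best
    return beta, zeta, pi

# noiseless toy data generated at (beta, zeta, pi) = (0.7, 1.0, 0.5)
x = [0.3, 1.2, -0.4, 0.9, 2.0, -1.1, 0.5, 1.5, -0.2, 0.8]
def _r(pi, t):
    return sum(pi ** i * x[t - 1 - i] for i in range(t))
y = [0.0] + [0.7 * _r(0.5, t) + 1.0 for t in range(1, len(x))]
beta_hat, zeta_hat, pi_hat = concentrated_fit(y, x, [k / 10 for k in range(10)])
```

With noiseless data and the true $\pi$ on the grid, the concentrated objective is exactly zero at the truth, so the profiling step recovers all three parameters.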
With a slight abuse of notation, we let $\hat{\pi}_n = 1$ whenever $\hat{\pi}_n = \Pi$, which occurs when $\hat{\beta}_n(\pi) = 0$ for all $\pi \in \Pi$. Under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$, $Q_n(\psi,\pi)$ admits the following quadratic expansion in $\psi$ around $\psi_{0,n} = (0,\zeta_n)$ for given $\pi$:
\begin{align}
Q_n(\psi,\pi) &= Q_n(\psi_{0,n},\pi) + D_\psi Q_n(\psi_{0,n},\pi)'(\psi-\psi_{0,n}) \nonumber \\
&+ \frac{1}{2} (\psi-\psi_{0,n})' H(\pi;\gamma_0)(\psi-\psi_{0,n}) + R_n(\psi,\pi), \label{quadratic_expansion}
\end{align}
where the remainder $R_n(\psi,\pi)$ satisfies
\[
\sup_{\psi \in \Psi: \| \psi - \psi_{0,n} \| \leq \delta_n} | a_n^2(\gamma_n) R_n(\psi,\pi) | = o_{p\pi}(1)
\]
for all $\delta_n \to 0$ and where
\[
a_n(\gamma_n) = \left\{ \begin{array}{ll} n^{1/2} & \text{if } \{ \gamma_n \} \in \Gamma(\gamma_0,0,b) \text{ and } \|b\| < \infty \\ \| \beta_n \|^{-1} & \text{if } \{ \gamma_n \} \in \Gamma(\gamma_0,0,b) \text{ and } \|b\| = \infty \end{array} \right. .
\]
Here, $D_\psi Q_n(\theta)$ denotes the left/right partial derivatives of $Q^\infty_n(\theta) = \frac{1}{n} \sum_{t=1}^n l^\infty_t(\theta)$ with respect to $\psi$, where
\[
l^\infty_t(\theta) = \frac{1}{2} \log(2 \tilde{\pi} ) + \frac{1}{2} \log(h_t^\infty(\theta)) + \frac{y_t^2}{2h^\infty_t(\theta)}
\]
and where
\[
h_t^\infty(\theta) = \zeta + \beta_1 \sum_{i=0}^{\infty} \pi^i y_{t-i-1}^2 + \beta_2 \sum_{i=0}^{\infty} \pi^i x_{t-i-1}^2.
\]
In particular,
\[
D_\psi Q_n(\theta) = \frac{1}{n} \sum_{t=1}^n l^\infty_{\psi,t}(\theta),
\]
where
\[
l^\infty_{\psi,t}(\theta) = \frac{\partial}{\partial \psi} l^\infty_t(\theta) = \frac{1}{2h^\infty_t(\theta)} \left( 1- \frac{y_t^2}{h^\infty_t(\theta)} \right) \frac{\partial h^\infty_t(\theta)}{\partial \psi}
\]
and where
\[
\frac{\partial h^\infty_t(\theta)}{\partial \psi} = \left( \sum_{i=0}^{\infty} \pi^i y_{t-i-1}^2, \sum_{i=0}^{\infty} \pi^i x_{t-i-1}^2, 1 \right)'.
\]
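As a sketch, the truncated-sample analogues of $h_t^\infty(\theta)$ and the per-observation score $l^\infty_{\psi,t}(\theta)$ can be computed directly from the three displayed formulas. The data below are made up, and the sanity check exploits that the score vanishes whenever $y_t^2 = h_t^\infty(\theta)$.

```python
def h_and_score(y, x, beta1, beta2, zeta, pi, t):
    """Truncated-sample analogues of h_t^infty and the score l_psi,t
    with respect to psi = (beta1, beta2, zeta)."""
    sy = sum(pi ** i * y[t - i - 1] ** 2 for i in range(t))
    sx = sum(pi ** i * x[t - i - 1] ** 2 for i in range(t))
    h = zeta + beta1 * sy + beta2 * sx
    dh_dpsi = (sy, sx, 1.0)                 # partial h_t / partial psi
    c = (1.0 - y[t] ** 2 / h) / (2.0 * h)   # common scalar factor
    return h, tuple(c * g for g in dh_dpsi)

# sanity check: with beta = 0 and zeta = 1 we have h_t = 1, and choosing
# y_t = 1 gives y_t^2 = h_t exactly, so the score is identically zero
y = [1.0] * 6
x = [0.5] * 6
h, score = h_and_score(y, x, beta1=0.0, beta2=0.0, zeta=1.0, pi=0.3, t=5)
```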
$H(\pi;\gamma_0)$ is defined below. Next, define the empirical process $\{G_n(\pi): \pi \in \Pi \}$ by
\[
G_n(\pi) = \sqrt{n} \frac{1}{n} \sum_{t=1}^n ( l^\infty_{\psi,t}(\psi_{0,n},\pi) - E_{\gamma_n} l^\infty_{\psi,t}(\psi_{0,n},\pi)).
\]
Under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$, $G_n(\cdot) \Rightarrow G(\cdot;\gamma_0)$, where $\Rightarrow$ denotes weak convergence and where $G(\cdot;\gamma_0)$ is a mean zero Gaussian process with bounded continuous sample paths and covariance kernel given by
{\footnotesize
\begin{align}
&\Omega(\pi_1,\pi_2; \gamma_0) = \nonumber \\ &\frac{c_0}{2} \left[ \begin{array}{ccc} \frac{2c_0}{1-\pi_1\pi_2} + \frac{1}{(1-\pi_1)(1-\pi_2)} & \frac{1}{\zeta_0}E_{\gamma_0} \sum_{i=0}^\infty \pi_1^i z^2_{t-i-1} \sum_{j=0}^\infty \pi_2^j x^2_{t-j-1} & \frac{1}{\zeta_0} \frac{1}{1-\pi_1} \\ \frac{1}{\zeta_0}E_{\gamma_0} \sum_{i=0}^\infty \pi_1^i x^2_{t-i-1} \sum_{j=0}^\infty \pi_2^j z^2_{t-j-1} & \frac{1}{\zeta_0^2} E_{\gamma_0} \sum_{i=0}^\infty \pi_1^i x^2_{t-i-1} \sum_{j=0}^\infty \pi_2^j x^2_{t-j-1} & \frac{1}{\zeta_0^2}E_{\gamma_0} \sum_{j=0}^\infty \pi_1^j x^2_{t-j-1} \\ \frac{1}{\zeta_0} \frac{1}{1-\pi_2} & \frac{1}{\zeta_0^2}E_{\gamma_0} \sum_{j=0}^\infty \pi_2^j x^2_{t-j-1} & \frac{1}{\zeta_0^2} \end{array} \right] \label{omega}
\end{align}}for $\pi_1, \pi_2 \in \Pi$, where $c_0 = c(\gamma_0) = \frac{E_{\gamma_0}(z_t^2-1)^2}{2} = \frac{E_{\gamma_0}z_t^4-1}{2} $.\footnote{The last equality follows by the definition of $\Gamma$.} Furthermore, we have that
\[
H(\pi;\gamma_0) = \Omega(\pi,\pi; \gamma_0)/c_0
\]
for $\pi \in \Pi$. Let
\[
Z_n(\pi;\gamma_0) = -a_n(\gamma_n) H^{-1}(\pi;\gamma_0)D_\psi Q_n(\psi_{0,n},\pi).
\]
Then, under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$, we have
\begin{equation} \label{dist_Z}
Z_n(\pi;\gamma_0) \overset{d}{\to} \begin{cases} Z(\pi;\gamma_0,b) & \text{if } \|b\| < \infty \\
- H^{-1}(\pi;\gamma_0) K(\pi;\gamma_0) \omega_0 & \text{if } \|b\| = \infty \text{ and } \beta_n/\| \beta_n \| \to \omega_0
\end{cases},
\end{equation}
where
\[
Z(\pi;\gamma_0,b) = - H^{-1}(\pi;\gamma_0) \{ G(\pi;\gamma_0) + K(\pi;\gamma_0)b \}
\]
and where
\[
K(\pi;\gamma_0) = - \frac{1}{2} \left[ \begin{array}{cc} \frac{2c_0}{1-\pi\pi_0} + \frac{1}{(1-\pi)(1-\pi_0)} & \frac{1}{\zeta_0}E_{\gamma_0} \sum_{i=0}^\infty \pi^i z^2_{t-i-1} \sum_{j=0}^\infty \pi_0^j x^2_{t-j-1} \\ \frac{1}{\zeta_0}E_{\gamma_0} \sum_{i=0}^\infty \pi^i x^2_{t-i-1} \sum_{j=0}^\infty \pi_0^j z^2_{t-j-1} & \frac{1}{\zeta_0^2} E_{\gamma_0} \sum_{i=0}^\infty \pi^i x^2_{t-i-1} \sum_{j=0}^\infty \pi^j_0 x^2_{t-j-1} \\ \frac{1}{\zeta_0} \frac{1}{1-\pi_0} & \frac{1}{\zeta_0^2}E_{\gamma_0} \sum_{j=0}^\infty \pi_0^j x^2_{t-j-1} \end{array} \right].
\]
Next, let
\[
q_n(\lambda,\pi;\gamma_0) = (\lambda - Z_n(\pi;\gamma_0))'H(\pi;\gamma_0)(\lambda - Z_n(\pi;\gamma_0))
\]
such that the quadratic expansion in \eqref{quadratic_expansion} can be written as
\begin{align*}
Q_n(\psi,\pi) &= Q_n(\psi_{0,n},\pi) - \frac{1}{2a^2_n(\gamma_n)}Z_n(\pi;\gamma_0)'H(\pi;\gamma_0)Z_n(\pi;\gamma_0)\\
&+ \frac{1}{2a^2_n(\gamma_n)} q_n(a_n(\gamma_n)(\psi-\psi_{0,n}),\pi;\gamma_0) + R^*_n(\psi,\pi).
\end{align*}
Note that the first two terms do not depend on $\psi$. As a result, following \cite{Andrews:99,Andrews:01}, it can then be shown that the (scaled and demeaned) minimizer of $Q_n(\psi,\pi)$ with respect to $\psi$ for given $\pi$ asymptotically behaves like the minimizer of the ``asymptotic version'' of $q_n(\cdot)$ over an appropriately defined parameter space for $\lambda$.
\subsubsection{Asymptotic distribution for $\| b \| < \infty$} \label{ad_b_less_infty}
In particular, under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $\| b \| < \infty$, we have
\begin{equation} \label{lambda_hat}
\sqrt{n} (\hat{\psi}_n(\pi) -\psi_{0,n}) \overset{d}{\to} \hat{\lambda}(\pi;\gamma_0,b),
\end{equation}
where
\[
\hat{\lambda}(\pi;\gamma_0,b) = \operatornamewithlimits{arg\ min}_{\lambda \in \Lambda} q(\lambda,\pi;\gamma_0,b).
\]
Here,
\[
q(\lambda,\pi;\gamma_0,b) = (\lambda - Z(\pi;\gamma_0,b))'H(\pi;\gamma_0)(\lambda - Z(\pi;\gamma_0,b))
\]
and
\[
\Lambda = [0,\infty]^2 \times [-\infty,\infty].
\]
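Computing $\hat{\lambda}(\pi;\gamma_0,b)$ for a fixed $\pi$ amounts to minimizing a positive definite quadratic form over $\Lambda$, i.e., subject to nonnegativity of the first two coordinates with the third unrestricted. A minimal sketch (with placeholder values of $H$ and $Z$, not estimates of the paper's objects) enumerates which of the two nonnegativity constraints bind; this enumeration is exact when $H$ is positive definite.

```python
import itertools

def solve(A, b):
    """Gauss-Jordan elimination for small linear systems A x = b."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    m = len(b)
    for c in range(m):
        p = max(range(c, m), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(m):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * ac for a, ac in zip(M[r], M[c])]
    return [M[i][-1] / M[i][i] for i in range(m)]

def minimize_quadratic_orthant(H, Z, nonneg=(0, 1)):
    """argmin of (lam - Z)' H (lam - Z) subject to lam[j] >= 0 for j in
    nonneg (remaining coordinates unrestricted), by enumerating which
    constraints bind; exact for positive definite H."""
    n = len(Z)
    best = None
    for k in range(len(nonneg) + 1):
        for active in itertools.combinations(nonneg, k):
            free = [j for j in range(n) if j not in active]
            lam = [0.0] * n
            if free:
                # stationarity on free coords: H_FF (lam_F - Z_F) = H_FA Z_A
                A = [[H[i][j] for j in free] for i in free]
                b = [sum(H[i][j] * Z[j] for j in active) for i in free]
                for j, d in zip(free, solve(A, b)):
                    lam[j] = Z[j] + d
            if any(lam[j] < 0.0 for j in nonneg):
                continue  # candidate violates a nonnegativity constraint
            q = sum((lam[i] - Z[i]) * H[i][j] * (lam[j] - Z[j])
                    for i in range(n) for j in range(n))
            if best is None or q < best[0]:
                best = (q, lam)
    return best[1]

# placeholder values: H = identity, Z has a negative first coordinate,
# so the minimizer sets the first coordinate to the boundary value 0
H = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
Z = [-1.0, 2.0, 0.5]
lam_hat = minimize_quadratic_orthant(H, Z)
```

When $H$ is not diagonal, setting one coordinate to the boundary shifts the optimal values of the remaining coordinates, which is why the simple coordinate-wise truncation $\max(Z_j,0)$ is only valid in the diagonal case.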
Next, let
\[
\hat{\pi}(\gamma_0,b) = \operatornamewithlimits{arg\ min}_{\pi \in \Pi} q(\hat{\lambda}(\pi;\gamma_0,b),\pi;\gamma_0,b)
\]
and, with a slight abuse of notation, let $\hat{\pi}(\gamma_0,b) = 1$ whenever $\hat{\pi}(\gamma_0,b) = \Pi$. Then, we have the following first main result of this paper. Namely, under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $\| b \| < \infty$, we have
\begin{equation} \label{asy_dist_estimator}
\left( \begin{array}{c} \sqrt{n} (\hat{\psi}_n -\psi_{0,n}) \\ \hat{\pi}_n \end{array} \right) \overset{d}{\to} \left( \begin{array}{c} \hat{\lambda}(\hat{\pi}(\gamma_0,b);\gamma_0,b) \\ \hat{\pi}(\gamma_0,b) \end{array} \right)
\end{equation}
and\footnote{Note that $\min_{\pi \in \Pi} - \hat{\lambda}(\pi;\gamma_0,b)'H(\pi;\gamma_0)\hat{\lambda}(\pi;\gamma_0,b) = - \hat{\lambda}(\hat{\pi}(\gamma_0,b);\gamma_0,b)'H(\hat{\pi}(\gamma_0,b);\gamma_0)\hat{\lambda}(\hat{\pi}(\gamma_0,b);\gamma_0,b)$.}
\begin{equation} \label{asy_dist_objective}
2n(Q_n(\hat{\theta}_n) - Q_{0,n}) \overset{d}{\to} \min_{\pi \in \Pi} - \hat{\lambda}(\pi;\gamma_0,b)'H(\pi;\gamma_0)\hat{\lambda}(\pi;\gamma_0,b).
\end{equation}
The corresponding asymptotic distribution results for $\hat{\theta}^\dagger_{n,0}$ and $\hat{\theta}_{n,0}$ are obtained similarly, by replacing $\Lambda$ with $\{0\}^2 \times [-\infty,\infty]$ and $ [0,\infty] \times \{0\} \times [-\infty,\infty]$, respectively.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_pi_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0.08944_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_pi_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0.17889_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_pi_new}.pdf}
\caption{Asymptotic and finite-sample ($n = 500$) densities of $\hat{\pi}_{n}$ with $\sqrt{n} \beta_1 = b_1 = 0, 2,$ and 4 from left to right and $\beta_2 = b_2 = 0$. Here, $\pi = 0.2$, $\varphi = 0.5$, and $\kappa = 0$.}
\label{plot_pi_n_500}
\end{center}
\end{figure}
Figure \ref{plot_pi_n_500} shows the asymptotic and finite-sample ($n=500$) densities of $\hat{\pi}_n$ for several values of $b_1$ and $b_2 = 0$; see Section \ref{AsySz} for details on the data generating process (\textit{dgp}) and Appendix \ref{CD} for details on how the asymptotic distribution is simulated.\footnote{Figure \ref{plot_beta_zeta_n_500} in Appendix \ref{additional_graphs} plots the corresponding distributions for the elements of $\hat{\psi}_n$.} We observe that the asymptotic distribution provides a good approximation to the finite-sample distribution. Furthermore, we observe that identification strength strongly impacts the distribution of $\hat{\pi}_n$, with large deviations from normality. When $b = 0$, the distribution of $\hat{\pi}_n$ is (almost) completely flat with point masses at the boundaries of the optimization parameter space as well as at 1; recall that $\hat{\pi}_n = 1$ whenever $\hat{\pi}_n = \Pi$. As $b_1$ increases, the distribution of $\hat{\pi}_n$ starts to resemble the normal distribution (centered at the true value, $\pi = 0.2$, and truncated to the support $\Pi$). This observation is in line with the asymptotic distribution results for $\| b \| = \infty$; see Section \ref{ad_b_infty}.
The asymptotic distribution result in \eqref{asy_dist_objective} allows us to derive the asymptotic distributions of $LR_n^\dagger$ and $LR_n$ under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $\| b \| < \infty$. Let $S_\beta = [I_2 \ 0_2]$ denote the selection matrix that (among $\psi$) selects the entries pertaining to $\beta$ and let $\hat{\lambda}_\beta(\pi;\gamma_0,b) = S_\beta \hat{\lambda}(\pi;\gamma_0,b)$. Then, the asymptotic distribution of $LR^\dagger_n$ under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $\| b \| < \infty$ is given by
\begin{equation} \label{asy_dist_LR_star}
LR^\dagger(\gamma_0,b) = \max_{\pi \in \Pi} \hat{\lambda}_\beta(\pi;\gamma_0,b)'(c_0S_\beta H^{-1}(\pi;\gamma_0) S_\beta')^{-1}\hat{\lambda}_\beta(\pi;\gamma_0,b) .
\end{equation}
We note that for $b = 0$ we recover the asymptotic distribution of $LR^\dagger_n$ under $H_0^\dagger$, see also Theorem 2.1 in PR. The asymptotic distribution of $LR_n$ under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $\| b \| < \infty$ is given by\footnote{If $\{z_t\}$ and $\{x_t\}$ are independent, in which case PR's ``additional assumption'' is also satisfied, then $S_\beta H^{-1}(\pi;\gamma_0) S_\beta'$ is diagonal and the asymptotic distributions of $LR_n^\dagger$ and $LR_n$ simplify. Letting $S_{\beta_1} = [1 \ 0 \ 0]$ and $S_{\beta_2} = [0 \ 1 \ 0]$ as well as $\hat{\lambda}_{\beta_1}(\pi;\gamma_0,b) = S_{\beta_1} \hat{\lambda}(\pi;\gamma_0,b)$ and $\hat{\lambda}_{\beta_2}(\pi;\gamma_0,b) = S_{\beta_2} \hat{\lambda}(\pi;\gamma_0,b)$, we have, for $i \in \{1,2\}$,
\[
\hat{\lambda}_{\beta_i}(\pi;\gamma_0,b) \sim \max(Z_{\beta_i}(\pi;\gamma_0,b),0),
\]
with $Z_{\beta_i}(\pi;\gamma_0,b) = S_{\beta_i} Z(\pi;\gamma_0,b)$. Furthermore, we have $S_{\beta_1} \hat{\lambda}^r(\pi;\gamma_0,b) = S_{\beta_1} \hat{\lambda}(\pi;\gamma_0,b)$. Letting $\Delta(\pi) = (c_0S_\beta H^{-1}(\pi;\gamma_0) S_\beta')^{-1}$, the asymptotic distribution of $LR_n$, for example, is then given by
\[
\max_{\pi \in \Pi} \left[ \hat{\lambda}_{\beta_1}(\pi;\gamma_0,b)^2 \Delta_{11}(\pi) + \hat{\lambda}_{\beta_2}(\pi;\gamma_0,b)^2 \Delta_{22}(\pi) \right] - \max_{\pi \in \Pi} \hat{\lambda}_{\beta_1}(\pi;\gamma_0,b)^2 \Delta_{11}(\pi).
\]}
\begin{align}
LR(\gamma_0,b) =& \max_{\pi \in \Pi} \hat{\lambda}_\beta(\pi;\gamma_0,b)'(c_0S_\beta H^{-1}(\pi;\gamma_0) S_\beta')^{-1}\hat{\lambda}_\beta(\pi;\gamma_0,b) \nonumber \\ -& \max_{\pi \in \Pi} \hat{\lambda}^r_\beta(\pi;\gamma_0,b)'(c_0S_\beta H^{-1}(\pi;\gamma_0) S_\beta')^{-1}\hat{\lambda}^r_\beta(\pi;\gamma_0,b), \label{asy_dist_LR}
\end{align}
where $\hat{\lambda}^r_\beta(\pi;\gamma_0,b) = S_{\beta} \hat{\lambda}^r(\pi;\gamma_0,b) $,
\[
\hat{\lambda}^r(\pi;\gamma_0,b) = \operatornamewithlimits{arg\ min}_{\lambda \in \Lambda^r} q(\lambda,\pi;\gamma_0,b),
\]
and
\[
\Lambda^r = [0,\infty] \times \{ 0 \} \times [-\infty,\infty].
\]
The results in \eqref{asy_dist_LR_star} and \eqref{asy_dist_LR} provide us with the asymptotic distributions of $LR^\dagger_n$ and $LR_n$, respectively, for true values of $\beta_1$ that are close to the boundary relative to the sample size, $b_1 < \infty$, which are permitted under $H_0$. Together, these results allow us to determine the asymptotic null rejection frequencies of the testing procedures proposed by PR under empirically relevant values of $\beta_1$. These asymptotic null rejection frequencies play a crucial role in determining whether the testing procedures control asymptotic size; see Section \ref{AsySz}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_star_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0.08944_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_star_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0.17889_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_star_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0.08944_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_500_zeta_1_beta1_0.17889_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_new}.pdf}
\caption{Asymptotic and finite-sample ($n = 500$) densities of $LR_n^\dagger$ (top row) and $LR_n$ (bottom row) with $\sqrt{n} \beta_1 = b_1 = 0,2,$ and 4 from left to right and $\beta_2 = b_2 = 0$. Here, $\pi = 0.2$, $\varphi = 0.5$, and $\kappa = 0$.}
\label{plot_LRs_n_500}
\end{center}
\end{figure}
Figure \ref{plot_LRs_n_500} shows the asymptotic and the finite-sample ($n = 500$) densities of $LR_n^\dagger$ and $LR_n$ for the same \textit{dgp} that underlies Figure \ref{plot_pi_n_500}. While the asymptotic distribution of $LR_n$ provides a very good approximation to its finite-sample distribution, the asymptotic distribution of $LR_n^\dagger$ seems to be first-order stochastically dominated by its finite-sample distribution. However, looking at Figure \ref{plot_LRs_n_10000}, which reproduces the top row of Figure \ref{plot_LRs_n_500} with $n=$ 10,000, this seems to be a ``small sample'' phenomenon.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_10000_zeta_1_beta1_0_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_star_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_10000_zeta_1_beta1_0.02_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_star_new}.pdf}
\includegraphics[width=54mm]{{Graphs/GARCHX_T_10000_zeta_1_beta1_0.04_beta2_0_pi_0.2_pi_max_0.9_rho_0.5_rho_ze_0_LR_star_new}.pdf}
\caption{Asymptotic and finite-sample ($n =$ 10,000) densities of $LR_n^\dagger$ with $\sqrt{n} \beta_1 = b_1 = 0, 2,$ and 4 from left to right and $\beta_2 = b_2 = 0$. Here, $\pi = 0.2$, $\varphi = 0.5$, and $\kappa = 0$.}
\label{plot_LRs_n_10000}
\end{center}
\end{figure}
\subsubsection{Results for $\| b \| = \infty$ (and $\beta_n \to 0$)} \label{results_b_infty}
Under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ with $\beta_0 = 0$, we have that
\begin{equation} \label{Lemma32b}
\| \beta_n \|^{-2} (Q_n^c(\pi) - Q_{0,n}) \to_p \eta(\pi;\gamma_0,\omega_0)
\end{equation}
uniformly over $\pi \in \Pi$, where
\[
\eta(\pi;\gamma_0,\omega_0) = -\frac{1}{2} \omega_0'K(\pi;\gamma_0)' H^{-1}(\pi;\gamma_0) K(\pi;\gamma_0) \omega_0.
\]
Furthermore, we have that $\eta(\pi;\gamma_0,\omega_0)$ is uniquely minimized at $\pi = \pi_0 \ \forall \gamma_0 \in \Gamma$ with $\beta_0 = 0$. It can then be shown that $\hat{\pi}_n - \pi_n \to_p 0$ under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ and $\| \beta_n \|^{-1} (\hat{\psi}_n - \psi_n) = o_p(1)$ under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ with $\beta_0 = 0$, where the latter result ensures that \eqref{quadratic_expansion_full_vector} holds; see the discussion below Assumption D1 in AC.
\subsection{Asymptotic distribution for $\| b \| = \infty$} \label{ad_b_infty}
Under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$, we have
\begin{equation} \label{quadratic_expansion_full_vector}
Q_n(\theta) = Q_n(\theta_n) + DQ_n(\theta_n)'(\theta-\theta_n) + \frac{1}{2} (\theta-\theta_n)'D^2Q_n(\theta_n)(\theta - \theta_n) + R^*_n(\theta),
\end{equation}
where the remainder $R^*_n(\theta)$ satisfies
\[
\sup_{\theta \in \Theta_n(\delta_n)} |nR_n^*(\theta)| = o_p(1)
\]
for all $\delta_n \to 0$, where $\Theta_n(\delta_n) = \{ \theta \in \Theta: \| \psi - \psi_n \| \leq \delta_n \| \beta_n \| \text{ and } \| \pi - \pi_n \| \leq \delta_n\}$. Here, $DQ_n(\theta)$ and $D^2Q_n(\theta)$ denote the vector of first-order left/right partial derivatives and the matrix of second-order left/right partial derivatives of $Q^\infty_n(\theta)$ with respect to $\theta$, respectively. Define
\[
B(\beta) = \left[ \begin{array}{cc} I_3 & 0_{3\times1} \\ 0_{1\times3} & \| \beta \| \end{array} \right],
\]
\[
J_n = B^{-1}(\beta_n) D^2 Q_n(\theta_n) B^{-1}(\beta_n),
\]
and
\[
Z_n^\infty = - \sqrt{n} J_n^{-1} B^{-1}(\beta_n) DQ_n(\theta_n).
\]
Then \eqref{quadratic_expansion_full_vector} can be rewritten as
\[
Q_n(\theta) = Q_n(\theta_n) - \frac{1}{2n} {Z_n^\infty}'J_nZ_n^\infty + \frac{1}{2n} q_n^{\infty}(\sqrt{n}B(\beta_n)(\theta-\theta_n)) + R^*_n(\theta),
\]
where
\[
q_n^{\infty}(\lambda) = (\lambda - Z_n^\infty)'J_n(\lambda - Z_n^\infty).
\]
Note that
\[
B^{-1}(\beta_n)DQ_n(\theta_n) = \frac{1}{n} \sum_{t=1}^n \frac{1}{2h^\infty_t(\theta_n)} \left( 1- \frac{y_t^2}{h^\infty_t(\theta_n)} \right) \tau(\beta_n/\|\beta_n\|,\pi_n),
\]
where
\begin{align*}
\tau(\beta/\|\beta\|,\pi) &= B^{-1}(\beta) \frac{\partial h^\infty_t(\theta)}{\partial \theta} \\ & = \left( \sum_{i=0}^{\infty} \pi^i y_{t-i-1}^2, \sum_{i=0}^{\infty} \pi^i x_{t-i-1}^2, 1, \sum_{i=1}^{\infty} i \pi^{i-1} \left(\frac{\beta_{1}}{\|\beta\|} y_{t-i-1}^2 + \frac{\beta_{2}}{\|\beta\|} x_{t-i-1}^2\right) \right)'.
\end{align*}
Then, under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$, we have $\sqrt{n} B^{-1}(\beta_n)DQ_n(\theta_n) \overset{d}{\to} N(0,V(\gamma_0,\omega_0))$, where
\[
V(\gamma_0,\omega_0) = \frac{c_0}{2} E_{\gamma_0} \frac{\tau(\omega_0,\pi_0)}{h^\infty_t(\theta_0)} \frac{\tau'(\omega_0,\pi_0)}{h^\infty_t(\theta_0)},
\]
and $J_n \to_p J(\gamma_0,\omega_0)$, where $J(\gamma_0,\omega_0) = V(\gamma_0,\omega_0)/c_0$, such that
\begin{equation} \label{Z_infty}
Z_n^\infty \overset{d}{\to} Z^\infty(\gamma_0,\omega_0) \sim N(0,c_0J^{-1}(\gamma_0,\omega_0)).
\end{equation}
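Simulating $Z^\infty(\gamma_0,\omega_0) \sim N(0,c_0J^{-1}(\gamma_0,\omega_0))$ only requires a Cholesky factor of the covariance matrix. A self-contained sketch (the covariance matrix below is a placeholder, not an estimate of $c_0 J^{-1}(\gamma_0,\omega_0)$):

```python
import math, random

def cholesky(M):
    """Lower-triangular L with L L' = M (M symmetric positive definite)."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(M[i][i] - s)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

def draw_Z_infty(Sigma, rng):
    """One draw from N(0, Sigma) via Sigma = L L' and Z = L u, u iid N(0,1)."""
    L = cholesky(Sigma)
    u = [rng.gauss(0.0, 1.0) for _ in range(len(Sigma))]
    return [sum(L[i][k] * u[k] for k in range(i + 1)) for i in range(len(Sigma))]

Sigma = [[4.0, 2.0], [2.0, 5.0]]   # placeholder covariance matrix
z_draw = draw_Z_infty(Sigma, random.Random(0))
```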
Let
\[
q^{\infty}(\lambda;\gamma_0,\omega_0) = (\lambda - Z^\infty(\gamma_0,\omega_0))'J(\gamma_0,\omega_0)(\lambda - Z^\infty(\gamma_0,\omega_0)).
\]
Then, under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$, we have
\begin{equation} \label{lambda_infty}
\sqrt{n}B(\beta_n)(\hat{\theta}_n - \theta_n) \overset{d}{\to} \hat{\lambda}^\infty(\gamma_0,\omega_0,b,p),
\end{equation}
where $\hat{\lambda}^\infty(\gamma_0,\omega_0,b,p) = \operatornamewithlimits{arg\ min}_{\lambda \in \Lambda^\infty} q^{\infty}(\lambda;\gamma_0,\omega_0)$ and where
\[
\Lambda^\infty = [-b_1,\infty] \times [-b_2,\infty] \times [-\infty,\infty] \times [-p,\infty].
\]
Furthermore, under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$, we have
\begin{equation} \label{obj_fun_infty}
2n(Q_n(\hat{\theta}_n) - Q_{n}(\theta_n)) \overset{d}{\to} - \hat{\lambda}^\infty(\gamma_0,\omega_0,b,p)'J(\gamma_0,\omega_0)\hat{\lambda}^\infty(\gamma_0,\omega_0,b,p).
\end{equation}
Let $S_{\beta,\pi} = [I_2 \ 0_{2\times2}; 0_3' \ 1]$ be the selection matrix that selects (among $\theta$) the entries pertaining to $\beta$ and $\pi$ and define $\hat{\lambda}^\infty_{\beta,\pi}(\gamma_0,\omega_0,b,p) = S_{\beta,\pi} \hat{\lambda}^\infty(\gamma_0,\omega_0,b,p)$. Also, let
\[
\lambda^{\infty,r}(\gamma_0,\omega_0,b,p) = \operatornamewithlimits{arg\ min}_{\lambda \in \Lambda^{\infty,r}} q^{\infty}(\lambda;\gamma_0,\omega_0),
\]
where
\[
\Lambda^{\infty,r} = [-b_1,\infty] \times \{ -b_2 \} \times [-\infty,\infty] \times [-p,\infty],
\]
and define $\hat{\lambda}^{\infty,r}_{\beta,\pi}(\gamma_0,\omega_0,b,p) = S_{\beta,\pi} \hat{\lambda}^{\infty,r}(\gamma_0,\omega_0,b,p)$. Then, the asymptotic distribution of $LR_n$, under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$, is given by
\begin{align}
LR^\infty(\gamma_0,\omega_0,b,p) =& \hat{\lambda}^\infty_{\beta,\pi}(\gamma_0,\omega_0,b,p)'(c_0S_{\beta,\pi} J^{-1}(\gamma_0,\omega_0) S_{\beta,\pi}')^{-1}\hat{\lambda}^\infty_{\beta,\pi}(\gamma_0,\omega_0,b,p) \nonumber \\ -& \hat{\lambda}^{\infty,r}_{\beta,\pi}(\gamma_0,\omega_0,b,p)'(c_0S_{\beta,\pi} J^{-1}(\gamma_0,\omega_0) S_{\beta,\pi}')^{-1}\hat{\lambda}^{\infty,r}_{\beta,\pi}(\gamma_0,\omega_0,b,p). \label{LR_infty}
\end{align}
In the following section, we use the foregoing asymptotic distribution results to analyze the asymptotic size of the two testing procedures proposed by PR.
\section{Asymptotic size and a new test} \label{AsySz}
First, we provide a characterization result for asymptotic size that holds for any test for testing \eqref{testing_problem}. To that end, let
\[
H = \{ h = (b_1, \gamma_0): 0 \leq b_1 < \infty \text{ and } \gamma_0 \in \text{cl}(\Gamma) \text{ with } \beta_0 = 0 \}
\]
and
\[
H^\infty = \{ h = (p , \gamma_0) : 0 \leq p < \infty \text{ and } \gamma_0 \in \text{cl}(\Gamma) \text{ with } \pi_0 = 0 \}.
\]
Furthermore, let $RP_\mathcal{T}(h)$ denote the asymptotic rejection probability of the test (that uses $\mathcal{T}_n$) under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $b_1 < \infty$ and $b_2 = 0$, where $h = (b_1,\gamma_0)$. Similarly, let $RP_\mathcal{T}^\infty(h)$ denote the asymptotic rejection probability of the test under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ with $\pi_0 = 0$, $b_1 = \infty$, $b_2 = 0$, and $\omega_0 = e_1$, where $h = (p,\gamma_0)$. Lastly, let $RP_\mathcal{T}^{\infty,\infty} \in [0,1]$ be such that $\limsup_{n \to \infty} RP_{\mathcal{T},n}(\gamma_n) \geq RP_\mathcal{T}^{\infty,\infty}$ for any $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,\infty)$ with $\pi_0 = 0$, $b_1 = \infty$, $b_2 = 0$, and $\omega_0 = e_1$ and $RP_{\mathcal{T},n}(\gamma_n) \to RP_\mathcal{T}^{\infty,\infty}$ for some $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,\infty)$ with $\pi_0 = 0$, $b_1 = \infty$, $b_2 = 0$, and $\omega_0 = e_1$, where $RP_{\mathcal{T},n}(\gamma_n)$ denotes the finite-sample rejection probability of the test under $\gamma_n$. We can now state our characterization result for AsySz$_\mathcal{T}$.
\begin{pro} \label{pro1} Given $\Gamma$ as defined in Appendix \ref{ACverification}, we have
\[
\textup{AsySz}_\mathcal{T} = \max \{\sup_{h \in H} RP_\mathcal{T}(h), \sup_{h \in H^\infty} RP_\mathcal{T}^\infty(h) , RP_\mathcal{T}^{\infty,\infty} \}
\]
for any test for testing \eqref{testing_problem}.
\end{pro}
The proof follows along the lines of the proof of Lemma 2.1 in AC and is given in Appendix \ref{ACverification}, along with the verification of all claims made in this section.
Next, we consider the two testing procedures proposed by PR. We have
\[
RP_{TS}(h) = P\left( \mathbbm{1}(LR^\dagger(\gamma_0,b) > LR^\dagger_{1-\alpha}(\gamma_0,0)) \times LR(\gamma_0,b) > \text{cv}_{1-\alpha} \right),
\]
where $LR^\dagger_{1-\alpha}(\gamma_0,0)$ denotes the $1-\alpha$ quantile of $LR^\dagger(\gamma_0,0)$, and
\[
RP_{S}(h) = P\left( LR(\gamma_0,b) > \text{cv}_{1-\alpha} \right).
\]
We note that $RP_{TS}(h)$ and $RP_{S}(h)$ depend on $\gamma_0$ through $H(\pi;\gamma_0)$, $\Omega(\pi_1, \pi_2;\gamma_0)$, and $K(\pi;\gamma_0)$.
Since $LR^\dagger_n \to \infty$ and $P(\widetilde{LR}^\dagger_{n,1-\alpha} < \infty) = 1$ under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ with $\pi_0 = 0$, $b_1 = \infty$, $b_2 = 0$, and $\omega_0 = e_1$, we have that
\[
RP_{TS}^\infty(h) = RP_{S}^\infty(h) = P\left( LR^\infty(\gamma_0,p) > \text{cv}_{1-\alpha} \right),
\]
where $LR^\infty(\gamma_0,p) = LR^\infty(\gamma_0,e_1,(\infty,0)',p)$ with $\pi_0 = 0$. Now, let
\[
\rho(\gamma_0) = \frac{\left( J^{-1}(\gamma_0,e_1)\right)_{\beta_2,\pi}}{\sqrt{\left( J^{-1}(\gamma_0,e_1)\right)_{\beta_2,\beta_2}\left( J^{-1}(\gamma_0,e_1)\right)_{\pi,\pi}}}
\]
and $q(\gamma_0) = q(\rho(\gamma_0)) = \sin^{-1}(\rho(\gamma_0))/(2\tilde{\pi})$. Furthermore, let ${LR}^\infty_{1-\alpha}(\gamma_0,p)$ denote the $1-\alpha$ quantile of ${LR}^\infty(\gamma_0,p)$ and note that $LR_{1-\alpha}^\infty(\gamma_0,\infty) = \text{cv}_{1-\alpha}$. It can be shown that $LR_{1-\alpha}^\infty(\gamma_0,p)$ is strictly monotonically increasing (decreasing) in $p$ if $ \rho(\gamma_0) < 0$ ($ \rho(\gamma_0) > 0$), while $LR_{1-\alpha}^\infty(\gamma_0,p) = \text{cv}_{1-\alpha}$ if $\rho(\gamma_0) = 0$; this result is in line with and extends the result in PR. It then follows, from Theorem 2.1 in \cite{Kopylev:11}, that
\begin{align}
& \sup_{h \in H^\infty} P\left( LR^\infty(\gamma_0,p) > \text{cv}_{1-\alpha} \right) \nonumber \\
=& \max \{ \sup_{\gamma_0:(0,\gamma_0) \in H^\infty, \rho(\gamma_0) > 0} 1 - ((1/2 - q(\gamma_0)) + 1/2 F_{\chi^2}(\text{cv}_{1-\alpha};1) + q(\gamma_0) F_{\chi^2}(\text{cv}_{1-\alpha};2)), \alpha \}, \label{cf_expression}
\end{align}
where $F_{\chi^2}(\cdot;k)$ denotes the cumulative distribution function of a $\chi^2$ random variable with $k$ degrees of freedom. Noting that $ 1 - ((1/2 - q(\gamma_0)) + 1/2 F_{\chi^2}(\text{cv}_{1-\alpha};1) + q(\gamma_0) F_{\chi^2}(\text{cv}_{1-\alpha};2))$ is strictly increasing in $\rho(\gamma_0)$, we have that $\sup_{h \in H^\infty} P\left( LR^\infty(\gamma_0,p) > \text{cv}_{1-\alpha} \right)$ is bounded from above by $11.46\%$ for $\alpha = 0.05$; this number is obtained by evaluating \eqref{cf_expression} at $\rho(\gamma_0) = 1$.
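The $11.46\%$ bound can be reproduced numerically from \eqref{cf_expression}: $\text{cv}_{1-\alpha}$ solves $\Phi(\sqrt{c}) = 1-\alpha$ (since $P(\max(0,Z)^2 \leq c) = \Phi(\sqrt{c})$), the $\chi^2$ CDFs with one and two degrees of freedom have the closed forms $\text{erf}(\sqrt{x/2})$ and $1-e^{-x/2}$, and $q(\rho) = \sin^{-1}(\rho)/(2\tilde{\pi})$ as in the text. A minimal sketch:

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def cv_half_chi2(alpha):
    """1-alpha quantile of max(0,Z)^2 with Z ~ N(0,1), by bisection."""
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Phi(math.sqrt(mid)) < 1.0 - alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rejection_bound(rho, alpha):
    """Asymptotic rejection probability implied by the chi-bar-squared
    mixture with weights (1/2 - q, 1/2, q), q = arcsin(rho)/(2 pi)."""
    cv = cv_half_chi2(alpha)
    q = math.asin(rho) / (2.0 * math.pi)
    F1 = math.erf(math.sqrt(cv / 2.0))   # chi^2(1) CDF at cv
    F2 = 1.0 - math.exp(-cv / 2.0)       # chi^2(2) CDF at cv
    return 1.0 - ((0.5 - q) + 0.5 * F1 + q * F2)

bound = rejection_bound(rho=1.0, alpha=0.05)   # approx 0.1146
```

At $\rho = 0$ the same expression returns $\alpha$, consistent with the ``$\max\{\cdot,\alpha\}$'' in \eqref{cf_expression}.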
In what follows, we determine lower bounds on $\sup_{h \in H} RP_{TS}(h)$ and $\sup_{h \in H} RP_{S}(h)$ by numerically evaluating $RP_{TS}(h)$ and $RP_{S}(h)$ at certain choices of $h \in H$. Similarly, we determine a lower bound on \eqref{cf_expression} by numerically evaluating $\rho(\gamma_0)$ at certain choices of $\gamma_0$ such that $(0,\gamma_0) \in H^\infty$ (and by plugging the resulting value into $1 - ((1/2 - q(\gamma_0)) + 1/2 F_{\chi^2}(\text{cv}_{1-\alpha};1) + q(\gamma_0) F_{\chi^2}(\text{cv}_{1-\alpha};2))$). Combining the above lower bounds then provides us with lower bounds on AsySz$_{TS}$ and AsySz$_{S}$.
The numerical evaluation is based on the following \textit{dgp}, which is inspired by PR (see their Appendix D). $\phi$ is such that $x_t$ follows an AR(1), i.e.,
\begin{equation} \label{phi}
x_t = \varphi x_{t-1} + \epsilon_t,
\end{equation}
where
\[
\left( \begin{array}{c} z_t \\ \epsilon_t \end{array} \right) \sim N\left( \left( \begin{array}{c} 0 \\ 0 \end{array} \right), \left( \begin{array}{cc} 1 & \kappa \\ \kappa & 1 \end{array} \right) \right).
\]
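A minimal simulation of this \textit{dgp} (sample size and parameter values below are illustrative) draws the correlated pair $(z_t,\epsilon_t)$ via the usual Cholesky-type construction $\epsilon_t = \kappa z_t + \sqrt{1-\kappa^2}\, u_t$ with $u_t$ independent standard normal, and then builds $x_t$ recursively:

```python
import math, random

def simulate_dgp(n, phi_ar, kappa, seed=0):
    """Draw (z_t, eps_t) jointly standard normal with correlation kappa
    and build x_t as the AR(1) x_t = phi_ar * x_{t-1} + eps_t."""
    rng = random.Random(seed)
    z, eps, x = [], [], [0.0]   # x initialized at zero
    for _ in range(n):
        u1, u2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        z.append(u1)
        e = kappa * u1 + math.sqrt(1.0 - kappa ** 2) * u2
        eps.append(e)
        x.append(phi_ar * x[-1] + e)
    return z, eps, x[1:]

z, eps, x = simulate_dgp(20000, phi_ar=0.5, kappa=0.5, seed=42)
```

Setting $\kappa = 0$ recovers the case where $\{z_t\}$ and $\{\epsilon_t\}$ (and hence $\{x_t\}$) are independent.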
We note that, given the above $\phi$, the ``additional assumption'' of PR is satisfied if and only if $\kappa = 0$. In what follows, we take $\bar{\pi} = 0.9$ and obtain results for $\alpha = 5\%$; details on the numerical evaluation can be found in Appendix \ref{CD}. We find that $RP_{TS}(h) = 6\%$ where $h$ is given by $b_1 = 2.5$, $\zeta_0 = 1$, $\pi_0 = 0.64$, $\varphi_0 = 0.5$, and $\kappa_0 = 0$. Furthermore, we find that $RP_{S}(h) = 9.48\%$ where $h$ is as before, except that $b_1 = 0$ (and $\pi_0 = \Pi^*$). Lastly, we find that $RP^\infty_{TS}(h) = RP^\infty_{S}(h) = 6.65\%$ where $h$ is given by $p = 0$, $\beta_1 = 0.3$, $\zeta_0 = 1$, $\pi_0 = 0$, $\varphi_0 = 0$, and $\kappa_0 = 0.99$ so that $\rho(\gamma_0) = 0.39$.\footnote{This lower bound is increasing in $\kappa_0$, e.g., we have $RP^\infty_{TS}(h) = RP^\infty_{S}(h) = 7.11\%$ where $h$ is as before except that $\beta_1 = 0.25$ and $\kappa_0 = 0.999$ so that $\rho(\gamma_0) = 0.49$. We refrain from reporting results for larger $\kappa_0$ due to the decreasing accuracy in the numerical evaluation (as $\kappa_0$ approaches 1).} Noting that $RP_{TS}^{\infty,\infty} = RP_{S}^{\infty,\infty} = \alpha$, we obtain the following Corollary to Proposition \ref{pro1}.
\begin{cor} \label{cor1}
Given $\Gamma$ as defined in Appendix \ref{ACverification} with $\overline{\pi} = 0.9$, AsySz$_{TS} \geq 6.65\%$ and AsySz$_{S} \geq 9.48\%$ at $\alpha = 5\%$.
\end{cor}
\begin{rem}
We note that the two lower bounds obtained in Corollary \ref{cor1} also apply if $\Gamma$ is restricted to satisfy the ``additional assumption'' in PR and, in case of the second testing procedure, the assumption that $\beta_1 > 0$. This is due to the point-wise nature of the assumptions and the ``suprema'' in the definitions of Sz$_\mathcal{T}$ and AsySz$_\mathcal{T}$. For example, the ``additional assumption'' in PR is only imposed at $\pi = 0$ and does not exclude sequences of true parameters that are such that $\rho(\gamma_n) \to \rho(\gamma_0) > 0$ and $\pi_n > 0$ for all $n \geq 1$, while $\pi_n \to \pi_0 = 0$.
\end{rem}
Figure \ref{plot_LRs_n_500} shows that the (asymptotic) distribution of $LR_n$ does not vary a lot with $\beta_1$, at least for the particular $\gamma$ under consideration. This motivates our suggestion to test \eqref{testing_problem} using $LR_n$ combined with a plug-in least favorable configuration critical value (PI-LF); in what follows, the resulting test is abbreviated as $LR$-$LF$. In the context at hand, we have to consider two identification scenarios ($b_1 < \infty$ and $b_1 = \infty$) that result in two different asymptotic null distributions of $LR_n$. The idea is simple: in each scenario, all unknown quantities/parameters of the asymptotic distribution of $LR_n$ that are consistently estimable are replaced by estimators. For the remaining parameters, we determine the least favorable configuration, i.e., we determine under which values of these parameters the $1-\alpha$ quantile of the asymptotic distribution is maximized. The corresponding $1-\alpha$ quantile then serves as critical value for the scenario under consideration. Finally, the maximum of the thus obtained critical values and $\text{cv}_{1-\alpha}$ constitutes our proposed PI-LF. We note that, under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $b_1 < \infty$ and $b_2 = 0$, $b_1$ and $\pi_0$ are not consistently estimable, while, under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ with $\pi_0 = 0$, $b_1 = \infty$, $b_2 = 0$, and $\omega_0 = e_1$, $p$ is not consistently estimable. Our final implementation follows the above (``general'') approach with two twists. First, under $\{\gamma_n\} \in \Gamma(\gamma_0,0,b)$ with $b_1 < \infty$ and $b_2 = 0$, we restrict the search of the least favorable configuration to values of $b_1$ and $\pi_0$ that satisfy $3(\sqrt{n}b_1)^2+\pi_0^2+2(\sqrt{n}b_1)\pi_0 < 1$; this condition is motivated by the condition $E_{\gamma} y_t^2 < \infty$ \citep[see e.g.,][]{MS:00} that is also imposed by $\Gamma$. 
Second, if $\hat{\beta}_{n,1} = 0$, then we only consider the critical value obtained from the $b_1 < \infty$ scenario, because the probability of observing $\hat{\beta}_{n,1} = 0$ under $\{\gamma_n\} \in \Gamma(\gamma_0,\infty,b,\omega_0,p)$ with $\pi_0 = 0$, $b_1 = \infty$, $b_2 = 0$, and $\omega_0 = e_1$ approaches 0. See Appendix \ref{CD} for more details on the construction of critical values. By design, we have the following result.
\begin{cor} \label{cor2}
Given $\Gamma$ as defined in Appendix \ref{ACverification}, AsySz$_{LR\text{-}LF} = \alpha$ for $\alpha \in (0,1/2)$.
\end{cor}
\section{Monte Carlo} \label{MC}
In this section, we use simulations to assess (i) how well the finite-sample null rejection frequencies of the different tests, or testing procedures, that we consider are approximated by the foregoing asymptotic theory and (ii) how these tests compare in terms of (finite-sample) power. We generate data from the model given in \eqref{y}, \eqref{h}, and \eqref{phi}. We use a burn-in phase of 100 observations and set the starting values for the $y_t$ and $x_t$ series equal to zero. The sample size $n$ (after discarding the burn-in observations) is equal to 500. The number of simulations is 1,000. Table \ref{RF_table} shows the finite-sample rejection frequencies (in \%) of the likelihood ratio test for testing $H_0^\dagger$ ($LR^\dagger$), the two-step procedure ($TS$), the second testing procedure ($S$), and the test that uses $LR_n$ together with PI-LF ($LR$-$LF$) at the 5\% nominal level for different true parameter constellations. Throughout, $\zeta$ is set equal to 1, while $\pi$, $\kappa$, $\varphi$, $\beta_1$, and $\beta_2$ are varied as indicated in the table.
\begin{table}[h!]
\begin{center}
\caption{Finite-sample rejection frequencies (in \%) at 5\% nominal level}
\label{RF_table}
\begin{tabular}{c|rrr|rrr|rrr|r|r}
\hline
\hline
& (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10) & (11) \\
\hline
$\pi$ & \multicolumn{3}{c|}{0.2} & \multicolumn{3}{c|}{0.2} &\multicolumn{3}{c|}{0.2} & 0.64 & 0 \\
$\varphi$ & \multicolumn{3}{c|}{0.5} & \multicolumn{3}{c|}{0.5} &\multicolumn{3}{c|}{0.5} & 0.5 & 0 \\
$\kappa$ & \multicolumn{3}{c|}{0} & \multicolumn{3}{c|}{0} &\multicolumn{3}{c|}{0} & 0 & 0.99 \\
$\beta_1$ & \multicolumn{3}{c|}{0} & \multicolumn{3}{c|}{0.05} &\multicolumn{3}{c|}{0.1} & 0.11 & 0.3 \\
$\beta_2$ & 0 & 0.05 & 0.1 & 0 & 0.05 & 0.1 & 0 &0.05 & 0.1 & 0 & 0 \\
\hline
$LR^\dagger$ & 3.1 & 22.9 & 57.5 & 14.0 & 36.0 & 65.2 & 40.6 & 57.5 & 79.1 & 70.5 & 99.0 \\
$TS$ & 2.0 & 21.9 & 56.6 & 3.4 & 25.8 & 59.7 & 5.6 & 29.9 & 62.9 & 6.2 & 8.2\\
$S$ & 11.0 & 45.6 & 77.9 & 9.0 & 41.1 & 73.3 & 8.3 & 37.2 & 68.9 & 6.9 & 8.2 \\
$LR$-$LF$ & 5.3 & 34.5 & 67.7 & 4.8 & 30.9 & 61.9 & 4.0 & 26.8 & 57.1 & 3.9 & 0.0 \\
\hline
\end{tabular}
\end{center}
\end{table}
Columns (1), (4), (7), (10), and (11) of Table \ref{RF_table} report null rejection frequencies for $TS$, $S$, and $LR$-$LF$ ($\beta_2 = 0$). We see that the asymptotic null rejection probabilities that underlie the lower bounds on asymptotic size obtained in Section \ref{AsySz} provide good approximations to the corresponding finite-sample rejection frequencies. In particular, column (1) shows an 11\% rejection frequency of $S$, which is close to the corresponding asymptotic rejection probability of 9.48\%. Similarly, columns (10) and (11) show rejection frequencies of 6.2\% and 8.2\% for $TS$, which are close to the corresponding asymptotic rejection probabilities of 6\% and 6.65\%, respectively; note that $\sqrt{500} \times 0.11 \approx 2.5$ and $\sqrt{500} \times 0.3 \approx 6.7$.\footnote{Here, 6.7 appears to be large enough for the asymptotic theory obtained for $b_1 = \infty$ to provide a good approximation, which is corroborated by a rejection frequency of $LR^\dagger$ close to 100\%.} Furthermore, Table \ref{RF_table} reveals that $TS$ has null rejection frequencies below the nominal level for ``very small'' values of $\beta_1$ (see columns (1) and (4)); this is not surprising given the nature of the two-step procedure and the low rejection frequency of $LR^\dagger$ for such values of $\beta_1$. For the \textit{dgp}s considered in columns (1)--(9) our numerical evaluations show that $b_1 = 0$ is the (unique) least favorable configuration for $LR_n$. As a result, the $LR$-$LF$ offers sizeable power gains over $TS$ for ``very small'' values of $\beta_1$ (see columns (2), (3), (5), and (6)), where the latter, by continuity of the power curve, in some sense ``sacrifices'' power. Not surprisingly, for large(r) values of $\beta_1$ the power ranking is reversed (see columns (8) and (9)).
\section{Introduction}
\label{sec:intro}
Analysis of agents whose motion follows some patterns or rules is a topic of increasing interest in recent years. Mobile agent behaviors are frequently analyzed in multiple research fields, e.g., video surveillance, crowd monitoring, sport events and situation awareness \cite{Hsieh2008, Cristani2013, Piriou2006}. This paper addresses the representation and incremental learning of the causal motivations that modify the state of moving agents in an observable way. Incremental learning makes it possible to deal with situations where new static objects (attractors or repellers) are progressively added. Observations that produce deviations from previous knowledge are used to increase the environmental understanding. Accordingly, new models can be characterized by comparing expected learned behaviors with the observed ones.
Cognitive systems that analyze data from unstructured information represent a fundamental topic in which a wide research is under development \cite{Aggarwal2011}. Due to the large amount of information that can be collected from technological devices such as positioning and locating systems, it becomes important to provide methods that allow not only to detect and track objects robustly but also to extract meaningful information to explain the contextual motivation that can be related to their motion.
The Bayesian inference methodology has proven to be a powerful tool to model and explain interactive dynamical behaviors through a probabilistic framework. Several problems can be solved using a stochastic formulation, e.g., crowd monitoring \cite{Napoletano2015,Calderara2008}, crowd analysis \cite{ Ferrer2014,Wu2014,Andrade2006,Chiappino2015}, event detection \cite{Wu2014,Andrade2006,Hongeng2004,Dore2010}, etc.
Understanding the context of video sequences is an important task that improves the identification of events involving interactions among agents in a scene. Detection of events from video is a topic that is gaining interest in the research community. Several works have been presented in this area, some of them are focused on detection of abnormal behaviors in surveillance videos \cite{Mehran2009,Yuan2015,Chiappino2015,Vahid2016,1020947}, detection of groups of people in scenes \cite{Chan2012,Vahid2015} and identification of dangerous massive crowds \cite{Bu2011}. This particular work is focused on cases in which no visual information is available but the system can acquire and process data about the dynamic localization of the involved agents.
When a particular object is introduced in the scene, it becomes essential to know the effect that it exerts on the state of moving agents that interact with it. From that point of view, Situation Awareness (SA) can be seen as a problem of joint data estimation where a set of measurements about the position of agents provide information about the motivation that makes them move in a certain way.
Particular zones of the environment can be considered as repulsive or attractive static objects creating a force field over agents, which causes a modification of their motion. The knowledge of a learned force field can be used to express other environmental constraints by using the agent deviation from the expected model already characterized.
The core of this work relies on the understanding of interactions between mobile agents and static objects. By observing the motion of agents it is possible to infer characteristics of external objects through the estimation of velocity fields associated with their effect on the environment. A Bayesian method has been selected for this task due to its capacity of considering probabilities to model interdependent events with their related uncertainty. Bayesian Models or Probabilistic Graphical Models (PGMs) \cite{Koller2009} are able to represent and temporally predict upcoming situations and they have proved to be useful in SA problems \cite{Park2014,Costa2009,Carvalho2010}.
Formally, SA is defined as ``the perception of the elements within a volume of time and space, the comprehension of their meaning, the explanation of their present (observed) status and the ability to project the same in the near future'' \cite{Bhatt2012}. From this perspective, SA represents a relevant issue to consider for understanding influences of objects inside scenes. The goal of this work is to identify repulsive and attractive properties of unknown objects given the movement of agents in the scene.
The paper is organized as follows: Section \ref{sec:Problem} gives a brief introduction to the concepts of force fields, section \ref{sec:ProblemDes} describes the problem that will be tackled, section \ref{sec:format} presents the proposed method, while results are analyzed in section \ref{sec:results} and conclusions are drawn in section \ref{sec:typestyle}.
\section{Force field terminology}
\label{sec:Problem}
Taking a classical mechanics approach, a force is defined as a vectorial quantity that acts on a body to cause a change in its state of motion \cite{Wagh2012}. Forces can be classified into action-reaction forces (when bodies that are in contact change their momenta \cite{Wagh2012}) and action-at-a-distance forces (when objects interact without being physically touched).
Considering that social interactions can be often modeled as contact-less, it becomes possible to explain social phenomena in a certain environment by modeling interactions between entities with action-at-a-distance forces.
A force field $\vec{F}$ is defined as a vector point-function which at every point of the space takes a particular value, given by the magnitude and direction of the force acting on a particle of unit mass placed there \cite{Tenenbaum2012}. Accordingly, in this work, the particles of unit mass affected by force fields will be called agents.
A central force field $\vec{F}=f(r)\hat{r}$ is a special case of force field in which the motion of agents is affected depending on the distance $r$ to a center of force, which is generally associated with the center of mass of the object that produces the force field. $\hat{r}$ is a unit vector in the direction of $r$.
Additionally, a force field is called conservative if it can be expressed in terms of the gradient of a function $\Phi(x,y,z)$. In the case of central force fields, since they are spherically symmetric, they will be conservative as well, such that:
\begin{equation} \label{eq3c}
\vec{F}=f(r)\hat{r}=-\nabla \Phi(r)
\end{equation}
As an example, let $\Phi_{g}(r)$ be the potential function of the gravitational force field, such that:
\begin{equation} \label{eq3e}
\Phi_{g}(r)=-\frac{GMm}{r}
\end{equation}
Where $m$ and $M$ are the masses of the agent and of the object that exerts the force field, respectively; $G = 6.67 \times 10^{-11} \frac {N m^{2}}{kg^{2}}$ is the gravitational constant and $r$ is the distance between the agent and the object's center of mass.
By substituting the potential function shown in equation (\ref{eq3e}) into equation (\ref{eq3c}), it is possible to obtain the gravitational force $\vec{F}_{g}$ in (\ref{eq3f}).
\begin{equation} \label{eq3f}
\vec{F}_{g}=-\frac{GMm}{r^{2}}\hat{r}
\end{equation}
Fig. \ref{fig:1a} and Fig. \ref{fig:1b} show the plots of the potential energy $-\Phi_{g}(r)$ and of the force field $\vec{F}_{g}$, respectively. For visualization purposes, in both figures a two-dimensional plane whose units are given in meters is considered in the axes $x$ and $y$. The center of force is placed at the origin of such plane, i.e., at $(0,0)$, and both masses, $M$ and $m$, are set to $1\,kg$.
In Fig. \ref{fig:1a}, the $z$ axis shows the magnitude of the gravitational potential, while in Fig. \ref{fig:1b} it represents the magnitude of the force field. The combined effect of both is plotted in Fig. \ref{fig:1c}.
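For reference, the two surfaces can be reproduced numerically; the grid bounds and resolution below are arbitrary choices for illustration.

```python
import numpy as np

G = 6.67e-11   # gravitational constant, N m^2 / kg^2
M = m = 1.0    # masses of the object and the agent, in kg

xs = np.linspace(-5.0, 5.0, 101)       # two-dimensional plane, in meters
X, Y = np.meshgrid(xs, xs)             # center of force at the origin
r = np.sqrt(X**2 + Y**2)
r[r == 0] = np.nan                     # the field is singular at the center

phi = -G * M * m / r                   # gravitational potential Phi_g(r)
F_mag = -G * M * m / r**2              # radial force component f(r)
```

Plotting `phi` and `F_mag` as surfaces over the grid reproduces the shapes shown in the figures.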
\begin{figure}[!htb]
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{Newton1}
\caption{Gravitational Potential Energy}
\label{fig:1a}
\vspace{1ex}
\end{minipage
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{Newton2}
\caption{Gravitational Force Field}
\label{fig:1b}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{1\columnwidth}
\centering
\includegraphics[width=0.65\linewidth]{Newton3}
\caption{Gravitational Force Field with its Potential Energy}
\label{fig:1c}
\vspace{3ex}
\end{minipage}
\end{figure}
Accordingly, this paper is focused on characterizing the effects of central force fields $f(r)\hat{r}$ in terms of agent motions.
\section{Problem description}
\label{sec:ProblemDes}
\subsection{Continuous Case}
\label{sssec:Continuous}
The dynamic equation describing the kinematic motion of an agent with constant acceleration is given below:
\begin{equation} \label{eq1aa}
p(t +\Delta t)= p(t) + v(t)\Delta t + \frac{a \Delta t^2 }{2}
\end{equation}
Where $p(t)$ represents the agent position at a certain instant $t$, $v(t)$ is its velocity and $a$ its acceleration. By substituting the expression of the force, $F=ma$, into equation (\ref{eq1aa}), it is possible to express the force in terms of the agent motion, such that:
\begin{equation} \label{eq1ab}
F=2m\bigg[\frac{p(t +\Delta t)- p(t) - v(t)\Delta t}{\Delta t^2}\bigg]
\end{equation}
Considering that the current and next position of the agent i.e., $p(t)$ and $p(t +\Delta t)$ are known, it is possible to obtain the force $F$ by calculating the parameter $v(t)$. Taking advantage from the additive property of velocities, it is possible to express such parameter as follows:
\begin{equation} \label{eq1ac}
v(t)=\sum_{i=1}^{N} v^i(t)
\end{equation}
Where $v^i(t)$ represents the velocity due to a force field produced by an external object $i$.
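A quick numerical check of equation (\ref{eq1ab}): for a constant-acceleration motion, the force recovered from two consecutive positions and the current velocity equals $ma$ exactly (the values of $m$, $\Delta t$, and the acceleration below are toy values).

```python
import numpy as np

m, dt = 1.0, 0.1
p_t = np.array([0.0, 0.0])                       # current position p(t)
v_t = np.array([1.0, 0.0])                       # current velocity v(t)
a_true = np.array([0.0, -2.0])                   # assumed constant acceleration

p_next = p_t + v_t * dt + 0.5 * a_true * dt**2   # kinematic update
F = 2 * m * (p_next - p_t - v_t * dt) / dt**2    # recovered force
```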
The main idea of this work is to incrementally learn the velocities $v^i(t)$ associated with the force fields produced by static objects. If such velocities become space dependent, it is possible to define a velocity field $v(\mathbb{R}^2)$ for a two dimensional environment. The description of the problem in terms of discrete time is formulated in the following.
\subsection{Discrete Case}
\label{sssec:Discrete}
By observing trajectories in a two-dimensional space $\mathbb{R}^2$, it is possible to define $Z_k$ as a vector of observations of the agent position $(x,y)$ at a time instant $k$. The state of the agent $X_{k}$ is defined as its position and velocity at a time $k$.
The relation between measurements of agent positions $Z_k$ and its state $X_{k}$ is expressed in equation (\ref{eq5}).
\begin{equation} \label{eq5}
Z_{k} = HX_{k} + \nu_{k}
\end{equation}
Where,
$$
X_k=\left[ \begin{array}{c} x_k \\ y_k \\ \dot{ x_k}\\ \dot{ y_k} \end{array} \right] ; H=\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \end{bmatrix}
$$
$\nu_{k}$ represents the noise in an instant $k$ produced by the sensor that is measuring the position of the agent.
The following general dynamic model can be considered for describing the motion of agents:
\begin{equation} \label{eq4aa}
X_{k+\Delta k} = FX_{k} + BV_{k} + n_{k}
\end{equation}
Where,
$$
F=\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{bmatrix} ; B=\begin{bmatrix} \Delta k& 0 \\ 0 & \Delta k \\ 1 & 0 \\ 0 & 1 \end{bmatrix} ; V_k=\left[ \begin{array}{c} \dot{ x_k} \\ \dot{ y_k} \end{array} \right]
$$
Where $V_{k}$ is the velocity of the agent that is due to the presence of fixed objects inside the scene at time $k$; parameter $n_k$ represents the noise of the dynamic model at time $k$.
If $V_k$ is considered as the sum of all the effects produced by the objects interacting with the agent at a time $k$, it is possible to rewrite the term $V_k$, such that:
\begin{equation} \label{eq4ab}
V_{k}= \sum_{i=1}^{N} V^{i}_{k}
\end{equation}
Where $N$ represents the total number of objects that affect the state of the agent. From equation (\ref{eq4ab}), it is possible to see that the classic mechanics additivity of velocities is used to include all the external influences into the proposed dynamic formulation that estimates the state of agents in time.
The sum of velocities caused by different objects inside an environment allows a hierarchical representation of the scene. The effect of each object can be incrementally learned by comparing the current motion of an agent with models that were learned from previous observations. Section \ref{sec:format} explains the method that is proposed to do so.
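The discrete model in equations (\ref{eq4aa})--(\ref{eq4ab}) can be sketched as follows; the representation of each object's velocity field as a callable is our own convention for illustration.

```python
import numpy as np

dk = 0.1  # sample time

# State X = [x, y, xdot, ydot]^T; F only propagates the position, since the
# whole velocity enters through the control term B * sum_i V_k^i.
F = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[dk, 0],
              [0, dk],
              [1, 0],
              [0, 1]], dtype=float)

def step(X, velocity_fields, noise=None):
    """One prediction X_{k+dk} = F X_k + B sum_i V_k^i + n_k.
    `velocity_fields` is a list of callables (x, y) -> (vx, vy)."""
    V = np.zeros(2)
    for vf in velocity_fields:
        V += np.asarray(vf(X[0], X[1]))
    n = np.zeros(4) if noise is None else noise
    return F @ X + B @ V + n
```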
\section{Proposed Method}
\label{sec:format}
Let us start by considering the simplest dynamical model, in which null velocity contributions are assumed, i.e., a non-interactive model. Its formulation is shown in equation (\ref{eq4ac}).
\begin{equation} \label{eq4ac}
X_{k+\Delta k}^1= F{X}_{k}^1 + n_k
\end{equation}
Where $X_{k+\Delta k}^1$ is the estimation of the next point based on a non-interactive dynamical model. In this way, deviations from such a model will give information about the effects of unknown objects in the scene.
By taking an observation at time instant $k+\Delta k$, it is possible to obtain the term $\tilde{Y}^1_{k}$, associated with the difference between the state estimation obtained from a non-interactive dynamical model and the measurement taken by a sensor, such that:
\begin{equation} \label{eq4ad}
\tilde{Y}_{k}^1 = Z_{k+\Delta k} - H X_{k+\Delta k}^1
\end{equation}
$\tilde{Y}_{k}^1$ is the innovation, or residual measurement, produced by the non-interactive model. In the ideal case, $\tilde{Y}_{k}^1$ goes to zero, which means that the proposed dynamical model correctly describes the behavior of the agent.
Given the case in which $\tilde{Y}_{k}^1$ is significantly different from zero, it is necessary to propose a more complete model that uses the residual $\tilde{Y}_{k}^1$ of the non-interactive model to accurately predict the agent behavior. From this perspective, the following new dynamical model is considered:
\begin{equation} \label{eq4aab0}
{X}_{k+\Delta k}^2= {X}_{k+\Delta k}^1 + BV_k^1
\end{equation}
The motivation of the above expression relies on the fact of modeling the effect of unknown objects as a velocity term $BV_k^1$, see equation (\ref{eq4aa}).
Accordingly, the innovation of the new model, $\tilde{Y}_{k}^2$, must be close to zero, such that:
\begin{equation} \label{eq4aab}
Z_{k+\Delta k} - H {X}_{k+\Delta k}^2=0
\end{equation}
By considering equations (\ref{eq4ad}) and (\ref{eq4aab0}) in the expression of equation (\ref{eq4aab}), it is possible to obtain $V_k^1$ in terms of $\tilde{Y}_{k}^1$, such that:
\begin{equation} \label{eq4ae}
\tilde{Y}_{k}^1 = H B {V}_{k}^1 \Longrightarrow V_k^1=\frac{\tilde{Y}^1_{k}}{\Delta k}
\end{equation}
In this sense, a more complete dynamic model, which includes more knowledge about the forces involved in the environment, can be used as a new estimator, as shown below.
\begin{equation} \label{eq4aaa}
X_{k+\Delta k}^2 = FX_{k}^2+ B\frac{\tilde{Y}_{k}^1}{\Delta k} + n_{k}
\end{equation}
One of the purposes of the present work is to learn the parameters $\frac{\tilde{Y}_{k}^i}{\Delta k}$ related to each static object $i$.
Given the case that an object $i$ appears in the scene, it is possible to extract information about its effect by calculating $\frac{\tilde{Y}_{k}^i}{\Delta k}$. This process is done by using the non-interactive model, see equation (\ref{eq4ac}), together with the $i-1$ models previously estimated.
The non-interactive model can be seen as a baseline from which other models that include information about static objects can be created.
Through the characterization of the effects of several objects in the scene, the knowledge about the environment is increased and more complete dynamic models that describe the motion of agents can be obtained.
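A one-step, noiseless numerical illustration of equations (\ref{eq4ad})--(\ref{eq4ae}): the innovation of the non-interactive model, divided by the sample time, recovers the velocity contribution of the unknown object. The numbers below are arbitrary.

```python
import numpy as np

dk = 0.1
F = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
              [0, 0, 0, 0], [0, 0, 0, 0]], dtype=float)
B = np.array([[dk, 0], [0, dk], [1, 0], [0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)

X_k = np.array([0.0, 0.0, 0.0, 0.0])
V_true = np.array([0.5, -0.3])        # effect of the unknown object

Z_next = H @ (F @ X_k + B @ V_true)   # noiseless measurement of the agent
X1_next = F @ X_k                     # non-interactive prediction
Ytilde1 = Z_next - H @ X1_next        # residual of the non-interactive model
V_hat = Ytilde1 / dk                  # recovered velocity contribution
```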
In general, supposing that $N$ static objects are placed in the environment one by one, such that each of them can be characterized from the knowledge previously acquired, the following general expression for the hierarchical formulation of models can be written:
\begin{equation} \label{eq4aad}
X_{k+\Delta k}^{N+1} = FX_{k}^{N+1} + B\sum_{i=0}^{N} \frac{\tilde{Y}^{i}_{k}}{\Delta k} + n_{k}
\end{equation}
Where $\tilde{Y}_k^{0} = 0$, which represents a non-interactive model.
An equivalent expression of equation (\ref {eq4ab}), that expresses the sum of $N$ objects' effects in terms of agent velocities added in a hierarchical way, is defined as follows:
\begin{equation} \label{eq4aac}
V_{k}= \sum_{i=0}^{N} \frac{\tilde{Y}^{i}_{k}}{\Delta k}
\end{equation}
Where $\tilde{Y}^i_{k}$ is the innovation or residual generated by the object $i$. As discussed before, this term is defined as the difference between observed agent position and dynamical model prediction, see equation (\ref{eq4ad}). $\Delta k$ represents the sample time.
By supposing that time invariant force fields are caused by static objects, it is possible to make $\tilde{Y}_{k}$ dependent on the agent location only, i.e., on the measured position $Z$:
\begin{equation} \label{eq4af}
\tilde{Y}_{k} = \tilde{Y}_{Z} \Longrightarrow V_k=V_Z
\end{equation}
Where $\tilde{Y}_{Z}$ and $V_Z$ are spatially dependent. Accordingly, considering an unknown object $i$ that produces a central force field $f(r_{i})\hat{r_{i}}$ and therefore a velocity field $\upsilon(r_{i})\hat{r_{i}}$, it becomes necessary to calculate the parameter $r_{i}$ in order to approximate both fields. $r_{i}$ is defined as the distance between the agent and the static object $i$; considering the state of such an object as $X_{i}^{O} = [x^i_{o}\quad y^i_{o}\quad 0 \quad0]^T$, it is possible to write the term $r_{i}$ as follows:
$$
r_{i}=\lVert Z - HX^{O}_{i} \rVert
$$
Where $Z$ represents the measurement of the agent position. Since $X^{O}_{i}$ remains constant for fixed objects, the distance $r_{i}$ only depends on the position of the agent.
It is possible to obtain the velocity field generated by an object $i$ over the whole two-dimensional plane by looking at the behavior of agents at each point of the space $\mathbb{R}^2$, as described by the following equation:
\begin{equation} \label{eq4aaf}
\upsilon_{i}(r_i)\hat{r} =V^i_{Z \in \mathbb{R}^2} \hat{V}
\end{equation}
Where $V^i_{Z \in \mathbb{R}^2}$ represents the magnitude of the velocity field, obtained from a set of measurements that cover the whole space $\mathbb{R}^2$, and $\hat{V}$ represents the direction of such velocities at every point.
Since in real cases there is only sparse information of the agents' motion, it is necessary to consider a method to generalize each velocity field from the measured information. This step is discussed in section \ref{sssec:ANNApproach}.
The objective of this work is to build a set of dynamical models through the estimation of velocity fields $V^i_{Z \in \mathbb{R}^2}$ related to static objects. Such models can be used as future estimators to hierarchically characterize new velocity fields produced by other unknown objects in an environment.
\subsection{Kalman Filtering Formulation}
\label{sssec:KF Formulation}
A hierarchical method is proposed in which Kalman filters (KFs) are used to track the motion of each agent in order to obtain information concerning unknown static objects inside the environment.
KFs are formulated as more observations related to new objects are obtained. Each unknown object is modeled separately in order to characterize its effects independently.
Velocity fields produced by static objects are characterized through perturbations in the velocity of agents that are captured by the innovations of KFs.
The effect of an object $i$ is encoded in a new Kalman filter as a control vector $V^i_k$, see equation (\ref{eq4aa}) and (\ref{eq4ab}). Therefore, KFs integrating characteristics of different objects can be obtained and used to estimate posterior information of other objects inside the scene.
This hierarchical characterization with incremental learning of the effects of unknown objects is fundamental when analyzing real scenarios: in this way, it is possible to start with an approximation of velocity fields generated in normal situations, i.e., given by the motion of agents with routine behaviors. Afterwards, when abnormal situations happen, it is possible to find new objects that explain such variations from the normality.
Through the approximation of velocity fields, it is possible to extract information about locations and effects of objects that cause perturbations in the agents' motions. Such information is incrementally added to KF models whose innovation will work as sensors for detecting and characterizing future unknown static objects, see Fig. \ref{fig:KFs}.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8.3cm]{ImageKFs}}
\caption{Hierarchical representation of KFs.}
\label{fig:KFs}
\end{figure}
A bank of KFs is created starting from a first filter that uses the non-interactive dynamic model described in equation (\ref{eq4ac}). This makes it possible to build new KFs that include different dynamical models explaining more complex situations and from which unknown effects in the environment can be learned.
Effects of objects that appear one by one in the scene are added based on equation (\ref{eq4aad}), where each effect is added as a control input vector into a KF formulation.
Regarding the observed information, equation (\ref{eq5}) is considered, in which $Z_k$ is a measurement obtained through an external sensor, such as a real-time locating system, a GPS or a radar. In case a nonlinear sensor is used, the proposed approach can be extended by means of an Extended Kalman Filter (EKF).
The method described in the present work can be of great help in scenarios where video surveillance cameras are not available but information about the localization of individuals can be used. In several situations, measurements from cameras are not reliable due to problems such as noise in images, partial or full occlusion, scene illumination changes, etc. \cite{Yilmaz2006}. In those cases, even partial information about the location of moving subjects (e.g., acquired from smartphones) could characterize possible abnormalities in an environment.
The proposed approach is useful in diverse scenarios, as detection of crashes and wrong parked objects in streets \cite{Lan2015} or for fire detection with surveillance purposes \cite{Foggia2015}. In order to characterize static objects with few samples, the fitting process described in the next section is proposed.
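One filter of the bank can be sketched as follows. This is a minimal sketch: the class name, the noise covariances, and the callable representation of the already-learned velocity fields are our assumptions, not part of the paper.

```python
import numpy as np

class AgentKF:
    """Kalman filter whose control input encodes the velocity fields of the
    objects characterized so far; its innovation acts as a sensor for the
    effect of further, not yet modeled, objects."""
    def __init__(self, dk, velocity_fields, q=1e-3, r=1e-2):
        self.dk = dk
        self.fields = velocity_fields   # callables (x, y) -> (vx, vy)
        self.F = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                           [0, 0, 0, 0], [0, 0, 0, 0]], dtype=float)
        self.B = np.array([[dk, 0], [0, dk], [1, 0], [0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)          # process noise covariance
        self.R = r * np.eye(2)          # measurement noise covariance
        self.X = np.zeros(4)
        self.P = np.eye(4)

    def predict(self):
        V = np.zeros(2)
        for vf in self.fields:          # sum of known objects' effects
            V += np.asarray(vf(self.X[0], self.X[1]))
        self.X = self.F @ self.X + self.B @ V
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.X

    def update(self, Z):
        Y = Z - self.H @ self.X         # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.X = self.X + K @ Y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return Y                        # residual used to learn a new field
```

Dividing the returned innovation by `dk` gives samples of the velocity field of a new, unmodeled object, which are then passed to the fitting step of the next section.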
\subsection{Artificial Neural Networks for Velocity Field Fitting}
\label{sssec:ANNApproach}
An Artificial Neural Network (ANN) is used for approximating each velocity field generated by an unknown object. The inputs of each ANN are the observed agent's two-dimensional coordinates $Z$ and its outputs are the agent's velocity components in such position $V_Z$ obtained through dividing the KF innovation by the sampling time, see equation (\ref{eq4ae}).
The final objective of the fitting process is to find a nonparametric function $G(x,y)$ that relates the coordinates over the whole two-dimensional environment with the velocity field generated by an unknown object, see Fig. \ref{fig:Second}.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8cm]{ImagePaper2a2}}
\caption{General scheme of the proposed ANN.}
\label{fig:Second}
\end{figure}
The function $G(Z \in \mathbb{R}^2)$ is an approximation of the complete velocity field in $\mathbb{R}^2$. In this way, by taking equation (\ref{eq4aaf}) as reference, it is possible to obtain the relations shown in equation (\ref{eq4aag}) for an object $i$.
\begin{equation} \label{eq4aag}
\upsilon_{i}(r_i)\hat{r} =V^i_{Z \in \mathbb{R}^2} \hat{V} \approx G^i(Z \in \mathbb{R}^2)\hat{g}
\end{equation}
Where $\hat{g}$ represents the direction of the velocity produced by the approximation with an ANN.
For each static object inside the environment, an ANN that models its behavior has to be considered. The architecture of each ANN consists of one hidden layer of $10$ neurons that uses the hyperbolic tangent sigmoid as neural transfer function, and it is trained with the Levenberg-Marquardt backpropagation algorithm.
In this work, for the characterization of the velocity field generated by each object, a total of $100$ agent trajectories affected by it was considered. The starting points of agents are chosen randomly to obtain spread information inside the scene. Each ANN is trained taking into account the inputs and outputs shown in Fig. \ref{fig:Second}.
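The fitting step can be approximated with scikit-learn's \texttt{MLPRegressor}. Note that scikit-learn does not provide Levenberg-Marquardt training, so this sketch substitutes the L-BFGS solver while keeping one hidden layer of 10 tanh units; the toy attractive field used as training target is invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
Z = rng.uniform(-20, 20, size=(2000, 2))   # observed agent positions (x, y)
V = -0.1 * Z                               # toy attractive field toward origin

# One hidden layer of 10 tanh units; L-BFGS in place of Levenberg-Marquardt.
ann = MLPRegressor(hidden_layer_sizes=(10,), activation='tanh',
                   solver='lbfgs', max_iter=5000, random_state=0)
ann.fit(Z, V)
V_pred = ann.predict(np.array([[10.0, -10.0]]))  # G(x, y) at one query point
```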
\subsection{Extraction of Information Related to Unknown Objects}
\label{sssec:InfoExtract}
Given a trained ANN that approximates the velocity field $\upsilon(r)\hat{r}$ of an unknown object over the whole two-dimensional plane, it is possible to infer its center of force by considering the divergence of the vector field, $\nabla\cdot\upsilon(r)\hat{r}$.
Assuming that objects produce central force fields, Algorithm~\ref{Algo1a} is used for inferring their centers of force and their nature, i.e., attractive or repulsive.
\begin{algorithm}
\label{Algo1a}
\KwData{- Trained ANN according to Fig.~\ref{fig:Second}. \\
- Discretization rate: $step$, for environment division.}
\KwResult{Position of the object $objx$ and $objy$.\\
-Nature of the object: $class$.}
$\upsilon\gets$ Vector field produced by an ANN in an environment divided using a stepsize $step$ in $x$ and $y$\\
$div=divergence(\upsilon)$\tcp*{Divergence of the vector field generated by ANN.}
$absdiv=abs(div)$\tcp*{Absolute value of the divergence of the velocity field.}
$[objx,objy]=max(absdiv)$\tcp*{Extraction of coordinates with maximum value.}
$signdiv=sign(div(objx,objy))$\tcp*{Evaluation of sign in particular point of $div$.}
\eIf{$signdiv< 0$}{
$class= ``Attractive"$\tcp*{Object classified as attractive.}
}{
$class= ``Repulsive"$\tcp*{Object classified as repulsive.}
}
\caption{Extraction of the object location and nature.}
\end{algorithm}
As can be seen from Algorithm~\ref{Algo1a}, given the ANN approximation of the velocity field $G(Z \in \mathbb{R}^2)\hat{g}$, it is possible to extract the position and nature of an object that exerts a central force field. This knowledge is useful for understanding environment characteristics and for establishing causality relations between static objects and mobile agents in the scene.
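As a check of the extraction step, the following Python sketch applies the divergence test of Algorithm~\ref{Algo1a} to a smooth synthetic attractive field centred at $(0,15)$; the field expression is an illustrative stand-in for an ANN output, not the trained network itself:

```python
import numpy as np

step = 0.3
xs = np.arange(-20, 20 + step, step)
X, Y = np.meshgrid(xs, xs, indexing="ij")

# stand-in for the ANN output: a smooth attractive field centred at (0, 15)
cx, cy = 0.0, 15.0
Vx = -(X - cx) * np.exp(-((X - cx)**2 + (Y - cy)**2) / 50)
Vy = -(Y - cy) * np.exp(-((X - cx)**2 + (Y - cy)**2) / 50)

# Algorithm 1: divergence, maximum of its absolute value, sign at the maximum
div = np.gradient(Vx, step, axis=0) + np.gradient(Vy, step, axis=1)
i, j = np.unravel_index(np.argmax(np.abs(div)), div.shape)
objx, objy = X[i, j], Y[i, j]
klass = "Attractive" if div[i, j] < 0 else "Repulsive"
```

The estimated centre lands on the grid point closest to $(0,15)$, and the negative divergence there classifies the object as attractive.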
\section{RESULTS}
\label{sec:results}
\subsection{Experimental setup}
\label{sssec:ExpSetup}
For demonstration purposes, an environment is defined in a two-dimensional space whose measurements go from $-20$ to $20$ on both axes. The size of the environment has no consequence on the performance of the proposed method in terms of functionality. However, for better resolution in the results it is necessary to divide the environment into several parts in order to evaluate the ANNs with more points. In this sense, it is an advantage to represent objects in small environments.
Compact scenes also have the advantage of needing few trajectories to estimate the velocity field over the entire plane, i.e., the smaller the environment, the fewer samples are needed to obtain a good representation of it, even with sparse data.
Two static attractive objects with different centers of force are placed inside the environment. They will be called first attractor, labeled as $a_{1}$ and second attractor, labeled as $a_{2}$. Their respective centers of force are $(0,15)$ and $(-10,10)$; and their velocity fields are $\upsilon_{a_{1}}$ and $\upsilon_{a_{2}}$ whose formulations are shown in equations (\ref{eq1}) and (\ref{eq2}), respectively.
\begin{equation} \label{eq1}
\upsilon_{a_1} (d_{a_{1}}) =
\begin{cases}
\sqrt{d_{a_1}}/b_{1} & 0\leq d_{a_1}\leq c_{1} \\
\sqrt{c_{1}}/b_{1} & c_{1}< d_{a_1} \le f_{1}\\
0 & d_{a_1}> f_{1}
\end{cases}
\end{equation}
\begin{equation} \label{eq2}
\upsilon_{a_{2}} (d_{a_{2}}) =
\begin{cases}
b_{2} e^{-(d_{a_{2}}-c_{2})^2/\alpha_{1}} & 0\leq d_{a_{2}}\leq c_{2} \\
b_{2} & c_{2}< d_{a_2} \leq f_{2}\\
0 & d_{a_2}> f_{2}
\end{cases}
\end{equation}
Here $d_{a_{1}}$ and $d_{a_{2}}$ represent the distance between an agent and the center of force of the respective attractor. $b_{1}$ and $b_{2}$ are constants that control the amplitude of the velocity field. $c_{1}$ and $c_{2}$ are constants related to the distance from the center of force at which agents start decreasing their velocity while approaching their destinations. $f_{1}$ and $f_{2}$ are constants related to the distance from the center of force beyond which the attractor has no influence. $\alpha_{1}$ determines how the second attractor makes agents decrease their velocities as they approach it.
Moving agents go towards either of the two attractors. The starting point of each agent is randomly selected, assuming that agents can appear from any point in the environment. Once an agent reaches its destination, it is removed from the scene and no more information is received from it.
A time interval $\Delta k=1$ is considered as the sampling time that a locating system requires to acquire information about the agent positions inside the environment. At each time instant $k$, Gaussian noise is applied to the coordinate positions $(x, y)$ of each agent in order to simulate perturbations caused by external factors other than the velocity fields produced by objects in the environment.
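A minimal simulation of one agent under these assumptions can be sketched as follows; the first-attractor speed profile ($b_1=2$, $c_1=4$) is used, and the noise standard deviation of $0.1$ is an assumed value, since the text does not specify it:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1.0                       # sampling interval, Delta k = 1
sigma = 0.1                    # assumed std of the positional Gaussian noise
target = np.array([0.0, 15.0])

z = rng.uniform(-20, 20, size=2)       # random starting point
track = [z.copy()]
for _ in range(200):
    d = np.linalg.norm(target - z)
    if d < 0.5:                        # destination reached: agent removed
        break
    speed = min(np.sqrt(d) / 2.0, 1.0) # first-attractor profile, b1=2, c1=4
    z = z + speed * (target - z) / d * dt + rng.normal(0, sigma, 2)
    track.append(z.copy())
```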
Subsequently, a repulsive object is introduced into the scene; its center of force is placed at $(0,-5)$ and equation~(\ref{eq3}) is used for computing its velocity field.
\begin{equation}\label{eq3}
\upsilon_r(d_r)=b_{3} e^{-(d_r)^2/\alpha_{2} }
\end{equation}
Similar to the attractors described previously, $d_r$ represents the distance between the agent and the repulsive object. $b_{3}$ is a constant that affects the magnitude of the velocity field, and $\alpha_{2}$ is a constant related to how the velocity field increases as an agent approaches the center of force of the object.
The constants considered in Section~\ref{sssec:ExpSetup} for calculation purposes are presented below.
\begin{gather*}
b_{1}=2; \quad b_{2}=1.1; \quad b_{3}=0.8 \\
c_{1}=4; \quad c_{2}=8 \\
f_{1}=80; \quad f_{2}=80 \\
\alpha_{1}=50; \quad \alpha_{2}=1000
\end{gather*}
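With these constants, the three velocity profiles of Eqs.~(\ref{eq1})--(\ref{eq3}) can be written down directly. The sketch below also checks that the attractor profiles are continuous at $c_1$ and $c_2$; the negative exponent in the repeller profile is an assumption made so that the field peaks at the centre of force, consistent with the surrounding description:

```python
import numpy as np

b1, b2, b3 = 2.0, 1.1, 0.8
c1, c2 = 4.0, 8.0
f1, f2 = 80.0, 80.0
alpha1, alpha2 = 50.0, 1000.0

def v_a1(d):   # first attractor
    return np.where(d <= c1, np.sqrt(d) / b1,
                    np.where(d <= f1, np.sqrt(c1) / b1, 0.0))

def v_a2(d):   # second attractor
    return np.where(d <= c2, b2 * np.exp(-(d - c2) ** 2 / alpha1),
                    np.where(d <= f2, b2, 0.0))

def v_r(d):    # repeller; negative exponent assumed (peak at the centre)
    return b3 * np.exp(-d ** 2 / alpha2)
```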
The velocity field of each object is taken as an unknown parameter to estimate; the theoretical expressions shown in equations (\ref{eq1}), (\ref{eq2}) and (\ref{eq3}) are used to measure the performance of the proposed method. Accordingly, the particular situation considered to evaluate the proposed method is depicted in Fig.~\ref{fig:First}.
\begin{figure}[htb]
\centering
\centerline{\includegraphics[width=8.7cm]{ImagePaper1}}
\caption{Trajectories of agents with different destinations and same starting point, both are influenced by a repulsive object placed in the middle of their paths.}
\label{fig:First}
\end{figure}
The next section shows the results obtained by applying the methodology explained in section \ref{sec:format} to the scenario described in the present section. The velocity fields obtained with the proposed method are compared with the theoretical ones.
\subsection{Evaluation of the method}
\label{sssec:EvalMethod}
For the first attractor, by taking into consideration its theoretical velocity field described in equation (\ref{eq1}), it is possible to obtain the graph of Fig.~\ref{fig:Third}. Afterwards, by considering $\theta$ as the angle of the velocity field evaluated at a particular point, it is possible to calculate $|\theta|$, whose behavior is plotted in Fig.~\ref{fig:Fourth}.
By applying the Euclidean norm on the velocity field generated by the ANN approximation for the first attractor, it is possible to obtain the graph of Fig. \ref{fig:Fifth}. By calculating the absolute value of the angle formed by the velocity generated by the ANN, it is possible to obtain Fig. \ref{fig:Sixth}.
By comparing Fig. \ref{fig:Third} and Fig. \ref{fig:Fourth} with Fig. \ref{fig:Fifth} and Fig. \ref{fig:Sixth} respectively, it can be seen that the proposed formulation provides a reliable approximation to the real velocity field produced by an object inside the scene.
Similarly, the process described for the first attractor was repeated for the other attractive object as well. The ground truth of the velocity field produced by the second attractor and its modulus of the angle are depicted in Fig. \ref{fig:Seventh} and Fig. \ref{fig:Eighth}, respectively. The corresponding approximations by using the proposed method are shown in Fig. \ref{fig:Ninth} and Fig. \ref{fig:Tenth}.
\begin{figure}[!htb]
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper3}
\caption{Theoretical velocity field for first attractor}
\label{fig:Third}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper4}
\caption{Theoretical orientation for first attractor field}
\label{fig:Fourth}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper5}
\caption{ANN approx. velocity field for first attractor}
\label{fig:Fifth}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper6}
\caption{ANN approx. orientation for first attractor field}
\label{fig:Sixth}
\vspace{3ex}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper7}
\caption{Theoretical velocity field for 2nd attractor}
\label{fig:Seventh}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper8}
\caption{Theoretical orientation for 2nd attractor field}
\label{fig:Eighth}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper11}
\caption{ANN approx. velocity field for 2nd attractor}
\label{fig:Ninth}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper12}
\caption{ANN approx. orientation for 2nd attractor field}
\label{fig:Tenth}
\vspace{3ex}
\end{minipage}
\end{figure}
\begin{figure}[!htb]
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper9}
\caption{Theoretical velocity field for the repeller}
\label{fig:Eleventh}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper10}
\caption{Theoretical orientation for the repeller field}
\label{fig:Twelveth}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper13}
\caption{ANN approx. velocity field for the repeller}
\label{fig:thirteenth}
\vspace{3ex}
\end{minipage}
\hfill
\begin{minipage}[b]{0.48\columnwidth}
\centering
\includegraphics[width=1.05\linewidth]{ImagePaper14}
\caption{ANN approx. orientation for the repeller field}
\label{fig:Fourteenth}
\vspace{3ex}
\end{minipage}
\end{figure}
For the approximation of the repulsive object, it is important to mention that its characteristics were approximated after having the model for both attractors. Accordingly, the approximated velocity fields generated by both attractors were used as prior knowledge to hierarchically characterize the velocity field of the repulsive object introduced in the scene. The ground truth information of the repulsive object is depicted in Fig. \ref{fig:Eleventh} and Fig. \ref{fig:Twelveth}. The approximation using the proposed method is shown in Fig. \ref{fig:thirteenth} and Fig. \ref{fig:Fourteenth}.
For extracting the position of each object and its nature, i.e., attractive or repulsive, Algorithm~\ref{Algo1a} was used with $step = 0.3$. Overall results are shown in Table~\ref{table:1}.
\begin{table}[h!]
\caption{Results of localization and nature of objects }
\centering
\begin{tabular}{||c c c c ||}
\hline
\textbf{Object} & \textbf{Real Position} &\textbf{Estimated Position} & \textbf{Nature} \\ [0.5ex]
\hline\hline
Attractor1 & (0, 15) & (-0.2, 15.1) & Attractive \\ [0.5ex]
Attractor2 & (-10, 10) & (-10.1, 10) & Attractive \\
Repeller & (0, -5) & (0.1, -4.7) & Repulsive \\ [0.5ex]
\hline
\end{tabular}
\label{table:1}
\end{table}
From Table \ref{table:1}, it can be seen that the proposed approach achieves high accuracy in localizing the centers of force of unknown static objects. It also correctly recognizes the attractive or repulsive behavior of each object.
\section{CONCLUSIONS}
\label{sec:typestyle}
A method to estimate the velocity fields caused by static objects, given the motion of agents, was formulated and validated for attractive and repulsive objects. The innovation, or measurement residual, of a KF formulation was proposed to perform the characterization of such velocity fields.
The effect of unknown static objects is represented in a KF formulation by a term that consists of a control input model multiplied by a control vector. The additivity of such terms makes it possible to build different dynamical models with more complete information that describe the motion of agents in a hierarchical way. In this sense, each new object that appears in the scene can be added as part of the control vector of a dynamical formulation. This methodology makes it possible to incrementally learn information about the effects inside an environment by analyzing the movement of agents.
Artificial Neural Networks were used to perform a regression over sparse data in order to generalize the velocity fields to the whole environment. Theoretical and practical results are compared, showing that the proposed method properly estimates the effects of unknown static objects in two-dimensional environments.
As future work, this approach will be the basis of a system for detecting abnormalities in guarded environments. The method can also be extended to the characterization of dynamically moving objects, for which online learning techniques should be explored to characterize information inside the scene as time evolves.
\bibliographystyle{IEEEbib}
\section{Introduction}
\label{Sec:Introduction}
Currently, a variety of different notions of quantumness are discussed.
It is of fundamental interest to classify nature into one part showing classical signatures and another one dominated by quantum phenomena.
In quantum optics, the most common notion of nonclassicality is esta\-blished through the Glauber-Sudarshan $P$~function~\cite{glauber63,sudarshan63}.
If this distribution does not resemble a classical probability density, the corresponding state is called a nonclassical one~\cite{titulaer65,mandel86}.
Quantum entanglement is another notion of quantumness~\cite{horodecki09, guehne09}.
Although each indivi\-dual subsystem can be quantum, for a separable state the correlations between the subsystems can be des\-cribed by classical statistics~\cite{werner89}.
Entanglement or inseparability can be fully characterized by negativities of optimized quasiprobability (QP) distributions~\cite{sperling09, sperling12}, denoted as $P_{\rm Ent}$.
In quantum information theory, a quantum-correlation (QC) is also characterized by the so-called quantum discord~\cite{ollivier01, henderson01}.
A nonzero discord describes the back action of a measurement in one subsystem to another one.
Discord is often considered as a general measure of QC beyond entanglement.
Recently, however, it has been shown that the notions of QC based on quantum discord and on negativities of the $P$~function are maximally inequivalent~\cite{ferrarro12}.
The general characterization of the QCs of radiation fields has been formulated in terms of the space-time dependent $P$~functional~\cite{vogel08}.
This yields a full hierarchy of inequalities for observable correlation functions.
Whenever the described systems violate classical probability theory, the $P$~functional is a QP representation of quantum light.
In the following we restrict attention to equal-time measurements, so that the $P$~functional simplifies to the $n$-partite $P$~function of the global quantum state $\hat\rho$,
\begin{align}
\hat\rho =\int {\rm d}^{2n}\boldsymbol\alpha\, P(\boldsymbol\alpha)\,|\boldsymbol\alpha\rangle\langle \boldsymbol\alpha|,
\end{align}
where $|\boldsymbol\alpha\rangle=|\alpha_1,\dots,\alpha_n\rangle$ denotes multimode products of coherent states.
One severe disadvantage of the $P$~function consists in its strong singularities occurring for many quantum states, even in the single-mode case.
As a consequence, in general this function is experimentally not accessible and hence of limited practical value.
Only in special cases the $P$~function is regular~\cite{agarwal92} and can be measured~\cite{kiesel08}.
The concept of phase space representations has been generalized to the $s$-parametrized QPs~\cite{cahill69}.
The latter include popular examples: the $P$~function ($s=1$), the Wigner function ($s=0$), and the Husimi~$Q$~function ($s=-1$).
Diminishing the $s$~parameter, the QPs become more regular, but less sensitive for testing quantumness.
For squeezed states, all $s$-parametrized QPs are either positive or irregular.
Especially the Wigner function is popular, since it is easily obtained in experiments, e.g. for quantum light, molecules, and trapped atoms~\cite{smithey93,dunn95, ourjoumtsev06, Deleglise08,leibfried96}.
Further generalizations of the QP methods were introduced in~\cite{agarwal70}.
For the single-mode case, nonclassicality QPs, $P_{\rm Ncl}$, have been introduced~\cite{kiesel10}.
They are regularized versions of the highly singular $P$~functions.
For any nonclassical single-mode state, they show negativities and can be directly obtained from experimental data~\cite{kiesel11,kiesel11a,kiesel12}.
Hence $P_{\rm Ncl}$ is a powerful tool for the full experimental characterization of quantum effects of single-mode fields.
In the present paper, we generalize the regularization of the $P$ function for multimode light.
The resulting QP distribution, $P_{\rm QC}$, uncovers any QC occurring in the multimode $P$~function.
For practical applications it is of great importance that $P_{\rm QC}$ can be directly sampled from experimental data obtained by multimode homodyne detection.
We show that our method uncovers QCs contained in a family of highly singular $P$~functions, which exhibit QCs beyond quantum discord and quantum entanglement.
The paper is structured as follows.
We provide a scenario for the use of our method in Sec.~\ref{Sec:Motivation} through the fully phase-randomized two-mode squeezed vacuum state, which is not entangled, has zero quantum discord, a positive Wigner function, and classical reduced subsystems.
In Sec.~\ref{Sec:MultipartitePQCfunction} we present the regularization of the $P$ function for multimode radiation fields.
The resulting QP distribution, $P_{\rm QC}$, uncovers any QC occurring in the multimode $P$~function.
We show in Sec.~\ref{Sec:BeyondEntanglementDiscord} that for this special state, our method uncovers QCs contained in the highly singular $P$~function, via negativities of the bipartite filtered QP distribution $P_{\rm QC}$.
The QCs are clearly visible, even when the other signatures of QCs do not persist.
In Sec.~\ref{Sec:Sampling} we provide the approach of direct sampling of $P_{\rm QC}$ from experimental data obtained by multimode homodyne detection.
A summary and some conclusions are given in Sec.~\ref{Sec:Conclusions}.
\section{Motivation}
\label{Sec:Motivation}
Let us consider a realistic experimental scenario for generating a bipartite continuous variable state.
The inputs of a 50:50 beam splitter are equally squeezed states in orthogonal quadratures, cf. Fig. \ref{experiment}.
In the output ports of this setup one obtains a two-mode squeezed vacuum (TMSV) state.
Such states are entangled and have non-zero quantum discord.
\begin{figure}[ht]
\centering
\includegraphics*[width=5cm]{experimental.pdf}
\caption{(Color online) Experimental setup for the generation of a phase randomized two-mode squeezed vacuum.
A 50:50 beam splitter combines squeezed-vacuum states, which are squeezed in orthogonal quadratures.
One of the output channels is fully phase randomized, $\delta\varphi=2\pi$.}
\label{experiment}
\end{figure}
The situation is drastically changed, if phase randomization, indicated by $\delta\varphi$ in Fig.~\ref{experiment}, occurs in one of the output ports.
The entanglement properties of the state depend sensitively on the dephasing~\cite{sperling12}. In the case of an equally distributed phase, $\delta\varphi=2\pi$, the resulting output state is a fully phase randomized TMSV state,
\begin{align}
\hat\rho=\sum_{n=0}^\infty (1-p)p^n |n\rangle\langle n|\otimes|n\rangle\langle n|,
\label{Eq State}
\end{align}
with $0<p<1$, $p$ being related to the squeezing parameter of the initial input fields.
It is important to consider properties of this state which are closely related to its QCs.
First, the phase randomized TMSV state is a convex mixture of the product states $|n\rangle\langle n|\otimes|n\rangle\langle n|$.
Hence, it is classical with respect to the property of entanglement.
Second, due to the orthogonality of the Fock states, $\langle n|n'\rangle=0$ for $n\neq n'$, this state has no QCs in the sense of the quantum discord.
This conclusion is obvious from the general representation of quantum states with zero discord given in~\cite{datta10}.
So far our state belongs to the class of quantum states considered in~\cite{ferrarro12} with the aim to demonstrate the inequivalence of QCs based on the $P$~function and the quantum discord.
In our context, we require the following additional properties.
Third, let us consider the reduced density operator, $\hat\rho_{\rm red}$.
Due to the symmetry of this state with respect to the interchange of both subsystems, we find that
\begin{align}
\hat\rho_{\rm red}={\rm Tr}_A\hat\rho={\rm Tr}_B\hat\rho=\sum_{n=0}^\infty (1-p)p^n |n\rangle\langle n|.
\end{align}
This state is a thermal one with a mean photon number $\bar n_{\rm th}=p/(1-p)$.
Therefore it shows a classical behavior with respect to the reduced single-mode states.
This property ensures that any identified signature of quantumness exposes a true QC effect.
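The thermal character of the reduced state can be verified numerically in a truncated Fock basis; in this sketch the truncation at $N=400$ makes the error of order $p^{400}$ negligible for $p=0.8$:

```python
import numpy as np

p, N = 0.8, 400
n = np.arange(N)
w = (1 - p) * p ** n          # weights of |n><n| (x) |n><n| in the state

# tracing over either subsystem leaves the same photon-number weights,
# i.e. a thermal distribution with nbar = p / (1 - p)
pn_red = w
nbar = np.sum(n * pn_red)
```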
Fourth, the full state $\hat\rho$ is a mixture -- due to the phase randomization -- of two-mode Gaussian states.
Hence it has a non-negative Wigner function, which is shown in Fig.~\ref{wigner}.
This QP distribution, which has often been applied in experiments~\cite{smithey93,dunn95, ourjoumtsev06, Deleglise08,leibfried96}, is therefore unsuitable for visualizing the QCs of this state through negative values.
\begin{figure}[ht]
\centering
\includegraphics*[scale=0.5]{wigner.pdf}
\caption{(Color online) The Wigner function is shown for a fully phase randomized two-mode squeezed vacuum state, with $p=0.8$.
It is non-negative in the full two-mode phase space.}
\label{wigner}
\end{figure}
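The non-negativity seen in Fig.~\ref{wigner} can be reproduced numerically from the Fock expansion of the state, using the standard single-mode Fock-state Wigner functions $W_n(\alpha)=(2/\pi)(-1)^n e^{-2|\alpha|^2}L_n(4|\alpha|^2)$; the sketch below truncates the sum at $N=400$:

```python
import numpy as np

p, N = 0.8, 400

def fock_wigner(x):
    """W_n(alpha) for n = 0..N-1 at |alpha|^2 = x, via the Laguerre
    recurrence (n+1) L_{n+1} = (2n+1-t) L_n - n L_{n-1}, with t = 4x."""
    t = 4.0 * x
    L = np.empty(N)
    L[0], L[1] = 1.0, 1.0 - t
    for k in range(1, N - 1):
        L[k + 1] = ((2 * k + 1 - t) * L[k] - k * L[k - 1]) / (k + 1)
    return (2 / np.pi) * np.exp(-2 * x) * (-1.0) ** np.arange(N) * L

def wigner(aA, aB):
    """Two-mode Wigner function of the phase-randomized TMSV state."""
    weights = (1 - p) * p ** np.arange(N)
    return np.sum(weights * fock_wigner(abs(aA) ** 2) * fock_wigner(abs(aB) ** 2))
```

At the origin the value is $(2/\pi)^2$, and sampling the function elsewhere confirms that it stays non-negative.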
Summarizing the features of the fully phase randomized TMSV state, we have introduced an experimentally realizable quantum state with the following properties:
\begin{enumerate}
\item no entanglement;
\item zero quantum discord;
\item classical reduced single-mode states;
\item non-negative Wigner function.
\end{enumerate}
Despite these strong signatures of classicality, this state will be proven to show clear QC effects.
It is a non-Gaussian state that can be experimentally generated as outlined above.
To prove that this state describes QCs, we have to visualize the negativities of its strongly singular, two-mode $P$~function:
\begin{align}
\nonumber P(\alpha_A,\alpha_B)&=\sum_{n=0}^\infty (1-p) p^n \sum_{k,l=0}^n \binom{n}{k}\binom{n}{l}\, (-1)^{k+l} \\
&\times \frac{n!}{k!\,l!}\, \partial^k_{\alpha_A}\partial^k_{\alpha_A^*} \delta(\alpha_A)\, \partial^l_{\alpha_B}\partial^l_{\alpha_B^*} \delta(\alpha_B).
\label{eq:P-rand}
\end{align}
We will show that there are QCs between the subsystems $A$ and $B$ going beyond entanglement and quantum discord.
\section{Regularized multipartite $P_{\rm QC}$~function}
\label{Sec:MultipartitePQCfunction}
To be useful for experiments, quantumness criteria must be based on well-behaved functions.
Hence a regularization of the highly singular Glauber-Sudarshan $P$~representation is required.
We can construct a multimode regular $P_{\rm QC}$ function in the form of a convolution
\begin{align}\label{Eq:PQC-Def}
P_{\rm QC}(\boldsymbol\alpha; w)=\int {\rm d}^{2n}\boldsymbol\alpha'\, P(\boldsymbol\alpha-\boldsymbol\alpha')\,\tilde\Omega_w(\boldsymbol\alpha'),
\end{align}
where $\tilde\Omega_w$ is a suitable function, which we are going to construct.
The width parameter $w$ provides the property $P_{\rm QC}(\boldsymbol\alpha; w)\to P(\boldsymbol\alpha)$ for $w\to\infty$. A direct sampling formula for measured quadrature data will be given.
We start from the multimode characteristic function.
The $n$-mode characteristic function $\Phi$ is defined as the Fourier transform of the $P$~function
\begin{align}
\Phi(\boldsymbol\beta) =\int {\rm d}^{2n}\boldsymbol\alpha\, P(\boldsymbol\alpha)\, {\rm e}^{\boldsymbol\beta\cdot\boldsymbol\alpha^*-\boldsymbol\beta^*\cdot\boldsymbol\alpha}.
\end{align}
The characteristic function $\Phi$ is always a continuous function, independent of singularities in $P(\boldsymbol\alpha)$.
Single-mode nonclassicality criteria based on the characteristic function have been introduced~\cite{vogel00,Ri-Vo02} and applied~\cite{CF-exp1,CF-exp2,kiesel09}.
Note that the $n$-mode characteristic function is bounded in the form $|\Phi(\boldsymbol\beta)|\leq \exp(|\boldsymbol\beta|^2/2)$.
The convolution in Eq.~\eqref{Eq:PQC-Def} is a point-wise product in Fourier space.
Hence, we consider the so-called multimode filtered characteristic function $\Phi_{\rm QC}(\boldsymbol\beta;w)$,
\begin{align}
\Phi_{\rm QC}(\boldsymbol\beta;w)=\Phi(\boldsymbol\beta) \prod_{k=1}^n\Omega(\beta_k/w),
\end{align}
which is the Fourier transform of $P_{\rm QC}$ for $0<w<\infty$.
Our choice for the filter $\Omega(\beta)$ is the autocorrelation function
\begin{align}\label{Eq:SingleModeFilter}
\Omega(\beta)=\left(\frac{2}{\pi}\right)^{3/2}\int {\rm d}^2\beta'\,{\rm e}^{-|\beta+\beta'|^4}{\rm e}^{-|\beta'|^4}.
\end{align}
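The normalization of this filter can be checked numerically: with the prefactor $(2/\pi)^{3/2}$ one finds $\Omega(0)=1$. A brute-force Riemann sum is sufficient because the integrand decays like $e^{-2|\beta'|^4}$:

```python
import numpy as np

def omega(beta, half_width=2.5, num=401):
    """Numerical evaluation of the autocorrelation filter defined above
    at a (complex) argument beta."""
    g = np.linspace(-half_width, half_width, num)
    bx, by = np.meshgrid(g, g)
    bp = bx + 1j * by                          # integration variable beta'
    f = np.exp(-np.abs(beta + bp) ** 4) * np.exp(-np.abs(bp) ** 4)
    d = g[1] - g[0]
    return (2 / np.pi) ** 1.5 * f.sum() * d * d

val0 = omega(0.0)   # normalization check: should be close to 1
```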
Finally, we get the regularization kernel in Eq.~\eqref{Eq:PQC-Def} by the inverse Fourier transform as
\begin{align}\label{Eq:ConvolutionKernel}
\tilde\Omega_w(\boldsymbol\alpha)= \prod_{k=1}^n \left(\frac{1}{\pi^2}\int {\rm d}^2\beta\,\,\Omega(\beta/w)\, {\rm e}^{\beta^\ast\alpha_k-\beta\alpha_k^\ast} \right).
\end{align}
For the single-mode case, such a filter $\Omega (\beta/w) $ in Eq.~\eqref{Eq:SingleModeFilter} has been introduced and characterized in Ref.~\cite{kiesel10}.
It has been shown that this filter suppresses the exponentially rising behavior of $\exp(|\beta|^2/2)$.
In addition, it has been shown that this kind of filter belongs to the class of invertible filters, $\Omega(\beta/w)\neq 0$.
Hence, deconvolution of Eq.~\eqref{Eq:PQC-Def} yields the $P$~function, and thus the full state $\hat\rho$, from the QP~distribution $P_{\rm QC}$.
Let us comment here on the structure of the regularizing function $\tilde\Omega_w$ defined in Eq.~(\ref{Eq:ConvolutionKernel}).
This multimode function can be written as a product of single-mode functions which do not depend on the state; the proof is given in Appendix~A.
A product filter is a practical tool that is sufficient to identify any kind of nonclassicality in the multimode $P$~function.
The latter allows us to directly recognize QCs in quantum systems with classical parts.
The regularizing function must not introduce into $P_{\rm QC}$ any quantum correlation that is absent in the Glauber-Sudarshan $P$~function.
For any nonclassical state which includes some QC, there exist values $w$ and $\boldsymbol\alpha$ for which $P_{\rm QC}(\boldsymbol\alpha; w)<0$.
Conversely, a state is classical if, for all values of $w$ and $\boldsymbol\alpha$, the function $P_{\rm QC}(\boldsymbol\alpha; w)$ represents a classical probability density.
This means that we may identify QCs via uncorrelated filtering.
\section{Beyond entanglement and discord}
\label{Sec:BeyondEntanglementDiscord}
\subsection{Direct verification of quantum correlations}
For the proof of the existence of QCs in the phase-randomized TMSV state, the regularized two-mode quantum-correlation QP, $P_{\rm QC}$, is calculated from Eqs.~(\ref{eq:P-rand}), (\ref{Eq:PQC-Def}), (\ref{Eq:SingleModeFilter}), and (\ref{Eq:ConvolutionKernel}).
The result in Fig.~\ref{pfil} clearly shows that $P_{\rm QC}$ becomes negative for properly chosen arguments.
In view of the properties of the state listed above, these negativities are a direct proof of the quantum nature of the correlations between the two subsystems $A$ and $B$. Although the
Wigner function in Fig.~\ref{wigner} contains the full information on the quantum state, it does not directly visualize the QC properties of the state under study.
The regular QP distribution $P_{\rm QC}$, on the other hand, directly displays the QCs within the considered randomized TMSV state by attaining negative values.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.45]{w150.pdf}
\caption{(Color online) The regularized quasiprobability $P_{\rm QC}$ is shown for a fully phase randomized two-mode squeezed vacuum state, for $p=0.8$ and $w=1.5$.
Clear negativities visualize the nonclassical correlations of the state.}
\label{pfil}
\end{figure}
Here the question may arise whether the quantum correlations of the fully phase-randomized TMSV state can be used in any application.
The heralded control of the number of photons in an arbitrarily chosen time interval can be done with these kinds of quantum-correlated states. In our case, for sufficiently strong squeezing, the control could be extended to higher photon numbers compared with the often-used heralding via spontaneous parametric down-conversion.
More generally, even the time sequence of a train of photons could be controlled to some extent by the quantum correlations inherent in such a state.
\subsection{Identifying general quantum correlations}
Now we consider the characterization of QC for arbitrary states in a more general context.
Negative values of $P_{\rm QC}$ also include negativities which can be explained by entanglement, quantum discord, or single-mode nonclassicalities.
Let us outline how one can identify those particular effects in general.
A nonzero quantum discord may be directly or numerically computed~\cite{giorda}.
For entanglement, one can get the entanglement QP $P_{\rm Ent}$, e.g. for the considered phase-randomized TMSV state it is given in Ref.~\cite{sperling12}.
It is worth mentioning that a nonclassical multimode $P$~function is necessary to describe an entangled state~\cite{sperling09}.
As our example shows, however, a negative $P_{\rm QC}$ is not sufficient to verify entanglement.
In a last step, one can try to find negativities of $P_{\rm QC}$ which may be given by single-mode nonclassicalities.
Here one has to consider the reduced states.
These reduced states can be characterized by the single-mode nonclassicality QP, $P_{\rm Ncl}$, introduced in~\cite{kiesel10}.
More generally, one may consider $n$-mode states which are fully characterized by $P_{\rm QC}(\alpha_1,\dots,\alpha_n;w)$.
Tracing over some subsystems, we get a state which is described by the remaining subsystems only, e.g.:
\begin{align}
P_{\rm QC}(\alpha_1,\dots,\alpha_{n-1};w)=\int {\rm d}^2\alpha_n\,P_{\rm QC}(\alpha_1,\dots,\alpha_n;w).
\end{align}
If the original state is quantum correlated but none of the partially traced states is quantum correlated, one may refer to this kind of correlation as genuine $n$-mode~QC.
The multimode analysis does not depend on the kind of mode decomposition of the radiation field.
We consider a two-mode scenario; similar processing can be done in the multimode case.
The individual modes may be represented by the corresponding annihilation operators $\hat a_{A}$ and $\hat a_{B}$.
In addition, the system may be decomposed in two other modes, $\hat a_{A'}$ and $\hat a_{B'}$.
A mode transformation is given by a unitary transform $\boldsymbol U$ of the form
\begin{align}
\begin{pmatrix}
\hat a_{A'}\\ \hat a_{B'}
\end{pmatrix}
=
\begin{pmatrix}
U_{A'A} & U_{A'B} \\
U_{B'A} & U_{B'B}
\end{pmatrix}
\begin{pmatrix}
\hat a_{A}\\ \hat a_{B}
\end{pmatrix}.
\end{align}
One can easily see that such a transformation maps a coherent state $|\alpha_{A},\alpha_{B}\rangle$ to
\begin{align}
|\alpha_{A'},\alpha_{B'}\rangle=|U_{A'A}\alpha_{A}+U_{A'B}\alpha_{B},U_{B'A}\alpha_{A}+U_{B'B}\alpha_{B}\rangle,
\end{align}
which is also a coherent product state.
Hence, we conclude that $P_{\rm QC}(\alpha_{A},\alpha_{B};w)$ has negativities for some $w$, iff $P_{\rm QC}(\alpha_{A'},\alpha_{B'};w')$ has negativities for some $w'$.
Note that such an equivalence is, in general, not true for the reduced state in $A$ and $A'$ or $B$ and $B'$.
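As a quick numerical sanity check (illustrative only; the beam-splitter angle and the amplitudes below are arbitrary values, not taken from the text), the linear action of a mode transformation on coherent amplitudes, together with the conservation of the total mean photon number under a passive unitary, can be verified directly:

```python
import numpy as np

theta = 0.7  # arbitrary beam-splitter angle (illustration only)
U = np.array([[np.cos(theta),  np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # real unitary mode mixing
alpha = np.array([1.2 + 0.5j, -0.3 + 0.8j])       # amplitudes (alpha_A, alpha_B)
alpha_p = U @ alpha                                # amplitudes (alpha_A', alpha_B')

# passive transformation: total mean photon number |alpha_A|^2 + |alpha_B|^2 is conserved
assert np.isclose(np.vdot(alpha, alpha).real, np.vdot(alpha_p, alpha_p).real)
```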
\section{Sampling of $P_{\rm QC}$}
\label{Sec:Sampling}
For applications of our method, a direct sampling formula is useful which yields $P_{\rm QC}$ from the measured data.
We consider multimode homodyne detection, which gives a sample of $N$ measured multimode quadrature values $\{( x_k[j], \varphi_k[j])_{k=1}^n\}_{j=1}^N$.
The index $k$ denotes the mode and $j$ numbers the measured values for the quadrature $x_k[j]$ and its corresponding phase $\varphi_k[j]$.
Since our multimode regularizing function $\tilde\Omega_w$ can be written as a product, we obtain the sampling formula
\begin{align}\label{Eq:Sampling}
P_{\rm QC}(\boldsymbol\alpha;w)=\frac{1}{N}\sum_{j=1}^N \prod_{k=1}^n f(x_k[j],\varphi_k[j],\alpha_k;w).
\end{align}
The needed single-mode pattern function reduces for phase randomized states to
\begin{align}\label{Eq:patternfct}
f(x,\alpha;w)=&\frac{1}{\pi^2}\int_{-\infty}^{+\infty} {\rm d}b\, b\, {\rm e}^{b^2/2}\, \Omega\left(b/w\right)\times\\
&\int_{0}^{2\pi} {\rm d}\varphi\, {\rm e}^{ibx(\pi/2-\varphi)+2 i |\alpha| b \sin\left[\arg \alpha -\varphi\right]},\nonumber
\end{align}
and it has to be calculated only once.
In order to provide the confidence of the negativity of the sampling in Eq.~\eqref{Eq:Sampling}, we need to estimate the statistical uncertainties.
The empirical variance of the sampling, $\sigma^2\left[P_{\rm QC}(\boldsymbol\alpha;w)\right]$, is derived in Appendix~B.
The statistical uncertainty of the sampled estimate in Eq.~\eqref{Eq:Sampling} is
\begin{align}
\Delta[P_{\rm QC}(\boldsymbol\alpha;w)]=\frac{ \sigma\left[P_{\rm QC}(\boldsymbol\alpha;w)\right]}{\sqrt N}.
\end{align}
Thus one can obtain the confidence of a negativity as
\begin{align}
\mathcal{C}(\boldsymbol\alpha;w)= \frac{|P_{\rm QC}(\boldsymbol\alpha;w)|}{\Delta[P_{\rm QC}(\boldsymbol\alpha;w)]},
\end{align}
for a given $\boldsymbol\alpha$ with $P_{\rm QC}(\boldsymbol\alpha;w)<0$.
One may choose the value $w>0$, such that $\mathcal{C}(\boldsymbol\alpha;w)$ becomes maximal.
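For illustration, the estimator of Eq.~\eqref{Eq:Sampling} and the error quantities above can be sketched as follows. The pattern function passed in is a placeholder supplied by the caller, not the physical $f$ of Eq.~\eqref{Eq:patternfct}, which requires evaluating the double integral numerically:

```python
import numpy as np

def sample_P_QC(data, alphas, w, pattern):
    """Direct sampling of P_QC(alpha; w): the empirical mean, over the N
    measured events, of a product of single-mode pattern-function values,
    together with its standard error Delta = sigma / sqrt(N) and the
    confidence |P| / Delta of a negativity.
    data[j][k] = (x, phi) measured for mode k in event j."""
    prods = np.array([
        np.prod([pattern(x, phi, a, w) for (x, phi), a in zip(event, alphas)])
        for event in data
    ])
    N = len(prods)
    P = prods.mean()
    Delta = prods.std(ddof=1) / np.sqrt(N)  # statistical uncertainty
    conf = abs(P) / Delta                   # meaningful when P < 0
    return P, Delta, conf
```

With real data one would scan over $\boldsymbol\alpha$ and pick the width $w$ that maximizes the confidence, as described above.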
\section{Summary and Conclusions}
\label{Sec:Conclusions}
A regularized multimode version of the Glauber-Sudarshan $P$~function is introduced: the quantum-correlation quasiprobability $P_{\rm QC}$, which is parameterized only by a single width parameter $w$.
Through its negativities, it directly visualizes quantum correlations included in any multimode quantum state.
The negativities occur, if and only if the state has a nonclassical multimode Glauber-Sudarshan $P$~function.
The method of regularization is universal, since it does not depend on the considered state.
Our method is applied to a fully phase-randomized two-mode squeezed-vacuum state.
This state is shown to be classical with respect to other established notions of quantum correlations: it is not entangled and it has zero quantum discord.
The two-mode Wigner function is shown to be positive semidefinite.
Last but not least, the state is locally classical in each mode. However, the negativities in the bipartite quantum-correlation quasiprobability clearly uncover the quantum-correlation properties of this state, despite the disappearance of quantum entanglement and quantum discord.
For the efficient application of our method in experiments, we explicitly provide a direct sampling formula. It is based on the data which are recorded in two-mode balanced homodyne detection.
One may directly determine the quantum-correlation quasiprobabilities
together with their statistical errors. On this basis it is straightforward
to estimate the statistical significance of the observed quantum-correlation effects.
\section*{Acknowledgments}
The authors are grateful to T. Kiesel for enlightening discussions.
We are also grateful to the anonymous referee for helpful comments.
This work was supported by the Deutsche Forschungsgemeinschaft through SFB 652.
{\bf To show:}
If there is a one-to-one map from $4 \times A$ to $4 \times B$
(which need not hit all of the range $4 \times B$),
then there is a one-to-one map from $A$ to $B$.
\noindent
{\bf A word to the wise:} Check out what Rich Schwartz has to say in
\cite{schwartz:four}.
\noindent {\bf Proof.\ }
Think of $4 \times B$ as a deck of cards where for each $x$ in $B$
there are cards of rank $x$ in each of the four suits
spades, hearts, gems, clubs.
Note that while we use the word `rank', in this game all ranks will
be worth the same:
Who is to say that a king is worth more or less than a queen?
Think of $A$ as a set of racks, where each rack has four spots
to hold cards,
and think of $4 \times A$ as the set of all the spots in all the racks.
Think of a one-to-one map from $4 \times A$ to $4 \times B$
as a way to fill the racks with cards,
so that all the spots have cards,
though some of the cards may not have been dealt out and are still in the deck.
Name the four spots in each rack by the four suits as they come in bridge,
with spades on the left, then hearts, gems, clubs.
Call a spade `good' if it is in the spades spot of its rack,
and `bad' if not.
Do these two rounds in turn.
(As you read what is to come, look on to where we have worked out
a case in point.)
{\bf Shape Up}:
If a rack has at least one bad spade and no good spade,
take the spade that is most to the left and swap it to the spades
spot so that it is now good.
Do these swaps all at the same time in all of the racks,
so as not to have to choose which swap to do first.
{\bf Ship Out}:
Each bad spade has a good spade in its rack,
thanks to the Shape Up round.
Swap each bad spade for the card whose rank is that of the good spade
in its rack, and whose suit is that of its spot.
To see how this might go, say that in some rack the queen of spades
is in the spades spot, while the jack of spades is in the hearts spot.
In this case we should swap the jack of spades for the queen of hearts.
(Take care not to swap it for the jack of hearts!)
Note that some spades may need to be swapped
with cards that were left in the deck,
but this is fine.
Do all these swaps at once, for all the bad spades in all the racks.
This works since no two bad spades want to swap with the same card.
{\bf Note.} If you want, you can make it
so that when there is more than one bad spade in a rack,
you ship out just the one that is most to the left.
Now shape up, ship out, shape up, ship out, and so on.
At the end of time there will be no bad spades to be seen.
(Not that all bad spades will have shaped up, or been put back in the deck:
Some may be shipped from spot to spot through all of time.)
Not all the cards in the spades spots need be spades,
but no card to the right of the spades spot is a spade.
So if we pay no heed to the spades spots,
we see cards of three suits set out in racks with three spots each,
which shows a one-to-one map from $3 \times A$ to $3 \times B$.
You see how this goes, right?
We do a new pass, and get rid of the hearts from the last two spots in
each rack.
Then one last pass and the clubs spots have just clubs in them.
So the clubs show us a one-to-one map from the set $A$ of racks to the
set $B$ of ranks.
Done.
Is this clear?
It's true that we have
left some things for you to think through on your own.
You might want to look at
\cite{schwartz:four},
where Rich Schwartz has put in things that we have left out.
\newpage
Here's the first pass in a case where all the cards have been dealt out.
Note that in this case we could stop right here and use the spades to
match $A$ with $B$, but that will not work when $A$ and $B$ get big.
\footnotesize
\begin{verbatim}
Start:
4g 6h Qg 8g 9h *Qs* 4c Ag 6c *4s*
Jh Ah 9c 8h *As* Tc Tg 5h Qc *Js*
Kc *6s* 4h 6g *Ts* *9s* Jc Kg *8s* 8c
5c 5g *Ks* *5s* Th Jg Ac Qh 9g Kh
Shape up:
4g *6s* *Ks* *5s* *As* *Qs* 4c Ag *8s* *4s*
Jh Ah 9c 8h 9h Tc Tg 5h Qc *Js*
Kc 6h 4h 6g *Ts* *9s* Jc Kg 6c 8c
5c 5g Qg 8g Th Jg Ac Qh 9g Kh
Ship out:
4g *6s* *Ks* *5s* *As* *Qs* 4c *Ts* *8s* *4s*
Jh Ah 9c 8h 9h Tc Tg 5h Qc |4h|
Kc 6h *Js* 6g |Ag| |Qg| Jc Kg 6c 8c
5c 5g *9s* 8g Th Jg Ac Qh 9g Kh
Ship out:
4g *6s* *Ks* *5s* *As* *Qs* 4c *Ts* *8s* *4s*
Jh Ah 9c 8h 9h Tc Tg 5h Qc |4h|
*9s* 6h |Kg| 6g |Ag| |Qg| Jc *Js* 6c 8c
5c 5g |Kc| 8g Th Jg Ac Qh 9g Kh
Shape up:
*9s* *6s* *Ks* *5s* *As* *Qs* 4c *Ts* *8s* *4s*
Jh Ah 9c 8h 9h Tc Tg 5h Qc |4h|
4g 6h |Kg| 6g |Ag| |Qg| Jc *Js* 6c 8c
5c 5g |Kc| 8g Th Jg Ac Qh 9g Kh
Ship out:
*9s* *6s* *Ks* *5s* *As* *Qs* 4c *Ts* *8s* *4s*
Jh Ah 9c 8h 9h Tc *Js* 5h Qc |4h|
4g 6h |Kg| 6g |Ag| |Qg| Jc |Tg| 6c 8c
5c 5g |Kc| 8g Th Jg Ac Qh 9g Kh
Shape up:
*9s* *6s* *Ks* *5s* *As* *Qs* *Js* *Ts* *8s* *4s*
Jh Ah 9c 8h 9h Tc 4c 5h Qc |4h|
4g 6h |Kg| 6g |Ag| |Qg| Jc |Tg| 6c 8c
5c 5g |Kc| 8g Th Jg Ac Qh 9g Kh
\end{verbatim}
\normalsize
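For finite decks the rounds are easy to simulate. Here is a sketch in Python (a toy check, not part of the proof) that runs Shape Up and Ship Out until no bad spade remains, in the case where all the cards have been dealt out, as in the example above; suit 0 plays the role of spades:

```python
def shipshape(grid):
    """Alternate Shape Up and Ship Out rounds until no bad spade remains.
    grid[r][j] is the card (suit, rank) in spot j of rack r; every spot
    is assumed filled (all cards dealt)."""
    m, n = len(grid), len(grid[0])
    pos = {grid[r][j]: (r, j) for r in range(m) for j in range(n)}

    def swap(a, b):
        (r1, j1), (r2, j2) = a, b
        grid[r1][j1], grid[r2][j2] = grid[r2][j2], grid[r1][j1]
        pos[grid[r1][j1]], pos[grid[r2][j2]] = a, b

    def bad_spades():
        return [(r, j) for r in range(m) for j in range(1, n)
                if grid[r][j][0] == 0]

    while bad_spades():
        # Shape Up: a rack with bad spades but no good spade promotes
        # its leftmost spade into the spades spot.
        for r in range(m):
            if grid[r][0][0] == 0:
                continue
            bad = [j for j in range(1, n) if grid[r][j][0] == 0]
            if bad:
                swap((r, 0), (r, bad[0]))
        # Ship Out: each remaining bad spade swaps with the card whose
        # suit names its spot and whose rank is the good spade's rank.
        for (r, j) in bad_spades():
            q = grid[r][0][1]            # rank of the good spade in rack r
            swap((r, j), pos[(j, q)])
    return grid
```

Running this on any deal of the 52-card deck into 13 racks ends with a good spade of each rank, exhibiting the match between racks and ranks.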
\section{Discussion and plan} \label{discuss}
\subsection{Division by any finite number}
Write $A \preceq B$ (`$A$ is less than or equal to $B$')
if there is an injection from $A$ to $B$.
Write $A \asymp B$
(`$A$ equals $B$', with apologies to the equals sign)
if there is a bijection.
The method we've described for dividing by four
works fine for any finite $n$,
so we have:
\begin{theorem} \label{thleq}
For any finite $n$,
$n \times A \preceq n \times B$ implies $A \preceq B$.
\end{theorem}
From the Cantor-Bernstein theorem
(Prop. \ref{cb} below)
we then get
\begin{theorem} \label{theq}
For any finite $n$,
$n \times A \asymp n \times B$ implies $A \asymp B$.
\end{theorem}
As an application of Theorem \ref{thleq},
Lindenbaum and Tarski
(cf.
\cite[p. 305]{lindenbaumTarski:ensembles},
\cite[Theorem 13]{tarski:cancellation})
proved the following:
\begin{theorem} \label{euclid}
If
$m \times A \asymp n \times B$ where $\gcd(m,n)=1$,
then there is some $R$
so that $A \asymp n \times R$ and $B \asymp m \times R$.
\end{theorem}
We reproduce Tarski's proof in Section \ref{sec:division} below.
Combining Theorems \ref{theq} and \ref{euclid}
yields this omnibus result for division of an equality
\cite[Corollary 14]{tarski:cancellation}:
\begin{theorem}
If
$m \times A \asymp n \times B$ where $\gcd(m,n)=d$,
then there is some $R$
so that $A \asymp (n/d) \times R$ and $B \asymp (m/d) \times R$.
\end{theorem}
\subsection{Pan Galactic Division}
We call the shape-up-or-ship-out algorithm for eliminating
bad spades \emph{Shipshaping}.
As we've seen, Shipshaping is the basis for a division algorithm
that we'll call
\emph{Pan Galactic Long Division}.
As the name suggests, there is another algorithm called
\emph{Pan Galactic Short Division},
which we'll come to presently.
`Pan Galactic' indicates that we think this is the `right way'
to approach division,
and some definite fraction of intelligent life forms in the universe
will have discovered it.
Though if this really is the right way to divide,
there should be no need for this Pan Galactic puffery,
we should just call these algorithms \emph{Long Division} and
\emph{Short Division}.
Which is what we will do.
\subsection{What is needed for the proof?}
Shipshaping and its associated division procedures are effective (well-defined,
explicit, canonical, equivariant, \ldots),
and do not require the well-ordering principle or any other form
of the axiom of choice.
Nor do we need the axiom of power set,
which is perhaps even more suspect than the axiom of choice.
In fact we don't even need the axiom of infinity,
in essence because if there are no infinite sets then
all difficulties vanish.
Still weaker systems would suffice:
It would be great to know
just how strong a theory is needed.
\subsection{Whack it in half, twice}
Division by 2 is easy (cf. Section \ref{two}),
and hence so is division by 4:
\[
4 \times A \preceq 4 \times B \;\Longrightarrow\; 2 \times A \preceq 2 \times B
\;\Longrightarrow\; A \preceq B
.
\]
We made it hard for ourselves in order to show a method
that works for all $n$.
It would have been more natural to take $n=3$,
which is the crucial test case for division.
If you can divide by 2, and hence by 4,
there is no guarantee that you can divide by 3;
whereas if you can divide by 3, you can divide by any $n$.
This is not meant as a formal statement; it's what you might call a Thesis.
We chose $n=4$ instead of $n=3$
because there are four suits in a standard deck of cards,
and because there is already a paper called `Division by three'
\cite{conwaydoyle:three},
which this paper is meant to supersede.
\subsection{Plan} \label{plan}
In Section \ref{two} we discuss division by two, and explain why
it is fundamentally simpler than the general case.
In Section \ref{short} we introduce Short Division.
In Section \ref{pgs} we take a short break to play Pan Galactic Solitaire.
In Section \ref{cancel} we reproduce classical results
on subtraction and division,
so that this work can stand on its own as the definitive resource
for these results,
and as preparation for Section \ref{timing},
where we discuss how long
the Long and Short Division algorithms take to run.
In Section \ref{history}
we discuss the tangled history of division.
In Section \ref{vale}
we wrap up with some platitudes.
\section{Division by two} \label{two}
Why is division by two easy?
The flippant answer is that $2-1=1$.
Take a look at Conway and Doyle
\cite{conwaydoyle:three}
to see one manifestation of this.
Here is how this shows up in the context of Shipshaping.
The reason Shipshaping has a Shape Up round is that for $n>2$,
there can be more than one bad spade.
When $n=2$ there can't be more than one bad spade.
In light of this, we can leave out the Shape Up rounds.
When a bad spade lands in the hearts spot of a rack
with a heart in the spades spot, we just leave it there.
It's probably easier to understand this if we give up the good-bad
distinction, and just say that the rule is that when both cards in a rack
are spades, we ship out the spade in the hearts spot.
At the end of time, there will be at most one spade in each rack,
so in the Long Division setup there will be at least one heart;
assigning to each rack the rank of the rightmost heart gives
us an injection from racks to ranks.
In this approach to division by 2,
we find that there is no need to worry about doing everything in
lockstep.
We can do the Ship Out steps in any order we please,
without organizing the action into rounds.
As long as any two-spade rack eventually gets attended to,
we always arrive at the same final configuration of cards.
This is the kind of consideration that typically plays an important
role in the discussion of distributed computation,
where you want the result not to depend on accidents of what happens first.
It's quite the opposite of what concerns us here, where,
in order to keep everything canonical, we can't simply say,
`Do these steps in any old order.'
Without the axiom of choice, everything has to be done
synchronously.
Now in fact the original Shipshaping algorithm
works fine as an asynchronous algorithm for any $n$,
but the limiting configuration will depend on the order in which swaps are
carried out,
so the result won't be canonical.
More to the point, without the Axiom of Choice
we can't just say,
`Do these operations in whatever order you please.'
In contrast to the real world, where the ability to do everything
synchronously would be a blessing,
for us it is an absolute necessity.
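The order-independence claimed for division by 2 can be checked on finite examples. The sketch below (illustrative only) attends to two-spade racks in a seeded random order and arrives at the same final layout regardless of the seed:

```python
import random

def divide_by_two(grid, seed):
    """Asynchronous division by 2: whenever a rack holds two spades, ship
    out the spade in the hearts spot, swapping it with the heart whose
    rank matches the spade in the spades spot. Racks are attended to in
    an arbitrary (seeded) order. grid[r] = [card in spades spot, card in
    hearts spot], with all cards dealt; cards are (suit, rank), suit 0
    for spades, suit 1 for hearts."""
    rng = random.Random(seed)
    grid = [row[:] for row in grid]                 # work on a copy
    pos = {grid[r][j]: (r, j) for r in range(len(grid)) for j in (0, 1)}
    while True:
        busy = [r for r in range(len(grid))
                if grid[r][0][0] == 0 and grid[r][1][0] == 0]
        if not busy:
            return grid
        r = rng.choice(busy)                        # any order will do
        a, b = (r, 1), pos[(1, grid[r][0][1])]      # heart of the good rank
        grid[a[0]][a[1]], grid[b[0]][b[1]] = grid[b[0]][b[1]], grid[a[0]][a[1]]
        pos[grid[a[0]][a[1]]], pos[grid[b[0]][b[1]]] = a, b
```

At any moment the enabled swaps involve disjoint cards, which is one way to see why the final configuration cannot depend on the order.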
\section{Short Division} \label{short}
In Short Division we reverse the roles of $A$ and $B$, so that
$A$ is the set of ranks and $B$ the set of racks.
An injection from $4 \times A$ to $4 \times B$ now shows a way to deal
out all the cards, leaving no cards in the deck, though some of
the spots in the racks may remain empty.
We do just one round of Shipshaping.
This works just as before, the only new twist being that if the
spades spot is empty when we come to shape up a bad spade,
we simply move the spade
over into the spades spot.
Since now all the cards have been dealt out, we don't ever
have occasion to swap with a card still in the deck.
When $A$ is finite, all the spades will shape up,
and show a bijection from ranks ($A$) to racks ($B$).
When $A$ is infinite,
some of the bad spades may get `lost',
having been shipped out again and again
without ever shaping up.
These lost spades will each have passed through
an infinite sequence of spots,
and all these infinite sequences will be disjoint.
We use these sequences to hide the lost spades,
as follows.
Let $A_\kw{good}$ denote the ranks of the good spades at the end of the game,
and $B_\kw{good} \asymp A_\kw{good}$ the racks where they have landed.
$A_\kw{bad} = A - A_\kw{good}$ is the set of ranks of the lost spades.
To each element of $A_\kw{bad}$ there is an infinite chain of spots in
$3 \times B_\kw{good}$, and these chains are all disjoint.
Using these disjoint chains we can use the usual `Hilbert Hotel' scheme
to define a bijection between
$A_\kw{bad} \cup 3 \times B_\kw{good}$ and $3 \times B_\kw{good}$,
i.e. we make $3 \times B_\kw{good}$ `swallow' $A_\kw{bad}$.
But if $3 \times B_\kw{good}$ can swallow $A_\kw{bad}$ then so can $B_\kw{good}$
(cf. Proposition \ref{swallow} below):
$A_\kw{bad} \cup B_\kw{good} \asymp B_\kw{good}$.
So
\[
A = A_\kw{bad} \cup A_\kw{good} \asymp A_\kw{bad} \cup B_\kw{good} \asymp B_\kw{good}
\preceq B
.
\]
{\bf Note.} To make apparent the sequences of cards accumulated by the lost
spades,
we can modify the game
by making stacks of cards accumulate under the spades,
to record their progress.
The Shape Up round is unchanged,
except that we swap the whole spade stack into the
spades spot.
In the Ship Out round, when a spade ships out it takes its stack with it,
and places it on top of the card it was meant to swap with.
Spades that eventually shape up will have finite piles
beneath them,
but lost spades will accumulate
an infinite stack of cards.
\section{Pan Galactic Solitaire} \label{pgs}
Let us take a short break to play Pan Galactic Solitaire.
Deal the standard 52-card deck out in four rows of 13 cards.
The columns represent the racks, with spots in each rack
labelled spades, hearts, diamonds, clubs going from top to bottom.
(Though as you'll see it makes no difference how we label the spots,
this game is suit-symmetric.)
The object is to `fix' each column
so that the ranks of the four cards are equal
and each card is in the suit-appropriate spot.
We move one card at a time.
There is no shaping up;
shipping out swaps are allowed based on any suit, not just spades.
So for example if in some column the 3 of hearts is in the hearts spot
and the 6 of hearts is in the diamonds spot,
we may swap the 6 of hearts for the 3 of diamonds.
We don't know a good strategy for this game.
Computer simulations show that various simple strategies yield a probability
of winning of about 1.2 per cent.
Younger players find this frustratingly low, and either play with a smaller
deck or allow various kinds of cheating.
Older players are not put off by the challenge,
and at least one has played the game often enough to have won twice.
Though because the game recovers robustly from an occasional error,
he cannot be certain that he won these games fair and square.
A couple of apps have been written to implement this game on the computer.
In some versions if you click on the 6 of hearts (as in the
example above)
the app locates the 3 of diamonds for you and carries out the swap.
This makes the game go much faster,
but it is much less fun to play than if you must locate and
click on the 3 of diamonds,
or better yet, drag the 6 of hearts over onto the 3 of diamonds.
One theory as to why the automated version of the game is less fun to play
is that the faster you can play the game,
the more frequently you lose.
This game is called Pan Galactic Solitaire from the conviction that something
like it will have occurred to a definite fraction of all life forms
that have discovered Short and Long Division.
\section{Cancellation laws} \label{cancel}
In this section we reproduce basic results about cancellation.
These results are all cribbed from
Tarski
\cite{tarski:cancellation},
though the notation is new and (we hope) improved.
To simplify notation,
write $+$ for disjoint union and $n A$ for $n \times A$.
(This abbreviation is long overdue.)
$A-B$ will mean set theoretic complement, with the implication
that $B$ is a subset of $A$.
The results below guarantee the existence of certain injections and bijections,
and the proofs are backed by algorithms.
For finite sets these can all be made to run snappily.
\subsection{Subtraction laws}
The only ingredients here are the Cantor-Bernstein construction
and the closely related Hilbert Hotel construction.
We start with Cantor-Bernstein.
\begin{prop}[Cantor-Bernstein] \label{cb}
\[
A \preceq B
\;\land\;
B \preceq A
\;\Longrightarrow\;
A \asymp B
.
\]
\end{prop}
\noindent {\bf Proof.\ }
For a proof,
draw the picture and follow your nose
(cf.\ \cite{conwaydoyle:three}).
Here is a version of the proof emphasizing that the desired
bijection is the pointwise limit of a sequence of finitely-computable
bijections, which will be important to us in
Section \ref{timing} below.
It suffices to show that from an injection
\[
f: A \rightarrow B \subset A
\]
we can get a bijection.
Say we have any function
$f:A \rightarrow B$,
not necessarily injective
(this relaxation of the conditions is useful,
so that we can think of finite examples).
Every $a \in A$ makes a Valentine card.
To start, every $a \in A-B$ gives their Valentine to $f(a) \in B$.
Any $b \in B$ that gets a Valentine then gives their own Valentine to
$f(b)$.
Repeat this procedure ad infinitum,
and at the end of the day
every $b \in B$
has at least one Valentine,
and every Valentine has a well-defined location
(it has moved at most once).
Let $g$ associate to $a \in A$ the $b \in B$ that has $a$'s
Valentine.
$g$ is always a surjection,
and if $f$ is an injection, $g$ is a bijection.
Here's a variation on the proof.
Again each $a \in A$ makes a Valentine,
only this time every $a \in A$ gives their Valentine to $f(a)$.
Now any $b \in B$ that has no Valentine demands its Valentine back.
Repeat this clawing-back ad infinitum,
and at the end of the day,
every Valentine has a well-defined location,
and if $f$ was injective, or more generally if $f$ was finite-to-one,
every $b \in B$ has a Valentine,
and we're done as before.
The twist here is that if $f$ is not finite-to-one,
at the end of the day some $b$'s may be left without a Valentine.
So they demand their Valentine back, continuing a transfinite
chain of clawings-back.
We may or may not be comfortable concluding that after some transfinite
time, every $b \in B$ will have a Valentine.
For present purposes we needn't worry about this, since
when $f$ is injective the fuss is all over after at most $\omega$ steps.
$\quad \clubsuit$
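For finite sets the Valentine construction can be carried out verbatim. The sketch below (illustrative) computes the forward orbit of $A-B$ and returns $g$; for a general $f$ it yields a surjection onto $B = f(A)$:

```python
def valentine_bijection(A, f):
    """Pointwise limit of the Valentine-passing construction: given
    f : A -> B with B a subset of A (here B is taken to be f(A)),
    return g with g(a) = f(a) if a lies in the forward orbit of A - B,
    and g(a) = a otherwise. g is a surjection onto B, and a bijection
    when f is injective."""
    B = set(f.values())
    moved = {a for a in A if a not in B}     # A - B sends first
    frontier = set(moved)
    while frontier:
        received = {f[a] for a in frontier}
        frontier = received - moved          # recipients now pass theirs on
        moved |= frontier
    return {a: (f[a] if a in moved else a) for a in A}
```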
\noindent
{\bf Notation.}
Write $A \ll B$
(`$A$ is swallowed by $B$' or `$A$ hides in $B$')
if $A + B \preceq B$.
By Cantor-Bernstein this is equivalent to $A + B \asymp B$.
Another very useful equivalent condition is that there exist
disjoint infinite chains inside $B$,
one for each element of $A$.
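The disjoint-chains condition is constructive: given the chains, the injection witnessing $A + B \preceq B$ is computable pointwise. A sketch, with the chains supplied as functions:

```python
def hide(chain, locate):
    """Hilbert-Hotel injection witnessing A << B: given pairwise disjoint
    infinite chains chain(a, 0), chain(a, 1), ... inside B (one per a in A),
    send ('A', a) to chain(a, 0), shift every b on a chain one step along
    its chain, and leave every other b fixed. locate(b) returns (a, k)
    with chain(a, k) == b, or None if b lies on no chain."""
    def h(x):
        tag, v = x
        if tag == 'A':
            return chain(v, 0)
        hit = locate(v)
        return v if hit is None else chain(hit[0], hit[1] + 1)
    return h
```

For instance, with chains $\mathtt{chain}(a,k) = 2k + a$ for $a \in \{0,1\}$ inside $B = \mathbb{N}$, every even number shifts by 2 along the even chain, every odd number along the odd chain, and the two elements of $A$ slot in at 0 and 1.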
By repeated swallowing we have:
\begin{prop} \label{multiswallow}
For any $n$, if
\[
A_i \ll B
,\;
i=0,\ldots,n-1
\]
then
\[
A_0 + \ldots + A_{n-1} \ll B
.
\quad \clubsuit
\]
\end{prop}
Here are two close relatives of Cantor-Bernstein, proved by the same
back-and-forth construction.
\begin{prop}
\[
A + C \preceq B + C
\;\Longrightarrow\;
A - A_0 \preceq B
,
\]
where $A_0 \ll C$.
$\quad \clubsuit$
\end{prop}
\begin{prop}
\[
A + C \asymp B + C
\;\Longrightarrow\;
A - A_0 \asymp B - B_0
,
\]
where $A_0,B_0 \ll C$.
$\quad \clubsuit$
\end{prop}
\begin{prop}
\[
A + C \preceq B + 2C
\;\Longrightarrow\; A \preceq B + C
\]
\end{prop}
\noindent {\bf Proof.\ }
\[
A+C \preceq B + C + C
\;\Longrightarrow\; A - A_0 \preceq B + C
\]
with $A_0 \ll C$.
So
\[
A = (A - A_0) + A_0 \preceq B + C + A_0 \preceq B + C
.
\quad \clubsuit
\]
\begin{prop} \label{mnineq}
For finite $m < n$,
\[
A + m C \preceq B + n C
\;\Longrightarrow\;
A \preceq B + (n-m) C
.
\]
\end{prop}
\noindent {\bf Proof.\ }
The proof is by induction, and we can get the number of recursive steps
down to $O(\log(m))$.
$\quad \clubsuit$
From this we get
\begin{prop} \label{preswallow}
For $n \geq 1$
\[
A + n C \preceq B + n C
\;\Longrightarrow\;
A + C \preceq B + C
.
\quad \clubsuit
\]
\end{prop}
From Cantor-Bernstein we then get
\begin{prop}
For $n \geq 1$
\[
A + n C \asymp B + n C
\;\Longrightarrow\;
A + C \asymp B + C
.
\quad \clubsuit
\]
\end{prop}
Here's the key result for Short Division:
\begin{prop} \label{swallow}
For $n \geq 1$
\[
A \ll n C
\;\Longrightarrow\;
A \ll C
.
\]
\end{prop}
\noindent {\bf Proof.\ }
This is the special case of Proposition \ref{preswallow} when $B$ is empty.
Here we redo the proof in this special case,
in preparation for the timing considerations of Section \ref{timing}.
Think of $nC$ as a deck of cards, where $(i,c)$ represents a card of suit
$i$ and rank $c$.
Since $A \ll nC$,
there are disjoint infinite chains
\[
s_a:\omega \rightarrow nC
,\;a \in A
.
\]
Let $\alpha(a)$ be the smallest
$i$ such that $s_a$ contains infinitely many cards of suit $i$,
and let $\rho_{a}$ be the sequence of ranks of those cards.
(For future reference, note that we could trim this down to the
ranks of those cards of suit $\alpha(a)$ that come after the
last card of a lower-numbered suit.)
Let
\[
A_i = \{a \in A:\alpha(a)=i\}
.
\]
The infinite chains
\[
\rho_{a}: \omega \rightarrow C
,\;
a \in A_i
\]
are disjoint, so
\[
A_i \ll C
\]
So by Proposition \ref{multiswallow},
\[
A = A_0 + \ldots + A_{n-1} \ll C
.
\]
The required injection from $A+C$ to $C$ is obtained as a composition
of injections $f_i$ which map $A+C$ into $(A-A_i) + C$, leaving $A-A_i$ fixed.
$\quad \clubsuit$
\begin{comment}
Let $\tau_{a,i}$
be the subsequence of $s_a$ consisting of all those cards
of suit $i$ coming after the last card of suit $<i$.
Write
\[
\tau_{a,i}(k) = (i,\rho_{a,i}(k))
,
\]
so that $\rho_{a,i}$ is the corresponding sequence of ranks.
Let $\alpha(a)$ be the one and only $i$ such that the sequence $\tau_{a,i}$ is
infinite.
and empty for $i>\alpha(a)$.
\end{comment}
Here's a result
that will be handy when we come to the
Euclidean algorithm.
\begin{prop} \label{handy}
For $m<n$
\[
A + m C \asymp n C
\;\Longrightarrow\;
A + E \asymp (n-m) C
,
\]
where $E \ll nC$, and hence $E \ll C$.
\end{prop}
\noindent {\bf Proof.\ }
From above we have
\[
A \preceq (n-m) C
.
\]
Write
\[
A + E \asymp (n-m)C
,
\]
so that
\[
A + E + m C \asymp n C
.
\]
Since $A + mC \asymp n C$ this gives
\[
E + nC \asymp nC
\;\Longrightarrow\;
E \ll nC
\;\Longrightarrow\;
E \ll C
.
\quad \clubsuit
\]
Finally,
here's the result Conway and Doyle needed to make their division method
work.
\begin{prop}
For $n \geq 1$,
\[
n A \preceq n B
\;\land\;
B \preceq A
\;\Longrightarrow\;
A \preceq B
,
\]
and hence by Cantor-Bernstein $A \asymp B$.
\end{prop}
\noindent {\bf Proof.\ }
Write
\[
A \asymp B + C
.
\]
From
\[
nA \preceq nB
\]
and
\[
A \asymp B+C
\]
we get
\[
nB+nC \preceq nB
,\]
i.e.
\[
nC \ll nB
.
\]
But
\[
nC \ll n B \;\Longrightarrow\; C \ll n B \;\Longrightarrow\; C \ll B
,
\]
so
\[
A \asymp B + C \preceq B
.
\quad \clubsuit
\]
\subsection{Division laws} \label{sec:division}
Finally we come to division.
We've already proved Theorems \ref{thleq} and \ref{theq}.
All that remains now is Theorem \ref{euclid}.
{\bf Proof of Theorem \ref{euclid}.}
As you would expect,
the proof is a manifestation of
the Euclidean algorithm.
If $m=1$ we are done.
Otherwise we will use the usual recursion.
\[
mB \preceq n B \asymp m A
.
\]
Using division we get
\[
B \preceq A
.
\]
Write
\[
A \asymp B + C
.
\]
\[
mB+mC \asymp mA \asymp n B
.
\]
From Proposition \ref{handy} we have
\[
mC + E \asymp (n-m) B
,
\]
with $E \ll B$, and hence $E \ll A$.
We think of $E$ as `practically empty'.
If it were actually empty, we'd recur using $C$ in place of $A$.
Ditto if we knew that $E \ll C$.
As it is we use $C+E$ in the recursion,
hoping that this amounts to practically the same thing.
\[
m(C+E) \asymp mC + E + (m-1)E
\asymp (n-m)B + (m-1)E
\asymp (n-m)B
.
\]
So by induction, for some $R$ we have
\[
C+E \asymp (n-m)R
,
\]
\[
B \asymp m R
.
\]
Since $E \ll A$,
\[
A \asymp A + E
\asymp B + C + E
\asymp
m R + (n-m) R
\asymp n R
.
\]
Done.
We won't undertake to determine
the best possible running time here.
But in order to make sure it requires at most a logarithmic number
of divisions,
we will want to check that for the recursion
we can subtract any multiple $km$ from $n$
as long as $km < n$.
Here's the argument again, in abbreviated form.
\[
km B \preceq nB \asymp m A
;
\]
\[
kB \preceq A
;
\]
\[
A \asymp kB+C
;
\]
\[
mkB+mC \asymp mA \asymp n B
;
\]
\[
mC+E \asymp (n-km)B
,\;
E \ll B \preceq A
;
\]
\[
m(C+E) \asymp mC+E+(m-1)E \asymp (n-km)B + (m-1)E \asymp (n-km)B
;
\]
\[
C+E \asymp (n-km)R
,\;
B \asymp mR
;
\]
\[
A \asymp A+E \asymp kB+C+E \asymp kmR + (n-km)R \asymp n R
.
\quad \clubsuit
\]
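At the level of the coefficients, this recursion is exactly the Euclidean algorithm on the pair $(m, n)$, so the number of divisions is logarithmic, with Fibonacci pairs as the worst case. A sketch of the bookkeeping (an illustration of the step count only, not of the set-theoretic content):

```python
from math import gcd

def division_steps(m, n):
    """Count the recursive divisions when we subtract the largest
    multiple km < n at each step, mirroring the Euclidean algorithm
    on a coprime pair (m, n); m == 1 is the base case."""
    assert gcd(m, n) == 1 and m >= 1 and n >= 1
    m, n = min(m, n), max(m, n)
    steps = 0
    while m > 1:
        m, n = n % m, m          # recur on m(C+E) ~ (n mod m) B
        m, n = min(m, n), max(m, n)
        steps += 1
    return steps
```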
\section{How long does it take?} \label{timing}
\subsection{The finite case}
In Shipshaping, every swap puts at least one card in a spot where it
will stay for good, so the number of swaps is at most $n|A|$:
This holds both in the Long Division setup where $nA$ is the set of spots,
and in
the Short Division setup where $nA$ is the set of cards.
For a distributed process where all Shape Up and Ship Out rounds take
place simultaneously
the number of rounds could still be nearly this big, despite the
parallelism,
because one bad spade could snake its way through nearly all the non-spades
spots.
If we simulate this distributed process on a machine with one processor,
the running time will be $O(n |A|)$
(or maybe a little longer,
depending on how much you charge for various operations).
{\bf Note.} While $|B|$ might be as large as $n|A|$,
nothing requires us to allocate storage for all $n|B|$ cards (in Long Division)
or spots (in Short Division).
For finite $A$,
in Short Division
all spades will shape up after one pass of Shipshaping,
and show an injection from ranks to racks.
So the running time for Short Division is $O(n|A|)$,
running either as a distributed process or on a single processor.
This is as good as we could hope for.
Still for finite $A$,
Long Division with the naive recursion takes $n-1$ passes of Shipshaping.
If we divide by 2 whenever an intermediate value of $n$ is even,
we can get this down to at most $O(\log(n))$ passes.
The number of suits remaining
gets halved at least once every other pass,
so the total number of swaps over all rounds of Shipshaping will be at most
\[
|A|(n + n + n/2 + n/2 + n/4 + n/4 + \ldots)
=
4n|A|
.
\]
Hence the total time for Long Division is $O(n|A|)$,
running either as a
distributed process or on a single-processor machine.
This is the same order as for Short Division, though Short Division
will win out when you look at explicit bounds.
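The bound on the total number of swaps can be checked mechanically: each Shipshaping pass at $s$ suits costs at most $s|A|$ swaps, and halving whenever $s$ is even keeps the total under $4n|A|$. A sketch of that schedule:

```python
def long_division_swap_bound(n, size_A):
    """Total swap cost over all Shipshaping passes when we halve the
    number of suits whenever it is even (and drop one suit when odd);
    each pass at s suits costs at most s * |A| swaps."""
    total, s = 0, n
    while s > 1:
        total += s * size_A
        s = s // 2 if s % 2 == 0 else s - 1
    return total
```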
\subsection{The infinite case}
For $A$ infinite,
to talk about running times we will need a notion of transfinite
synchronous distributed computation.
The general idea is to support taking the kind of pointwise limits
that show up in Shipshaping,
where in the limit the contents of each spot are well defined,
as is the goodness (but not the location) of each spade.
(See Section \ref{timing} below for more specifics.)
One round of Shipshaping will take time $\omega$.
For Long Division sped up as in the finite case so as to take
$O(\log(n))$ passes,
the running time will be $\omega \cdot O(\log(n))$.
{\bf Note.}
Here and throughout,
we'll be rounding running times down to the nearest limit ordinal,
so as not to have to worry about some finite number of post-processing
steps.
For real speed, we'll want to use Short Division.
The swallowing step can be implemented by a recursion which,
like Long Division, can be sped up to take $O(\log(n))$ passes.
Though the number of passes is on the same order as for Long Division,
this can still be considered an improvement,
to the extent that Shipshaping is more complicated than swallowing.
The big advantage of Short Division stems from the fact that
the swallowing can be sped up to run in time $\omega$,
in essence by running the steps of the recursion in tandem.
And the swallowing can be configured to
run simultaneously with the Shipshaping.
Sped up in this way, Short Division can be done in time at most $\omega$.
We discuss this further in Section \ref{timing} below.
By contrast,
we've never found a way to run the division stages of Long Division
in tandem.
\subsection{Dividing a bijection}
To divide a bijection to get a bijection,
we can simultaneously compute injections
each way, and then combine them using the Cantor-Bernstein
construction in additional time $\omega$.
(Cf. Proposition \ref{cb}.)
So if dividing an injection takes time $\omega$,
dividing a bijection takes time at most $\omega \cdot 2$.
Can this be improved upon?
It is tempting to start running
the Cantor-Bernstein algorithm using partial information about the two
injections being computed, but we haven't made this work.
The case to look at first is $n=2$,
where Long Division is all done after one pass of Shipshaping.
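The Cantor-Bernstein construction itself is concrete chain-chasing. Here is a Python sketch of it (our own illustration; on finite sets both injections are automatically bijections and every chain is a cycle, so the chain-origin cases only matter in the infinite setting, where the backward walk is assumed to terminate or cycle):

```python
def cantor_bernstein(A, B, f, g):
    """Combine injections f: A->B and g: B->A (as dicts) into a
    bijection h: A->B by tracing each element's chain backward to
    its origin: chains starting in A (and cycles) use f; chains
    starting in B use g^{-1}."""
    g_inv = {b: a for a, b in g.items()}
    f_inv = {b: a for a, b in f.items()}
    h = {}
    for a in A:
        x, side = a, 'A'
        while True:
            if side == 'A' and x not in g_inv:
                h[a] = f[a]            # chain starts in A \ g(B)
                break
            if side == 'B' and x not in f_inv:
                h[a] = g_inv[a]        # chain starts in B \ f(A)
                break
            # step backward along the chain
            if side == 'A':
                x, side = g_inv[x], 'B'
            else:
                x, side = f_inv[x], 'A'
            if side == 'A' and x == a:
                h[a] = f[a]            # came back around: a cycle
                break
    return h
```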
\subsection{Speeding up Long Division}
A division algorithm takes as input an injection $f_n:nA \rightarrow nB$,
and produces as output an injection $f_1:A \rightarrow B$.
In the Long Division setup, where all the spots are filled though
not all the cards need have been dealt out, one pass
of Shipshaping gets us from $f_n$ to $f_{n-1}$,
or more generally,
from $f_{n}$ to $f_{n(k-1)/k}$, for any $k$ dividing $n$.
In particular, when $n$ is even one pass gets us $f_{n/2}$.
This allows us to get
from $f_n$ to $f_1$ in $O(\log(n))$ passes.
As one Shipshaping pass takes time at most $\omega$, this makes for
a total running time of at most $\omega \cdot O(\log(n))$.
Various tricks can be used to cut down the number of passes
in Long Division.
We can run Shipshaping using any divisor of $n$ we please,
or better yet, run it simultaneously for all divisors.
Knowing $f_m$ for as many values of $m$ as possible
can be useful because if we know
injections from $m_k A$ to $m_k B$ then we can paste them together to get
an injection from $m A$ to $m B$ for any positive linear combination
$m = \sum_k r_k m_k$.
This pasting takes only finite time,
which in this context counts as no time at all.
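The pasting is pure bookkeeping with offsets; a minimal Python sketch (our own, with blocks laid out consecutively and each injection acting within its own block):

```python
def paste(parts, fs):
    """parts: the block sizes m_k, listed with multiplicity r_k;
    fs[k]: an injection from m_k x A to m_k x B, as a dict
           {(i, a): (j, b)} with i, j < m_k.
    Returns an injection from m x A to m x B, m = sum(parts)."""
    g, offset = {}, 0
    for m_k, f in zip(parts, fs):
        for (i, a), (j, b) in f.items():
            g[(offset + i, a)] = (offset + j, b)
        offset += m_k
    return g
```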
Combining these observations we can shave down the number of Shipshaping
passes needed,
and in the process we observe some intriguing phenomena.
But we can never get the number of passes
below $\lceil \log_2 n \rceil$, which is
at most
a factor of two better than
what we achieve with the naive method of dividing by 2 when any intermediate
$n$ is even.
\subsection{Speeding up Short Division}
For real speed, we use Short Division.
In this setup, all the cards are dealt
out, though not all the spots need be filled.
As the Shipshaping algorithm runs,
we observe a steadily decreasing collection of bad spades,
together with steadily lengthening disjoint sequences in $(n-\{0\}) \times B$
telling the spots through which these bad spades have passed.
In the limit, the bad spades that remain have wandered forever,
and they index disjoint injective sequences in $(n-\{0\}) \times B$.
The proof of Proposition \ref{preswallow} offers a recursive algorithm for
hiding the lost bad spades in with the good; sped up in the by-now
usual way, this requires $O(\log(n))$ recursive `passes', which, carried out
one after the other, give us a total running time of
$\omega \cdot O(\log(n))$.
(We've silently absorbed the $\omega$ coming from the
single pass of Shipshaping.)
This is on the same order as what we achieved with Long Division.
Having to do only one round of Shipshaping could be viewed as an improvement,
on the grounds that a Shipshaping is more complicated than a swallowing pass.
But as we've stated before, the real advantage of Short Division
comes from the fact that we can run all the swallowing passes in
tandem with each other, and with the Shipshaping algorithm.
Here's roughly how it works.
As we run the Shipshaping pass for Short Division,
the set of bad spades decreases, while the sequences of spots they have
visited steadily increase.
As these preliminary results trickle in,
we can be computing
how we propose
to hide the shrinking set of bad spades among the growing set of good spades.
Eventually every bad spade knows where it will go,
as does every good spade.
In the limit we have an injection from spades, and hence ranks, to racks.
The whole procedure is done after only $\omega$ steps
(though as always we reserve the right to do
some fixed finite number of post-processing steps).
The main thing missing here is how we end up hiding the lost spades,
and how we compute this.
The fundamental idea is to trim the sequence of spots visited by
a lost spade down to the subsequence
consisting of all the hearts visited,
then all the diamonds after the last heart,
then all the clubs after the last diamond.
These trimmed sequences can be computed in tandem with Shipshaping,
so they are ready for use after time $\omega$.
(See Section \ref{chips} below.)
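Concretely, on a finite initial segment of a track (our own sketch, for $n=4$ with 'H', 'D', 'C' standing for the hearts, diamonds, and clubs spots visited), the trimming looks like this in Python:

```python
def trim(track):
    """Trim a lost spade's track to: all hearts visited, then the
    diamonds after the last heart, then the clubs after the last
    diamond.  The limiting suit is then the last entry of the
    trimmed track (once the track is long enough to have settled)."""
    out = [s for s in track if s == 'H']
    last_h = max((i for i, s in enumerate(track) if s == 'H'), default=-1)
    tail = track[last_h + 1:]
    out += [s for s in tail if s == 'D']
    last_d = max((i for i, s in enumerate(tail) if s == 'D'), default=-1)
    out += [s for s in tail[last_d + 1:] if s == 'C']
    return out
```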
Now because these sequences are increasing
we can determine the limiting suit of all these sequences
by running along them, keeping track of the suit,
which eventually stops changing.
After another $\omega$ steps we've computed what in the notation
of Proposition \ref{swallow} was the function $\alpha$.
Then we can distinguish the lost spades according to this limiting value.
We first hide those that limit with hearts, then diamonds, then clubs.
This hiding takes only $O(n)$ post-processing steps,
which we disregard, for a total running time of $\omega \cdot 2$:
$\omega$ to compute and trim the tracks of the lost spades;
$\omega$ to determine the limiting suits of the lost spades.
To get the running time down to $\omega$ is trickier.
The idea is that we start using the trimmed sequences before we are done
computing them.
The injection we compute will be different from that just described,
because of artifacts associated to the
cards in the trimmed sequences that come before those of the limiting suit
(e.g. a finite number of hearts coming before an infinite number of diamonds).
We omit the details.
\subsection{Chipshaping} \label{chips}
Here is a variation on Shipshaping that incorporates
the simultaneous determination of the trimmed sequences of spots that the
lost spades have passed through.
In this variation, we begin by placing a poker chip on top of each spade,
which will serve as its representative during the Shape Up and Ship Out rounds.
(If the racks are tilted, we'll have to lean the chip against the card;
maybe it would be better to think of the cards laid out in rows
as in Pan Galactic Solitaire.)
When we shape up a chip we move the card and chip together,
but when we ship out we move only the chip.
We add a third round called Trim: If a card has a chip on it, and the card
is in a spot to the left of the spot for its suit (e.g. a club in a hearts spot),
we leave the chip where it is, and swap the card underneath the chip
to where it belongs, meaning the spot for its suit in
the rack having the spade of its rank in the spades spot.
(If the chip has just moved, this is where the chip came from;
in any case it is someplace the chip has visited before.)
We repeat the Trim round until no more moves are possible, meaning that no
card that is topped by a chip is `above its station'.
This takes only a finite number of rounds (at most equal to the number of
Ship Out rounds we have done).
Then we continue: Shape Up, Ship Out, Trim, Shape Up, Ship Out, Trim, Trim, Shape Up, Ship Out, Trim, Trim, Trim,\ldots
If you try this, you'll see that it is really quite nice, though it is
annoying that we might have to wait
through an ever larger number of Trim rounds.
An alternative is to do just one Trim round, but this entails---well, try it,
and you'll see.
In any event, we aren't quite computing everything we'll need in order to
see an injection from $A$ to $B$ in the limit after only $\omega$ steps.
For that, it seems that
we might have to go beyond what you can conveniently do with just
the original deck of cards and some chips.
Like, say, add some local memory for pointers,
maybe in the form of stickers affixed to cards or racks.
There's nothing wrong with this from the point of view of distributed
synchronous computation.
It just won't be as much fun.
\subsection{Transfinite synchronous distributed computation} \label{trans}
We're using here a loose notion of transfinite computation.
Roughly speaking, we're imagining that we have a processor for each element of
$A$ and $B$, or maybe (for convenience) for each element of $n \times A$
and $n \times B$.
Each processor has some finite set of flags,
a finite set of registers that can store the name
of an element of $A$ or $B$, and a finite set of ports.
The processors can communicate by sending messages to a designated
port of another processor;
if two processors simultaneously attempt
to send to the same port of another processor,
the whole computation crashes.
We allow processors to set the flag of another processor;
if two or more processors simultaneously
try to set the same flag, there is no conflict.
Certain flags and registers may be designated as suitable for reading at limit
times $\omega, \omega \cdot 2, \ldots$.
If the state of one of these flags or the contents of one of these registers
fails to take on a limiting value, the whole computation crashes.
This is as precise as we want to get here.
Really, we're hoping to find that a suitable formulation has already
been made and explored.
It would be great if there is essentially one suitable formulation,
but we are by no means certain of this.
\section{Some history} \label{history}
Here's a brief rundown on the history of Theorems \ref{thleq} and \ref{theq}.
Theorems \ref{thleq} and \ref{theq}
follow easily from the well-ordering principle,
since then $n \times A \asymp A$ when $A$ is infinite.
The well-ordering principle is a consequence of the power set axiom
and the axiom of choice (which without the power set axiom may take
several inequivalent forms).
The effective proofs we are interested in don't use either axiom.
Briefly put,
Bernstein stated Theorem 2 in 1905 and gave a proof for $n=2$,
but nobody could make sense of what he had written about extending the
proof to the general case.
In the 20's Lindenbaum and Tarski found proofs of Theorems 2 and 1
but didn't publish them.
Lindenbaum died and Tarski forgot how their proofs went.
In the 40's Tarski found and published two new proofs of Theorem 1.
In the 90's Conway and Doyle found a proof,
after finally peeking at Tarski
\cite{tarski:cancellation};
based on what Tarski had written, they decided that their proof was
probably essentially the same as Lindenbaum and Tarski's lost proof.
For the gory details,
here is Tarski
\cite{tarski:cancellation}, from 1949.
(We'll change his notation slightly to match that used here.)
\begin{quotation}
Theorem 2 for $n=2$ was first proved by F. Bernstein
\cite[pp.\ 122 ff.]{bernstein:mengenlehre};
for the general case Bernstein gave only a rough outline of a proof,
the understanding of which presents some difficulties.
Another very elegant proof of Theorem 2 in the case $n=2$
was published later by W. Sierpinski
\cite{sierpinski:two};
and a proof in the general case was found, but not published,
by the late A. Lindenbaum
\cite[p. 305]{lindenbaumTarski:ensembles}.
Theorem 1 --- from which Theorem 2 can obviously be derived by means of the
Cantor-Bernstein equivalence theorem --- was first obtained for $n=2$
by myself,
and then extended to the general case by Lindenbaum
\cite[p. 305]{lindenbaumTarski:ensembles};
the proofs, however, were not published.
Recently Sierpinski
\cite{sierpinski:leq}
has published a proof of Theorem 1 for $n=2$.
A few years ago I found two different proofs of Theorem 1
(and hence also, indirectly, of Theorem 2).
\ldots\
The second proof is just the one which I should like to present in this paper.
It is in a sense an extension of the original proof given by me for $n=2$,
and is undoubtedly related to Lindenbaum's proof for the general case.
Unfortunately, I am not in a position to state how close this relation is.
The only facts concerning Lindenbaum's proof which I clearly remember
are the following:
the proof was based on a weaker though related result previously obtained by me,
which will be given below \ldots;
the idea used in an important part of the proof was rather similar to the one
used by Sierpinski in the above-mentioned proof of Theorem 2 for $n=2$.
Both of these facts apply as well to the proof I am going to outline.
On the other hand, my proof will be based upon a lemma \ldots, which seems
to be a new result and which may present some interest in itself;
it is, however, by no means excluded that the proof of this
lemma could have been easily obtained by analyzing Lindenbaum's argument
\end{quotation}
Observe Tarski's delicate description of Bernstein's attempts at the
general case.
Conway and Doyle
\cite{conwaydoyle:three}
were less circumspect:
\begin{quote}
We are not aware of anyone other than Bernstein himself
who ever claimed to understand the argument.
\end{quote}
Around 2010 Arie Hinkis claimed to understand Bernstein's argument
(see \cite{hinkis}).
Peter proposed to Cecil that he look into this claim for his
Dartmouth senior thesis project.
Between 2011 and 2013 we spent many long hours
trying to understand Hinkis's retelling of
Bernstein's proof,
and exploring variations on it.
In the end, we hit upon the proofs given here.
These proofs
are very much in the spirit of Bernstein's approach,
but not close enough that we believe
that Bernstein knew anything very like this,
and we remain skeptical that Bernstein ever knew a correct proof.
There are very many possible variations of Bernstein's general idea,
a lot of which seem to \emph{almost} work.
Still, these two proofs can be seen as a vindication of Bernstein's approach.
Our finding them owes everything to Hinkis's faith in Bernstein.
\section{Valediction} \label{vale}
Tarski
\cite[p.\ 78]{tarski:cancellation}
wrote that `an effective proof of [Theorems \ref{thleq} and \ref{theq}]
is by no means simple.'
We hope that you will disagree,
if not from this treatment then from Rich Schwartz's
enchanting presentation in
\cite{schwartz:four}.
The reason that division can wind up seeming simple to us is that
combinatorial arguments and algorithms are second nature to us now.
A different way of looking at the problem makes its apparent difficulties
disappear.
For an even more persuasive example of this, consider the Cantor-Bernstein
theorem,
and the confusion that accompanied it when it was new.
From a modern perspective,
you just combine the digraphs associated to the two injections
and follow your nose.
After a century of combinatorics and computer science,
it's easy to understand how Pan Galactic Division works.
Can we hope that someday folks will marvel at how hard it was to
discover it?
\section{Introduction}
Brane setups \cite{Han-Wit} have been widely attempted to
provide an alternative to algebro-geometric methods in the construction of gauge
theories (see \cite{Giveon} and references therein). The advantages of the latter
include the enlightening of important properties of manifolds such as mirror
symmetry, the provision of convenient supergravity descriptions and in instances
of pure geometrical engineering, the absence of non-perturbative objects.
The former on the other hand, give
intuitive and direct treatments of the gauge theory. One can conveniently
read out much information concerning the
gauge theory from the brane setups, such as the dimension of the Coulomb and Higgs
branches \cite{Han-Wit}, the mirror symmetry \cite{Han-Wit,Boer,Kapustin,
P-Zaf} in 3
dimensions first shown in \cite{IS},
the Seiberg-duality in 4
dimensions \cite{Elitzur}, and exact solutions
when we lift the setups from Type IIA to M Theory \cite{Mlift}.
In particular, when discussing ${\cal N}=2$ supersymmetric gauge theories in
4 dimensions, there are three known methods currently in favour.
The first method is {\it geometrical engineering}
exemplified by works in \cite{Mirror};
the second uses D3 branes as {\it probes} on orbifold singularities of
the type
$\C^2/\Gamma$ with $\Gamma$ being a finite discrete subgroup of $SU(2)$
\cite{Quiver}, and the third, the usage of {\it brane setups}.
These three approaches are related to each other by
proper T or S Dualities \cite{Karch}.
For example,
the configuration of stretching Type IIA D4 branes between $n+1$ NS5
branes placed in a
circular fashion, the so-called {\bf elliptic model}\footnote{We call it elliptic
even though
there is only an $S^1$ upon which we place the D4 branes; this is because from the
M Theory perspective, there is another direction: an $S^1$ on which we compactify to
obtain type Type IIA. The presence of two $S^1$'s makes the theory toroidal, or
elliptic.
Later we shall see how to make use of $T^2=S^1 \times S^1$ in Type IIB.
For clarity we shall refer to the former as the ${\cal N}=2$ elliptic model and the latter,
the ${\cal N}=1$ elliptic model.},
is precisely T-dual to D3 branes stacked upon ALE\footnote{
Asymptotically Locally Euclidean, i.e., Gorenstein singularities that locally represent
Calabi-Yau manifolds.} singularities of type $\widehat{A_n}$ (see \cite{Mlift,
Karl,B-Karch,Park,Erlich} for detailed discussions).
The above constructions can be easily generalised to ${\cal N}=1$
supersymmetric field theories in 4 dimensions.
Methods from geometric engineering as well as D3 branes as probes
now dictate the usage of orbifold singularities of the type
$\C^3/\Gamma$ with $\Gamma$ being a finite discrete subgroup of
$SU(3)$ \cite{Conf,Quiver1}.
A catalogue of all the discrete subgroups of $SU(3)$ in this context
is given in \cite{Han-He,Muto1}.
Now from the brane-setup point of view, there are two ways to arrive at
the theory. The first is to rotate certain branes in the configuration to
break the supersymmetry from ${\cal N}=2$ to ${\cal N}=1$ \cite{Elitzur}.
The alternative is to add
another type of NS5 branes, viz., a set of NS5$'$ branes placed
perpendicularly to the original NS5, whereby
constructing the so-called {\bf Brane Box Model} \cite{Han-Zaf,Han-Ura}.
Each of these two different approaches has its own merits.
While the former (rotating branes) facilitates the deduction of Seiberg Duality,
for the latter (Brane Box Models), it is easier to construct a class of new,
finite, chiral
field theories \cite{Han-S}. By finite we mean that the divergences in the
field theory cancel.
From the perspective of branes on geometrical singularities,
this finiteness corresponds to the
cancelation of tadpoles in the orbifold background and from that of
brane setups, it corresponds to the no-bending requirement of
the branes \cite{Karch,Han-S,Leigh}.
Indeed, as with the ${\cal N}=2$ case, we can still show
the equivalence among these
different perspectives by suitable S or T Duality transformations.
This equivalence is explicitly shown in \cite{Han-Ura} for the case of
the Abelian finite subgroups of $SU(3)$.
More precisely, for the group $Z_k \times Z_{k'}$ or $Z_k$
and a chosen decomposition
of ${\bf 3}$ into appropriate irreducible representations thereof
one can construct
the corresponding Brane Box Model that gives the
same quiver diagram as the one
obtained directly from the geometrical methods of
attack; this is what we mean by equivalence \cite{Conf}.
Indeed, we are not satisfied with the fact that this abovementioned equivalence
so far exists only for Abelian singularities and would like to see how it may be
extended to non-Abelian cases.
The aim for constructing Brane Box Models of non-Abelian finite
groups is twofold: firstly we would generate a new category of finite supersymmetric
field theories and secondly we would demonstrate how the equivalence between the
Brane Box Model and D3 branes as probes is true beyond the Abelian case and hence
give an interesting physical perspective on non-Abelian groups.
More specifically, the problem we wish to tackle is that given any finite
discrete subgroup $\Gamma$ of $SU(2)$ or $SU(3)$,
what is the brane setup (in the T-dual picture)
that corresponds to D3 branes as probes on orbifold singularities afforded by
$\Gamma$?
For the $SU(2)$ case, the answer for the $\widehat{A}$ series
was given in \cite{Mlift} and that for the $\widehat{D}$ series,
in \cite{Kapustin}, yet $\widehat{E_{6,7,8}}$ are still unsolved.
For the $SU(3)$ case, the situation is even worse.
While \cite{Han-Zaf,Han-Ura} have given solutions to the Abelian groups
$Z_k$ and $Z_k\times Z_{k'}$, the non-Abelian $\Delta$ and $\Sigma$ series
have yet to be treated.
Though it is not clear how the generalisation can be done for
arbitrary non-Abelian singularities,
it is the purpose of this writing to take one further step from \cite{Han-Zaf,Han-Ura},
and address the next simplest series of dimension three
orbifold theories, viz., those of $\C^3/Z_k \times D_{k'}$ and
construct the corresponding
Brane Box Model and show its equivalence to geometrical methods. In addition to
equivalence we demonstrate how the two pictures are bijectively related for the
group of interest and that given one there exists a unique description in the other.
The key input is given by Kutasov, Sen and Kapustin in \cite{Kapustin,Kutasov,Sen}.
Moreover \cite{Han-Zaf2}
has briefly pointed out how his results may be used, but
without showing the consistency and equivalence.
The paper is organised as follows.
In section \sref{sec:review} we shall briefly review some techniques of brane setups
and orbifold projections in the context of finite quiver theories. Section
\sref{sec:group} is then devoted to a crucial digression on the mathematical properties
of the group of our interest, or what we call $G:=Z_k \times D_{k'}$. In section
\sref{sec:BB} we construct the Brane Box Model for $G$, followed by concluding
remarks in section \sref{sec:conc}.
\section*{Nomenclature}
Unless otherwise stated, we shall, throughout our paper, adhere to the notation
that $\omega_n = e^{\frac{2 \pi i}{n}}$, the $n$th root of unity,
that $G$ refers to the group $Z_k\times D_{k'}~$, that without ambiguity
$Z_k$ denotes $\Z_k$, the cyclic group of $k$ elements, that $D_k$ is
the binary dihedral group of order $4k$ and gives the affine Dynkin
diagram of $\widehat{D}_{k+2}$,
and that $d_k$ denotes
the ordinary dihedral group of order $2k$. Moreover $\delta$ will be defined as
$(k,2k')$, the greatest common divisor (GCD) of $k$ and $2k'$.
\section{A Brief Review of $D_n$ Quivers, Brane Boxes,
and Brane Probes on Orbifolds} \label{sec:review}
The aim of our paper is to construct the Brane Box Model of the
non-Abelian finite group $Z_k\times D_{k'}~$ and to show its consistency as well as
equivalence to geometric methods. To do so,
we need to know how to read out the gauge groups and matter content
from quiver diagrams which describe a particular
field theory from the geometry side.
The knowledge for such a task is supplied in \sref{subsec:Quiver}.
Next, as mentioned in the introduction, to construct field theories
which could be encoded in the $D_k$ quiver diagram,
we need an important result from \cite{Kapustin,Kutasov,Sen}.
A brief review befitting
our aim is given in \sref{subsec:Kapustin}.
Finally in \sref{subsec:BBZZ} we present the rudiments of the
Brane Box Model.
\subsection{Branes on Orbifolds and Quiver Diagrams} \label{subsec:Quiver}
It is well-known that a stack of coincident $n$ D3 branes gives rise to an ${\cal N}=4$
$U(n)$ super-Yang-Mills theory on the four dimensional world volume. The $U(1)$ factor
of the $U(n)$ gauge group decouples when we discuss the low energy dynamics of
the field theory and can be ignored, therefore giving us an effective $SU(n)$ theory.
For ${\cal N}=4$ in 4 dimensions the R-symmetry is $SU(4)$. Under such an R-symmetry,
the fermions in the vector multiplet transform in
the spinor representation of $SU(4) \simeq Spin(6)$ and the scalars,
in the vector representation of $Spin(6)$, the universal cover of $SO(6)$.
In the brane picture we can identify the R-symmetry as the $SO(6)$
isometry group which acts on the six transverse directions of the D3-branes.
Furthermore, in the AdS/CFT picture,
this $SU(4)$ simply manifests as the $SO(6)$ isometry group of the
5-sphere in $AdS_{5}\times S^{5}$ \cite{Conf}.
We shall refer to this gauge theory of the D3 branes as the parent theory and
consider the consequences of putting the stack on geometric singularities.
A wide class of finite Yang-Mills theories of various
gauge groups and supersymmetries is obtained when the parent theory is placed on orbifold
singularities of the type $\C^m/\Gamma$ where $m=2,3$.
What this means is that we select a discrete finite group $\Gamma \subset SU(4)$
and let its irreducible representations $\{{\bf r}_i\}$ act on the Chan-Paton
indices $I,J=1,...,n$
of the D3 branes by permutation. Only those matter fields of the parent theory
that are invariant under the group action of $\Gamma$ remain, the rest are
eliminated by this so-called ``orbifold projection''.
We present the properties of the parent and the orbifolded theory in the following
diagram:
\[
\begin{array}{|l|lll|}
\hline
&$Parent Theory$ & \stackrel{\Gamma,{\rm~irreps~}=\{{\bf r}_i\}}{\longrightarrow}
&$Orbifold Theory$\\
\hline
$SUSY$ &{\cal N}=4 & &
\begin{array}{l}
{\cal N}=2, {\rm~for~} \C^2/\{\Gamma\subset SU(2)\} \\
{\cal N}=1, {\rm~for~} \C^3/\{\Gamma\subset SU(3)\} \\
{\cal N}=0, {\rm~for~} (\C^3\simeq\R^6)/\{\Gamma\subset \{SU(4)\simeq SO(6)\}\} \\
\end{array}\\
\hline
\begin{array}{c}
$Gauge$ \\
$Group$
\end{array} &U(n) & & \prod\limits_{i} SU(N_i),
{\rm~~~~~~~where~}\sum\limits_{i} N_i \dim{\bf r}_i = n\\
\hline
$Fermion$ &\Psi_{IJ}^{\bf{4}} & & \Psi_{f_{ij}}^{ij} \\
$Boson$ &\Phi_{IJ}^{\bf{6}} & & \Phi_{f_{ij}}^{ij}
{\rm~~~~~~~where~} I,J=1,...,n; f_{ij}=1,...,a_{ij}^{{\cal R}={\bf 4},{\bf 6}}\\
&&&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~{\cal R}\otimes
{\bf r}_{i}=\bigoplus\limits_{j}a_{ij}^{{\cal R}}\\
\hline
\end{array}
\]
Let us briefly explain what the above table summarises.
In the parent theory,
there are, as mentioned above, gauge bosons $A_{IJ=1,...,n}$ as singlets of $Spin(6)$,
adjoint Weyl fermions $\Psi_{IJ}^{\bf{4}}$
in the fundamental $\bf{4}$ of $SU(4)$ and adjoint scalars
$\Phi_{IJ}^{\bf{6}}$ in the antisymmetric $\bf{6}$ of $SU(4)$.
The projection is the condition that
\[
A = \gamma(\Gamma) \cdot A \cdot \gamma(\Gamma)^{-1}
\]
for the
gauge bosons and
\[
\Psi({\rm~or~}\Phi) = R(\Gamma) \cdot \gamma(\Gamma) \cdot
\Psi({\rm~or~}\Phi) \cdot \gamma(\Gamma)^{-1}
\]
for the fermions and bosons respectively
($\gamma$ and $R$ are appropriate representations of $\Gamma$).
Solving these relations by using Schur's Lemma gives the information on
the orbifold theory.
The equation for $A$ tells us that the original $U(n)$
gauge group is broken to
$\prod\limits_{i} SU(N_i)$ where $N_i$ are positive integers such that
$\sum\limits_{i} N_i \dim{\bf r}_i = n$. We point out here that henceforth
we shall use the {\it regular representation} where $n = N|\Gamma|$ for some
integer $N$ and $n_i = N \dim{\bf r}_i$. Indeed other choices are possible
and they give rise to {\it Fractional Branes}, which not only provide interesting
dynamics but are also crucial in showing the equivalence between brane setups
and geometrical engineering \cite{Dog1,Karch1}.
The equations for $\Psi$ and $\Phi$ dictate that they become bi-fundamentals
which transform
under various pairs $(N_i,\bar{N_j})$ within the product gauge group. We have
a total of
$a_{ij}^{\bf{4}}$ Weyl fermions $\Psi _{f_{ij}=1,...,a_{ij}^{\bf{4}}}^{ij}$
and $a_{ij}^{\bf 6}$ scalars $\Phi _{f_{ij}}^{ij}$
where $a_{ij}^{\cal R}$ is defined by
\begin{equation}
{\cal R}\otimes {\bf r}_i=\bigoplus\limits_{j}a_{ij}^{\cal R} {\bf r}_j
\label{aij}
\end{equation}
respectively for ${\cal R} = 4,6$.
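For the Abelian warm-up case the text cites, the matrices $a_{ij}$ can be computed directly from character orthogonality. The following Python sketch (our own illustration for $\Gamma = Z_k \subset SU(3)$ acting on $\C^3$ with weights $(a_1,a_2,a_3)$; the paper's group $Z_k\times D_{k'}~$ requires the full non-Abelian character table treated in later sections) evaluates $a_{ij} = \frac{1}{|\Gamma|}\sum_g \chi_{\bf 3}(g)\,\chi_i(g)\,\overline{\chi_j(g)}$:

```python
from cmath import exp, pi

def quiver_matrix(k, weights):
    """McKay quiver adjacency matrix a_ij for C^3 / Z_k with action
    weights (a1, a2, a3), via character orthogonality.  The irreps of
    Z_k are one-dimensional with characters chi_i(g) = w^(i*g),
    w = exp(2 pi i / k)."""
    w = lambda e: exp(2j * pi * e / k)
    a = [[0] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            s = sum(sum(w(wt * g) for wt in weights) * w(i * g) * w(-j * g)
                    for g in range(k))
            a[i][j] = round((s / k).real)
    return a
```

For $k=3$ with weights $(1,1,1)$ this reproduces the familiar $\C^3/Z_3$ quiver, with each node emitting three arrows to the next node; each row sums to $\dim{\bf 3}=3$, as it must.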
The supersymmetry of the orbifold theory is determined by analysing the
commutant of $\Gamma$ as it embeds into the parent $SU(4)$ R-symmetry.
For $\Gamma$ belonging to $SU(2)$, $SU(3)$ or the full $SU(4)$,
we respectively obtain ${\cal N}=2,1,0$. The corresponding geometric
singularities are as presented in the table.
Furthermore, the action of $\Gamma$ clearly differs for $\Gamma \subset
SU(2,3,$~or~$4)$ and the {\bf 4} and {\bf 6} that give rise to
the bi-fundamentals must be decomposed appropriately.
Generically, the number of trivial (principal) 1-dimensional irreducible representations
corresponds to the co-dimension of the singularity.
For the matter matrices $a_{ij}$, these irreducible representations
contribute $\delta_{ij}$ and therefore guarantee adjoint matter.
For example, in the case of ${\cal N}=2$, there are
2 trivial {\bf 1}'s in the {\bf 4} and for ${\cal N}=1$,
${\bf 4} = {\bf 1}_{\rm trivial} \oplus {\bf 3}$.
In our paper, we focus on the latter case since $Z_k\times D_{k'}~$
is in $SU(3)$ and gives rise to ${\cal N}=1$. Furthermore we acknowledge the
inherent existence of the trivial 1-dimensional irrep and focus on the decomposition
of the {\bf 3}.
The matrices $a_{ij}^{{\cal R}={\bf 4,6}}$ in (\ref{aij})
and the numbers $\dim{\bf r}_i$ contain
all the information about the matter fields and gauge groups of the orbifold theory.
They can be conveniently encoded into so-called {\bf quiver diagrams}.
Each node of such a diagram treated as a finite graph represents
a factor in the product gauge group
and is labeled by $\dim{\bf r}_i$. The (possibly oriented)
adjacency matrix for the graph is prescribed precisely by $a_{ij}$.
The cases of ${\cal N} = 2,1$ are done \cite{Quiver,Han-He,Muto1,Greene}
and works toward the (non-supersymmetric) ${\cal N} =0$ case are underway \cite{Su4}.
In the ${\cal N} = 2$ case, the quivers must
coincide with $ADE$ Dynkin diagrams treated
as unoriented graphs in order that the orbifold theory be finite \cite{Mirror}.
The quiver diagrams in general are suggested to be related to WZW modular
invariants \cite{Han-He,He-Song}.
This is a brief review of the construction via geometric methods and it is our
intent now to see how brane configurations reproduce examples thereof.
\subsection{$D_k$ Quivers from Branes}\label{subsec:Kapustin}
Let us first digress briefly to $A_k$ quivers from branes.
In the case of $SU(2) \supset \Gamma = \widehat{A_k} \simeq Z_{k+1}$, the quiver
theory should be represented by an affine $A_k$ Dynkin diagram, i.e., a regular
polygon with $k+1$ vertices. The gauge group is
$\prod\limits_{i} SU(N_i) \times U(1)$
with $N_i$ being a $k+1$-partition of $n$ since ${\bf r}_i$ are all
one-dimensional\footnote{The $U(1)$ corresponds to the
centre-of-mass motion and decouples from other parts of the theory
so that when we discuss
the dynamical properties, it does not contribute.}.
However, we point out that on a classical level we expect
$U(N_i)$'s from the brane perspective rather than $SU(N_i)$. It is only after
considering the one-loop quantum corrections in the field theory
(or bending in the brane picture) that we realise that
the $U(1)$ factors are frozen. This is explained in \cite{Mlift}.
On the other hand, from the point of view of D-branes
as probes on the orbifold singularity,
associated to the anomalous $U(1)$'s are field-dependent
Fayet-Iliopoulos terms, whose generation freezes the $U(1)$ factors.
These two perspectives are T-dual to each other.
Further details can be found in \cite{LRA}.
Now, placing $k+1$ NS5 branes on a circle with $N_i$ stacked D4 branes
stretched between the
$i$th and $i+1$st NS5 reproduces precisely this gauge group
with the correct bifundamentals provided by open strings ending on the adjacent
D4 branes (in the compact direction). This circular model thus furnishes the brane
configuration of an $\widehat{A_k}$-type orbifold theory and is summarised in \fref{fig:A}.
Indeed T-duality in the compact direction transforms the $k+1$ NS5 branes into
a nontrivial metric, viz., the $k+1$-centered Taub-NUT, precisely that expected
from the orbifold picture.
\begin{figure}
\centerline{\psfig{figure=A.eps,width=4.0in}}
\caption{The ${\cal N}=2$ elliptic model of D4 branes stretched between NS5 branes
to give quiver theories of the $\widehat{A_k}$ type.}
\label{fig:A}
\end{figure}
Since both the NS5 and the D4 are offsprings of the M5 brane, in the M-Theory context,
the circular configuration becomes $\R^4 \times \bar\Sigma$ in $\R^{10,1}$, where
$\bar\Sigma$ is a $k+1$-point compactification of the Riemann surface $\Sigma$
swept out by the worldvolume of the fivebrane \cite{Mlift}. The duality group, which is the
group of automorphisms among the marginal couplings that arise in the resulting field theory,
whence becomes the fundamental group of ${\cal M}_{k+1}$,
the moduli space of an elliptic curve with $k+1$ marked points.
The introduction of ON$^0$ planes facilitates the next type of ${\cal N}=2,d=4$
quiver theories, namely those encoded by affine $\widehat{D_k}$ Dynkin
diagrams \cite{Kapustin}.
The gauge group is now
$SU(2N)^{k-3} \times SU(N)^4 \times U(1)$ (here $U(1)$ decouples also, as explained before)
dictated by the Dynkin indices of
the $\widehat{D_k}$ diagrams.
There are two ways to see the $\widehat{D_k}$ quiver in the brane picture: one in
Type IIA theory and the other, in Type IIB.
Because later on in the construction of the Brane Box
Model we will use D5 branes which are in Type IIB, we will focus on Type IIB only (for
a complete description and how the two descriptions are related by T-duality,
see \cite{Kapustin}).
In this case, what we need is the ON$^0$-plane
which is the S-dual of a peculiar pair: a D5 brane on top of an O5$^-$-plane.
The one
important property of the ON$^0$-plane is that it has an orbifold description
$\R^6 \times \R^4/{\cal I}$ where
${\cal I}$ is the product of the world-sheet fermion operator $(-1)^{F_L}$ with the
parity inversion of $\R^4$ \cite{Sen}.
Let us place 2 parallel vertical ON$^0$ planes and $k-2$ NS5 branes in between and
parallel to both as in \fref{fig:D}. Between the ON$^0$ and its immediately
adjacent NS5, we stretch $2N$ D5 branes; $N$ of positive charge
on the top and $N$ of negative charge below.
Now due to the projection of the ON$^0$ plane, $N$ D5 branes of positive charge give
one $SU(N)$ gauge group and $N$ D5 branes of negative charge give
another. Furthermore, these D5 branes end on NS5 branes and the
boundary condition on the NS5 projects out the bi-fundamental hypermultiplets
of these two $SU(N)$ gauge groups
(for the rules of such projections see \cite{Kapustin}).
Moreover, between the two adjacent interior
NS5's we stretch $2N$ D5 branes, giving $SU(2N)$'s for the gauge group.
From this brane setup we immediately see
that the gauge theory is encoded in the affine Quiver diagram of
$\widehat{D_k}$.
\begin{figure}
\centerline{\psfig{figure=D.eps,width=4.0in}}
\caption{D5 branes stretched between ON$^0$ branes, interrupted by NS5 branes
to give quiver theories of the $\widehat{D_k}$ type.}
\label{fig:D}
\end{figure}
\subsection{Brane Boxes}\label{subsec:BBZZ}
We have seen in the last section that positioning appropriate branes
according to Dynkin diagrams - which for $\Gamma \subset SU(2)$ have
their adjacency matrices determined by the representation of $\Gamma$,
due to the McKay Correspondence \cite{Han-He} - brane-engineers some
orbifold theories that can be geometrically engineered. The exceptional
groups however, have so far been elusive
\cite{Kapustin}. For $\Gamma \subset SU(3)$, perhaps related to the fact
that there is not yet a general McKay Correspondence\footnote{For
Gorenstein singularities of dimension 3, only those of the Abelian
type such that 1 is not an eigenvalue of $g$ $\forall g\in \Gamma$ are isolated.
This restriction perhaps limits na\"{\i}ve brane box
constructions to Abelian orbifold groups \cite{Han-Ura}. For a discussion on
the McKay Correspondence as a ubiquitous thread, see \cite{He-Song}.}
above dimension 2, the problem becomes more subtle; brane setups have
been achieved for orbifolds of the Abelian type, a restriction that has been
argued to be necessary for consistency \cite{Han-Zaf,Han-Ura}. It is thus
the purpose of this writing to show how a group-theoretic ``twisting''
can relax this condition and move beyond Abelian theories; to this we shall
turn later.
We here briefly review the so-called $Z_k \times Z_{k'}$ elliptic brane
box model. The orbifold theory corresponds to
$\C^3 / \{ \Gamma = Z_k \times Z_{k'} \subset SU(3) \}$
and hence by arguments before we are
in the realm of ${\cal N} = 1$ super-Yang-Mills. The generators for $\Gamma$
are given, in its fundamental 3-dimensional representation\footnote{We
have chosen the directions in the transverse spacetime upon which
each cyclic factor acts; the choice is arbitrary. In the language of
finite groups, we have chosen the transitivity of the collineation sets.
The group at hand, $Z_k \times Z_{k'}$, is in fact the first example of
an intransitive subgroup of $SU(3)$.
For a discussion of finite subgroups of unitary groups, see \cite{Su4} and
references therein.}, by diagonal
matrices $diag(e^{\frac{2\pi i}{k}},e^{\frac{-2\pi i}{k}},1)$ corresponding
to the $Z_k$ which acts non-trivially on the first two coordinates of $\C^3$
and $diag(1,e^{\frac{2\pi i}{k'}},e^{\frac{-2\pi i}{k'}})$
corresponding to the $Z_{k'}$ which acts non-trivially on the
last two coordinates of $\C^3$.
Since $\Gamma$ is a direct product of Abelian groups, the representation
thereof is simply a Kronecker tensor product of the two cyclic groups.
Or, from the branes perspective, we should in a sense take a Cartesian
product or sewing between two ${\cal N}=2$ elliptic $A_{k-1}$ and $A_{k'-1}$
models discussed above,
resulting in a brane configuration on $S^1 \times S^1 = T^2$. This is
the essence of the (${\cal N}=1$ elliptic) Brane Box Model \cite{Han-Zaf,Han-Ura}.
Indeed the placement of a perpendicular set of branes breaks the supersymmetry
of the ${\cal N} = 2$ model by one more half, thereby giving the desired
${\cal N}=1$. More specifically, we place $k$ NS5 branes in the $012345$
and $k'$ NS5$'$ branes in the $012367$ directions, whereby forming
a grid of $kk'$ boxes as in \fref{fig:BBZZ}.
\begin{figure}
\centerline{\psfig{figure=BBZZ.eps,width=2.7in}}
\caption{Bi-fundamentals arising from D5 branes stretched between grids of
NS5 and NS5$'$ branes in the elliptic brane box model.}
\label{fig:BBZZ}
\end{figure}
We then stretch $n_{ij}$ D5 branes in the $012346$ directions
within the $i,j$-th box and compactify the $46$ directions (thus making
the low-energy theory on the D5 brane to be 4 dimensional).
The bi-fundamental fields are then given according to adjacent boxes
horizontally, vertically and diagonally, and the gauge group is
$(\bigotimes\limits_{i,j}SU(N))\times U(1) = SU(N)^{kk'}\times U(1)$ (here
again the $U(1)$ decouples) as expected from geometric
methods. Essentially we construct one box for each irreducible representation of
$\Gamma=Z_k \times Z_{k'}$ such that going in the 3 directions as shown
in \fref{fig:BBZZ} corresponds to the tensor decomposition of the irreducible
representation in that grid with a special ${\bf 3}$-dimensional representation
which we choose when we construct the Brane Box Model.
We therefore see the realisation of Abelian orbifold theories in dimension
3 as brane box configurations; twisted identifications of the grid can
in fact lead to more exotic groups such as $Z_k \times Z_{kk'/l}$.
More details can be found in \cite{Han-Ura}.
\section{The Group $G=Z_k \times D_{k'}$} \label{sec:group}
It is our intent now to investigate the next simplest example of intransitive
subgroups of $SU(3)$, i.e., the next infinite series of orbifold theories
in dimension 3 (For definitions on the classification of collineation groups,
see for example \cite{Su4}). This will give us a first example of a Brane Box Model
that corresponds to non-Abelian singularities.
Motivated by the $Z_k \times Z_{k'}$ treated in section
\sref{sec:review}, we let the second factor be the binary dihedral group
of $SU(2)$, or the $D_{k'}$ series (we must point out that in our notation,
the $D_{k'}$ group gives the $\widehat{D}_{k'+2}$ Dynkin diagram).
Therefore $\Gamma$ is the group
$G = Z_k \times D_{k'}$, generated by
\[
\alpha = \left( \begin{array}{ccc}
\omega_{k} & 0 & 0 \\
0 & \omega_{k}^{-1} & 0\\
0 & 0 & 1
\end{array}
\right)
~~~~~~~~
\beta = \left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & \omega_{2k'} & 0 \\
0 & 0 & \omega_{2k'}^{-1}
\end{array}
\right)
~~~~~~~~
\gamma =\left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & i \\
0 & i & 0
\end{array}
\right)
\]
where $\omega_x := e^{\frac{2 \pi i}{x}}$. We observe that indeed $\alpha$ generates
the $Z_k$ acting on the first two directions in $\C^3$ while $\beta$ and $\gamma$
generate the $D_{k'}$ acting on the second two.
We now present some crucial properties of this group $G$ which shall be used in
the next section. First we remark that the $\times$ in $G$ is really an abuse of
notation, since $G$ is certainly not a direct product of these two groups. This
is the reason why na\"{\i}ve constructions of the Brane Box Model fail, and to this
point we shall turn later.
What we really mean is that the actions on the first two and last two coordinates
in the transverse directions by these subgroups are to be construed as separate.
Abstractly, we can write the presentation of $G$ as
\begin{equation}
\alpha \beta = \beta \alpha,~~~~\beta \gamma =\gamma \beta^{-1},~~~~
\alpha^{m} \gamma \alpha^{n} \gamma =\gamma \alpha^{n} \gamma \alpha^{m}
~~~~\forall m,n \in \Z
\label{relations}
\end{equation}
These relations compel all elements in $G$ to be writable in the form
$\alpha^{m} \gamma \alpha^{\tilde{m}} \gamma^{n} \beta^{p}$. However, before discussing
the whole group, we find it very useful to discuss the subgroup generated by $\beta$ and
$\gamma$, i.e., the binary dihedral group $D_{k'}$ as a degenerate ($k=1$) case of $G$,
because the properties of the binary dihedral group turn out to be crucial for the structure
of the Brane Box Model and the meaning of ``twisting'' which we shall clarify later.
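These relations can be checked directly on the matrix generators. The following Python sketch (purely illustrative; the values of $k$ and $k'$ are arbitrary samples, not distinguished choices) verifies all three relations numerically:

```python
import cmath

def mul(A, B):
    """Multiply two 3x3 complex matrices given as tuples of tuples."""
    return tuple(tuple(sum(A[i][t]*B[t][j] for t in range(3))
                       for j in range(3)) for i in range(3))

def mpow(A, e):
    """Non-negative integer matrix power by repeated multiplication."""
    R = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    for _ in range(e):
        R = mul(R, A)
    return R

def close(A, B, tol=1e-9):
    """Entry-wise equality of two matrices up to numerical tolerance."""
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

def generators(k, kp):
    """The matrices alpha, beta, gamma of Z_k x D_{k'} given in the text."""
    wk = cmath.exp(2j*cmath.pi/k)
    w2 = cmath.exp(2j*cmath.pi/(2*kp))
    alpha = ((wk, 0, 0), (0, 1/wk, 0), (0, 0, 1))
    beta  = ((1, 0, 0), (0, w2, 0), (0, 0, 1/w2))
    gamma = ((1, 0, 0), (0, 0, 1j), (0, 1j, 0))
    return alpha, beta, gamma

k, kp = 3, 2                      # sample values; any k, k' >= 1 work
a, b, g = generators(k, kp)
assert close(mul(a, b), mul(b, a))                  # alpha beta = beta alpha
assert close(mul(b, g), mul(g, mpow(b, 2*kp - 1)))  # beta gamma = gamma beta^{-1}
for m in range(k):
    for n in range(k):
        lhs = mul(mul(mpow(a, m), g), mul(mpow(a, n), g))   # alpha^m gamma alpha^n gamma
        rhs = mul(g, mul(mpow(a, n), mul(g, mpow(a, m))))   # gamma alpha^n gamma alpha^m
        assert close(lhs, rhs)
print("all three relations hold")
```

Note that the third relation is independent of $k'$, since it involves only $\alpha$ and $\gamma$.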
\subsection{The Binary Dihedral $D_{k'} \subset G$}\label{subsec:D}
All the elements of $D_{k'}$ can be written as
$ \beta^{p}\gamma^{n}$ with $n=0,1$ and $p=0,1,...,2k'-1$, giving the
order of the group as $4k'$. We now move on to Frobenius characters.
It is easy to work out the structure of conjugate classes. We have two conjugate
classes $(1), (\beta^{k'})$ which have only one element, $(k'-1)$ conjugate classes
$(\beta^p,\beta^{-p}),p=1,..,k'-1$ which have two elements and two conjugate classes
$( \beta^{p~\rm{even}}\gamma),( \beta^{p~\rm{odd}}\gamma)$ which have $k'$ elements.
The class equation is thus as follows:
\[
4k' = 1 + 1 + (k' - 1)\cdot 2 + 2 \cdot k'.
\]
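As a quick consistency check of this class structure, the multiplication law of the binary dihedral group, $\gamma\beta = \beta^{-1}\gamma$ and $\gamma^2 = \beta^{k'}$ (both following from the $2\times 2$ matrix generators), can be implemented symbolically and the conjugacy classes enumerated by brute force. The Python sketch below (the values of $k'$ are arbitrary samples) reproduces the class equation:

```python
def d_mul(x, y, kp):
    """Product (beta^p1 gamma^n1)(beta^p2 gamma^n2) in the binary dihedral D_{k'}.

    Uses gamma beta = beta^{-1} gamma and gamma^2 = beta^{k'}.
    Elements are encoded as pairs (p, n) with p mod 2k' and n = 0, 1.
    """
    (p1, n1), (p2, n2) = x, y
    p = (p1 + (p2 if n1 == 0 else -p2)) % (2*kp)
    n = n1 + n2
    if n == 2:                            # gamma^2 = beta^{k'}
        p, n = (p + kp) % (2*kp), 0
    return (p, n)

def class_sizes(kp):
    """Sorted sizes of the conjugacy classes of D_{k'}, found by brute force."""
    G = [(p, n) for p in range(2*kp) for n in (0, 1)]
    inv = {g: next(h for h in G if d_mul(g, h, kp) == (0, 0)) for g in G}
    sizes, seen = [], set()
    for g in G:
        if g in seen:
            continue
        cls = {d_mul(d_mul(h, g, kp), inv[h], kp) for h in G}
        sizes.append(len(cls))
        seen |= cls
    return sorted(sizes)

for kp in (3, 4, 5):                                        # sample values of k'
    sizes = class_sizes(kp)
    assert sum(sizes) == 4*kp                               # class equation
    assert sizes == sorted([1, 1] + [2]*(kp - 1) + [kp, kp])  # k'+3 classes
print("class equation 4k' = 1 + 1 + 2(k'-1) + 2k' verified")
```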
Moreover there are 4 1-dimensional and $k'-1$ 2-dimensional irreducible
representations
such that the characters for the 1-dimensionals depend on the parity of $k'$.
Now we have enough facts to clarify our notation: the group $D_{k'}$ gives
$k'+3$ nodes (irreducible representations) which correspond to the Dynkin diagram of
$\widehat{D_{k'+2}}$.
We summarise the character table as follows:\\
\[
\doublerulesep 0.7pt
\begin{array}{cc}
k' \rm{even} &
\begin{array}{|c|c|c|c|c|c|c|}
\hline \hline
& C_{n=0}^{p=0} & C_{n=0}^{p=k'} & C_{n=0}^{\pm \rm{even}~p} &
C_{n=0}^{\pm \rm{odd}~p} & C_{n=1}^{\rm{even}~p} & C_{n=1}^{\rm{odd}~p} \\ \hline
|C| & 1 & 1 & 2 & 2 & k' & k' \\ \hline
\#C & 1 & 1 & \frac{k'-2}{2} & \frac{k'}{2}
& 1 & 1 \\ \hline \hline
\Gamma_1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline
\Gamma_2 & 1 & -1 & 1 & -1 & 1 & -1 \\ \hline
\Gamma_3 & 1 & 1 & 1 & 1 & -1 & -1 \\ \hline
\Gamma_4 & 1 & -1 & 1 & -1 & -1 & 1 \\ \hline
\Gamma_l & \multicolumn{4}{|c|}{(\omega_{2k'}^{lp}+
\omega_{2k'}^{-lp})~~~~l=1,..,k'-1} & 0 & 0 \\ \hline
\end{array}
\\ \\
k' \rm{odd} &
\doublerulesep 0.7pt
\begin{array}{|c|c|c|c|c|c|c|}
\hline \hline
& C_{n=0}^{p=0} & C_{n=0}^{p=k'} & C_{n=0}^{\pm \rm{even}~p} &
C_{n=0}^{\pm \rm{odd}~p} & C_{n=1}^{\rm{even}~p} & C_{n=1}^{\rm{odd}~p} \\ \hline
|C| & 1 & 1 & 2 & 2 & k' & k' \\ \hline
\#C & 1 & 1 & \frac{k'-1}{2} & \frac{k'-1}{2}
& 1 & 1 \\ \hline \hline
\Gamma_1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline
\Gamma_2 & 1 & 1 & 1 & -1 & \omega_4 & -\omega_4 \\ \hline
\Gamma_3 & 1 & 1 & 1 & 1 & -1 & -1 \\ \hline
\Gamma_4 & 1 & 1 & 1 & -1 & -\omega_4 & \omega_4 \\ \hline
\Gamma_l & \multicolumn{4}{|c|}{(\omega_{2k'}^{lp}+
\omega_{2k'}^{-lp})~~~~l=1,..,k'-1} & 0 & 0 \\ \hline
\end{array}
\\
\end{array}
\]
In the above tables, $|C|$ denotes the number of group elements in conjugate class
$C$ and $\#C$, the number of conjugate classes belonging to this type. Therefore
$\sum\limits_C \#C\cdot|C|$ should equal the order of the group. When we try to
When we try to
look for the character of the 1-dimensional irreps, we find it to be the same as the
character of the factor group $D_{k'}/N$ where $N$ is the normal subgroup generated by
$\beta^2$. This factor group is Abelian of order 4 and is different depending
on the parity of $k'$. When $k'=\rm{even}$, it
is $Z_2\times Z_2$ and when $k'=\rm{odd}$ it is $Z_4$. Furthermore, the conjugate class
$(\beta^p,\beta^{-p})$ corresponds to different elements in this factor group
depending on the parity of $p$, and we distinguish the two different cases in the
table as $C_{n=0}^{\pm \rm{odd}~p}$ and $C_{n=0}^{\pm \rm{even}~p}$.
\subsection{The whole group $G = Z_k \times D_{k'}$}
Now from (\ref{relations}) we see that all elements of $G$ can be written in the form
$\alpha^{m} \gamma \alpha^{\tilde{m}} \gamma^{n} \beta^{p}$ with
$m,\tilde{m}=0,..,k-1$, $n=0,1$ and $p=0,..2k'-1$, which we abbreviate as
$(m,\tilde{m},n,p)$. In the matrix form of our fundamental representation, they become
\[
\begin{array}{ll}
(m,\tilde{m},n=0,p)= & (m,\tilde{m},n=1,p)= \\
\left( \begin{array}{ccc}
\omega_{k}^{m+\tilde{m}} & 0 & 0\\
0 & 0 & i\omega_{k}^{-m} \omega_{2k'}^{-p} \\
0 & i\omega_{k}^{-\tilde{m}}\omega_{2k'}^{p} & 0
\end{array}
\right),
&
\left( \begin{array}{ccc}
\omega_{k}^{m+\tilde{m}} & 0 & 0 \\
0 & -\omega_{k}^{-m} \omega_{2k'}^{p} & 0 \\
0 & 0 & -\omega_{k}^{-\tilde{m}}\omega_{2k'}^{-p}
\end{array}
\right). \\
\end{array}
\]
Of course this parametrisation is not unique and there is a non-trivial orbit; we can
easily check the repeats:
\begin{equation}
\begin{array}{l}
(m,\tilde{m},n=0,p)=(m+\frac{k}{(k,2k')},\tilde{m}-\frac{k}{(k,2k')},n=0,p-\frac{2k'}
{(k,2k')}), \\
(m,\tilde{m},n=1,p)=(m+\frac{k}{(k,2k')},\tilde{m}-\frac{k}{(k,2k')},n=1,p+\frac{2k'}
{(k,2k')})
\end{array}
\label{orbit}
\end{equation}
where $(k,2k')$ denotes the greatest common divisor of $k$ and $2k'$. Dividing by the
factor of this repeat immediately gives the
order of $G$ to be $\frac{4k'k^2}{(k,2k')}$.
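This order formula can be confirmed by brute-force closure of the matrix generators. The Python sketch below (the chosen $(k,k')$ pairs are arbitrary samples) counts the distinct matrices generated by $\alpha$, $\beta$ and $\gamma$:

```python
import cmath
from math import gcd

def order_of_G(k, kp, nd=9):
    """Size of the matrix group generated by alpha, beta, gamma.

    Entries are rounded to `nd` decimals so matrices can be hashed; this is
    safe here since every entry is (up to a factor of i) a root of unity.
    """
    wk, w2 = cmath.exp(2j*cmath.pi/k), cmath.exp(2j*cmath.pi/(2*kp))
    gens = (((wk, 0, 0), (0, 1/wk, 0), (0, 0, 1)),     # alpha
            ((1, 0, 0), (0, w2, 0), (0, 0, 1/w2)),     # beta
            ((1, 0, 0), (0, 0, 1j), (0, 1j, 0)))       # gamma
    mul = lambda A, B: tuple(tuple(sum(A[i][t]*B[t][j] for t in range(3))
                                   for j in range(3)) for i in range(3))
    key = lambda A: tuple(complex(round(z.real, nd), round(z.imag, nd))
                          for row in A for z in row)
    ident = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    seen, frontier = {key(ident)}, [ident]
    while frontier:                       # closure under the generators
        A = frontier.pop()
        for g in gens:
            B = mul(A, g)
            if key(B) not in seen:
                seen.add(key(B))
                frontier.append(B)
    return len(seen)

for k, kp in ((2, 2), (3, 2), (2, 3)):    # sample values
    assert order_of_G(k, kp) == 4*kp*k*k // gcd(k, 2*kp)
print("|G| = 4k'k^2/(k,2k') confirmed on samples")
```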
We now move on to the study of the characters of the group. The details of the
conjugation automorphism, class equation and irreducible representations we shall leave to the Appendix
and the character tables we shall present below; again we have two cases,
depending on the parity of $\frac{2k'}{(k,2k')}$. First however we start with
some preliminary definitions. We define $\eta$ as a function of $n$, $p$ and
$h = 1,2,3,4$.
\begin{equation}
\begin{array}{cc}
k' = \rm{even} &
\begin{array}{ccccc}
& (n=1,p=\rm{even}) & (n=1,p=\rm{odd}) & (n=0,p=\rm{even}) & (n=0,p=\rm{odd}) \\
\eta^{1} & 1 & 1 & 1 & 1 \\
\eta^{2} & 1 & -1 & 1 & -1 \\
\eta^{3} & 1 & 1 & -1 & -1 \\
\eta^{4} & 1 & -1 & -1 & 1
\end{array}
\\
k' = \rm{odd} &
\begin{array}{ccccc}
& (n=1,p=\rm{odd}) & (n=1,p=\rm{even}) & (n=0,p=\rm{even}) & (n=0,p=\rm{odd})\\
\eta^{1} & 1 & 1 & 1 & 1 \\
\eta^{2} & 1 & -1 & \omega_4 & -\omega_4 \\
\eta^{3} & 1 & 1 & -1 & -1 \\
\eta^{4} & 1 & -1 & - \omega_4 & \omega_4
\end{array}
\\
\end{array}
\label{eta}
\end{equation}
Those two tables simply give the character tables of $Z_2\times Z_2$ and $Z_4$
which we saw in the last section.
Henceforth we define $\delta := (k,2k')$.
Furthermore, we shall let
$\Gamma^n_x$ denote an $n$-dimensional irreducible representation indexed by some (multi-index) $x$.
For $\frac{2k'}{\delta} = \rm{even}$,
there are $4k$ 1-dimensional irreducible representations indexed by $(l,h)$ with $l=0,1,..,k-1$ and
$h=1,2,3,4$ and
$k(\frac{k'k}{(k,2k')}-1)$ 2-dimensionals indexed by $(d,l)$ with
$d=1,..,\frac{k'k}{(k,2k')}-1;l=0,..,k-1$.
For $\frac{2k'}{\delta} = \rm{odd}$,
there are $2k$ 1-dimensional irreducible representations indexed by $(l,h)$ with $l=0,..,k-1;h=1,3$ and
$k(\frac{k'k}{(k,2k')}-\frac{1}{2})$ 2-dimensionals indexed by $(d,l)$
$d=1,..,\frac{k'k}{(k,2k')}-1;l=0,..,k-1$ and
$d=\frac{k'k}{(k,2k')};l=0,..,\frac{k}{2}-1$.
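A standard consistency check on these counts is that the squared dimensions of the irreducible representations must sum to the order of the group, i.e., $4k\cdot 1^2 + k(\frac{k'k}{\delta}-1)\cdot 2^2 = \frac{4k'k^2}{\delta}$ in the even case and analogously in the odd case (where $k$ is necessarily even). The short Python sketch below, with arbitrary sample values of $k$ and $k'$, confirms this:

```python
from math import gcd

def irrep_content(k, kp):
    """Counts (number of 1-dim irreps, number of 2-dim irreps) of G, per the text."""
    d = gcd(k, 2*kp)                       # delta = (k, 2k')
    if (2*kp // d) % 2 == 0:               # 2k'/delta even
        return 4*k, k*(kp*k//d - 1)
    return 2*k, k*(kp*k//d - 1) + k//2     # 2k'/delta odd (k is even here)

for k, kp in ((2, 2), (3, 2), (2, 3), (4, 3), (6, 5)):   # sample values
    ones, twos = irrep_content(k, kp)
    # Sum of squared dimensions must equal |G| = 4k'k^2/(k,2k').
    assert ones*1**2 + twos*2**2 == 4*kp*k*k // gcd(k, 2*kp)
print("irrep dimensions square-sum to the group order")
```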
Now we present the character tables.\\
{\large $\frac{2k'}{\delta} = \rm{even}$}
\[
\doublerulesep 0.7pt
\begin{array}{|c|c|c|c|}
\hline \hline
|C| & 1 & 2 & \frac{k'k}{(k,2k')} \\ \hline
\#C & 2k & k(\frac{k'k}{(k,2k')}-1) & 2k \\
\hline \hline
&
\begin{array}{c}
m = 0,..,\frac{k}{\delta}-1; ~i = 0,..,\delta-1; \\
~\tilde{m} = m + \frac{i k}{\delta}; ~n=1; \\
~p=k'-\frac{ik'}{(k,2k')},2k'-\frac{ik'}{(k,2k')}
\end{array}
&
\begin{array}{c}
m = 0,..,\frac{k}{\delta}-1; ~i = 0,..,\delta-1; ~n=1 \\
\left\{
\begin{array}{l}
s = 0,..,m-1; ~p = 0,..2k'-1;\\
~~~~~~\tilde{m} = s + \frac{i k}{\delta};\\
s = m; \mbox{and require further that} \\
~~~~~~p < (-p - \frac{2 i k'}{\delta}) \bmod (2k')\\
\end{array}
\right.
\end{array}
&
\begin{array}{c}
m = 0;\\
\tilde{m} = 0,..,k-1;\\
p = 0,1;\\
n = 0\\
\end{array} \\ \hline
\Gamma^1_{(l,h)} & \multicolumn{3}{|c|}{
\omega_{k}^{(m+\tilde{m})l} \eta^{h},~~~~~~l=0,1,..,k-1;~~h=1,..,4}
\\ \hline
\Gamma^2_{(d,l)} & \multicolumn{2}{|c|}{
\begin{array}{c}
(-1)^{d}(\omega_{k}^{-dm}\omega_{2k'}^{dp}+\omega_{k}^{-d\tilde{m}}
\omega_{2k'}^{-dp}) \omega_{k}^{(m+\tilde{m})l}\\
~~~~~~~~~~~~~d\in[1,\frac{k'k}{(k,2k')}-1];~~l\in [0,k)\\
\end{array}
}
& 0 \\ \hline
\end{array}
\]
{\large $\frac{2k'}{\delta} = \rm{odd}$}
\[
\doublerulesep 0.7pt
\begin{array}{|c|c|c|c|}
\hline \hline
|C| & 1 & 2 & \frac{k'k}{(k,2k')} \\ \hline
\#C & k & k(\frac{k'k}{(k,2k')}-\frac12) & k \\
\hline \hline
&
\begin{array}{c}
m = 0,..,\frac{k}{\delta}-1;\\
i = 0,..,\delta-1 \mbox{ and even}; \\
~\tilde{m} = m + \frac{i k}{\delta}; ~n=1; \\
~p=k'-\frac{ik'}{(k,2k')},\\~~~~~~2k'-\frac{ik'}{(k,2k')}
\end{array}
&
\begin{array}{c}
m = 0,..,\frac{k}{\delta}-1; ~i = 0,..,\delta-1; ~n=1 \\
\left\{
\begin{array}{l}
s = 0,..,m-1; ~p = 0,..2k'-1;\\
~~~~~~\tilde{m} = s + \frac{i k}{\delta};\\
s = m; \mbox{and require further that} \\
~~~~~~p < (-p - \frac{2 i k'}{\delta}) \bmod (2k')
\mbox{ for even } i\\
~~~~~~p \le (-p - \frac{2 i k'}{\delta}) \bmod (2k')
\mbox{ for odd } i\\
\end{array}
\right.
\end{array}
&
\begin{array}{c}
m = 0;\\
\tilde{m} = 0,..,k-1;\\
p = 0;\\
n = 0\\
\end{array} \\ \hline
\Gamma^1_{(l,h)} & \multicolumn{3}{|c|}{
\omega_{k}^{(m+\tilde{m})l} \eta^{h},~~~~~~l=0,1,..,k-1;~~h=1,3}
\\ \hline
\Gamma^2_{(d,l)} & \multicolumn{2}{|c|}{
\begin{array}{c}
(-1)^{d}(\omega_{k}^{-dm}\omega_{2k'}^{dp}+\omega_{k}^{-d\tilde{m}}
\omega_{2k'}^{-dp}) \omega_{k}^{(m+\tilde{m})l}\\
~~~~~~~~~~~~d\in[1,\frac{k'k}{(k,2k')}-1];~~l\in [0,k)\\
\end{array}
}
& 0 \\ \hline
\Gamma^2_{(d,l)} & \multicolumn{2}{|c|}{
\begin{array}{c}
(-1)^{d}(\omega_{k}^{-dm}\omega_{2k'}^{dp}+\omega_{k}^{-d\tilde{m}}
\omega_{2k'}^{-dp}) \omega_{k}^{(m+\tilde{m})l}\\
~~~~~~~~~~~~d=\frac{k'k}{(k,2k')};~~l\in [0,\frac{k}{2})\\
\end{array}
}
& 0 \\ \hline
\end{array}
\]
Let us explain the above tables in more detail. The third row of each table
gives representative elements of the various conjugacy classes.
The detailed description of the group elements in
each conjugacy class is given in the Appendix.
It is easy to see, by using the above character tables, that given two
elements $(m_i,\tilde{m}_i,n_i,p_i)~~i=1,2$, if they share the same
characters (as given in the last two rows), they belong to the
same conjugacy class, as is to be expected since the character is a class
function.
We can be more precise and actually write down the 2-dimensional irreducible representation
indexed by $(d,l)$ as
\begin{equation}
\begin{array}{l}
(m,\tilde{m},n=0,p)= \omega_{k}^{(m+\tilde{m})l}
\left( \begin{array}{cc}
0 & i^d \omega_{k}^{-dm} \omega_{2k'}^{-dp} \\
i^d \omega_{k}^{-d\tilde{m}} \omega_{2k'}^{dp} & 0
\end{array} \right)
\\
(m,\tilde{m},n=1,p)=\omega_{k}^{(m+\tilde{m})l}
\left( \begin{array}{cc}
(-1)^d \omega_{k}^{-dm} \omega_{2k'}^{dp} & 0 \\
0 & (-1)^d \omega_{k}^{-d\tilde{m}} \omega_{2k'}^{-dp}
\end{array} \right)
\end{array}
\label{2d}
\end{equation}
\subsection{The Tensor Product Decomposition in $G$}\label{subsec:decomp}
A concept crucial to character theory and representations
is the decomposition of tensor products into tensor sums among the
various irreducible representations, namely the equation
\[
{\bf r}_k \otimes {\bf r}_i = \bigoplus\limits_j a_{ij}^k {\bf r}_j.
\]
Not only will such an equation enlighten us as to the structure of the
group, it will also provide quintessential information to the brane box
construction to which we shall turn later. Indeed the ${\cal R}$ in
(\ref{aij}) is decomposed into direct sums of irreducible representations ${\bf r}_k$, which
by the additive property of the characters, makes the fermionic and bosonic
matter matrices $a_{ij}^{\cal R}$ ordinary sums of matrices $a_{ij}^k$.
In particular, knowing the specific decomposition of the {\bf 3},
we can immediately construct the quiver diagram prescribed by
$a_{ij}^{\bf 3}$ as discussed in \sref{subsec:Quiver}.
We summarise the decomposition laws as follows (using the multi-index notation
for the irreducible representations introduced in the previous section).
{\large $\frac{2k'}{\delta} = \rm{even}$}
\[
\begin{array}{|c|c|}
\hline
{\bf 1} \otimes {\bf 1}' & (l_1,h_1)_1 \otimes (l_2,h_2)_1 = (l_1+l_2,h_3)_1\\
& \mbox{where $h_3$ is such that $\eta^{h_1}\eta^{h_2} = \eta^{h_3}$
according to (\ref{eta}).} \\ \hline
{\bf 2} \otimes {\bf 1} & (d,l_1)_2 \otimes (l_2,h_2)_1=\left\{
\begin{array}{l}
(d,l_1+l_2)_2~~{\rm when}~~h_2=1,3. \\
(\frac{k'k}{(k,2k')}-d,l_1+l_2-d)_2~~{\rm when}~~h_2=2,4
\end{array}
\right.
\\ \hline
{\bf 2} \otimes {\bf 2}' &
\begin{array}{l}
(d_1,l_1)_2 \otimes (d_2 \le d_1,l_2)_2 = \\
~~~~~~~~~~(d_1+d_2,l_1+l_2)_2 \oplus (d_1-d_2,l_1+l_2-d_2)_2 \\
{\rm where} \\
(d_1-d_2,l_1+l_2-d_2)_2 := \\
~~~~~~~~~~(l_1+l_2-d_2,h=1)_1 \oplus (l_1+l_2-d_2,h=3)_1
{\rm~~if~~} d_1 = d_2 \\
(d_1+d_2,l_1+l_2)_2 := \\
~~~~~~~~~~(l_1+l_2,h=2)_1 \oplus (l_1+l_2,h=4)_1
{\rm~~if~~} d_1+d_2=\frac{k'k}{\delta} \\
(d_1+d_2,l_1+l_2)_2 := \\
~~~~~~~~~~(\frac{2k'k}{(k,2k')}-(d_1+d_2),(l_1+l_2)-(d_1+d_2))_2
{\rm~~if~~} d_1+d_2>\frac{k'k}{\delta} \\
\end{array}
\\ \hline
\end{array}
\]
{\large $\frac{2k'}{\delta} = \rm{odd}$}
\[
\begin{array}{|c|c|}
\hline
{\bf 1} \otimes {\bf 1}' & (l_1,h_1)_1 \otimes (l_2,h_2)_1=\left\{
\begin{array}{l}
(l_1+l_2,h=1)_1~~{\rm if}~~h_1=h_2 \\
(l_1+l_2,h=3)_1~~{\rm if}~~h_1\neq h_2
\end{array} \right.
\\ \hline
{\bf 2} \otimes {\bf 1} & (d,l_1)_2 \otimes (l_2,h_2)_1=\left\{
\begin{array}{l}
(d,l_1+l_2)_2 \\
(d,l_1+l_2-\frac{k}{2})_2 {\rm~~if~~}
d=\frac{k'k}{(k,2k')} {\rm~and~} l_1+l_2 \ge \frac{k}{2}
\end{array} \right.
\\ \hline
{\bf 2} \otimes {\bf 2}' &
\begin{array}{l}
(d_1,l_1)_2 \otimes (d_2 \le d_1,l_2)_2 = \\
~~~~~~~~~~(d_1+d_2,l_1+l_2)_2 \oplus (d_1-d_2,l_1+l_2-d_2)_2 \\
{\rm where} \\
(d_1-d_2,l_1+l_2-d_2)_2 := \\
~~~~~~~~~~(l_1+l_2-d_2,h=1)_1 \oplus (l_1+l_2-d_2,h=3)_1
{\rm~~if~~} d_1 = d_2 \\
(d_1+d_2,l_1+l_2)_2 := \\
~~~~~~~~~~(d_1 + d_2, l_1 + l_2 - \frac{k}{2})_2
{\rm~~if~~} d_1+d_2=\frac{k'k}{\delta}{\rm~and~} l_1 + l_2 \ge \frac{k}{2}\\
(d_1+d_2,l_1+l_2)_2 := \\
~~~~~~~~~~(\frac{2k'k}{(k,2k')}-(d_1+d_2),(l_1+l_2)-(d_1+d_2))_2
{\rm~~if~~} d_1+d_2>\frac{k'k}{\delta} \\
\end{array}
\\ \hline
\end{array}
\]
\subsection{$D_{\frac{kk'}{\delta}}$, an Important Normal Subgroup}\label{subsec:H}
We now investigate a crucial normal subgroup $H \triangleleft G$. The purpose
is to write $G$ as a canonical product of $H$ with the factor group formed by
quotienting $G$ thereby, i.e., as $G \simeq G/H \times H$.
The need for this rewriting of the group will
become clear in \sref{sec:BB} on the brane box construction.
The subgroup we desire is the one presented in the following:
\begin{lemma}
The subgroup
\[
H := \{(m,-m,n,p)|m=0,..,k-1;n=0,1;p=0,...,2k'-1 \}
\]
is normal in $G$ and is isomorphic to $D_{\frac{kk'}{\delta}}$.
\end{lemma}
To prove normality we use the multiplication and conjugation rules in $G$
given in the Appendix as (\ref{conj}) and (\ref{multi}).
Moreover, let $D_{\frac{kk'}{\delta}}$ be generated by $\tilde{\beta}$ and
$\tilde{\gamma}$ using the notation of \sref{subsec:D}; then the isomorphism
can be shown by the following bijection:
\[
\begin{array}{l}
(m,-m,1,p) \longleftrightarrow \tilde{\beta}^{\frac{2k'}{\delta}m-
\frac{k}{\delta}(p-k')}, \\
(m,-m,0,p) \longleftrightarrow \tilde{\beta}^{\frac{2k'}{\delta}m+
\frac{k}{\delta}p} \tilde{\gamma}.
\end{array}
\]
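The order $|H| = |D_{\frac{kk'}{\delta}}| = \frac{4kk'}{\delta}$ implicit in the lemma can be verified by counting the distinct matrices of the elements $(m,-m,n,p)$. The Python sketch below (sample values only) does so using the explicit matrix forms given earlier, recording each element by its three nonzero entries:

```python
import cmath
from math import gcd

def H_keys(k, kp, nd=9):
    """Distinct matrices of H = {(m, -m, n, p)} inside G = Z_k x D_{k'}.

    For these elements the (1,1) entry is always 1, so an element is
    determined by n and the two remaining nonzero entries, which we round
    to `nd` decimals for hashability.
    """
    wk, w2 = cmath.exp(2j*cmath.pi/k), cmath.exp(2j*cmath.pi/(2*kp))
    r = lambda z: complex(round(z.real, nd), round(z.imag, nd))
    keys = set()
    for m in range(k):
        for p in range(2*kp):
            # n = 0: off-diagonal form; n = 1: diagonal form (matrices in the text)
            keys.add((0, r(1j*wk**(-m)*w2**(-p)), r(1j*wk**m*w2**p)))
            keys.add((1, r(-wk**(-m)*w2**p), r(-wk**m*w2**(-p))))
    return keys

for k, kp in ((2, 2), (3, 2), (2, 3), (4, 6)):       # sample values
    assert len(H_keys(k, kp)) == 4*k*kp // gcd(k, 2*kp)   # = |D_{kk'/delta}|
print("|H| = 4kk'/(k,2k') in every sampled case")
```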
Another useful fact is the following:
\begin{lemma}
The factor group $G/H$ is isomorphic to $Z_k$.
\end{lemma}
This is seen by noting that $\alpha^l,l=0,1,...k-1$ can be used as representatives
of the cosets. We summarise these results into the following
\begin{proposition}
There exists another representation of $G$, namely
$Z_k \times D_{k'} \simeq Z_k \mbox{$\times\!\rule{0.3pt}{1.1ex}\,$} D_{\frac{kk'}{\delta}}$, generated by the same
$\alpha$ together with
\[
\begin{array}{cc}
\tilde{\beta}^{\frac{2k'}{\delta}m- \frac{k}{\delta}p} := (m,-m,1,p+k') = ~~~~~~~~~~
& \tilde{\gamma} := \gamma = (0,0,0,0)= ~~~~~~~~~~\\
\left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & \omega_{k}^{-m} \omega_{2k'}^{p} & 0 \\
0 & 0 & \omega_{k}^{m} \omega_{2k'}^{-p}
\end{array} \right), &
\left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & i \\
0 & i & 0 \\
\end{array} \right).
\end{array}
\]
The elements of the group can now be written as
$\alpha^a \tilde{\beta}^b \tilde{\gamma}^n$ with $a\in [0,k)$,
$b\in [0,\frac{2kk'}{\delta})$ and $n=0,1$, constrained by the presentation
\[
\{\alpha^k=\tilde{\beta}^{\frac{2kk'}{\delta}}=1,
\tilde{\beta}^{\frac{kk'}{\delta}}=\tilde{\gamma}^2=-1,
\alpha \tilde{\beta} = \tilde{\beta} \alpha , \tilde{\beta} \tilde{\gamma} =
\tilde{\gamma} \tilde{\beta}^{-1},
\alpha \tilde{\gamma} =\tilde{\beta}^{\frac{2k'}{\delta}} \tilde{\gamma} \alpha\}
\]
\end{proposition}
In the proposition, by $\mbox{$\times\!\rule{0.3pt}{1.1ex}\,$}$ we do mean the internal semi-direct product between
$Z_k$ and $H := D_{\tilde{k}} := D_{\frac{kk'}{\delta}}$, in the sense \cite{Alperin} that
(I) $G=HZ_k$ as cosets, (II) $H$ is normal in $G$ and $Z_k$ is another subgroup,
and (III) $H \cap Z_k = 1$. Now we no longer abuse the symbol $\times$ and
unambiguously use $\mbox{$\times\!\rule{0.3pt}{1.1ex}\,$}$ to show the true structure of $G$.
We remark that this representation is in some sense more natural (later we shall see
that this naturality is not only mathematical but also physical). The mathematical
naturality is seen by the lift from the normal subgroup
$H$. We will now see the exact meaning
of the ``twist'' we have mentioned before. When we include the generator
$\alpha$ and lift the normal subgroup $D_{\frac{kk'}{\delta}}$
to the whole group $G$, the structure of
conjugacy classes will generically change as well. For example, from
\begin{equation}
\alpha (\tilde{\beta}^b \tilde{\gamma}) \alpha^{-1} =
(\tilde{\beta}^{b+\frac{2k'}{\delta}} \tilde{\gamma}),
\label{add1}
\end{equation}
we see that the two different conjugacy classes $(\tilde{\beta}^{\rm{even}~b} \tilde{\gamma})$
and $(\tilde{\beta}^{\rm{odd}~b} \tilde{\gamma})$
will remain distinct if $\frac{2k'}{\delta}=\rm{even}$
and collapse into one single conjugacy class
if $\frac{2k'}{\delta}=\rm{odd}$. We formally call the latter case
{\bf twisted}. Further clarifications regarding the
structure of the conjugacy classes
of $G$ from the new points of view, especially physical, shall be most welcome.
After some algebraic manipulation, we can
write down all the conjugacy classes of $G$ in this new description.
For fixed $a$ and $\frac{2k'}{\delta}=\rm{even}$,
we have the following classes: $(\alpha^a \tilde{\beta}^{-\frac{k'}{\delta}a})$,
$(\alpha^a \tilde{\beta}^{\frac{kk'}{\delta}-\frac{k'}{\delta}a}),$
$(\alpha^a \tilde{\beta}^b, \alpha^a \tilde{\beta}^{-b-\frac{2k'}{\delta}a})$
(with $b \neq -\frac{k'}{\delta}a$ and $\frac{kk'}{\delta}-\frac{k'}{\delta}a$),
$(\alpha^b \tilde{\beta}^{p~\rm{even}} \tilde{\gamma})$ and
$(\alpha^b \tilde{\beta}^{p~\rm{odd}} \tilde{\gamma})$. The crucial point here is that,
for every value of $a$, the structure of conjugacy classes is almost the same as that
of $D_{\frac{kk'}{\delta}}$. There is a 1-1
correspondence (or the lifting without the ``twist'') as we go from the
conjugacy classes of $H$ to $G$, making it possible to use
the idea of \cite{Han-Zaf2} to construct the corresponding
Brane Box Model. We will see this point more clearly later.
On the other hand, when
$\frac{2k'}{\delta}=\rm{odd}$, for fixed $a$, the conjugacy classes are
no longer in 1-1 correspondence
between $H$ and $G$. Firstly, the last two
classes of $H$ will combine into only one of $G$. Secondly,
the classes which contain only one element (the first two in $H$) will remain
so only for $a=\rm{even}$; for $a=\rm{odd}$, they will combine into
one single class of $G$ which has two elements.
So far the case of $\frac{2k'}{\delta}=\rm{odd}$ befuddles us and we do not know
how the twist obstructs the construction of the Brane Box Model. This
twist seems to suggest quiver theories on non-affine $D_k$ diagrams because
the bifurcation on one side collapses into a single node, a phenomenon
hinted before in \cite{Han-He,Han-Zaf2}.
It is a very interesting problem which we leave to further work.
\section{The Brane Box for $Z_k\times D_{k'}~$} \label{sec:BB}
\subsection{The Puzzle}
The astute readers may have by now questioned themselves why such a long digression on
the esoterica of $G$ was done; indeed is it not enough to straightforwardly combine the
$D_{k'}$ quiver technique with the elliptic model and stack $k$ copies of Kapustin's
configuration on a circle to give the $Z_k\times D_{k'}~$ brane boxes?
Let us investigate where this na\"{\i}vet\'{e} fails. According to the discussions in
\sref{subsec:BBZZ}, one must construct one box for each irreducible representation of $G$. Let us place 2
ON$^0$ planes with $k'$ parallel NS5 branes in between as in \sref{subsec:Kapustin},
and then copy this $k$ times in the direction of the ON$^0$ and compactify that direction.
This would give us $k + k$ boxes each containing 2 1-dimensional irreducible representations corresponding
to the boxes bounded by one ON$^0$ and one NS5 on the two ends. And in the middle
we would have $k(k'-1)$ boxes each containing 1 2-dimensional irreducible representation.
Therefrom arises a paradox already! From the discussion of the group
$G=Z_k \times D_{k'}$ in \sref{sec:group}, we recall that there are
$4k$ 1-dimensional irreducible representations and $k(\frac{k'k}{(k,2k')}-1)$ 2-dimensionals if
$\frac{2k'}{\delta} = \rm{even}$ and for $\frac{2k'}{\delta} = \rm{odd}$, $2k$
1-dimensionals and $k(\frac{k'k}{(k,2k')}-\frac12)$ 2-dimensionals.
Our attempt above gives a mismatch in the number of 2-dimensionals
by a factor as large as $k$; there are far too many 2-dimensionals
of $G$ to be placed into the required $kk'$ boxes.
This mismatch tells us that such na\"{\i}ve constructions of the Brane Box Model fail.
The reason is that in this case what we are dealing with is a non-Abelian group
and the noncommutative property thereof twists the na\"{\i}ve structure
of the singularity. To correctly account for the property of the singularity
after the non-Abelian twisting, we should attack in a new direction.
In fact, the discussion of the normal
subgroup $H$ in \sref{subsec:H} is precisely the way to see the
structure of singularity more properly.
Indeed we have hinted, at least for $\frac{2k'}{\delta}=\rm{even}$,
that the na\"{\i}ve structure of the Brane Box Model can be applied again with a little
modification, i.e., with the replacement of $D_{k'}$ by $D_{\frac{kk'}{\delta}}$.
Here again we have the generator of $Z_k$ acting on the first two coordinates of
$\C^3$ and the generators of $D_{\frac{kk'}{\delta}}$ acting on the last two.
This is the subject of the next subsection where we will
give a consistent Brane Box Model for $G = Z_k \times D_{k'}$.
\subsection{The Construction of Brane Box Model}
Let us first discuss the decomposition of the fermionic {\bf 4} for which we
shall construct the brane box (indeed the model will dictate the fermion
bi-fundamentals, bosonic matter fields will be given therefrom by supersymmetry).
As discussed in \cite{Han-He} and \sref{subsec:Quiver},
since we are dealing with an ${\cal N}=1$ theory (i.e., a co-dimension
one theory in the orbifold picture), the {\bf 4} must decompose into
${\bf 1} \oplus {\bf 3}$ with the {\bf 1} being trivial. More precisely,
since $G$ has only 1-dimensional and 2-dimensional irreducible representations,
in order to obtain the correct quiver diagram corresponding to the Brane Box Model, the
{\bf 4} should go into one trivial 1-dimensional, one non-trivial 1-dimensional
and one 2-dimensional according to
\[
{\bf 4} \longrightarrow (0,1)_1 \oplus (l',h')_1 \oplus (d,l)_2.
\]
Of course we need a constraint so as to ensure that such a decomposition
is consistent with the unity-determinant condition of the matrix representation
of the groups. Since from (\ref{2d}) we can compute the determinant of the $(d,l)_2$ to be
$(-1)^{(n+1)(d+1)}\omega_{k}^{(m+\tilde{m})(2l-d)}$, the constraining condition
is $l'+2l-d\equiv 0(\bmod k)$.
In particular we choose
\begin{equation}
{\bf 3} \longrightarrow (l'=1,h'=1)_1+(d=1,l=0)_2;
\label{decomp}
\end{equation}
indeed this choice is precisely in accordance with the defining matrices of
$G$ in \sref{sec:group} and we will give the Brane Box Model corresponding to this
decomposition and check consistency.
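As a quick arithmetic check (our own illustration; the helper name is hypothetical), the particular choice (\ref{decomp}) satisfies the unity-determinant constraint $l'+2l-d\equiv 0(\bmod k)$ for every $k$:

```python
def det_condition(lp, l, d, k):
    # Unity-determinant constraint for 4 -> (0,1)_1 + (l',h')_1 + (d,l)_2:
    # l' + 2l - d = 0 (mod k)
    return (lp + 2 * l - d) % k == 0

# The choice (l'=1, h'=1)_1 + (d=1, l=0)_2 of (decomp) works for every k:
print(all(det_condition(1, 0, 1, k) for k in range(1, 100)))  # True
```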
Now we construct the brane box using the basic idea in \cite{Han-Zaf2}.
Let us focus on the case of $\delta := (k,2k')$ being even where we have
$4k$ 1-dimensional irreducible representations and $k(\frac{k'k}{(k,2k')}-1)$ 2-dimensionals.
We place 2 ON$^0$ planes vertically at two sides. Between them we place
$\frac{kk'}{\delta}$ vertically parallel NS5 branes
(which give the structure of $D_{\frac{kk'}{\delta}}$).
Next we place $k$ NS5$'$ branes horizontally (which give the
structure of $Z_k$) and identify the $k$th with the zeroth.
This gives us a grid of $k(\frac{kk'}{\delta}+1)$ boxes. Next we put $N$ D5 branes with
positive charge and $N$ with negative charge in those grids.
Under the decomposition (\ref{decomp}), we can connect the structure of singularity to
the structure of Brane Box Model by placing the irreducible representations into the grid of boxes
\`{a} la \cite{Han-Zaf,Han-Ura} as follows (the setup is shown in
\fref{fig:BBZD}).
First we place the $4k$ 1-dimensionals at the two sides such that those boxes each
contains two: at the left we have $(l'=0,h'=1)_1$ and $(l'=0,h'=3)_1$
at the lowest box and with the upper boxes containing subsequent increments on $l'$.
Therefore we have the list, in going up the boxes,
$\{ (0,1)_1~\&~(0,3)_1;
(1,1)_1 ~\&~ (1,3)_1; (2,1)_1 ~\&~ (2,3)_1; ... (k-1,1)_1 ~\&~ (k-1,3)_1\}$.
The right side has a similar list:
$\{ (0,2)_1 ~\&~ (0,4)_1;
(1,2)_1 ~\&~ (1,4)_1; (2,2)_1 ~\&~ (2,4)_1; ... (k-1,2)_1 ~\&~ (k-1,4)_1\}$.
Into the middle grids we place the 2-dimensionals, one to a box, such that the bottom
row consists of
$\{(d=1,l=0)_2,(2,0)_2,(3,0)_2,...(\frac{kk'}{\delta}-1,0)_2 \}$
from left to right. And as we go up we increment $l$ until $l=k-1$ ($l=k$ is
identified with $l=0$ due to our compactification).
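The filling just described is easy to enumerate mechanically. In the sketch below (our own; the tuple labelling is a hypothetical convention), the left and right columns carry the $4k$ one-dimensionals and the middle grids carry exactly the $k(\frac{kk'}{\delta}-1)$ two-dimensionals:

```python
from math import gcd

def place_irreps(k, kp):
    """Sketch of the grid filling of fig:BBZD (labels are ours)."""
    delta = gcd(k, 2 * kp)
    d_max = k * kp // delta        # number of NS5 branes between the ON^0 planes
    left   = [(lp, h) for lp in range(k) for h in (1, 3)]  # (l',1)_1 & (l',3)_1
    right  = [(lp, h) for lp in range(k) for h in (2, 4)]  # (l',2)_1 & (l',4)_1
    middle = [(d, l) for l in range(k) for d in range(1, d_max)]  # (d,l)_2
    return left, right, middle

left, right, middle = place_irreps(3, 5)
# 4k one-dimensionals and k(kk'/delta - 1) two-dimensionals:
print(len(left) + len(right), len(middle))  # 12 42
```

Each middle box holds one two-dimensional, so the $k(\frac{kk'}{\delta}+1)$ boxes of the grid are filled with no representation left over.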
\begin{figure}
\centerline{\psfig{figure=BBZD.eps,width=5.4in}}
\caption{The Brane Box Model for $Z_k\times D_{k'}~$. We place $d := \frac{kk'}{\delta}$
NS5 branes in between 2 parallel ON$^0$-planes and $k$ NS5$'$ branes perpendicularly
while identifying the 0th and the $k$th circularly. Within the boxes of this grid, we
stretch D5 branes, furnishing bi-fundamentals as indicated by the arrows shown.}
\label{fig:BBZD}
\end{figure}
Now we must check the consistency condition. We choose the bi-fundamental directions
according to the conventions in \cite{Han-Zaf,Han-Ura}, i.e., East, North and Southwest.
The consistency condition is that for the irreducible representation in box $i$, forming the tensor product with
the {\bf 3} chosen in (\ref{decomp}) should be the tensor sum of the irreducible representations
of the neighbours in the 3 chosen directions, i.e.,
\begin{equation}
{\bf 3} \otimes R_i = \bigoplus\limits_{j\in{\rm Neighbours}} R_j
\label{consistency}
\end{equation}
Of course this consistency condition is precisely (\ref{aij}) in a different
guise and
checking it amounts to seeing whether the Brane Box Model gives the
same quiver theory as does the geometry, whereby showing the equivalence
between the two methods.
Now the elaborate tabulation in \sref{subsec:decomp} is seen to be not in vain;
let us check (\ref{consistency}) by column in the brane box as in
\fref{fig:BBZD}.
For the $i$th entry in the leftmost column, containing $R_i=(l',1~{\rm or}~3)$,
we have $R_i \otimes {\bf 3} = (l',1~{\rm or}~3)_1 \otimes ((1,1)_1 \oplus
(1,0)_2) = (l'+1,1~{\rm or}~3)_1 \oplus (1,l')_2$. The righthand side is
precisely given by the neighbour of $i$ to the East and to the North and since
there is no Southwest neighbour, consistency (\ref{consistency}) holds for
the leftmost column. A similar situation holds for the rightmost column,
where we have ${\bf 3} \otimes (l',2~{\rm or}~4) = (l'+1,2~{\rm or}~4)_1
\oplus (\frac{kk'}{\delta}-1,l'-1)_2$, the neighbour to the North and the
Southwest.
Now we check the second column, i.e., one between the first and second NS5-branes.
For the $i$th entry $R_i = (1,l)_2$, after tensoring with the {\bf 3},
we obtain $(1,l+1)_2 \oplus (2,l)_2 \oplus ((l-1,1)_1 \oplus
(l-1,3)_1)$, which are precisely the irreducible representations in the 3 neighbours: respectively
North, East and the two 1-dimensionals in the Southwest. Whence
(\ref{consistency}) is checked. Of course a similar situation occurs for the
second column from the right where we have ${\bf 3} \otimes
(R_i = (\frac{kk'}{\delta}-1,l)_2) = (\frac{kk'}{\delta}-1,l+1)_2 \oplus
(\frac{kk'}{\delta}-2,l-1)_2 \oplus ((l,2)_1 \oplus (l,4)_1)$, or
respectively the neighbours to the North, Southwest and the East.
The final check is required of the interior box, say $R_i = (d,l)_2$.
Its tensor with {\bf 3} gives $(d,l+1)_2 \oplus (d-1,l-1)_2 \oplus
(d+1,l)_2$, precisely the neighbours to the North, Southwest and East.
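The column-by-column verification of (\ref{consistency}) can be automated. The sketch below (our own; the boundary degenerations of $(0,l)_2$ and $(\frac{kk'}{\delta},l)_2$ into pairs of 1-dimensionals are read off from the boundary columns discussed above) implements the tensor decompositions:

```python
from math import gcd

def fuse(rep, k, d_max):
    """3 (x) R under the decomposition (decomp); a sketch with our own
    labels: 1-dims are ('1', l', h), 2-dims are ('2', d, l).
    Boundary degenerations: (0,l)_2 -> (l,1)_1 + (l,3)_1 and
    (d_max,l)_2 -> (l,2)_1 + (l,4)_1."""
    def two_dim(d, l):
        l %= k
        if d == 0:
            return [('1', l, 1), ('1', l, 3)]
        if d == d_max:
            return [('1', l, 2), ('1', l, 4)]
        return [('2', d, l)]

    if rep[0] == '1':
        _, lp, h = rep
        if h in (1, 3):                    # leftmost column
            return [('1', (lp + 1) % k, h)] + two_dim(1, lp)
        return [('1', (lp + 1) % k, h)] + two_dim(d_max - 1, lp - 1)
    _, d, l = rep                          # 2-dimensional (d,l)_2
    return two_dim(d, l + 1) + two_dim(d - 1, l - 1) + two_dim(d + 1, l)

k, kp = 3, 5
d_max = k * kp // gcd(k, 2 * kp)           # = 15 here
# Interior box (2,0)_2: neighbours North (2,1), Southwest (1,2), East (3,0)
print(sorted(fuse(('2', 2, 0), k, d_max)))
# [('2', 1, 2), ('2', 2, 1), ('2', 3, 0)]
```

For the interior box $(2,0)_2$ at $k=3$, $k'=5$, the product reproduces exactly the North, Southwest and East neighbours, and the same routine reproduces the boundary-column checks above.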
\subsection{The Inverse Problem}
A natural question arises from our quest for the correspondence
between brane box constructions and branes as probes: is such a
correspondence bijective? Indeed if the two are to be related by some
T Duality or generalisations thereof, this bijection would be necessary.
Our discussions above have addressed one direction: given a $Z_k\times D_{k'}~$ singularity,
we have constructed a consistent Brane Box Model. Now we must ask
whether given such a configuration with $m$ NS5 branes between two
ON$^0$ planes and $k$ NS5$'$ branes according to \fref{fig:BBZD},
could we find a unique $Z_k\times D_{k'}~$ orbifold which corresponds thereto?
The answer fortunately is in the affirmative and is summarised in the following:
\begin{proposition}
For $\frac{2k'}{(k,2k')}$ being even\footnote{Which is the case upon which
we focus.}, there exists a bijection\footnote{Bijection in the sense that given
a quiver theory produced from one picture there exists a unique method in the
other picture which gives the same quiver.} between the Brane Box Model and
the D3 brane-probes on the orbifold for the
group $G := Z_k \times D_{k'} \cong Z_k \mbox{$\times\!\rule{0.3pt}{1.1ex}\,$} D_{m:=\frac{kk'}{(k,2k')}}$.
In particular
\begin{itemize}
\item (I) Given $k$ and $k'$, whereby determining $G$ and hence the orbifold theory,
one can construct a unique Brane Box Model;
\item (II) Given $k$ and $m$ with the condition that $k$ is a divisor of $m$,
where $k$ is the number of NS5 branes perpendicular to $ON^0$ planes and
$m$ the number of NS5 branes between two $ON^0$ planes
as in \fref{fig:BBZD}, one can extract a unique orbifold theory.
\end{itemize}
\end{proposition}
Now we have already shown (I) by our extensive discussion in the previous
sections. Indeed, given integers $k$ and
$k'$, we have twisted $G$ such that it is characterised by $k$ and
\begin{equation}
\label{addm}
m:=\frac{kk'}{(k,2k')},
\end{equation}
two numbers that uniquely fix the brane configuration.
The crux of the remaining direction (II) is the issue of whether we could,
given $k$ and $m$, ascertain the values
of $k$ and $k'$ uniquely. For if so, then our Brane Box Model, which is solely
determined by $k$ and $m$, would be uniquely mapped to a $Z_k\times D_{k'}~$ orbifold, characterised
by $k$ and $k'$. We will show below that though this is not so and $k$ and $k'$ cannot
be uniquely solved, it is still true that $G$ remains unique. Furthermore, we will
outline the procedure by which we can find convenient choices of $k$ and $k'$ that
describe $G$.
Let us analyse this problem in more detail.
First we see that $k$, which determines the $Z_k$ in $G$, remains unchanged.
Therefore our problem is further reduced to: given $m$, is there a unique
solution of $k'$ at fixed $k$? We write $k,k',m$ as:
\begin{equation}
\label{kk}
\begin{array}{c}
k=2^{q}l f_2 \\
k'=2^{p} l f_1 \\
m=2^{n} f_3
\end{array}
\end{equation}
where, having extracted all even factors, $l,f_1$ and $f_2$
are all odd integers, $l$ is the greatest common odd divisor of $k$ and $k'$, and
$f_1,f_2$ are thus coprime. What we need to know are $l,f_1$ and $p$ given $k,q,n$ and $f_3$.
The first constraint is that $\frac{2k'}{(k,2k')}=\rm{even}$, a condition on which our
paper focuses. This immediately yields the inequality $p\geq q$. The definition of
$m$ (\ref{addm}) above further gives
\[
2^{n} f_3=m=2^p l f_1 f_2= 2^{p-q}k f_1.
\]
From this equation, we can solve
\begin{equation}
\label{OI1}
p=n,~~~~~~~f_1=\frac{m}{2^{p-q}k}.
\end{equation}
Now it remains to determine $l$. However, the solution for
$l$ is not unique. For example, if we take $l=l_1 l_2$ and $(l_2,f_1)=1$, then
the following set $\{\tilde{k},\tilde{k'}\}$ will give same $k,m$:
\[
\begin{array}{c}
\tilde{k}=k=2^{q}l_1 l_2 f_2 \\
\tilde{k'}=2^{p} l_1 f_1 \\
m=2^{n} f_3
\end{array}
\]
This non-uniqueness in determining $k,k'$ from $k,m$ may at first seem discouraging.
However we shall see below that different pairs of $\{k,k'\}$ that give the
same $\{k,m\}$ must give the same group $G$.
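For instance (a numerical illustration of our own), at $k=15$ the choices $k'=3$ (with $l=3$) and $\tilde{k'}=1$ (obtained from the split $l_1=1$, $l_2=3$) give the same $m$:

```python
from math import gcd

def m_of(k, kp):
    # m = k k' / (k, 2k'), cf. (addm)
    return k * kp // gcd(k, 2 * kp)

# Two distinct values of k' giving the same (k, m), hence the same G:
print(m_of(15, 3), m_of(15, 1))  # 15 15
```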
We first recall that $G$ can be written as
$Z_k \mbox{$\times\!\rule{0.3pt}{1.1ex}\,$} D_{m=\frac{kk'}{(k,2k')}}$.
For fixed $k,m$ the two subgroups $Z_k$ and $D_m$ are the same. For the whole group
$Z_k \mbox{$\times\!\rule{0.3pt}{1.1ex}\,$} D_{m=\frac{kk'}{(k,2k')}}$ to be unique no matter which $k'$ we choose, we just need to show that the algebraic relation which generates
$Z_k \mbox{$\times\!\rule{0.3pt}{1.1ex}\,$} D_{m=\frac{kk'}{(k,2k')}}$ from $Z_k$ and $D_m$ is the same. For that,
we recall from the proposition
in section \sref{subsec:H}, that in twisting $G$ into its internal semi-direct form,
the crucial relation is
\[
\alpha \tilde{\gamma}=\tilde{\beta}^{\frac{2k'}{(k,2k')}} \tilde{\gamma} \alpha
\]
Indeed we observe that $\frac{k'}{(k,2k')}= \frac{m}{k}$, where the condition
that $k$ is a divisor of $m$ makes the expression meaningful. Whence given $m$ and $k$,
the presentation of $G$ as $Z_k \mbox{$\times\!\rule{0.3pt}{1.1ex}\,$} D_m$ is uniquely fixed, and hence $G$
is uniquely determined. This concludes our demonstration for the above proposition.
Now the question arises as to what values of $k$ and $k'$ result in the
same $G$ and how the smallest pair (or rather, the smallest $k'$ since
$k$ is fixed) may be selected. In fact our discussion
above prescribes a technique of finding such a pair. First we
solve $p,f_1$ using (\ref{OI1}), then we find the largest factor $h$ of $k$ which
satisfies $(h,f_1)=1$. The smallest value of $k'$ is then such that
$l=\frac{k}{h}$ in (\ref{kk}).
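This procedure can be summarised in code (our own sketch; `smallest_kp` is a hypothetical helper name and the 2-adic bookkeeping is ours):

```python
from math import gcd

def smallest_kp(k, m):
    """Recover the smallest k' with m = k k'/(k,2k'); assumes k | m and
    the 2k'/(k,2k') even case. Sketch of the recipe around (OI1)."""
    assert m % k == 0
    q = (k & -k).bit_length() - 1      # 2-adic valuation of k
    n = (m & -m).bit_length() - 1      # 2-adic valuation of m
    p = n
    f1 = m // (2 ** (p - q) * k)       # f_1 = m / (2^{p-q} k)
    h = k                              # largest factor of k coprime to f1
    g = gcd(h, f1)
    while g > 1:
        h //= g
        g = gcd(h, f1)
    l = k // h
    return 2 ** p * l * f1             # k' = 2^p l f_1

def m_of(k, kp):
    return k * kp // gcd(k, 2 * kp)

print(smallest_kp(15, 15))             # 1
print(m_of(15, smallest_kp(15, 15)))   # 15
```

In the example above, $(k,m)=(15,15)$ yields the minimal choice $k'=1$, which indeed reproduces $m=15$.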
Finally, we wish to emphasize
that the bijection we have discussed is not true for arbitrary $\{m,k\}$ and we
require that $k$ be a divisor of $m$ as is needed in demonstration of the
proposition. Indeed, given $m$ and $k$ which do not satisfy
this condition, the 1-1 correspondence between the Brane Box Model and the
orbifold singularity is still an enigma and will be
left for future labours.
\section{Conclusions and Prospects} \label{sec:conc}
We have briefly reviewed some techniques in two contemporary
directions in the construction
of gauge theories from branes, namely branes as geometrical probes on orbifold
singularities or as constituents of configurations of D branes stretched between
NS branes.
Some rudiments in the orbifold procedure,
in the brane setup of
${\cal N}=2$ quiver theories of the $\widehat{D_k}$ type as well as in the
${\cal N}=1$ $Z_k \times Z_{k'}$ Brane Box Model have been introduced.
Thus inspired, we have
constructed the Brane Box Model for an infinite series of
non-Abelian finite subgroups of $SU(3)$, by combining some methodology
of the aforementioned brane setups.
In particular, we have extensively studied the properties, especially the
representation and character theory of the intransitive collineation group
$G := Z_k \times D_{k'} \subset SU(3)$,
the next simplest group after $Z_k \times Z_{k'}$ and a natural extension thereof.
From the geometrical perspective, this amounts to the study of Gorenstein
singularities of the type $\C^3 / G$ with the $Z_k$ acting on the first two
complex coordinates of $\C^3$ and $D_{k'}$, the last two.
We have shown why na\"{\i}ve Brane Box constructions for $G$ fail (and indeed
why non-Abelian groups in general may present difficulties). It is only after
a ``twist'' of $G$ into a semi-direct product form $Z_k \mbox{$\times\!\rule{0.3pt}{1.1ex}\,$} D_{\frac{kk'}{(k,2k')}}$,
an issue which only arises because of the non-Abelian nature of $G$, that
the problem may be attacked. For $\frac{2k'}{(k,2k')}$ even, we have successfully
established a consistent Brane Box Model. The resulting gauge theory is that of
$k$ copies of $\widehat{D}$-type quivers circularly arranged (see \fref{fig:BBZD}).
However for $\frac{2k'}{(k,2k')}$ odd, a degeneracy occurs and we seem to arrive at
ordinary (non-Affine) $D$ quivers, a phenomenon hinted at by some previous
works \cite{Han-Zaf2,Han-He} but which still remains elusive. Furthermore, we have
discussed the inverse problem, i.e., whether given a configuration of the
Brane Box Model we could find the corresponding branes as probes on orbifolds.
We have shown that when $k$ is a divisor of $m$
the two perspectives are bijectively related
and thus the inverse problem can be solved.
For general $\{m,k\}$, the answer of the inverse problem is still not clear.
Many interesting problems arise and are open. Apart from clarifying the physical
meaning of ``twisting'' and hence perhaps treat the $\frac{2k'}{(k,2k')}$ odd case,
we can try to construct Brane Boxes for more generic non-Abelian groups.
Moreover, marginal couplings and duality groups thereupon may be extracted
and interpreted as brane motions; this is of particular interest because
toric methods from geometry so far have been restricted to Abelian singularities.
Also, recently proposed brane diamond models \cite{Aganagic} may be combined with
our techniques to shed new insight. Furthermore during the preparation of this
manuscript, a recent paper that deals with brane configurations for $\C^3/\Gamma$
singularities for non-Abelian $\Gamma$ (i.e.\ the $\Delta$ series in $SU(3)$)
by $(p,q)$5-brane webs has come to our attention \cite{Muto2}.
We hope that our construction, as the Brane Box Model realisation of a non-Abelian
orbifold theory in dimension 3, may lead to insight in these various directions.
\section*{Acknowledgements}
{\it Catharinae Sanctae Alexandriae et Ad Majorem Dei Gloriam...\\}
We would like to extend our sincere gratitude to A. Kapustin,
A. Karch, A. Uranga
and A. Zaffaroni for fruitful discussions. Furthermore we
would like to thank O. DeWolfe, L. Dyson, J. Erlich, A. Naqvi,
M. Serna and J. S. Song for their suggestions and help. YHH is also obliged
to N. Patten and S. Mcdougall for charming diversions.
\section*{Appendix}
Using the notation introduced in \sref{sec:group}, we see that the
conjugation within $G$ gives
\begin{equation}
(q,\tilde{q},\tilde{n},k)^{-1}(m,\tilde{m},n,p) (q,\tilde{q},\tilde{n},k)=
\left\{
\begin{array}{l}
(\tilde{m}+q-\tilde{q},m-q+\tilde{q},n,2k-p) $ for $ n=0,\tilde{n}=0 \\
(m-q+\tilde{q}, \tilde{m}+q-\tilde{q},n,2k+p) $ for $ n=0,\tilde{n}=1 \\
(\tilde{m},m,n,-p) $ for $ n=1,\tilde{n}=0 \\
(m,\tilde{m},n,p) $ for $ n=1,\tilde{n}=1.
\end{array}
\right.
\label{conj}
\end{equation}
Also, we present the multiplication rules in $G$ for reference:
\begin{eqnarray}
(m,\tilde{m},0,p_1)(n,\tilde{n},0,p_2) & = & (m+\tilde{n},\tilde{m}+n,1,p_2-p_1)
\nonumber \\
(m,\tilde{m},0,p_1)(n,\tilde{n},1,p_2) & = & (m+\tilde{n},\tilde{m}+n,0,p_2+p_1-k')
\nonumber \\
(m,\tilde{m},1,p_1)(n,\tilde{n},0,p_2) & = & (m+n,\tilde{m}+\tilde{n},0,p_2-p_1-k')
\nonumber \\
(m,\tilde{m},1,p_1)(n,\tilde{n},1,p_2) & = & (m+n,\tilde{m}+\tilde{n},1,p_2+p_1-k')
\label{multi}
\end{eqnarray}
First we focus on the conjugacy class of elements such that $n=0$.
From (\ref{orbit}) and (\ref{conj}), we see that if two
elements are within the same conjugacy class, then they must have the same
$m+\tilde{m} \bmod k$.
Now we need to distinguish between two cases:
\begin{itemize}
\item (I) if $\frac{2k'}{(k,2k')}=\rm{even}$, the orbit conditions conserve
the parity of $p$, making even and odd $p$ belong to different conjugacy classes;
\item (II) if $\frac{2k'}{(k,2k')}=\rm{odd}$, the orbit conditions do not preserve the
parity of $p$ and we find that all $p$ belong to the same conjugacy class
provided they have the same value of $m+\tilde{m}$.
\end{itemize}
In summary then, for $\frac{2k'}{(k,2k')}=\rm{even}$, we have $2k$
conjugacy classes each of which has $\frac{k'k}{(k,2k')}$ elements;
for $\frac{2k'}{(k,2k')}=\rm{odd}$, we have $k$
conjugacy classes each of which has $\frac{2k'k}{(k,2k')}$ elements.
Next we analyse the conjugacy class corresponding to $n=1$.
For simplicity, we divide the interval $[0,k)$ into $(k,2k')$ subintervals and define
\[
V_{i} = \left[\frac{ik}{(k,2k')},\frac{(i+1)k}{(k,2k')}\right)
\]
with $i=0,...,(k,2k')-1$.
Now from (\ref{orbit}), we can always fix $m$ to belong to $V_{0}$.
Thereafter, $\tilde{m}$ and $p$ can change freely within $[0,k)$ and $[0,2k')$ respectively.
Again, we have two different cases.
(I) If $\frac{2k'}{(k,2k')}=\rm{even}$, for every subinterval $V_{i}$
we have $2k_{0}$ (we define $k_0:=\frac{k}{(k,2k')}$) conjugacy classes each containing
only one element, namely,
\[
(m,\tilde{m}=m+\frac{ik}{(k,2k')},n=1,p=k'-\frac{ik'}{(k,2k')} {\rm ~or~}
2k'-\frac{ik'}{(k,2k')}).
\]
Also we have a total of
$k_{0}\frac{2k'-2}{2}+\frac{k_{0}(k_{0}-1)}{2}
2k'=k_{0}(k'-1)+k'k_{0}^2-k_{0}k'=k'k_{0}^2-k_{0}$
conjugacy classes of 2 elements, namely
$(m,\tilde{m},n=1,p)$ and
$(\tilde{m}-\frac{ik}{(k,2k')},m+\frac{ik}{(k,2k')},n=1,-p-i\frac{2k'}{(k,2k')})$.
Indeed, the total number of conjugacy classes is
$2k+(k,2k')(2k_{0})+(k,2k')(k'k_{0}^2-k_{0})=4k+k(\frac{k'k}{(k,2k')}-1)$, giving
the order of $G$ as expected.
Furthermore, there are $4k$ 1-dimensional irreducible representations
and $k(\frac{k'k}{(k,2k')}-1)$ 2-dimensional irreducible representations. This is consistent since
$\sum_i\dim{\bf r}_i = 1^2\cdot 4k+2^2\cdot k(\frac{k'k}{(k,2k')}-1)=
\frac{4k'k^2}{(k,2k')} = |G|$.
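This dimension count is easily verified mechanically (our own sketch):

```python
from math import gcd

def check_dim_sum(k, kp):
    # sum of (dim r_i)^2 over the irreps equals |G| = 4 k' k^2 / (k, 2k')
    delta = gcd(k, 2 * kp)
    n1 = 4 * k                          # 1-dimensional irreps
    n2 = k * (k * kp // delta - 1)      # 2-dimensional irreps
    order = 4 * kp * k * k // delta
    return 1 * n1 + 4 * n2 == order

print(all(check_dim_sum(k, kp) for k in range(1, 12) for kp in range(1, 12)
          if (2 * kp // gcd(k, 2 * kp)) % 2 == 0))  # True
```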
We summarize case (I) into the following table:
\[
\begin{array}{c|c|c|c}
& C_{n=0}^{m+\tilde{m}(\bmod k),p=\rm{odd} / \rm{even}} &
C_{n=1,V_{i}}^{\tilde{m}=m+\frac{ik}{(k,2k')},p=
(k'-\frac{ik'}{(k,2k')}) /
(2k'-\frac{ik'}{(k,2k')})}
& C_{n=1,V_{i}}^{(m,\tilde{m},p)=
(\tilde{m}-\frac{ik}{(k,2k')},
m+\frac{ik}{(k,2k')},-p-i\frac{2k'}{(k,2k')})} \\ \hline
|C| & \frac{k'k}{(k,2k')} & 1 & 2 \\ \hline
\#C & 2k & 2k & k(\frac{k'k}{(k,2k')}-1)
\end{array}
\]
Now let us treat case (II), where $\frac{2k'}{(k,2k')}$ is odd (note that
in this case we must have $k$ even). Here, for $V_{i}$ and $i$ even, the situation
is as in (I), but for $i$ odd there are no one-element conjugacy classes. We tabulate the
conjugacy classes in the following:
\[
\begin{array}{c|c|c|c}
& C_{n=0}^{m+\tilde{m}(\bmod k),\rm{any~}p} &
C_{n=1,V_{i},i=\rm{even}}^{\tilde{m}=m+\frac{ik}{(k,2k')},
p=(k'-\frac{ik'}{(k,2k')})
/(2k'-\frac{ik'}{(k,2k')})}
& C_{n=1,V_{i}}^{(m,\tilde{m},p)=
(\tilde{m}-\frac{ik}{(k,2k')},
m+\frac{ik}{(k,2k')},-p-i\frac{2k'}{(k,2k')})} \\ \hline
|C| & \frac{2k'k}{(k,2k')} & 1 & 2 \\ \hline
\#C & k & 2\frac{k}{2}=k & \frac{(k,2k')}{2}[(k'k_{0}^2-k_{0})+k'k_{0}^2]=
k(\frac{k'k}{(k,2k')}-\frac{1}{2})
\end{array}
\]
\section{Introduction}
Persistent homology is a way of quantifying the topology of a function.
Given a function $f : X \to \Rspace$, persistence scans the homology of the sublevel
sets $f^{-1}(-\infty, r]$ as $r$ varies from $-\infty$ to $\infty$.
As it scans, homology appears and homology disappears.
This history of births and deaths is recorded as a \emph{persistence diagram} \cite{CSEdH}
or a \emph{barcode} \cite{ZC2005}.
What makes persistence special is that the persistence diagram of $f$ is stable
under arbitrary perturbations of $f$.
This is the celebrated \emph{bottleneck stability} of Cohen-Steiner, Edelsbrunner, and Harer \cite{CSEdH}.
Bottleneck stability makes persistent homology a useful tool in data analysis and in pure mathematics.
All of this is in the setting of vector spaces where each homology group is computed using coefficients in a field.
Fix a field $\field$ and let $\Vect$ be the category of $\field$-vector spaces.
As persistence scans the sublevel sets of $f$, it records its homology
as a functor $\Ffunc : (\Rspace, \leq) \to \Vect$ where
$\Ffunc(r) := \Hfunc_\ast \big( f^{-1}(-\infty, r]; \field \big)$ and
$\Ffunc( r \leq s) : \Ffunc(r) \to \Ffunc(s)$ is the map induced by the inclusion of the
sublevel set at $r$ into the sublevel set at $s$.
The functor $\Ffunc$ is called the \emph{persistence module} of~$f$.
Assuming some tameness conditions on $f$, the persistence diagram of $\Ffunc$ is equivalent to its barcode,
but the two definitions are very different.
The \emph{barcode} of $\Ffunc$ is its list of indecomposables.
This list is unique up to a permutation and furthermore, each indecomposable is an
\emph{interval persistence module} \cite{ZC2005, Carlsson2010, Boevey}.
The barcode model is how most people now think about persistence.
However in \cite{CSEdH} where bottleneck stability was first proved, the persistence diagram
is defined as a purely combinatorial object.
The \emph{rank function} of $\Ffunc$ assigns to each pair of values $r \leq s$
the rank of the map $\Ffunc(r \leq s)$.
The M\"obius inversion of the rank function is the \emph{persistence diagram} of $\Ffunc$.
Remarkably, these two very different approaches to persistence give equivalent answers.
The persistence diagram of \cite{CSEdH}
easily generalizes \cite{patel} to the setting of constructible persistence modules valued
in any skeletally small abelian category $\Ccat$.
The rank function of such a persistence module records the image
of each $\Ffunc(r \leq s)$ as an element of the Grothendieck group
of $\mathcal{C}$.
Here we are using the Grothendieck group of an abelian category: this is the abelian group
with one generator for each isomorphism class of objects and one relation for each short exact sequence.
The persistence diagram of $\Ffunc$ is then the M\"obius inversion of this rank function.
A weak form of stability was shown in \cite{patel}.
In this paper, we prove bottleneck stability.
Our proof is an adaptation of the proofs of \cite{CSEdH} and~\cite{crazy_persistence}.
We were hoping that the M\"obius inversion model for persistence would lead to a
good theory of persistence for multiparameter persistence modules
$\Ffunc : (\Rspace^k, \leq) \to~\Ccat$ \cite{Carlsson2009,Lesnick2015}.
The M\"obius inversion applies to arbitrary finite posets.
Assuming some finiteness conditions on $\Ffunc$, we may define its persistence diagram
as the M\"obius inversion of its rank function.
The proof of bottleneck stability presented in this paper requires positivity of the persistence diagram;
see Proposition \ref{prop:positivity}.
Unfortunately, there are simple examples of multiparameter persistence modules
whose persistence diagrams are not positive.
Therefore the proof presented here does not generalize to the multiparameter setting.
It seems that the M\"obius inversion model for persistence works well only in the setting of
one-parameter constructible persistence modules.
\section{Persistence Modules}
Fix a skeletally small abelian category $\mathcal{C}$.
By skeletally small, we mean that the collection of isomorphism classes of objects in $\mathcal{C}$
is a set.
For example, $\mathcal{C}$ may be the category of finite dimensional $\mathsf{k}$-vector spaces,
the category of finite abelian groups, or the category of finite length $R$-modules.
Let $\bar{\Rspace} := \Rspace \cup \{ \infty \}$ be the totally ordered set of real numbers with the point
$\infty$ satisfying $p < \infty$ for all $p \in \Rspace$.
For any $p \in \bar \Rspace$, we let $\infty + p = \infty$.
\begin{defn}
A \define{persistence module} is a functor $\Ffunc : \Rspace \to \mathcal{C}$.
Let
$$S=\{s_1 < s_2 < \cdots < s_k < \infty \} \subseteq \bar \Rspace$$ be a finite subset.
A persistence module $\Ffunc$ is \define{$S$-constructible} if it satisfies the following conditions:
\begin{itemize}
\item For $p \leq q < s_1$, $\Ffunc(p \leq q) : 0 \to 0$ is the zero map.
\item For $s_i \leq p \leq q< s_{i+1}$, $\Ffunc(p \leq q)$ is an isomorphism.
\item For $s_k \leq p \leq q \leq \infty$, $\Ffunc(p \leq q)$ is an isomorphism.
\end{itemize}
\end{defn}
For example, let $f : M \to \Rspace$ be a Morse function on a compact manifold $M$.
The function $f$ filters $M$ by sublevel sets $M_{\leq r}^f := \{ p \in M \; |\; f(p) \leq r \}$.
For every $r \leq s$, $M_{\leq r}^f \subseteq M_{\leq s}^f$.
Now apply homology with coefficients in a finite abelian group.
The result is a persistence module of finite abelian groups that is constructible
with respect to the set of critical values of $f$ union $\{ \infty\}$.
If one applies homology with coefficients in a field $\mathsf{k}$, then the result
is a constructible persistence module of finite dimensional $\mathsf{k}$-vector spaces.
In topological data analysis, one usually starts with a constructible filtration of a finite simplicial complex.
There is a natural distance between persistence modules called the \emph{interleaving
distance} \cite{proximity}.
For any $\ee \geq 0$, let $\Rspace \times_\ee \{ 0, 1\}$ be the poset
$\big( \Rspace \times \{0\} \big) \cup \big( \Rspace \times\{1\} \big)$
where $(p,t) \leq (q,s)$ if
\begin{itemize}
\item $t=s$ and $p \leq q$, or
\item $t \neq s$ and $p + \ee \leq q$.
\end{itemize}
Let $\iota_0,\iota_1: \Rspace \hookrightarrow \Rspace \times_\ee \{ 0, 1\}$ be the poset maps
$\iota_0: p \mapsto (p,0)$ and $\iota_1: p \mapsto(p,1)$.
\begin{defn}
\label{defn:interleaving}
An \define{$\ee$-interleaving} between two constructible persistence modules
$\Ffunc$ and $\Gfunc$ is a functor~$\Phi$ that makes the following diagram commute
up to a natural isomorphism:
\begin{equation}
\label{dgm:interleaving}
\begin{gathered}
\xymatrix{
& \Rspace \times_\ee \{ 0, 1\} \ar@{-->}[dd]^{\Phi} \\
\Rspace \ar@{^{(}->}[ru]^{\iota_0} \ar[rd]_{\Ffunc} && \Rspace \ar@{^{(}->}[lu]_{\iota_1} \ar[ld]^{\Gfunc} \\
& \mathcal{C}. &
}
\end{gathered}
\end{equation}
Two constructible persistence modules $\Ffunc$ and $\Gfunc$ are \define{$\ee$-interleaved} if there is an
$\ee$-interleaving between them.
The \define{interleaving distance} $d_I(\Ffunc,\Gfunc)$ between $\Ffunc$ and $\Gfunc$ is the infimum
over all $\ee\geq 0$
such that $\Ffunc$ and $\Gfunc$ are $\ee$-interleaved.
This infimum is attained since both $\Ffunc$ and $\Gfunc$ are constructible.
If $\Ffunc$ and $\Gfunc$ are not interleaved, then we let $d_I(\Ffunc,\Gfunc)=\infty$.
\end{defn}
\begin{prop}[Interpolation]
\label{prop:interpolation}
Let $\Ffunc$ and $\Gfunc$ be two $\ee$-interleaved constructible persistence modules.
Then there exists a one-parameter family of constructible persistence modules
$\{ \Kfunc_t \}_{t \in [0,1]}$
such that $\Kfunc_0 \cong \Ffunc$, $\Kfunc_1 \cong \Gfunc$, and
$d_I ( \Kfunc_t, \Kfunc_s ) \leq \ee |t-s|$.
\end{prop}
\begin{proof}
Let $\Ffunc$ and $\Gfunc$ be $\ee$-interleaved by $\Phi$ as in Definition \ref{defn:interleaving}.
Define $ \Rspace \times_\ee [0,1]$ as the poset with the underlying set $ \Rspace\times[0,1]$
and $(p,t) \leq (q,s)$ whenever $p+\ee |t-s| \leq q$.
Note that $\Rspace \times_\ee \{ 0, 1\}$ naturally embeds into $ \Rspace \times_\ee [0,1]$ via
$\iota:(p,t) \mapsto (p,t)$.
See Figure~\ref{fig:interpolation}.
Finding $\{\Kfunc_t \}_{t\in [0,1]}$ is equivalent to finding a functor $\Psi$ that makes the following diagram
commute up to a natural isomorphism:
\begin{center}
\begin{tikzcd}
\Rspace \times_\ee \{ 0, 1\} \ar[rr,"\Phi"]\ar[d,hook,"\iota"'] && \Ccat\\
\Rspace \times_\ee[0,1] . \ar[urr,"\Psi"',dashed]
\end{tikzcd}
\end{center}
This functor $\Psi$ is the right Kan extension of $\Phi$ along $\iota$ for which
we now give an explicit construction.
For convenience, let $\Pcat := \Rspace \times_\ee \{ 0, 1\}$ and
$\Qcat := \Rspace\times_\ee[0,1]$.
For $(p,t) \in \Qcat$, let $\Pcat \uparrow
(p,t)$ be the subposet of $\Pcat$ consisting of all elements
$(p', t') \in \Pcat$ such that $(p, t) \leq (p',t')$.
The poset $\Pcat \uparrow (p,t)$, for any $p \in \Rspace$ and $t \notin \{ 0, 1 \}$, has two minimal elements: $(p + \ee t , 0)$ and $\big(p + \ee (1-t), 1 \big)$.
\begin{figure}
\centering
\includegraphics[scale = 0.90]{interpolation}
\caption{An illustration of the poset relation on $ \Rspace \times_\ee [0,1]$.}
\label{fig:interpolation}
\end{figure}
For $t \in \{ 0, 1 \}$, the poset $\Pcat \uparrow (p,t)$ has one minimal element,
namely $(p,t)$.
Let $\Psi \big( (p,t) \big) := \lim \Phi |_{\Pcat \uparrow (p,t)}$.
For $(p,t) \leq (q,s)$, the poset $\Pcat \uparrow (q,s)$ is a subposet of
$\Pcat \uparrow (p,t)$.
This subposet relation allows us to define the morphism $\Psi \big( (p,t) \leq (q,s) \big)$
as the universal morphism between the two limits.
Note that $\Psi \big( (p,0) \big)$ is isomorphic
to $\Ffunc(p)$ and $\Psi \big( (p,1) \big)$ is isomorphic to $\Gfunc(p)$.
We now argue that each persistence module $\Kfunc_t := \Psi(\cdot, t)$ is constructible.
As we increase $p$ while keeping $t$ fixed, the limit $\Kfunc_t(p)$ changes only when one of the
two minimal objects of $\Pcat \uparrow (p,t)$ changes isomorphism type.
Since $\Ffunc$ and $\Gfunc$ are constructible,
there are only finitely many such changes to the isomorphism type of $\Kfunc_t(p)$.
\end{proof}
\section{Persistence Diagrams}
Fix an abelian group $\Ggroup$ with a translation invariant partial ordering $\preceq$.
That is, for all $a, b, c \in \Ggroup$, if $a \preceq b$, then $a + c \preceq b + c$.
Roughly speaking, a persistence diagram assigns to each interval
of the real line an element of $\Ggroup$.
In our setting, only finitely many intervals will have a nonzero value.
\begin{defn}
Let $\Dgm$ be the \define{poset of intervals} consisting of the following data:
\begin{itemize}
\item The objects of $\Dgm$ are intervals $[p,q) \subseteq \bar\Rspace$
where $p \leq q$.
\item The ordering is inclusion $[p_2,q_2) \subseteq [p_1,q_1)$.
\end{itemize}
Given a finite set $S=\{s_1< s_2< \cdots <s_k < \infty\}\subseteq\bar{\Rspace}$, we use
$\Dgm(S)$ to denote the subposet of $\Dgm$ consisting of all intervals $[p,q)$
with $p,q \in S$.
The \define{diagonal} $\Delta \subseteq \Dgm$ is the subset of intervals of the form
$[p,p)$.
See Figure \ref{fig:dgm}.
\end{defn}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.80]{Dgm.pdf}
\end{center}
\caption{An interval $I = [p,q)$ is visualized as the point $(p,q)$ in the plane.
The poset $\Dgm$ is therefore the set of points in the plane
on and above the diagonal.
In this example, $S = \{ s_1 < s_2 < s_3 < s_4 < \infty\}$ and
$\Dgm(S)$ is its set of grid points.
Given an $S$-constructible persistence module $\Ffunc$ and an interval $[s_2, s_3)$,
$\tilde{\Ffunc} \big( [s_2, s_3) \big) = d \Ffunc \big( [s_2, s_3) \big) - d \Ffunc \big( [s_2, s_4) \big) + d \Ffunc \big( [s_1, s_4) \big) - d \Ffunc \big( [s_1, s_3) \big)$.}
\label{fig:dgm}
\end{figure}
\begin{defn}
A \define{persistence diagram} is a map
$Y : \Dgm \to \Ggroup$ with finite support.
That is, there are only finitely many intervals $I \in \Dgm$
such that $Y(I) \neq 0$.
\end{defn}
We now introduce the bottleneck distance between persistence diagrams.
\begin{defn}
A \define{matching} between two persistence diagrams $Y_1, Y_2 : \Dgm \to \Ggroup$ is a
map $\gamma: \Dgm \times \Dgm \to \Ggroup$
satisfying
\begin{align*}
Y_1(I) & = \sum_{J \in \Dgm} \gamma(I,J) \textrm{ for all }I \in \Dgm \backslash \Delta \\
Y_2(J) & = \sum_{I \in \Dgm} \gamma(I,J) \textrm{ for all } J \in \Dgm \backslash \Delta.
\end{align*}
The \define{norm} of a matching $\gamma$ is
$$ ||\gamma|| := \max_{\big \{I=[p_1,q_1),J=[p_2,q_2) \,\big |\, \gamma(I,J)\neq 0 \big \}}
\big \{|p_1-p_2|,|q_1-q_2| \big\}. $$
If exactly one of $q_1$ and $q_2$ is $\infty$, then $|q_1 - q_2| = \infty$; if both are $\infty$, then $|q_1 - q_2| = 0$.
The \define{bottleneck distance} between two persistence diagrams $Y_1, Y_2 : \Dgm \to \Ggroup$ is
$$d_B(Y_1,Y_2) := \inf_\gamma ||\gamma||$$
over all matchings $\gamma$ between $Y_1$ and $Y_2$.
This infimum is attained since persistence diagrams have finite support.
\end{defn}
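The following simple example, constructed here for illustration, shows how matchings are evaluated.
\begin{ex}
Let $\Ggroup = \Zspace$ and let $Y_1, Y_2 : \Dgm \to \Zspace$ be the persistence diagrams supported on the single intervals $I = [0,3)$ and $J = [1,3)$, respectively, with $Y_1(I) = Y_2(J) = 1$.
The matching $\gamma$ with $\gamma(I,J) = 1$, and zero elsewhere, has norm $||\gamma|| = \max \big\{ |0-1|, |3-3| \big\} = 1$.
Alternatively, the matching $\gamma'$ with $\gamma' \big( I, [\sfrac{3}{2},\sfrac{3}{2}) \big) = 1$ and $\gamma' \big( [2,2), J \big) = 1$ sends both intervals to the diagonal and has norm $||\gamma'|| = \sfrac{3}{2}$.
The first matching shows $d_B(Y_1, Y_2) \leq 1$.
\end{ex}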
\section{Diagram of a Module}
We now describe the construction of a persistence diagram
from a constructible persistence module.
Fix a skeletally small abelian category $\Ccat$.
\begin{defn}
The \define{Grothendieck group} $\Ggroup(\mathcal{C})$ of $\mathcal{C}$ is the abelian group
with one generator for each
isomorphism class $[a]$ of objects $a \in \mathsf{ob}\; \mathcal{C}$ and one relation
$[b] = [a] + [c]$
for each short exact sequence
$0 \to a \to b \to c \to 0.$
The Grothendieck group has a natural translation invariant partial ordering where
$[a] \preceq [b]$ whenever $a\hookrightarrow b$.
For each $a\hookrightarrow b$, we have
$a \oplus c \hookrightarrow b \oplus c$ for any object $c$ in $\mathcal{C}$.
This makes $\preceq$ a translation invariant partial ordering.
\end{defn}
\begin{ex}
Here are three examples of $\mathcal{C}$ with their Grothendieck groups.
\begin{itemize}
\item Let $\Vect$ be the category of finite dimensional $\field$-vector spaces
for some fixed field~$\field$.
The isomorphism class of a finite dimensional $\field$-vector space is completely determined
by its dimension.
This means that the free abelian group generated by the set of isomorphism classes in
$\Vect$ is $\bigoplus_{n} \Zspace$ where $n \geq 0$ is a natural number.
Since every short exact sequence in $\Vect$ splits, the only relations are of the form $[A] + [B] = [C]$ whenever $A \oplus B \cong C$.
Therefore $\Ggroup (\Vect) \cong \Zspace$ where the translation invariant partial
ordering $\preceq$ is the usual total ordering on the integers.
\item Let $\Finab$ be the category of finite abelian groups.
A finite abelian group is isomorphic to
$$\dfrac{\Zspace}{p_1^{n_1} \Zspace} \oplus \cdots \oplus
\dfrac{\Zspace}{p_k^{n_k} \Zspace}$$
where each $p_i$ is prime.
The free abelian group generated by the set of isomorphism classes in $\Finab$
is $\bigoplus_{(p,n)} \Zspace$
over all pairs $(p, n)$ where $p$ is prime and $n \geq 0$ a natural number.
Every primary cyclic group $\dfrac{\Zspace}{p^n \Zspace}$ fits into a short exact sequence
\begin{equation*}
\begin{tikzcd}
0 \arrow[r] & \dfrac{\Zspace}{p^{n-1} \Zspace} \arrow[r, "\times p"] &
\dfrac{\Zspace}{p^{n} \Zspace} \arrow[r, "/"] &
\dfrac{\Zspace}{p\Zspace} \arrow[r] & 0
\end{tikzcd}
\end{equation*}
giving rise to a relation
$\left[ \dfrac{\Zspace}{p^{n} \Zspace} \right] =
\left[ \dfrac{\Zspace}{p^{n-1} \Zspace} \right] + \left[ \dfrac{\Zspace}{p \Zspace} \right].$
By induction, $\left[ \dfrac{\Zspace}{p^{n} \Zspace} \right] = n \left[ \dfrac{\Zspace}{p \Zspace} \right]$.
Therefore $\Ggroup( \Finab ) \cong \bigoplus_p \Zspace$ where $p$ is prime.
For two elements $[a], [b] \in \Ggroup( \Finab )$, $[a] \preceq [b]$
if, for each prime $p$, the multiplicity of $p$ in $[a]$ is at most the multiplicity of
$p$ in $[b]$.
\item
Let $\Ab$ be the category of finitely generated abelian groups.
A finitely generated abelian group is isomorphic to
$$ \Zspace^m \oplus \dfrac{\Zspace}{p_1^{n_1} \Zspace} \oplus \cdots \oplus
\dfrac{\Zspace}{p_k^{n_k} \Zspace}$$
where each $p_i$ is prime.
The free abelian group generated by the set of isomorphism classes in $\Ab$
is $\Zspace \oplus \bigoplus_{(p,n)} \Zspace$
over all pairs $(p, n)$ where $p$ is prime and $n \geq 0$ a natural number.
In addition to the short exact sequences in $\Finab$, we have
\begin{equation*}
\begin{tikzcd}
0 \arrow[r] & \Zspace \arrow[r, "\times p"] &
\Zspace \arrow[r, "/"] &
\dfrac{\Zspace}{p\Zspace} \arrow[r] & 0
\end{tikzcd}
\end{equation*}
giving rise to the relation $\left[ \dfrac{\Zspace}{p \Zspace} \right] = [0]$.
Therefore $\Ggroup( \Ab) \cong \Zspace$ where $\preceq$ is the usual total ordering on the integers.
Unfortunately all torsion is lost.
\end{itemize}
\end{ex}
Given a constructible persistence module, we now record the images of all its maps
as elements of the Grothendieck group.
\begin{defn}
Let $S = \{s_1 < \cdots < s_k < \infty\}$ be a finite set and $\Ffunc$ an
$S$-constructible persistence module valued in $\mathcal{C}$.
Choose a $\delta > 0$ such that $s_{i-1} < s_i - \delta$, for all $1< i \leq k$.
The \define{rank function} of $\Ffunc$ is the map
$d \Ffunc : \Dgm \to \Ggroup( \mathcal{C} )$ defined as follows:
$$d \Ffunc (I) =
\begin{cases}
\big[ \image \, \Ffunc(p<s_i-\delta) \big] & \text{for } I = [p,s_i) \text{ where } 1 \leq i \leq k \\
\big[ \image \, \Ffunc(p \leq q) \big] & \text{for all other } I=[p,q).
\end{cases}
$$
Note that for any $[p,q) \in \Dgm$, $d \Ffunc \big ( [p,q) \big)$ equals
$d \Ffunc(I)$ where $I$ is the smallest interval in $\Dgm(S)$ containing $[p,q)$.
This means that $d \Ffunc$ is uniquely determined by its value on $\Dgm(S)$.
\end{defn}
\begin{prop}
Let $\Ffunc$ be a constructible persistence module valued in a skeletally small abelian category
$\mathcal{C}$.
Then its rank function $d \Ffunc : \Dgm \to \Ggroup(\mathcal{C})$ is a poset
reversing map.
That is for any pair of intervals
$[p_2, q_2) \subseteq [p_1,q_1)$,
$d \Ffunc \big( [p_1, q_1) \big) \preceq d \Ffunc \big( [p_2,q_2) \big)$.
\end{prop}
\begin{proof}
Suppose $\Ffunc$ is $S = \{s_1 < \cdots < s_k < \infty\}$-constructible.
Consider the following commutative diagram:
\begin{equation*}
\xymatrix{
\Ffunc(p_1) \ar[d]_{h := \Ffunc (p_1 \leq q_1)} \ar[rrr]^{e := \Ffunc(p_1 \leq p_2)}
&&& \Ffunc(p_2) \ar[d]^{f := \Ffunc(p_2 \leq q_2)} \\
\Ffunc(q_1) &&& \Ffunc(q_2). \ar[lll]^{g := \Ffunc(q_2 \leq q_1)}
}
\end{equation*}
We may assume $q_1, q_2 \notin S$.
If this is not the case, replace $q_1$ and/or $q_2$ in the above diagram with
$q_1 - \delta$ and $q_2 - \delta$ for some sufficiently small $\delta > 0$.
We have $d \Ffunc \big( [p_1, q_1) \big) = [ \image\, h ]$ and
$d \Ffunc \big( [p_2, q_2) \big) = [ \image\, f ]$.
Let $I := \image\, ( f \circ e )$ and $K := I \cap \ker g$.
Then $K \mono I \mono \image\, f$ and $\image\, h \cong \sfrac{I}{K}$.
Therefore $d \Ffunc \big ( [p_1, q_1) \big) \preceq d \Ffunc \big( [p_2, q_2 ) \big)$.
\end{proof}
Given the rank function $d \Ffunc : \Dgm \to \Ggroup(\mathcal{C})$
of an $S$-constructible persistence module~$\Ffunc$, there is a unique
map $\tilde \Ffunc : \Dgm \to \Ggroup$ such that
\begin{equation}
\label{eq:mobius}
d \Ffunc (I) = \sum_{J \in \Dgm: J \supseteq I } \tilde \Ffunc (J)
\end{equation}
for each $I \in \Dgm$.
This equation is the \emph{M\"obius inversion formula}.
For each $I = [s_i, s_j)$ in $\Dgm(S)$,
\begin{equation}
\label{eq:inversion}
\tilde \Ffunc (I) = d \Ffunc \big( [s_i, s_j) \big) - d \Ffunc \big( [s_{i}, s_{j+1}) \big)
+ d \Ffunc \big( [s_{i-1}, s_{j+1}) \big) - d \Ffunc \big( [s_{i-1}, s_j) \big).
\end{equation}
For each $I = [s_i, \infty)$ in $\Dgm(S)$,
\begin{equation}
\label{eq:inversion_infty}
\tilde \Ffunc (I) = d \Ffunc \big( [s_i, \infty) \big) - d \Ffunc \big( [s_{i-1}, \infty) \big).
\end{equation}
For all other $I \in \Dgm$, $\tilde \Ffunc(I) = 0$.
Here we have to be careful with our indices.
It is possible $s_{j+1}$ or $s_{i-1}$ is not in $S$.
If $s_{j+1}$ is not in $S$, let $s_{j+1} = \infty$.
If $s_{i-1}$ is not in $S$, let $s_{i-1}$ be any value strictly less than $s_1$.
We call $\tilde \Ffunc$ the \emph{M\"obius inversion} of $d \Ffunc$.
\begin{defn}
\label{defn:diagram}
The \define{persistence diagram of a constructible persistence module} $\Ffunc$
is the M\"obius inversion $\tilde \Ffunc$ of its rank function $d \Ffunc$.
\end{defn}
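The following example carries out this construction for the simplest nontrivial module.
\begin{ex}
Let $\mathcal{C} = \Vect$, so that $\Ggroup(\Vect) \cong \Zspace$, let $S = \{ s_1 < s_2 < \infty \}$, and let $\Ffunc$ be the $S$-constructible persistence module with $\Ffunc(p) = \field$ for $s_1 \leq p < s_2$, with $\Ffunc(p) = 0$ otherwise, and with identity maps wherever possible.
Then $d \Ffunc \big( [s_1, s_2) \big) = 1$ while $d \Ffunc \big( [s_1, \infty) \big) = d \Ffunc \big( [s_2, \infty) \big) = 0$.
By Equation \eqref{eq:inversion}, with $s_0$ any value strictly less than $s_1$ and $s_3 = \infty$,
$$\tilde \Ffunc \big( [s_1, s_2) \big) = d \Ffunc \big( [s_1, s_2) \big) - d \Ffunc \big( [s_1, \infty) \big)
+ d \Ffunc \big( [s_0, \infty) \big) - d \Ffunc \big( [s_0, s_2) \big) = 1,$$
and by Equation \eqref{eq:inversion_infty}, $\tilde \Ffunc \big( [s_1, \infty) \big) = \tilde \Ffunc \big( [s_2, \infty) \big) = 0$.
Hence the persistence diagram of $\Ffunc$ is supported on the single interval $[s_1, s_2)$, where it takes the value $1$.
\end{ex}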
The Grothendieck group of $\Ccat$ has one relation for each short exact sequence in $\Ccat$.
These relations ensure that the persistence diagram of a persistence module is positive which
plays a key role in the proof of Lemma \ref{lem:box}.
\begin{prop}[Positivity \cite{patel}]
\label{prop:positivity}
Let $\Ffunc$ be a constructible persistence module
valued in a skeletally small abelian category $\Ccat$.
Then for any $I\in \Dgm$, we have~$[0] \preceq \tilde \Ffunc(I)$.
\end{prop}
\section{Stability}
We now begin the task of proving bottleneck stability.
Throughout this section, persistence modules are valued in a fixed skeletally small abelian
category $\Ccat$.
\begin{defn}
For an interval $I = [p,q)$ in $\Dgm$ and a value $\ee \geq 0$, let
$$ \square_{\ee} I := \big\{ [r,s) \in \Dgm \, \big |\, p-\ee < r \leq p+\ee \text{ and } q-\ee \leq s < q+ \ee \big \} $$
be the subposet of $\Dgm$ consisting of intervals $\ee$-close to $I$.
If $I$ is too close to the diagonal, that is if $q - \ee \leq p+\ee$, then we let
$\square_\ee I$ be empty.
We call $\square_\ee I$ the $\ee$-\emph{box} around $I$.
See Figure \ref{fig:integration}.
Note that if $q = \infty$, then $\square_\ee I = \big \{ [r, \infty) \; \big | \; p-\ee < r \leq p+ \ee \big \}$.
\end{defn}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.80]{Dgm_box.pdf}
\end{center}
\caption{The shaded area is a box $\square_\ee I$.
Note that $\square_\ee I$ is closed on the bottom and right, and it is open on the top and left.
}
\label{fig:integration}
\end{figure}
\begin{lem}
\label{lem:corners}
Let $\Ffunc$ be an $S$-constructible persistence module, $I = [p,q)$, and $\ee > 0$.
If $\square_\ee I$ is nonempty, then
\begin{equation*}
\label{eq:sum}
\sum_{J \in \square_\ee I} \tilde{\Ffunc}(J) = d \Ffunc \big( [p+\ee, q-\ee) \big) -
d \Ffunc \big( [p+\ee, q+\ee) \big)
+ d \Ffunc \big( [p-\ee, q+\ee) \big) - d \Ffunc \big( [p-\ee, q-\ee) \big)
\end{equation*}
whenever $q \neq \infty$ and
\begin{equation*}
\sum_{J \in \square_\ee I} \tilde{\Ffunc}(J) = d \Ffunc \big( [p+\ee, \infty) \big)
- d \Ffunc \big( [p-\ee, \infty) \big)
\end{equation*}
whenever $q = \infty$.
\end{lem}
\begin{proof}
Both equalities follow easily from the M\"obius inversion formula; see Equation~\ref{eq:mobius}.
If $q \neq \infty$, then
\begin{align*}
\sum_{J \in \square_\ee I} \tilde \Ffunc(J) &= \sum_{\substack{J \in \Dgm: \\ J \supseteq [p+\ee,q-\ee)}}
\tilde \Ffunc(J)
- \sum_{\substack{J \in \Dgm: \\ J \supseteq [p+\ee, q+\ee)}} \tilde \Ffunc(J) +
\sum_{\substack{J \in \Dgm: \\ J \supseteq [p-\ee, q+\ee)}} \tilde \Ffunc(J) -
\sum_{\substack{J \in \Dgm: \\ J \supseteq [p-\ee, q-\ee)} } \tilde \Ffunc(J) \\
& = d \Ffunc \big( [p+\ee,q-\ee) \big) - d \Ffunc \big( [p+\ee,q+\ee) \big) +
d \Ffunc \big( [p-\ee,q+\ee) \big) - d \Ffunc \big( [p-\ee,q-\ee) \big).
\end{align*}
If $q = \infty$, then
\begin{align*}
\sum_{J \in \square_\ee I} \tilde \Ffunc(J) &= \sum_{\substack{J \in \Dgm: \\ J
\supseteq [p+\ee,\infty)}} \tilde \Ffunc(J)
- \sum_{\substack{J \in \Dgm: \\ J \supseteq [p-\ee, \infty)}} \tilde \Ffunc(J) \\
& = d \Ffunc \big( [p+\ee,\infty) \big) - d \Ffunc \big( [p-\ee,\infty) \big).
\end{align*}
\end{proof}
\begin{lem}[Box Lemma]
\label{lem:box}
Let $\Ffunc$ and $\Gfunc$ be two $\ee$-interleaved constructible persistence modules, $I \in \Dgm$,
and $\mu > 0$.
Then
$$\sum_{J \in \square_{\mu} I} \tilde \Ffunc(J) \preceq \sum_{J \in \square_{\mu + \ee} I} \tilde \Gfunc(J)$$
whenever $\square_{\mu + \ee} I$ is nonempty.
\end{lem}
\begin{proof}
Suppose $\Ffunc$ and $\Gfunc$ are
$\ee$-interleaved by $\Phi$ in Diagram \ref{dgm:interleaving}.
Define $\varphi_{r}: \Ffunc(r) \to \Gfunc(r+\ee)$ as
$\Phi \big( (r,0) \leq (r+\ee,1) \big)$ and define
$\psi_{r}: \Gfunc (r) \to \Ffunc (r+\ee)$ as $\Phi \big( (r,1) \leq (r+\ee,0) \big)$.
Suppose $I = [p,q)$ where $q \neq \infty$. By Lemma \ref{lem:corners},
\begin{align*}
\sum_{J \in \square_{\mu} I } \tilde{\Ffunc}(J) =\; &
d \Ffunc \big( [p+\mu, q-\mu) \big) - d \Ffunc \big( [p+\mu, q+\mu) \big) \\
&+ d \Ffunc \big( [p-\mu, q+\mu) \big) - d \Ffunc \big( [p-\mu, q-\mu) \big) \\
\sum_{J \in \square_{\mu+\ee} I} \tilde{\Gfunc}(J) =\; &
d \Gfunc \big( [p+\mu+\ee, q-\mu-\ee) \big) - d \Gfunc \big( [p+\mu+\ee, q+\mu+\ee) \big) \\
& + d \Gfunc \big( [p-\mu-\ee, q+\mu+\ee) \big) - d \Gfunc \big( [p-\mu-\ee, q-\mu-\ee) \big).
\end{align*}
Choose a sufficiently small $\delta > 0$ so that we have the following equalities:
\begin{align*}
d \Ffunc \big( [p+\mu, q-\mu) \big) &= \big[ \image\; \Ffunc ( p + \mu < q-\mu-\delta) \big] \\
d \Ffunc \big( [p+\mu, q+\mu) \big) &= \big[ \image\; \Ffunc ( p+\mu < q+\mu-\delta) \big] \\
d \Ffunc \big( [p-\mu, q+\mu) \big) &= \big[ \image\; \Ffunc ( p-\mu < q+\mu-\delta) \big] \\
d \Ffunc \big( [p-\mu, q-\mu) \big) &= \big[ \image\; \Ffunc ( p-\mu < q-\mu-\delta) \big] \\
d \Gfunc \big( [p+\mu+\ee, q-\mu-\ee) \big) &= \big[ \image\; \Gfunc ( p+\mu+\ee < q-\mu-\ee-\delta) \big] \\
d \Gfunc \big( [p+\mu+\ee, q+\mu+\ee) \big) &= \big[ \image\; \Gfunc ( p+\mu+\ee < q+\mu+\ee-\delta) \big]\\
d \Gfunc \big( [p-\mu-\ee, q+\mu+\ee) \big) &= \big[ \image\; \Gfunc ( p-\mu-\ee < q+\mu+\ee-\delta) \big] \\
d \Gfunc \big( [p-\mu-\ee, q-\mu-\ee) \big) &= \big[ \image\; \Gfunc ( p-\mu-\ee < q-\mu-\ee-\delta) \big].
\end{align*}
Consider the following commutative diagram where the horizontal and vertical arrows are the appropriate
morphisms from $\Ffunc$ and $\Gfunc$:
\begin{center}
\begin{tikzcd}
\Gfunc(p-\mu-\ee) \ar[rrrr] \ar[dddd] \ar[dr,"\psi_{p-\mu -\ee}"]
&&&& G(p+\mu+\ee) \ar[dddd] \\
& \Ffunc (p-\mu) \ar[rr] \ar[dd]&& \Ffunc(p+\mu) \ar[dd] \ar[ur,"\varphi_{p+\mu}"]\\
\\
&\Ffunc(q+\mu-\delta) \ar[dl,"\varphi_{q+\mu-\delta}"]&& \Ffunc(q-\mu-\delta) \ar[ll] \\
\Gfunc(q+\mu+\ee-\delta) &&&& \Gfunc(q-\mu-\ee-\delta). \ar[llll] \ar[ul,"\psi_{q-\mu-\ee-\delta}"]
\end{tikzcd}
\end{center}
Choose two values $a < b$ and a value $c$ such that $a+ \mu+\ee < c < b-\mu-\ee$ and let
$$T := \{ a-\mu-\ee < a-\mu < a+\mu < a+\mu+\ee < c < b-\mu-\ee < b-\mu < b+\mu < \infty \}
\subseteq \bar \Rspace.$$
Let $\Hfunc: \Rspace \to \mathcal{C}$ be the $T$-constructible persistence module determined by the following diagram:
\begin{equation*}
\xymatrix{
\Hfunc(a-\mu-\ee) = \Gfunc(p-\mu-\ee) \ar[r] & \Hfunc(a-\mu) = \Ffunc(p-\mu) \ar[r] &
\Hfunc(a+\mu) = \Ffunc(p+\mu) \ar[d] \\
\Hfunc(b-\mu-\ee) = \Ffunc(q-\mu-\delta) \ar[d] & \Hfunc(c) = \Gfunc (q-\mu-\ee-\delta) \ar[l] &
\Hfunc(a+\mu+\ee) = \Gfunc(p+\mu+\ee) \ar[l] \\
\Hfunc(b-\mu) = \Ffunc(q+\mu-\delta) \ar[r] & \Hfunc(b+\mu) = \Gfunc(q+\mu+\ee-\delta).
}
\end{equation*}
Here the value of $\Hfunc$ is given on each value in $T$ and morphisms between adjacent objects are the connecting
morphisms in the above commutative diagram.
For example, for all $a+\mu+\ee \leq r < c$, $\Hfunc(r) = \Gfunc(p+\mu+\ee)$ and
$\Hfunc(a+\ee+\mu \leq r) = \id$.
The morphism $\Hfunc(c \leq b-\mu-\ee)$ is $\psi_{q-\mu-\ee-\delta}$.
We have the following equalities:
\begin{align*}
\big[ \image\; \Ffunc ( p + \mu < q-\mu-\delta) \big] &= d \Hfunc \big( [a+\mu, b-\mu) \big) \\
\big[ \image\; \Ffunc ( p + \mu < q+\mu-\delta) \big] &= d \Hfunc \big( [a+\mu, b+\mu) \big) \\
\big[ \image\; \Ffunc ( p - \mu < q+\mu-\delta) \big] &= d \Hfunc \big( [a-\mu, b+\mu) \big) \\
\big[ \image\; \Ffunc ( p - \mu < q-\mu-\delta) \big] &= d \Hfunc \big( [a-\mu, b-\mu) \big) \\
\big[ \image\; \Gfunc ( p+\mu+\ee < q-\mu-\ee-\delta) \big] &= d \Hfunc \big( [a+\mu+\ee, b-\mu-\ee) \big) \\
\big[ \image\; \Gfunc ( p+\mu+\ee < q+\mu+\ee-\delta) \big] &=
d \Hfunc \big( [a+\mu+\ee, b+\mu+\ee) \big) \\
\big[ \image\; \Gfunc ( p-\mu-\ee < q+\mu+\ee-\delta) \big] &= d \Hfunc \big( [a-\mu-\ee, b+\mu+\ee) \big) \\
\big[ \image\; \Gfunc ( p-\mu-\ee < q-\mu-\ee-\delta) \big] &= d \Hfunc \big( [a-\mu-\ee, b-\mu-\ee) \big).
\end{align*}
By Lemma \ref{lem:corners} along with the above substitutions, we have
$$\sum_{J \in \square_{\mu}[a,b)} \tilde \Hfunc (J) = \sum_{J \in \square_{\mu} I} \tilde{\Ffunc}(J)$$
$$\sum_{J \in \square_{\mu+\ee}[a,b)} \tilde \Hfunc (J) = \sum_{J \in \square_{\mu+\ee} I } \tilde{\Gfunc}(J).$$
By the inclusion $\square_{\mu} [a,b) \subseteq \square_{\mu + \ee} [a,b) $ along with Proposition \ref{prop:positivity}, we have
$$\sum_{J \in \square_{\mu} [a,b)} \tilde \Hfunc (J) \preceq
\sum_{J \in \square_{\mu+\ee} [a,b)} \tilde \Hfunc (J).$$
This proves the statement.
Suppose $I = [p,\infty)$ and let $z \in \Rspace$ be larger than $\mu + \ee$ and $s_k$ where $\Ffunc$ and $\Gfunc$ are $\{ s_1 < \cdots < s_k < \infty \}$-constructible.
By Lemma \ref{lem:corners},
\begin{align*}
\sum_{J \in \square_{\mu} I } \tilde{\Ffunc}(J) =\; &
d \Ffunc \big( [p+\mu, \infty) \big) - d \Ffunc \big( [p-\mu, \infty) \big) \\
\sum_{J \in \square_{\mu+\ee} I} \tilde{\Gfunc}(J) =\; &
d \Gfunc \big( [p+\mu+\ee, \infty) \big) - d \Gfunc \big( [p-\mu-\ee, \infty) \big).
\end{align*}
We have the following equalities:
\begin{align*}
d \Ffunc \big( [p+\mu, \infty) \big) &= \big[ \image\; \Ffunc ( p+\mu < \infty) \big] \\
d \Ffunc \big( [p-\mu, \infty) \big) &= \big[ \image\; \Ffunc ( p-\mu < \infty) \big] \\
d \Gfunc \big( [p+\mu+\ee, \infty) \big) &= \big[ \image\; \Gfunc ( p+\mu+\ee < \infty) \big] \\
d \Gfunc \big( [p-\mu-\ee, \infty) \big) &= \big[ \image\; \Gfunc ( p-\mu-\ee < \infty) \big].
\end{align*}
Consider the following commutative diagram where the vertical and horizontal arrows are the appropriate
morphisms from $\Ffunc$ and $\Gfunc$:
\begin{center}
\begin{tikzcd}
\Gfunc(p-\mu-\ee) \ar[rrrr] \ar[dddd] \ar[dr,"\psi_{p-\mu-\ee}"] &&&& \Gfunc(p+\mu+\ee) \ar[dddd] \\
& \Ffunc (p-\mu) \ar[rr] \ar[dd] && \Ffunc(p+\mu) \ar[dd] \ar[ur,"\varphi_{p+\mu}"] & \\
&&&& \\
& \Ffunc(\infty) \ar[dl,"\varphi_{\infty}"]&& \Ffunc(\infty) \ar[ll] & \\
\Gfunc(\infty) &&&& \Gfunc(\infty). \ar[llll] \ar[ul,"\psi_{\infty}"]
\end{tikzcd}
\end{center}
Note that $\psi_\infty$ and $\varphi_\infty$ are isomorphisms.
Let
$$T := \{ -\mu-\ee < -\mu < \mu < \mu+\ee < z < \infty \}
\subseteq \bar \Rspace.$$
Let $\Hfunc: \Rspace \to \mathcal{C}$ be the $T$-constructible persistence module determined by the following diagram:
\begin{equation*}
\xymatrix{
\Hfunc(-\mu-\ee) = \Gfunc(p-\mu-\ee) \ar[r] & \Hfunc(-\mu) = \Ffunc(p-\mu) \ar[r] &
\Hfunc(\mu) = \Ffunc(p+\mu) \ar[d] \\
& \Hfunc(z) = \Ffunc (z) & \Hfunc(\mu+\ee) = \Gfunc(p+\mu+\ee). \ar[l]
}
\end{equation*}
Here the value of $\Hfunc$ is given on each value in $T$ and morphisms between adjacent objects are the connecting
morphisms in the above commutative diagram.
We have the following equalities:
\begin{align*}
\big[ \image\; \Ffunc ( p + \mu <\infty) \big] &= d \Hfunc \big( [\mu, \infty) \big) \\
\big[ \image\; \Ffunc ( p - \mu < \infty) \big] &= d \Hfunc \big( [-\mu, \infty) \big) \\
\big[ \image\; \Gfunc ( p+\mu+\ee < \infty) \big] &= d \Hfunc \big( [\mu+\ee, \infty) \big) \\
\big[ \image\; \Gfunc ( p-\mu-\ee < \infty) \big] &= d \Hfunc \big( [-\mu-\ee, \infty) \big).
\end{align*}
By Lemma \ref{lem:corners} along with the above substitutions, we have
$$\sum_{J \in \square_{\mu}[0,\infty)} \tilde \Hfunc (J) = \sum_{J \in \square_{\mu} I} \tilde{\Ffunc}(J)$$
$$\sum_{J \in \square_{\mu+\ee}[0,\infty)} \tilde \Hfunc (J) = \sum_{J \in \square_{\mu+\ee} I } \tilde{\Gfunc}(J).$$
By the inclusion $\square_{\mu} [0,\infty) \subseteq \square_{\mu+\ee} [0,\infty) $ along with
Proposition \ref{prop:positivity}, we have
$$\sum_{J \in \square_{\mu} [0,\infty)} \tilde \Hfunc (J) \preceq
\sum_{J \in \square_{\mu+\ee} [0,\infty)} \tilde \Hfunc (J).$$
This proves the statement.
\end{proof}
\begin{defn}
The \define{injectivity radius}
of a finite set $S = \{ s_1 < s_2 < \cdots < s_k < \infty\}$ is
$$\rho := \min_{1 < i \leq k} \frac{s_{i}-s_{i-1}}{2}.$$
Note that if a persistence module $\Ffunc$ is $S$-constructible and $I \in \Dgm(S)$, then
$$\tilde \Ffunc (I) = \sum_{J \in \square_\rho I} \tilde \Ffunc (J).$$
\end{defn}
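For instance, if $S = \{ 0 < 1 < 4 < \infty \}$, then
$$\rho = \min \Big\{ \frac{1-0}{2}, \frac{4-1}{2} \Big\} = \frac{1}{2}.$$
Since distinct points of $S$ differ by at least $2\rho$, no interval of $\Dgm(S)$ other than $I$ itself lies in $\square_\rho I$, which justifies the identity in the definition above.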
\begin{lem}[Easy Bijection]
\label{lem:bijection}
Let $\Ffunc$ be an $S$-constructible persistence module and $\rho > 0$ the injectivity
radius of $S$.
If $\Gfunc$ is a second constructible persistence module such that $d_I(\Ffunc,\Gfunc) < \rho/2$, then
$d_B(\tilde \Ffunc, \tilde \Gfunc) \leq d_I(\Ffunc,\Gfunc)$.
\end{lem}
\begin{proof}
Let $\ee = d_I (\Ffunc, \Gfunc)$.
Choose a sufficiently small $\mu > 0$ such that $\mu + 2 \ee < \rho$.
We construct a matching $\gamma_\mu : \Dgm \times \Dgm \to \Ggroup(\Ccat)$ such that
\begin{align}
\tilde \Ffunc(I) & = \sum_{J \in \Dgm} \gamma_\mu(I,J) \textrm{ for all }I \in \Dgm \backslash \Delta
\label{eq:forward} \\
\tilde \Gfunc(J) & = \sum_{I \in \Dgm} \gamma_\mu(I,J) \textrm{ for all } J \in \Dgm \backslash \Delta
\label{eq:backward} .
\end{align}
Fix an $I \in \Dgm(S)$.
By Lemma \ref{lem:box},
$$\tilde \Ffunc(I) = \sum_{J \in \square_{\mu} I} \tilde \Ffunc (J) \preceq \sum_{J \in \square_{\mu + \ee} I} \tilde \Gfunc(J) \preceq \sum_{J \in \square_{\mu + 2\ee} I}\tilde \Ffunc(J) = \tilde \Ffunc(I).$$
Let $\gamma_\mu(I,J) := \tilde \Gfunc(J)$ for all $J \in \square_{\mu + \ee}(I)$.
Repeat for all $I \in \Dgm(S)$.
Equation \ref{eq:forward} is satisfied.
We now check that $\gamma_\mu$ satisfies Equation \ref{eq:backward}.
Fix an interval $J = [p,q)$ and suppose $\tilde \Gfunc (J) \neq 0$.
If $q-p > \ee$, then by Lemma \ref{lem:box}
$$\tilde \Gfunc(J) = \sum_{I \in \square_{\mu} J} \tilde \Gfunc (I) \preceq \sum_{I \in \square_{\mu+\ee} J}
\tilde \Ffunc(I).$$
This means $\gamma_\mu(I, J) \neq 0$ for some $I \in \square_{\mu+\ee} J$.
If $q-p \leq \ee$, then we match $J$ to the diagonal.
That is, we let $\gamma_\mu \big ( [\sfrac{p+q}{2}, \sfrac{p+q}{2} ), J \big) := \tilde \Gfunc(J)$.
By construction, $||\gamma_\mu|| \leq \mu + \ee $ for all $\mu > 0$ sufficiently small. Therefore $d_B ( \tilde \Ffunc, \tilde \Gfunc ) \leq \ee = d_I(\Ffunc, \Gfunc)$.
\end{proof}
We are now ready to prove our main result.
\begin{thm}[Bottleneck Stability]
Let $\mathcal{C}$ be a skeletally small abelian category and $\Ffunc, \Gfunc: \Rspace \to \mathcal{C}$
two constructible persistence modules.
Then $d_B \big( \tilde \Ffunc, \tilde \Gfunc \big) \leq d_I (\Ffunc, \Gfunc)$
where $\tilde \Ffunc$ and $\tilde \Gfunc$ are the persistence diagrams of $\Ffunc$ and $\Gfunc$,
respectively.
\end{thm}
\begin{proof}
Let $\ee = d_I(\Ffunc, \Gfunc)$.
By Proposition \ref{prop:interpolation}, there is a one parameter family of constructible persistence modules
$\{\Kfunc_t\}_{t \in [0,1]}$ such that
$d_I(\Kfunc_t, \Kfunc_s) \leq \ee |t-s|$, $\Kfunc_0 \cong \Ffunc$, and $\Kfunc_1 \cong \Gfunc$.
Each $\Kfunc_t$ is constructible with respect to some set $S_t$, and each set $S_t$ has
an injectivity radius $\rho_t > 0$.
For each time $t \in [0,1]$, consider the open interval
$$U(t) = ( t-\sfrac{\rho_t}{4\ee},t+\sfrac{\rho_t}{4\ee} ) \cap [0,1].$$
By compactness of $[0,1]$, there is a finite set
$Q = \{ 0 = t_0 < t_1 < \cdots < t_n = 1 \}$
such that $\cup_{i=0}^n U(t_i) = [0,1]$.
We assume that $Q$ is minimal, that is, there does not exist a pair $t_i, t_j \in Q$ such that
$U(t_i) \subseteq U(t_j)$.
If this is not the case, simply throw away $U(t_i)$ and we still have a covering of $[0,1]$.
As a consequence, for any consecutive pair $t_i < t_{i+1}$, we have $U(t_i) \cap U(t_{i+1}) \neq \emptyset$.
This means
$$t_{i+1} - t_i < \frac{1}{4\ee} (\rho_{t_{i+1}}+\rho_{t_i}) \leq \frac{1}{2\ee}
\max\{\rho_{t_{i+1}},\rho_{t_i}\}$$
and therefore $d_I( \Kfunc_{t_{i}}, \Kfunc_{t_{i+1}}) < \frac{1}{2} \max \{\rho_{t_{i}},\rho_{t_{i+1}} \}$.
By Lemma \ref{lem:bijection},
$$ d_B \big( \tilde \Kfunc_{t_{i}},\tilde \Kfunc_{t_{i+1}} \big) \leq
d_I \big( \Kfunc_{t_{i}}, \Kfunc_{t_{i+1}}),$$
for all $0 \leq i \leq n-1$.
Therefore
$$d_B \big( \tilde \Ffunc, \tilde \Gfunc \big) \leq
\sum_{i=0}^{n-1} d_B \big( \tilde \Kfunc_{t_{i}},\tilde \Kfunc_{t_{i+1}} \big) \leq
\sum_{i=0}^{n-1} d_I \big( \Kfunc_{t_{i}}, \Kfunc_{t_{i+1}}) \leq \ee.$$
\end{proof}
\newpage
\section{Introduction}
Poisson vertex algebras (PVA) arise as the quasi-classical limit of a family of vertex algebras \cite{DSK06} in the same way as Poisson
algebras arise as the quasi-classical limit of a family of associative algebras.
Note also that a PVA is a local counterpart of a Coisson (=chiral Poisson) algebra defined in \cite{BD04}.
Moreover, a PVA can be obtained as a formal Fourier transform of a local Poisson bracket \cite{BDSK09}, which
plays an important role in the theory of infinite-dimensional integrable Hamiltonian systems.
In fact, as demonstrated in \cite{BDSK09}, the language of $\lambda$-brackets \cite{D'AK98,Kac98} in the framework of
Poisson vertex algebras is often more convenient and transparent than the equivalent languages of local Poisson brackets,
used in the book \cite{FT86}, or of Hamiltonian operators, used in the book \cite{Dor93} (and references therein).
Hence, the theory of PVA has been extensively used in order to get a better understanding of generalized Drinfeld-Sokolov
hierarchies for classical affine $\mc W$-algebras \cite{DSKV13,DSKV14,DSKVnew}, Adler-Gelfand-Dickey hierarchies \cite{DSKV15}
and, more generally, Lax type integrable Hamiltonian equations \cite{DSKVold},
and the Lenard-Magri scheme of integrability \cite{DSKT14,DSKT15}.
Furthermore, the notion of a PVA has been extended in \cite{CasPhD,Cas15} to deal with Hamiltonian operators, or, equivalently, local Poisson
brackets, for multidimensional systems of PDEs (namely, PDEs for functions depending on several spatial variables). The notion of multidimensional PVA has been used for studying the theory of symmetries and deformations of the so-called Poisson brackets of hydrodynamic type \cite{DN83}, as well as for the local nonlinear brackets associated with 2D Euler's equation \cite{Cas15b}.
\smallskip
One of the most remarkable accomplishments of the theory of PVA has been the
derivation
of an explicit formula for the bi-Hamiltonian structure underlying classical $\mc W$-algebras
\cite{DSKV16}.
Classical affine $\mc W$-algebras are associated to a pair $(\mf g,f)$ consisting of a
simple Lie algebra $\mf g$ and a nilpotent element $f\in\mf g$.
For a principal nilpotent element $f\in\mf g$, they appeared in the seminal paper by
Drinfeld and Sokolov \cite{DS85}. They were introduced as Poisson algebras of functions
on an infinite dimensional Poisson manifold,
and they were used to study KdV-type integrable
bi-Hamiltonian hierarchies of PDEs, nowadays known as Drinfeld-Sokolov hierarchies.
Subsequently, in the 90's, there was an extensive literature extending the Drinfeld-Sokolov
construction of classical $\mc W$-algebras and the corresponding generalized Drinfeld-Sokolov
hierarchies to other nilpotent elements, \cite{dGHM92,FHM92,BdGHM93,DF95,FGMS95, FGMS96}.
Recently \cite{DSKV13}, classical affine $\mc W$-algebras were described as PVAs.
The powerful language of $\lambda$-brackets has then been used to get an explicit
formula for the bi-Hamiltonian structure describing them, and to give a rigorous
definition of, and to compute explicitly, generalized Drinfeld-Sokolov hierarchies \cite{DSKV14,DSKVnew}.
These results may find interesting applications in studying the relations of Drinfeld-Sokolov
hierarchies with Kac-Wakimoto hierarchies \cite{KW89} and
computation of the corresponding tau-functions,
and in the problem of quantization of classical integrable systems \cite{BLZ96} and applications to CFT.
\smallskip
The most powerful tool in the PVA theory is the so-called Master Formula \eqref{masterformula}. It allows one to rephrase
the relevant questions in the theory of infinite-dimensional Hamiltonian systems in terms of the $\lambda$-bracket language,
thus providing a completely algebraic computational technique, which replaces all the manipulations used in \cite{FT86,Dor93}
in the setting of the formal calculus of variations.
Note that, in the $\lambda$-bracket language, the computations are not necessarily hard to perform by hand,
but their number increases dramatically as the number of spatial dimensions grows.
The package \textsl{MasterPVA} and its generalization to the multidimensional case \textsl{MasterPVAmulti} have been written
to exploit a Computer Algebra System, like Mathematica, to automatically compute the Master Formula for PVA.
The choice of Mathematica is motivated by the pre-existing package \textsl{Lambda}, by J.~Ekstrand \cite{Eks11},
aimed at computing operator product expansions in conformal field theory using the $\lambda$-bracket language
within the framework of vertex algebras \cite{Kac98,DSK06}.
These packages have been used in \cite{CasPhD} -- with some preliminary results published in \cite{Cas15} -- in order to compute, up to second dispersive order, the Casimir functions, the symmetries and the compatible deformations of the bidimensional Poisson brackets of hydrodynamic type. They have also proved effective when working with scalar structures, whenever explicit computations are needed \cite{CCS15}.
The Mathematica package \textsl{WAlg} provides the implementation of the results about the structure theory of classical affine $\mc W$-algebras obtained in \cite{DSKV16}. It can be used
to compute all
$\lambda$-brackets between generators of the classical affine $\mc W$-algebras $\mc W(\mf g,f)$, where $\mf{g}$ is a simple Lie algebra of type $A$, $B$, $C$, $D$ or $G_2$, and $f\in\mf g$ is an arbitrary nilpotent element.
Thus we can obtain explicit expressions for the generalized Drinfeld-Sokolov hierarchies and
their bi-Hamiltonian structure by combining the programs \textsl{WAlg} and \textsl{MasterPVA}.
\smallskip
The paper is organized as follows.
In Section \ref{sec:pva} we review the definition of PVA following \cite{BDSK09} and its multidimensional generalization given in \cite{Cas15}.
In particular we introduce the notion of an algebra of differential functions and the Master Formula \eqref{masterformula}
used to perform $\lambda$-bracket computations on it, and we show that a PVA is equivalent to the notion of a Hamiltonian
operator (unlike \cite{Dor93}, we call this Hamiltonian operator a Poisson structure). We also recall the connection with infinite-dimensional
Hamiltonian systems.
In Section \ref{sec:dsred} we review the definition of classical affine $\mc W$-algebras using the language of PVA,
following \cite{DSKV13}.
The main results are Theorems \ref{thm:structure-W} and \ref{20140304:thm} which give an explicit description
of, respectively, the differential algebra structure and the Poisson
structure of classical affine $\mc W$-algebras, see also \cite{DSKV16}.
In Section \ref{sec:3} we explain how to use the packages \textsl{MasterPVA} and \textsl{MasterPVAmulti} by giving some
explicit examples.
We show the well-known compatibility between GFZ and Virasoro-Magri PVA,
we derive the Dubrovin-Novikov conditions for a bidimensional Poisson structure of hydrodynamic type
\cite{DN83}, and, finally, we reprove Mokhov's classification of the $N=1$ multidimensional structures of hydrodynamic type \cite{M88}.
In Section \ref{sec:WAlg} we explain how to use the package \emph{WAlg} by giving some explicit examples. First, we consider
the case of a principal nilpotent element $f$ in the Lie algebra $\mf g=\mf o_7$ and we show how to compute a basis $\{q_j\}_{j\in J}$
of $\mf g^f$ and the corresponding set of generators $\{w(q_j)\}_{j\in J}$ of the classical affine $\mc W$-algebra
$\mc W(\mf g,f)$ given by Theorem \ref{thm:structure-W}.
Then, we consider the case of a minimal nilpotent element in the Lie algebra $\mf{sp}_4$ and
we show how to compute the $\lambda$-brackets among the generators of the corresponding classical affine $\mc W$-algebra
using Theorem \ref{20140304:thm}.
Finally, we compute explicitly all classical affine $\mc W$-algebras $\mc W(\mf g,f)$
corresponding to a simple Lie algebra $\mf g$ of rank $2$ and its principal nilpotent element $f$
and we compare our results with the ones in \cite{DSKW10}.
The complete list of commands provided by the packages \textsl{MasterPVA} and\\ \textsl{MasterPVAmulti}
(respectively \textsl{WAlg})
is given in Section \ref{sec:4} (respectively Section \ref{sec:6}).
\subsubsection*{Acquiring the packages}
The packages have been developed with Mathematica 9.0 and can be downloaded from
\url{http://www.theatron.it/daniele/MasterPVA_files.tar.gz}
\noindent where it is also possible to find the related libraries and the examples provided in this paper.
\subsubsection*{Acknowledgments}
We wish to thank Alberto De Sole, Boris Dubrovin and Victor Kac for introducing us to the
fascinating theory of integrable systems.
Part of this work was done during the visit of the authors to the Department of Mathematics of
the University of Rome La Sapienza in January and February 2016, and to the Department of Mathematics and Applications of the University of Milan-Bicocca in January 2017. We wish to thank these institutions for their kind hospitality.
We also wish to thank theatrOn.it for hosting the package files.
The first author is supported by the INdAM-Cofund-2012 grant ``MPoisCoho -- Poisson cohomology of multidimensional Hamiltonian structures''.
The second author is supported by an NSFC
``Research Fund for International Young Scientists'' grant.
\section{Poisson vertex algebras and Hamiltonian equations}\label{sec:pva}
In this section we review the connection between Poisson vertex algebras
and the theory of Hamiltonian equations as laid down in \cite{BDSK09}.
\subsection{Poisson vertex algebras}
Let $\mc V$ be a \emph{differential algebra}, namely a unital commutative associative algebra
over a field $\mb F$ of characteristic $0$, with a derivation $\partial:\mc V\to\mc V$.
\begin{definition}
\begin{enumerate}[(a)]
\item
A \emph{$\lambda$-bracket} on $\mc V$ is an $\mb F$-linear map
$\mc V\otimes\mc V\to\mb F[\lambda]\otimes\mc V$,
denoted by $f\otimes g\to\{f_\lambda g\}$,
satisfying \emph{sesquilinearity} ($f,g\in\mc V$):
\begin{equation}\label{sesqui}
\{\partial f_\lambda g\}=
-\lambda\{f_\lambda g\},\qquad
\{f_\lambda\partial g\}=
(\lambda+\partial)\{f_\lambda g\}\,,
\end{equation}
and the \emph{left and right Leibniz rules} ($f,g,h\in\mc V$):
\begin{align}\label{lleibniz}
\{f_\lambda gh\}&=
\{f_\lambda g\}h+\{f_\lambda h\}g,\\
\{fh_\lambda g\}&=
\{f_{\lambda+\partial}g\}_{\rightarrow}h+\{h_{\lambda+\partial}g\}_{\rightarrow}f\,,\label{rleibniz}
\end{align}
where we use the following notation: if
$\{f_\lambda g\}=\sum_{n\in\mb Z_+}\lambda^n c_n$,
then
$\{f_{\lambda+\partial}g\}_{\rightarrow}h=\sum_{n\in\mb Z_+}c_n(\lambda+\partial)^nh$.
\item
We say that the $\lambda$-bracket is \emph{skew-symmetric} if
\begin{equation}\label{skewsim}
\{g_\lambda f\}+\{f_{-\lambda-\partial}g\}=0\,,
\end{equation}
where, now,
$\{f_{-\lambda-\partial}g\}=\sum_{n\in\mb Z_+}(-\lambda-\partial)^nc_n$
(if there is no arrow we move $\partial$ to the left).
\item
A \emph{Poisson vertex algebra} (PVA) is a differential algebra $\mc V$ endowed
with a $\lambda$-bracket which is skew-symmetric and satisfies
the following \emph{Jacobi identity} in $\mc V[\lambda,\mu]$ ($f,g,h\in\mc V$):
\begin{equation}\label{jacobi}
\{f_\lambda\{g_\mu h\}\}-
\{\{f_\lambda g\}_{\lambda+\mu}h\}-
\{g_\mu\{f_\lambda h\}\}=0\,.
\end{equation}
\end{enumerate}
\end{definition}
\begin{example}\label{affinePVA}
Let $\mf g$ be a Lie algebra over $\mb F$ with a symmetric invariant bilinear form $\kappa$,
and let $s$ be an element of $\mf g$.
The \emph{affine PVA} associated to the triple $(\mf g,\kappa,s)$,
is the algebra of differential polynomials $\mc V=S(\mb F[\partial]\mf g)$
(where $\mb F[\partial]\mf g$ is the free $\mb F[\partial]$-module generated by $\mf g$
and $S(R)$ denotes the symmetric algebra over the $\mb F$-vector space $R$)
together with the $\lambda$-bracket given by
\begin{equation}\label{affinebracket}
\{a_\lambda b\}=[a,b]+\kappa(s|[a,b])+\kappa(a| b)\lambda
\qquad\text{ for } a,b\in\mf g
\,,
\end{equation}
and extended to $\mc V$ by sesquilinearity and the left and right Leibniz rules.
\end{example}
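For instance, for $\mf g=\mf{sl}_2$ with the trace form $\kappa(a|b)=\mathrm{tr}(ab)$ and $s=e$ (choices made only for this illustration, independent of the Mathematica packages discussed below), the bracket \eqref{affinebracket} on generators can be tabulated directly. The following Python/SymPy sketch returns the Lie-algebra part and the scalar part of $\{a_\lambda b\}$ separately:

```python
import sympy as sp

lam = sp.symbols("lambda")
# Standard sl2 basis and a choice of s (assumptions of the sketch)
e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
h = sp.Matrix([[1, 0], [0, -1]])
s = e

def kappa(a, b):
    """Trace form kappa(a|b) = tr(ab)."""
    return (a * b).trace()

def affine_bracket(a, b):
    """{a_lambda b} = [a,b] + kappa(s|[a,b]) + kappa(a|b) lambda,
    returned as (Lie-algebra part, scalar part)."""
    comm = a * b - b * a
    return comm, sp.expand(kappa(s, comm) + kappa(a, b) * lam)

comm, scal = affine_bracket(e, f)
assert comm == h and scal == lam        # [e,f] = h, tr(s h) = 0, tr(ef) = 1
_, scal_h = affine_bracket(h, h)
assert scal_h == 2 * lam                # [h,h] = 0, tr(h^2) = 2
_, scal_fh = affine_bracket(f, h)
assert scal_fh == 2                     # [f,h] = 2f, tr(s 2f) = 2, tr(fh) = 0
```

The scalar part collects the central terms of \eqref{affinebracket}; the Lie-algebra part is then extended to all of $\mc V$ by sesquilinearity and the Leibniz rules.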
\subsection{Poisson vertex algebra structures on algebras of differential functions}\label{sec:1.2}
The basic examples of differential algebras are
the \emph{algebras of differential polynomials} in the variables $u_1,\ldots,u_{\ell}$:
$$
\mc R_\ell=\mb F[u_i^{(n)}\mid i\in I=\{1,\ldots,\ell\},n\in\mb Z_+]\,,
$$
where $\partial$ is the derivation defined by $\partial(u_i^{(n)})=u_i^{(n+1)}$, $i\in I,n\in\mb Z_+$.
Note that we have in $\mc R_\ell$ the following commutation relations:
\begin{equation}\label{partialcomm}
\left[\frac{\partial}{\partial u_i^{(n)}},\partial\right]=\frac{\partial}{\partial u_i^{(n-1)}}\,,
\end{equation}
where the RHS is considered to be zero if $n=0$.
An \emph{algebra of differential functions} in the variables $u_1,\dots,u_\ell$ is a differential algebra extension $\mc V$ of $\mc R_\ell$, endowed
with commuting derivations
$$
\frac{\partial}{\partial u_{i}^{(n)}}:\mc V\to\mc V\,,\qquad i\in I\,,\, n\in\mb Z_+\,,
$$
extending the usual partial derivatives on $\mc R_\ell$, such that only a finite number
of $\frac{\partial f}{\partial u_{i}^{(n)}}$ are non-zero for each $f\in\mc V$, and such that the commutation relations
\eqref{partialcomm} hold on $\mc V$.
The \emph{variational derivative} of $f\in\mc V$ with respect to $u_i$ is, by definition,
$$
\frac{\delta f}{\delta u_i}=\sum_{n\in\mb Z_+}(-\partial)^n\frac{\partial f}{\partial u_i^{(n)}}\,.
$$
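Modelling the jet variables $u^{(n)}$ as independent symbols (truncated at a fixed order, which is an assumption of this sketch), the variational derivative can be computed in a few lines of Python/SymPy; this is only an illustration, independent of the Mathematica packages discussed below.

```python
import sympy as sp

N = 8                               # truncation order for u^(n) (assumption)
uu = sp.symbols(f"u0:{N+1}")        # u0 = u, u1 = u', u2 = u'', ...

def D(f):
    """Total derivative: D(u^(n)) = u^(n+1)."""
    return sum(sp.diff(f, uu[n]) * uu[n + 1] for n in range(N))

def variational_derivative(f):
    """delta f/delta u = sum_n (-D)^n (df/du^(n))."""
    res = sp.S.Zero
    for n in range(N + 1):
        term = sp.diff(f, uu[n])
        for _ in range(n):
            term = -D(term)
        res += term
    return sp.expand(res)

# f = u*u'' + (u')^2 = (u u')' is a total derivative, so delta f/delta u = 0
f = uu[0] * uu[2] + uu[1] ** 2
assert variational_derivative(f) == 0
# delta(u^3/3)/delta u = u^2
assert variational_derivative(uu[0] ** 3 / 3) == uu[0] ** 2
```

The first check illustrates the well-known fact that total derivatives lie in the kernel of the variational derivative.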
The following result explains how to extend an arbitrary $\lambda$-bracket on a set of variables $\{u_i\}_{i\in I}$,
with value in some algebra of differential functions $\mc V$, to a PVA structure on $\mc V$.
\begin{theorem}[{\cite[Theorem 1.15]{BDSK09}}]\label{master}
Let $\mc V$ be an algebra of differential functions in the variables $\{u_i\}_{i\in I}$,
and let $H_{ij}(\lambda)\in\mb F[\lambda]\otimes\mc V,\,i,j\in I$.
\begin{enumerate}[(a)]
\item
The Master Formula
\begin{equation}\label{masterformula}
\{f_\lambda g\}=
\sum_{\substack{i,j\in I\\m,n\in\mb Z_+}}\frac{\partial g}{\partial u_j^{(n)}}(\lambda+\partial)^n
H_{ji}(\lambda+\partial)(-\lambda-\partial)^m\frac{\partial f}{\partial u_i^{(m)}}
\end{equation}
defines a $\lambda$-bracket on $\mc V$
with given $\{u_i{}_\lambda u_j\}=H_{ji}(\lambda),\,i,j\in I$.
\item
The $\lambda$-bracket \eqref{masterformula} on $\mc V$ satisfies the skew-symmetry
condition \eqref{skewsim} provided that the same holds on generators ($i,j\in I$):
\begin{equation}\label{skewsimgen}
\{u_i{}_\lambda u_j\}+\{u_j{}_{-\lambda-\partial}u_i\}=0\,.
\end{equation}
\item
Assuming that the skew-symmetry condition \eqref{skewsimgen} holds,
the $\lambda$-bracket \eqref{masterformula} satisfies the Jacobi identity \eqref{jacobi},
thus making $\mc V$ a PVA, provided that the Jacobi identity holds
on any triple of generators ($i,j,k\in I$):
\begin{equation}\label{jacobigen}
\{u_i{}_\lambda\{u_j{}_\mu u_k\}\}
-\{u_j{}_\mu\{u_i{}_\lambda u_k\}\}
-\{\{u_i{}_\lambda u_j\}_{\lambda+\mu}u_k\}
=0\,.
\end{equation}
\end{enumerate}
\end{theorem}
By Theorem \ref{master}(a), if $\mc V$ is an algebra of differential functions
in the variables $\{u_i\}_{i\in I}$, there is a bijective correspondence between
$\ell\times\ell$-matrices
$H(\lambda)=
\left(H_{ij}(\lambda)\right)_{i,j\in I}\in\Mat_{\ell\times \ell}\mc V[\lambda]$
and the $\lambda$-brackets $\{\cdot\,_\lambda\,\cdot\}_H$ on $\mc V$ defined by the Master formula \eqref{masterformula}.
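For $\ell=1$, the Master Formula can be implemented directly, giving a toy analogue of what \textsl{MasterPVA} does internally. The following Python/SymPy sketch (with the derivative order truncated at a fixed bound, an assumption of the illustration) reproduces, for the structure $H(\lambda)=\lambda$, the bracket $\{u^2{}_\lambda u^2\}=4\lambda u^2+4uu'$ predicted by the Leibniz rules.

```python
import sympy as sp

N = 6                                   # truncate at u^(N) (assumption)
uu = sp.symbols(f"u0:{N+1}")            # u0 = u, u1 = u', ...
lam = sp.symbols("lambda")

def D(f):
    """Total derivative: D(u^(n)) = u^(n+1)."""
    return sum(sp.diff(f, uu[n]) * uu[n + 1] for n in range(N))

def master_bracket(f, g, Hcoeffs):
    """Scalar (l=1) Master Formula; Hcoeffs[k] is the coefficient of
    lambda^k in H(lambda)."""
    total = sp.S.Zero
    for n in range(N + 1):
        for m in range(N + 1):
            term = sp.diff(f, uu[m])
            for _ in range(m):                  # apply (-lambda-D)^m
                term = sp.expand(-lam * term - D(term))
            powers = [term]                     # H(lambda+D) acting on term
            for _ in range(len(Hcoeffs) - 1):
                powers.append(sp.expand(lam * powers[-1] + D(powers[-1])))
            term = sum(ck * p for ck, p in zip(Hcoeffs, powers))
            for _ in range(n):                  # apply (lambda+D)^n
                term = sp.expand(lam * term + D(term))
            total += sp.diff(g, uu[n]) * term
    return sp.expand(total)

# GFZ structure: {u_lambda u} = lambda, i.e. H(lambda) = lambda
gfz = [0, 1]
assert master_bracket(uu[0], uu[0], gfz) == lam
assert master_bracket(uu[0]**2, uu[0]**2, gfz) == 4*lam*uu[0]**2 + 4*uu[0]*uu[1]
```

Here $H(\lambda+\partial)$ and the powers $(\lambda+\partial)^n$, $(-\lambda-\partial)^m$ are applied as operators, with $\partial$ acting on everything to their right, exactly as in \eqref{masterformula}.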
\begin{definition}\label{hamop}
A \emph{Poisson structure} on $\mc V$ is a matrix $H\in\Mat_{\ell\times\ell}\mc V[\lambda]$
such that the corresponding $\lambda$-bracket $\{\cdot\,_\lambda\,\cdot\}_H$
defines a PVA structure on $\mc V$.
\end{definition}
\begin{example}\label{affineHAMOP}
Consider the affine PVA defined in Example \ref{affinePVA}.
Let $\{u_i\}_{i\in I}$ be a basis of $\mf g$.
The Poisson structure
$H=\left(H_{ij}(\lambda)\right)\in\Mat_{\ell\times\ell}\mc V[\lambda]$
corresponding to the $\lambda$-bracket defined in \eqref{affinebracket} is given by
$$
H_{ij}(\lambda)=
\{u_j{}_{\lambda}u_i\}=
[u_j,u_i]+\kappa(s|[u_j,u_i])+\kappa(u_i| u_j)\lambda\,.
$$
\end{example}
\subsection{Poisson structures and Hamiltonian equations}
The relation between PVAs and Hamiltonian equations
associated to a Poisson structure is based on the following simple observation.
\begin{proposition}\label{pvahamop}
Let $\mc V$ be a PVA. The $0$-th product on $\mc V$
induces a well defined Lie algebra bracket on the quotient space $\quot{\mc V}{\partial\mc V}$:
\begin{equation}\label{lambda=0}
\{{\textstyle\int} f,{\textstyle\int} g\}={\textstyle\int} \left.\{f_\lambda g\}\right|_{\lambda=0}\,,
\end{equation}
where ${\textstyle\int}:\mc V\to\quot{\mc V}{\partial\mc V}$ is the canonical quotient map.
Moreover, we have a well defined Lie algebra action of $\quot{\mc V}{\partial\mc V}$ on $\mc V$
by derivations of the commutative associative product on $\mc V$, commuting with $\partial$,
given by
$$
\{{\textstyle\int} f,g\}=\{f_\lambda g\}|_{\lambda=0}\,.
$$
\end{proposition}
In the special case when $\mc V$ is an algebra of differential functions in $\ell$ variables $\{u_i\}_{i\in I}$
and the PVA $\lambda$-bracket on $\mc V$ is associated to the Poisson structure
$H\in\Mat_{\ell\times\ell}\mc V[\lambda]$,
the Lie bracket \eqref{lambda=0} on $\quot{\mc V}{\partial\mc V}$ takes the form
(cf. \eqref{masterformula}):
\begin{equation}\label{liebrak}
\{{\textstyle\int} f,{\textstyle\int} g\}
=\sum_{i,j\in I}\int\frac{\delta g}{\delta u_j}H_{ji}(\partial)\frac{\delta f}{\delta u_i}
\,.
\end{equation}
\begin{definition}\label{hamsys}
Let $\mc V$ be an algebra of differential functions
with a Poisson structure $H$.
\begin{enumerate}[(a)]
\item
Elements of $\quot{\mc V}{\partial\mc V}$ are called \emph{local functionals}.
\item
Given a local functional $\int h\in\quot{\mc V}{\partial\mc V}$,
the corresponding \emph{Hamiltonian equation} is
\begin{equation}\label{hameq}
\frac{du}{dt}
=\{{\textstyle\int} h,u\}_H
\qquad
\Big(\text{equivalently,}\quad
\frac{du_i}{dt}
=\sum_{j\in I}H_{ij}(\partial)\frac{\delta h}{\delta u_j},\ i\in I\Big)\,.
\end{equation}
\item
A local functional $\int f\in\quot{\mc V}{\partial\mc V}$ is called an \emph{integral of motion}
of equation \eqref{hameq} if $\frac{df}{dt}=0\mod\partial\mc V$
in virtue of \eqref{hameq}, or, equivalently, if ${\textstyle\int} h$ and ${\textstyle\int} f$ are \emph{in involution}:
$$
\{{\textstyle\int} h,{\textstyle\int} f\}_H=0\,.
$$
Namely, ${\textstyle\int} f$ lies in the centralizer of ${\textstyle\int} h$ in the Lie algebra $\quot{\mc V}{\partial\mc V}$
with Lie bracket \eqref{liebrak}.
\item
Equation \eqref{hameq} is called \emph{integrable}
if there exists an infinite sequence ${\textstyle\int} f_0={\textstyle\int} h,\,{\textstyle\int} f_1,\,{\textstyle\int} f_2,\dots$,
of linearly independent integrals of motion in involution.
The corresponding \emph{integrable hierarchy of Hamiltonian equations} is
\begin{equation}\label{eq:hierarchy}
\frac{du}{dt_n}=\{{\textstyle\int} f_n,u\}_H,\ n\in\mb Z_+
\,.
\end{equation}
(Equivalently,
$\frac{du_i}{dt_n}
=\sum_{j\in I}H_{ij}(\partial)\frac{\delta f_n}{\delta u_j}$, $n\in\mb Z_+,\,i\in I$.)
\end{enumerate}
\end{definition}
\subsection{Multidimensional Poisson Vertex Algebras}\label{sec:multi}
The definition of a PVA has been extended in \cite{Cas15} in order to study Hamiltonian evolutionary PDEs with several spatial dimensions.
A \emph{$D$-dimensional differential algebra} is a unital commutative associative algebra $\mc V$
over a field $\mb F$ of characteristic $0$, endowed with $D$ commuting derivations $\partial_\alpha:\mc V\to\mc V$,
$\alpha=1\dots,D$.
\begin{definition}
\begin{enumerate}[(a)]
\item
A \emph{$D$-dimensional $\lambda$-bracket} on $\mc V$ is an $\mb F$-linear map
$\mc V\otimes\mc V\to\mb F[\lambda_1,\dots,\lambda_D]\otimes\mc V$,
denoted by $f\otimes g\to\{f_\lambda g\}$,
satisfying \emph{sesquilinearity} ($f,g\in\mc V$, $\alpha=1,\dots,D$):
\begin{equation}\label{sesqui_multi}
\{\partial_\alpha f_\lambda g\}=
-\lambda_\alpha\{f_\lambda g\},\qquad
\{f_\lambda\partial_\alpha g\}=
(\lambda_\alpha+\partial_\alpha)\{f_\lambda g\}\,,
\end{equation}
and the \emph{left and right Leibniz rules} \eqref{lleibniz} and \eqref{rleibniz}.
\item
We say that the $\lambda$-bracket is \emph{skew-symmetric} if
equation \eqref{skewsim} is satisfied.
\item
A \emph{$D$-dimensional PVA} is a $D$-dimensional differential algebra $\mc V$ endowed
with a $\lambda$-bracket which is skew-symmetric and satisfies
the \emph{Jacobi identity} \eqref{jacobi} in $\mc V[\lambda_1,\dots,\lambda_D,\mu_1,\dots,\mu_D]$.
\end{enumerate}
\end{definition}
The definition of an algebra of differential functions given in Section \ref{sec:1.2} can be generalized to the $D$-dimensional case,
and an analogue of Theorem \ref{master} holds (see \cite[Theorem 1]{Cas15}).
\section{Classical affine \texorpdfstring{$\mc W$}{W}-algebras}\label{sec:dsred}
In this section we recall the definition of classical affine $\mc W$-algebras $\mc W(\mf g,f)$ in the language
of Poisson vertex algebras, following \cite{DSKV13}
(which is a development of \cite{DS85}).
\subsection{Setup and notation}\label{slod.4}
Let $\mf g$ be a simple Lie algebra with a non-degenerate symmetric invariant bilinear form $(\cdot\,|\,\cdot)$,
and let $\{f,2x,e\}\subset\mf g$ be an $\mf{sl}_2$-triple in $\mf g$.
We have the corresponding $\ad x$-eigenspace decomposition
$$
\mf g=\bigoplus_{k\in\frac{1}{2}\mb Z}\mf g_{k}
\,\,\text{ where }\,\,
\mf g_k=\big\{a\in\mf g\,\big|\,[x,a]=ka\big\}
\,.
$$
Clearly, $f\in\mf g_{-1}$, $x\in\mf g_{0}$ and $e\in\mf g_{1}$.
We let $d$ be the \emph{depth} of the grading, i.e. the maximal eigenvalue of $\ad x$.
By representation theory of $\mf{sl}_2$, the Lie algebra $\mf g$ admits the direct sum decompositions
\begin{equation}\label{20140221:eq4}
\mf g
=\mf g^f\oplus[e,\mf g]
=\mf g^e\oplus[f,\mf g]
\,.
\end{equation}
They are dual to each other, in the sense that $\mf g^f\perp[f,\mf g]$ and $[e,\mf g]\perp\mf g^e$.
For $a\in\mf g$, we denote by $a^\sharp=\pi_{\mf g^f}(a)\in\mf g^f$ its component in $\mf g^f$
with respect to the first decomposition in \eqref{20140221:eq4}.
Note that, since $[e,\mf g]$ is orthogonal to $\mf g^e$,
the spaces $\mf g^f$ and $\mf g^e$ are non-degenerately paired by $(\cdot\,|\,\cdot)$.
Next, we choose a basis of $\mf g$ as follows.
Let $\{q_j\}_{j\in J^f}$ be a basis of $\mf g^f$ consisting of $\ad x$-eigenvectors,
and let $\{q^j\}_{j\in J^f}$ be the dual basis of $\mf g^e$.
For $j\in J^f$,
we let $\delta(j)\in\frac12\mb Z$ be the $\ad x$-eigenvalue of $q^j$,
so that
\begin{equation}\label{20130520:eq5}
[x,q_j]=-\delta(j)q_j
\,\,,\,\,\,\,
[x,q^j]=\delta(j)q^j
\,.
\end{equation}
For $k\in\frac12\mb Z_+$
we also let $J^f_{-k}=\{i\in J^f\,|\,\delta(i)=k\}\subset J^f$,
so that $\{q_j\}_{j\in J^f_{-k}}$ is a basis of $\mf g^f_{-k}$,
and $\{q^j\}_{j\in J^f_{-k}}$ is the dual basis of $\mf g^e_{k}$.
By representation theory of $\mf{sl}_2$,
we get a basis of $\mf g$ consisting of the following elements:
\begin{equation}\label{20140221:eq1}
q^j_n=(\ad f)^nq^j
\,\,\text{ where }\,\,
n\in\{0,\dots,2\delta(j)\}
\,\,,\,\,\,\,
j\in J^f
\,.
\end{equation}
This basis consists of $\ad x$-eigenvectors,
and, for $k\in\frac12\mb Z$ such that $-d\leq k\leq d$,
the corresponding basis of $\mf g_k\subset\mf g$ is
$\{q^j_{n}\}_{(j,n)\in J_{-k}}$,
where $J_{-k}$ is the following index set
\begin{equation}\label{20140221:eq5}
J_{-k}
=
\Big\{
(j,n)\in J^f\times\mb Z_+\,\Big|\,
\delta(j)-|k|\in\mb Z_+,\,n=\delta(j)-k
\Big\}
\,.
\end{equation}
The union of all these index sets is the index set for the basis of $\mf g$:
\begin{equation}\label{20140221:eq6}
J
=
\bigsqcup_{h\in\frac12\mb Z}J_h
=
\Big\{
(j,n)\,\Big|\,
j\in J^f,\,n\in\{0,\dots,2\delta(j)\}
\Big\}
\,.
\end{equation}
By \cite[Lemma 2.5]{DSKV16}, the corresponding basis of $\mf g$ dual to \eqref{20140221:eq1} is given
by ($(j,n)\in J$):
\begin{equation}\label{20140221:eq3}
q_j^n
=
\frac{(-1)^n}{(n!)^2\binom{2\delta(j)}{n}}
(\ad e)^nq_j
\,.
\end{equation}
Clearly, the bases \eqref{20140221:eq1} and \eqref{20140221:eq3}
are compatible with the direct sum decompositions \eqref{20140221:eq4}.
In fact, we can write the corresponding projections
$\pi_{\mf g^f}$, $\pi_{[e,\mf g]}=1-\pi_{\mf g^f}$,
$\pi_{\mf g^e}$, and $\pi_{[f,\mf g]}=1-\pi_{\mf g^e}$,
in terms of these bases:
\begin{equation}\label{20130520:eq1}
\begin{array}{l}
\displaystyle{
\vphantom{Big(}
a^\sharp = \pi_{\mf g^f}(a)=\sum_{j\in J^f}(a|q^j)q_j
\,\,,\,\,\,\,
\pi_{[e,\mf g]}(a)=\sum_{j\in J^f}\sum_{n=1}^{2\delta(j)}(a|q^j_n)q_j^n
\,,} \\
\displaystyle{
\vphantom{Big(}
\pi_{\mf g^e}(a)=\sum_{j\in J^f}(a|q_j)q^j
\,\,,\,\,\,\,
\pi_{[f,\mf g]}(a)=\sum_{j\in J^f}\sum_{n=1}^{2\delta(j)}(a|q_j^n)q^j_n
\,.}
\end{array}
\end{equation}
Note that when $\delta(j)=0$, then the sums over $n$ in \eqref{20130520:eq1} become empty sums.
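These formulas can be tested in the smallest example: for $\mf g=\mf{sl}_2$ with its principal $\mf{sl}_2$-triple we have $\mf g^f=\mb Ff$ and $\delta=1$, and the bases \eqref{20140221:eq1} and \eqref{20140221:eq3} can be checked to be dual. The following Python/SymPy sketch (using the trace form $(a|b)=\mathrm{tr}(ab)$, an assumption of the illustration) verifies this:

```python
import sympy as sp
from math import comb, factorial

e = sp.Matrix([[0, 1], [0, 0]])
f = sp.Matrix([[0, 0], [1, 0]])
x = sp.Matrix([[sp.Rational(1, 2), 0], [0, -sp.Rational(1, 2)]])

ad = lambda a, b: a * b - b * a
form = lambda a, b: (a * b).trace()         # (a|b) = tr(ab)

# sl2-triple relations for {f, 2x, e}
assert ad(x, e) == e and ad(x, f) == -f and ad(e, f) == 2 * x

# g^f = F f; q_f = f has ad x-eigenvalue -1, so delta = 1, and q^f = e
delta = 1

# basis q^j_n = (ad f)^n q^j of eq. (20140221:eq1)
q_up = [e]
for n in range(2 * delta):
    q_up.append(ad(f, q_up[-1]))

# dual basis q_j^n = (-1)^n / ((n!)^2 C(2 delta, n)) (ad e)^n q_j
q_dn = []
v = f
for n in range(2 * delta + 1):
    coeff = sp.Rational((-1) ** n, factorial(n) ** 2 * comb(2 * delta, n))
    q_dn.append(coeff * v)
    v = ad(e, v)

# duality: (q^j_n | q_i^m) = delta_{ij} delta_{nm}
for n, a in enumerate(q_up):
    for m, b in enumerate(q_dn):
        assert form(a, b) == (1 if n == m else 0)
```

Here $q^f_0=e$, $q^f_1=-2x$, $q^f_2=-2f$, with dual basis $f$, $-x$, $-\tfrac12 e$, in agreement with \eqref{20140221:eq3}.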
\subsection{Construction of the classical affine \texorpdfstring{$\mc W$}{W}-algebra}
\label{sec:2.1}
Recall from Example \ref{affinePVA} that
given an element $s\in\mf g$, we have a PVA structure on
the algebra of differential polynomials $\mc V(\mf g)=S(\mb F[\partial]\mf g)$,
with $\lambda$-bracket given on generators by
\begin{equation}\label{lambda}
\{a_\lambda b\}_z=[a,b]+(a| b)\lambda+z(s|[a,b]),
\qquad a,b\in\mf g\,,
\end{equation}
and extended to $\mc V(\mf g)$ by the sesquilinearity axioms and the Leibniz rules.
Here $z$ is an element of the field $\mb F$.
We shall assume that $s$ lies in $\mf g_d$.
In this case the $\mb F[\partial]$-submodule
$\mb F[\partial]\mf g_{\geq\frac12}\subset\mc V(\mf g)$
is a Lie conformal subalgebra (see \cite{Kac98} for the definition)
with the $\lambda$-bracket $\{a_\lambda b\}_z=[a,b]$, $a,b\in\mf g_{\geq\frac12}$
(it is independent of $z$, since $s$ commutes with $\mf g_{\geq\frac12}$).
Consider the differential subalgebra
$\mc V(\mf g_{\leq\frac12})=S(\mb F[\partial]\mf g_{\leq\frac12})$ of $\mc V(\mf g)$,
and denote by $\rho:\,\mc V(\mf g)\twoheadrightarrow\mc V(\mf g_{\leq\frac12})$,
the differential algebra homomorphism defined on generators by
\begin{equation}\label{rho}
\rho(a)=\pi_{\leq\frac12}(a)+(f| a),
\qquad a\in\mf g\,,
\end{equation}
where $\pi_{\leq\frac12}:\,\mf g\to\mf g_{\leq\frac12}$ denotes
the projection with kernel $\mf g_{\geq1}$.
Recall from \cite{DSKV13} that
we have a representation of the Lie conformal algebra $\mb F[\partial]\mf g_{\geq\frac12}$
on the differential subalgebra $\mc V(\mf g_{\leq\frac12})\subset\mc V(\mf g)$ given by
($a\in\mf g_{\geq\frac12}$, $g\in\mc V(\mf g_{\leq\frac12})$):
\begin{equation}\label{20120511:eq1}
a\,^\rho_\lambda\,g=\rho\{a_\lambda g\}_z
\end{equation}
(note that the RHS is independent of $z$ since, by assumption, $s\in Z(\mf g_{\geq\frac12})$).
The \emph{classical affine} $\mc W$-\emph{algebra} is, by definition,
the differential algebra
\begin{equation}\label{20120511:eq2}
\mc W=\mc W(\mf g,f)
=\big\{g\in\mc V(\mf g_{\leq\frac12})\,\big|\,a\,^\rho_\lambda\,g=0\,\text{ for all }a\in\mf g_{\geq\frac12}\}\,,
\end{equation}
endowed with the following PVA $\lambda$-bracket
\begin{equation}\label{20120511:eq3}
\{g_\lambda h\}_{z,\rho}=\rho\{g_\lambda h\}_z,
\qquad g,h\in\mc W\,.
\end{equation}
\begin{remark}\label{hierarchies_W}
Thinking of $z$ as a formal parameter, equation \eqref{20120511:eq3} gives a 1-parameter
family of PVA structures on $\mc W$, or, equivalently, a bi-Poisson structure.
Indeed, we can write $\{g_\lambda h\}_{z,\rho}=\{g_\lambda h\}_{1,\rho}+z\{g_\lambda h\}_{0,\rho}$, for every $g,h\in\mc W$.
The $\lambda$-bracket $\{\cdot\,_\lambda\,\cdot\}_{1,\rho}$ does not depend on the choice
of $s\in\mf g_d$, while $\{\cdot\,_\lambda\,\cdot\}_{0,\rho}$ does.
Generalizing the results in \cite{DS85} it has been shown in \cite{DSKV13}, using the
Lenard-Magri scheme of integrability \cite{Mag78}, that it is possible to construct an integrable hierarchy of
bi-Hamiltonian equations for $\mc W$, known as generalized Drinfeld-Sokolov hierarchy, under the assumption that $f+s\in\mf g$ is a semisimple element.
Recently, generalized Drinfeld-Sokolov hierarchies for any nilpotent element $f\in\mf{gl}_N$ and
non-zero $s\in\mf g_d$ have been constructed in \cite{DSKVnew} using the theory of Adler type
pseudodifferential operators \cite{DSKVold}.
\end{remark}
\subsection{Structure Theorem for classical affine \texorpdfstring{$\mc W$}{W}-algebras}
\label{sec:2.2}
In the algebra of differential polynomials $\mc V(\mf g_{\leq\frac12})$
we introduce the grading by \emph{conformal weight},
denoted by $\Delta\in\frac12\mb Z$, defined as follows.
For $a\in\mf g$ such that $[x,a]=\delta(a)a$, we let $\Delta(a)=1-\delta(a)$.
For a monomial $g=a_1^{(m_1)}\dots a_s^{(m_s)}$,
product of derivatives of $\ad x$ eigenvectors $a_i\in\mf g_{\leq\frac12}$,
we define its conformal weight as
\begin{equation}\label{degcw}
\Delta(g)=\Delta(a_1)+\dots+\Delta(a_s)+m_1+\dots+m_s\,.
\end{equation}
Thus we get the conformal weight space decomposition
$$
\mc V(\mf g_{\leq\frac12})=\bigoplus_{\Delta\in\frac12\mb Z_+}\mc V(\mf g_{\leq\frac12})\{\Delta\}\,.
$$
For example $\mc V(\mf g_{\leq\frac12})\{0\}=\mb F$,
$\mc V(\mf g_{\leq\frac12})\{\frac12\}=\mf g_{\frac12}$,
and $\mc V(\mf g_{\leq\frac12})\{1\}=\mf g_{0}\oplus S^2\mf g_{\frac12}$.
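In code, the weight \eqref{degcw} of a monomial is a one-line sum. The following Python sketch encodes each factor $a_i^{(m_i)}$ as a pair $(\delta(a_i),m_i)$ (an encoding chosen only for the illustration) and reproduces the weight spaces listed above:

```python
from fractions import Fraction

def conformal_weight(factors):
    """Delta of a monomial a_1^(m_1)...a_s^(m_s); each factor is a pair
    (delta(a_i), m_i), and Delta(a_i) = 1 - delta(a_i)."""
    return sum(1 - delta + m for delta, m in factors)

half = Fraction(1, 2)
assert conformal_weight([(0, 0)]) == 1                 # g_0 sits in weight 1
assert conformal_weight([(half, 0), (half, 0)]) == 1   # S^2 g_{1/2} in weight 1
assert conformal_weight([(half, 0)]) == half           # g_{1/2} in weight 1/2
assert conformal_weight([(0, 1), (half, 0)]) == Fraction(5, 2)
```

The empty product (the constant $1$) gets weight $0$, consistent with $\mc V(\mf g_{\leq\frac12})\{0\}=\mb F$.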
\begin{theorem}[\cite{DSKV13}]\label{daniele2}
Consider the PVA $\mc W=\mc W(\mf g,f)$ with the $\lambda$-bracket $\{\cdot\,_\lambda\,\cdot\}_{z,\rho}$
defined by equation \eqref{20120511:eq3}.
\begin{enumerate}[(a)]
\item
For every element $q\in\mf g^f_{1-\Delta}$ there exists a (not necessarily unique)
element $w\in\mc W\{\Delta\}=\mc W\cap\mc V(\mf g_{\leq\frac12})\{\Delta\}$
of the form $w=q+g$, where
\begin{equation}\label{20140221:eq9}
g=\sum b_1^{(m_1)}\dots b_s^{(m_s)}\in\mc V(\mf g_{\leq\frac12})\{\Delta\}\,,
\end{equation}
is a sum of products of derivatives of $\ad x$-eigenvectors
$b_i\in\mf g_{1-\Delta_i}\subset\mf g_{\leq\frac12}$,
such that
$$
\Delta_1+\dots+\Delta_s+m_1+\dots+m_s=\Delta
\,\,\text{ and }\,\,
s+m_1+\dots+m_s>1
\,.
$$
\item
Let $\{w_j=q_j+g_j\}_{j\in J^f}$ be any collection of elements in $\mc W$ as in part (a).
(Recall, from Section \ref{slod.4}, that $\{q_j\}_{j\in J^f}$ is a basis of $\mf g^f$ consisting of
$\ad x$-eigenvectors.)
Then the differential subalgebra $\mc W\subset\mc V(\mf g_{\leq\frac12})$
is the algebra of differential polynomials in the variables $\{w_j\}_{j\in J^f}$.
The algebra $\mc W$ is a graded associative algebra,
graded by the conformal weights defined in \eqref{degcw}:
$\mc W
=
\mb F\oplus\mc W\{1\}\oplus\mc W\{\frac32\}\oplus\mc W\{2\}\oplus\mc W\{\frac52\}\oplus\dots$.
\end{enumerate}
\end{theorem}
Recall the first of the direct sum decompositions \eqref{20140221:eq4}.
By assumption, the elements $q^0_j=q_j,\,j\in J^f$, form a basis of $\mf g^f$,
and by construction the elements $q^n_j,\,(j,n)\in J$, with $n\geq1$,
form a basis of $[e,\mf g]$
(here we are using the notation from Section \ref{slod.4}).
Since $\mf g^f\subset\mf g_{\leq\frac12}$, we have the corresponding direct sum decomposition
\begin{equation}\label{20140221:eq7}
\mf g_{\leq\frac12}=\mf g^f\oplus[e,\mf g_{\leq-\frac12}]\,.
\end{equation}
It follows that the algebra of differential polynomials $\mc V(\mf g_{\leq\frac12})$
admits the following decomposition as a direct sum of subspaces
\begin{equation}\label{20140221:eq8}
\mc V(\mf g_{\leq\frac12})
=
\mc V(\mf g^f)
\oplus
\big\langle[e,\mf g_{\leq-\frac12}]\big\rangle_{\mc V(\mf g_{\leq\frac12})}
\,,
\end{equation}
where $\mc V(\mf g^f)$ is the algebra of differential polynomials over $\mf g^f$,
and $\big\langle[e,\mf g_{\leq-\frac12}]\big\rangle_{\mc V(\mf g_{\leq\frac12})}$
is the differential ideal of $\mc V(\mf g_{\leq\frac12})$
generated by $[e,\mf g_{\leq-\frac12}]$.
Theorem \ref{daniele2} implies the following result.
\begin{corollary}[\cite{DSKV16}]\label{20140221:cor}
For every $q\in\mf g^f$ there exists a unique
element $w=w(q)\in\mc W$
of the form $w=q+r$, where
$r\in\big\langle[e,\mf g_{\leq-\frac12}]\big\rangle_{\mc V(\mf g_{\leq\frac12})}$.
Moreover, if $q\in\mf g_{1-\Delta}$,
then $w(q)$ lies in $\mc W\{\Delta\}$ and $r$ is of the form \eqref{20140221:eq9}.
Consequently, $\mc W$ coincides with the algebra of differential polynomials in the variables
$w_j=w(q_j)$, $j\in J^f$.
\end{corollary}
As an immediate consequence of Theorem \ref{daniele2} and Corollary \ref{20140221:cor}
we get the following:
\begin{theorem}\label{thm:structure-W}
The map $\pi_{\mf g^f}$ restricts to a differential algebra isomorphism
$$
\pi:=\pi_{\mf g^f}|_{\mc W}:\,\mc W\,\stackrel{\sim}{\longrightarrow}\,\mc V(\mf g^f)
\,,
$$
hence we have the inverse differential algebra isomorphism
$$
w:=\pi^{-1}:\,\mc V(\mf g^f)\,\stackrel{\sim}{\longrightarrow}\,\mc W
\,,
$$
which associates to every element $q\in\mf g^f$ the (unique) element $w(q)\in\mc W$
of the form $w(q)=q+r$, with $r\in\big\langle[e,\mf g_{\leq-\frac12}]\big\rangle_{\mc V(\mf g_{\leq\frac12})}$.
\end{theorem}
\subsection{Poisson structure of the classical affine $\mc W$-algebra}
Let $\ell=\dim\mf g^f$. By Corollary \ref{20140221:cor} the Poisson structure
$H=\left(H_{ij}(\lambda)\right)_{i,j\in J^f}\in\Mat_{\ell\times\ell}\mc W[\lambda]$ associated to the classical affine
$\mc W$-algebra $\mc W$
defined by equations \eqref{20120511:eq2} and \eqref{20120511:eq3} is given by ($i,j\in J^f$)
\begin{equation}\label{ham_W}
H_{ji}(\lambda)=\{ w(q_i)_\lambda w(q_j)\}_{z,\rho}\,.
\end{equation}
For $h,k\in\frac12\mb Z$, we introduce the notation
\begin{equation}\label{20140304:eq5}
h\prec k
\,\,\text{ if and only if }\,\,
h\leq k-1
\,.
\end{equation}
Also, for $t\geq1$, we denote $\vec{k}=(k_1,k_2,\dots,k_t)\in(\frac12\mb Z)^t$,
and $J_{-\vec{k}}:=J_{-k_1}\times\dots\times J_{-k_t}$.
Therefore,
an element $(\vec{j},\vec{n})\in J_{-\vec{k}}$ is a $t$-tuple with
\begin{equation}\label{20140304:eq6}
(j_1,n_1)\in J_{-k_1},\dots,(j_t,n_t)\in J_{-k_t}
\,.
\end{equation}
The explicit expression of the Poisson structure $H$ defined by equation \eqref{ham_W} can be obtained by the following result.
\begin{theorem}[{\cite[Theorem 5.3]{DSKV16}}]\label{20140304:thm}
For $a\in \mf g^f_{-h}$ and $b\in\mf g^f_{-k}$, we have
\begin{equation}\label{20140304:eq4}
\begin{array}{l}
\displaystyle{
\phantom{\Big(}
\{w(a)_\lambda w(b)\}_{z,\rho}
=
w([a,b])+(a|b)\lambda+z(s|[a,b])
} \\
\displaystyle{
\phantom{\Big(}
-\sum_{t=1}^\infty
\sum_{
-h+1\leq k_t\prec \dots\prec k_1\leq k
}
\sum_{
(\vec{j},\vec{n})\in J_{-\vec{k}}
}
\big(
w([b,q^{j_1}_{n_1}]^\sharp)
-(b|q^{j_1}_{n_1})(\lambda+\partial)
+z(s|[b,q^{j_{1}}_{n_{1}}])
\big)
} \\
\displaystyle{
\phantom{\Big(}
\times\big(
w([q^{n_1+1}_{j_1},q^{j_2}_{n_2}]^\sharp)
-(q^{n_1+1}_{j_1}|q^{j_2}_{n_2}) (\lambda+\partial)
+z(s|[q^{n_1+1}_{j_1},q^{j_2}_{n_2}])
\big)
\dots } \\
\displaystyle{
\phantom{\Big(}
\dots
\times\big(
w([q^{n_{t-1}+1}_{j_{t-1}},q^{j_t}_{n_t}]^\sharp)
-(q^{n_{t-1}+1}_{j_{t-1}}|q^{j_t}_{n_t}) (\lambda+\partial)
+z(s|[q^{n_{t-1}+1}_{j_{t-1}},q^{j_t}_{n_t}])
\big)
} \\
\displaystyle{
\phantom{\Big(}
\times\big(
w([q^{n_t+1}_{j_t},a]^\sharp)
-(q^{n_t+1}_{j_t}|a) \lambda
+z(s|[q^{n_t+1}_{j_t},a])
\big)
\,.}
\end{array}
\end{equation}
\end{theorem}
Note that in each summand of \eqref{20140304:eq4}
the $z$ term can be non-zero in at most one factor.
In fact, $z$ may occur in the first factor only for $k_1\leq0$,
in the second factor only for $k_1\geq1$ and $k_2\leq-1$,
in the third factor only for $k_2\geq1$ and $k_3\leq-1$,
and so on, and it may occur in the last factor only for $k_t\geq1$.
Since these conditions are mutually exclusive,
the expression in the RHS of \eqref{20140304:eq4} is linear in $z$.
Some special cases and applications of equation \eqref{20140304:eq4} are summarized in the next result.
\begin{proposition}[\cite{DSKV16}]\phantomsection\label{20160203:prop1}
\begin{enumerate}[(a)]
\item
If either $a$ or $b$ lies in $\mf g^f_0$, we have
\begin{equation}\label{20140225:eq1}
\{{w(a)}_\lambda{w(b)}\}_{z,\rho}
=
w([a,b])+(a|b)\lambda+z(s|[a,b])
\,.
\end{equation}
\item
If $a,b\in\mf g^f_{-\frac12}$ we have
\begin{equation}\label{20140226:eq8}
\begin{array}{l}
\displaystyle{
\phantom{\Big(}
\{{w(a)}_\lambda{w(b)}\}_{z,\rho}
=
w([a,b])
+(\partial+2\lambda)
w([a,[e,b]]^\sharp)
-(e|[a,b])\lambda^2
} \\
\displaystyle{
\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,
+\sum_{(j,n)\in J_{-\frac12}}
w([a,q^{j}_{n}]^\sharp)
w([q^{n+1}_{j},b]^\sharp)
+z(s|[a,b])
\,.}
\end{array}
\end{equation}
\item
Consider the element
$L_0=\frac12\sum_{j\in J^f_0}w(q_j)w(q^j)\,\in\mc W\{2\}$.
Then, the element $L=w(f)+L_0\in\mc W\{2\}$ is a Virasoro element of $\mc W$,
and we have
\begin{equation}\label{20140228:eq4}
\{L_\lambda L\}_{z,\rho}
=
(\partial+2\lambda)L-(x|x)\lambda^3+2z(s|f)\lambda
\,.
\end{equation}
For $a\in\mf g^f_{1-\Delta}$ we have
\begin{equation}\label{20140228:eq5}
\{L_\lambda w(a)\}_{z,\rho}
=
(\partial+\Delta\lambda)w(a)
-\frac{(e|a)}2\lambda^3+z\Delta(s|a)\lambda
\,.
\end{equation}
In particular,
for $z=0$, all the generators $w(a),\,a\in\mf g^f$, of $\mc W$ are primary elements for $L$,
provided that $(e|a)=0$.
In other words, for $z=0$, $\mc W$ is an algebra of differential polynomials
generated by $L$ and $\ell-1$ primary elements with respect to $L$.
So, $\mc W$ is a PVA of \emph{CFT type} (cf. \cite{DSKW10}).
\end{enumerate}
\end{proposition}
\begin{remark}
Equations \eqref{20140225:eq1} and \eqref{20140226:eq8} and the definition of the Virasoro element $L$ in
Proposition \ref{20160203:prop1}(c) are compatible with the analogous formulas in \cite{DSKV14}, where the classical affine $\mc W$-algebra
for minimal nilpotent elements has been explicitly described.
\end{remark}
\section{The package MasterPVA}\label{sec:3}
In this section we show how to use the package \textsl{MasterPVA}, both in its one- and multi-dimensional versions.
As a few examples, we prove the compatibility between GFZ and Virasoro-Magri PVA (case $N=D=1$),
we derive the Dubrovin-Novikov conditions for a bidimensional Poisson structure of hydrodynamic type
(case $D=1, N=2$) and we obtain Mokhov's classification for the $N=1$ multidimensional structures of hydrodynamic type \cite{M88}.
The packages \texttt{MasterPVA.m} and \texttt{MasterPVAmulti.m} must be in a directory where Mathematica can find them. This can be achieved, for example, by using the command \cmd{SetDirectory}. After this, we can load the packages. The two packages cannot be loaded in the same session, because of conflicting definitions of functions and properties. However, \textsl{MasterPVAmulti} can effectively deal with the $D=1$ case, despite using a heavier notation. This is the reason why we provide a package specifically devoted to standard one-dimensional PVAs, although the same input works with \textsl{MasterPVAmulti}.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg0.pdf}
\end{flushleft}
\subsection{GFZ and Virasoro-Magri Poisson vertex algebras}\label{sec:kdv}
Let $\mc V$ be an algebra of differential functions extending $R_1=\mb C[u,u',u'',\ldots]$. We recall that the
Gardner-Faddeev-Zacharov (GFZ) PVA structure on $\mc V$ is defined by
\begin{equation}\label{eq:GFZdef}
\{u_\lambda u\}_1=\lambda
\end{equation}
while the Virasoro-Magri PVA with central charge $c\in\mb C$ is defined by
\begin{equation}\label{eq:VMdef}
\{u_\lambda u\}_0=\left(\partial+2\lambda\right)u+c\lambda^3\,.
\end{equation}
We will show the well-known fact that these two structures are compatible, namely that the $\lambda$-bracket
$\{\cdot\,_\lambda\,\cdot\}_z=\{\cdot\,_\lambda\,\cdot\}_0+z\{\cdot\,_\lambda\,\cdot\}_1$ defines
a PVA structure on $\mc V$ for all $z\in\mb C$.
After loading the package, it is necessary to set the number of generators, as well as the names for the generators, for the independent variable with respect to which the derivation $\partial$ acts, and for the formal indeterminate used in the definition of the $\lambda$-brackets, say $\lambda$. The syntax for these commands is
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg1-1.pdf}
\end{flushleft}
The list of generators, written as functions of the independent variables, is called \cmd{gen} throughout the program. The $\lambda$-brackets between the generators must be provided in the form of an $N\times N$ table, whose entries are polynomials in the previously declared formal indeterminate. In this example $N=1$ and we have \cmd{H0} given by equation \eqref{eq:VMdef}
and \cmd{H1} given by equation \eqref{eq:GFZdef}. We denote by \cmd{H} their linear combination.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg1-2.pdf}
\end{flushleft}
The skewsymmetry and Jacobi identity on generators (see Theorem \ref{master}) can be checked by using the functions \cmd{PVASkew[]} and \cmd{JacobiCheck[]}.
Indeed the output of \cmd{PVASkew[]} (respectively \cmd{JacobiCheck[]}) is the LHS of
equation \eqref{skewsimgen} (respectively \eqref{jacobigen}).
We get
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg1-3.pdf}
\end{flushleft}
thus showing that $H_0$ and $H_1$ define two compatible PVA structures on $\mc V$.
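The skewsymmetry part of this check is easy to reproduce by hand outside Mathematica. The following Python/sympy sketch (our own illustration, independent of \textsl{MasterPVA}) implements the map $B(\lambda)\mapsto -B(-\lambda-\partial)$, with $\partial$ acting on the coefficients of $B$, and verifies that the bracket $\{u_\lambda u\}_z$ is fixed by it for every $z$.

```python
import sympy as sp

x, lam, c, z = sp.symbols('x lambda c z')
u = sp.Function('u')(x)

def skew_image(B, deg=3):
    """Send B(lam) = sum_n p_n lam^n to -B(-lam - d/dx),
    with d/dx acting on the coefficients p_n (the skewsymmetry map)."""
    out = sp.S.Zero
    for n in range(deg + 1):
        p = sp.expand(B).coeff(lam, n)
        # (-lam - d)^n p = sum_k binom(n,k) (-lam)^(n-k) (-1)^k p^(k)
        out += sum(sp.binomial(n, k) * (-lam)**(n - k) * (-1)**k * sp.diff(p, x, k)
                   for k in range(n + 1))
    return sp.expand(-out)

# {u_lam u}_z = {u_lam u}_0 + z {u_lam u}_1, cf. eqs. (VMdef) and (GFZdef)
Bz = sp.diff(u, x) + 2 * u * lam + c * lam**3 + z * lam
assert sp.simplify(skew_image(Bz) - Bz) == 0   # skewsymmetric for every z
```

The same helper can be applied to any scalar $\lambda$-bracket given as a polynomial in $\lambda$ with differential-polynomial coefficients.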
Let us define $h_1=\frac12 u^2\in\mc V$.
It is well known that the Hamiltonian equation \eqref{hameq}
associated to the Hamiltonian functional ${\textstyle\int} h_1$ and the Poisson structure
\cmd{H0} is the Korteweg-de Vries (KdV) equation.
Moreover, let us also define $h_2=\frac12u^3+\frac c2uu''\in\mc V$.
The KdV equation is also the Hamiltonian equation corresponding to the Hamiltonian functional
${\textstyle\int} h_2$ and the Poisson structure \cmd{H1}.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg1-4.pdf}
\end{flushleft}
In fact, the KdV equation is a bi-Hamiltonian equation, and its integrability can be proved using
the Lenard-Magri scheme \cite{Mag78}.
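As an independent cross-check (a Python/sympy sketch of ours, not part of the packages), one can verify that both Hamiltonian pairs produce the same evolution equation: applying the Virasoro-Magri operator $H_0=u'+2u\partial+c\partial^3$ to $\delta\big({\textstyle\int}h_1\big)/\delta u$ and the GFZ operator $H_1=\partial$ to $\delta\big({\textstyle\int}h_2\big)/\delta u$ both yield the KdV equation $u_t=3uu'+cu'''$.

```python
import sympy as sp

x, c = sp.symbols('x c')
u = sp.Function('u')(x)

def D(f, n=1):
    return sp.diff(f, x, n)

def var_der(h, order=4):
    """Variational derivative: delta h / delta u = sum_n (-d/dx)^n dh/du^(n)."""
    return sp.expand(sum((-1)**n * D(sp.diff(h, D(u, n)), n)
                         for n in range(order + 1)))

h1 = u**2 / 2
h2 = u**3 / 2 + c * u * D(u, 2) / 2

def H0(f):
    # Virasoro-Magri operator u' + 2u d/dx + c d^3/dx^3, read off {u_lam u}_0
    return sp.expand(D(u) * f + 2 * u * D(f) + c * D(f, 3))

def H1(f):
    # GFZ operator d/dx, read off {u_lam u}_1 = lam
    return sp.expand(D(f))

eq1 = H0(var_der(h1))   # evolution from (int h1, H0)
eq2 = H1(var_der(h2))   # evolution from (int h2, H1)
kdv = sp.expand(3 * u * D(u) + c * D(u, 3))
assert sp.simplify(eq1 - kdv) == 0 and sp.simplify(eq2 - kdv) == 0
```

This is exactly the bi-Hamiltonian property exploited by the Lenard-Magri scheme.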
\subsection{Poisson structures of hydrodynamic type}
Let $\mc V$ be an algebra of differential functions extending $R_N$.
A Poisson structure of hydrodynamic type \cite{DN83} on $\mc V$ is defined by the following $\lambda$-bracket on
generators ($i,j,k=1,\dots,N$):
\begin{equation}\label{eq:HYPB}
\{u_i{}_\lambda u_j\}=g_{ji} \lambda+b_{ji}^k u'_k\,,
\end{equation}
where repeated indices are summed according to Einstein's rule and
$\frac{\partial g_{ji}}{\partial u_h^{(n)}}=\frac{\partial b_{ji}^k}{\partial u_h^{(n)}}=0$, for every $h=1,\dots,N$ and $n\geq1$.
The geometric interpretation of the functions $g_{ij}$ and $b_{ij}^k$ is well known:
the $\lambda$-bracket defined in \eqref{eq:HYPB} defines a PVA structure on $\mc V$ if and only if $g_{ij}$ are the components of a flat contravariant metric on a manifold with local coordinates $(u_1,\ldots,u_N)$ and $b_{ij}^k$ are the contravariant Christoffel symbols of the associated Levi-Civita connection. Using \textsl{MasterPVA} we will derive the explicit form of these properties in the case $N=2$.
After loading the package, we initialize the package settings.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg2-1.pdf}
\end{flushleft}
We define the matrices $g_{ij}$ and $b_{ij}^k$ and use them to write the $\lambda$-bracket \eqref{eq:HYPB}.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.50]{eg2-2.pdf}
\hspace{1mm}\includegraphics[scale=0.50]{eg2-2b.pdf}
\end{flushleft}
By equating to zero the coefficient of $\lambda$ and the constant term (in $\lambda$) in the equations given by \texttt{PVASkew[P]}
we get the conditions that $g_{ij}$ and $b_{ij}^k$ should satisfy in order to get a skewsymmetric $\lambda$-bracket.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.50]{eg2-3.pdf}
\end{flushleft}
These conditions can be summarized by the equations
\begin{equation}\label{eq:HYPBskew}
g_{ij}=g_{ji}\,,
\qquad
b_{ij}^k+b_{ji}^k=\frac{\partial g_{ij}}{\partial u_k}\,.
\end{equation}
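Conditions \eqref{eq:HYPBskew} can also be tested outside Mathematica. The following Python/sympy sketch (ours; the sample metric $g$ and coefficients $b_{ij}^k$ are hypothetical data chosen so that \eqref{eq:HYPBskew} holds) checks that the resulting first-order bracket is indeed skewsymmetric.

```python
import sympy as sp

lam = sp.Symbol('lambda')
u1, u2, u1p, u2p = sp.symbols('u1 u2 u1p u2p')  # generators and first derivatives
U, Up = [u1, u2], [u1p, u2p]

def Dx(f):
    """Total x-derivative of a coefficient depending on u1, u2 only."""
    return sum(sp.diff(f, U[k]) * Up[k] for k in range(2))

# sample data satisfying (HYPBskew): g symmetric, b_{ij}^k + b_{ji}^k = dg_{ij}/du_k
half = sp.Rational(1, 2)
g = sp.Matrix([[u1, u2], [u2, u1]])
b = [sp.Matrix([[half, u1], [-u1, half]]),   # (b_{ij}^1)
     sp.Matrix([[0, half], [half, 0]])]      # (b_{ij}^2)

def bracket(i, j):
    """{u_i lam u_j} = g_{ji} lam + b_{ji}^k u_k'  (0-indexed)."""
    return g[j, i] * lam + sum(b[k][j, i] * Up[k] for k in range(2))

def bracket_shifted(j, i):
    """{u_j {-lam-d} u_i} = -g_{ij} lam - Dx(g_{ij}) + b_{ij}^k u_k'."""
    return -g[i, j] * lam - Dx(g[i, j]) + sum(b[k][i, j] * Up[k] for k in range(2))

# skewsymmetry: {u_i lam u_j} + {u_j {-lam-d} u_i} = 0 for all i, j
checks = [sp.expand(bracket(i, j) + bracket_shifted(j, i))
          for i in range(2) for j in range(2)]
assert all(expr == 0 for expr in checks)
```

Dropping either condition in \eqref{eq:HYPBskew} makes the corresponding entry of \texttt{checks} nonzero.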
We redefine the functions $g_{ij}$ and $b_{ij}^k$ and the Poisson structure \texttt{H} in order to ensure the validity of
equations \eqref{eq:HYPBskew}.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.50]{eg2-4.pdf}
\end{flushleft}
The further properties that must be satisfied to grant the Jacobi identity can be found using \texttt{JacobiCheck[P]}. Notice that, when the result of \texttt{JacobiCheck[]} is not identically vanishing, the output uses internal variables whose names start with \texttt{MasterPVA`Private`}: to make the output clearer it is advisable to replace them with the ``external'' names, as demonstrated in the following picture. However, reading the conditions for the Jacobi identity is usually much more cumbersome than inspecting those for the skewsymmetry.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.50]{eg2-5.pdf}
\end{flushleft}
Nevertheless, we can check that the vanishing of the coefficient of $\lambda^2$ in the Jacobi identity is equivalent to the torsion--free condition for the Levi--Civita connection:
\begin{equation}
g_{ia}b_{kj}^a-g_{ja}b_{ki}^a=0
\,.
\end{equation}
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.45]{eg2-6.pdf}\\
\hspace{1mm}\includegraphics[scale=0.45]{eg2-7.pdf}\\
\end{flushleft}
\subsection{Multidimensional scalar PVAs of hydrodynamic type}
The package \textsl{MasterPVAmulti} must be used when dealing with the multidimensional PVAs
defined in Section \ref{sec:multi}.
Here, we use it to classify multidimensional Poisson structures of hydrodynamic type
for the case $N=1$, $D=3$. This is a special case of a classification theorem proved by
Mokhov \cite{M88}.
We recall that a multidimensional scalar
$\lambda$-bracket of hydrodynamic type has the form
\begin{equation}\label{20160212:eq1}
\{u_\lambda u\}=\sum_{\alpha=1}^D \left(a_\alpha\lambda_\alpha+ b_\alpha u_\alpha\right)
\,,
\end{equation}
where we set $u_\alpha=\partial_\alpha u$ and $a_\alpha$ and $b_\alpha$ are such that
$\frac{\partial a_\alpha}{\partial \partial^n_\beta u}
=\frac{\partial b_\alpha}{\partial \partial^n_\beta u}=0$, for every $\beta=1,\dots,D$ and $n\geq1$.
Mokhov's theorem states that the $\lambda$-bracket \eqref{20160212:eq1} defines a PVA structure if and only if it is of the form
\begin{equation}\label{eq:mHYPB}
\{u_\lambda u\}=\sum_{\alpha=1}^D c_\alpha\left(2g\lambda_\alpha+ u_\alpha\frac{\partial g}{\partial u} \right)
\,,
\end{equation}
for some $c_\alpha\in\mb C$ and a function $g$
such that $\frac{\partial g}{\partial \partial^n_\beta u}=0$,
for every $\beta=1,\dots,D$ and $n\geq1$.
After loading the package we initialize the variables similarly to what we did at the beginning of
Section \ref{sec:kdv}, but in this case we should specify the spatial dimension $D$.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg3-1a.pdf}\\
\hspace{1mm}\includegraphics[scale=0.55]{eg3-1b.pdf}
\end{flushleft}
We define the $\lambda$-bracket as in equation \eqref{20160212:eq1} assuming $N=1$ and $D=3$.
The formal parameter, for which we chose the symbol $\lambda$ in the initialization, is $\{\lambda_1,\lambda_2,\lambda_3\}$.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg3-2.pdf}
\end{flushleft}
We use \cmd{PVASkew[P]} to find the conditions that the functions $a_\alpha$ and $b_\alpha$ should satisfy in order to have
a skewsymmetric $\lambda$-bracket. We get
\begin{equation}\label{eq:mHYPB_skew}
2 b_\alpha =\frac{\partial a_\alpha}{\partial u}
\,.
\end{equation}
Hence, we define a new $\lambda$-bracket, called \cmd{Hskew}, where the functions $a_\alpha$ and $b_\alpha$ satisfy equation
\eqref{eq:mHYPB_skew}. Note that \cmd{Hskew} only depends on the functions $a_\alpha$ now.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg3-3a.pdf}\\
\hspace{1mm}\includegraphics[scale=0.55]{eg3-3b.pdf}\\
\end{flushleft}
We use \cmd{JacobiCheck[Pskew]} to write the conditions that must be satisfied by the functions $a_\alpha$
in order to get the validity of \eqref{jacobigen}. We denote the LHS of \eqref{jacobigen} by \cmd{JacobiCond}. In particular, by equating to zero the coefficient of $\lambda_\alpha\mu_\beta$ we get a system of ODEs for the functions $a_\alpha$.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg3-4.pdf}
\end{flushleft}
A solution to this system is given by
\begin{equation}\label{eq:mHYPB_sol}
a_\alpha=c_\alpha g \qquad\qquad \text{for some function }g\text{ and }c_\alpha\in\mb C
\,.
\end{equation}
Then, we can substitute equation \eqref{eq:mHYPB_sol} in \cmd{JacobiCond}.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg3-5.pdf}
\end{flushleft}
Hence, Jacobi identity \eqref{jacobigen} holds, thus showing that \eqref{eq:mHYPB_skew} and \eqref{eq:mHYPB_sol}
are the sufficient and necessary conditions for the $\lambda$-bracket \eqref{20160212:eq1} to define a PVA structure,
as Mokhov's theorem states.
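In our bookkeeping, the system of ODEs produced by the coefficient of $\lambda_\alpha\mu_\beta$ amounts, up to an overall factor, to the vanishing of the Wronskian-type expressions $a_\alpha\,\partial_u a_\beta-a_\beta\,\partial_u a_\alpha$. The following Python/sympy sketch (our own illustration, independent of the package) confirms that the ansatz \eqref{eq:mHYPB_sol} annihilates all of them.

```python
import sympy as sp

u = sp.Symbol('u')
g = sp.Function('g')(u)
c1, c2, c3 = sp.symbols('c1 c2 c3')
a = [c1 * g, c2 * g, c3 * g]   # ansatz a_alpha = c_alpha g of eq. (mHYPB_sol), D = 3

# pairwise Wronskian-type expressions a_alpha da_beta/du - a_beta da_alpha/du
wronskians = [sp.simplify(a[i] * sp.diff(a[j], u) - a[j] * sp.diff(a[i], u))
              for i in range(3) for j in range(3)]
assert all(w == 0 for w in wronskians)
```

Conversely, vanishing of these Wronskians forces all the $a_\alpha$ to be proportional to a single function $g$, which is the content of \eqref{eq:mHYPB_sol}.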
\section{The package WAlg}\label{sec:WAlg}
In this section we show how to use the package \textsl{WAlg}.
Its main function is to compute the $\lambda$-brackets between the generators of the classical affine $\mc W$-algebra
$\mc W(\mf g,f)$ defined in Section \ref{sec:2.1}, where $\mf g$ is a simple Lie algebra of type $A,B,C,D$ and $G$, and $f\in\mf g$ is a nilpotent element.
Thus, we can use this result to compute the $\lambda$-brackets between arbitrary elements of the classical $\mc W$-algebra
and the corresponding generalized Drinfeld-Sokolov hierarchies using the package \textsl{MasterPVA}.
In order to perform computations with \textsl{WAlg} we need to realize the simple Lie algebras
of type $A,B,C,D$ and $G$ as subalgebras of $\mf{gl}_N$.
(We emphasize that the same can be done for simple Lie algebras of
type $E$ and $F$. Unfortunately, the dimension of such a representation can be large, as in the case of $E_8$, where the minimal $N=248$.)
Given an element $A=(A_{ij})_{i,j=1}^N\in\mf{gl}_N$, we denote by $A^\text{at}=\left((A^{\text{at}})_{ij}\right)_{i,j=1}^N$ its transpose with respect to the antidiagonal, namely
$(A^{\text{at}})_{ij}=A_{N+1-j,N+1-i}$. Then, we realize the classical Lie algebras as in \cite{DS85}:
\begin{enumerate}[(A)]
\item Type $A_n$: $\mf g=\mf{sl}_n=\{A\in\mf{gl}_{n+1}\mid \tr(A)=0\}$.
\item Type $B_n$: let $S=\sum_{k=1}^{2n+1}(-1)^{k+1}E_{kk}$, then
$\mf g=\mf o_{2n+1}=\{A\in\mf{gl}_{2n+1}\mid A=-SA^{\text{at}}S\}$.
\item Type $C_n$: let $S=\sum_{k=1}^{2n}(-1)^{k+1}E_{kk}$, then
$\mf g=\mf{sp}_{2n}=\{A\in\mf{gl}_{2n}\mid A=-SA^{\text{at}}S\}$.
\item Type $D_n$: let $S=\sum_{k=1}^{n}(-1)^{k+1}(E_{kk}+E_{2n+1-k,2n+1-k})$, then
$\mf g=\mf{o}_{2n}=\{A\in\mf{gl}_{2n}\mid A=-SA^{\text{at}}S\}$.
\end{enumerate}
In the sequel, given a matrix $A\in\mf{gl}_N$, we denote by $\sigma(A)=-SA^{\text{at}}S$, where $S$ is any of the matrices
appearing in the definition of the classical Lie algebras of type $B,C$ and $D$. Clearly, $A+\sigma(A)$ belongs to the
corresponding classical Lie algebra, since $\sigma^2=\mathbbm{1}_{\mf{gl}_N}$.
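The map $\sigma$ and the membership of $A+\sigma(A)$ in the corresponding Lie algebra are easy to check numerically. The following Python/numpy sketch (ours, not part of \textsl{WAlg}) does so for type $B_3$, i.e.\ $\mf o_7$.

```python
import numpy as np

def antitranspose(A):
    """(A^at)_{ij} = A_{N+1-j, N+1-i}: transpose along the antidiagonal."""
    return A[::-1, ::-1].T

def sigma(A, S):
    return -S @ antitranspose(A) @ S

# S for type B_3 (N = 7): S = sum_k (-1)^(k+1) E_kk = diag(1, -1, 1, -1, 1, -1, 1)
N = 7
S = np.diag([(-1.0)**k for k in range(N)])

rng = np.random.default_rng(0)
A = rng.standard_normal((N, N))

assert np.allclose(sigma(sigma(A, S), S), A)   # sigma is an involution
B = A + sigma(A, S)
assert np.allclose(B, sigma(B, S))             # B = sigma(B), i.e. B lies in o_7
```

The same check works verbatim for types $C$ and $D$ after replacing $S$ by the corresponding matrix.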
We realize $G_2$ as a subalgebra of $D_4$ as follows. Note that the group of automorphisms
of the Dynkin diagram of $D_4$ is isomorphic to $S_3$, the group of permutations on three elements. We can then consider the induced action of this group on $\mf o_8$ by Lie algebra automorphisms, and it is easy to check that:
\begin{enumerate}[(G)]
\item Type $G_2$: $\mf g=\{A\in\mf o_8\mid \tau(A)=A\,,\text{for every }\tau\in S_3\}$.
\end{enumerate}
In particular, we used the following choice of Chevalley generators for $\mf g$:
\begin{align*}
&e_1=E_{23}+E_{67}\,,& &e_2=E_{12}+E_{34}+E_{56}+E_{78}+\frac{1}{2}\left(E_{35}+E_{46}\right)\,,
\\
&h_1=E_{22}-E_{33}+E_{66}-E_{77}\,,& &h_2=E_{11}-E_{22}+2E_{33}-2E_{66}
+E_{77}-E_{88}\,,
\\
&f_1=E_{32}+E_{76}\,, & &f_2= E_{21}+E_{43}+E_{65}+E_{87}+2\left(E_{53}+E_{64}\right)
\,.
\end{align*}
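As a quick consistency check of this realization, the following Python/numpy sketch (ours) verifies that $h_i=[e_i,f_i]$ and that the relations $[h_i,e_j]=A_{ij}e_j$, $[h_i,f_j]=-A_{ij}f_j$ reproduce the $G_2$ Cartan matrix $A=\left(\begin{smallmatrix}2&-1\\-3&2\end{smallmatrix}\right)$ (in the convention $[h_i,e_j]=A_{ij}e_j$).

```python
import numpy as np

def E(i, j, n=8):
    """Elementary matrix E_{ij} (1-indexed), as in the text."""
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

def comm(A, B):
    return A @ B - B @ A

e1 = E(2, 3) + E(6, 7)
e2 = E(1, 2) + E(3, 4) + E(5, 6) + E(7, 8) + (E(3, 5) + E(4, 6)) / 2
f1 = E(3, 2) + E(7, 6)
f2 = E(2, 1) + E(4, 3) + E(6, 5) + E(8, 7) + 2 * (E(5, 3) + E(6, 4))

# sl_2-type relations: h_i = [e_i, f_i]
h1, h2 = comm(e1, f1), comm(e2, f2)

# Cartan matrix of G_2 in the convention [h_i, e_j] = A_{ij} e_j
cartan = [[2, -1], [-3, 2]]
for i, h in enumerate((h1, h2)):
    for j in range(2):
        assert np.allclose(comm(h, (e1, e2)[j]), cartan[i][j] * (e1, e2)[j])
        assert np.allclose(comm(h, (f1, f2)[j]), -cartan[i][j] * (f1, f2)[j])
```

In particular, $h_1=E_{22}-E_{33}+E_{66}-E_{77}$ and $h_2$ come out as the Cartan elements of the two $\mf{sl}_2$-triples.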
After the choice of the simple Lie algebra $\mf g$ we need to choose a nilpotent element $f\in\mf g$. Since the construction
of classical affine $\mc W$-algebras does not depend on the nilpotent element itself, but only on its nilpotent orbit
(see \cite{DSKV13}), we assume that the nilpotent element is given in input as a strictly lower triangular matrix.
In fact, when providing a nilpotent element in input, we can use the classification of nilpotent orbits given in \cite{CMG93}.
Then the program computes an $\mf{sl}_2$-triple $\{f,h=2x,e\}\subset\mf g$ such that $x$ is a diagonal matrix
and $e$ is strictly upper triangular.
Finally, we always assume that the nondegenerate symmetric invariant bilinear form on
$\mf g$ is a multiple of the trace form on matrices ($a,b\in\mf g)$:
$$
(a|b)=c\tr(ab)\,,\qquad c\in\mb C^*\,.
$$
\subsection{The algebraic setup}
The package \textsl{WAlg} requires the use of the default library
\texttt{listK\_6.txt}. Hence the files \texttt{WAlg.m} and \texttt{listK\_6.txt} must be in a folder where Mathematica can find them.
It is also possible to use a different library as described in Section \ref{sec:5.3} to which we refer
for the technical details.
To select the working folder of Mathematica, where it will look for these files and where the potential output files will be saved, one may use the command \cmd{SetDirectory["path"]}. An alternative method to load the package, different from the one shown in Section \ref{sec:3}, is using the command \cmd{Needs[]}.
Let us use the package \textsl{WAlg} to get the explicit set of generators of the classical
affine $\mc W$-algebra $\mc W(\mf{o}_7,f)$, where $f$ is the principal nilpotent element \cite{DS85}.
We load the package and use the command \cmd{InitializeWAlg[]}.
Recall that $\mf o_7$ is a classical Lie algebra of type $B_3$.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg4-1.pdf}
\end{flushleft}
The dimension of the matrix representing $f$ is obtained with the command \cmd{GetDim[]}.
We define the principal nilpotent $f$, and we can also check that it belongs to $\mf o_7$.
The command \cmd{ComputeWAlg[]} takes the nilpotent element $f$ as argument and computes a basis of
$\mf g^f$ given by $\ad x$-eigenvectors. The warning notice can be safely ignored.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg4-2.pdf}
\end{flushleft}
The basis computed for $\mf g^f$ can be recovered with the command \cmd{GetWBasis[]}. We denote it by \cmd{listq}.
The corresponding dual basis (with respect to the trace form) of $\mf g^e$ can be computed using the command \cmd{GetWBasisDual[]}. We denote it by
\cmd{listQ}. Finally, we can also use the command \cmd{GetWEigen[]} to recover all the $\ad x$-eigenvalues (with multiplicities) and put them in a list which we call \cmd{list$\delta$}.
The command \cmd{w[]} works as follows: it takes an element of $\mf o_7$ as input, then applies $\pi_{\mf g^f}$ and
the map $w$ defined in Theorem \ref{thm:structure-W} to it. The result is a linear combination of the generators
of the classical affine $\mc W$-algebra.
Hence, by Corollary \ref{20140221:cor}, \cmd{w[listq[[i]]]} gives the $i$-th generator of the classical affine $\mc W$-algebra.
(Note that, by an abuse of notation, these generators are denoted by $q_i$ in Mathematica, the same letter used to denote
the corresponding element of $\mf g^f$ to which they are attached through the map $w$. In fact, the notation $w(q_i)$ is used
in Corollary \ref{20140221:cor}.)
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg4-3.pdf}
\end{flushleft}
In the following example, we apply the command \cmd{w[]} to a random element of $\mf o_7$.
We construct it as follows: first we define a random element $A\in\mf{gl}_7$. Then,
using the function \cmd{Sigma[]} (which, given $A$ as input gives
$\sigma(A)=-SA^{\text{at}}S$ as result) we get the element $A+\sigma(A)\in\mf o_7$
(we can also check it with the command \cmd{CheckAlg[]}).
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg4-4a.pdf}\\
\hspace{1mm}\includegraphics[scale=0.55]{eg4-4b.pdf}
\end{flushleft}
The command \cmd{w[]} gives the corresponding linear combination of the generators of the classical affine
$\mc W$-algebra, see Theorem \ref{thm:structure-W}.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg4-5.pdf}
\end{flushleft}
Note that, apart from computing explicitly the generators of the classical affine $\mc W$-algebra,
the command \cmd{w[]} is heavily used to implement equation \eqref{20140304:eq4}.
\subsection{Computation of $\lambda$-brackets among generators}
One of the most useful features of \textsl{WAlg} is the implementation of formula \eqref{20140304:eq4}
for the computation of the Poisson structure $H$, defined by equation \eqref{ham_W}, associated to
the classical affine $\mc W$-algebra.
After we compute $H$, we can use the package \textsl{MasterPVA} to compute the $\lambda$-brackets between
any elements of the classical affine $\mc W$-algebra.
Let us show how to proceed in the concrete example of the Lie algebra $\mf{sp}_4$ and its minimal nilpotent element $f$
\cite{DSKV14}.
Recall that $\mf{sp}_4$ is a classical Lie algebra of type $C_2$.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg5-1a.pdf}\\
\hspace{1mm}\includegraphics[scale=0.55]{eg5-1b.pdf}
\end{flushleft}
The number of generators of $\mc W(\mf{sp}_4,f)$ is the same as the dimension of $\mf g^f$,
which in this case is 6.
The command \cmd{SetS[]} allows us to set the element $s\in\mf g_d$;
recall that $d$ is the maximal eigenvalue of $\ad x$, which is used in formula \eqref{20140304:eq4}.
If this command is called without argument, it automatically chooses a generic $s$. Note that in this example $\mf g_d=\mb Ce$,
so the choice is unique up to a constant.
Finally, the command \cmd{GenerateH[]} gives the Poisson structure $H$ associated to the classical affine $\mc W$-algebra
by equation \eqref{ham_W} implementing the formula \eqref{20140304:eq4}.
The optional parameter in the command is the formal parameter used in the definition of the $\lambda$-bracket, whose default value is $\beta$.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg5-2.pdf}
\end{flushleft}
The next step is to allow the package \textsl{MasterPVA} to use the output of \cmd{GenerateH[]}.
In order to do that, we need to set the number of variables, use $q_i$ as the name of the generators,
use $y$ as independent variable, and use $\beta$ as the formal parameter.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg5-3.pdf}
\end{flushleft}
Now the commands of \textsl{MasterPVA} can be used. For example, we can check
that $H$ is indeed a Poisson structure.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg5-4.pdf}
\end{flushleft}
We use our program to check identity \eqref{20140228:eq4}. The Virasoro element $L$ defined in Proposition
\ref{20160203:prop1}(c) is computed with the command \cmd{GetVirasoro[]}, whose argument is the nilpotent element $f$.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.80]{eg5-5.pdf}
\end{flushleft}
Finally, we can use our program to compute the first few equations of the corresponding
generalized Drinfeld-Sokolov hierarchies. We define \cmd{g0} and \cmd{g1} according to \cite[Section 6.2]{DSKV14} and we compute the Hamiltonian equation \eqref{hameq}.
We get
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.70]{eg5-6.pdf}
\end{flushleft}
and
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.70]{eg5-7a.pdf}
\end{flushleft}
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.70]{eg5-7b.pdf}
\end{flushleft}
The above equations agree with equations (6.19) and (6.20) in \cite{DSKV14}.
After a Dirac reduction (since the generators $w(a)$, where $a\in\mf g_0^f$, do not evolve in time), we get simpler equations
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.75]{eg5-8.pdf}
\end{flushleft}
The above results agree with equations (6.21) and (6.22) in \cite{DSKV14}.
The latter is a higher symmetry of the Yajima-Oikawa equation \cite{YO76} (see also \cite{DSKV14-Err}).
\subsection{Classical affine $\mc W$-algebras associated to simple Lie algebras of rank two and
principal nilpotent element}
In this section we provide explicit formulas for the $\lambda$-brackets among generators
for the classical affine $\mc W$-algebras $\mc W(\mf g,f)$, where $\mf g=A_2,B_2$ or $G_2$ and $f$ is a principal nilpotent element. In this case, $\dim\mf g^f=\rank\mf g=2$. Hence, by Theorem
\ref{thm:structure-W}, as a differential algebra we have that $\mc W(\mf g,f)=\mb C[w_1^{(n)},w_2^{(n)}\mid n\in\mb Z_+]$, where
$w_1=w(q_1)$, $w_2=w(q_2)$ and $\{q_1,q_2\}$ is a basis of $\mf g^f$ as in Section \ref{slod.4}.
\subsubsection{$\mf g=\mf{sl}_3$}
The computations can be found in the file \texttt{A\_2\_principal.nb}. The result is
\begin{align}
\begin{split}\label{eq:A_2}
\{{w_1}_\lambda w_1\}_{z}&=(2\lambda+\partial)w_1-2 c\lambda^3
\,,
\\
\{{w_1}_\lambda w_2\}_{z}&=
(3\lambda+\partial)w_2 +3 c z\lambda
\,,
\\
\{{w_2}_\lambda w_2\}_{z}&=
(2\lambda+\partial)\left(\frac1{3c}w_1^2-\frac{1}{16}w_1''\right)
-(2\lambda+\partial)^3\frac{5}{2^43}w_1+\frac{c}{6}\lambda^3
\,.
\end{split}
\end{align}
Note that after rescaling $c\mapsto-\frac C2$ and setting $L=w_1$, $W=8\sqrt{-6C}w_2$,
equation \eqref{eq:A_2} agrees
with the results in \cite{DSKW10} (only the PVA structure corresponding to $z=0$ is considered
there).
\subsubsection{$\mf g=\mf o_5$}
The computations can be found in the file \texttt{B\_2\_principal.nb}. The result is
\begin{align}
\begin{split}\label{eq:B_2}
&\{{w_1}_\lambda w_1\}_{z}=(2\lambda+\partial)w_1-10 c\lambda^3
\,,
\\
&\{{w_1}_\lambda w_2\}_{z}=
(4\lambda+\partial)w_2 +8 c z\lambda
\,,
\\
&\{{w_2}_\lambda w_2\}_{z}=
(2\lambda+\partial)\left(
\frac{2^23^2}{5^4c^2}w_1^3+\frac{7}{5^2c}w_1w_2-\frac{1}{2^25^3c}(w_1')^2
-\frac{29}{2\cdot5^3c}w_1w_1''\right.
\\
&\left.-\frac1{2^25}w_2''+\frac{3}{2^35^2}w_1^{(4)}
\right)
+(2\lambda+\partial)^3
\left(-\frac{7^2}{2^25^3c}w_1^2-\frac{3}{2^25}w_2+\frac{7}{2^25^2}w_1''
\right)
\\
&+(2\lambda+\partial)^5\frac{7}{2^35^2}w_1-\frac{2c}{5}\lambda^7
+z\left((2\lambda+\partial)\frac{2\cdot7}{5^2}w_1-\frac{2^23c}{5}\lambda^3
\right)
\,.
\end{split}
\end{align}
Note that after rescaling $c\mapsto-\frac{C}{10}$ and setting $L=w_1$, $W=-40C\sqrt2w_2$,
equation \eqref{eq:B_2} agrees
with the results in \cite{DSKW10}. Since $\mf o_5\cong\mf{sp}_4$, the corresponding
classical affine $\mc W$-algebras are isomorphic. In fact we can perform the same computations
starting from the Lie algebra $\mf{sp}_4$, which can be found in the file \texttt{C\_2\_principal.nb}, and check that
we get the same expression for the $\lambda$-brackets given by equation \eqref{eq:B_2}
after rescaling $c$ by a factor $\frac12$.
\subsubsection{$\mf g=G_2$}
The computations can be found in the file \texttt{G\_2\_principal.nb}. The result is
\begin{align}
\begin{split}\label{eq:G2}
\{{w_1}_\lambda w_1\}_{z}&=
(2\lambda+\partial)w_1-28c\lambda^3
\,,
\\
\{{w_1}_\lambda w_2\}_{z}&=
(6\lambda+\partial)w_2+144cz\lambda
\,,
\\
\{{w_2}_\lambda w_2\}_{z}&=\sum_{i=0}^4(2\lambda+\partial)^{2i+1}P_{2i+1}
-\frac{3c}{7}\lambda^{11}+z\left(\sum_{i=0}^1(2\lambda+\partial)^{2i+1}Q_{2i+1}
-\frac{26c}{7}\lambda^5\right)
\,,
\end{split}
\end{align}
where
\begin{align*}
&P_1=
\frac{3^35^2}{7^6 c^4}w_1^5
-\frac{11\cdot13}{2\cdot7^3c^2}w_1^2w_2
-\frac{3\cdot61}{2^57^4c^3}w_1^2 (w_1')^2
+\frac{5}{2^37^2c}w_1'w_2'
-\frac{3\cdot769}{2\cdot7^5c^3}w_1^3w_1''
\\
&+\frac{3\cdot29}{2^37^2c}w_2w_1''
+\frac{3^211\cdot19}{2^87^4c^2}(w_1')^2w_1''
+\frac{3^223\cdot97}{2^67^4c^2}w_1(w_1'')^2
+\frac{5}{2^37c}w_1w_2''
\\
&
+\frac{3\cdot347}{2^77^4c^2}w_1w_1'w_1'''
-\frac{3^2}{2^87^2c}(w_1''')^2+\frac{3\cdot9551}{2^87^4c^2}w_1^2w_1^{(4)}
-\frac{3^2\cdot607}{2^87^3c}w_1''w_1^{(4)}
\\
&
-\frac{1}{2^47}w_2^{(4)}-\frac{3^25}{2^87^3c}w_1'w_1^{(5)}
-\frac{3\cdot5\cdot6\cdot7}{2^87^3c}w_1w_1^{(6)}+\frac{3^25}{2^{10}7^2}w_1^{(8)}
\,,\\
&P_3=-\frac{3\cdot11\cdot479}{2^87^4c^3}w_1^4+\frac{5\cdot31}{2^37^2c}w_1w_2
+\frac{3\cdot5\cdot11\cdot19}{2^87^4c^2}w_1(w_1')^2
+\frac{3\cdot11\cdot23\cdot89}{2^77^4c^2}w_1^2w_1''
\\
&
-\frac{3^311\cdot49}{2^87^3c}(w_1'')^2-\frac{5}{2^37}w_2''-\frac{3\cdot5\cdot11}{2^77^3c}w_1'w_1'''
-\frac{3\cdot5^211^2}{2^87^3c}w_1w_1^{(4)}+\frac{3\cdot5\cdot11}{2^87^2}w_1^{(6)}
\,,
\\
&P_5=\frac{3\cdot5\cdot11\cdot139}{2^87^4c^2}w_1^3-\frac{13}{2^47}w_2
-\frac{3^311}{2^97^3c}(w_1')^2-\frac{3^311\cdot43}{2^87^3c}w_1w_1''
+\frac{3^411}{2^97^2}w_1^{(4)}
\,,\\
&P_7=-\frac{3^211\cdot31}{2^97^3c}w_1^2+\frac{3^311}{2^87^2}w_1''
\,,
\qquad\qquad\quad\,\,
P_9=\frac{3\cdot5\cdot11}{2^{10}7^2}w_1
\,,\\
&Q_1=-\frac{11\cdot13}{2\cdot7^3c}w_1^2+\frac{3\cdot29}{2^37^2}w_1''
\,,
\qquad\qquad
Q_3=\frac{5\cdot31}{2^37^2}w_1
\,.
\end{align*}
The bi-Poisson structure of the classical $\mc W$-algebra $\mc W(G_2,f)$ associated to the Lie algebra $G_2$
and its principal nilpotent element $f$, and the corresponding Drinfeld-Sokolov hierarchy
was already computed in \cite{CDVO08}.
It can be obtained from the bi-Poisson structure \eqref{eq:G2} by performing the change of variables
$$
u_0=\frac{w_1}{2c}\,,
\qquad
u_1=\frac{w_2}{72c}+\frac{3}{686}\left(\frac{w_1}{2c}\right)^3
-\frac{33}{1568}\left(\frac{w_1'}{2c}\right)^2-\frac{13}{392}\frac{w_1}{2c}\frac{w_1''}{2c}
+\frac{1}{57}\frac{w_1^{(4)}}{2c}
\,,
$$
by choosing $c=4$, and by substituting $z$ with $-72z$.
\section{List and explanation of commands}
\subsection{List of commands in MasterPVA}\label{sec:4}
In this section we list the commands provided by \textsl{MasterPVA} and \textsl{MasterPVAmulti}. Most of the commands are the same for both versions of the package, and the syntax for the $D=1$ case also works when using the multidimensional package; on the other hand, it must be modified accordingly when working with a $D>1$ $\lambda$ bracket.
\cmd{SetN[n\_Integer]} declares the number $N$ of the generators for the PVA. Its default value is $1$.
\smallskip
\cmd{GetN[]} gives the number $N$ of the generators.
\smallskip
\cmd{SetD[d\_Integer]} declares the number $D$ of the derivations (namely, of the independent variables) for the PVA. Its default value is $1$. \emph{Available only in \textsl{MasterPVAmulti}}.
\smallskip
\cmd{GetD[]} gives the number $D$ of the derivations. \emph{Available only in \textsl{MasterPVAmulti}}.
\smallskip
\cmd{SetMaxO[n\_Integer]} declares the maximum order of the derivatives of the generators up to which the code computes the $\lambda$ brackets by the Master Formula. The default is 5, which is high enough for most applications.
\smallskip
\cmd{GetMaxO[]} gives the maximum order of the derivatives of the generators taken by the program.
\smallskip
\cmd{SetGenName[newname]} declares the name for the generators. Default is $u$. They will have the form $u(x)$ if $N=1$ or $u_1(x),\ldots,u_N(x)$ for $N>1$.
\smallskip
\cmd{GetGenName[]} gives the name used for the generators.
\smallskip
\cmd{SetVarName[newname]} declares the name for the independent variable(s). Default is $x$.
\smallskip
\cmd{GetVarName[]} gives the name used for the independent variable(s).
\smallskip
\cmd{gen} is the list of generators for the PVA.
\smallskip
\cmd{var} is the list of the independent variables. \emph{Available only in \textsl{MasterPVAmulti}}.
\smallskip
\cmd{SetFormalParameter[newname]} declares the name for the parameter to be used (and recognized by the software) in the definition of the bracket between generators. Default is $\beta$; notice that for $D>1$ the parameter will be a list $(\beta_1,\ldots,\beta_D)$.
\smallskip
\cmd{GetFormalParameter[]} gives the name of the parameter used in the definition of the $\lambda$ bracket.
\smallskip
\cmd{LambdaB[f,g,P,$\lambda$]} computes the $\lambda$ bracket between the two differential polynomials $f$ and $g$, with $P$ the matrix of the brackets between the generators. The result will be a polynomial in the formal indeterminate $\lambda$ (or $(\lambda_1,\ldots,\lambda_D)$ for $D>1$). The Master Formula will take into account the derivatives of the generators up to order \cmd{n=GetMaxO[]}.
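For the scalar case $N=1$, the content of the Master Formula implemented by \cmd{LambdaB} can be sketched in a few lines of Python/sympy (our own illustration, not the package itself): schematically, $\{f_\lambda g\}$ is assembled as $\sum_{m,n}\frac{\partial g}{\partial u^{(n)}}(\lambda+\partial)^n H(\lambda+\partial)_\to(-\lambda-\partial)^m\frac{\partial f}{\partial u^{(m)}}$, where $H(\mu)=\{u_\mu u\}$.

```python
import sympy as sp

x, lam, mu, c = sp.symbols('x lambda mu c')
u = sp.Function('u')(x)
MAXO = 5  # analogue of SetMaxO[]: highest derivative order considered

def shift(p, n, s):
    """(s + d/dx)^n applied to p, with d/dx acting only on p."""
    return sum(sp.binomial(n, k) * s**(n - k) * sp.diff(p, x, k) for k in range(n + 1))

def lambda_bracket(f, g, H):
    """Scalar (N = 1) master formula; H = {u_mu u}, a polynomial in mu."""
    # innermost factor: sum_m (-lam - d)^m df/du^(m)
    A = sum((-1)**m * shift(sp.diff(f, sp.diff(u, x, m)), m, lam)
            for m in range(MAXO + 1))
    # substitute mu -> (lam + d) acting on A: sum_j h_j (lam + d)^j A
    HA = sum(sp.expand(H).coeff(mu, j) * shift(A, j, lam) for j in range(MAXO + 1))
    # outermost factor: sum_n dg/du^(n) (lam + d)^n (...)
    return sp.expand(sum(sp.diff(g, sp.diff(u, x, n)) * shift(HA, n, lam)
                         for n in range(MAXO + 1)))

# sanity checks with the Virasoro-Magri bracket {u_mu u}_0
H0 = sp.diff(u, x) + 2 * u * mu + c * mu**3
assert sp.simplify(lambda_bracket(u, u, H0) - H0.subs(mu, lam)) == 0
kdv = lambda_bracket(u**2 / 2, u, H0).subs(lam, 0)
assert sp.simplify(kdv - 3 * u * sp.diff(u, x) - c * sp.diff(u, x, 3)) == 0
```

The second check recovers the KdV equation as $\{{h_1}_\lambda u\}|_{\lambda=0}$, in agreement with Section \ref{sec:kdv}.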
\smallskip
\cmd{PVASkew[P]} computes the condition of skewsymmetry for a $\lambda$ bracket
(namely the LHS of \eqref{skewsimgen}) and gives the result in a matrix form.
\smallskip
\cmd{PrintPVASkew[P]} computes the condition of skewsymmetry and gives the result as a table with each equation of the system.
\smallskip
\cmd{JacobiCheck[P]} computes the $\text{LHS}$ of the Jacobi identity \eqref{jacobigen}, and gives the result as a $N\times N\times N$ array. The entries are given as formal polynomials in the (internal) indeterminates $\lambda$ and $\mu$. It is often convenient to clean up the result using the command \cmd{\%\%//.\{MasterPVA`Private`$\lambda$ ->$\lambda$, MasterPVA`Private`$\mu$ ->$\mu$\}}.
\smallskip
\cmd{PrintJacobiCheck[P]} computes the LHS of Jacobi identity \eqref{jacobigen} and gives the result as a table of expressions that must vanish.
\smallskip
\cmd{EvVField[X\_List,f]} applies the evolutionary vector field of characteristic $X^i$, $i=1,\ldots,N$ to the differential polynomial $f$.
\smallskip
\cmd{Integr[f,param\_List]} transforms a polynomial in the indeterminates \cmd{param}$=\{\lambda,\mu,\ldots,\psi,\omega\}$ into a polynomial in $\{\lambda,\ldots,\psi\}$ by substituting $\omega$ with $-\lambda-\mu-\cdots-\psi-\partial$, where $\partial$ acts on the coefficients. In the $D>1$ case, each of the parameters must be replaced by a list of $D$ entries. This auxiliary function is convenient in the study of the skewsymmetry, since $\{{u_i}_{-\lambda-\partial}u_j\}$ can be obtained by \cmd{Integr[LambdaB[gen[[i]],gen[[j]],P,$\mu$],\{$\lambda$,$\mu$\}]}, or for the study of the PVA cohomology (see \cite{DSK13}).
\subsection{List of commands in WAlg}\label{sec:6}
In this section we list the commands provided by \textsl{WAlg}. We discuss separately the commands that constitute the main core of the program and those that can have broader applications, for instance to prepare the input needed by the program.
Please note that the symbols \verb=q=, \verb=y=, \verb=z=, \verb=\[ScriptS]= (i.e.\ $s$) and \verb=\[Beta]= (i.e.\ $\beta$) are used by the program; hence they should not be used as variable names in your notebook.
\subsubsection{Principal commands of the program}
\cmd{InitializeWAlg[name\_String,n\_Integer]} is the first command that the program must receive after loading the package. It sets the simple Lie algebra $\mathfrak{g}$ underlying the classical affine $\mathcal{W}$-algebra. If, for instance, one would like to start from $A_6$, the command should be \cmd{InitializeWAlg["A",6]}.
\smallskip
\cmd{SetNil[a\_List]} sets the nilpotent element $f\in\mathfrak{g}$ in order to
construct $\mc W(\mf g,f)$.
\smallskip
\cmd{GetNil[]} gives the nilpotent element $f$ used in the definition of the classical affine $\mathcal{W}$-algebra.
\smallskip
\cmd{GetDim[]} gives the dimension of the matrices used for the explicit representation of $\mathfrak{g}$.
\smallskip
\cmd{SetS[s\_List]} sets the element $s\in\mf g$ used in the definition of the affine PVA, as in \eqref{lambda}. If the command is given without argument, it automatically chooses a generic element of $\mathfrak{g}_d$. Notice that the command must be called before computing the $\lambda$-brackets between the generators of the classical affine
$\mathcal{W}$-algebra.
\smallskip
\cmd{GetS[]} gives the element $s$ used in \eqref{lambda} after it has been set.
\smallskip
\cmd{ComputeWAlg[nil\_List]} computes a basis for $\mathfrak{g}^f$ made of $\ad x$-eigenvectors,
where $h=2x$ is the diagonal element of the $\mathfrak{sl}_2$-triple containing $f=\cmd{nil}$, as well as
the dual basis (with respect to the trace form) of $\mf g^e$ and the corresponding $\ad x$-eigenvalues (with multiplicities).
All these outputs can be displayed by using the next three commands:
\smallskip
\cmd{GetWBasis[]} gives the list of elements of the aforementioned basis for $\mathfrak{g}^f$;
\smallskip
\cmd{GetWBasisDual[]} gives the list of elements of the dual basis of $\mf g^e$;
\smallskip
\cmd{GetWEigen[]} gives the list of $\ad x$-eigenvalues.
\smallskip
\cmd{GetX[]} gives the element $x=h/2$, where $h$ is the diagonal element of the $\mathfrak{sl}_2$-triple associated to $f$; it can be used only after executing the command \cmd{ComputeWAlg[]}.
\smallskip
\cmd{w[a\_List]} given an element $a\in\mf g$, it applies the projection $\pi_{\mf g^f}$ to it and then the map $w$ defined in Theorem \ref{thm:structure-W}.
\smallskip
\cmd{GenerateH[par\_]} must be run after \cmd{ComputeWAlg[]} and \cmd{SetS[]}. It computes the
Poisson structure $H$ defined by equation \eqref{ham_W} using Theorem \ref{20140304:thm}.
It uses \cmd{par} as the formal indeterminate (the default is $\beta$).
\smallskip
\cmd{LoadTableIndices[filename\_String]} chooses a file different from the default (\verb=listK_6.txt=) as the source of the indices used in the formula \eqref{20140304:eq4}. It is necessary to use it (after generating the suitable file) when
$d>6$, see Section \ref{sec:5.3}.
\smallskip
\cmd{GenerateTableIndices[n\_Integer]} computes a custom list of indices going up to \verb=n=, and saves it in the file \verb=listK_n.txt= for further usage. Notice that the computation is extremely time-consuming, see Section \ref{sec:5.3}.
\smallskip
\cmd{GetVirasoro[nil\_List]} provides the Virasoro element of Proposition \ref{20160203:prop1}(c)
with $f=\cmd{nil}$.
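To see how the principal commands fit together, here is a sketch of a typical session. It is illustrative only: the explicit matrix chosen for $f$ (built from elementary matrices via \cmd{M}) is an assumption, and the actual input depends on the nilpotent orbit one wants to study.

```mathematica
(* Illustrative session, not from the manual; the choice of f is an assumption *)
InitializeWAlg["A", 2];            (* start from the Lie algebra A_2 *)
f = M[2, 1] + M[3, 2];             (* a nilpotent element, given as a matrix *)
SetNil[f];
ComputeWAlg[f];                    (* basis of g^f, dual basis, ad x-eigenvalues *)
GetWBasis[]
SetS[];                            (* generic element s of g_d *)
GenerateH[\[Beta]]                 (* Poisson structure H of eq. (ham_W) *)
```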
\subsubsection{Other useful commands}
\cmd{Comm[a\_,b\_]} computes the commutator between the two matrices \verb=a= and \verb=b=.
\smallskip
\cmd{Prod[a\_,b\_]} computes the value of the symmetric invariant bilinear form $\tr(\mathtt{a}\mathtt{b})$.
\smallskip
\cmd{Proj[a\_List]} applies the map $\pi_{\mf g^f}$ to an element $a\in\mf g$. It must be run after \cmd{ComputeWAlg[]}.
\smallskip
\cmd{M[i\_Integer,j\_Integer]} gives the elementary matrix (of dimensions \cmd{GetDim[]}) with 1 in the position $(\mathtt{i},\mathtt{j})$.
\smallskip
\cmd{CheckAlg[a\_List]} checks whether the matrix \verb=a= belongs to the Lie algebra declared in \verb=InitializeWAlg[]=.
\smallskip
\cmd{Sigma[a\_List]} computes $\sigma(\mathtt{a})$ according to the definition given in Section \ref{sec:WAlg}.
\smallskip
\cmd{SetDispPar[s\_]} sets a dispersive parameter (the default is 1, hence making it invisible) in formula
\eqref{20140304:eq4}, useful if we want to compute the dispersionless limit of this formula.
\smallskip
\cmd{GetDispPar[]} gives the aforementioned dispersive parameter.
\subsection{Generation of the indices}\label{sec:5.3}
For $a\in\mf g^f_{-l}$ and $b\in\mf g^f_{-m}$, formula \eqref{20140304:eq4} involves a long summation over the indices $(\vec{j},\vec{n})\in J_{-\vec{k}}$, where $\vec{k}=(k_1,\ldots,k_t)$, $-l+1\leq k_1\prec\cdots\prec k_t\leq m$.
For each $t\geq1$, the list of the indices $k_1,\ldots,k_t$ is finite; moreover, given $l$ and $m$, the maximum value for $t$ is the smallest integer $\bar t$ such that $\bar t\geq l+m$.
The generation of the indices $\vec{k}$ is a long process, and it dramatically slows down the execution time of the command \cmd{GenerateH[]}. To prevent this issue, a precompiled list of indices is distributed together with the package, in the file \cmd{listK\_6.txt}. It contains the default data for the computation of formula \eqref{20140304:eq4}, and it works for
all classical affine $\mc W$-algebras with $d\leq6$ (this is sufficient, for example, to compute the classical affine $\mc W$-algebras associated to all nilpotent orbits of $\mf o_8$).
If one wants to work with Lie algebras with $d>6$, then it is necessary to generate a bigger table of indices, which may be computed before starting the computation of the Poisson structure $H$, and not necessarily in an interactive session. If the user does not notice that a bigger set of indices is needed, the command \cmd{GenerateH[]} will produce a long list of error messages.
The Mathematica kernel, without the user interface, can usually be run in a shell with the command \cmd{math}. After loading the package, one generates the table of indices with the command
\begin{center}\cmd{GenerateTableIndices[d]}.\end{center}
The command can take up to several hours to be completed, and generates a file \cmd{listK\_d.txt} saved in the active folder.
To use a previously generated table of indices, the command \cmd{LoadTableIndices[filename]} must be run before \cmd{GenerateH[]}. The file will be looked for in the active folder, unless the full path is specified.
\begin{flushleft}
\hspace{1mm}\includegraphics[scale=0.55]{eg6-2.pdf}
\end{flushleft}
\section{Introduction}
Aggregation is a fundamental process in which physical
objects merge irreversibly to form larger objects. Aggregation has
numerous applications ranging from astronomy where planetary systems
form via collisions of planetesimals, to atmospheric science
\cite{drake,sein}, to chemical physics where polymeric chains
chemically bond and form polymeric networks or gels
\cite{flory,stock,floryB}, to computer science
\cite{Aldous,jlr,bk-rev}.
The standard framework for modeling aggregation is as
follows. Initially, the system consists of a large number of identical
molecular units (``monomers''). A cluster (``polymer'') is composed of
an integer number of monomers, termed the cluster mass. In each
aggregation event, a pair of clusters merges, thereby forming a larger
cluster whose mass equals the sum of the two original masses.
When the number of aggregation events is unlimited, the system
condenses into a single cluster. However, in most practical
applications, other processes intervene well before that, and as a
result, the final state has multiple clusters, rather than a single
condensate. For example, fragmentation of large clusters into smaller
clusters is one mechanism that may counterbalance aggregation and
prevent condensation.
In this study, we focus on another control mechanism: freezing. We use
the generic term freezing to describe situations where there are two
types of clusters: reactive clusters that participate in aggregation
events and passive clusters that do not participate in aggregation
events.
In particular, we consider the case where reactive clusters have a
finite lifetime. In our model, reactive clusters spontaneously turn
into frozen clusters. Spontaneous freezing can occur via various
mechanisms. For instance, the environment may contain ``traps'' that
absorb the diffusing polymers. In this situation, the reactive
clusters are the free (mobile) polymers and the frozen clusters are
the polymers adsorbed to the trap surface. Another example is a
system of linear polymers with reactive end monomers. In a merging
event, two different chains chemically bond via the end monomers,
while in a freezing event, the two end monomers of the same chain bond
to form a ring. Rings can no longer participate in aggregation
events. Thus, in this case the linear polymers are reactive and the
ring polymers are frozen.
In aggregation with freezing, it is natural to consider the initial
condition where there are reactive clusters only. Of course, the
system ends with frozen clusters only. Of special interest is the
final mass distribution of frozen clusters. In this study, we address
the two classical aggregation rates.
First, we study the simplest aggregation process where the aggregation rate
is independent of the cluster mass. We find that the mass distribution of
reactive clusters decays exponentially with the cluster mass. In general, the
mass density of frozen clusters also decays exponentially with the cluster
mass. However, when the freezing rate is very small, there is a power-law
behavior, $F_k\sim k^{-1}$, over a substantial range of masses.
Second, we consider the case where the aggregation rate is
proportional to the product of the masses, a process that is widely
used to model polymerization and gelation. We find that when the
freezing rate exceeds a certain threshold, no gels form, while when
the freezing rate is below this threshold, at least one gel
forms. Interestingly, the number of gels produced fluctuates from
realization to realization. For supercritical freezing rates, the mass
distribution of frozen clusters decays exponentially, while below this
threshold, it decays algebraically, $F_k\sim k^{-3}$ \cite{bk}.
\section{The master equations}
We analyze the stochastic process of aggregation with freezing using
the rate equation approach. Let us first consider the evolution of
reactive clusters. In aggregation processes, two reactive clusters of
masses $i$ and $j$ merge to form a larger reactive cluster of mass
$i+j$. The aggregation rate $K(i,j)$ is a function of the two cluster
masses. The freezing process is random: reactive clusters may
spontaneously freeze with a constant rate. This freezing rate $f_k$
may be mass-dependent. Therefore, the mass distribution $R_k(t)$ of
reactive clusters of mass $k$ at time $t$ evolves according to the
generalized Smoluchowki equation
\begin{eqnarray}
\label{Rk}
\frac{dR_k}{dt}\!=\!\frac{1}{2}\sum_{i+j=k}K(i,j)R_iR_j
\!-\!R_k\sum_{i=1}^\infty K(i,k)R_i\! -\! f_k R_k.
\end{eqnarray}
The first two terms account for gain and loss of clusters of mass $k$,
and the last term accounts for loss due to freezing. This master
equation assumes perfect mixing as the probability of finding two
clusters at the same position is a product of the probabilities of
finding each of the clusters independently at the same position. We
restrict our attention to the natural case of the monodisperse initial
condition, $R_k(0)=\delta_{k,1}$.
The mass distribution of frozen clusters $F_k(t)$ is coupled to the
mass distribution of reactive clusters according to the rate equation
\begin{equation}
\label{Fk}
\frac{dF_k}{dt}=f_k R_k.
\end{equation}
It is simple to check that the total mass density $\sum_k
k(R_k+F_k)=1$ is conserved by the evolution equations
(\ref{Rk})--(\ref{Fk}). Initially, there are no frozen clusters so
$F_k(0)=0$. Eventually, all clusters become frozen, so the final mass
distribution $F_k(\infty)$ of frozen clusters is of particular
interest.
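Before specializing to solvable kernels, the structure of the master equations can be probed numerically. The sketch below (ours, with illustrative truncation and step-size choices, not part of the original analysis) integrates the truncated equations for a constant kernel $K=2$ and constant freezing rate, and verifies that the total mass $\sum_k k(R_k+F_k)$ stays conserved.

```python
# Numerical sketch (illustrative parameters, not from the paper):
# forward-Euler integration of the truncated master equations for
# K(i,j) = 2 and f_k = alpha, starting from monomers, followed by a
# check of total mass conservation, sum_k k (R_k + F_k) = 1.

KMAX, ALPHA, DT, STEPS = 40, 0.5, 2e-3, 1000   # assumed values

def derivs(R):
    """dR_k/dt = sum_{i+j=k} R_i R_j - (2*Rtot + alpha) R_k."""
    Rtot = sum(R)
    dR = [0.0] * (KMAX + 1)
    for k in range(1, KMAX + 1):
        gain = sum(R[i] * R[k - i] for i in range(1, k))
        dR[k] = gain - (2.0 * Rtot + ALPHA) * R[k]
    return dR

R = [0.0] * (KMAX + 1)
R[1] = 1.0                      # monodisperse initial condition
F = [0.0] * (KMAX + 1)
for _ in range(STEPS):
    dR = derivs(R)
    for k in range(1, KMAX + 1):
        F[k] += DT * ALPHA * R[k]   # dF_k/dt = alpha R_k
        R[k] += DT * dR[k]

mass = sum(k * (R[k] + F[k]) for k in range(1, KMAX + 1))
print(abs(mass - 1.0) < 1e-3)   # conserved up to truncation error
```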
The master equations (\ref{Rk})--(\ref{Fk}) are sets of infinitely
many coupled nonlinear differential equations, and they are generally
unsolvable. Even in the absence of freezing, these equations are
solvable only for special aggregation rates
\cite{drake,Aldous,fran}. The three classical solvable cases are the
constant rate $K(i,j)={\rm const.}$, the sum rate $K(i,j)=i+j$, and
the product rate $K(i,j)=ij$. These cases represent natural
aggregation processes. Mass independent aggregation rates correspond
to an aggregation process where two clusters are chosen randomly to
merge. Aggregation rates proportional to the product of the two
cluster masses correspond to polymerization processes where two
monomers are picked randomly to form a chemical bond; consequently,
their respective clusters are merged. The sum rate is a hybrid between
the two as it is an aggregation process where a randomly chosen
monomer bonds with a randomly chosen polymer. In this study, we focus
on the two most widely used cases of constant and product aggregation
rates.
\section{Constant Aggregation Rate}
First, we discuss how the constant aggregation rate relates to
polymerization in the presence of traps. To treat this problem
formally, one should write down the master equations with
inhomogeneous densities and add diffusion terms. Then, one should
study these equations in the trap-free region subject to the absorbing
boundary conditions imposed by the traps. This approach is not
practical and the reaction-rate approach provides a powerful
alternative \cite{smol,chandra,ovchin,gleb,redner}. The reaction-rate
approach is roughly speaking an effective-medium theory that ignores
the complicated influence of each trap on the diffusion of particles
and instead, represents this influence by averages. The reaction-rate
approach was used by Smoluchowski to compute the aggregation rate
$K(i,j)$ for Brownian particles. Assuming that merging happens
immediately upon collision, and that particles are spherical and have
radii $R_i$ and $R_j$ and diffusion coefficients $D_i$ and $D_j$,
Smoluchowski obtained
\begin{equation}
\label{K}
K(i,j)=4\pi (D_i+D_j)(R_i+R_j).
\end{equation}
Stokes' law shows that the diffusion coefficient of a Brownian
particle is inversely proportional to its radius, $D_k\sim 1/R_k\sim
k^{-1/3}$, and therefore the Brownian kernel becomes
\begin{equation}
\label{KB}
K(i,j)\propto \left(i^{-1/3}+j^{-1/3}\right)\left(i^{1/3}+j^{1/3}\right).
\end{equation}
Here, we ignored the overall multiplicative factor as it is irrelevant
for the current discussion.
The master equations with this complicated Brownian kernel have not
been solved even in the case of pure aggregation. To simplify the
analysis, Smoluchowski suggested to replace the Brownian kernel
(\ref{KB}) by the constant kernel. These two kernels have one common
feature---they both are invariant under the dilatation
$K(ai,aj)=K(i,j)$. Therefore, one expects that both kernels lead to
similar behaviors, and to a certain extent, i.e., as far as overall
scaling properties are concerned, this approximation is sensible
\cite{fran}.
A straightforward extension of Eq.~(\ref{K}) gives the freezing rate
\begin{equation}
\label{fk}
f_k=4\pi\,n\, (D_k+D)(R_k+a)
\end{equation}
where $n$ is the density of traps that are assumed to be spheres of
radius $a$ and diffusion coefficient $D$. The clusters are usually
polymers whose molecular weight is small compared to the size of the
traps; hence $R_k\ll a$ and $D_k\gg D$. Therefore $f_k=4\pi a nD_k$,
yielding the mass dependence
\begin{equation}
\label{fk1}
f_k\propto k^{-1/3}.
\end{equation}
Thus, a constant aggregation rate together with spontaneous freezing
approximate aggregation of Brownian particles in the presence of
traps. We stress that the use of a constant aggregation rate instead
of (\ref{KB}) is an approximation.
\subsection{Constant freezing rates}
Since the constant aggregation rate merely sets the overall time
scale, we may conveniently set its value $K(i,j)=2$ without loss of
generality. Let us first consider constant freezing rates,
$f_k=\alpha$. The master equation (\ref{Rk}) becomes
\begin{equation}
\label{Rk-const}
\frac{dR_k}{dt}=\sum_{i+j=k}R_iR_j-(2R+\alpha)R_k.
\end{equation}
Here, we used the total density of reactive clusters, $R=\sum_k
R_k$. In general, for mass-independent freezing rates, it is possible
to eliminate the freezing term from the master equation by
transforming the mass distribution $R_k=C_k\, e^{-\alpha t}$ and
introducing the time variable
\begin{equation}
\label{tau}
\tau=\frac{1-e^{-\alpha t}}{\alpha}\,.
\end{equation}
The time variable $\tau$ grows from $0$ to $\alpha^{-1}$ as the
physical time increases from $0$ to $\infty$. With these
transformations, the governing equations for the densities $C_k$
reduce to the pure aggregation case
\hbox{$dC_k/d\tau=\sum_{i+j=k}C_iC_j-2C\,C_k$} with the total density
$C=\sum_k C_k$. We briefly recall how to solve these equations. The
total density obeys $dC/d\tau=-C^2$ and subject to the initial
condition $C(0)=1$, the total density is $C(\tau)=(1+\tau)^{-1}$. Let
us now introduce the exponential ansatz $C_k(\tau)=A\,a^{k-1}$ with
$A(0)=1$ and $a(0)=1$ to satisfy the initial conditions. Substituting
this ansatz into the master equation and equating mass-independent and
mass-dependent terms separately, yields $dA/d\tau=-2(1+\tau)^{-1}A$
and therefore, $A=(1+\tau)^{-2}$, and $da/d\tau=A$ leading to
$a=\tau/(1+\tau)$. The well-known solution for the pure aggregation
case is therefore
\begin{equation}
\label{Ck-sol}
C_k(\tau)=\frac{\tau^{k-1}}{(1+\tau)^{k+1}}\,.
\end{equation}
Thus, the mass distribution of reactive clusters reads
\begin{equation}
\label{Rk-sol}
R_k(\tau)=(1-\alpha\tau) \frac{\tau^{k-1}}{(1+\tau)^{k+1}}\,.
\end{equation}
The exponential mass dependence is as in the pure aggregation
case. Also, the total density of reactive clusters is $R=(1-\alpha
\tau)/(1+\tau)$ and as expected, the reactive clusters do eventually
deplete $R(t=\infty)=R(\tau=1/\alpha)=0$.
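As an independent sanity check (ours, not part of the derivation), one can verify with a finite difference that the explicit solution for $C_k$ indeed satisfies the pure-aggregation equation:

```python
# Sanity check (ours): verify that C_k(tau) = tau^(k-1)/(1+tau)^(k+1)
# satisfies dC_k/dtau = sum_{i+j=k} C_i C_j - 2 C C_k, approximating
# the left-hand side by a central finite difference.

def C(k, tau):
    return tau ** (k - 1) / (1 + tau) ** (k + 1)

tau, h = 0.7, 1e-6
ok = True
for k in range(1, 12):
    lhs = (C(k, tau + h) - C(k, tau - h)) / (2 * h)
    Ctot = 1.0 / (1 + tau)          # total density C = sum_k C_k
    rhs = sum(C(i, tau) * C(k - i, tau) for i in range(1, k)) \
          - 2 * Ctot * C(k, tau)
    ok = ok and abs(lhs - rhs) < 1e-6
print(ok)  # True
```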
The mass distribution of frozen clusters is found by integrating the
equation $dF_k/d\tau=\alpha\,C_k$ with respect to $\tau$. Substituting
(\ref{Ck-sol}), and using \hbox{$d\tau/dt=(1-\alpha\tau)=e^{-\alpha
t}$}, the integration is immediate and
\begin{equation}
\label{Fk-sol}
F_k(\tau)=\frac{\alpha}{k}\left(\frac{\tau}{1+\tau}\right)^k.
\end{equation}
We see that in addition to the dominant exponential behavior, there is
an additional algebraic prefactor. The total density of frozen
clusters $F=\sum_k F_k$ is found by summation, $F(\tau)=\alpha\ln
(1+\tau)$ and in particular, the final density of frozen clusters is
$F(\infty)\equiv F(t=\infty)=\alpha\ln (1+1/\alpha)$. Also,
the final mass distribution of frozen clusters is
\begin{equation}
\label{Fk-final}
F_k(\infty)=\frac{\alpha}{k}\left(\frac{1}{1+\alpha}\right)^k.
\end{equation}
In general, the mass distribution decays exponentially, but there is
a $k^{-1}$ algebraic correction.
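A quick numerical check (ours) confirms that all the mass indeed ends up frozen: summing $kF_k(\infty)$ over the final distribution gives exactly one, and the total density sums to $\alpha\ln(1+1/\alpha)$.

```python
import math

# Consistency check (ours): with F_k(inf) = (alpha/k) (1+alpha)^(-k),
# all mass ends up frozen, sum_k k F_k(inf) = 1, and the total density
# of frozen clusters is F(inf) = alpha * ln(1 + 1/alpha).

for alpha in (0.1, 1.0, 3.0):
    mass = sum(alpha * (1 + alpha) ** (-k) for k in range(1, 4000))
    dens = sum(alpha / k * (1 + alpha) ** (-k) for k in range(1, 4000))
    assert abs(mass - 1.0) < 1e-9
    assert abs(dens - alpha * math.log(1 + 1 / alpha)) < 1e-9
print("ok")
```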
\subsection{Slow freezing}
The most interesting behavior occurs in the slow freezing limit: as
$\alpha\to 0$, the final mass distribution becomes algebraic
\begin{equation}
F_k(\infty)\simeq \alpha\,k^{-1}.
\end{equation}
This power law holds over a substantial mass range, \hbox{$k\ll
\alpha^{-1}$}. Beyond this scale, the tail is exponential,
\hbox{$F_k(\infty)\simeq \alpha\,k^{-1}\,e^{-\alpha k}$}.
The results in the slow freezing limit can be alternatively obtained
using perturbation theory. Indeed, the modified time variable
coincides with the original time variable, $\tau\to t$ as $\alpha\to
0$, and the pure aggregation results are recovered. In other words,
the freezing loss term $-f_kR_k$ can be neglected in the master
equation (\ref{Rk}). Using this perturbation approach we address two
related problems: general freezing rates and aggregation in
low-dimensional systems.
Let us consider general mass-dependent freezing rates $f_k$. Dropping
the freezing loss term from the master equation, the reactive cluster density
is as in the pure aggregation case $R_k=t^{k-1}(1+t)^{-k-1}$, given by
Eq.~(\ref{Ck-sol}). The mass distribution of frozen clusters is
obtained by integrating Eq.~(\ref{Fk})
\begin{equation}
\label{Fkt-sol}
F_k(t)=\frac{f_k}{k}\left(\frac{t}{1+t}\right)^k.
\end{equation}
We see that the algebraic prefactor $k^{-1}$ is generic. Therefore,
the final mass distribution is
\begin{equation}
\label{Fk-inf}
F_k(\infty)=k^{-1}\,f_k.
\end{equation}
This behavior applies for masses below some threshold mass $k_*$,
while the mass distribution sharply vanishes above the threshold. The
threshold mass is estimated from mass conservation:
\begin{equation}
\label{threshold}
1=\sum_{k\geq 1} kF_k(\infty)\sim \sum_{k=1}^{k_*} f_k.
\end{equation}
As argued above, for Brownian coagulation in the presence of traps,
$f_k=\beta k^{-1/3}$. For slow freezing, $\beta\ll 1$, we conclude
that the final mass distribution is algebraic, $F_k(\infty)\simeq \beta
k^{-4/3}$, below the threshold mass $k_*\sim \beta^{-3/2}$.
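The scaling of the threshold mass can be checked directly. The sketch below (ours, with illustrative values of $\beta$) finds the smallest $k_*$ satisfying the mass-conservation condition and confirms $k_*\sim\beta^{-3/2}$.

```python
# Rough numerical check (ours) of the threshold-mass estimate: with
# f_k = beta * k^(-1/3), the smallest k* with sum_{k<=k*} f_k >= 1
# should scale as k* ~ beta^(-3/2), so dividing beta by 4 should
# multiply k* by about 4^(3/2) = 8.

def kstar(beta):
    total, k = 0.0, 0
    while total < 1.0:
        k += 1
        total += beta * k ** (-1.0 / 3.0)
    return k

k1, k2 = kstar(0.01), kstar(0.0025)
ratio = k2 / k1
print(ratio)  # close to 8
```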
The rate equation approach neglects spatial correlations as the
probability of finding two clusters at the same position is
represented by the product of the probabilities of finding each cluster
separately at the same position. This mean-field approximation is
valid only when the spatial dimension exceeds the critical dimension
$d_c$ \cite{gleb,redner}. It is therefore interesting to study the
behavior below the critical dimension.
We address here the Point Cluster Model (PCM) where the radii and the
diffusion coefficients are both mass-independent. In this case
$d_c=2$. The lattice PCM is defined as follows: clusters occupy
single lattice sites and hop to adjacent sites with rate $D$; if a
reactive cluster hops onto a site occupied by another reactive
cluster, both clusters merge. We assume that frozen clusters do not
affect reactive clusters. The PCM without freezing can be solved
exactly in one dimension. When all lattice sites are initially
occupied by monomers, the density of reactive clusters of mass $k$ is
\cite{spouge,af}
\begin{equation}
\label{spouge-sol}
R_k(t)=e^{-4Dt}\left[I_{k-1}(4Dt)-I_{k+1}(4Dt)\right]
\end{equation}
where $I_n$ is the modified Bessel function of order $n$. Here, we
implicitly considered the slow freezing limit. The density of frozen
clusters is found from $dF_k/dt=f_k R_k$, that is of course always
valid. Using the identity \hbox{$\int_0^\infty
dx\,e^{-x}\left[I_{k-1}(x)-I_{k+1}(x)\right]=2$}, the final
distribution of frozen clusters is
\begin{equation}
\label{Fk-inf-1D}
F_k(\infty)=(2D)^{-1}\,f_k.
\end{equation}
Remarkably, the very same answer (\ref{Fk-inf-1D}) is also found for
the continuous version of the PCM. The mass distribution
(\ref{Fk-inf-1D}) holds up to a certain threshold mass. For example,
for the constant freezing rate $f_k=\alpha\ll 1$ the threshold mass is
$k_*\sim \sqrt{D/\alpha}$.
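As a consistency check (ours), the mass normalization of the quoted solution, $\sum_k kR_k=1$, can be verified numerically; here $e^{-x}I_n(x)$ is evaluated through its standard integral representation.

```python
import math

# Sanity check (ours): the 1D PCM solution
# R_k = e^{-4Dt} [ I_{k-1}(4Dt) - I_{k+1}(4Dt) ]
# conserves mass, sum_k k R_k = 1. The scaled Bessel function
# e^{-x} I_n(x) = (1/pi) int_0^pi e^{x(cos t - 1)} cos(n t) dt
# is computed by trapezoidal quadrature, which is spectrally
# accurate here because all odd derivatives vanish at the endpoints.

def scaled_iv(n, x, m=800):
    """e^{-x} I_n(x) via the integral representation above."""
    h = math.pi / m
    total = 0.5 * (1.0 + math.exp(-2.0 * x) * math.cos(n * math.pi))
    for j in range(1, m):
        t = j * h
        total += math.exp(x * (math.cos(t) - 1.0)) * math.cos(n * t)
    return total * h / math.pi

for x in (2.0, 8.0):                        # x = 4Dt
    mass = sum(k * (scaled_iv(k - 1, x) - scaled_iv(k + 1, x))
               for k in range(1, 60))
    assert abs(mass - 1.0) < 1e-6
print("ok")
```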
\section{Product aggregation rate}
The product aggregation rate $K(i,j)=ij$ is equivalent to the
Flory-Stockmayer gelation model \cite{flory,stock,aal}. In this
model, any two monomers may form a chemical bond and when this happens
the two respective polymers become one. Thus, the aggregation rate
equals the product of the cluster masses. In this polymerization
process, a polymer network (``gel'') emerges in a finite time, and it
is giant in the sense that it contains a finite fraction of the
monomers in the system. Eventually it grows to engulf the entire
system. This gelation model is also the simplest mean-field model of
percolation \cite{zhe,sh}.
As in the previous section, we analyze in detail mass-independent
freezing rates, $f_k=\alpha$, for which the master equation (\ref{Rk})
becomes
\begin{equation}
\label{Rkt}
\frac{dR_k}{dt}=\frac{1}{2}\sum_{i+j=k}ij R_i R_j - m k R_k - \alpha R_k
\end{equation}
where $m$ is the total mass density of reactive clusters. If all
clusters are finite in size then $m=M_1=\sum_{k\geq 1}kR_k$ where
$M_n=\sum_k k^n R_k$ is the general $n$th moment of the
distribution. Again, we consider the monodisperse initial conditions
$R_k(0)=\delta_{k,1}$ and $F_k(0)=0$.
Low order moments of the mass distribution obey closed equations and
thus, provide a useful probe of the aggregation dynamics. The total
mass density of reactive clusters satisfies $dm/dt=-\alpha m$
reflecting the loss due to freezing, and therefore
\begin{equation}
\label{M1-sol}
m(t)=m(0)e^{-\alpha t}\,.
\end{equation}
The total mass density decays exponentially with the physical time, or
equivalently, linearly with the modified time, $m(\tau)=1-\alpha
\tau$. Furthermore, the second moment includes in addition to the
linear loss term, a nonlinear term that accounts for changes due to
aggregation, $dM_2/dt=M_2^2-\alpha M_2$. Solving this equation with
arbitrary initial condition yields
\begin{equation}
\label{M2-sol}
M_2(t)=\alpha\,\left[\left(\frac{\alpha}{M_2(0)}-1\right)\,
e^{\alpha t}+1\right]^{-1}.
\end{equation}
Divergence of the second moment signals the emergence of a gel in a
finite time, i.e., the occurrence of the gelation phase
transition. Fixing the freezing rate, the initial conditions govern
whether gelation does or does not occur: gelation occurs when the
initial mass is sufficiently large, $M_2(0)>\alpha$, but otherwise
there is no gelation. Conversely, fixing the initial conditions,
gelation occurs only for slow enough freezing. For the monodisperse
initial conditions, the critical freezing rate is $\alpha_c=1$. When
gelation does occur, the gelation time is
\begin{equation}
\label{tg}
t_g=-\frac{1}{\alpha}\ln \left(1-\frac{\alpha}{M_2(0)}\right).
\end{equation}
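Both the closed form for $M_2$ and the gelation time are easy to verify numerically. The following sketch (ours, with illustrative parameter values) compares a direct Euler integration of $dM_2/dt=M_2^2-\alpha M_2$ with the closed form, and checks that the denominator of the closed form vanishes exactly at $t_g$.

```python
import math

# Numerical check (ours, illustrative parameters): integrate
# dM2/dt = M2^2 - alpha*M2 with a small Euler step and compare with
# M2(t) = alpha * [ (alpha/M2(0) - 1) e^{alpha t} + 1 ]^{-1};
# then check that the denominator vanishes at the gelation time t_g.

def M2_exact(t, alpha, m20):
    return alpha / ((alpha / m20 - 1.0) * math.exp(alpha * t) + 1.0)

# Supercritical freezing (alpha > M2(0)): M2 stays finite.
alpha, m20, dt = 2.0, 1.0, 1e-5
m2 = m20
for _ in range(100000):               # integrate up to t = 1
    m2 += dt * (m2 * m2 - alpha * m2)
err = abs(m2 - M2_exact(1.0, alpha, m20))

# Subcritical freezing: the denominator vanishes at t_g.
alpha = 0.5
t_g = -math.log(1.0 - alpha / m20) / alpha
denom = (alpha / m20 - 1.0) * math.exp(alpha * t_g) + 1.0
print(err, denom)
```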
The gelation point separates two phases. Prior to the gelation time,
the system contains only finite clusters that undergo cluster-cluster
aggregation. Past the gelation point, the gel grows via cluster-gel
aggregation. We term these two the coagulation phase and the gelation
phase, respectively. The above expressions for the first two moments
are valid for the coagulation phase only.
The mass distribution of reactive clusters is found again by
transforming the mass distribution $R_k=e^{-\alpha t}C_k$ and the time
variable (\ref{tau}). With these transformations, the problem reduces
to the no-freezing case, \hbox{$dC_k/d\tau=\frac{1}{2}\sum_{i+j=k} ij C_i
C_j-kMC_k$} with $m=Me^{-\alpha t}$. Using the variable
$u(\tau)=\int_0^\tau M(\tau') d\tau'$ and the transformation $C_k=G_k
\tau^{k-1}e^{-ku}$, the master equation reduces to a recursion
equation for the {\it time-independent} coefficients $G_k$:
$(k-1)G_k=\frac{1}{2}\sum_{i+j=k}ij G_iG_j$. This equation is solved
using the generating function technique. The so-called
``tree-function'' $G(z)=\sum_k kG_k e^{kz}$ satisfies $dG/dz=G/(1-G)$
and the solution of this differential equation obeys
$Ge^{-G}=e^z$. The coefficients $G_k=k^{k-2}/k!$ are found using the
Lagrange inversion formula \cite{wilf}. Hence, the mass distribution
of reactive clusters is
\begin{equation}
\label{formal}
R_k(\tau)=\frac{k^{k-2}}{k!}\,\,(1-\alpha\tau)\,\tau^{k-1}\,e^{-k\,u}\,.
\end{equation}
The corresponding generating function
$\mathcal{R}(z)=\sum_{k=1}^\infty k R_k e^{kz}$ can be expressed in
terms of the tree-function
\begin{equation}
\label{rzt}
\mathcal{R}(z)=\tau^{-1}(1-\alpha\tau)\,G(z+\ln \tau-u).
\end{equation}
Explicitly, the tree function is $G(z)=\sum_{k\geq
1}\frac{k^{k-1}}{k!}\, e^{kz}$.
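The defining relation $Ge^{-G}=e^z$ can be checked directly from the series; the snippet below (ours) is a quick numerical verification for $z$ in the convergence region.

```python
import math

# Check (ours): the tree function G(z) = sum_{k>=1} (k^{k-1}/k!) e^{kz}
# satisfies G e^{-G} = e^z; the terms are built in log form to avoid
# overflow, and the series converges rapidly for z sufficiently negative.

def tree(z, terms=200):
    return sum(math.exp((k - 1) * math.log(k) - math.lgamma(k + 1) + k * z)
               for k in range(1, terms + 1))

for z in (-1.5, -2.0, -3.0):
    G = tree(z)
    assert abs(G * math.exp(-G) - math.exp(z)) < 1e-10
print("ok")
```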
The mass distribution (\ref{formal}) is only a formal solution because
the total mass density $m$ and hence the variable $u$ are yet to be
determined. Prior to gelation, the solution can be obtained in an
explicit form because the various variables are known. From the first
moment (\ref{M1-sol}) then $M=1$ and therefore
\begin{equation}
\label{u}
u=\tau
\end{equation}
for $t<t_g$. In this case, the mass distribution decays exponentially
at large masses and the typical cluster mass is finite. When
$\alpha>1$, there is no gelation transition, and this behavior
characterizes the mass distribution at all times. Otherwise, when
gelation does occur, the gelation time (\ref{tg}) is simply
$\tau_g=1$. The gelation point is marked by an algebraic divergence
of the mass distribution, $R_k(t_g)\sim (1-\alpha)k^{-5/2}$, for large
$k$.
Using the explicit expression for $R_k$ prior to gelation, we can
calculate the mass distribution of the frozen clusters produced up to
that point. Substituting (\ref{u}) into the formal solution
(\ref{formal}) and integrating $dF_k/dt=\alpha R_k$ over time using
$d\tau/dt=e^{-\alpha t}=(1-\alpha \tau)$ yields the distribution $F_k(t_g)$
of frozen clusters produced prior to gelation ($t_g\equiv\infty$ for
$\alpha>1$)
\begin{equation}
\label{Pk-final}
F_k(t_g)=
\begin{cases}
\frac{\alpha}{k^2\cdot k!}\,\,\gamma(k,k)&\alpha\leq 1,\\
\frac{\alpha}{k^2\cdot k!}\,\,\gamma(k,k/\alpha)&\alpha\geq 1,
\end{cases}
\end{equation}
where $\gamma(n,x)=\int_0^x dy\,y^{n-1}\,e^{-y}$ is the incomplete
gamma function. When $\alpha\geq 1$, this quantity equals the final
distribution of frozen clusters, $F_k(\infty)=F_k(t_g)$. At large
masses, the behavior is as follows
\begin{equation}
\label{fk-tail}
F_k(\infty)\simeq
\begin{cases}
\frac{1}{2}\cdot k^{-3} &\alpha=1,\\
A(\alpha)\,k^{-7/2}\exp\left[-B(\alpha) k\right] &\alpha>1,
\end{cases}
\end{equation}
where $A=(2\pi)^{-1/2}\alpha^2/(\alpha-1)$ and
$B=\alpha^{-1}+\ln\alpha -1$. These asymptotic results were obtained
using the steepest descent method. Quantitatively, the mass
distribution differs from that obtained for constant aggregation
rates. However, qualitatively, there is a similarity: there is
exponential decay above a critical freezing rate and algebraic decay
at and below this critical freezing rate. For the constant aggregation
rate, the critical freezing rate vanishes, but for the product
aggregation rate the critical rate is finite.
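The large-mass behavior at the critical freezing rate can be verified numerically. For integer $k$ one has $\gamma(k,k)/\Gamma(k)=1-e^{-k}\sum_{j<k}k^j/j!$, so $F_k(\infty)$ at $\alpha=1$ is easy to evaluate; the check below (ours) confirms the approach to $\tfrac{1}{2}k^{-3}$.

```python
import math

# Check (ours) of the alpha = 1 tail: F_k(inf) = gamma(k,k)/(k^2 * k!)
# should approach (1/2) k^{-3} at large k. For integer k the regularized
# incomplete gamma is P(k,k) = 1 - e^{-k} sum_{j<k} k^j/j!, and since
# Gamma(k)/(k^2 * k!) = k^{-3}, we have F_k(inf) = P(k,k)/k^3.

def Fk(k):
    q = sum(math.exp(j * math.log(k) - k - math.lgamma(j + 1))
            for j in range(k))          # Q(k,k), built in log form
    return (1.0 - q) / k ** 3           # P(k,k) / k^3

ratios = [Fk(k) * 2 * k ** 3 for k in (50, 200, 800)]
print(ratios)  # approaches 1 as k grows
```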
At the gelation transition a giant cluster that contains a fraction of
the mass in the system emerges. Past the gelation point, two
aggregation processes occur in parallel: in addition to
cluster-cluster aggregation, the giant cluster grows by swallowing
finite clusters. Now, reactive clusters consist of finite clusters
(the ``sol'') with mass $s=M_1$, and the gel with mass $g$. The total
mass density is $m=s+g$. These three masses are coupled via the
evolution equations
\begin{subequations}
\label{mst-eq}
\begin{align}
\frac{dm}{dt}&= - \alpha\,s,\\
\frac{ds}{dt}&=-\alpha\,s-\frac{s(m-s)}{1-s\tau e^{\alpha t}}.
\end{align}
\end{subequations}
The initial conditions are $m(t_g)=s(t_g)=1-\alpha$. The first
equation reflects that as long as the gel remains reactive, mass may
be converted from the reactive state to the frozen state via freezing
of finite clusters. The second equation follows from $ds/dt=-\alpha
M_1-g\,M_2$, obtained by summing (\ref{Rkt}). The second moment is
written explicitly, $M_2= \mathcal{R}_z(z=0)=s/(1-s\tau e^{\alpha
t})$, using the aforementioned identity \hbox{$G'(z)=G/(1-G)$}. Once these equations
are solved, the solution (\ref{formal}) becomes explicit. We analyze
this equation using perturbation theory in the limits $\alpha\uparrow
1$ and $\alpha\downarrow 0$ as detailed in Appendix A. For general
freezing rates, we solve these equations numerically (Fig.~\ref{sol}).
\begin{figure}[t]
\includegraphics*[width=0.4\textwidth]{fig1.eps}
\caption{The total mass, the sol mass, and the gel mass versus
the modified time $\tau$ for $\alpha=1/2$.}
\label{sol}
\end{figure}
In addition to the freezing of the finite clusters, the gel itself may
freeze. One way to characterize the gel is by its maximal possible size
$g_{\rm max}=\lim_{t\to\infty}g(t)$. The limiting behaviors are as
follows (see Appendix A)
\begin{equation}
\label{g-pert}
g_{\rm max}\to
\begin{cases}
1-\frac{\pi^2}{6}\alpha&\alpha \downarrow 0,\\
C(1-\alpha)^2&\alpha \uparrow 1,
\end{cases}
\end{equation}
with $C=1.303892$. The maximal gel size decreases as the freezing rate
increases. Just below the critical freezing rate, the gel is very
small as its size shrinks quadratically with the distance from the
critical point $g\sim (1-\alpha)^2$; perturbation analysis shows that
this behavior is generic and not limited to the maximal gel
size. Therefore, freezing is a mechanism for controlling the gel size:
by using freezing rates just below the critical rate, it is possible
to produce micro-gels.
\begin{figure}[t]
\includegraphics*[width=0.4\textwidth]{fig2.eps}
\caption{The maximal gel mass $g_{\rm max}$ versus the freezing rate
$\alpha$ (solid line). Perturbation theory results are shown using
dashed lines.}
\label{gmax}
\end{figure}
The gel freezes following a random Poisson process: its lifetime $T$
is distributed according to the exponential distribution
\begin{equation}
\label{pt}
P(T)=\alpha\, e^{-\alpha T}.
\end{equation}
As long as the gel is active the system evolves in a deterministic
fashion. When the gel freezes, the total reactive mass exhibits a
discontinuous downward jump (Fig.~\ref{mass}). Since the gel freezes
according to a random process, the mass of the frozen gel is also a
random variable. Moreover, this quantity is not self-averaging as it
fluctuates from realization to realization.
\begin{figure}[t]
\includegraphics*[width=0.375\textwidth]{fig3.eps}
\caption{The mass density $m$ (MASS) versus time $\tau$ (TIME). The
system alternates between the coagulation phase and the gelation
phase. In the former phase the mass decreases linearly according to
(\ref{M1-sol}) such that depletion occurs at time $\tau=1/\alpha$. In
the latter phase, the active mass decreases slower than linear
according to (\ref{mst-eq}). The gelation phase ends when the gel
freezes.}
\label{mass}
\end{figure}
When the gel freezes, the system consists of finite-mass clusters only
and therefore, the system re-enters the coagulation phase. The system
may then undergo a second gelation transition that ends when the gel
freezes. Therefore, the evolution is cyclic, with the system
alternating between coagulation and gelation. For the same reason
that the gel mass fluctuates, the number of frozen gels produced is
also a fluctuating quantity.
The number of gels produced is in principle unlimited, i.e., there is
a finite probability $P_n>0$ that $n$ gels are produced. Since the
second moment diverges at the gelation transition according to
(\ref{M2-sol}), it necessarily exceeds the freezing rate during
some finite time interval following the gelation transition. If the
gel freezes during this time interval then a successive gelation is
bound to occur. We also note that the evolution in the coagulation
phase is deterministic and for example, the first two moments follow
Eqs.~(\ref{M1-sol}) and (\ref{M2-sol}). The ``initial'' conditions are
given by the state of the system when the gel freezes.
It is therefore natural to ask: what is the probability that multiple
gels are produced? This is the probability that the gel freezes prior
to time $t_*$ given by $M_2(t_*)=\alpha$. Using the second moment
$M_2=s/(1-s\tau e^{\alpha t})$ this condition simplifies to
\begin{equation}
\label{cond}
s(t_*)=\alpha\, e^{-\alpha t_*}.
\end{equation}
The probability that multiple gels are produced is obtained by
integrating (\ref{pt}) up to this time, \hbox{$P_{\rm
mult}=\int_0^{t_*-t_g} dT\, P(T)$} with the limiting behaviors (see
Appendix A)
\begin{equation}
\label{pmult}
P_{\rm mult}\to
\begin{cases}
\alpha \ln \frac{1}{e\alpha}&\alpha \downarrow 0,\\
0.450851&\alpha \uparrow 1.
\end{cases}
\end{equation}
This probability increases as the freezing rate increases
(Fig.~\ref{pm}). It is generally substantial, and moreover, it exhibits a
discontinuity at $\alpha=\alpha_c$.
\begin{figure}[t]
\includegraphics*[width=0.4\textwidth]{fig4.eps}
\caption{The multiple gel probability $P_{\rm mult}$ versus the freezing rate
$\alpha$ (solid line). Perturbation theory results are shown using a
dashed line.}
\label{pm}
\end{figure}
\begin{figure}[t]
\includegraphics*[width=0.4\textwidth]{fig5.eps}
\caption{The final mass distribution of the frozen clusters for
$\alpha=1/2$. The simulation results represent an average over
$10^2$ independent realizations in a system of mass $N=10^6$.}
\label{frozen}
\end{figure}
We now address the final mass distribution of frozen gels. Analytic
treatment of the successive gelation phases is difficult due to the
stochastic nature of the freezing process. Numerically, there are two
ways to treat the problem. One may integrate the rate equations
(\ref{mst-eq}) up to the gel freezing time that is distributed
according to (\ref{pt}) and then repeat this procedure if a successive
gelation occurs. We prefer Monte Carlo simulations where, since the
system is finite, there is no need to distinguish the gel from the
finite clusters. In the simulations, we keep track of the total
aggregation rate $R_a=N(M_1^2-M_2)/2$ and of the total freezing rate
$R_f=\alpha NM_0$, where $N$ is the number of monomers. Aggregation
occurs with probability $R_a/(R_a+R_f)$, and freezing occurs with the
complementary probability. A cluster is chosen for aggregation with
probability proportional to its mass. Time is augmented by $\Delta
t=1/(R_a+R_f)$ after each aggregation or freezing event.
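The event-driven procedure just described can be sketched in a few lines of Python. This is our minimal illustration rather than the code used for the figures: the total product-kernel aggregation rate is normalized here as $\sum_{i<j} m_i m_j / N$ (consistent with $R_a$ up to the moment-normalization convention), and all function names are ours.

```python
import random

def aggregate_with_freezing(N=1000, alpha=0.5, seed=1):
    """Event-driven Monte Carlo for product-kernel aggregation with
    spontaneous freezing. Returns the frozen-cluster masses and the
    final time."""
    rng = random.Random(seed)
    active = [1] * N          # N monomers of unit mass
    frozen = []
    t = 0.0
    while len(active) > 1:
        S1 = sum(active)
        S2 = sum(m * m for m in active)
        Ra = (S1 * S1 - S2) / (2.0 * N)   # total aggregation rate
        Rf = alpha * len(active)          # total freezing rate
        t += 1.0 / (Ra + Rf)              # time step after each event
        if rng.random() < Ra / (Ra + Rf):
            # choose two distinct clusters, each with probability ~ mass
            i = rng.choices(range(len(active)), weights=active)[0]
            j = i
            while j == i:
                j = rng.choices(range(len(active)), weights=active)[0]
            active[i] += active[j]
            active.pop(j)
        else:
            # a uniformly chosen cluster freezes
            frozen.append(active.pop(rng.randrange(len(active))))
    frozen.extend(active)     # whatever survives is frozen at the end
    return frozen, t
```

Mass is conserved by construction, so the frozen masses always sum to $N$; increasing $\alpha$ shifts the final distribution toward smaller clusters.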
For the case $\alpha<1$, numerical simulations provide convincing
evidence that the tail behavior
\begin{equation}
F_k(\infty)\sim k^{-3}, \qquad \alpha<1
\end{equation}
is universal (Fig.~\ref{frozen}). This indicates that frozen clusters
produced during the coagulation phase dominate at large masses as
every such phase is expected to contribute $k^{-3}$ to the tail
according to Eq.~(\ref{fk-tail}). Intuitively, this is clear because
large clusters are quickly merged into the gel and therefore, frozen
clusters produced in the presence of the gel tend to be small.
Interestingly, freezing leads to a new critical exponent in mean-field
percolation.
\section{Conclusions}
In summary, we studied aggregation processes with freezing. For
constant freezing rates, the problem can be formally reduced to the no
freezing case. The mass distribution of frozen clusters resembles the
mass distribution of reactive clusters, decaying exponentially at
large masses, when the freezing is sufficiently fast. Novel behavior
emerges when the freezing rate is slower than some critical value. In
this case, the mass distribution of frozen clusters decays
algebraically. For constant aggregation rate, the critical freezing
rate is zero but for the product aggregation rate, it is finite.
For the product aggregation rate, the freezing rate controls the
gelation process. If it is sufficiently high, no gelation occurs, and
if it is just below the threshold, micro-gels are produced. If one gel
is produced, then multiple gels are possible. In this case, the mass
of the gels and their number are both controlled by a random process
and as a result, they fluctuate from realization to realization. The
system exhibits a series of gelation transitions and it alternates
between ordinary coagulation and gelation. The random freezing
process governs the number of percolation transitions as well as the
mass of the frozen gels.
The behavior when freezing occurs spontaneously is quite different
than that found when freezing is reaction-induced (upon merger a
cluster may freeze with some fixed probability)
\cite{kb,bk-scaling}. For the constant aggregation rate, the mass
distribution is always algebraic and the characteristic exponent is
non-universal as it depends on the freezing probability. For the
product aggregation rate, the gelation transition is always
suppressed, because the probability that a cluster remains reactive
decays exponentially with the number of merger events.
There are many open questions raised by this study. For example, it
will be interesting to investigate the behavior in low-dimensional
systems where the rate equation approach no longer
holds. Additionally, the exponent characterizing the mass distribution
of frozen clusters represents a novel critical exponent in percolation
processes and this should be a challenging problem below the critical
dimension.
\section{Introduction}
\label{SecI}
Interferometric detectors of gravitational waves with frequency
content $0 < f < f_0$ may be thought of as optical configurations with
one or more arms folding coherent trains of electromagnetic waves (or
beams) of nominal frequency $\nu_0 \gg f_0$. At points where these
intersect, relative fluctuations of frequency or phase are monitored
(homodyne detection). Frequency fluctuations in a narrow Fourier band
can alternatively be described as fluctuating side-band
amplitudes. Interference of two or more beams, produced and monitored
by a (nonlinear) device such as a photo detector, exhibits these
side-bands as a low frequency signal again with frequency content
$0 < f < f_0$. The observed low frequency signal is due to frequency
variations of the sources of the beams about $\nu_0$, to relative
motions of the sources and any mirrors (or amplifying microwave or
optical transponders) that do any beam folding, to temporal variations
of the index of refraction along the beams, and, according to general
relativity, to any time-variable gravitational fields present, such as
the transverse trace-less metric curvature of a passing plane
gravitational wave train. To observe these gravitational fields in
this way, it is thus necessary to control, or monitor, the other
sources of relative frequency fluctuations, and, in the data analysis,
to optimally use algorithms based on the different characteristic
interferometer responses to gravitational waves (the signal) and to
the other sources (the noise).
By comparing phases of split beams propagated along equal but
non-parallel arms, frequency fluctuations from the source of the beams
are removed directly at the photo detector and gravitational wave
signals at levels many orders of magnitude lower can be detected.
Especially for interferometers that use light generated by presently
available lasers, which display frequency stability roughly a few
parts in $10^{-13}$ in the millihertz band, it is essential to remove
these fluctuations when searching for gravitational waves of
dimensionless amplitude smaller than $10^{-21}$.
Space-based, two-arm interferometers
\cite{LISA2017,Taiji,TianQin,gLISA2015,Astrod} are prevented from
canceling the laser noise by directly interfering the beams from the
two unequal arms at a single photo detector because laser phase
fluctuations experience different delays. As a result, two Doppler
data from the two arms are measured at two different photo detectors
and are then digitally processed to compensate for the inequality of
the arms. This data processing technique, called Time-Delay
Interferometry (TDI) \cite{TD2020}, entails time-shifting and linearly
combining the two Doppler measurements so as to achieve the required
sensitivity to gravitational radiation.
In a recent article \cite{Vallisneri2020}, a data processing
alternative to TDI has been proposed for the two-arm
configuration. This technique, which has been named TDI-$\infty$ (as
it cancels the laser noise at an arbitrary time $t$ by linearly
combining all the Doppler measurements made up to time $t$), relies on
an identified linear relationship between the two Doppler measurements
made by an unequal-arm Michelson interferometer and the laser
noise. Based on this formulation, TDI-$\infty$ cancels laser phase
fluctuations by applying linear algebra manipulations to the Doppler
data. Through its implementation, TDI-$\infty$ is claimed to (i)
simplify the data processing for gravitational wave signal searches in
the laser-noise-free data over that of TDI, (ii) work for any
time-dependent light-time delays, and (iii) automatically handle data
gaps.
After briefly reviewing the TDI-$\infty$ technique for the two
unequal-arm configuration, we show that care must be taken to account for
the light-travel-times when linearly relating the two-way Doppler
measurements to the laser noise \cite{Vallisneri2020}. The two-way
Doppler data at a time $t$ is the result of the interference between
the returning beam and the outgoing beam. As such it contains the
difference between the value of the laser noise at time $t - l_i(t)$
affecting the returning beam (with $l_i(t)$ being the
round-trip-light-time (RTLT)) and the laser noise of the outgoing beam
at time $t$ when the measurement is recorded. From the instant the
laser is switched on (let us say $t=0$) each two-way Doppler
measurement becomes different from zero only for $t \ge l_i (t)$,
i.e. when the returning beam and the outgoing beam start to
interfere. By accounting for this observation in the ``boundary
conditions'' of the Doppler data, we show that it is
possible to introduce a matrix representation of TDI.
We would like to briefly mention here another matrix based
approach. Romano and Woan \cite{PCA2006} have used Bayesian inference
to set up a noise covariance matrix of the data streams. Then by
performing a principal component analysis of the covariance matrix,
they identify the principal components with large eigenvalues with the
laser noise and so distinguish it from other ambient noises and signal
which correspond to small eigenvalues. We argue that this approach is
also a matrix representation of the original TDI.
Here we provide a summary of this article. In section \ref{SecII} we
present the key-points of TDI-$\infty$ and correct the expression of
the matrix introduced in \cite{Vallisneri2020} relating the two arrays
associated with the two-way Doppler measurements to the array of the
laser noise. We then recast this linear relationship in terms of two
square-matrices, each relating the array associated with one of the
two-way Doppler measurement to the array of the laser noise. As
expected these matrices are singular, reflecting the physical
impossibility of reconstructing the laser noise array from the arrays
associated with the two-way Doppler data. In the simple configuration
of a stationary interferometer whose RTLTs are integer-multiples of
the sampling time, we show that the linear combination of the two-way
Doppler arrays canceling the laser noise is equal to the sampled
unequal-arm Michelson TDI combination $X$. In section \ref{SecIII} we
then turn to the problem of a stationary three-arm array with three
laser noises and six one-way Doppler measurements. After deriving the
expressions of the matrices relating the laser noises to the one-way
Doppler measurements, we show that the generators of the space of the
combinations canceling the laser noises are equal to the sampled
TDI-combinations ($\alpha, \beta, \gamma, \zeta$) \cite{TD2020} in
which the delay operators have been replaced by our derived
matrices. This is rigorously established in section \ref{represent} by
showing that the matrix formulation is just a ring representation of
the first module of syzygies, that is, a ring homomorphism. We cover
all cases of interest: we first treat delays that are integer
multiples of the sampling interval, then the continuum case, in which
the sampling interval tends to zero, and finally the case in which
fractional-delay filtering based on Lagrange polynomials is used to
reconstruct the samples at any required time. For fractional delays we
show that the homomorphism is valid (i) when all delays lie in the
same interpolation interval, (ii) when the delays lie in different
interpolation intervals, and (iii) for time-dependent
arm-lengths. In all these cases we show that there
is a ring homomorphism. Thus the matrix formulation is in principle
the same as the original formulation of TDI, although it might offer
some advantages when implemented numerically. Finally, in section
\ref{SecIV} we present our concluding remarks and summarize our
thoughts about potential advantages in processing the TDI measurements
cast in matrix form when searching for gravitational wave signals.
\section{Matrix representation of the two-way Doppler measurements}
\label{SecII}
Equal-arm interferometer detectors of gravitational waves can observe
gravitational radiation by canceling the laser frequency fluctuations
affecting the light injected into their arms. This is done by
comparing phases of split beams propagated along the equal (but
non-parallel) arms of the detector. The laser frequency fluctuations
affecting the two beams experience the same delay within the two
equal-length arms and cancel out at the photo-detector where relative
phases are measured. This way gravitational-wave signals of
dimensionless amplitude less than $10^{-22}$ can be observed when
using lasers whose frequency stability can be as large as roughly a
few parts in $10^{18}$ in the kilohertz band.
If the arms of the interferometer have different lengths, however, the
exact cancellation of the laser frequency fluctuations, say $C (t)$,
will no longer take place at the photo-detector. In fact, the larger
the difference between the two arms, the larger will be the magnitude
of the laser frequency fluctuations affecting the detector response.
If $l_1$ and $l_2$ are the RTLTs of the laser beams
within the two arms, it is easy to see that the amount of laser
relative frequency fluctuations remaining in the response are equal
to:
\begin{equation}
\Delta C (t) = C(t - l_1) - C(t - l_2) \, .
\label{DC}
\end{equation}
In the case of a space-based interferometer such as LISA for instance,
whose lasers are expected to display relative frequency fluctuations
equal to about $10^{-13}/\sqrt{\mathrm{Hz}}$ in the mHz band and
RTLTs will differ by a few percent \cite{LISA2017},
Eq.~(\ref{DC}) implies uncanceled fluctuations from the laser as large
as $\approx 10^{-16}/\sqrt{\mathrm{Hz}}$ at a millihertz frequency
\cite{TD2020}. Since the LISA sensitivity goal is about
$10^{-20}/\sqrt{\mathrm{Hz}}$ in this part of the frequency band, it
is clear that an alternative experimental approach for canceling the
laser frequency fluctuations is needed.
An elegant method entirely implemented in the time domain was first
suggested in \cite{TA99} and then generalized in a series of related
publications (see \cite{TD2020} and references therein). Such a
method, named Time-Delay Interferometry (or TDI) as it requires
time-shifting and linearly combining the recorded data, carefully
accounts for the time-signature of the laser noise in the two-way
Doppler data. TDI relies on the optical configuration exemplified by
Fig.~\ref{Fig0} \cite{FB84,FBHHV85, FBHHSV89,GHTF96}. In this
idealized model the two beams exiting the two arms are
{\underbar{not}} made to interfere at a common photo-detector. Rather,
each is made to interfere with the incoming light from the laser at a
photo-detector, decoupling in this way the laser phase fluctuations
experienced by the two beams in the two arms. In the case of a
stationary array, cancellation of the laser noise at an arbitrary time
$t$ requires only four samples of the measurements made at and around
$t$. Contrary to a previously proposed technique \cite{FB84,FBHHV85,
FBHHSV89,GHTF96}, which required processing in the Fourier domain a
large ($\sim$ six months) amount of data to sufficiently suppress the
laser noise at a time $t$ \cite{TA99,TD2020}, TDI can be regarded as a
``local'' method.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width = \linewidth, clip]{Fig0.pdf}
\end{center}
\caption{Light from a laser is split into two beams, each
injected into an arm formed by pairs of free-falling
mirrors. Since the RTLTs, $l_1$ and $l_2$, are
different, now the light beams from the two arms are not
recombined at one photo detector. Instead each is separately made
to interfere with the light that is injected into the arms. Two
distinct photo detectors are now used, and phase (or frequency)
fluctuations are then monitored and recorded there.}
\label{Fig0}
\end{figure}
In a recent publication \cite{Vallisneri2020}, a new ``global''
technique for canceling the laser noise has been proposed. This
technique, which has been named TDI-$\infty$, establishes a linear
relationship between the sampled Doppler measurements and the laser
noise arrays. It is claimed to work for any time-dependent delays and
to cancel the laser noise at an arbitrary time $t$ by taking linear
combinations of the two-way Doppler measurements sampled at all times
before $t$.
To understand the formulation of TDI-$\infty$, let us consider again
the simplified (and stationary) two-arm optical configuration shown in
Figure \ref{Fig0}. In it the laser noise, $C(t)$, folds into the two
two-way Doppler data, $y_1(t)$, $y_2(t)$, in the following way (where
we disregard the contributions from all other physical effects
affecting the two-way Doppler data):
\begin{eqnarray}
y_1(t) = C(t - l_1(t)) - C(t) \ ,
\nonumber
\\
y_2(t) = C(t - l_2(t)) - C(t) \ ,
\label{eq1}
\end{eqnarray}
where $l_1$, $l_2$ are the two RTLTs, in general also
functions of time $t$.
Operationally, Eq. (\ref{eq1}) says that each sample of the two-way
Doppler data at time $t$ contains the difference between the laser
noise $C$ generated one RTLT earlier, at $t - l_i(t) \ , \ i = 1, 2$, and
that generated at time $t$. Figure \ref{fig1} displays graphically
what we have just described. The important point to note here is what
happens during the first $l_i$ seconds from the instant $t = 0$ when
the laser is switched on. Since the $y_i$ measurements are the result
of interfering the returned beam with the outgoing one, during the
first $l_i$ seconds (i.e. from the moment the laser has been turned
on) the $y_i$ measurements are identically equal to zero because no
interference measurements can be performed during this time. In other
words, during the first $l_i$ seconds there is not yet a returning
beam with which the local light is made to interfere. In
\cite{Vallisneri2020}, however, only the first terms on the
right-hand-sides of Eq. (\ref{eq1}) were disregarded during these time
intervals. Although the TDI-$\infty$ technique is mathematically
correct, the use of these nonphysical ``boundary conditions'' results in
solutions that do not cancel the laser noise when applied to the
Doppler data measured by future space-based interferometers. We
verified this analytically (by implementing the TDI-$\infty$ algorithm
with the help of the program {\it Mathematica} \cite{Wolf02}) when the
two light-times are constant and equal to integer multiples of the
sampling time. We found the resulting solutions to be linear
combinations of the TDI unequal-arm Michelson combination $X$ defined
at each of the sampled times, plus \textit{an additional term} that
would not cancel the laser noise in the measured data. This additional
term is a function of $y_1$ and $y_2$ defined at times $t < l_1, l_2$
and thus, is a manifestation of the non-physical boundary conditions.
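To make the physical boundary condition concrete, the following minimal Python sketch (ours, with assumed names) generates a two-way Doppler series from a laser-noise array for a constant RTLT of $l$ samples, keeping the measurement identically zero before the first returning beam:

```python
import random

def two_way_doppler(C, l):
    """y(t_n) = C(t_n - l) - C(t_n) once the beam has returned (n >= l);
    identically zero before that, as no interference exists yet."""
    return [C[n - l] - C[n] if n >= l else 0.0 for n in range(len(C))]

random.seed(7)
C = [random.gauss(0.0, 1.0) for _ in range(12)]   # laser-noise samples
y1 = two_way_doppler(C, 2)   # RTLT l1 = 2 sampling times
y2 = two_way_doppler(C, 3)   # RTLT l2 = 3 sampling times
assert y1[:2] == [0.0, 0.0] and y2[:3] == [0.0, 0.0, 0.0]
```

The zeroed leading samples are exactly the rows that must vanish in the matrix formulation discussed in the text.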
In an attempt to avoid this problem, one might consider starting to
process the Doppler data at a time $t$ after the first RTLT has
passed. However, one would still be confronted by the fact that the
Doppler measurement $y_i$ at time $t$ contains laser noise generated
at time $t$ and at time $t - l_i$. In other words, there exists a
time-mismatch between the array of the Doppler measurement and that of
the laser noise and physical boundary conditions have to be accounted
for in a realistic simulation.
TDI-$\infty$ is a ``global'' data processing algorithm, i.e. its
solutions at time $t$ require use of all samples of the Doppler
measurements recorded up to time $t$. Our computations for assessing
the effects of the nonphysical boundary conditions were carried out
only for relatively short time intervals, namely for stretches of
data containing about 200 samples. Although they indicate the
dependence of the solutions on the boundary conditions, it is possible
that for year-long stretches of data the effects of the selected
boundary conditions might not be significant. This, however, needs to
be mathematically proved. A detailed mathematical investigation of
this point should be carried out in the future and may require
extensive work.
\begin{figure}
\centering
\includegraphics[width = 7in]{Fig1.pdf}
\caption{The laser noise random process (blue color), $C(t)$, together
with the corresponding two-way Doppler measurement (red color),
$y_i$. The laser is switched on at time $t=0$. Since it takes $l_i$
seconds for the beam to return to the transmitting spacecraft,
$y_i$ is identically equal to zero since no Doppler measurements can
be performed during this time interval.}
\label{fig1}
\end{figure}
In TDI-$\infty$ the two sampled two-way Doppler data streams are
packaged in a single array in an alternating fashion starting from
time $t=t_0$ when the laser is switched on. Assuming a stationary
array configuration in which the RTLTs $l_1$, $l_2$ are equal to twice
and three times the sampling time $\Delta t$, (as exemplified in
\cite{Vallisneri2020}), the measurements array is linearly related to
the array associated with the samples of the laser noise $C$ through a
rectangular $2N \times N$ matrix $M$ ($N$ being the number of
considered samples) in the following way:
\begin{equation}
\left(
\begin{array}{c}
y_1(t_0) \\
y_2(t_0) \\
y_1(t_1) \\
y_2(t_1) \\
y_1(t_2) \\
y_2(t_2) \\
y_1(t_3) \\
y_2(t_3) \\
y_1(t_4) \\
y_2(t_4) \\
\vdots
\end{array}
\right)
=
\left(
\begin{array}{rrrrrr}
-1 & 0 & 0 & 0 & 0 & \cdots \\
-1 & 0 & 0 & 0 & 0 & \cdots \\
0 & -1 & 0 & 0 & 0 & \cdots \\
0 & -1 & 0 & 0 & 0 & \cdots \\
1 & 0 & -1 & 0 & 0 & \cdots \\
0 & 0 & -1 & 0 & 0 & \cdots \\
0 & 1 & 0 & -1 & 0 & \cdots \\
1 & 0 & 0 & -1 & 0 & \cdots \\
0 & 0 & 1 & 0 & -1 & \cdots \\
0 & 1 & 0 & 0 & -1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right)
\cdot
\left(
\begin{array}{c}
C(t_0) \\
C(t_1) \\
C(t_2) \\
C(t_3) \\
C(t_4) \\
\vdots
\end{array}
\right) \ .
\label{design}
\end{equation}
As shown by Eq. (\ref{design}), rows $1$ through $4$ and row $6$ of
matrix $M$ reflect the assumption made in \cite{Vallisneri2020} that the
two Doppler measurements contain the laser noise $C$ only at time
$t$ during the time intervals $t_0 \le t < t_0 + l_i$. If, on the
other hand, we correctly assume rows $1$ through $4$ and row $6$ to be
identically equal to zero, the null-space associated to the matrix $M$
will clearly be different.
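The corrected matrix can be generated for arbitrary constant integer RTLTs; the helper below is our illustrative sketch (not code from either reference), which zeroes every row corresponding to a sample taken before the beam in that arm has returned:

```python
def doppler_matrix(N, delays):
    """2N x N matrix relating the interleaved two-way Doppler samples to
    N laser-noise samples; rows with t_n < l_i are identically zero."""
    M = [[0] * N for _ in range(2 * N)]
    for n in range(N):                  # sample index (time t_n)
        for a, l in enumerate(delays):  # arm a = 0, 1 with RTLT l
            if n >= l:                  # beam has returned: C(t-l) - C(t)
                M[2 * n + a][n - l] = 1
                M[2 * n + a][n] = -1
    return M

M = doppler_matrix(5, (2, 3))
assert M[0] == [0] * 5 and M[5] == [0] * 5   # y2(t_2): no return yet
assert M[4] == [1, 0, -1, 0, 0]              # y1(t_2) = C(t_0) - C(t_2)
```

With the delays $(2, 3)$ of the example above, this reproduces the matrix of Eq. (\ref{design}) with rows 1 through 4 and row 6 set to zero.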
To better understand and quantify this difference, we split the above
measurement's array in two arrays, ($Y_1, Y_2$), (one per measurement)
and introduce two corresponding ($N \times N$) square-matrices
relating the measurement arrays to the array of the laser noise. We
assume again a stationary configuration with RTLTs ($l_1, l_2$) equal
to twice and three-times the sampling time respectively. The two
vectors, $Y_1$, $Y_2$, are related to the laser noise vector $C$
through the following expressions:
\begin{equation}
Y_1 = A_1 . C \ \ \ ; \ \ \ Y_2 = A_2 . C \ ,
\label{eq5}
\end{equation}
where the symbol $.$ denotes matrix multiplication, and $A_1$, $A_2$
are equal to the following square-matrices:
\begin{equation}
A_1 = \left(
\begin{array}{rrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
1 & 0 & -1 & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & -1 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & -1 & 0 & \cdots \\
0 & 0 & 0 & 1 & 0 & -1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots
\end{array}
\right) \ \ \ \ ,
A_2 = \left(
\begin{array}{rrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
1 & 0 & 0 & -1 & 0 & 0 & \cdots \\
0 & 1 & 0 & 0 & -1 & 0 & \cdots \\
0 & 0 & 1 & 0 & 0 & -1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \ddots
\end{array}
\right) \ .
\end{equation}
Note that the above matrices incorporate the correct ``boundary
conditions'' as their first few rows are null (the number of null rows
depends on the magnitude of the RTLT). It is evident that the rank of
the matrices $A_1$, $A_2$ is less than the number of samples $N$ and
therefore they cannot be inverted. Physically this means that,
although the laser noise cannot be known/measured at any time $t$, one
can still cancel it by taking suitable linear combinations of the
two-way Doppler data. Let us consider the following linear combination
of the two-way Doppler measurements:
\begin{equation}
X \equiv A_2 . Y_1 - A_1 . Y_2 = (A_2 . A_1 - A_1 . A_2) . C \ .
\end{equation}
We have verified that the commutator $[A_1, A_2]$ is identically
zero from row 6 onward. If we write the vectors $Y_1$, $Y_2$ in terms
of their components, the linear combination $X$ becomes equal to:
\begin{equation}
X =
\left(
\begin{array}{c}
0 \\
0 \\
0 \\
y_1(t_3) - y_2(t_3) \\
y_1(t_4) - y_2(t_4) \\
-y_1(t_2) + y_1(t_5) + y_2(t_3) - y_2(t_5) \\
-y_1(t_3) + y_1(t_6) + y_2(t_4) - y_2(t_6) \\
\vdots
\end{array}
\right)
\label{X}
\end{equation}
The above vector $X$ is no other than the unequal-arm Michelson TDI
combination sampled at successive sampling times. Note that $X$ starts
to cancel the laser noise after $l_1 + l_2 = 5 \Delta t$ time-samples
have passed.
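This cancellation is easy to verify numerically. The sketch below (our own illustration, with assumed names) builds $A_1$, $A_2$ for RTLTs of 2 and 3 sampling times, forms $X = A_2 . Y_1 - A_1 . Y_2$, and checks that the laser noise disappears from sample $l_1 + l_2 = 5$ onward:

```python
import random

def shift_matrix(N, l):
    """(A . C)[n] = C[n - l] - C[n] for n >= l; zero rows before that."""
    A = [[0] * N for _ in range(N)]
    for n in range(l, N):
        A[n][n - l] = 1
        A[n][n] = -1
    return A

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

N, l1, l2 = 20, 2, 3
random.seed(11)
C = [random.gauss(0.0, 1.0) for _ in range(N)]      # laser noise
A1, A2 = shift_matrix(N, l1), shift_matrix(N, l2)
Y1, Y2 = matvec(A1, C), matvec(A2, C)               # two-way Doppler data
X = [u - v for u, v in zip(matvec(A2, Y1), matvec(A1, Y2))]
assert all(abs(x) < 1e-12 for x in X[l1 + l2:])     # noise-free afterwards
assert any(abs(x) > 1e-12 for x in X[:l1 + l2])     # residual noise before
```

The first few components of $X$ still contain laser noise, in agreement with Eq. (\ref{X}), while every later component cancels it to machine precision.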
If we were to incorporate nonphysical ``boundary conditions'' in the matrices $A_1$, $A_2$, they would be of rank $N$
and therefore invertible. As each Doppler data could be used to
reconstruct the laser noise, one could then derive a laser-noise-free
combination sensitive to gravitational radiation by taking the
difference of the two reconstructions. Since each reconstruction of
the laser noise at time $t$ would be a linear combination of samples
taken at times determined only by the RTLT of the time-series used,
any time-dependence of the RTLT could be accommodated. In other words,
since each time-series would not be delayed by the RTLT associated
with the other time-series, issues related to the non-commutativity of
the delay operators would not be present.
\section{Matrix Formulation of TDI}
\label{SecIII}
In the general case of three arms, we have six one-way Doppler
measurements and three independent laser noises. The analysis below
will also assume a stationary array and the one-way-light-times to be
equal to $L_1 = \Delta t$, $L_2 = 2 \ \Delta t$, $L_3 = 3 \ \Delta t$
respectively. Although these light-times do not reflect the array's
triangular shape, we adopt them so that we can minimize the size of
the matrices introduced for explaining our method without loss of
generality.
By generalizing what was described in the previous section for the $X$
combination, we may write the one-way Doppler data in terms of the
laser noises in the following form (using the notation introduced in
\cite{TD2020}):
\begin{eqnarray}
y_1 & = & D_3 . C_2 - I_3 . C_1 \ \ , \ \ y_{1'} = D_2 . C_3 - I_2 . C_1
\nonumber
\\
y_2 & = & D_1 . C_3 - I_1 . C_2 \ \ , \ \ y_{2'} = D_3 . C_1 - I_3 . C_2
\nonumber
\\
y_3 & = & D_2 . C_1 - I_2 . C_3 \ \ , \ \ y_{3'} = D_1 . C_2 - I_1 . C_3
\label{oneways}
\end{eqnarray}
In Eq.~(\ref{oneways}) we index the one-way Doppler data as
follows: the beam arriving at spacecraft $i$ has subscript $i$ and
is primed or unprimed depending on whether the beam is traveling
clockwise or counter-clockwise around the interferometer array, with
the sense defined by a chosen orientation of the array; the
matrices $D_{i}$ correspond to the delay operators of TDI and there
are only three of them because the array is stationary
\cite{TD2020}. The expressions for $D_{i}$ and $I_{i}$ are equal to:
\begin{equation}
D_3 = \left(
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right) \ \ \ \ ,
I_3 = \left(
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right) \ ;
\label{op1}
\end{equation}
\begin{equation}
D_2 = \left(
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right) \ \ \ \ ,
I_2 = \left(
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right) \ ;
\label{op2}
\end{equation}
\begin{equation}
D_1 = \left(
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right) \ \ \ \ ,
I_1 = \left(
\begin{array}{rrrrrrrr}
0 & 0 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 1 & 0 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & \cdots \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & \cdots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}
\right) \ .
\label{op3}
\end{equation}
The problem of identifying all possible TDI combinations associated with the
six one-way Doppler measurements becomes one of determining six
matrices, $q_{i} , q_{i'}$ such that the following equation holds:
\begin{equation}
\sum_{i=1}^{3} q_i . y_i + \sum_{i'=1}^{3} q_{i'} . y_{i'} = 0 \ ,
\label{TDI}
\end{equation}
where the above equality means ``zero laser noises''. Before
proceeding, note that the matrices $D_i$ and $I_i$ satisfy the
following identities which may be
useful later on:
\begin{eqnarray}
I_i . D_k &=& D_k \,, ~~~~~ i \leq k
\nonumber \\
&\neq& D_k \,, ~~~~~~ i > k
\nonumber \\
I_i . I_k &=& I_k . I_i = I_k \,, ~~~~~~~i \leq k
\label{Identities} \,.
\end{eqnarray}
The above identities in particular state that $I_k^2 = I_k$, that is,
$I_k$ are idempotent, or in other words they are projection operators.
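As a sanity check, the identities in Eqs. (\ref{Identities}) can be verified numerically by building finite truncations of the $D_i$ and $I_i$ matrices. The sketch below uses NumPy; the truncation size $N$ is an arbitrary illustrative choice.

```python
import numpy as np

def D(i, N):
    # Delay matrix: ones on the i-th subdiagonal, zeros elsewhere.
    return np.eye(N, k=-i)

def I(i, N):
    # Diagonal projection matrix: first i diagonal entries zero, rest one.
    return np.diag([0.0] * i + [1.0] * (N - i))

N = 12
for i in (1, 2, 3):
    for k in (1, 2, 3):
        lhs = I(i, N) @ D(k, N)
        if i <= k:
            assert np.array_equal(lhs, D(k, N))      # I_i . D_k = D_k,  i <= k
            assert np.array_equal(I(i, N) @ I(k, N), I(k, N))  # I_i . I_k = I_k
        else:
            assert not np.array_equal(lhs, D(k, N))  # I_i . D_k != D_k, i > k
    assert np.array_equal(I(i, N) @ I(i, N), I(i, N))  # idempotent projectors
print("all identities verified")
```

The checks pass for any truncation size large enough to contain the shifted entries.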
\par
By redefining the matrices $q_i$, $q_{i'}$ in the following way:
\begin{eqnarray}
q_1 . I_3 & \rightarrow & q_1 \ \ , \ \ q_2 . I_1 \rightarrow q_2 \ \ , \ \ q_3 . I_2 \rightarrow q_3 \ ,
\nonumber
\\
q_{1'} . I_2 & \rightarrow & q_{1'} \ \ , \ \ q_{2'} . I_3 \rightarrow q_{2'} \ \ , \ \ q_{3'} . I_1 \rightarrow q_{3'} \ ,
\label{qs}
\end{eqnarray}
Eq. (\ref{TDI}) assumes the following form:
\begin{eqnarray}
& (& - q_1 - q_{1'} + q_3 . D_2 + q_{2'} . D_3) . C_1
\nonumber
\\
+ & (& - q_2 - q_{2'} + q_1 . D_3 + q_{3'} . D_1) . C_2
\nonumber
\\
+ & (& - q_3 - q_{3'} + q_2 . D_1 + q_{1'} . D_2) . C_3 = 0
\label{TDIequations}
\end{eqnarray}
Since the three random processes $C_i \ , \ i=1, 2, 3$ are
independent, the above equation can be satisfied iff the three
matrices multiplying the three random processes are identically equal
to zero, i.e.:
\begin{eqnarray}
& & - q_1 - q_{1'} + q_3 . D_2 + q_{2'} . D_3 = 0 \ ,
\nonumber
\\
& & - q_2 - q_{2'} + q_1 . D_3 + q_{3'} . D_1 = 0 \ ,
\nonumber
\\
& & - q_3 - q_{3'} + q_2 . D_1 + q_{1'} . D_2 = 0 \ .
\label{TDIequations2}
\end{eqnarray}
Since the system of Eqs. (\ref{TDIequations2}) is identical in form to
the corresponding equations derived in \cite{TD2020} (see section 4.3
of \cite{TD2020} and equations therein), the solutions will assume the
same forms. It should be noticed, however, that the ``matrix''
expressions of the generators $\alpha, \beta, \gamma, \zeta$ can be
obtained from the usual TDI-expressions by taking into account that
the $q$s had been redefined (see Eq. (\ref{qs})). This means that the
Sagnac combination $\alpha$, for instance, assumes the following form:
\begin{equation}
\alpha = (y_1 + D_3 y_2 + D_1 D_3 y_3) -
(y_{1'} + D_2 y_{3'} + D_1 D_2 y_{2'}) \ ,
\label{alpha}
\end{equation}
where we have accounted for the identities given by
Eqs. (\ref{Identities}). When considering seven time-samples of the
six one-way measurements, the above expression for $\alpha$ reduces to
the following vector:
\begin{equation}
\alpha =
\left(
\begin{array}{c}
0 \\
0 \\
-y_{1'}(t_2) - y_{3'}(t_0)\\
y_1(t_3) - y_{1'}(t_3) + y_2(t_0) - y_{2'}(t_0) - y_{3'}(t_1) \\
y_1(t_4) - y_{1'}(t_4) + y_2(t_1) - y_{2'}(t_1) + y_3 (t_0) - y_{3'}(t_2) \\
y_1(t_5) - y_{1'}(t_5) + y_2(t_2) - y_{2'}(t_2) + y_3 (t_1) - y_{3'}(t_3) \\
y_1(t_6) - y_{1'}(t_6) + y_2(t_3) - y_{2'}(t_3) + y_3 (t_2) - y_{3'}(t_4) \\
\vdots
\end{array}
\right)
\label{alphaM}
\end{equation}
As in the case of the combination $X$ presented in the previous
section, here also the first few entries of the vector cannot cancel
the laser noises. This is because some of the measurements at those
time stamps are equal to zero. However, it is easy to verify that all
measurements at row seven and higher are different from zero and
reproduce the usual TDI combination $\alpha$ that cancels the laser
noise.
\section{TDI and matrix representations of delay operators}
\label{represent}
In this section we start with the general discussion of the algebraic
structure of time-delay operators and then go on to discuss the
homomorphism between the rings of time-delay operators and
matrices. We consider various cases of (i) time-delays which are
integer multiples of the sampling interval, (ii) the continuum case,
(iii) fractional time-delays with Lagrange interpolation and further
argue how the homomorphism could be extended to the situation of
time-dependent arm-lengths in which case the ring of delay operators
becomes non-commutative.
\par
We remark that the homomorphism concept is fundamental and should hold in every
situation of time delays, whether they are integer multiples of the
sampling interval, fractional, or time dependent. We argue in this section that
this is indeed so.
\subsection{General discussion of group and ring structures of time-delay operators}
\label{general}
Let us consider the data $y_j (t)$ as above. For the purpose of this
section we will drop the subscript from $y_j$ and call it just
$y (t)$. Also in the beginning of this section for purposes of
argument, we consider $- \infty < t < \infty$, that is $t \in {\mathcal R}$, the
set of real numbers. Later we will consider the realistic situation of
finite length data segment. A time delay operator ${\mathcal D}$ with delay $l$
acts on the $y$ as follows:
\begin{eqnarray}
{\mathcal D}: {\mathcal R} &\longrightarrow & {\mathcal R} \,\nonumber \\
y (t) & \longrightarrow & y (t - l) \,.
\end{eqnarray}
After having defined the delay operator ${\mathcal D}$, we may analogously
define several delay operators ${\mathcal D}_1, {\mathcal D}_2, ...$ with time delays
$l_1, l_2, ...$ respectively. The ${\mathcal D}$ operators are translations in
one dimension. The group operation here is then defined as the successive
application of the operators:
$${\mathcal D}_1 {\mathcal D}_2 y (t) = {\mathcal D}_1 y (t - l_2) = y (t - l_1 - l_2). $$
With the operation so defined the ${\mathcal D}$s form an uncountably infinite
group. When the $l_1, l_2$ are constants, the group is Abelian and
coincides with the usual translation group in one dimension.
\par
Now consider the case of time-dependent arm-lengths. Then $l_1$ and
$l_2$ are functions of time themselves, and the product operation
becomes: \begin{equation} {\mathcal D}_1 {\mathcal D}_2 y (t) = {\mathcal D}_1 y (t - l_2 (t)) = y [t - l_1 - l_2
(t - l_1)] \,, \end{equation} which is in general non-commutative and the group
is non-Abelian. Then this is not the usual translation group, but
nevertheless it is a group, when the time rate of change of
arm-lengths respects relativity, that is, ${\dot l} < 1$. Then any
${\mathcal D}$ defines a bijective map from ${\mathcal R}$ to ${\mathcal R}$, so that the inverse
exists.
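The non-commutativity for time-dependent delays can be illustrated with a short sketch: we compose two delay operators as closures acting on a function $y(t)$ and compare the two orderings. The delay functions $l_1(t)$, $l_2(t)$ below are arbitrary illustrative choices with $|\dot l| < 1$.

```python
def delay(l):
    # Delay operator: (D y)(t) = y(t - l(t)); l may be time dependent.
    def D(y):
        return lambda t: y(t - l(t))
    return D

# Illustrative slowly varying delays, |l'| < 1 as required by relativity.
l1 = lambda t: 1.0 + 0.1 * t
l2 = lambda t: 2.0 - 0.2 * t
D1, D2 = delay(l1), delay(l2)

y = lambda t: t  # identity probe function makes the net delay explicit
t = 5.0
d12 = D1(D2(y))(t)  # y[t - l1(t) - l2(t - l1(t))]
d21 = D2(D1(y))(t)  # y[t - l2(t) - l1(t - l2(t))]

# Check against the composition rule stated in the text:
assert abs(d12 - (t - l1(t) - l2(t - l1(t)))) < 1e-12
assert d12 != d21  # non-commutative when the delays are time dependent
```

With constant delays the two orderings agree, recovering the Abelian translation group.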
\par
When we consider several data streams $y_j$ as in two arm or three arm
interferometers, the ${\mathcal D}$ operators in fact form a polynomial ring
instead of only a group with the different ${\mathcal D}_j$ operators as
indeterminates \cite{DNV02}. The ring can be commutative or
non-commutative according to whether the arm-lengths are time independent
\cite{AET99,DNV02,NV04} or time dependent
\cite{STEA03,TEA04,SVDsymp,DNV10,TD2020}. The TDI data combinations
constitute a module over the polynomial ring of delay operators known
as the first module of syzygies. See \cite{DNV02,TD2020} for
details. The ring operations in general are defined in the obvious way
on a data stream $y (t)$. Given two operators ${\mathcal D}_1$ and ${\mathcal D}_2$: \begin{eqnarray}
({\mathcal D}_1 + {\mathcal D}_2) y (t) &=& {\mathcal D}_1 y (t) + {\mathcal D}_2 y (t) = y (t - l_1) + y (t - l_2) \, \nonumber \\
{\mathcal D}_1 {\mathcal D}_2 y (t) &=& {\mathcal D}_1 y(t - l_2 (t)) = y [t - l_1 - l_2 (t - l_1)]
\,. \end{eqnarray}
These operations can be extended to the whole ring by linearity. In
the examples in this paper, we consider the arm lengths to be constant
in time and so ${\mathcal D}_1 {\mathcal D}_2 = {\mathcal D}_2 {\mathcal D}_1$ and the polynomial ring is
commutative.
\subsection{Matrix representations of time-delay operators: integer valued time-delays}
We treat this case first as it is the easiest to understand
intuitively. We consider the more realistic situation where the data
segment is of finite duration $[0, T]$. We will also assume that the
data are sampled uniformly with sampling time interval $\Delta t$. Now
there is a finite number of samples $N$, labeled by the times
$t_k = k~\Delta t, ~ k = 0, 1, 2, ..., N - 1$, with
$N \Delta t = T$. See \cite{NumericalRecipes} for more details. Here
typically $N$ could be a large number, but the point is that it is
{\it finite}. So the measurements $y$ or the laser noise $C$ can be
represented by $N$ dimensional vectors in ${\mathcal R}^N$. Because noise is a
random process these are random vectors.
\par
The operators ${\mathcal D}$ now take the form of linear transformations from
${\mathcal R}^N \longrightarrow {\mathcal R}^N$ and hence in our formulation can be
represented by $N \times N$ matrices which now for this case we will
represent by just $D$. With the arm-lengths taken as in section
\ref{SecIII}, the operators ${\mathcal D}_1, {\mathcal D}_2, {\mathcal D}_3$ are represented by the
matrices given by Eqs. (\ref{op1}), (\ref{op2}), (\ref{op3}). We have
essentially discretized the previous situation of the continuum. In
the matrix representation we have represented the abstract TDI
operators ${\mathcal D}$ by the matrices $D$. The operations which were valid in
the abstract case map faithfully to the discretized version. The sum
and product of the ${\mathcal D}$ operators maps to the sum and product of the
$D$ matrices - the ring operations are preserved. This is in fact
known as a representation of a group or a ring in the literature. We
now formally define a representation \cite{Gel}:
\begin{mydef}
Let ${\cal G}$ be a group and $V$ be a finite dimensional vector
space. For every $g \in {\cal G}$ there is an associated linear map
$T_g: V \longrightarrow V$. Then the map: \begin{eqnarray}
\varphi: {\cal G} & \longrightarrow & Hom (V, V) \, \nonumber \\
g & \longrightarrow & T_g \, \nonumber \\
\end{eqnarray} is called a representation if $\varphi$ is a group
homomorphism, i.e., for every
$g_1, g_2 \in {\cal G}, ~T_{g_1 g_2} = T_{g_1} T_{g_2}$, and the group
identity $e \in {\cal G}$ maps to $T_e = I$, the unit matrix. $Hom (V, V)$
is the space of linear transformations from $V \longrightarrow V$ -
the endomorphisms of $V$.
\end{mydef}
This definition easily extends to that of rings with identity, where
now the homomorphism must be a ring homomorphism; both operations of
the ring must be preserved under the homomorphism \cite{Burrow}. $V$
is called the carrier space. In our situation, $\varphi$ maps
${\mathcal D} \longrightarrow D$ or $\varphi ({\mathcal D}) = D$; the delay operator ${\mathcal D}$
is mapped to the matrix $D$. It is easy to verify that this is indeed
a ring homomorphism. $V = {\mathcal R}^N$, the space of the vectors $y$ or $C$,
plays the role of the carrier space.
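For the integer-delay case this homomorphism can be seen very concretely: the product of two delay matrices is the matrix of the summed delay, $D_i D_j = D_{i+j}$, within a finite $N \times N$ truncation (the size below is illustrative). A minimal check:

```python
import numpy as np

N = 10
D = lambda i: np.eye(N, k=-i)  # delay by i samples

for i in (1, 2, 3):
    for j in (1, 2, 3):
        # ring homomorphism: product of delay operators maps to the
        # matrix of the summed delay
        assert np.array_equal(D(i) @ D(j), D(i + j))
        assert np.array_equal(D(i) @ D(j), D(j) @ D(i))  # commutative here
print("D_i D_j = D_{i+j} verified")
```

The commutativity assertion holds because the delays here are constants; it would fail in the time-dependent case.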
\par
We now elucidate the above discussion with an example of a TDI
observable for LISA. Considering a simple model of LISA with just
three time-delay operators ${\mathcal D}_1, {\mathcal D}_2, {\mathcal D}_3$ and constant
arm-lengths, any TDI observable is a six-component polynomial vector in
the delay operators. Let us consider the simplest of the TDI
observables, namely, $\zeta$. In the operator picture, it is an
element of the module of syzygies:
\begin{equation}
\zeta = (-
{\mathcal D}_1, - {\mathcal D}_2, - {\mathcal D}_3, {\mathcal D}_1, {\mathcal D}_2, {\mathcal D}_3) \,.
\end{equation}
In the matrix formulation, under the ring homomorphism, the matrix
form of $\zeta$ is:
\begin{equation} \zeta = - D_1 y_1 - D_2 y_2 - D_3 y_3 + D_1 y_{1'} + D_2 y_{2'} +
D_3 y_{3'} \,, \end{equation} where now the $D_k$ are $N \times N$ matrices and
the $y_j, ~y_{j'}$ are $N$-dimensional column vectors. $\zeta$ is now
an $N \times 1$ column vector which should be devoid of laser frequency
noise. Let us now check whether $\zeta$ as defined here cancels the
laser frequency noise. We may write $\zeta$ in terms of the laser
noises $C_1, C_2, C_3$ from Eq. (\ref{oneways}): \begin{equation} \zeta = D_1 (I_3
- I_2) C_1 + D_2 (I_1 - I_3) C_2 + D_3 (I_2 - I_1) C_3 \,. \end{equation} At the
time $t_k$, we then have, \begin{equation} \zeta (k) = (I_3 - I_2) C_1 (k - 1) +
(I_1 - I_3) C_2 (k - 2) + (I_2 - I_1) C_3 (k - 3) \,, \end{equation} where in
order to avoid clutter we have written $k$ for the sampling time
$t_k$. From the above equation we deduce that $\zeta (k) = 0$ for
$k \geq 5$ and also $\zeta (0) = \zeta (1) = \zeta (2) = 0$. However,
$\zeta (3) = C_2 (1) - C_1 (2) \neq 0$ and
$\zeta (4) = C_2 (2) - C_3 (1) \neq 0$, and so at these sampling times
the laser noise does not cancel.
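This pattern is straightforward to verify numerically with the matrix representations introduced above. Assuming the same finite truncations of $D_i$ and $I_i$ and arbitrary random vectors for the laser noises $C_i$, a sketch is:

```python
import numpy as np

N = 12
rng = np.random.default_rng(0)
D = {i: np.eye(N, k=-i) for i in (1, 2, 3)}                    # delay matrices
I = {i: np.diag([0.0] * i + [1.0] * (N - i)) for i in (1, 2, 3)}  # projectors
C1, C2, C3 = (rng.standard_normal(N) for _ in range(3))

# zeta expressed in terms of the laser noises, as in the text
zeta = (D[1] @ (I[3] - I[2]) @ C1
        + D[2] @ (I[1] - I[3]) @ C2
        + D[3] @ (I[2] - I[1]) @ C3)

assert np.allclose(zeta[:3], 0.0) and np.allclose(zeta[5:], 0.0)
assert np.isclose(zeta[3], C2[1] - C1[2])   # zeta(3) nonzero
assert not np.isclose(zeta[4], 0.0)          # zeta(4) nonzero
```

Only the samples $k = 3, 4$ retain laser noise, confirming the deduction above.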
\par
In the algebraic approach \cite{DNV02, TD2020}, any TDI observable is
a 6-tuple polynomial vector in the operators ${\mathcal D}_k$. In the matrix
formulation, since the operators ${\mathcal D}_k$ map to the matrices $D_k$
under the ring homomorphism, an operator polynomial maps also to a
matrix. Thus in the matrix formulation any TDI observable is expressed
in terms of 6 matrices $q_i, q_{i'}$; the polynomials $q_i,~q_{i'}$ in
the operators ${\mathcal D}_k$ are now interpreted as matrices. In the two arm
configuration discussed in Section \ref{SecII}, only two matrices are
required, $A_1$ and $A_2$. In terms of the $D_k$ matrices defined,
$A_1 = D_1 - I_1$ and $A_2 = D_2 - I_2$. In a recent work
\cite{Vallisneri2020} the two $N \times N$ matrices are juxtaposed in
the form of a $2 N \times N$ matrix $M$, called the design matrix, and
in which the $y_1, y_2$ measurements are interleaved together in rows
as in Eq. (\ref{design}). Note that the TDI combinations presented in
matrix form can be repackaged in a format $T . y$
\cite{Vallisneri2020}, which might turn out to be more advantageous
for numerical manipulations and data analysis.
\par
A Bayesian inference approach has been adopted by Romano and Woan
\cite{PCA2006}. They set up a noise covariance matrix of the data
streams $y_j, y_{j'}$ and perform a principal component analysis. From
the principal components they identify large eigenvalues with laser
noise and so distinguish it from the signal. We remark that this is
also a matrix representation of the original TDI, although a little
more complex - it is a tensor representation or product representation
\cite{Gel}. The covariance matrix is a second rank tensor. Any entry
of the covariance matrix $C_{ik}$ is an ensemble average of outer
products of the form
$y_\alpha (i) y_{\beta}^T (k), ~{y}_{\alpha'} (i) y_{\beta}^T (k)$ or
${y}_{\alpha'} (i) {y}_{\beta'}^T (k)$. We use Greek indices
$\alpha, \beta = 1, 2, 3$ to label data streams and operators to distinguish
them from time samples which are tensor indices. At each $(i, k)$,
the products in $y$ contain tensor products - for example,
$y_1 (i) {y}_{1'}^T (k)$ contains products of the $D$ matrices,
namely, $D_{3 ij} D_{2 mk} C_{2j} C_{3m}$. The outer product of the
vectors $C_{2}$ and $C_{3}$, namely,
$C_{2} \otimes C_{3} \in {\mathcal R}^N \otimes {\mathcal R}^N$ is a tensor of second rank
and the product of the $D$s acts on this tensor. These products of
$D$s define the tensor representation. ${\mathcal R}^N \otimes {\mathcal R}^N$ acts as
the carrier space for this representation.
\subsection{The continuum case}
From a logical point of view, this case could have been addressed
immediately after subsection \ref{general}, but for concreteness' sake,
we felt that we should first deal with the easier case of constant
time-delays which are integer multiples of the sampling interval.
We have already shown that the homomorphism holds for the case of delays that are integer
multiples of the sampling interval. As a matter of principle, one may
argue that if the Doppler data could be sampled at a rate as high as
required by TDI (corresponding to a sampling time of about (10 m/c)
sec) then we may approach the previous case of integer valued
time-delays and the equality $\phi({\mathcal D}_1 {\mathcal D}_2) = \phi({\mathcal D}_1) \phi({\mathcal D}_2)$
would seem to hold. This motivates us, on theoretical grounds (and
because it is instructive), to examine this question by taking the continuum
limit of the sampling interval $\Delta t \longrightarrow 0$. Then the
matrix representation of a delay operator ${\mathcal D}_1$ with delay $l_1 (t)$
tends to a delta function
$\delta [t' - (t - l_1 (t))] \equiv D_1 (t, t')$. Here the matrix
$D_1 (t, t')$ - a function of 2 variables - acts on the continuous
data stream $y (t)$ as follows: \begin{equation} {\mathcal D}_1 y (t) = \int dt'~ D_1 (t, t')
y (t') = \int dt'~ \delta [t' - (t - l_1 (t))]~ y (t') ~=~ y (t - l_1
(t)) \,, \end{equation} which is consistent with the usual definition of the
operator ${\mathcal D}_1$. Here the homomorphism $\phi$ is
$\phi ({\mathcal D}_1) = D_1 (t, t')$. If one takes two such operators even with
time-dependent delays $l_1 (t)$ and $l_2 (t)$, and applies the two
operators successively then the result is again a delta function with
a delay $l_1 (t) + l_2 (t - l_1 (t))$ as shown below: \begin{eqnarray}
\phi ({\mathcal D}_1) \star \phi ({\mathcal D}_2) &=& (D_1 \star D_2)~(t, t^{\prime \prime}) \,, \nonumber \\
&=& \int dt'~ D_1 (t,t') ~ D_2 (t', t^{\prime \prime}) \,, \nonumber \\
&=& \int dt'~\delta [t' - (t - l_1 (t))]~\delta [t^{\prime \prime} - (t' - l_2 (t'))] \,, \\
&=& \delta [t^{\prime \prime} - \{t - l_1 (t) - l_2 (t - l_1 (t)) \}] \equiv \phi
({\mathcal D}_1 {\mathcal D}_2) \,. \end{eqnarray} This proves that the matrix representation in
the continuum case is also a homomorphism. In general, \begin{equation} \phi ({\mathcal D}_1)
\star \phi ({\mathcal D}_2) = D_1 \star D_2 \neq D_2 \star D_1 = \phi ({\mathcal D}_2)
\star \phi ({\mathcal D}_1). \end{equation} The operators do not commute in general when
the arm lengths are time dependent. The operators then form a
non-commutative polynomial ring. When the delays are constants, the
operators ${\mathcal D}_1$ and ${\mathcal D}_2$ commute and the operators form a
commutative polynomial ring. So far we have shown that the
homomorphism holds in the continuum limit in addition to the case of
delays being integer multiples of the sampling interval (constant
time-delays) - the opposite end, so to speak.
\subsection{Fractional time-delays and time-dependent arm-lengths}
In practice one has nonzero sampling intervals $\Delta t > 0$. But for
LISA, because of practical limitations, this sampling would be too
coarse to be used in the TDI algorithms to cancel the laser frequency
noise. For this purpose one would require data at points between the
sample points. One then applies appropriate fractional delay filters
to the Doppler measurements to achieve this goal digitally.
Fractional delays may be implemented using an interpolation
scheme. Here we employ Lagrange interpolation as in
\cite{Vallisneri2020}. We consider three cases:
\begin{enumerate}
\item Single interval for all delays,
\item Different intervals for each delay,
\item Time-dependent delays.
\end{enumerate}
\subsubsection{Single interval for all delays (time-independent)}
Without loss of generality, we consider $m$ sample points $t = 0, 1, ..., m - 1$ with $\Delta t = 1$. We denote this interval by $I_0 = \{0, 1, 2, ..., m - 1 \}$ which accommodates all delays. The interpolation operation can be cast in a matrix form with a matrix acting on the
data. More specifically, one can envisage a $m \times m$ matrix of
Lagrange polynomials ${\mathcal D}(\alpha)$, where $\alpha$ is the delay, acting
on the data $y$. We write the delays as $\alpha, \beta, ...$ so as not to
confuse them with the Lagrange polynomials, which are also denoted by
$l_i$. We consider two delays $\alpha$ and $\beta$ with the corresponding $m \times m$ matrices ${\mathcal D} (\alpha)$ and ${\mathcal D} (\beta)$. To establish the homomorphism, we show that ${\mathcal D} (\alpha + \beta) = {\mathcal D} (\alpha) {\mathcal D} (\beta)$. This result easily follows from the properties of Lagrange polynomials, namely, the addition theorem for
Lagrange polynomials. We give the proof of the addition theorem in
appendix \ref{app:add_thm}.
For concreteness, consider just $m = 3$ points at $t = 0, 1, 2 $. Then the matrix ${\mathcal D} (\alpha)$ is:
\begin{equation}
{\mathcal D} (\alpha) = \left \| \begin{array}{ccc}
l_0 (\alpha) & l_1 (\alpha) & l_2 (\alpha) \\
l_0 (\alpha + 1) & l_1 (\alpha + 1) & l_2 (\alpha + 1) \\
l_0 (\alpha + 2) & l_1 (\alpha + 2) & l_2 (\alpha + 2)
\end{array} \right \| \equiv D_{jk} (\alpha) \,,
\end{equation}
where $D_{jk} (\alpha) = l_k (\alpha + j)$. Taking two such matrices corresponding to $\alpha$ and $\beta$ and multiplying them together, we have,
\begin{equation}
\sum_{k} D_{jk} (\alpha) D_{kn} (\beta) = \sum_{k} l_k (\alpha + j) l_n (\beta + k) \equiv l_n (\alpha + \beta + j) = D_{jn} (\alpha + \beta) \,,
\label{hom}
\end{equation} where we have used the addition theorem in the appendix
\ref{app:add_thm}. Although we have just used 3 time stamps, the
results hold generally for $m$ points. Also, one might think that,
since products of Lagrange polynomials appear as entries in the
product of the matrices, this might lead to polynomials of degree
$2m - 2$. But this does not happen: as the addition theorem shows, the terms of degree greater
than $m - 1$ cancel out, leaving behind a polynomial of degree $m - 1$.
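The homomorphism can be sketched numerically for the $m = 3$ case above: build $D(\alpha)$ from the Lagrange basis polynomials for the nodes $t = 0, 1, 2$ and check $D(\alpha)\,D(\beta) = D(\alpha + \beta)$ for arbitrary fractional delays (the values of $\alpha$, $\beta$ below are illustrative).

```python
import numpy as np

def lagrange_basis(k, t, nodes=(0.0, 1.0, 2.0)):
    # Lagrange basis polynomial l_k(t) for the given interpolation nodes.
    out = 1.0
    for j, tj in enumerate(nodes):
        if j != k:
            out *= (t - tj) / (nodes[k] - tj)
    return out

def Dmat(alpha, m=3):
    # D_{jk}(alpha) = l_k(alpha + j), as in the text.
    return np.array([[lagrange_basis(k, alpha + j) for k in range(m)]
                     for j in range(m)])

alpha, beta = 0.3, 0.45  # arbitrary fractional delays
assert np.allclose(Dmat(alpha) @ Dmat(beta), Dmat(alpha + beta))
print("addition theorem / homomorphism verified")
```

The check succeeds because interpolating the degree-$(m-1)$ polynomial $l_n(\beta + t)$ at $m$ nodes is exact, which is precisely the content of the addition theorem.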
\subsubsection{Different intervals for each delay (time-independent)}
In practice, choosing the same set of sample points may not be
feasible for delays much greater than the sampling interval, and so
different sets of sample points must be chosen for different delays.
The matrices may then {\it appear} different, because the Lagrange
polynomials are translated, and care must be taken to translate
the matrices to a common reference in order to compare them. The
closure property of the polynomials can then be explicitly seen to
hold. We may see this as follows:
Let $I_r = \{r, r + 1, ..., r + m -1 \}$ be the interpolation
interval containing $m$ points around $\alpha$ and a corresponding
interval $I_s = \{s, s + 1, ..., s + m - 1 \}$ around $\beta$. Let
$l_j (t), ~j = 0, 1, ..., m - 1$ be the Lagrange polynomials for the
reference interval $I_0 = \{0, 1, 2, ..., m - 1 \}$. We will call
these the {\it basic} Lagrange polynomials referred to $t = 0$. Then
the Lagrange polynomials for the interval $I_r$ are just the
translated versions of $l_j (t)$, namely, $l_j (t - r)$ and similarly
$l_j (t - s)$ for $I_s$. In this case the translated matrix
representation is $D^{(r)}_{jk} (\alpha) = l_k (\alpha - r + j)$ for delay
$\alpha$ and $D^{(s)}_{jk} (\beta) = l_k (\beta - s + j)$ for $\beta$. Now the
total delay $\alpha + \beta$ in general will lie between $r + s$ and
$r + s + 2m - 2$. We may choose $r$ and $s$ so that
$\alpha + \beta \leq r + s + m - 1$ so that the relevant interval is $I_{r + s} = \{r + s, r + s + 1, ..., r + s + m - 1 \}$. The homomorphism is given by: \begin{equation}
\sum_{k} D^{(r)}_{jk} (\alpha) D^{(s)}_{kn} (\beta) = \sum_{k} l_k (\alpha - r +
j) l_n (\beta - s + k) \equiv l_n (\alpha + \beta - (r + s) + j) = D^{(r +
s)}_{jn} (\alpha + \beta) \,.
\label{hom2}
\end{equation}
This establishes the homomorphism for this case. We have again appealed to the addition theorem of Lagrange
polynomials proved in appendix \ref{app:add_thm}. Since we are
concerned here with matters of principle, we could simply have chosen
$m$ sufficiently large to cover all delays. But we preferred to
explicitly establish the homomorphism for the case in which each delay
has a different interpolating interval.
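The translated case can be checked numerically in the same way. With the basic Lagrange polynomials $l_k$ referred to $I_0$ and $D^{(r)}_{jk}(\alpha) = l_k(\alpha - r + j)$ as defined above, a sketch for $m = 3$ and illustrative values of $r$, $s$ is:

```python
import numpy as np

def l(k, t, nodes=(0.0, 1.0, 2.0)):
    # Basic Lagrange polynomial l_k referred to the interval I_0.
    out = 1.0
    for j, tj in enumerate(nodes):
        if j != k:
            out *= (t - tj) / (nodes[k] - tj)
    return out

def Dshift(delay, r, m=3):
    # Translated representation D^{(r)}_{jk}(delay) = l_k(delay - r + j).
    return np.array([[l(k, delay - r + j) for k in range(m)]
                     for j in range(m)])

r, s = 2, 3               # illustrative translations
alpha, beta = 2.4, 3.3    # delays lying near I_r and I_s respectively
assert np.allclose(Dshift(alpha, r) @ Dshift(beta, s),
                   Dshift(alpha + beta, r + s))
```

The product lands in the translated representation for $I_{r+s}$, exactly as in Eq. (\ref{hom2}).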
\subsubsection{Time-dependent delays}
We further add that Eq. (\ref{hom}) is valid for
time dependent delays also. Now both $\alpha$ and $\beta$ become functions of
time. If one applies the delay $\beta$ first and then $\alpha$, the combined
delay is $\alpha + \beta (\alpha) \equiv \alpha \oplus \beta$ and in the reverse case it
is $\beta + \alpha (\beta) = \beta \oplus \alpha$ which are in general unequal. Then we
have the situation: \begin{equation} {\mathcal D} (\alpha \oplus \beta) = {\mathcal D} [\alpha + \beta (\alpha)] = {\mathcal D}
(\alpha) {\mathcal D} [\beta (\alpha)] \neq {\mathcal D} (\beta) {\mathcal D} [\alpha (\beta)] = {\mathcal D} [\beta + \alpha (\beta)] = {\mathcal D}
(\beta \oplus \alpha) \,.
\label{hom_timedep}
\end{equation} Eq. (\ref{hom_timedep}) shows that the homomorphism also holds for
time-dependent fractional delays for which the operators do not
commute in general.
\par
In summary, we emphasize here that the matrix formulation is a ring
representation of the original TDI formulation. In principle it is no
different. However, in practice, there may be advantages to this
approach, because representations using matrices lend themselves to
easy analytical and numerical manipulations.
\section{Conclusions}
\label{SecIV}
The main result of our article has been the demonstration that the
delay operators characterizing TDI may be represented as
matrices. Through this approach we recovered the well known result
characterizing TDI: the cancellation of the laser noise in an
unequal-arm interferometer is a ``local operation'' as it is achieved
at any time $t$ by linearly combining only a few neighboring
measurement-samples. Our conclusion is the consequence of correctly
accounting for the time-mismatch between the arrays of the Doppler
measurements and that of the laser noise.
In mathematical terms, we have shown that the cancellation of the
laser noises using matrices is just the ring representation of the
original TDI formulation and is not different from it. We have
mathematically proved that the homomorphism between the delay operators and
their matrix representation holds in general. We have covered
all cases of interest: (i) time-delays that are constant integer multiples of the sampling interval, (ii) the continuum limit $\Delta t \longrightarrow 0$
including time-dependent arm-lengths and (iii) fractional time-delays when arm-lengths are time-independent (same interval and different intervals of interpolation) or time-dependent. For the fractional delay filters, Lagrange
interpolation has been used to establish the homomorphism.
It should be said, however, that the matrix approach we have
introduced might offer some advantages to the data processing and
analysis tasks of currently planned gravitational wave missions
\cite{LISA2017,Taiji,TianQin} as it is more flexible, allows for
easier numerical implementation and manipulation and also adapts to
time-dependent arm-lengths in a natural way. On another
front, it might in fact be possible to extend to our matrix
formulation of TDI the data processing algorithm discussed in
\cite{Vallisneri2020} to handle data gaps. We have just started to
analyze this problem and might report about its solution in a
forthcoming article.
We remark that regardless of the approach we follow, both the original
as well as the matrix approaches look for null spaces whose vectors
describe the TDI observables. In the original TDI approach, the first
module of syzygies is in fact a null space - the kernel of a
homomorphism; the kernel is important because it contains elements,
namely, those TDI observables that map the laser noise to zero.
\section*{Acknowledgments}
M.T. thanks the Center for Astrophysics and Space Sciences (CASS) at
the University of California San Diego (UCSD, U.S.A.) and the National
Institute for Space Research (INPE, Brazil) for their kind hospitality
while this work was done. S.V.D. acknowledges the support of the
Senior Scientist Platinum Jubilee Fellowship from NASI, India.
\section{Introduction}
\label{sec:introduction}
Many problems of relevance in bioinformatics, signal processing, and
statistical learning can be formulated as minimizing a \emph{composite
function}:
\begin{align}
\minimize_{x \in \mathbf{R}^n} \,f(x) := g (x) + h(x),
\label{eq:composite-form}
\end{align}
where $g$ is a convex, continuously differentiable loss function, and
$h$ is a convex but not necessarily differentiable penalty function or
regularizer. Such problems include the \emph{lasso}
\cite{tibshirani1996regression}, the \emph{graphical lasso}
\cite{friedman2008sparse}, and trace-norm matrix completion
\cite{candes2009exact}.
We describe a family of Newton-type methods for minimizing composite
functions that achieve superlinear rates of convergence subject to
standard assumptions. The methods can be interpreted as
generalizations of the classic proximal gradient method that account
for the curvature of the function when selecting a search
direction. Many popular methods for minimizing composite functions are
special cases of these \emph{proximal Newton-type methods}, and our
analysis yields new convergence results for some of these methods.
\subsection{Notation}
The methods we consider are \emph{line search methods}, which means that they produce a sequence of points
$\{x_k\}$ according to
\[
x_{k+1} = x_k + t_k\Delta x_k,
\]
where $t_k$ is a \emph{step length} and $\Delta x_k$ is a
\emph{search direction}. When we focus on one iteration of an
algorithm, we drop the subscripts ({\it e.g.}\ $x_+ = x + t\Delta x$). All the methods we consider compute search directions by minimizing local quadratic models of the composite function $f$. We use an accent $\hat{\cdot}$ to denote these local quadratic models ({\it e.g.}\ $\hat{f}_k$ is a local quadratic model of $f$ at the $k$-th step).
\subsection{First-order methods}
The most popular methods for minimizing composite functions are
\emph{first-order methods} that use \emph{proximal mappings} to handle
the nonsmooth part $h$. SpaRSA \cite{wright2009sparse} is a popular
\emph{spectral projected gradient} method that uses a \emph{spectral
step length} together with a \emph{nonmonotone line search} to
improve convergence. TRIP \cite{kim2010scalable} also uses a spectral
step length but selects search directions using a trust-region
strategy.
We can accelerate the convergence of first-order methods using ideas
due to Nesterov \cite{nesterov2003introductory}. This yields
\emph{accelerated first-order methods}, which achieve
$\epsilon$-suboptimality within $O(1/\sqrt{\epsilon})$ iterations
\cite{tseng2008accelerated}. The most popular method in this family is
the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA)
\cite{beck2009fast}. These methods have been implemented in the
package TFOCS \cite{becker2011templates} and used to solve problems
that commonly arise in statistics, signal processing, and statistical
learning.
\subsection{Newton-type methods}
There are two classes of methods that generalize Newton-type methods
for minimizing smooth functions to handle composite functions \eqref{eq:composite-form}.
\emph{Nonsmooth Newton-type methods} \cite{yu2010quasi} successively minimize a
local quadratic model of the composite function $f$:
\[
\hat{f}_k(y) = f(x_k) + \sup_{z\in\partial f(x_k)} z^T(y-x_k) + \frac{1}{2}(y-x_k)^TH_k(y-x_k),
\]
where $H_k$ accounts for the curvature of $f$. (Although
computing this $\Delta x_k$ is generally not practical, we can exploit
the special structure of $f$ in many statistical learning problems.)
Our proximal Newton-type methods approximate only the smooth part $g$ with a local quadratic model:
$$
\hat{f}_k(y) = g(x_k) + \nabla g(x_k)^T(y-x_k) + \frac{1}{2}(y-x_k)^TH_k(y-x_k) + h(y),
$$
where $H_k$ is an approximation to $\nabla^2 g(x_k)$. This idea
can be traced back to the \emph{generalized proximal point method} of
Fukushima and Min\'{e} \cite{fukushima1981generalized}. Many
popular methods for minimizing composite functions are special cases
of proximal Newton-type methods. Methods tailored to a specific
problem include \texttt{glmnet} \cite{friedman2007pathwise}, LIBLINEAR
\cite{yuan2012improved}, QUIC \cite{hsieh2011sparse}, and the
Newton-LASSO method \cite{olsen2012newton}. Generic methods include
\emph{projected Newton-type methods} \cite{schmidt2009optimizing,
schmidt2011projected}, proximal quasi-Newton methods
\cite{schmidt2010graphical, becker2012quasi}, and the method of Tseng
and Yun \cite{tseng2009coordinate, lu2011augmented}.
There is a rich literature on solving generalized equations, monotone
inclusions, and variational inequalities. Minimizing composite
functions is a special case of solving these problems, and proximal Newton-type methods are special cases of Newton-type methods for these problems \cite{patriksson1998cost}.
We refer to \cite{patriksson1999nonlinear} for a unified treatment of
\emph{descent methods} (including proximal Newton-type methods) for
such problems.
\section{Proximal Newton-type methods}
\label{sec:proximal-newton-type-methods}
We seek to minimize composite functions $f(x) := g(x) + h(x)$ as in
\eqref{eq:composite-form}. We assume $g$ and $h$ are closed, convex
functions. $g$ is continuously differentiable, and its gradient
$\nabla g$ is Lipschitz continuous. $h$ is not necessarily everywhere
differentiable, but its \emph{proximal mapping}
\eqref{eq:prox-mapping} can be evaluated
efficiently. We refer to $g$ as ``the smooth part'' and $h$ as ``the
nonsmooth part''. We assume the optimal value $f^\star$ is
attained at some optimal solution $x^\star$, not necessarily unique.
\subsection{The proximal gradient method}
The proximal mapping of a convex function $h$ at $x$ is
\begin{equation}
\prox_{h}(x) := \argmin_{y\in\mathbf{R}^n}\,h(y) + \frac{1}{2}\norm{y-x}^2.
\label{eq:prox-mapping}
\end{equation}
Proximal mappings can be interpreted as generalized projections
because if $h$ is the indicator function of a convex set, then
$\prox_h(x)$ is the projection of $x$ onto the set. If $h$ is the
$\ell_1$ norm and $t$ is a step-length, then $\prox_{th}(x)$ is the
\emph{soft-threshold operation}:
\[
\prox_{t\ell_1}(x) = \sign(x)\cdot\max\{\abs{x}-t,0\},
\]
where $\sign$ and $\max$ are entry-wise, and $\cdot$ denotes the entry-wise product.
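In code, the soft-threshold operation is a one-liner; the sketch below (our illustration, using NumPy) applies it entry-wise.

```python
import numpy as np

def soft_threshold(x, t):
    """Entry-wise soft-thresholding: the proximal mapping of t*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

For example, with threshold $t=1$ it shrinks $3$ to $2$, sets $-0.5$ to $0$, and shrinks $1.2$ to $0.2$.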
The \emph{proximal gradient method} uses the proximal mapping of the
nonsmooth part to minimize composite functions:
\begin{gather*}
x_{k+1} = x_k - t_kG_{t_kf}(x_k)
\\ G_{t_kf}(x_k) := \frac{1}{t_k}\left(x_k-\prox_{t_kh}(x_k-t_k\nabla g(x_k)) \right),
\end{gather*}
where $t_k$ denotes the $k$-th step length and $G_{t_kf}(x_k)$ is a
\emph{composite gradient step}. Most first-order methods, including
SpaRSA and accelerated first-order methods, are variants of this
simple method. We note three properties of the composite gradient
step:
\begin{enumerate}
\item $G_{t_kf}(x_k)$ steps to the minimizer of $h$ plus a simple
quadratic model of $g$ near $x_k$:
\begin{align}
x_{k+1} &= \prox_{t_kh}\left(x_k-t_k\nabla g(x_k)\right) \nonumber
\\ &= \argmin_y\,t_kh(y) + \frac{1}{2}\norm{y-x_k+t_k\nabla g(x_k)}^2 \nonumber
\\ &= \argmin_y\,\nabla g(x_k)^T(y-x_k) + \frac{1}{2t_k}\norm{y-x_k}^2 + h(y).
\label{eq:simple-quadratic}
\end{align}
\item $G_{t_kf}(x_k)$ is neither a gradient nor a subgradient of $f$
at any point; rather it is the sum of an explicit gradient and an
implicit subgradient:
\[
G_{t_kf}(x_k) \in \nabla g(x_k) + \partial h(x_{k+1}).
\]
\item $G_{t_kf}(x)$ is zero if and only if $x$ minimizes $f$.
\end{enumerate}
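The update above can be sketched directly in code. The following is a minimal illustration (our own, not from the text) of the proximal gradient method applied to the lasso, with $g(x) = \frac{1}{2}\|Ax-b\|^2$, $h = \lambda\|\cdot\|_1$, and a fixed step length $t = 1/L_1$.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal mapping of t*||.||_1, applied entry-wise.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(A, b, lam, iters=2000):
    """Minimize 0.5*||Ax-b||^2 + lam*||x||_1 via
    x+ = prox_{t*h}(x - t*grad g(x)) with fixed step t = 1/L."""
    L = np.linalg.norm(A, 2) ** 2     # Lipschitz constant of grad g
    t = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - t * grad, t * lam)
    return x
```

At convergence the composite gradient step is (numerically) zero, which is exactly the optimality measure discussed below.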
The third property generalizes the zero gradient optimality condition
for smooth functions to composite functions. We shall use the length
of $G_f(x) := G_{1\cdot f}(x)$ (the composite gradient step with unit
step length) to measure the optimality of a point $x$.
\begin{lemma}
\label{lem:G-lipschitz}
If $\nabla g$ is Lipschitz continuous with constant $L_1$, then $\norm{G_f(x)}$ satisfies:
\[
\norm{G_f(x)} \le (L_1 + 1)\norm{x - x^\star}.
\]
\end{lemma}
\begin{proof}
The composite gradient steps at $x$ and at the optimal solution $x^\star$ satisfy
\begin{align*}
G_f(x) &\in \nabla g(x) + \partial h(x - G_f(x)) \\
0 = G_f(x^\star) &\in \nabla g(x^\star) + \partial h(x^\star).
\end{align*}
We subtract these two expressions and rearrange to obtain
\[
\partial h(x - G_f(x)) - \partial h(x^\star) \ni G_f(x) - (\nabla g(x) - \nabla g(x^\star)).
\]
$\partial h$ is monotone, hence
\begin{align*}
0 &\le (x - G_f(x) - x^\star)^T\left(G_f(x) - (\nabla g(x) - \nabla g(x^\star))\right) \\
& = -G_f(x)^TG_f(x) + (x - x^\star)^TG_f(x) + G_f(x)^T(\nabla g(x) - \nabla g(x^\star)) \\
&\hspace{1pc}-(x - x^\star)^T(\nabla g(x) - \nabla g(x^\star)).
\end{align*}
We drop the last term because it is nonpositive ($\nabla g$ is
monotone) and apply the Cauchy-Schwarz inequality to obtain
\begin{align*}
0 &\le -\norm{G_f(x)}^2 + (x - x^\star)^TG_f(x) + G_f(x)^T(\nabla g(x) - \nabla g(x^\star)) \\
&\le -\norm{G_f(x)}^2 + \norm{G_f(x)}\left(\norm{x - x^\star} + \norm{\nabla g(x) - \nabla g(x^\star)}\right).
\end{align*}
We divide by $\norm{G_f(x)}$ and rearrange to obtain
\[
\norm{G_f(x)} \le \norm{x - x^\star} + \norm{\nabla g(x) - \nabla g(x^\star)}.
\]
$\nabla g$ is Lipschitz continuous, hence
\[
\norm{G_f(x)} \le (L_1 + 1)\norm{x - x^\star}.
\]
\end{proof}
\subsection{Proximal Newton-type methods}
Proximal Newton-type methods use a local quadratic model (in lieu of
the simple quadratic model in the proximal gradient method
\eqref{eq:simple-quadratic}) to account for the curvature of $g$. A
local quadratic model of $g$ at $x_k$ is
\[
\hat{g}_k(y) = \nabla g(x_k)^T(y-x_k) + \frac{1}{2}(y-x_k)^TH_k(y-x_k),
\]
where $H_k$ denotes an approximation to $\nabla^2 g(x_k)$. A proximal
Newton-type search direction $\Delta x_k$ solves the subproblem
\begin{equation}
\Delta x_k = \argmin_d\,\hat{f}_k(x_k + d) := \hat{g}_k(x_k + d) + h(x_k+d).
\label{eq:proxnewton-search-dir-1}
\end{equation}
There are many strategies for choosing $H_k$. If we choose $H_k$ to be
$\nabla^2 g(x_k)$, then we obtain the \emph{proximal Newton
method}. If we build an approximation to $\nabla^2 g(x_k)$ using
changes measured in $\nabla g$ according to a quasi-Newton strategy,
we obtain a \emph{proximal quasi-Newton method}. If the problem is
large, we can use limited memory quasi-Newton updates to reduce memory
usage. Generally speaking, most strategies for choosing Hessian
approximations for Newton-type methods (for minimizing smooth
functions) can be adapted to choosing $H_k$ in the context of proximal
Newton-type methods.
We can also express a proximal Newton-type search direction using
\emph{scaled proximal mappings}. This lets us interpret a proximal
Newton-type search direction as a ``composite Newton step'' and
reveals a connection with the composite gradient step.
\begin{definition}
\label{def:scaled-prox}
Let $h$ be a convex function and $H$, a positive definite matrix. Then
the scaled proximal mapping of $h$ at $x$ is
\begin{align}
\prox_h^H(x) := \argmin_{y\in\mathbf{R}^n}\,h(y)+\frac{1}{2}\norm{y-x}_H^2.
\label{eq:scaled-prox}
\end{align}
\end{definition}
Scaled proximal mappings share many properties with (unscaled)
proximal mappings:
\begin{enumerate}
\item $\prox_h^H(x)$ exists and is unique for $x\in\dom h$ because the
proximity function is strongly convex if $H$ is positive definite.
\item Let $\partial h(x)$ be the subdifferential of $h$ at $x$. Then
$\prox_h^H(x)$ satisfies
\begin{align}
H\left(x-\prox_h^H(x)\right)\in\partial h\left(\prox_h^H(x)\right).
\label{eq:scaled-prox-1}
\end{align}
\item $\prox_h^H(x)$ is \emph{firmly nonexpansive} in the
$H$-norm. That is, if $u = \prox_h^H(x)$ and $v = \prox_h^H(y)$,
then
\[
(u-v)^TH(x-y)\ge \norm{u-v}_H^2,
\]
and the Cauchy-Schwarz inequality implies
$\norm{u-v}_H \le \norm{x-y}_H$.
\end{enumerate}
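For a general positive definite $H$ the scaled proximal mapping has no closed form, but for a diagonal $H$ and $h = \lambda\|\cdot\|_1$ the minimization in \eqref{eq:scaled-prox} separates across coordinates. A sketch (our own illustration, not from the text):

```python
import numpy as np

def scaled_prox_l1_diag(x, lam, d):
    """Scaled proximal mapping of h = lam*||.||_1 with H = diag(d), d > 0.
    argmin_y lam*||y||_1 + 0.5*(y-x)^T H (y-x) separates coordinate-wise,
    giving soft-thresholding with per-coordinate thresholds lam/d_i."""
    return np.sign(x) * np.maximum(np.abs(x) - lam / d, 0.0)
```

Coordinates with large curvature $d_i$ are shrunk less, which is the sense in which the scaled mapping accounts for curvature.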
We can express a proximal Newton-type search direction as a
``composite Newton step'' using scaled proximal mappings:
\begin{align}
\Delta x = \prox_h^{H}\left(x-H^{-1}\nabla g(x)\right) - x.
\label{eq:proxnewton-search-direction-scaled-prox}
\end{align}
We use \eqref{eq:scaled-prox-1} to deduce that a proximal Newton
search direction satisfies
\[
H\left(-H^{-1}\nabla g(x) - \Delta x\right) \in \partial h(x + \Delta x).
\]
We simplify to obtain
\begin{align}
H\Delta x \in -\nabla g(x) - \partial h(x + \Delta x).
\label{eq:proxnewton-search-direction-2}
\end{align}
Thus a proximal Newton-type search direction, like the composite
gradient step, combines an explicit gradient with an implicit
subgradient. Note this expression yields the Newton system
in the case of smooth functions ({\it i.e.}, $h$ is zero).
\begin{proposition}[Search direction properties]
\label{prop:search-direction-properties}
If $H$ is positive definite, then $\Delta x$ in \eqref{eq:proxnewton-search-dir-1}
satisfies
\begin{gather}
f(x_+) \le f(x)+t\left(\nabla g(x)^T\Delta x+h(x+\Delta x)-h(x)\right)+ O(t^2),
\label{eq:search-direction-properties-1}
\\ \nabla g(x)^T\Delta x+h(x+\Delta x)-h(x) \le -\Delta x^TH\Delta x.
\label{eq:search-direction-properties-2}
\end{gather}
\end{proposition}
\begin{proof}
For $t\in(0,1]$,
\begin{align*}
f(x_+) -f(x) &= g(x_+)-g(x)+h(x_+)-h(x)
\\ &\le g(x_+)-g(x)+th(x+\Delta x)+(1-t)h(x)-h(x)
\\ &= g(x_+)-g(x)+t(h(x+\Delta x)-h(x))
\\ &= \nabla g(x)^T(t\Delta x)+t(h(x+\Delta x)-h(x))+O(t^2),
\end{align*}
which proves \eqref{eq:search-direction-properties-1}.
Since $\Delta x$ steps to the minimizer of $\hat{f}$
\eqref{eq:proxnewton-search-dir-1}, $t\Delta x$ satisfies
\begin{align*}
&\nabla g(x)^T\Delta x+\frac{1}{2}\Delta x^TH\Delta x+h(x+\Delta x)
\\ &\hspace{1pc}\le \nabla g(x)^T(t\Delta x)+\frac12 t^2 \Delta x^TH\Delta x+h(x_+)
\\ &\hspace{1pc}\le t\nabla g(x)^T\Delta x+\frac12 t^2 \Delta x^TH\Delta x+th(x+\Delta x)+(1-t)h(x).
\end{align*}
We rearrange and then simplify:
\begin{gather*}
(1-t)\nabla g(x)^T\Delta x + \frac{1}{2}(1-t^2)\Delta x^TH\Delta x
+ (1-t)(h(x+\Delta x)-h(x)) \le 0
\\ \nabla g(x)^T\Delta x+\frac{1}{2}(1+t)\Delta x^TH\Delta x+h(x+\Delta x)-h(x) \le 0
\\ \nabla g(x)^T\Delta x+h(x+\Delta x)-h(x) \le -\frac{1}{2}(1+t)\Delta x^TH\Delta x.
\end{gather*}
Finally, we let $t\to 1$ and rearrange to obtain
\eqref{eq:search-direction-properties-2}.
\end{proof}
Proposition \ref{prop:search-direction-properties} implies the search
direction is a descent direction for $f$ because we can substitute
\eqref{eq:search-direction-properties-2} into
\eqref{eq:search-direction-properties-1} to obtain
\begin{align}
f(x_+) \le f(x)-t\Delta x^TH\Delta x+O(t^2).
\label{eq:descent}
\end{align}
\begin{proposition}
\label{prop:first-order-conditions}
Suppose $H$ is positive definite. Then $x^\star$ is an optimal
solution if and only if at $x^\star$ the search direction $\Delta x$
\eqref{eq:proxnewton-search-dir-1} is zero.
\end{proposition}
\begin{proof}
If $\Delta x$ at $x^\star$ is nonzero, it is a descent direction for
$f$ at $x^\star$ \eqref{eq:descent}; hence $x^\star$ cannot be a minimizer of $f$.
Conversely, suppose the search direction at a point $x$ is zero. Then $x$ minimizes $\hat{f}$, so
\[
\nabla g(x)^T(td)+\frac12 t^2 d^THd+h(x+td) -h(x)\ge 0
\]
for all $t>0$ and $d$. We rearrange to obtain
\begin{align}
h(x+td)-h(x) \ge -t\nabla g(x)^Td-\frac12 t^2 d^THd. \label{eq:first-order-cond-1}
\end{align}
Let $Df(x,d)$ be the directional derivative of $f$ at $x$ in the direction $d$:
\begin{align}
Df(x,d) &= \lim_{t\to 0} \frac{f(x+td)-f(x)}{t}
\nonumber \\
&= \lim_{t\to 0} \frac{g(x+td)-g(x)+h(x+td)-h(x)}{t}
\nonumber \\
&= \lim_{t\to 0} \frac{t\nabla g(x)^Td+O(t^2)+h(x+td)-h(x)}{t}.
\label{eq:first-order-cond-2}
\end{align}
We substitute \eqref{eq:first-order-cond-1} into
\eqref{eq:first-order-cond-2} to obtain
\begin{align*}
Df(x,d) &\ge \lim_{t\to 0}
\frac{t\nabla g(x)^Td+O(t^2) - \frac12 t^2 d^THd-t\nabla g(x)^Td}{t}
\\ &= \lim_{t\to 0} \frac{-\frac12 t^2 d^THd+O(t^2)}{t} = 0.
\end{align*}
Since $f$ is convex and $Df(x,d)\ge 0$ for all $d$, $x$ is an optimal solution.
\end{proof}
In a few special cases we can derive a closed form expression for the
proximal Newton search direction, but we must usually resort to an
iterative method. The user should choose an iterative method that
exploits the properties of $h$. {\it E.g.}, if $h$ is the $\ell_1$ norm, then
(block) coordinate descent methods combined with an active set
strategy are known to be very efficient for these problems
\cite{friedman2007pathwise}.
We use a line search procedure to select a step length $t$ that
satisfies a sufficient descent condition:
\begin{gather}
f(x_+) \le f(x)+\alpha t\Delta,
\label{eq:sufficient-descent}
\\ \Delta := \nabla g(x)^T\Delta x+h(x+\Delta x)-h(x), \nonumber
\end{gather}
where $\alpha\in(0,0.5)$ can be interpreted as the fraction of the
decrease in $f$ predicted by linear extrapolation that we will accept. A simple example of a line search procedure is called \emph{backtracking line search} \cite{boyd2004convex}.
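Backtracking for the condition \eqref{eq:sufficient-descent} can be sketched as follows (a minimal illustration; the callables `f_smooth`, `grad`, and `h` for $g$, $\nabla g$, and $h$, and the constants `alpha`, `beta`, are our own assumptions).

```python
import numpy as np

def backtracking(f_smooth, grad, h, x, dx, alpha=0.25, beta=0.5):
    """Backtracking line search for the sufficient descent condition
    f(x + t*dx) <= f(x) + alpha*t*Delta, where
    Delta = grad(x)^T dx + h(x+dx) - h(x)."""
    delta = grad(x) @ dx + h(x + dx) - h(x)
    f0 = f_smooth(x) + h(x)
    t = 1.0
    # Halve t until the sufficient descent condition holds.
    while f_smooth(x + t * dx) + h(x + t * dx) > f0 + alpha * t * delta:
        t *= beta
    return t
```

Lemma \ref{lem:acceptable-step-lengths} below guarantees this loop terminates: every $t$ below a problem-dependent threshold satisfies the condition.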
\begin{lemma}
\label{lem:acceptable-step-lengths}
Suppose $H\succeq mI$ for some $m > 0$ and $\nabla g$ is Lipschitz
continuous with constant $L_1$. Then there exists $\kappa$ such that
\begin{align}
t \le \min\left\{1,\frac{2}{\kappa}(1-\alpha)\right\}
\label{eq:step-length-conditions}
\end{align}
satisfies the sufficient descent condition \eqref{eq:sufficient-descent}.
\end{lemma}
\begin{proof}
We can bound the decrease at each iteration by
\begin{align*}
& f(x_+) -f(x) = g(x_+) -g(x) +h(x_+) -h(x)
\\ &\hspace{1pc}\le \int_0^1 \nabla g(x+s(t\Delta x))^T(t\Delta x) ds
+ th(x+\Delta x) +(1-t)h(x) -h(x)
\\ &\hspace{1pc}=\nabla g(x)^T(t\Delta x)+ t(h(x+\Delta x) -h(x))
\\ &\hspace{1pc}\pc+\int_0^1 (\nabla g(x+s(t\Delta x))-\nabla g(x))^T(t\Delta x) ds
\\ &\hspace{1pc}\le t\left(\nabla g(x)^T\Delta x+h(x+\Delta x) -h(x) \right.
\\ &\hspace{1pc}\pc+\left.\int_0^1 \norm{\nabla g(x+s(t\Delta x))-\nabla g(x)}\norm{\Delta x} ds\right).
\end{align*}
Since $\nabla g$ is Lipschitz continuous with constant $L_1$,
\begin{align}
f(x_+) -f(x) &\le t\left(\nabla g(x)^T\Delta x+h(x+\Delta x)
- h(x)+\frac{L_1t}{2} \norm{\Delta x}^2\right)
\nonumber
\\ &= t\left(\Delta+\frac{L_1t}{2} \norm{\Delta x}^2\right),
\label{eq:acceptable-step-lengths-1}
\end{align}
where $\Delta$ is defined in \eqref{eq:sufficient-descent}. If we choose
$t\le \frac{2}{\kappa}(1-\alpha)$ with $\kappa = L_1/m$, then
\begin{align}
\frac{L_1t}{2} \norm{\Delta x}^2 &\le m(1-\alpha)\norm{\Delta x}^2 \nonumber
\\ &\le (1-\alpha)\Delta x^TH\Delta x \nonumber
\\ &\le -(1-\alpha)\Delta, \label{eq:acceptable-step-lengths-2}
\end{align}
where we use \eqref{eq:search-direction-properties-2}. We
substitute \eqref{eq:acceptable-step-lengths-2} into
\eqref{eq:acceptable-step-lengths-1} to obtain
\[
f(x_+) -f(x) \le t\left(\Delta-(1-\alpha)\Delta\right) = t(\alpha\Delta).
\]
\end{proof}
\begin{algorithm}
\caption{A generic proximal Newton-type method}
\label{alg:prox-newton}
\begin{algorithmic}[1]
\Require starting point $x_0\in\dom f$
\Repeat
\State Choose $H_k$, a positive definite approximation to the Hessian.
\State Solve the subproblem for a search direction:
\Statex \hspace{1pc}\pc $\Delta x_k \leftarrow \argmin_d \nabla g(x_k)^Td + \frac{1}{2}d^TH_kd +h(x_k+d).$
\State Select $t_k$ with a backtracking line search.
\State Update: $x_{k+1} \leftarrow x_k + t_k\Delta x_k$.
\Until{stopping conditions are satisfied.}
\end{algorithmic}
\end{algorithm}
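Algorithm \ref{alg:prox-newton} can be sketched for the lasso as follows. This is our own illustration, not a practical solver: we take $H_k = A^TA$ (the exact Hessian), solve the subproblem with proximal gradient iterations rather than the coordinate descent used by practical implementations, and accept unit step lengths instead of running the line search. (Since $g$ is quadratic here, the model is exact and the subproblem coincides with the original problem; with a general smooth $g$ the outer loop would matter.)

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal mapping of t*||.||_1, applied entry-wise.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_newton_lasso(A, b, lam, outer=20, inner=200):
    """Proximal Newton sketch for 0.5*||Ax-b||^2 + lam*||x||_1 with
    H_k = A^T A. The subproblem
        argmin_d grad^T d + 0.5 d^T H d + lam*||x+d||_1
    is solved inexactly by proximal gradient steps on the model."""
    n = A.shape[1]
    H = A.T @ A
    L = np.linalg.norm(H, 2)       # Lipschitz constant of the model's gradient
    x = np.zeros(n)
    for _ in range(outer):
        grad = A.T @ (A @ x - b)
        z = x.copy()               # z approximates x + d
        for _ in range(inner):     # proximal gradient on the model
            model_grad = grad + H @ (z - x)
            z = soft_threshold(z - model_grad / L, lam / L)
        x = z                      # unit step accepted for simplicity
    return x
```
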
\subsection{Inexact proximal Newton-type methods}
Inexact proximal Newton-type methods solve subproblem
\eqref{eq:proxnewton-search-dir-1} approximately to obtain inexact
search directions. These methods can be more efficient than their
exact counterparts because they require less computational expense per
iteration. In fact, many practical implementations of proximal
Newton-type methods such as \texttt{glmnet}, LIBLINEAR, and
QUIC use inexact search directions.
In practice, how exactly (or inexactly) we solve the subproblem is
critical to the efficiency and reliability of the method. The
practical implementations of proximal Newton-type methods we mentioned
use a variety of heuristics to decide how accurately to solve the
subproblem. Although these methods perform admirably in practice,
there are few results on how inexact solutions to the subproblem
affect their convergence behavior.
First we propose an adaptive stopping condition for the subproblem. Then
in section \ref{sec:convergence-results} we analyze the convergence
behavior of inexact Newton-type methods. Finally, in section
\ref{sec:experiments} we conduct computational experiments to compare the
performance of our stopping condition against commonly used heuristics.
Our stopping condition is motivated by the adaptive stopping
condition used by \emph{inexact Newton-type methods} for minimizing
smooth functions:
\begin{align}
\norm{\nabla \hat{g}_k(x_k + \Delta x_k)} \le \eta_k\norm{\nabla g(x_k)},
\label{eq:inexact-newton-stopping-condition}
\end{align}
where $\eta_k$ is called a \emph{forcing term} because it forces the
left-hand side to be small. We generalize
\eqref{eq:inexact-newton-stopping-condition} to composite functions by
substituting composite gradients into
\eqref{eq:inexact-newton-stopping-condition} and scaling the norm:
\begin{align}
\norm{\nabla\hat{g}_k(x_k+\Delta x_k) + \partial h(x_k+\Delta x_k)}_{H_k^{-1}}
\le \eta_k\norm{G_f(x_k)}_{H_k^{-1}}.
\label{eq:adaptive-stopping-condition}
\end{align}
Following Eisenstat and Walker \cite{eisenstat1996choosing},
we set $\eta_k$ based on how well
$\hat{g}_{k-1}$ approximates $g$ near $x_k$:
\begin{align}
\eta_k =\min\,\left\{0.1,
\frac{\norm{\nabla\hat{g}_{k-1}(x_k)-\nabla g(x_k)}}
{\norm{\nabla g(x_{k-1})}}\right\}.
\label{eq:forcing-term}
\end{align}
This choice yields desirable convergence results and performs admirably in
practice.
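The forcing-term rule \eqref{eq:forcing-term} is a one-line computation; a sketch (our own, with hypothetical argument names for the three gradients involved):

```python
import numpy as np

def forcing_term(grad_model_prev_at_xk, grad_g_at_xk, grad_g_prev):
    """Adaptive subproblem tolerance, following the Eisenstat-Walker-style
    rule in the text:
    eta_k = min(0.1, ||grad ghat_{k-1}(x_k) - grad g(x_k)|| / ||grad g(x_{k-1})||)."""
    num = np.linalg.norm(grad_model_prev_at_xk - grad_g_at_xk)
    den = np.linalg.norm(grad_g_prev)
    return min(0.1, num / den)
```

When the previous model predicts the current gradient well, the ratio is small and the subproblem is solved tightly; otherwise the cap of $0.1$ applies.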
Intuitively, we should solve the subproblem exactly if (i) $x_k$ is
close to the optimal solution, and (ii) $\hat{f}_k$ is a good model of
$f$ near $x_k$. If (i), then we seek to preserve the fast local
convergence behavior of proximal Newton-type methods; if (ii), then
minimizing $\hat{f}_k$ is a good surrogate for minimizing $f$. In
these cases, \eqref{eq:adaptive-stopping-condition} and
\eqref{eq:forcing-term} ensure the subproblem is solved accurately.
We can derive an expression like
\eqref{eq:proxnewton-search-direction-2} for an inexact search
direction in terms of an explicit gradient, an implicit subgradient,
and a residual term $r_k$. This reveals connections to the inexact
Newton search direction in the case of smooth
problems. Condition \eqref{eq:adaptive-stopping-condition} is equivalent to
\[
0\in\nabla \hat{g}_k(x_k+\Delta x_k) +\partial h(x_k + \Delta x_k) + r_k,
\]
for some $r_k$ such that $\norm{r_k}_{H_k^{-1}} \le
\eta_k\norm{G_f(x_k)}_{H_k^{-1}}$. Hence an inexact search direction
satisfies
\begin{align}
H_k\Delta x_k \in -\nabla g(x_k) - \partial h(x_k + \Delta x_k) + r_k.
\label{eq:inexact-proxnewton-search-direction}
\end{align}
\section{Convergence results}
\label{sec:convergence-results}
Our first result guarantees proximal Newton-type methods converge
globally to some optimal solution $x^\star$. We assume $\{H_k\}$ are sufficiently positive definite; {\it i.e.}, $H_k \succeq
mI$ for some $m > 0$. This assumption is required to guarantee the
methods are executable, {\it i.e.}\ there exist step lengths that satisfy the
sufficient descent condition ({\it cf.}\ Lemma
\ref{lem:acceptable-step-lengths}).
\begin{theorem}
\label{thm:global-convergence}
If $H_k \succeq mI$ for some $m > 0$, then $x_k$ converges to an
optimal solution starting at any $x_0\in\dom f$.
\end{theorem}
\begin{proof}
$f(x_k)$ is decreasing because $\Delta x_k$ is always a descent
direction \eqref{eq:descent} and
there exist step lengths satisfying the sufficient descent condition
\eqref{eq:sufficient-descent} ({\it cf.}\ Lemma
\ref{lem:acceptable-step-lengths}):
\[
f(x_{k+1})-f(x_k) \le \alpha t_k\Delta_k \le 0.
\]
$f(x_k)$ must converge to some limit (we assumed $f$ is closed and
the optimal value is attained); hence $t_k\Delta_k$ must decay to zero. $t_k$ is
bounded away from zero because sufficiently small step lengths
attain sufficient descent; hence $\Delta_k$ must decay to zero. We
use \eqref{eq:search-direction-properties-2} to deduce that $\Delta x_k$
also converges to zero:
\[
\norm{\Delta x_k}^2 \le \frac{1}{m} \Delta x_k^TH_k\Delta x_k
\le -\frac{1}{m}\Delta_k.
\]
$\Delta x_k$ is zero if and only if $x_k$ is an optimal solution ({\it cf.}\
Proposition \ref{prop:first-order-conditions}), hence $x_k$
converges to some $x^\star$.
\end{proof}
\subsection{Convergence of the proximal Newton method}
The proximal Newton method uses the exact Hessian of the smooth part
$g$ in the second-order model of $f$, {\it i.e.}\ $H_k = \nabla^2 g(x_k)$. This method converges
$q$-quadratically:
\[
\norm{x_{k+1}-x^\star} = O\bigl(\norm{x_k-x^\star}^2\bigr),
\]
subject to standard assumptions on the smooth part: we require $g$ to
be locally strongly convex and $\nabla^2 g$ to be locally Lipschitz
continuous, {\it i.e.}\ $g$ is strongly convex and $\nabla^2 g$ is
Lipschitz continuous in a ball around $x^\star$. These are standard assumptions for proving
that Newton's method for minimizing smooth functions converges
$q$-quadratically.
First, we prove an auxiliary result: step lengths of unity satisfy the
sufficient descent condition after sufficiently many iterations.
\begin{lemma}
\label{lem:newton-unit-step}
Suppose (i) $g$ is locally strongly convex with constant $m$ and
(ii) $\nabla^2 g$ is locally Lipschitz continuous with constant
$L_2$. If we choose $H_k = \nabla^2 g(x_k)$, then the unit step
length satisfies the sufficient descent condition
\eqref{eq:sufficient-descent} for $k$ sufficiently large.
\end{lemma}
\begin{proof}
Since $\nabla^2 g $ is locally Lipschitz continuous with constant $L_2$,
\begin{equation*}
g(x+\Delta x) \le g(x) + \nabla g(x) ^{T} \Delta x
+ \frac{1}{2} \Delta x^T\nabla^2 g(x)\Delta x
+ \frac{L_2}{6} \norm{\Delta x}^3.
\end{equation*}
We add $h(x+\Delta x)$ to both sides to obtain
\begin{align*}
f(x+\Delta x) &\le g(x) + \nabla g(x) ^{T} \Delta x
+ \frac{1}{2} \Delta x^T\nabla^2 g(x)\Delta x
\\ &\hspace{1pc}+\frac{L_2}{6}\norm{\Delta x}^3 + h(x+\Delta x).
\end{align*}
We then add and subtract $h(x)$ from the right-hand side to obtain
\begin{align*}
f(x+\Delta x) &\le g(x) + h(x) + \nabla g(x) ^{T} \Delta x+ h(x+\Delta x) - h(x)
\\ &\hspace{1pc}+ \frac{1}{2} \Delta x^T\nabla^2 g(x)\Delta x +\frac{L_2}{6}\norm{\Delta x}^3
\\ &= f(x) +\Delta + \frac{1}{2}\Delta x^T\nabla^2 g(x)\Delta x
+ \frac{L_2}{6}\norm{\Delta x}^3,
\end{align*}
where $\Delta$ is defined in \eqref{eq:sufficient-descent}. We use
\eqref{eq:search-direction-properties-2} with $H = \nabla^2 g(x)$ to obtain
$\frac{1}{2}\Delta x^T\nabla^2 g(x)\Delta x \le -\frac{1}{2}\Delta$ and
$\norm{\Delta x}^2 \le \frac{1}{m}\Delta x^T\nabla^2 g(x)\Delta x \le
-\frac{1}{m}\Delta$, hence
\begin{align*}
f(x+\Delta x)- f(x) &\le \frac{1}{2}\Delta - \frac{L_2}{6m}\norm{\Delta x}\Delta
= \left(\frac{1}{2} -\frac{L_2}{6m}\norm{\Delta x}\right)\Delta.
\end{align*}
We can show $\Delta x_k$ decays to zero via the same argument that
we used to prove Theorem \ref{thm:global-convergence}. Hence, if $k$
is sufficiently large, $\frac{L_2}{6m}\norm{\Delta x_k} \le \frac{1}{2}-\alpha$ and
\(
f(x_k+\Delta x_k) -f(x_k) \le \alpha\Delta_k.
\)
\end{proof}
\begin{theorem}
\label{thm:newton-quadratic-convergence}
Suppose (i) $g$ is locally strongly convex with constant $m$, and
(ii) $\nabla^2 g$ is locally Lipschitz continuous with constant
$L_2$. Then the proximal Newton method converges $q$-quadratically
to $x^\star$.
\end{theorem}
\begin{proof}
The assumptions of Lemma \ref{lem:newton-unit-step} are satisfied;
hence unit step lengths satisfy the sufficient descent condition
after sufficiently many steps:
\[
x_{k+1} = x_k + \Delta x_k = \prox_h^{\nabla^2 g(x_k)}\left(x_k-\nabla^2 g(x_k)^{-1}\nabla g(x_k)\right).
\]
$\prox_h^{\nabla^2 g(x_k)}$ is firmly nonexpansive in the $\nabla^2 g(x_k)$-norm, hence
\begin{align*}
&\norm{x_{k+1} - x^\star}_{\nabla^2 g(x_k)} \\
&\hspace{1pc}= \bigl\|\prox_h^{\nabla^2 g(x_k)}(x_k-\nabla^2 g(x_k)^{-1}\nabla g(x_k)) \\
&\hspace{1pc}\pc- \prox_h^{\nabla^2 g(x_k)}(x^\star-\nabla^2 g(x_k)^{-1}\nabla g(x^\star))\bigr\|_{\nabla^2 g(x_k)} \\
&\hspace{1pc}\le \norm{x_k - x^\star+\nabla^2 g(x_k)^{-1}(\nabla g(x^\star) - \nabla g(x_k))}_{\nabla^2 g(x_k)} \\
&\hspace{1pc}\le \frac{1}{\sqrt{m}}\norm{\nabla^2 g(x_k)(x_k-x^\star)-\nabla g(x_k)+\nabla g(x^\star)}.
\end{align*}
$\nabla^2 g$ is locally Lipschitz continuous with constant $L_2$; hence
\[
\norm{\nabla^2 g(x_k)(x_k-x^\star)-\nabla g(x_k)+\nabla g(x^\star)} \le \frac{L_2}{2}\norm{x_k-x^\star}^2.
\]
We deduce that $x_k$ converges to $x^\star$ quadratically:
\[
\norm{x_{k+1}-x^\star} \le \frac{1}{\sqrt{m}}\norm{x_{k+1}-x^\star}_{\nabla^2 g(x_k)} \le \frac{L_2}{2m} \norm{x_k-x^\star}^2.
\]
\end{proof}
\subsection{Convergence of proximal quasi-Newton methods}
If the sequence $\{H_k\}$ satisfies the Dennis-Mor\'{e} criterion
\cite{dennis1974characterization}, namely
\begin{align}
\frac{\norm{\left(H_{k} -\nabla^2 g(x^\star)\right)(x_{k+1}-x_k)}}{\norm{x_{k+1}-x_k}}\to 0,
\label{eq:dennis-more}
\end{align}
then we can prove that a proximal quasi-Newton method converges
$q$-superlinearly:
\[
\norm{x_{k+1}-x^\star} = o(\norm{x_k-x^\star}).
\]
We also require $g$ to be locally strongly convex and $\nabla^2 g$ to
be locally Lipschitz continuous. These are the same assumptions
required to prove quasi-Newton methods for minimizing smooth functions
converge superlinearly.
First, we prove two auxiliary results: (i) step lengths of unity
satisfy the sufficient descent condition after sufficiently many
iterations, and (ii) the proximal quasi-Newton step is close to the
proximal Newton step.
\begin{lemma}
\label{lem:quasinewton-unit-step}
Suppose $g$ is twice continuously differentiable and $\nabla^2g$ is
locally Lipschitz continuous with constant $L_2$. If $\{H_k\}$ satisfy the Dennis-Mor\'{e}
criterion and their eigenvalues
are bounded, then the unit step length satisfies the sufficient
descent condition \eqref{eq:sufficient-descent} after sufficiently
many iterations.
\end{lemma}
\begin{proof}
The proof is very similar to the proof of Lemma
\ref{lem:newton-unit-step}, and we defer the details to Appendix
\ref{sec:proofs}.
\end{proof}
The proof of the next result mimics the analysis of Tseng and Yun
\cite{tseng2009coordinate}.
\begin{proposition}
\label{prop:tseng}
Suppose $H$ and $\hat{H}$ are positive definite matrices with
bounded eigenvalues: $mI\preceq H \preceq MI$ and $\hat{m}I\preceq
\hat{H} \preceq \hat{M}I$. Let $\Delta x$ and $\Delta \hat{x}$ be
the search directions generated using $H$ and $\hat{H}$
respectively:
\begin{align*}
\Delta x &= \prox_h^{H}\left(x-H^{-1}\nabla g(x)\right) - x, \\
\Delta \hat{x} &= \prox_h^{\hat{H}}\left(x-\hat{H}^{-1}\nabla g(x)\right) - x.
\end{align*}
Then there exists $\bar{\theta}$ such that these two search
directions satisfy
\[
\norm{\Delta x - \Delta\hat{x}}\le \sqrt{\frac{1+\bar{\theta}}{m}}\bigl\|(\hat{H}-H)\Delta \hat{x}\bigr\|^{1/2}\norm{\Delta \hat{x}}^{1/2}.
\]
\end{proposition}
\begin{proof}
By \eqref{eq:proxnewton-search-dir-1} and Fermat's rule, $\Delta x$
and $\Delta\hat{x}$ are also the solutions to
\begin{align*}
\Delta x &= \argmin_d\, \nabla g(x)^Td + \Delta x^THd + h(x+d), \\
\Delta\hat{x} &= \argmin_d\, \nabla g(x)^Td + \Delta\hat{x}^T\hat{H}d + h(x+d).
\end{align*}
Hence $\Delta x$ and $\Delta\hat{x}$ satisfy
\begin{align*}
&\nabla g(x)^T\Delta x + \Delta x^TH\Delta x + h(x+\Delta x) \\
&\hspace{1pc}\le \nabla g(x)^T\Delta\hat{x} + \Delta\hat{x}^TH\Delta\hat{x} + h(x+\Delta\hat{x})
\end{align*}
and
\begin{align*}
&\nabla g(x)^T\Delta\hat{x} + \Delta\hat{x}^T\hat{H}\Delta\hat{x} + h(x+\Delta\hat{x}) \\
&\hspace{1pc}\le \nabla g(x)^T\Delta x + \Delta x^T\hat{H}\Delta x + h(x+\Delta x).
\end{align*}
We sum these two inequalities and rearrange to obtain
\[
\Delta x^TH\Delta x - \Delta x^T(H+\hat{H})\Delta\hat{x} + \Delta\hat{x}^T\hat{H}\Delta\hat{x} \le 0.
\]
We then complete the square on the left side and rearrange to obtain
\begin{align*}
&\Delta x^TH\Delta x - 2\Delta x^TH\Delta\hat{x} + \Delta\hat{x}^TH\Delta\hat{x} \\
&\hspace{1pc}\le \Delta x^T(\hat{H}-H)\Delta\hat{x} + \Delta\hat{x}^T(H- \hat{H})\Delta\hat{x}.
\end{align*}
The left side is $\norm{\Delta x - \Delta\hat{x}}_H^2$ and the eigenvalues of $H$ are bounded. Thus
\begin{align}
\norm{\Delta x - \Delta\hat{x}} & \le\frac{1}{\sqrt{m}}\left(\Delta x^T(\hat{H}-H)\Delta\hat{x} + \Delta\hat{x}^T(H- \hat{H})\Delta\hat{x}\right)^{1/2} \nonumber \\
&\le \frac{1}{\sqrt{m}}\bigl\|(\hat{H}-H)\Delta\hat{x}\bigr\|^{1/2}(\norm{\Delta x} + \norm{\Delta\hat{x}})^{1/2}.
\label{eq:prop:tseng-1}
\end{align}
We use a result due to Tseng and Yun ({\it cf.}\ Lemma 3 in
\cite{tseng2009coordinate}) to bound the term $\left(\norm{\Delta x}
+ \norm{\Delta\hat{x}}\right)$. Let $P$ denote
$\hat{H}^{-1/2}H\hat{H}^{-1/2}$. Then $\norm{\Delta x}$ and
$\norm{\Delta\hat{x}}$ satisfy
\[
\norm{\Delta x} \le \left(\frac{\hat{M}\left(1+\lambda_{\max}(P) + \sqrt{1-2\lambda_{\min}(P)+\lambda_{\max}(P)^2}\right)}{2m}\right)\norm{\Delta\hat{x}}.
\]
We denote the constant in parentheses by $\bar{\theta}$ and conclude that
\begin{align}
\norm{\Delta x} + \norm{\Delta\hat{x}} \le (1 + \bar{\theta})\norm{\Delta\hat{x}}.
\label{eq:prop:tseng-2}
\end{align}
We substitute \eqref{eq:prop:tseng-2} into \eqref{eq:prop:tseng-1} to obtain
\[
\norm{\Delta x - \Delta\hat{x}}\le \sqrt{\frac{1+\bar{\theta}}{m}}\bigl\|(\hat{H}-H)\Delta\hat{x}\bigr\|^{1/2}\norm{\Delta\hat{x}}^{1/2}.
\]
\end{proof}
\begin{theorem}
\label{thm:superlinear-convergence}
Suppose (i) $g$ is twice continuously differentiable and locally
strongly convex, (ii) $\nabla^2g$ is locally Lipschitz continuous
with constant $L_2$. If $\{H_k\}$ satisfy the Dennis-Mor\'{e}
criterion and their eigenvalues
are bounded, then a proximal
quasi-Newton method converges $q$-superlinearly to $x^\star$.
\end{theorem}
\begin{proof}
The assumptions of Lemma \ref{lem:quasinewton-unit-step} are
satisfied; hence unit step lengths satisfy the sufficient descent
condition after sufficiently many iterations:
\[
x_{k+1} = x_k + \Delta x_k.
\]
Since the proximal Newton method converges $q$-quadratically ({\it cf.}\ Theorem
\ref{thm:newton-quadratic-convergence}),
\begin{align}
\norm{x_{k+1} - x^\star} &\le \norm{x_k + \Delta x_k^{\rm nt} - x^\star}
+ \norm{\Delta x_k - \Delta x_k^{\rm nt}} \nonumber \\
&\le \frac{L_2}{2m}\norm{x_k-x^\star}^2 + \norm{\Delta x_k
- \Delta x_k^{\rm nt}},
\label{eq:superlinear-convergence-1}
\end{align}
where $\Delta x_k^{\rm nt}$ denotes the proximal-Newton search direction.
We use Proposition \ref{prop:tseng} to bound the second term:
\begin{align}
\norm{\Delta x_k - \Delta x_k^{\rm nt}} \le
\sqrt{\frac{1+\bar{\theta}}{m}}\norm{(\nabla^2 g(x_k)-H_k)\Delta
x_k}^{1/2}\norm{\Delta x_k}^{1/2}.
\label{eq:superlinear-convergence-2}
\end{align}
$\nabla^2 g$ is Lipschitz continuous and $\{H_k\}$ satisfies the
Dennis-Mor\'{e} criterion; hence
\begin{align*}
\norm{\left(\nabla^2 g(x_k)-H_k\right)\Delta x_k} &\le \norm{\left(\nabla^2 g(x_k)- \nabla^2 g(x^\star)\right)\Delta x_k} \\
&\hspace{1pc}+ \norm{\left(\nabla^2 g(x^\star)-H_k\right)\Delta x_k} \\
&\le L_2\norm{x_k-x^\star}\norm{\Delta x_k} + o(\norm{\Delta x_k}).
\end{align*}
$\norm{\Delta x_k}$ is within some constant $\bar{\theta}_k$ of
$\|\Delta x_k^{\rm nt}\|$ ({\it cf.}\ Lemma 3 in
\cite{tseng2009coordinate}), and we know the proximal Newton method
converges $q$-quadratically. Thus
\begin{align*}
\norm{\Delta x_k} &\le \bar{\theta}_k\norm{\Delta x_k^{\rm nt}} = \bar{\theta}_k\norm{x_{k+1}^{\rm nt} - x_k} \\
&\le \bar{\theta}_k\left(\norm{x_{k+1}^{\rm nt} -x^\star} + \norm{x_k - x^\star}\right) \\
&\le O\bigl(\norm{x_k-x^\star}^2\bigr) + \bar{\theta}_k\norm{x_k-x^\star}.
\end{align*}
We substitute these expressions into \eqref{eq:superlinear-convergence-2} to obtain
\begin{align*}
\norm{\Delta x_k - \Delta x_k^{\rm nt}} = o(\norm{x_k-x^\star}).
\end{align*}
We substitute this expression into \eqref{eq:superlinear-convergence-1} to obtain
\[
\norm{x_{k+1} - x^\star} \le \frac{L_2}{2m}\norm{x_k-x^\star}^2 + o(\norm{x_k-x^\star}),
\]
and we deduce that $x_k$ converges to $x^\star$ superlinearly.
\end{proof}
\subsection{Convergence of the inexact proximal Newton method}
We make the same assumptions made by Dembo et al.\ in their analysis
of \emph{inexact Newton methods} for minimizing smooth functions
\cite{dembo1982inexact}: (i) $x_k$ is close to $x^\star$ and (ii) the
unit step length is eventually accepted. We prove the inexact proximal
Newton method (i) converges $q$-linearly if the forcing terms $\eta_k$
are smaller than some $\bar{\eta}$, and (ii) converges $q$-superlinearly
if the forcing terms decay to zero.
First, we prove a consequence of the smoothness of $g$. Then, we use
this result to prove the inexact proximal Newton method converges
locally subject to standard assumptions on $g$ and $\eta_k$.
\begin{lemma}
\label{lem:norm-equivalence}
Suppose $g$ is locally strongly convex and $\nabla^2 g$ is locally
Lipschitz continuous. For any $\epsilon > 0$, if $x_k$ is sufficiently
close to $x^\star$, then for any $x$,
\[
\norm{x-x^\star}_{\nabla^2 g(x^\star)} \le \left(1+\epsilon\right)\norm{x-x^\star}_{\nabla^2 g(x_k)}.
\]
\end{lemma}
\begin{proof}
We first expand $\nabla^2 g(x^\star)^{1/2}(x - x^\star)$ to obtain
\begin{align*}
&\nabla^2 g(x^\star)^{1/2}(x-x^\star) \\
&\hspace{1pc} = \left(\nabla^2 g(x^\star)^{1/2} - \nabla^2 g(x_k)^{1/2}\right)(x-x^\star) + \nabla^2 g(x_k)^{1/2}(x-x^\star) \\
&\hspace{1pc}= \left(\nabla^2 g(x^\star)^{1/2} - \nabla^2 g(x_k)^{1/2}\right)\nabla^2 g(x_k)^{-1/2}\nabla^2 g(x_k)^{1/2}(x-x^\star) \\
&\hspace{1pc}\pc+\nabla^2 g(x_k)^{1/2}(x-x^\star) \\
&\hspace{1pc}= \left(I + \left(\nabla^2 g(x^\star)^{1/2} - \nabla^2 g(x_k)^{1/2}\right)\nabla^2 g(x_k)^{-1/2}\right)\nabla^2 g(x_k)^{1/2}(x-x^\star).
\end{align*}
We take norms to obtain
\begin{equation}
\begin{aligned}
&\norm{x-x^\star}_{\nabla^2 g(x^\star)} \\
&\hspace{1pc}\le \bigl\|I + \bigl(\nabla^2 g(x^\star)^{1/2} - \nabla^2 g(x_k)^{1/2}\bigr)\nabla^2 g(x_k)^{-1/2}\bigr\|\norm{x-x^\star}_{\nabla^2 g(x_k)}.
\end{aligned}
\label{eq:norm-equivalence-1}
\end{equation}
If $g$ is locally strongly convex with constant $m$ and $x_k$
is sufficiently close to $x^\star$, then
\[
\bigl\|\nabla^2 g(x^\star)^{1/2} - \nabla^2 g(x_k)^{1/2}\bigr\| \le \sqrt{m}\epsilon.
\]
We substitute this bound into \eqref{eq:norm-equivalence-1} to deduce that
\[
\norm{x-x^\star}_{\nabla^2 g(x^\star)} \le (1+\epsilon)\norm{x-x^\star}_{\nabla^2 g(x_k)}.
\]
\end{proof}
\begin{theorem}
\label{thm:inexact-newton-local-convergence}
Suppose (i) $g$ is locally strongly convex with constant $m$, (ii)
$\nabla^2 g$ is locally Lipschitz continuous with constant $L_2$,
and (iii) there exists $L_G$ such that the composite gradient step
satisfies
\begin{equation}
\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}} \le L_G\norm{x_k - x^\star}_{\nabla^2 g(x_k)}.
\label{eq:G-lipschitz}
\end{equation}
\begin{enumerate}
\item If $\eta_k$ is smaller than some $\bar{\eta} < \frac{1}{L_G}$,
then an inexact proximal Newton method converges $q$-linearly to
$x^\star$.
\item If $\eta_k$ decays to zero, then an inexact proximal Newton
method converges $q$-superlinearly to $x^\star$.
\end{enumerate}
\end{theorem}
\begin{proof}
We use Lemma \ref{lem:norm-equivalence} to deduce
\begin{align}
\norm{x_{k+1}-x^\star}_{\nabla^2 g(x^\star)} \le (1+\epsilon_1)\norm{x_{k+1} -x^\star}_{\nabla^2 g(x_k)}.
\label{eq:inexact-newton-local-convergence-1}
\end{align}
We use the monotonicity of $\partial h$ to bound $\norm{x_{k+1}
-x^\star}_{\nabla^2 g(x_k)}$. First, $\Delta x_k$ satisfies
\begin{equation}
\begin{aligned}
\nabla^2 g(x_k)(x_{k+1} - x_k) \in-\nabla g(x_k) - \partial h(x_{k+1}) + r_k
\end{aligned}
\label{eq:inexact-newton-local-convergence-2}
\end{equation}
({\it cf.}\ \eqref{eq:inexact-proxnewton-search-direction}).
Also, the exact proximal Newton step at $x^\star$ (trivially) satisfies
\begin{align}
\nabla^2 g(x_k)(x^\star - x^\star) \in -\nabla g(x^\star) - \partial h(x^\star).
\label{eq:inexact-newton-local-convergence-3}
\end{align}
Subtracting \eqref{eq:inexact-newton-local-convergence-3} from
\eqref{eq:inexact-newton-local-convergence-2} and rearranging,
we obtain
\begin{align*}
&\partial h(x_{k+1}) - \partial h(x^\star) \\
&\hspace{1pc}\in \nabla^2 g(x_k)(x_k - x_{k+1} -x^\star + x^\star) -\nabla g(x_k) +\nabla g(x^\star) +r_k.
\end{align*}
Since $\partial h$ is monotone,
\begin{align*}
0 &\le (x_{k+1} - x^\star)^T(\partial h(x_{k+1}) - \partial h(x^\star)) \\
&= (x_{k+1}-x^\star)^T\nabla^2 g(x_k)(x^\star - x_{k+1}) + (x_{k+1}-x^\star)^T\left(\nabla^2 g(x_k)(x_k-x^\star) \right. \\
&\hspace{1pc}\left.- \nabla g(x_k) + \nabla g(x^\star) + r_k\right) \\
&= (x_{k+1}-x^\star)^T\nabla^2 g(x_k)\left(x_k-x^\star+ \nabla^2 g(x_k)^{-1}\left(\nabla g(x^\star) - \nabla g(x_k) + r_k\right)\right) \\
&\hspace{1pc}-\norm{x_{k+1}-x^\star}_{\nabla^2 g(x_k)}^2.
\end{align*}
We apply the Cauchy--Schwarz inequality in the $\nabla^2 g(x_k)$ inner
product and divide by $\norm{x_{k+1}-x^\star}_{\nabla^2 g(x_k)}$ to obtain
\begin{align*}
\norm{x_{k+1}-x^\star}_{\nabla^2 g(x_k)}
&\le \norm{x_k-x^\star + \nabla^2 g(x_k)^{-1}\left(\nabla g(x^\star)
- \nabla g(x_k)\right)}_{\nabla^2 g(x_k)} \\
&\hspace{1pc}+ \norm{r_k}_{\nabla^2 g(x_k)^{-1}}.
\end{align*}
If $x_k$ is sufficiently close to $x^\star$, for any $\epsilon_2 > 0$ we have
\[
\norm{x_k-x^\star + \nabla^2 g(x_k)^{-1}\left(\nabla g(x^\star) -
\nabla g(x_k)\right)}_{\nabla^2 g(x_k)} \le
\epsilon_2\norm{x_k-x^\star}_{\nabla^2 g(x^\star)}
\]
and hence
\begin{align}
\norm{x_{k+1}-x^\star}_{\nabla^2 g(x_k)} \le
\epsilon_2\norm{x_k-x^\star}_{\nabla^2 g(x^\star)}
+ \norm{r_k}_{\nabla^2 g(x_k)^{-1}}.
\label{eq:inexact-newton-local-convergence-4}
\end{align}
We substitute \eqref{eq:inexact-newton-local-convergence-4} into
\eqref{eq:inexact-newton-local-convergence-1} to obtain
\begin{equation}
\norm{x_{k+1}-x^\star}_{\nabla^2 g(x^\star)}
\le (1+\epsilon_1)
\bigl(\epsilon_2\norm{x_k-x^\star}_{\nabla^2 g(x^\star)}
+ \norm{r_k}_{\nabla^2 g(x_k)^{-1}}\bigr).
\label{eq:inexact-newton-local-convergence-5}
\end{equation}
Since $\Delta x_k$ satisfies the adaptive stopping condition
\eqref{eq:adaptive-stopping-condition},
\[
\norm{r_k}_{\nabla^2 g(x_k)^{-1}} \le \eta_k\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}},
\]
and since there exists $L_G$ such that $G_f$ satisfies \eqref{eq:G-lipschitz},
\begin{align}
\norm{r_k}_{\nabla^2 g(x_k)^{-1}} \le \eta_kL_G\norm{x_k-x^\star}_{\nabla^2 g(x^\star)}.
\label{eq:inexact-newton-local-convergence-6}
\end{align}
We substitute \eqref{eq:inexact-newton-local-convergence-6} into
\eqref{eq:inexact-newton-local-convergence-5} to obtain
\[
\norm{x_{k+1}-x^\star}_{\nabla^2 g(x^\star)} \le (1 + \epsilon_1)(\epsilon_2 + \eta_k L_G)\norm{x_k-x^\star}_{\nabla^2 g(x^\star)}.
\]
If $\eta_k$ is smaller than some
\[
\bar{\eta} < \frac{1}{L_G}\left(\frac{1}{(1 + \epsilon_1)} - \epsilon_2\right) < \frac{1}{L_G}
\]
then $x_k$ converges $q$-linearly to $x^\star$. If $\eta_k$ decays
to zero (the smoothness of $g$ lets $\epsilon_1, \epsilon_2$ decay
to zero), then $x_k$ converges $q$-superlinearly to $x^\star$.
\end{proof}
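To make the role of the forcing terms concrete, consider the smooth case $h = 0$, where the inexact step reduces to solving $\nabla^2 g(x_k)\Delta x_k = -\nabla g(x_k) + r_k$ with $\norm{r_k} \le \eta_k\norm{\nabla g(x_k)}$. The toy sketch below is ours, not the authors' code; the choice of $g$ and the worst-case residual are illustrative assumptions. It exhibits the $q$-linear rate of part 1 for a fixed forcing term:

```python
import numpy as np

# Toy smooth problem: g(x) = sum(exp(x_i) - x_i), minimized at x* = 0.
def grad(x):
    return np.exp(x) - 1.0

eta = 0.1                       # fixed forcing term
x = np.full(3, 0.5)
errors = []
for _ in range(20):
    g = grad(x)
    r = eta * g                 # worst-case residual with ||r|| = eta * ||grad g||
    dx = (-g + r) / np.exp(x)   # solve Hessian * dx = -grad + r (Hessian is diagonal)
    x = x + dx                  # unit step length
    errors.append(np.linalg.norm(x))
# The error ||x_k - x*|| shrinks by a factor of roughly eta per iteration.
```

Letting `eta` decay to zero across iterations in the same loop reproduces the superlinear behavior of part 2.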
If we assume $g$ is twice continuously differentiable, we can derive
an expression for $L_G$. Combining this result with Theorem
\ref{thm:inexact-newton-local-convergence}, we deduce the convergence
of an inexact proximal Newton method with our adaptive stopping condition
\eqref{eq:adaptive-stopping-condition}.
\begin{lemma}
\label{lem:G-lipschitz-2}
Suppose $g$ is locally strongly convex with constant $m$, and
$\nabla^2 g$ is locally Lipschitz continuous. If $x_k$ is
sufficiently close to $x^\star$, there exists $\kappa$ such that
\[
\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}} \le \left(\sqrt{\kappa}(1 +
\epsilon) + \frac{1}{m}\right)\norm{x_k - x^\star}_{\nabla^2
g(x_k)}.
\]
\end{lemma}
\begin{proof}
Since $G_f(x^\star)$ is zero,
\begin{align}
&\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}} \nonumber \\
&\hspace{1pc}\le \frac{1}{\sqrt{m}}\norm{G_f(x_k) - G_f(x^\star)} \nonumber \\
&\hspace{1pc}\le \frac{1}{\sqrt{m}}\norm{\nabla g(x_k) -\nabla g(x^\star)} + \frac{1}{\sqrt{m}}\norm{x_k-x^\star} \nonumber \\
&\hspace{1pc}\le \sqrt{\kappa}\norm{\nabla g(x_k) -\nabla g(x^\star)}_{\nabla^2 g(x_k)^{-1}} + \frac{1}{m}\norm{x_k-x^\star}_{\nabla^2 g(x_k)}
\label{eq:G-lipschitz-1},
\end{align}
where $\kappa = L_1/m$ bounds the condition number of $\nabla^2
g(x_k)$. The second inequality follows from Lemma
\ref{lem:G-lipschitz}. We split $\norm{\nabla g(x_k) -\nabla
g(x^\star)}_{\nabla^2 g(x_k)^{-1}}$ into two terms:
\begin{align*}
&\norm{\nabla g(x_k) -\nabla g(x^\star)}_{\nabla^2 g(x_k)^{-1}} \\
&\hspace{1pc}\le\norm{\nabla g(x_k) -\nabla g(x^\star) + \nabla^2 g(x_k)(x^\star - x_k)}_{\nabla^2 g(x_k)^{-1}}
+ \norm{x_k - x^\star}_{\nabla^2 g(x_k)}.
\end{align*}
If $x_k$ is sufficiently close to $x^\star$, for any $\epsilon_1 > 0$ we have
\begin{align*}
\norm{\nabla g(x_k) -\nabla g(x^\star) + \nabla^2 g(x_k)(x^\star - x_k)}_{\nabla^2 g(x_k)^{-1}} \le \epsilon_1\norm{x_k - x^\star}_{\nabla^2 g(x_k)}.
\end{align*}
Hence
\[
\norm{\nabla g(x_k) -\nabla g(x^\star)}_{\nabla^2 g(x_k)^{-1}} \le (1 + \epsilon_1)\norm{x_k - x^\star}_{\nabla^2 g(x_k)}.
\]
We substitute this bound into \eqref{eq:G-lipschitz-1} to obtain
\[
\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}} \le \left(\sqrt{\kappa}(1 + \epsilon_1) + \frac{1}{m}\right)\norm{x_k - x^\star}_{\nabla^2 g(x_k)}.
\]
We use Lemma \ref{lem:norm-equivalence} to deduce
\begin{align*}
\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}} &\le (1 + \epsilon_2)\left(\sqrt{\kappa}(1 + \epsilon_1) + \frac{1}{m}\right)\norm{x_k - x^\star}_{\nabla^2 g(x_k)} \\
& \le \left(\sqrt{\kappa}(1 + \epsilon) + \frac{1}{m}\right)\norm{x_k - x^\star}_{\nabla^2 g(x_k)}.
\end{align*}
\end{proof}
\begin{corollary}
Suppose (i) $g$ is locally strongly convex with constant $m$, and
(ii) $\nabla^2 g$ is locally Lipschitz continuous with constant $L_2$.
\begin{enumerate}
\item If $\eta_k$ is smaller than some $\bar{\eta} <
\frac{1}{\sqrt{\kappa} + 1/m}$, an inexact proximal Newton
method converges $q$-linearly to $x^\star$.
\item If $\eta_k$ decays to zero, an inexact proximal Newton
method converges $q$-superlinearly to $x^\star$.
\end{enumerate}
\end{corollary}
\begin{remark}
In many cases, we can obtain tighter bounds on
$\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}}$. {\it E.g.}, when minimizing
smooth functions ($h$ is zero), we can show
\[
\norm{G_f(x_k)}_{\nabla^2 g(x_k)^{-1}} = \|\nabla g(x_k)\|_{\nabla^2 g(x_k)^{-1}}
\le (1+\epsilon)\norm{x_k-x^\star}_{\nabla^2 g(x_k)}.
\]
This yields the classic result of Dembo et al.: if $\eta_k$ is
uniformly smaller than one, then the inexact Newton method converges
$q$-linearly.
\end{remark}
Finally, we justify our choice of forcing terms: if we choose $\eta_k$
according to \eqref{eq:forcing-term}, then the inexact proximal Newton
method converges $q$-superlinearly.
\begin{theorem}
\label{thm:forcing-term-1}
Suppose (i) $x_0$ is sufficiently close to $x^\star$, and (ii) the
assumptions of Theorem \ref{thm:newton-quadratic-convergence} are
satisfied. If we choose $\eta_k$ according to
\eqref{eq:forcing-term}, then the inexact proximal Newton method
converges $q$-superlinearly.
\end{theorem}
\begin{proof}
Since the assumptions of Theorem \ref{thm:newton-quadratic-convergence}
are satisfied, $x_k$ converges locally to $x^\star$. Also, since $\nabla^2
g$ is Lipschitz continuous,
\begin{align*}
&\|\nabla g(x_k) -\nabla g(x_{k-1}) - \nabla^2 g(x_{k-1})\Delta x_{k-1}\| \\
&\hspace{1pc}\le \left(\int_0^1\norm{ \nabla^2 g(x_{k-1}+s\Delta x_{k-1}) -\nabla^2 g(x^\star)}ds\right)\norm{\Delta x_{k-1}} \\
&\hspace{1pc}\pc+ \norm{\nabla^2 g(x^\star) - \nabla^2 g(x_{k-1})}\norm{\Delta x_{k-1}} \\
&\hspace{1pc}\le \left(\int_0^1 L_2\|x_{k-1}+s\Delta x_{k-1} - x^\star\|ds\right)\norm{\Delta x_{k-1}} \\
&\hspace{1pc}\pc+ L_2\norm{x_{k-1} - x^\star}\norm{\Delta x_{k-1}}.
\end{align*}
We bound the integrand using the triangle inequality and integrate to obtain
\[
\int_0^1 L_2\|x_{k-1}+s\Delta x_{k-1} - x^\star\|ds \le L_2 \|x_{k-1}- x^\star\| + \frac{L_2}{2}\norm{\Delta x_{k-1}}.
\]
We substitute these expressions into \eqref{eq:forcing-term} to obtain
\begin{align}
\eta_k \le L_2\left(2\norm{x_{k-1}-x^\star} +
\frac{1}{2}\norm{\Delta x_{k-1}}\right)\frac{\norm{\Delta
x_{k-1}}}{\|\nabla g(x_{k-1})\|}.
\label{eq:forcing-term-1-1}
\end{align}
If $\nabla g(x^\star) \ne 0$, $\|\nabla g(x)\|$ is bounded away
from zero in a neighborhood of $x^\star$. Hence $\eta_k$ decays to
zero and $x_k$ converges $q$-superlinearly to $x^\star$. Otherwise,
\begin{equation}
\|\nabla g(x_{k-1})\| = \|\nabla g(x_{k-1})-\nabla g(x^\star)\|
\ge m\norm{x_{k-1}-x^\star}.
\label{eq:forcing-term-1-2}
\end{equation}
We substitute \eqref{eq:forcing-term-1-1} and
\eqref{eq:forcing-term-1-2} into \eqref{eq:forcing-term} to obtain
\begin{align}
\eta_k \le \frac{L_2}{m}\left(2\norm{x_{k-1}-x^\star} + \frac{\norm{\Delta
x_{k-1}}}{2}\right)\frac{\norm{\Delta x_{k-1}}}{\norm{x_{k-1}-x^\star}}.
\label{eq:forcing-term-1-3}
\end{align}
The triangle inequality yields
\[
\norm{\Delta x_{k-1}} \le \norm{x_k-x^\star} + \norm{x_{k-1}-x^\star}.
\]
We divide by $\norm{x_{k-1}-x^\star}$ to obtain
\[
\frac{\norm{\Delta x_{k-1}}}{\norm{x_{k-1}-x^\star}} \le 1 +
\frac{\norm{x_k-x^\star}}{\norm{x_{k-1}-x^\star}}.
\]
If $k$ is sufficiently large, $x_k$ converges $q$-linearly to $x^\star$ and hence
\[
\frac{\norm{\Delta x_{k-1}}}{\norm{x_{k-1}-x^\star}} \le 2.
\]
We substitute this expression into \eqref{eq:forcing-term-1-3} to obtain
\[
\eta_k \le \frac{L_2}{m}\left(4\norm{x_{k-1}-x^\star} + \norm{\Delta x_{k-1}}\right).
\]
Hence $\eta_k$ decays to zero, and $x_k$ converges $q$-superlinearly to $x^\star$.
\end{proof}
\section{Computational experiments}
\label{sec:experiments}
First we explore how inexact search directions affect the convergence
behavior of proximal Newton-type methods on a problem in
bioinformatics. We show that choosing the forcing terms according to
\eqref{eq:forcing-term} avoids ``oversolving'' the subproblem. Then
we demonstrate the performance of proximal Newton-type methods using a
problem in statistical learning. We show that the methods are suited
to problems with expensive smooth function evaluations.
\subsection{Inverse covariance estimation}
Suppose we are given \emph{i.i.d.}\ samples $x^{(1)},\dots,x^{(n)}$ from
a Gaussian Markov random field (MRF) with unknown inverse covariance
matrix $\bar{\Theta}$:
\[
\mathop{\mathbf{Pr}}(x;\bar{\Theta}) \propto \exp\left(-x^T\bar{\Theta} x/2 + \logdet(\bar{\Theta})/2\right).
\]
We seek a sparse maximum likelihood estimate of the inverse covariance matrix:
\begin{align}
\hat{\Theta} := \argmin_{\Theta \in \mathbf{R}^{n\times
n}}\,\trace\left(\hat{\Sigma}\Theta\right) - \log\det(\Theta) +
\lambda\norm{\mathrm{vec}(\Theta)}_1,
\label{eq:l1-logdet}
\end{align}
where $\hat{\Sigma}$ denotes the sample covariance matrix. We
regularize using an entry-wise $\ell_1$ norm to avoid overfitting the
data and promote sparse estimates. $\lambda$ is a
parameter that balances goodness-of-fit and sparsity.
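For reference, the objective in \eqref{eq:l1-logdet} is cheap to evaluate. Below is a minimal NumPy sketch of ours (not the experiment code); the sample covariance and $\lambda$ are arbitrary illustrative values:

```python
import numpy as np

def l1_logdet_objective(Theta, Sigma_hat, lam):
    """trace(Sigma_hat @ Theta) - log det(Theta) + lam * ||vec(Theta)||_1."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return np.trace(Sigma_hat @ Theta) - logdet + lam * np.abs(Theta).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))                 # 50 samples, 4 variables
Sigma_hat = np.cov(X, rowvar=False, bias=True)   # sample covariance
val = l1_logdet_objective(np.eye(4), Sigma_hat, lam=0.1)
# For Theta = I this is trace(Sigma_hat) - 0 + 0.1 * 4.
```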
We use two datasets: (i) Estrogen, a gene expression dataset
consisting of 682 probe sets collected from 158 patients, and (ii)
Leukemia, another gene expression dataset consisting of 1255 genes
from 72 patients.\footnote{These datasets are available from
\url{http://www.math.nus.edu.sg/~mattohkc/} with the SPINCOVSE
package.} The features of Estrogen were converted to log-scale and
normalized to have zero mean and unit variance. $\lambda$ was chosen
to match the values used in \cite{rolfs2012iterative}.
We solve the inverse covariance estimation problem \eqref{eq:l1-logdet}
using a proximal BFGS method, {\it i.e.}\ $H_k$ is updated
according to the BFGS updating formula. To explore how inexact search directions affect the convergence
behavior, we use three rules to decide how accurately to solve subproblem
\eqref{eq:proxnewton-search-dir-1}:
\begin{enumerate}
\item adaptive: stop when the adaptive stopping condition
\eqref{eq:adaptive-stopping-condition} is satisfied;
\item exact: solve subproblem exactly;
\item inexact: stop after 10 iterations.
\end{enumerate}
We plot relative suboptimality versus function evaluations and time on
the Estrogen dataset in Figure \ref{fig:estrogen} and the Leukemia
dataset in Figure \ref{fig:leukemia}.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{estrogen_funEv}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{estrogen_time}
\end{subfigure}
\caption{Inverse covariance estimation problem (Estrogen dataset).
Convergence behavior of proximal BFGS method with three subproblem stopping conditions.}
\label{fig:estrogen}
\end{figure}
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{leukemia_funEv}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{leukemia_time}
\end{subfigure}
\caption{Inverse covariance estimation problem (Leukemia dataset).
Convergence behavior of proximal BFGS method with three subproblem stopping conditions.}
\label{fig:leukemia}
\end{figure}
On both datasets, the exact stopping condition yields the fastest
convergence (ignoring computational expense per step), followed closely by
the adaptive stopping condition (see Figure \ref{fig:estrogen} and
\ref{fig:leukemia}). If we account for time per step, then the adaptive
stopping condition yields the fastest convergence. Note that the adaptive
stopping condition yields superlinear convergence (like the exact proximal
BFGS method). The third (inexact) stopping condition yields only linear
convergence (like a first-order method), and its convergence rate is
affected by the condition number of $\hat{\Theta}$. On the Leukemia
dataset, the condition number is worse and the convergence is slower.
\subsection{Logistic regression}
Suppose we are given samples $x^{(1)},\dots,x^{(n)}$ with labels
$y^{(1)},\dots,y^{(n)}\in\{-1,1\}$. We fit a logit model to our data:
\begin{align}
\minimize_{w \in \mathbf{R}^n}\,\frac{1}{n}\sum_{i=1}^n \log (1+\exp(-y_i w^Tx_i)) + \lambda\norm{w}_1.
\label{eq:l1-logistic}
\end{align}
Again, the regularization term $\norm{w}_1$ promotes sparse solutions
and $\lambda$ balances goodness-of-fit and sparsity.
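The objective in \eqref{eq:l1-logistic}, with the $\pm 1$ label convention of the displayed loss, can be evaluated stably as follows. This is a minimal sketch of ours rather than the code used in the experiments:

```python
import numpy as np

def l1_logistic_objective(w, X, y, lam):
    """(1/n) sum_i log(1 + exp(-y_i * w^T x_i)) + lam * ||w||_1, y_i in {-1, +1}."""
    margins = y * (X @ w)
    # logaddexp(0, -m) computes log(1 + exp(-m)) without overflow
    return np.logaddexp(0.0, -margins).mean() + lam * np.abs(w).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = np.where(X[:, 0] > 0, 1.0, -1.0)
val = l1_logistic_objective(np.zeros(5), X, y, lam=0.01)
# At w = 0 every margin is zero, so the objective equals log(2).
```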
We use two datasets: (i) \texttt{gisette}, a handwritten digits dataset
from the NIPS 2003 feature selection challenge ($n=5000$), and (ii)
\texttt{rcv1}, an archive of categorized news stories from
Reuters ($n=47,000$).\footnote{These datasets are available from
\url{http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets}.} The
features of \texttt{gisette} have been scaled to be within the interval
$[-1,1]$, and those of \texttt{rcv1} have been scaled to be unit vectors.
$\lambda$ was chosen to match the value reported in
\cite{yuan2012improved}, where it was chosen by five-fold cross validation
on the training set.
We compare a proximal L-BFGS method with SpaRSA and the TFOCS
implementation of FISTA (also Nesterov's 1983 method) on problem
\eqref{eq:l1-logistic}.
We plot relative suboptimality versus function evaluations and time on
the \texttt{gisette} dataset in Figure \ref{fig:gisette} and on the
\texttt{rcv1} dataset in Figure \ref{fig:rcv1}.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{gisette_funEv}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{gisette_time}
\end{subfigure}
\caption{Logistic regression problem (\texttt{gisette} dataset).
Proximal L-BFGS method (L = 50) versus FISTA and SpaRSA.}
\label{fig:gisette}
\end{figure}
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{rcv1_funEv}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{rcv1_time}
\end{subfigure}
\caption{Logistic regression problem (\texttt{rcv1} dataset).
Proximal L-BFGS method (L = 50) versus FISTA and SpaRSA.}
\label{fig:rcv1}
\end{figure}
The smooth part requires many expensive exp/log operations to evaluate. On
the dense \texttt{gisette} dataset (30 million nonzero entries in a 6000 by
5000 design matrix), evaluating $g$ dominates the computational cost. The
proximal L-BFGS method clearly outperforms the other methods because
the computational expense is shifted to solving the subproblems, whose
objective functions are cheap to evaluate (see Figure
\ref{fig:gisette}). On the sparse \texttt{rcv1} dataset (40 million nonzero
entries in a 542,000 by 47,000 design matrix), the cost of evaluating $g$
makes up a smaller portion of the total cost, and the proximal L-BFGS
method barely outperforms SpaRSA (see Figure \ref{fig:rcv1}).
\section{Conclusion}
Given the popularity of first-order methods for minimizing composite
functions, there has been a flurry of activity around the development of
Newton-type methods for minimizing composite functions
\cite{hsieh2011sparse, becker2012quasi, olsen2012newton}. We analyze
proximal Newton-type methods for such functions and show that they have
several strong advantages over first-order methods:
\begin{enumerate}
\item They converge rapidly near the optimal
solution, and can produce a solution of high accuracy.
\item They are insensitive to the choice of coordinate system and to the
condition number of the level sets of the objective.
\item They scale well with problem size.
\end{enumerate}
The main disadvantage is the cost of solving the subproblem. We have shown
that it is possible to reduce the cost and retain the fast convergence rate
by solving the subproblems inexactly. We hope our results kindle further
interest in proximal Newton-type methods as an alternative to first-order
and interior point methods for minimizing composite functions.
\section*{Acknowledgements}
We thank Santiago Akle, Trevor Hastie, Nick Henderson, Qiang Liu,
Ernest Ryu, Ed Schmerling, Carlos Sing-Long, Walter Murray, and three
anonymous referees for their insightful comments. J.\ Lee was supported by a National Defense Science and Engineering Graduate Fellowship (NDSEG) and an NSF Graduate Fellowship. Y.\ Sun and M.\ Saunders were partially
supported by the DOE through the Scientific Discovery through Advanced Computing program, grant DE-FG02-09ER25917, and by
the NIH, award number 1U01GM102098-01. M.\ Saunders was also partially supported by the ONR, grant N00014-11-1-0067.
\section{Introduction}
\label{sec:introduction}
Many problems of relevance in bioinformatics, signal processing, and
statistical learning can be formulated as minimizing a \emph{composite
function}:
\begin{align}
\minimize_{x \in \mathbf{R}^n} \,f(x) := g (x) + h(x),
\label{eq:composite-form}
\end{align}
where $g$ is a convex, continuously differentiable loss function, and
$h$ is a convex but not necessarily differentiable penalty function or
regularizer. Such problems include the \emph{lasso}
\cite{tibshirani1996regression}, the \emph{graphical lasso}
\cite{friedman2008sparse}, and trace-norm matrix completion
\cite{candes2009exact}.
We describe a family of Newton-type methods for minimizing composite
functions that achieve superlinear rates of convergence subject to
standard assumptions. The methods can be interpreted as
generalizations of the classic proximal gradient method that account
for the curvature of the function when selecting a search
direction. Many popular methods for minimizing composite functions are
special cases of these \emph{proximal Newton-type methods}, and our
analysis yields new convergence results for some of these methods.
In section \ref{sec:introduction} we review state-of-the-art methods
for problem \eqref{eq:composite-form} and related work on projected
Newton-type methods for constrained optimization. In sections
\ref{sec:proximal-newton-type-methods} and
\ref{sec:convergence-results} we describe proximal Newton-type
methods and their convergence behavior, and in section
\ref{sec:experiments} we discuss some applications of these methods
and evaluate their performance.
\textbf{Notation:} The methods we consider are \emph{line search
methods}, which produce a sequence of points $\{x_k\}$ according to
\[
x_{k+1} = x_k + t_k\Delta x_k,
\]
where $t_k$ is a \emph{step length} and $\Delta x_k$ is a \emph{search
direction}. When we focus on one iteration of an algorithm, we drop
the subscripts ({\it e.g.}, $x_+ = x + t\Delta x$). All the methods we
consider compute search directions by minimizing local models of the
composite function $f$. We use an accent $\hat{\cdot}$ to denote these
local models ({\it e.g.}, $\hat{f}_k$ is a local model of $f$ at the $k$-th
step).
\subsection{First-order methods}
The most popular methods for minimizing composite functions are
\emph{first-order methods} that use \emph{proximal mappings} to handle
the nonsmooth part $h$. SpaRSA \cite{wright2009sparse} is a popular
\emph{spectral projected gradient} method that uses a \emph{spectral
step length} together with a \emph{nonmonotone line search} to
improve convergence. TRIP \cite{kim2010scalable} also uses a spectral
step length but selects search directions using a trust-region
strategy.
We can accelerate the convergence of first-order methods using ideas
due to Nesterov \cite{nesterov2003introductory}. This yields
\emph{accelerated first-order methods}, which achieve
$\epsilon$-suboptimality within $O(1/\sqrt{\epsilon})$ iterations
\cite{tseng2008accelerated}. The most popular method in this family is
the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA)
\cite{beck2009fast}. These methods have been implemented in the
package TFOCS \cite{becker2011templates} and used to solve problems
that commonly arise in statistics, signal processing, and statistical
learning.
\subsection{Newton-type methods}
There are two classes of methods that generalize Newton-type methods
for minimizing smooth functions to handle composite functions
\eqref{eq:composite-form}. \emph{Nonsmooth Newton-type methods}
\cite{yu2010quasi} successively minimize a local quadratic model of
the composite function $f$:
\[
\hat{f}_k(y) = f(x_k) + \sup_{z\in\partial f(x_k)} z^T(y-x_k) +
\frac{1}{2}(y-x_k)^TH_k(y-x_k),
\]
where $H_k$ accounts for the curvature of $f$. (Although computing
this $\Delta x_k$ is generally not practical, we can exploit the
special structure of $f$ in many statistical learning problems.)
\emph{Proximal Newton-type methods} approximate only the smooth part
$g$ with a local quadratic model:
\[
\hat{f}_k(y) = g(x_k) + \nabla g(x_k)^T(y-x_k) + \frac{1}{2}(y-x_k)^TH_k(y-x_k) + h(y),
\]
where $H_k$ is an approximation to $\nabla^2 g(x_k)$. This idea
can be traced back to the \emph{generalized proximal point method} of
Fukushima and Min\'{e} \cite{fukushima1981generalized}.
Proximal Newton-type methods are a special case of cost approximation
(Patriksson \cite{patriksson1998cost}). In particular, Theorem 4.1
(convergence under Rule E) and Theorem 4.6 (linear convergence) of
\cite{patriksson1998cost} apply to proximal Newton-type
methods. Patriksson shows superlinear convergence of the exact
proximal Newton method, but does not analyze the quasi-Newton
approximation, nor consider the adaptive stopping criterion of Section
\ref{sec:convergence-inexact-proxnewton}. By focusing on Newton-type
methods, we obtain stronger and more practical results.
Many popular methods for minimizing composite functions are special
cases of proximal Newton-type methods. Methods tailored to a specific
problem include \texttt{glmnet} \cite{friedman2007pathwise}, \texttt{newglmnet}
\cite{yuan2012improved}, QUIC \cite{hsieh2011sparse}, and the
Newton-LASSO method \cite{olsen2012newton}. Generic methods include
\emph{projected Newton-type methods} \cite{schmidt2009optimizing,
schmidt2011projected}, proximal quasi-Newton methods
\cite{schmidt2010graphical, becker2012quasi}, and the method of Tseng
and Yun \cite{tseng2009coordinate, lu2011augmented}.
This article is the full version of our preliminary work
\cite{lee2012proximal}, and section \ref{sec:convergence-results}
includes a convergence analysis of inexact proximal Newton-type
methods ({\it i.e.}, when the subproblems are solved inexactly). Our main
convergence results are:
\begin{enumerate}
\item The proximal Newton and proximal quasi-Newton methods (with line search) converge superlinearly.
\item The inexact proximal Newton method (with unit step length) converges locally at a linear or superlinear rate depending on the forcing sequence.
\end{enumerate}
We also
describe an adaptive stopping condition to decide how exactly (or
inexactly) to solve the subproblem, and we demonstrate the benefits
empirically.
There is a rich literature on generalized equations, such as monotone
inclusions and variational inequalities. Minimizing composite
functions is a special case of solving generalized equations, and
proximal Newton-type methods are special cases of Newton-type methods
for solving them \cite{patriksson1998cost}. We refer to Patriksson
\cite{patriksson1999nonlinear} for a unified treatment of descent
methods for solving a large class of generalized equations.
\section{Proximal Newton-type methods}
\label{sec:proximal-newton-type-methods}
In problem \eqref{eq:composite-form} we assume $g$ and $h$ are closed,
convex functions, with $g$ continuously differentiable and its
gradient $\nabla g$ Lipschitz continuous. The function $h$ is not
necessarily differentiable everywhere, but its \emph{proximal mapping}
\eqref{eq:prox-mapping} can be evaluated efficiently. We refer to $g$
as ``the smooth part'' and $h$ as ``the nonsmooth part''. We assume
the optimal value is attained at some optimal solution $x^\star$, not
necessarily unique.
\subsection{The proximal gradient method}
The proximal mapping of a convex function $h$ at $x$ is
\begin{equation}
\prox_{h}(x) := \argmin_{y\in\mathbf{R}^n}\,h(y) + \frac{1}{2}\norm{y-x}^2.
\label{eq:prox-mapping}
\end{equation}
Proximal mappings can be interpreted as generalized projections
because if $h$ is the indicator function of a convex set,
$\prox_h(x)$ is the projection of $x$ onto the set. If $h$ is the
$\ell_1$ norm and $t$ is a step-length, then $\prox_{th}(x)$ is the
\emph{soft-threshold operation}:
\[
\prox_{t\ell_1}(x) = \sign(x)\cdot\max\{\abs{x}-t,0\},
\]
where $\sign$ and $\max$ are entry-wise, and $\cdot$ denotes the
entry-wise product.
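In code, the soft-threshold operation is a one-liner (a minimal NumPy sketch; the function name is ours):

```python
import numpy as np

def prox_l1(x, t):
    """Entry-wise soft-threshold: proximal mapping of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Entries with |x_i| <= t are set to zero; larger entries shrink toward zero by t.
shrunk = prox_l1(np.array([3.0, -0.5, 1.0]), 1.0)
```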
The \emph{proximal gradient method} uses the proximal mapping of the
nonsmooth part to minimize composite functions. For some step length
$t_k$, the next iterate is
$x_{k+1} = \prox_{t_kh}\left(x_k-t_k\nabla g(x_k)\right)$.
This is equivalent to
\begin{gather}
x_{k+1} = x_k - t_kG_{t_kf}(x_k)
\\ G_{t_kf}(x_k) := \frac{1}{t_k}\left(x_k-\prox_{t_kh}(x_k-t_k\nabla g(x_k)) \right),
\label{eq:composite-gradient-step}
\end{gather}
where $G_{t_kf}(x_k)$ is a
\emph{composite gradient step}. Most first-order methods, including
SpaRSA and accelerated first-order methods, are variants of this
simple method. We note three properties of the composite gradient~step:
\begin{enumerate}
\item Let $\hat{g}$ be a simple quadratic model of $g$ near $x_k$
(with $H_k$ a multiple of $I$):
\[
\hat{g}_k(y) := g(x_k) + \nabla g(x_k)^T(y-x_k) + \frac{1}{2t_k}
\norm{y - x_k}^2 .
\]
The composite gradient step moves to the minimum of $\hat{g}_k + h$:
\begin{align}
x_{k+1} &= \prox_{t_kh}\left(x_k-t_k\nabla g(x_k)\right)
\\ &= \argmin_y\,t_kh(y) + \frac{1}{2}\norm{y-x_k+t_k\nabla g(x_k)}^2
\\ &= \argmin_y\,\nabla g(x_k)^T(y-x_k) + \frac{1}{2t_k}\norm{y-x_k}^2 + h(y).
\label{eq:simple-quadratic}
\end{align}
\item The composite gradient step is neither a gradient nor a subgradient of $f$
at any point; rather it is the sum of an explicit gradient (at $x$) and an
implicit subgradient (at $\prox_h(x)$). The first-order optimality conditions of
\eqref{eq:simple-quadratic} are
\[
0 \in \nabla g(x_k) + \partial h(x_{k+1}) + \frac{1}{t_k}(x_{k+1} - x_k).
\]
We express $x_{k+1} - x_k$ in terms of $G_{t_kf}$ and rearrange to obtain
\[
G_{t_kf}(x_k) \in \nabla g(x_k) + \partial h(x_{k+1}).
\]
\item The composite gradient step is zero if and only if $x$ minimizes
$f$, {\it i.e.}{} $G_f(x) = 0$ generalizes the usual (zero gradient)
optimality condition to composite functions (where $G_f(x) =
G_{tf}(x)$ when $t=1$).
\end{enumerate}
We shall use the length of $G_f(x)$ to measure the optimality of a
point $x$.
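As a concrete illustration, the composite gradient step has a closed form when $h$ is the $\ell_1$ norm, since the proximal mapping is then entry-wise soft thresholding. The following NumPy sketch (our illustration; the function names are our own) runs the proximal gradient iteration $x_{k+1} = x_k - t\,G_{tf}(x_k)$ on a small problem:

```python
import numpy as np

def prox_l1(z, t):
    # prox of t*||.||_1: entry-wise soft thresholding, sign(z)*max(|z|-t, 0)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def composite_gradient_step(x, grad_g, t):
    # G_{tf}(x) = (x - prox_{th}(x - t*grad g(x))) / t
    return (x - prox_l1(x - t * grad_g(x), t)) / t

# f(x) = 0.5*||x - b||^2 + ||x||_1; its minimizer is prox_l1(b, 1)
b = np.array([3.0, -0.2, 1.5])
grad_g = lambda x: x - b
x, t = np.zeros(3), 0.5
for _ in range(100):
    x = x - t * composite_gradient_step(x, grad_g, t)
```

At the computed solution the composite gradient step vanishes, matching the optimality condition $G_f(x^\star) = 0$.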
We show that $G_f$ inherits the Lipschitz continuity of $\nabla g$.
\begin{definition}
\label{asu:lipschitz-continuous}
A function $F$ is Lipschitz continuous with constant $L$ if
\begin{equation}
\norm{F(x) - F(y)} \le L\norm{x - y}\text{ for any }x,y.
\label{eq:lipschitz-continuity}
\end{equation}
\end{definition}
\begin{lemma}
\label{lem:G-lipschitz}
If $\nabla g$ is Lipschitz continuous with constant $L_1$, then
\[
\norm{G_f(x)} \le (L_1 + 1)\norm{x - x^\star}.
\]
\end{lemma}
\begin{proof}
The composite gradient steps at $x$ and the optimal solution $x^\star$ satisfy
\begin{align*}
G_f(x) &\in \nabla g(x) + \partial h(x - G_f(x)), \\
G_f(x^\star) &\in \nabla g(x^\star) + \partial h(x^\star).
\end{align*}
We subtract these two expressions and rearrange to obtain
\[
\partial h(x - G_f(x)) - \partial h(x^\star) \ni G_f(x) - (\nabla g(x) - \nabla g(x^\star)).
\]
Since $h$ is convex, $\partial h$ is monotone and
\begin{align*}
0 &\le (x - G_f(x) - x^\star)^T\partial h(x - G_f(x_k))
\\ &= -G_f(x)^TG_f(x) + (x - x^\star)^TG_f(x) + G_f(x)^T(\nabla g(x) - \nabla g(x^\star))
\\ &\hspace{1pc} + (x - x^\star)^T(\nabla g(x) - \nabla g(x^\star)).
\end{align*}
We drop the last term because it is nonnegative ($\nabla g$ is monotone) to obtain
\begin{align*}
0 &\le -\norm{G_f(x)}^2 + (x - x^\star)^TG_f(x) + G_f(x)^T(\nabla g(x) - \nabla g(x^\star))
\\ &\le -\norm{G_f(x)}^2 + \norm{G_f(x)}(\norm{x - x^\star} + \norm{\nabla g(x) - \nabla g(x^\star)}),
\end{align*}
so that
\begin{equation}
\norm{G_f(x)} \le \norm{x - x^\star} + \norm{\nabla g(x) - \nabla g(x^\star)}.
\label{eq:G-lipschitz-1}
\end{equation}
Since $\nabla g$ is Lipschitz continuous, we have
\[
\norm{G_f(x)} \le (L_1 + 1)\norm{x - x^\star}.
\]
\end{proof}
\subsection{Proximal Newton-type methods}
Proximal Newton-type methods use a symmetric positive definite
matrix $H_k \approx \nabla^2 g(x_k)$ to model the curvature of $g$:
\[
\hat{g}_k(y) = g(x_k) + \nabla g(x_k)^T(y-x_k) + \frac{1}{2}(y-x_k)^TH_k(y-x_k).
\]
A proximal Newton-type search direction $\Delta x_k$ solves the subproblem
\begin{equation}
\Delta x_k = \argmin_d\,\hat{f}_k(x_k + d) := \hat{g}_k(x_k + d) + h(x_k+d).
\label{eq:proxnewton-search-dir-1}
\end{equation}
There are many strategies for choosing $H_k$. If we choose $H_k =
\nabla^2 g(x_k)$, we obtain the \emph{proximal Newton method}. If
we build an approximation to $\nabla^2 g(x_k)$ according to a
quasi-Newton strategy, we obtain a \emph{proximal quasi-Newton
method}. If the problem is large, we can use limited memory
quasi-Newton updates to reduce memory usage. Generally speaking, most
strategies for choosing Hessian approximations in Newton-type methods
(for minimizing smooth functions) can be adapted to choosing $H_k$ in
proximal Newton-type methods.
When $H_k$ is not positive definite, we can also adapt strategies for
handling indefinite Hessian approximations in Newton-type methods. The
simplest strategy is Hessian modification: we add a multiple of the
identity to $H_k$ when $H_k$ is indefinite. This makes the subproblem
strongly convex and damps the search direction. In a proximal
quasi-Newton method, we can also do update skipping: if an update
causes $H_k$ to become indefinite, simply skip the update.
We can also express the proximal Newton-type search direction using
\emph{scaled proximal mappings}. This lets us interpret a proximal
Newton-type search direction as a ``composite Newton step'' and
reveals a connection with the composite gradient step.
\begin{definition}
\label{def:scaled-prox}
Let $h$ be a convex function and $H$ be a positive definite matrix. Then
the scaled proximal mapping of $h$ at $x$ is
\begin{align}
\prox_h^H(x) := \argmin_{y\in\mathbf{R}^n}\,h(y)+\frac{1}{2}\norm{y-x}_H^2.
\label{eq:scaled-prox}
\end{align}
\end{definition}
Scaled proximal mappings share many properties with (unscaled)
proximal mappings:
\begin{enumerate}
\item The scaled proximal point $\prox_h^H(x)$ exists and is unique for $x\in\dom h$ because the
proximity function is strongly convex if $H$ is positive definite.
\item Let $\partial h(x)$ be the subdifferential of $h$ at $x$. Then
$\prox_h^H(x)$ satisfies
\begin{align}
H\left(x-\prox_h^H(x)\right)\in\partial h\left(\prox_h^H(x)\right).
\label{eq:scaled-prox-1}
\end{align}
\item Scaled proximal mappings are \emph{firmly nonexpansive} in the
$H$-norm. That is, if $u = \prox_h^H(x)$ and $v = \prox_h^H(y)$,
then
\[
(u-v)^TH(x-y)\ge \norm{u-v}_H^2,
\]
and the Cauchy-Schwarz inequality implies
$\norm{u-v}_H \le \norm{x-y}_H$.
\end{enumerate}
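For a general positive definite $H$ the scaled proximal mapping has no closed form, but it separates when $H$ is diagonal and $h$ is the $\ell_1$ norm: coordinate $i$ solves $\min_{y_i} |y_i| + \frac{d_i}{2}(y_i - x_i)^2$, which is soft thresholding with threshold $1/d_i$. A minimal sketch (our illustration, assuming $h = \norm{\cdot}_1$ and $H = \mathrm{diag}(d)$):

```python
import numpy as np

def scaled_prox_l1_diag(x, d):
    # prox_h^H(x) for h = ||.||_1 and H = diag(d), d > 0:
    # per-coordinate soft thresholding with threshold 1/d_i
    return np.sign(x) * np.maximum(np.abs(x) - 1.0 / d, 0.0)

x = np.array([2.0, 0.1, -1.0])
d = np.array([1.0, 2.0, 4.0])
p = scaled_prox_l1_diag(x, d)
# property (2) above: H(x - p) must be a subgradient of ||.||_1 at p,
# i.e. equal to sign(p_i) where p_i != 0 and in [-1, 1] otherwise
s = d * (x - p)
```

The final check numerically confirms the optimality property \eqref{eq:scaled-prox-1} for this diagonal case.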
We can express proximal Newton-type search directions as
``composite Newton steps'' using scaled proximal mappings:
\begin{align}
\Delta x = \prox_h^{H}\left(x-H^{-1}\nabla g(x)\right) - x.
\label{eq:proxnewton-search-direction-scaled-prox}
\end{align}
We use \eqref{eq:scaled-prox-1} to deduce that proximal Newton
search directions satisfy
\[
H\left(-H^{-1}\nabla g(x) - \Delta x\right) \in \partial h(x + \Delta x).
\]
We simplify to obtain
\begin{align}
H\Delta x \in -\nabla g(x) - \partial h(x + \Delta x).
\label{eq:proxnewton-search-direction-2}
\end{align}
Thus proximal Newton-type search directions, like composite gradient
steps, combine an explicit gradient with an implicit subgradient. This
expression reduces to the Newton system when $h=0$.
\begin{proposition}[Search direction properties]
\label{prop:search-direction-properties}
If $H$ is positive definite, then $\Delta x$ in \eqref{eq:proxnewton-search-dir-1}
and $x_+ := x + t\Delta x$ with $t\in(0,1]$ satisfy
\begin{gather}
f(x_+) \le f(x)+t\left(\nabla g(x)^T\Delta x+h(x+\Delta x)-h(x)\right)+ O(t^2),
\label{eq:search-direction-properties-1}
\\ \nabla g(x)^T\Delta x+h(x+\Delta x)-h(x) \le -\Delta x^TH\Delta x.
\label{eq:search-direction-properties-2}
\end{gather}
\end{proposition}
\begin{proof}
For $t\in(0,1]$,
\begin{align*}
f(x_+) -f(x) &= g(x_+)-g(x)+h(x_+)-h(x)
\\ &\le g(x_+)-g(x)+th(x+\Delta x)+(1-t)h(x)-h(x)
\\ &= g(x_+)-g(x)+t(h(x+\Delta x)-h(x))
\\ &= \nabla g(x)^T(t\Delta x)+t(h(x+\Delta x)-h(x))+O(t^2),
\end{align*}
which proves \eqref{eq:search-direction-properties-1}.
Since $\Delta x$ steps to the minimizer of $\hat{f}$
\eqref{eq:proxnewton-search-dir-1}, $t\Delta x$ satisfies
\begin{align*}
&\nabla g(x)^T\Delta x+\frac{1}{2}\Delta x^TH\Delta x+h(x+\Delta x)
\\ &\hspace{1pc}\le \nabla g(x)^T(t\Delta x)+\frac12 t^2 \Delta x^TH\Delta x+h(x_+)
\\ &\hspace{1pc}\le t\nabla g(x)^T\Delta x+\frac12 t^2 \Delta x^TH\Delta x+th(x+\Delta x)+(1-t)h(x).
\end{align*}
We rearrange and then simplify:
\begin{gather*}
(1-t)\nabla g(x)^T\Delta x + \frac{1}{2}(1-t^2)\Delta x^TH\Delta x
+ (1-t)(h(x+\Delta x)-h(x)) \le 0
\\ \nabla g(x)^T\Delta x+\frac{1}{2}(1+t)\Delta x^TH\Delta x+h(x+\Delta x)-h(x) \le 0
\\ \nabla g(x)^T\Delta x+h(x+\Delta x)-h(x) \le -\frac{1}{2}(1+t)\Delta x^TH\Delta x.
\end{gather*}
Finally, we let $t\to 1$ and rearrange to obtain
\eqref{eq:search-direction-properties-2}.
\end{proof}
Proposition \ref{prop:search-direction-properties} implies the search
direction is a descent direction for $f$ because we can substitute
\eqref{eq:search-direction-properties-2} into
\eqref{eq:search-direction-properties-1} to obtain
\begin{align}
f(x_+) \le f(x)-t\Delta x^TH\Delta x+O(t^2).
\label{eq:descent}
\end{align}
\begin{proposition}
\label{prop:first-order-conditions}
Suppose $H$ is positive definite. Then $x^\star$ is an optimal
solution if and only if at $x^\star$ the search direction $\Delta x$
\eqref{eq:proxnewton-search-dir-1} is zero.
\end{proposition}
\begin{proof}
If $\Delta x$ at $x^\star$ is nonzero, then it is a descent direction for
$f$ at $x^\star$. Clearly $x^\star$ cannot be a minimizer of $f$.
If $\Delta x = 0$, then $x$ is the minimizer of $\hat{f}$, so that
\[
\nabla g(x)^T(td)+\frac12 t^2 d^THd+h(x+td) -h(x)\ge 0
\]
for all $t>0$ and $d$. We rearrange to obtain
\begin{align}
h(x+td)-h(x) \ge -t\nabla g(x)^Td-\frac12 t^2 d^THd. \label{eq:first-order-cond-1}
\end{align}
Let $Df(x,d)$ be the directional derivative of $f$ at $x$ in the direction $d$:
\begin{align}
Df(x,d) &= \lim_{t\to 0} \frac{f(x+td)-f(x)}{t}
\nonumber \\
&= \lim_{t\to 0} \frac{g(x+td)-g(x)+h(x+td)-h(x)}{t}
\nonumber \\
&= \lim_{t\to 0} \frac{t\nabla g(x)^Td+O(t^2)+h(x+td)-h(x)}{t}.
\label{eq:first-order-cond-2}
\end{align}
We substitute \eqref{eq:first-order-cond-1} into
\eqref{eq:first-order-cond-2} to obtain
\begin{align*}
Df(x,d) &\ge \lim_{t\to 0}
\frac{t\nabla g(x)^Td+O(t^2) - \frac12 t^2 d^THd-t\nabla g(x)^Td}{t}
\\ &= \lim_{t\to 0} \frac{-\frac12 t^2 d^THd+O(t^2)}{t} = 0.
\end{align*}
Since $f$ is convex and $Df(x,d) \ge 0$ in every direction $d$, $x$ is an optimal solution.
\end{proof}
In a few special cases we can derive a closed-form expression for the
proximal Newton search direction, but usually we must resort to an
iterative method to solve the subproblem \eqref{eq:proxnewton-search-dir-1}.
The user should choose an iterative method that exploits the
properties of $h$. For example, if $h$ is the $\ell_1$ norm, (block)
coordinate descent methods combined with an active set strategy are
known to be very efficient \cite{friedman2007pathwise}.
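A minimal coordinate descent sketch for the $\ell_1$ subproblem (our own illustration, assuming $h = \norm{\cdot}_1$; practical solvers add active-set screening): fixing all other coordinates, each scalar update of $d_i$ reduces to soft thresholding.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def cd_subproblem(grad, H, x, sweeps=100):
    # coordinate descent on min_d grad^T d + 0.5 d^T H d + ||x + d||_1;
    # each coordinate update is an exact scalar minimization
    d = np.zeros_like(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            # partial derivative of the smooth model w.r.t. d_i at d_i = 0,
            # with all other coordinates held fixed
            c = grad[i] + H[i] @ d - H[i, i] * d[i]
            d[i] = soft(x[i] - c / H[i, i], 1.0 / H[i, i]) - x[i]
    return d

# with H = I and grad = -b at x = 0, the subproblem minimizer is soft(b, 1)
b = np.array([3.0, -0.2, 1.5])
d = cd_subproblem(-b, np.eye(3), np.zeros(3))
```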
We use a line search procedure to select a step length $t$ that
satisfies a sufficient descent condition: the next iterate $x_+$ satisfies $f(x_+) \le f(x)+\alpha t\lambda$, where
\begin{equation}
\lambda := \nabla g(x)^T\Delta x+h(x+\Delta x)-h(x).
\label{eq:sufficient-descent}
\end{equation}
The parameter $\alpha\in(0,0.5)$ can be interpreted as the fraction of the
decrease in $f$ predicted by linear extrapolation that we will accept.
A simple example is a
\emph{backtracking line search} \cite{boyd2004convex}: backtrack along
the search direction until a suitable step length is
selected. Although simple, this procedure performs admirably in
practice.
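A backtracking sketch of this procedure (our own minimal version; the parameter defaults and the shrink factor are our choices):

```python
import numpy as np

def backtracking(f, x, dx, lam, alpha=0.25, beta=0.5):
    # shrink t until f(x + t*dx) <= f(x) + alpha*t*lam holds,
    # where lam < 0 is the predicted decrease (sufficient descent)
    t, fx = 1.0, f(x)
    while f(x + t * dx) > fx + alpha * t * lam:
        t *= beta
    return t

# smooth example (h = 0): f(x) = 0.5*||x||^2, Newton direction dx = -x,
# lam = grad^T dx = -||x||^2; here the unit step is accepted
x = np.array([4.0, -2.0])
f = lambda v: 0.5 * v @ v
t = backtracking(f, x, -x, -(x @ x))
```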
An alternative strategy is to search along the \emph{proximal arc},
{\it i.e.}, the arc/curve
\begin{equation}
\Delta x_k(t) := \argmin_y\,\nabla g(x_k)^T(y-x_k) + \frac{1}{2t}(y-x_k)^TH_k(y-x_k) + h(y).
\label{eq:arc-search-subproblem}
\end{equation}
Arc search procedures have some benefits relative to line search
procedures. First, the arc search step is the optimal solution to a subproblem. Second, when the optimal solution lies on a low-dimensional
manifold of $\mathbf{R}^n$, an arc search strategy is likely to identify
this manifold. The main drawback is the cost of obtaining trial
points: a subproblem must be solved at each trial point.
\begin{lemma}
\label{lem:acceptable-step-lengths}
Suppose $H\succeq mI$ for some $m > 0$ and $\nabla g$ is Lipschitz
continuous with constant $L_1$. Then the sufficient descent
condition \eqref{eq:sufficient-descent} is satisfied by
\begin{align}
t \le \min \left\{1,\frac{2m}{L_1}(1-\alpha)\right\}.
\label{eq:step-length-conditions}
\end{align}
\end{lemma}
\begin{proof}
We can bound the decrease at each iteration by
\begin{align*}
& f(x_+) -f(x) = g(x_+) -g(x) +h(x_+) -h(x)
\\ &\hspace{1pc}\le \int_0^1 \nabla g(x+s(t\Delta x))^T(t\Delta x) ds
+ th(x+\Delta x) +(1-t)h(x) -h(x)
\\ &\hspace{1pc}=\nabla g(x)^T(t\Delta x)+ t(h(x+\Delta x) -h(x))
+\int_0^1 (\nabla g(x+s(t\Delta x))-\nabla g(x))^T(t\Delta x) ds
\\ &\hspace{1pc}\le t\left(\nabla g(x)^T\Delta x+h(x+\Delta x) -h(x)
+ \int_0^1 \norm{\nabla g(x+s(t\Delta x))-\nabla g(x)}\norm{\Delta x} ds\right).
\end{align*}
Since $\nabla g$ is Lipschitz continuous with constant $L_1$,
\begin{align}
f(x_+) -f(x) &\le t\left(\nabla g(x)^T\Delta x+h(x+\Delta x)
- h(x)+\frac{L_1t}{2} \norm{\Delta x}^2\right)
\nonumber
\\ &= t\left(\lambda+\frac{L_1t}{2} \norm{\Delta x}^2\right).
\label{eq:acceptable-step-lengths-1}
\end{align}
If we choose
$t\le \frac{2m}{L_1}(1-\alpha)$, then
\[
\frac{L_1t}{2} \norm{\Delta x}^2 \le m(1-\alpha)\norm{\Delta x}^2
\le (1-\alpha)\Delta x^T H\Delta x.
\]
By \eqref{eq:search-direction-properties-2}, we have
$\frac{L_1t}{2} \norm{\Delta x}^2 \le -(1-\alpha)\lambda$. We substitute this expression into \eqref{eq:acceptable-step-lengths-1} to obtain the desired expression:
$$
f(x_+) -f(x) \le t\left(\lambda-(1-\alpha)\lambda\right) = t(\alpha\lambda).
$$
\end{proof}
\begin{algorithm}
\caption{A generic proximal Newton-type method}
\label{alg:prox-newton}
\begin{algorithmic}[1]
\Require starting point $x_0\in\dom f$
\Repeat
\State Choose $H_k$, a positive definite approximation to the Hessian.
\State Solve the subproblem for a search direction:
\Statex \hspace{1pc}\pc $\Delta x_k \leftarrow \argmin_d \nabla g(x_k)^Td + \frac{1}{2}d^TH_kd +h(x_k+d).$
\State Select $t_k$ with a backtracking line search.
\State Update: $x_{k+1} \leftarrow x_k + t_k\Delta x_k$.
\Until{stopping conditions are satisfied.}
\end{algorithmic}
\end{algorithm}
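To make the generic method concrete, here is a hedged end-to-end sketch for $h = \norm{\cdot}_1$ (our illustration only: the subproblem is solved by proximal gradient on the model, not by any particular solver from the literature, and all names are our own):

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def solve_subproblem(grad, H, x, inner=200):
    # proximal gradient on min_d grad^T d + 0.5 d^T H d + ||x + d||_1,
    # with step 1/L, L = lambda_max(H)
    L = np.linalg.eigvalsh(H)[-1]
    d = np.zeros_like(x)
    for _ in range(inner):
        d = soft(x + d - (grad + H @ d) / L, 1.0 / L) - x
    return d

def prox_newton(g, grad_g, hess_g, x0, alpha=0.25, tol=1e-8, iters=50):
    f = lambda v: g(v) + np.abs(v).sum()
    x = x0
    for _ in range(iters):
        grad, H = grad_g(x), hess_g(x)          # choose H_k
        d = solve_subproblem(grad, H, x)        # search direction
        lam = grad @ d + np.abs(x + d).sum() - np.abs(x).sum()
        if -lam < tol:                          # stop: predicted decrease tiny
            break
        t = 1.0
        while f(x + t * d) > f(x) + alpha * t * lam:  # backtracking line search
            t *= 0.5
        x = x + t * d
    return x

# g(x) = 0.5*||x - b||^2, so the minimizer of f is soft(b, 1)
b = np.array([3.0, -0.2, 1.5])
xstar = prox_newton(lambda v: 0.5 * (v - b) @ (v - b),
                    lambda v: v - b, lambda v: np.eye(3), np.zeros(3))
```

On this quadratic $g$ with the exact Hessian, the method reaches the minimizer and accepts the unit step on the first iteration.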
\subsection{Inexact proximal Newton-type methods}
\label{sec:inexact-proxnewton}
Inexact proximal Newton-type methods solve the subproblems
\eqref{eq:proxnewton-search-dir-1} approximately to obtain inexact
search directions. These methods can be more efficient than their
exact counterparts because they require less computation per
iteration. Indeed, many practical implementations of proximal
Newton-type methods such as \texttt{glmnet}, \texttt{newGLMNET}, and
QUIC solve the subproblems inexactly.
In practice, how inexactly we solve the subproblem is
critical to the efficiency and reliability of the method. The
practical implementations just mentioned
use a variety of heuristics to decide.
Although these methods perform admirably in practice,
there are few results on how the inexact subproblem solutions
affect their convergence behavior.
We now describe an adaptive stopping condition for the subproblem. In
section \ref{sec:convergence-results} we analyze the convergence
behavior of inexact Newton-type methods, and in
section~\ref{sec:experiments} we conduct computational experiments to
compare the performance of our stopping condition against commonly
used heuristics.
Our adaptive stopping condition follows the one used by
\emph{inexact Newton-type methods} for minimizing smooth functions:
\begin{align}
\norm{\nabla \hat{g}_k(x_k + \Delta x_k)} \le \eta_k\norm{\nabla g(x_k)},
\label{eq:inexact-newton-stopping-condition}
\end{align}
where $\eta_k$ is a \emph{forcing term} that requires the left-hand side
to be small. We generalize
the condition to composite functions by substituting composite gradients into
\eqref{eq:inexact-newton-stopping-condition}: if $H_k \preceq MI$ for
some $M > 0$, we require
\begin{align}
\|G_{\hat{f}_k/M}(x_k + \Delta x_k)\|
\le \eta_k\norm{G_{f/M}(x_k)}.
\label{eq:adaptive-stopping-condition}
\end{align}
We set $\eta_k$ based on how well $G_{\hat{f}_{k-1}}$ approximates $G_f$
near $x_k$: if $mI\preceq H_k$ for some $m > 0$, we require
\begin{align}
\eta_k =\min\,\left\{\frac{m}{2},
\frac{\|G_{\hat{f}_{k-1}/M}(x_k)-G_{f/M}(x_k)\|}{\norm{G_{f/M}(x_{k-1})}} \right\}.
\label{eq:forcing-term}
\end{align}
This choice, due to Eisenstat and Walker \cite{eisenstat1996choosing},
yields desirable convergence results and performs admirably in
practice.
Intuitively, we should solve the subproblem exactly if (i) $x_k$ is
close to the optimal solution, and (ii) $\hat{f}_k$ is a good model of
$f$ near $x_k$. If (i), we seek to preserve the fast local convergence
behavior of proximal Newton-type methods; if (ii), then minimizing
$\hat{f}_k$ is a good surrogate for minimizing $f$. In these cases,
the right-hand sides of \eqref{eq:adaptive-stopping-condition} and
\eqref{eq:forcing-term} are small, so the subproblem is solved accurately.
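In code, the forcing-term rule and the stopping test amount to a few norms; a sketch (our naming, assuming the scaled composite gradient steps $G_{f/M}$ and $G_{\hat{f}/M}$ are available as vectors):

```python
import numpy as np

def forcing_term(G_model_prev, G_f_now, G_f_prev, m):
    # eta_k = min{ m/2,
    #   ||G_{fhat_{k-1}/M}(x_k) - G_{f/M}(x_k)|| / ||G_{f/M}(x_{k-1})|| }
    return min(m / 2.0,
               np.linalg.norm(G_model_prev - G_f_now) / np.linalg.norm(G_f_prev))

def subproblem_solved(G_model_new, G_f_now, eta):
    # adaptive stopping: ||G_{fhat_k/M}(x_k + dx)|| <= eta_k * ||G_{f/M}(x_k)||
    return np.linalg.norm(G_model_new) <= eta * np.linalg.norm(G_f_now)
```

If the previous model tracked $G_f$ well, $\eta_k$ is small and the subproblem must be solved to high accuracy, consistent with the intuition above.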
We can derive an expression like
\eqref{eq:proxnewton-search-direction-2} for an inexact search
direction in terms of an explicit gradient, an implicit subgradient,
and a residual term $r_k$. This reveals connections to the inexact
Newton search direction in the case of smooth problems. The adaptive
stopping condition \eqref{eq:adaptive-stopping-condition} is equivalent to
\begin{align*}
0&\in G_{\hat{f}_k}(x_k + \Delta x_k) + r_k
\\ &= \nabla \hat{g}_k(x_k + \Delta x_k) +\partial h(x_k
+ \Delta x_k - G_{\hat{f}_k}(x_k + \Delta x_k)) + r_k
\\ &= \nabla g(x_k) + H_k\Delta x_k +\partial h(x_k
+ \Delta x_k - G_{\hat{f}_k}(x_k + \Delta x_k)) + r_k
\end{align*}
for some $r_k$ such that $\norm{r_k} \le \eta_k\norm{G_f(x_k)}$. Thus
an inexact search direction satisfies
\begin{align}
H_k\Delta x_k \in -\nabla g(x_k) - \partial h(x_k
+ \Delta x_k - G_{\hat{f}_k}(x_k + \Delta x_k)) + r_k.
\label{eq:inexact-proxnewton-search-direction}
\end{align}
Recently, Byrd et al.\ \cite{byrd2013inexact} analyze the inexact proximal Newton method with a more stringent adaptive stopping condition
\begin{equation}
\|G_{\hat{f}_k/M}(x_k + \Delta x_k)\| \le \eta_k\norm{G_{f/M}(x_k)}\text{ and }\hat{f}_k(x_k+\Delta x_k) - \hat{f}_k(x_k) \le \beta\lambda_k
\label{eq:stringent-adaptive-stopping-condition}
\end{equation}
for some $\beta\in(0,\frac12)$. The second condition is a sufficient descent condition on the subproblem. When $h$ is the $\ell_1$ norm, they show the inexact proximal Newton method with the stopping criterion \eqref{eq:stringent-adaptive-stopping-condition}
\begin{enumerate}
\item converges globally
\item eventually accepts the unit step length
\item converges linearly or superlinearly depending on the forcing terms.
\end{enumerate}
Although the first two results generalize readily to composite functions with a generic $h$, the third depends on the separability of the $\ell_1$
norm and does not apply to generic composite functions. Since most practical implementations, such as \cite{hsieh2011sparse} and \cite{yuan2012improved}, more closely correspond to \eqref{eq:adaptive-stopping-condition}, we state our results for the adaptive stopping condition that does not impose sufficient descent. However, our local convergence result, combined with their first two results, implies that the inexact proximal Newton method with stopping condition \eqref{eq:stringent-adaptive-stopping-condition} converges globally, and converges linearly or superlinearly (depending on the forcing term) for a generic $h$.
\section{Convergence of proximal Newton-type methods}
\label{sec:convergence-results}
We now analyze the convergence behavior of proximal Newton-type
methods. In section \ref{sec:global-convergence} we show that
proximal Newton-type methods converge globally when the subproblems
are solved exactly. In sections \ref{sec:convergence-proxnewton}
and \ref{sec:convergence-prox-quasinewton} we show that proximal
Newton-type methods and proximal quasi-Newton methods converge
$q$-quadratically and $q$-superlinearly subject to standard
assumptions on the smooth part $g$. In section
\ref{sec:convergence-inexact-proxnewton}, we show that the inexact
proximal Newton method converges
\begin{itemize}
\item $q$-linearly when the forcing terms $\eta_k$ are uniformly
smaller than the inverse of the Lipschitz constant of $G_f$;
\item $q$-superlinearly when the forcing terms $\eta_k$ are chosen
according to \eqref{eq:forcing-term}.
\end{itemize}
\subsection*{Notation} $G_f$ is the composite
gradient step on the composite function $f$, and $\lambda_k$ is the
decrease in $f$ predicted by linear extrapolation on $g$ at $x_k$
along the search direction $\Delta x_k$:
\[
\lambda_k := \nabla g(x_k)^T\Delta x_k+h(x_k+\Delta x_k)-h(x_k).
\]
$L_1$, $L_2$, and $L_{G_f}$ are the Lipschitz constants
of $\nabla g$, $\nabla^2 g$, and $G_f$ respectively, while $m$
and $M$ are the (uniform) strong convexity and smoothness constants
for the $\hat{g}_k$'s, {\it i.e.}, $mI\preceq H_k \preceq MI$. If we set
$H_k = \nabla^2 g(x_k)$, then $m$ and $M$ are also the strong
convexity and strong smoothness constants of $g$.
\subsection{Global convergence}
\label{sec:global-convergence}
Our first result shows proximal Newton-type methods converge globally
to some optimal solution $x^\star$. There are many similar results;
{\it e.g.}, those in \cite[section 4]{patriksson1999nonlinear},
and Theorem \ref{thm:global-convergence} is neither the first nor the
most general. We include this result because the proof is simple and
intuitive.
We assume
\begin{enumerate}
\item $f$ is a closed, convex function and $\inf_x\{f(x)\mid x\in\dom f\}$ is attained;
\item the $H_k$'s are (uniformly) positive definite; {\it i.e.}, $H_k \succeq
mI$ for some $m > 0$.
\end{enumerate}
The second assumption ensures that the methods are executable,
{\it i.e.}, there exist step lengths that satisfy the sufficient descent
condition (cf.\ Lemma \ref{lem:acceptable-step-lengths}).
\begin{theorem}
\label{thm:global-convergence}
Suppose $f$ is a closed convex function, and $\inf_x\{f(x)\mid
x\in\dom f\}$ is attained at some $x^\star$. If $H_k \succeq mI$ for
some $m > 0$ and the subproblems \eqref{eq:proxnewton-search-dir-1}
are solved exactly, then $x_k$ converges to an optimal solution
starting at any $x_0\in\dom f$.
\end{theorem}
\begin{proof}
The sequence $\{f(x_k)\}$ is decreasing because $\Delta x_k$ are descent
directions \eqref{eq:descent} and
there exist step lengths satisfying sufficient descent
\eqref{eq:sufficient-descent} (cf.\ Lemma \ref{lem:acceptable-step-lengths}):
\[
f(x_{k+1})-f(x_k) \le \alpha t_k\lambda_k \le 0.
\]
The sequence $\{f(x_k)\}$ is decreasing and bounded below by
$f(x^\star)$, so it must converge to some limit. Thus $t_k\lambda_k$ must
decay to zero. The step lengths $t_k$ are bounded away from zero
because sufficiently small step lengths attain sufficient
descent. Thus $\lambda_k$ must decay to zero. By
\eqref{eq:search-direction-properties-2}, we deduce that $\Delta
x_k$ also converges to zero:
\[
\norm{\Delta x_k}^2 \le \frac{1}{m} \Delta x_k^TH_k\Delta x_k
\le -\frac{1}{m}\lambda_k.
\]
Since the search direction $\Delta x_k$ is zero if and only if $x_k$
is an optimal solution (cf.\ Proposition
\ref{prop:first-order-conditions}), $x_k$ must converge to some
optimal solution $x^\star$.
\end{proof}
\subsection{Local convergence of the proximal Newton method}
\label{sec:convergence-proxnewton}
In this section and section \ref{sec:convergence-prox-quasinewton} we
study the convergence rate of the proximal Newton and proximal
quasi-Newton methods when the subproblems are solved exactly. First,
we state our assumptions on the problem.
\begin{definition}
\label{def:strong-convex}
A function $f$ is strongly convex with constant $m$ if
\begin{equation}
f(y) \ge f(x) + \nabla f(x)^T(y - x) + \frac{m}{2}\norm{x - y}^2\text{ for any }x,y.
\label{eq:strong-convex}
\end{equation}
\end{definition}
We assume the smooth part $g$ is strongly convex with constant $m$. This is a standard assumption in the analysis of Newton-type methods
for minimizing smooth functions. If $g$ is twice-continuously
differentiable, then this assumption is equivalent to $\nabla^2 g(x)
\succeq m I$ for any $x$. For our purposes, this assumption can
usually be relaxed by only requiring \eqref{eq:strong-convex} for any
$x$ and $y$ close to $x^\star$.
We also assume the gradient of the smooth part $\nabla g$ and Hessian $\nabla^2g$
are Lipschitz continuous with constants $L_1$ and $L_2$. The
assumption on $\nabla^2 g$ is standard in the analysis of Newton-type
methods for minimizing smooth functions. For our purposes, this
assumption can be relaxed by only requiring
\eqref{eq:lipschitz-continuity} for any $x$ and $y$ close to
$x^\star$.
The proximal Newton method uses the exact Hessian $H_k = \nabla^2
g(x_k)$ in the second-order model of $f$. This method converges
$q$-quadratically:
\[
\norm{x_{k+1}-x^\star} = O\big(\norm{x_k-x^\star}^2\big),
\]
subject to standard assumptions on the smooth part: that $g$ is
twice-continuously differentiable and strongly convex with constant
$m$, and $\nabla g$ and $\nabla^2 g$ are Lipschitz continuous with
constants $L_1$ and $L_2$.
We first prove an auxiliary result.
\begin{lemma}
\label{lem:newton-unit-step}
If $H_k = \nabla^2 g(x_k)$, the unit step
length satisfies the sufficient decrease condition
\eqref{eq:sufficient-descent} for $k$ sufficiently large.
\end{lemma}
\begin{proof}
Since $\nabla^2 g $ is Lipschitz continuous,
\begin{equation*}
g(x+\Delta x) \le g(x) + \nabla g(x) ^{T} \Delta x
+ \frac{1}{2} \Delta x^T\nabla^2 g(x)\Delta x
+ \frac{L_2}{6} \norm{\Delta x}^3.
\end{equation*}
We add $h(x+\Delta x)$ to both sides to obtain
\begin{align*}
f(x+\Delta x) &\le g(x) + \nabla g(x) ^{T} \Delta x
+ \frac{1}{2} \Delta x^T\nabla^2 g(x)\Delta x
+\frac{L_2}{6}\norm{\Delta x}^3 + h(x+\Delta x).
\end{align*}
We then add and subtract $h$ from the right-hand side to obtain
\begin{align*}
f(x+\Delta x) &\le g(x) + h(x) + \nabla g(x) ^{T} \Delta x+ h(x+\Delta x) - h(x)
\\ &\hspace{1pc}+ \frac{1}{2} \Delta x^T\nabla^2 g(x)\Delta x +\frac{L_2}{6}\norm{\Delta x}^3
\\ &\le f(x) +\lambda + \frac{1}{2}\Delta x^T\nabla^2 g(x)\Delta x
+ \frac{L_2}{6}\norm{\Delta x}^3.
\end{align*}
Since $g$ is strongly convex and $\Delta x$ satisfies
\eqref{eq:search-direction-properties-2}, we have
\[
f(x+\Delta x) \le f(x) +\lambda -\frac{1}{2}\lambda -\frac{L_2}{6m}\norm{\Delta x}\lambda.
\]
We rearrange to obtain
\begin{align*}
f(x+\Delta x)- f(x) &\le \frac{1}{2}\lambda -
\frac{L_2}{6m}\lambda\norm{\Delta x} \le \left(\frac{1}{2}
-\frac{L_2}{6m}\norm{\Delta x}\right)\lambda.
\end{align*}
We can show that $\Delta x_k$ decays to zero via the argument
we used to prove Theorem~\ref{thm:global-convergence}. Hence, if $k$
is sufficiently large, $\frac{1}{2} - \frac{L_2}{6m}\norm{\Delta x_k} \ge \alpha$,
and thus $f(x_k+\Delta x_k) -f(x_k) \le \alpha \lambda_k$ because $\lambda_k \le 0$.
\end{proof}
\begin{theorem}
\label{thm:newton-quadratic-convergence}
The proximal Newton method converges $q$-quadratically to $x^\star$.
\end{theorem}
\begin{proof}
Since the assumptions of Lemma \ref{lem:newton-unit-step} are
satisfied, unit step lengths satisfy the sufficient descent
condition:
\[
x_{k+1} = x_k + \Delta x_k =
\prox_h^{\nabla^2 g(x_k)}\left(x_k-\nabla^2 g(x_k)^{-1}\nabla g(x_k)\right).
\]
Since scaled proximal mappings are firmly non-expansive in the
scaled norm, we have
\begin{align*}
\norm{x_{k+1} - x^\star}_{\nabla^2 g(x_k)}
&= \big\|\prox_h^{\nabla^2 g(x_k)}(x_k-\nabla^2 g(x_k)^{-1}\nabla g(x_k))
\\ &\hspace{1pc}\pc - \prox_h^{\nabla^2 g(x_k)}(x^\star
- \nabla^2 g(x_k)^{-1}\nabla g(x^\star))\big\|_{\nabla^2 g(x_k)}
\\ &\le \norm{x_k - x^\star +
\nabla^2 g(x_k)^{-1}(\nabla g(x^\star) - \nabla g(x_k))}_{\nabla^2 g(x_k)}
\\ &\le \frac{1}{\sqrt{m}}\norm{\nabla^2 g(x_k)(x_k-x^\star)-\nabla g(x_k)+\nabla g(x^\star)}.
\end{align*}
Since $\nabla^2 g$ is Lipschitz continuous,
we have
\[
\norm{\nabla^2 g(x_k)(x_k-x^\star) - \nabla g(x_k)+\nabla g(x^\star)}
\le \frac{L_2}{2}\norm{x_k-x^\star}^2
\]
and we deduce that $x_k$ converges to $x^\star$ quadratically:
\[
\norm{x_{k+1}-x^\star} \le
\frac{1}{\sqrt{m}}\norm{x_{k+1}-x^\star}_{\nabla^2 g(x_k)} \le
\frac{L_2}{2m} \norm{x_k-x^\star}^2.
\]
\end{proof}
\subsection{Local convergence of proximal quasi-Newton methods}
\label{sec:convergence-prox-quasinewton}
If the sequence $\{H_k\}$ satisfies the Dennis-Mor\'{e} criterion
\cite{dennis1974characterization}, namely
\begin{align}
\frac{\norm{\left(H_{k} -\nabla^2 g(x^\star)\right)(x_{k+1}-x_k)}}{\norm{x_{k+1}-x_k}}\to 0,
\label{eq:dennis-more}
\end{align}
we can prove that a proximal quasi-Newton method converges
$q$-superlinearly:
\[
\norm{x_{k+1}-x^\star} = o(\norm{x_k-x^\star}).
\]
Again we assume that $g$ is twice-continuously differentiable and
strongly convex with constant $m$, and $\nabla g$ and $\nabla^2 g$ are
Lipschitz continuous with constants $L_1$ and $L_2$. These are the
assumptions required to prove that quasi-Newton methods for minimizing
smooth functions converge superlinearly.
First, we prove two auxiliary results: that (i) step lengths of unity
satisfy the sufficient descent condition after sufficiently many
iterations, and (ii) the proximal quasi-Newton step is close to the
proximal Newton step.
\begin{lemma}
\label{lem:quasinewton-unit-step}
If $\{H_k\}$ satisfy the Dennis-Mor\'{e}
criterion and $mI \preceq H_k \preceq MI$ for some $0 < m \le M$, then the unit step length satisfies the sufficient
descent condition \eqref{eq:sufficient-descent} after sufficiently
many iterations.
\end{lemma}
\begin{proof}
The proof is very similar to the proof of Lemma
\ref{lem:newton-unit-step}, and we defer the details to Appendix
\ref{sec:proofs}.
\end{proof}
The proof of the next result mimics the analysis of Tseng and Yun
\cite{tseng2009coordinate}.
\begin{proposition}
\label{prop:tseng}
Suppose $H_1$ and $H_2$ are positive definite matrices with
bounded eigenvalues: $m_1I\preceq H_1 \preceq M_1I$ and $m_2I\preceq
H_2 \preceq M_2I$. Let $\Delta x_1$ and $\Delta x_2$ be
the search directions generated using $H_1$ and $H_2$
respectively:
\begin{align*}
\Delta x_1 &= \prox_h^{H_1}\left(x-H_1^{-1}\nabla g(x)\right) - x, \\
\Delta x_2 &= \prox_h^{H_2}\left(x-H_2^{-1}\nabla g(x)\right) - x.
\end{align*}
Then there is some $\bar{\theta} > 0$ such that these two search
directions satisfy
\[
\norm{\Delta x_1 - \Delta x_2} \le
\sqrt{\frac{1+\bar{\theta}}{m_1}}\big\|(H_2-H_1)\Delta x_2\big\|^{1/2}
\norm{\Delta x_2}^{1/2}.
\]
\end{proposition}
\begin{proof}
By \eqref{eq:proxnewton-search-dir-1} and Fermat's rule, $\Delta x_1$
and $\Delta x_2$ are also the solutions to
\begin{align*}
\Delta x_1 &= \argmin_d\, \nabla g(x)^Td + \Delta x_1^TH_1d + h(x+d),
\\ \Delta x_2 &= \argmin_d\, \nabla g(x)^Td + \Delta x_2^TH_2d + h(x+d).
\end{align*}
Thus $\Delta x_1$ and $\Delta x_2$ satisfy
\begin{align*}
&\nabla g(x)^T\Delta x_1 + \Delta x_1^TH_1\Delta x_1 + h(x+\Delta x_1)
\\ &\hspace{1pc}\le \nabla g(x)^T\Delta x_2 + \Delta x_1^TH_1\Delta x_2 + h(x+\Delta x_2)
\end{align*}
and
\begin{align*}
&\nabla g(x)^T\Delta x_2 + \Delta x_2^TH_2\Delta x_2 + h(x+\Delta x_2)
\\ &\hspace{1pc}\le \nabla g(x)^T\Delta x_1 + \Delta x_2^TH_2\Delta x_1 + h(x+\Delta x_1).
\end{align*}
We sum these two inequalities and rearrange to obtain
\[
\Delta x_1^TH_1\Delta x_1 - \Delta x_1^T(H_1+H_2)\Delta x_2
+ \Delta x_2^TH_2\Delta x_2 \le 0.
\]
We then complete the square on the left side and rearrange to obtain
\begin{align*}
&\Delta x_1^TH_1\Delta x_1 - 2\Delta x_1^TH_1\Delta x_2 + \Delta x_2^TH_1\Delta x_2
\\ &\hspace{1pc}\le \Delta x_1^T(H_2-H_1)\Delta x_2 + \Delta x_2^T(H_1- H_2)\Delta x_2.
\end{align*}
The left side is $\norm{\Delta x_1 - \Delta x_2}_{H_1}^2$ and the
eigenvalues of $H_1$ are bounded. Thus
\begin{align}
\norm{\Delta x_1 - \Delta x_2}
&\le\frac{1}{\sqrt{m_1}}
\left(\Delta x_1^T(H_2-H_1)\Delta x_2
+ \Delta x_2^T(H_1- H_2)\Delta x_2\right)^{1/2} \nonumber
\\ &\le \frac{1}{\sqrt{m_1}}
\big\|(H_2-H_1)\Delta x_2\big\|^{1/2}
(\norm{\Delta x_1} + \norm{\Delta x_2})^{1/2}.
\label{eq:prop:tseng-1}
\end{align}
We use a result due to Tseng and Yun (cf.\ Lemma 3 in
\cite{tseng2009coordinate}) to bound the term $\left(\norm{\Delta
x_1} + \norm{\Delta x_2}\right)$. Let $P$ denote
$H_2^{-1/2}H_1H_2^{-1/2}$. Then $\norm{\Delta x_1}$ and
$\norm{\Delta x_2}$ satisfy
\[
\norm{\Delta x_1} \le
\left(
\frac{M_2\left( 1 + \lambda_{\max}(P)
+ \sqrt{1-2\lambda_{\min}(P)+\lambda_{\max}(P)^2}
\right)}{2m_1}
\right) \norm{\Delta x_2}.
\]
We denote the constant in parentheses by $\bar{\theta}$ and conclude that
\begin{align}
\norm{\Delta x_1} + \norm{\Delta x_2} \le (1 + \bar{\theta})\norm{\Delta x_2}.
\label{eq:prop:tseng-2}
\end{align}
We substitute \eqref{eq:prop:tseng-2} into \eqref{eq:prop:tseng-1} to obtain
\[
\norm{\Delta x_1 - \Delta x_2}
\le \sqrt{\frac{1+\bar{\theta}}{m_1}}
\big\|(H_2-H_1)\Delta x_2\big\|^{1/2}
\norm{\Delta x_2}^{1/2}.
\]
\end{proof}
We use these two results to show proximal quasi-Newton methods
converge superlinearly to $x^\star$ subject to standard assumptions on
$g$ and $H_k$.
\begin{theorem}
\label{thm:superlinear-convergence}
If $\{H_k\}$ satisfy the Dennis-Mor\'{e} criterion and $mI \preceq
H_k \preceq MI$ for some $0 < m \le M$, then a proximal quasi-Newton
method converges $q$-superlinearly to $x^\star$.
\end{theorem}
\begin{proof}
Since the assumptions of Lemma \ref{lem:quasinewton-unit-step} are
satisfied, unit step lengths satisfy the sufficient descent
condition after sufficiently many iterations:
\[
x_{k+1} = x_k + \Delta x_k.
\]
Since the proximal Newton method converges $q$-quadratically
(cf.\ Theorem \ref{thm:newton-quadratic-convergence}),
\begin{align}
\norm{x_{k+1} - x^\star} &\le \norm{x_k + \Delta x_k^{\rm nt} - x^\star}
+ \norm{\Delta x_k - \Delta x_k^{\rm nt}} \nonumber \\
&\le \frac{L_2}{m}\norm{x_k^{\rm nt}-x^\star}^2 + \norm{\Delta x_k
- \Delta x_k^{\rm nt}},
\label{eq:superlinear-convergence-1}
\end{align}
where $\Delta x_k^{\rm nt}$ denotes the proximal-Newton search
direction and $x^{\rm nt} = x_k + \Delta x_k^{\rm nt}$. We use
Proposition \ref{prop:tseng} to bound the second term:
\begin{align}
\norm{\Delta x_k - \Delta x_k^{\rm nt}} \le
\sqrt{\frac{1+\bar{\theta}}{m}}\norm{(\nabla^2 g(x_k)-H_k)\Delta
x_k}^{1/2}\norm{\Delta x_k}^{1/2}.
\label{eq:superlinear-convergence-2}
\end{align}
Since the Hessian $\nabla^2 g$ is Lipschitz continuous and $\Delta
x_k$ satisfies the Dennis-Mor\'{e} criterion, we have
\begin{align*}
\norm{\left(\nabla^2 g(x_k)-H_k\right)\Delta x_k}
&\le \norm{\left(\nabla^2 g(x_k)- \nabla^2 g(x^\star)\right)\Delta x_k}
+ \norm{\left(\nabla^2 g(x^\star)-H_k\right)\Delta x_k}
\\ &\le L_2\norm{x_k-x^\star}\norm{\Delta x_k} + o(\norm{\Delta x_k}).
\end{align*}
We know $\norm{\Delta x_k}$ is within a constant factor $\bar{\theta}_k$
of $\|\Delta x_k^{\rm nt}\|$ (cf.\ Lemma 3 in
\cite{tseng2009coordinate}). We also know the proximal Newton method
converges $q$-quadratically. Thus
\begin{align*}
\norm{\Delta x_k} &\le \bar{\theta}_k\norm{\Delta x_k^{\rm nt}}
= \bar{\theta}_k\norm{x_{k+1}^{\rm nt} - x_k}
\\ &\le \bar{\theta}_k\left(\norm{x_{k+1}^{\rm nt} -x^\star} + \norm{x_k - x^\star}\right)
\\ &\le O\big(\norm{x_k-x^\star}^2\big) + \bar{\theta}_k\norm{x_k-x^\star}.
\end{align*}
We substitute these expressions into
\eqref{eq:superlinear-convergence-2} to obtain
\begin{align*}
\norm{\Delta x_k - \Delta x_k^{\rm nt}} = o(\norm{x_k-x^\star}).
\end{align*}
We substitute this expression into
\eqref{eq:superlinear-convergence-1} to obtain
\[
\norm{x_{k+1} - x^\star}
\le \frac{L_2}{m} \norm{x_k^{\rm nt}-x^\star}^2
+ o(\norm{x_k-x^\star}),
\]
and we deduce that $x_k$ converges to $x^\star$ superlinearly.
\end{proof}
\subsection{Local convergence of the inexact proximal Newton method}
\label{sec:convergence-inexact-proxnewton}
Because subproblem \eqref{eq:proxnewton-search-dir-1} is rarely
solved exactly, we now analyze the adaptive stopping criterion
\eqref{eq:adaptive-stopping-condition}:
\[
\|G_{\hat{f}_k/M}(x_k + \Delta x_k)\| \le \eta_k\norm{G_{f/M}(x_k)}.
\]
We show that the inexact proximal Newton method with unit step length (i)
converges $q$-linearly if the forcing terms $\eta_k$ are smaller than
some $\bar{\eta}$, and (ii) converges $q$-superlinearly if the forcing
terms decay to zero.
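For concreteness, the composite gradient step and the adaptive stopping test can be sketched in NumPy for an $\ell_1$-regularized problem. This is an illustrative sketch only, under the assumption $h = \lambda\norm{\cdot}_1$: \texttt{soft\_threshold} is the proximal mapping of $t\lambda\norm{\cdot}_1$, the gradients of $g$ and of the local model $\hat{g}_k$ are passed in as arrays, and the names are ours rather than taken from any reference implementation.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal mapping of t*||.||_1 (elementwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def composite_gradient_step(x, grad, t, lam):
    # G_{tf}(x) = (x - prox_{th}(x - t*grad g(x))) / t, with h = lam*||.||_1
    return (x - soft_threshold(x - t * grad, t * lam)) / t

def adaptive_stop(x, dx, grad_model, grad_g, t, lam, eta):
    # ||G_{fhat_k/M}(x + dx)|| <= eta * ||G_{f/M}(x)||, with t = 1/M
    lhs = np.linalg.norm(composite_gradient_step(x + dx, grad_model, t, lam))
    rhs = eta * np.linalg.norm(composite_gradient_step(x, grad_g, t, lam))
    return lhs <= rhs
```

At a minimizer of $f$ the composite gradient step vanishes, so the residual norm on the left-hand side goes to zero as the subproblem is solved more accurately.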
As before, we assume (i) $g$ is twice-continuously differentiable and
strongly convex with constant $m$, and (ii) $g$ and $\nabla^2 g$ are
Lipschitz continuous with constants $L_1$ and $L_2$. We also assume
(iii) $x_k$ is close to $x^\star$, and (iv) the unit step length is
eventually accepted. These are the assumptions made by Dembo et al.\
and Eisenstat and Walker \cite{dembo1982inexact,eisenstat1996choosing}
in their analysis of \emph{inexact Newton methods} for minimizing
smooth functions.
First, we prove two auxiliary results that show (i) $G_{\hat{f}_k}$ is
a good approximation to $G_f$, and (ii) $G_{\hat{f}_k}$ inherits the
strong monotonicity of $\nabla\hat{g}$.
\begin{lemma}
\label{lem:G-newton-approx}
We have
\(
\|G_f(x) - G_{\hat{f}_k}(x)\| \le \frac{L_2}{2}\norm{x - x_k}^2.
\)
\end{lemma}
\begin{proof}
The proximal mapping is non-expansive:
\begin{align*}
\|G_f(x) - G_{\hat{f}_k}(x)\|
&\le \norm{\prox_h(x - \nabla g(x)) - \prox_h(x - \nabla\hat{g}_k(x))}
\le \norm{\nabla g(x) - \nabla\hat{g}_k(x)}.
\end{align*}
Since $\nabla g(x)$ and $\nabla^2 g(x_k)$ are Lipschitz continuous,
\[
\norm{\nabla g(x) - \nabla\hat{g}_k(x)}
\le \norm{\nabla g(x) - \nabla g(x_k) - \nabla^2 g(x_k)(x - x_k)}
\le \frac{L_2}{2}\norm{x - x_k}^2.
\]
Combining the two inequalities gives the desired result.
\end{proof}
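The non-expansiveness used in the proof can be checked numerically in the $\ell_1$ case, where the proximal mapping is soft-thresholding. This is a sanity check under that assumption, not part of the argument:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal mapping of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# non-expansiveness: ||prox(x) - prox(y)|| <= ||x - y|| for random pairs
rng = np.random.default_rng(0)
for _ in range(100):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lhs = np.linalg.norm(soft_threshold(x, 0.3) - soft_threshold(y, 0.3))
    assert lhs <= np.linalg.norm(x - y) + 1e-12
```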
The proof of the next result mimics the analysis of Byrd et al. \cite{byrd2013inexact}.
\begin{lemma}
\label{lem:G-strongly-monotone}
$G_{tf}(x)$ with $t \le \frac{1}{L_1}$
is strongly monotone with constant $\frac{m}{2}$, {\it i.e.},
\begin{equation}
(x-y)^T(G_{tf}(x) - G_{tf}(y)) \ge
\frac{m}{2}\norm{x - y}^2\text{ for }t\le\frac{1}{L_1}.
\label{eq:G-strongly-monotone-1}
\end{equation}
\end{lemma}
\begin{proof}
The composite gradient step on $f$ has the form
\begin{align*}
G_{tf}(x) = \frac{1}{t}(x - \prox_{th}(x - t\nabla g(x)))
\end{align*}
({\it cf.}{} \eqref{eq:composite-gradient-step}). We decompose
$\prox_{th}(x - t\nabla g(x))$ (by Moreau's decomposition) to obtain
\[
G_{tf}(x) = \nabla g(x) + \frac{1}{t}\prox_{(th)^*}(x - t\nabla g(x)).
\]
Thus $G_{tf}(x) - G_{tf}(y)$ has the form
\begin{align}
&G_{tf}(x) - G_{tf}(y) \nonumber
\\ &\hspace{1pc}= \nabla g(x) - \nabla g(y) + \frac{1}{t}
\left(\prox_{(th)^*}(x - t\nabla g(x)) - \prox_{(th)^*}(y - t\nabla g(y))\right).
\label{eq:G-strongly-monotone-2}
\end{align}
Let $w=\prox_{(th)^*}(x - t\nabla g(x)) - \prox_{(th)^*}(y - t\nabla
g(y))$ and
\[
d = x - t\nabla g(x) - (y - t\nabla g(y))
= (x - y) - t(\nabla g(x) - \nabla g(y)).
\]
We express \eqref{eq:G-strongly-monotone-2} in terms of $W =
\frac{ww^T}{w^Td}$ to obtain
\[
G_{tf}(x) - G_{tf}(y) = \nabla g(x) - \nabla g(y) + \frac{w}{t}
= \nabla g(x) - \nabla g(y) + \frac1tWd.
\]
We multiply by $x-y$ to obtain
\begin{align}
&(x-y)^T(G_{tf}(x) - G_{tf}(y)) \nonumber
\\ &\hspace{1pc} =(x-y)^T (\nabla g(x) - \nabla g(y)) + \frac1t(x-y)^T Wd \nonumber
\\ &\hspace{1pc} =(x-y)^T (\nabla g(x) - \nabla g(y)) + \frac1t(x-y)^T
W(x - y - t(\nabla g(x) - \nabla g(y)))
\label{eq:G-strongly-monotone-3}
\end{align}
Let $H(\alpha)= \nabla^2g (y+\alpha (x-y))$. By the mean value
theorem, we have
\begin{align}
&(x-y)^T(G_{tf}(x) - G_{tf}(y)) \nonumber
\\ &\hspace{1pc} = \int_{0}^{1} (x-y)^T \left( H(\alpha) -WH(\alpha)
+ \frac{1}{t} W\right) (x-y)\ d\alpha \nonumber
\\ &\hspace{1pc} = \int_{0}^{1} (x-y)^T \left( H(\alpha) -
\frac{1}{2} (WH(\alpha) +H(\alpha) W) +\frac{1}{t} W \right) (x-y)\ d\alpha
\label{eq:G-strongly-monotone-4}
\end{align}
To show \eqref{eq:G-strongly-monotone-1}, we must show that $H(\alpha)
+ \frac1t W -\frac12(WH(\alpha) + H(\alpha)W)$ is positive definite
for $t \le\frac{1}{L_1}$. We rearrange $(\sqrt{t} H(\alpha) -
\frac{1}{\sqrt{t}} W)(\sqrt{t} H(\alpha) - \frac{1}{\sqrt{t}}
W)\succeq 0$ to obtain
\[
t H(\alpha)^2 +\frac{1}{t} W^2 \succeq WH(\alpha) +H(\alpha)W,
\]
and we substitute this expression into
\eqref{eq:G-strongly-monotone-4} to obtain
\begin{align*}
&(x-y)^T(G_{tf}(x) - G_{tf}(y))
\\ &\hspace{1pc}\ge \int_{0}^{1} (x-y)^T \left(
H(\alpha) - \frac{t}{2} H(\alpha)^2
+ \frac{1}{t} \bigl(W - \frac{1}{2} W^2 \bigr)
\right)
(x-y)\ d\alpha.
\end{align*}
Since $\prox_{(th)^*}$ is firmly non-expansive, we have $\norm{w}^2
\le d^Tw$ and
\[
W = \frac{ww^T}{w^Td} = \frac{\norm{w}^2}{w^Td}\frac{ww^T}{\norm{w}^2}\preceq I.
\]
Since $W$ is positive semidefinite and $W\preceq I$, $W - W^2$ is positive
semidefinite and
\[
(x-y)^T(G_{tf}(x) - G_{tf}(y)) \ge \int_{0}^{1} (x-y)^T \left(
H(\alpha) -\frac{t}{2} H(\alpha)^2 \right) (x-y)\ d\alpha.
\]
If we set $t \le \frac{1}{L_1}$, the eigenvalues of $H(\alpha) -
\frac{t}{2}H(\alpha)^2$ are
\[
\lambda_i(\alpha) - \frac{t}{2}\lambda_i(\alpha)^2 \ge
\lambda_i(\alpha) - \frac{\lambda_i(\alpha)^2}{2L_1} \ge
\frac{\lambda_i(\alpha)}{2} \ge \frac{m}{2},
\]
where $\lambda_i(\alpha),i=1,\dots,n$ are the eigenvalues of $H(\alpha)$.
We deduce that
\[
(x-y)^T(G_{tf}(x) - G_{tf}(y)) \ge \frac{m}{2} \norm{x-y}^2.
\]
\end{proof}
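Moreau's decomposition, invoked in the proof above, can also be verified numerically in the $\ell_1$ case: with $h = \lambda\norm{\cdot}_1$, $\prox_{th}$ is soft-thresholding and $\prox_{(th)^*}$ is projection onto the $\ell_\infty$-ball of radius $a = t\lambda$. The following sketch assumes that setting:

```python
import numpy as np

a = 0.7  # a = t * lambda
x = np.array([2.0, -0.4, 0.7, -3.1, 0.0])

prox_th = np.sign(x) * np.maximum(np.abs(x) - a, 0.0)  # prox_{th}(x): soft-thresholding
prox_conj = np.clip(x, -a, a)                          # prox_{(th)^*}(x): projection onto ||u||_inf <= a

# Moreau's decomposition: x = prox_{th}(x) + prox_{(th)^*}(x)
assert np.allclose(prox_th + prox_conj, x)
```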
We use these two results to show that the inexact proximal Newton
method with unit step lengths converges locally linearly or superlinearly depending on the forcing terms.
\begin{theorem}
\label{thm:inexact-newton-linear-convergence}
Suppose
$x_0$ is sufficiently close to $x^\star$.
\begin{enumerate}
\item If $\eta_k$ is smaller than some $\bar{\eta} < \frac{m}{2}$, an
inexact proximal Newton method with unit step lengths converges $q$-linearly to $x^\star$.
\item If $\eta_k$ decays to zero, an inexact proximal Newton method with unit step lengths
converges $q$-super\-linearly to $x^\star$.
\end{enumerate}
\end{theorem}
\begin{proof}
The local model $\hat{f}_k$ is strongly convex with constant $m$.
According to Lemma \ref{lem:G-strongly-monotone},
$G_{\hat{f}_k/L_1}$ is strongly monotone with constant
$\frac{m}{2}$:
\[
(x-y)^T\left(G_{\hat{f}_k/L_1}(x) - G_{\hat{f}_k/L_1}(y)\right) \ge \frac{m}{2}\norm{x-y}^2.
\]
By the Cauchy-Schwarz inequality, we have
\[
\|G_{\hat{f}_k/L_1}(x) - G_{\hat{f}_k/L_1}(y)\| \ge \frac{m}{2}\norm{x-y}.
\]
We apply this result to $x_k +\Delta x_k$ and $x^\star$ to obtain
\begin{equation}
\norm{x_{k+1} - x^\star} =\norm{x_k + \Delta x_k - x^\star}
\le \frac{2}{m}\|G_{\hat{f}_k/L_1}(x_k
+ \Delta x_k) - G_{\hat{f}_k/L_1}(x^\star)\|.
\label{eq:inexact-newton-linear-convergence-0}
\end{equation}
Let $r_k$ be the residual $-G_{\hat{f}_k/L_1}(x_k + \Delta x_k)$.
The adaptive stopping condition
\eqref{eq:adaptive-stopping-condition} requires $\norm{r_k} \le
\eta_k\|G_{f/L_1}(x_k)\|$. We substitute this expression into
\eqref{eq:inexact-newton-linear-convergence-0} to obtain
\begin{align}
\norm{x_{k+1} - x^\star}
&\le \frac{2}{m}\|-G_{\hat{f}_k/L_1}(x^\star) -r_k\| \nonumber
\\ &\le \frac{2}{m}(\|G_{\hat{f}_k/L_1}(x^\star)\|+\norm{r_k}) \nonumber
\\ &\le \frac{2}{m} \left(\|G_{\hat{f}_k/L_1}(x^\star)\|+\eta_k\norm{G_{f/L_1}(x_k)}\right).
\label{eq:inexact-newton-linear-convergence-1}
\end{align}
Applying Lemma \ref{lem:G-newton-approx} to $f/L_1$ and $\hat f_k
/L_1$ gives
\[
\|G_{\hat{f}_k/L_1}(x^\star)\|
\le \frac12\frac{L_2}{L_1}\norm{x_k - x^\star}^2+\norm{
G_{f/L_1}(x^\star)}=\frac12\frac{L_2}{L_1}\norm{x_k -
x^\star}^2.
\]
We substitute this bound into
\eqref{eq:inexact-newton-linear-convergence-1} to obtain
\begin{align*}
\norm{x_{k+1} - x^\star} &\le \frac{2}{m} \left(
\frac{L_2}{2L_1}\norm{x_k - x^\star}^2
+ \eta_k\norm{G_{f/L_1}(x_k)}
\right)
\\ &\le \frac{L_2}{mL_1}
\norm{x_k - x^\star}^2
+ \frac{2\eta_k}{m}\norm{x_k - x^\star}.
\end{align*}
We deduce that (i) $x_k$ converges $q$-linearly to $x^\star$ if $x_0$ is
sufficiently close to $x^\star$ and $\eta_k \le \bar{\eta}$ for some
$\bar{\eta} < \frac{m}{2}$, and (ii) $x_k$ converges $q$-superlinearly
to $x^\star$ if $x_0$ is sufficiently close to $x^\star$ and $\eta_k$
decays to zero.
\end{proof}
Finally, we justify our choice of forcing terms: if we choose $\eta_k$
according to \eqref{eq:forcing-term}, then the inexact proximal Newton
method converges $q$-superlinearly. When minimizing smooth functions, we recover the result of Eisenstat and Walker on choosing forcing terms in an inexact Newton method \cite{eisenstat1996choosing}.
\begin{theorem}
\label{thm:forcing-term-1}
Suppose
$x_0$ is sufficiently close to $x^\star$. If we choose $\eta_k$
according to \eqref{eq:forcing-term}, then the inexact proximal
Newton method with unit step lengths converges $q$-superlinearly.
\end{theorem}
\begin{proof}
To show superlinear convergence, we must show
\begin{align}
\frac{\|G_{\hat{f}_{k-1}/L_1}(x_k)-G_{f/L_1}(x_k)\|}{\norm{G_{f/L_1}(x_{k-1})}}\to 0.
\label{eq:eta-0}
\end{align}
By Lemma \ref{lem:G-newton-approx}, we have
\begin{align*}
\|G_{\hat{f}_{k-1}/L_1}(x_k)-G_{f/L_1}(x_k)\|
&\le \frac12\frac{L_2}{L_1} \norm{x_k-x_{k-1}}^2
\\ &\le \frac12\frac{L_2}{L_1}
\left(\norm{x_k - x^\star} +\norm{x^\star - x_{k-1}}\right)^2.
\end{align*}
By Lemma \ref{lem:G-strongly-monotone}, we also have
\[
\norm{G_{f/L_1}(x_{k-1})} = \norm{G_{f/L_1} (x_{k-1}) - G_{f/L_1}
(x^\star)} \ge \frac{m}{2} \norm{x_{k-1} - x^\star}.
\]
We substitute these expressions into \eqref{eq:eta-0} to obtain
\begin{align*}
&\frac{\|G_{\hat{f}_{k-1}/L_1}(x_k)-G_{f/L_1}(x_k)\|}
{\norm{G_{f/L_1}(x_{k-1})}}
\\ &\hspace{1pc}\le \frac{\frac12\frac{L_2}{L_1}
\left(\norm{x_k - x^\star} + \norm{x^\star - x_{k-1}}\right)^2}
{\frac{m}{2} \norm{x_{k-1} - x^\star}}
\\ &\hspace{1pc}= \frac{1}{m}\frac{L_2}{L_1}
\frac{\norm{x_k - x^\star} + \norm{x_{k-1}-x^\star}}
{\norm{x_{k-1} -x^\star}}
\left( \norm{x_k - x^\star} +\norm{x_{k-1}-x^\star}\right)
\\ &\hspace{1pc}= \frac{1}{m}\frac{L_2}{L_1}
\left(1 + \frac{\norm{x_k - x^\star}}{\norm{x_{k-1} -x ^\star}} \right)
\left(\norm{x_k - x^\star} + \norm{x_{k-1}-x^\star}\right).
\end{align*}
By Theorem \ref{thm:inexact-newton-linear-convergence}, we have
$\frac{\norm{x_k - x^\star}}{\norm{x_{k-1} - x^\star}} < 1$ and
\[
\frac{\|G_{\hat{f}_{k-1}/L_1}(x_k)-G_{f/L_1}(x_k)\|}
{\norm{G_{f/L_1}(x_{k-1})}}
\le \frac{2}{m} \frac{L_2}{L_1}
\left( \norm{x_k - x^\star} + \norm{x_{k-1}-x^\star}\right).
\]
We deduce (with Theorem \ref{thm:inexact-newton-linear-convergence})
that the inexact proximal Newton method with adaptive stopping
condition \eqref{eq:adaptive-stopping-condition} converges
$q$-superlinearly.
\end{proof}
\section{Computational experiments}
\label{sec:experiments}
First we explore how inexact search directions affect the convergence
behavior of proximal Newton-type methods on a problem in
bioinformatics. We show that choosing the forcing terms according to
\eqref{eq:forcing-term} avoids ``oversolving'' the subproblem. Then
we demonstrate the performance of proximal Newton-type methods using a
problem in statistical learning. We show that the methods are suited
to problems with expensive smooth function evaluations.
\subsection{Inverse covariance estimation}
Suppose i.i.d.\ samples $x^{(1)},\dots,x^{(m)}$ are from
a Gaussian Markov random field (MRF) with mean zero and unknown inverse covariance
matrix $\bar{\Theta}$:
\[
\mathop{\mathbf{Pr}}(x;\bar{\Theta}) \propto \exp\left(-x^T\bar{\Theta} x/2\right).
\]
We seek a sparse maximum likelihood estimate of the inverse covariance matrix:
\begin{align}
\hat{\Theta} := \argmin_{\Theta \in \mathbf{R}^{n\times
n}}\,\trace\left(\hat{\Sigma}\Theta\right) - \log\det(\Theta) +
\lambda\norm{\mathrm{vec}(\Theta)}_1,
\label{eq:l1-logdet}
\end{align}
where $\hat{\Sigma}$ denotes the sample covariance matrix. We
regularize using an entry-wise $\ell_1$ norm to avoid overfitting the
data and to promote sparse estimates. The parameter $\lambda$ balances
goodness-of-fit and sparsity.
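To make the objective concrete, a direct NumPy evaluation of \eqref{eq:l1-logdet} might look as follows. This is an illustrative sketch of the objective only (not the estimation algorithm); \texttt{slogdet} is used for numerical stability and the function assumes $\Theta \succ 0$:

```python
import numpy as np

def l1_logdet_objective(Theta, Sigma_hat, lam):
    # trace(Sigma_hat @ Theta) - log det(Theta) + lam * ||vec(Theta)||_1
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    return np.trace(Sigma_hat @ Theta) - logdet + lam * np.abs(Theta).sum()
```

With $\hat{\Sigma} = I$ and $\lambda = 0$ the objective is minimized at $\Theta = I$ with value $n$, which gives a quick correctness check.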
We use two datasets: (i) Estrogen, a gene expression dataset
consisting of 682 probe sets collected from 158 patients, and (ii)
Leukemia, another gene expression dataset consisting of 1255 genes
from 72 patients.\footnote{These datasets are available from
\url{http://www.math.nus.edu.sg/~mattohkc/} with the SPINCOVSE
package.} The features of Estrogen were converted to log-scale and
normalized to have zero mean and unit variance. The regularization
parameter $\lambda$ was chosen to match the values used in
\cite{rolfs2012iterative}.
We solve the inverse covariance estimation problem
\eqref{eq:l1-logdet} using a proximal BFGS method, {\it i.e.}, $H_k$ is
updated according to the BFGS updating formula.
(The proximal Newton method would be
computationally very expensive on these large datasets.)
To explore how inexact search directions affect the convergence
behavior, we use three rules to decide how accurately to solve
subproblem \eqref{eq:proxnewton-search-dir-1}:
\begin{enumerate}
\item adaptive: stop when the adaptive stopping condition
\eqref{eq:adaptive-stopping-condition} is satisfied;
\item exact: solve the subproblem accurately (``exactly'');
\item stop after 10 iterations.
\end{enumerate}
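In sketch form, the three rules differ only in the inner loop's termination test. The following is our own schematic, not the TFOCS implementation: \texttt{fista\_step} stands in for one FISTA iteration on the subproblem and \texttt{grad\_map\_norm} for the norm of its composite gradient step.

```python
def solve_subproblem(fista_step, grad_map_norm, eta_k, g_norm_xk, rule,
                     tol=1e-8, max_iters=500):
    # rule is one of "adaptive", "exact", "fixed10"
    it = 0
    while True:
        z = fista_step()  # one FISTA iteration on the subproblem
        it += 1
        if rule == "adaptive" and grad_map_norm(z) <= eta_k * g_norm_xk:
            break  # adaptive stopping condition satisfied
        if rule == "exact" and (grad_map_norm(z) <= tol or it >= max_iters):
            break  # subproblem solved (essentially) exactly
        if rule == "fixed10" and it >= 10:
            break  # fixed iteration budget
    return z, it
```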
We use the TFOCS implementation of FISTA to solve the subproblem. We
plot relative suboptimality versus function evaluations and time on
the Estrogen dataset in Figure \ref{fig:estrogen} and on the Leukemia
dataset in Figure \ref{fig:leukemia}.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{estrogen_fun_evals}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{estrogen_time}
\end{subfigure}
\caption{Inverse covariance estimation problem (Estrogen
dataset). Convergence behavior of proximal BFGS method with three
subproblem stopping conditions.}
\label{fig:estrogen}
\end{figure}
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{leukemia_fun_evals}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{leukemia_time}
\end{subfigure}
\caption{Inverse covariance estimation problem (Leukemia
dataset). Convergence behavior of proximal BFGS method with three
subproblem stopping conditions.}
\label{fig:leukemia}
\end{figure}
Although the conditions for superlinear convergence (cf.\ Theorem
\ref{thm:superlinear-convergence}) are not met ($\log\det$ is not
strongly convex), we empirically observe in Figures \ref{fig:estrogen}
and \ref{fig:leukemia} that a proximal BFGS method transitions from
linear to superlinear convergence. This transition is characteristic
of BFGS and other quasi-Newton methods with superlinear convergence.
On both datasets, the exact stopping condition yields the fastest
convergence (ignoring computational expense per step), followed
closely by the adaptive stopping condition (see Figures
\ref{fig:estrogen} and \ref{fig:leukemia}). If we account for time per
step, then the adaptive stopping condition yields the fastest
convergence. Note that the adaptive stopping condition yields
superlinear convergence (like the exact proximal BFGS method). The
third condition (stop after 10 iterations) yields only linear
convergence (like a first-order method), and its convergence rate is
affected by the condition number of $\hat{\Theta}$. On the Leukemia
dataset, the condition number is worse and the convergence is slower.
\subsection{Logistic regression}
Suppose we are given samples $x^{(1)},\dots,x^{(m)}$ with labels
$y^{(1)},\dots,y^{(m)}\in\{-1,1\}$. We fit a logit model to our data:
\begin{align}
\minimize_{w \in \mathbf{R}^n}\,\frac{1}{m}
\sum_{i=1}^m \log(1+\exp(-y_i w^Tx_i)) + \lambda\norm{w}_1.
\label{eq:l1-logistic}
\end{align}
Again, the regularization term $\norm{w}_1$ promotes sparse solutions
and $\lambda$ balances sparsity with goodness-of-fit.
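A direct NumPy evaluation of \eqref{eq:l1-logistic} can be sketched as follows (an illustrative sketch of the objective; \texttt{np.logaddexp(0, -t)} computes $\log(1+e^{-t})$ stably):

```python
import numpy as np

def l1_logistic_objective(w, X, y, lam):
    # (1/m) * sum_i log(1 + exp(-y_i * w^T x_i)) + lam * ||w||_1
    margins = y * (X @ w)
    return np.mean(np.logaddexp(0.0, -margins)) + lam * np.abs(w).sum()
```

At $w = 0$ every term equals $\log 2$, which gives a quick correctness check.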
We use two datasets: (i) \texttt{gisette}, a handwritten digits
dataset from the NIPS 2003 feature selection challenge ($n=5000$), and
(ii) \texttt{rcv1}, an archive of categorized news stories from
Reuters ($n=47,000$).\footnote{These datasets are available at
\url{http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets}.} The
features of \texttt{gisette} have been scaled to be within the
interval $[-1,1]$, and those of \texttt{rcv1} have been scaled to be
unit vectors. $\lambda$ matched the value reported in
\cite{yuan2012improved}, where it was chosen by five-fold cross
validation on the training set.
We compare a proximal L-BFGS method with SpaRSA and the TFOCS
implementation of FISTA (also Nesterov's 1983 method) on problem
\eqref{eq:l1-logistic}.
We plot relative suboptimality versus function evaluations and time on
the \texttt{gisette} dataset in Figure \ref{fig:gisette} and on the
\texttt{rcv1} dataset in Figure \ref{fig:rcv1}.
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{gisette_fun_evals}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{gisette_time}
\end{subfigure}
\caption{Logistic regression problem (\texttt{gisette}
dataset). Proximal L-BFGS method (L = 50) versus FISTA and
SpaRSA.}
\label{fig:gisette}
\end{figure}
\begin{figure}
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{rcv1_fun_evals}
\end{subfigure}%
~
\begin{subfigure}{0.5\textwidth}
\includegraphics[width=\textwidth]{rcv1_time}
\end{subfigure}
\caption{Logistic regression problem (\texttt{rcv1}
dataset). Proximal L-BFGS method (L = 50) versus FISTA and
SpaRSA.}
\label{fig:rcv1}
\end{figure}
The smooth part of the function requires many expensive exp/log
evaluations. On the dense \texttt{gisette} dataset (30
million nonzero entries in a $6000 \times 5000$ design matrix), evaluating
$g$ dominates the computational cost. The proximal L-BFGS method
clearly outperforms the other methods because the computational
expense is shifted to solving the subproblems, whose objective
functions are cheap to evaluate (see Figure \ref{fig:gisette}). On the
sparse \texttt{rcv1} dataset (40 million nonzero entries in a $542000
\times 47000$ design matrix), the evaluation of $g$ makes up a
smaller portion of the total cost, and the proximal L-BFGS method
barely outperforms SpaRSA (see Figure \ref{fig:rcv1}).
\subsection{Software: PNOPT}
The methods described have been incorporated into a \textsc{Matlab}
package PNOPT (Proximal Newton OPTimizer, pronounced pee-en-opt) and
are publicly available from the Systems Optimization Laboratory (SOL)%
\footnote{\url{http://www.stanford.edu/group/SOL/}}. PNOPT shares an
interface with the software package TFOCS \cite{becker2011templates}
and is compatible with the function generators included with TFOCS. We
refer to the SOL website for details about PNOPT.
\section{Conclusion}
Given the popularity of first-order methods for minimizing composite
functions, there has been a flurry of activity around the development of
Newton-type methods for minimizing composite functions
\cite{hsieh2011sparse, becker2012quasi, olsen2012newton}. We analyze
proximal Newton-type methods for such functions and show that they have
several benefits over first-order methods:
\begin{enumerate}
\item They converge rapidly near the optimal
solution, and can produce a solution of high accuracy.
\item They scale well with problem size.
\item The proximal Newton method is insensitive to the choice of
coordinate system and to the condition number of the level sets of
the objective.
\end{enumerate}
Proximal Newton-type methods can readily handle composite functions
where $g$ is not convex, although care must be taken to ensure
$\hat{g}_k$ remains strongly convex. The convergence analysis could
be modified to give global convergence (to stationary points) and
convergence rates near stationary points. We defer these extensions to future work.
The main disadvantage of proximal Newton-type methods is the cost of
solving the subproblems. We have shown that it is possible to reduce
the cost and retain the fast convergence rate by solving the
subproblems inexactly. We hope our results will kindle further
interest in proximal Newton-type methods as an alternative to
first-order methods and interior point methods for minimizing
composite functions.
\section*{Acknowledgements}
We thank Santiago Akle, Trevor Hastie, Nick Henderson, Qihang Lin,
Xiao Lin, Qiang Liu, Ernest Ryu, Ed Schmerling, Mark Schmidt, Carlos
Sing-Long, Walter Murray, and four anonymous referees for their
insightful comments and suggestions.
J. Lee was supported by a National Defense Science and Engineering
Graduate Fellowship and a
Stanford Graduate Fellowship.
Y. Sun and M. Saunders were partially supported by the Department
of Energy through the Scientific Discovery through Advanced
Computing program under award DE-FG02-09ER25917,
and by the National Institute of General Medical Sciences of
the National Institutes of Health under award U01GM102098.
M.~Saunders was also partially supported by
the Office of Naval Research under award N00014-11-1-0067.
The content is solely the responsibility of the authors and does not
necessarily represent the official views of the funding agencies.
\section{Introduction}
\label{sec:introduction}
\iffalse
- broad view of the field and growing interest
- gai
- many different subtasks
- we focus on one
- challenges:
understanding a text
iteratively towards increasingly more complicated questions/tasks
each time different type of questions
in this context, we ask our self how the extension from one setup to the next can be done
\fi
Machine Reading Comprehension (MRC) entails engineering an agent to answer a query about a given context. The complexity of the task comes from the need for the agent to understand both the question and the context. Progress has been largely driven by datasets that have addressed increasingly difficult intermediate tasks. In particular, the SQuAD 1.1 dataset \cite{squad1} was released in 2016, providing an extensive set of paragraphs, questions and answers. As models rivalled human performance on that dataset, SQuAD 2 was released with an additional 50,000 adversarially written unanswerable questions.
Motivated by the general question of how an MRC agent can be adapted when its original MRC task assumptions are relaxed, we work on the specific research problem of relaxing the answerability assumption on the MRC task, and we evaluate our work using the SQuAD 2 dataset.
QANet \cite{qanet} is a feedforward architecture using only convolutions and attention mechanisms for MRC. It is devoid of recurrence, which is a typical ingredient in previous MRC models, and despite its simplicity it achieved state-of-the-art performance on SQuAD 1.1. Observing the absence of a mechanism in QANet to allow for unanswerability, and noting that to the best of our knowledge there has so far been no effort to incorporate one, we decided to base our work on this architecture. Our contribution is two-fold:
Firstly, we present EQuANt, which extends the original QANet architecture to include an answerability module. Working within the time and resource constraints of this project, we achieved a 63.5 F1 score on SQuAD 2, almost double the accuracy of our baseline QANet method. For the sake of reproducibility, we make available an open-source implementation of our model at \url{https://github.com/Francois-Aubet/EQuANt}.
Secondly, we show that by training EQuANt to accomplish two distinct tasks simultaneously, namely answerability prediction and answer extraction, we improve the model's performance on SQuAD 1.1 from that of QANet, verifying that a multitask learning approach can improve an MRC model's performance.
We begin in section \ref{sec:Background} by presenting the background necessary to motivate and understand our contribution. In section \ref{sec:Related Work}, we give an overview of related work and how it complements and differs from our work. In sections \ref{sec:Methods}, \ref{sec: experiment} and \ref{sec: results} we illustrate the design of our model and present and discuss our experimental results. Finally, in section \ref{sec:Conclusion}, we summarise our work and propose potential future work which would extend our contribution.
\section{Background}
\label{sec:Background}
\iffalse
- "Idea is to provide the reader with the background necessary for the rest of the report."
- SQuAD 1 and *that* problem
- QANet paper
- SQuAD 2 and our "extension problem"
- research question
- problem formulation
-- preliminary problem given by SQuAD 1, solved by QANet and other methods, also talk about QANet.
-- unanswerable problem formulation - SQuAD 2.
- datasets (SQuAD 1 and 2)
\fi
The problem of question answering can be formulated specifically in the open domain setting in the following way:
\textit{Given a question, or query, sequence $Q = (q_1, ..., q_{m})$, and a context paragraph sequence $C = (c_1, ...,c_{n})$, assume the answer to the question is a unique connected subsequence of $C$, then identify that subsequence; i.e., identify $i,j\in \{1,..., n\}$, $i \leq j$, such that the span $A = (c_{i},..., c_{j})$ is the answer to the query $Q$.} \hspace*{\fill} ($\star$)
A recent and significant dataset responsible for much of the development of models in tackling the above-formulated problem is the Stanford Question Answering Dataset (SQuAD), and more specifically its two versions, SQuAD 1.0 and SQuAD 1.1, \cite{squad1}. SQuAD consists of over 100,000 crowdsourced comprehension questions and answers based on Wikipedia articles. Importantly, this dataset is large enough to support complex deep learning models and contains a mixture of long- and short-phrase answers which are directly implied by the associated passage. Since its introduction, SQuAD has inspired healthy competition among researchers to hold the state-of-the-art position on its leaderboard.
The success of an MRC model hinges on its ability to represent both the structures of the questions and contexts, and the relationship between the questions and the contexts. The two most prominent methods in the literature to represent the structures of such kinds of sequential data are attention and recurrence, thus it is not surprising that the best performing models on the SQuAD 1.0 leaderboard are attention-based models,
e.g. BERT \cite{BERT}, and RNN-based models, e.g. R\-Net \cite{rnet}. One prominent attention-based candidate on the leaderboard is QANet \cite{qanet}, upon which our work is built. We will now provide a brief introduction to QANet and motivate our decision to work with this model.
QANet consists of five functional blocks: a context processing block, a question processing block, a context-query block, a start-probability block and an end-probability block. See figure \ref{fig:equant} for a high level representation of the model. Within the context, question and context-query blocks, an embedding encoder of the form shown in figure \ref{fig:1_encoder} is used repeatedly. This is very similar to the transformer encoder block introduced in \cite{need_paper}, but possesses an additional convolutional layer after positional encoding and before the layernorm and self-attention layer.
These additional separable convolutional layers enable the model to capture local structure of the input data. Having passed through the context-query block, the data is then passed into the two probability blocks, which are both standard feed-forward layers with softmax, to calculate the probability of each word being a start- or end-word. For a detailed description of each portion of the model, the reader is referred to the original paper \cite{qanet}, however further discussion of the components most relevant to our architecture design and experiments can be found in section \ref{subsec:methods_base_qa}.
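At inference time, models of this kind typically predict the span $(i,j)$, $i \le j$, maximizing $p_1(i)\,p_2(j)$, where $p_1$ and $p_2$ are the start- and end-probability outputs. A minimal linear-time sketch of this decoding step (ours, not taken from the QANet code) is:

```python
def best_span(p_start, p_end):
    # argmax over i <= j of p_start[i] * p_end[j], computed in O(n)
    best, best_prob = (0, 0), -1.0
    best_i = 0  # index of the largest p_start[0..j] seen so far
    for j in range(len(p_end)):
        if p_start[j] > p_start[best_i]:
            best_i = j
        prob = p_start[best_i] * p_end[j]
        if prob > best_prob:
            best_prob, best = prob, (best_i, j)
    return best
```

For example, with `p_start = [0.1, 0.6, 0.3]` and `p_end = [0.5, 0.1, 0.4]` the best admissible span is `(1, 2)`.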
The original QANet authors \cite{qanet} achieved a result of 73.6 Exact Match (EM)
and 82.7 F1 score on the SQuAD 1.1 dataset, placing it among the then state-of-the-art models. This is quite remarkable given its apparent conceptual and practical simplicity. Armed with separable convolution and an absence of recurrence, it is able to achieve its results whilst having a faster training and inference time than all of the RNN-based models preceding it \cite{qanet}. Thus we are motivated to investigate the properties of a simple, efficient and accurate model in hope of gaining fundamental understanding of question-answer modelling.
As methods on the top of the SQuAD 1.1 leaderboard started to outperform human EM and F1 scores, a more challenging task was called for, leading to the entrance of SQuAD 2.0 \cite{squad2}. In addition to SQuAD 1.1, SQuAD 2.0 added over 50,000 unanswerable questions written adversarially to look similar to answerable ones.
Under this new setting, we reformulate the question answering problem as follows:
\noindent \textit{For $Q$ and $C$ as given in $(\star)$, relax the assumption that an answer exists, but if it does then assume it is a unique connected subsequence of $C$.
First identify the value of the indicator variable $b \in \{0, 1\}$, such that if the answer exists, then $b=1$, otherwise $b=0$. Furthermore, if the context contains the answer, then identify $i,j \in \{1,...,n\}$, $i \leq j$, such that the span $A = (c_{i},...,c_{j})$ is the answer to the query $Q$.} \hspace*{\fill}($\star \star$)
Inspecting the QANet architecture, it is not hard to see that the model would not give the desired prediction for unanswerable questions, as the value of $b$ is assumed to be $1$ and the length of the span is at least $1$. This motivates our extension of QANet to accommodate for unanswerable questions and identifies SQuAD 2.0 as an appropriate benchmark dataset for our work.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{1_encoder_block.pdf}
\caption{The mechanism of one QANet encoder block.}
\label{fig:1_encoder}
\end{figure}
\section{Related Work}
\label{sec:Related Work}
\subsection{Open domain question answering}
Recurrence has traditionally been a key component in many natural language tasks, including QA, and since the introduction of attention mechanisms into the QA domain, models have also found success by using attention to guide recurrence, such as the
BiDAF and SAN models \cite{BiDAF,SAN}. A key drawback of traditional recurrence-based architectures is the long training time due to the $O(n)$ complexity of modelling relations between words which are $n$ words apart. Replacing recurrence with pure attention reduces this complexity to constant, providing faster algorithms \cite{need_paper}.
SAN, alongside other models, e.g. \cite{hill16, dhingral16}, used multi-step reasoning, implemented using recurrent layers, to predict the answer spans. The purpose of using multiple reasoning states is to extract higher order logical relations in contexts. For example, the model may first learn what a pronoun is referring to before extracting the answer span based upon the reference. In contrast, QANet used entirely multi-headed attention and convolution mechanisms, which encapsulate the intra-context relations, and is in addition superior at modelling long-range relations. Moreover, the large recurrence component of these models creates a burden on training speed, whereas QANet's attention and separable convolution approach saves an order of magnitude on the training complexity by the result stated in the first paragraph of this section.
The transformer architecture proposed in
\cite{need_paper} and its
pre-trained counterpart, BERT \cite{BERT}, have become the common factor in all leading QA models. Unlike QANet, which is specifically designed for QA, BERT is an all-in-one model, capable of aiding many natural language tasks. Thus it is not surprising that BERT is a much larger model than QANet, containing $110M$ parameters in the base model, compared with the fewer than $5M$ parameters that were present in QANet during our computational experiments. As a result, BERT will have significantly greater inference time than QANet. Furthermore, as a result of its multi-faceted abilities, impressive though they are, BERT is less capable of illustrating the interaction of the model with the QA problem specifically. QANet, however, as a simple feed-forward, QA-specific model with less training time, allows more insights into the model's reaction to the problem, hence providing researchers with more intuitions into model enhancement.
\subsection{Unanswerability extension}
One body of unanswerability extension relies on incorporating a no-answer score to the model, which is the main inspiration for our work. Levy et al. extended BiDAF \cite{Levy_et_al, BiDAF} to deal with unanswerable questions by effectively setting a threshold $p$, whereby the model will output no answer if the model's highest confidence in any answers is less than $p$. Our work uses an approach similar to Levy's to verify that QANet generates generally lower probabilities in a dataset with unanswerable questions, but our final model adopts a more explicit approach.
In \cite{extended_SAN}, the original SAN authors extended SAN to accommodate unanswerable questions. Their work added an extra feed-forward module for discrimination of answerable/unanswerable questions and trained an objective which jointly accounts for answer correctness and answerability. We take inspiration from this extended SAN, but the summary statistic fed into the answerability module in our model is obtained from a fundamentally different procedure which is completely devoid of recurrence. We also favour the approach of minimising a joint objective over different tasks in response to recent successes of multi-task approaches in NLP, which suggest that a learner's generalisation ability improves as it tries to accomplish more than one task.
Read+Verify and UNet \cite{read_verify, unet} both use an additional answer verifier to improve performance by finding local entailment that supports the answer by comparing the answer sentence with the question. The local entailment finding is able to improve the answer accuracy as it is sensitive to specific types of unanswerability, such as an impossible condition. Due to time constraints, we leave the exploration of utilising verification modules for future work.
\section{Methods}
\label{sec:Methods}
\subsection{Light QANet implementation}\label{subsec:methods_base_qa}
Given that our model builds directly upon QANet, a natural first step
was to work with a computational implementation of this base model. We chose to
use the open-source Tensorflow implementation of QANet hosted at \url{https://github.com/NLPLearn/QANet}
for our QANet experiments and as a base for our extension. A particularly attractive
aspect of this implementation is that it allows straightforward customisation of the
hyperparameters involved in the QANet model. In order to allow for a larger number of
design iterations and to account for limited computational resources, we chose to utilise
this customisability to make our trained QANet model lightweight, and we refer to this
scaled-down version of the original QANet as ``light QANet". Although this choice is likely
to mean that our results, quoted in section \ref{sec: results}, could be improved by increasing
the complexity of the architecture, we stress that our aim is not to surpass state-of-the-art
performance on SQuAD2, but instead to show that it is possible to successfully extend QANet
to the unanswerable domain.
\begin{figure}[t]
\centering
\includegraphics[scale = 0.35]{EQuANt_diagram.pdf}
\caption{EQuANt architecture: combination of QANet and unanswerability extension module.}
\label{fig:equant}
\end{figure}
As in the original paper, our character embeddings are trainable and are initialised by
truncating GloVe vectors \cite{glove}. However, in the interest of model size, we choose to retain
$p_2' = 64$ of the $p_1 = 300$ dimensions of each GloVe vector rather than $p_2 = 200$ as
in the original paper. We then used the character embedding convolution of QANet to map the $p_2' = 64$-dimensional vector to a $p_2'' = 96$-dimensional
representation, making the output of
our context and query input embedding layers of dimension
$p_1 + p_2'' = 396$, rather than $p_1 + p_2 = 500$ used by the original authors, resulting
in a significant reduction in the number of parameters in our model. Having utilised the input embedding to represent each word in the context and query
as a $396$-dimensional vector, these vectors then flow into the embedding encoder blocks
(of the form shown in figure \ref{fig:1_encoder}),
where they are transformed using a series of convolutions, feedforward layers and self-attention.
In these encoder blocks and throughout the rest of the network, we choose our hidden layer size
to be $96$, as opposed to original QANet's hidden layer size of $128$. Furthermore,
although the typical transformer architecture relies on multi-headed self-attention,
with the original QANet using $8$ heads in all layers,
this introduces additional computational overhead. As a result, we minimise this
by using only a single head; however, it is straightforward to change the number of heads in our implementation, and doing so may yield fruitful results. All other architecture and hyperparameter choices match those described in the original paper \cite{qanet}.
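The dimension bookkeeping above can be summarised as a quick sanity check (values as quoted in the text, not model code):

```python
# Embedding dimensions: light QANet vs the original QANet
p1 = 300        # GloVe word-embedding dimension, kept in full
p2_trunc = 64   # truncated character-embedding dimension (p_2')
p2_conv = 96    # character dimension after the embedding convolution (p_2'')

light_qanet_embed = p1 + p2_conv      # our input-embedding output size
original_qanet_embed = p1 + 200       # p_2 = 200 in the original paper
```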
As well as using this process to gain an understanding of the inner workings of QANet,
we utilised light QANet to provide a principled initialisation for the training of our
extended architecture. In \cite{qanet}, the authors describe how they used an augmented
dataset generated using neural machine translation and how this significantly improved their
results. As having access to this dataset would likely result in improved outcomes for our
model, we initiated contact with the QANet authors, however access to the augmented dataset
was not granted. As a result, we trained light QANet on SQuAD 1.1 for $32,000$ iterations, providing
the results shown in table \ref{table:squad2results} and saved the corresponding weights, allowing them to be restored and used as principled initialisations when performing subsequent model training.
\subsection{Problem Analysis}
\label{subsec: prob_analysis}
In order to gain a better understanding of our research problem both conceptually and practically, and to assess our intuition to extend QANet with an extra answerability module, we specifically ask ourselves two sub-questions: 1) To what extent does QANet distinguish between answerable and unanswerable questions? 2) What should be chosen as the input to our answerability module?
We first ran our light QANet on answerable and unanswerable questions extracted from the SQuAD 2 dataset
and analysed the results.
\begin{figure}[t]
\centering
\includegraphics[scale = 0.35]{equant_attempts.pdf}
\caption{Attempts to extend QANet to EQuANt.}
\label{fig:attempts}
\end{figure}
Our results showed that QANet assigns generally lower probability to all possible ``answers" to unanswerable questions. More precisely, given context-query pairs from answerable and unanswerable questions, we study the maximum start-word and end-word probabilities assigned by QANet to all words in the context, and we find that unanswerable questions on average receive lower start- and end-word probabilities on all words in the corresponding context. This shows that the original QANet already captures information about unanswerability, validating the possibility of answerability detection by appending an additional functional module to the basic QANet structure.
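The per-question statistic used in this comparison amounts to the following (a sketch; the logits stand in for the model's start- and end-word outputs):

```python
import numpy as np

def max_boundary_probs(start_logits, end_logits):
    """Maximum start- and end-word probabilities over the context words,
    the quantity compared between answerable and unanswerable questions."""
    def softmax(z):
        z = np.asarray(z, dtype=float)
        e = np.exp(z - z.max())   # shift for numerical stability
        return e / e.sum()
    return softmax(start_logits).max(), softmax(end_logits).max()
```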
Upon inspection of the intermediate outputs of the QANet architecture, we found that QANet respects the variable length of input queries and contexts, resulting in all intermediate outputs of the architecture having variable size. Whilst this is compatible with QANet's original aim of assigning probabilities to every word in the context, it is not immediately compatible with our extension, the purpose of which is to assign an answerability score to the context as a whole. It is thus necessary to design our extension to handle variable input size. In section \ref{subsec: Architecture Design}, we outline three attempted solutions.
\subsection{Enhanced Question Answer Network (EQuANt)}
\label{subsec: Architecture Design}
We now provide details on our exact architecture design, which we name \textbf{E}nhanced \textbf{Qu}estion \textbf{A}nswer \textbf{N}e\textbf{t}work (EQuANt). EQuANt is based on light QANet, with an answerability extension module as motivated in section \ref{subsec: prob_analysis}. We investigated three extension designs, with the final one achieving promising results, in particular almost doubling light QANet's accuracy on SQuAD 2 in both EM and F1 measures.
A component of the QANet architecture which is particularly relevant to the design
of our architectures and our analysis of the inner workings of the model is the
context-query attention layer (see figure \ref{fig:equant}). This layer takes in the encoded
context and query and combines them into a single tensor. A core aspect of this
layer is the similarity matrix, $S$, which has size $(\texttt{number of context
words} \times \texttt{number of query words})$. The $ij^{th}$ element of this matrix represents
the similarity of context word $i$ and query word $j$, calculated using the trilinear function
described in \cite{BiDAF}. This matrix is important for two reasons. Firstly, visual inspection
of its components allows interpretation of the quality of the context and question encodings.
Secondly, if the model is to be successful, $S$ must contain the
information required to determine answerability or lack thereof and represents the first point in
the network where a single tensor must contain this information, making it a natural focal point
for our architecture designs.
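The trilinear similarity can be sketched in a few lines (the three weight vectors here are illustrative stand-ins for the learnt parameters of \cite{BiDAF}):

```python
import numpy as np

def trilinear_similarity(C, Q, w_c, w_q, w_cq):
    """S[i, j] = w_c . c_i + w_q . q_j + w_cq . (c_i * q_j).
    C: (n_ctx, d), Q: (n_query, d)  ->  S: (n_ctx, n_query)."""
    # (C * w_cq) @ Q.T computes w_cq . (c_i * q_j) for every (i, j) pair
    return (C @ w_c)[:, None] + (Q @ w_q)[None, :] + (C * w_cq) @ Q.T
```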
These architecture designs are outlined in the remainder of this section and visualised in figure \ref{fig:attempts}. Of the three designs that were implemented, the third (EQuANt 3) exhibited the best performance (see table \ref{table:squad2results}) and was therefore chosen to be our final model. Discussions regarding the performance of each architecture can be found in sections \ref{sec: experiment} and \ref{sec: results}.
\subsubsection{EQuANt 1}
EQuANt 1 takes the context-query attention weights from the context-query attention layer, which are of size \texttt{length of context $\times$ length of question}, and applies two convolutional layers followed by global mean pooling and a feedforward layer. The variable-size dimensions inherited from variable context and question lengths are reduced to 1 during global mean pooling. The final feed-forward layer then transforms the channel dimension obtained from convolution layers to size 1, giving us a scalar which we use as the score. This model performed poorly on the SQuAD 2 dev set, likely due to the information loss when convolving the context-query attention matrix. More discussion is provided in section \ref{sec: experiment}.
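The step that makes this head size-agnostic can be sketched as follows (a shape-level illustration, not our TensorFlow code; the convolution layers are omitted and `feature_map` stands for their output):

```python
import numpy as np

def pooled_score(feature_map, w_out):
    """Global mean pooling over both variable axes of a
    (context_len, question_len, channels) map, then a linear layer
    to a scalar score, so any input size yields one number."""
    pooled = feature_map.mean(axis=(0, 1))  # (channels,), size-independent
    return float(pooled @ w_out)
```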
\subsubsection{EQuANt 2}
The EQuANt 2 extension stems from the output of the first stacked encoder layer, making each of its inputs of dimension \texttt{length of context $\times$ number of hidden nodes}. We apply two encoder transformations as in figure \ref{fig:1_encoder} and then a feedforward network which transforms the size of the hidden layer (96) to 1, followed by padding the context-length dimension to constant length. Then we apply two layers of 1d convolution, before a final feedforward layer to map to a scalar which we take as the score. This model also performed poorly. Note that padding significantly decreases the proportion of non-zero elements along the context-length dimension in many data points, essentially diluting the informative entries with zeros, potentially explaining the lack of success of this model.
\subsubsection{EQuANt 3 (Final design)}
After learning from the failure cases of EQuANt 1 and 2, we aim to design a model which extracts more useful information from the context-query attention matrix, whilst avoiding diluting it with zeros. To extract higher level information from the context-query attention matrix, we use two more encoder blocks as in figure \ref{fig:1_encoder}, after which we down-sample the context-query dimension using global mean pooling. Our exact design is as follows: the answerability module again starts from the output of the first stacked encoder layer and two encoder transformations are also applied. The output from this then undergoes three feedforward layers, which transform the hidden dimensions from 96 to 48, 48 to 24 and 24 to 1. Finally, a global mean pooling layer takes the variable-length dimension inherited from the context length and transforms it to size 1, giving us the answerability score. EQuANt 3 performed respectably on the SQuAD 2 dev set, achieving 70.26\% accuracy on answerability prediction.
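A shape-level sketch of this answerability head (the two encoder blocks are omitted; biases are dropped and ReLU is assumed between layers for brevity):

```python
import numpy as np

def equant3_head(H, W1, W2, W3):
    """Three feedforward layers (96 -> 48 -> 24 -> 1) applied to the
    (context_len, 96) encoder output H, then global mean pooling over
    the variable context dimension, giving a scalar answerability score."""
    relu = lambda x: np.maximum(x, 0.0)
    h = relu(relu(H @ W1) @ W2) @ W3  # (context_len, 1)
    return float(h.mean())            # pooling removes the variable axis
```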
\begin{figure*}[h!]
\begin{subfigure}{1.3\columnwidth}
\centering
\includegraphics[angle=-90,trim=0 0 0 0,clip,width=0.99\columnwidth]{vis0unanswe.pdf}
\end{subfigure}%
\centering
\begin{subfigure}{1.3\columnwidth}
\centering
\includegraphics[angle=-90,trim=60 0 0 0,clip,width=0.99\columnwidth]{vis1answe.pdf}
\end{subfigure}
\centering
\begin{subfigure}{1.34\columnwidth}
\centering
\includegraphics[angle=-90,trim=60 0 0 0,clip,width=0.99\columnwidth]{vis2suffle2.pdf}
\end{subfigure}
\centering
\caption{Attention maps. Top: Unanswerable question. Middle: Answerable question. Bottom: Shuffled question.}
\label{fig:attention_maps}
\end{figure*}
\subsubsection{Loss function}
Let $\theta$ denote the vector of parameters, $p_0$ the predicted answerability probability, $\delta$ the ground truth of answerability, $p_1$ the predicted start-word probability of the true answer and $p_2$ the predicted end-word probability of the true answer. Then our loss function can be formulated as
\begin{align*}
l(\theta) = \frac{1}{N} \sum_{i = 1}^{N} \left[\mathcal{L}_0^{(i)}(p_0^{(i)}) + \delta^{(i)}\left(\mathcal{L}_1^{(i)}(p_1^{(i)}) + \mathcal{L}_2^{(i)}(p_2^{(i)})\right)\right],
\end{align*}
where $\mathcal{L}_j(p_j)$ with $j=0,1,2$ denotes the cross-entropy loss associated with answerability, start-word and end-word predictions, respectively.
In our experiments, we performed stochastic gradient descent using the Adam optimiser with hyperparameter settings: batch size = 32, learning rate = 0.001, $\epsilon = 10^{-7}$, $\beta_1 = 0.8$ and $\beta_2 = 0.999$.
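The joint objective can be sketched directly from the formula above (per-example probabilities in, batch-mean loss out; this is an illustration, not our TensorFlow graph):

```python
import numpy as np

def equant_joint_loss(p0, delta, p1, p2, eps=1e-12):
    """Answerability cross entropy plus, for answerable examples only
    (delta == 1), the cross entropies of the true start and end words.
    eps guards against log(0)."""
    p0, delta, p1, p2 = map(np.asarray, (p0, delta, p1, p2))
    l0 = -(delta * np.log(p0 + eps) + (1 - delta) * np.log(1 - p0 + eps))
    l12 = -delta * (np.log(p1 + eps) + np.log(p2 + eps))
    return float(np.mean(l0 + l12))
```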
\section{Experiments}
\label{sec: experiment}
As mentioned in section \ref{subsec: Architecture Design}, the context-query
similarity matrix, $S$, offers insight into the quality of the model's encodings and
should contain the information required to infer answerability or lack thereof.
In order to gain the most insight into potential model behaviour, we investigated
the form of $S$ within our light QANet for three different types of context-query pairs. The first two types
are the standard answerable and adversarially designed unanswerable varieties taken
directly from SQuAD 2. The final type is referred to as shuffled, for which we pair a given
context with a question from a different article, meaning that the question is almost certainly
unanswerable and unrelated to the context paragraph.
For visualisation purposes, we focused on short contexts, and an example of $S$ for a specific
short context and each of the three types of question is shown in figure \ref{fig:attention_maps}.
These results are interesting for two main reasons. Firstly, they show that the learnt encodings
are meaningful. For example, the word ``when" in the adversarial unanswerable question attends to
the date-related part of the context, and the word ``population" in the shuffled question attends
to words in the context associated with geographical regions. Furthermore, these results perhaps
offer insights into why the initial convolution approach was unsuccessful. In particular, it seems that answerable and adversarially unanswerable questions both lead to $S$ matrices with peaked context words for each query word, making it hard for convolutions to successfully identify unanswerability. However, as expected, the $S$ matrices for shuffled questions appear more diffuse and random due to the largely unrelated meanings of the context and query, further emphasising the subtlety in distinguishing answerable and adversarially unanswerable questions.
\begin{figure*}[h!]
\centering
\begin{subfigure}{2\columnwidth}
\centering
\begin{tabular}{ |M{2.5cm}||M{2.5cm}|M{3.5cm}|M{1cm}|M{1cm}|M{1.5cm}| }
\hline
Name & No. of Params & Training Iterations & EM & F1 & Accuracy\\
\hline
\hline
Light QANet & 788,673 & 32,000 & 31.390 & 37.432 & 49.928 \\
\hline
Light QANet & 788,673 & 62,000 & 32.903 & 38.412 & 49.928 \\
\hline
EQuANt 1 & 996,196 & 40,000 & 32.881 & 38.356 & 49.914 \\
\hline
EQuANt 2 & 1,001,520 & 40,000 & 33.512 & 38.894 & 49.914 \\
\hline
EQuANt 3 & 927,970 & 62,000 & 56.843 & 60.980 & 69.114 \\
\hline
EQuANt 3 & 927,970 & 78,000 & 58.140 & 62.360 & 70.26 \\
\hline
\end{tabular}
\captionsetup{width=0.7\textwidth}
\caption{SQuAD 2 dev set results.}
\label{table:squad2results}
\end{subfigure}
\vspace{0.5em}
\centering
\begin{subfigure}{2\columnwidth}
\centering
\begin{tabular}{ |c||c|c|c|c| }
\hline
Name & No. of Params & Training Iterations & EM & F1\\
\hline
\hline
Light QANet & 788,673 & 32,000 & 62.270 & 74.058 \\
\hline
Light QANet & 788,673 & 62,000 & 63.623 & 75.841 \\
\hline
EQuANt 3 & 927,970 & 62,000 & 69.29 & 78.80 \\
\hline
\end{tabular}
\captionsetup{width=0.7\textwidth}
\caption{SQuAD 1.1 dev set results.}
\label{table:squad1results}
\end{subfigure}
\end{figure*}
\section{Results \& Discussion}
\label{sec: results}
As mentioned in section \ref{subsec:methods_base_qa}, our first step was to train our light QANet on SQuAD 1.1 for $32,000$ iterations in order to generate a suitable initialisation which was used for the subsequent training of all other models. Evaluation of this trained model on SQuAD 1.1 yields the results shown in the first row of table \ref{table:squad1results}. The quoted number of training iterations for other models in tables \ref{table:squad1results} and \ref{table:squad2results} therefore includes these $32,000$ pre-training iterations. In order to observe how our lightweight model trained on SQuAD 1.1 without data augmentation compares to the original QANet without data augmentation, we trained for a further $30,000$ iterations on SQuAD 1.1, yielding the results shown in the second row of table \ref{table:squad1results}. These EM/F1 scores are 9.98/6.853 lower than the corresponding results for the full QANet architecture \cite{qanet}, implying that our choice to employ a lightweight architecture has a noticeable impact on performance.
We evaluated these trained light QANet models on the SQuAD 2 dev set, implicitly treating all questions as answerable. This led to the results shown in the first and second rows of table \ref{table:squad2results}, which act as baselines to compare our EQuANt results against. The accuracy column in table \ref{table:squad2results} contains the proportion of questions correctly identified as being answerable or unanswerable.
Having investigated the performance of light QANet on SQuAD 1.1 and 2, we moved on to train each of the EQuANt architectures described in section \ref{subsec: Architecture Design} on SQuAD 2. As can be seen in table \ref{table:squad2results}, EQuANt 1 and 2 did not perform well on SQuAD 2. In fact, their performance is identical. This is explained by both models learning to output a constant answerability probability of 0.69, independent of the query-context pair considered. Note that this probability matches the proportion of SQuAD2 training examples which are answerable, meaning that these models have been unable to extract the necessary features for accurately predicting answerability and have defaulted to the most basic frequentist approach of predicting the mean.
However, as shown in the final row of table \ref{table:squad2results}, EQuANt 3 is capable of both answerability prediction and question answering, significantly exceeding baseline performance on SQuAD 2.
As laid out in this \href{http://ruder.io/multi-task-learning-nlp/index.html}{blog post} by Sebastian Ruder, multi-task learning has recently been successfully applied to numerous NLP tasks. We therefore decided to measure the performance of EQuANt 3, trained on the two tasks of question answering and answerability prediction, at question answering alone by evaluating its EM and F1 scores on SQuAD 1.1, providing EQuANt 3 with the ground truth answerability of true for each question. As shown in the final row of table \ref{table:squad1results}, EQuANt 3 outperforms light QANet by 5.667/2.959 on EM/F1 scores, suggesting that it indeed benefits from this multi-task approach.
\section{Conclusion}
\label{sec:Conclusion}
In this work, we have presented EQuANt, an MRC model which extends QANet to cope with unanswerable questions.
In sections \ref{sec:Background} and \ref{sec:Related Work}, we motivated our work and placed it in the wider
context of MRC and unanswerability. Following this, in section \ref{sec:Methods}, we presented our lightweight
QANet implementation and laid out in detail the 3 EQuANt architectures that were trained and whose performance was evaluated.
In section \ref{sec: experiment}, we investigated the context-query attention maps within our lightweight QANet,
allowing us to verify the quality of our learnt encodings and gain insight into why our initial architecture, EQuANt 1
did not predict answerability effectively. Finally, in section \ref{sec: results}, we presented our results and
discussed how the observed performance of EQuANt 3 on SQuAD 1.1 suggests that multi-task learning is a valuable
approach in the context of MRC.
\bibliographystyle{acl_natbib}
\section{Introduction}
\begin{acronym}
\acrodef{BH}[BH]{black hole}
\acrodef{BAT}[BAT]{Burst Alert Telescope}
\acrodef{BNS}[BNS]{binary neutron star}
\acrodef{EM}[EM]{electromagnetic}
\acrodef{EOS}[EOS]{equation of state}
\acrodef{FAP}[FAP]{false alarm probability}
\acrodef{FAR}[FAR]{false alarm rate}
\acrodef{GBM}[GBM]{Gamma-ray Burst Monitor}
\acrodef{GRB}[GRB]{gamma-ray burst}
\acrodef{sGRB}[sGRB]{short gamma-ray burst}
\acrodef{GW}[GW]{gravitational-wave}
\acrodef{INTEGRAL}[INTEGRAL]{INTErnational Gamma-Ray Astrophysics Laboratory}
\acrodef{IPN}[IPN]{InterPlanetary Network}
\acrodef{LIGO}[LIGO]{Laser Interferometer Gravitational-Wave Observatory}
\acrodef{MCMC}[MCMC]{Markov Chain Monte Carlo}
\acrodef{NS}[NS]{neutron star}
\acrodef{sGRB}[sGRB]{short gamma-ray burst}
\acrodef{SNR}[S/N]{signal-to-noise ratio}
\acrodef{SME}[SME]{Standard Model Extension}
\acrodef{SPI-ACS}[SPI-ACS]{SPectrometer onboard INTEGRAL - Anti-Coincidence Shield}
\end{acronym}
It has long been conjectured that the subclass of gamma-ray bursts
(GRBs) with a duration below about 2~s, known as \acp{sGRB},
are the product of a \ac{BNS} merger and that gamma-rays are produced
in the collimated ejecta following the coalescence
\citep[e.g.,][]{blinnikov1984,Nakar2007,Gehrels2012,Berger2014}. Until
now, there was only circumstantial evidence for this hypothesis, owing
to the lack of supernovae associated with sGRBs, their localization in
early-type galaxies and their distinct class of duration
\citep[e.g.,][]{Davanzo2015}. The advent of advanced \ac{GW}
detectors, which have been able to detect binary black hole mergers
\citep{GW150914,LVT151012,GW151226,GW170104,GW170814}, and have the
capability to detect a signal from nearby BNS mergers
\citep{Abbott2016}, has sparked great expectations. Different
electromagnetic signatures are expected to be associated with BNS
merger events, owing to expanding ejecta, the most obvious of which is
an sGRB in temporal coincidence with the GW signal and/or afterglow
emission at different wavelengths in the days and/or weeks after the
merger event \citep[e.g.,][]{Fernandez2016}.
On 2017 August 17 at 12:41:04.47 UTC (T$_{0,GW}$ hereafter), a
signal consistent with the merger of a \ac{BNS} was detected by the
LIGO-Hanford detector \citep{aLIGO_instrument} \change{as a
single-detector trigger. The subsequent alert was issued} in
response to a public real-time Fermi \ac{GBM} trigger on a sGRB at
12:41:06.48 UTC \citep{GCN21505,GCN21506,GCN21509}; the GRB signal was
immediately and independently confirmed by our team \citep{GCN21507}.
Analysis of the LIGO-Livingston data \citep{LVC_PRL_GW170817} revealed
that a trigger was not automatically issued \change{due to the
proximity of an overflow instrumental transient}, which could be
safely removed offline. The addition of Virgo
\citep{TheVirgo:2014hva} to the detector network allowed a precise
localization at 90\% confidence level in an area of about 31 square
degrees \citep{GCN21513}, which is consistent with the Fermi-GBM
localization of GRB~170817A \citep{GCN21520}. The most accurate
localization so far has been derived by the LALInferrence pipeline
\citep{GCN21527}; this is the localization we use in this Letter.
A massive follow-up campaign of the LIGO-Virgo high-probability region
by optical robotic telescopes started immediately after the event and
on 2017 August 18 between 1:05 and 1:45 UT, three groups reported
independent detections of a transient optical source at about
10~arcsec from the center of the host S0 Galaxy NGC~4993; this source
was dubbed SSS17a \citep{GCN21529,GCN21529_paper} or DLT17ck
\citep{GCN21531,GCN21531_paper}; the transient source was confirmed by
\citet{GCN21530} \citep[see also][]{GCN21530_paper}. The source was
identified as the most probable optical counterpart of the BNS merger
\citep{GCN21557,GCN21557_paper}. After that, it was followed at all
wavelengths. \ch{The counterpart has been given the official IAU
designation ``AT2017gfo'' \citep{capstone}.}
\ch{During early observations by the Swift satellite, from 53.8 to 55.8 ks
  after the LVC trigger, an ultraviolet (UV) transient with $u$
  magnitude 17 was detected; an X-ray upper limit was set at an order of magnitude below
the typical luminosity of an sGRB afterglow, as determined from
the sample of
Swift \ac{BAT} triggered objects. It was suggested that the object may be a
blue (i.e., lanthanide-free) kilonova \citep{GCN21550}. Infrared
spectroscopy with X-shooter on the ESO Very Large Telescope UT 2
covered the wavelength range 3000--25000 \AA\ and started roughly
1.5 days after the GW event \citep{GCN21592}. As reported in
\citet{Pian2017}, strong evidence was found for $r$-process
nucleosynthesis as predicted by kilonova models
\citep[e.g.,][]{Tanaka2017}. Gemini spectroscopic observations with the
Flamingos-2 instrument taken 3.5 days after the GW event revealed a red
featureless spectrum, again consistent with kilonova
expectations \citep{GCN21682,Troja2017}. Optical spectra collected
1.5~days after the GW event with the ESO New Technology Telescope at La Silla
equipped with the EFOSC2 instrument in spectroscopic mode excluded
a supernova as being the origin of the transient emission
\citep{GCN21582}.
Thus,} the properties of the source are fully consistent with
a kilonova scenario \citep[see][for a
review]{Metzger2017}. A kilonova is primarily powered by the
radioactive decay of elements synthesized in the outflow,
which produce gamma-ray
lines. These may also be directly detectable in the gamma-ray range
\citep{Hotokezaka2016}.
In this Letter, we describe in detail the detection of GRB~170817A by
the INTernational Gamma-ray Astrophysics Laboratory (INTEGRAL) and the
targeted follow-up observing campaign. We were able to search for any
possible hard X-ray / soft gamma-ray emission for about six days after
the prompt gamma-ray and GW signal. This allowed us to constrain both
continuum emission from GRB-like afterglow emission and line emissions
expected from kilonovae.
\section{INTEGRAL instrument summary}
\label{sec:instrument}
INTEGRAL \citep{integral} is an observatory with multiple instruments:
a gamma-ray spectrometer (20\,keV--8\,MeV, SPI, \citealt{spi}), an
imager (15\,keV--10\,MeV, IBIS, \citealt{ibis}), an X-ray monitor (3--35\,keV, JEM-X, \citealt{jemx}), and an optical monitor (V band, OMC, \citealt{omc}).
The spectrometer SPI is surrounded by a thick Anti-Coincidence Shield
(SPI-ACS). In addition to its main function of providing a veto
signal for charged particles irradiating the SPI instrument, the ACS is
also able to register all other impinging
particles and high-energy photons. Thus, it can be used as a nearly
omnidirectional detector of transient events with an effective area
reaching 0.7~m$^2$ at energies above $\sim$75~keV and a time
resolution of 50~ms \citep{spiacs}. The characterization of its
response to a gamma-ray signal has been derived from an extensive
simulation study, taking into account the complex opacity pattern of
the materials used for the INTEGRAL satellite structure and the
other instrument detectors. Similarly, we have computed and verified
the response of the other omnidirectional detectors on board INTEGRAL:
IBIS/ISGRI, IBIS/PICsIT and IBIS/Veto. For details on the INTEGRAL
capabilities of detecting transients from the whole sky, particularly as relevant to our search for electromagnetic counterparts to GW signals, we refer to \citet{Savchenko2017a} and references therein.
\section{Observation of the Prompt Gamma-Ray Emission}
At the time of the GW170817 trigger, INTEGRAL was performing a
target-of-opportunity observation of GW170814
\citep{GW170814,GCN21474} and at 12:41 UTC it was directed to RA, Dec
(J2000.0) = 36.25$^{\circ}$,\,-49.80$^{\circ}$. This orientation was
overall favorable to detect a signal from the location of
\VAR{optical_counterpart.name} with the SPI-ACS, although not the most
optimal. This can be seen in Fig.~\ref{fig:skysens}, where we show
the complete INTEGRAL sensitivity map combining all instruments as
described in \citet{Savchenko2017a}. We note that with this
orientation, the sensitivity of IBIS (ISGRI, \citealt{isgri}; PICsIT,
\citealt{picsit} and Veto, \citealt{veto} detectors) to a signal from
the direction of \VAR{optical_counterpart.name} was much lower if
compared to SPI-ACS for any plausible type of event spectrum.
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{sky_sens_total_soft_100ms}
\caption{INTEGRAL 3~$\sigma$ sensitivity to a 100~ms burst characterized
by Comptonized emission with $\alpha$=-0.65 and
E$_{peak}$=185~keV, i.e., the best fit spectral model reported by
the Fermi-GBM for the pulse of the GRB. Black contours correspond
to the confidence regions (90\% and 50\%) of the current
LALInference LIGO/Virgo localization \citep{GCN21527}. The magenta
annulus corresponds to the constraint on the GRB~170817A location
derived from the difference in arrival time of the event to Fermi
and INTEGRAL \citep[triangulation;][]{GCN21515}.}
\label{fig:skysens}
\end{figure}
We searched the SPI-ACS light curve using five timescales from 0.1 to
10~s, within a window of 30~s before and after the time of GW170817.
The local background noise properties are in good agreement with the
expectation for the background at the current epoch. On a 100~ms time
scale, we detect only one significant excess with a signal-to-noise
ratio (S/N) of \VAR{search.grb.snr|round(1)}, starting at
T$_{0,GW}$+1.9 s (in the geocentric time system; hereafter,
T$_{0,ACS}$). \ch{We compute a significance of association between
GRB~170817A as observed by INTEGRAL and GW170817 of
\VAR{search.assoc.ligo.sig|round(1)}~$\sigma$. The association
significance with the Fermi-GBM observation of GRB~170817A is
\VAR{search.assoc.gbm.sig|round(1)}~$\sigma$ (see
Appendix~\ref{sec:association}).}
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{acs_lc}
\caption{SPI-ACS light curve of GRB~170817A (100~ms time resolution),
detected 2 seconds after GW170817. The red line highlights the
100~ms pulse, \ch{which has an S/N of \VAR{search.grb.snr|round(1)}} in
SPI-ACS. The blue shaded region corresponds to a range of one
standard deviation of the background.}
\label{fig:acslc}
\end{figure}
\ch{The principal part of the excess gamma-ray emission emerges in
just two 50\,ms time bins. The 100~ms duration firmly places this
event in the short GRB class at
$>$\VAR{thegrb.classification.prob_short_pc_min}\% probability
\citep[using the GRB duration distribution
from][]{Savchenko2012,Qin2013}. We should note, however, that the
SPI-ACS does not have the capability to observe emission below
$\sim$100~keV (due to the limitations of the instrumental low
threshold), which might have slightly different temporal
characteristics, as reported by \citet{gbm_only_paper}.}
\ch{Our coincident observation of the gamma-ray signal permits a
substantial improvement of the Fermi-GBM-only localization by
exploiting the difference in the gamma-ray arrival times at the
location of the two satellites. Using the triangulation annulus
reported by \cite{GCN21515} we compute that the addition of the
INTEGRAL observation reduces the final 90\% GBM localization area by
a factor \VAR{triangulation.gbm_improvement_ipn|round(1)}. We refer
to the joint LIGO/Virgo, Fermi-GBM, and INTEGRAL/SPI-ACS analysis
for more details
\citep{joint_paper}. Appendix~\ref{sec:prompt_multiinstrument}
summarizes the supporting complete INTEGRAL data set at the time of
GRB~170817A.}
The majority of \acp{sGRB} have a hard spectrum, resulting in a strong
detection in the SPI-ACS and/or in IBIS (ISGRI, PICsIT, and/or Veto),
as long as the respective instrument is favorably oriented
\citep{Savchenko2012, Savchenko2017a}. GRB~170817A, on the other hand,
was very soft, with most of its energy below $\sim$100\,keV, apart
from a short-hard initial pulse emitting at least up to 200~keV
\citep{gbm_only_paper}. This results in a reduced SPI-ACS signal
significance. We determined that for the location of
\VAR{optical_counterpart.name}, the SPI-ACS efficiency is smoothly
increasing from about 100 keV to 200 keV, where it reaches a plateau
up to the upper energy threshold of $\sim$80\,MeV. In
Figure~\ref{fig:prompt_spectra}, we show the region that contains the
allowed spectral models consistent with the SPI-ACS observation. We
assume a specific family of models, representative of sGRB spectra not
far from the Fermi-GBM best-fit model of GRB~170817A for the time
interval T$_{0,GBM}$-0.320~--~T$_{0,GBM}$+0.256\,s (covering the range
of spectra consistent with the average or hard peak):
Comptonized/cut-off power-law models with -1.7$\leq \alpha \leq$-0.2
and 50$\leq E_{peak} \leq$300~keV. In the same figure, the black
dashed line represents the best-fit Fermi-GBM model in the same
0.576~s long time interval that we used to compare SPI-ACS with the
Fermi-GBM results \citep{gbm_only_paper}. This comparison nicely
displays the consistency of both experiments.
Due to the lack of energy resolution in SPI-ACS, the
fluence estimate depends on model assumptions. Using the best-fit
Fermi-GBM model \citep{gbm_only_paper} and assuming the time interval
T$_{GBM,0}$-0.320 to T$_{GBM,0}$+0.256\,s to match the interval used by Fermi-GBM
\citep{gbm_only_paper}, we derive a 75--2000 keV fluence of
$(1.4\pm 0.4)\times$10$^{-7}$~erg~cm$^{-2}$ (1~$\sigma$ error,
statistical only). Additionally, the model assumption uncertainty
employing the same range of models as used in Figure~\ref{fig:prompt_spectra}
corresponds to a 75--2000~keV fluence uncertainty of
$\pm$0.4$\times$10$^{-7}$~erg~cm$^{-2}$. Possible systematic
deviations of the SPI-ACS response, as established by
cross-calibration with other gamma-ray instruments (primarily
Fermi-GBM and Konus-Wind), correspond to a further uncertainty
of $\pm$0.3$\times$10$^{-7}$~erg~cm$^{-2}$.
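As a rough cross-check of these numbers (our own back-of-the-envelope arithmetic, not part of the original analysis), the quoted fluence can be converted to an isotropic-equivalent energy at the 40\,Mpc distance adopted later in the text; combining the three quoted uncertainties in quadrature is likewise our own assumption:

```python
# Back-of-the-envelope check (not from the Letter itself): convert the
# SPI-ACS 75--2000 keV fluence into an isotropic-equivalent energy at
# the ~40 Mpc distance adopted later in the text.
import math

MPC_CM = 3.0857e24            # 1 Mpc in cm
d = 40 * MPC_CM               # adopted distance to the source
area = 4 * math.pi * d**2     # isotropic-equivalent emitting sphere, cm^2

fluence = 1.4e-7              # erg cm^-2 (best-fit Fermi-GBM model)
e_iso = area * fluence        # isotropic-equivalent energy, erg
print(f"E_iso ~ {e_iso:.1e} erg")   # ~2.7e46 erg, comparable to the
                                    # 2e46 erg SGR 1806-20 giant flare

# Combining the statistical (0.4), model (0.4), and calibration (0.3)
# uncertainties in quadrature (units of 1e-7 erg cm^-2) -- our own
# assumption; the Letter quotes them separately:
total_unc = math.sqrt(0.4**2 + 0.4**2 + 0.3**2)
print(f"total fluence uncertainty ~ {total_unc:.1f}e-7 erg cm^-2")
```

The resulting isotropic-equivalent energy is of the same order as the SGR~1806$-$20 giant flare energy mentioned in the Discussion.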
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{prompt_spectra}
\caption{Average hard X-ray/gamma-ray spectrum of the initial pulse
of GRB~170817A. The shaded green region corresponds to the range of
spectra compatible with the INTEGRAL/SPI-ACS observation (see text
for details). IBIS/PICsIT provides a complementary independent
upper limit at high energies; see text. The best-fit Fermi-GBM
model for the spectrum in the same interval (Comptonized model
with low-energy index of -0.62 and $E_{peak}$ of 185\,keV) is
shown as a dashed line for comparison \citep{gbm_only_paper}. }
\label{fig:prompt_spectra}
\end{figure}
Due to the limited duration of this event in SPI-ACS, little can be
learned directly from the light curve. However, we note that the main
prompt component consists of just two bins, with each of the rise time
and decay time below 50~ms. Our variability limits are derived for the
particularly narrow pulse that characterizes the hardest component of
the burst, which is observed by INTEGRAL/SPI-ACS with high effective
area. Our results should be compared to the lower-energy morphology
probed by Fermi-GBM \citep[see][for details]{joint_paper}.
After the detection of GRB~170817A, INTEGRAL continued uninterrupted
observations of the same sky region until 20:44:01 UTC on August 17. \ch{During this period, no other bursts or steady
emission from the direction of the optical counterpart were
detected. We report in detail our flux limits in
Appendix~\ref{sec:continuation}, while in
Fig.~\ref{fig:follow_timeline}, we graphically summarize our
results.}
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{temporal_sensitivity}
\caption{Timeline of the INTEGRAL observations, from the prompt
detection with SPI-ACS, through the serendipitous
follow-up and toward the targeted follow-up. Dashed lines
correspond to the narrowband upper limits. Only selected upper
limits are shown; for a complete summary of the observations, see
Table~\ref{tab:fov}, Figure~\ref{fig:spectral_sensitivity}, and
the text.}
\label{fig:follow_timeline}
\end{figure}
\section{Targeted INTEGRAL follow-up observation}
\subsection{Search for a Soft Gamma-Ray Afterglow}
INTEGRAL allows us to search for an afterglow emission in a broad
energy range from 3~keV to 8~MeV. This was covered in detail in
\citet{Savchenko2017a}, where we exploited the serendipitous coverage
of part of the LVT151012 localization region within the field of view
of the INTEGRAL pointed instruments.
To search for a delayed signal, INTEGRAL performed targeted follow-up
observations of the LIGO/Virgo candidate BNS merger G298048
(=GW170817). They started 19.5 hr after the event, centered on the best
Fermi-GBM localization \citep{GCN21506}, and covered only a
negligible fraction of the LIGO/Virgo localization. Therefore, we do not
discuss this initial part of the follow-up.
The main part of the follow-up observations was centered on the
candidate optical counterpart, \VAR{optical_counterpart.name}
\citep[RA=13:09:48.089 Dec=-23:22:53.35;][]{GCN21529}. It spanned from
2017 August 18 at 12:45:10 to 2017 August 23 at 03:22:34 (starting about 24
hr after the LIGO/Virgo event), with a maximum on-source time of
326.7 ks. \VAR{optical_counterpart.name} was in the highest
sensitivity part of IBIS and SPI FoV in each of the
dithered, individual $\sim$40 minute long pointings that make
up INTEGRAL observations; \ch{it was in the JEM-X FoV (defined as the
region where the sensitivity is no less than a factor of 20 from the
optimal) for
\VAR{(coverage_pointing.jemx_pointing_fraction_f20*100)|int}\% of
the time, and
\VAR{(coverage_pointing.jemx_pointing_fraction_f2*100)|int}\% of the
time in the region with sensitivity no worse than a factor of 2 from the
optimal.}
We investigated the mosaicked images of the complete observation of
IBIS/ISGRI, SPI, and JEM-X. We do not detect any X-ray
or gamma-ray counterpart to \VAR{optical_counterpart.name} in any of the instruments. The
3-$\sigma$ broadband upper limits for an average flux of a source at the
position of \VAR{optical_counterpart.name} are presented in
Fig.~\ref{fig:spectral_sensitivity} and in Table~\ref{tab:fov}.
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{spectral_sensitivity}
\caption{Broadband X-ray to gamma-ray sensitivity reached in the complete INTEGRAL
targeted follow-up observation, with a total exposure up to 330~ks
(depending on the instrument and the operational mode).}
\label{fig:spectral_sensitivity}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{line_spectral_sensitivity}
\caption{Narrow-line sensitivity in the X-ray/gamma-ray band reached
in the complete INTEGRAL targeted follow-up observation, with a
total exposure of \ch{330~ks for each of the instruments. The
units of the right vertical axis correspond to the luminosity
assuming a distance to the source of 40\,Mpc.}}
\label{fig:line_sensitivity}
\end{figure}
\begin{table*}[t]
\centering
\caption{Summary of sensitivities for the different instruments on
  board INTEGRAL to a source at the location of \VAR{optical_counterpart.name}}
\label{tab:fov}
\begin{tabular}{ c c c c c c c }
\toprule
Instrument & Field of View & Angular resolution & Energy range & \multicolumn{3}{c}{3~$\sigma$ sensitivity} \\
& deg$^2$ & & & mCrab & erg~cm$^{-2}$~s$^{-1}$ & erg~s$^{-1}$ \\
\\
\midrule
\multirow{2}{*}{JEM-X}& \
\multirow{2}{*}{110} & \
\multirow{2}{*}{3'} & \
3~--~10~keV & \
1.2 & \
1.9$\times$10$^{-11}$ & \
3.6$\times$10$^{42}$
\\
& \
& \
& \
10~--~25~keV & \
0.64 & \
7.0$\times$10$^{-12}$ & \
1.3$\times$10$^{42}$
\\
\midrule
\multirow{3}{*}{IBIS/ISGRI}& \
\multirow{3}{*}{823}& \
\multirow{3}{*}{12'}& \
20~--~80~keV &\
2.6 & \
3.8$\times$10$^{-11}$ &\
7.3$\times$10$^{42}$
\\
& \
& \
& \
80~--~300~keV & \
6.2 & \
7.1$\times$10$^{-11}$ &\
1.4$\times$10$^{43}$
\\
& \
& \
& \
300~--~500~keV & \
290 & \
1.0$\times$10$^{-9}$ &\
1.9$\times$10$^{44}$
\\
\midrule
\multirow{3}{*}{IBIS/PICsIT}& \
\multirow{3}{*}{823}& \
\multirow{3}{*}{24'}& \
208~--~468~keV &\
36 & \
2.1$\times$10$^{-10}$ &\
4.0$\times$10$^{43}$
\\
& \
& \
& \
468~--~572~keV & \
128 & \
1.6$\times$10$^{-10}$ &\
3.1$\times$10$^{43}$
\\
& \
& \
& \
572~--~1196~keV & \
216 & \
8.7$\times$10$^{-10}$ &\
1.7$\times$10$^{44}$
\\
& \
& \
& \
1196~--~2600~keV & \
973 & \
3.3$\times$10$^{-9}$ &\
6.4$\times$10$^{44}$
\\
\midrule
\multirow{6}{*}{SPI}& \
\multirow{6}{*}{794}& \
\multirow{6}{*}{2.5$^\circ$}& \
20~--~80~keV &\
3.8 & \
5.6$\times$10$^{-11}$ &\
1.1$\times$10$^{43}$
\\
& \
& \
& \
80~--~150~keV &\
16.4 & \
9.8$\times$10$^{-11}$ &\
1.9$\times$10$^{43}$
\\
& \
& \
& \
150~--~300~keV &\
43 & \
2.4$\times$10$^{-10}$ &\
4.6$\times$10$^{43}$
\\
& \
& \
& \
300~--~500~keV &\
135 & \
4.7$\times$10$^{-10}$ &\
9.0$\times$10$^{43}$
\\
& \
& \
& \
500~--~1000~keV &\
308 & \
1.2$\times$10$^{-9}$ &\
2.3$\times$10$^{44}$
\\
& \
& \
& \
1000~--~2000~keV &\
866 & \
2.8$\times$10$^{-9}$ &\
5.4$\times$10$^{44}$
\\
& \
& \
& \
2000~--~4000~keV &\
\VAR{table.spi[2000,4000].mcrab|int} & \
\VAR{table.spi[2000,4000].flux_ecs|latex_exp} & \
\VAR{table.spi[2000,4000].L|latex_exp}
\\
& \
& \
& \
4000~--~8000~keV &\
\VAR{table.spi[4000,8000].mcrab|int} & \
\VAR{table.spi[4000,8000].flux_ecs|latex_exp} & \
\VAR{table.spi[4000,8000].L|latex_exp}
\\
\bottomrule
\vspace{0.3cm}
\end{tabular}
\begin{tablenotes}
\item The upper limits from the INTEGRAL follow-up observation
directed towards \VAR{optical_counterpart.name}, assuming a
power-law-shaped spectral energy distribution with a photon index of
$-2$. The energy ranges are chosen to highlight the advantage of
INTEGRAL instruments over other hard X-ray observatories. The limit
of the FoVs has been set to the point where a worsening of the
instrument sensitivity by a factor of 20 compared to the on-axis
value is reached.
\end{tablenotes}
\end{table*}
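As a sketch of how the last two columns of Table~\ref{tab:fov} are related (assuming isotropic emission at the 40\,Mpc distance used in the text; the specific rows checked are our own selection):

```python
# Reproduce the flux -> luminosity conversion of the sensitivity table,
# L = 4*pi*d^2 * F, for a source at the assumed 40 Mpc distance.
import math

d_cm = 40 * 3.0857e24                 # 40 Mpc in cm
area = 4 * math.pi * d_cm**2          # ~1.9e53 cm^2

# (flux [erg cm^-2 s^-1], tabulated luminosity [erg s^-1]) pairs
rows = [
    (1.9e-11, 3.6e42),   # JEM-X, 3--10 keV
    (3.8e-11, 7.3e42),   # IBIS/ISGRI, 20--80 keV
    (2.1e-10, 4.0e43),   # IBIS/PICsIT, 208--468 keV
    (5.6e-11, 1.1e43),   # SPI, 20--80 keV
]
for flux, lum_tab in rows:
    lum = area * flux
    # agreement to within rounding of the tabulated values
    assert abs(lum - lum_tab) / lum_tab < 0.05, (flux, lum, lum_tab)
print("table luminosities consistent with L = 4*pi*d^2*F at 40 Mpc")
```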
We have also searched for isolated line-like features in IBIS/ISGRI
and SPI data: our preliminary analysis did not identify any such
features. In-depth studies will be reported elsewhere. The
narrow-line sensitivity reached in the complete follow-up observation is
presented in Figure~\ref{fig:line_sensitivity}.
IBIS, SPI, and JEM-X observed more than 97\% of the LIGO-Virgo
localization in the combined observation mosaic. We searched the
IBIS/ISGRI, SPI, and JEM-X data for any new point source in the whole
90\% LIGO/Virgo localization region, and did not find any. The
sensitivity depends on the location, with the best value close to that
computed for \VAR{optical_counterpart.name}. Contours containing
regions observed with sensitivity of at least 50\% and 10\% of the
optimal are presented in Figure~\ref{fig:follow_coverage}.
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{followup_coverage}
\caption{Sensitivity levels (50\% - solid line, and 10\% - dashed
line, of the optimal sensitivity, achieved for \VAR{optical_counterpart.name}) of the
complete IBIS, JEM-X, and SPI mosaics of the targeted INTEGRAL follow-up
observation, compared to the most accurate LALInference
LIGO/Virgo localization of GW170817 (50\% and 90\% confidence
containment \citep[black solid lines;][]{GCN21527}) and the \VAR{optical_counterpart.name} location
\citep{GCN21529}.}
\label{fig:follow_coverage}
\end{figure}
\subsection{Search for optical emission with the OMC}
The OMC observed the galaxy NGC 4993 including the transient \VAR{optical_counterpart.name}
from 2017 August 18 at 17:27:59 until 2017 August 22 at 22:56:48 UTC. It was
in the OMC FoV for only 29 INTEGRAL pointings (a total of 35.7\,ks). The limited angular
resolution and pixel size of the OMC did not allow us to distinguish between the
host galaxy contribution and the transient. The data were analyzed by
using the largest OMC photometric aperture (5x5 pixel, 90~arcsec
diameter), to ensure that all of the emission from the host galaxy, as
well as that of the transient, is included in the aperture. We measured a V magnitude
of 12.67$\pm$0.03 (1~$\sigma$ level) for the total emission. No
variability was detected in the OMC data at the reported 1~$\sigma$
level.
\subsection{Search for delayed bursting activity}\label{sec:delayedbursting}
The continuous observation of the \VAR{optical_counterpart.name}
location performed by INTEGRAL (from 2017 August 18 at 12:45:10 to
2017 August 23 at 03:22:34 UT with a coverage fraction of 80\%) allows us
to also search for any short (magnetar-like) or long bursts from this
source. We used IBIS/ISGRI light curves in two energy ranges:
20--80\,keV and 80--300\,keV, on 100\,ms, 1\,s, 10\,s, and 100\,s time
scales. We did not find any deviations from the background, and set a
3~$\sigma$ upper limit on any possible 1-s long burst flux of 1.0 Crab
(1.4$\times10^{-8}$\,erg\,cm$^{-2}$\,s$^{-1}$) in the 20--80\,keV, and
6.8 Crab (7.8$\times10^{-8}$\,erg\,cm$^{-2}$\,s$^{-1}$) in the
80--300\,keV energy range. The 3~$\sigma$ upper limit on a 100-ms time
scale results in 3.0 Crab
(4.5$\times10^{-8}$\,erg\,cm$^{-2}$\,s$^{-1}$) in the 20--80\,keV, and
21 Crab (2.4$\times10^{-7}$\,erg\,cm$^{-2}$\,s$^{-1}$) in the
80--300\,keV energy range. Assuming a distance of 40\,Mpc \ch{(see
online data of \citealt{Crook2007})}, these limits can be
interpreted as constraints on the burst luminosity on 1~s (100~ms)
time scale of 2.6$\times$10$^{45}$\,erg\,s$^{-1}$
(8.5$\times$10$^{45}$\,erg\,s$^{-1}$) in the 20--80\,keV energy
range. In the 80--300\,keV energy range the luminosity is constrained
to be less than 1.5$\times$10$^{46}$\,erg\,s$^{-1}$
(4.6$\times$10$^{46}$\,erg\,s$^{-1}$). Bursts exceeding such
luminosities were previously observed from magnetars. SGR~1806$-$20,
for example, produced a giant flare with a total energy of
2$\times$10$^{46}$~erg \citep{Hurley2005}, while individual flares
from, e.g., 1E~1547.0$-$5408 exceeded the energy of 10$^{46}$~erg
\citep{Mereghetti2009,Savchenko2010}.
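The flux-to-luminosity conversions above can be verified with a few lines of arithmetic (a sketch assuming isotropic emission at the quoted 40\,Mpc distance):

```python
# Check the conversion of the burst flux upper limits into luminosity
# limits at the assumed distance of 40 Mpc (L = 4*pi*d^2 * F).
import math

area = 4 * math.pi * (40 * 3.0857e24) ** 2   # cm^2 at 40 Mpc

limits = [
    # (flux limit [erg cm^-2 s^-1], quoted luminosity [erg s^-1], label)
    (1.4e-8, 2.6e45, "20--80 keV, 1 s"),
    (7.8e-8, 1.5e46, "80--300 keV, 1 s"),
    (4.5e-8, 8.5e45, "20--80 keV, 100 ms"),
    (2.4e-7, 4.6e46, "80--300 keV, 100 ms"),
]
for flux, lum_quoted, label in limits:
    lum = area * flux
    assert abs(lum - lum_quoted) / lum_quoted < 0.05, (label, lum)
print("burst luminosity limits consistent with the quoted fluxes")
```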
\section{Discussion}\label{sec:discussion}
The detection of GRB~170817A by INTEGRAL and Fermi in unambiguous
coincidence with GW170817 is the first definitive proof that at least
some sGRBs can be associated with BNS merger events. \ch{The duration of the GRB as measured by INTEGRAL above $\sim$100\,keV, 100~ms, firmly assigns
GRB~170817A to the short GRB class.} \citet{joint_paper} extensively
discuss the implications of the joint GW and gamma-ray observation for the
luminosity function and structure of the sGRB population.
Future observations of similar events will be decisive in constraining
the properties of the BNS merger counterparts. INTEGRAL has an
exceptionally large unocculted ($>$99.9\%) sky view at every moment when it
is observing, i.e., for about 85\% of the total mission lifetime. With
its high sensitivity above $\sim$100~keV, INTEGRAL/SPI-ACS has
demonstrated its capability of also detecting fairly weak and soft
transients like GRB~170817A. In the future, INTEGRAL will be able to
systematically detect counterparts to the GW events or to put tight
upper limits on their presence. The advantage of its exceptionally
high effective area above $\sim$100~keV will be even more important
for events with harder spectra, which is what is expected for
typical \acp{sGRB}.
\ch{The possibility of forming a (short-lived) magnetar in a BNS
merger has been extensively discussed in the past
\citep[e.g.][]{Metzger2014,Giacomazzo2015,Price2006,Fernandez2016}. It
has been suggested that a newborn magnetar is responsible for part
of the afterglow emission \citep[e.g.][]{Rowlinson2013}. In
principle, it is not clear if a newborn magnetar, formed in the
merger, is more likely to produce bursts during the first days after
the merger (which are covered by the continuous INTEGRAL
observation). This could be reasonably expected because at early
times, the magnetic energy is maximal and it rapidly
dissipates. Intense X-ray flares could occur in association with
frequent reorganizations of the magnetic field structure, as well as
in connection with delayed accretion.} We have ruled out, however, the
existence of strong magnetar-like bursts in our targeted follow-up
observation.
Interestingly, the amount of energy released in GRB~170817A
\citep{joint_paper} is similar to that found during the giant flares
of magnetars, such as SGR~1806$-$20 \citep{Hurley2005}. \ch{Magnetar
flares are associated with long-lived objects, while GRB~170817A
was a one-time event from a BNS merger. Nevertheless, the similarity
in the most basic observational properties is intriguing and it may
point towards a similarity of the physical processes involved.}
\ch{At late times after the initial gamma-ray burst}, the luminosity of kilonovae is largely fueled by radioactive decays of
r-process elements released in the coalescence \citep[see, e.g.,][for a review on kilonova mechanism]{Metzger2017}. Under favorable
conditions, a forest of nuclear gamma-ray lines produced in these
decays may be detectable by a suitable gamma-ray spectrometer such as
INTEGRAL/SPI \citep{Hotokezaka2016}. If the lines are broad or appear
at low energies ($<$100 keV), IBIS could also detect them, \ch{with a similar significance}. We did not
find any such emission feature
with INTEGRAL/SPI or IBIS, and we set an upper
limit as displayed in Fig.~\ref{fig:line_sensitivity} \ch{for narrow lines. However, for
sufficiently broad lines, the emission pattern can resemble a
nearly continuous spectrum \citep[as discussed in][for high-velocity
ejecta]{Hotokezaka2016}. Thus,
the continuum emission upper limit can be applied (see
Fig.~\ref{fig:spectral_sensitivity} and Table~\ref{tab:fov}).}
\ch{To the best of our knowledge, the most favorable predictions for a
combined decay line flux 1~day after the merger and at the
\VAR{optical_counterpart.name} distance are of the order of
\ecse{3.6}{-12} \citep[assuming high ejecta mass of 0.1\,M$_\odot$
and a velocity of 0.3\,c][]{Hotokezaka2016} in the 300~keV~--~1~MeV band;
this is considerably below our best upper limit in the same energy range, i.e.,
\ecse{1.7}{-9}.}
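As a quick arithmetic check of the margin quoted above (values taken directly from the text; we assume the \ecse macro denotes fluxes in erg\,cm$^{-2}$\,s$^{-1}$):

```python
# Margin between the most favorable predicted decay-line flux and our
# best upper limit in the 300 keV -- 1 MeV band (values from the text).
predicted = 3.6e-12   # erg cm^-2 s^-1, optimistic Hotokezaka et al. case
limit = 1.7e-9        # erg cm^-2 s^-1, INTEGRAL upper limit
margin = limit / predicted
print(f"upper limit is a factor ~{margin:.0f} above the prediction")
```

The limit is thus nearly three orders of magnitude above even the most favorable prediction.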
The detectability of gamma-ray emission resulting from the e$^{+/-}$
annihilation strongly depends on the final photon spectrum, which is
in turn determined by the conditions in which annihilation occurs. The
final spectrum could in principle include a narrow or a broad, blueshifted
or redshifted line-like feature near 511~keV, or may be dominated by
a very extended excess in the soft gamma-ray energy range
\citep[e.g.][]{Maciolek1995,Svensson1987}. To give a general idea
about the sensitivity of INTEGRAL, we consider the IBIS/PICsIT upper
limits in the energy range 468--572~keV, i.e., around the 511~keV
annihilation line, during the targeted follow-up observation. This
limit corresponds to 3.1$\times$10$^{43}$~erg~s$^{-1}$ (see
Table~\ref{tab:fov}). This luminosity roughly constrains the total
rate of annihilation to less than
1.7$\times$10$^{-13}$M$_\odot$~s$^{-1}$. A particularly stringent
upper limit can be set by SPI on the flux of a narrow annihilation
line between 505 and 515\,keV, which is less than
4.5$\times$10$^{42}$\,erg\,s$^{-1}$.
\section{Conclusions}
We reported the independent INTEGRAL detection of a short gamma-ray
burst (GRB~170817A), in coincidence with that found by Fermi-GBM
\ch{(the association significance between INTEGRAL and Fermi-GBM is
\VAR{search.assoc.gbm.sig|round(1)}~$\sigma$)}, which is for the first
time unambiguously associated with the gravitational wave event
GW170817 observed by LIGO/Virgo and consistent with a binary neutron
star merger. \ch{The significance of association between the
independent INTEGRAL GRB detection and GW170817 is
\VAR{search.assoc.ligo.sig|round(1)}~$\sigma$}. This is a turning point
for multi-messenger astrophysics.
This observation is compatible with the expectation that a large
fraction (if not all) of BNS mergers might be accompanied by a prompt
gamma-ray flash \citep{joint_paper}, detectable by INTEGRAL/SPI-ACS
and other facilities. INTEGRAL independently detects more than 20
confirmed sGRBs per year \citep{Savchenko2012} in a broad range of
fluences. With the growing sensitivity of the LIGO and Virgo
observatories, being joined in the future by other observatories, we
expect to detect more and more short GRBs associated with BNS mergers.
Additionally, we have exploited the unique uninterrupted serendipitous
INTEGRAL observations available immediately after GRB~170817A/GW170817 (lasting
about 20~ks), as well as dedicated targeted follow-up observations
carried out by INTEGRAL, starting as soon as 19.5 hours after the GRB/GW
(lasting in total 5.1 days). No hard X-ray or gamma-ray
signal above the background was found. By taking advantage of the full
sensitivity and wide FoV of the combination of the IBIS, SPI, and
JEM-X instruments, we provide a stringent upper limit over a broad
energy range, from 3\,keV up to 8\,MeV. The INTEGRAL upper limits above
80\,keV are tighter than those set by any other instrument and
constrain the isotropic-equivalent luminosity of the soft gamma-ray
afterglow to less than 1.4$\times$10$^{43}$~erg~s$^{-1}$
(80--300\,keV), assuming a distance of 40\,Mpc to the source. Our data
exclude the possibility that a short- or long-lasting bright hard X-ray
and/or soft gamma-ray phase of activity followed GRB~170817A/GW170817.
With these results, we show that
INTEGRAL continues to play a key role in the rapidly emerging
multi-messenger field by constraining both the prompt and
delayed gamma-ray emission associated with compact object mergers.
\section*{Acknowledgments}
This work is based on observations with INTEGRAL, an ESA project with
instruments and science data center funded by ESA member states
(especially the PI countries: Denmark, France, Germany, Italy,
Switzerland, Spain), and with the participation of Russia and the
USA. The INTEGRAL SPI project has been completed under the
responsibility and leadership of CNES. The SPI-ACS detector system has
been provided by MPE Garching/Germany. The SPI team is grateful to
ASI, CEA, CNES, DLR, ESA, INTA, NASA and OSTC for their support. The
Italian INTEGRAL team acknowledges the support of ASI/INAF agreement
n.\ 2013-025-R.1. RD and AvK acknowledge the German INTEGRAL support
through DLR grant 50 OG 1101. AL and RS acknowledge the support from
the Russian Science Foundation (grant 14-22-00271). AD is funded by
Spanish MINECO/FEDER grant ESP2015-65712-C5-1-R. Some of the results
in this paper have been derived using the \software{HEALPix}
\citep{healpix} package. We are grateful to the Fran\c cois Arago Centre
at APC for providing computing resources, and VirtualData from LABEX
P2IO for enabling access to the StratusLab academic cloud. We
acknowledge the continuous support by the INTEGRAL Users Group and the
exceptionally efficient support by the teams at ESAC and ESOC for the
scheduling of the targeted follow-up observations. We are grateful to
the LVC and Fermi-GBM teams for their suggestions on earlier versions
of this Letter. Finally, we thank the anonymous referee for
constructive suggestions.
\section{Introduction}
This paper deals with the solvability of a class of complex vector fields on the two-dimensional torus
$\mathbb{T}^2$. The main results are generalizations of those contained in the recent papers \cite{CDM} and \cite{CDM1}
where the focus was on solvability in domains of the plane $\mathbb{R}^2$.
The study of complex vector fields on $\mathbb{T}^2$, or on compact manifolds, has been considered in many papers (see for instance \cite{BDG}, \cite{BCM}, \cite{BPZZ}, \cite{DM}, \cite{HZ}, \cite{Mez3})
under the
assumption of separation of variables: the coefficients of the induced equations are independent of certain variables. This allows
the use of partial Fourier series to carry out the analysis.
Our approach here is different: there is no need for the assumption of separation of variables, and the structures are
not amenable to the use of Fourier series.
For the class of closed and locally solvable one forms $\omega=adx+bdy$ on $\mathbb{T}^2$ with orthogonal vector field
\[
L=b\dd{}{x}-a\dd{}{y}\, ,
\]
we associate a first integral $Z$ on the universal covering space $\mathbb{R}^2$. This function turns out to be
a global homeomorphism $Z:\, \mathbb{R}^2\longrightarrow{\mathcal C}$, sending a fundamental rectangle $R$ of the covering space $\mathbb{R}^2$ onto
a parallelogram $P_\tau$ in ${\mathcal C}$, with vertices $0$, $1$, $\tau$, and $1+\tau$ with $\textrm{Im}(\tau)>0$.
We use the Theta function $\Theta$ and the first integral $Z$ to associate a function
\[
M(p,s)=\frac{\Theta'\left(Z(p)-Z(s)+z_0\right)}{\Theta\left(Z(p)-Z(s)+z_0\right)}\, ,
\]
where $z_0=(1+\tau)/2$ is the unique zero of $\Theta$ in the parallelogram $P_\tau$.
This allows us to introduce a Cauchy-Pompeiu type operator in $\mathbb{R}^2$ given by
\[
T_\omega g(p)=\frac{1}{2\pi i}\,\int_R M(p,s)g(s)\, d\mu_s\,.
\]
The properties of this operator are summarized in Theorem \ref{propertiesofT} and used to study
the solvability of $L$ on $\mathbb{T}^2$.
We prove in Theorem \ref{Lu=f} that if $f\in L^q(\mathbb{T}^2)$ with $q>2+\sigma$, where $\sigma$
is a positive number associated to $\omega$, then equation $Lu=f$ has a H\"{o}lder continuous solution in $\mathbb{T}^2$
if and only if $\displaystyle\int_{\mathbb{T}^2}f=0$.
For $A\in L^q(\mathbb{T}^2)$, we show in Theorem \ref{Lu=Au}, that equation $Lu=Au$ has a solution if and only if
$\displaystyle\frac{1}{2\pi i}\,\int_{\mathbb{T}^2}A$ is in the lattice generated by $1$ and $\tau$ in ${\mathcal C}$.
Finally, in Theorem \ref{Lu=Au+Bbaru} we give a necessary and sufficient condition for the solvability of the equation $Lu=Au+B\overline{u}$ and deduce a similarity principle with the solutions of $Lu=0$. As a consequence we
show that any solution on $\mathbb{T}^2$ of $Lu=Au+B\overline{u}$ has the form $u=\exp(s)$ with $s$ continuous on $\mathbb{T}^2$.
\bigskip
This paper was written when the second author was visiting the Department of Mathematics and Statistics at Florida International University.
He is grateful and would like to thank the host institution for the support provided during the visit.
\section{ A class of hypocomplex structures}
We define a class of differential forms on the two dimensional torus $\mathbb{T}^2$ and associate a global first integral
on the universal covering space $\mathbb{R}^2$.
Let
\begin{equation}
\omega =a(x,y)dx+b(x,y)dy
\end{equation}
be a non-vanishing closed one-form on the two dimensional torus $\mathbb{T}^2$ where $(x,y)$ are the angular coordinates.
We assume that $a$ and $b$ are functions of class $C^{1+\varepsilon}$, $\varepsilon>0$, and that $\omega$ satisfies the following properties:
\begin{itemize}
\item[(a)] The set of non-ellipticity
\[
\Sigma =\left\{p\in\mathbb{T}^2;\ \omega(p)\wedge\overline{\omega(p)} =0 \right\}
\]
is a one-dimensional manifold;
\item[(b)] For each connected component $\Sigma_i$ of $\Sigma$, there exists a positive
number $\sigma_i$ such that for every $p\in \Sigma_i$
\[
\omega\wedge\overline{\omega} =\abs{\rho_i}^{\sigma_i}g\, dx\wedge dy
\]
in a neighborhood of $p$, where $\rho_i$ is a defining function of $\Sigma_i$ near $p$
and $g$ a non-vanishing function;
\item[(c)] The differential form $\omega$ is hypocomplex (see \cite{BCH} and \cite{Tre}). This is equivalent to
$\omega$ having locally open first integrals. That is, for every $p\in\mathbb{T}^2$,
there exist an open set $U\subset\mathbb{T}^2$ with $p\in U$ and a function $\zeta\in C^{1+\epsilon}$
such that $d\zeta\wedge\omega=0$ and
$\zeta:\, U\,\longrightarrow\, \zeta(U)\,\subset\, {\mathcal C}$
is a homeomorphism.
\end{itemize}
\begin{remark}\label{abarb}
Assumption (c) implies local solvability of $\omega$ (Condition (P) of Nirenberg-Treves) which in turn implies that the function
$\textrm{Im}(a\overline{b})$ does not change sign (see \cite{BCH} and \cite{Tre}).
\end{remark}
As in {\cite{CDM}}, we can assume that there exist local coordinates $(s,t)$ near points $p \in \Sigma_i$ in which the differential form $\omega$ is a multiple
of
\begin{equation}
ds+i\abs{t}^{\sigma_i}dt
\end{equation}
with first integral $\zeta_i$
\[
\zeta_i =s+i\frac{t\abs{t}^{\sigma_i}}{\sigma_i+1}.
\]
We denote by $L$ the orthogonal vector field of $\omega$:
\begin{equation}
L=b(x,y)\dd{}{x}-a(x,y)\dd{}{y}\,.
\end{equation}
Let $\displaystyle \Pi :\, \mathbb{R}^2\,\longrightarrow\, \mathbb{T}^2$ be the covering map and denote by
$R$ the fundamental rectangle:
\begin{equation}\label{R}
R=\left\{ (x,y)\in\mathbb{R}^2 ;\ 0\leqslant x \leqslant1,\ \ 0\leqslant y \leqslant1\, \right\}\,.
\end{equation}
We consider the pullback
\[
\Omega =\Pi^\ast\omega =\Pi^\ast a(x,y)dx+\Pi^\ast b(x,y)dy\quad\textrm{and}\quad \mathbf{L}=\Pi^\ast b(x,y)\dd{}{x}-\Pi^\ast a(x,y)\dd{}{y}\,.
\]
Hence $\Pi^\ast a$ and $\Pi^\ast b$ are doubly periodic in $\mathbb{R}^2$:
\begin{equation}\begin{array}{lll}
\Pi^\ast a(x+1,y) & =\ \Pi^\ast a(x,y+1)& =\ \Pi^\ast a(x,y)\, ;\\
\Pi^\ast b(x+1,y) & =\ \Pi^\ast b(x,y+1)& =\ \Pi^\ast b(x,y)\,.
\end{array}\end{equation}
It should be noted that it follows from $d\omega =0$ that $\omega$ is locally exact and that
the function
\begin{equation}\label{primitive}
Z(x,y)=\int_{(0,0)}^{(x,y)}\Omega
\end{equation}
is a global first integral of $\Omega$. Furthermore it follows from the double
periodicity of $\Omega$
that there exist constants $C_1,\, C_2\,\in\, {\mathcal C}$ such that for every $(x,y)\in\mathbb{R}^2$
\begin{equation}
Z(x+1,y)=Z(x,y)+C_1\quad\textrm{and}\quad Z(x,y+1)=Z(x,y)+C_2\,.
\end{equation}
\begin{lemma}
$\mathrm{Im}(C_1\overline{C_2})\ne 0\,.$
\end{lemma}
\begin{proof} Since $\textrm{Im}(a\overline{b})$ does not change sign (see Remark \ref{abarb}), we have
\begin{equation}\label{IMCC1}
\int_R\Omega\wedge\overline{\Omega} =\int_R 2i\, \Pi^\ast\textrm{Im}(a\overline{b})\, dxdy\, \ne 0\,.
\end{equation}
Let $l_1=[0,\ 1]\times\{0\}$ and $l_2=\{0\}\times [0,\ 1]$ be sides of the rectangle $R$. Then using
properties of $Z$, we can write
\begin{equation}\label{IMCC2}\begin{array}{ll}
\displaystyle \int_R\Omega\wedge\overline{\Omega} & =\displaystyle \int_RdZ\wedge d\overline{Z} =\int_{\partial R}Zd{\overline{Z}}\\ \\
& =\displaystyle \int_{l_1}(-C_2)d\overline{Z}+\int_{l_2}C_1d\overline{Z}=C_1\overline{C_2}-C_2\overline{C_1}\,.
\end{array}\end{equation}
The conclusion follows from (\ref{IMCC1}) and (\ref{IMCC2}). \qed
\end{proof}
After replacing $\omega$ by $\displaystyle\frac{\omega}{C_1}$ and, if necessary, after a change of variables
$\widetilde{x}=x,\ \widetilde{y}=-y$, we can assume that the primitive $Z$ satisfies
\begin{equation}\label{Zquasiperiodic}\begin{array}{l}
Z(x+1,y)=Z(x,y)+1\, ,\\
Z(x,y+1)=Z(x,y)+\tau\quad\textrm{with}\quad
\textrm{Im}(\tau) >0\,.
\end{array}\end{equation}
\begin{proposition}
The primitive $Z:\, \mathbb{R}^2\,\longrightarrow\,{\mathcal C}$ given by \eqref{primitive} is a global homeomorphism.
\end{proposition}
\begin{proof}
First we show that $Z(\mathbb{R}^2)$ is a closed subset of ${\mathcal C}$. Suppose that
$\{(x_n,y_n)\}_n $ is a sequence in $\mathbb{R}^2$ such that $\{Z(x_n,y_n)\}_n $ converges to a point
$q\in{\mathcal C}$. For every $n$, we can find $\alpha_n,\, \beta_n\, \in \mathbb{Z}$ and $(x_n^0,y_n^0)\,\in R$ such that
\[
(x_n,y_n)=(x_n^0+\alpha_n,y_n^0+\beta_n).
\]
Hence
\begin{equation}\label{convergenceZn}
Z(x_n,y_n)=Z(x_n^0,y_n^0)+\alpha_n+\beta_n\tau \,.
\end{equation}
The sequence $\{(x_n^0,y_n^0)\}_n \subset R$ is bounded and so is the sequence $\{Z(x_n^0,y_n^0)\}_n$.
It follows then from the convergence of $Z(x_n,y_n)$, (\ref{convergenceZn}), and $\textrm{Im}(\tau)>0$ that
$\alpha_n$ and $\beta_n$ are bounded sequences in $\mathbb{Z}$. Therefore, the sequence $\{(x_n,y_n)\}_n $ is bounded in
$\mathbb{R}^2$ and consequently $(x_n,y_n)$ converges to a point $p\in\mathbb{R}^2$ and so $Z(p)=q$.
Since $Z$ is also a local homeomorphism (see assumptions on $\omega$), then $Z(\mathbb{R}^2)$ is also open in ${\mathcal C}$.
Hence $Z(\mathbb{R}^2)={\mathcal C}$. This means that $Z:\, \mathbb{R}^2\,\longrightarrow\,{\mathcal C}$ is a covering map and, therefore, it is
a homeomorphism since ${\mathcal C}$ is simply connected.
\qed
\end{proof}
\begin{remark}
It follows from the hypotheses on $\omega$ that the vector field $\mathbf{L}$ is
hypocomplex in $\mathbb{R}^2$ (see \cite{BCH}, \cite{Tre}). In particular if a function $U$ solves
$\mathbf{L}U=0$ in a region $S\subset\mathbb{R}^2$, then $U$ can be written as $U=H\circ Z$
with $H$ a holomorphic function in $Z(S)\subset{\mathcal C}$.
\end{remark}
\section{An integral operator via the Theta function}
We use the Theta function to construct a generalized Cauchy-Pompeiu operator for the vector field
$\mathbf{L}$ that enables us to construct solutions on the torus. For $\tau\in {\mathcal C}$ with $\textrm{Im}(\tau)>0$, consider the Theta function
\begin{equation}
\Theta (z)=\sum_{m\in\mathbb{Z}}\ei{i\pi m^2\tau}\ei{2\pi imz}\,.
\end{equation}
The following properties of $\Theta$ will be used (for details see \cite{N}).
\begin{itemize}
\item[($\imath$)] $\Theta (z+1)=\Theta (z)\,$;
\item[($\imath\imath$)] $\Theta(z+\tau)=\ei{-i\pi \tau -2\pi iz}\Theta (z)\,$;
\item [($\imath\imath\imath$)] The only zero of $\Theta(z)$ in the parallelogram $P_\tau$ with vertices
$0$, $1$, $\tau$, and $1+\tau$ is simple and is given by
\[
z_0=\frac{1+\tau}{2}\,.
\]
The zeros of $\Theta$ in $\mathbb{R}^2$ are $z_{jk}=z_0+j+k\tau$ with $j,k\in\mathbb{Z}$.
\end{itemize}
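These three properties are easy to check numerically from the truncated series. The following sketch (a hypothetical NumPy check, not part of the argument) verifies ($\imath$)--($\imath\imath\imath$) for a sample $\tau$ with $\textrm{Im}(\tau)>0$:

```python
import numpy as np

def theta(z, tau, N=60):
    """Truncated series Theta(z) = sum_m exp(i*pi*m^2*tau) exp(2*pi*i*m*z).
    Exponents are combined so the Gaussian decay in m controls overflow."""
    m = np.arange(-N, N + 1)
    return np.sum(np.exp(1j * np.pi * m**2 * tau + 2j * np.pi * m * z))

tau = 0.3 + 1.1j           # sample modulus with Im(tau) > 0
z = 0.17 - 0.05j           # arbitrary test point

# (i)  periodicity: Theta(z + 1) = Theta(z)
assert abs(theta(z + 1, tau) - theta(z, tau)) < 1e-10

# (ii) quasi-periodicity: Theta(z + tau) = exp(-i*pi*tau - 2*pi*i*z) Theta(z)
assert abs(theta(z + tau, tau)
           - np.exp(-1j * np.pi * tau - 2j * np.pi * z) * theta(z, tau)) < 1e-10

# (iii) the zero in the fundamental parallelogram sits at z0 = (1 + tau)/2
z0 = (1 + tau) / 2
assert abs(theta(z0, tau)) < 1e-10
print("theta identities (i)-(iii) verified")
```

Combining the two exponents before exponentiating matters: the factor $\ei{2\pi imz}$ alone can overflow for large $\abs{m}$, but the Gaussian decay $\mathrm{e}^{-\pi m^2\textrm{Im}(\tau)}$ dominates once the exponents are summed.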
\bigskip
For $p,\, s\,\in\mathbb{R}^2$, define the function $M(p,s)$ by
\begin{equation}\label{M(p,s)}
M(p,s)=\frac{\Theta'\left(Z(s)-Z(p)+z_0\right)}{\Theta\left(Z(s)-Z(p)+z_0\right)}\,.
\end{equation}
The function $M$ is meromorphic in $Z$ and satisfies the following
\begin{lemma}
For every $p\in\mathbb{R}^2$ and $s$ near $p$, we have
\begin{equation}\label{Property1ofM}
M(p,s)=\frac{1}{Z(s)-Z(p)} + H(Z(s))
\end{equation}
with $H$ a holomorphic function near $Z(p)$. Furthermore,
for each $j,\, k\,\in\mathbb{Z}$
\begin{equation}\label{Property2ofM}
M(p,s+(j,k))=M(p,s)-2\pi i k\,.
\end{equation}
\end{lemma}
\begin{proof}
Property (\ref{Property1ofM}) follows directly from the definition (\ref{M(p,s)}) of $M$ and
the properties of the $\Theta$ function.
To verify (\ref{Property2ofM}), notice that since
\[
\Theta (z+j+k\tau)=\ei{-i\pi k^2\tau -2\pi i kz}\Theta (z)
\]
then
\[
\Theta' (z+j+k\tau)=\ei{-i\pi k^2\tau -2\pi i kz}\left[ \Theta' (z)-2\pi ik\Theta (z)\right]\,.
\]
Therefore
\[\begin{array}{ll}
M(p,s+(j,k))& =\displaystyle\frac{\Theta'\left(Z(s+(j,k))-Z(p)+z_0\right)}{\Theta\left(Z(s+(j,k))-Z(p)+z_0\right)}\\
& =\displaystyle\frac{\Theta'\left(Z(s)-Z(p)+z_0+j+k\tau\right)}{\Theta\left(Z(s)-Z(p)+z_0+j+k\tau\right)}\\
& = \displaystyle\frac{\Theta'\left(Z(s)-Z(p)+z_0\right)}{\Theta\left(Z(s)-Z(p)+z_0\right)} -2\pi i k = M(p,s)-2\pi ik\,.\qed
\end{array}\]
\end{proof}
Now we use the function $M$ as the kernel of the operator $T_\omega $ defined for $g\in L^q(\mathbb{R}^2)$ by
\begin{equation}\label{operatorT}
T_\omega g(p)=\frac{1}{2\pi i}\int_RM(p,s)g(s)d\mu_s
\end{equation}
where $d\mu_s$ is the density measure in $\mathbb{R}^2$.
A simple version of this operator was considered in \cite{Mez} and \cite{Mez2} for other classes of vector fields, and more recently in \cite{CDM} and \cite{CDM1}.
Let
\begin{equation}\label{sigma}
\sigma =\max_{1\leqslant i\leqslant N}\sigma_i\, ,
\end{equation}
where $\sigma_i$ is the positive number associated with the connected component $\Sigma_i$ of the
characteristic set $\Sigma$ given in hypothesis (b) on $\omega$
and where $N$ is the number of connected components of $\Sigma$.
It follows from property (\ref{Property1ofM}) of $M$ and from Theorem 16 in \cite{CDM} that for $g\in L^q(\mathbb{R}^2)$ with $q>2+\sigma$, we have
\begin{equation} \label{alpha}
T_\omega g\in C^\alpha(R),\ \ \textrm{with}\ \
\alpha =\frac{2-p-\mu}{p}\, ,\ p=\frac{q}{q-1}\, , \ \textrm{and}\
\mu=\frac{\sigma}{\sigma+1}\,.
\end{equation}
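Note that $\alpha>0$ exactly when $q>2+\sigma$: the inequality $2-p-\mu>0$ with $p=q/(q-1)$ and $\mu=\sigma/(\sigma+1)$ reduces to $q>(2-\mu)/(1-\mu)=\sigma+2$. A small numerical sweep (hypothetical Python, using nothing beyond the formula above) confirms this:

```python
# alpha = (2 - p - mu)/p with p = q/(q-1) and mu = sigma/(sigma+1);
# claim: alpha > 0 exactly when q > 2 + sigma.
def alpha(q, sigma):
    p = q / (q - 1)
    mu = sigma / (sigma + 1)
    return (2 - p - mu) / p

for sigma in (0.0, 0.5, 1.0, 3.0):
    for q in (x / 10 for x in range(12, 120)):
        if abs(q - (2 + sigma)) < 1e-9:
            continue  # skip the borderline value, where alpha = 0
        assert (alpha(q, sigma) > 0) == (q > 2 + sigma)
print("alpha > 0 iff q > 2 + sigma")
```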
\begin{proposition}\label{testfunction}
Let $\phi\,\in\, C^\infty_0(\mathbb{R}^2)$. Then for every $p\in R$ we have
\begin{equation}\label{valuesofphi}
\sum_{j,k\in\mathbb{Z}}\phi(p+(j,k)) =\frac{-1}{2\pi i}\,\int_{\mathbb{R}^2} M(p,s)\mathbf{L}\phi(s)\, d\mu_s.
\end{equation}
\end{proposition}
\begin{proof}
Let $p_\ast $ be a point in the interior of the rectangle $R$, $z_\ast=Z(p_\ast)$ and $D_\epsilon$ be the disc with
center $z_\ast$ and radius $\epsilon >0$. We take $\epsilon$ small enough so that $D_\epsilon\,\subset\, R$. Set
\[
K_\epsilon^{jk}=Z^{-1}(D_\epsilon +j+k\tau)\quad\textrm{and}\quad
\mathbb{R}^2_\epsilon =\mathbb{R}^2\backslash \bigcup_{j,k\in\mathbb{Z}}K^{jk}_\epsilon\,.
\]
Using the fact that $\mathbf{L}M(p_\ast,s)=0$ in $\mathbb{R}^2_\epsilon$ and that $\text{\rm supp\,} (\phi)$ is compact,
Green's Theorem applied to the function $M(p_\ast,s)\mathbf{L} \phi(s)$ in a domain containing $\text{\rm supp\,} (\phi)$ gives
\begin{equation}\label{Greenforphi}
\int_{\mathbb{R}^2_\epsilon} M(p_\ast,s)\mathbf{L} \phi(s)\,d\mu_s =-
\sum_{j,k\in\mathbb{Z}}\int_{\partial K^{jk}_\epsilon} M(p_\ast,s)\phi(s)\, dZ(s)\,.
\end{equation}
Properties (\ref{Property1ofM}) and (\ref{Property2ofM}) together with a change of variables in the integrals over
$\partial K^{jk}_\epsilon$ give
\begin{multline}\label{integralKepsilon}
\displaystyle\int_{\partial K^{jk}_\epsilon}\!\!\! M(p_\ast,s)\phi(s)\, dZ(s) =\displaystyle\int_{\partial K^{00}_\epsilon} \!\!\! M(p_\ast,s+(j,k))\phi(s+(j,k))\, dZ(s)\\ \\
=\displaystyle\int_{\partial D_\epsilon}\!\!\! \left( M(p_\ast,Z^{-1}(\zeta))-2\pi ik\right)\phi\left(Z^{-1}(\zeta)+(j,k)\right)\, d\zeta\\ \\
=\displaystyle\int_0^{2\pi}\!\!\! \left[ \frac{1}{\epsilon\ei{i\theta}}+H(\epsilon\ei{i\theta})-2\pi ik\right]\phi\left( Z^{-1}(z_\ast+\epsilon\ei{i\theta})+(j,k)\right)
i\epsilon\ei{i\theta}d\theta\,.
\end{multline}
Formula (\ref{valuesofphi}) follows from (\ref{Greenforphi}) and (\ref{integralKepsilon}) by taking $\epsilon\rightarrow 0\,.$
\qed
\end{proof}
We have the following theorem:
\begin{theorem}\label{propertiesofT}
For every function $P\in L^q(\mathbb{R}^2)$ with $q>2+\sigma$,
the function $T_\omega P \in C^\alpha(\mathbb{R}^2)$ (with $\alpha$ given in \eqref{alpha})
satisfies
\begin{itemize}
\item[($\imath$)] $T_\omega P(x+1,y)=T_\omega P(x,y)\,$;
\item[($\imath\imath$)] $T_\omega P(x,y+1)=T_\omega P(x,y)-\displaystyle\int_RP(s)d\mu_s\,$; and
\item[($\imath\imath\imath$)] If in addition $P$ is doubly periodic, then $\mathbf{L} T_\omega P=P\,.$
\end{itemize}
\end{theorem}
\begin{proof}
Properties ($\imath$) and ($\imath\imath$) follow directly from (\ref{Property2ofM}). To verify ($\imath\imath\imath$),
let $\phi\in C^\infty_0(\mathbb{R}^2)$. Then using Proposition \ref{testfunction} we find
\begin{equation}\begin{array}{ll}
<\mathbf{L} T_\omega P,\, \phi> & =\displaystyle -<T_\omega P,\, \mathbf{L} \phi>=-\int_{\mathbb{R}^2}T_\omega P(p)\mathbf{L} \phi(p)\, d\mu_p\\ \\
& =\displaystyle\int_R\frac{-1}{2\pi i}\left[\int_{\mathbb{R}^2}M(p,s)\mathbf{L}\phi(p)d\mu_p\right]\,P(s)\, d\mu_s\\ \\
& =\displaystyle\int_R\sum_{j,k\in\mathbb{Z}}\phi(s+(j,k))\, P(s)\, d\mu_s\\ \\
&=\displaystyle\int_R\sum_{j,k\in\mathbb{Z}}\phi(s+(j,k))\, P(s+(j,k))\, d\mu_s\\ \\
& =\displaystyle\int_{\mathbb{R}^2}P(s)\phi(s)\, d\mu_s = <P,\phi>\,. \qed
\end{array}\end{equation}
\end{proof}
\begin{theorem}\label{Lu=f}
For $f\in L^q(\mathbb{T}^2)$ with $q>2+\sigma$, equation $Lu=f$ has a solution
$u\in C^\alpha(\mathbb{T}^2)$ if and only if $\displaystyle\int_{\mathbb{T}^2}f\, dxdy =0$.
\end{theorem}
\begin{proof}
If equation $Lu=f$ is solvable on $\mathbb{T}^2$, then
\[
\int_{\mathbb{T}^2}f\, dxdy=\int_{\mathbb{T}^2}Lu\, dxdy=\int_{\mathbb{T}^2}d(udZ)=0.
\]
Conversely if $\displaystyle\int_{\mathbb{T}^2}f\, dxdy=0$, then it follows from Theorem \ref{propertiesofT} that $T_\omega \Pi^\ast f$ is doubly
periodic and descends as a solution of $Lu=f$ on $\mathbb{T}^2$. \qed
\end{proof}
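For orientation, in the constant-coefficient elliptic model $\omega=dx+\tau\,dy$ (so $\Sigma=\emptyset$, $Z(x,y)=x+\tau y$, and $L=\tau\,\partial_x-\partial_y$), Theorem \ref{Lu=f} reduces to an elementary Fourier statement: $L$ acts on $\mathrm{e}^{2\pi i(mx+ny)}$ by multiplication by $2\pi i(\tau m-n)$, which vanishes only for $m=n=0$ since $\textrm{Im}(\tau)>0$. The following NumPy sketch (an illustration of this model case only, not of the operator $T_\omega$ itself) solves $Lu=f$ spectrally for a mean-zero $f$:

```python
import numpy as np

# Model case: omega = dx + tau*dy, Z(x,y) = x + tau*y, L = tau*d/dx - d/dy.
# On the torus L acts on exp(2*pi*i*(m*x + n*y)) by 2*pi*i*(tau*m - n), and
# tau*m - n = 0 forces m = n = 0 because Im(tau) > 0, so Lu = f is solvable
# exactly when the mean of f vanishes.
tau = 0.2 + 1.0j
N = 64
grid = np.arange(N) / N
X, Y = np.meshgrid(grid, grid, indexing="ij")

f = np.cos(2 * np.pi * X) * np.sin(4 * np.pi * Y)   # a mean-zero datum
assert abs(f.mean()) < 1e-12

fh = np.fft.fft2(f)
freqs = np.fft.fftfreq(N, d=1.0 / N)
M, Nn = np.meshgrid(freqs, freqs, indexing="ij")
symbol = 2j * np.pi * (tau * M - Nn)
symbol[0, 0] = 1.0                   # avoid 0/0; fh[0, 0] is already 0
uh = fh / symbol
uh[0, 0] = 0.0
u = np.fft.ifft2(uh)

# verify Lu = f by spectral differentiation
ux = np.fft.ifft2(2j * np.pi * M * np.fft.fft2(u))
uy = np.fft.ifft2(2j * np.pi * Nn * np.fft.fft2(u))
assert np.max(np.abs(tau * ux - uy - f)) < 1e-10
print("mean-zero f: Lu = f solved on the torus")
```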
\section{The equation $Lu=Au$ on $\mathbb{T}^2$}\label{sec4}
We give a necessary and sufficient condition for the global solvability of the equation $Lu=Au.$ For $A\in L^q(\mathbb{T}^2)$ with $q>2+\sigma$, we associate the number
\begin{equation}\label{nuofa}
\nu(A)=\frac{-1}{2\pi i}\int_{\mathbb{T}^2}A(p)\, d\mu_p =\frac{T_\omega \Pi^\ast A(0,1)-T_\omega\Pi^\ast A(0,0)}{2\pi i}\,.
\end{equation}
\begin{theorem}\label{Lu=Au}
For a function $A\in L^q(\mathbb{T}^2)$ with $q>2+\sigma$, equation
\begin{equation}\label{equationLu=Au}
Lu=Au
\end{equation}
has a solution in $C^\alpha(\mathbb{T}^2)$ if and only if the associated number given by \eqref{nuofa} is in the
lattice generated by 1 and $\tau$:
\begin{equation}\label{nuinlattice}
\nu(A)=j+k\tau\quad\textrm{with}\quad j,\, k\, \in\mathbb{Z}\,.
\end{equation}
In this case any solution of \eqref{equationLu=Au} has the form
\begin{equation}\label{solutionsofLu=Au}
u(p)=C\exp(T_\omega\Pi^\ast A (p)-2\pi i kZ(p))\quad\textrm{with}\quad C\in{\mathcal C}\,.
\end{equation}
\end{theorem}
\begin{proof}
Suppose that $\nu(A)$ is given by (\ref{nuinlattice}). The function $V\in C^\alpha(\mathbb{R}^2)$ given by
\[
V(x,y)=T_\omega \Pi^\ast A(x,y)-2\pi i kZ(x,y)
\]
satisfies $\mathbf{L} V=\Pi^\ast A$ by Theorem \ref{propertiesofT}, and by (\ref{nuinlattice}) it satisfies
\[
V(x+1,y)=V(x,y)-2\pi i k\ \ \textrm{and}\ \ V(x,y+1)=V(x,y)+2\pi i j\,.
\]
Hence $u(x,y)=\ei{V(x,y)}$ is doubly periodic and satisfies \eqref{equationLu=Au}.
To prove the necessity of (\ref{nuinlattice}), suppose equation (\ref{equationLu=Au}) has a solution
in $\mathbb{T}^2$ (note that in this case the solution is necessarily H\"{o}lder continuous by results contained in
\cite{CDM}). Then the function
\[
V(x,y)=\Pi^\ast u(x,y)\ei{-T_\omega\Pi^\ast A(x,y)}
\]
satisfies $\mathbf{L}V=0$ in $\mathbb{R}^2$. Hence there exists an entire function $H$ such that
$V=H\circ Z$. Furthermore, if $z=Z(x,y)$, then it follows from Theorem \ref{propertiesofT} that
\begin{equation}\label{propofH}\left\{\begin{array}{ll}
H(z+1) &= H(z)\\
H(z+\tau) &=\displaystyle \Pi^\ast u(x,y)\ei{-T_\omega\Pi^\ast A(x,y) -2i\pi \nu(A)}=\ei{ -2i\pi \nu(A)}H(z)\,.
\end{array}\right.\end{equation}
It follows from (\ref{propofH}) that $H$ can be factored through a function defined on the cylinder. That is,
$H$ can be written as
\[
H(z)=K(\zeta)\quad\textrm{with}\quad \zeta=\ei{2\pi iz}\,
\]
where $K$ is a holomorphic function in the punctured plane ${\mathcal C}^\ast$.
Moreover, $K$ satisfies
\begin{equation}\label{propofK}
K(\zeta\ei{2\pi i\tau})=H(z+\tau)=\ei{ -2i\pi \nu(A)}K(\zeta)\,.
\end{equation}
Consider the Laurent series of $K$: $K(\zeta)=\displaystyle\sum_{m\in\mathbb{Z}}a_m\zeta^m$. It follows at once from
(\ref{propofK}) that
\begin{equation}\label{Laurentcoeff}
a_m\ei{2i\pi m\tau}=a_m\ei{ -2i\pi \nu(A)}\,,\qquad\forall m\in\mathbb{Z}\,.
\end{equation}
Recall that $\textrm{Im}(\tau)>0$, so that the moduli $\abs{\ei{2i\pi m\tau}}$ are pairwise distinct. Hence, system (\ref{Laurentcoeff}) has
a nontrivial solution if and only if $\nu(A)=j+k\tau$ for some $j,\, k\in\mathbb{Z}$, and in this case
only the coefficient $a_{-k}$ survives, so that $K(\zeta )=a_{-k}\zeta^{-k}$. The function $\Pi^\ast u$ is therefore
\[
\Pi^\ast u(x,y)=a_{-k}\ei{T_\omega\Pi^\ast A(x,y)-2\pi ikZ(x,y)}\,.\quad \qed
\]
\end{proof}
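In the same model case $L=\tau\,\partial_x-\partial_y$, the lattice condition of Theorem \ref{Lu=Au} is transparent: the doubly periodic exponential $u=\mathrm{e}^{2\pi i(px+qy)}$ with $p,\, q\in\mathbb{Z}$ satisfies $Lu=Au$ with the constant $A=2\pi i(\tau p-q)$, and then $\nu(A)=q-\tau p$ is indeed a lattice point. A hypothetical NumPy check:

```python
import numpy as np

# Model case L = tau*d/dx - d/dy (i.e. Z(x,y) = x + tau*y).  For integers p, q,
# u = exp(2*pi*i*(p*x + q*y)) is doubly periodic and L u = 2*pi*i*(tau*p - q) u,
# so u solves Lu = A*u with A = 2*pi*i*(tau*p - q) constant, and
# nu(A) = -A/(2*pi*i) = q - tau*p = j + k*tau with (j, k) = (q, -p).
tau = 0.2 + 1.0j
p, q = 3, -2
A = 2j * np.pi * (tau * p - q)
nu = -A / (2j * np.pi)

j, k = q, -p
assert abs(nu - (j + k * tau)) < 1e-12   # nu(A) lies in the lattice

# sanity check Lu = A*u at a random point via central finite differences
rng = np.random.default_rng(0)
x0, y0 = rng.random(2)
u = lambda x, y: np.exp(2j * np.pi * (p * x + q * y))
h = 1e-6
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
assert abs(tau * ux - uy - A * u(x0, y0)) < 1e-4
print("Lu = Au with nu(A) = q - tau*p in the lattice")
```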
\section{The equation $Lu=Au+B\overline u$ on $\mathbb T^2$}\label{sec5}
In this section we give a necessary and sufficient condition for the solvability of the equation
\begin{equation}\label{Lu=Au+Bbaru}
Lu=Au+B\overline u
\end{equation}
on $\mathbb T^2$ and
deduce a similarity principle with the solutions of $Lu=0$ on $\mathbb T^2$ (which are in fact constant functions).
Let $A,\, B\in L^q(\mathbb T^2)$, with $q>2+\sigma$ where $\sigma$ is given in (\ref{sigma}).
For $k\in\mathbb{Z}$, define the operator $P_k$ by
\[
P_kv(x,y)=T_{\omega}\left[\Pi^\ast A+\tilde{B}_k\,\cdot\,\frac{\overline{e^v}}{e^v}\right](x,y)\,,
\]
where
\[
\tilde{B}_k (x,y) = \Pi^\ast B (x,y)\, \exp \left[ -2\pi i k (\overline{Z}(x,y)+Z(x,y))\right]\, .
\]
It follows from \cite{CDM} and property \eqref{Property1ofM} of $M$ that if $v\in L^q(\mathbb{R}^2)$ with
$q>2+\sigma$, then $P_kv\in C^\alpha (R)$ with $\alpha$ given in \eqref{alpha}. We restrict the action of $P_k$ to the subspace
$C(R)$.
\begin{proposition}\label{fixedpoint}
The operator $P_k$ has a fixed point in $C(R)$.
\end{proposition}
\begin{proof} It follows from \cite[Theorem 9]{CDM} that there exists $M>0$ such that
\[
|T_{\omega}F(x,y)|\,\leqslant\, M\|F\|_q\,,\quad \forall F\in L^q(R) \ \mathrm{and} \ (x,y)\in R\, .
\]
Hence
\[
\| P_k v\|_\infty \,\leqslant\, M\, (\|A\|_q+\|B\|_q)\, \doteq C\, \quad \forall v\in C(R).
\]
Consider the subset $K$ given by
\[
K=\{\, v\in C(R);\ \|v\|_\infty \leqslant C\,\}\, .
\]
$K$ is a closed, bounded, and convex subset of $C(R)$, and $P_k(K)$ is relatively compact in $C(R)$, since $P_k$ maps $K$ into a bounded subset of $C^\alpha(R)$ (Arzel\`a--Ascoli). For every $v\in K$ we have
\[
\|P_k(v)\|_\infty \,\leqslant\, M\, (\|\Pi^\ast A\|_q+\|\tilde{B}_k\, \exp(\bar{v}-v)\|_q)\,
\leqslant\, M\, (\|A\|_q+\|B\|_q)\, \doteq C\,.
\]
Hence $P_k(K)\subset K$. Furthermore $P_k:\, C(R)\, \longrightarrow\, C(R)$ is continuous.
Indeed, since the function $g(\zeta)=\exp(\overline{\zeta}-\zeta)$ is Lipschitz (with constant 2) in ${\mathcal C}$,
then for $v,\, v_0\, \in C(R)$, we have
\[\begin{array}{ll}
\|P_k(v)-P_k(v_0)\|_\infty & =
\, \left\lVert T_{\omega}\left[\tilde{B}_k\cdot\left(\exp (\bar{v}-v)-\exp (\bar{v}_0-v_0) \right)\right]\right\rVert_{\infty}\\ \\
& \leqslant\, M \|\tilde{B}_k\|_q\, \left\lVert \exp (\bar{v}-v)-\exp (\bar{v}_0-v_0) \right\rVert_{\infty}\\ \\
& \leqslant\, 2 M \|\tilde{B}_k\|_q\, \|v-v_0 \|_\infty\, .
\end{array}\]
Thus $P_k$ has a fixed point in $K$ (Schauder's Fixed Point Theorem). \qed
\end{proof}
\bigskip
Note that, as in Theorem \ref{propertiesofT}, for all $x,\, y\, \in [0, 1]$, $P_k$ satisfies
\begin{equation}\label{PropertiesofPk}
P_kv(1,y)=P_kv(0,y) \ \mathrm{and}\
P_kv(x,1)=P_kv(x,0) -\!\!\int_R\!\!\! \,\left(\Pi^\ast A+\tilde{B}_k\cdot\frac{\overline{e^v}}{e^v}\right)\, d\mu\,.
\end{equation}
Let $V_k$ be the set of fixed points of $P_k$: $\displaystyle V_k=\{\, v\in C(R),\ P_kv=v\, \}$.
Hence, for every $v\in V_k$, there is $\nu\in {\mathcal C}$ such that
$v(1,y)=v(0,y)$ and $v(x,1)-v(x, 0)=\nu$. Let
\[ \Lambda_k\doteq\{\, v(x,1)-v(x,0): v\in V_k\}\,.\]
\medskip
\begin{theorem}\label{SolvabilityofLu=Au+Bbaru}
Equation \eqref{Lu=Au+Bbaru} has a H{\"o}lder continuous solution on $\mathbb T^2$ if and only if
there are $j,k\in \mathbb Z$ such that $2\pi i(j-k\tau)\in \Lambda_k.$
Moreover any solution $u$ is such that
\[
\Pi^\ast u(x,y)=C\exp \left(2\pi ik Z(x,y) + P_k v(x,y)\right )\, ,
\]
with $C\in\mathbb C$, and $v \in V_k$.
\end{theorem}
\medskip
\begin{proof} Suppose that there is $v\in V_k$ with $v(x,1)-v(x,0)=2\pi i(j-k\tau)$, for some $j,k\in\mathbb Z$. Let
\[
U(x,y) = \exp\left[2\pi i k Z(x,y)+ P_k v(x,y)\right]\,,\quad (x,y)\in \mathbb R^2\, .
\]
It follows from property (\ref{PropertiesofPk}) and the assumption on $v$ that $U$ is doubly periodic, and, as $P_kv=v$,
we have
\[\begin{array}{ll}
\mathbf{L}(U) & =\displaystyle U\mathbf{L}(P_kv)=U\mathbf{L}\left(T_{\omega}\left[\Pi^\ast A+ \tilde{B}_k\cdot\frac{\overline{e^v}}{e^v}\right]\right)\\ \\
& =\displaystyle U\left(\Pi^\ast A+\Pi^\ast B\cdot\frac{\overline U}{U} \right)=\Pi^\ast A \cdot U+\Pi^\ast B \cdot {\overline U}\,.
\end{array}\]
Since $U$ is doubly periodic then $u=U\circ\Pi^{-1}\,\in C^\alpha(\mathbb{T}^2)$ satisfies $Lu=Au+B\overline{u}$.
Conversely, suppose that $u\in C^\alpha(\mathbb{T}^2)$ solves \eqref{Lu=Au+Bbaru}. Since $L$ is elliptic on
$\mathbb{T}^2\backslash\Sigma$, it follows that the zeros of $u$ are isolated on $\mathbb{T}^2\backslash\Sigma$ and
the function $\displaystyle\frac{\overline{u}}{u}\, \in L^\infty (\mathbb{T}^2)$.
Let
\[
V(x,y)=T_\omega \left[ \Pi^\ast A +\Pi^\ast B\, \Pi^\ast \left(\frac{\overline{u}}{u}\right)\right]\, .
\]
Theorem \ref{propertiesofT} implies that
\[
\mathbf{L} (V)= \Pi^\ast A +\Pi^\ast B\, \Pi^\ast\left(\frac{\overline{u}}{u}\right)
\]
and there exists $\beta\in{\mathcal C}$ such that
\begin{equation}\label{V-function}
V(x,y)=V(x+1,y)\quad\mathrm{and}\quad V(x,y)=V(x,y+1)+\beta,\quad \forall (x,y)\in\mathbb{R}^2\, .
\end{equation}
We have
\[
\mathbf{L} (\Pi^\ast u \, \ei{-V})=0\,
\]
in $\mathbb{R}^2$. Therefore, there exists an entire function $H$ in ${\mathcal C}$ such that
\[
\Pi^\ast u(x,y) \ei{-V(x,y)}=H(Z(x,y))\qquad\forall (x,y)\in\mathbb{R}^2\, .
\]
Moreover the double periodicity of $\Pi^\ast u$ and property \eqref{V-function} imply that the
entire function $H$ satisfies
\[
H(z+1)=H(z)\ \mathrm{ and }\ H(z+\tau)=e^{\beta}H(z)\,,\quad \forall z\in {\mathcal C}\, .
\]
As in the previous section, such an entire function $H$ is of the form $H(z)=Ce^{2\pi ikz}$ with $C\in {\mathcal C}$ and
$\beta=2\pi i(j+k\tau)$ for some $j,k\, \in\mathbb{Z}$. This completes the proof. \qed
\end{proof}
\begin{remark}
In particular, we have shown that a solution to $Lu=Au+B\overline u$ globally defined on $\mathbb T^2$ never vanishes unless it is identically zero.
\end{remark}
\section*{Introduction.}
In Riemannian geometry Vogel proved that every circle-preserving diffeomorphism is conformal, c.f. \cite{Vo} and \cite{Ku}. This theorem has been extended to pseudo-Riemannian manifolds in \cite{Cat}. Here, we shall extend this theorem to Finsler manifolds. Using the Cartan covariant derivative along a curve, the definition of circles in a Finsler manifold is given. This definition is a natural extension of Riemannian one, see for instance, \cite{NY}. Some typical examples of circles are
helices on a cylinder or a torus. It should be remarked that these circles need not be closed in general, although they may be closed in some cases, as on a torus.
A geodesic circle in Riemannian geometry, as well as in Finsler geometry, is a curve for which the first Frenet curvature $k_1$ is constant and the second curvature $k_2$ vanishes. In other words, a geodesic circle is a torsion-free curve of constant curvature. A \emph{concircular} transformation is defined by \cite{Ya} and \cite{Fi} in Riemannian geometry to be a conformal transformation which preserves geodesic circles.
This notion has been similarly developed in Finsler geometry by Agrawal and Izumi, cf. \cite{Ag,Iz,Iz2} and studied in \cite{AB,B} by one of the present authors.
The results obtained in this paper show that in the definition of concircular transformations the conformal assumption is, a priori, not necessary. That is to say, if a transformation preserves geodesic circles then it is conformal.
\section{Preliminaries}
Let $M$ be a real n-dimensional manifold of class $C^ \infty$. We
denote by $TM\rightarrow M$ the bundle
of tangent vectors and by $ \pi:TM_{0}\rightarrow M$ the fiber bundle of
non-zero tangent vectors.
A {\it{Finsler structure}} on $M$ is a function
$F:TM \rightarrow [0,\infty )$, with the following properties: (I)
$F$ is differentiable ($C^ \infty$) on $TM_{0}$; (II) $F$ is
positively homogeneous of degree one in $y$, i.e.
$F(x,\lambda y)=\lambda F(x,y), \forall\lambda>0$, where we denote
an element of $TM$ by $(x,y)$.
(III) The Hessian matrix of $F^{2}/2$ is positive definite on
$TM_{0}$; $(g_{ij}):=\left({1 \over 2} \left[
\frac{\partial^{2}}{\partial y^{i}\partial y^{j}} F^2
\right]\right).$
A \textit{Finsler
manifold} $(M,g)$ is a pair of a differentiable manifold $M$ and a tensor field $g=(g_{ij})$ on $TM$ which is defined by a Finsler structure $F$. The spray of a Finsler structure $F$ is a vector field on $TM$
\[ G=y^i \frac{\pa }{\pa x^i} -2 G^i \frac{\pa }{\pa y^i},\]
where
\[ G^i =\frac{g^{il}}{4} \Big \{ \frac{\pa^2 F^2}{\pa x^m \pa y^l} y^m -\frac{\pa F^2}{\pa x^l} \Big \},\]
and $(g^{ij}):=(g_{ij})^{-1}$.
Let $(M,g)$ be a ${\cal C}^\infty$ Finsler manifold and let $c$ be an oriented ${\cal C}^\infty$ parametric curve on
$M$ with equation $x^i(t)$. We choose the pair $(x,\dot x)$ to be the line element
along the curve $c$.
Let $(x^i,y^i)$ be the local coordinates on the slit tangent bundle $TM_{0}$. Using a Finsler connection we can choose the natural basis $(\delta /\delta x^i,\partial/\partial y^i)$, where $\frac{\delta }{\delta x^{j}}:=\frac{\partial }{\partial x^{j}}-N^i_j\frac{\partial }{\partial y^{i}}$, and $N^i_j :=\frac{1}{2}\frac{\pa G^i}{\pa y^j}$. The dual basis is given by $(d x^i,\delta y^i)$, where $\delta y^k:=dy^k+N^k_ldx^l$.
Let $X$ be a ${\cal C}^\infty$ vector field
$X=X^i(t)\frac{\partial}{\partial x^i}|_{c(t)}$ along $c(t)$. We denote the Cartan covariant derivative of $X$ in direction of $\dot c = \frac{dx^j}{dt}\frac{\partial}{\partial x^j}$ by $\nabla_{_{\dot c}}X = \frac{\delta X^i}{dt} \frac{\pa }{\pa x^i}|_{c(t)}$. The components
$\frac{\delta X^i}{dt}$ can be determined explicitly as follows.
\be \label{Eq;CovDerAlongCurves2}
\nabla_{_{\dot c}}X =\nabla_{_{\dot c}}X^i\frac{\partial}{\partial x^i}=\frac{d X^i}{ dt}\frac{\partial}{\partial x^i}+X^i \nabla_{\dot c}\frac{\partial}{\partial x^i}.
\ee
The last term in Eq. (\ref{Eq;CovDerAlongCurves2}) is given by $\nabla_{{\dot c}}\frac{\partial}{\partial x^i}:=\omega_i^j(\dot c)\frac{\partial}{\partial x^j}$, where $\omega_i^j(\dot c):=(\Gamma_{ik}^jdx^k+C^j_{ik}\delta y^k)(\dot c)$, is the connection $1$-form of Cartan connection, cf. \cite{BCS}, page 39. Here, the coefficients $\Gamma^{i}_{jk}$ are Christoffel symbols with respect to the horizontal partial derivative $\frac{\delta }{\delta x^{j}}$, that is,
$$\Gamma^{i}_{jk}:=\frac{1}{2}g^{ih}(\frac{\delta g_{hk}}{\delta x^{j}}+ \frac{\delta g_{hj}}{\delta x^{k}}- \frac{\delta g_{jk}}{\delta x^{h}}),$$
and $C_{hk}^{i}:=\frac{1}{2}g^{im}\frac{\pa g_{mk}}{\pa y^h}$, is the \emph{Cartan torsion tensor}.
Plugging $\delta y^k$ in $\omega_i^j(\dot c)$
\begin{eqnarray*}
\omega_i^j(\dot c)&=&(\Gamma_{ik}^jdx^k+C^j_{ik}(d y^k+N^k_ldx^l))(\dot c),\\
&=&(\Gamma_{ik}^jdx^k+C^j_{is}N^s_kdx^k)(\frac{dx^l}{dt}\frac{\partial}{\partial x^l}),\\
&=&(\Gamma_{il}^j+C^j_{is}N^s_l)(\frac{dx^l}{dt}),
\end{eqnarray*}
and replacing the resulting term in Eq. (\ref{Eq;CovDerAlongCurves2}), we obtain
the components of Cartan covariant derivative of $X$ in direction of $\dot c$.
\be\label{CartanCovDerAlongC1}
\frac{\delta}{ dt} X^i = \frac{d X^i}{ dt}+ (\Gamma_{kh}^i+C^i_{ks}N^s_h)X^k\frac{dx^h}{dt}.
\ee
The Cartan covariant derivative $\nabla_{\dot c}$ is metric-compatible along $c$, that is, for any vector fields $X$ and $Y$ along $c$,
\be \nn
\frac{d}{dt} g(X,Y)= g(\nabla_{\dot c}X, Y)+g(X, \nabla_{\dot c}Y).
\ee
More details about these preliminaries may be found in \cite{AZ1,BCS,Sh}.
\section{Circles in a Finsler manifold}
As a natural extension of circles in Riemannian geometry, cf. \cite{NY}, we recall the definition of a generalized circle in a Finsler manifold, called here simply, circle.
\begin{defn}
Let $(M,g)$ be a Finsler manifold of class $C^\infty$. A smooth curve $c:I\subset \R\rightarrow M$ parameterized by arc length $s$ is called a \emph{circle} if there exist a unitary vector field $Y=Y(s)$ along $c$ and a positive constant $k$ such that
\begin{eqnarray}
\label{Eq;cov der of X 1}\nabla_{c'} X= k Y,\\
\label{Eq;cov der of X 2}\nabla_{c'} Y= - k X,
\end{eqnarray}
where, $X:=c'=dc/ds$ is the unitary tangent vector field at each point $c(s)$.
The number $1/k$ is called the \emph{radius} of the circle.
\end{defn}
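In the Euclidean special case $F(x,y)=\abs{y}$ the coefficients $\Gamma^i_{jk}$ and $C^i_{jk}$ vanish, $\nabla_{c'}$ reduces to the ordinary $s$-derivative, and the definition recovers the usual circle of radius $r=1/k$. The following sketch (a hypothetical NumPy check of this flat case) verifies Eqs. (\ref{Eq;cov der of X 1}) and (\ref{Eq;cov der of X 2}) for $c(s)=(r\cos(s/r),\, r\sin(s/r))$:

```python
import numpy as np

# Flat case: F(x, y) = |y|, so Gamma = C = 0 and nabla_{c'} is d/ds.
# For the arc-length circle c(s) = (r cos(s/r), r sin(s/r)) of radius r,
# the unit tangent X and unit normal Y satisfy X' = k Y, Y' = -k X, k = 1/r.
r = 2.5
k = 1.0 / r
s = 0.73                 # arbitrary parameter value
h = 1e-6                 # central-difference step

c = lambda t: np.array([r * np.cos(t / r), r * np.sin(t / r)])
X = lambda t: np.array([-np.sin(t / r), np.cos(t / r)])    # X = c'
Y = lambda t: np.array([-np.cos(t / r), -np.sin(t / r)])   # unit normal

# X is indeed the unit tangent c'(s)
assert np.linalg.norm((c(s + h) - c(s - h)) / (2 * h) - X(s)) < 1e-8
assert abs(np.linalg.norm(X(s)) - 1) < 1e-12

# the two circle equations, with nabla the ordinary derivative:
Xp = (X(s + h) - X(s - h)) / (2 * h)
Yp = (Y(s + h) - Y(s - h)) / (2 * h)
assert np.linalg.norm(Xp - k * Y(s)) < 1e-8
assert np.linalg.norm(Yp + k * X(s)) < 1e-8
print("Euclidean circle satisfies the circle equations with k = 1/r")
```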
Comparing this definition of a circle with the definition of a geodesic circle in Finsler geometry, recalled in the introduction, we see that if we exclude the trivial case $k_1=0$, that is, if we remove geodesics, then we obtain the definition of a circle in a Finsler manifold.
\begin{lem}\label{prop;diff eq geod circ}
Let $c=c(s)$ be a unit speed curve on an $n$-dimensional Finsler manifold $(M,g)$. If $c$ is a circle, then it satisfies the following ODE
\be\label{Eq;circle equ+1}
\nabla_{c'}\nabla_{c'} X+ g(\nabla_{c'}X,\nabla_{c'}X) X=0,
\ee
where $ g( \ ,\ )$ denotes the scalar product determined by the tangent vector $c'$.
Conversely, if $c $ satisfies (\ref{Eq;circle equ+1}), then it is either a geodesic or a circle.
\end{lem}
\begin{proof} Assume that $c$ is a circle parameterized by arc-length. By means of metric compatibility we have
$$ g(\nabla_{c'} X, X) = \frac{1}{2}\frac{d}{ds} [ g(X, X) ] =0.$$
Eqs. (\ref{Eq;cov der of X 1}) and (\ref{Eq;cov der of X 2}) yield
\be
\nabla_{c'}\nabla_{c'}X= k \nabla_{c'} Y= -k^2 X.\label{Eq;circle equ+2}
\ee
This implies
\[ k^2 = - g(\nabla_{c'}\nabla_{c'}X, X) = -\frac{d}{ds} [ g(\nabla_{c'} X, X) ] + g(\nabla_{c'} X, \nabla_{c'} X).\]
Since $g(\nabla_{c'} X, X)=0$, this gives $k^2 = g(\nabla_{c'} X, \nabla_{c'} X)$; plugging it into (\ref{Eq;circle equ+2}), we obtain (\ref{Eq;circle equ+1}).
Conversely, assume that $c=c(s)$ is a unit speed curve on $M$ which satisfies Eq. (\ref{Eq;circle equ+1}). Then by metric compatibility property of $\nabla_{c'}$, we have
\begin{eqnarray}\label{Eq;metric comp 2}
\frac{d}{ds} g( \nabla_{c'}X,\nabla_{c'}X ) = 2 g(\nabla_{c'}\nabla_{c'}X, \nabla_{c'}X).
\end{eqnarray}
Plugging Eq. (\ref{Eq;circle equ+1}) into this equation we have
\begin{eqnarray}\label{Eq;metric comp 3}
g(\nabla_{c'}\nabla_{c'}X, \nabla_{c'}X) = - g(\nabla_{c'}X, \nabla_{c'}X) g(X, \nabla_{c'}X).
\end{eqnarray}
Taking into account Eqs. (\ref{Eq;metric comp 2}) and (\ref{Eq;metric comp 3}) and the fact that $g(X, \nabla_{c'} X)=0$ for unitary tangent vector field $X$, we have
\begin{eqnarray*}
\frac{d}{ds} g( \nabla_{c'}X,\nabla_{c'}X ) = 0.
\end{eqnarray*}
Therefore $k^2:=g( \nabla_{c'}X,\nabla_{c'}X )$ is constant along $c$. If $k=0$, then $c$ is a geodesic. If $k\neq 0$, set
\be\label{Eq;metric comp 4}
Y=\frac{1}{k} \nabla_{c'}X,
\ee
then $Y$ is a unit vector field which satisfies Eq. (\ref{Eq;cov der of X 1}). Taking the covariant derivative of Eq. (\ref{Eq;metric comp 4}) and using Eq. (\ref{Eq;circle equ+1}) yields
\begin{eqnarray*}
\nabla_{c'}Y=\frac{1}{k} \nabla_{c'}\nabla_{c'}X = -k X.
\end{eqnarray*}
Thus we have Eqs. (\ref{Eq;cov der of X 1}) and (\ref{Eq;cov der of X 2}), hence $c$ is a circle. This completes the proof.
\end{proof}
\bigskip
For a curve $c=c(s)$ parameterized by arc-length $s$, $c'(s):=\frac{dc}{ds}(s)$ is the unit tangent vector along $c$. Let
$$ c''(s):=\nabla_{c'} c', \ \ \ \ c'''(s):=\nabla_{c'}\nabla_{c'} c'.$$
We can express (\ref{Eq;circle equ+1}) as follows
\be
c''' + g(c'', c'') c' =0. \label{Eq;circle equ+3}
\ee
Equivalently, the differential equation of a circle is given by
\begin{equation}\label{Eq: 1.9+1}
c''' = -k^2 c',
\end{equation}
where $k=\sqrt{ g(c'',c'')}$ is the constant first Frenet curvature. Hence,
$c(s)$ is a circle if and only if
$c'''$ is a tangent vector field along $c$, or equivalently $c'''$ is a scalar multiple of $c'$ or $\dot c$.
If $c=c(t)$ is parameterized by an arbitrary parameter $t$, we denote its successive covariant derivatives by $\dot c:=\frac{dc}{dt}$, $\ddot c:=\nabla_{\dot{c}}\dot{c}$ and $\dddot c:= \nabla_{\dot{c}}\nabla_{\dot{c}} \dot{c}$.
We have the following relations between the successive covariant derivatives.
\begin{eqnarray}\label{Eq;successive1 deri}
\dot c &=& |\dot c| \ c', \\
\label{Eq;successive2 deri}
\ddot c&=&|\dot c|^2 \ c'' + \frac{g(\dot c,\ddot c)}{|\dot c|}\ c',\\
\label{Eq;successive3 deri}
\dddot c&=&|\dot c|^3 \ c''' +3{ g(\dot c,\ddot c)}c''+ \frac{d}{dt}\bigg(\frac{g( \dot c,\ddot c)}{|\dot c|}\bigg)\ c'.
\end{eqnarray}
For an arbitrary parameter $t$ we have the following lemma.
\begin{lem}\label{lem;circle}
Let $(M,g)$ be a Finsler manifold and $c(t)$ a curve on $M$. Then $c(t)$ is a circle with respect to the $g$, if and only if the vector field
$$V :=\dddot c-3\frac{ g(\dot c,\ddot c)}{g(\dot c,\dot c)}\ddot c,$$
is a tangent vector field along $c$ or equivalently a multiple of $\dot c$ or $c'$.
\end{lem}
\begin{proof}
It follows from (\ref{Eq;successive2 deri}) and (\ref{Eq;successive3 deri}) that
\[ \dddot c-3\frac{ g(\dot c,\ddot c)}{g(\dot c,\dot c)}\ddot c = |\dot{c}|^3 c''' + \Big \{ \frac{d}{dt}\bigg(\frac{g( \dot c,\ddot c)}{|\dot c|}\bigg) -3 \frac{ g(\dot{c}, \ddot{c})^2}{ g(\dot{c}, \dot{c})^{3/2} }
\Big \} c' .\]
Thus, since $|\dot c|\neq 0$, $c'''$ is parallel to $c'$ if and only if $\dddot c-3\frac{ g(\dot c,\ddot c)}{g(\dot c,\dot c)}\ddot c $ is proportional to $c'$.
\end{proof}
In contrast with Euclidean circles, the general notion of a circle in Riemannian geometry, as well as in Finsler geometry, called here simply a circle, does not require the curve to be closed, although this may happen in some cases, such as small circles or helical curves on the sphere. In general, as with Riemannian circles, they are spiral curves on the underlying spaces, for instance, spiral curves on cylindrical surfaces, conical surfaces and so on. Moreover, their length is not required to be bounded, as it is for closed circles in Euclidean spaces.
\section{Circle-preserving diffeomorphisms}
A local diffeomorphism of Finsler manifolds is said to be \emph{circle-preserving} if it maps circles into circles.
More precisely, let $M$ be a differentiable manifold, $g$ a Finsler metric on $M$, $c(s)$ a $C^\infty$ arc-length parameterized curve in a neighborhood $ U\subset M$, and $\delta/ ds$ the Cartan covariant derivative along $c$ compatible with $g$.
Let $\phi: M \longrightarrow M$ be a local diffeomorphism on $M$; it induces a second Finsler metric $\bar g$ and a Cartan covariant derivative $\delta /d\bar s$ along $\bar c$ on some neighborhood $\bar U$ of $M$. In the sequel, we denote the induced Finsler manifold by $(M,\bar g)$.
We say that the local diffeomorphism $\phi: (M,g) \longrightarrow (M,\bar g)$ \emph{preserves circles} if it maps circles to circles.
Let $ c(s)$ be a circle and $\bar c(\bar s)$ its image by $\phi$, where $\bar s=\bar s(s)$. Then, using the positive definiteness of $g$ and $\bar g$ and the related fundamental forms
\be \label{Eq;arclenth 1}
ds^2= g_{ij}(x,x')dx^idx^j \quad \textrm{and}\quad d\bar s^2= \bar g_{ij}(x,x')dx^idx^j,
\ee
respectively, we can establish a relation between $s$ and $\bar s$ and their derivatives.
We have $\frac{\delta}{d\bar s}=\frac{\delta}{ds} . \frac{ds}{d\bar s}$, where
\be\label{Eq;arclenth}
\frac{d\bar s}{ds}=\sqrt{\bar g_{jk}(x,x')\frac{d x^j}{ds}\frac{d x^k}{ds}}\neq 0, \quad
\frac{d s}{d \bar s}=\sqrt{ g_{jk}(x,x')\frac{d x^j}{d\bar s}\frac{d x^k}{d\bar s}}\neq 0.
\ee
If we have $d\bar s =e^{\sigma} ds$, or equivalently by means of Eq. (\ref{Eq;arclenth 1}), if $\bar g=e^{2\sigma} g$ or $\bar F=e^{\sigma} F$, where $\sigma$ is a scalar function on $M$, then two Finsler structures $\bar F$ and $F$ are said to be {\it conformal}.
\section{Circles in a Minkowski space}
Let $(V, F)$ be a Minkowski space where $V$ is a vector space and $F$ is a Minkowski norm on $V$.
In a standard coordinate system in $V$,
\be\label{Eq;Minkowski}
G^i =0, \ \ \ \ \ N^i_j =0.
\ee
Then for a vector field $X= X^i(t)\frac{\pa }{\pa x^i} |_{c(t)}$ along a curve $c(t)$, the Cartan covariant derivative $\nabla_{\dot c} X= \frac{\delta X^i}{dt} \frac{\pa }{\pa x^i} |_{c(t)}$ given by Eq. (\ref{CartanCovDerAlongC1}) reduces to
\begin{eqnarray}\label{CartanCovDerAlongC2}
\frac{\delta X^i}{ dt} & = &\frac{d X^i}{ dt}+ \Gamma_{kh}^i X^k\frac{dx^h}{dt}.
\end{eqnarray}
In particular, for $X= c'$, we have
\[ \frac{\delta X^i}{ds} = \frac{d^2 x^i}{ds^2}+ \Gamma_{kh}^i \frac{dx^k}{ds}\frac{dx^h}{ds}= \frac{d^2 x^i}{ds^2}+ G^i,\]
where we have used $\Gamma^i_{jk} \frac{dx^j}{dt} \frac{dx^k}{dt}= \gamma^i_{jk} \frac{dx^j}{dt} \frac{dx^k}{dt}=G^i,$ for which $\gamma^i_{jk}$ are the formal Christoffel symbols. Thus Eq. (\ref{Eq;Minkowski}) yields
\[ \frac{\delta X^i}{ds} = \frac{d^2 x^i}{ds^2}.\]
Therefore in a Minkowski space, a curve $c(s)$ with arc-length parameter $s$ is a circle if and only if
\be
\frac{d^3 x^i}{ds^3} + k^2
\frac{dx^i}{ds}=0, \label{mmm}
\ee
where $k$ is a constant.
In this case $ g_{hk}\frac{d^2 x^h}{ds^2} \frac{d x^k}{ds} =0$ and
$k =\sqrt{ g_{hk}\frac{d^2 x^h}{ds^2} \frac{d^2 x^k}{ds^2} }$.
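For illustration, Eq. (\ref{mmm}) can be integrated explicitly, since it is a linear third-order ODE with constant coefficients (an elementary computation, recorded here only as a sanity check):
\[
\frac{d^3 x^i}{ds^3} + k^2 \frac{dx^i}{ds}=0
\quad\Longrightarrow\quad
x^i(s) = a^i + b^i\cos(ks) + c^i\sin(ks),
\]
for constant vectors $a$, $b$ and $c$. The arc-length condition and the identity $k^2 = g_{hk}\frac{d^2 x^h}{ds^2} \frac{d^2 x^k}{ds^2}$ then constrain $b$ and $c$; in the Euclidean case they reduce to $|b|=|c|=1/k$ and $\langle b,c\rangle =0$, so the solutions are precisely the round circles of radius $1/k$.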
Now let us take a look at circles in a special Minkowski space $(\R^2, F_b)$, where
\[ F_b(u,v) :=\sqrt{u^2+v^2} + bu ,\]
with $b$ a constant satisfying $ 0 < b < 1$. The space $(\R^2, F_b)$ is called a {\it Randers plane}.
Consider a curve $c(s) = (x(s), y(s))$ in $\R^2$ with unit speed, namely, $c'(s)=(x'(s), y'(s))$ is a unit vector.
Thus
\[ \sqrt{ x'(s)^2 + y'(s)^2} + b x'(s)= 1.\]
We can let
\[ x'(s) = \frac{ \cos \theta (s) - b}{1-b^2}, \ \ \ \ \ \ y'(s)= \frac{\sin \theta(s)}{\sqrt{1-b^2}},\]
where $\theta(s)$ is a smooth function. Since $0 < b < 1$, the components of $c'(s)$
are well defined, and one can determine explicitly the equation of $c(s)$, a unit-speed circle in the Randers plane $(\R^2, F_b)$.
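The claimed parametrization can be checked directly: squaring and simplifying gives $\sqrt{x'^2+y'^2} = \frac{1-b\cos\theta}{1-b^2}$, so that $F_b(c'(s))=1$ identically. The short script below (our own numerical sanity check, not part of the paper) verifies this over a grid of angles and values of $b$.

```python
import math

# Sanity check: the parametrization
#   x'(s) = (cos(theta) - b)/(1 - b^2),  y'(s) = sin(theta)/sqrt(1 - b^2)
# satisfies the unit-speed condition F_b(x', y') = sqrt(x'^2 + y'^2) + b*x' = 1
# for every angle theta and every 0 < b < 1.

def randers_norm(u, v, b):
    return math.sqrt(u*u + v*v) + b*u

for b in (0.1, 0.5, 0.9):
    for j in range(12):
        theta = 2*math.pi*j/12
        u = (math.cos(theta) - b)/(1 - b*b)
        v = math.sin(theta)/math.sqrt(1 - b*b)
        assert abs(randers_norm(u, v, b) - 1.0) < 1e-12
print("unit-speed condition verified")
```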
\section{Vogel Theorem in Finsler geometry}
\bigskip
Let $\phi: (M, g) \longrightarrow (\bar{M}, \bar{g})$ be a diffeomorphism.
We say that $\phi$ \emph{preserves circles}, if it maps circles to circles. More precisely, if $c(s)$ is a circle in $(M, g)$, where $s$ is the arc-length of $c$ with respect to $g$, then $\bar{c}(\bar{s}):=\phi \circ c (s(\bar{s}))$ is a circle in $(\bar{M}, \bar{g})$, where $s = s (\bar{s})$ is the arc-length of $\phi \circ c$ with respect to $\bar{g}$.
We recall the following lemma from linear algebra which will be used in the sequel.
\begin{lem}\label{lem;linear Algebra}
Let $F$ and $G$ be two positive definite symmetric bilinear forms on $\R^n$ such that
$F(X,Y)=0$ for all $X,Y \in\R^n$ satisfying
\begin{eqnarray}\label{Eq;bilinearForms}
&&G(X,X)\neq0, \quad G(Y,Y)\neq0 , \quad\textrm{and } \quad G(X,Y)=0.
\end{eqnarray}
Then there is a positive real number $\alpha$ such that
$F=\alpha G.$
\end{lem}
\begin{proof}
Let $\{ e_i\}$ be an orthonormal basis of $\R^n$ such that $G (e_i, e_j)=\delta_{ij}$, where $i,j=1,...,n$. Eq.~(\ref{Eq;bilinearForms}), together with the positive definiteness of $F$ and $G$, implies that for each $i$ there is a positive real number $\alpha_i$ such that $F(e_i,e_j)=\alpha_i\delta_{ij},$ and hence
\be \label{Eq;bilinearForms2}
F(e_i,e_j)=\alpha_iG (e_i, e_j).
\ee
Let $a,b\in \R-\{0\}$; then for $i\neq j$ we have
\begin{eqnarray*}
&&G(ae_i + be_j,ae_i + be_j) = a^2 + b^2 \neq 0,\\
&&G(be_i - ae_j,be_i - ae_j) = b^2 + a^2 \neq 0,\\
&&G(ae_i + be_j,be_i - ae_j) = 0.
\end{eqnarray*}
These equations, together with Eqs.~(\ref{Eq;bilinearForms}) and (\ref{Eq;bilinearForms2}), imply for $i\neq j$ that $$0=F(ae_i + be_j,be_i - ae_j)=ab(\alpha_iG(e_i,e_i)-\alpha_jG(e_j,e_j)),$$ and hence $0 = ab(\alpha_i-\alpha_j)$. Therefore we obtain $\alpha_i=\alpha_j=:\alpha$, and $F=\alpha G,$ which completes the proof.
\end{proof}
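The role of the hypothesis \eqref{Eq;bilinearForms} can be seen in the contrapositive: if $F$ is positive definite but \emph{not} proportional to $G$, then some $G$-orthogonal pair fails to be $F$-orthogonal. The following small numerical sketch (our own illustration with hypothetical values, not part of the paper) exhibits such a pair in $\R^2$, taking $G$ to be the standard dot product and $F$ a non-proportional diagonal form.

```python
# G = standard dot product on R^2, F = diag(alpha1, alpha2).
# If alpha1 != alpha2, F is NOT proportional to G, and the lemma's hypothesis
# must fail: there are G-orthogonal vectors X, Y with F(X, Y) != 0.

def G(X, Y):
    return X[0]*Y[0] + X[1]*Y[1]

def make_F(a1, a2):
    return lambda X, Y: a1*X[0]*Y[0] + a2*X[1]*Y[1]

F = make_F(2.0, 5.0)          # positive definite, not a multiple of G
a, b = 1.0, 2.0
X = (a, b)
Y = (b, -a)                    # G(X, Y) = ab - ab = 0

assert G(X, Y) == 0.0
print(F(X, Y))                 # ab*(alpha1 - alpha2) = 2*(2 - 5) = -6.0
```

Here $F(X,Y)=ab(\alpha_1-\alpha_2)\neq 0$ although $G(X,Y)=0$: exactly the quantity that forces $\alpha_i=\alpha_j$ in the proof above.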
Next we prove the following theorem.
\setcounter{thm}{0}
\begin{thm}\label{thm;Vogel1}
Every circle-preserving local diffeomorphism of a Finsler manifold is conformal.
\end{thm}
\begin{proof} Without loss of generality we can consider two Finsler metrics $g$ and $\bar{g}$ on the same manifold.
Fix a point $p\in M$. For two arbitrary unit vectors $X, Y\in T_pM$ such that $Y$ is orthogonal to $X$ with respect to $g=g_{_X}$, let ${\cal C}=\{c_k \mid k\in \R\}$ be a family of circles with constant curvature $k$ passing through the fixed point $c_{_k}(0) = p$ on $(M,g)$ such that
\be \label{Eq;circle equ}
\frac{dc}{ds}(0) = X, \quad \textrm{and}\quad \nabla_{c'}X (0) =kY.
\ee
We are going to show that $\bar{g}(X, Y)=0$, where $\bar{g}:=\bar{g}_{_X}$.
Since $c$ is assumed to be a circle with respect to the Finsler metric $g$, Eq. (\ref{Eq: 1.9+1}) yields that $c'''$ is a multiple of $c'$.
By Lemma \ref{lem;circle}, $c$ is a circle with respect to $\bar g$ if and only if $\dddot c $ and $\ddot c $ are parallel to $\dot c$ or $c'$.
On the other hand, by virtue of Eq. (\ref{Eq;successive2 deri}), $\ddot c $ is parallel to $\dot c$ if and only if $c''$ is.
By means of Eq. (\ref{Eq;successive3 deri}), we can see that $\dddot c$ is parallel to $\dot c$ if and only if $\ddot c $ is. Denote the second term on the right-hand side of Eq. (\ref{Eq;successive3 deri}) by ${\cal \overline{ W}}:=3\bar g(\dot c,\ddot c) c''$. Therefore, $c$ is a circle with respect to $\bar g$ if and only if ${\cal \overline{ W}}$ is parallel to $c'= X$ at the point $p=c(0)$. At $p$, using Eqs. (\ref{Eq;successive1 deri}) and (\ref{Eq;circle equ}), we have $\dot c=c'\bar g(\dot c,\dot c)^{1/2}=X\bar g(\dot c,\dot c)^{1/2}$, where $\bar g(\dot c,\dot c)\neq0$ is constant by means of Eq. (\ref{Eq;arclenth}). Hence, Eq.
(\ref{Eq;circle equ}) yields $\ddot c=\bar{\nabla}_{\dot{c}} \dot{c}=\frac{d }{ds}(\bar g(\dot c,\dot c)^{1/2} X)\frac{ds}{dt}$ and $\ddot c=k Y\bar g(\dot c,\dot c)$. Therefore we obtain
\begin{eqnarray*}
{\cal \overline{ W}} = 3\bar g(\dot c,\dot c)^{3/2}\bar g(X , Y )Y \ k^2.
\end{eqnarray*}
Hence, $c$ is a circle with respect to $\bar g$ if and only if the vector field ${\cal \overline{ W}}$ is parallel to $X$, or equivalently, $\bar g(X , Y )Y$ is parallel to $X$ for every $X\in T_pM$ and every $Y\in T_pM$ orthogonal to $X$. This implies $\bar g(X , Y )=0$ whenever $g(X,Y)=0$, and by Lemma \ref{lem;linear Algebra}, there is a positive scalar $\alpha^2$ such that $\bar g=\alpha^2 g$. Hence, the Finsler metrics $\bar g(x,x')$ and $ g(x,x')$ are conformally related.
\end{proof}
A geodesic circle is a curve for which the first Frenet curvature $k_1$ is constant and the second curvature $k_2$ vanishes. In other words, a geodesic circle is a torsion-free curve of constant curvature. In Riemannian geometry, as well as in Finsler geometry, a concircular transformation is defined to be a conformal transformation which preserves geodesic circles.
By replacing the positive scalar $\alpha$ in the proof of Theorem \ref{thm;Vogel1} by $\alpha=e^\sigma$, we get $\bar g=e^{2\sigma} g$, or equivalently $d\bar s =e^{\sigma} ds$, where $\sigma$ is a scalar function on $M$. Therefore, as a corollary of Theorem \ref{thm;Vogel1} we have
\begin{thm}\label{thm;Vogel2 }
Every local diffeomorphism of a Finsler manifold which preserves geodesic circles is conformal.
\end{thm}
This result shows that in the definition of concircular transformations the conformal assumption is not necessary.
\textbf{Acknowledgments}. The authors take this opportunity to express their sincere gratitude to Professor H. Akbar-Zadeh for his suggestions on this work and his contributions to Finsler geometry.
\section*{Appendix}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:minimax}}]
The proof is identical to that in \cite{RakSriTew10a}. For simplicity, denote $\psi(x_{1:T}) = \inf_{f\in\F} \sum_{t=1}^T f(x_t)$. The first step in the proof is to appeal to the minimax theorem for every pair of $\inf$ and $\sup$:
\begin{align*}
&\inf_{q_1\in \mathcal Q}\sup_{p_1\in\mathcal P_1} ~\Eunder{x_1\sim p_1}{f_1\sim q_1} \cdots \inf_{q_T\in \mathcal Q}\sup_{p_T\in\mathcal P_T} ~\Eunder{x_T\sim p_T}{f_T\sim q_T} \left[ \sum_{t=1}^T f_t(x_t) - \psi(x_{1:T}) \right] \\
&~~~~~ = \sup_{p_1\in\mathcal P_1} \inf_{q_1\in\mathcal Q} ~\Eunder{x_1\sim p_1}{f_1\sim q_1} \ldots\sup_{p_T\in\mathcal P_T} \inf_{q_T\in\mathcal Q} ~\Eunder{x_T\sim p_T}{f_T\sim q_T} \left[ \sum_{t=1}^T f_t(x_t) - \psi(x_{1:T}) \right] \\
&~~~~~ = \sup_{p_1\in\mathcal P_1} \inf_{f_1\in\F} ~\En_{x_1\sim p_1} \ldots \sup_{p_T\in\mathcal P_T} \inf_{f_T\in\F} ~\En_{x_T\sim p_T} \left[ \sum_{t=1}^T f_t(x_t) - \psi(x_{1:T}) \right]
\end{align*}
From now on, it will be understood that $x_t$ has distribution $p_t$ and that the suprema over $p_t$ are in fact over $p_t\in \mathcal P_t(x_{1:t-1})$. By moving the expectation with respect to $x_T$ and then the infimum with respect to $f_T$ inside the expression, we arrive at
\begin{align*}
&\sup_{p_1}\inf_{f_1}\En_{x_1} \ldots \sup_{p_{T-1}}\inf_{f_{T-1}}\En_{x_{T-1}}\sup_{p_T} \left[ \sum_{t=1}^{T-1} f_t(x_t) + \left[\inf_{f_T}\En_{x_T} f_T(x_T) \right]- \En_{x_T}\psi(x_{1:T})\right] \\
&=\sup_{p_1}\inf_{f_1}\En_{x_1} \ldots \sup_{p_{T-1}}\inf_{f_{T-1}}\En_{x_{T-1}}\sup_{p_T} \En_{x_T}\left[ \sum_{t=1}^{T-1} f_t(x_t) + \left[\inf_{f_T}\En_{x_T} f_T(x_T) \right]- \psi(x_{1:T})\right]
\end{align*}
Let us now repeat the procedure for step $T-1$. The above expression is equal to
\begin{align*}
&\sup_{p_1}\inf_{f_1}\En_{x_1} \ldots \sup_{p_{T-1}}\inf_{f_{T-1}}\En_{x_{T-1}}\left[ \sum_{t=1}^{T-1} f_t(x_t) + \sup_{p_T}\En_{x_T} \left[ \inf_{f_T}\En_{x_T} f_T(x_T)- \psi(x_{1:T})\right]\right] \\
&=\sup_{p_1}\inf_{f_1}\En_{x_1} \ldots \sup_{p_{T-1}}\left[ \sum_{t=1}^{T-2} f_t(x_t) + \left[\inf_{f_{T-1}} \En_{x_{T-1}} f_{T-1}(x_{T-1}) \right] + \En_{x_{T-1}} \sup_{p_T} \En_{x_T} \left[ \inf_{f_T}\En_{x_T} f_T(x_T)- \psi(x_{1:T})\right]\right] \\
&=\sup_{p_1}\inf_{f_1}\En_{x_1} \ldots \sup_{p_{T-1}}\En_{x_{T-1}} \sup_{p_T} \En_{x_T}\left[ \sum_{t=1}^{T-2} f_t(x_t) + \left[\inf_{f_{T-1}} \En_{x_{T-1}} f_{T-1} (x_{T-1})\right] + \left[ \inf_{f_T}\En_{x_T} f_T(x_T) \right]- \psi(x_{1:T})\right]
\end{align*}
Continuing in this fashion for $T-2$ and all the way down to $t=1$ proves the theorem.
\end{proof}
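To build intuition for the repeated $\inf$--$\sup$ exchanges in the proof above, the following toy computation (our own illustration, not part of the paper) verifies von Neumann's minimax equality for a single round with finitely many actions, by grid search over mixed strategies. Since the expected loss is linear in each player's mixture, the inner optimization may be restricted to pure actions.

```python
# Toy zero-sum game: the player mixes over rows (q), the adversary over
# columns (p), and the expected loss is q^T A p. Von Neumann's theorem gives
#   inf_q sup_p q^T A p = sup_p inf_q q^T A p,
# which we check by a fine grid search for a 2x2 loss matrix.

A = [[3.0, 1.0],
     [0.0, 2.0]]

N = 1000
grid = [i / N for i in range(N + 1)]

# inf over q of the worst-case (pure) column response
upper = min(max(q * A[0][0] + (1 - q) * A[1][0],
                q * A[0][1] + (1 - q) * A[1][1]) for q in grid)

# sup over p of the best-case (pure) row response
lower = max(min(p * A[0][0] + (1 - p) * A[0][1],
                p * A[1][0] + (1 - p) * A[1][1]) for p in grid)

print(upper, lower)   # both equal the game value 1.5
```

Both quantities agree with the game value $1.5$ (attained at $q=(1/2,1/2)$ and $p=(1/4,3/4)$), mirroring the equality invoked at every step of the proof.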
\begin{proof}[\textbf{Proof of Proposition~\ref{prop:lower_bound_oblivious}}]
Fix an oblivious strategy $\ensuremath{\mathbf{p}}$ and note that $\Val_T(\mathcal P_{1:T}) \geq \Val_T^\ensuremath{\mathbf{p}}$. From now on, it will be understood that $x_t$ has distribution $p_t(\cdot|x_{1:t-1})$. Let $\ensuremath{\boldsymbol \pi}=\{\pi_t\}_{t=1}^T$ be a strategy of the player, that is, a sequence of mappings $\pi_t:(\F\times\X)^{t-1} \mapsto \mathcal Q$.
By moving to a functional representation in Eq.~\eqref{eq:def_val_for_p},
\begin{align*}
\Val_T^\ensuremath{\mathbf{p}} = \inf_{\ensuremath{\boldsymbol \pi}} \En_{f_1\sim \pi_1} \En_{x_1\sim p_1} \ldots \En_{f_T\sim \pi_T(\cdot|f_{1:T-1},x_{1:T-1}) } \En_{x_T\sim p_T(\cdot|x_{1:T-1})} \left[ \sum_{t=1}^T f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right]
\end{align*}
Note that the last term does not depend on $f_1,\ldots,f_T$, and so the expression above is equal to
\begin{align*}
&\inf_{\ensuremath{\boldsymbol \pi}}\left\{ \En_{f_1\sim \pi_1} \En_{x_1\sim p_1} \ldots \En_{f_T\sim \pi_T(\cdot|f_{1:T-1},x_{1:T-1}) } \En_{x_T\sim p_T(\cdot|x_{1:T-1})} \left[ \sum_{t=1}^T f_t(x_t) \right] \right.\\
&\left.~~~~~~~~~~~~- \En_{x_1\sim p_1} \ldots \En_{x_T\sim p_T(\cdot|x_{1:T-1})} \left[ \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right] \right\}\\
&= \inf_{\ensuremath{\boldsymbol \pi}}\left\{ \En_{f_1\sim \pi_1} \En_{x_1\sim p_1} \ldots \En_{f_T\sim \pi_T(\cdot|f_{1:T-1},x_{1:T-1}) } \En_{x_T\sim p_T(\cdot|x_{1:T-1})} \left[ \sum_{t=1}^T f_t(x_t) \right]\right\} - \left\{ \En\left[ \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right] \right\}
\end{align*}
Now, by linearity of expectation, the first term can be written as
\begin{align}
\label{eq:lower_strategy_oblivious}
&\inf_{\ensuremath{\boldsymbol \pi}}\left\{ \sum_{t=1}^T \En_{f_1\sim \pi_1}\En_{x_1\sim p_1} \ldots \En_{f_T\sim \pi_T(\cdot|f_{1:T-1},x_{1:T-1}) } \En_{x_T\sim p_T(\cdot|x_{1:T-1})} f_t(x_t) \right\} \notag\\
&= \inf_{\ensuremath{\boldsymbol \pi}}\left\{ \sum_{t=1}^T \En_{f_1\sim \pi_1} \En_{x_1\sim p_1} \ldots \En_{f_t\sim \pi_t(\cdot|f_{1:t-1},x_{1:t-1}) } \En_{x_t\sim p_t(\cdot|x_{1:t-1})} f_t(x_t) \right\} \notag\\
&= \inf_{\ensuremath{\boldsymbol \pi}}\left\{ \sum_{t=1}^T \En_{x_1\sim p_1} \ldots \En_{x_t\sim p_t(\cdot|x_{1:t-1})} \Big[ \En_{f_1\sim \pi_1}\ldots \En_{f_t\sim \pi_t(\cdot|f_{1:t-1},x_{1:t-1}) } f_t(x_t) \Big] \right\}
\end{align}
Now notice that for any strategy $\ensuremath{\boldsymbol \pi}=\{\pi_t\}_{t=1}^T$, there is an equivalent strategy $\ensuremath{\boldsymbol \pi}'=\{\pi'_t\}_{t=1}^T$ that (a) gives the same value to the above expression as $\ensuremath{\boldsymbol \pi}$ and (b) does not depend on the past decisions of the player, that is $\pi'_t:\X^{t-1}\mapsto\mathcal Q$. To see why this is the case, fix any strategy $\ensuremath{\boldsymbol \pi}$ and for any $t$ define
$$\pi'_t(\cdot|x_{1:t-1}) = \En_{f_1\sim \pi_1}\ldots \En_{f_{t-1}\sim \pi_t(\cdot|f_{1:t-2},x_{1:t-2}) } \pi_t(\cdot|f_{1:t-1}, x_{1:t-1})$$
where we integrated out the sequence $f_1,\ldots,f_{t-1}$. Then
$$\En_{f_1\sim \pi_1}\ldots \En_{f_t\sim \pi_t(\cdot|f_{1:t-1},x_{1:t-1}) } f_t(x_t) = \En_{f_t\sim\pi'_t(\cdot|x_{1:t-1})} f_t(x_t)$$
and so $\ensuremath{\boldsymbol \pi}$ and $\ensuremath{\boldsymbol \pi}'$ give the same value in \eqref{eq:lower_strategy_oblivious}.
We conclude that the infimum in \eqref{eq:lower_strategy_oblivious} can be restricted to those strategies $\ensuremath{\boldsymbol \pi}$ that do not depend on past randomizations of the player. In this case,
\begin{align*}
\Val_T^\ensuremath{\mathbf{p}} &= \inf_{\ensuremath{\boldsymbol \pi}}\left\{ \sum_{t=1}^T \En_{x_1\sim p_1} \ldots \En_{x_t\sim p_t(\cdot|x_{1:t-1})} \En_{f_t\sim \pi_t(\cdot|x_{1:t-1}) } f_t(x_t) \right\} - \left\{ \En\left[ \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right] \right\} \\
&=\inf_{\ensuremath{\boldsymbol \pi}}\left\{ \sum_{t=1}^T \En_{x_1,\ldots, x_{t-1}} \En_{f_t\sim \pi_t(\cdot|x_{1:t-1}) } \En_{x_t} f_t(x_t) \right\} - \left\{ \En\left[ \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right] \right\} \\
&= \inf_{\ensuremath{\boldsymbol \pi}}\E{ \sum_{t=1}^T \En_{f_t\sim \pi_t(\cdot|x_{1:t-1}) } \En_{x_t\sim p_t} f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) } \ .
\end{align*}
Now, notice that we can choose the Bayes optimal response $f_t$ in each term:
\begin{align*}
\Val_T^\ensuremath{\mathbf{p}} &= \inf_{\ensuremath{\boldsymbol \pi}}\E{ \sum_{t=1}^T \En_{f_t\sim \pi_t(\cdot|x_{1:t-1}) } \En_{x_t\sim p_t} f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) } \\
&\geq \inf_{\ensuremath{\boldsymbol \pi}}\E{ \sum_{t=1}^T \inf_{f_t\in\F} \En_{x_t\sim p_t} f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) } \\
&= \E{ \sum_{t=1}^T \inf_{f_t\in\F} \En_{x_t\sim p_t} f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) } \ .
\end{align*}
Together with Theorem~\ref{thm:minimax}, this implies that
$$ \Val_T^{\ensuremath{\mathbf{p}}^*} = \Val_T(\mathcal P_{1:T}) = \inf_{\ensuremath{\boldsymbol \pi}}\E{ \sum_{t=1}^T \En_{f_t\sim \pi_t(\cdot|x_{1:t-1}) } \En_{x_t\sim p^*_t} f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) } $$
for any $\ensuremath{\mathbf{p}}^*$ achieving supremum in \eqref{eq:succinct_value_equality}. Further, the infimum is over strategies that do not depend on the moves of the player.
We conclude that there is an oblivious minimax optimal strategy of the adversary, and there is a corresponding minimax optimal strategy for the player that does not depend on its own moves.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:valrad}}]
From Eq.~\eqref{eq:succinct_value_equality},
\begin{align}
\notag
\Val_T & = \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{\sum_{t=1}^T \inf_{f_t \in \F} \Es{t-1}{f_t(x_t)} - \inf_{f \in \F} \sum_{t=1}^T f(x_t)} \\
\notag
& = \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{\sup_{f \in \F}\left\{\sum_{t=1}^T \inf_{f_t \in \F} \Es{t-1}{f_t(x_t)} - f(x_t) \right\}}\\
\label{eq:beforeexpequal}
& \le \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{\sup_{f \in \F} \left\{\sum_{t=1}^T \Es{t-1}{f(x_t)} - f(x_t) \right\}}
\end{align}
The upper bound is obtained by replacing each infimum by a particular choice $f$.
Note that $\Es{t-1}{f(x_t)} - f(x_t)$ is a martingale difference sequence. We now employ a symmetrization technique. For this purpose, we introduce a {\em tangent sequence} $\{ x'_t \}_{t=1}^T$ that is constructed as follows.
Let $x'_1$ be an independent copy of $x_1$. For $t\ge 2$, let $x'_t$ be both identically distributed as $x_t$ as well as independent of it conditioned on $x_{1:t-1}$.
Then, we have, for any $t\in[T]$ and $f\in \F$,
\begin{equation}
\label{eq:expequal1}
\Es{t-1}{f(x_t)} = \Es{t-1}{f(x'_t)} = \Es{T}{f(x'_t)}\ .
\end{equation}
The first equality is true by construction. The second holds because $x'_t$ is independent of $x_{t:T}$ conditioned on $x_{1:t-1}$. We also have, for any $t\in[T]$ and $f\in\F$,
\begin{equation}
\label{eq:expequal2}
f(x_t) = \Es{T}{f(x_t)} \ .
\end{equation}
Plugging in~\eqref{eq:expequal1} and~\eqref{eq:expequal2} into~\eqref{eq:beforeexpequal}, we get,
\begin{align*}
\Val_T &\leq \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{\sup_{f \in \F}\left\{ \sum_{t=1}^T \Es{T}{ f(x_t')} - \Es{T}{f(x_t)} \right\}}\\
& = \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{\sup_{f \in \F}\left\{ \Es{T}{ \sum_{t=1}^T f(x_t') - f(x_t) } \right\}} \\
& \leq
\sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{ \sup_{f \in \F}\left\{ \sum_{t=1}^T f(x_t') - f(x_t) \right\} } \ .
\end{align*}
For any $\ensuremath{\mathbf{p}}$, the expectation in the above supremum can be written as
\begin{align*}
\E{ \sup_{f \in \F}\left\{ \sum_{t=1}^T f(x_t') - f(x_t) \right\} }
&=
\En_{x_1,x'_1\sim p_1} \En_{x_2,x'_2\sim p_2(\cdot|x_1)} \ldots \En_{x_T,x'_T\sim p_T(\cdot|x_1,\ldots, x_{T-1})} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T f(x'_t) -f(x_t) \right\} \right] .
\end{align*}
Now, let's see what happens when we rename $x_1$ and $x'_1$ in the right-hand side of the above inequality. The equivalent expression we then obtain is
\begin{align*}
\En_{x'_1,x_1\sim p_1} \En_{x_2,x'_2\sim p_2(\cdot|x'_1)}\En_{x_3,x'_3\sim p_3(\cdot|x'_1,x_2)} \ldots \En_{x_T,x'_T\sim p_T(\cdot|x'_1,x_{2:T-1})} \left[ \sup_{f\in\F} \left\{ -(f(x'_1)-f(x_1)) + \sum_{t=2}^T f(x'_t) -f(x_t) \right\} \right] .
\end{align*}
Now fix any $\epsilon\in\{\pm 1\}^T$. Informally, $\epsilon_t=1$ indicates that we rename $x_t$ and $x'_t$. It is not hard to verify that
\begin{align}
\label{eq:renaming}
&\En_{x_1,x'_1\sim p_1} \En_{x_2,x'_2\sim p_2(\cdot|x_1)} \ldots \En_{x_T,x'_T\sim p_T(\cdot|x_1,\ldots, x_{T-1})} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T f(x'_t) -f(x_t) \right\} \right] \notag\\
&=\En_{x_1,x'_1\sim p_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(-1))} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(-1),\ldots, \chi_{T-1}(-1)) } \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T f(x'_t) -f(x_t) \right\} \right] \\
&=\En_{x_1,x'_1\sim p_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(x'_t) -f(x_t)) \right\} \right]
\end{align}
Since Eq.~\eqref{eq:renaming} holds for any $\epsilon\in\{\pm1\}^T$, we conclude that
\begin{align}
\label{eq:renaming2}
&\E{ \sup_{f \in \F}\left\{ \sum_{t=1}^T f(x_t') - f(x_t) \right\} } \\
&=\En_{\epsilon} \En_{x_1,x'_1\sim p_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(x'_t) -f(x_t)) \right\} \right] \notag\\
&= \En_{x_1,x'_1\sim p_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(x'_t) -f(x_t)) \right\} \right].\notag
\end{align}
The process above can be thought of as taking a path in a binary tree. At each step $t$, a coin is flipped and this determines whether $x_t$ or $x'_t$ is to be used in conditional distributions in the following steps. This is precisely the process outlined in \eqref{eq:sampling_procedure}. Using the definition of $\boldsymbol{\rho}$, we can rewrite the last expression in Eq.~\eqref{eq:renaming2} as
{\small $$\En_{(x_1,x'_1)\sim \boldsymbol{\rho}_1(\epsilon)} \En_{\epsilon_1} \En_{(x_2,x'_2)\sim \boldsymbol{\rho}_2(\epsilon)(x_1,x'_1)} \ldots
\En_{\epsilon_{T-1}} \En_{(x_T,x'_T)\sim \boldsymbol{\rho}_T(\epsilon)\left((x_1,x'_1),\ldots,(x_{T-1},x'_{T-1})\right)} \En_{\epsilon_T} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T \epsilon_t (f(x_t) -f(x'_t)) \right\} \right].$$
}
More succinctly, Eq.~\eqref{eq:renaming2} can be written as
\begin{align}
\label{eq:symmetrized_version_not_broken_up}
\En_{(\x,\x')\sim \boldsymbol{\rho}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T f(\x'_t(-{\boldsymbol 1})) -f(\x_t(-{\boldsymbol 1})) \right\} \right]
&=\En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T \epsilon_t(f(\x_t(\epsilon)) -f(\x'_t(\epsilon))) \right\} \right] .
\end{align}
It is worth emphasizing that the values of the mappings $\x,\x'$ are drawn conditionally-independently, however the distribution depends on the ancestors in \emph{both} trees. In some sense, the path $\epsilon$ defines ``who is tangent to whom''.
We now split the supremum into two:
\begin{align}
\label{eq:split_rademacher}
&\En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T \epsilon_t(f(\x_t(\epsilon)) -f(\x'_t(\epsilon))) \right\} \right] \notag\\
& \le \En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t f (\x_t(\epsilon)) \right] + \En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T - \epsilon_t f (\x'_t(\epsilon)) \right] \\
&= 2\En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t f (\x_t(\epsilon)) \right] \notag
\end{align}
The last equality is not difficult to verify but requires understanding the symmetry between the paths in the $\x$ and $\x'$ trees. This symmetry implies that the two terms in Eq.~\eqref{eq:split_rademacher} are equal. Each $\epsilon\in\{\pm1\}^T$ in the first term defines time steps $t$ when values in $\x$ are used in conditional distributions. To any such $\epsilon$, there corresponds a $-\epsilon$ in the second term which defines times when values in $\x'$ are used in conditional distributions. This implies the required result. As a more concrete example, consider the path $\epsilon=-{\boldsymbol 1}$ in the first term. The contribution to the overall expectation is the supremum over $f\in\F$ of evaluation of $-f$ on the left-most path of the $\x$ tree which is defined as successive draws from distributions $p_t$ conditioned on the values on the left-most path, irrespective of the $\x'$ tree. Now consider the corresponding path $\epsilon={\boldsymbol 1}$ in the second term. Its contribution to the overall expectation is a supremum over $f\in\F$ of evaluation of $-f$ on the right-most path of the $\x'$ tree, defined as successive draws from distributions $p_t$ conditioned on the values on the right-most path, irrespective of the $\x$ tree. Clearly, the contributions are the same, and the same argument can be done for any path $\epsilon$.
Alternatively, we can see that the two terms in Eq.~\eqref{eq:split_rademacher} are equal by expanding the notation. We thus claim that
\begin{align*}
&\En_{x_1,x'_1\sim p_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t f(x'_t) \right\} \right] \\
&=\En_{x_1,x'_1\sim p_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T \epsilon_t f(x_t) \right\} \right]
\end{align*}
The identity can be verified by simultaneously renaming $\x$ with $\x'$ and $\epsilon$ with $-\epsilon$. Since $\chi(x,x',\epsilon)=\chi(x',x,-\epsilon)$, the distributions in the two expressions are the same while the sum of the first term becomes the sum of the second term.
More generally, the split of Eq.~\eqref{eq:split_rademacher} can be performed via an additional ``centering'' term. For any $t$, let $M_t$ be a function with the property $M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)=M_t(\ensuremath{\mathbf{p}},f,\x',\x,-\epsilon)$.
We then have
\begin{align}
\label{eq:split_rademacher_with_center}
&\En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T \epsilon_t(f(\x_t(\epsilon)) -f(\x'_t(\epsilon))) \right\} \right] \notag\\
& \le \En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t (f (\x_t(\epsilon))-M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)) \right] \\
&+ \En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T - \epsilon_t (f (\x'_t(\epsilon))-M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)) \right] \notag\\
&= 2\En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t (f (\x_t(\epsilon))-M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)) \right] \notag
\end{align}
To verify the equality of the two terms in Eq.~\eqref{eq:split_rademacher_with_center}, we can expand the notation.
{\small
\begin{align*}
&\En_{x_1,x'_1\sim p_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(x'_t)-M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)) \right\} \right] \\
&=\En_{x_1,x'_1\sim p_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T \epsilon_t (f(x_t)-M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)) \right\} \right]
\end{align*}
}
\end{proof}
\begin{proof}[\textbf{Proof of Corollary~\ref{cor:centered_at_conditional}}]
Define a function $M_t$ as the conditional expectation $$M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)=\En_{x\sim p_t(\cdot|\chi_1(\epsilon_1),\ldots,\chi_{t-1}(\epsilon_{t-1}))} f(x).$$
The property $M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)=M_t(\ensuremath{\mathbf{p}},f,\x',\x,-\epsilon)$ holds because $\chi(x,x',\epsilon)=\chi(x',x,-\epsilon)$.
\end{proof}
\begin{proof}[\textbf{Proof of Corollary~\ref{cor:valrad_constrained}}]
The first steps follow the proof of Theorem~\ref{thm:valrad}:
\begin{align*}
\Val_T &\leq \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{ \sup_{f \in \F}\left\{ \sum_{t=1}^T f(x_t') - f(x_t) \right\} }
\end{align*}
and for a fixed $\ensuremath{\mathbf{p}}\in\PDA$,
\begin{align}
&\E{ \sup_{f \in \F}\left\{ \sum_{t=1}^T f(x_t') - f(x_t) \right\} } \\
&= \En_{x_1,x'_1\sim p_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(x'_t) -f(x_t)) \right\} \right].\notag
\end{align}
At this point we pass to an upper bound, unlike the proof of Theorem~\ref{thm:valrad}. Notice that $p_t(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{t-1}(\epsilon_{t-1}))$ is a distribution with support in $\X_t(\chi_1(\epsilon_1),\ldots, \chi_{t-1}(\epsilon_{t-1}))$. That is, the sequence $\chi_1(\epsilon_1),\ldots, \chi_{t-1}(\epsilon_{t-1})$ defines the constraint at time $t$. Passing from $t=T$ down to $t=1$, we can replace all the expectations over $p_t$ by the suprema over the set $\X_t$, only increasing the value:
\begin{align*}
&\En_{x_1,x'_1\sim p_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(x'_t) -f(x_t)) \right\} \right]\\
&\leq \sup_{x_1,x'_1\in \X_1} \En_{\epsilon_1} \sup_{x_2,x'_2\in \X_2(\chi_1(\epsilon_1))} \En_{\epsilon_2} \ldots \sup_{x_T,x'_T\in \X_T(\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(x'_t) -f(x_t)) \right\} \right] \\
&= \sup_{(\x,\x')\in {\mathcal T}} \En_{\epsilon} \left[ \sup_{f\in\F} \left\{ \sum_{t=1}^T -\epsilon_t (f(\x'_t(\epsilon)) -f(\x_t(\epsilon))) \right\} \right]
\end{align*}
In the last equality, we passed to the tree representation. Indeed, at each step, we are choosing $x_t,x'_t$ from the appropriate set and then flipping a coin $\epsilon_t$ which decides which of $x_t,x'_t$ will be used to define the constraint set through $\chi_t(\epsilon_t)$. This once again defines a tree structure and we may pass to the supremum over trees $(\x,\x')\in{\mathcal T}$. However, ${\mathcal T}$ is not a set of all possible $\X$-valued trees: for each $t$, $\x_t(\epsilon),\x'_t(\epsilon) \in \X_t(\chi_1(\x_1,\x'_1,\epsilon_1),\ldots,\chi_{t-1}(\x_{t-1}(\epsilon_{t-1}),\x'_{t-1}(\epsilon_{t-1}),\epsilon_{t-1}))$. That is, the choice at each node of the tree is constrained by the values of both trees according to the path. As before, the left-most path of the $\x$ tree (as well as the right-most path of the $\x'$ tree) is defined by constraints applied to the values on the path only disregarding the other tree.
The rest of the proof exactly follows the proof of Theorem~\ref{thm:valrad}.
\end{proof}
\begin{proof}[\textbf{Proof of Proposition~\ref{prop:maxvar}}]
Let $M_t(f,\x,\x',\epsilon) = \frac{1}{t-1} \sum_{\tau=1}^{t-1} f(\chi_\tau(\epsilon_\tau))$. Note that since $\chi(x, x', \epsilon) = \chi(x', x, -\epsilon)$, we have that $M_t(f,\x,\x',\epsilon) = M_t(f,\x',\x,-\epsilon)$.
Using Corollary~\ref{cor:valrad_constrained} we conclude that
\begin{align}
\Val_T & \le 2 \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F} \sum_{t=1}^T \epsilon_t\left(\inner{f,\x_t(\epsilon)} - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \inner{f,\chi_\tau(\epsilon_\tau)} \right)}\notag \\
& = 2 \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F}\inner{f, \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \chi_\tau(\epsilon_\tau) \right)}} \notag
\end{align}
By linearity and Fenchel's inequality, the last expression is upper bounded by
\begin{align}
&\frac{2}{\alpha} \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F}\inner{ f, \alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \chi_\tau(\epsilon_\tau) \right)}} \notag \\
& \le \frac{2}{\alpha} \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F}
\Psi(f) + \Psi^*\left(\alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \chi_\tau(\epsilon_\tau) \right)\right)} \notag \\
& \le \frac{2}{\alpha} \left( \sup_{f \in \F} \Psi(f) + \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\Psi^*\left(\alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \chi_\tau(\epsilon_\tau) \right)\right)} \right)\notag \\
& \le \frac{2 R^2}{\alpha} + \frac{2}{\alpha} \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\Psi^*\left(\alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \chi_\tau(\epsilon_\tau) \right)\right)}\notag \\
& \le \frac{2 R^2}{\alpha} + \frac{\alpha}{\lambda} \sum_{t=1}^T \Es{\epsilon}{\left\|\x_t(\epsilon) - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \chi_\tau(\epsilon_\tau) \right\|_*^2} \label{eq:MDS}
\end{align}
where the last step follows from Lemma 2 of \cite{KakSriTew08} (with a slight modification). However, since $(\x,\x') \in \mathcal{T}$ are pairs of trees such that for any $\epsilon \in \{\pm 1\}^T$ and any $t \in [T]$
$$
C_t(\chi_1(\epsilon_1), \ldots,\chi_{t-1}(\epsilon_{t-1}), \x_{t}(\epsilon)) = 1
$$
we can conclude that for any $\epsilon \in \{\pm 1\}^T$ and any $t \in [T]$,
$$
\left\|\x_t(\epsilon) - \frac{1}{t-1} \sum_{\tau=1}^{t-1} \chi_{\tau}(\epsilon_\tau) \right\|_* \le \sigma_t
$$
Using this with Eq.~\eqref{eq:MDS} and the fact that $\alpha > 0$ is arbitrary, we conclude that
\begin{align*}
\Val_T & \le \inf_{\alpha > 0}\left\{\frac{2 R^2}{\alpha} + \frac{\alpha}{\lambda} \sum_{t=1}^T \sigma_t^2\right\} \le 2 \sqrt{2} R \sqrt{\sum_{t=1}^T \sigma_t^2}
\end{align*}
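The final optimization over $\alpha$ is an instance of the AM-GM step $\inf_{\alpha>0}\{a/\alpha+b\alpha\}=2\sqrt{ab}$ with $a=2R^2$ and $b=\lambda^{-1}\sum_t\sigma_t^2$. A quick numeric sanity check, with illustrative values of $R$, $\lambda$, and $\sigma_t$ (not taken from the text; $\lambda=1$ recovers the stated bound $2\sqrt{2}R\sqrt{\sum_t\sigma_t^2}$):

```python
# Numeric sanity check of the closed-form infimum over alpha:
# min over alpha > 0 of a/alpha + b*alpha equals 2*sqrt(a*b).
import math

R, lam = 1.5, 1.0                               # illustrative constants
sigmas = [0.3, 0.1, 0.25, 0.2]
a, b = 2 * R**2, sum(s**2 for s in sigmas) / lam

closed_form = 2 * math.sqrt(a * b)
grid_min = min(a / al + b * al for al in (i / 1000 for i in range(1, 20001)))

assert closed_form <= grid_min + 1e-9           # no alpha beats the infimum
assert grid_min <= closed_form + 1e-3           # and the grid gets close
assert abs(closed_form
           - 2 * math.sqrt(2) * R * math.sqrt(sum(s**2 for s in sigmas) / lam)) < 1e-12
```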
\end{proof}
\begin{proof}[\textbf{Proof of Proposition~\ref{prop:smalljumps}}]
Let $M_t(f,\x,\x',\epsilon) = f(\chi_{t-1}(\epsilon_{t-1}))$. Note that since $\chi(x, x', \epsilon) = \chi(x', x, -\epsilon)$ we have that $M_t(f,\x,\x',\epsilon) = M_t(f,\x',\x,-\epsilon)$.
Using Corollary~\ref{cor:valrad_constrained} we conclude that
\begin{align}
\Val_T & \le 2 \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F} \sum_{t=1}^T \epsilon_t\left(\inner{f,\x_t(\epsilon)} - \inner{f,\chi_{t-1}(\epsilon_{t-1})} \right)}\notag \\
& = 2 \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F}\inner{f, \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \chi_{t-1}(\epsilon_{t-1}) \right)}} \notag
\end{align}
As before, using linearity and Fenchel's inequality we pass to the upper bound
\begin{align}
&\frac{2}{\alpha} \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F}\inner{ f, \alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \chi_{t-1}(\epsilon_{t-1}) \right)}} \notag \\
& \le \frac{2}{\alpha} \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\sup_{f \in \F}
\Psi(f) + \Psi^*\left(\alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \chi_{t-1}(\epsilon_{t-1}) \right)\right)} \notag \\
& \le \frac{2}{\alpha} \left( \sup_{f \in \F} \Psi(f) + \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\Psi^*\left(\alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \chi_{t-1}(\epsilon_{t-1}) \right)\right)} \right)\notag \\
& \le \frac{2 R^2}{\alpha} + \frac{2}{\alpha} \sup_{(\x,\x') \in \mc{T}} \Es{\epsilon}{\Psi^*\left(\alpha \sum_{t=1}^T \epsilon_t\left(\x_t(\epsilon) - \chi_{t-1}(\epsilon_{t-1}) \right)\right)}\notag \\
& \le \frac{2 R^2}{\alpha} + \frac{\alpha}{\lambda} \sum_{t=1}^T \Es{\epsilon}{\left\|\x_t(\epsilon) - \chi_{t-1}(\epsilon_{t-1}) \right\|_*^2} \label{eq:MDS2}
\end{align}
where the last step follows from Lemma 2 of \cite{KakSriTew08} (with a slight modification). However, since $(\x,\x') \in \mathcal{T}$ are pairs of trees such that for any $\epsilon \in \{\pm 1\}^T$ and any $t \in [T]$
$$
C_t(\chi_1(\epsilon_1), \ldots,\chi_{t-1}(\epsilon_{t-1}), \x_{t}(\epsilon)) = 1
$$
we can conclude that for any $\epsilon \in \{\pm 1\}^T$ and any $t \in [T]$,
$$
\left\|\x_t(\epsilon) - \chi_{t-1}(\epsilon_{t-1}) \right\|_* \le \delta
$$
Using this with Eq.~\eqref{eq:MDS2} and the fact that $\alpha > 0$ is arbitrary, we conclude that
\begin{align*}
\Val_T & \le \inf_{\alpha > 0}\left\{\frac{2 R^2}{\alpha} + \frac{\alpha \delta^2 T}{\lambda} \right\} \le 2 R \delta \sqrt{2 T}
\end{align*}
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:iid_wc_rademacher}}]
We want to bound the supremum (as $\ensuremath{\mathbf{p}}$ ranges over $\PDA$) of the distribution-dependent Rademacher complexity:
\begin{align*}
\sup_{\ensuremath{\mathbf{p}}\in\PDA} \Rad_T(\phi(\F),\ensuremath{\mathbf{p}}) &= \sup_{\ensuremath{\mathbf{p}}\in\PDA} \Eunderone{((\x,\y),(\x',\y'))\sim \boldsymbol{\rho}} \Es{\epsilon}{\sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(\x_t(\epsilon)),\y_t(\epsilon))}
\end{align*}
for an associated process $\boldsymbol{\rho}$ defined in Section~\ref{sec:rademacher}. To elucidate the random process $\boldsymbol{\rho}$, we expand the succinct tree notation and write the above quantity as
\begin{align*}
&\sup_{\ensuremath{\mathbf{p}}} \En_{x_1,x'_1\sim p}\En_{\substack{y_1\sim p_1(\cdot|x_1) \\ y'_1\sim p_1(\cdot|x'_1)}} \En_{\epsilon_1} \En_{x_2,x'_2\sim p}\En_{\substack{y_2\sim p_2(\cdot|\chi_1(\epsilon_1),x_2) \\ y'_2\sim p_2(\cdot|\chi_1(\epsilon_1),x'_2)}} \En_{\epsilon_2} ~~\ldots \\
&~~~~~~~~~~~~~~~~~~~~~~~~\ldots~~ \En_{x_T,x'_T\sim p}\En_{\substack{y_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1}),x_T) \\ y'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1}),x'_T) }} \En_{\epsilon_{T}} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(x_t),y_t) \right]
\end{align*}
where $\chi_t(\epsilon_t)$ now selects the pair $(x_t,y_t)$ or $(x'_t,y'_t)$. By passing to the supremum over $y_t,y'_t$ for all $t$, we arrive at
\begin{align*}
\sup_{\ensuremath{\mathbf{p}}\in\PDA} \Rad_T(\phi(\F),\ensuremath{\mathbf{p}}) &\leq \sup_{\ensuremath{\mathbf{p}}} \En_{x_1,x'_1\sim p} \sup_{y_1,y'_1} \En_{\epsilon_1} \En_{x_2,x'_2\sim p} \sup_{y_2,y'_2} \En_{\epsilon_2} \ldots \En_{x_T,x'_T\sim p}\sup_{y_T,y'_T} \En_{\epsilon_{T}} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(x_t),y_t) \right] \\
&= \En_{x_1\sim p} \sup_{y_1} \En_{\epsilon_1} \En_{x_2\sim p} \sup_{y_2} \En_{\epsilon_2} \ldots \En_{x_T\sim p}\sup_{y_T} \En_{\epsilon_{T}} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(x_t),y_t) \right]
\end{align*}
where the sequence of $x'_t$'s and $y'_t$'s has been eliminated. By moving the expectations over $x_t$'s outside the suprema (and thus increasing the value), we upper bound the above by:
\begin{align*}
& \le \En_{x_1,\ldots,x_T \sim p} \sup_{y_1} \En_{\epsilon_1} \sup_{y_2} \En_{\epsilon_2} \ldots \sup_{y_T} \En_{\epsilon_{T}} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(x_t),y_t) \right] \\
& = \Eunderone{x_1, \ldots, x_T \sim p} \sup_{\y} \Es{\epsilon}{\sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(x_t),\y_t(\epsilon))}
\end{align*}
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:comparison_lemma_iid_wc}}]
First, assume without loss of generality that $L = 1$; the general case follows by scaling $\phi$ appropriately.
By Lemma~\ref{lem:iid_wc_rademacher},
\begin{align}
\label{eq:iid_wc_bd}
\Rad_T(\phi(\F),\ensuremath{\mathbf{p}}) & \le \Eunderone{x_1,\ldots,x_T \sim p} \sup_{\y} \Es{\epsilon}{\sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(x_t),\y_t(\epsilon))}
\end{align}
The proof proceeds by sequentially using the Lipschitz property of $\phi(f(x_t), \y_t(\epsilon))$ for decreasing $t$, starting from $t=T$. Towards this end, define
\begin{align*}
R_t = \Eunderone{x_1,\ldots,x_T \sim p} \sup_{\y} \Es{\epsilon}{\sup_{f\in\F} \sum_{s=1}^t \epsilon_s \phi(f(x_s), \y_s(\epsilon)) + \sum_{s=t+1}^T \epsilon_s f(x_s) } \ .
\end{align*}
Since the mappings $\y_{t+1},\ldots,\y_T$ do not enter the expression, the supremum is in fact taken over the trees $\y$ of depth $t$. Note that $R_0 = \Rad(\F,p)$ is precisely the classical Rademacher complexity (without the dependence on $\y$), while $R_T$ is the upper bound on $\Rad_T(\phi(\F),\ensuremath{\mathbf{p}})$ in Eq.~\eqref{eq:iid_wc_bd}. We need to show $R_T\leq R_0$ and we will show this by proving $R_{t} \le R_{t-1}$ for all $t \in [T]$.
So, let us fix $t \in [T]$ and start with $R_t$:
\begin{align*}
R_t &= \Eunderone{x_1,\ldots,x_T \sim p} \sup_{\y} \Es{\epsilon}{\sup_{f\in\F} \sum_{s=1}^t \epsilon_s \phi(f(x_s), \y_s(\epsilon)) + \sum_{s=t+1}^T \epsilon_s f(x_s) } \\
&= \Eunderone{x_1,\ldots,x_T \sim p} \sup_{y_1} \En_{\epsilon_1}\ldots \sup_{y_t} \En_{\epsilon_t} \En_{\epsilon_{t+1:T}} \left[ \sup_{f\in\F} \sum_{s=1}^t \epsilon_s \phi(f(x_s), y_s) + \sum_{s=t+1}^T \epsilon_s f(x_s) \right] \\
&= \Eunderone{x_1,\ldots,x_T \sim p} \sup_{y_1} \En_{\epsilon_1}\ldots \sup_{y_t} \En_{\epsilon_{t+1:T}} ~~S(x_{1:T}, y_{1:t}, \epsilon_{1:t-1},\epsilon_{t+1:T})
\end{align*}
with
\begin{align*}
S(x_{1:T}, y_{1:t}, \epsilon_{1:t-1},\epsilon_{t+1:T}) &= \En_{\epsilon_t} \left[ \sup_{f\in\F} \sum_{s=1}^t \epsilon_s \phi(f(x_s), y_s) + \sum_{s=t+1}^T \epsilon_s f(x_s) \right] \\
&= \frac{1}{2}\left\{\sup_{f\in\F} \sum_{s=1}^{t-1} \epsilon_s \phi(f(x_s), y_s) + \phi(f(x_t), y_t) + \sum_{s=t+1}^T \epsilon_s f(x_s) \right\} \\
&+ \frac{1}{2}\left\{\sup_{f\in\F} \sum_{s=1}^{t-1} \epsilon_s \phi(f(x_s), y_s) - \phi(f(x_t), y_t) + \sum_{s=t+1}^T \epsilon_s f(x_s) \right\}
\end{align*}
The two suprema can be combined to yield
\begin{align*}
&2S(x_{1:T}, y_{1:t}, \epsilon_{1:t-1},\epsilon_{t+1:T}) \\
&= \sup_{f,g\in\F} \left\{ \sum_{s=1}^{t-1} \epsilon_s (\phi(f(x_s), y_s)+\phi(g(x_s), y_s)) + \phi(f(x_t), y_t) - \phi(g(x_t), y_t) + \sum_{s=t+1}^T \epsilon_s (f(x_s) +g(x_s)) \right\} \\
&\leq \sup_{f,g\in\F} \left\{ \sum_{s=1}^{t-1} \epsilon_s (\phi(f(x_s), y_s)+\phi(g(x_s), y_s)) + |f(x_t) - g(x_t)| + \sum_{s=t+1}^T \epsilon_s (f(x_s) +g(x_s)) \right\} ~~~~(*) \\
&= \sup_{f,g\in\F} \left\{ \sum_{s=1}^{t-1} \epsilon_s (\phi(f(x_s), y_s)+\phi(g(x_s), y_s)) + f(x_t) - g(x_t) + \sum_{s=t+1}^T \epsilon_s (f(x_s) +g(x_s)) \right\} ~~~~(**)
\end{align*}
The first inequality is due to the Lipschitz property, while the last equality needs a justification. First, it is clear that the term $(**)$ is upper bounded by $(*)$. The reverse direction can be argued as follows. Let a pair $(f^*,g^*)$ achieve the supremum in $(*)$. Suppose first that $f^*(x_t)\geq g^*(x_t)$. Then $(f^*,g^*)$ provides the same value in $(**)$ and, hence, the supremum is no less than the supremum in $(*)$. If, on the other hand, $f^*(x_t) < g^*(x_t)$, then the pair $(g^*,f^*)$ provides the same value in $(**)$.
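The combination of the two suprema hinges on a scalar inequality: writing $a_f$ for all the terms not involving $x_t$ and $v_f = f(x_t)$, we need $\sup_{f,g}\{a_f+a_g+\phi(v_f)-\phi(v_g)\} \le \sup_{f,g}\{a_f+a_g+v_f-v_g\}$ for any $1$-Lipschitz $\phi$. A randomized check on finite classes, with $\phi(u)=|u|$ as an arbitrary $1$-Lipschitz choice:

```python
# Randomized check of the scalar step behind the contraction argument:
#   (1/2)[sup_f (a_f + phi(v_f)) + sup_f (a_f - phi(v_f))]
#     <= (1/2)[sup_f (a_f + v_f) + sup_f (a_f - v_f)]
# whenever phi is 1-Lipschitz.
import random

random.seed(0)
phi = abs                                       # an arbitrary 1-Lipschitz map

for _ in range(10000):
    n = random.randint(1, 6)                    # size of the finite class
    a = [random.uniform(-3, 3) for _ in range(n)]
    v = [random.uniform(-3, 3) for _ in range(n)]
    lhs = 0.5 * (max(ai + phi(vi) for ai, vi in zip(a, v))
                 + max(ai - phi(vi) for ai, vi in zip(a, v)))
    rhs = 0.5 * (max(ai + vi for ai, vi in zip(a, v))
                 + max(ai - vi for ai, vi in zip(a, v)))
    assert lhs <= rhs + 1e-12
```

The inequality holds because $\phi(v_f)-\phi(v_g)\le|v_f-v_g|$, and the absolute value can be dropped by the same swapping argument used for $(*)$ and $(**)$ above.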
We conclude that
\begin{align*}
&S(x_{1:T}, y_{1:t}, \epsilon_{1:t-1},\epsilon_{t+1:T}) \\
&\leq \frac{1}{2}\sup_{f,g\in\F} \left\{ \sum_{s=1}^{t-1} \epsilon_s (\phi(f(x_s), y_s)+\phi(g(x_s), y_s)) + f(x_t) - g(x_t) + \sum_{s=t+1}^T \epsilon_s (f(x_s) +g(x_s)) \right\} \\
&= \frac{1}{2}\left\{ \sup_{f\in\F} \sum_{s=1}^{t-1} \epsilon_s \phi(f(x_s), y_s) + f(x_t) + \sum_{s=t+1}^T \epsilon_s f(x_s) \right\} + \frac{1}{2}\left\{ \sup_{f\in\F} \sum_{s=1}^{t-1} \epsilon_s \phi(f(x_s), y_s) - f(x_t) + \sum_{s=t+1}^T \epsilon_s f(x_s) \right\}\\
&= \En_{\epsilon_t} \sup_{f\in\F} \left\{ \sum_{s=1}^{t-1} \epsilon_s \phi(f(x_s), y_s) + \epsilon_t f(x_t) + \sum_{s=t+1}^T \epsilon_s f(x_s)\right\}
\end{align*}
Thus,
\begin{align*}
R_t &= \Eunderone{x_1,\ldots,x_T \sim p} \sup_{y_1} \En_{\epsilon_1}\ldots \sup_{y_t} \En_{\epsilon_{t+1:T}} ~~S(x_{1:T}, y_{1:t}, \epsilon_{1:t-1},\epsilon_{t+1:T}) \\
&\leq \Eunderone{x_1,\ldots,x_T \sim p} \sup_{y_1} \En_{\epsilon_1}\ldots \sup_{y_t} \En_{\epsilon_{t:T}} \sup_{f\in\F} \left\{ \sum_{s=1}^{t-1} \epsilon_s \phi(f(x_s), y_s) + \sum_{s=t}^T \epsilon_s f(x_s)\right\} \\
&= \Eunderone{x_1,\ldots,x_T \sim p} \sup_{y_1} \En_{\epsilon_1}\ldots \sup_{y_{t-1}}\En_{\epsilon_{t-1}} \En_{\epsilon_{t:T}} \sup_{f\in\F} \left\{ \sum_{s=1}^{t-1} \epsilon_s \phi(f(x_s), y_s) + \sum_{s=t}^T \epsilon_s f(x_s)\right\} \\
&= R_{t-1}
\end{align*}
where we have removed the supremum over $y_t$ as it no longer appears in the objective. This concludes the proof.
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:first_lower}}]
Notice that $\ensuremath{\mathbf{p}}$ defines the stochastic process $\boldsymbol{\rho}$ as in \eqref{eq:sampling_procedure} where the i.i.d. $y_t$'s now play the role of the $\epsilon_t$'s. More precisely, at each time $t$, two copies $x_t$ and $x'_t$ are drawn from the marginal distribution $p_t(\cdot|\chi_1(y_1),\ldots,\chi_{t-1}(y_{t-1}))$, then a Rademacher random variable $y_t$ is drawn i.i.d. and it indicates whether $x_t$ or $x'_t$ is to be used in the subsequent conditional distributions via the selector $\chi_t(y_t)$. This is a well-defined process obtained from $\ensuremath{\mathbf{p}}$ that produces a sequence of $(x_1,x'_1,y_1),\ldots,(x_T,x'_T,y_T)$. The $x'$ sequence is only used to define conditional distributions below, while the sequence $(x_1,y_1),\ldots,(x_T,y_T)$ is presented to the player. Since restrictions are history-independent, the stochastic process is following the protocol which defines $\boldsymbol{\rho}$.
For any $\ensuremath{\mathbf{p}}$ of the form described above, the value of the game in \eqref{eq:value_equality} can be lower-bounded via Proposition~\ref{prop:lower_bound_oblivious}:
\begin{align*}
\Val^{\text{sup}}_T &\geq \En \left[
\sum_{t=1}^T \inf_{f_t \in \F}
\Es{(x_t,y_t)}{ |y_t-f_t (x_t)| \ \Big|\ (x,y)_{1:t-1}} - \inf_{f\in \F} \sum_{t=1}^T |y_t-f(x_t)|
\right] \\
&= \En \left[
\sum_{t=1}^T 1 - \inf_{f\in \F} \sum_{t=1}^T |y_t-f(x_t)|
\right]
\end{align*}
A short calculation shows that the last quantity is equal to
\begin{align*}
\En
\sup_{f\in \F} \sum_{t=1}^T \left(1 - |y_t-f(x_t)|\right)
= \En
\sup_{f\in \F} \sum_{t=1}^T y_t f(x_t) .
\end{align*}
The last expectation can be expanded to show the stochastic process:
\begin{align*}
&\En_{x_1,x'_1\sim p_1}\En_{y_1}\En_{x_2,x'_2\sim p_2(\cdot|\chi_1(y_1))}\En_{y_2} \ldots \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(y_1),\ldots,\chi_{T-1}(y_{T-1}))}\En_{y_T} \sup_{f\in \F} \sum_{t=1}^T y_t f(x_t) \\
&= \En_{(\x,\x')\sim \boldsymbol{\rho}}\Es{\epsilon}{ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(\x_t(\epsilon))} \\
&= \Rad_T(\F, \ensuremath{\mathbf{p}})
\end{align*}
Since this lower bound holds for any $\ensuremath{\mathbf{p}}$ which allows the labels to be independent $\pm1$ with probability $1/2$, we conclude the proof.
\end{proof}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:second_lower}}]
For the purposes of this proof, the adversary presents $y_t$, an i.i.d. Rademacher random variable, on each round. Unlike in the previous lemma, only the $\{x_t\}$ sequence is used for defining conditional distributions. Hence, the $\x'$ tree is immaterial and the lower bound is only concerned with the left-most path. The rest of the proof is similar to that of Lemma~\ref{lem:first_lower}:
\begin{align*}
\Val^{\text{sup}}_T &\geq \En \left[
\sum_{t=1}^T \inf_{f_t \in \F}
\Es{(x_t,y_t)}{ |y_t-f_t (x_t)| \ \Big|\ (x,y)_{1:t-1}} - \inf_{f\in \F} \sum_{t=1}^T |y_t-f(x_t)|
\right] \\
&= \En \left[
\sum_{t=1}^T 1 - \inf_{f\in \F} \sum_{t=1}^T |y_t-f(x_t)|
\right]
\end{align*}
As before, this expression is equal to
\begin{align*}
&\En
\sup_{f\in \F} \sum_{t=1}^T y_t f(x_t) = \En_{x_1\sim p_1}\En_{y_1}\En_{x_2\sim p_2(\cdot|x_1)}\En_{y_2} \ldots \En_{x_T\sim p_T(\cdot|x_1,\ldots,x_{T-1})}\En_{y_T} \sup_{f\in \F} \sum_{t=1}^T y_t f(x_t) \\
&= \En_{(\x,\x')\sim \boldsymbol{\rho}}\Es{\epsilon}{ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(\x_t(-{\boldsymbol 1}))}
\end{align*}
\end{proof}
\section{Constrained Adversaries}
\label{sec:constraints}
In this section we consider adversaries who are constrained in the sequences of actions they can play. It is often useful to consider scenarios where the adversary is worst case, yet has some budget or constraint to satisfy while picking the actions. Examples of such scenarios include, for instance, games where the adversary is constrained to make moves that are close in some fashion to the previous move, linear games with bounded variance, and so on. Below we formulate such games quite generally through arbitrary constraints that the adversary has to satisfy on each round.
Specifically, for a $T$-round game consider an adversary who is only allowed to play sequences $x_1,\ldots,x_T$ such that at round $t$ the constraint $C_t(x_1,\ldots,x_t) = 1$ is satisfied, where $C_t : \X^t \mapsto \{0,1\}$ represents the constraint on the sequence played so far. The constrained adversary can be viewed as a stochastic adversary with restrictions on the conditional distribution at time $t$ given by the set of all Borel distributions on the set
$$\X_t(x_{1:t-1}) ~\stackrel{\scriptscriptstyle\triangle}{=}~ \{x \in \X : C_t(x_1,\ldots,x_{t-1},x) = 1 \} .$$
Since this set includes all point distributions on $\X_t(x_{1:t-1})$, the sequential complexity simplifies in a way similar to worst-case adversaries. We write $\Val_T(C_{1:T})$ for the value of the game with the given constraints. Now, assume that for any $x_{1:t-1}$, the set of all distributions on $\X_t(x_{1:t-1})$ is weakly compact, in a manner similar to the compactness of $\mathcal P$. That is, the sets $\mathcal P_t(x_{1:t-1})$ satisfy the necessary conditions for the minimax theorem to hold. We have the following corollaries of Theorems~\ref{thm:minimax} and \ref{thm:valrad}.
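As a concrete illustration of the formalism, the following sketch rejection-samples a sequence that satisfies a running-mean constraint of the kind used later in Proposition~\ref{prop:maxvar}; the sampling scheme and constants are arbitrary choices for demonstration, not a strategy from the text.

```python
# Illustrative constrained adversary: at round t the move x must satisfy
# C_t(x_1, ..., x_{t-1}, x) = 1, here a running-mean ("variance") constraint.
import random

def C_t(history, x, sigma):
    """Constraint: x stays within sigma of the average of past moves."""
    if not history:
        return True
    mean = sum(history) / len(history)
    return abs(x - mean) <= sigma

random.seed(1)
sigma, history = 0.1, []
for t in range(50):
    while True:                                 # rejection-sample a legal move
        x = random.uniform(-1, 1)
        if C_t(history, x, sigma):
            break
    history.append(x)

# every prefix of the played sequence satisfies the constraint
assert all(C_t(history[:t], history[t], sigma) for t in range(len(history)))
```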
\begin{corollary}\label{cor:minimax_constrained}
Let $\F$ and $\X$ be the sets of moves for the two players, satisfying the necessary conditions for the minimax theorem to hold. Let $\{C_t: \X^{t}\mapsto \{0,1\} \}_{t=1}^T$ be the \emph{constraints}.
Then
\begin{align}
\label{eq:value_equality_constrained}
\Val_T(C_{1:T}) &~=~ \sup_{\ensuremath{\mathbf{p}}\in\PDA} \En \left[
\sum_{t=1}^T \inf_{f_t \in \F}
\Es{x_t \sim p_t}{f_t(x_t)} - \inf_{f\in\F} \sum_{t=1}^T f(x_t)
\right]
\end{align}
where $\ensuremath{\mathbf{p}}$ ranges over all distributions over sequences $(x_1,\ldots,x_T)$ such that $C_t(x_{1:t})=1$ for all $t$.
\end{corollary}
\begin{corollary}\label{cor:valrad_constrained}
Let the set ${\mathcal T}$ be a set of pairs $(\x,\x')$ of $\X$-valued trees with the property that for any $\epsilon \in \{\pm 1\}^T$ and any $t \in [T]$
$$
C_t(\chi_1(\epsilon_1), \ldots,\chi_{t-1}(\epsilon_{t-1}), \x_{t}(\epsilon)) = C_t(\chi_1(\epsilon_1), \ldots,\chi_{t-1}(\epsilon_{t-1}), \x'_{t}(\epsilon)) = 1
$$
The minimax value is bounded as
$$
\Val_T(C_{1:T}) \le 2\sup_{(\x,\x')\in {\mathcal T}} \En_{\epsilon} \left[ \sup_{f\in\F} \sum_{t=1}^T \epsilon_t f(\x_t(\epsilon)) \right].
$$
More generally,
\begin{align*}
\Val_T(C_{1:T}) &\leq \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{ \sup_{f \in \F}\left\{ \sum_{t=1}^T f(x_t') - f(x_t) \right\} } \\
&\leq 2\sup_{(\x,\x')\in {\mathcal T}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t (f (\x_t(\epsilon))-M_t(f,\x,\x',\epsilon)) \right]
\end{align*}
for any measurable function $M_t$ with the property $M_t(f,\x,\x',\epsilon) = M_t(f,\x',\x,-\epsilon)$.
\end{corollary}
Armed with these results, we can recover and extend some known results on online learning against budgeted adversaries. The first result says that if the adversary is not allowed to move by more than $\sigma_t$ away from its previous average of decisions, the player has a strategy to exploit this fact and obtain lower regret. For the $\ell_2$-norm, such ``total variation'' bounds have been achieved in \cite{HazKal09} up to a $\log T$ factor. We note that in the present formulation the budget is known to the learner, whereas the results of \cite{HazKal09} are adaptive. Such adaptation is beyond the scope of this paper.
\begin{proposition}[Variance Bound]
\label{prop:maxvar}
Consider the online linear optimization setting with $\F = \{f : \Psi(f) \le R^2\}$ for a $\lambda$-strongly convex function $\Psi : \F \mapsto \mathbb{R}_+$ on $\F$, and $\X = \{x : \|x\|_* \le 1\}$. Let $f(x) = \inner{f,x}$ for any $f \in \F$ and $x \in \X$. Consider the sequence of constraints $\{C_t\}_{t=1}^T$ given by
$$
C_t(x_1,\ldots,x_{t-1},x) = \left\{\begin{array}{ll}
1 & \textrm{if } \|x - \frac{1}{t-1} \sum_{\tau=1}^{t-1} x_{\tau} \|_* \le \sigma_t \\
0 & \textrm{otherwise}
\end{array}
\right.
$$
Then
\begin{align*}
\Val_T(C_{1:T}) & \le \inf_{\alpha > 0}\left\{\frac{2 R^2}{\alpha} + \frac{\alpha}{\lambda} \sum_{t=1}^T \sigma_t^2\right\} \le 2 \sqrt{2} R \sqrt{\sum_{t=1}^T \sigma_t^2}
\end{align*}
\end{proposition}
In particular, we obtain the following $L_2$ variance bound. Consider the case when $\Psi : \F \mapsto \mathbb{R}_+$ is given by $\Psi(f) = \frac{1}{2}\|f\|^2$, $\F = \{f : \|f\|_2 \le 1\}$, and $\X = \{x : \|x\|_2 \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies
$$
\left\|x_t - \frac{1}{t-1} \sum_{\tau=1}^{t-1} x_\tau \right\|_2 \le \sigma_t ~.
$$
In this case we can conclude that
$$
\Val_T(C_{1:T}) \le 2 \sqrt{2} \sqrt{\sum_{t=1}^T \sigma_t^2} \ .
$$
We can also derive a variance bound over the simplex. Let $\Psi(f) = \sum_{i=1}^d f_i \log(d f_i)$ be defined over the $d$-simplex $\F$, and let $\X = \{x : \|x\|_\infty \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies
$$
\max_{j \in [d]} \left|x_{t}[j] - \frac{1}{t-1} \sum_{\tau=1}^{t-1} x_\tau[j] \right| \le \sigma_t ~.
$$
For any $f \in \F$, $\Psi(f) \le \log(d)$ and so we conclude that
$$
\Val_T(C_{1:T}) \le 2 \sqrt{2} \sqrt{ \log(d) \sum_{t=1}^T \sigma_t^2 } \ .
$$
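The step $\sup_{f\in\F}\Psi(f)\le\log(d)$ for the entropic regularizer follows since $\sum_i f_i\log f_i\le 0$ on the simplex, with the value $\log d$ attained at a vertex. A small numeric check (the dimension $d$ below is an illustrative choice):

```python
# Check that Psi(f) = sum_i f_i * log(d * f_i) is at most log(d) on the
# d-simplex (convention 0 * log 0 = 0), with equality at a vertex.
import math, random

def psi(f):
    d = len(f)
    return sum(fi * math.log(d * fi) for fi in f if fi > 0)

random.seed(2)
d = 5
vertex = [1.0] + [0.0] * (d - 1)
assert abs(psi(vertex) - math.log(d)) < 1e-12   # equality at a vertex

for _ in range(2000):                           # random points of the simplex
    g = [random.expovariate(1.0) for _ in range(d)]
    s = sum(g)
    f = [gi / s for gi in g]
    assert psi(f) <= math.log(d) + 1e-9
```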
The next proposition gives a bound whenever the adversary is constrained to choose its decision from a small ball around the previous decision.
\begin{proposition}[Slowly-Changing Decisions]
\label{prop:smalljumps}
Consider the online linear optimization setting where the adversary's move at any time is close to the move during the previous time step. Let $\F = \{f : \Psi(f) \le R^2\}$, where $\Psi : \F \mapsto \mathbb{R}_+$ is a $\lambda$-strongly convex function on $\F$, and $\X = \{x : \|x\|_* \le B\}$. Let $f(x) = \inner{f,x}$ for any $f \in \F$ and $x \in \X$. Consider the sequence of constraints $\{C_t\}_{t=1}^T$ given by
$$
C_t(x_1,\ldots,x_{t-1},x) = \left\{\begin{array}{ll}
1 & \textrm{if } \|x - x_{t-1} \|_* \le \delta \\
0 & \textrm{otherwise}
\end{array}
\right.
$$
Then,
\begin{align*}
\Val_T(C_{1:T}) & \le \inf_{\alpha > 0}\left\{\frac{2 R^2}{\alpha} + \frac{\alpha \delta^2 T}{\lambda} \right\} \le 2 R \delta \sqrt{2 T} \ .
\end{align*}
\end{proposition}
In particular, consider the case of a Euclidean-norm restriction on the moves. Let $\Psi : \F \mapsto \mathbb{R}_+$ be given by $\Psi(f) = \frac{1}{2}\|f\|^2$, $\F = \{f : \|f\|_2 \le 1\}$, and $\X = \{x : \|x\|_2 \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies
$
\left\|x_t - x_{t-1} \right\|_2 \le \delta ~.
$
In this case we can conclude that
\begin{align*}
\Val_T(C_{1:T}) & \le 2 \delta \sqrt{2 T} \ .
\end{align*}
For the case of decision-making on the simplex, we obtain the following result. Let $\Psi(f) = \sum_{i=1}^d f_i \log(d f_i)$ be defined over the $d$-simplex $\F$, and let $\X = \{x : \|x\|_\infty \le 1\}$. Consider the constrained game where the move $x_t$ played by the adversary at time $t$ satisfies
$\left\|x_{t} - x_{t-1}\right\|_\infty \le \delta$. In this case, note that $\Psi(f) \le \log(d)$ for any $f \in \F$, and so we can conclude that
\begin{align*}
\Val_T(C_{1:T}) & \le 2 \delta \sqrt{2 T \log(d) } \ .
\end{align*}
\section{The I.I.D. Adversary}
\label{sec:iid}
In this section, we consider an adversary who is restricted to draw the moves from a fixed distribution $p$ throughout the game. That is, the time-invariant restrictions are $\mathcal P_t(x_{1:t-1}) = \{p\}$. The reader will notice that the definition of the value in \eqref{eq:def_val_game} forces the restrictions $\mathcal P_{1:T}$ to be known to the player before the game. This, in turn, means that the distribution $p$ is known to the learner. In some sense, the problem becomes uninteresting, as there is no learning to be done. This is an artifact of the minimax formulation in the \emph{extensive form}. To circumvent the problem, we are forced to define a new value of the game in terms of \emph{strategies}. Such a formulation allows us to ``hide'' the distribution from the player, since we can talk about ``mappings'' instead of making the information explicit. We then show two novel results. First, the regret-minimization game with i.i.d. data when the player does \emph{not} observe the distribution $p$ is equivalent (in terms of learnability) to the classical batch learning problem. Second, for supervised learning, the knowledge of $p$ does not help the learner minimize regret for some distributions.
Let us first define some relevant quantities. Similarly to \eqref{eq:def_val_game_strategic}, let $\s = \{s_t\}_{t=1}^T$ be a $T$-round strategy for the player, with $s_t:(\F\times\X)^{t-1}\to \mathcal Q$. The game where the player does not observe the i.i.d. distribution of the adversary will be called a {\em distribution-blind} i.i.d. game, and its minimax value will be called the {\em distribution-blind minimax value}:
$$\Val_T^{\text{blind}} ~\stackrel{\scriptscriptstyle\triangle}{=}~ \inf_{\s}\sup_{p} \left[ \En_{x_1,\ldots, x_{T} \sim p} \En_{f_1\sim s_1}\ldots \En_{f_T \sim s_T(x_{1:T-1},f_{1:T-1})} \left\{ \sum_{t=1}^T f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right\} \right]$$
Furthermore, define the analogue of the value \eqref{eq:slt} for a general (not necessarily supervised) setting:
$$\Val_T^{\text{batch}} ~\stackrel{\scriptscriptstyle\triangle}{=}~ \inf_{\hat{f}_T}\sup_{p\in\mathcal P} \left\{ \En\hat{f}_T - \inf_{f\in \F} \En f \right\}$$
For a distribution $p$, the value \eqref{eq:def_val_game} of the online i.i.d. game, as defined through the restrictions $\mathcal P_t=\{p\}$ for all $t$, will be written as $\Val_T(\{p\})$. For the non-blind game, we say that the problem is online learnable in the i.i.d. setting if $$\frac{1}{T}\sup_{p} \Val_T(\{p\}) \to 0 \ .$$
We now proceed to study relationships between online and batch learnability.
\subsection{Equivalence of Online Learnability and Batch Learnability}
\begin{theorem}
\label{thm:equivalence_iid}
For a given function class $\F$, online learnability in the distribution-blind game is equivalent to batch learnability. That is,
$$\frac{1}{T}\Val_T^{\text{blind}}\to 0 ~~~~~~\mbox{if and only if}~~~~~~~ \Val_T^{\text{batch}} \to 0$$
\end{theorem}
\begin{proof}[\textbf{Proof of Theorem~\ref{thm:equivalence_iid}}]
With a proof along the lines of Proposition~\ref{prop:lower_bound_oblivious} we establish that
\begin{align*}
&\frac{1}{T} \Val_T^{\text{blind}} = \inf_{\s} \sup_{p} \left\{ \frac{1}{T} \sum_{t=1}^T \En_{x_1,\ldots, x_{t} \sim p} \En_{f_t \sim s_t(x_{1:t-1},f_{1:t-1})} [f_t(x_t)] - \Es{x_1,\ldots, x_{T} \sim p}{\inf_{f\in \F}\frac{1}{T} \sum_{t=1}^T f(x_t)} \right\}\\
& \ge \inf_{\s} \sup_{p} \left\{ \Es{x_1,\ldots, x_{T} \sim p}{ \frac{1}{T} \sum_{t=1}^T \Es{f_t \sim s_t(x_1,\ldots,x_{t-1})}{ \Es{x \sim p}{f_t(x)}}} - \inf_{f\in \F} \Es{x_1,\ldots, x_T \sim p}{\frac{1}{T} \sum_{t=1}^T f(x_t)} \right\}
\end{align*}
where in the second line we passed to strategies that do not depend on their own randomizations. The argument for this can be found in the proof of Proposition~\ref{prop:lower_bound_oblivious}. The last expression can be conveniently written as
\begin{align*}
\frac{1}{T} \Val_T^{\text{blind}} \geq \inf_{\s} \sup_{p} \left\{ \Es{x_1,\ldots, x_{T} \sim p}{ \En_{r \sim \mathrm{Unif}[0\ldots T-1]} \Es{f \sim s_{r+1}(x_1,\ldots,x_{r})}{ \Es{x \sim p}{f(x)}} - \inf_{f\in \F} \Es{x \sim p}{f(x)}} \right\}
\end{align*}
The above implies that if $\Val_T^{\text{blind}} = o(T)$ (i.e. the problem is learnable against an i.i.d. adversary in the online sense without knowing the distribution $p$), then the problem is learnable in the classical batch sense. Specifically, there exists a strategy $\s=\{s_t\}_{t=1}^T$ with $s_t:\X^{t-1}\mapsto \mathcal Q$ such that
$$\sup_{p}\left\{\Es{x_1,\ldots, x_{T} \sim p}{ \En_{r \sim \mathrm{Unif}[0\ldots T-1]} \Es{f \sim s_{r+1}(x_1,\ldots,x_{r})}{ \Es{x \sim p}{f(x)}}} - \inf_{f\in \F} \Es{x \sim p}{f(x)} \right\}= o(1).$$
This strategy can be used to define a consistent (randomized) algorithm $\hat{f}_T:\X^T\mapsto \F$ as follows. Given an i.i.d. sample $x_1,\ldots,x_T$, draw a random index $r$ from $1,\ldots, T$, and define $\hat{f}_T$ as a random draw from distribution $s_{r}(x_1,\ldots,x_{r-1})$. We have proven that $\Val_T^{\text{batch}} \to 0$
as $T$ increases, which is the requirement of Eq.~\eqref{eq:slt} in the general non-supervised case. Note that the rate of this convergence is upper bounded by the rate of decay of $\frac{1}{T} \Val_T^{\text{blind}}$ to zero.
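The online-to-batch conversion just described can be sketched in code. The function class, the data distribution, and the follow-the-leader strategy below are illustrative stand-ins for the abstract strategy $\s$, chosen so that the sketch is self-contained; they are not part of the proof.

```python
import random

# Illustrative online-to-batch conversion: given an online strategy s_t
# mapping a prefix x_1,...,x_{t-1} to an element of F, the batch estimator
# draws a uniformly random index r in {1,...,T} and outputs
# s_r(x_1,...,x_{r-1}).  Here F is a toy class of indicator losses
# f_a(x) = 1{x >= a} and s_t is follow-the-leader (both are assumptions
# made for this sketch).

F = [lambda x, a=a: float(x >= a) for a in range(4)]

def follow_the_leader(prefix):
    # empirical loss minimizer over the observed prefix (ties -> first)
    return min(F, key=lambda f: sum(f(x) for x in prefix))

def online_to_batch(sample, rng):
    r = rng.randrange(1, len(sample) + 1)   # r uniform on {1,...,T}
    return follow_the_leader(sample[:r - 1])

rng = random.Random(0)
data = [rng.choice([0, 1, 2]) for _ in range(200)]
best = min(sum(f(x) for x in data) / len(data) for f in F)

# average excess risk of the randomized estimator over many draws of r
trials = 300
excess = 0.0
for _ in range(trials):
    f_hat = online_to_batch(data, rng)
    excess += sum(f_hat(x) for x in data) / len(data) - best
excess /= trials
```

The randomness over the index $r$ mirrors the uniform average over rounds in the displayed bound; the estimator's expected excess risk is controlled by that average.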
To show the reverse direction, suppose the problem is learnable in the classical batch sense, that is, $\Val_T^{\text{batch}} \to 0$. Hence, there exists a randomized strategy $\s = (s_1,s_2,\ldots)$ such that $s_t : \X^{t-1} \mapsto \mathcal Q$ and
$$
\sup_{p}\left\{
\Es{x_1,\ldots,x_{t-1} \sim p}{ \En_{f \sim s_t(x_1,\ldots,x_{t-1})} \Es{x \sim p}{ f(x) }} - \inf_{f \in \F} \Es{x \sim p}{f(x)}
\right\} = o(1)
$$
as $t \rightarrow \infty$. Hence we have that
\begin{align*}
&\sup_{p}\left\{
\Es{x_1,\ldots,x_T \sim p}{ \frac{1}{T} \sum_{t=1}^T \En_{f \sim s_t(x_1,\ldots,x_{t-1})} \Es{x \sim p}{ f(x) } - \inf_{f \in \F} \Es{x \sim p}{f(x)} }
\right\} \\
&\leq \frac{1}{T} \sum_{t=1}^T \sup_{p}\left\{
\Es{x_1,\ldots,x_T \sim p}{ \En_{f \sim s_t(x_1,\ldots,x_{t-1})} \Es{x \sim p}{ f(x) } - \inf_{f \in \F} \Es{x \sim p}{f(x)} }
\right\} = o(1)
\end{align*}
because a Ces\`aro average of a convergent sequence also converges to the same limit.
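The Ces\`aro step can be checked numerically; the following self-contained sketch uses the arbitrary example $a_t = 1/t$, whose running averages are $H_T/T$ with $H_T$ the harmonic sum.

```python
# If a_t -> 0, then the Cesaro averages (1/T) * sum_{t<=T} a_t -> 0 as well.
# Here a_t = 1/t, so the average at horizon T equals H_T / T.
def cesaro_average(T):
    return sum(1.0 / t for t in range(1, T + 1)) / T

vals = [cesaro_average(T) for T in (10, 100, 1000, 10000)]
```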
As shown in \cite{ShaShaSreSri10}, the problem is learnable in the batch sense if and only if
$$ \Es{x_1,\ldots,x_T \sim p}{ \inf_{f \in \F} \frac{1}{T}\sum_{t=1}^T f(x_t)} \rightarrow \inf_{f \in \F} \Es{x \sim p}{f(x)}$$
and this rate is uniform for all distributions.
Hence we have that
$$
\sup_{p}\left\{
\Es{x_1,\ldots,x_T \sim p}{ \frac{1}{T} \sum_{t=1}^T \En_{f \sim s_t(x_1,\ldots,x_{t-1})} \Es{x \sim p}{ f(x) } - \inf_{f \in \F} \frac{1}{T}\sum_{t=1}^T f(x_t)}
\right\} = o(1)
$$
We conclude that if the problem is learnable in the i.i.d. batch sense then
\begin{align}
\label{eq:blind_upper_bd}
o(T) & = \sup_{p}\Es{x_1,\ldots,x_T \sim p}{\sum_{t=1}^T \En_{f \sim s_t(x_1,\ldots,x_{t-1})} \Es{x \sim p}{ f(x) } - \inf_{f \in \F} \sum_{t=1}^T f(x_t)} \notag\\
& = \sup_{p}\Es{x_1,\ldots,x_T \sim p}{\sum_{t=1}^T \En_{f_t \sim s_t(x_1,\ldots,x_{t-1})} f_t(x_t) - \inf_{f \in \F} \sum_{t=1}^T f(x_t)} \notag\\
& = \sup_{p} \En_{x_1,\ldots,x_T \sim p} \En_{f_1 \sim s_1}\ldots\En_{f_T \sim s_T(x_{1:T-1})} \left\{ \sum_{t=1}^T f_t(x_t) - \inf_{f \in \F} \sum_{t=1}^T f(x_t) \right\} \notag\\
& \ge \Val_T^{\text{blind}}
\end{align}
Thus we have shown that if a problem is learnable in the batch sense then it is learnable versus all i.i.d. adversaries in the online sense, provided that the distribution is not known to the player.
\end{proof}
At this point, the reader might wonder if the game formulation studied in the rest of the paper, with the restrictions known to the player, is any easier than batch and distribution-blind learning. In the next section, we show that this is not the case for supervised learning.
\subsection{Distribution-Blind vs Non-Blind Supervised Learning}
\label{sec:blind_non_blind_sup}
In the supervised game, at time $t$, the player picks a function $f_t \in [-1,1]^\X$, the adversary provides input-target pair $(x_t,y_t)$, and the player suffers loss $|f_t(x_t) - y_t|$. The value of the online supervised learning game for general restrictions $\mathcal P_{1:T}$ is defined as
\begin{align*}
\Val^{\text{sup}}_T(\mathcal P_{1:T}) ~\stackrel{\scriptscriptstyle\triangle}{=}~ \inf_{q_1\in \mathcal Q}\sup_{p_1\in\mathcal P_1} ~\Eunderone{f_1,(x_1,y_1)} \cdots \inf_{q_T\in \mathcal Q}\sup_{p_T\in\mathcal P_T} ~\Eunderone{f_T,(x_T,y_T)} \left[ \sum_{t=1}^T |f_t(x_t)-y_t| - \inf_{f\in \F}\sum_{t=1}^T |f(x_t)-y_t| \right]
\end{align*}
where $(x_t,y_t)$ has distribution $p_t$. As before, the value of an i.i.d. supervised game with a distribution $p_{X\times Y}$ will be written as $\Val^{\text{sup}}_T(p_{X\times Y})$.
Similarly to Eq.~\eqref{eq:slt}, define the batch supervised value for the \emph{absolute} loss as
\begin{align}
\Val^{\text{batch, sup}}_T ~\stackrel{\scriptscriptstyle\triangle}{=}~
\inf_{\hat{f}}\sup_{p_{X\times Y}} \left\{ \En |y-\hat{f}(x)| - \inf_{f\in\F} \En |y-f(x)| \right\} ,
\end{align}
and the distribution-blind supervised value as
$$\Val_T^{\text{blind, sup}} ~\stackrel{\scriptscriptstyle\triangle}{=}~ \inf_{\s}\sup_{p} \left[ \En_{z_1,\ldots, z_T \sim p} \En_{f_1\sim s_1}\ldots \En_{f_T \sim s_T(z_{1:T-1},f_{1:T-1})} \left\{ \sum_{t=1}^T |f_t(x_t)-y_t| - \inf_{f\in \F}\sum_{t=1}^T |f(x_t)-y_t| \right\} \right]$$
where we use the shorthand $z_t = (x_t,y_t)$ for each $t$.
\begin{lemma}
\label{lem:equivalence_sup_iid}
In the supervised case,
\begin{align*}
\frac{1}{4}T\Val^{\text{batch, sup}}_T \leq \sup_{p_X} \Rad_T(\F,p_X) \leq \sup_{p_X} \Val^{\text{sup}}_T(\{p_X\times U_Y\}) \leq \sup_{p_{X\times Y}} \Val^{\text{sup}}_T(\{p_{X\times Y}\}) \leq \Val^{\text{blind, sup}}_T
\end{align*}
where $\Rad_T(\F,p_X)$ is the classical Rademacher complexity defined in \eqref{eq:classical_rad}, and $U_Y$ is the Rademacher distribution.
\end{lemma}
Theorem~\ref{thm:equivalence_iid}, specialized to the supervised setting, says that $\frac{1}{T}\Val^{\text{blind, sup}}_T \to 0$ if and only if $\Val^{\text{batch, sup}}_T \to 0$. Since $\sup_{p_{X\times Y}} \frac{1}{T}\Val^{\text{sup}}_T(\{p_{X\times Y}\})$ is sandwiched between these two values, we conclude the following.
\begin{corollary}
Either the supervised problem is learnable in the batch sense (and, by Theorem~\ref{thm:equivalence_iid}, in the distribution-blind online sense), in which case $\sup_{p_{X\times Y}} \Val^{\text{sup}}_T(\{p_{X\times Y}\}) = o(T)$. Or, the problem is not learnable in the batch (and the distribution-blind sense), in which case it is not learnable for all distributions in the online sense: $\sup_{p_{X\times Y}} \Val^{\text{sup}}_T(\{p_{X\times Y}\})$ does not grow sublinearly.
\end{corollary}
\begin{proof}[\textbf{Proof of Lemma~\ref{lem:equivalence_sup_iid}}]
The first statement follows from the well-known classical symmetrization argument:
\begin{align*}
\Val^{\text{batch, sup}}_T &= \inf_{\hat{f}}\sup_{p_{X\times Y}} \left\{ \En |y-\hat{f}(x)| - \inf_{f\in\F} \En |y-f(x)| \right\} \\
&\leq \sup_{p_{X\times Y}} \left\{ \En |y-\tilde{f}(x)| - \inf_{f\in\F} \En |y-f(x)| \right\} \\
&\leq 2 \sup_{p_{X\times Y}} \En \sup_{f\in\F}\left| \frac{1}{T} \sum_{t=1}^T|y_t-f(x_t)| - \En |y-f(x)| \right|\\
&\leq 4 \sup_{p_X} \En_{x_{1:T}}\En_{\epsilon_{1:T}} \sup_{f\in\F} \frac{1}{T} \sum_{t=1}^T \epsilon_t f(x_t)
\end{align*}
where the first inequality is obtained by choosing the empirical minimizer $\tilde{f}$ as an estimator.
The second inequality of the Lemma follows from the lower bound proved in Section~\ref{sec:lowerbounds}. Lemma~\ref{lem:first_lower} implies that the game with i.i.d. restrictions $\mathcal P_t = \{ p_X\times U_Y \}$ for all $t$ satisfies
$$\Val^{\text{sup}}_T(\{p_X\times U_Y\}) \geq \Rad_T(\F,p_X)$$
for any $p_X$.
Now, clearly, the distribution-blind supervised game is harder than the game with the knowledge of the distribution. That is,
$$ \sup_{p_{X\times Y}} \Val^{\text{sup}}_T(\{p_{X\times Y}\}) \leq \Val^{\text{blind, sup}}_T $$
\end{proof}
\section{Introduction}
\label{sec:intro}
We continue the line of work on the minimax analysis of online learning, initiated in \cite{AbeAgaBarRak09,RakSriTew10a,RakSriTew10b}. In these papers, an array of tools has been developed to study the minimax value of diverse sequential problems under the \emph{worst-case} assumption on Nature. In \cite{RakSriTew10a}, many analogues of the classical notions from statistical learning theory have been developed, and these have been extended in \cite{RakSriTew10b} for performance measures well beyond the additive regret. The process of \emph{sequential symmetrization} emerged as a key technique for dealing with complicated nested minimax expressions. In the worst-case model, the developed tools appear to give a unified treatment to such sequential problems as regret minimization, calibration of forecasters, Blackwell's approachability, Phi-regret, and more.
Learning theory has been so far focused predominantly on the i.i.d. and the worst-case learning scenarios. Much less is known about learnability in-between these two extremes. In the present paper, we make progress towards filling this gap. Instead of examining various performance measures, as in \cite{RakSriTew10b}, we focus on external regret and make assumptions on the behavior of Nature. By restricting Nature to play i.i.d. sequences, the results boil down to the classical notions of statistical learning in the supervised learning scenario. By not placing any restrictions on Nature, we recover the worst-case results of \cite{RakSriTew10a}. Between these two endpoints of the spectrum, particular assumptions on the adversary yield interesting bounds on the minimax value of the associated problem.
By inertia, we continue to use the name ``online learning'' to describe the sequential interaction between the player (learner) and Nature (adversary). We realize that the name can be misleading for a number of reasons. First, the techniques developed in \cite{RakSriTew10a,RakSriTew10b} apply far beyond the problems that would traditionally be called ``learning''. Second, in this paper we deal with non-worst-case adversaries, while the word ``online'' often (though, not always) refers to worst-case. Still, we decided to keep the misnomer ``online learning'' whenever the problem is sequential.
Adapting the game-theoretic language, we will think of the learner and the adversary as the two players of a zero-sum repeated game. The adversary's moves will be associated with ``data'', while the moves of the learner will be associated with a function or a parameter. This point of view is not new: game-theoretic minimax analysis has been at the heart of statistical decision theory for more than half a century (see \cite{Berger85}). In fact, there is a well-developed theory of minimax estimation when restrictions are put on either the choice of the adversary or the allowed estimators by the player. We are not aware of a similar theory for sequential problems with non-i.i.d. data.
In particular, minimax analysis is central to nonparametric estimation, where one aims to prove optimal rates of convergence of the proposed estimator. Lower bounds are proved by exhibiting a ``bad enough'' distribution of the data that can be chosen by the adversary. The form of the minimax value is often
\begin{align}
\label{eq:nonparametric}
\inf_{\hat{f}}\sup_{f\in\F} \En\|\hat{f}-f\|^2
\end{align}
where the infimum is over all estimators and the supremum is over all functions $f$ from some class $\F$. It is often assumed that $Y_t=f(X_t)+\epsilon_t$, with $\epsilon_t$ being zero-mean noise. An estimator can be thought of as a strategy, mapping the data $\{(X_t,Y_t)\}_{t=1}^T$ to the space of functions on $\X$. This description is, of course, only a rough sketch that does not capture the vast array of problems considered in nonparametric estimation.
In statistical learning theory, the data are i.i.d. from an unknown distribution $P_{X\times Y}$ and the associated minimax problem in the supervised setting with square loss is
\begin{align}
\label{eq:slt}
\Val^{\text{batch, sup}}_T = \inf_{\hat{f}}\sup_{P_{X\times Y}} \left\{ \En (Y-\hat{f}(X))^2 - \inf_{f\in\F} \En (Y-f(X))^2 \right\}
\end{align}
where the infimum is over all estimators (or learning algorithms) and the supremum is over all distributions. Unlike nonparametric regression which makes an assumption on the ``regression function'' $f\in\F$, statistical learning theory often aims at distribution-free results. Because of this, the goal is more modest: to predict as well as the best function in $\F$ rather than recover the true model. In particular, \eqref{eq:slt} sidesteps the issue of approximation error (model misspecification).
What is known about the asymptotic behavior of \eqref{eq:slt}? The well-developed statistical learning theory tells us that \eqref{eq:slt} converges to zero if and only if the combinatorial dimensions of $\F$ (that is, the VC dimension for binary-valued, or scale-sensitive for real-valued functions) are finite. The convergence is intimately related to the uniform Glivenko-Cantelli property. If indeed the value in \eqref{eq:slt} converges to zero, an algorithm that achieves this is Empirical Risk Minimization. For unsupervised learning problems, however, ERM does not necessarily drive the quantity $\En \hat{f}(X) -\inf_{f\in\F}\En f(X)$ to zero.
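When the value in \eqref{eq:slt} does converge, Empirical Risk Minimization achieves it. A minimal sketch of ERM follows; the threshold class, the grid of candidate thresholds, the synthetic distribution, and the $10\%$ label noise are all illustrative assumptions, not part of the theory above.

```python
import random

# ERM for a toy supervised problem: F is a finite grid of threshold
# classifiers f_a(x) = 1{x >= a}, data are i.i.d. with true threshold 0.5
# and 10% label noise, and ERM returns the empirical square-loss minimizer
# (for binary labels, square loss coincides with 0/1 loss).
def erm(sample, thresholds):
    def emp_risk(a):
        return sum((y - float(x >= a)) ** 2 for x, y in sample) / len(sample)
    return min(thresholds, key=emp_risk)

rng = random.Random(0)
true_a = 0.5
sample = []
for _ in range(400):
    x = rng.random()
    clean = float(x >= true_a)
    y = clean if rng.random() > 0.1 else 1.0 - clean   # 10% label noise
    sample.append((x, y))

a_hat = erm(sample, [i / 20 for i in range(21)])
```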
The formulation \eqref{eq:slt} no longer makes sense if the data generating process is non-stationary. Consider the opposite from i.i.d. end of the spectrum: the data are chosen in a worst-case manner. First, consider an \emph{oblivious} adversary who fixes the individual sequence $x_1,\ldots,x_T$ ahead of the game and reveals it one-by-one. A frequently studied notion of performance is {\em regret}, and the minimax value can be written as
\begin{align}
\label{eq:minimax_regret_for_oblivious}
\Val_T^{\text{oblivious}} = \inf_{\{\hat{f}_t\}_{t=1}^T}\sup_{(x_1,\ldots,x_T)} \En_{f_1,\ldots,f_T}\left[ \frac{1}{T}\sum_{t=1}^T f_t(x_t) - \inf_{f\in \F}\frac{1}{T}\sum_{t=1}^T f(x_t) \right]
\end{align}
where the randomized strategy for round $t$ is $\hat{f}_t:\X^{t-1}\mapsto \mathcal Q$, with $\mathcal Q$ being the set of all distributions on $\F$. That is, the player furnishes his best randomized strategy for each round, and the adversary picks the worst sequence.
A non-oblivious (\emph{adaptive}) adversary is, of course, more interesting. The protocol for the online interaction is the following: on round $t$ the player chooses a distribution $q_t$ on $\F$, the adversary chooses the next move $x_t\in\X$, the player draws $f_t$ from $q_t$, and the game proceeds to the next round. All the moves are observed by both players. Instead of writing the value in terms of strategies, we can write it in an extended form as
\begin{align}
\label{eq:minimax_regret_for_nonoblivious}
\Val_T = \inf_{q_1\in \mathcal Q}\sup_{x_1\in\X} \Eunderone{f_1\sim q_1} \cdots \inf_{q_T\in \mathcal Q}\sup_{x_T\in\X} \Eunderone{f_T\sim q_T} \left[ \frac{1}{T}\sum_{t=1}^T f_t(x_t) - \inf_{f\in \F}\frac{1}{T}\sum_{t=1}^T f(x_t) \right]
\end{align}
This is precisely the quantity considered in \cite{RakSriTew10a}. The minimax value for notions other than regret has been studied in \cite{RakSriTew10b}. In this paper, we are interested in restricting the ways in which the sequences $(x_1,\ldots,x_T)$ are produced. These restrictions can be imposed through a smaller set of mixed strategies that is available to the adversary at each round, or as a non-stochastic constraint at each round. The formulation we propose captures both types of assumptions.
The main contribution of this paper is the development of tools for the analysis of online scenarios where the adversary's moves are restricted in various ways. Further, we consider a number of interesting scenarios (such as smoothed learning) which can be captured by our framework. The present paper only scratches the surface of what is possible with sequential minimax analysis. Many questions are to be answered: For instance, one can ask whether a certain adversary is more powerful than another adversary by studying the value of the associated game.
The paper is organized as follows. In Section~\ref{sec:stochastic} we define the value of the game and appeal to minimax duality. Distribution-dependent sequential Rademacher complexity is defined in Section~\ref{sec:rademacher} and can be seen to generalize the classical notion as well as the worst-case notion from \cite{RakSriTew10a}. This section contains the main symmetrization result which relies on a careful consideration of original and tangent sequences. Section~\ref{sec:structural} is devoted to analysis of the distribution-dependent Rademacher complexity. In Section~\ref{sec:constraints} we consider non-stochastic constraints on the behavior of the adversary. From these results, variation-type results are seamlessly deduced. Section~\ref{sec:iid} is devoted to the i.i.d. adversary. We show equivalence between batch and online learnability. Hybrid adversarial-stochastic supervised learning is considered in Section~\ref{sec:supervised}. We show that it is the way in which the $x$ variable is chosen that governs the complexity of the problem, irrespective of the way the $y$ variable is picked. In Section~\ref{sec:smoothed} we introduce the notion of \emph{smoothed analysis} in the online learning scenario and show that a simple problem with infinite Littlestone's dimension becomes learnable once a small amount of noise is added to adversary's moves. Throughout the paper, we use the notation introduced in \cite{RakSriTew10a,RakSriTew10b}, and, in particular, we extensively use the ``tree'' notation.
\section*{Acknowledgements}
A. Rakhlin gratefully acknowledges the support of NSF under grant CAREER DMS-0954737 and Dean's Research Fund.
\section{Symmetrization and Random Averages}
\label{sec:rademacher}
Theorem~\ref{thm:minimax} is a useful representation of the value of the game. As the next step, we upper bound it with an expression which is easier to study. Such an expression is obtained by introducing Rademacher random variables. This process can be termed {\em sequential symmetrization} and has been exploited in \cite{AbeAgaBarRak09,RakSriTew10a,RakSriTew10b}. The restrictions $\mathcal P_t$, however, make sequential symmetrization a bit more involved than in the previous papers. The main difficulty arises from the fact that the set $\mathcal P_t(x_{1:t-1})$ depends on the sequence $x_{1:t-1}$, and symmetrization (that is, replacement of $x_s$ with $x'_s$) has to be done with care as it affects this dependence. Roughly speaking, in the process of symmetrization, a tangent sequence $x'_1,x'_2,\ldots$ is introduced such that $x_t$ and $x'_t$ are independent and identically distributed given ``the past''. However, ``the past'' is itself an interleaving choice of the original sequence and the tangent sequence.
Define the ``selector function'' $\chi:\X \times \X \times \{\pm 1\}\mapsto \X$ by
$$
\chi(x, x', \epsilon) = \left\{ \begin{array}{ll}
x' & \textrm{if } \epsilon = 1\\
x & \textrm{if } \epsilon = -1
\end{array}
\right.
$$
When $x_t$ and $x'_t$ are understood from the context, we will use the shorthand $\chi_t(\epsilon):= \chi(x_t, x'_t, \epsilon)$. In other words, $\chi_t$ selects between $x_t$ and $x'_t$ depending on the sign of $\epsilon$.
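In code, the selector is a one-liner; this trivial sketch is included only for concreteness.

```python
def chi(x, x_prime, eps):
    # returns the tangent point x' when eps = +1 and the original x when eps = -1
    return x_prime if eps == 1 else x
```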
Throughout the paper, we deal with binary trees, which arise from symmetrization \cite{RakSriTew10a}. Given some set ${\mathcal Z}$, a \emph{${\mathcal Z}$-valued tree of depth $T$} is a sequence $(\z_1,\ldots,\z_T)$ of $T$ mappings $\z_i : \{\pm 1\}^{i-1} \mapsto \Z$. The $T$-tuple $\epsilon =(\epsilon_1,\ldots,\epsilon_T) \in \{\pm 1\}^T$ defines a path. For brevity, we write $\z_t(\epsilon)$ instead of $\z_t(\epsilon_{1:t-1})$.
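A ${\mathcal Z}$-valued tree can be represented concretely as $T$ mappings from sign prefixes to values. The representation below, dictionaries keyed by tuples of signs, is one convenient choice made for this sketch; note that $\z_t(\epsilon)$ reads only the first $t-1$ signs of the path.

```python
# A depth-T Z-valued tree as T mappings z_t : {+-1}^{t-1} -> Z, stored as
# dicts keyed by sign prefixes; z_t(eps) only depends on eps_{1:t-1}.
def make_tree(T, label):
    tree = []
    for t in range(1, T + 1):
        level = {}
        for k in range(2 ** (t - 1)):
            prefix = tuple(1 if (k >> i) & 1 else -1 for i in range(t - 1))
            level[prefix] = label(t, prefix)
        tree.append(level)
    return tree

def z(tree, t, eps):
    # evaluate z_t along the path eps, using only the first t-1 signs
    return tree[t - 1][tuple(eps[: t - 1])]

tree = make_tree(3, lambda t, p: (t, p))
path = (1, -1, 1)
values = [z(tree, t, path) for t in (1, 2, 3)]
```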
Given a joint distribution $\ensuremath{\mathbf{p}}$, consider the ``$\left(\X \times \X\right)^{T-1} \mapsto \mathcal{P}(\X \times \X) $''- valued probability tree $\boldsymbol{\rho}=(\boldsymbol{\rho}_1,\ldots,\boldsymbol{\rho}_T)$ defined by
\begin{align}
\label{eq:prob_valued_tree}
\boldsymbol{\rho}_t(\epsilon_{1:t-1}) \left((x_{1},x'_{1}),\ldots,(x_{T-1},x'_{T-1})\right)
= (p_t(\cdot | \chi_1(\epsilon_1),\ldots,\chi_{t-1}(\epsilon_{t-1})), p_t(\cdot | \chi_1(\epsilon_1),\ldots,\chi_{t-1}(\epsilon_{t-1})) ).
\end{align}
In other words, the values of the mappings $\boldsymbol{\rho}_t(\epsilon)$ are products of conditional distributions, where conditioning is done with respect to a sequence made from $x_s$ and $x'_s$ depending on the sign of $\epsilon_s$. We note that the difficulty in intermixing the $x$ and $x'$ sequences does not arise in i.i.d. or worst-case symmetrization. However, in-between these extremes the notational complexity seems to be unavoidable if we are to employ symmetrization and obtain a version of Rademacher complexity.
As an example, consider the ``left-most'' path $\epsilon = -{\boldsymbol 1}$ in a binary tree of depth $T$, where ${\boldsymbol 1} = (1,\ldots,1)$ is a $T$-dimensional vector of ones. Then all the selectors $\chi(x_t, x_t', \epsilon_t)$ in the definition \eqref{eq:prob_valued_tree} select the sequence $x_1,\ldots,x_T$. The probability tree $\boldsymbol{\rho}$ on the ``left-most'' path is, therefore, defined by the conditional distributions $p_t(\cdot| x_{1:t-1})$. Analogously, on the path $\epsilon={\boldsymbol 1}$, the conditional distributions are $p_t(\cdot| x'_{1:t-1})$.
Slightly abusing the notation, we will write $\boldsymbol{\rho}_t(\epsilon) \left((x_{1},x'_{1}),\ldots,(x_{t-1},x'_{t-1})\right)$ for the probability tree since $\boldsymbol{\rho}_t$ clearly depends only on the prefix up to time $t-1$. Throughout the paper, it will be understood that the tree $\boldsymbol{\rho}$ is obtained from $\ensuremath{\mathbf{p}}$ as described above. Since all the conditional distributions of $\ensuremath{\mathbf{p}}$ satisfy the restrictions, so do the corresponding distributions of the probability tree $\boldsymbol{\rho}$. By saying that $\boldsymbol{\rho}$ satisfies restrictions we then mean that $\ensuremath{\mathbf{p}}\in \PDA$.
Sampling of a pair of $\X$-valued trees from $\boldsymbol{\rho}$, written as $(\x,\x') \sim \boldsymbol{\rho}$, is defined as the following recursive process: for any $\epsilon\in\{\pm1\}^T$,
\begin{align}
\label{eq:sampling_procedure}
(\x_1(\epsilon),\x'_1(\epsilon)) &\sim \boldsymbol{\rho}_1(\epsilon) \notag \\
(\x_t(\epsilon),\x'_t(\epsilon)) &\sim \boldsymbol{\rho}_t(\epsilon)((\x_1(\epsilon), \x'_1(\epsilon)),\ldots,(\x_{t-1}(\epsilon),\x'_{t-1}(\epsilon)))~~~~~\mbox{ for }~~ 2\leq t\leq T
\end{align}
To gain a better understanding of the sampling process, consider the first few levels of the tree. The roots $\x_1,\x'_1$ of the trees $\x,\x'$ are sampled from $p_1$, the conditional distribution for $t=1$ given by $\ensuremath{\mathbf{p}}$. Next, say, $\epsilon_1=+1$. Then the ``right'' children of $\x_1$ and $\x'_1$ are sampled via $\x_2(+1),\x'_2(+1) \sim p_2(\cdot|\x'_1)$ since $\chi_1(+1)$ selects $\x'_1$. On the other hand, the ``left'' children $\x_2(-1),\x'_2(-1)$ are both distributed according to $p_2(\cdot|\x_1)$. Now, suppose $\epsilon_1=+1$ and $\epsilon_2 = -1$. Then, $\x_3(+1,-1), \x'_3(+1,-1)$ are both sampled from $p_3(\cdot|\x'_1, \x_2(+1))$.
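The recursive sampling process \eqref{eq:sampling_procedure} can be made concrete with a toy Markov conditional. Everything below (the $\pm 1$ random-walk conditional, the dictionary representation of trees) is an illustrative assumption; the point is how the selector interleaves the original and tangent sequences to form the conditioning history.

```python
import random

def chi(x, xp, eps):
    # selector between the original and tangent points
    return xp if eps == 1 else x

def sample_pair_trees(T, rng):
    # x[t-1][prefix] and xp[t-1][prefix] store x_t(eps), x'_t(eps) for the
    # sign prefix eps_{1:t-1}.  The toy conditional p_t(.|h) adds +-1 noise
    # to the last point of the interleaved history h.
    x = [{} for _ in range(T)]
    xp = [{} for _ in range(T)]

    def rec(t, prefix, history):
        cond = history[-1] if history else 0
        a = cond + rng.choice([-1, 1])   # x_t(eps)  ~ p_t(. | history)
        b = cond + rng.choice([-1, 1])   # x'_t(eps) ~ p_t(. | history), independent
        x[t - 1][prefix] = a
        xp[t - 1][prefix] = b
        if t < T:
            for e in (-1, 1):
                # the next conditioning history uses chi_t(e): x' if e = +1, x if e = -1
                rec(t + 1, prefix + (e,), history + [chi(a, b, e)])

    rec(1, (), [])
    return x, xp

rng = random.Random(1)
x, xp = sample_pair_trees(3, rng)
```

Along the left-most path all selectors pick the original sequence, and along the right-most path the tangent sequence, matching the discussion above.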
The proof of Theorem~\ref{thm:valrad} reveals why such intricate conditional structure arises, and Section~\ref{sec:structural} shows that this structure greatly simplifies for i.i.d. and worst-case situations. Nevertheless, the process described above allows us to define a unified notion of Rademacher complexity for the spectrum of assumptions between the two extremes.
\begin{definition}
\label{def:rademacher}
The \emph{distribution-dependent sequential Rademacher complexity} of a function class $\F \subseteq \mathbb{R}^\X$ is defined as
$$
\Rad_T(\mathcal{F}, \ensuremath{\mathbf{p}}) ~\stackrel{\scriptscriptstyle\triangle}{=}~ \En_{(\x,\x')\sim \boldsymbol{\rho}}\Es{\epsilon}{ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(\x_t(\epsilon))}
$$
where $\epsilon=(\epsilon_1,\ldots, \epsilon_T)$ is a sequence of i.i.d. Rademacher random variables and $\boldsymbol{\rho}$ is the probability tree associated with $\ensuremath{\mathbf{p}}$.
\end{definition}
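For intuition, the quantity in Definition~\ref{def:rademacher} can be estimated by Monte Carlo in a toy case. The linear class and the i.i.d. coin-flip adversary below are illustrative choices; for i.i.d. restrictions the conditional structure of $\boldsymbol{\rho}$ collapses (as discussed later for the two extremes), so each $\x_t(\epsilon)$ is simply an independent draw from $p$.

```python
import random

# Monte Carlo sketch of the distribution-dependent sequential Rademacher
# complexity for the toy class F = {x -> f*x : |f| <= 1} and an i.i.d.
# adversary p uniform on {-1, +1}.  Here sup_f sum_t eps_t f x_t reduces
# to |sum_t eps_t x_t|.
def rad_estimate(T, n_samples, rng):
    total = 0.0
    for _ in range(n_samples):
        xs = [rng.choice([-1, 1]) for _ in range(T)]
        eps = [rng.choice([-1, 1]) for _ in range(T)]
        total += abs(sum(e * x for e, x in zip(eps, xs)))
    return total / n_samples

rng = random.Random(0)
est = rad_estimate(100, 2000, rng)
```

The estimate grows like $\sqrt{T}$, consistent with the classical Rademacher complexity of a one-dimensional linear class.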
We now prove an upper bound on the value $\Val_T(\mathcal P_{1:T})$ of the game in terms of this distribution-dependent sequential Rademacher complexity. This provides an extension of the analogous result in \cite{RakSriTew10a} to adversaries more benign than worst-case.
\begin{theorem}\label{thm:valrad}
The minimax value is bounded as
\begin{align}
\label{eq:valrad_upper}
\Val_T(\mathcal P_{1:T}) \le 2 \sup_{\ensuremath{\mathbf{p}}\in\PDA}\Rad_T(\F, \ensuremath{\mathbf{p}}).
\end{align}
A more general statement also holds:
\begin{align*}
\Val_T(\mathcal P_{1:T}) &\leq \sup_{\ensuremath{\mathbf{p}}\in\PDA} \E{ \sup_{f \in \F}\left\{ \sum_{t=1}^T f(x_t') - f(x_t) \right\} } \\
& \leq 2\sup_{\ensuremath{\mathbf{p}}\in\PDA} \En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t (f (\x_t(\epsilon))-M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon)) \right]
\end{align*}
for any measurable function $M_t$ with the property $M_t(\ensuremath{\mathbf{p}},f,\x,\x',\epsilon) = M_t(\ensuremath{\mathbf{p}},f,\x',\x,-\epsilon)$. In particular, \eqref{eq:valrad_upper} is obtained by choosing $M_t=0$.
\end{theorem}
The following corollary provides a natural ``centered'' version of the distribution-dependent Rademacher complexity. That is, the complexity can be measured by relative shifts in the adversarial moves.
\begin{corollary}
\label{cor:centered_at_conditional}
For the game with restrictions $\mathcal P_{1:T}$,
\begin{align*}
\Val_T(\mathcal P_{1:T}) &\leq 2\sup_{\ensuremath{\mathbf{p}}\in\PDA} \En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t \Big( f (\x_t(\epsilon))- \En_{t-1} f(\x_t(\epsilon)) \Big) \right]
\end{align*}
where $\En_{t-1}$ denotes the conditional expectation of $\x_t(\epsilon)$ given the history up to time $t-1$.
\end{corollary}
\begin{example} Suppose $\F$ is a unit ball in a Banach space and $f(x) = \inner{f,x}$. Then
\begin{align*}
\Val_T(\mathcal P_{1:T}) &\leq 2\sup_{\ensuremath{\mathbf{p}}\in\PDA} \En_{(\x,\x') \sim \boldsymbol{\rho}} \En_{\epsilon} \left\| \sum_{t=1}^T \epsilon_t \Big( \x_t(\epsilon)- \En_{t-1} \x_t(\epsilon) \Big) \right\|
\end{align*}
Suppose the adversary plays a simple random walk (e.g., $p_t(x|x_1,\ldots,x_{t-1}) = p_t(x|x_{t-1})$ is uniform on a unit sphere). For simplicity, suppose this is the only strategy allowed by the set $\PDA$. Then $\x_t(\epsilon)- \En_{t-1} \x_t(\epsilon)$ are independent increments when conditioned on the history. Further, the increments do not depend on $\epsilon_t$. Thus,
\begin{align*}
\Val_T(\mathcal P_{1:T}) &\leq 2 \En\left\| \sum_{t=1}^T Y_t \right\|
\end{align*}
where $\{Y_t\}$ is the corresponding random walk.
\end{example}
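The bound in the example can be checked numerically. The sketch below simulates the random walk in $\mathbb{R}^2$ (so the unit sphere is the unit circle); the dimension and the simulation parameters are illustrative choices.

```python
import random, math

# Numerical sketch of the example: for a random-walk adversary with
# uniformly-directed unit steps in R^2, E || sum_t Y_t || grows like
# sqrt(T), so the bound 2 E || sum_t Y_t || on the value is O(sqrt(T))
# rather than the worst-case linear rate.
def walk_norm(T, n_samples, rng):
    total = 0.0
    for _ in range(n_samples):
        sx = sy = 0.0
        for _ in range(T):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            sx += math.cos(theta)
            sy += math.sin(theta)
        total += math.hypot(sx, sy)
    return total / n_samples

rng = random.Random(0)
avg64 = walk_norm(64, 400, rng)
avg256 = walk_norm(256, 400, rng)
```

Quadrupling $T$ roughly doubles the expected norm, as the $\sqrt{T}$ scaling predicts.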
\section{Smoothed Analysis}
\label{sec:smoothed}
The development of \emph{smoothed analysis} over the past decade is arguably one of the hallmarks in the study of complexity of algorithms. In contrast to the overly optimistic {\em average complexity} and the overly pessimistic {\em worst-case complexity}, smoothed complexity can be seen as a more realistic measure of an algorithm's performance. In their groundbreaking work, Spielman and Teng \cite{SpiTen04smoothed} showed that the smoothed running-time complexity of the simplex method is polynomial. This result explains the good performance of the method in practice despite its exponential-time worst-case complexity.
In this section, we consider the effect of smoothing on {\em learnability}. Analogously to complexity analysis of algorithms, learning theory has been concerned with i.i.d. (that is, \emph{average case}) learnability and with online (that is, \emph{worst-case}) learnability. In the former, the learner is presented with a batch of i.i.d. data, while in the latter the learner is presented with a sequence adaptively chosen by the malicious opponent. It can be argued that neither the average nor the worst-case setting reasonably models real-world situations. A natural step is to consider smoothed learning, defined as a random perturbation of the worst-case sequence.
It is well-known that there is a gap between the i.i.d. and the worst-case scenarios. In fact, we do not need to go far for an example: A simple class of threshold functions on a unit interval is learnable in the i.i.d. supervised learning scenario, yet difficult in the online worst-case model \cite{Lit88, BenPalSha09}. When it comes to i.i.d. supervised learning, the relevant complexity of a class is captured by the Vapnik-Chervonenkis dimension, and the analogous notion for worst-case learning is the Littlestone's dimension \cite{Lit88, BenPalSha09, RakSriTew10a}. For the simple example of threshold functions, the VC dimension is one, yet the Littlestone's dimension is infinite. The proof of the latter fact, however, reveals that the infinite number of mistakes on the part of the player is due to the infinite resolution of the carefully chosen adversarial sequence. We can argue that this infinite precision is an unreasonable assumption on the power of a real-world opponent. It is then natural to ask: What happens if the adversary adaptively chooses the worst-case sequence, yet the moves are smoothed by exogenous noise? The scope of what is learnable is greatly enlarged if smoothed analysis makes problems with infinite Littlestone's dimension tractable.
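The infinite-precision argument can be made explicit. In the standard bisection construction, sketched below for illustration, the adversary queries the midpoint of the current interval, answers the opposite of the learner's prediction, and halves the interval so that the answers stay consistent with some threshold; any deterministic learner therefore errs on every round.

```python
# Sketch of the classical argument that thresholds on [0,1] have infinite
# Littlestone dimension: an adversary with infinite precision binary-searches
# the threshold and forces a mistake on every round against any deterministic
# learner.  Labels follow f_a(x) = 1{x >= a}.
def force_mistakes(learner, rounds):
    lo, hi = 0.0, 1.0
    mistakes = 0
    history = []
    for _ in range(rounds):
        x = (lo + hi) / 2.0
        pred = learner(history, x)   # learner's label for the query x
        y = 1 - pred                 # adversary answers the opposite
        mistakes += pred != y
        history.append((x, y))
        # keep the answers consistent with some threshold a in (lo, hi):
        if y == 1:
            hi = x   # label 1 at x means a <= x
        else:
            lo = x   # label 0 at x means a > x
    return mistakes
```

Note that the construction relies on ever-finer midpoints; bounded-precision (smoothed) moves destroy exactly this resource.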
Our approach to the problem is conceptually different from the smoothed analysis of \cite{SpiTen04smoothed} and the subsequent papers. We do not take a particular learning algorithm and study its smoothed complexity. Instead, we ask whether there \emph{exists} an algorithm which guarantees vanishing regret for the smoothed sequences, no matter how they are chosen. Using the techniques developed in this paper, learnability is established by directly studying the value of the associated game.
Smoothed analysis of learning has been considered by \cite{kalai2010learning}, yet in a different setting. The authors study learning DNFs and decision trees over a binary hypercube, where random examples are drawn i.i.d. from a product distribution which is itself chosen randomly from a small set. The latter random choice adds an element of smoothing to the PAC setting. In contrast, in the present paper we consider adversarially-chosen sequences which are then corrupted by random noise. Further, since ``probability of error'' does not make sense for non-stationary data sources, we consider \emph{regret} as the learnability objective.
Formally, let $\sigma$ be a fixed ``smoothing'' distribution defined on some space $S$. The perturbed value of the adversarial choice $x$ is defined by a measurable mapping $\omega:\X\times S\to\X$, known to the learner. For example, an additive noise model corresponds to $\omega(x,s)=x+s$. More generally, we can consider a Markov transition kernel from a space of moves of the adversary to some information space, and the smoothed moves of the adversary can be thought of as outputs of a noisy communication channel.
A generic \emph{smoothed online learning model} is given by following $T$-round interaction between the learner and the adversary:
\begin{itemize}
\addtolength{\itemsep}{-0.6\baselineskip}
\item[]\hspace{-9mm} On round $t = 1,\ldots, T$,
\item the learner chooses a mixed strategy $q_t$ (distribution on $\F$)
\item the adversary picks $x_t \in \X$
\item random perturbation $s_t \sim \sigma$ is drawn
\item the learner draws $f_t\sim q_t$ and pays $f_t(\omega(x_t,s_t))$
\item[]\hspace{-9mm} End
\end{itemize}
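This protocol can be rendered as a small simulation loop. The sketch below is purely illustrative (all names are ours, not part of the formal setup); the learner's draw $f_t\sim q_t$ is folded into the \texttt{learner} callable:

```python
import random

def smoothed_online_game(learner, adversary, omega, sigma_draw, loss, T, rng):
    """One run of the generic smoothed online protocol (a sketch).

    learner(history)   -> a hypothesis f_t (the draw f_t ~ q_t is internal)
    adversary(history) -> a point x_t
    omega(x, s)        -> the perturbed point omega(x_t, s_t)
    sigma_draw(rng)    -> a noise sample s_t ~ sigma
    loss(f, z)         -> the payoff f_t(z) on the perturbed point z
    """
    history, total = [], 0.0
    for t in range(T):
        f_t = learner(history)                # learner samples f_t ~ q_t
        x_t = adversary(history)              # adversary picks x_t
        s_t = sigma_draw(rng)                 # exogenous smoothing noise
        z_t = omega(x_t, s_t)                 # perturbed move
        total += loss(f_t, z_t)
        history.append((f_t, x_t, s_t, z_t))  # both players observe everything
    return total, history
```

Both players see the full history, including the realized perturbations, matching the remark after the value definition below.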
The value of the smoothed online learning game is
\begin{align*}
\Val_T &~\stackrel{\scriptscriptstyle\triangle}{=}~ \inf_{q_1}\sup_{x_1} \Eunder{s_1\sim \sigma}{f_1\sim q_1} \inf_{q_2}\sup_{x_2} \Eunder{s_2\sim\sigma}{f_2\sim q_2}\cdots \inf_{q_T}\sup_{x_T} \Eunder{s_T\sim\sigma}{f_T\sim q_T} \left[ \sum_{t=1}^T f_t(\omega(x_t,s_t)) - \inf_{f\in \F}\sum_{t=1}^T f(\omega(x_t,s_t)) \right]
\end{align*}
where the infima are over $q_t\in\mathcal Q$ and the suprema are over $x_t\in\X$. A non-trivial upper bound on the above value guarantees existence of a strategy for the player that enjoys a regret bound against the smoothed adversary. We note that both the adversary and the player observe each other's moves and the random perturbations before proceeding to the next round.
We now observe that this setting is simply a special case of a restriction on the adversary, as studied in this paper. The adversarial choice $x_t$ serves as the parameter of the distribution from which the random element $\omega(x_t,s_t)$ is drawn. The following theorem follows immediately from Theorem~\ref{thm:minimax}.
\begin{theorem}\label{thm:main_smoothed}
The value of the smoothed online learning game is bounded above as
\begin{align*}
\Val_T &\le 2\sup_{x_1\in\X} \Eunderone{s_1 \sim \sigma} \En_{\epsilon_1} \ldots \sup_{x_T\in\X} \Eunderone{s_T\sim \sigma} \En_{\epsilon_T}\left[\sup_{f\in\F} \sum_{t = 1}^T \epsilon_t f(\omega(x_t,s_t)) \right].
\end{align*}
\end{theorem}
We now demonstrate how Theorem~\ref{thm:main_smoothed} can be used to show learnability for a smoothed learning scenario. What we find is somewhat surprising: for a problem which is not learnable in the online worst-case scenario, an exponentially small noise added to the moves of the adversary yields a learnable problem. This shows, at least in the given example, that the worst-case analysis and Littlestone's dimension are brittle notions which might be too restrictive in the real world, where some noise is unavoidable. It is comforting that small additive noise makes the problem learnable!
\subsection{Binary Classification with Half-Spaces}
Consider the supervised game with threshold functions on a unit interval.
The moves of the adversary are pairs $x=(z,y)$ with $z\in[0,1]$ and $y\in\{0,1\}$, and the binary-valued function class $\F$ is defined by
\begin{align}
\label{eq:def_one_dim}
\F = \left\{ f_\theta (z,y)= \left|y-\ind{z<\theta}\right|: \theta\in[0,1]\right\}.
\end{align}
The class $\F$ has infinite Littlestone's dimension and is not learnable in the worst-case online framework. Any non-trivial upper bound on the value of the game, therefore, has to depend on particular noise assumptions. For the uniform noise $\sigma = \mathrm{Unif}[-\gamma/2,\gamma/2]$ for some $\gamma > 0$, for instance, intuition suggests that the noise induces a margin. In this case we should expect a $1/\gamma$ complexity parameter appearing in the bounds. Formally, let $$\omega((z,y),\sigma)=(z+\sigma,y).$$
That is, $\sigma$ uniformly perturbs the $z$-variable of the adversarial choice $x=(z,y)$, but does not perturb the $y$-variable. The following proposition holds for this setting.
\begin{proposition}
\label{prop:one_dim_smoothed}
For the worst-case adversary whose moves are corrupted by the uniform noise $\mathrm{Unif}[-\gamma/2,\gamma/2]$, the value is bounded by
\begin{align*}
\Val_T & \le 2 + \sqrt{2 T \left(4 \log T+ \log(1/\gamma) \right)}
\end{align*}
\end{proposition}
The idea for the proof is the following. By discretizing the interval into bins of size well below the noise level, we can guarantee with high probability that no two smoothed choices $z_t+s_t$ of the adversary fall into the same bin. If this is the case, then the supremum of Theorem~\ref{thm:main_smoothed} can be taken over a discretized set of thresholds. For each fixed threshold $f$, however, $\epsilon_t f(\omega(x_t,s_t))$ forms a martingale difference sequence, yielding the desired bound. We can easily generalize this idea to linear thresholds in $d$ dimensions: Cover the sphere corresponding to the choices $z_t$ and $f_t$ by balls of a small enough radius and argue that with high probability no two smoothed choices of the adversary fall into the same bin. By a simple volume argument, we claim that the supremum in Theorem~\ref{thm:main_smoothed} can be replaced by the supremum over the discretization at a small additional cost (the number of bins that change sign as $f$ ranges over one bin). The result then follows from martingale concentration.
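The discretization step can be checked concretely: on any finite set of (smoothed) points, thresholds realize only finitely many behaviors, and any grid finer than the gaps between the points recovers all of them. A minimal sketch (the helper name is ours):

```python
def threshold_behaviors(points):
    """All distinct vectors (1{z_t < theta})_t as theta varies.

    Candidate thetas at the points themselves, plus one value above the
    maximum, realize every achievable pattern (illustrative helper)."""
    candidates = sorted(set(points)) + [max(points) + 1.0]
    return {tuple(int(z < th) for z in points) for th in candidates}
```

For $T$ distinct points this yields $T+1$ behaviors, which is why the supremum over $\theta\in[0,1]$ can be replaced by a maximum over a fine enough grid once no two smoothed points share a bin.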
Below, we prove the result for the one-dimensional case, which already exhibits the key ingredients.
\begin{proof}[\textbf{Proof of Proposition~\ref{prop:one_dim_smoothed}}]
For any $f_\theta\in\F$, define
$$ M^\theta_t = \epsilon_t f_\theta(\omega(x_t,s_t)) = \epsilon_t \left|y_t-\ind{z_t+s_t < \theta}\right| .$$
Note that $\{M^\theta_t\}_t$ is a zero-mean martingale difference sequence, that is, $\En[M^\theta_t | z_{1:t},y_{1:t},s_{1:t}] = 0$.
We conclude that for any fixed $\theta\in[0,1]$,
$$ P\left(\sum_{t=1}^T M^\theta_t \geq \epsilon\right) \leq \exp\left\{-\frac{\epsilon^2}{2T}\right\} $$
by Azuma-Hoeffding's inequality. Let $\F' = \{f_{\theta_1},\ldots,f_{\theta_N}\}\subset \F$ be obtained by discretizing the interval $[0,1]$ into $N=T^a$ bins $[\theta_i,\theta_{i+1})$ of length $T^{-a}$, for some $a\geq 3$. Then
$$ P\left(\max_{f_{\theta}\in\F'}\sum_{t=1}^T M^\theta_t \geq \epsilon\right) \leq N\exp\left\{-\frac{\epsilon^2}{2T}\right\} .$$
Observe that the maximum over the discretization coincides with the supremum over the class $\F$ if no two elements $z_t+s_t$ and $z_{t'}+s_{t'}$ fall into the same interval $[\theta_i,\theta_{i+1})$. Indeed, in this case all the possible values of $\F$ on the set $\{z_1+s_1,\ldots,z_T+s_T\}$ are obtained by choosing the discrete thresholds in $\F'$. Since the number $N=T^a$ of intervals far exceeds the number $T$ of points, the probability of no collision is close to $1$.
Let us calculate the probability that for no distinct $t,t'\in[T]$ do we have $z_t+s_t$ and $z_{t'}+s_{t'}$ in the same bin. We can deal with the boundary behavior by ensuring that $\F$ is in fact a set of thresholds that is $\gamma/2$-away from $0$ or $1$, but we will omit this discussion for the sake of clarity. The probability that no two elements $z_t+s_t$ and $z_{t'}+s_{t'}$ fall into the same bin depends on the behavior of the adversary in choosing $z_t$'s. Keeping in mind that the distribution of all $s_t$'s is uniform on $[-\gamma/2,\gamma/2]$, we see that the probability of a collision is maximized when $z_t$ is chosen to be constant throughout the game.
If the $z_t$'s are constant throughout the game, we have $T$ balls falling uniformly into $\gamma T^a > T$ bins. The probability that no two elements $z_t+s_t$ and $z_t+s_{t'}$ fall into the same bin is
$$ P\left(\text{no two balls fall into same bin}\right) = \frac{\gamma T^a (\gamma T^a-1)\cdots (\gamma T^a-T+1)}{(\gamma T^a)^T} \geq \left(\frac{\gamma T^a-T}{\gamma T^a}\right)^T = \left(1-\frac{1}{\gamma T^{a-1}}\right)^{T} .$$
By Bernoulli's inequality, $(1-x)^T \geq 1-Tx$, and hence
$$ P\left(\text{no two balls fall into same bin}\right) \geq 1-\frac{T}{\gamma T^{a-1}} = 1-\frac{1}{\gamma T^{a-2}} . $$
Now,
\begin{align*}
P\left(\sup_{f\in\F} \sum_{t = 1}^T \epsilon_t f(\omega(x_t,s_t)) \geq \epsilon \right) &\leq P\left(\sup_{f\in\F} \sum_{t = 1}^T \epsilon_t f(\omega(x_t,s_t)) \geq \epsilon ~\wedge~ \text{none of } (z_t+s_t) \text{'s fall into same bin}\right) \\
&+ P\left(\text{some of } (z_t+s_t) \text{'s fall into same bin} \right) \\
&\leq P\left(\max_{f_{\theta}\in\F'}\sum_{t=1}^T M^\theta_t \geq \epsilon ~\wedge~ \text{none of } (z_t+s_t) \text{'s fall into same bin} \right) + \frac{1}{\gamma T^{a-2}} \\
&\leq P\left(\max_{f_{\theta}\in\F'}\sum_{t=1}^T M^\theta_t \geq \epsilon \right) + \frac{1}{\gamma T^{a-2}} \\
&\leq T^a \exp\left\{-\frac{\epsilon^2}{2T}\right\} + \frac{1}{\gamma T^{a-2}}
\end{align*}
Using the above and the fact that for any $f \in \F$, $|\sum_{t = 1}^T \epsilon_t f(\omega(x_t,s_t)) | \le T$, we conclude that
\begin{align*}
\Val_T & \le \E{\sup_{f\in\F} \sum_{t = 1}^T \epsilon_t f(\omega(x_t,s_t))}\\
& \le \epsilon + T^{a+1} \exp\left\{-\frac{\epsilon^2}{2T}\right\} + \frac{T^{3-a}}{\gamma }
\end{align*}
Setting $\epsilon = \sqrt{2 (a+1) T \log T}$, we conclude that
\begin{align*}
\Val_T & \le 1 + \sqrt{2 (a+1) T \log T} + \frac{T^{3-a}}{\gamma }
\end{align*}
Now pick $a = 3 + \frac{\log(1/\gamma)}{\log T}$ (this choice is valid since $\gamma T^{a-1} = T^2 \geq 1$, as required for the bound above). Hence we see that
\begin{align*}
\Val_T & \le 2 + \sqrt{2 \left(4 + \frac{\log(1/\gamma)}{\log T}\right) T \log T}\\
& = 2 + \sqrt{2 T \left(4 \log T+ \log(1/\gamma) \right)}
\end{align*}
\end{proof}
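The bins-and-balls estimate at the heart of the proof is easy to check numerically. The sketch below (helper names are ours) compares the exact no-collision probability for $T$ uniform balls in $B$ bins against the lower bound $1-T^2/B$, which for $B=\gamma T^a$ is exactly the $1-1/(\gamma T^{a-2})$ term used above:

```python
import random

def no_collision_prob_exact(T, B):
    """Exact P(no two of T uniform balls share one of B bins):
    B(B-1)...(B-T+1) / B^T."""
    p = 1.0
    for i in range(T):
        p *= (B - i) / B
    return p

def no_collision_prob_mc(T, B, trials, rng):
    """Monte-Carlo estimate of the same probability (illustrative)."""
    hits = 0
    for _ in range(trials):
        bins = [rng.randrange(B) for _ in range(T)]
        hits += len(set(bins)) == len(bins)
    return hits / trials
```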
While the infinite Littlestone dimension of threshold functions seemed to indicate that half-spaces are not online learnable, the analysis shows that very slight perturbations (in fact, even exponentially small in $T$) are enough to make half-spaces online learnable; in practice, half-spaces can thus be used for classification in the smoothed online setting.
We note that our learnability analysis was based on an upper bound on the value of the game. The inefficient algorithm can be recovered from the minimax formulation directly. However, for the particular problem of smoothed learning with half-spaces, the exponential weights algorithm on the discretization of the interval will also do the job. An alternative analysis can directly focus on this algorithm and use the same bins-and-balls proof to show that the loss of any expert is likely to be close to the loss of any non-discretized threshold.
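A minimal sketch of this exponential-weights alternative (parameter names are ours; the loss $|y-\ind{z<\theta}|$ is the one defining $\F$ in \eqref{eq:def_one_dim}):

```python
import math

def exp_weights_thresholds(data, N, eta):
    """Exponential weights over the N+1 discretized thresholds i/N
    (a sketch of the alternative algorithm, not the minimax strategy).
    The loss of expert theta on (z, y) is |y - 1{z < theta}|.

    data: list of smoothed examples (z_t, y_t) in [0,1] x {0,1}.
    Returns (expected loss of the algorithm, loss of the best expert).
    """
    thetas = [i / N for i in range(N + 1)]
    logw = [0.0] * len(thetas)          # log-weights, for numerical stability
    alg_loss = 0.0
    expert_loss = [0.0] * len(thetas)
    for z, y in data:
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        total = sum(w)
        losses = [abs(y - (1 if z < th else 0)) for th in thetas]
        alg_loss += sum(wi * li for wi, li in zip(w, losses)) / total
        for i, li in enumerate(losses):
            expert_loss[i] += li
            logw[i] -= eta * li         # multiplicative update
    return alg_loss, min(expert_loss)
```

Under the no-collision event, the best discretized expert tracks the best threshold in $\F$, so the standard expert-regret bound carries over.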
\section{Value of the Game}
\label{sec:stochastic}
Consider sets $\F$ and $\X$, where $\F$ is a closed subset of a complete separable metric space. Let $\mathcal Q$ be the set of probability distributions on $\F$ and assume that $\mathcal Q$ is weakly compact. We consider randomized learners who predict a distribution $q_t\in\mathcal Q$ on every round.
Let $\mathcal P$ be the set of probability distributions on $\X$. We would like to capture the fact that sequences $(x_1,\ldots,x_T)$ cannot be arbitrary. This is achieved by defining restrictions on the adversary, that is, subsets of ``allowed'' distributions for each round. These restrictions limit the scope of available mixed strategies for the adversary.
\begin{definition}
A {\em restriction} $\mathcal P_{1:T}$ on the adversary is a sequence $\mathcal P_1,\ldots,\mathcal P_T$ of mappings $\mathcal P_t: \X^{t-1}\mapsto 2^\mathcal P$ such that $\mathcal P_t(x_{1:t-1})$ is a \emph{convex} subset of $\mathcal P$ for any $x_{1:t-1}\in\X^{t-1}$.
\end{definition}
Note that the restrictions depend on the past moves of the adversary, but not on those of the player. We will write $\mathcal P_t$ instead of $\mathcal P_t(x_{1:t-1})$ when $x_{1:t-1}$ is clearly defined.
Using the notion of restrictions, we can give names to several types of adversaries that we will study in this paper.
\begin{itemize}
\item A \emph{worst-case adversary} is defined by vacuous restrictions $\mathcal P_t(x_{1:t-1}) = \mathcal P$. That is, any mixed strategy is available to the adversary, including any deterministic point distributions.
\item A \emph{constrained adversary} is defined by $\mathcal P_t(x_{1:t-1})$ being the set of all distributions supported on the set $\{x \in \X : C_t(x_1,\ldots,x_{t-1},x) = 1 \}$ for some deterministic binary-valued constraint $C_t$. The deterministic constraint can, for instance, ensure that the length of the path determined by the moves $x_1,\ldots,x_t$ stays below the allowed budget.
\item A \emph{smoothed adversary} picks the worst-case sequence which gets corrupted by an i.i.d. noise. Equivalently, we can view this as restrictions on the adversary who chooses the ``center'' (or a parameter) of the noise distribution. For a given family $\G$ of noise distributions (e.g. zero-mean Gaussian noise), the restrictions are obtained by all possible shifts $\mathcal P_t = \{g(x-c_t): g\in\G, c_t\in\X \}$.
\item A \emph{hybrid adversary} in the supervised learning game picks the worst-case label $y_t$, but is forced to draw the $x_t$-variable from a fixed distribution \cite{LazMun09}.
\item Finally, an \emph{i.i.d. adversary} is defined by a time-invariant restriction $\mathcal P_t(x_{1:t-1}) = \{p\}$ for every $t$ and some $p\in\mathcal P$.
\end{itemize}
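To make the taxonomy concrete, the constrained adversary's budget example can be sketched as follows (helper names and the total-variation notion of path length are ours):

```python
def path_budget_constraint(budget):
    """Binary constraint C from the constrained-adversary example: the
    path length of the moves so far must stay within the budget
    (a sketch; moves are reals, path length is total variation)."""
    def C(moves):
        length = sum(abs(b - a) for a, b in zip(moves, moves[1:]))
        return 1 if length <= budget else 0
    return C

def allowed_support(C, history, candidates):
    """Support {x : C(history + [x]) == 1} of the restriction P_t;
    P_t(history) is then all distributions over this set, which is
    indeed a convex subset of the set of all distributions."""
    return [x for x in candidates if C(history + [x]) == 1]
```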
\noindent For the given restrictions $\mathcal P_{1:T}$, we define the value of the game as
\begin{align}
\label{eq:def_val_game}
\Val_T(\mathcal P_{1:T}) ~\stackrel{\scriptscriptstyle\triangle}{=}~ \inf_{q_1\in \mathcal Q}\sup_{p_1\in\mathcal P_1} ~\Eunderone{f_1,x_1} ~~ \inf_{q_2\in \mathcal Q}\sup_{p_2\in\mathcal P_2} ~\Eunderone{f_2,x_2}\cdots \inf_{q_T\in \mathcal Q}\sup_{p_T\in\mathcal P_T} ~\Eunderone{f_T,x_T} \left[ \sum_{t=1}^T f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right]
\end{align}
where $f_t$ has distribution $q_t$ and $x_t$ has distribution $p_t$. As in \cite{RakSriTew10a}, the adversary is {\em adaptive}, that is, chooses $p_t$ based on the history of moves $f_{1:t-1}$ and $x_{1:t-1}$.
At this point, the only difference from the setup of \cite{RakSriTew10a} is in the restrictions $\mathcal P_t$ on the adversary. Because these restrictions might not allow point distributions, the suprema over $p_t$'s in \eqref{eq:def_val_game} cannot be equivalently written as the suprema over $x_t$'s.
The value of the game can also be written in terms of strategies $\ensuremath{\boldsymbol \pi} = \{\pi_t\}_{t=1}^T$ and $\ensuremath{\boldsymbol \tau} = \{\tau_t\}_{t=1}^T$ for the player and the adversary, respectively, where $\pi_t:(\F\times\X\times\mathcal P)^{t-1}\to \mathcal Q$ and $\tau_t:(\F\times\X\times\mathcal Q)^{t-1} \to \mathcal P$. Crucially, the strategies also depend on the mappings $\mathcal P_{1:T}$. The value of the game can equivalently be written in the strategic form as
\begin{align}
\label{eq:def_val_game_strategic}
\Val_T(\mathcal P_{1:T}) = \inf_{\ensuremath{\boldsymbol \pi}} \sup_{\ensuremath{\boldsymbol \tau}} \Eunder{x_1\sim \tau_1}{f_1\sim \pi_1}\ldots \Eunder{x_T\sim \tau_T}{f_T \sim \pi_T} \left[ \sum_{t=1}^T f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right]
\end{align}
A word about the notation. In \cite{RakSriTew10a}, the value of the game is written as $\Val_T(\F)$, signifying that the main object of study is $\F$. In \cite{RakSriTew10b}, it is written as $\Val_T(\ell,\Phi_T)$ since the focus is on the complexity of the set of transformations $\Phi_T$ and the payoff mapping $\ell$. In the present paper, the main focus is indeed on the restrictions on the adversary, justifying our choice $\Val_T(\mathcal P_{1:T})$ for the notation.
The first step is to apply the minimax theorem. To this end, we verify the necessary conditions. Our assumption that $\F$ is a closed subset of a complete separable metric space implies that $\mathcal Q$ is tight and Prokhorov's theorem states that compactness of $\mathcal Q$ under weak topology is equivalent to tightness \cite{VanDerVaartWe96}. Compactness under weak topology allows us to proceed as in \cite{RakSriTew10a}. Additionally, we require that the restriction sets are compact and convex.
\begin{theorem}\label{thm:minimax}
Let $\F$ and $\X$ be the sets of moves for the two players, satisfying the necessary conditions for the minimax theorem to hold. Let $\mathcal P_{1:T}$ be the restrictions, and assume that for any $x_{1:t-1}$, $\mathcal P_t(x_{1:t-1})$ satisfies the necessary conditions for the minimax theorem to hold. Then
\begin{align}
\Val_T(\mathcal P_{1:T})&=\sup_{p_1\in\mathcal P_1} \En_{x_1\sim p_1}\ldots \sup_{p_T\in\mathcal P_T} \En_{x_T\sim p_T} \left[
\sum_{t=1}^T \inf_{f_t \in \F}
\Es{x_t \sim p_t}{f_t(x_t)} - \inf_{f\in\F} \sum_{t=1}^T f(x_t)
\right]. \label{eq:value_equality}
\end{align}
\end{theorem}
The nested sequence of suprema and expected values in Theorem~\ref{thm:minimax} can be re-written succinctly as
\begin{align}
\label{eq:succinct_value_equality}
\Val_T(\mathcal P_{1:T})
&=\sup_{\ensuremath{\mathbf{p}}\in\PDA} \En_{x_1\sim p_1} \En_{x_2\sim p_2(\cdot|x_1)} \ldots \En_{x_T\sim p_T(\cdot|x_{1:T-1})} \left[
\sum_{t=1}^T \inf_{f_t \in \F}
\Es{x_t \sim p_t}{f_t(x_t)} - \inf_{f\in\F} \sum_{t=1}^T f(x_t)
\right] \\
&=\sup_{\ensuremath{\mathbf{p}}\in\PDA} \En \left[
\sum_{t=1}^T \inf_{f_t \in \F}
\Es{x_t \sim p_t}{f_t(x_t)} - \inf_{f\in\F} \sum_{t=1}^T f(x_t)
\right] \notag
\end{align}
where the supremum is over all joint distributions $\ensuremath{\mathbf{p}}$ over sequences, such that $\ensuremath{\mathbf{p}}$ satisfies the restrictions as described below. Given a joint distribution $\ensuremath{\mathbf{p}}$ on sequences $(x_1,\ldots,x_T)\in \X^T$, we denote the associated conditional distributions by $p_t(\cdot|x_{1:t-1})$. We can think of the choice $\ensuremath{\mathbf{p}}$ as a sequence of oblivious strategies $\{p_t:\X^{t-1}\mapsto\mathcal P \}_{t=1}^T$, mapping the prefix $x_{1:t-1}$ to a conditional distribution $p_t(\cdot|x_{1:t-1})\in\mathcal P_t(x_{1:t-1})$. We will indeed call $\ensuremath{\mathbf{p}}$ a ``joint distribution'' or an ``oblivious strategy'' interchangeably. We say that a joint distribution $\ensuremath{\mathbf{p}}$ \emph{satisfies restrictions} if for any $t$ and any $x_{1:t-1}\in \X^{t-1}$, $p_t (\cdot | x_{1:t-1}) \in \mathcal P_t(x_{1:t-1})$. The set of all joint distributions satisfying the restrictions is denoted by $\PDA$. We note that Theorem~\ref{thm:minimax} cannot be deduced immediately from the analogous result in \cite{RakSriTew10a}, as it is not clear how the restrictions on the adversary per each round come into play after applying the minimax theorem. Nevertheless, it is comforting that the restrictions directly translate into the set $\PDA$ of oblivious strategies satisfying the restrictions.
Before continuing with our goal of upper-bounding the value of the game, let us answer the following question: Is there an oblivious minimax strategy for the adversary? Even though Theorem~\ref{thm:minimax} shows equality to some quantity with a supremum over oblivious strategies $\ensuremath{\mathbf{p}}$, it is not immediate that the answer to our question is affirmative, and a proof is required. To this end, for any oblivious strategy $\ensuremath{\mathbf{p}}$, define the regret the player would get playing optimally against $\ensuremath{\mathbf{p}}$:
\begin{align}
\label{eq:def_val_for_p}
\Val_T^\ensuremath{\mathbf{p}} ~\stackrel{\scriptscriptstyle\triangle}{=}~ \inf_{f_1\in \F} \En_{x_1\sim p_1} \inf_{f_2\in \F} \En_{x_2\sim p_2(\cdot|x_1)} \cdots \inf_{f_T\in \F} \En_{x_T\sim p_T(\cdot|x_{1:T-1})} \left[ \sum_{t=1}^T f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) \right].
\end{align}
The next proposition shows that there is an oblivious minimax strategy for the adversary and a minimax optimal strategy for the player that does not depend on its own randomizations. The latter statement for worst-case learning is folklore, yet we have not seen a proof of it in the literature.
\begin{proposition}
\label{prop:lower_bound_oblivious}
For any oblivious strategy $\ensuremath{\mathbf{p}}$,
\begin{align}
\label{eq:lower_oblivious}
\Val_T(\mathcal P_{1:T}) ~\geq~ \Val_T^\ensuremath{\mathbf{p}} &~=~ \inf_{\ensuremath{\boldsymbol \pi}}\E{ \sum_{t=1}^T \En_{f_t\sim \pi_t(\cdot|x_{1:t-1}) } \En_{x_t\sim p_t} f_t(x_t) - \inf_{f\in \F}\sum_{t=1}^T f(x_t) }
\end{align}
with equality holding for $\ensuremath{\mathbf{p}}^*$ which achieves the supremum\footnote{Here, and in the rest of the paper, if a supremum is not achieved, a slightly modified analysis can be carried out.} in \eqref{eq:succinct_value_equality}. Importantly, the infimum is over strategies $\ensuremath{\boldsymbol \pi}=\{\pi_t\}_{t=1}^T$ of the player that \emph{do not depend} on player's previous moves, that is $\pi_t:\X^{t-1}\mapsto \mathcal Q$. Hence, there is an oblivious minimax optimal strategy for the adversary, and there is a corresponding minimax optimal strategy for the player that does not depend on its own moves.
\end{proposition}
Proposition~\ref{prop:lower_bound_oblivious} holds for all online learning settings with legal restrictions $\mathcal P_{1:T}$, encompassing also the no-restrictions setting of worst-case online learning \cite{RakSriTew10a}. The result crucially relies on the fact that the objective is external regret.
\section{Analyzing Rademacher Complexity}
\label{sec:structural}
The aim of this section is to provide a better understanding of the distribution-dependent sequential Rademacher complexity, as well as ways of upper-bounding it. We first show that the classical Rademacher complexity is equal to the distribution-dependent sequential Rademacher complexity for i.i.d. data. We further show that the distribution-dependent sequential Rademacher complexity is always upper bounded by the worst-case sequential Rademacher complexity defined in \cite{RakSriTew10a}.
It is already apparent to the reader that the sequential nature of the minimax formulation yields long mathematical expressions, which are unwieldy, though not necessarily complicated. The functional notation and the tree notation alleviate many of these difficulties. However, it takes some time to become familiar and comfortable with these representations. The next few results hopefully provide the reader with a better feel for the distribution-dependent sequential Rademacher complexity.
\begin{proposition}
Consider the i.i.d. restrictions $\mathcal P_t=\{p\}$ for all $t$, where $p$ is some fixed distribution on $\X$. Let $\boldsymbol{\rho}$ be the process associated with the joint distribution $\ensuremath{\mathbf{p}}=p^T$. Then
$$\Rad_T(\mathcal{F}, \ensuremath{\mathbf{p}}) = \Rad_T(\F, p) $$
where
\begin{align}
\label{eq:classical_rad}
\Rad_T(\F, p) \stackrel{\scriptscriptstyle\triangle}{=} \En_{x_1,\ldots,x_T \sim p}\Es{\epsilon}{ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(x_t)}
\end{align}
is the classical Rademacher complexity.
\end{proposition}
\begin{proof}
By definition, we have,
\begin{align}\label{eq:disttoiid1}
\Rad_T(\mathcal{F}, \ensuremath{\mathbf{p}}) &= \En_{(\x,\x')\sim \boldsymbol{\rho}}\Es{\epsilon}{ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(\x_t(\epsilon))}
\end{align}
In the i.i.d. case, however, the tree generation according to the $\boldsymbol{\rho}$ process simplifies: for any $\epsilon \in \{\pm1\}^T, t \in [T]$,
\begin{align*}
(\x_t(\epsilon),\x'_t(\epsilon)) \sim p \times p \ .
\end{align*}
Thus, the $2\cdot(2^T-1)$ random variables $\x_t(\epsilon), \x'_t(\epsilon)$ are all i.i.d. drawn from $p$. Writing the expectation
\eqref{eq:disttoiid1} explicitly as an average over paths, we get
\begin{align*}
\Rad_T(\mathcal{F}, \ensuremath{\mathbf{p}}) &= \frac{1}{2^T} \sum_{\epsilon \in \{\pm1\}^T} \En_{(\x,\x')\sim \boldsymbol{\rho}}\left[ \sup_{f\in\F} \sum_{t=1}^T \epsilon_t f(\x_t(\epsilon)) \right] \\
&= \frac{1}{2^T} \sum_{\epsilon \in \{\pm1\}^T} \En_{x_1,\ldots,x_T \sim p}\left[ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(x_t) \right] \\
&= \En_{\epsilon} \En_{x_1,\ldots,x_T \sim p}\left[ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(x_t) \right] \ .
\end{align*}
The second equality holds because, for any fixed path $\epsilon$, the $T$ random variables $\{\x_t(\epsilon)\}_{t \in [T]}$ have joint distribution $p^T$.
\end{proof}
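For small $T$, the classical Rademacher complexity in \eqref{eq:classical_rad} can be computed exactly for a fixed sample by enumerating all $2^T$ sign vectors. The sketch below does so; representing $\F$ by its value vectors on the sample is our own illustrative device:

```python
import itertools

def classical_rademacher(function_values):
    """Exact E_eps sup_f sum_t eps_t f(x_t), averaging over all 2^T sign
    vectors (feasible only for small T). Each element of function_values
    is the value vector (f(x_1), ..., f(x_T)) of one f in F."""
    T = len(function_values[0])
    total = 0.0
    for eps in itertools.product((-1, 1), repeat=T):
        total += max(sum(e * v for e, v in zip(eps, vals))
                     for vals in function_values)
    return total / 2 ** T
```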
\begin{proposition}
For any joint distribution $\ensuremath{\mathbf{p}}$,
$$\Rad_T(\mathcal{F}, \ensuremath{\mathbf{p}}) \leq \Rad_T(\F) $$
where
\begin{align}
\label{eq:wc_rad}
\Rad_T(\F) \stackrel{\scriptscriptstyle\triangle}{=} \sup_{\x} \Es{\epsilon}{ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(x_t)}
\end{align}
is the sequential Rademacher complexity defined in \cite{RakSriTew10a}.
\end{proposition}
\begin{proof}
To make the $\boldsymbol{\rho}$ process associated with $\ensuremath{\mathbf{p}}$ more explicit, we use the expanded definition:
\begin{align}
\label{eq:rad_bdd_by_wc}
\Rad_T(\F, \ensuremath{\mathbf{p}}) &= \En_{x_1,x'_1\sim p_1}\En_{\epsilon_1}\En_{x_2,x'_2\sim p_2(\cdot|\chi_1(\epsilon_1))} \En_{\epsilon_2} ~\ldots~ \En_{x_T,x'_T\sim p_T(\cdot|\chi_1(\epsilon_1),\ldots, \chi_{T-1}(\epsilon_{T-1})) } \En_{\epsilon_{T}} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t f(x_t) \right] \notag\\
&\le \sup_{x_1,x'_1}\En_{\epsilon_1}\sup_{x_2,x'_2} \En_{\epsilon_2} ~\ldots~ \sup_{x_T,x'_T} \En_{\epsilon_{T}} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t f(x_t) \right] \\
&=\sup_{x_1}\En_{\epsilon_1}\sup_{x_2} \En_{\epsilon_2} ~\ldots~ \sup_{x_T} \En_{\epsilon_{T}} \left[ \sup_{f \in \F} \sum_{t=1}^T \epsilon_t f(x_t) \right] \notag \\
&=\Rad_T(\F) \notag\ .
\end{align}
The inequality holds by replacing the expectation over $x_t,x'_t$ with a supremum over the same pair. We then drop the $x'_t$'s since they do not appear in the objective.
\end{proof}
An interesting case of hybrid i.i.d.-adversarial data is considered in Lemma~\ref{lem:iid_wc_rademacher}, and we refer to its proof as another example of an analysis of the distribution-dependent sequential Rademacher complexity.
We now turn to general properties of Rademacher complexity. The proof of the next proposition follows along the lines of the analogous result in \cite{RakSriTew10a}.
\begin{proposition}
\label{prop:rademacher_properties}
Distribution-dependent sequential Rademacher complexity satisfies the following properties.
\begin{enumerate}
\item If $\F\subset \G$, then $\Rad(\F,\ensuremath{\mathbf{p}}) \leq \Rad(\G,\ensuremath{\mathbf{p}})$.
\item $\Rad(\F,\ensuremath{\mathbf{p}}) = \Rad (\operatorname{conv}(\F),\ensuremath{\mathbf{p}})$.
\item $\Rad(c\F,\ensuremath{\mathbf{p}}) = |c|\Rad(\F,\ensuremath{\mathbf{p}})$ for all $c\in\mathbb{R}$.
\item For any $h$, $\Rad(\F+h,\ensuremath{\mathbf{p}}) = \Rad(\F,\ensuremath{\mathbf{p}})$, where $\F+h = \{f+h: f\in\F\}$.
\end{enumerate}
\end{proposition}
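Properties 1--4 of Proposition~\ref{prop:rademacher_properties} can be verified on a finite class by exact enumeration over sign vectors. A sketch (with $\F$ represented by its value vectors on a fixed small sample; the helper is ours):

```python
import itertools

def rad(function_values):
    """Exact E_eps sup_f sum_t eps_t f(x_t) over all sign vectors, with
    F given by its value vectors (f(x_1), ..., f(x_T)) on a fixed sample
    (feasible only for small T; an illustrative check, not a definition)."""
    T = len(function_values[0])
    return sum(max(sum(e * v for e, v in zip(eps, vals))
                   for vals in function_values)
               for eps in itertools.product((-1, 1), repeat=T)) / 2 ** T
```

The checks below exercise monotonicity, convex-hull invariance (via a midpoint), scaling by $|c|$, and translation invariance.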
Next, we consider upper bounds on $\Rad(\F,\ensuremath{\mathbf{p}})$ via covering numbers. Recall the definition of a (sequential) cover, given in \cite{RakSriTew10a}. This notion captures sequential complexity of a function class on a given $\X$-valued tree $\x$.
\begin{definition}
\label{def:cover}
A set $V$ of $\mathbb{R}$-valued trees of depth $T$ is \emph{an $\alpha$-cover} (with respect to $\ell_p$-norm) of $\F \subseteq \mathbb{R}^\X$ on a tree $\x$ of depth $T$ if
$$
\forall f \in \F,\ \forall \epsilon \in \{\pm1\}^T \ \exists \mbb{v} \in V \ \mrm{s.t.} ~~~~ \left( \frac{1}{T} \sum_{t=1}^T |\mbb{v}_t(\epsilon) - f(\x_t(\epsilon))|^p \right)^{1/p} \le \alpha
$$
The \emph{covering number} of a function class $\F$ on a given tree $\x$ is defined as
$$
\N_p(\alpha, \F, \x) = \min\{|V| : V \ \trm{is an }\alpha-\text{cover w.r.t. }\ell_p\trm{-norm of }\F \trm{ on } \x\}.
$$
\end{definition}
Using the notion of the covering number, the following result holds.
\begin{theorem}\label{thm:dudley}
For any function class $\F\subseteq [-1,1]^\X$,
\begin{align*}
\Rad_T(\F,\ensuremath{\mathbf{p}}) \le \En_{(\x,\x')\sim \boldsymbol{\rho}} \inf_{\alpha}\left\{4 T \alpha + 12\int_{\alpha}^{1} \sqrt{T \ \log \ \mathcal{N}_2(\delta, \F,\x ) \ } d \delta \right\} \ .
\end{align*}
\end{theorem}
The analogous result in \cite{RakSriTew10a} is stated for the worst-case adversary, and, hence, it is phrased in terms of the maximal covering number $\sup_\x \mathcal{N}_2(\delta, \F,\x)$. The proof, however, holds for any fixed $\x$, and thus immediately implies Theorem~\ref{thm:dudley}. If the expectation over $(\x,\x')$ in Theorem~\ref{thm:dudley} can be exchanged with the integral, we pass to an upper bound in terms of the expected covering number $\En_{(\x,\x')\sim \boldsymbol{\rho}} \mathcal{N}_2(\delta, \F,\x )$.
The following simple corollary of the above theorem shows that the distribution-dependent Rademacher complexity of a function class $\F$ composed with a Lipschitz mapping $\phi$ can be controlled in terms of the Dudley integral for the function class $\F$ itself.
\begin{corollary}\label{thm:dudleycontraction}
Fix a class $\F\subseteq [-1,1]^\Z$ and a function $\phi:[-1,1]\times \Z\mapsto\mathbb{R}$. Assume, for all $z \in \Z$, $\phi(\cdot,z)$ is a Lipschitz function with a constant $L$. Then,
\begin{align*}
\Rad_T(\phi(\F),\ensuremath{\mathbf{p}}) \le L\ \En_{(\z,\z')\sim \boldsymbol{\rho}} \inf_{\alpha}\left\{4 T \alpha + 12\int_{\alpha}^{1} \sqrt{T \ \log \ \mathcal{N}_2(\delta, \F,\z ) \ } d \delta \right\} \ .
\end{align*}
where $\phi(\F) = \{z \mapsto \phi(f(z),z): f\in \F\}$.
\end{corollary}
The statement can be seen as a covering-number version of the Lipschitz composition lemma.
\section{Supervised Learning}
\label{sec:supervised}
In Section~\ref{sec:iid}, we studied the relationship between batch and online learnability in the i.i.d. setting, focusing on the supervised case in Section~\ref{sec:blind_non_blind_sup}. We now provide a more in-depth study of the value of the supervised game beyond the i.i.d. setting.
As shown in \cite{RakSriTew10a,RakSriTew10nips}, the value of the supervised game with the \emph{worst-case adversary} is upper and lower bounded (to within $O(\log^{3/2} T)$) by \emph{sequential} Rademacher complexity. This complexity can be linear in $T$ if the function class has infinite Littlestone's dimension, rendering worst-case learning futile. This is the case with a class of threshold functions on an interval, which has a Vapnik-Chervonenkis dimension of $1$. Surprisingly, it was shown in \cite{LazMun09} that for the classification problem with i.i.d. $x$'s and adversarial labels $y$, online regret can be bounded whenever the VC dimension of the class is finite. This suggests that it is the manner in which $x$ is chosen that plays the decisive role in supervised learning. We indeed show that this is the case. Irrespective of the way the labels are chosen, if $x_t$ are chosen i.i.d. then regret is (to within a constant) given by the classical Rademacher complexity. If $x_t$'s are chosen adversarially, it is (to within a logarithmic factor) given by the sequential Rademacher complexity.
We remark that the algorithm of \cite{LazMun09} is ``distribution-blind'' in the sense of the previous section. The results we present below are for non-blind games. While the equivalence of blind and non-blind learning was shown in the previous section for the i.i.d. supervised case, we hypothesize that it holds for the hybrid supervised learning scenario as well.
Let the loss class be $\phi(\F) = \{ (x,y) \mapsto \phi(f(x),y) \::\: f \in \F \}\ $ for some Lipschitz function $\phi:\mathbb{R}\times \Y\mapsto\mathbb{R}$ (e.g. $\phi(f(x),y)=|f(x)-y|$). Let $\mathcal P_{1:T}$ be the restrictions on the adversary. Theorem~\ref{thm:valrad} then states that
\[
\Val^{\text{sup}}_T(\mathcal P_{1:T}) \le 2 \sup_{\ensuremath{\mathbf{p}}\in\PDA}\Rad_T(\phi(\F), \ensuremath{\mathbf{p}})
\]
where the supremum is over all joint distributions $\ensuremath{\mathbf{p}}$ on the sequences $((x_1,y_1),\ldots,(x_T,y_T))$, such that $\ensuremath{\mathbf{p}}$ satisfies the restrictions $\mathcal P_{1:T}$. The idea is to pass from a complexity of $\phi(\F)$ to that of the class $\F$ via a Lipschitz composition lemma, and then note that the resulting complexity does not depend on $y$-variables. If this can be done, the complexity associated only with the choice of $x$ is then an upper bound on the value of the game. The results of this section, therefore, hold whenever a Lipschitz composition lemma can be proved for the distribution-dependent Rademacher complexity.
The following lemma gives an upper bound on the distribution-dependent Rademacher complexity in the ``hybrid'' scenario, i.e. the distribution of $x_t$'s is i.i.d. from a fixed distribution $p$ but the distribution of $y_t$'s is arbitrary (recall that an adversarial choice translates into vacuous restrictions $\mathcal P_t$ on the mixed strategies). Interestingly, the upper bound is a blend of the classical Rademacher complexity (on the $x$-variable) and the worst-case sequential Rademacher complexity for the $y$-variable. This captures the hybrid nature of the problem.
\begin{lemma}
\label{lem:iid_wc_rademacher}
Fix a class $\F\subseteq \mathbb{R}^\X$ and a function $\phi:\mathbb{R}\times \Y\mapsto\mathbb{R}$. Given a distribution $p$ over $\X$, let $\PDA$ consist of all joint distributions $\ensuremath{\mathbf{p}}$ such that the conditional distribution $p^{x,y}_t(x_t,y_t|x^{t-1},y^{t-1}) = p(x_t) \times p_t(y_t|x^{t-1},y^{t-1},x_t)$ for some conditional distribution $p_t$. Then,
\begin{align*}
\sup_{\ensuremath{\mathbf{p}}\in\PDA} \Rad_T(\phi(\F),\ensuremath{\mathbf{p}}) &\leq \Eunderone{x_1,\ldots,x_T \sim p} \sup_{\y} \Es{\epsilon}{\sup_{f \in \F} \sum_{t=1}^T \epsilon_t \phi(f(x_t),\y_t(\epsilon))} \ .
\end{align*}
\end{lemma}
Armed with this result, we can appeal to the following Lipschitz composition lemma. It says that the distribution-dependent sequential Rademacher complexity for the hybrid scenario with a Lipschitz loss can be upper bounded via the classical Rademacher complexity of the function class on the $x$-variable only. That is, we can ``erase'' the Lipschitz loss function together with the (adversarially chosen) $y$ variable. The lemma is an analogue of the classical contraction principle initially proved by Ledoux and Talagrand \cite{LedouxTalagrand91} for the i.i.d. process.
\begin{lemma}
\label{lem:comparison_lemma_iid_wc}
Fix a class $\F\subseteq [-1,1]^\X$ and a function $\phi:[-1,1]\times \Y\mapsto\mathbb{R}$. Assume, for all $y \in \Y$, $\phi(\cdot,y)$ is a Lipschitz function with a constant $L$. Let $\PDA$ be as in Lemma~\ref{lem:iid_wc_rademacher}. Then, for any $\ensuremath{\mathbf{p}} \in \PDA$,
$$
\Rad_T(\phi(\F),\ensuremath{\mathbf{p}}) \le L\ \Rad_T(\F,p) \ .
$$
\end{lemma}
Lemma~\ref{lem:iid_wc_rademacher} in tandem with Lemma~\ref{lem:comparison_lemma_iid_wc} imply that the value of the game with i.i.d. $x$'s and adversarial $y$'s is upper bounded by the classical Rademacher complexity.
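Spelled out for the absolute loss, which is $1$-Lipschitz in its first argument, the chain of bounds reads (a sketch combining the results above):

```latex
\begin{align*}
\Val^{\text{sup}}_T(\mathcal P_{1:T})
&\le 2 \sup_{\ensuremath{\mathbf{p}}\in\PDA}\Rad_T(\phi(\F), \ensuremath{\mathbf{p}})
&& \text{(Theorem~\ref{thm:valrad})}\\
&\le 2L\,\Rad_T(\F, p)
&& \text{(Lemma~\ref{lem:comparison_lemma_iid_wc})}\\
&= 2\,\Rad_T(\F, p)
&& \text{(absolute loss, $L=1$)} .
\end{align*}
```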
For the case of adversarially-chosen $x$'s and (potentially) adversarially chosen $y$'s, the necessary Lipschitz composition lemma is proved in \cite{RakSriTew10a} with an extra factor of $O(\log^{3/2} T)$. We summarize the results in the following Corollary.
\begin{corollary}
\label{cor:upper_bounds_for_sup_learning}
The following results hold for stochastic-adversarial supervised learning with absolute loss.
\begin{itemize}
\item If $x_t$ are chosen adversarially, then irrespective of the way $y_t$'s are chosen,
$$\Val^{\text{sup}}_T \leq 2\Rad (\F) \times O(\log^{3/2}(T)),$$
where $\Rad (\F)$ is the (worst-case) sequential Rademacher complexity \cite{RakSriTew10a}. A matching lower bound of $\Rad(\F)$ is attained by choosing $y_t$'s as i.i.d. Rademacher random variables.
\item If $x_t$ are chosen i.i.d. from $p$, then irrespective of the way $y_t$'s are chosen,
$$\Val^{\text{sup}}_T \leq 2\Rad (\F, p),$$
where $\Rad (\F, p)$ defined in \eqref{eq:classical_rad} is the classical Rademacher complexity. The matching lower bound of $\Rad (\F, p)$ is obtained by choosing $y_t$'s as i.i.d. Rademacher random variables.
\end{itemize}
\end{corollary}
The lower bounds stated in Corollary~\ref{cor:upper_bounds_for_sup_learning} are proved in the next section.
\subsection{Lower Bounds}
\label{sec:lowerbounds}
We now give two lower bounds on the value $\Val^{\text{sup}}_T$, defined with the absolute value loss function $\phi(f(x),y) = |f(x)-y|$. The lower bounds hold whenever the adversary's restrictions $\{\mathcal P_t\}_{t=1}^T$ allow the labels to be i.i.d. coin flips. That is, for the purposes of proving the lower bound, it is enough to choose a joint probability $\ensuremath{\mathbf{p}}$ (an oblivious strategy for the adversary) such that each conditional probability distribution on the pair $(x,y)$ is of the form $p_t(x | x_1,\ldots, x_{t-1}) \times b(y)$ with $b(-1)=b(1)=1/2$. Pick any such $\ensuremath{\mathbf{p}}$.
Our first lower bound will hold whenever the restrictions $\mathcal P_t$ are history-independent. That is, $\mathcal P_t(x_{1:t-1})=\mathcal P_t(x'_{1:t-1})$ for any $x_{1:t-1},x'_{1:t-1}\in\X^{t-1}$. Since the worst-case (all distributions) and i.i.d. (single distribution) scenarios both correspond to history-independent restrictions, the following lemma can be used to provide lower bounds for these cases. The second lower bound holds more generally, yet it is weaker than that of Lemma~\ref{lem:first_lower}.
\begin{lemma}
\label{lem:first_lower}
Let $\PDA$ be the set of all $\ensuremath{\mathbf{p}}$ satisfying the history-independent restrictions $\{\mathcal P_t\}$ and $\PDA' \subseteq \PDA$ the subset that allows the label $y_t$ to be an i.i.d. Rademacher random variable for each $t$. Then
$$ \Val^{\text{sup}}_T (\mathcal P_{1:T}) \geq \sup_{\ensuremath{\mathbf{p}}\in\PDA'} \Rad_T(\F, \ensuremath{\mathbf{p}})$$
\end{lemma}
In particular, Lemma~\ref{lem:first_lower} gives matching lower bounds for Corollary~\ref{cor:upper_bounds_for_sup_learning}.
\begin{lemma}
\label{lem:second_lower}
Let $\PDA$ be the set of all $\ensuremath{\mathbf{p}}$ satisfying the restrictions $\{\mathcal P_t\}$ and let $\PDA' \subseteq \PDA$ be the subset that allows the label $y_t$ to be an i.i.d. Rademacher random variable for each $t$. Then
$$ \Val^{\text{sup}}_T (\mathcal P_{1:T}) \geq \sup_{\ensuremath{\mathbf{p}}\in\PDA'} \En_{(\x,\x')\sim \boldsymbol{\rho}}\Es{\epsilon}{ \sup_{f \in \F} \sum_{t=1}^{T} \epsilon_t f(\x_t(-{\boldsymbol 1}))} $$
\end{lemma}
\begin{remark}
The supervised learning protocol is sometimes defined as follows. At each round $t$, the pair $(x_t,y_t)$ is chosen by the adversary, yet the player first observes only the ``side information'' $x_t$. The player then makes a prediction $\hat{y}_t$ and, subsequently, the label $y_t$ is revealed. The goal is to minimize regret defined as
$$ \sum_{t=1}^T |\hat{y}_t-y_t| - \inf_{f\in\F} \sum_{t=1}^T |f(x_t)-y_t|.$$
As briefly mentioned in \cite{RakSriTew10a}, this protocol is equivalent to a slightly modified version of the game we consider. Indeed, suppose at each step we are allowed to output any function $f':\X\mapsto\Y$ (not just from $\F$), yet regret is still defined as a comparison to the best $f\in\F$. This modified version is clearly equivalent to first observing $x_t$ and then predicting $\hat{y}_t$. Denote by $\tilde{\Val}_T$ the value of the modified ``improper learning'' game, where the player is allowed to choose any $f_t\in\Y^\X$. Side-stepping the issue of putting distributions on the space of all functions $\Y^\X$, it is easy to check that Theorem~\ref{thm:minimax} goes through with only one modification: the infima in the cumulative cost are over all measurable functions $f_t\in\Y^\X$. The key observation is that these $f_t$'s are replaced by $f\in\F$ in the proof of Theorem~\ref{thm:valrad}. Hence, the upper bound on $\tilde{\Val}_T$ is the same as the one on the ``proper learning'' game where our predictions have to lie inside $\F$.
\end{remark}
\section{Introduction}
\label{sec:intro}
Short-period binaries with dwarf components have proven to be ideal systems to constrain stellar structure and evolution models with precise mass and radius data. White Dwarf / Red Dwarf (hereafter WD+RD) binaries are of particular relevance, as the large difference between the components both in radius and luminosity results in lightcurves very close to ideal predictions, allowing for high-precision timing measurements \citep[see, e.g.,][]{Parsons2010}.
Such binary systems form as a result of a common envelope (CE) phase. The CE model was originally put forward by \citet{Paczynski76}, and assumes that the primary star in an originally wider ($\sim1$~AU) binary system has reached the giant branch, with the secondary being engulfed by the envelope of the giant. As a result of this, both the energy and angular momentum of the secondary are deposited into the envelope of the giant, causing the secondary to spiral inward to separations of about a solar radius. These models have been refined in subsequent studies, e.g. \citet{Meyer79, Iben93, Taam00, Webbink08} and \citet{Taam10}. The CE phase has been simulated using both smoothed-particle hydrodynamics (SPH) and grid-based techniques by \citet{Passy12}, and a review of the current state of the art is provided by \citet{Ivanova13}. The systems resulting from such a CE phase consist of a White Dwarf formed from the core of the giant and a Red Dwarf on a narrow orbit of $\sim1$~$R_\odot$. They are referred to as post-common-envelope binaries (PCEBs).
The evolution of close binary systems is predominantly governed by angular momentum loss, which can be driven by gravitational wave emission for binaries with very short periods ($P_{\rm orb}<3$~h) \citep{Kraft1962, Faulkner1971} or magnetic braking for binaries with $P_{\rm orb}>3$~h \citep{Verbunt1981}. While the resulting angular momentum loss implies a continuous decrease of the orbital period over time, a number of systems are known to show regular quasi-periodic modulations of the order $\Delta P/P_{bin}\sim3\times10^{-5}$ with periods of several decades in Algols, RS Canum Venaticorum (RS CVn), W Ursae Majoris and cataclysmic variable (CV) stars \citep{Ibanoglu91}. These variations are studied employing O-C diagrams, in which O denotes the observed orbital phase of the binary at a given time, from which a correction C is subtracted based on the zero- and first-order terms in the expansion of the angular velocity, i.e. assuming $\omega(t)=\omega_0+\dot{\omega}t+\ldots$. A diagram showing O-C vs. time provides information on the quasi-periodic modulations, corresponding to fluctuations including a regular increase accompanied by a subsequent decrease in the orbital period. The period variation is related to the amplitude in the O-C diagram via \citep{Applegate1992}
\begin{equation}
\frac{\Delta P}{P_{bin}}=2\pi \frac{O-C}{P_{\rm mod}},
\end{equation}
where $P_{\rm mod}$ denotes the period of the modulation. In the case of eclipsing binaries, these fluctuations can be accurately measured using transit timing variations (TTVs) to constrain the underlying physical mechanism.
A potential mechanism to explain the eclipsing time variations has been proposed by \citet{Applegate1992}, in which the period variations are explained as a result of quasi-periodic changes in the quadrupole moment of the secondary star driven by magnetic activity. It is thus assumed that a sufficiently strong magnetic field is regularly produced during a dynamo cycle, leading to a redistribution of the angular momentum within the star and therefore a change in its quadrupole moment. This model was motivated by a sample of $101$ Algols studied by \citet{Hall1989}, showing a strong connection between the orbital period variations and the presence of magnetic activity.
To sufficiently change the stellar structure to drive the quasi-periodic period oscillation, a certain amount of energy is required to build up a strong magnetic field, which is subsequently dissipated and again built up in the next dynamo cycle. Ultimately, such energy should be extracted from the convective energy of the star, which is powered by the nuclear energy production \citep{Marsh90}. While sufficient energy appears to be available in the case of Algols, it has been shown by \citet{Lanza2005} that the Applegate scenario needs to be rejected for RS~CVns, and independently \citet{Brinkworth2006} have reported that the orbital period variations of the PCEB system NN~Ser cannot be explained using Applegate's model.
In fact, the orbital period variations in NN~Ser were shown to be consistent with a two-planet solution, which was shown to be dynamically stable \citep{Beuermann2010, Horner2012, Beuermann2013, Marsh14}. Similarly, the planetary solution of HW~Vir was shown to be secularly stable \citep{Beuermann2012b}, while a final conclusion on HU~Aqr \citep{Qian2011, Gozdziewski2012, Hinse2012, Wittenmyer2012, Gozdziewski2015HUAqr} is pending. The eclipsing time variations in QS~Vir, on the other hand, are currently not understood, and appear to be incompatible both with the planetary hypothesis as well as the Applegate scenario \citep{Parsons2010}. A summary of PCEB systems potentially hosting planets has been compiled by \citet{Zorotovic2013}, including the properties of the planetary systems. The latter correspond to massive giant planets of several Jupiter masses, on planetary orbits of a few AU.
However, the discussion of whether the planets are real is still ongoing. In this respect, the proposed brown dwarf in V471~Tau has not been found via direct imaging \citep[][but see comments by \citet{Vaccaro2015}]{Hardy2015}. In this paper, we in fact show that the period time variations in V471~Tau could also be produced by an Applegate mechanism.
The original Applegate model linking magnetic activity to period time variations has subsequently been improved by different authors. For instance, the analysis by \citet{Lanza1998} provided an improved treatment of the mechanical equilibrium, including the impact of rotational and magnetic energy on the quadrupole moment, leading to an improved estimate of the energy requirements. While the original framework was based on a thin-shell approximation, the latter was extended by \citet{Brinkworth2006} considering a finite shell around the inner core, as well as the change of the quadrupole moment of the core due to the exchange of angular momentum. An even more detailed formulation has been provided by \citet{Lanza2006}. It was proposed by \citet{Lanza1999} that an energetically more favorable scenario may occur in the presence of an $\alpha^2$ dynamo, which was however shown to require strong assumptions concerning the operation of the dynamo, including a dynamo restricted to the star's equatorial plane and a magnetic field of $10^5$~G \citep{Ruediger2002}.
In this paper, we adopt the formulation by \citet{Brinkworth2006} and apply it to realistic stellar profiles. Based on this analysis, we systematically assess whether the Applegate mechanism is feasible in a sample of 16 close binary systems with potential planets, including 11 PCEBs. We further provide analytical scaling relations which can reproduce our main results, provide further insights into the Applegate mechanism and allow for an extension of this analysis to additional systems.
\section{Systems investigated in this paper}
\label{sec:systems}
\subsection{Classification}
Our sample consists of a total of $16$ close binary systems with observed period variations potentially indicating the presence of planets. $11$ of these are the PCEB systems previously described by \citet{Zorotovic2011} and \citet{Zorotovic2013}; the remainder are the four RS~CVn binaries RU~Cnc, AW~Her \citep{Tian2009}, HR~1099 \citep{Fekel1983, Garcia2003, Frasca2004} and SZ~Psc \citep{Jakate1976, Popper1988, Wang2010}, and the RR-Lyr type binary BX~Dra \citep{Park2013}. The detailed properties of these systems, including binary type and spectral class, are given in Table~\ref{tab:secondarys}.
\begin{table*}
\caption{Summary of relevant system parameters for the close binaries with planetary candidates investigated in this paper, using the usual nomenclature, with $\tau$ denoting the age of the binary system (see also sec.~\ref{sec:systems}). Sources for the data are given below.}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c}
\hline
System & $a_{\rm bin}$/${\rm ~R}_{\sun}$ & $P_{\rm bin}$/$\rm{d}$ &$M_{\rm sec}$/${\rm ~M}_{\sun}$ & $R_{\rm sec}$/${\rm ~R}_{\sun}$ & $T_{\rm sec}$/$\rm{ K}$ & $L_{\rm sec}$/${\rm ~L}_{\sun}$ & $\tau$/$\mbox{ Gyr}$ & type & spectral class & Sources \\
\hline
HS 0705+6700 & 0.81 & 0.0958 & 0.134 & 0.186 & 2,900 & 0.00219$^{\ast}$ & $5^{**}$ & Al & sdB+dM & 1,2,3 \\
HW Vir & 0.860 & 0.117 & 0.142 & 0.175 & 3,084 & 0.003 & $5^{**}$ & Al & sdB+dM & 4,5,6 \\
NN Ser & 0.934 & 0.130 & 0.111 & 0.149 & 2,920 & 0.00147$^{\ast}$ & 2 & No & DA01+M4 & 7,8,9 \\
NSVS14256825 & 0.80 & 0.110 & 0.109 & 0.162 & 2,550 & 0.000994$^{\ast}$ & $5^{**}$ & EB & sd0B+dM & 3 \\
NY Vir & 0.77 & 0.101 & 0.15 & 0.14 & 3,000 & 0.00142$^{\ast}$ & $5^{**}$ & Al & sdB+dM & 3,10,11 \\
HU Aqr & 0.69 & 0.0868 & 0.18 & 0.22 & 3,400 & 0.00580$^{\ast}$ & 1$^{\ast}$ & AM & WD+M4V & 12,13,14 \\
QS Vir & 1.27$^{\ast}$ & 0.151 & 0.43 & 0.42 & 3,100 & 0.0146$^{\ast}$ & $5^{**}$ & Al & DA+M2-4 & 15,16,17 \\
RR Cae & 1.62$^{\ast}$ & 0.304 & 0.183 & 0.209 & 3,100 & 0.00362$^{\ast}$ & 4.5$^{\ast}$ & Al & DA7.8+M4 & 18,19,20 \\
UZ For & 0.788$^{\ast}$ & 0.0879 & 0.14 & 0.177 & 2,950 & 0.00213$^{\ast}$ & 1$^{\ast}$ & AM & M4.5 & 21,22 \\
DP Leo & 0.59$^{\ast}$ & 0.0624 & 0.1 & 0.134$^{\ast}$ & 2,710$^{\ast}$ & 0.000867$^{\ast}$ & 2.5$^{\ast}$ & AM & WD & 23,24 \\
V471 Tau & 3.3 & 0.522$^{\ast}$ & 0.93 & 0.96 & 5,040 & 0.40 & 1 & Al & K2V+DA & 25,26,27 \\
\hline
RU Cnc & 27.76$^{\ast}$ & 10.17 & 1.42 & 4.83 & 4,940$^{\ast}$ & 12.5$^{\ast}$ & 3.3$^{\ast}$ & RS & dF9+dG9 & 23,28 \\
AW Her & 24.86 & 8.82$^{\ast}$ & 1.35 & 3.0 & 5,110$^{\ast}$ & 5.49$^{\ast}$ & 4.0$^{\ast}$ & RS & G2IV &23, 28 \\
HR 1099 & 11.2$^{\ast}$ & 2.84 & 1.3 & 4.0 & 4,940$^{\ast}$ & 8.6$^{\ast}$ & 4.5$^{\ast}$ &RS & K2 & 29,30,31,32 \\
BX Dra & 4.06 & 0.579 & 2.08 & 2.13 & 6,980 & 9.66 & 0.5$^{\ast}$ & RR & F0IV-V & 23,33 \\
SZ Psc & 15.04$^{\ast}$ & 3.97 & 1.62 & 5.1 & 5,004$^{\ast}$ & 14.7$^{\ast}$ & 2.1$^{\ast}$ & RS & K1IV+F8V & 23,34,35,36 \\
\hline
\end{tabular}
\tablefoot{We marked calculated values with an asterisk. In case no age estimates are given in the literature, we adopt a canonical age of $5~\mbox{ Gyr}$, marked as $^{**}$. The term secondary does not necessarily refer to the lower mass component. Rather, it refers to the component of the binary for which we pursued the calculations. For RR Cae, DP Leo and UZ For, we estimated the WD progenitor masses using the fits provided by \citet{Meng2008} assuming solar metallicity. In the case of DP Leo and UZ For, we adopted the main-sequence lifetime of the progenitor plus $0.5~\mbox{ Myr}$ as a rough age estimate. In RR Cae, we added the cooling age of the WD, which is given as $t_{cool} \sim 1 \mbox{ Gyr}$ by ref. 18. The abbreviations for the system type are: Al: eclipsing binary of Algol type (detached), EB: eclipsing binary, No: Nova, AM: CV of AM Her type (polar), RR: variable star of RR Lyr type, RS: variable star of RS CVn type. The horizontal line separates the PCEB systems from other close binaries.}
\tablebib{
(1)~\citet{Beuermann2012a}; (2)~\citet{Drechsel2001}; (3)~\citet{Almeida2012};
(4)~\citet{Beuermann2012b}; (5)~\citet{Lee2009}; (6)~\citet{Wood99};
(7)~\citet{Parsons2010NNSer}; (8)~\citet{Beuermann2010}; (9)~\citet{Beuermann2013};
(10)~\citet{Kilkenny1998}; (11)~\citet{Qian2012};
(12)~\citet{Schwope2011}; (13)~\citet{Wittenmyer2012}; (14)~\citet{Gozdziewski2015HUAqr};
(15)~\citet{ODonoghue2003}; (16)~\citet{Parsons2010}; (17)~\citet{Drake14};
(18)~\citet{Maxted2007}; (19)~\citet{Gianninas11}; (20)~\citet{Zorotovic2013};
(21)~\citet{Bailey1991}; (22)~\citet{Potter2011};
(23)~\citet{Pourbaix04}; (24)~\citet{Beuermann2011};
(25)~\citet{OBrien2001}; (26)~\citet{Hussain06}; (27)~\citet{Hardy2015};
(28)~\citet{Tian2009};
(29)~\citet{Fekel1983}; (30)~\citet{Garcia2003}; (31)~\citet{Frasca2004}; (32)~\citet{Gray06};
(33)~\citet{Park2013};
(34)~\citet{Jakate1976}; (35)~\citet{Popper1988}; (36)~\citet{Wang2010}
.}
\label{tab:secondarys}
\end{center}
\end{table*}
\subsection{Period variations and the LTV effect}
In the presence of a planet, the close binary system and the companion revolve around their common barycentre resulting in variations of the observed eclipse timings referred to as the light travel time variation (LTV) effect.
In the binary eclipse O-C diagram, the expected signature of a single planet on a circular orbit is sinusoidal with semi-amplitude $K$ and period $P_{\rm plan}$. Treating the binary as a point mass and further assuming an edge-on binary (i.e., $i=90^{\circ}$), we can approximate the LTV semi-amplitude $K$ caused by the accompanying object via
\begin{equation}
K = \frac{ M_{\rm plan} G^{1/3}}{c} \left[ \frac{P_{\rm plan}}{2 \pi (M_{\rm pri}+M_{\rm sec})} \right]^{2/3},
\end{equation}
where $M_{\rm pri}$ and $M_{\rm sec}$ denote the eclipsing binary component masses, $M_{\rm plan}$ the planetary mass, $P_{\rm plan}$ the planetary orbital period, $c$ the speed of light and $G$ the gravitational constant (see \citet{Pribulla2012} for a more detailed description). Relative to the binary's period, we have a period change of \citep[see, e.g.,][]{Gozdziewski2015}
\begin{equation}
\frac{\Delta P}{P_{\rm bin}} = 4 \pi \frac{K}{P_{\rm plan}} ~~ .
\end{equation}
In Table~\ref{tab:planets} we summarize the LTV properties of the proposed planetary systems as given in the literature we refer to in Table~\ref{tab:secondarys}.
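The tabulated values of $\Delta P/P_{\rm bin}$ can be reproduced from $K$ and $P_{\rm plan}$; a minimal sketch (NN~Ser values from Table~\ref{tab:planets}; a Julian year is assumed):

```python
import math

YEAR_S = 365.25 * 86400.0  # Julian year in seconds

def dp_over_p_from_ltv(k_s, p_plan_yr):
    """Relative binary period change from the LTV semi-amplitude K (s)
    and companion period (yr): Delta P / P_bin = 4 pi K / P_plan."""
    return 4.0 * math.pi * k_s / (p_plan_yr * YEAR_S)

# NN Ser: K = 27.65 s, P_plan = 15.482 yr
print(dp_over_p_from_ltv(27.65, 15.482))  # ~7.1e-7, as in the table
```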
\begin{table*}
\caption{Summary of the TTV data. If more than one planet is thought to be present, we only included the planet with the largest influence on the binary's period, i.e. the largest $\Delta P / P_{\rm bin}$.}
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
System & Semi-amplitude $K$/s & Period $P$/$\rm{yr}$ & $\Delta P$/$P_{bin}$ \\
\hline
HS 0705+6700 & 67 & 8.41 & $3.2\cdot 10^{-6}$ \\
HW Vir & 563 & 55 & $4.1\cdot 10^{-6}$ \\
NN Ser & 27.65 & 15.482 & $7.1\cdot 10^{-7}$ \\
NSVS14256825 & 20 & 6.86 & $1.2\cdot 10^{-6}$ \\
NY Vir & 12.2 & 7.9 & $6.1\cdot 10^{-7}$ \\
HU Aqr & 87.7 & 19.4 & $1.8\cdot 10^{-6}$ \\
QS Vir & 43 & 16.99 & $10^{-6}$ \\
RR Cae & 7.2 & 11.9 & $2.4\cdot 10^{-7}$ \\
UZ For & 21.6 & 16 & $5.4\cdot 10^{-7}$ \\
DP Leo & 16.9 & 28 & $2.4\cdot 10^{-7}$ \\
V471 Tau & 137.2 & 30.5 & $1.8\cdot 10^{-6}$ \\
\hline
RU Cnc & 21.4 & 37.6 & $2.3\cdot 10^{-7}$ \\
AW Her & 1410 & 12.8 & $4.4\cdot 10^{-5}$ \\
HR 1099 & 7900 & 35 & $9.0\cdot 10^{-5}$ \\
BX Dra & 532 & 30.2 & $3.5\cdot 10^{-6}$ \\
SZ Psc & 480 & 55.5 & $3.4\cdot 10^{-6}$ \\
\hline
\end{tabular}
\label{tab:planets}
\end{center}
\end{table*}
\section{Applegate's mechanism}
\label{sec:applegate}
In the following, we introduce the framework to assess the feasibility of Applegate's mechanism. For this purpose, we first remind the reader of the original Applegate framework and the resulting energy requirement to drive quadrupole moment variations of the secondary star. This is contrasted with the finite shell model by \citet{Brinkworth2006}, which considers the angular momentum exchange between the core and a finite shell and accounts for the backreaction of the core. Using this framework, we derive three different approximations of increasing accuracy: An analytical Applegate model assuming a constant density throughout the star, an analytical two-zone model employing different densities in the core and the shell, and a numerical model employing a full stellar density profile. Finally, we present an analytical two-zone model that precisely reproduces the results of the numerical model and leads to new insights into the limitations of the Applegate mechanism.
\subsection{Thin shell model by Applegate (1992)}
We consider a close PCEB system with a binary separation $a_{\rm bin}$ consisting of a White Dwarf, and a Red Dwarf secondary with mass $M_{\rm sec}$ and radius $R_{\rm sec}$. According to \citet{Applegate1992}, the relative period change $\Delta P / P_{bin}$ is related to the secondary's change in quadrupole moment $\Delta Q$ by
\begin{equation}
\frac{\Delta P}{P_{\rm bin}} = -\frac{9 \Delta Q}{a_{\rm bin}^{2} M_{\rm sec}} ~~ .
\end{equation}
As a result of the secondary's magnetic activity, angular momentum can be transferred between the core and its outer regions leading to a change in their respective angular velocities, causing a change in the oblateness of both the core and the outer shell and modulating the secondary's quadrupole moment. For the sake of simplicity, Applegate considered a thin homogeneous shell and neglected the change of the core's oblateness.

We divide the star into an inner core with radius $R_{\rm core}$ and initial angular velocity $\Omega _1$, and an adjacent outer shell with outer radius $R_{\rm sec}$ and angular velocity $\Omega _2$, both rotating as rigid objects. Under these assumptions, the required energy $\Delta E$ to drive a given period change via an exchange of angular momentum $\Delta J$ between core and shell is given as
\begin{equation}
\label{eq:applegate_energy}
\Delta E = ( \Omega _2 - \Omega _1 ) \Delta J + \frac{\Delta J ^{2}}{2 I_{\rm eff}},
\end{equation}
with the effective moment of inertia $I _{\rm eff} = I_{\rm core} I_{\rm shell} / (I_{\rm core}+I_{\rm shell})$ \citep[see, e.g.,][]{Applegate1992,Parsons2010}. Assuming a thin shell, one has $I_{\rm eff} = (1/3) M_{\rm shell} R_{\rm core}^{2}$. A given period change $\Delta P$ and an angular momentum exchange are connected via
\begin{equation}
\Delta J = \frac{G M_{\rm sec}^{2}}{R_{\rm sec}} \left( \frac{a_{\rm bin}}{R_{\rm sec}} \right)^{2} \frac{\Delta P}{6 \pi} ~~ .
\end{equation}
Following \citet{Parsons2010}, one fixes the shell mass and evaluates Eq.~\ref{eq:applegate_energy} for varying core radii and hence angular momentum exchange to solve for the energy minimum.
Combining the framework of \citet{Applegate1992} with an improved model for the variation of the quadrupole moment considering both the rotation and magnetic energy \citep{Lanza1998}, \citet{Tian2009} derived an approximate formula for the relation between the required energy to drive Applegate's mechanism and the observed eclipsing time variations:
\begin{equation}
\label{eq:tian}
\resizebox{\hsize}{!}
{
$\frac{\Delta E}{E_{\rm sec}} = 0.233 \left( \frac{M_{\rm sec}}{{\rm ~M}_{\sun}} \right)^{3} \left( \frac{R_{\rm sec}}{{\rm ~R}_{\sun}} \right)^{-10} \left( \frac{T_{\rm sec}}{6000~\rm{ K}} \right)^{-4} \left( \frac{a_{\rm bin}}{{\rm ~R}_{\sun}} \right)^{4} \left( \frac{\Delta P}{\rm{s}} \right)^{2} \left( \frac{P_{\rm mod}}{\rm{yr}} \right)^{-1} .$
}
\end{equation}
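Equation~(\ref{eq:tian}) is straightforward to evaluate numerically. As an illustration (a sketch; function and variable names are ours), we plug in the V471~Tau parameters of Tables~\ref{tab:secondarys} and \ref{tab:planets}, with $\Delta P = (\Delta P/P_{\rm bin})\,P_{\rm bin}$:

```python
def tian_ratio(m_sec, r_sec, t_sec, a_bin, dp_s, p_mod_yr):
    """Delta E / E_sec following the approximate formula of Tian et al.
    (2009): masses, radii and separations in solar units, T in K,
    Delta P in s, P_mod in yr."""
    return (0.233 * m_sec**3 * r_sec**(-10) * (t_sec / 6000.0)**(-4)
            * a_bin**4 * dp_s**2 / p_mod_yr)

# V471 Tau: M=0.93, R=0.96, T=5040 K, a=3.3 (Table 1);
# Delta P = 1.8e-6 * 0.522 d ~ 0.081 s, P_mod = 30.5 yr (Table 2)
dp = 1.8e-6 * 0.522 * 86400.0
print(tian_ratio(0.93, 0.96, 5040.0, 3.3, dp, 30.5))  # ~0.015, well below 1
```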
Both this approximate formula and Applegate's original work only consider the angular momentum exchange between a thin outer shell and a dominating core. Moreover, they do not take into account the core's rotational counter-reaction as a result of angular momentum conservation, i.e. an opposite change in the oblateness compensating a fraction of the change of the quadrupole moment. Thus, the energy required to power a certain level of period variation increases, and a more realistic description of Applegate's mechanism must include both components.
\subsection{Finite shell model by Brinkworth et al. (2006)}\label{sec:brinkworth}
In contrast to the original work by \citet{Applegate1992}, \citet{Brinkworth2006} derived an analytic expression for the effective change of the secondary's quadrupole moment $\Delta Q$ for a given change of core rotation $\Delta \Omega _1$ and shell rotation $\Delta \Omega _2$, which reads
\begin{equation}
\label{eq:omega2equa}
\Delta Q = Q' _1 \, \left[ 2 \Omega _1 \Delta \Omega _1 + ( \Delta \Omega _1 )^{2} \right] + Q' _2 \, \left[2 \Omega _2 \Delta \Omega _2 + ( \Delta \Omega _2 )^{2} \right] ~~ ,
\end{equation}
where the coefficients $Q' _1$ and $Q' _2$ are (imposing spherical symmetry) given by integrals of the form
\begin{eqnarray}
Q' _1 = \frac{4 \pi}{9 G} \int \limits _{0} ^{R_{\rm core}} \frac{r^{7} \rho(r)}{M(r)} \mathrm{d} r ~~ , \\
Q' _2 = \frac{4 \pi}{9 G} \int \limits _{R_{\rm core}} ^{R_{\rm sec}} \frac{r^{7} \rho(r)}{M(r)} \mathrm{d} r ~~ ,
\end{eqnarray}
with the secondary's radial density profile $\rho(r)$ and $M(r)$ being the total mass inside a radius $r$. Solving Eq.~\ref{eq:omega2equa} for $\Delta \Omega _2$ and using
\begin{equation}
\label{eq:deltaE}
\Delta E = ( \Omega _2 - \Omega _1 ) \cdot I_2 \Delta \Omega _2 + \frac{1}{2} \left[ \frac{1}{I_1} + \frac{1}{I_2} \right] ( I_2 \, \Delta \Omega _2 ) ^{2}
\end{equation}
with the moments of inertia
\begin{eqnarray}
I_1 = \frac{8 \pi}{3} \int \limits _{0} ^{R_{\rm core}} r^{4} \rho (r) \mathrm{d} r ~~ , \\
I_2 = \frac{8 \pi}{3} \int \limits _{R_{\rm core}} ^{R_{\rm sec}} r^{4} \rho (r) \mathrm{d} r ~~ ,
\end{eqnarray}
gives the total amount of energy needed to perform the angular momentum transfer.
The total number of parameters can be reduced by imposing angular momentum conservation, i.e. a lossless exchange
\begin{equation}
I_1 \Delta \Omega _1 + I_2 \Delta \Omega _2 = 0 ~~ .
\end{equation}
Following \citet{Brinkworth2006}, the minimum energy required to drive Applegate's mechanism is obtained by assuming no initial differential rotation, i.e.
\begin{equation}
\Omega _1 = \Omega _2 ~~ .
\end{equation}
Altogether, we can now solve Eq.~\ref{eq:omega2equa} for $\Delta \Omega _2$ and find
\begin{equation}
\label{eq:solutionOmega2}
\Delta \Omega _2 = -\Omega_2 \, \frac{\beta}{\alpha} \pm \sqrt{ \, \left[ \Omega _2 \, \frac{\beta}{\alpha} \right]^{2} + \frac{\Delta Q}{\alpha} } ~~ ,
\end{equation}
where we defined
\begin{equation}
\alpha := \gamma ^{2} Q' _1 + Q' _2 ~~ , ~~ \beta := - \gamma Q' _1 + Q' _2 ~~ , ~~ \gamma := \frac{I_2}{I_1}
\end{equation}
for convenience and as a preparation for the next sections.
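As a numerical sketch of this machinery (cgs units; function and variable names are ours), the following code evaluates Eqs.~(\ref{eq:omega2equa}), (\ref{eq:deltaE}) and (\ref{eq:solutionOmega2}) for a star of constant density, where $\beta=0$ and the dependence on $\Omega_2$ drops out; the result can be checked against the closed form derived in the next subsection:

```python
import math

G = 6.674e-8  # gravitational constant, cgs

def applegate_energy_const_rho(dq, omega2, m_sec, r_sec, r_core):
    """Minimum energy of the finite-shell scheme for a star of constant
    density rho = 3 M / (4 pi R^3): solve for Delta Omega_2 and insert
    it into the energy expression, assuming Omega_1 = Omega_2 and
    angular momentum conservation."""
    rho = 3.0 * m_sec / (4.0 * math.pi * r_sec**3)
    # moments of inertia of core and shell
    i1 = (8.0 * math.pi / 15.0) * rho * r_core**5
    i2 = (8.0 * math.pi / 15.0) * rho * (r_sec**5 - r_core**5)
    # quadrupole coefficients Q'_1, Q'_2 for M(r) = (4/3) pi rho r^3
    q1 = r_core**5 / (15.0 * G)
    q2 = (r_sec**5 - r_core**5) / (15.0 * G)
    gamma = i2 / i1
    alpha = gamma**2 * q1 + q2
    beta = -gamma * q1 + q2  # vanishes for constant density
    b = omega2 * beta / alpha
    d_om2 = -b + math.sqrt(b * b + abs(dq) / alpha)
    return 0.5 * (1.0 / i1 + 1.0 / i2) * (i2 * d_om2)**2

# The result is independent of the core radius for constant density:
m, r, a, dpp = 2.0e33, 7.0e10, 1.4e11, 1.0e-6
dq = a**2 * m / 9.0 * dpp
e1 = applegate_energy_const_rho(dq, 1e-4, m, r, 0.5 * r)
e2 = applegate_energy_const_rho(dq, 1e-4, m, r, 0.3 * r)
print(e1, e2)  # e1 and e2 agree; both match (G/3)*(dP/P)*a^2*M^2/R^3
```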
\subsection{The case of a constant density profile}\label{sec:constant}
As a zero-order approximation, the framework introduced by \citet{Brinkworth2006} can be evaluated assuming a constant density throughout the star. While this is clearly an approximation, it already gives rise to some phenomenological scaling relations to illustrate how the required energy for the Applegate mechanism depends on the binary separation and the mass of the secondary. Employing this approximation, the density in the secondary is now given as
\begin{equation}
\rho (r) = \bar{\rho} = \frac{3 M_{\rm sec}}{4 \pi R _{\rm sec} ^{3}} ~~ .
\end{equation}
Given this and defining $ \xi = R_{\rm sec}/R_{\rm core} $, we can calculate explicit expressions for the coefficients in eq.~\ref{eq:solutionOmega2}:
\begin{align*}
\alpha = \frac{R_{\rm core}^{5}}{15 G} \cdot ( \xi ^{10} - \xi ^{5} ) ~~ , ~~ \beta = 0 ~~ , ~~ \gamma = \xi ^{5} - 1 ~~ .
\end{align*}
In this context, $\Delta \Omega _2$ is given as
\begin{equation}
\Delta \Omega _2 = \pm \sqrt{\frac{15G \, \Delta Q}{R_{\rm core} ^{5} \, [\xi ^{10} - \xi ^{5}]}} ~~ .
\end{equation}
Inserting this into eq.~\ref{eq:deltaE} eliminates the dependence on both $R_{\rm core}$ and $\Omega _2$ and finally yields
\begin{equation}
\Delta E = \frac{G}{3} \cdot \left( \frac{\Delta P}{P_{\rm bin}} \right) \cdot \frac{a_{\rm bin}^{2} \, M_{\rm sec}^{2}}{R_{\rm sec}^{3}} ~~ ,
\end{equation}
or in a more practical form
\begin{equation}
\label{eq:delta_e_simple}
\Delta E \simeq 1.3 \cdot 10^{48} \rm{ erg} \, \left( \frac{\Delta P}{P_{\rm bin}} \right) \, \left( \frac{a_{\rm bin}}{{\rm ~R}_{\sun}} \right)^{2} \, \left( \frac{M_{\rm sec}}{{\rm ~M}_{\sun}} \right)^{2} \, \left( \frac{R_{\rm sec}}{{\rm ~R}_{\sun}} \right)^{-3},
\end{equation}
which can also be expressed in terms of a relative Applegate threshold energy via division by the energy provided over one modulation period by the secondary:
\begin{equation}
\frac{\Delta E}{E_{sec}} \simeq 1.1 \cdot 10^{7} \, \left( \frac{\Delta P}{P_{\rm bin}} \right) \, \left( \frac{a_{\rm bin}}{{\rm ~R}_{\sun}} \right)^{2} \, \left( \frac{M_{\rm sec}}{{\rm ~M}_{\sun}} \right)^{2} \, \left( \frac{R_{\rm sec}}{{\rm ~R}_{\sun}} \right)^{-3} \, \left( \frac{P_{\rm mod}}{\rm{yr}} \right)^{-1} \, \left( \frac{L_{\rm sec}}{{\rm ~L}_{\sun}} \right)^{-1}.
\end{equation}
The main goal of this calculation is to provide a simple model giving rough order-of-magnitude estimates as well as the basic scaling properties of the Applegate mechanism. It is interesting to note that this zero-order estimate is independent of the angular velocity of the star, and does not require an assumption about stellar rotation. The results of this estimate along with those for the improved models are given in Table~\ref{tab:results}. In the two-zone model presented in the next subsection, the required energy will include a dependence on the rotation rate, which can be estimated from the assumption of tidal locking.
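To make the scaling concrete, a sketch evaluating Eq.~(\ref{eq:delta_e_simple}) and the relative threshold for NN~Ser (parameters from Tables~\ref{tab:secondarys} and \ref{tab:planets}; $E_{\rm sec}=L_{\rm sec}P_{\rm mod}$; function names are ours):

```python
def applegate_dE_erg(dpp, a_bin, m_sec, r_sec):
    """Constant-density Applegate energy in erg; a, M, R in solar units."""
    return 1.3e48 * dpp * a_bin**2 * m_sec**2 / r_sec**3

def applegate_dE_over_Esec(dpp, a_bin, m_sec, r_sec, p_mod_yr, l_sec):
    """Relative Applegate threshold Delta E / E_sec; L in solar units."""
    return 1.1e7 * dpp * a_bin**2 * m_sec**2 / r_sec**3 / p_mod_yr / l_sec

# NN Ser: a=0.934, M=0.111, R=0.149, L=0.00147 (Table 1);
# dP/P = 7.1e-7, P_mod = 15.482 yr (Table 2)
ratio = applegate_dE_over_Esec(7.1e-7, 0.934, 0.111, 0.149, 15.482, 0.00147)
print(ratio)  # ~1e3: far above unity, so the zero-order model fails for NN Ser
```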
\subsection{An analytical two-zone model}\label{twozone}
\subsubsection{Derivation}
While the model above already provides an interesting scaling relation including the dependence on binary separation and the mass of the secondary, it is instructive to further understand the dependence on the stellar structure and the rotation of the secondary. For this purpose, we consider now a stellar density profile with a density $\rho_1$ in the core and $\rho_2$ in the shell. We denote the size of the core as $R_{\rm core}$. The density profile is then given as
\begin{equation}
\rho(r) = \begin{cases}
\rho_1 & 0 \leq r \leq R_{\rm core}, \\
\rho_2 & R_{\rm core} < r \leq R_{\rm sec},
\end{cases}
\end{equation}
and the main parameters can be summarized as
\begin{equation}
\lambda = \rho _2 / \rho _1 ~~~,~~~ \xi = R_{\rm star} / R_{\rm core}.
\end{equation}
We can calculate that
\begin{align*}
\alpha = \frac{\lambda R_{\rm core}^{5}}{15 G} \cdot \left( \lambda [\xi ^{5} -1]^{2} + f(\xi, \lambda) \right), \\
\beta = \frac{\lambda R_{\rm core}^{5}}{15 G} \cdot \left( f(\xi,\lambda) - \gamma \right), \\
\gamma = \lambda \left[ \xi ^{5} - 1 \right],
\end{align*}
where the function $f$ is given by
\begin{equation}
f(\xi, \lambda) = \int \limits ^{\xi} _{1} \frac{5 \cdot x^{7}}{1 - \lambda + \lambda x^{3}} \mathrm{d} x.
\end{equation}
The two solutions for the angular velocity change are given as
\begin{equation}
\Delta \Omega _2 = -\Omega_2 \cdot \frac{\beta}{\alpha} \cdot \left( 1 \pm \sqrt{1 + \frac{\alpha \Delta Q}{\beta^{2} \Omega^{2} _2 }} \, \right)
\end{equation}
with
\begin{equation}
\Delta Q = - \frac{a_{\rm bin}^{2} M_{\rm sec}}{9} \frac{\Delta P}{P_{\rm bin}}.
\end{equation}
We assume now that the star is tidally locked, implying that the orbital period of the secondary is given as the orbital period of the binary system, i.e. $\Omega _2 = 2 \pi / P_{\rm bin}$. Explicitly, we then have
\begin{equation}\label{eq:del_om2_tzm}
\Delta \Omega _2 = - \frac{2 \pi}{P_{\rm bin}} \frac{f-\gamma}{\gamma^{2}/\lambda + f} \left( 1 \pm \sqrt{1 - G k_2 \, \frac{a_{\rm bin}^{2} M_{\rm sec} P_{bin}^{2}}{R_{\rm sec}^{5}} \frac{\Delta P}{P_{\rm bin}}} \right)
\end{equation}
with the coefficient
\begin{equation}\label{eq:tzm_k2}
k_2 = \frac{15}{36 \pi^{2}} \, \frac{\xi^{5}}{\lambda} \, \frac{(\gamma^{2}/\lambda + f)}{(f-\gamma)^{2}} ~~ .
\end{equation}
The energy required to drive the angular momentum exchange is
\begin{equation}
\Delta E = \frac{1}{2} (\gamma +1) (\xi^{5}-1) \frac{8 \pi \rho _2}{15} R_{\rm core}^{5} \Delta \Omega _2 ^{2},
\end{equation}
which together with the identity
\begin{equation}
\frac{4 \pi}{3} \bar{\rho} R_{\rm star}^{3} = \frac{4 \pi}{3} \rho_1 R_{\rm core}^{3} + \frac{4 \pi}{3} \rho_2 ( R_{\rm star}^{3} - R_{\rm core}^{3} )
\end{equation}
results in
\begin{equation}
\label{eq:tzm_energy}
\Delta E = \frac{1}{5} \frac{(\gamma+1)(\xi^{3}-\xi^{-2})}{1+\lambda(\xi^{3}-1)} \lambda M_{\rm star} R_{\rm star}^{2} \Delta \Omega^{2} _2,
\end{equation}
or with Eq.~\ref{eq:del_om2_tzm}
\begin{equation}
\label{eq:tzm_energy_full}
\Delta E = k_1 \cdot \frac{M_{\rm sec} R_{sec}^{2}}{P_{\rm bin}^{2}} \cdot \left( 1 \pm \sqrt{1 - k_2 \, G \, \frac{a_{\rm bin}^{2} M_{\rm sec} P_{bin}^{2}}{R_{\rm sec}^{5}} \frac{\Delta P}{P_{\rm bin}}} \right)^{2},
\end{equation}
where we defined the coefficient
\begin{equation}\label{eq:tzm_k1}
k_1 = \frac{4 \pi^{2}}{5} \, \frac{\lambda (\gamma+1) (\xi^{3}-\xi^{-2})}{1+\lambda (\xi^{3}-1)} \, \frac{(f-\gamma)^{2}}{(\gamma^{2}/ \lambda +f)^{2}} ~~ .
\end{equation}
According to the last section, we can relate eq.~\ref{eq:tzm_energy_full} to the energy provided over one modulation period:
\begin{equation}
\frac{\Delta E}{E_{sec}} = k_1 \cdot \frac{M_{\rm sec} R_{sec}^{2}}{P_{\rm bin}^{2} P_{mod} L_{sec}} \cdot \left( 1 \pm \sqrt{1 - k_2 \, G \, \frac{a_{\rm bin}^{2} M_{\rm sec} P_{bin}^{2}}{R_{\rm sec}^{5}} \frac{\Delta P}{P_{\rm bin}}} \right)^{2}.
\end{equation}
For low-mass stars, typical values are $\xi \sim 4/3$ and $\lambda \sim 1/100$ (see sec.~\ref{sec:results}). As the inverse zone transition parameter $\xi$ shows only negligible variations for typical PCEB systems, we can fix it to $\xi = 4/3$ and determine the zone density contrast parameter $\lambda$ that is most compatible with the systems examined in this paper (see tab.~\ref{tab:secondarys}). We set $\lambda = 0.00960$, implying $f \sim 5.57$. Given these values, we can numerically evaluate the coefficients, yielding
\begin{equation}
k_1 = 0.133 ~~~,~~~ k_2 = 3.42 ~~ .
\end{equation}
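These numbers can be reproduced from eqs.~\ref{eq:tzm_k1} and \ref{eq:tzm_k2} together with the integral definition of $f$. The short Python sketch below (plain composite Simpson quadrature, standard library only) is a sanity check of the quoted coefficients, not part of any published pipeline:

```python
import math

lam, xi = 0.0096, 4.0 / 3.0   # zone density contrast and inverse transition parameter

def f_integral(xi, lam, n=1000):
    """Composite Simpson integration of f = int_1^xi 5 x^7 / (1 - lam + lam x^3) dx."""
    g = lambda x: 5.0 * x**7 / (1.0 - lam + lam * x**3)
    h = (xi - 1.0) / n
    s = g(1.0) + g(xi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(1.0 + i * h)
    return s * h / 3.0

f = f_integral(xi, lam)
gamma = lam * (xi**5 - 1.0)

k1 = (4.0 * math.pi**2 / 5.0) \
     * lam * (gamma + 1.0) * (xi**3 - xi**-2) / (1.0 + lam * (xi**3 - 1.0)) \
     * (f - gamma)**2 / (gamma**2 / lam + f)**2
k2 = (15.0 / (36.0 * math.pi**2)) * (xi**5 / lam) \
     * (gamma**2 / lam + f) / (f - gamma)**2

print(f"f = {f:.3f}, k1 = {k1:.3f}, k2 = {k2:.3f}")  # f ~ 5.57, k1 ~ 0.133, k2 ~ 3.42
```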
\subsubsection{A critical condition}
In contrast to the constant density approximation, the two-zone model incorporates all essential physics involved in the Applegate process. In particular, it accounts for the orbital period of the binary, resulting in two different energy branches just as in the full treatment (see sec.~\ref{sec:brinkworth}). Here, the lower energy branch corresponds to the negative solution. Another implication results from the term inside the root: restricting ourselves to real-valued solutions yields the critical condition
\begin{equation}
\label{eq:tzm_condition}
k_2 \, G \, \frac{a_{\rm bin}^{2} P^{2}_{\rm bin} M_{\rm sec}}{R_{\rm sec}^{5}} \frac{\Delta P}{P_{\rm bin}} := A \leq 1 ~~ .
\end{equation}
Mathematically, systems which do not satisfy eq.~\ref{eq:tzm_condition} (hereafter, we refer to the left-hand side as the Applegate parameter $A$) cannot drive the observed period change independently of energetic arguments, as no real-valued solution exists. On the other hand, one can show that in the case of a critical system, for which the Applegate parameter is unity, the two-zone model and the constant density model converge. In such a system, the energy to drive the Applegate process is given by
\begin{equation}
\Delta E = k_1 \cdot \frac{M_{\rm sec} R_{sec}^{2}}{P_{\rm bin}^{2}} ~~ .
\end{equation}
Substituting eq.~\ref{eq:tzm_condition} for the binary period leads to
\begin{equation}
\Delta E = k_1 \, k_2 \, G \, \cdot \left( \frac{\Delta P}{P_{bin}} \right) \cdot \frac{a_{\rm bin}^{2} \, M_{\rm sec}^{2}}{R_{\rm sec}^{3}} ~~ .
\end{equation}
In the case of constant density, $\lambda=1$, which means that $f=\xi^{5}-1 = \gamma$, and we end up with $k_1 \cdot k_2 = 1/3$, proving that the two-zone model and the constant density model give identical Applegate energies for critical systems. Using this, we can understand the physical meaning of critical systems by looking into the angular momentum budget of the star and its rotational state. For a critical system, the change of the outer shell's angular velocity (cf. eq.~\ref{eq:del_om2_tzm}) is given by
\begin{equation}
\Delta \Omega _2 = -\frac{2 \pi}{P_{bin}} \cdot \frac{f - \gamma}{\gamma^{2}/\lambda+f} ~~ .
\end{equation}
Now, let $\lambda \rightarrow 1$, which implies $f \rightarrow \gamma$, because for critical systems the two-zone model and the constant density model converge, as we showed above. Non-zero solutions impose $\xi \rightarrow 1$, and we arrive at
\begin{equation}
\Delta \Omega _2 = \lim\limits_{\xi \rightarrow 1}{-\frac{2 \pi}{P_{bin}} \cdot \frac{\xi^{5}-1 - \xi^{5}+1}{\xi^{10}-\xi^{5}+\xi^{5}-1}} = -\frac{2 \pi}{P_{bin}}
\end{equation}
proving that the constant density model is the limit of a two-zone calculation for the extreme case that the outer shell and its angular velocity vanish.
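To illustrate the two branches of eq.~\ref{eq:del_om2_tzm} and the role of the Applegate parameter, the following Python sketch evaluates both solutions for an illustrative compact PCEB. The system parameters ($P_{\rm bin} = 0.1$~d, $a_{\rm bin} = 1~{\rm R}_{\sun}$, $M_{\rm sec} = 0.15~{\rm M}_{\sun}$, $R_{\rm sec} = 0.15~{\rm R}_{\sun}$, $\Delta P/P_{\rm bin} = 10^{-7}$) are assumptions chosen for demonstration, not fitted values:

```python
import math

# Two-zone coefficients as quoted in the text
lam, xi, f, k2 = 0.0096, 4.0 / 3.0, 5.57, 3.42
gamma = lam * (xi**5 - 1.0)

# Illustrative compact PCEB (assumed demonstration values, cgs units)
G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.957e10
P_bin = 0.1 * 86400.0                      # binary period [s]
a_bin = 1.0 * R_sun                        # binary separation [cm]
M_sec, R_sec = 0.15 * M_sun, 0.15 * R_sun
dP_over_P = 1e-7                           # relative period change

# Applegate parameter (left-hand side of the critical condition)
A = k2 * G * a_bin**2 * P_bin**2 * M_sec / R_sec**5 * dP_over_P

# Both branches of the angular velocity change of the shell
pre = -(2.0 * math.pi / P_bin) * (f - gamma) / (gamma**2 / lam + f)
root = math.sqrt(1.0 - A)                  # real-valued because A < 1 here
dOmega_plus = pre * (1.0 + root)           # high-energy branch
dOmega_minus = pre * (1.0 - root)          # low-energy branch
print(f"A = {A:.3f}, dOmega- = {dOmega_minus:.2e}, dOmega+ = {dOmega_plus:.2e}")
```

For this configuration $A \ll 1$, and the negative root gives an angular velocity change roughly two orders of magnitude smaller than the positive one, which is why it corresponds to the energetically favourable branch.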
\subsubsection{Quality of the approximation}
The quality of the analytical two-zone model becomes clear from Fig.~\ref{fig:tzm_test}, where we compare the estimates calculated with the two-zone model to the full calculations described in the next subsection. The full model considers a white dwarf primary with $0.5~{\rm ~M}_{\sun}$ accompanied by a fairly evolved secondary star with $t \sim 5~\mbox{ Gyr}$, while the two-zone model employs $\lambda=0.0096$ and $\xi =4/3$, consistent with the typical structure of a low-mass star. In both calculations, we assume a relative period change of $\Delta P / P_{bin} = 10^{-7}$ with a modulation period of $P_{\rm mod} = 14~\rm{yr}$, corresponding to a Jupiter-like planet with mass $\sim 3~{\rm ~M}_{Jup}$ and semi-major axis $\sim 5~\rm{au}$, and we investigate the results for varying Applegate parameters $A$. As one can see, the typical deviation of the required energy for the Applegate mechanism corresponds to less than $10\%$ between the two models, provided that the Applegate condition (Eq.~\ref{eq:tzm_condition}) is satisfied with a left-hand side much smaller than unity. We adopt $A \lesssim 0.5$ as a typical fiducial limit.
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{tzm_test.eps}}
\caption{The relative difference between the two-zone model described in section \ref{twozone} with the full model considering a realistic stellar density profile as outlined in section \ref{sec:recipe}. The model assumes a white dwarf mass of $0.5$~M$_\odot$ and an age of the secondary of $5$~Gyr. As parameters in the two-zone model, we employ $\lambda=0.0096$ and $\xi=4/3$. The relative deviation of the results is shown as a function of the Applegate parameter $A$ and for different secondary masses. A good approximation requires $A \lesssim 0.5$.}
\label{fig:tzm_test}
\end{figure}
Finally, in Fig.~\ref{fig:comp_models} we compare the computational results of all three models described in the last two sections, namely the constant density approximation, the two-zone model, and the full model, for the same general system configuration as in Fig.~\ref{fig:tzm_test} and varying binary separation. Here, we fixed the secondary mass to $M_{sec}=0.3~{\rm ~M}_{\sun}$, allowing us to compare the models for varying Applegate parameter. Over almost the entire parameter range, the two-zone model closely reproduces the full calculations with absolute deviations of less than $10\%$, while the constant density model is off by several orders of magnitude. Both the full calculations and the two-zone model fail beyond $A \sim 1$ (corresponding to $a_{bin} \sim 3.5~{\rm ~R}_{\sun}$), exactly as predicted by eq.~\ref{eq:tzm_condition}.
We therefore conclude that the two-zone model provides a valuable and sufficient approximation to estimate the required energy for the case of PCEBs with main sequence low-mass companions in the range of $0.1~{\rm ~M}_{\sun}$ to $0.6~{\rm ~M}_{\sun}$ that satisfy the Applegate condition eq.~\ref{eq:tzm_condition}. Nevertheless, we will evaluate our main results with the more detailed numerical framework, which includes full stellar density profiles and varying core radii.
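The critical separation quoted above ($A \sim 1$ near $a_{\rm bin} \sim 3.5~{\rm R}_{\sun}$) can be recovered in a few lines by combining eq.~\ref{eq:tzm_condition} with Kepler's third law for the tidally locked binary. The sketch below follows the configuration of this subsection; the relation $R_{\rm sec} \approx 0.3~{\rm R}_{\sun}$ for a $0.3~{\rm M}_{\sun}$ star is an assumed mass-radius scaling:

```python
import math

# Reproduce the critical separation A ~ 1 near a_bin ~ 3.5 R_sun for the
# configuration of this subsection (M_WD = 0.5 M_sun, M_sec = 0.3 M_sun,
# dP/P = 1e-7); R_sec ~ 0.3 R_sun is an assumed mass-radius scaling.
G, M_sun, R_sun = 6.674e-8, 1.989e33, 6.957e10   # cgs
k2 = 3.42
M_wd, M_sec = 0.5 * M_sun, 0.3 * M_sun
R_sec = 0.3 * R_sun
dP_over_P = 1e-7

def applegate_A(a_bin):
    """Applegate parameter A for a tidally locked binary of separation a_bin [cm]."""
    P_bin = 2.0 * math.pi * math.sqrt(a_bin**3 / (G * (M_wd + M_sec)))  # Kepler III
    return k2 * G * a_bin**2 * P_bin**2 * M_sec / R_sec**5 * dP_over_P

for a in (1.0, 2.0, 3.5):
    print(f"a_bin = {a:.1f} R_sun -> A = {applegate_A(a * R_sun):.3f}")
```

Since $P_{\rm bin}^{2} \propto a_{\rm bin}^{3}$, the Applegate parameter grows as $a_{\rm bin}^{5}$ at fixed secondary, so the transition to $A > 1$ is sharp.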
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{comp_models.eps}}
\caption{In this figure we compare the two analytical models, namely the constant density and the two-zone models as described in sec.~\ref{sec:constant} and sec.~\ref{twozone}, with the full calculations as described in sec.~\ref{sec:recipe} in terms of the predicted relative Applegate threshold energy. We consider the same general system configuration as in Fig.~\ref{fig:tzm_test} and described in sec.~\ref{twozone}, but fixed the secondary mass to $M_{sec} = 0.3~{\rm ~M}_{\sun}$, enabling us to plot the calculated energies as functions of the Applegate parameter $A$ (see eq.~\ref{eq:tzm_condition}). The analytical two-zone model closely follows the full calculations with typical deviations of less than $10\%$, while the simple constant density model overestimates the energy threshold by several orders of magnitude except for the critical region close to unity.}
\label{fig:comp_models}
\end{figure}
\section{Full calculations with a stellar density profile}
\subsection{Recipe}
\label{sec:recipe}
In addition to the constant density profile and the two-zone model, we present here the framework to derive the required energy to drive Applegate's mechanism based on detailed and realistic density profiles. The latter requires a numerical solution for the coefficients involved in Eq.~\ref{eq:solutionOmega2}. In this section, we will describe our general framework and apply it to NN Ser. The results for the additional systems will be given in section~\ref{sec:results}.\\
From Eq.~\ref{eq:solutionOmega2}, we expect that the effective change of the shell's angular velocity $\Delta \Omega _2$ is an explicit function of its initial angular velocity $\Omega _2$ and an implicit function of the core radius $R_{core}$ via the moments of inertia $I_{1,2}$ and the $Q'_{1,2}$ coefficients. Our procedure is thus as follows:
First we calculate $I_{1,2}$ and $Q'_{1,2}$ for a given core radius $R_{\rm core}$ utilizing a radial density profile provided by \textit{Evolve ZAMS}\footnote{Webpage Evolve ZAMS: http://www.astro.wisc.edu/\~{}townsend/static.php?ref=ez-web} \citep{Paxton2004}, but re-scaled to the secondary's radius and normalized to its mass as inferred by \citet{Parsons2010NNSer} to resemble the measurements. We adopt an age value of $\sim 2~\mbox{ Gyr}$, which is roughly the main sequence lifetime of the $2~{\rm ~M}_{\sun}$ WD progenitor \citep[see][]{Beuermann2010}. Unless stated otherwise, we will in the following assume a solar metallicity.
Based on that, we can calculate the two solutions for $\Delta \Omega _2$ for a given initial rotation $\Omega _2$, named $\Delta \Omega _2 ^{+}$ for the positive sign and $\Delta \Omega _2 ^{-}$ respectively. These two solutions finally give two different corresponding energies $\Delta E ^{+}$ and $\Delta E ^{-}$.
According to \citet{Haefner2004}, the NN Ser system is tidally locked, constraining the initial angular velocities to
\begin{equation}
\Omega _1 = \Omega _2 = \frac{2 \pi}{P_{\rm bin}} := \Omega _{\rm bin} ~~ ,
\end{equation}
with $P_{\rm bin}$ the orbital period of the binary \citep[see, e.g.,][]{Brinkworth2006,Lanza2006}. Given the condition of tidal locking, the only remaining parameter is the core radius $R_{\rm core}$. We do not restrict it to equal the nuclear burning zone, as typical Red Dwarfs are fully convective \citep[see, e.g.,][]{Engle2011}, but rather explore the minimum energy that can be obtained depending on the core radius. The latter is parametrized as a fraction $\delta$ of the stellar radius. The relative period change $\Delta P / P_{\rm bin}$ of the binary is calculated assuming sinusoidal perturbations by planets on circular orbits via
\begin{equation}
\frac{\Delta P}{P_{\rm bin}} = 4 \pi \frac{K}{P_{\rm mod}}
\end{equation}
with semi-amplitude $K$ and modulation period $P_{\rm mod}$ \citep[see, e.g.,][]{Applegate1992, Gozdziewski2015}, neglecting small variations due to orbital eccentricity in both the binary or the planet.
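For orientation, converting an observed O-C semi-amplitude into a relative period change is a one-liner; the values of $K$ and $P_{\rm mod}$ below are purely illustrative, not measurements from our sample:

```python
import math

K = 30.0                  # illustrative O-C semi-amplitude [s]
P_mod = 14.0 * 3.156e7    # illustrative modulation period: 14 yr in seconds

# dP/P_bin = 4 pi K / P_mod for a sinusoidal perturbation on a circular orbit
dP_over_P = 4.0 * math.pi * K / P_mod
print(f"dP/P = {dP_over_P:.2e}")   # ~8.5e-7 for these values
```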
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{nnser_omega.eps}}
\caption{The angular velocity change normalized by the angular velocity of the binary both in the core ($\Delta \Omega_1$) and the shell ($\Delta \Omega_2$) for our reference system NN Ser as a function of the size of the core. For both quantities, there are two solutions, denoted with + and -. The calculation is based on the full numerical model described in section~\ref{sec:recipe}.}
\label{fig:nnser_omega}
\end{figure}
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{nnser_energy.eps}}
\caption{The required energy to drive the eclipsing time observations for the solutions denoted with + and - in Fig.~\ref{fig:nnser_omega}. The calculation is based on the full numerical model described in section~\ref{sec:recipe}.}
\label{fig:nnser_energy}
\end{figure}
As one can see from Fig.~\ref{fig:nnser_omega}, we have two possible solutions corresponding to two different angular momentum transfer modes that are compatible with the boundary conditions. Both modes correspond to different energies. We plot them in Fig.~\ref{fig:nnser_energy} as a function of the fractional core radius $\delta := r_{\rm core} / r_{\rm sec}$, showing that the two branches converge as $\delta$ approaches either $0.3$ or $0.9$, but typically feature a prominent spread of several orders of magnitude in between. Outside this parameter space, only complex-valued solutions exist. The minimum energy for the modified Applegate mechanism (hereafter $\Delta E_{\rm min}$) is achieved at a fractional core radius of $\delta _{\rm min} \sim 0.75$ for the lower branch.\\
Finally, we compare the energy needed to drive the Applegate mechanism over one modulation period $P_{\rm mod}$ with the total energy generated by the secondary, i.e.
\begin{equation}
E_{\rm sec} = P_{\rm mod} \cdot L_{\rm sec} ~~ .
\end{equation}
In order to be viable, we require
\begin{equation}
\Delta E_{\rm min} < E_{\rm sec} ~~ ,
\end{equation}
knowing that a more realistic condition would require $\Delta E_{\rm min} \ll E_{\rm sec}$, as the energy required for the quadrupole moment oscillations should correspond to a minor fraction of the available energy produced by the star.
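The viability check thus reduces to comparing two numbers. A minimal sketch with roughly NN~Ser-like inputs, where the luminosity and modulation period are assumed illustrative values rather than quantities quoted in this paper:

```python
# Energy provided by the secondary over one modulation period, with roughly
# NN Ser-like inputs; L_sec and P_mod are assumed illustrative values.
L_sun, yr = 3.828e33, 3.156e7      # cgs solar luminosity and one year
L_sec = 0.0015 * L_sun             # assumed secondary luminosity [erg/s]
P_mod = 15.5 * yr                  # assumed modulation period [s]

E_sec = P_mod * L_sec
print(f"E_sec = {E_sec:.1e} erg")  # same order as the tabulated value for NN Ser

# With the full-model ratio Delta E_min / E_sec = 64 for NN Ser (Table "results"):
dE_min = 64.0 * E_sec
print("viable" if dE_min < E_sec else "not viable")  # prints "not viable"
```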
In Tables~\ref{tab:secondarys} and \ref{tab:planets}, we summarize the basic properties of all systems investigated in this paper. Our sample is based on the compilation of PCEBs with potential planets described by \citet{Zorotovic2013}, as well as four RS~CVn binaries RU~Cnc, AW~Her \citep{Tian2009}, HR~1099 \citep{Fekel1983, Garcia2003, Frasca2004} and SZ~Psc \citep{Jakate1976, Popper1988, Wang2010}, and the RR-Lyr type binary BX~Dra \citep{Park2013}.
\subsection{How does the age affect our results?}\label{sec:time}
Because of their low mass and luminosity, typical Red Dwarfs of $\sim 0.1 {\rm ~M}_{\sun}$ have main sequence lifetimes of several $100 \mbox{ Gyr}$ \citep[see, e.g.,][]{Engle2011}. On timescales of tens of $\mbox{ Gyr}$, their fundamental properties and internal structures remain virtually constant.\\
Even for more massive dwarfs such as in \textit{QS Vir} with a mass of $0.43~{\rm ~M}_{\sun}$, the radial density profile calculated with \textit{Evolve ZAMS} shows little variation even for the two extreme cases of $t \sim 1~\mbox{ Gyr}$ and $t \sim 14~\mbox{ Gyr}$: while its radius increased by $\sim 1~\%$, the core density increased by slightly more than $\sim 10~\%$, concentrating more mass in the center (see Fig.~\ref{fig:qsvir_density}). Calculating the relative energy threshold to drive the Applegate process as described in sec.~\ref{sec:recipe} assuming $\Delta P / P_{bin} = 10^{-6}$, we find:
\begin{itemize}
\item $t = 1~\mbox{ Gyr}$: $\Delta E_{min}/E_{sec} = 0.615$ at $\delta = 0.729$
\item $t = 14~\mbox{ Gyr}$: $\Delta E_{min}/E_{sec} = 0.692$ at $\delta = 0.713$
\end{itemize}
As both results differ by just $\sim 10 \%$, we conclude that the age of the system is a higher-order effect for the typical systems examined in this paper. We therefore adopt a canonical value of $t = 5~\mbox{ Gyr}$ if no age estimates are given in the literature. For the binaries with evolved components, we estimate the age with \textit{Evolve ZAMS} by solving for the measured stellar radii.
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{qsvir_density_evo.eps}}
\caption{The density profile of a red dwarf with $0.1$~M$_\odot$ for a lifetime of $1$~Gyr and $14$~Gyrs. The variation in the core density due to the different lifetime corresponds to only about $10\%$. The calculation assumes a solar metallicity.}
\label{fig:qsvir_density}
\end{figure}
\subsection{Dependence on metallicity variations}\label{sec:metallicity}
Here we explore how much the model results can be affected by uncertainties in the metallicity variation. Using \textit{Evolve ZAMS}, we have calculated the stellar density profile for the secondary in NN~Ser for metallicities between $10^{-4}$ and $3\times10^{-2}$, assuming a generic age of $5$~Gyrs. The resulting density profile is given in Fig.~\ref{fig:qsvir_metallicity}, showing a maximum variation in the core density of about $20\%$.
We further summarize the resulting changes in the stellar luminosity, radius and surface temperature in Table~\ref{tab:metallicity}. While the maximum variations in radius and surface temperature correspond to about $20\%$, the variation in the luminosity corresponds to a factor of $2$ between the extreme cases. Considering that the metallicity is varied by more than two orders of magnitude, the latter still corresponds to a minor uncertainty.
\begin{table}
\caption{Fundamental calculated secondary parameters for varying metallicity values.}
\begin{center}
\begin{tabular}{c|c|c|c}
\hline
Metallicity & Luminosity/${\rm ~L}_{\sun}$ & Radius/${\rm ~R}_{\sun}$ & Surf.temp./$\rm{ K}$ \\
\hline
Z=0.0001 & 0.0444 & 0.377 & 4,320 \\
Z=0.004 & 0.0325 & 0.398 & 3,890 \\
Z=0.03 & 0.0255 & 0.401 & 3,650 \\
\hline
\end{tabular}
\label{tab:metallicity}
\end{center}
\end{table}
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{qsvir_metallicity.eps}}
\caption{The density profile of a red dwarf with $0.1$~M$_\odot$ for metallicities between $10^{-4}$ and $3\times10^{-2}$. The maximum variation in the core density corresponds to about $20\%$. The calculation assumes a generic age of $5$~Gyrs.}
\label{fig:qsvir_metallicity}
\end{figure}
Using the calculated stellar density profiles as input, we can now use the formalism developed in section~\ref{sec:recipe} to determine the required energy to drive the eclipsing time variations $\Delta E_{\rm min}$ and compare them to the energy $E_{\rm sec}$ produced by the secondary within one modulation period. For the metallicities investigated here, we find the following results:
\begin{itemize}
\item $Z = 0.0001$: $\Delta E_{min}/E_{sec} = 0.593$
\item $Z = 0.004$: $\Delta E_{min}/E_{sec} = 0.502$
\item $Z = 0.03$: $\Delta E_{min}/E_{sec} = 0.649$
\end{itemize}
Again, the scatter in the ratio $\Delta E_{\rm min}/E_{\rm sec}$ is at the $10\%$-$20\%$ level even for the large range of metallicities considered here. While the latter contributes to the overall uncertainty, an incorrect metallicity estimate cannot significantly affect the question of whether the Applegate mechanism is feasible.
\section{Results}
\label{sec:results}
In this section, we will apply the framework from the previous section (including the detailed model presented in section~\ref{sec:recipe}) to investigate how the feasibility of the Applegate model depends on the properties of the binary system. We will further apply it to the sample presented in section~\ref{sec:systems} to assess for which of these systems the eclipsing time observations can be explained with the Applegate mechanism.
\subsection{Parameter study}\label{sec:param_study}
In the following, we consider a close binary system with varying separation consisting of a $0.5~{\rm ~M}_{\sun}$ White Dwarf and a Red Dwarf companion with different masses in the range of $0.15~{\rm ~M}_{\sun}$ to $0.6~{\rm ~M}_{\sun}$. We assume a fixed relative period variation $\Delta P / P_{bin} = 10^{-7}$ with a modulation period of $P_{\rm mod} = 14~\rm{yr}$, corresponding to a Jupiter-like planet with mass $\sim 3~{\rm ~M}_{Jup}$ and semi-major axis $\sim 5~\rm{au}$.
Using the model presented in section~\ref{sec:recipe}, we determine the required energy to drive the Applegate mechanism as a function of binary separation for different masses of the secondary. The results of this calculation are presented in Fig.~\ref{fig:param_study}. For all secondary masses, the Applegate threshold energy increases with binary separation, as a larger quadrupole moment variation must be generated to produce the same period variation. The ratio $\Delta E_{\rm min}/E_{\rm sec}$ decreases with increasing mass of the secondary, as more massive secondaries produce higher stellar luminosities and thus more energy that is potentially available to drive Applegate's mechanism. The latter implies that more massive secondaries are particularly well-suited to produce quadrupole moment variations, while it is more difficult for low-mass companions as observed in NN~Ser.
Qualitatively, we can understand the scaling behavior seen in Fig.~\ref{fig:param_study} utilizing the constant-density model. Normalized to the energy provided by the secondary over one planetary orbit, we have
\begin{equation}
\frac{\Delta E}{E_{\rm sec}} \propto \frac{\Delta P}{P_{bin}} \, a_{\rm bin}^{2} \, M_{\rm sec}^{2} \, R_{\rm sec}^{-3} \, L_{\rm sec}^{-1} \, P_{\rm plan}^{-1} ~~ .
\end{equation}
From \citet{Demircan1991}, we adopt $R_{\rm sec} \propto M_{\rm sec}^{0.95}$ and $L_{\rm sec} \propto M_{\rm sec}^{2.6}$, while the relative period change and the planetary period are virtually constant. Combined we get
\begin{equation}
\frac{\Delta E}{E_{\rm sec}} \propto a_{\rm bin}^{2} \, M_{\rm sec}^{-3.45} ~~ ,
\end{equation}
reflecting the fact that the relative Applegate threshold energy rises with increasing binary separation and decreases with increasing secondary mass.
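The mass exponent follows directly from adding the powers, which a two-line check confirms:

```python
# dE/E_sec ~ a^2 * M^2 * R^-3 * L^-1, with R ~ M^0.95 and L ~ M^2.6
# (Demircan & Kahraman exponents quoted above), so the M exponent is
mass_exponent = 2.0 - 3.0 * 0.95 - 2.6
print(round(mass_exponent, 2))   # -> -3.45
```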
\begin{figure}[!t]
\resizebox{\hsize}{!}{\includegraphics{param_study_b.eps}}
\caption{Relative Applegate energy calculated for varying secondary masses and binary separations according to sec.~\ref{sec:param_study}. Typical PCEBs with extremely low-mass secondaries do not provide enough energy to power the Applegate process or do not even satisfy the Applegate condition. Rather, typical Applegate systems are both relatively massive and extremely compact.}
\label{fig:param_study}
\end{figure}
\subsection{Application to our sample}
We applied the calculations as described in section~\ref{sec:recipe} to the systems introduced and characterized in section~\ref{sec:systems}. In all our calculations, we assumed solar metallicity and rescaled and normalized the calculated radial density profile to resemble the (mean) observed mass and radius values. A summary of the main results is given in Table~\ref{tab:results}.
\begin{table*}
\caption{Summary of our calculations as described in sec.~\ref{sec:recipe}. The ratios $\Delta E_{\rm min}$/$E_{sec}$ denote the energy required to drive an Applegate mechanism of the observed magnitude over the available energy produced by the secondary star. The parameter $\delta_{\rm min}$ denotes the ratio of core radius to secondary star radius for which the minimum energy is obtained in the numerical model. The dash denotes imaginary values, implying that no physical solution exists as discussed in sec.~\ref{twozone} and sec.~\ref{sec:recipe}.}
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c|c}
\hline
System & $E_{\rm sec}$/$\rm{ erg}$ &$\Delta E_{\rm min}$/$E_{sec}$ &$\Delta E$/$E_{\rm sec}$ & $\Delta E$/$E_{\rm sec}$ & $\Delta E$/$E_{\rm sec}$ & $\Delta E_{\rm min}$/$E_{\rm sec}$ & $\delta _{\rm min}$ \\
\hline
& & \citet{Applegate1992} & \citet{Tian2009} & \multicolumn{4}{|c}{\centering This paper} \\
\hline
& & (see eq.~\ref{eq:applegate_energy}) & (see eq.~\ref{eq:tian}) & Const.dens. & Two-zone & \multicolumn{2}{|c}{Full model} \\
\hline
HS 0705+6700 & $2.2\cdot 10^{39}$ & 6.7 & 7.3 & 3,300 & 140 & 140 & 0.73 \\
HW Vir & $2.0\cdot 10^{40}$ & 4.9 & 6.0 & 720 & 108 & 104 & 0.72 \\
NN Ser & $2.7\cdot 10^{39}$ & 3.2 & 3.3 & 1,100 & 64 & 64 & 0.73 \\
NSVS14256825 & $8.3\cdot 10^{38}$ & 5.3 & 5.4 & 3,200 & 101 & 102 & 0.73 \\
NY Vir & $1.4\cdot 10^{39}$ & 5.5 & 5.6 & 2,800 & 106 & 106 & 0.73 \\
HU Aqr & $1.4\cdot 10^{40}$ & 0.10 & 0.10 & 240 & 1.9 & 1.9 & 0.732 \\
QS Vir & $3.0\cdot 10^{40}$ & 0.039 & 0.040 & 170 & 0.71 & 0.77 & 0.71 \\
RR Cae & $5.2\cdot 10^{39}$ & 2.8 & 2.9 & 560 & 59 & 59 & 0.73 \\
UZ For & $4.1\cdot 10^{39}$ & 0.14 & 0.15 & 360 & 2.7 & 2.7 & 0.73 \\
DP Leo & $2.9\cdot 10^{39}$ & 0.021 & 0.021 & 150 & 0.38 & 0.38 & 0.74 \\
V471 Tau & $2.0\cdot 10^{42}$ & 0.014 & 0.014 & 12 & 0.26 & 0.26 & 0.84 \\
\hline
RU Cnc & $5.7\cdot 10^{43}$ & 0.074 & 0.076 & 1.7 & - & - & - \\
AW Her & $8.5\cdot 10^{42}$ & 608 & 618 & 270 & - & - & - \\
HR 1099 & $3.7\cdot 10^{43}$ & 0.21 & 0.22 & 10 & - & 6.7 & 0.64 \\
BX Dra & $3.5\cdot 10^{43}$ & 0.00016& 0.00016&0.92 & 0.0029 & 0.056 & 0.52 \\
SZ Psc & $9.9\cdot 10^{43}$ & 0.12 & 0.13 & 4.7 & - & 4.84 & 0.61 \\
\hline
\end{tabular}
\label{tab:results}
\end{center}
\end{table*}
In total, we investigated 16 systems. Considering the numerical model based on realistic density profiles from section~\ref{sec:recipe}, the Applegate formalism can safely explain the period variations of four systems (QS Vir, DP Leo, V471 Tau, BX Dra). However, this may be an underestimate for QS~Vir, which shows variations in the O-C diagram much steeper than the average variation within one modulation time. For the other 12 systems, the relative threshold energy is greater than unity or no solution exists at all, implying that more than the total energy generated by the secondary would be necessary to power the binary's period variations, or that the system's architecture is not capable of driving such a high level of period variation via the Applegate mechanism. We note that for four of these systems, in particular HU~Aqr, UZ~For, HR~1099 and SZ~Psc, the ratio $\Delta E/E_{\rm sec}$ is of order 1. Given the uncertainties regarding metallicity and age as discussed in the previous sections \ref{sec:time} and \ref{sec:metallicity}, these systems may still be able to drive an Applegate mechanism. However, for the remaining 8 systems, the ratio $\Delta E/E_{\rm sec}$ is considerably larger than $1$, implying that the observed eclipsing time variations cannot be explained by magnetic activity, particularly not in the wider, more massive and evolving RS CVn systems.
For comparison, we show the results from our constant density model, which tends to produce higher estimates of $\Delta E/E_{\rm sec}$, leading to an overestimate of the required energy. The two-zone model yields results close to the numerical values. For comparison, we also show the results adopting the original framework by \citet{Applegate1992} as presented by \citet{Parsons2010}, as well as the fit by \citet{Tian2009}. Both cases tend to significantly underestimate the energy required to drive the eclipsing time variations, due to the thin-shell approximation and its inherent neglect of the core's backreaction. We therefore emphasize that an assessment of the Applegate mechanism needs to be based at least on a two-zone model.
\subsection{Activity of the likely candidates}
In order to test the hypothesis of an Applegate mechanism, we have checked for the presence of magnetic activity in those systems where the Applegate mechanism can be expected to produce period variations of the observed magnitude, i.e. BX~Dra, V471~Tau, DP~Leo, QS~Vir and RU~Cnc. In principle, all systems show signs of strong magnetic activity.
\citet{Park2013} found strong changes in the light curves of BX~Dra, which can only be explained by large spots. Its coronal activity, however, cannot be examined due to the large distance of $230~\rm{ pc}$. V471~Tau, on the other hand, exhibits photometric variability, flaring events and H$\alpha$ emission as well as a strong X-ray signal \citep{Kaminski2007, Pandey2008}.\\
The X-ray flux of DP~Leo has been studied extensively, e.g. by \citet{Schwope2002}, and the magnetic activity of QS~Vir could be detected via Ca~II~H\&K emission, Doppler imaging \citep{Ribeiro2010} and coronal emission \citep{Matranga2012}. RU Cnc is a known ROSAT All Sky Survey (RASS) source \citep{Zickgraf2003}.
In general, we note that it is very likely for the secondaries in these systems to show magnetic activity, as the Red Dwarfs are fully convective and rapidly rotating, due to the tidal locking to the primary star. The stars therefore likely fulfill the conditions to drive a dynamo and produce magnetic fields. The question is thus whether the activity can drive sufficiently large changes in the quadrupole moment to explain the eclipsing time variations. At least for 8 of the systems in our sample, the latter appears difficult on energetic grounds.
\subsection{Further implications}
Employing a detailed model for the Applegate mechanism in eclipsing binaries, we have checked here whether quadrupole moment variations driven by magnetic energy can explain the observed eclipsing time variations in a sample of PCEB systems. We found that at least in 8 of these systems, this possibility can be ruled out.
However, this does not mean that these systems are not magnetically active; it only implies that magnetic activity is not the only or main cause of the observed period time variations. For instance, in NN~Ser the required energy to drive the Applegate process exceeds the available energy by about a factor of $57$. Considering that the period time variations scale proportionally to the available energy, an Applegate mechanism could nevertheless contribute additional scatter to the eclipsing time variations at a level of $\lesssim1\%$.
While this effect may be neglected in the case of NN Ser, it may play an important role in other systems with Applegate energies closer to the energy provided by the secondary. It is therefore necessary to further investigate their possible contribution and to distinguish the latter from the potential influence of a companion in order to calculate realistic fits of planetary systems.
\section{Conclusions and discussion}
\label{sec:conclusions}
In this paper, we have systematically assessed the feasibility of the Applegate model in PCEB systems. For this purpose, we have adopted the formulation by \citet{Brinkworth2006} considering a finite shell around a central core, and including the change of the quadrupole moment both in the shell and the core. As these contributions partly balance each other, the latter is energetically more expensive than the thin-shell model by \citet{Applegate1992}, i.e. it requires more energy per orbital period to drive the eclipsing time oscillations.
We apply the Brinkworth model here in different approximations: a constant-density approximation, in which the required energy is independent of stellar rotation; a two-zone model assuming different densities in the shell and the core; and a detailed numerical model, in which the framework is applied to realistic stellar density profiles. We show that the two-zone model reproduces the results of the most detailed framework with a deviation of less than $25\%$. We have also explored the general dependence of the required energy on the system parameters. In particular, the Applegate mechanism becomes energetically more feasible for smaller binary separations, as in that case a smaller change in the quadrupole moment is sufficient to drive the observed oscillations. In addition, the mechanism becomes more feasible with increasing mass of the secondary star, as the nuclear energy production increases with stellar mass. An ideal Applegate PCEB system therefore consists of a very tight binary ($\sim0.5~{\rm R}_{\sun}$) with a secondary star of $\sim0.5~{\rm M}_{\sun}$.
This formalism is applied to a sample of close binaries with observed eclipsing time variations, including the PCEB sample provided by \citet{Zorotovic2013} as well as four RS~CVn binaries.
For most systems in our sample, the energy required to drive the Applegate process is considerably larger than the energy provided by the star.
In these cases, the observed period variations cannot be explained in the context of the Applegate model. We note that the situation is similar if we consider only the $11$ PCEBs.
Therefore, alternative interpretations such as the planetary hypothesis need to be investigated in more detail, and we also encourage direct imaging attempts as pursued by \citet{Hardy2015} particularly in those cases where the Applegate mechanism turns out to be unfeasible.
Note that our conclusions do not imply the absence of magnetic activity for binaries where the Applegate model is not able to produce the observed period variations. Rather, it only means that the magnetic activity is not strong enough to be the dominant mechanism. However, as many of these systems have rapidly rotating secondaries with a convective envelope, we expect signs of dynamo activity, which can contribute to the period time variations on some level.
Assuming a contribution scaling linearly with the relative Applegate threshold energy, the Applegate process might provide a significant additional scatter that needs to be taken into account when inferring potential planetary orbits from the observed data.
\begin{acknowledgements}
MV and RB gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft via the Research Training Group (GrK) 1351 \textit{Extrasolar Planets and their Host Stars}. MV thanks the \textit{Studienstiftung des Deutschen Volkes} for funding via a travel grant. We thank Klaus Beuermann, Stefan Dreizler, Rick Hessman, Ronald Mennickent and Matthias Schreiber for stimulating discussions on the topic. We would also like to thank the referee Ed Devinney for helpful comments that improved our manuscript.
\end{acknowledgements}
\section{Introduction} \label{sec:intro}
The study of the potential theory for the $d$-dimensional harmonic oscillator
$$
\mathcal{H}=-\Delta+\|x\|^2,
$$
has recently been initiated by Bongioanni and Torrea \cite{BT}.
The multi-dimensional Hermite functions $h_k$ are eigenfunctions of $\mathcal{H}$ and we have
$\mathcal{H} h_k = (2|k|+d) h_k$. The operator $\mathcal{H}$ has a natural self-adjoint
extension, here still denoted by $\mathcal{H}$, whose spectral decomposition is given by the $h_k$.
The integral kernel $G_t(x,y)$ of the Hermite semigroup
$\{\exp({-t\mathcal{H}}): t>0\}$ is known explicitly to be
(see \cite{ST2} for this symmetric variant of the formula)
\begin{align*}
G_t(x,y)&=\sum_{n=0}^\infty e^{-(2n+d)t}\sum_{|k|=n}h_k(x)h_k(y)\\
&=\big(2\pi\sinh(2t)\big)^{-d/2}\exp\bigg(-\frac14\Big[\tanh( t)\|x+y\|^2+\coth(t)\|x-y\|^2 \Big]\bigg).
\end{align*}
Given $\sigma>0$, consider the negative power $\mathcal{H}^{-\sigma}$, which is a contraction on $L^2(\mathbb{R}^d)$.
It is easily seen that $\mathcal{H}^{-\sigma}$ coincides in $L^2(\mathbb{R}^d)$ with the \textit{potential operator}
\begin{equation}\label{int}
\mathcal{I}^\sigma f(x)=\int_{\mathbb{R}^d}\mathcal{K}^{\sigma}(x,y)f(y)\,dy,
\end{equation}
where the \textit{potential kernel} is given by
\begin{equation}\label{ker}
\mathcal{K}^{\sigma}(x,y)= \frac1{\Gamma(\sigma)}\int_0^\infty G_t(x,y) t^{\sigma-1}\,dt.
\end{equation}
Note that all the spaces $L^p(\mathbb R^d)$, $1\le p\le\infty$, are contained in
the natural domain of $\mathcal{I}^\sigma$ consisting of those functions $f$ for which the
integral in \eqref{int} converges $x$-a.e., see \cite[Section 2]{NS3}.
The main result of the paper, Theorem \ref{main1} below, provides qualitatively sharp estimates
of the potential kernel \eqref{ker}. As an application of this result, we prove
sharpness of the $L^p-L^q$ estimates for the potential
operator \eqref{int} obtained recently by Bongioanni and Torrea \cite[Theorem 8]{BT},
see Theorem~\ref{thm:LpLq}.
Recall that an operator $T$ defined on $L^p(\mathbb R^d)$ for some $1\le p\le\infty$,
with values in the space of measurable functions on $\mathbb{R}^d$, is said to be
of weak type $(p,q)$, $1\le q<\infty$, provided that
\begin{equation}\label{weak}
|\{x\in \mathbb R^d\colon |Tf(x)|>\lambda\}|\le C \Big(\|f\|_p\slash \lambda\Big)^q,
\end{equation}
with $C>0$ independent of $f\in L^p(\mathbb R^d)$ and $\lambda>0$.
The restricted weak type $(p,q)$ of $T$ means that \eqref{weak} holds for $f=\chi_E$,
where $E$ is any measurable subset of $\mathbb R^d$ of finite measure.
By definition, weak type $(p,\infty)$ coincides with strong type $(p,\infty)$,
i.e.\ the estimate $\|Tf\|_\infty\le C\|f\|_p$, $f\in L^p(\mathbb R^d)$.
In terms of Lorentz spaces, the weak type $(p,q)$ is equivalent to the boundedness from
$L^p(\mathbb{R}^d)$ to $L^{q,\infty}(\mathbb{R}^d)$, and the restricted weak type $(p,q)$ is
characterized by the boundedness from $L^{p,1}(\mathbb{R}^d)$ to $L^{q,\infty}(\mathbb{R}^d)$,
see \cite[Chapter 4, Section 4]{BS}. Strong type $(p,q)$ means of course the $L^p$-$L^q$ boundedness.
The notation $X\lesssim Y$ will be used to indicate that $X\leq CY$ with a positive constant $C$
independent of significant quantities; we shall write $X \simeq Y$ when simultaneously
$X \lesssim Y$ and $Y \lesssim X$. We will also use the notation $X\simeq\simeq Y\exp(-cZ)$
to indicate that there exist positive constants $C, c_1$ and $c_2$, independent of significant
quantities, such that
$$
C^{-1}\,Y\exp(-c_1Z)\le X\le C\,Y\exp(-c_2Z).
$$
Further, in a number of places, we will use natural and self-explanatory generalizations
of the $\simeq \simeq$ relation, for instance in connection with certain integrals
involving exponential factors. In such cases the exact meaning will be clear from the context.
By convention, $\simeq \simeq$ is understood as $\simeq$ whenever there are no exponential
factors involved.
We write $\log^+$ for the positive part of the logarithm, and $\vee,\wedge$ for the operations
of taking maximum and minimum, respectively.
\section{Estimates of the potential kernel} \label{sec:asym}
We begin with two technical results describing the behavior of the integrals
\begin{align*}
I_A(T) & = \int_T^{\infty}t^A\exp(-t)\,dt, \qquad T>0, \\
J_A(T,S) & = \int_T^St^A\exp(-t)\,dt, \qquad 0<T<S<\infty.
\end{align*}
Notice that $I_A(T)$ dominates $J_A(T,S)$.
The lemma below is a refinement of \cite[Lemma 2.1]{NS3}, see also \cite[Lemma 1.1]{ST1}.
\begin{lemma}
\label{lem:le1}
Let $A\in\mathbb R$ and $\gamma > 0$ be fixed. Then
\begin{equation} \label{estea}
I_A(\gamma T)\simeq T^A\exp(-\gamma T), \qquad T \ge 1,
\end{equation}
and for $0<T<1$
\begin{equation*}
I_A(\gamma T)\simeq
\begin{cases}
T^{A+1}, & \quad A<-1 \\
\log (2\slash T), & \quad A=-1 \\
1, & \quad A>-1
\end{cases} \;\;.
\end{equation*}
\end{lemma}
\begin{proof}
We assume that $\gamma =1$. From the proof it will be clear that the estimates are true for any $\gamma >0$.
The case $0<T<1$ was treated in the proof of \cite[Lemma 2.1]{NS3}, so we consider $T \ge 1$
and focus on showing \eqref{estea}. The lower bound in \eqref{estea} is straightforward, we have
\begin{equation*}
I_A(T) > \int_T^{2T}t^Ae^{-t}\,dt\gtrsim T^A \int_T^{2T} e^{-t} \, dt =
T^A\big(e^{-T}-e^{-2T}\big)\gtrsim T^A e^{-T}, \qquad T \ge 1.
\end{equation*}
It remains to prove the upper bound,
\begin{equation}\label{aa}
\int_T^\infty t^Ae^{-t}\,dt\lesssim T^A e^{-T}, \qquad T \ge 1,
\end{equation}
and here we assume that $A>0$, since for $A\le0$ we have $t^A \le T^{A}$, $t > T \ge 1$, and
the conclusion is trivial. Choosing $T_A$ such that for $T\ge T_A$ one has
\begin{equation*}
\int_{2T}^\infty t^Ae^{-t}\,dt\le \frac12\int_{T}^\infty t^Ae^{-t}\,dt,
\end{equation*}
we can write
\begin{equation*}
\int_T^\infty t^Ae^{-t}\,dt\le \int_T^{2T} t^Ae^{-t}\,dt+\int_{2T}^\infty t^Ae^{-t}\,dt\le
C\,T^Ae^{-T}+\frac12\int_{T}^\infty t^Ae^{-t}\,dt, \qquad T \ge T_A.
\end{equation*}
This implies \eqref{aa} for $T\ge T_A$ and consequently for all $T\ge1$.
\end{proof}
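As a quick plausibility check (not part of the formal argument), the two regimes of Lemma \ref{lem:le1} can be observed numerically. The Python sketch below approximates $I_A(\gamma T)$ with $\gamma=1$ by a midpoint rule; the tail truncation and grid size are ad hoc choices.

```python
import math

def I(A, T, cutoff=60.0, n=200_000):
    # Midpoint-rule approximation of I_A(T) = \int_T^\infty t^A e^{-t} dt;
    # the tail beyond T + cutoff is negligible for the values used below.
    b = T + cutoff
    h = (b - T) / n
    total = 0.0
    for k in range(n):
        t = T + (k + 0.5) * h
        total += t ** A * math.exp(-t)
    return total * h

# Regime T >= 1: I_A(T) is comparable with T^A e^{-T} (constants depend on A).
for A in (-2.0, 0.0, 3.0):
    for T in (1.0, 5.0, 20.0):
        ratio = I(A, T) / (T ** A * math.exp(-T))
        assert 0.1 < ratio < 20.0

# Regime 0 < T < 1 with A = -1: logarithmic growth I_{-1}(T) ~ log(2/T).
for T in (0.5, 0.05, 0.005):
    assert 0.3 < I(-1.0, T) / math.log(2.0 / T) < 3.0
```

For $A=0$ the check is exact, since $I_0(T)=e^{-T}$, so the ratio above equals $1$ up to quadrature error.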
\begin{lemma}
\label{lem:le2}
Let $A\in\mathbb R$ and $\gamma > 0$ be fixed. Then for $0<T<S\le 2T$ we have
\begin{equation} \label{estea2}
T^A(S-T)\exp(-2\gamma T) \lesssim J_A(\gamma T,\gamma S)\lesssim T^A(S-T)\exp(-\gamma T),
\end{equation}
while for $S>2T>0$ we have $J_A(\gamma T,\gamma S)\simeq I_A(\gamma T)$ when $S\ge2$, and
\begin{equation*}
J_A(\gamma T,\gamma S)\simeq
\begin{cases}
T^{A+1}, & \quad A<-1\\
\log (S\slash T), & \quad A=-1 \\
S^{A+1}, & \quad A>-1
\end{cases}\;\;
\end{equation*}
when $0<S<2$.
\end{lemma}
\begin{proof}
As in the proof of Lemma \ref{lem:le1}, it is enough to deal with the case $\gamma =1$.
The bounds for $T<S\le 2T$ follow since then $\int_T^St^Ae^{-t}\,dt\simeq T^A\int_T^Se^{-t}\,dt$ and
$$
(S-T)e^{-2T}\le \int_T^Se^{-t}\,dt\le (S-T)e^{-T}.
$$
Assume now that $S > 2T$. Clearly, $J_A(T,S) < I_A(T)$. On the other hand, if $T \ge 1$ then
$$
J_A(T,S) > \int_T^{2T} t^A e^{-t}\, dt \gtrsim T^A \int_T^{2T} e^{-t}\, dt \gtrsim T^A e^{-T}
\gtrsim I_A(T),
$$
the last estimate being a consequence of \eqref{estea}. When $0< T < 1$, we distinguish two subcases.
If $S \ge 2$, then again $J_A(T,S) \gtrsim \int_T^2 t^A\, dt \gtrsim I_A(T)$.
If $2T<S<2$, then $J_A(T,S) \simeq \int_T^S t^A \, dt$, and evaluating the last integral we arrive
at the claimed bounds for $J_A(T,S)$.
\end{proof}
We note that \eqref{estea} and \eqref{estea2} may be written slightly less precisely as
\begin{align*}
I_A(\gamma T) & \simeq\simeq \exp(-cT), \qquad T \ge 1, \\
J_A(\gamma T,\gamma S) & \simeq\simeq T^A (S-T)\exp(-cT), \qquad 0<T<S\le 2T,
\end{align*}
respectively. This fact will be used in the sequel without further mention.
We now apply Lemmas \ref{lem:le1} and \ref{lem:le2} to prove qualitatively sharp estimates of the integral
\begin{equation*}
E_A(T,S)=\int_0^1t^A\exp\big(-Tt^{-1}-St\big)\,dt, \qquad 0<T,S<\infty.
\end{equation*}
The following result provides, in particular, a refinement and generalization of \cite[Lemma 2.4]{NS2}.
\begin{lemma} \label{pro:comp}
Let $A \in \mathbb{R}$ be fixed. Then
\begin{equation*}
E_A(T,S)\simeq\simeq\exp\Big(-c\sqrt{T(T\vee S)}\Big)\times \begin{cases}
T^{A+1}, & \quad A<-1 \\
1+\log^+\frac1{T(T\vee S)}, & \quad A=-1 \\
(S\vee 1)^{-A-1}, & \quad A>-1
\end{cases}\;\; ,
\end{equation*}
uniformly in $T,S>0$.
\end{lemma}
\begin{proof}
We first estimate $E_A(T,S)$ in terms of the integrals $I_A$ and $J_A$.
For $0<S\le 2T$ we have
\begin{equation*}
E_A(T,S)\simeq\simeq \int_0^1 t^A\exp(-cTt^{-1})\,dt
\simeq T^{A+1}\int_{cT}^\infty u^{-A-2}e^{-u}\,du=T^{A+1}I_{-A-2}(cT),
\end{equation*}
where the second relation follows by the change of variable $t=cT\slash u$.
When $S>2T$ we change the variable $t=u\sqrt{T\slash S}$ and get
$$
E_A(T,S)=\Big(\frac TS\Big)^{(A+1)\slash2}
\int_0^{\sqrt{S\slash T}}u^A\exp\big(-\sqrt{TS}(u+u^{-1})\big)\,du\equiv \mathcal J_1+\mathcal J_2,
$$
where $\mathcal J_1$ and $\mathcal J_2$ come from splitting the integration over
the intervals $(0,1)$ and $(1,\sqrt{S/T})$, respectively. Then
\begin{align*}
\mathcal J_1\simeq\simeq \Big(\frac TS\Big)^{(A+1)\slash2}
\int_0^1 u^A\exp\big(-c\sqrt{TS}u^{-1}\big)\,du&\simeq T^{A+1}
\int_{c\sqrt{TS}}^\infty z^{-A-2}e^{-z}\,dz \\
& =T^{A+1}I_{-A-2}\big(c\sqrt{TS}\big)
\end{align*}
and
\begin{align*}
\mathcal J_2\simeq\simeq \Big(\frac TS\Big)^{(A+1)\slash2}
\int_1^{\sqrt{S\slash T}} u^A\exp\big(-c\sqrt{TS}u\big)\,du&
\simeq S^{-A-1}\int_{c\sqrt{TS}}^{cS} z^{A}e^{-z}\,dz\\
&=S^{-A-1}J_{A}\big(c\sqrt{TS},cS\big).
\end{align*}
Summing up, we have
\begin{equation*}
E_A(T,S)\simeq\simeq T^{A+1}I_{-A-2}\big(c\sqrt{T(T\vee S)}\big)
+ \chi_{\{S>2T\}}S^{-A-1}J_{A}\big(c\sqrt{TS},cS\big),
\end{equation*}
uniformly in $S,T>0$. In the next step we describe the behavior of the two terms here by means
of Lemmas \ref{lem:le1} and \ref{lem:le2}.
From Lemma \ref{lem:le1} it follows that
$$
T^{A+1}I_{-A-2}\big(c\sqrt{T(T\vee S)}\big) \simeq\simeq
T^{A+1}\exp\big(-c\sqrt{T(T\vee S)}\big), \qquad T(T\vee S)\ge1,
$$
(here, and also in analogous places below, $c$ on the left-hand side should be understood
as a \textit{given} constant) and
\begin{equation*}
T^{A+1}I_{-A-2}\big(c\sqrt{T(T\vee S)}\big)\simeq
\begin{cases}
T^{A+1}, & \quad A<-1 \\
\log (\frac4{T(T\vee S)}), & \quad A=-1 \\
\big(\frac T{T\vee S}\big)^{(A+1)\slash2}, & \quad A>-1
\end{cases} \;\; , \qquad T(T\vee S)\le1.
\end{equation*}
The term $S^{-A-1}J_{A}(c\sqrt{TS},cS)$ comes into play when $S>2T$, and in this case
we use Lemma \ref{lem:le2} to write the bounds
$$
S^{-A-1}J_{A}\big(c\sqrt{TS},cS\big) \simeq \chi_{\{S\ge 2\}}\Phi_1+\chi_{\{S<2\}}\Phi_2,
$$
where
$$
\Phi_1 = S^{-A-1}I_{A}\big(c\sqrt{TS}\big), \qquad \qquad
\Phi_2=
\begin{cases}
(T\slash S)^{(A+1)\slash2}, & \quad A<-1 \\
\log (\frac ST), & \quad A=-1 \\
1, & \quad A>-1
\end{cases}\;\; .
$$
By Lemma \ref{lem:le1},
\begin{align*}
\Phi_1 & \simeq\simeq S^{-A-1}\exp\big(-c\sqrt{TS}\big), \qquad TS\ge1, \\
\Phi_1 & \simeq
\begin{cases}
(T\slash S)^{(A+1)\slash2}, & \quad A<-1\\
\log (\frac 4{TS}), & \quad A=-1\\
S^{-A-1}, & \quad A>-1
\end{cases}\;\; , \qquad TS\le1.
\end{align*}
To proceed, it is convenient to consider each of the cases $A<-1$, $A=-1$, and $A>-1$ separately.
If $A<-1$, then
\begin{align*}
E_{A}(T,S) & \simeq\simeq \chi_{\{2>S> 2T\}} \Big(\frac TS\Big)^{(A+1)\slash2} +
\begin{cases}
T^{A+1}\exp\big(-c\sqrt{T(T\vee S)}\big), &\quad T(T\vee S)\ge1\\
T^{A+1}, &\quad T(T\vee S)<1\\
\end{cases} \\
& \qquad
+\chi_{\{S>2T\}} \chi_{\{S\ge 2\}}
\begin{cases}
T^{A+1}\exp\big(-c\sqrt{TS}\big), &\quad TS\ge1\\
\big(\frac TS\big)^{(A+1)\slash2}, &\quad TS<1
\end{cases}\;\; .
\end{align*}
Here the first and third terms are insignificant in comparison to the second one.
In case of the third summand, this is because $A<-1$ and
$\big(\frac TS\big)^{(A+1)\slash2}<T^{A+1}$ for $TS<1$.
A similar argument is used for the first one.
The required estimates of $E_{A}(T,S)$ follow.
If $A=-1$, then
\begin{align*}
E_{-1}(T,S) & \simeq\simeq \chi_{\{2>S> 2T\}} \log \frac ST +
\begin{cases}
\exp\big(-c\sqrt{T(T\vee S)}\big), &\quad T(T\vee S)\ge1\\
\log \big(\frac4{T(T\vee S)}\big), &\quad T(T\vee S)<1
\end{cases} \\
& \qquad +\chi_{\{S>2T\}} \chi_{\{S\ge 2\}}
\begin{cases}
\exp\big(-c\sqrt{TS}\big), &\quad TS\ge1\\
\log \big(\frac4{TS}\big), &\quad TS<1\\
\end{cases}\;\; .
\end{align*}
Similarly as in the case of $A<-1$, here also the first and third terms are insignificant
in comparison to the second one. This is clear for the third summand, and
for the first one this is because $\log \frac ST < \log (\frac4{TS})$ when $S<2$.
Thus the desired bounds of $E_{-1}(T,S)$ also follow.
Finally, we consider the case $A>-1$, which is less direct than the previous two.
We have
\begin{align*}
E_{A}(T,S) & \simeq\simeq \chi_{\{2>S> 2T\}} +
\begin{cases}
T^{A+1}\exp\big(-c\sqrt{T(T\vee S)}\big), &\quad T(T\vee S)\ge1\\
\big(\frac T{T\vee S}\big)^{(A+1)\slash2} , &\quad T(T\vee S)<1\\
\end{cases} \\
& \qquad +\chi_{\{S>2T\}} \chi_{\{S\ge 2\}}
\begin{cases}
T^{A+1}\exp\big(-c\sqrt{TS}\big), &\quad TS\ge1\\
S^{-A-1}, &\quad TS<1\\
\end{cases}\;\; .
\end{align*}
Observe that here the relation $\simeq \simeq$ remains valid if the sum of the first and the third
terms is replaced by the comparable (in the sense of $\simeq$) expression
\begin{equation*}
\chi_{\{S>2T\}}
\begin{cases}
T^{A+1}\exp\big(-c\sqrt{TS}\big), &\quad TS\ge1\\
(S\vee 1)^{-A-1} , &\quad TS<1\\
\end{cases} \;\; .
\end{equation*}
Taking into account that $T^{A+1}\exp(-c\sqrt{TS})\simeq\simeq S^{-A-1}\exp(-c\sqrt{TS})$ for
$TS\ge1$,
we conclude that
\begin{align*}
E_{A}(T,S) & \simeq\simeq
\begin{cases}
(T\vee S)^{-A-1}\exp\big(-c\sqrt{T(T\vee S)}\big), &\quad T(T\vee S)\ge1\\
\big(\frac T{T\vee S}\big)^{(A+1)\slash2} , &\quad T(T\vee S)<1\\
\end{cases} \\
& \qquad + \chi_{\{S>2T\}}
\begin{cases}
S^{-A-1}\exp\big(-c\sqrt{TS}\big), &\quad TS\ge1\\
(S\vee 1)^{-A-1}, &\quad TS<1
\end{cases}\;\; .
\end{align*}
Now, if $T\ge S$ and $T(T\vee S)=T^2<1$, then $\big(\frac T{T\vee S}\big)^{1\slash2}=1\simeq 1\slash (S\vee 1)$,
while for $T < S$ and $T(T\vee S)=TS<1$, we have
$\big(\frac T{T\vee S}\big)^{1\slash2}=\big(\frac TS\big)^{1\slash2} < 1\slash (S\vee 1)$. Therefore,
\begin{equation*}
E_{A}(T,S)\simeq\simeq
\begin{cases}
(T\vee S)^{-A-1}\exp\big(-c\sqrt{T(T\vee S)}\big), &\quad T(T\vee S)\ge1\\
(S\vee 1)^{-A-1}, &\quad T(T\vee S)<1
\end{cases}\;\; .
\end{equation*}
We claim that this implies
\begin{equation*}
E_{A}(T,S)\simeq\simeq (S\vee 1)^{-A-1}\exp\big(-c\sqrt{T(T\vee S)}\big),
\end{equation*}
which are precisely the required estimates.
To justify the claim, it is enough to recall that $A>-1$ and observe that if $T\ge S$ and
$T(T\vee S) = T^2 \ge 1$, then
\begin{align*}
(T\vee S)^{-A-1}\exp\big(-c\sqrt{T(T\vee S)}\big)=T^{-A-1}\exp(-cT)&\simeq (T\vee 1)^{-A-1}\exp(-cT)\\
&\simeq\simeq (S\vee 1)^{-A-1}\exp(-cT),
\end{align*}
while if $T < S$ and $T(T\vee S)=TS\ge 1$ (this forces $S>1$), then
$$
(T\vee S)^{-A-1}\exp\big(-c\sqrt{T(T\vee S)}\big)=
S^{-A-1}\exp\big(-c\sqrt{TS}\big)\simeq (S\vee 1)^{-A-1}\exp\big(-c\sqrt{TS}\big).
$$
The proof is finished.
\end{proof}
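A numerical illustration of Lemma \ref{pro:comp} (a rough sketch only; the choice $A=0$, the test points and the admissible window for the exponential rate are arbitrary) can be obtained by direct quadrature of $E_A(T,S)$:

```python
import math

def E(A, T, S, n=200_000):
    # Midpoint-rule approximation of E_A(T,S) = \int_0^1 t^A exp(-T/t - S t) dt.
    h = 1.0 / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += t ** A * math.exp(-T / t - S * t)
    return total * h

# For A = 0 the lemma predicts E_0(T,S) comparable, in the sense of the
# two-sided relation with exponential factors, with
# (S v 1)^{-1} exp(-c sqrt(T (T v S))); hence the rate below stays bounded.
for (T, S) in [(4.0, 4.0), (9.0, 1.0), (1.0, 100.0), (16.0, 2.0), (2.0, 25.0)]:
    val = E(0.0, T, S)
    rate = -math.log(val * max(S, 1.0)) / math.sqrt(T * max(T, S))
    assert 0.8 < rate < 4.0
```

The rates cluster around $2$ when $T=S$ (the exponent $T/t+St$ is then minimized at $t=1$ with value $2T$), in agreement with the heuristics behind the lemma.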
We are now in a position to prove qualitatively sharp estimates of the potential kernel.
\begin{theor}\label{main1}
For $\sigma>0$ we have
\begin{equation*}
\mathcal K^\sigma(x,y)\simeq\simeq\exp\big(-c\|x-y\|(\|x\|+\|y\|)\big)\times
\begin{cases}
\|x-y\|^{2\sigma-d}, &\quad \sigma<d\slash2 \\
1+\log^+\frac1{\|x-y\|(\|x\|+\|y\|)}, &\quad \sigma=d\slash2 \\
(1+\|x+y\|)^{d-2\sigma}, &\quad\sigma>d\slash2
\end{cases}\;\; ,
\end{equation*}
uniformly in $x,y\in\mathbb R^d$.
\end{theor}
\begin{proof}
We decompose
$$
\Gamma(\sigma)\mathcal K^\sigma(x,y)=\int_0^1G_t(x,y)\,t^{\sigma-1}\,dt+
\int_1^\infty G_t(x,y)\,t^{\sigma-1}\,dt\equiv \mathcal J^\sigma_0(x,y)+\mathcal J^\sigma_\infty(x,y).
$$
For $0<t<1$ we have $\tanh t \simeq t$, $\coth t \simeq t^{-1}$, $\sinh 2t \simeq t$, and therefore
$$
\mathcal J^\sigma_0(x,y)\simeq\simeq E_{\sigma-d\slash2-1}\big(c\|x-y\|^2,c\|x+y\|^2\big).
$$
This combined with Lemma \ref{pro:comp} shows that the estimates from the statement hold with
$\mathcal{K}^{\sigma}(x,y)$ replaced by $\mathcal{J}^{\sigma}_0(x,y)$.
Further, taking into account that $\tanh t \simeq 1 \simeq \coth t$ for $t > 1$, we see that
$$
\mathcal J^\sigma_\infty(x,y) \simeq\simeq \exp\big(-c(\|x\|^2+\|y\|^2)\big).
$$
Thus $\mathcal J^\sigma_0(x,y)$ dominates $\mathcal J^\sigma_\infty(x,y)$
in the above decomposition, in the sense that
$$
\mathcal J^\sigma_\infty(x,y) \lesssim E_{\sigma-d\slash2-1}\big(c\|x-y\|^2,c\|x+y\|^2\big)
$$
for a sufficiently small constant $c>0$. The conclusion follows.
\end{proof}
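The on-diagonal singularity predicted by Theorem \ref{main1} for $\sigma<d\slash2$ can also be seen numerically. The Python sketch below (an illustration only; the logarithmic grid, the cut-offs and the particular choice $d=1$, $\sigma=1\slash4$ are arbitrary) evaluates \eqref{ker} by quadrature and confirms that $\mathcal K^{\sigma}(x,y)\,\|x-y\|^{d-2\sigma}$ stabilizes as $y\to x$; for this particular $\sigma$ the limit is the flat-space constant $1/\sqrt{2\pi}$.

```python
import math

def G(t, x, y):
    # d = 1 Hermite semigroup kernel, symmetric form as in the Introduction.
    return (2.0 * math.pi * math.sinh(2.0 * t)) ** -0.5 * math.exp(
        -0.25 * (math.tanh(t) * (x + y) ** 2 + (x - y) ** 2 / math.tanh(t)))

def K(sigma, x, y, n=6000):
    # (1/Gamma(sigma)) \int_0^\infty G_t(x,y) t^{sigma-1} dt, computed on a
    # logarithmic grid t = e^u; the extra factor t accounts for dt = t du.
    a, b = math.log(1e-12), math.log(30.0)
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = math.exp(a + (k + 0.5) * h)
        total += G(t, x, y) * t ** sigma
    return total * h / math.gamma(sigma)

# sigma = 1/4, d = 1: K(x,y) ~ |x-y|^{-1/2} as y -> x.
vals = [K(0.25, 0.3, 0.3 + eps) * eps ** 0.5 for eps in (0.1, 0.01, 0.001)]
assert all(0.1 < v < 1.0 for v in vals)
assert abs(vals[-1] - 1.0 / math.sqrt(2.0 * math.pi)) < 0.08
```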
\section{Sharpness of the $L^p$-$L^q$ boundedness of the potential operator} \label{sec:sharp}
Given $0<\sigma<d/2$, define the region
\begin{align*}
R & =\bigg\{\Big(\frac1p,\frac1q\Big)\colon 0\le\frac1p\le1\,\,\,{\rm and}\,\,\, 0
\vee \Big(\frac1p-\frac{2\sigma}d\Big)\le\frac1q\le 1 \wedge\Big(\frac1p+\frac{2\sigma}d\Big)\bigg\} \\
& \qquad \Big\backslash
\bigg(\bigg\{\Big(\frac1p,\frac1q\Big)\colon 0\le\frac1p\le1-\frac{2\sigma}d\,\,\,{\rm and}\,\,\,
\frac1q = \frac1p+\frac{2\sigma}d\bigg\} \cup
\bigg\{\Big(\frac{2\sigma}d,0\Big),\Big(1,1-\frac{2\sigma}d\Big)\bigg\}\bigg)
\end{align*}
contained in the unit $(\frac1p,\frac1q)$-square $[0,1]^2$, see Figure \ref{fig1}.
\begin{figure}[ht]
\includegraphics[width=0.6\textwidth]{fig1.eps}
\caption{Mapping properties of $\mathcal{I}^{\sigma}$ for $0 < \sigma < d/2$.}\label{fig1}
\end{figure}
The following result enhances \cite[Theorem 8]{BT}, see also \cite[Theorem 2.3]{NS3}.
\begin{theor} \label{thm:LpLq}
Let $d\ge1$, $0<\sigma<d\slash2$ and $1\le p,q\le\infty$.
Then $\mathcal{I}^\sigma\colon L^p(\mathbb R^d)\to L^q(\mathbb R^d)$ boundedly
if and only if $(\frac 1p,\frac1q)$ lies in the region $R$.
On the other hand,
$\mathcal{I}^{\sigma}$ is not even of restricted weak type $(p,q)$ when $(\frac{1}p,\frac{1}q)$ is
not in the closure of $R$. Moreover, $\mathcal{I}^{\sigma}$ is of weak type $(p,q)$ for
$(\frac{1}p,\frac{1}q) = (0,\frac{2\sigma}d)$ and $(\frac{1}p,\frac{1}q) = (1,1-\frac{2\sigma}d)$.
For $(\frac{1}p,\frac{1}q) = (\frac{2\sigma}d,0)$ the restricted weak type is true, whereas weak type fails.
\end{theor}
Before giving the proof we take the opportunity to present a short argument showing
\cite[(21) and (41)]{BT}, the result we will apply in a moment.
\begin{lemma}
\label{lem:le3}
Given $\sigma>0$,
\begin{equation*}
\|\mathcal K^\sigma(x,\cdot)\|_1\simeq (1\vee \|x\|)^{-2\sigma}, \qquad x \in \mathbb{R}^d.
\end{equation*}
\end{lemma}
\begin{proof}
Using the identity (see \cite[Proposition 3.3]{ST2})
$$
\exp({-t\mathcal H}) \boldsymbol{1}(x)=\int_{\mathbb R^d}G_t(x,y)\,dy=
(\cosh 2t)^{-d/2}\exp\Big(-\frac12\tanh(2t)\|x\|^2\Big), \qquad x\in\mathbb R^d,
$$
we may write
\begin{align*}
\int_{\mathbb R^d}\mathcal K^\sigma(x,y)\,dy
& = \frac1{\Gamma(\sigma)}\int_0^\infty\int_{\mathbb R^d}G_t(x,y)\,dy\,t^{\sigma-1}\,dt\\
& = \frac{1}{\Gamma(\sigma)} \int_0^{\infty} (\cosh 2t)^{-d/2}\exp\Big(-\frac12\tanh(2t)\|x\|^2\Big)
t^{\sigma -1}\, dt.
\end{align*}
Here we split the integration into the intervals $(0,1)$ and $(1,\infty)$ and denote the resulting integrals
by $\mathcal{J}_0$ and $\mathcal{J}_{\infty}$, respectively. Then, uniformly in $x\in \mathbb{R}^d$,
$$
\mathcal{J}_0 \simeq \simeq \int_0^{1} \exp\big(-ct\|x\|^2\big) t^{\sigma-1}\, dt
= \|x\|^{-2\sigma} \int_0^{\|x\|^2} e^{-cs} s^{\sigma-1}\, ds
\simeq \|x\|^{-2\sigma} \big( \|x\|^{2\sigma} \wedge 1\big)
$$
and
$$
\mathcal{J}_{\infty} \simeq \simeq \int_1^{\infty} e^{-td} \exp\big(-c\|x\|^2\big) t^{\sigma-1}\, dt
= C_{d,\sigma} \exp\big(-c\|x\|^2\big).
$$
The conclusion follows.
\end{proof}
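The estimate of Lemma \ref{lem:le3} is also easy to confirm numerically in dimension $d=1$ (a sketch only; $\sigma=1\slash4$, the grid and the comparability window below are ad hoc), using the closed formula for $\exp(-t\mathcal H)\boldsymbol{1}$ displayed in the proof:

```python
import math

def knorm(sigma, x, n=6000):
    # ||K^sigma(x, .)||_1 in d = 1, via the closed formula
    # \int G_t(x,y) dy = (cosh 2t)^{-1/2} exp(-tanh(2t) x^2 / 2),
    # integrated against t^{sigma-1} dt on a logarithmic grid t = e^u.
    a, b = math.log(1e-12), math.log(30.0)
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = math.exp(a + (k + 0.5) * h)
        total += math.cosh(2.0 * t) ** -0.5 * math.exp(
            -0.5 * math.tanh(2.0 * t) * x * x) * t ** sigma
    return total * h / math.gamma(sigma)

# Lemma: ||K^sigma(x,.)||_1 is comparable with (1 v |x|)^{-2 sigma}.
for x in (0.0, 1.0, 3.0, 10.0, 30.0):
    r = knorm(0.25, x) * max(1.0, abs(x)) ** 0.5
    assert 0.2 < r < 5.0
```

For large $|x|$ the integral concentrates at $t\simeq |x|^{-2}$, where $\tanh(2t)\simeq 2t$, which is exactly the mechanism behind the bound $\mathcal J_0\simeq \|x\|^{-2\sigma}(\|x\|^{2\sigma}\wedge 1)$ in the proof.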
\begin{proof}[Proof of Theorem \ref{thm:LpLq}]
We first focus on strong type inequalities. Then, in view of \cite[Theorem 8]{BT}, what remains
to prove are the following two items.
\begin{itemize}
\item[(a)] $\mathcal{I}^\sigma$ is not $L^p-L^q$ bounded for
$\frac{2\sigma}d<\frac1p<1$ and $0<\frac1q<\frac1p-\frac{2\sigma}d$.
\item[(b)] $\mathcal{I}^\sigma$ is not $L^p-L^q$ bounded for
$0<\frac1p<1-\frac{2\sigma}d$ and $\frac1p+\frac{2\sigma}d\le\frac1q<1$.
\end{itemize}
To justify (a), we fix $p$ and $q$ satisfying the assumed conditions and define
$$
f(y) = \chi_{\{\|y\|<1\}} \|y\|^{-2\sigma -d/q}.
$$
This function is in $L^p(\mathbb{R}^d)$ since $-(2\sigma + d/q)p+d>0$. However,
$\mathcal{I}^{\sigma}f \notin L^q(\mathbb{R}^d)$. Indeed, considering $x$ such that $\|x\|<1$ and using the
lower bound from Theorem \ref{main1} we get
$$
\mathcal{I}^{\sigma}f(x) \gtrsim \int_{\|y\|<\|x\|/2} \|x-y\|^{2\sigma-d} \|y\|^{-2\sigma-d/q}\, dy
\gtrsim \|x\|^{2\sigma-d} \int_{\|y\|<\|x\|/2} \|y\|^{-2\sigma-d/q}\, dy = C \|x\|^{-d/q},
$$
and the function $x \mapsto \chi_{\{\|x\|< 1\}} \|x\|^{-d/q}$ does not belong to $L^q(\mathbb{R}^d)$.
Proving (b) we may assume that $(\frac{1}{p},\frac{1}{q})$ lies on the critical segment
$\frac1q=\frac1p+\frac{2\sigma}d$, $0<\frac1p<1-\frac{2\sigma}d$. The case
when $\frac1q>\frac1p+\frac{2\sigma}d$ is contained below, in the negative result concerning the
restricted weak type estimate.
Define
$$
f(y) = \chi_{\{\|y\|>e\}} \|y\|^{-d/p} \big( \log\|y\|\big)^{-1/p-2\sigma/d}.
$$
We have
$$
\int_{\mathbb{R}^d} |f(y)|^p\, dy = C_d \int_e^{\infty} r^{-1} (\log r)^{-1-2\sigma p/d}\, dr < \infty,
$$
so $f \in L^p(\mathbb{R}^d)$. We claim that $\mathcal{I}^{\sigma}f \notin L^q(\mathbb{R}^d)$.
Assuming that $\|x\| > 2e$ and using the lower bound from Theorem \ref{main1} we write
\begin{align*}
\mathcal{I}^{\sigma}f(x) & \gtrsim \int_{\|x\|/2 < \|y\| < \|x\|} \|x-y\|^{2\sigma-d}
\exp\big(-c\|x-y\|(\|x\|+\|y\|)\big) \|y\|^{-d/p} \big(\log\|y\|\big)^{-1/p-2\sigma/d}\, dy \\
& \gtrsim \|x\|^{-d/p} \big(\log\|x\|\big)^{-1/p-2\sigma/d}
\int_{\|x\|/2 < \|y\| < \|x\|} \|x-y\|^{2\sigma-d} \exp\big(-2c\|x-y\| \|x\|\big)\, dy.
\end{align*}
As we shall see in a moment, the last integral is comparable with $\|x\|^{-2\sigma}$. Thus
$$
\mathcal{I}^{\sigma}f(x) \gtrsim \|x\|^{-d/p-2\sigma} \big(\log\|x\|\big)^{-1/p-2\sigma/d}
= \|x\|^{-d/q} \big(\log\|x\|\big)^{-1/q}, \qquad \|x\| > 2e,
$$
and the claim follows.
It remains to analyze the last integral, which we denote by $\mathcal{J}$. Changing the variable
$y=x - z/\|x\|$ we get
$$
\mathcal{J} = \|x\|^{-2\sigma} \int_{{D}_x} \|z\|^{2\sigma-d} e^{-2c\|z\|}\, dz,
$$
where the set of integration is
${D}_x = \{z \in \mathbb{R}^d : \|x\|^2/2 < \|\,x\|x\|-z\| < \|x\|^2\}$.
We now observe that ${D}_x$ contains the ball
$B_x = \{z \in \mathbb{R}^d : \|\,x\|x\|/4-z\| < \|x\|^2/4\}$. Indeed, if $z \in B_x$ then
$$
\frac{\|x\|^2}2 < \Bigg| \bigg\| \frac{x\|x\|}4-z\bigg\| - \bigg\|\frac{3}4 x\|x\|\bigg\| \Bigg|
\le \big\| \,x\|x\|-z\big\| \le \bigg\| \frac{x\|x\|}4 - z\bigg\| + \bigg\|\frac{3}4 x \|x\|\bigg\|
< \|x\|^2.
$$
Thus we have
$$
\|x\|^{-2\sigma} \int_{{B}_x} \|z\|^{2\sigma-d} e^{-2c\|z\|}\, dz \le \mathcal{J} \le
\|x\|^{-2\sigma} \int_{\mathbb{R}^d} \|z\|^{2\sigma-d} e^{-2c\|z\|}\, dz.
$$
Clearly, the integral over $\mathbb{R}^d$ here is finite. The integral over $B_x$ depends on $x$ only
through $\|x\|$. Since the balls $B_x$ are increasing in the sense of $\subset$ when $x$ is moved
away from the origin along a fixed line passing through the origin, we see that the integral over
$B_x$ is an increasing function of $\|x\|$, which is positive and finite. We conclude that
$\mathcal{J} \simeq \|x\|^{-2\sigma}$, $\|x\| > 1$, as desired.
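In dimension $d=1$ the comparability $\mathcal{J} \simeq \|x\|^{-2\sigma}$ can likewise be checked numerically; in the sketch below (an illustration only: $\sigma=1\slash4$ and $c=1$ are arbitrary choices) the one-dimensional analogue of $\mathcal J$ is evaluated directly.

```python
import math

def Jcal(sigma, x, c=1.0, n=200_000):
    # d = 1 analogue of the integral J from the proof:
    # \int_{x/2 < |y| < x} |x-y|^{2 sigma - 1} exp(-2 c |x-y| x) dy,  x > 0.
    total = 0.0
    for lo, hi in ((x / 2.0, x), (-x, -x / 2.0)):
        h = (hi - lo) / n
        for k in range(n):
            u = abs(x - (lo + (k + 0.5) * h))
            total += u ** (2.0 * sigma - 1.0) * math.exp(-2.0 * c * u * x) * h
    return total

# Expected: J ~ x^{-2 sigma} for large x; with sigma = 1/4 and c = 1 the
# normalized values approach sqrt(pi/2) (substitute u = |x - y|, s = 2 x u).
vals = [Jcal(0.25, x) * x ** 0.5 for x in (3.0, 6.0, 12.0)]
assert all(1.0 < v < 1.5 for v in vals)
```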
We pass to weak type and restricted weak type inequalities.
Consider first the three `corners' of the boundary of $R$
from the statement of Theorem \ref{thm:LpLq}.
If $(\frac{1}p,\frac{1}q) = (1,1-\frac{2\sigma}d)$, then the weak type
$(1,\frac{d}{d-2\sigma})$ holds by \cite[Theorem 2.3]{NS3}. Notice that this property can be expressed
in terms of Lorentz spaces by saying that $\mathcal{I}^{\sigma}$ is bounded from $L^1(\mathbb{R}^d)$
to $L^{d/(d-2\sigma),\infty}(\mathbb{R}^d)$. Then $(\mathcal{I}^{\sigma})^*$ (the adjoint operator
in the Banach space sense) maps boundedly
$(L^{d/(d-2\sigma),\infty}(\mathbb{R}^d))^*$ into $(L^1(\mathbb{R}^d))^* = L^{\infty}(\mathbb{R}^d)$.
Further, the associate space of $L^{d/(d-2\sigma),\infty}(\mathbb{R}^d)$ in the sense
of \cite[Chapter 1, Definition 2.3]{BS} is $L^{d/(2\sigma),1}(\mathbb{R}^d)$
(cf. \cite[Chapter 4, Theorem 4.7]{BS}), and by
\cite[Chapter 1, Theorem 2.9]{BS} it can be regarded as a subspace of the dual of
$L^{d/(d-2\sigma),\infty}(\mathbb{R}^d)$. Since $(\mathcal{I}^{\sigma})^* = \mathcal{I}^{\sigma}$
by symmetry of the kernel, we infer that $\mathcal{I}^{\sigma}$ is of restricted weak type
$(\frac{d}{2\sigma},\infty)$. On the other hand, weak type $(\frac{d}{2\sigma},\infty)$ coincides,
by definition, with the strong type, so $\mathcal{I}^{\sigma}$ is not of weak type
$(\frac{d}{2\sigma},\infty)$ in view of the strong type results we already know. This clarifies
the situations related to the `corners' $(1,1-\frac{2\sigma}d)$ and $(\frac{2\sigma}d,0)$.
Taking into account $(\frac{1}p,\frac{1}q) = (0,\frac{2\sigma}d)$, we will show that $\mathcal{I}^{\sigma}$
is of weak type $(\infty, \frac{d}{2\sigma})$. To do that, it is enough to verify the estimate
\begin{equation} \label{w}
\big|\big\{x\in \mathbb{R}^d : |\mathcal{I}^\sigma f(x)|>\lambda\big\}\big|
\lesssim \bigg(\frac{\|f\|_{\infty}}{\lambda}\bigg)^{d\slash(2\sigma)},
\qquad \lambda>0,\quad f\in L^\infty(\mathbb{R}^d).
\end{equation}
But this is immediate in view of the bound, see Lemma \ref{lem:le3},
$$
\|\mathcal K^\sigma(x,\cdot)\|_1\le C\|x\|^{-2\sigma}, \qquad x\in\mathbb{R}^d,
$$
since then it follows that $|\mathcal{I}^{\sigma}f(x)| \le C \|x\|^{-2\sigma} \|f\|_{\infty}$
and consequently
$$
\big\{x\in \mathbb{R}^d : |\mathcal I^\sigma f(x)|>\lambda\big\}\subset
\bigg\{x\in \mathbb{R}^d : \|x\|<\bigg(C\frac{\|f\|_\infty}{\lambda}\bigg)^{1/{2\sigma}}\bigg\}.
$$
This inclusion leads directly to \eqref{w}.
Finally, we disprove the restricted weak type in the two triangles, see Figure \ref{fig1}.
In the lower triangle we argue by contradiction, using an extension of the Marcinkiewicz
interpolation theorem for Lorentz spaces due to Stein and Weiss \cite[Chapter 4, Theorem 5.5]{BS}.
Indeed, if $\mathcal{I}^{\sigma}$ were of restricted weak type $(p,q)$ for some $p$ and $q$ such that
$\frac{1}q < \frac{1}p - \frac{2\sigma}d$, then by interpolation with a strong type pair satisfying
$\frac{1}q = \frac{1}p-\frac{2\sigma}d$, $p>1$, $q < \infty$, $\mathcal{I}^{\sigma}$ would be of strong
type $(\widetilde{p},\widetilde{q})$ for some $\widetilde{p}$ and $\widetilde{q}$ corresponding to a point
in the lower triangle, a contradiction with $(a)$ above.
To treat the upper triangle, we will give an explicit counterexample.
For large $r$, let
$$
f_{r}(y) = \chi_{\{\|y\|<r\}}.
$$
Clearly, we have $\|f_r\|_{p} \simeq r^{d/p}$.
Estimating as in the proof of (b) above, we get
\begin{align*}
\mathcal{I}^{\sigma}f_r(x) & \gtrsim \int_{\|x\|/2 < \|y\| < \|x\|} \|x-y\|^{2\sigma-d}
\exp\big(-c\|x-y\|(\|x\|+\|y\|)\big) \chi_{\{\|y\|<r\}} \, dy \\
& \ge \chi_{\{\|x\|<r\}}
\int_{\|x\|/2 < \|y\| < \|x\|} \|x-y\|^{2\sigma-d} \exp\big(-2c\|x-y\| \|x\|\big)\, dy \\
& \gtrsim \chi_{\{1 < \|x\| < r\}} \|x\|^{-2\sigma},
\end{align*}
uniformly in large $r$ and $x \in \mathbb{R}^d$. Consequently,
$$
\big| \big\{ x \in \mathbb{R}^d : \mathcal{I}^{\sigma}f_r(x) > \lambda \big\}\big| \ge
\big|\big\{ 1 < \|x\| < r : \|x\| < (C\lambda)^{-1/(2\sigma)} \big\}\big|
$$
for some $C>0$ independent of $r$ and $\lambda > 0$.
Taking $\lambda = r^{-2\sigma}$ we conclude that the weak type $(p,q)$ estimate for $\mathcal{I}^{\sigma}$
implies $r^d \lesssim r^{dq/p + 2\sigma q}$.
This bound, however, fails when $\frac{1}q > \frac{1}p + \frac{2\sigma}d$ and $r\to \infty$.
The proof is finished.
\end{proof}
For completeness, we remark that in the context of Theorem \ref{thm:LpLq} the question of
weak/restricted weak type $(p,q)$ inequalities related to the segment
$\frac{1}q = \frac{1}p+\frac{2\sigma}d$, $1 \le q < \frac{d}{2\sigma}$, is more subtle
and remains open. Considering the case $\sigma>d/2$, the operator $\mathcal{I}^\sigma$
is bounded from $L^p(\mathbb R^d)$ to $L^q(\mathbb R^d)$ for every $1\le p,q\le\infty$,
see \cite[Theorem 2.3]{NS3}.
The behavior of $\mathcal{I}^{\sigma}$ in the limiting case $\sigma = d/2$ is described by the
theorem below. This result enhances \cite[Theorem 2.3]{NS3} when $\sigma = d/2$.
\begin{theor}
Let $d \ge 1$ and $1 \le p,q \le \infty$. Then $\mathcal{I}^{d/2}$ is bounded from
$L^p(\mathbb{R}^d)$ to $L^q(\mathbb{R}^d)$ except for $(p,q)=(\infty,1)$ and $(p,q)=(1,\infty)$.
Considering the two singular cases, we have:
\begin{itemize}
\item[(i)] $\mathcal{I}^{d/2}$ is of weak type $(\infty,1)$, but not of strong type $(\infty,1)$;
\item[(ii)] $\mathcal{I}^{d/2}$ is not of restricted weak type $(1,\infty)$.
\end{itemize}
\end{theor}
\begin{proof}
The $L^p$-$L^q$ boundedness is contained in \cite[Theorem 2.3]{NS3}.
To show (i), we observe that the weak type $(\infty,1)$ holds true since the proof of \eqref{w} covers
also the case $\sigma = d/2$. The strong type $(\infty,1)$ fails because
$\mathcal{I}^{d/2}\boldsymbol{1} \notin L^1(\mathbb{R}^d)$, as easily seen by means of Lemma \ref{lem:le3}.
It remains to verify (ii). For $0< \varepsilon < 1/e$,
let $f_{\varepsilon}(x) = \chi_{\{\|x\|<\varepsilon\}}$. By the lower bound of Theorem \ref{main1}
it follows that
$$
\mathcal{I}^{d/2}f_{\varepsilon}(x) \gtrsim \int_{\|y\|< \varepsilon}
\log \frac{1}{\|x-y\|(\|x\|+\|y\|)} \, dy, \qquad \|x\|< 1/e,
$$
uniformly in $\varepsilon < 1/e$. Therefore,
$$
\big\|\mathcal{I}^{d/2}f_{\varepsilon}\big\|_{\infty} \gtrsim \int_{\|y\|<\varepsilon} -\log \|y\|\,dy
= C_d \int_0^{\varepsilon} -r^{d-1}\log r \, dr \gtrsim \varepsilon^d \log\frac{1}{\varepsilon},
\qquad 0 < \varepsilon < 1/e,
$$
and we conclude that
$$
\frac{\|\mathcal{I}^{d/2}f_{\varepsilon}\|_{\infty}}{\|f_{\varepsilon}\|_1} \gtrsim
\log\frac{1}{\varepsilon}, \qquad 0 < \varepsilon < 1/e.
$$
Letting $\varepsilon \to 0^+$, we see that $\mathcal{I}^{d/2}$ is not of restricted weak type $(1,\infty)$.
\end{proof}
\section{A brief introduction to cosmology}
\subsection{The basics}
Cosmology is the study of the universe as a whole, and its aim is to understand the origin of the universe and its evolution. The study of the cosmos is as old as humanity and has always been fascinating. Physical cosmology\footnote{We will drop the word physical soon. It is used here to emphasise the scientific aspect of cosmology as opposed to philosophical or religious approaches.} is the scientific study of the universe as a whole based on the laws of physics. The dominant interaction between macroscopic objects is the gravitational force. Therefore, we must study the dynamics of the universe within the framework of Einstein's theory of General Relativity, which was formulated in 1916. In simple terms, the main concept of general relativity is the following equation
\begin{align}
\text{geometry} = \kappa \times \text{matter}
\end{align}
where $\kappa = 8\pi G/c^4$ is a coupling constant which determines the strength of the gravitational force. $G$ is Newton's gravitational constant and $c$ is the speed of light.
The Einstein field equations are a set of 10 coupled non-linear PDEs, or in other words, very difficult equations to deal with in general~\cite{Bruhat}. However, these equations can be simplified considerably by making some suitable assumptions. In cosmology~\cite{DodelsonModCosmo} this is known as the cosmological principle. It is an axiom which states that the universe is homogeneous and isotropic when viewed over large enough scales.
These scales are of the order of $100 - 1000\, {\rm Mpc}$. To translate this into more practical units, we note that $1\, {\rm pc} \approx 3.26\, {\rm ly} \approx 3.1 \times 10^{13}\, {\rm km}$. This means $100\, {\rm Mpc} \approx 326\, {\rm Mly} \approx 3.1 \times 10^{21}\, {\rm km}$, and has the simple practical implication that we cannot test the cosmological principle directly by making observations at two points in the universe separated by cosmologically significant scales. However, there are other possibilities of testing the cosmological principle. For instance, if we were to observe a very large structure in the universe which is bigger than $100\, {\rm Mpc}$, say, then this would force us to revise this number upwards. It turns out that such very large structures have already been observed, see the Clowes--Campusano Large Quasar Group for one such example.
Henceforth we assume that the cosmological principle is valid for some suitable length scale. A homogeneous and isotropic 4-dimensional Lorentzian manifold is characterised by a single function, usually denoted by $a(t)$, and one constant $k \in \{0, \pm 1\}$. Such models were studied independently by Friedmann, Lema\^{\i}tre, and Robertson \& Walker. The function $a(t)$ is called the scale factor and is the only dynamical degree of freedom in the cosmological Einstein field equations. The constant $k$ characterises the curvature of the so-called constant time hypersurfaces, $k=0$ corresponds to a Euclidean space, $k=+1$ to a 3-sphere and $k=-1$ to hyperbolic space. The cosmological Einstein field equations are given by
\begin{subequations}
\label{field1}
\begin{align}
3\frac{\dot{a}^2}{a^2} + 3\frac{k}{a^2} - \Lambda &= \kappa\, \rho
\label{field1a}\\
-2\frac{\ddot{a}}{a} - \frac{\dot{a}^2}{a^2} - \frac{k}{a^2} + \Lambda &= \kappa\, p.
\end{align}
\end{subequations}
Here $\Lambda$ is the so-called cosmological constant, $\rho$ and $p$ are the energy density and pressure of some matter components, respectively. This matter could be a perfect fluid with prescribed equation of state, or a scalar field for instance. More complicated forms of matter can also be included. One can verify by direct calculation that these two equations imply the energy-conservation equation
\begin{align}
\dot{\rho} + 3\frac{\dot{a}}{a}(\rho + p) = 0.
\label{cons1}
\end{align}
In cosmology one assumes that every matter component satisfies its own conservation equation, which does not follow from the field equations but must be assumed or derived separately. Inspection of Eqs.~(\ref{field1}) shows that we have two equations but 3 functions to be found, namely $a(t)$, $\rho(t)$ and $p(t)$. This system of equations is under-determined. In order to close it, we will assume a linear equation of state between the pressure and the energy density $p = w\rho$, where the equation of state parameter $w \in (-1,1]$.
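The claim that the field equations imply the conservation equation can be checked symbolically. The following minimal sketch (our own illustration, using the \texttt{sympy} library) reads off $\kappa\rho$ and $\kappa p$ from Eqs.~(\ref{field1}) and confirms that $\kappa\bigl(\dot{\rho} + 3H(\rho + p)\bigr)$ simplifies to zero for any scale factor $a(t)$:

```python
import sympy as sp

t, k, Lam = sp.symbols('t k Lambda')
a = sp.Function('a')(t)

# kappa*rho and kappa*p as dictated by the two field equations (field1)
krho = 3*sp.diff(a, t)**2/a**2 + 3*k/a**2 - Lam
kp = -2*sp.diff(a, t, 2)/a - sp.diff(a, t)**2/a**2 - k/a**2 + Lam

H = sp.diff(a, t)/a
# kappa*(rho' + 3H(rho + p)) should vanish identically
conservation = sp.diff(krho, t) + 3*H*(krho + kp)
print(sp.simplify(conservation))  # -> 0
```

Note that the cancellation holds for arbitrary $k$ and $\Lambda$, in line with the statement that Eq.~(\ref{cons1}) is a consequence of the field equations.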
The scale factor $a(t)$ is a measure of the size of the universe at time $t$. However, since we do not have an absolute length scale, the numerical value of $a(t)$ can be rescaled. One convention is to choose $a(t_{\rm today})=1$ and compare the universe's size with its current value. Moreover, it turns out to be useful to introduce the Hubble function $H(t) := \dot{a}/a$ which is a measure of the universe's expansion rate at time $t$. A positive value for this quantity was first observed by Edwin Hubble in 1929, thereby giving experimental evidence to an expanding universe. Today's value $H_{\rm today}$ is of the order of $70 {\rm km}/{\rm s}/{\rm Mpc}$.
Let us now rewrite the field equations using the Hubble parameter. Firstly, we need the relation
\begin{align}
\dot{H} = \frac{\ddot{a}}{a} - \frac{\dot{a}^2}{a^2} = \frac{\ddot{a}}{a} - H^2
\end{align}
which allows us to write~(\ref{field1}) in the following form
\begin{subequations}
\begin{align}
3H^2 + 3\frac{k}{a^2} - \Lambda &= \kappa\, \rho
\label{field2a}\\
-2\dot{H} - 3H^2 - \frac{k}{a^2} + \Lambda &= \kappa\, p.
\end{align}
\end{subequations}
Equation~(\ref{field2a}) is of particular interest to us. By dividing the entire equation by $3H^2$ we arrive at
\begin{align}
1 = \frac{\kappa\, \rho}{3H^2} + \frac{\Lambda}{3H^2} - \frac{k}{a^2 H^2}
\end{align}
and observe that each of the three terms is dimensionless.
It is common to introduce the following dimensionless density parameters
\begin{align}
\Omega = \frac{\kappa\, \rho}{3H^2}, \qquad
\Omega_{\Lambda} = \frac{\Lambda}{3H^2}.
\label{density}
\end{align}
Note that $\Omega$ may contain different forms of matter; for instance, the total matter content might consist of a pressure-less perfect fluid (standard matter, sometimes called dust) and radiation, in which case one would write $\Omega = \Omega_{\rm m} + \Omega_{\rm r}$. Before getting started with dynamical systems and their application to cosmology, we need to discuss some of the well-known solutions in cosmology.
\subsection{Cosmological solutions}
We will now discuss the most important solutions of the field equations~(\ref{field1}). This is needed in order to understand and interpret the solutions encountered later using dynamical systems techniques.
In order to simplify the equations, we will assume that the spatial curvature parameter vanishes, i.e.~$k=0$, and we will also neglect the cosmological term by setting $\Lambda = 0$. Let us firstly assume that the equation of state parameter $w=0$. This corresponds to a matter dominated universe. One can immediately integrate the conservation equation~(\ref{cons1}) and find that
\begin{align}
\rho \propto a^{-3}.
\label{matter}
\end{align}
This result is not unexpected since it states that the density is inversely proportional to the volume. Using this result in the field equation~(\ref{field1a}) yields the solution $a(t) \propto t^{2/3}$.
Secondly, we consider $w=1/3$ which corresponds to a radiation dominated universe. In that case, the conservation equation gives
\begin{align}
\rho \propto a^{-4}.
\label{rad}
\end{align}
The remaining field equation can then be solved to find $a(t) \propto t^{1/2}$.
Lastly, we consider the case where $\rho = p = 0$, however, we assume $\Lambda >0$. Then, we can integrate~(\ref{field2a}) and find
\begin{align}
a(t) \propto \exp\left(\sqrt{\Lambda/3}\, t\right).
\end{align}
This solution is generally called the de Sitter solution and corresponds to a universe which undergoes an accelerated expansion.
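The three solutions above can be verified directly. In the sketch below (our own illustration using \texttt{sympy}) we use that, for $k=0$, the first field equation gives $\kappa\rho = 3\dot{a}^2/a^2 - \Lambda$, and check the scalings~(\ref{matter}) and~(\ref{rad}) as well as the de Sitter solution:

```python
import sympy as sp

t, Lam = sp.symbols('t Lambda', positive=True)

def three_H_squared(a):
    # 3 (a'/a)^2, which equals kappa*rho + Lambda when k = 0
    return 3*(sp.diff(a, t)/a)**2

# matter domination (Lambda = 0): a ~ t^(2/3) must give kappa*rho ~ a^(-3)
a_m = t**sp.Rational(2, 3)
print(sp.simplify(three_H_squared(a_m)*a_m**3))   # constant, so rho ∝ a^{-3}

# radiation domination (Lambda = 0): a ~ t^(1/2) must give kappa*rho ~ a^(-4)
a_r = sp.sqrt(t)
print(sp.simplify(three_H_squared(a_r)*a_r**4))   # constant, so rho ∝ a^{-4}

# de Sitter: a ~ exp(sqrt(Lambda/3) t) solves 3H^2 = Lambda exactly
a_ds = sp.exp(sp.sqrt(Lam/3)*t)
print(sp.simplify(three_H_squared(a_ds) - Lam))   # -> 0
```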
\subsection{A very brief history of the universe}
\label{sec:briefhist}
Based on a variety of observations, the evolution of the universe can be reconstructed fairly accurately. We are currently living in a matter dominated universe $w=0$, and there is strong evidence for the presence of a positive cosmological constant $\Lambda >0$. Moreover, the spatial curvature of the universe appears to be zero $k=0$; observations place tight constraints on models with $k \neq 0$.
Since the universe is currently expanding, it must have been smaller and denser in the past. From equations~(\ref{matter}) and~(\ref{rad}) we see that radiation decays faster than matter in an expanding universe. Therefore, at some point in the past, the universe was dominated by radiation. Going back in time further, the universe was very dense and therefore hot and relatively small. The `beginning' of the universe is often referred to as the big bang, giving the image of a vast explosion from which the evolution of the universe started.
It appears very likely that the universe also underwent a period of accelerated expansion at its very early stages, similar to the late time acceleration due to the cosmological term. The reasons for this are beyond the scope of this short introduction, however, we note that this epoch is called inflation.
Very roughly speaking, the standard model of cosmology can be summarised by the succession of the following dominated eras
\begin{align}
\text{inflation} \longrightarrow \text{radiation}
\longrightarrow \text{matter}
\longrightarrow \text{cosmological term}
\label{good}
\end{align}
and a good cosmological model should be able to reproduce (parts of) this pattern.
\subsection{A first taste of dynamical systems}
\label{sec:firsttaste}
In order to get a first taste of the usefulness of dynamical systems techniques in cosmology~\cite{dsincosmology,Coley:2003mj}, let us consider a universe which is spatially flat $k=0$, and its matter content is radiation $\rho_{\rm r}$ with $w=1/3$, and a perfect fluid (dust) $\rho_{\rm m}$ with $w=0$. The following four equations completely determine the dynamics of the system
\begin{subequations}
\label{ex:fall}
\begin{align}
3H^2 - \Lambda &= \kappa\, (\rho_{\rm m} + \rho_{\rm r})
\label{ex:f1}\\
-2\dot{H} - 3H^2 + \Lambda &= \kappa\, \frac{1}{3} \rho_{\rm r}
\label{ex:f2}\\
\dot{\rho}_{\rm r} + 4H\rho_{\rm r} &= 0
\label{ex:f3}\\
\dot{\rho}_{\rm m} + 3H\rho_{\rm m} &= 0.
\label{ex:f4}
\end{align}
\end{subequations}
Using the dimensionless density parameters $\Omega_{\rm m}$, $\Omega_{\rm r}$ and $\Omega_{\Lambda}$, we find that equation~(\ref{ex:f1}) becomes the constraint
\begin{align}
1 = \Omega_{\rm m} + \Omega_{\rm r} + \Omega_{\Lambda}
\label{ex:c1}
\end{align}
which means that we have two independent quantities, and we choose to work with $\Omega_{\rm m}$ and $\Omega_{\rm r}$. Moreover, since we expect energy densities to be positive we also have the conditions $0 \leq \Omega_{\rm m} \leq 1$ and $0 \leq \Omega_{\rm r} \leq 1$. Assuming a non-negative cosmological term, equation~(\ref{ex:c1}) in addition requires $0 \leq \Omega_{\Lambda} \leq 1$, or equivalently $\Omega_{\rm m} + \Omega_{\rm r} \leq 1$.
The solution to the system~(\ref{ex:fall}) at any given time $t$ will correspond to a point in the $(\Omega_{\rm m},\Omega_{\rm r})$ plane. The constraint equation~(\ref{ex:c1}) together with the aforementioned inequalities reduces the allowed $(\Omega_{\rm m},\Omega_{\rm r})$ plane to the triangle\footnote{To the best of our knowledge this idea goes back to Nicola Tamanini.} defined by $\Delta = \{(\Omega_{\rm m},\Omega_{\rm r})\, |\, 0 \leq \Omega_{\rm m} + \Omega_{\rm r} \leq 1 \cap 0 \leq \Omega_{\rm m} \leq 1 \cap 0 \leq \Omega_{\rm r} \leq 1\}$, see also Figure~\ref{fig1}.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\textwidth]{figures/fig_triangle_n}
\caption{This figure shows the triangle defined by $\{(\Omega_{\rm m},\Omega_{\rm r})\, |\, 0 \leq \Omega_{\rm m} + \Omega_{\rm r} \leq 1 \cap 0 \leq \Omega_{\rm m} \leq 1 \cap 0 \leq \Omega_{\rm r} \leq 1\}$. Every solution to the field equations~(\ref{ex:fall}) corresponds to a trajectory inside this triangle, one calls this region the phase space of the system.}
\label{fig1}
\end{figure}
Next, we wish to find the dynamical equations for the dimensionless variables $\Omega_{\rm m}$ and $\Omega_{\rm r}$. This requires a slightly lengthy but otherwise straightforward calculation of which we will show some details. We start with
\begin{align}
\frac{d}{dt} \Omega_{\rm m} =
\frac{d}{dt} \left(\frac{\kappa\, \rho_{\rm m}}{3H^2}\right) =
\frac{\kappa}{3} \frac{\dot{\rho}_{\rm m} H^2 - \rho_{\rm m} 2 H \dot{H}}{H^4} =
\frac{\kappa}{3 H} \left(\frac{\dot{\rho}_{\rm m}}{H} - 2 \rho_{\rm m} \frac{\dot{H}}{H^2}\right).
\end{align}
From~(\ref{ex:f4}) we get an expression for $\dot{\rho}_{\rm m}/H$, while~(\ref{ex:f2}) can be solved for $\dot{H}/H^2$. This yields
\begin{align}
\frac{1}{H}\frac{d}{dt} \Omega_{\rm m} &= \frac{\kappa}{3 H^2} \left(
-3\rho_{\rm m} + 3\rho_{\rm m} \bigl(1-\frac{\Lambda}{3H^2} + \frac{\kappa\, \rho_{\rm r}}{9H^2}\bigr)\right) \\
&= -3 \Omega_{\rm m} + 3 \Omega_{\rm m}(1-\Omega_{\Lambda}+\Omega_{\rm r}/3).
\end{align}
The last step is to eliminate $\Omega_{\Lambda}$ using~(\ref{ex:c1}) which gives the equation
\begin{align}
\frac{1}{H}\frac{d}{dt} \Omega_{\rm m} &=
-3 \Omega_{\rm m} +
3 \Omega_{\rm m}(\Omega_{\rm m}+\Omega_{\rm r} + \Omega_{\rm r}/3) \\
&= -3 \Omega_{\rm m} +
3 \Omega_{\rm m}(\Omega_{\rm m} + 4\Omega_{\rm r}/3) \\
&= \Omega_{\rm m}(3\Omega_{\rm m} + 4\Omega_{\rm r} -3).
\end{align}
We now note that
\begin{align}
d \log(a) = \frac{\dot{a}}{a} dt = H dt
\end{align}
which means that by introducing the new independent variable $N = \log(a)$ and denoting differentiation with respect to $N$ by a prime, we finally arrive at
\begin{align}
\Omega'_{\rm m} = \Omega_{\rm m}(3\Omega_{\rm m} + 4\Omega_{\rm r} -3).
\label{ex:o1}
\end{align}
Following a similar calculation one can find the corresponding equation for $\Omega_{\rm r}$ which is given by
\begin{align}
\Omega'_{\rm r} = \Omega_{\rm r}(3\Omega_{\rm m} + 4\Omega_{\rm r} -4).
\label{ex:o2}
\end{align}
For any set of initial conditions $(\Omega_{\rm m}(N_i),\Omega_{\rm r}(N_i))$ with initial `time' $N_i$ in the triangle $\Delta$, the equations~(\ref{ex:o1}) and~(\ref{ex:o2}) will determine a trajectory which describes the dynamical behaviour of the cosmological model we are studying. It should be noted that equations~(\ref{ex:o1}) and~(\ref{ex:o2}) do not depend explicitly on the `time' parameter $N$, such a system is called an autonomous system of equations, or a dynamical system. Equations of this type can be studied using particular methods developed for such systems. In the next Section we will give a brief introduction to dynamical systems and the most common methods used to analyse them.
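To see the system~(\ref{ex:o1})--(\ref{ex:o2}) in action, one can integrate it numerically. The sketch below (our own illustration using \texttt{numpy} and a hand-rolled Runge--Kutta step; the initial values are an arbitrary choice with a very small cosmological term) starts close to radiation domination and shows the trajectory passing through a matter dominated phase before settling at $\Omega_{\rm m} = \Omega_{\rm r} = 0$:

```python
import numpy as np

def rhs(state):
    # right-hand sides of (ex:o1) and (ex:o2)
    om, orad = state
    return np.array([om*(3*om + 4*orad - 3),
                     orad*(3*om + 4*orad - 4)])

def rk4(state, h, steps):
    """Classical 4th-order Runge-Kutta integration in the 'time' N = log(a)."""
    traj = [state]
    for _ in range(steps):
        k1 = rhs(state)
        k2 = rhs(state + h/2*k1)
        k3 = rhs(state + h/2*k2)
        k4 = rhs(state + h*k3)
        state = state + h/6*(k1 + 2*k2 + 2*k3 + k4)
        traj.append(state)
    return np.array(traj)

# start near R = (0, 1): a little matter, a tiny cosmological term
traj = rk4(np.array([0.01, 0.99 - 1e-10]), h=0.01, steps=1500)
print(traj[-1])           # ≈ (0, 0): the trajectory ends at the Lambda point
print(traj[:, 0].max())   # Omega_m exceeds 0.5: a matter dominated phase
```

The trajectory stays inside the triangle $\Delta$ throughout, as it must, since the boundaries $\Omega_{\rm m}=0$, $\Omega_{\rm r}=0$ and $\Omega_{\rm m}+\Omega_{\rm r}=1$ are invariant under the flow.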
\section{Some aspects of dynamical systems}
What is a dynamical system? It can be anything ranging from something as simple as a single pendulum to something as complex as the human brain and the entire universe itself. In general, a dynamical system can be thought of as any abstract system consisting of
\begin{enumerate}
\item a space (state space or phase space), and
\item a mathematical rule describing the evolution of any point in that space.
\end{enumerate}
The second point is crucial. Finding a mathematical rule which, for instance, describes the evolution of information at any neuron in the human brain is probably impossible. So, we need a mathematical rule as an input and finding one might be very difficult indeed.
The state of the system we are interested in is described by a set of quantities which are considered important about the system, and the state space is the set of all possible values of these quantities. In the case of the pendulum, the position of the mass and its momentum are natural quantities to specify the state of the system. For more complicated systems like the universe as a whole, the choice of good quantities is not at all obvious and it turns out to be useful to choose convenient variables. It is possible to analyse the same dynamical system with different sets of variables, either of which might be more suitable to a particular question.
There are two main types of dynamical systems: The first are continuous dynamical systems whose evolution is defined by a set of ordinary differential equations (ODEs) and the other ones are called time-discrete dynamical systems which are defined by a map or difference equations. In the context of cosmology we are studying the Einstein field equations which for a homogeneous and isotropic space result in a system of ODEs. Thus we are only interested in continuous dynamical systems and will not discuss time-discrete dynamical systems in the remainder.
Let us denote $\mathbf{x} = (x_1,x_2,\ldots,x_n) \in X$ to be an element of the state space $X\subseteq\mathbb{R}^n$. The standard form of a dynamical system is usually expressed as~\cite{wigginsbook}
\begin{align}
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})
\label{dy1}
\end{align}
where the function $\mathbf{f}:X\rightarrow X$ and where the dot denotes differentiation with respect to some suitable time parameter. We view the function $\mathbf{f}$ as a vector field on $\mathbb{R}^n$ such that
\begin{align}
\mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}),\cdots,f_n(\mathbf{x})).
\end{align}
The ODEs~(\ref{dy1}) define the vector field of the system. At any point $\mathbf{x} \in X$ and any particular time $t$, $\mathbf{f}(\mathbf{x})$ defines a vector field in $\mathbb{R}^n$. When discussing a particular solution to~(\ref{dy1}) this will often be denoted by $\psi(t)$ to simplify the notation. We restrict ourselves to systems which are finite dimensional and continuous. In fact, we will require the function $\mathbf{f}$ to be at least differentiable in $X$.
\begin{definition}[Critical point or fixed point]
The autonomous equation $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$ is said to have a critical point or fixed point at $\mathbf{x} = \mathbf{x}_0$ if and only if $\mathbf{f}(\mathbf{x}_0) = 0$.
\label{def1}
\end{definition}
There is an easy way to justify this definition. Let us consider a one dimensional mechanical system with force $F$. Newton's equation for such a system is
\begin{align}
m \ddot{x} = F(x).
\label{newton}
\end{align}
Let us introduce a second variable $p = m\dot{x}$ such that the single second order ODE~(\ref{newton}) becomes a system of two first order equations
\begin{align}
\dot{x} &= p/m \\
\dot{p} &= F(x).
\label{newton2}
\end{align}
Therefore, according to Definition~\ref{def1}, the critical points of system~(\ref{newton2}) correspond to those points $x$ where the force vanishes $F(x)=0$. At these points, there is no force acting on the particle and the system could, in principle, remain in this (steady) state indefinitely.
This leads to the question of stability of a critical point or fixed point. The following two definitions will clarify what is meant by stable and asymptotically stable. In simple words a fixed point $x_0$ of the system~(\ref{dy1}) is called stable if all solutions $\mathbf{x}(t)$ starting near $\mathbf{x}_0$ stay close to it.
\begin{definition}[Stable fixed point]
Let $\mathbf{x}_0$ be a fixed point of system~(\ref{dy1}). It is called stable if for every $\varepsilon > 0$ we can find a $\delta$ such that if $\psi(t)$ is any solution of~(\ref{dy1}) satisfying $\|\psi(t_0)-\mathbf{x}_0\| < \delta$, then the solution $\psi(t)$ exists for all $t \geq t_0$ and it will satisfy $\|\psi(t)-\mathbf{x}_0\| < \varepsilon$ for all $t \geq t_0$.
\label{def2}
\end{definition}
The point is called asymptotically stable if it is stable and the solutions approach the critical point for all nearby initial conditions.
\begin{definition}[Asymptotically stable fixed point]
Let $\mathbf{x}_0$ be a stable fixed point of system~(\ref{dy1}). It is called asymptotically stable if there exists a number $\delta$ such that if $\psi(t)$ is any solution of~(\ref{dy1}) satisfying $\|\psi(t_0)-\mathbf{x}_0\| < \delta$, then $\lim_{t \rightarrow \infty} \psi(t) = \mathbf{x}_0$.
\label{def3}
\end{definition}
The main difference is simply that all trajectories near an asymptotically stable fixed point will eventually reach that point while trajectories near a stable point could for instance circle around that point. If the point is unstable then solutions will move away from it.
We will not encounter fixed points which are stable but not asymptotically stable when studying cosmological dynamical systems.
Having defined a concept of stability, we will now discuss methods which can be used to analyse the stability properties of critical points.
\subsection{Linear stability theory}
The basic idea of linear stability theory can be explained neatly using the above one dimensional mechanical system $m \ddot{x} = F(x)$. Let us assume that there is a point $x_0$ where the force vanishes $F(x_0)=0$. Can we find the behaviour of the particle near this point? We set $x(t) = x_0 + \delta x(t)$ and assume $\delta x(t)$ to be small. Then $\ddot{x}(t) = \ddot{\delta x}(t)$ and $F(x) = F(x_0 + \delta x) \approx F(x_0) + F'(x_0) \delta x + \ldots = F'(x_0) \delta x + \ldots$ (recall $F(x_0)=0$) so that Newton's equation near the critical point becomes $m \ddot{\delta x} = F'(x_0) \delta x$ where $F'(x_0)$ is a constant. This is a linear second order constant coefficient ODE, its auxiliary equation is simply $\lambda^2 = F'(x_0)/m$. Therefore, the sign of $F'(x_0)$ determines the stability properties of the point $x_0$. If $F'(x_0) < 0$ the solution involves trigonometric functions and we would speak of a stable point, for $F'(x_0) > 0$ the solution would involve exponentials and we would refer to this point as unstable.
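As a concrete instance of this argument, consider a pendulum-like force $F(x) = -\sin x$ with $m=1$ (an illustrative choice of ours). The following sketch (using \texttt{sympy}) locates two critical points and evaluates $F'(x_0)$ at each:

```python
import sympy as sp

x = sp.symbols('x')
F = -sp.sin(x)   # illustrative pendulum-like force, with m = 1

for x0 in [0, sp.pi]:
    assert F.subs(x, x0) == 0              # both points are critical points
    print(x0, sp.diff(F, x).subs(x, x0))   # sign of F'(x0) decides stability
# F'(0) = -1 < 0: oscillatory solutions, a stable point
# F'(pi) = +1 > 0: exponentially growing solutions, an unstable point
```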
Exactly the same ideas can be utilised when studying an arbitrary dynamical system. Let $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$ be a given dynamical system with fixed point at $\mathbf{x}_0$. We will now linearise the system around its critical point. Since $\mathbf{f}(\mathbf{x}) = (f_1(\mathbf{x}),\ldots,f_n(\mathbf{x}))$, we can Taylor expand each $f_i(x_1,x_2,\ldots,x_n)$ near $\mathbf{x}_0$
\begin{align}
f_i(\mathbf{x}) = f_i(\mathbf{x}_0) +
\sum_{j=1}^{n} \frac{\partial f_i}{\partial x_j}(\mathbf{x}_0) y_j +
\frac{1}{2!} \sum_{j,k=1}^{n} \frac{\partial^2 f_i}{\partial x_j \partial x_k}(\mathbf{x}_0) y_j y_k + \ldots
\end{align}
where the vector $\mathbf{y}$ is defined by $\mathbf{y} = \mathbf{x} - \mathbf{x}_0$. Note that in what follows we are only interested in the first partial derivatives. Therefore, of particular importance is the object $\partial f_i/\partial x_j$ which, interpreted as a matrix, is the Jacobian matrix of the vector-valued function $\mathbf{f}$ familiar from vector calculus. We define
\begin{align}
J = \frac{\partial f_i}{\partial x_j} =
\begin{pmatrix}
\frac{\partial f_1}{\partial x_1} &
\ldots &
\frac{\partial f_1}{\partial x_n} \\
\vdots & \ddots & \vdots \\
\frac{\partial f_n}{\partial x_1} &
\ldots &
\frac{\partial f_n}{\partial x_n}
\end{pmatrix}
\label{Jac}
\end{align}
It is the eigenvalues of the Jacobian matrix $J$, evaluated at the critical points $\mathbf{x}_0$, which contain the information about stability. In this context $J$ is sometimes referred to as the stability matrix of the system. As $J$ is an $n \times n$ matrix, it will have $n$, possibly complex, eigenvalues (counting repeated eigenvalues accordingly). Recalling the example of the one dimensional mechanical system at the beginning, it is clear that this approach might encounter problems if one or more of the eigenvalues are zero. This motivates the following definition~\cite{wigginsbook}.
\begin{definition}[Hyperbolic point]
Let $\mathbf{x} = \mathbf{x}_0\in X \subset \mathbb{R}^n$ be a fixed point (critical point) of the system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$. Then $\mathbf{x}_0$ is said to be hyperbolic if none of the eigenvalues of the Jacobian matrix $J(\mathbf{x}_0)$ have zero real part. Otherwise the point is called non-hyperbolic.
\end{definition}
Linear stability theory fails for non-hyperbolic points and other methods have to be employed to study the stability properties.
Roughly speaking, we distinguish three broad cases: if all eigenvalues have negative real parts, we can regard the point as stable. If some eigenvalues have positive real parts while others have negative real parts, the fixed point is not stable and corresponds to a saddle point, which attracts trajectories along some directions but repels them along others. Lastly, all eigenvalues could have positive real parts, in which case all trajectories are repelled and the point is unstable.
In more than 3 dimensions it becomes very difficult to classify all possible critical points based on their eigenvalues. However, in dimensions 2 and 3 this can be done. In the following we present all possible cases for two dimensional autonomous systems.
Let us consider the two dimensional autonomous system given by
\begin{subequations}
\label{2dsys}
\begin{align}
\dot{x} &= f(x,y) \\
\dot{y} &= g(x,y)
\end{align}
\end{subequations}
where $f$ and $g$ are (smooth) functions of $x$ and $y$. We assume that there exists a hyperbolic critical point at $(x_0,y_0)$ so that $f(x_0,y_0)=0$ and $g(x_0,y_0)=0$. The Jacobian matrix of the system is given by
\begin{align}
J =
\begin{pmatrix}
f_{,x} & f_{,y} \\
g_{,x} & g_{,y}
\end{pmatrix}
\label{Jacobian:2d}
\end{align}
where $f_{,x}$ denotes the partial derivative of $f$ with respect to $x$. Its two eigenvalues $\lambda_{1,2}$ are given by
\begin{subequations}
\begin{align}
\lambda_{1} &=
\frac{1}{2}(f_{,x}+g_{,y})
+ \frac{1}{2}\sqrt{(f_{,x}-g_{,y})^2 + 4 f_{,y}g_{,x}}\\
\lambda_{2} &=
\frac{1}{2}(f_{,x}+g_{,y})
- \frac{1}{2}\sqrt{(f_{,x}-g_{,y})^2 + 4 f_{,y}g_{,x}}
\end{align}
\end{subequations}
and are to be evaluated at the fixed point $(x_0,y_0)$.
Table~\ref{tab1} contains all possible cases in order to understand the stability or instability properties of the critical point $(x_0,y_0)$ based on the two eigenvalues $\lambda_1$ and $\lambda_2$.
\begin{table}[!htb]
\centering
\begin{tabular}{|p{0.25\textwidth}|p{0.65\textwidth}|}
\hline
Eigenvalues & Description \\
\hline\hline
$\lambda_1<0$, $\lambda_2<0$ & the fixed point is asymptotically stable and trajectories starting near that point will approach that point $\lim_{t\rightarrow \infty} (x(t),y(t)) = (x_0,y_0)$ \\
\hline
$\lambda_1>0$, $\lambda_2>0$ & the fixed point is unstable and trajectories will be repelled from the point $\lim_{t\rightarrow -\infty} (x(t),y(t)) = (x_0,y_0)$. We can speak of $(x_0,y_0)$ as the past time attractor \\
\hline
$\lambda_1<0$, $\lambda_2>0$ & the fixed point is a saddle point. Some trajectories will be repelled, others will be attracted \\
\hline
$\lambda_1 =0$, $\lambda_2>0$ & the point is unstable. The positive eigenvalue ensures that there is at least one unstable direction \\
\hline
$\lambda_1 =0$, $\lambda_2<0$ & linear stability theory fails to determine stability. The point is non-hyperbolic and other methods are needed to study the behaviour of trajectories near that point \\
\hline
$\lambda_1 = \alpha+i\beta$, $\lambda_2 = \alpha-i\beta$ & with $\alpha>0$ and $\beta\neq 0$ the fixed point is an unstable spiral \\
\hline
$\lambda_1 = \alpha+i\beta$, $\lambda_2 = \alpha-i\beta$ & with $\alpha<0$ and $\beta\neq 0$ the fixed point is a stable spiral \\
\hline
$\lambda_1 = i\beta$ , $\lambda_2 = -i\beta $ & solutions are oscillatory and the point is called a centre. Note that a critical point being a centre is not related to centre manifolds.\\
\hline
\end{tabular}
\caption{Stability or instability properties of the critical point $(x_0,y_0)$ based on the two eigenvalues $\lambda_1$ and $\lambda_2$.}
\label{tab1}
\end{table}
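The case distinctions of Table~\ref{tab1} translate directly into a small classification routine. The following sketch (our own illustration using \texttt{numpy}) takes the eigenvalues of the Jacobian at a fixed point and returns the corresponding label; whenever some eigenvalue has vanishing real part the point is flagged as non-hyperbolic, since linear stability theory is then inconclusive:

```python
import numpy as np

def classify(eigenvalues, tol=1e-12):
    """Classify a fixed point from the eigenvalues of its Jacobian matrix."""
    re = np.real(eigenvalues)
    im = np.imag(eigenvalues)
    if np.any(np.abs(re) < tol):
        return 'non-hyperbolic'
    if np.all(re < 0):
        return 'stable spiral' if np.any(np.abs(im) > tol) else 'stable node'
    if np.all(re > 0):
        return 'unstable spiral' if np.any(np.abs(im) > tol) else 'unstable node'
    return 'saddle'

print(classify(np.array([-3.0, -4.0])))        # stable node
print(classify(np.array([-1.0, 3.0])))         # saddle
print(classify(np.array([0.5+2j, 0.5-2j])))    # unstable spiral
```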
\subsection*{Example -- Cosmology with matter, radiation and cosmological term}
Recall the cosmological dynamical system~(\ref{ex:o1}) and~(\ref{ex:o2}) which will be our base model henceforth. The equations read
\begin{subequations}
\label{ex:sys1}
\begin{align}
\Omega'_{\rm m} &= \Omega_{\rm m}(3\Omega_{\rm m} + 4\Omega_{\rm r} -3)\\
\Omega'_{\rm r} &= \Omega_{\rm r}(3\Omega_{\rm m} + 4\Omega_{\rm r} -4)\\
1 &= \Omega_{\rm m} + \Omega_{\rm r} + \Omega_{\Lambda}.
\label{ex:sys1c}
\end{align}
\end{subequations}
We can find the fixed points of this system by solving the simultaneous equations $\Omega'_{\rm m} = 0$ and $\Omega'_{\rm r} = 0$ for the pair $(\Omega_{\rm m},\Omega_{\rm r})$. We find three fixed points, namely $O=(0,0)$, $R=(0,1)$ and $M=(1,0)$. As we use the relative energy densities $\Omega_{i}$ as our dynamical variables, it is easy to interpret those fixed points. At $R$, the radiation dominates and normal matter is absent. Likewise, at $M$, the normal matter dominates while radiation is absent. The point $O$ contains neither radiation nor matter, and is therefore dominated by the cosmological term because of~(\ref{ex:sys1c}).
The Jacobian matrix of system~(\ref{ex:sys1}) is computed straightforwardly. Evaluated at the three fixed points, we find
\begin{align}
J(O) =
\begin{pmatrix}
-3 & 0 \\
0 & -4
\end{pmatrix},
\quad
J(R) =
\begin{pmatrix}
1 & 0 \\
3 & 4
\end{pmatrix},
\quad
J(M) =
\begin{pmatrix}
3 & 4 \\
0 & -1
\end{pmatrix},
\end{align}
respectively. The corresponding eigenvalues of the stability matrix are given by
\begin{subequations}
\begin{alignat}{3}
&O: &\qquad \lambda_1 &= -3, &\quad \lambda_2 &= -4 \\
&R: &\qquad \lambda_1 &= 1, &\quad \lambda_2 &= 4 \\
&M: &\qquad \lambda_1 &= -1, &\quad \lambda_2 &= 3
\end{alignat}
\end{subequations}
which implies that $O$ is the only attractor of the system. Therefore, all trajectories will eventually approach $O$. The point $R$ is unstable; since both eigenvalues are positive, we can think of $R$ as the only past time attractor, meaning that all trajectories will have `started' at $R$. Lastly, $M$ is a saddle point: some trajectories are attracted towards $M$ but are eventually repelled and move towards $O$. The phase space diagram Fig.~\ref{fig2} clearly shows these features.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\textwidth]{figures/fig_triangle_2_n}
\caption{Phase space diagram of system~(\ref{ex:sys1}).}
\label{fig2}
\end{figure}
In the cosmological context this has the following interpretation. Consider a spatially flat universe filled with normal matter and radiation, and with a very small cosmological term\footnote{If the cosmological term happens to be `large' then matter will never dominate and one obtains an almost direct transition from radiation to a state where the cosmological term dominates.}. Such a universe will generically be dominated by radiation at early times, then it will undergo a period where matter dominates its energy contents. Eventually it will evolve to a state where the cosmological term dominates. This result is in line with our expectation of a good cosmological model, see~(\ref{good}).
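The fixed points and eigenvalues quoted above are easy to check numerically. The following is a minimal sketch in plain Python (standard library only; all helper names are ours), hard-coding the analytic Jacobian entries obtained from the partial derivatives of~(\ref{ex:sys1}):

```python
# Verify the fixed points O, R, M of the matter-radiation-Lambda system and
# recover the eigenvalues of the Jacobian at each of them.

def rhs(om, orad):
    """Right-hand sides (Omega_m', Omega_r') of the dynamical system."""
    return (om * (3 * om + 4 * orad - 3),
            orad * (3 * om + 4 * orad - 4))

def jacobian(om, orad):
    """Analytic Jacobian of the system at (Omega_m, Omega_r)."""
    return [[6 * om + 4 * orad - 3, 4 * om],
            [3 * orad, 3 * om + 8 * orad - 4]]

def eigenvalues_2x2(J):
    """Eigenvalues of a real 2x2 matrix from trace and determinant."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    root = (tr * tr - 4 * det) ** 0.5   # real for the three points considered
    return sorted(((tr - root) / 2, (tr + root) / 2))

for name, point in [("O", (0, 0)), ("R", (0, 1)), ("M", (1, 0))]:
    assert rhs(*point) == (0, 0)        # each point really is a fixed point
    print(name, eigenvalues_2x2(jacobian(*point)))
```

Running this reproduces the eigenvalue pairs $(-3,-4)$, $(1,4)$ and $(-1,3)$ found above, confirming that $O$ is the future attractor, $R$ the past attractor and $M$ a saddle point.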
\subsection{Lyapunov functions}
The following method of studying the stability of a fixed point goes back to Lyapunov. It is completely different from linear stability theory and can be applied directly to the system in question. The main difficulty with this approach is that one has to guess the Lyapunov function, since there is no systematic way of constructing one. Let us start by defining what a Lyapunov function is and its relation to the stability of an autonomous system of equations.
\begin{definition}[Lyapunov function]
Let $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$ with $\mathbf{x} \in X \subset \mathbb{R}^n$ be a smooth autonomous system of equations with fixed point $\mathbf{x}_0$. Let $V : \mathbb{R}^n \rightarrow \mathbb{R}$ be a continuous function in a neighbourhood $U$ of $\mathbf{x}_0$. Then $V$ is called a Lyapunov function for the point $\mathbf{x}_0$ if
\begin{enumerate}
\item $V$ is differentiable in $U \setminus \{\mathbf{x}_0\}$
\item $V(\mathbf{x}) > V(\mathbf{x}_0)$ for all $\mathbf{x} \in U \setminus \{\mathbf{x}_0\}$
\item $\dot{V} \leq 0 \quad \forall \mathbf{x}\in U \setminus \{\mathbf{x}_0\}$.
\end{enumerate}
Note that the third requirement is the crucial one. It implies
\begin{align}
\frac{d}{dt} V(x_1,x_2,\ldots,x_n) & = \frac{\partial V}{\partial x_1} \dot{x}_1 + \ldots
+ \frac{\partial V}{\partial x_n} \dot{x}_n
\nonumber \\
&= \frac{\partial V}{\partial x_1} f_1 + \ldots
+ \frac{\partial V}{\partial x_n} f_n \leq 0
\end{align}
which requires repeated use of the chain rule and substitution of the autonomous system equations to eliminate the terms $\dot{x}_i$ for $i=1,\ldots,n$.
\end{definition}
One can conveniently write $dV/dt$ using vector calculus notation
\begin{align}
\frac{d}{dt} V(x_1,x_2,\ldots,x_n) = \operatorname{grad} V \cdot \dot{\mathbf{x}}
= \operatorname{grad} V \cdot \mathbf{f}(\mathbf{x}).
\end{align}
Let us now state the main theorem which connects a Lyapunov function to the stability of a fixed point of a dynamical system.
\begin{theorem}[Lyapunov stability]\label{lyapunovtheorem}
Let $\mathbf{x}_0$ be a critical point of the system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$, and let $U$ be a domain containing $\mathbf{x}_0$. If there exists a Lyapunov function $V(\mathbf{x})$ for which $\dot{V} \leq 0$, then $\mathbf{x}_0$ is a stable fixed point. If there exists a Lyapunov function $V(\mathbf{x})$ for which $\dot{V} < 0$, then $\mathbf{x}_0$ is an asymptotically stable fixed point.
Furthermore, if $U = \mathbb{R}^n$ and $V(\mathbf{x})\rightarrow\infty$ as $\|\mathbf{x}\| \rightarrow \infty$, then $\mathbf{x}_0$ is said to be globally stable or globally asymptotically stable, respectively.
\end{theorem}
One can also find some instability results, see e.g.~\cite{Brauer:1989}, which will also depend on our ability to find a suitable Lyapunov function. However, we will not use results along those lines since we are mainly concerned about the stability of certain fixed points in the context of cosmology.
Should we be able to find a Lyapunov function satisfying the criteria of the Lyapunov stability theorem, we could establish (asymptotic) stability without any reference to a solution of the ODEs. However, failing to find a Lyapunov function at a particular point does not necessarily imply that the point is unstable. Since there is no systematic way of constructing such a function, it is possible that we were simply not clever enough to find one for the critical point concerned.
\subsubsection*{A first example}
This first example is taken from~\cite{wigginsbook}. Suppose that a system is described by the vector field
\begin{subequations}
\begin{align}
\dot{x} &= y\\
\dot{y} &= -x + \epsilon x^2y
\end{align}
\end{subequations}
which has one critical point at $(x,y)= (0,0)$. A candidate Lyapunov function is given by
\begin{align}
V(x,y) = \frac{x^2+y^2}{2},
\end{align}
satisfying $V(0,0)= 0$ and $V(x,y) > 0$ in the neighbourhood of the fixed point. This function leads to
\begin{align}
\dot{V} = \operatorname{grad} V \cdot (\dot{x},\dot{y}) = \epsilon x^2 y^2
\end{align}
from which we conclude that the point is globally asymptotically stable if $\epsilon < 0$. Strictly speaking, $x^2 y^2$ is only positive semi-definite since it vanishes on the coordinate axes, so we merely have $\dot{V} \leq 0$; asymptotic stability then follows from an invariance argument, because no trajectory other than the fixed point itself remains on the axes. It is important to emphasise, however, that $\epsilon > 0$ does not imply instability.
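The algebra leading to $\dot{V} = \epsilon x^2 y^2$ can be spot-checked numerically; the following is a small sketch in plain Python (the helper name `vdot` is ours):

```python
# Check that grad V . (xdot, ydot) reduces to eps * x^2 * y^2 for this system.

def vdot(x, y, eps):
    grad = (x, y)                       # gradient of V = (x^2 + y^2)/2
    f = (y, -x + eps * x * x * y)       # right-hand side of the system
    return grad[0] * f[0] + grad[1] * f[1]

for x, y, eps in [(0.3, -0.7, -1.0), (-1.2, 0.5, 2.0), (0.01, 0.02, -0.5)]:
    assert abs(vdot(x, y, eps) - eps * x**2 * y**2) < 1e-12
```

Note that the sign of $\dot{V}$ is controlled entirely by $\epsilon$, in agreement with the discussion above.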
\subsubsection*{A second example}
Let us consider the system
\begin{subequations}
\label{Lya:ex2}
\begin{align}
\dot{x} &= -x^3 + x y\\
\dot{y} &= -y - 2x^2 - x^2 y
\end{align}
\end{subequations}
which has one fixed point at $(x,y)= (0,0)$. Computing the eigenvalues of the Jacobian matrix at the fixed point yields $\lambda_1 = -1$ and $\lambda_2 = 0$. Therefore, we cannot decide, based on linear stability theory, whether the origin is stable or not. However, starting with the candidate Lyapunov function
\begin{align}
V(x,y) = 2 x^2 + y^2
\end{align}
leads to
\begin{align}
\dot{V} = -4x^4 - 2y^2 - 2x^2y^2.
\end{align}
Therefore the point is globally asymptotically stable since $\dot{V}$ is negative definite, giving $\dot{V} < 0$ everywhere away from the fixed point, while $V$ itself is positive definite and radially unbounded. This example has been adapted from a similar one in~\cite{jackcarr}.
Note that the phase space plot is in agreement with our conclusion of stability, see Fig.~\ref{fig3}.
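As a numerical illustration (a sketch in plain Python; the step size and initial data are our arbitrary choices), one can integrate~(\ref{Lya:ex2}) with a crude forward-Euler scheme and watch $V = 2x^2 + y^2$ decrease monotonically along the trajectory:

```python
# Forward-Euler integration of the system; V should decrease at every step
# since dV/dt = -4x^4 - 2y^2 - 2x^2 y^2 < 0 away from the origin.

def step(x, y, h=1e-3):
    return (x + h * (-x**3 + x * y),
            y + h * (-y - 2 * x**2 - x**2 * y))

def V(x, y):
    return 2 * x**2 + y**2

x, y = 0.5, 0.5
values = [V(x, y)]
for _ in range(5000):
    x, y = step(x, y)
    values.append(V(x, y))

assert all(b < a for a, b in zip(values, values[1:]))  # strictly decreasing
```

The monotone decrease of $V$ along the numerical trajectory mirrors the analytic statement $\dot{V} < 0$.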
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\textwidth]{figures/fig_Lya_ex2_n}
\caption{Phase space plot of the system~(\ref{Lya:ex2}).}
\label{fig3}
\end{figure}
\subsection{Centre manifold theory}
Centre manifold theory is a method that allows us to simplify dynamical systems by reducing their dimensionality near fixed points with vanishing eigenvalues of the Jacobian matrix. It is also central to other elegant concepts such as bifurcations and the method of normal forms~\cite{normalformsbook}. Here the essential basics of centre manifold theory are discussed following~\cite{wigginsbook} and~\cite{jackcarr}.
Let us, as above, consider the dynamical system
\begin{align}
\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})
\label{cdy1}
\end{align}
with $\mathbf{x} \in \mathbb{R}^n$ and let us assume that it has a fixed point $\mathbf{x}_0$. Near this point we can linearise the system using~(\ref{Jac}). Denoting $\mathbf{y}=\mathbf{x}-\mathbf{x}_0$, we can write~(\ref{cdy1})
\begin{align}
\dot{\mathbf{y}} = J \mathbf{y}
\label{cdy2}
\end{align}
where we emphasise that $J$ is a constant coefficient $n \times n$ matrix. As such it has $n$ eigenvalues, which motivates the following. The space $\mathbb{R}^n$ is the direct sum of three subspaces denoted by $\mathbb{E}^s$, $\mathbb{E}^u$ and $\mathbb{E}^c$, where the superscripts stand for stable, unstable and centre, respectively. The space $\mathbb{E}^s$ is spanned by the eigenvectors of $J$ whose eigenvalues have negative real part, $\mathbb{E}^u$ is spanned by the eigenvectors whose eigenvalues have positive real part, and $\mathbb{E}^c$ is spanned by the eigenvectors whose eigenvalues have zero real part. Linear stability theory is sufficient to understand the dynamics of trajectories in $\mathbb{E}^s$ and $\mathbb{E}^u$. Centre manifold theory will determine the dynamics of trajectories in $\mathbb{E}^c$.
In the context of centre manifold theory it is useful to write our dynamical system~(\ref{cdy1}) in the form
\begin{subequations}
\label{cmexvec}
\begin{align}
\dot{\mathbf{x}} &= A\mathbf{x} + f(\mathbf{x},\mathbf{y})
\label{cenxdot}\\
\dot{\mathbf{y}} &= B\mathbf{y} + g(\mathbf{x},\mathbf{y}),
\label{cenydot}
\end{align}
\end{subequations}
where $(x,y) \in\mathbb{R}^c\times\mathbb{R}^s$. Moreover, we assume
\begin{subequations}
\label{cmnonlinearcondition}
\begin{align}
f(0,0) &= 0, \qquad \nabla f(0,0) = 0 \\
g(0,0) &= 0, \qquad \nabla g(0,0) = 0.
\end{align}
\end{subequations}
In the system~(\ref{cmexvec}), $A$ is a $c \times c$ matrix having eigenvalues with zero real parts, while $B$ is an $s \times s$ matrix whose eigenvalues have negative real parts. Our aim is to understand the centre manifold of this system in order to investigate its dynamics. We have suppressed some regularity assumptions on $f$ and $g$ for simplicity.
\begin{definition}[Centre Manifold]\label{cmdef}
A geometrical space is a centre manifold for~(\ref{cmexvec}) if it can be locally represented as
\begin{align}
W^{c}(0) =\{(x,y)\in \mathbb{R}^c\times\mathbb{R}^s| y = h(x), |x|<\delta, h(0) = 0, \nabla h(0)= 0 \}
\end{align}
for $\delta$ sufficiently small.
\end{definition}
The conditions $h(0) = 0$ and $\nabla h(0) = 0$ from the definition imply that the space $W^c(0)$ is tangent to the eigenspace $\mathbb{E}^c$ at the critical point $(x,y) = (0,0)$.
Centre manifold theory is based on three main theorems~\cite{wigginsbook}. The first one is about the existence of the centre manifold, the second one clarifies the issue of stability of solutions, while the last one is about constructing the actual centre manifold needed to investigate the stability. We will state those theorems without proofs; the interested reader is referred to~\cite{jackcarr}.
\begin{theorem}[Existence]\label{cmexist}
There exists a centre manifold for~(\ref{cmexvec}). The dynamics of the system~(\ref{cmexvec}) restricted to the centre manifold is given by
\begin{align}
\dot{u} = Au + f(u,h(u))
\label{cmexisteqn}
\end{align}
for $u\in\mathbb{R}^c$ sufficiently small.
\end{theorem}
\begin{theorem}[Stability]
Suppose the zero solution of~(\ref{cmexisteqn}) is stable (asymptotically stable or unstable). Then the zero solution of~(\ref{cmexvec}) is also stable (asymptotically stable or unstable). Furthermore, if the zero solution of~(\ref{cmexisteqn}) is stable and $(x(t),y(t))$ is a solution of~(\ref{cmexvec}) with $(x(0),y(0))$ sufficiently small, then there exists a solution $u(t)$ of~(\ref{cmexisteqn}) such that
\begin{subequations}
\begin{align}
x(t) &= u(t) +\mathcal{O}(e^{-\gamma t})\\
y(t) &= h(u(t)) + \mathcal{O}(e^{-\gamma t})
\end{align}
\end{subequations}
as $t\rightarrow\infty$, where $\gamma>0$ is a constant.
\end{theorem}
We now know that the centre manifold exists, and we can establish the stability or instability of a solution. However, our ability to do so depends on the knowledge of the function $h(x)$ in Definition~\ref{cmdef}. We will now derive a differential equation for the function $h(x)$.
Following Definition~\ref{cmdef}, we have that $y = h(x)$. Let us differentiate this with respect to time and apply the chain rule. This gives
\begin{align}
\dot{y} = \nabla h(x) \cdot \dot{x}
\label{cmydot}
\end{align}
Since $W^c(0)$ is based on the dynamics generated by the system~(\ref{cmexvec}), we can substitute for $\dot{x}$ the right-hand side of~(\ref{cenxdot}) and for $\dot{y}$ the right-hand side of~(\ref{cenydot}). This yields
\begin{align}
Bh(x) + g(x,h(x)) = \nabla h(x) \cdot \left[Ax + f(x,h(x))\right]
\end{align}
where we also used that $y = h(x)$. The latter equation can be rearranged into the quasilinear partial differential equation
\begin{align}
\mathcal{N}(h(x)) :=
\nabla h(x)\left[Ax + f(x,h(x))\right] - Bh(x) - g(x,h(x)) = 0
\label{cmn}
\end{align}
which must be satisfied by $h(x)$ for it to be the centre manifold. In general, we cannot find a solution to this equation; even for relatively simple dynamical systems it is often impossible to find an exact solution. It is the third and last theorem which explains why not all is lost at this point.
\begin{theorem}[Approximation]\label{apptheorem}
Let $\phi:\mathbb{R}^c\rightarrow\mathbb{R}^s$ be a mapping with $\phi(0) = \nabla \phi(0) = 0$ such that $\mathcal{N}(\phi(x)) = \mathcal{O}(|x|^q)$ as $x\rightarrow 0$ for some $q>1$. Then
\begin{align}
|h(x) - \phi(x)| = \mathcal{O}(|x|^q) \quad\text{as}\quad x\rightarrow 0.
\end{align}
\end{theorem}
The main point of this theorem is that an approximate knowledge of the centre manifold returns the same information about stability as the exact solution of equation~(\ref{cmn}). It turns out that finding an approximation for the centre manifold is a fairly doable task in comparison to finding the exact solution. The centre manifold machinery is best explained with a concrete example.
\subsubsection*{Example -- a simple two-dimensional model}
The following two dimensional example is taken from Wiggins~\cite{wigginsbook}. We consider the system
\begin{subequations}
\label{ex1}
\begin{align}
\dot{x} &= x^2y-x^5\\
\dot{y} &= -y+x^2.
\end{align}
\end{subequations}
The origin $(x,y) = (0,0)$ is a fixed point. The Jacobian matrix of the linearised system about the origin has eigenvalues of $0$ and $-1$. Since there is a zero eigenvalue, the point is non-hyperbolic and linear stability theory fails to determine the nature of stability of this point.
By Theorem~\ref{cmexist}, there exists a centre manifold for the system~(\ref{ex1}) and it can be represented locally as
\begin{align}
W^c(0) = \{(x,y)\in\mathbb{R}^2|y=h(x),|x|<\delta, h(0) = Dh(0) = 0\}
\end{align}
for $\delta$ sufficiently small. Next, we need to compute $W^c(0)$. Here we can exploit Theorem~\ref{apptheorem} which says that it suffices to approximate the centre manifold to establish stability properties. Therefore, it is customary to assume an expansion for $h(x)$ of the form
\begin{align}
h(x) = a x^2 + b x^3 + O(x^4)
\label{hexpansion}
\end{align}
where $a$ and $b$ are constants to be determined. This expression is then substituted into~(\ref{cmn}) with the aim of determining those constants.
In this example, the equations~(\ref{ex1}) yield
\begin{subequations}
\begin{align}
A &= 0 \quad B = -1 \\
f(x,y) &= x^2y-x^5 \\
g(x,y) &= x^2.
\end{align}
\end{subequations}
This, in addition to~(\ref{hexpansion}), is substituted into~(\ref{cmn}) and gives
\begin{align}
\mathcal{N} &= (2ax + 3bx^2 + \cdots)(ax^4 + bx^5 - x^5 + \cdots)\nonumber\\
&+ ax^2 + bx^3 - x^2 + \cdots = 0.
\label{neqnexample}
\end{align}
The coefficients of each power of $x$ must vanish for~(\ref{neqnexample}) to hold. This provides us with a set of linear equations in the constants $a$ and $b$ which is solved by
\begin{align}
a = 1 \quad b = 0,
\end{align}
where all terms of order $O(x^4)$ have been ignored. Therefore, the centre manifold is locally given by
\begin{align}
h(x) = x^2 + \mathcal{O}(x^4).
\end{align}
Finally, following Theorem~\ref{cmexist}, the dynamics of the system restricted to the centre manifold is obtained to be
\begin{equation}
\dot{x} = x^4 + \mathcal{O}(x^5).
\label{cmexample1}
\end{equation}
We conclude that for $x$ sufficiently small, $x=0$ is unstable. Therefore, the critical point $(0,0)$ is unstable. In Fig.~\ref{figcex} we show the phase space for this system and also indicate the centre manifold.
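The residual of the centre manifold equation can be evaluated directly (a sketch in plain Python; the function `N` mirrors the operator of~(\ref{cmn}) for this system): substituting $\phi(x)=x^2$ leaves only the higher-order remainder $2x^5 - 2x^6$, so $\mathcal{N}(\phi) = \mathcal{O}(x^5)$, as Theorem~\ref{apptheorem} requires.

```python
# Residual of the centre manifold equation (cmn) for system (ex1), with
# A = 0, B = -1, f = x^2 y - x^5, g = x^2 and trial function phi.

def N(x, phi, dphi):
    return dphi(x) * (x**2 * phi(x) - x**5) + phi(x) - x**2

phi = lambda x: x**2        # the approximation h(x) = x^2 found above
dphi = lambda x: 2 * x

for x in (0.1, 0.01, 0.001):
    residual = N(x, phi, dphi)
    assert abs(residual - (2 * x**5 - 2 * x**6)) < 1e-12
    assert abs(residual) <= 2 * abs(x)**5      # N(phi) = O(x^5)
```

The quadratic trial function thus solves~(\ref{cmn}) up to terms that do not affect the stability conclusion.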
\begin{figure}[!htb]
\centering
\includegraphics[width=0.8\textwidth]{figures/fig_centre_ex_n}
\caption{Phase space plot of the system~(\ref{ex1}). The centre manifold is indicated by a dashed line and was computed up to terms $x^{13}$. One sees very clearly how the centre manifold attracts the trajectories and how they are repelled from the origin (along the centre manifold) making this point unstable.}
\label{figcex}
\end{figure}
\section{Cosmology using dynamical systems}
We now discuss some aspects of cosmology in the context of dynamical systems, see~\cite{dsincosmology,Coley:2003mj} for more details, and also~\cite{Rendall:2001it}, or~\cite{Coley:2004jm} for anisotropic models. Based on the series of cosmological epochs $\text{inflation} \rightarrow \text{radiation} \rightarrow \text{matter} \rightarrow \text{cosmological term}$ of Section~\ref{sec:briefhist}, which could be called a `minimal' cosmological model, we will now make links with dynamical systems. A very neat paper studying cosmological models in the Lotka-Volterra framework is~\cite{Perez:2013zya}.
Let us now consider a generic `minimal' cosmological model described by an $n \times n$ system of autonomous equations. If this model begins with an inflationary period, this should correspond to an early-time attractor in the dynamical system. All eigenvalues of the Jacobian matrix at this point should be positive in order to ensure that all trajectories evolve away from it, that is $\lambda_i > 0$ for $i=1,\ldots,n$.
In an ideal model we would also have two saddle points (some eigenvalues positive, $\lambda_j >0$, and the remaining ones negative, $\lambda_k <0$) which correspond to a radiation dominated and a matter dominated universe, respectively. These epochs being saddle points ensures that some trajectories are attracted to these points; however, they will eventually be repelled, so that the universe evolves through both epochs. Let us note here that most models will only contain either matter or radiation, and thus we would be satisfied if there was only one saddle point.
Lastly, we require a late-time attractor ($\lambda_i < 0$ for $i=1,\ldots,n$) where the universe is undergoing an accelerated expansion which corresponds to the de Sitter solution. We say the universe is approaching de Sitter space asymptotically. This can be summarised as follows
\begin{align}
\begin{array}{ccccc}
\text{inflation} &
\longrightarrow &
\text{radiation/matter} &
\longrightarrow &
\text{de Sitter}\\[1ex]
\lambda_i > 0 &
\mbox{} &
\lambda_j >0, \lambda_k <0 &
\mbox{} &
\lambda_i < 0
\end{array}
\label{wishlist}
\end{align}
where, for simplicity, we neglected the possibility of some zero eigenvalues.
\subsection{Cosmology with matter and scalar field}
The cosmological constant $\Lambda$ has strong observational support~\cite{Perlmutter:1998,Riess:1998}, but also leads to a variety of problems which are called the cosmological constant problems, we refer the reader to~\cite{Weinberg:1988cp,Sahni:2002kh} and in particular~\cite{Martin:2012bt}. These problems can largely be avoided if the constant term $\Lambda$ is replaced by a dynamically evolving scalar field $\varphi$ with some given potential $V(\varphi)$. In this case one often speaks of dark energy. In many models the potential $V$ is assumed to be of exponential form, $V=V_0\exp(-\lambda\kappa\varphi)$.
Moreover, instead of writing the equation of state for the matter as $p=w\rho$, one often encounters a slightly different parametrisation which is given by
\begin{align}
p_{\gamma} = w_{\gamma} \rho_{\gamma} = (\gamma-1)\rho_\gamma
\end{align}
where $\gamma = 1+w_{\gamma}$ is a constant and $0\leq\gamma\leq 2$. Its value is $4/3$ when there is radiation, and is $1$ for standard matter or dark matter in this context.
For this setup, the Einstein field equations are
\begin{subequations}
\label{eqn:sca1}
\begin{align}
H^2 &= \frac{\kappa^2}{3}
\left(\rho_\gamma + \frac{1}{2}\dot{\varphi}^2 + V \right)
\label{friedmann}\\
\dot{H} &= -\frac{\kappa^2}{2}(\rho_\gamma+p_\gamma+\dot{\varphi}^2).
\label{hduncoupled}
\end{align}
\end{subequations}
We can interpret $\rho_\varphi = \dot{\varphi}^2/2+V$ as the energy density of the scalar field and $p_\varphi = \dot{\varphi}^2/2-V$ as its pressure. This also allows us to define an effective equation of state for the field. The conservation equations for the matter and the scalar field are given by
\begin{subequations}
\label{eqn:sca2}
\begin{align}
\dot{\rho}_\gamma &= -3H (\rho_\gamma + p_\gamma)\\
\ddot{\varphi} &= - 3H\dot{\varphi} - \frac{dV}{d\varphi}
= - 3H\dot{\varphi} + \lambda \kappa V
\label{uncoupledKG}
\end{align}
\end{subequations}
where we used the exponential form of the potential. We follow the approach outlined in Section~\ref{sec:firsttaste} and rewrite~(\ref{eqn:sca1}) and~(\ref{eqn:sca2}) using more suitable variables. As before, we start by dividing equation~(\ref{friedmann}) by $H^2$, which results in
\begin{align}
1 = \frac{\kappa^2\rho_\gamma}{3H^2}+\frac{\kappa^2\dot{\varphi}^2}{6H^2}+\frac{\kappa^2V}{3H^2}.
\label{friedmann2}
\end{align}
Every term on the right-hand side is positive since $V>0$ and $\rho_\gamma > 0$, and it turns out that the following dimensionless variables~\cite{Copeland:1997et, Copeland:2006wr} are particularly useful
\begin{align}
x^2 = \frac{\kappa^2\dot{\varphi}^2}{6H^2}, \quad
y^2 = \frac{\kappa^2V}{3H^2}, \quad
s^2 = \frac{\kappa^2\rho_\gamma}{3H^2}
\label{compact}
\end{align}
which transform~(\ref{friedmann2}) into
\begin{align}
1 = x^2 + y^2 + s^2.
\label{friedmann3}
\end{align}
Therefore, we can choose $x,y$ as two independent variables. This leads to
\begin{align}
1 \geq 1-x^2-y^2 = s^2 = \frac{\kappa^2\rho_\gamma}{3H^2} \geq 0
\end{align}
implying that $0\leq x^2+y^2\leq 1$ which means that the physical phase space of this model is contained within the unit circle.
We will introduce three more quantities which are useful in understanding the physical properties at the fixed points. The dimensionless density parameter~(\ref{density}) of the scalar field $\varphi$ can be expressed in terms of the new variables and is given by
\begin{align}
\Omega_\varphi = \frac{\kappa^2\rho_\varphi}{3H^2} = x^2+y^2.
\end{align}
Moreover, we define the equation of state for the scalar field by
\begin{align}
\gamma_\varphi = 1 + w_{\varphi} =
1 + \frac{p_{\varphi}}{\rho_{\varphi}} = \frac{2x^2}{x^2+y^2}.
\end{align}
Lastly, we define the effective equation of state of the total system by
\begin{align}
w_{\rm eff} &= \frac{p_{\gamma} + p_{\varphi}}{\rho_{\gamma} + \rho_{\varphi}} =
\frac{w_{\gamma}\rho_{\gamma} + \dot{\varphi}^2/2-V}{\rho_{\gamma} + \dot{\varphi}^2/2+V}
\nonumber \\
&= w_{\gamma}(1-x^2-y^2) + x^2 - y^2.
\end{align}
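The reduction of $w_{\rm eff}$ to the variables $x$ and $y$ can be spot-checked numerically; the following is a sketch in plain Python (standard library only; $\kappa = 1$ is assumed for simplicity and the helper names are ours):

```python
# Spot-check of w_eff = w_gamma (1 - x^2 - y^2) + x^2 - y^2 using random
# field data subject to the Friedmann constraint (kappa = 1).
import random

def w_eff_pair(phidot, V, rho_g, w_g):
    """Return w_eff from its definition and from the (x, y) expression."""
    H2 = (rho_g + phidot**2 / 2 + V) / 3        # Friedmann equation
    x2 = phidot**2 / (6 * H2)
    y2 = V / (3 * H2)
    rho_phi = phidot**2 / 2 + V
    p_phi = phidot**2 / 2 - V
    direct = (w_g * rho_g + p_phi) / (rho_g + rho_phi)
    reduced = w_g * (1 - x2 - y2) + x2 - y2
    return direct, reduced

random.seed(1)
for _ in range(1000):
    direct, reduced = w_eff_pair(random.uniform(-2, 2), random.uniform(0.1, 2),
                                 random.uniform(0.1, 2), random.uniform(-1, 1))
    assert abs(direct - reduced) < 1e-12
```

For instance, a pure potential-dominated state ($\dot{\varphi}=0$, $\rho_\gamma \to 0$) returns $w_{\rm eff} = -1$ from both expressions, as expected for a de Sitter-like phase.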
Now, we are ready to derive a two dimensional dynamical system using the variables $x$ and $y$. As before, we will introduce a new `time' variable $N=\log(a)$ so that $dN = H dt$, and denote differentiation with respect to $N$ by a prime.
Let us begin by differentiating $x$ with respect to time $t$
\begin{align}
\dot{x} = \frac{\kappa}{\sqrt{6}} \frac{\ddot{\varphi}H -\dot{\varphi}\dot{H}}{H^2} =
\frac{\kappa}{\sqrt{6}}\left(\frac{\ddot{\varphi}}{H}-\dot{\varphi}\frac{\dot{H}}{H^2}\right).
\end{align}
Substituting for $\ddot{\varphi}$ using~(\ref{uncoupledKG}) and for $\dot{H}$ using~(\ref{hduncoupled}) we arrive at
\begin{align}
\dot{x} = \frac{\kappa}{\sqrt{6}}\left(- 3\dot{\varphi} + \lambda\kappa \frac{V}{H} + \dot{\varphi}\frac{\kappa^2}{2H^2}(\gamma\rho_\gamma+\dot{\varphi}^2)\right).
\end{align}
Next, using the variables~(\ref{compact}) and the condition~(\ref{friedmann3}) we get
\begin{align}
\dot{x} = H\left[-3x +\sqrt{\frac{3}{2}}\lambda y^2 + \frac{3}{2}x\left((1-x^2-y^2)\gamma + 2x^2\right)\right].
\label{finalxdot}
\end{align}
Dividing by $H$ and introducing the `time' parameter $N$ turns this into the equation for $x'$. Following similar steps, the equation for $y'$ can be derived. The final system is
\begin{subequations}
\label{xyp}
\begin{align}
x' &= -3x + \sqrt{\frac{3}{2}}\lambda y^2 + \frac{3}{2}x\left(2x^2 + \gamma(1-x^2-y^2)\right)\\
y' &= -\lambda\sqrt{\frac{3}{2}}xy + \frac{3}{2}y\left(2x^2 + \gamma(1-x^2-y^2)\right).
\end{align}
\end{subequations}
The complete dynamics of this cosmological model is described by the two equations~(\ref{xyp}).
We noted that the phase space of this system is contained in the unit circle. Inspection of the dynamical equations shows that system~(\ref{xyp}) is invariant under the transformation $y \mapsto -y$ and symmetric under time reversal $t \mapsto -t$. This implies that we can restrict our analysis to the upper half-disc with $y > 0$. The lower half-disc of the phase space corresponds to a contracting universe because $H<0$ in this region.
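The mirror symmetry of system~(\ref{xyp}) under $y \mapsto -y$ is easy to confirm numerically: the map sends $(x',y')$ to $(x',-y')$, so solutions come in mirror pairs. A minimal sketch in plain Python (parameter values are arbitrary test choices):

```python
# Check that (x, y) -> (x, -y) sends the vector field (x', y') to (x', -y').
from math import sqrt

def rhs(x, y, lam, gamma):
    common = 1.5 * (2 * x**2 + gamma * (1 - x**2 - y**2))
    return (-3 * x + sqrt(1.5) * lam * y**2 + x * common,
            -lam * sqrt(1.5) * x * y + y * common)

for x, y, lam, gamma in [(0.3, 0.4, 1.0, 1.0), (-0.5, 0.2, 2.0, 4 / 3)]:
    xp, yp = rhs(x, y, lam, gamma)
    xm, ym = rhs(x, -y, lam, gamma)
    assert abs(xm - xp) < 1e-14 and abs(ym + yp) < 1e-14
```

This holds because $y$ enters the $x'$ equation only through $y^2$, while every term of the $y'$ equation is odd in $y$.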
The properties of the dynamical system~(\ref{xyp}) depend on the values of the constants $\lambda$ and $\gamma$. In particular, they affect the existence and stability of the fixed points of the system, see~\cite{Copeland:1997et}. This can be related to the theory of bifurcations, something that has not been explored in cosmological dynamical systems. Table~\ref{crit1} contains all critical points of the system~(\ref{xyp}).
\begin{table}[!htb]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\mbox{} & $x$ & $y$ & existence \\
\hline
\hline
O & 0 & 0 & $\forall\lambda$ and $\gamma$ \\
\hline
$\mathrm{A}_{+}$ & 1 & 0 & $\forall\lambda$ and $\gamma$ \\
\hline
$\mathrm{A}_{-}$ & -1 & 0 & $\forall\lambda$ and $\gamma$ \\
\hline
B & $\lambda/\sqrt{6}$ & $[1-\lambda^2/6]^{1/2}$ & $\lambda^2 < 6$ \\
\hline
C & $\sqrt{3/2} \gamma/\lambda$ & $[3(2-\gamma)\gamma/2\lambda^2]^{1/2}$
& $\lambda^2 > 3\gamma$ \\
\hline
\end{tabular}
\caption{Critical points of the system~(\ref{xyp}).}
\label{crit1}
\end{table}
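As a cross-check (a plain-Python sketch; the parameter values $\lambda=2$, $\gamma=1$ are chosen so that both B and C exist), one can substitute the coordinates of B and C from Table~\ref{crit1} into~(\ref{xyp}) and confirm that the right-hand sides vanish:

```python
# Verify that B and C are fixed points of system (xyp).
from math import sqrt

def rhs(x, y, lam, gamma):
    common = 1.5 * (2 * x**2 + gamma * (1 - x**2 - y**2))
    return (-3 * x + sqrt(1.5) * lam * y**2 + x * common,
            -lam * sqrt(1.5) * x * y + y * common)

lam, gamma = 2.0, 1.0                  # lambda^2 = 4: 3*gamma < 4 < 6
B = (lam / sqrt(6), sqrt(1 - lam**2 / 6))
C = (sqrt(1.5) * gamma / lam, sqrt(3 * (2 - gamma) * gamma / (2 * lam**2)))
for x, y in (B, C):
    xp, yp = rhs(x, y, lam, gamma)
    assert abs(xp) < 1e-12 and abs(yp) < 1e-12
```

The same check applied at a generic point of the disc gives non-zero right-hand sides, so the vanishing at B and C is not accidental.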
Having found all the possible fixed points, we can now compute the eigenvalues and determine their stability which is summarised in Table~\ref{crit2}, see~\cite{Copeland:1997et}.
\begin{table}[!htb]
\centering
\begin{tabular}{|c|p{0.6\textwidth}|c|c|}
\hline
\mbox{} & Stability & $\Omega_\varphi$ & $\gamma_\varphi$ \\
\hline
\hline
O & saddle point for $0 < \gamma < 2$ & 0 & Undefined \\
\hline
$\mathrm{A}_{+}$ & unstable node for $\lambda < \sqrt{6}$ and saddle point for $\lambda > \sqrt{6}$& 1 & 2 \\
\hline
$\mathrm{A}_{-}$ & unstable node for $\lambda >-\sqrt{6}$ and saddle point for $\lambda < -\sqrt{6}$ & 1 & 2 \\
\hline
B & stable node for $\lambda^2 < 3\gamma$ and saddle point for $3\gamma < \lambda^2 < 6$ & 1 & $\lambda^2/3$ \\
\hline
C & stable node for $3\gamma < \lambda^2 < 24 \gamma^2/(9\gamma -2)$ and stable spiral for $\lambda^2 > 24 \gamma^2/(9\gamma -2)$ & $3\gamma/\lambda^2$ & $\gamma$ \\
\hline
\end{tabular}
\caption{Summary of the properties of the critical points.}
\label{crit2}
\end{table}
The three figures Fig.~\ref{fig_d1}--\ref{fig_d3} show the phase spaces of this model for various parameter choices.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\textwidth]{figures/fig_dyn_11_n}
\caption{Phase space plot of scalar field cosmology with exponential potential and matter. Parameter values are $\gamma=1$ and $\lambda=1$.}
\label{fig_d1}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\textwidth]{figures/fig_dyn_21_n}
\caption{Phase space plot of scalar field cosmology with exponential potential and matter. Parameter values are $\gamma=1$ and $\lambda=2$.}
\label{fig_d2}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\textwidth]{figures/fig_dyn_31_n}
\caption{Phase space plot of scalar field cosmology with exponential potential and matter. Parameter values are $\gamma=1$ and $\lambda=3$.}
\label{fig_d3}
\end{figure}
It should be noted that the inequality signs in Table~\ref{crit2} exclude certain values from the analysis. For instance, when we choose $\lambda^2 = 3\gamma$, the two points B and C have the same coordinates (the system has one critical point less), namely $x_0 = \sqrt{\gamma/2}$ and $y_0 = \sqrt{1-\gamma/2}$, so that $x_0^2+y_0^2=1$, and its eigenvalues are $0$ and $3(\gamma-2)/2$. Linear stability theory cannot determine the stability of this point. One could, in principle, apply centre manifold theory. However, this is problematic as the physical phase space is bounded by the unit circle while centre manifold theory takes into account the entire phase space. One could construct the centre manifold and only consider it inside the circle, but this also has problems. For concreteness we set $\gamma=1$ in the following, which means $\lambda = \sqrt{3}$ and $x_0 = y_0 = \sqrt{1/2}$.
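The degenerate eigenvalues quoted above can be confirmed numerically (a plain-Python sketch using finite differences; the step size $h$ is our arbitrary choice):

```python
# For gamma = 1, lambda = sqrt(3), check that x0 = y0 = 1/sqrt(2) is a fixed
# point of (xyp) and that the Jacobian there has eigenvalues 0 and -3/2.
from math import sqrt

lam, gamma = sqrt(3.0), 1.0

def rhs(x, y):
    common = 1.5 * (2 * x**2 + gamma * (1 - x**2 - y**2))
    return (-3 * x + sqrt(1.5) * lam * y**2 + x * common,
            -lam * sqrt(1.5) * x * y + y * common)

x0 = y0 = 1 / sqrt(2.0)
assert max(abs(v) for v in rhs(x0, y0)) < 1e-12

def jac(x, y, h=1e-6):
    """Jacobian by central finite differences."""
    fx = [(a - b) / (2 * h) for a, b in zip(rhs(x + h, y), rhs(x - h, y))]
    fy = [(a - b) / (2 * h) for a, b in zip(rhs(x, y + h), rhs(x, y - h))]
    return [[fx[0], fy[0]], [fx[1], fy[1]]]

J = jac(x0, y0)
tr = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
root = sqrt(tr * tr - 4 * det)
eig = sorted(((tr - root) / 2, (tr + root) / 2))
assert abs(eig[0] + 1.5) < 1e-4 and abs(eig[1]) < 1e-4
```

The zero eigenvalue is exactly why linear stability theory is inconclusive here.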
The easiest way forward is to use Lyapunov's method near this point. We start with the candidate Lyapunov function of the form
\begin{align}
V = \left(x-\frac{1}{\sqrt{2}}\right)^2 + 4 \left(y-\frac{1}{\sqrt{2}}\right)^2
\label{Vdyn}
\end{align}
and one can verify that $\dot{V} < 0$ holds in a region near the critical point. Since the function is positive definite near that point by construction, we can apply Theorem~\ref{lyapunovtheorem}. Following for instance~\cite{Brauer:1989}, we can estimate the region of asymptotic stability. Defining $S_\delta := \{(x,y)| V \leq \delta \}$ for $\delta \geq 0$, and denoting by $C_\delta$ the component of $S_\delta$ containing the critical point, we have the following statement~\cite{Brauer:1989}. Let $\Omega$ be the set where $\dot{V} < 0$; then the interior of $C_\delta$ contained in $\Omega$ lies in the region of asymptotic stability. As mentioned earlier, this approach relies on our ability to find a suitable Lyapunov function. Different choices can result in different parts of the region of asymptotic stability being covered and there is no guarantee that the entire region can be identified by this method alone. In Fig.~\ref{fig_d4}, we show the region of asymptotic stability based on the Lyapunov function~(\ref{Vdyn}) for model~(\ref{xyp}). A better Lyapunov function would of course improve this picture and increase the region.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\textwidth]{figures/fig_dyn_L1_n}
\caption{Phase space plot of scalar field cosmology with exponential potential and matter. Parameter values are $\gamma=1$ and $\lambda=\sqrt{3}$. The shaded area shows part of the region of asymptotic stability of the fixed point. In this region $\dot{V} < 0$ and $V < 3/2$.}
\label{fig_d4}
\end{figure}
A detailed and comprehensive phase-space analysis of this model, based on linear stability theory alone, can be found in~\cite{Copeland:1997et}; other methods were explored in~\cite{Bohmer:2010re}. A complete discussion of all its properties in the context of cosmology is also given there. This model has many interesting features as well as some problems, which motivates various extensions, many of which have been considered in the literature. In fact, the literature on dynamical systems applications in early-time and late-time cosmology is so vast that it could fill several books with ease!
We should point out that this model falls short of our wish list~(\ref{wishlist}). The early-time fixed points $\mathrm{A}_{\pm}$ are dominated by the scalar field; however, the effective equation of state is $w_{\rm eff} = 1$, which is unphysical. It is point C which makes this model so interesting, because this fixed point is stable and contains both a non-vanishing scalar field and matter. One speaks of scaling solutions as the scalar field energy density is proportional to that of the fluid.
\subsection{Cosmology with matter and scalar field and interactions}
The models considered so far were all two dimensional. This relied on the fact that we were able to `eliminate' the Hubble parameter $H$ from the equations due to a smart choice of variables and a clever choice of `time'. However, there are many known models where this approach does not work and one has to introduce new variables. In the following we will discuss one such type of model and a possible choice of a new variable.
The cosmological Einstein field equations~(\ref{eqn:sca1})--(\ref{eqn:sca2}) are compatible with the introduction of an additional interaction term $Q$, say. This interaction would allow for an energy transfer from the scalar $\varphi$ to the matter $\rho_\gamma$ and vice versa. The introduction of such a term leaves Eqs.~(\ref{eqn:sca1}) unchanged, but~(\ref{eqn:sca2}) becomes
\begin{subequations}
\label{eqn:sca3}
\begin{align}
\dot{\rho}_\gamma &= -3H (\rho_\gamma + p_\gamma) - Q\\
\ddot{\varphi} &= - 3H\dot{\varphi} - \frac{dV}{d\varphi} + \frac{Q}{\dot{\varphi}}
\label{coupledKG}
\end{align}
\end{subequations}
where we note that the term $Q/\dot{\varphi}$ is natural when one computes the energy conservation equation of the scalar field, which becomes $\dot{\rho}_\varphi = -3H (\rho_\varphi+p_\varphi) + Q$. Various choices for the coupling function $Q$ were considered in the literature, for instance $Q=\alpha H\rho_\gamma$ or $Q=(2/3)\kappa\beta\rho_\gamma\dot{\varphi}$ with $\alpha$ and $\beta$ being dimensionless constants whose sign determines the direction of energy transfer from one component to the other~\cite{Amendola:1999qq, Holden:1999hm, Billyard:2000bh}. Those two choices can be motivated physically, however, one of the main motivations is the fact that the dynamical system with these couplings remains two dimensional, as the Hubble parameter can be eliminated from the equations. However, both choices appear rather arbitrary and one would prefer a choice where the coupling is simply proportional to an energy density, for instance $Q = \Gamma \rho_{\gamma}$ with $\Gamma$ assumed to be small, see~\cite{Boehmer:2008av}, or for a further generalisation~\cite{Boehmer:2009tk}. In this case the phase space cannot be represented in the plane and one has to work in a three dimensional space.
As before, we start with the variables~(\ref{compact}) but need a third variable in order to be able to write the cosmological field equations as an autonomous system of differential equations. A possible third variable $z$ can be chosen to be
\begin{align}
z = \frac{H_0}{H+H_0}
\end{align}
where $H_0$ is the Hubble parameter at an arbitrary fixed time. It is convenient to choose this time to be `today'. This variable $z$ ensures that the physical phase-space is compact. The Hubble parameter $H \rightarrow \infty$ in the early time universe and $H \rightarrow 0$ in the late time universe. Therefore
\begin{align}
z = \begin{cases}
1 &\mbox{if } H=0 \\
1/2 &\mbox{if } H=H_0 \\
0 &\mbox{if } H \rightarrow \infty
\end{cases}
\end{align}
and $z$ is bounded by $0 \leq z \leq 1$. Since the phase-space of system~(\ref{xyp}) is half a unit disc, with the coupling term $Q = \Gamma \rho_{\gamma}$ the phase-space becomes a half-cylinder of unit height and unit radius.
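As a quick numerical sketch (ours, not part of the original analysis; $H_0$ is set to one), the limiting values follow directly from the definition of $z$:

```python
# Sketch: the variable z = H0/(H + H0) maps H in [0, infinity) onto (0, 1],
# decreasing monotonically in H; H0 = 1 fixes the (arbitrary) unit.

def z_of_H(H, H0=1.0):
    """Compactified Hubble variable."""
    return H0 / (H + H0)

print(z_of_H(0.0))    # 1.0   (H -> 0, late times)
print(z_of_H(1.0))    # 0.5   (H = H0)
print(z_of_H(1e9))    # ~1e-9 (H -> infinity, early times)
```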
The resulting dynamical system is given by
\begin{subequations}
\label{withcoup}
\begin{align}
\label{x'}
x' &= -3x+\lambda \frac{\sqrt{6}}{2}\,y^2+\frac{3}{2}x(1+x^2-y^2)
-\zeta\,\frac{(1-x^2-y^2)z}{2x(z-1)} \\
\label{y'}
y' &= -\lambda \frac{\sqrt{6}}{2}\,xy+\frac{3}{2}y(1+x^2-y^2) \\
\label{z'}
z' &= \frac{3}{2}z(1-z)(1+x^2-y^2)
\end{align}
\end{subequations}
where $\zeta = \Gamma/H_0$. A detailed phase-space analysis of this model can be found in~\cite{Boehmer:2008av}. However, this model also has some additional interesting features beyond standard linear stability theory. For instance, there is a point vertically above point D in Fig.~\ref{fig_ql4} which attracts trajectories. The system~(\ref{withcoup}), however, does not have a critical point there.
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\textwidth]{figures/lambda4_n}\\[3em]\mbox{}
\caption{Phase space plot of system~(\ref{withcoup}) with $\lambda=4$ and $\zeta = 10^{-6}$.}
\label{fig_ql4}
\end{figure}
By inspecting equations~(\ref{withcoup}) at $x_0=y_0=\sqrt{6}/(2\lambda)$ we note that $y'(x_0,y_0) = 0$ and that
\begin{subequations}
\label{withcoup2}
\begin{align}
x'(x_0,y_0) &= -\zeta\,\frac{(\lambda-3/\lambda)z}{\sqrt{6}(z-1)} \\
z'(x_0,y_0) &= \frac{3}{2}z(1-z)
\end{align}
\end{subequations}
for some small coupling term $\zeta \ll 1$. Therefore, as the trajectories approach the $z=1$ plane, we have that also $z'(x_0,y_0) \rightarrow 0$. However, the behaviour of $x'(x_0,y_0)$ is more involved as
\begin{align}
x'(x_0,y_0) \propto \frac{\zeta}{1-z}.
\end{align}
On the other hand, $\zeta \ll 1$ but $(1-z) \rightarrow 0$ as $z \rightarrow 1$. Therefore, the fraction $\zeta/(1-z)$ will initially be small. However, eventually the $(1-z)$ term will dominate and $\zeta/(1-z)$ will become large, explaining the repeller behaviour of this point in Fig.~\ref{fig_ql4}.
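These statements are easy to verify numerically. The following short sketch (ours, written in plain Python with the parameter values of Fig.~\ref{fig_ql4}) evaluates the full right-hand side of Eq.~(\ref{x'}) at $x_0=y_0=\sqrt{6}/(2\lambda)$, confirming that the uncoupled terms cancel exactly and that the remaining coupling term grows like $\zeta/(1-z)$:

```python
import numpy as np

lam, zeta = 4.0, 1e-6                     # values of Fig. (fig_ql4)
x0 = y0 = np.sqrt(6.0) / (2.0 * lam)

def x_prime(x, y, z):
    """Full right-hand side of the x-equation of system (withcoup)."""
    return (-3*x + lam*np.sqrt(6.0)/2.0 * y**2
            + 1.5*x*(1 + x**2 - y**2)
            - zeta*(1 - x**2 - y**2)*z / (2*x*(z - 1)))

def x_prime_closed(z):
    """Closed form at (x0, y0): the uncoupled terms cancel exactly."""
    return -zeta*(lam - 3.0/lam)*z / (np.sqrt(6.0)*(z - 1))

for z in [0.1, 0.5, 0.9, 1 - 1e-6]:
    print(z, x_prime(x0, y0, z), x_prime_closed(z))

# the coupling term stays tiny for moderate z but grows like zeta/(1-z):
print(x_prime_closed(1 - 1e-7) / x_prime_closed(0.5))
```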
\section{Final remarks}
It is hoped that this chapter succeeded in giving the reader a useful introduction into the exciting field of dynamical systems in cosmology. We should remark that the majority of papers dealing with the subject are confined to linear stability theory and focus more on the interpretation of results in the context of cosmology. However, there are many models where a more in-depth analysis is needed to gain a complete understanding of the physics involved. Moreover, there is no need to select models primarily because of their simpler mathematical structure since we have all the tools at hand to study the more difficult ones too. We hope the reader feels encouraged to study all aspects of a cosmological dynamical system and use a variety of techniques developed by mathematicians, beyond linear stability theory. As Einstein wrote `Everything should be made as simple as possible, but not simpler.'
\subsection*{Acknowledgements}
We would like to thank Nicola Tamanini and Matthew Wright for valuable comments on these notes.
\section{Introduction}
The Randall-Sundrum\ (RS) hypothesis of the existence of a fifth
space dimension with a warped metric\ \cite{Randall:1999ee}
provides an alternative
solution to the gauge hierarchy problem, which is distinct from
the supersymmetric\ (SUSY) approach.
Different embeddings of the Standard Model\ (SM) into warped extra
dimensions\ (WED) have been discussed and considerable
attention has been focused on studying the phenomenological implications and
consistency of the WED models\ \cite{agashe}.
Understanding
the smallness of neutrino masses in WED models however has been quite non-trivial.
In contrast, in SUSY
framework, a simple extension of the Minimal Supersymmetric Standard Model (MSSM)
by the addition of three right-handed neutrinos leads via type I seesaw mechanism\ \cite{seesaw}
to a set of three light neutrinos. The formula for neutrino masses in this case has
the form $M_{\nu} \simeq -m_D m_N^{-1} m_D^T$ for $m_D \ll m_N$ where $m_D$ is the Dirac mass
and $m_N$ the Majorana mass of right-handed\ (RH) neutrinos. Since $m_N$ is a new
scale unrelated to the SM gauge group, its value can be much
higher than the weak scale $v_{wk}$ whereas $m_D$ breaks SM gauge
group and is of order of $ v_{wk}$, making $M_\nu$ much smaller than the
known quark and lepton masses. Typical values of the seesaw scale
$m_N$ in Grand Unification Theories (GUTs) are of order $10^{14}\,$GeV.
A common theme of all seesaw-like solutions to neutrino masses is that neutrinos
are Majorana fermions implying observable lepton number violating
processes. Several ongoing searches for lepton number violating process such
as neutrinoless double beta decay of nuclei are under way to test this hypothesis.
There have been several interesting proposals to understand small neutrino masses in WED
models\ \cite{Grossman:1999ra,Gherghetta:2000qt,Huber:2003sf,
Gherghetta:2003he,Chen:2005mz,perez,Csaki:2008qq,Agashe:2008fe}.
A generic prediction of these models (with the
exception of \cite{Huber:2003sf,perez,Csaki:2008qq}) is that neutrinos are Dirac
fermions so that total lepton number remains a good symmetry of nature
and processes such as neutrinoless double beta decay and $K^+\to \pi^-\mu^+\mu^+$ etc.
that violate lepton number should not
be observed. It has also been argued that this kind of approach
provides a simple way to understand the flavor structure among
neutrinos (a much milder hierarchy for neutrinos compared to
charged leptons)\ \cite{Agashe:2008fe}.
The models \cite{Huber:2003sf,perez,Csaki:2008qq}
that have Majorana neutrinos use type I seesaw for the purpose so that the seesaw scale
is in the range of $10^{14}\,$GeV or higher and not directly accessible at the LHC.
In this paper, we discuss an alternative class of WED models where neutrinos are Majorana fermions
and obtain their masses from a different mechanism, known in literature as the inverse seesaw
mechanism\ \cite{Mohapatra:1986aw}. Its implementation requires adding two
gauge singlet chiral fields $N$ and $S$ per family to
the SM such that they form a pseudo-Dirac pair with
mass in the TeV range. The smallness of neutrino masses is related to
the extent of their ``pseudo-Dirac-ness'' which is governed by a tiny
lepton number breaking mass term for the fields $N, S$ (denoted by $m_{S,N}$).
The generic mass formula for the neutrino mass matrix is given by
$M_\nu \simeq -m_D \left(m_{SN}\, m_{S}^{-1}\, m_{SN}^T\right)^{-1} m_D^T$
with $m_S \ll m_D \ll m_{SN}$ where $m_{SN}$ is the Dirac mass that couples $N$ and $S$.
Unlike the usual four-dimensional inverse seesaw models, where smallness of $m_S$
requires introducing a tiny parameter by hand,
we show here that in the WED models, one can have this smallness
dictated by parameters of order one that govern the location of the 5D profile of the
$S$ fields in the bulk. In this sense, the RS framework
is ideally suited to the implementation of inverse seesaw.
Furthermore, in contrast with the type I embedding in WED,
the seesaw scale in this case is in the TeV range so that it is accessible at the LHC.
We implement the inverse seesaw mechanism in WED models in this paper and
present realistic examples that fit current neutrino oscillation data.
An interesting outcome of our model is that it predicts
the existence of an eV mass sterile neutrino
in a natural manner, due to the fact that the absence
of the global parity anomaly requires
an even number of singlet $S$-fermions:
four in our case, out of which only three are
required for the inverse seesaw; the remaining one becomes the light sterile neutrino.
We note some of the properties
of the sterile neutrino predicted in our model.
This paper is organized as follows: in Sec.\ \ref{sec:typeI_seesaw} we review the
implementation of type I seesaw mechanism in WED\ \cite{Huber:2003sf}.
In Sec.\ \ref{sec:inverse_seesaw}, we present our model using the inverse seesaw mechanism.
We first illustrate the appearance of light sterile neutrino with a
toy model. We then consider realistic cases and give two examples
of parameter space
which reproduce the experimentally measured neutrino mass squared differences and mixing matrix
for normal and inverse neutrino mass hierarchies respectively.
Then, we study the contribution from higher Kaluza-Klein\ (KK) mode.
In Sec.\ \ref{sec:pheno}, we comment briefly on some phenomenological implication of the model,
in particular the effect on the neutrinoless double beta decay.
\section{Type I seesaw in warped extra dimension}
\label{sec:typeI_seesaw}
The Randall-Sundrum~(RS) model~\cite{Randall:1999ee} has the warped metric
\begin{eqnarray}
ds^{2} & = & G_{AB}dx^{A}dx^{B}=e^{-2\sigma\left(y\right)}\eta_{\mu\nu}dx^{\mu}dx^{\nu}-dy^{2},
\;\;\;\;\sigma\left(y\right)=k\left|y\right| \, ,
\end{eqnarray}
where $k$ is the AdS curvature, $\eta_{\mu\nu}={\rm diag}\left(1,-1,-1,-1\right)$
and the fifth dimension $-\pi R\leq y\leq\pi R$ is taken to be a $S_{1}/Z_{2}$ orbifold.
As discussed in Ref.\ \cite{Huber:2003sf}, one way to implement type I seesaw in
WED is to extend the SM by adding three RH neutrinos $N$,
one for each family and including their Yukawa couplings in 5D.
The bulk action for this model can be written as follows:
\begin{eqnarray}
S \!& = &\! \int d^{4}x\int_{-\pi R}^{\pi
R}dy\sqrt{G}\left[\overline{N}iE_{a}^{A}\gamma^{a}D_{A}N-m_{DN5}\overline{N}N
-\!\!\left( \frac{1}{2}m_{N5}\overline{N}N^{c}
+\lambda_{N5}\overline{\ell}N H +{\rm h.c.} \right)\right] ,
\label{eq:seesaw_action}
\end{eqnarray}
where $\ell$ and $H$ are respectively the $SU(2)_{L}$ lepton and Higgs doublets
\footnote{To avoid clutter, we have suppressed the family indices of $\ell$ and $N$.}.
In Eq.\ \eqref{eq:seesaw_action}, $a,b,...$ and $A,B,..$ are respectively the flat and curve
indices which run from 0 to 4. We have $\gamma^{a}=\left(\gamma^{\mu},i\gamma^{5}\right)$
for $a=0,1,2,3,4$ and the spacetime covariant derivative $D_{A}=\partial_{A}+\omega_{A}$.
From the warped metric, the inverse vielbein is given by
$E_{a}^{A}={\rm diag}\left(e^{\sigma},e^{\sigma},e^{\sigma},e^{\sigma},1\right)$
while the spin connection is given by
$\omega_{A}=\left(\frac{1}{2}\sigma'e^{-\sigma}\gamma^{5}\gamma_{\mu},0\right)$
where $\sigma'=d\sigma/dy=k\,{\rm sgn}\left(y\right)$. We then determine
$\sqrt{G}=\sqrt{{\rm det}G_{AB}}=e^{-4\sigma}$. For a spinor $\Psi$, $\Psi^{c}=C\gamma^{0}\Psi^{*}$
is the corresponding charge conjugate spinor with
$C=i \gamma^{2}\gamma^{0} \gamma^{5}= \gamma^{2}\gamma^{0} \gamma^{4}$
such that $\gamma^{a,T}=-C\gamma^{a}C^{-1}$.
If we assign the lepton number $L$ for both $N,\ell$ as $L=1$,
the only term that violates $L$ is the Majorana mass $m_{N5}$
in the action \eqref{eq:seesaw_action}.
In the standard notation where the warped factor is given by
$e^{-k|y|}$ with the Planck and TeV branes located at $y=0$ and
$y=\pi R$ respectively, the fermion zero modes are given by the
5D profile $\tilde{f}^{(0)}(y) = \sqrt{\frac{\pi
kR\left(1-2c_{f}\right)}{e^{\pi
kR(1-2c_{f})}-1}}e^{\left(\frac{1}{2}-c_{f}\right)\sigma}$ with
Dirac mass parameter $c_f= m_f/k$
and $\sigma \equiv k |y|$.
We follow a definition of the profile wave functions
whose normalization condition does not include any extra warped factor
(i.e. with respect to flat metric) such that
\begin{eqnarray}
\frac{1}{2\pi R} \int^{\pi R}_{-\pi R} d y
\tilde{f}^{(m)} (y) \tilde{f}^{(n)} (y) = \delta_{mn} \, .
\end{eqnarray}
It is clear from this that if $c_f > \frac{1}{2}$, the
profile peaks near the Planck brane whereas if $c_f < \frac{1}{2}$,
it peaks near the TeV brane. Electroweak precision
constraints demand that the 5D profiles of charged leptons peak near the Planck
brane due to small wave function overlaps with the KK modes\ \cite{agashe}.
The RH neutrinos being electroweak
singlets do not however have any such constraints. To implement the
seesaw mechanism, lepton number is assumed to be broken at the Planck
brane via the Majorana mass term $m_{N5} = d_N \, \delta (y)$
where $d_N$ is a dimensionless number.
The zero mode profile of the RH neutrino is chosen
to peak near the TeV brane i.e. $c_N < 1/2$.
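As a consistency sketch (ours, with illustrative values of $c_f$), the zero-mode normalization stated above can be checked numerically:

```python
import numpy as np

# Check that f0(y) = sqrt(pi*k*R*(1-2c) / (exp(pi*k*R*(1-2c)) - 1))
#                    * exp((1/2 - c)*k*|y|)
# obeys (1/(2*pi*R)) * int_{-pi R}^{pi R} f0(y)^2 dy = 1.
# Units of 1/k; k*pi*R = 37 as in the text.

k = 1.0
R = 37.0 / np.pi
y = np.linspace(-np.pi*R, np.pi*R, 400001)

for c in [0.65, 0.3, -0.3]:
    a = np.pi*k*R*(1 - 2*c)
    f0sq = a/np.expm1(a) * np.exp((1 - 2*c)*k*np.abs(y))
    # trapezoidal rule by hand (avoids NumPy-version differences)
    integral = np.sum((f0sq[:-1] + f0sq[1:]) * np.diff(y)) / 2.0
    print(c, integral / (2*np.pi*R))   # ~1 for every c
```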
In order to obtain fermion masses, we need to know the Higgs
doublet profile. We assume that it is localized on the TeV brane.
To estimate the order of magnitude of the model parameters, we work with
only one generation of fermions. Denoting the Dirac mass parameters for the
lepton doublet, lepton singlet and RH neutrino respectively
by $c_{\ell} > 1/2$, $c_{e_R} > 1/2$ and $c_N < 1/2$,
we can write the effective 4D charged lepton mass $m_\ell$,
Dirac mass for the neutrinos $m_D$ and the Majorana mass $m_N$
for the RH neutrinos as:
\begin{eqnarray}
m_\ell ~&\sim& k\times e^{-k\pi R(c_\ell+c_{e_R})}\, , \nonumber \\
m_D~&\sim&~k\times e^{-k\pi R(c_\ell+\frac{1}{2})}\, , \nonumber \\
m_N~&\sim&~ k\times e^{k\pi R(2c_N-1)} \, ,
\end{eqnarray}
which leads to light neutrino mass
\begin{equation}
m_\nu \sim k\times e^{-2k\pi R(c_\ell+c_N)} \, .
\label{eq:ss_neutrino_mass}
\end{equation}
Here all 5D dimensionless couplings are assumed to be unity.
With $k\pi R \sim 37$ and $k = 2.4 \times 10^{18}\,{\rm GeV}$,
for example, in order to get the right charged lepton masses,
we choose $c_{\ell_\alpha}=0.65$ ($\alpha = e, \mu, \tau$),
$c_{e_R}~=~0.78$, $c_{\mu_R}=0.61$ and $c_{\tau_R}~=~0.53$.
Since $e^{-74}\sim 7\times 10^{-33}$, to get neutrino masses
$m_\nu \lesssim 1\,{\rm eV}$, we will have $c_N \gtrsim 0.2$.
Here the smallness of the light neutrino mass is attributed to
a large $m_N$ as in the usual seesaw.
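The estimate above can be cross-checked with a short numerical sketch (ours; all couplings set to one as assumed in the text, so the numbers are order-of-magnitude only):

```python
import numpy as np

k, kpiR = 2.4e18, 37.0            # GeV and the dimensionless k*pi*R
c_ell, c_N = 0.65, 0.2

m_D = k * np.exp(-kpiR * (c_ell + 0.5))
m_N = k * np.exp( kpiR * (2*c_N - 1.0))
m_nu = m_D**2 / m_N               # type I seesaw estimate

# identical to Eq. (ss_neutrino_mass) after combining the exponents
m_nu_closed = k * np.exp(-2.0 * kpiR * (c_ell + c_N))
print(m_D, m_N)                   # ~0.8 GeV and ~5e8 GeV
print(m_nu, m_nu_closed)          # ~1e-9 GeV, i.e. about 1 eV
```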
Since we place the hierarchy in $c_{\alpha_R}$ and fix the
$c_{\ell_\alpha}$ to be the same for all flavors, we will get
a non-hierarchical (anarchical) neutrino mass matrix if $c_N$ is non-hierarchical.
On the other hand, if we fix $c_{\alpha_R}$ while having hierarchical
$c_{\ell_\alpha}$, $c_N$ should also be hierarchical in order to get
an anarchical neutrino mass matrix.
\section{Inverse seesaw in warped extra dimension}
\label{sec:inverse_seesaw}
As noted earlier, to implement the inverse seesaw mechanism\ \cite{Mohapatra:1986aw}
in 4D models, two types of chiral gauge singlet fermions are needed.
As before we denote by $N$ the RH neutrinos used for seesaw mechanism discussed above
and by $S$ the extra singlet fermion fields. They form a pseudo-Dirac pair with
splitting given by a tiny parameter that breaks the lepton number.
The smallness of this parameter is chosen by hand in the 4D case.
We follow this strategy closely in the discussion of WED embedding of inverse seesaw.
The bulk action for inverse seesaw in WED can be written as follows:
\begin{eqnarray}
S & = & \int d^{4}x\int_{-\pi R}^{\pi R}dy\sqrt{G}
\Bigg[\overline{N}iE_{a}^{A}\gamma^{a}D_{A}N-m_{DN5}\overline{N}N
+\overline{S}iE_{a}^{A}\gamma^{a}D_{A}S-m_{DS5}\overline{S}S\nonumber \\
& & -\left( \frac{1}{2}m_{N5}\overline{N}N^{c}
+\frac{1}{2}m_{S5}\overline{S}S^{c}+\frac{1}{2}m_{SN5}\overline{S}N^{c}
+\lambda_{N5}\overline{\ell}N H+\lambda_{S5}\overline{\ell}S^{c} H
+{\rm h.c.}
\right)\Bigg] \, .
\label{eq:inv_seesaw_action}
\end{eqnarray}
If we assign the lepton number $L$ for $N,S$ respectively as $L=1,-1$,
the only fermion bilinears which violate $L$ are the Majorana masses
$m_{N5}$ and $m_{S5}$ in the action \eqref{eq:inv_seesaw_action}.
This above action could also arise from an exact gauge symmetry
such as $U(1)_{B-L}$\ (with the usual definition of quantum numbers)
after spontaneous symmetry breaking by Higgs field that transforms as
$B-L =+1$. The $S$ field is $B-L$ neutral and therefore
its Majorana mass term $m_{S5}$, which is allowed by $B-L$, breaks the global symmetry $L$\ (defined above) that persists
in the gauged $B-L$ version.
The $B-L$ model implies that $m_{N5}=0$; however,
since $m_{N5}$ does not play a role in the neutrino masses and mixing,
the final results derived in the model without $B-L$ symmetry and presented below remain unchanged.
Note that in 4D, a five dimensional field splits into two chiral pairs
and only one chirality remains as a zero mode. So in our model, in 4D,
only the left chirality of the lepton doublet $\ell$
and the right chiralities of $S$ and $N$ survive as zero modes.
In odd space-time dimension (i.e. five), the action \eqref{eq:inv_seesaw_action}
contains parity anomaly if the total number of
bulk fermions that couple to gauge and gravity fields
is odd\ \cite{Redlich:1983kn,Callan:1984sa}. In warped type I seesaw as discussed
in Sec.\ \ref{sec:typeI_seesaw} where the lepton doublets also propagate in the extra dimension,
cancellation of the parity anomaly naturally requires three generations
of RH neutrinos $N$. However, in the warped inverse seesaw,
in order not to reintroduce the parity anomaly,
we have to add an even number of singlet Dirac fermions $S$.
The minimal number of $S$ fields required to obtain three light active neutrinos
is three. Since we cannot have an odd number of $S$, the minimal number has
to be four. Thus, after the three of the four $S$-fields pair up
with the three $N$ fields to make the three pseudo-Dirac fermions,
we are left with an extra $S$ field which in the end becomes the
sterile neutrino with mass in the eV range.
We again assume that Higgs doublet is localized on the TeV brane
and that the $L$-violating Majorana masses are confined to the Planck brane
i.e. $m_{N5} = d_N \, \delta(y),\, m_{S5} = d_S \, \delta(y)$ with $d_N,\,d_S$
dimensionless numbers.
For simplicity,
we further assume that $m_{SN5} = d_{SN} \, k$
with $d_{SN}$ dimensionless number and ignore any possible boundary masses.
To estimate the order of magnitude of the dimensionless parameters
that characterize the model, we consider only one generation for all fermions.
Assuming the 5D location of the fields to be
$c_\ell > 1/2$, $c_{e_R} > 1/2$, $c_N < 1/2$ and $c_S < 1/2$,
we find the 4D effective masses to be
\begin{eqnarray}
m_D~&\sim&~ k\times e^{-k\pi R(c_\ell+\frac{1}{2})}\, , \nonumber \\
m_N ~&\sim&~ k \times e^{\pi k R(2c_N-1)} \, ,\nonumber \\
m_S ~&\sim&~ k \times e^{\pi k R(2c_S-1)} \, , \nonumber \\
m_{SN}~&\sim&~ k \times e^{-k \pi R} \, .
\end{eqnarray}
In the equation for $m_{SN}$ we have also assumed
that $ c_N + c_S \leq 0$ and we will see that this is indeed what
we need for the inverse seesaw mechanism to work.
This leads to the effective light neutrino mass
\begin{equation}
m_{\nu} \sim k \times e^{-2 \pi k R (c_\ell -c_S)} \, .
\label{eq:invss_neutrino_mass}
\end{equation}
In contrast to Eq.~\eqref{eq:ss_neutrino_mass}, here
the smallness of the light neutrino mass is attributed to small $m_S$.
For example, if we take $c_\ell = 0.65$ and $c_S \lesssim -0.2$,
we have $m_\nu \lesssim 1\,{\rm eV}$ with the seesaw scale
$m_{SN} \sim \mathcal{O}({\rm TeV})$. Also, if we assume all $c_\ell$ and $c_S$ to be
of the same order for all generations, we get a neutrino
mass matrix with a non-hierarchical pattern.
It is interesting that the final neutrino mass formula
is independent of the precise 5D profile of the $N$ fields.
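A numerical sketch (again ours, with couplings of order one) confirms both the exponent algebra behind Eq.~(\ref{eq:invss_neutrino_mass}) and the TeV-range seesaw scale:

```python
import numpy as np

k, kpiR = 2.4e18, 37.0            # GeV and the dimensionless k*pi*R
c_ell, c_S = 0.65, -0.2

m_D  = k * np.exp(-kpiR * (c_ell + 0.5))
m_S  = k * np.exp( kpiR * (2*c_S - 1.0))
m_SN = k * np.exp(-kpiR)

m_nu = m_D**2 * m_S / m_SN**2     # inverse seesaw estimate
m_nu_closed = k * np.exp(-2.0 * kpiR * (c_ell - c_S))
print(m_SN)                       # ~2e2 GeV: seesaw scale near the TeV range
print(m_nu, m_nu_closed)          # ~1e-9 GeV, i.e. about 1 eV
```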
\subsection{The appearance of sterile neutrino(s)}
\label{sec:sterile}
To understand the appearance of the sterile neutrino(s),
let us consider a toy model with one generation of
$\ell$ and $N$ and two generations of $S$ called $S_1, S_2$.
In this case, since only one S is needed for the inverse seesaw, the additional one
would result in a sterile neutrino.
In this model, the mass matrix in the basis $(\nu,N,S_1,S_2)$
is given by
\begin{eqnarray}
M & = & \left(\begin{array}{cccc}
0 & m_D & 0 & 0 \\
m_D & m_N & m_{SN_1} & m_{SN_2} \\
0 & m_{SN_1} & m_{S_{11}} & m_{S_{12}} \\
0 & m_{SN_2} & m_{S_{12}} & m_{S_{22}}
\end{array}\right) \, .
\label{eq:model_112}
\end{eqnarray}
For simplicity, we assume that $m_{SN_1} = m_{SN_2} = m_{SN}$,
$m_{S_{11}} = m_{S_{22}} = m_S$ and $m_N = m_{S_{12}} = 0$.
Assuming $m_S \ll m_D \ll m_{SN}$, we can diagonalize matrix
\eqref{eq:model_112} and obtain two heavy
and two light states with their respective masses given by
\begin{eqnarray}
m_{\rm heavy} &\simeq& \pm \sqrt{2m_{SN}^2 + m_D^2}
+ m_S\frac{m_{SN}^2}{2 m_{SN}^2 + m_D^2} \, , \label{eq:m_heavy}\\
m_{\rm light} &\simeq& m_S,\;\; m_S\frac{m_D^2}{2m_{SN}^2 + m_D^2} \, .
\label{eq:m_light}
\end{eqnarray}
The two heavy states mix with the light neutrino with $\sim m_D/m_{SN}$
and can be named the heavy RH neutrinos.
The light state with mass $m_S$ can be identified as sterile
neutrino while the other is the light active neutrino.
Hence, we obtain an interesting relation between the mass
of active and sterile neutrinos as follows
\begin{equation}
m_{\rm active} \simeq m_{\rm sterile}\,
\frac{m_D^2}{2m_{SN}^2 + m_D^2} \, .
\label{eq:mac_mst}
\end{equation}
For instance, to have an active neutrino with mass $m_{\rm active}\sim$ 0.05 eV
and a sterile neutrino with mass $m_{\rm sterile}\sim$ 1 eV, we would require
a hierarchy between $m_D$ and $m_{SN}$ to be $m_D/m_{SN} \sim 0.22$.
On the other hand, if we want $m_{\rm sterile}\sim$ 1 keV
which could be a potential dark matter candidate, it would require
$m_D/m_{SN} \sim 0.007$.
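The approximate eigenvalues of Eqs.~(\ref{eq:m_heavy}) and (\ref{eq:m_light}) can be checked against an exact diagonalization (illustrative numbers with $m_S \ll m_D \ll m_{SN}$; $m_S$ is exaggerated relative to the eV scale purely for numerical conditioning):

```python
import numpy as np

mD, mSN, mS = 10.0, 200.0, 1e-3    # GeV, with mS << mD << mSN

# toy mass matrix of Eq. (model_112) with m_N = m_S12 = 0
M = np.array([[0.0, mD,  0.0, 0.0],
              [mD,  0.0, mSN, mSN],
              [0.0, mSN, mS,  0.0],
              [0.0, mSN, 0.0, mS ]])

exact = np.sort(np.linalg.eigvalsh(M))

A2 = 2*mSN**2 + mD**2
approx = np.sort([ np.sqrt(A2) + mS*mSN**2/A2,   # Eq. (m_heavy)
                  -np.sqrt(A2) + mS*mSN**2/A2,
                   mS,                           # sterile: (S1-S2)/sqrt(2) decouples
                   mS*mD**2/A2 ])                # Eq. (m_light), active state
print(exact)
print(approx)
```

Scaling $m_S$ down to the $10^{-9}\,$GeV range used in the text changes nothing qualitatively; it only makes the active state proportionally lighter, as in Eq.~(\ref{eq:mac_mst}).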
It should be pointed out that although the result above
is obtained by assuming $m_{S_{12}}$ to be vanishing,
barring any accidental cancellation, it holds in general even if $m_{S_{12}}$
is of the order of the diagonal elements $m_{S_{11}}$ and $m_{S_{22}}$.
The result will also hold if $m_N$ is non-vanishing as
long as $m_N \ll m_{SN}$. In order to obtain more than one sterile neutrino,
we can extend the number of $S$ in the model.
\subsection{Neutrino mixing in warped inverse seesaw}
\label{sec:mixing}
We will now present a realistic warped inverse seesaw model with
three $N$ and four $S$ fields to explore the detailed
neutrino mixing and mass hierarchy pattern.
We first ignore the contributions of the KK modes
which will be discussed in a subsequent section, where we will show
under what conditions their effects can be safely ignored.
Considering for now only the zero modes, we have
the leading $10 \times 10$ neutrino mass matrix in the basis
$(\nu,N,S)$, which is given as follows
\begin{eqnarray}
M & = & \left(\begin{array}{ccc}
0 & m_D & 0 \\
m_D^T & m_N & m_{SN} \\
0 & m_{SN}^T & m_S
\end{array}\right) \, .
\label{eq:invss_mass_matrix}
\end{eqnarray}
where $m_D$, $m_{SN}$ and $m_S$ are respectively
$3\times3$, $3\times 4$ and $4\times 4$ matrices.
Assuming $m_S,m_N \ll m_D \ll m_{SN}$, we can block diagonalize the
mass matrix above and obtain the light neutrino mass matrix to be
\begin{equation}
M_\nu \simeq -m_D \left(m_{SN}\,m_S^{-1}\,m_{SN}^T\right)^{-1} m_D^T \, .
\label{eq:invss_light_neutrino}
\end{equation}
Note that $m_N$ does not appear in the light neutrino masses. It only affects
the mass splitting of the pseudo-Dirac pair $(N,S)$.
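Before turning to realistic fits, the accuracy of this formula can be illustrated with a small numerical sketch (the matrices below are our own illustrative choices with $m_S \ll m_D \ll m_{SN}$, not the fits of the following sections):

```python
import numpy as np

# illustrative textures with m_S << m_D << m_SN (all in GeV)
mD  = np.array([[5., 1., 1.], [1., 6., 1.], [1., 1., 7.]])
mSN = np.array([[200., 10., 10., 10.],
                [10., 210., 10., 10.],
                [10., 10., 220., 10.]])
mS  = 1e-3 * np.array([[2., .3, .2, .1],
                       [.3, 3., .2, .1],
                       [.2, .2, 4., .1],
                       [.1, .1, .1, 5.]])
mN  = np.zeros((3, 3))

# full 10x10 matrix of Eq. (invss_mass_matrix)
M = np.block([[np.zeros((3, 3)), mD,    np.zeros((3, 4))],
              [mD.T,             mN,    mSN             ],
              [np.zeros((4, 3)), mSN.T, mS              ]])

# approximate light mass matrix of Eq. (invss_light_neutrino)
Mnu = -mD @ np.linalg.inv(mSN @ np.linalg.inv(mS) @ mSN.T) @ mD.T

light_exact  = np.sort(np.abs(np.linalg.eigvalsh(M)))[:3]
light_approx = np.sort(np.abs(np.linalg.eigvalsh(Mnu)))
print(light_exact)
print(light_approx)   # agree up to O(m_D^2/m_SN^2) corrections
```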
\subsection{Examples I: Normal hierarchy (NH) mass spectrum}
\label{sec:nor_eg}
In this section, we address the issue of neutrino mixing.
We will search for the parameter space which reproduces the neutrino mixing matrix
for the normal hierarchy spectrum ($m_{\nu_3} > m_{\nu_2} > m_{\nu_1}$)
while having an anarchical pattern for $m_D$ and $m_{SN}$. Notice
that $m_{SN}$ is naturally anarchical. As far as $m_D$ is
concerned, since the $N$ fields (unlike the RH charged leptons)
are located closer to the TeV brane, whether it is hierarchical or not depends
on the profiles of the left-handed lepton doublets.
By fixing the values of left-handed lepton doublets and attributing the hierarchy to
the RH singlet charged leptons, we can have an anarchical $m_D$.
For example, we have chosen the bulk mass
parameters for charged leptons as follows: $c_{\ell_e} = c_{\ell_\mu} = c_{\ell_\tau} = 0.65$,
$c_{e_R} = 0.7770, c_{\mu_R} = 0.6099, c_{\tau_R} = 0.5271 $.
This fits the charged lepton mass spectrum. We then choose
$c_{N_1} = c_{N_2} = c_{N_3} = -0.340$ and
$c_{S_1} = -0.338, c_{S_2} = -0.366, c_{S_3} = -0.358, c_{S_4} = -0.377$,
and all the dimensionless couplings taking values in the range $[0.1,1.0]$.
We then obtain the following mass matrices\ (in GeV)
\begin{eqnarray}
m_{D} & = & \left(\begin{array}{ccc}
2.763 & 6.029 & 15.826 \\
9.294 & 5.778 & 12.058 \\
12.540 & 16.077 & 7.285
\end{array}\right) \, ,
\label{eq:mD_anar_nor}
\end{eqnarray}
\begin{eqnarray}
m_{SN} & = & \left(\begin{array}{cccc}
172.342 & 191.492 & 138.988 & 141.208 \\
177.903 & 222.665 & 134.505 & 264.764 \\
82.1093 & 276.105 & 347.918 & 269.177
\end{array}\right) \, ,
\label{eq:mSN_anar_nor}
\end{eqnarray}
\begin{eqnarray}
m_{S} & = & \left(\begin{array}{cccc}
5.8522 & 3.2171 & 2.5562 & 1.8856 \\
3.2171 & 0.7979 & 2.5724 & 1.5791 \\
2.5562 & 2.5724 & 3.8971 & 0.8128 \\
1.8856 & 1.5791 & 0.8128 & 1.3833
\end{array}\right)\times 10^{-9} \, ,
\label{eq:mS_anar_nor}
\end{eqnarray}
\begin{eqnarray}
m_{N} & = & \left(\begin{array}{ccc}
0.4072 & 1.018 & 0.6108 \\
1.018 & 0.8143 & 1.222 \\
0.6108 & 1.222 & 1.018
\end{array}\right)\times 10^{-9} \, .
\label{eq:mN_anar_nor}
\end{eqnarray}
From the matrices above, we obtain the light neutrino mixing matrix
(by diagonalizing the $10 \times 10$ neutrino mass matrix) as follows
\begin{eqnarray}
U_\nu^{nor} & = & \left(\begin{array}{cccc}
-0.8517 & 0.5122 & 0.0962 & -0.0135 \\
0.3183 & 0.6593 & -0.6694 & 0.1104 \\
-0.4136 & -0.5468 & -0.7182 & 0.1005 \\
0.0466 & 0.0633 & 0.1638 & 0.9887
\end{array}\right) \, ,
\label{eq:neutrino_mixing_nor}
\end{eqnarray}
where the last row and column correspond to the mixing with light sterile neutrino.
The top left $3\times 3$ submatrix of matrix \eqref{eq:neutrino_mixing_nor}
corresponds to the mixing between active neutrinos and is in good
agreement with the one obtained from the approximate formula
Eq.\ \eqref{eq:invss_light_neutrino}\footnote{In this work, we will only use
the exact mixing matrix obtained from diagonalizing $10 \times 10$ neutrino mass matrix.}.
The active neutrino mixing matrix \eqref{eq:neutrino_mixing_nor} is in
good agreement with the experimentally measured values~\cite{GonzalezGarcia:2010er}
\begin{eqnarray}
U_\nu^{exp} & = & \left(\begin{array}{ccc}
-0.8212 & 0.5623 & 0.0976 \\
0.3598 & 0.6429 & -0.6762 \\
-0.4429 & -0.5202 & -0.7302
\end{array}\right) \, .
\label{eq:neutrino_mixing_exp}
\end{eqnarray}
The masses of three light active neutrinos are
$(\nu_1,\nu_2,\nu_3) = (0.00172, 0.00885, 0.0516)\,{\rm eV}$.
On the other hand, the sterile neutrino
has a mass of $0.848\,{\rm eV}$ which could potentially explain
the anomaly in LSND and MiniBooNE\ \cite{Aguilar:2001ty,AguilarArevalo:2010wv}.
From the above, we can determine the mass squared differences
of the active neutrinos
\begin{eqnarray}
\Delta m_{21}^2 & = & 7.54 \times 10^{-5}\,{\rm eV}^2 \, , \nonumber \\
\Delta m_{31}^2 & = & 2.66 \times 10^{-3}\,{\rm eV}^2 \, ,
\end{eqnarray}
which are within $1 \sigma$ of the experimental values.
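These values follow directly from the quoted masses; a two-line check:

```python
# Arithmetic check (masses copied from the text, in eV)
m1, m2, m3 = 0.00172, 0.00885, 0.0516

dm21 = m2**2 - m1**2
dm31 = m3**2 - m1**2
print(dm21)   # ~7.54e-5 eV^2
print(dm31)   # ~2.66e-3 eV^2
```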
With the existence of an extra light sterile neutrino with
significant mixing with the active neutrinos, we have to
check if this could be consistent with the experimentally
determined number of species of neutrinos.
In the SM, the difference of the total width of the $Z^0$ boson
and the width for the decay into all visible channels is
attributed only to the light neutrinos that couple to the $Z^0$ boson.
This was determined very precisely from LEP data to be
$N_\nu = 2.9841 \pm 0.0083$~\cite{Nakamura:2010zzi}.
In our example above, we can calculate the correction due to the
existence of the light sterile neutrino which mixes with
active neutrinos~\cite{Bilenky:1990tm}
\begin{equation}
N_\nu = \sum_{i,j=1}^4
\left|\sum_{\alpha=e,\mu,\tau}
U_{\alpha i}^* U_{\alpha j}\right|^2 = 2.979 \, ,
\label{eq:zwidth_num_nu}
\end{equation}
where we have ignored the neutrino masses since $m_{1,2,3,4} \ll M_{Z^0}$.
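A small sketch of Eq.~(\ref{eq:zwidth_num_nu}) (our own illustration, not the model fit): for an \emph{exactly} unitary $4\times4$ light mixing matrix the formula returns $N_\nu = 3$ identically, so the deficit from $3$ in the value $2.979$ quoted above is entirely due to the small admixture of the heavy states, which makes the light $4\times4$ block slightly non-unitary. (For the same reason, plugging in the rounded matrix of Eq.~(\ref{eq:neutrino_mixing_nor}) does not reproduce $2.979$ exactly.)

```python
import numpy as np

def N_nu(U_light):
    """Eq. (zwidth_num_nu): the flavor sum runs over the three active rows."""
    A = U_light[:3, :]            # rows e, mu, tau; columns: light states
    G = A.conj().T @ A            # G_ij = sum_alpha U*_{alpha i} U_{alpha j}
    return float(np.sum(np.abs(G)**2))

# exactly unitary 4x4 (no heavy admixture): N_nu = 3 identically
U, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(4, 4)))
print(N_nu(U))                    # 3.0

# damping the active rows mimics an O(m_D^2/m_SN^2) heavy admixture
U_damped = U.copy()
U_damped[:3, :] *= np.sqrt(1 - 0.0035)
print(N_nu(U_damped))             # about 2.979, slightly below 3
```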
Finally we can also write down $4\times 4$ neutrino mass matrix
including three active and one light sterile neutrinos as follows\ (in GeV)
\begin{eqnarray}
M_{\nu}^{nor} & = &
\left(
\begin{array}{cccc}
0.003890 & 0.0004645 & -0.004287 & 0.01234 \\
0.0004645 & 0.01681 & 0.01198 & -0.09776 \\
-0.004287 & 0.01198 & 0.02098 & -0.09063 \\
0.01234 & -0.09776 & -0.09063 & -0.8271
\end{array}
\right) \times 10^{-9} \, .
\end{eqnarray}
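As a consistency sketch (ours), one can diagonalize this matrix directly; absorbing the overall factor $10^{-9}$, the entries are in eV and the eigenvalue magnitudes should recover the quoted light masses up to the rounding of the printed entries:

```python
import numpy as np

# 4x4 active + sterile mass matrix copied from the text, in eV
Mnu = np.array([[ 0.003890,  0.0004645, -0.004287,  0.01234],
                [ 0.0004645, 0.01681,    0.01198,  -0.09776],
                [-0.004287,  0.01198,    0.02098,  -0.09063],
                [ 0.01234,  -0.09776,   -0.09063,  -0.8271 ]])

masses = np.sort(np.abs(np.linalg.eigvalsh(Mnu)))
print(masses)   # compare with (0.00172, 0.00885, 0.0516, 0.848) eV
```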
Since experiments have not yet determined the sign of $\Delta m_{31}^2$,
we consider below the possibility of an inverted neutrino mass spectrum, i.e.
$m_{\nu_2} > m_{\nu_1} > m_{\nu_3}$.
\subsection{Examples II: Inverted hierarchy (IH) mass spectrum}
\label{sec:inv_eg}
We again choose anarchical $m_D$ and $m_{SN}$.
The 5D parameters in this case are different.
For example, we have chosen the bulk mass
parameters as follows: $c_{\ell_e} = c_{\ell_\mu} = c_{\ell_\tau} = 0.65$,
$c_{e_R} = 0.7770, c_{\mu_R} = 0.6099, c_{\tau_R} = 0.5271 $,
$c_{N_1} = c_{N_2} = c_{N_3} = -0.360$,
$c_{S_1} = -0.3869, c_{S_2} =-0.353, c_{S_3} = -0.3029, c_{S_4} =-0.343$,
and all the dimensionless couplings taking values in the range $[0.1,1.0]$,
and we obtain the following mass matrices (in GeV)
\begin{eqnarray}
m_{D} & = & \left(\begin{array}{ccc}
3.1553 & 5.7058 & 7.4499 \\
8.1511 & 3.9266 & 2.7872 \\
3.8560 & 5.5772 & 0.3675
\end{array}\right) \, ,
\label{eq:mD_anar_inv}
\end{eqnarray}
\begin{eqnarray}
m_{SN} & = & \left(\begin{array}{cccc}
105.405 & 101.936 & 107.203 & 70.534 \\
85.528 & 110.527 & 73.175 & 133.853 \\
32.219 & 150.114 & 176.516 & 141.532
\end{array}\right) \, ,
\label{eq:mSN_anar_inv}
\end{eqnarray}
\begin{eqnarray}
m_{S} & = & \left(\begin{array}{cccc}
0.2383 & 0.9876 & 4.0775 & 1.5672 \\
0.9876 & 2.0009 & 29.796 & 8.5212 \\
4.0775 & 29.796 & 214.205 & 20.529 \\
1.5672 & 8.5212 & 20.529 & 16.493
\end{array}\right)\times 10^{-9} \, ,
\label{eq:mS_anar_inv}
\end{eqnarray}
\begin{eqnarray}
m_{N} & = & \left(\begin{array}{ccc}
0.09489 & 0.2372 & 0.1423 \\
0.2372 & 0.1898 & 0.2847 \\
0.1423 & 0.2847 & 0.2372
\end{array}\right)\times 10^{-9} \, .
\label{eq:mN_anar_inv}
\end{eqnarray}
From the matrices above, we obtain the light neutrino mixing matrix
\begin{eqnarray}
U_\nu^{inv} & = & \left(\begin{array}{cccc}
-0.8131 & -0.5714 & 0.09739 & -0.0358 \\
0.3440 & -0.6076 & -0.7062 & -0.0866 \\
-0.4635 & 0.5353 & -0.6964 & 0.1003 \\
-0.0755 & 0.1336 & 0.0834 & -0.9905
\end{array}\right) \, ,
\label{eq:neutrino_mixing_inv}
\end{eqnarray}
where again the last row and column correspond to the mixing with light sterile neutrino.
As before, the top left $3\times 3$ submatrix of matrix \eqref{eq:neutrino_mixing_inv}
corresponds to the mixing between active neutrinos
and is in good agreement with the measured value in
Eq.\ \eqref{eq:neutrino_mixing_exp}.~\footnote{The sign differences in the
second column of Eq.\ \eqref{eq:neutrino_mixing_inv}
can be accounted for by changing the Majorana phases accordingly.}
The masses of the three light active neutrinos are
$(m_{\nu_1},m_{\nu_2},m_{\nu_3}) = (0.0471, 0.0479, 0.000454)\,{\rm eV}$.
On the other hand, the sterile neutrino has a mass of $16.9\,{\rm eV}$
which could be too large to explain the anomaly in LSND and MiniBooNE.
From the above, we can determine the mass squared differences
of the active neutrinos
\begin{eqnarray}
\Delta m_{21}^2 & = & 7.74 \times 10^{-5}\,{\rm eV}^2 \, , \nonumber \\
\Delta m_{31}^2 & = & - 2.22 \times 10^{-3}\,{\rm eV}^2 \, ,
\end{eqnarray}
which are within 1$\sigma$ of the experimental values for the inverted mass spectrum.
In this example, we determine $N_\nu = 2.9767$ from Eq.\ \eqref{eq:zwidth_num_nu}.
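As a quick numerical cross-check (a minimal sketch; the input numbers are the rounded values quoted above, so only approximate agreement can be expected), both the mass-squared differences and the value of $N_\nu$ from Eq.\ \eqref{eq:zwidth_num_nu} can be reproduced directly from the quoted masses and from the three active rows of the mixing matrix \eqref{eq:neutrino_mixing_inv}:

```python
import numpy as np

# Rounded masses quoted in the text (eV); exact agreement is not expected.
m1, m2, m3 = 0.0471, 0.0479, 0.000454
dm21 = m2**2 - m1**2          # should be close to  7.74e-5 eV^2
dm31 = m3**2 - m1**2          # should be close to -2.22e-3 eV^2

# N_nu from Eq. (zwidth_num_nu): sum over the active rows (e, mu, tau)
U = np.array([[-0.8131, -0.5714,  0.09739, -0.0358],
              [ 0.3440, -0.6076, -0.7062,  -0.0866],
              [-0.4635,  0.5353, -0.6964,   0.1003]])
N_nu = np.sum(np.abs(U.T @ U) ** 2)   # sum_{i,j} |sum_alpha U*_ai U_aj|^2

assert abs(dm21 - 7.74e-5) / 7.74e-5 < 0.05   # within rounding of the masses
assert abs(dm31 + 2.22e-3) / 2.22e-3 < 0.01
assert abs(N_nu - 2.9767) < 0.005             # quoted value N_nu = 2.9767
```

The tolerances simply absorb the rounding of the printed four-digit entries.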
We can also write down the $4\times 4$ neutrino mass matrix
including the three active and one light sterile neutrino as follows (in GeV)
\begin{eqnarray}
M_{\nu}^{inv} & = &
\left(
\begin{array}{cccc}
0.03718 & 0.02257 & -0.02833 & 0.6064 \\
0.02257 & 0.1148 & -0.1385 & 1.4528 \\
-0.02833 & -0.1385 & 0.1770 & -1.6816 \\
0.6064 & 1.4528 & -1.6816 & 16.5961
\end{array}
\right) \times 10^{-9} \, .
\end{eqnarray}
\subsection{Contributions from Kaluza-Klein modes}
\label{sec:kk-modes}
In this section we estimate the contributions
from the KK modes. For simplicity, we consider a single generation
for each of the $N$, $S$ and $\ell$ fields.
We KK-decompose the 5D fermionic fields as
\begin{equation}
\Psi_{L,R}(x^\mu,y) = \frac{e^{2\sigma}}{\sqrt{2\pi R}}
\sum_{n=0}^{\infty} \Psi^{(n)}_{L,R}(x^\mu) \widetilde\Psi^{(n)}_{L,R}(y) \, ,
\label{eq:kk-decom}
\end{equation}
where $\Psi_{L,R} = \frac{1}{2}(1\mp \gamma_5)\Psi$.
For $N$ and $S$, we choose $N_R$ and $S_R$ to be even under $Z_2$
while for $\ell$, we choose $\ell_L$ to be even.
As before, we will restrict $H$ to be strictly confined to the TeV brane
with $H(y) = k \, \delta(y-\pi R)$ and the Majorana masses to be strictly
confined to the Planck brane i.e. $m_{N5} = d_N \, \delta (y)$ and
$m_{S5} = d_S \, \delta (y)$. Similarly, we also assume
$m_{SN5} = d_{SN} k$.
Substituting Eq.\ \eqref{eq:kk-decom} for $N$, $S$ and $\ell$ fields
respectively into the action \eqref{eq:inv_seesaw_action},
we can write down the mass matrix in the basis of
$\left(\nu_{L}^{(0)},N_{R}^{\left(0\right)},S_{R}^{\left(0\right)},
\nu_{L}^{(1)},\nu_{R}^{(1)},
N_{L}^{\left(1\right)},N_{R}^{\left(1\right)},
S_{L}^{\left(1\right)},S_{R}^{\left(1\right)},...\right)$
as follows
\begin{eqnarray}
\!\!\!\! M^{kk} & = & \left(\begin{array}{cccccccccc}
0 & m_{D}^{(00)} & 0 & 0 & 0 & 0 &
m_{D}^{(01)} & 0 & 0 & ...\\
m_{D}^{(00)} & m_{N_{R}}^{(00)} & m_{SN_{R}}^{(00)} & m_{D}^{(10)} & 0 &
0 & m_{N_R}^{(01)} & 0 & m_{SN_R}^{(10)} & ...\\
0 & m_{SN_{R}}^{(00)} & m_{S_R}^{(00)} & 0 & 0 &
0 & m_{SN_{R}}^{(01)} & 0 & m_{S_{R}}^{(01)} & ...\\
0 & m_{D}^{(10)} & 0 & 0 & m_\nu^{(1)} &
0 & m_{D}^{(11)} & 0 & 0 & ...\\
0 & 0 & 0& m_\nu^{(1)} & 0 &
0 & 0 & 0 & 0 & ...\\
0 & 0 & 0 & 0 & 0 &
0 & m_{N}^{(1)} & m_{SN_{L}}^{(11)} & 0 & ...\\
m_{D}^{(01)} & m_{N_{R}}^{(01)} & m_{SN_{R}}^{(01)}
& m_{D}^{(11)} & 0 & m_{N}^{(1)} & m_{N_{R}}^{(11)}
& 0 & m_{SN_{R}}^{(11)} & ...\\
0 & 0 & 0 & 0 & 0 &
m_{SN_{L}}^{(11)} & 0 & 0 & m_{S}^{(1)} & ...\\
0 & m_{SN_{R}}^{(10)} & m_{S_{R}}^{(01)} & 0 & 0 &
0 & m_{SN_{R}}^{(11)} & m_{S}^{(1)} & m_{S_{R}}^{(11)} & ...\\
... & ... & ... & ... & ... & ... & ... & ... & ... & ...
\end{array}\right)\, ,
\label{eq:mass_matrix}
\end{eqnarray}
where $m_S^{(1)},\, m_N^{(1)}$ and $m_\nu^{(1)}$ are respectively
the first KK masses of $S,\, N$ and $\nu$ and
\begin{eqnarray}
m_{D}^{(mn)} & = & e^{-k \pi R}
\lambda_{N4}\, k\, \widetilde \nu_{L}^{(m)} (\pi R)
\widetilde N_{R}^{(n)} (\pi R) \, , \nonumber \\
m_{N_{R}}^{(mn)} & = &
\frac{d_N}{2\pi R} \widetilde N_{R}^{(m)} (0)
\widetilde N_{R}^{(n)} (0) \, , \nonumber \\
m_{S_{R}}^{(mn)} & = &
\frac{d_S}{2\pi R} \widetilde S_{R}^{(m)} (0)
\widetilde S_{R}^{(n)} (0) \, , \nonumber \\
m_{SN_{L,R}}^{(mn)} & = &
\int^{\pi R}_{-\pi R} \frac{dy}{2\pi R} \, e^{-k |y|} \,
d_{SN}\, k \,\widetilde S_{L,R}^{(m)} (y)
\widetilde N_{L,R}^{(n)} (y) \, .
\label{eq:mass_elements_sim}
\end{eqnarray}
In the first equation of Eqs.\ \eqref{eq:mass_elements_sim},
$\lambda_{N4} = \frac{\lambda_{N5}}{2\pi R}$ is dimensionless.
Assuming $m_{N_{R}}^{(mn)},m_{S_{R}}^{(mn)} \ll
m_D^{(mn)} \ll m_{SN_{L,R}}^{(mn)} \ll \mbox{KK masses}$,
we obtain the light neutrino mass roughly as
\begin{eqnarray}
m_{\nu} & \sim & m_{S_{R}}^{(00)}
\frac{\left(m_D^{(00)}\right)^{2}}{\left(m_{SN_{R}}^{(00)}\right)^{2}}
\left[1+\mathcal{O}\left(\epsilon^{2}\right)\right] \, ,
\label{eq:light_neutrino_mass}
\end{eqnarray}
where $\epsilon \sim \frac{m_{SN_{L,R}}^{(mn)}}{\mbox{KK masses}}$.
Notice that at leading order, the light neutrino mass is given
by the inverse seesaw relation and the contributions from
KK modes are suppressed as long as
$m_{SN_{L,R}}^{(mn)}$ is less than KK masses.
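The leading-order scaling in Eq.\ \eqref{eq:light_neutrino_mass} can be checked numerically with a one-generation toy matrix. This is a minimal sketch, assuming the standard inverse-seesaw zero-mode structure in the $(\nu, N, S)$ basis with $m_N$ set to zero (cf.\ comment (i) below) and purely illustrative numbers for the mass entries:

```python
import numpy as np

# Illustrative entries (arbitrary units) obeying m_S << m_D << m_SN
m_D, m_SN, m_S = 1.0, 100.0, 1.0e-6

# One-generation zero-mode inverse-seesaw matrix in the (nu, N, S) basis, m_N = 0
M = np.array([[0.0,  m_D,  0.0 ],
              [m_D,  0.0,  m_SN],
              [0.0,  m_SN, m_S ]])

m_light = np.min(np.abs(np.linalg.eigvalsh(M)))   # lightest eigenvalue magnitude
m_pred  = m_S * m_D**2 / m_SN**2                  # inverse-seesaw estimate

assert abs(m_light - m_pred) / m_pred < 0.05      # agrees to O(m_D^2/m_SN^2)
```

The exact light eigenvalue is $m_S\, m_D^2/(m_{SN}^2 + m_D^2)$, so the relative deviation from the quoted relation is of order $m_D^2/m_{SN}^2$.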
Including the contributions from the first KK modes by considering
the entire $9 \times 9$ mass matrix in Eq.\ (\ref{eq:mass_matrix}),
we plot in Figure \ref{fig:kk} the numerically computed
correction of the first KK modes to the light
neutrino mass as a function of $d_{SN} = m_{SN5}/k$.
As long as we keep $d_{SN} \lesssim 0.3$
(as we did in the examples in Secs.~\ref{sec:nor_eg} and \ref{sec:inv_eg}),
the corrections from the first KK modes are not more than 20 \%.
Hence, we expect the contributions from higher KK modes to be negligible.
\begin{figure}[fhptb]
\centering
\includegraphics[scale=0.7]{kk.eps}
\caption[]{The contributions from the first KK modes
to the light neutrino mass as a function of $d_{SN} = m_{SN5}/k$.}
\label{fig:kk}
\end{figure}
\section{Comments }
\label{sec:pheno}
A few comments are now in order about our model.
\noindent (i) In the above discussion, we have added two kinds of lepton number breaking terms on the
Planck brane, i.e. Majorana mass terms of type $NN$ and $SS$, and assumed that these are
the only sources of lepton number violation in our model. However,
we could keep only the second of the two terms, in which case
the term $m_N$ in Eq.\ (\ref{eq:invss_mass_matrix}), depicting the inverse seesaw matrix
for the zero modes, will be absent. Similarly, in the discussion of KK mode contributions,
all Majorana terms involving $N_L, N_R$ (i.e. $m^{(mn)}_{N_{L,R}}$) will be absent.
This makes it easier to estimate the KK contributions to the zero mode
mass and it confirms our result that they are indeed small.
Such a situation can be guaranteed by adding an extra $B-L$ gauge symmetry
into the theory under which $S$ is a singlet but $N$ field is not.
The $m_{SN}$ is then generated by a Higgs field which breaks the
$B-L$ symmetry by one unit. Since $m_N$ and all Majorana masses involving the higher
KK modes of the $N$ field break $B-L$ by two units, they will be absent.
\noindent (ii) A comment on the phenomenology of our model:
A key feature of the model is the presence of
a light sterile neutrino, which arises because
of the need to guarantee freedom from parity anomalies as noted.
Since the number $N_S$ of $S$ fields we can add to the model has to be
even, the prediction of this model is that we will have an odd number
of sterile neutrinos, $N_{\rm sterile} = N_S - 3$, where 3
is the number of families in the SM.
The sterile neutrino will
contribute to neutrinoless
double beta decay due to its mixing with $\nu_e$; however
the effective neutrino mass due to this contribution
remains in the 3 meV range due to small mixing and eV range sterile mass.
This remains far below the reach of current double beta decay searches.
This sterile neutrino could also provide a way to understand the recent
reactor anomalies as well as the MiniBooNE and LSND
results~\cite{Giunti:2011gz}. However at LSND and MiniBooNE,
it will predict the same oscillation effect for both neutrinos and anti-neutrinos.
The sterile neutrino will contribute an extra neutrino
species in the analysis of Big Bang nucleosynthesis (BBN). This is consistent
with current analyses of the BBN as well as cosmological
structure formation and WMAP data~\cite{hannestad}.
\noindent(iii)
The scenario outlined here leads to a $\theta_{13}\simeq 0.096$
for the NH and $0.097$ for the IH case. This is however
not a prediction since the Dirac neutrino Yukawa coupling, the
lepton number violating masses $m_{S}$, $m_{N}$
as well as the $m_{SN}$ matrices all involve free parameters.
\noindent (iv) The specific model discussed here predicts RH neutrinos
with masses of order 100 GeV which are therefore
accessible at the LHC via their mixing with the left-handed neutrino.
LHC signals for such Dirac neutrinos have been studied in Ref.\ \cite{saavedra}.
Their primary decay signal is three leptons plus missing energy in $pp$ collisions. Furthermore,
an interesting possibility is that the KK excited mode of
the electron, if in the TeV range, could decay to the RH neutrino and the $W$.
Since the dominant decay mode of the RH neutrino is
to two leptons plus missing energy ($N\to \ell^+\ell^- \nu$),
there could be exotic final states such as $\ell^\pm\ell^\mp \ell^- \nu$.
\noindent(v) The TeV scale particle spectrum in the model is similar
to an SO(10) model with inverse seesaw discussed in the literature.
Extrapolating the discussion of that model~\cite{blanchet}, it appears very likely
that it will provide a satisfactory framework for realization
of resonant leptogenesis idea to understand the origin of matter.
\section{Conclusions}
\label{sec:conclusion}
In summary, we have presented a new way to understand the small neutrino masses
by embedding the inverse seesaw mechanism into the warped extra dimension models.
In the four dimensional implementation of the inverse seesaw,
a small lepton number violating mass term (of order a keV or less) needs to be
put in by hand to get sub-eV scale active neutrino masses.
In the WED framework on the other hand, locating the
lepton number violating mass terms on the Planck brane
provides a simple way to understand this smallness without any fine-tuning of parameters.
This model differs from the type I seesaw in WED by the
presence of sub-TeV scale right-handed neutrinos which may be accessible at the LHC.
An interesting prediction of the model is an eV scale sterile
neutrino which arises from the requirement of cancellation of
parity anomaly in odd number of dimensions. Its small mass is again connected to the small
parameter in the inverse seesaw and the lepton number breaking in the Planck brane.
We have also worked out numerical examples which give active neutrino masses and mixings
in accord with observations for both the normal and inverted hierarchy cases,
showing that such models can indeed provide a realistic description of nature.
\subsection*{Acknowledgements}
We would like to thank K.~Agashe for many useful discussions and a critical reading of the manuscript.
The work of R.N.M. was supported by
the NSF grant PHY-0968854, and the work of I.S. was supported by
the U.S.~Department of Energy through grant DE-FG02-93ER-40762.
C.S.F would like to thank C.N. Yang Institute for Theoretical Physics
for the generous support.
\label{app:UCT_HLLD_isothermal}
The wave pattern emerging from the solution of the Riemann problem in isothermal MHD differs from its adiabatic counterpart for the absence of the contact or entropy mode.
The HLLD flux can still be written in the Roe-like form (\ref{eq:hlld1}) but the jump conditions are different, as shown by Mignone (2007) \cite{Mignone2007}.
Eqs. (\ref{eq:UCT_HLLD_ad}) and (\ref{eq:UCT_HLLD_nu}) retain the same form, but the coefficient $\chi^s$ needed in our Eq. (\ref{eq:By_chi}) is different.
In particular, from Eqs. (32)-(33) of \cite{Mignone2007}, we find the following expression:
\begin{equation}
\chi^s = \frac{\rho^s(\lambda^s - v^s_x) - B_x^2}
{\rho^*(\lambda^s - \lambda^{*L})(\lambda^s - \lambda^{*R})} - 1
\,,
\end{equation}
where
\begin{equation}\label{eq:hlld_iso_lambda*}
\lambda^{*L} = u^* - \frac{|B_x|}{\sqrt{\rho^*}} \,,\qquad
\lambda^{*R} = u^* + \frac{|B_x|}{\sqrt{\rho^*}}
\end{equation}
are the Alfv\'en velocities, $u^* = F^{\rm hll}/\rho^{\rm hll}$ and $\rho^* = \rho^{\rm hll}$ is the density inside the Riemann fan.
From Eq. (\ref{eq:hlld_iso_lambda*}), one clearly has that $\lambda^{*R} + \lambda^{*L} = 2u^*$ and $B_x^2 = \rho^*(\lambda^{*s} - u^*)^2$.
This allows us to write the $\tilde{\chi}^s$ coefficients needed in Eq. (\ref{eq:UCT_HLLD_ad}) as
\begin{equation}
\tilde{\chi}^s = \frac{(v_x^s - u^*)(\lambda^s - u^*)}
{\lambda^{*s} + \lambda^s - 2u^*} \,.
\end{equation}
\clearpage
\section{Introduction}
Magnetohydrodynamics (MHD) is the basic modeling framework to treat plasmas at the macroscopic level, that is, neglecting kinetic effects and describing the plasma as a single fluid, an approximation commonly used for applications to laboratory, space and astrophysical plasmas.
Magnetic fields and currents created by the moving charges play a fundamental role in the dynamics of the fluid, which is considered to be locally neutral. The set of hydrodynamical (Euler) equations must therefore be supplemented by the magnetic contributions to the global energy and momentum, and by a specific prescription for the evolution of the magnetic field itself: the so-called induction equation, that is, Faraday's law combined with a constitutive relation between the current and the electric field (Ohm's law).
In extending the same numerical techniques employed for the Euler equations to multi-dimensional MHD, a major challenge dwells in preserving the divergence-free constraint of the magnetic field which inherently follows from the curl-type character of Faraday's law.
This is especially true for Godunov-type shock-capturing schemes based on the properties of the hyperbolic set of conservation laws, let us call them \emph{standard upwind procedures}, where spatial partial derivatives do not commute when discontinuities are present and spurious effects (\emph{numerical magnetic monopoles}) arise.
The research in this field, which is crucial for building accurate and robust finite-volume (FV) or finite-difference (FD) shock-capturing numerical codes for computational MHD, started more than twenty years ago and several different methods have been proposed.
Here, for the sake of conciseness, we simply refer the reader to the paper by T{\'o}th (2000) \cite{Toth2000} for a comprehensive discussion and comparison of the early schemes.
Summarizing in brief, among the proposed solution methods are schemes based on the cleaning of the numerical monopoles, either by solving an elliptic (Poisson) equation and removing the monopole contribution from the updated fields \cite{Brackbill_Barnes1980}, or by adding specific source terms and an additional evolution equation for the divergence of $\vec{B}$ itself to preserve the hyperbolic character of the MHD set \cite{Powell_etal1999, Munz_etal2000, Dedner_etal2002}.
Other methods evolve in time the vector potential \cite{Rossmanith2006, Helzel_etal2011} or use different alternatives, such as for the so-called flux distribution schemes \cite{Torrilhon2005}.
Conversely, a radically different strategy is adopted in the so-called constrained transport (CT) methods.
These schemes all rely on the curl-type nature of the induction equation and on its discretization based on Stokes' theorem (rather than on Gauss' one as needed for the equations with the divergence operator), as first realized in pioneering works where the evolution equation for the magnetic field alone was solved \cite{Yee1966, Brecht_etal1981, DeVore1991, Evans_Hawley1988}.
The CT method was later extended to the full MHD system of hyperbolic equations in the context of Godunov-type schemes \cite{Dai_Woodward1998,Ryu_etal1998,Balsara_Spicer1999}.
In FV-CT schemes, magnetic field components are stored as surface integrals at cell interfaces as primary variables to be evolved via the induction equation, while the corresponding fluxes are line-averaged electric field (namely \emph{electromotive force}, emf) components located at cell edges, to recover the discretized version of Stokes' theorem.
By doing so, the solenoidal constraint can be preserved exactly during time evolution.
A major difficulty of the CT formalism is the computation of upwind-stable emf components located at zone-edges \cite{Balsara_Spicer1999}.
This can be achieved either by properly averaging the interface fluxes computed when solving the 1D Riemann problems at zone interfaces, or by using genuine (but much more complex) 2D Riemann solvers computed directly at cell edges \cite{Balsara2010, Balsara2012, Balsara2014a, Balsara_Dumbser2015}.
In the latter case the dissipative part of the multidimensional emf can be shown to behave as a proper resistive term for the induction equation \cite{Balsara_Nkonga2017}.
As far as the former (simpler) case is concerned, in the original work by \cite{Balsara_Spicer1999} the emf was obtained as the arithmetic average of the four upwind fluxes nearest to the zone edge.
It was then recognized (e.g. \cite{Gardiner_Stone2005}) that this approach has insufficient numerical dissipation and that it does not reduce to the plane-parallel algorithm for grid-aligned flows.
Gardiner \& Stone (2005) \cite{Gardiner_Stone2005} suggested that this issue could be solved by doubling the dissipation and introduced a recipe to construct a stable and non-oscillatory upwind emf with optimal numerical dissipation based on the direction of the contact mode.
This approach (here referred to as the CT-Contact method) is, however, mainly supported by empirical results as there is no formal justification that the emf derivative should obey such selection rule.
In addition, the method can be at most $2^{\rm nd}$-order accurate thus making the generalization to higher-order methods not feasible.
A rigorous approach to this problem for both FV and FD Godunov-type schemes for computational MHD was originally proposed by Londrillo and Del Zanna (2004) \cite{Londrillo_DelZanna2004} with their \emph{upwind constrained transport} (UCT) method.
According to the UCT methodology, the continuity property of the magnetic field components at cell interfaces (which follows from the solenoidal constraint) is considered as a \emph{built-in} condition in a numerical scheme, enabling face-centered fields to be evolved as primary variables.
At the same time, staggered magnetic field components enter as single-state variables in the fluid fluxes at the corresponding cell interfaces, and as two-state reconstructed values at cell edges in the four-state emf for the induction equation.
Time-splitting techniques should be avoided as they prevent exact cancellation of $\nabla\cdot\vec{B}$ terms at the numerical level.
The emf components constructed using information from the four neighboring upwind states must also automatically reduce to the correct 1D numerical fluxes for plane parallel flows and discontinuities aligned with the grid directions.
According to the authors, these are the necessary conditions to preserve the divergence-free condition and to avoid the occurrence of numerical monopoles that may arise while computing the divergence of fluid-like fluxes numerically.
In the original work of \cite{Londrillo_DelZanna2004}, a second-order FV scheme based on Roe-type MHD solver (UCT-Roe) and a high-order scheme based on characteristic-free reconstruction and a two-wave HLL approximate Riemann solver (UCT-HLL) were proposed.
The UCT-HLL scheme was further simplified by Del Zanna et al. (2007) \cite{DelZanna_etal2007} in the context of general relativistic MHD,
and recipes were given to build FD UCT schemes of arbitrary order of accuracy by testing several reconstruction methods.
High-order FD-CT methods were also recently proposed by \cite{Minoshima_etal2019} who, instead, constructed the emf by simply doubling the amount of numerical dissipation.
While FD approaches are based on a point value representation of primary variables and avoid multi-dimensional reconstructions, FV-CT schemes of higher than $2^{\rm nd}$-order accuracy are much more arduous to construct albeit they are likely to increase robustness, see the review by Balsara \cite{Balsara_Review2017} and references therein.
Efforts in this direction were taken by Balsara (2009) \cite{Balsara2009, Balsara_etal2009} in the context of ADER-WENO FV schemes who designed genuinely third- and fourth-order spatially accurate numerical methods for MHD.
More recently, fourth-order FV schemes using high-order quadrature rules evaluated on cell interfaces have been proposed by Felker \& Stone (2018) \cite{Felker_Stone2018} and Verma (2019) \cite{Verma_etal2019}.
Here the construction of the higher-order emf follows the general guidelines of the UCT-HLL (or Lax-Friedrichs) approach introduced by \cite{DelZanna_etal2007} and later resumed for truly 2D Riemann problem by \cite{Balsara2010}.
The goal of the present work is to systematically construct UCT schemes for classical MHD, using a variety of 1D Riemann solvers that avoid the full spectral decomposition in characteristic waves, and providing the correct averaging recipes to build the four-state emf fluxes at zone edges. This extends the scheme by \cite{DelZanna_etal2007} to less dissipative solvers like HLLD \cite{Miyoshi_Kusano2005}, where the Riemann fan is split into four intermediate states so as to include also the contact and Alfv\'enic contributions, other than simply the fast magnetosonic ones.
This novel UCT scheme is tested by performing several multi-dimensional numerical tests, and comparison is also made with other CT popular schemes based on emf averaging, including the simple arithmetic averaging method, those based on doubling the diffusive contribution of 1D fluxes, as in \cite{Gardiner_Stone2005}, the above mentioned UCT-HLL one, and a novel UCT version of the GFORCE scheme by \cite{Toro_Titarev2006}.
Our paper is structured as follows.
In \S\ref{sec:notations} we introduce the basic CT discretization and general notations while in \S\ref{sec:emf_averaging} we review basic averaging CT schemes.
The UCT method and the original Roe and HLL schemes are discussed in \S\ref{UCT}, while the new UCT-based composition formulae are presented in \S\ref{sec:composition}. Numerical benchmarks are introduced in \S\ref{sec:numerical_benchmarks} and a final summary is reported in \S\ref{sec:summary}.
\section{CT schemes based on emf averaging}
\label{sec:emf_averaging}
The set of conserved variables $U_{ {\boldsymbol{c}} }$ may be extended to include the zone-centered representation of the magnetic field components as well, usually provided by a simple spatial average of the two neighboring face values along the relevant direction at the beginning of each timestep.
The solution to the full Riemann problem (8 equations and variables for 3D MHD) thus provides point-value upwind fluxes for the zone-centered magnetic field as well.
Indeed, indicating with a square bracket the flux component we make the formal correspondences
\begin{equation}\label{eq:edge_flux}
\begin{array}{lll}
F_x^{[B_x]} = 0 \,,\; &
F_y^{[B_x]} = E_z \,,\; &
F_z^{[B_x]} = -E_y \,,
\\ \noalign{\medskip}
F_x^{[B_y]} = -E_z \,,\; &
F_y^{[B_y]} = 0 \,,\; &
F_z^{[B_y]} = E_x \,,
\\ \noalign{\medskip}
F_x^{[B_z]} = E_y \,,\; &
F_y^{[B_z]} = -E_x \,,\; &
F_z^{[B_z]} = 0 \,,
\end{array}
\end{equation}
where $F_x = \hvec{e}_x\cdot\tens{F}$ and so forth.
This formal analogy holds for the upwind fluxes $\hat{F}$ as well.
The electromotive force at cell edges can thus be obtained by taking advantage of the upwind information already at disposal during the 1D Riemann solver, thus avoiding more complex 2D Riemann problems (for a detailed discussion see the review by Balsara \cite{Balsara_Review2017} and also \cite{Balsara2009} for extensions to higher orders).
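The correspondences in Eq. (\ref{eq:edge_flux}) follow from the induction-equation flux tensor $F_i^{[B_j]} = v_iB_j - B_iv_j$ together with the ideal Ohm's law $\vec{E} = -\vec{v}\times\vec{B}$. A minimal numerical sketch (with randomly drawn $\vec{v}$ and $\vec{B}$) verifies them entry by entry:

```python
import numpy as np

rng = np.random.default_rng(0)
v, B = rng.standard_normal(3), rng.standard_normal(3)

# Induction-equation flux tensor: F_i^{[B_j]} = v_i B_j - B_i v_j
F = np.outer(v, B) - np.outer(B, v)
E = -np.cross(v, B)            # ideal-MHD electric field E = -v x B

x, y, z = 0, 1, 2
assert np.isclose(F[x][x], 0) and np.isclose(F[y][y], 0) and np.isclose(F[z][z], 0)
assert np.isclose(F[y][x],  E[z]) and np.isclose(F[z][x], -E[y])   # fluxes of B_x
assert np.isclose(F[x][y], -E[z]) and np.isclose(F[z][y],  E[x])   # fluxes of B_y
assert np.isclose(F[x][z],  E[y]) and np.isclose(F[y][z], -E[x])   # fluxes of B_z
```

The antisymmetry of the tensor is what makes the diagonal entries vanish identically.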
To ease up the notations, we consider a top-view of a cell edge representing the intersection of four zones (Fig. \ref{fig:ct}b) and label the left and right states along the $y$-coordinate as south ($S$) and north ($N$).
Similarly, left and right states in the $x$-direction are labeled with west ($W$) and east ($E$) with respect to the intersection point.
We then define the $S$ and $N$ states reconstructed along direction $y$ (see the vertical arrows in the cited figure) and the $W$ and $E$ states reconstructed along direction $x$ (the horizontal arrows) as
\begin{equation}\label{eq:edge_E}
E_z^S = \altmathcal{R}_y^+\left(-F^{[B_y]}_{ {\mathbf{x}_f} }\right) ,\quad
E_z^N = \altmathcal{R}_y^-\left(-F^{[B_y]}_{ {\mathbf{x}_f} + \hvec{e}_y}\right),\quad
E_z^W = \altmathcal{R}_x^+\left( F^{[B_x]}_{ {\mathbf{y}_f} }\right) ,\quad
E_z^E = \altmathcal{R}_x^-\left( F^{[B_x]}_{ {\mathbf{y}_f} + \hvec{e}_x}\right) \,,
\end{equation}
and likewise, for the dissipative part of numerical fluxes we let
\begin{equation}\label{eq:edge_diffE}
\phi_x^S = \altmathcal{R}_y^+\left(\Phi^{[B_y]}_{ {\mathbf{x}_f} }\right),\quad
\phi_x^N = \altmathcal{R}_y^-\left(\Phi^{[B_y]}_{ {\mathbf{x}_f} +\hvec{e}_y}\right),\quad
\phi_y^W = \altmathcal{R}_x^+\left(\Phi^{[B_x]}_{ {\mathbf{y}_f} }\right) \,,\quad
\phi_y^E = \altmathcal{R}_x^-\left(\Phi^{[B_x]}_{ {\mathbf{y}_f} +\hvec{e}_x}\right)\,.
\end{equation}
When not directly available, the diffusion terms at cell faces can generically be obtained as the difference between the centered contribution and the numerical flux, that is
\begin{equation}
\Phi^{[B_y]}_{ {\mathbf{x}_f} } = F^{[B_y]}_{ {\mathbf{x}_f} } - \hat{F}^{[B_y]}_{ {\mathbf{x}_f} }\,,\quad
\Phi^{[B_x]}_{ {\mathbf{y}_f} } = F^{[B_x]}_{ {\mathbf{y}_f} } - \hat{F}^{[B_x]}_{ {\mathbf{y}_f} }\,.
\end{equation}
The generalization to the 3D case is easily obtained by cyclic permutations.
Different emf averaging procedures have been proposed for CT schemes, outlined in what follows.
\subsection{Arithmetic averaging}
Arithmetic averaging, initially proposed by \cite{Balsara_Spicer1999}, is probably the simplest CT scheme and is obtained by taking the arithmetic average of the four face-centered upwind fluxes sharing the same zone edge:
\begin{equation}\label{eq:Arithmetic}
\hat{E}_{ {\mathbf{z}_e} } = \frac{1}{4}\left(- \hat{F}^{[B_y]}_{ {\mathbf{x}_f} }
- \hat{F}^{[B_y]}_{ {\mathbf{x}_f} +\hvec{e}_y}
+ \hat{F}^{[B_x]}_{ {\mathbf{y}_f} }
+ \hat{F}^{[B_x]}_{ {\mathbf{y}_f} +\hvec{e}_x}\right)
\,.
\end{equation}
Despite its simplicity, as pointed out by several authors \cite[see, e.g.][]{Londrillo_DelZanna2000, Gardiner_Stone2005}, this approximation suffers from insufficient dissipation and thus spurious numerical oscillations in several tests, yielding, for 1D plane-parallel flows along the grid Cartesian axes, only half of the correct upwind dissipation.
\subsection{The CT-Contact scheme}
Gardiner \& Stone \cite{Gardiner_Stone2005} suggested that the CT algorithm could be cast as a spatial integration procedure.
The reconstruction can be performed from any one of the four face centers nearest to the zone edge.
Choosing the arithmetic average leads, in our notations, to the following expression for the zone-centered emf:
\begin{equation}\label{eq:UCT_CONTACT}
\hat{E}_{ {\mathbf{z}_e} } = \hat{E}_ {\mathbf{z}_e} ^{\rm arithm}
+\frac{\Delta y}{8}\left[
\left(\pd{E_z}{y}\right)^S
- \left(\pd{E_z}{y}\right)^N \right]
+\frac{\Delta x}{8}\left[
\left(\pd{E_z}{x}\right)^W
- \left(\pd{E_z}{x}\right)^E \right] \,,
\end{equation}
where $\hat{E}_ {\mathbf{z}_e} ^{\rm arithm}$ is the arithmetic average, Eq. (\ref{eq:Arithmetic}).
In the original work by \cite{Gardiner_Stone2005}, various ways for obtaining the derivatives are discussed, although an optimal expression based on the sign of the fluid velocity is suggested.
This gives an upwind-selection rule which is essentially based on the speed of the contact mode leading to stable and non-oscillatory results, yielding
\begin{equation}\label{eq:ct_contact_dEy}
\left(\pd{E_z}{y}\right)^S =
\frac{1 + s_ {\mathbf{x}_f} }{2}\left(\frac{\hat{F}^{[B_x]}_{ {\mathbf{y}_f} } - F^{[B_x]}_{y, {\boldsymbol{c}} }}{\Delta y/2}\right)
+ \frac{1 - s_ {\mathbf{x}_f} }{2}\left(\frac{\hat{F}^{[B_x]}_{ {\mathbf{y}_f} +\hvec{e}_x}
- F^{[B_x]}_{y, {\boldsymbol{c}} +\hvec{e}_x}}{\Delta y/2}\right) \,,
\end{equation}
where $s_ {\mathbf{x}_f} = {\rm sign}(\hat{F}^{[\rho]}_ {\mathbf{x}_f} )$ while $F^{[B_x]}_{y, {\boldsymbol{c}} } = (\hvec{e}_y\cdot\tens{F}(U_ {\boldsymbol{c}} ))^{[B_x]}$ is the $B_x$ component of the $y$-directed flux evaluated at the cell center.
Similarly:
\begin{equation}\label{eq:ct_contact_dEx}
\left(\pd{E_z}{x}\right)^W =
\frac{1 + s_ {\mathbf{y}_f} }{2}\left(-\frac{\hat{F}^{[B_y]}_{ {\mathbf{x}_f} } - F^{[B_y]}_{x, {\boldsymbol{c}} }}
{\Delta x/2}\right)
+ \frac{1 - s_ {\mathbf{y}_f} }{2}\left(-\frac{\hat{F}^{[B_y]}_{ {\mathbf{x}_f} +\hvec{e}_y}
- F^{[B_y]}_{x, {\boldsymbol{c}} +\hvec{e}_y}}{\Delta x/2}\right) \,.
\end{equation}
Similar expressions can be obtained at the North (N) and East (E) edges.
Although there is no formal justification that the electric field derivative should obey such a selection rule, this algorithm has been found to yield robust and stable results in practice.
The scheme correctly reduces to the base upwind method for grid-aligned planar flows and, as also pointed out by \cite{Felker_Stone2018}, it is at most second-order accurate in space, given the way the derivatives in Eqs. (\ref{eq:ct_contact_dEy})-(\ref{eq:ct_contact_dEx}) are computed.
We rename here this method as the \emph{CT-Contact} scheme.
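The selection rule in Eqs. (\ref{eq:ct_contact_dEy})-(\ref{eq:ct_contact_dEx}) amounts to a simple switch on the sign of the mass flux; a minimal sketch (with hypothetical derivative values on the two sides) reads:

```python
def upwind_derivative(s, dE_left, dE_right):
    """Contact-mode upwind selection, as in Eq. (ct_contact_dEy):
    s is the sign of the mass flux at the interface."""
    return 0.5 * (1 + s) * dE_left + 0.5 * (1 - s) * dE_right

assert upwind_derivative(+1, 2.0, 5.0) == 2.0   # rightward flow: take the left-side derivative
assert upwind_derivative(-1, 2.0, 5.0) == 5.0   # leftward flow: take the right-side derivative
assert upwind_derivative( 0, 2.0, 5.0) == 3.5   # stagnant contact: simple average
```

For $s=0$ (a stagnant contact mode) the rule smoothly degenerates to the arithmetic mean of the two one-sided derivatives.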
\subsection{The CT-Flux scheme}
From the previous considerations, it appears that a cost-effective and straightforward possibility is to double the weights of the dissipative flux terms.
The edge-centered emf can then be obtained by separately reconstructing the centered flux terms and dissipative contributions from cell faces to the edge, using Eq. (\ref{eq:edge_E}) and (\ref{eq:edge_diffE}).
These contributions are then added with the correct weights so as to reduce to plane-parallel flows for 1D configurations.
This leads to
\begin{equation}\label{eq:CT_Flux}
\hat{E}_{ {\mathbf{z}_e} } = \frac{1}{4}
\left( E_z^N + E_z^S + E_z^E + E_z^W \right)
+ \frac{1}{2}\left(\phi_x^S + \phi_x^N\right)
- \frac{1}{2}\left(\phi_y^W + \phi_y^E\right) \,,
\end{equation}
where the different $E_z$ are the centered contributions obtained using Eq. (\ref{eq:edge_E}).
This approach, named here \emph{CT-Flux}, has been recently adopted by \cite{Minoshima_etal2019} in designing high-order finite difference scheme.
Note that the staggered magnetic fields are not employed when reconstructing from the face to the corner.
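For a grid-aligned 1D flow along $x$ the $y$-sweep dissipation vanishes and all four centered contributions coincide, so Eq. (\ref{eq:CT_Flux}) reduces exactly to the 1D upwind flux, while the arithmetic average of Eq. (\ref{eq:Arithmetic}) retains only half of the dissipation. A minimal sketch with illustrative numbers:

```python
# Toy 1D check: grid-aligned flow, no y-variation (illustrative values).
Ec, phi = 1.3, 0.4          # centered emf and x-sweep dissipation at the edge

# 1D upwind flux for the induction equation: -Fhat^{[By]} = Ec + phi
upwind_1d = Ec + phi

# Arithmetic average, Eq. (Arithmetic): two dissipative x-face fluxes,
# two non-dissipative y-face fluxes (no jump in y)
arith = 0.25 * ((Ec + phi) + (Ec + phi) + Ec + Ec)

# CT-Flux, Eq. (CT_Flux): centered average plus doubled dissipation weights
ct_flux = 0.25 * (Ec + Ec + Ec + Ec) + 0.5 * (phi + phi) - 0.5 * (0.0 + 0.0)

assert abs(ct_flux - upwind_1d) < 1e-14          # reduces to the 1D upwind flux
assert abs(arith - (Ec + 0.5 * phi)) < 1e-14     # only half the dissipation survives
```

This reproduces, in the simplest possible setting, the deficiency of the arithmetic average noted above and the motivation for doubling the dissipative weights.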
\section{The upwind constrained transport (UCT) method: original framework}
\label{UCT}
The CT discretization scheme outlined so far allows one to preserve exactly the solenoidal condition for the magnetic field, but this is not enough to avoid the presence of spurious magnetic monopole terms when computing the magnetic forces, if fluxes are calculated by using reconstructed values of the magnetic field components as for the other fluid variables.
As first realized in \cite{Londrillo_DelZanna2000} and systematically demonstrated in \cite{Londrillo_DelZanna2004}, the only way to properly take into account the specific smoothness properties of the divergence-free $\vec{B}$ vector in Godunov-type schemes for MHD is to follow these guidelines:
\begin{enumerate}
\item
magnetic field components do not possess a left and right representation at
the cell interface along the corresponding direction
(upwind reconstruction is not needed).
Hence a CT staggering is their appropriate discretization as primary variables,
and the time evolution must be performed for these magnetic field components
at their staggered locations;
\item
only staggered field components must be present in the definition of the fluid
numerical fluxes in the corresponding inter-cell positions, in order to avoid the
formation of magnetic monopoles;
\item
a four-state Riemann solver for the induction equation is needed,
as explained below;
\item
time integration must avoid time-splitting techniques.
\end{enumerate}
\subsection{The UCT-Roe scheme}
With the Roe formalism, the solution to the Riemann problem at zone interfaces is obtained from independent 1D matrices describing characteristic modes propagating as planar waves \cite{Londrillo_DelZanna2004}.
The flux components entering the induction equation can be expressed through a linear combination of 1-D upwind fluxes along the intersecting direction.
It is a main feature of the UCT method that this combination follows a proper upwind selection rule, since the same flux component at the same collocation point turns out to have two independent representations in terms of characteristic wave fans \cite{Londrillo_DelZanna2004}:
\begin{equation}\label{eq:UCT_Roe1}
\hat{E}_{ {\mathbf{z}_e} } = \frac{1}{4}\left( E_z^{SW} + E_z^{SE}
+ E_z^{NE} + E_z^{NW}\right)
+\frac{1}{2}\left(\phi_x^S + \phi_x^N\right)
-\frac{1}{2}\left(\phi_y^W + \phi_y^E\right) \,,
\end{equation}
where the different $E_z$ at the intersection point are obtained by separately reconstructing the velocity along the $x$- and $y$-directions from the nearest zone center, and the staggered fields along the transverse direction from the adjacent interface, e.g., $E^{SW}_z = -v^{SW}_xB^W_y + v^{SW}_yB^S_x$.
The expression above clearly shows that the centered and dissipative terms are represented as a four-state function and two-point average in the orthogonal coordinate.
Alternatively, one may also use
\begin{equation}\label{eq:UCT_Roe2}
\hat{E}_{ {\mathbf{z}_e} }
= -\frac{1}{2}\left[(\overline{v}_xB_y)^W + (\overline{v}_xB_y)^E\right]
+\frac{1}{2}\left[(\overline{v}_yB_x)^S + (\overline{v}_yB_x)^N\right]
+\frac{1}{2}\left(\phi_x^S + \phi_x^N\right)
-\frac{1}{2}\left(\phi_y^W + \phi_y^E\right) \,,
\end{equation}
where, e.g., $\overline{v}_x = (v^L_x + v^R_x)_ {\mathbf{y}_f} /2$ while $\overline{v}_y = (v^L_y + v^R_y)_ {\mathbf{x}_f} /2$.
For multidimensional FV schemes solving the Riemann problem at the cell corners, these terms are already at disposal at desired location.
For Godunov-type schemes relying on face-centered flux computation, the evaluation of the Roe dissipative terms can become a rather consuming task since, in the UCT formalism, both zone-centered hydrodynamical variables \emph{and} staggered magnetic fields need to be reconstructed toward the cell edge.
This turns out to be more costly than interpolating just the dissipative terms from the interfaces using Eq. (\ref{eq:edge_diffE}).
In the UCT-Roe scheme, the dissipative terms are obtained as, e.g.,
\begin{equation}
\phi^S_x = \frac{1}{2}\sum_\kappa |{\lambda_\kappa}|
(L_\kappa\cdot\Delta U)^S R^{S}_\kappa \,.
\end{equation}
The eigenvector matrices should be computed by properly averaging the adjacent L/R reconstructed states whereas jumps of conserved variables are split into a hydrodynamic ($U_h$) and magnetic part as $\Delta U^S = \{U_h^{SE} - U_h^{SW},\, B_y^E - B_y^W\}$.
While the Roe solver preserves all stationary wave families, it is also prone to numerical pathologies.
In this respect, the HLLI Riemann solver of \cite{Dumbser_Balsara2016} (and its multidimensional extension of \cite{Balsara_Nkonga2017}) offers an interesting alternative to overcome these problems.
This will be addressed in a forthcoming paper.
\subsection{The UCT-HLL scheme}
In the case of component-wise Riemann solvers, only magnetic field and velocity components are required at the cell edge (or a combination of them), thus reducing the amount of transverse reconstructions.
An attractive choice is the UCT-HLL scheme of \cite{Londrillo_DelZanna2004} in which the edge-centered electric field evaluates to
\begin{equation}\label{eq:UCT_HLL1}
\hat{E}_{ {\mathbf{z}_e} } = \frac{ \alpha_x^+\alpha_y^+ E_z^{SW}
+ \alpha_x^+\alpha_y^- E_z^{NW}
+ \alpha_x^-\alpha_y^+ E_z^{SE}
+ \alpha_x^-\alpha_y^- E_z^{NE}}
{(\alpha_x^+ + \alpha_x^-)(\alpha_y^+ + \alpha_y^-)}
+ \frac{\alpha_x^+\alpha_x^-}{\alpha_x^+ + \alpha_x^-}
\left(B_y^E - B_y^W\right)
- \frac{\alpha_y^+\alpha_y^-}{\alpha_y^+ + \alpha_y^-}
\left(B_x^N - B_x^S\right) \,,
\end{equation}
where $\alpha_x^+$ is computed through some averaging procedure between the north and south faces.
Here we adopt $\alpha^+_x = \max(0, \lambda^R_{ {\mathbf{x}_f} }, \lambda^R_{ {\mathbf{x}_f} +\hvec{e}_y})$,
$\alpha^-_x = -\min(0, \lambda^L_{ {\mathbf{x}_f} }, \lambda^L_{ {\mathbf{x}_f} +\hvec{e}_y})$.
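The four-state average of Eq. (\ref{eq:UCT_HLL1}) can be transcribed directly; the sketch below uses our own function and argument names, and the $\alpha^\pm$ coefficients are assumed such that the denominators are nonzero:

```python
def uct_hll_emf(Ez_SW, Ez_SE, Ez_NW, Ez_NE,
                ax_p, ax_m, ay_p, ay_m,
                By_E, By_W, Bx_N, Bx_S):
    """UCT-HLL edge emf: alpha-weighted four-state average plus
    HLL-type dissipation acting on the transverse magnetic field jumps."""
    avg = ((ax_p * ay_p * Ez_SW + ax_p * ay_m * Ez_NW
            + ax_m * ay_p * Ez_SE + ax_m * ay_m * Ez_NE)
           / ((ax_p + ax_m) * (ay_p + ay_m)))
    diss = (ax_p * ax_m / (ax_p + ax_m) * (By_E - By_W)
            - ay_p * ay_m / (ay_p + ay_m) * (Bx_N - Bx_S))
    return avg + diss
```

When the four corner states coincide and the transverse field jumps vanish, the emf reduces to the common corner value, as it should for smooth flow.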
A variant of this scheme, proposed in \cite{DelZanna_etal2007} and more economical in terms of storage and computations, employs the upwind transverse velocities in a way entirely analogous to Eq. (\ref{eq:UCT_HLL1}):
\begin{equation}\label{eq:UCT_HLL2}
\hat{E}_{ {\mathbf{z}_e} } = -\frac{ \alpha_x^+ (\overline{v}_xB_y)^W
+ \alpha_x^- (\overline{v}_xB_y)^E
- \alpha_x^+\alpha_x^-(B_y^E - B_y^W)}
{\alpha_x^+ + \alpha_x^-}
+\frac{ \alpha_y^+ (\overline{v}_yB_x)^S
+ \alpha_y^- (\overline{v}_yB_x)^N
-\alpha_y^+\alpha_y^- (B_x^N - B_x^S)}
{\alpha_y^+ + \alpha_y^-} \,,
\end{equation}
where, e.g., $(\overline{v}_xB_y)^W = \overline{v}_x^W\,B_y^W$ while the upwind transverse velocities at an $x$-interface are first computed as
\begin{equation}\label{eq:vt}
\overline{v}_{t, {\mathbf{x}_f} } = \frac{ \alpha_x^+ \vec{v}^L_ {\mathbf{x}_f}
+ \alpha_x^- \vec{v}^R_ {\mathbf{x}_f} }
{\alpha_x^+ + \alpha_x^-}\cdot\hvec{e}_t \,,
\end{equation}
for $t=y,z$ and then properly reconstructed in the transverse directions.
The previous formalism may be further developed and extended to other Riemann solvers as well.
This is discussed in the next section.
\section{The UCT method: a novel generalized composition formula for Jacobian-free Riemann solvers}
\label{sec:composition}
We now present a general formalism for constructing emf averaging schemes with built-in upwind dissipation properties using component-wise Riemann solvers, that is, solvers that do not directly employ characteristic information.
Our starting point is the definition of the inter-cell numerical flux function for which we assume the approximate form given by Eq. (\ref{eq:centered+dissipative}).
We shall also assume that the dissipative terms of the induction system can be expressed as linear combinations of the left and right transverse magnetic field components alone and that, by suitable manipulation, the induction fluxes can be arranged as
\begin{equation}\label{eq:RoeForm_Induction}
\hat{F}^{[B_t]}_{ {\mathbf{x}_f} } = a^L_xF^L_{ {\mathbf{x}_f} } + a^R_xF^R_{ {\mathbf{x}_f} } - (d^R_xB^R_t - d^L_xB^L_t) \,,
\end{equation}
where $F = v_xB_t - v_tB_x$ is the induction flux, $t=y,z$ labels a transverse component at an $x$-interface, while $a_x^L + a_x^R = 1$.
Analogous expressions are obtained at $y$- or $z$-interfaces.
The precise form of the coefficients $a^s_x$ and $d^s_x$ where $s=L,R$ depends, of course, on the chosen Riemann solver.
Consider, for instance, the Rusanov (local Lax-Friedrichs) solver; in this case one has the simple expressions
\begin{equation}\label{eq:LFcoeffs}
a^L_x = a^R_x = \frac{1}{2} \,,\qquad
d^L_x = d^R_x = \frac{|\lambda^{\max}_ {\mathbf{x}_f} |}{2} \,,
\end{equation}
where $|\lambda^{\max}_ {\mathbf{x}_f} |$ is the largest characteristic speed (in absolute value) computed from the L/R states at the interface, e.g., $|\lambda^{\max}_ {\mathbf{x}_f} | = \max(|\lambda^L_ {\mathbf{x}_f} |, |\lambda^R_ {\mathbf{x}_f} |)$.
Likewise, the HLL solver (Eq. \ref{eq:HLLFlux}) can be rewritten in the form given by Eq. (\ref{eq:RoeForm_Induction}) with coefficients
\begin{equation}\label{eq:HLLcoeffs}
a^L_x = \frac{\alpha_x^R}{\alpha^R_x + \alpha^L_x}
= \frac{1}{2} + \frac{1}{2}\frac{|\lambda^R_ {\mathbf{x}_f} | - |\lambda^L_ {\mathbf{x}_f} |}
{\lambda^R_ {\mathbf{x}_f} - \lambda^L_ {\mathbf{x}_f} }
\,,\quad
a^R_x = \frac{\alpha_x^L}{\alpha^R_x +\alpha^L_x}
= \frac{1}{2} - \frac{1}{2}\frac{|\lambda^R_ {\mathbf{x}_f} | - |\lambda^L_ {\mathbf{x}_f} |}
{\lambda^R_ {\mathbf{x}_f} - \lambda^L_ {\mathbf{x}_f} }
\,,\quad
d^L_x = d^R_x = \frac{\alpha^R_x\alpha^L_x}{\alpha^R_x + \alpha^L_x} \,,
\end{equation}
where $\alpha_x^{L,R}$ are given after Eq. (\ref{eq:HLLFlux}).
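To make the roles of the $a^s_x$ and $d^s_x$ coefficients concrete, here is a minimal Python sketch of the Rusanov and HLL coefficient sets of Eqs. (\ref{eq:LFcoeffs}) and (\ref{eq:HLLcoeffs}); function names are ours, and we assume $\lambda^L < 0 < \lambda^R$ so that the HLL denominator is nonzero:

```python
def lf_coeffs(lmbL, lmbR):
    """Rusanov (Lax-Friedrichs) coefficients of Eq. (LFcoeffs)."""
    d = 0.5 * max(abs(lmbL), abs(lmbR))
    return 0.5, 0.5, d, d                      # aL, aR, dL, dR

def hll_coeffs(lmbL, lmbR):
    """HLL coefficients of Eq. (HLLcoeffs), with
    alpha^R = max(0, lmbR) and alpha^L = -min(0, lmbL)."""
    aR_speed = max(0.0, lmbR)
    aL_speed = -min(0.0, lmbL)
    den = aR_speed + aL_speed                  # assumed nonzero
    aL = aR_speed / den
    aR = aL_speed / den
    d = aR_speed * aL_speed / den
    return aL, aR, d, d                        # aL, aR, dL, dR
```

In both cases $a^L + a^R = 1$, as required for consistency with the centered flux.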
With these assumptions, the edge-centered emf with the desired upwind properties can be constructed from (\ref{eq:RoeForm_Induction}) as
\begin{equation}\label{eq:emf2D}
\hat{E}_ {\mathbf{z}_e} =
-\left[(a_x\overline{v}_xB_y)^W + (a_x\overline{v}_xB_y)^E\right]
+\left[(a_y\overline{v}_yB_x)^N + (a_y\overline{v}_yB_x)^S\right]
+ \left[(d_xB_{y})^E - (d_xB_{y})^W \right]
- \left[(d_yB_{x})^N - (d_yB_{x})^S \right] \,,
\end{equation}
where the transverse velocities are reconstructed from the interface values given (unless otherwise stated) by Eq. (\ref{eq:vt}), whereas the flux and diffusion coefficients $a_x^{W,E}$ and $d_x^{W,E}$ are computed by combining the corresponding expressions obtained at $x$-interfaces with a 1D Riemann solver, e.g.,
\begin{equation}\label{eq:dEW}
d^W_x = \frac{d^L_{ {\mathbf{x}_f} } + d^L_{ {\mathbf{x}_f} +\hvec{e}_y}}{2} \,,\quad
d^E_x = \frac{d^R_{ {\mathbf{x}_f} } + d^R_{ {\mathbf{x}_f} +\hvec{e}_y}}{2} \,.
\end{equation}
Similarly, we obtain $d_y^{N,S}$ by averaging the diffusion coefficients at $y$-faces:
\begin{equation}\label{eq:dSN}
d^S_y = \frac{d^L_{ {\mathbf{y}_f} } + d^L_{ {\mathbf{y}_f} +\hvec{e}_x}}{2} \,,\quad
d^N_y = \frac{d^R_{ {\mathbf{y}_f} } + d^R_{ {\mathbf{y}_f} +\hvec{e}_x}}{2} \,.
\end{equation}
Other forms of averaging for these coefficients, based on the upwind direction or on maximizing the diffusion terms, are of course possible.
However, for the present work, we will employ the simple averaging given by Eq. (\ref{eq:dEW}) and (\ref{eq:dSN}) for both the $a$ and $d$ coefficients.
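Once the interface coefficients have been averaged to the edge, the emf assembly of Eq. (\ref{eq:emf2D}) is a single expression; the sketch below (names ours) takes the already-averaged $a$ and $d$ coefficients, the reconstructed transverse velocities and the staggered fields as inputs:

```python
def composed_emf(aW, aE, aS, aN, dW, dE, dS, dN,
                 vxW, vxE, vyS, vyN, ByW, ByE, BxS, BxN):
    """Generalized UCT edge emf (Eq. emf2D): weighted centered terms
    plus dissipation acting on the transverse fields alone."""
    return (-(aW * vxW * ByW + aE * vxE * ByE)   # -a_x vbar_x B_y terms
            + (aS * vyS * BxS + aN * vyN * BxN)  # +a_y vbar_y B_x terms
            + (dE * ByE - dW * ByW)              # x-dissipation
            - (dN * BxN - dS * BxS))             # y-dissipation
```

For a smooth flow with $a = 1/2$ on every side and vanishing $d$, the expression reduces to the centered value $-\overline{v}_xB_y + \overline{v}_yB_x$, i.e., the $z$-component of $-\vec{v}\times\vec{B}$.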
In the next sections we also derive the coefficients for other Riemann solvers, namely the HLLC, HLLD and GFORCE schemes.
From now on we specialize to an $x$-interface and drop the $ {\mathbf{x}_f} $ subscript for ease of notation, as it should be clear from the context.
\subsection{The UCT-HLLC scheme}
The HLLC solver (see \cite{TSS1994} for the original formulation) describes the Riemann fan in terms of three waves consisting of two outermost fast modes separated by a middle contact wave.
Extensions to ideal MHD have been developed by Gurski \cite{Gurski2004} and Li \cite{Li2005}.
Both formulations, however, fail to satisfy exactly the full set of integral relations across the Riemann fan\footnote{For a detailed mathematical analysis, see section 3.3 of \cite{Mignone_Bodo2006} and the discussion on page 82 in the book by \cite{Mignone_Bodo2008}} and the resulting numerical schemes are prone to instabilities.
A consistent formulation has been presented by Mignone \& Bodo \cite{Mignone_Bodo2008} showing that, for non-zero normal magnetic field ($B_x\ne 0$), the solution must assume continuity of the transverse fields across the middle wave.
The HLLC flux for the induction system can be written in the form (\ref{eq:RoeForm_Induction}) as
\begin{equation}\label{eq:HLLCFlux}
\hat{F} = \frac{1}{2}\Big[F^L + F^R
- |\lambda_L|\left(B_t^{*L} - B_t^L\right)
- |\lambda^*|\left(B_t^{*R} - B_t^{*L}\right)
- |\lambda_R|\left(B_t^R - B_t^{*R}\right) \Big] \,,
\end{equation}
where $F=v_xB_t - v_tB_x$ while $\lambda^*$ is the speed of the contact mode.
Since $B_t^{*L}=B_t^{*R}$ holds across the middle wave, it is easily verified from Eq. (\ref{eq:HLLCFlux}) that this method produces the same coefficients as the HLL solver (Eq. \ref{eq:HLLcoeffs}) and thus an equivalent amount of numerical diffusion.
However, for $B_x=0$, the HLLC solver admits a jump in the transverse magnetic field across the middle wave.
In particular (see also Sec. 4.2 of \cite{Miyoshi_Kusano2005}) it is easy to show that
\begin{equation}
B^{*s} - B^s = B^s\chi^s \,,\qquad{\rm where} \qquad
\chi^s = -\frac{v_x^s - \lambda^*}{\lambda^s - \lambda^*} \,.
\end{equation}
Working out the explicit expressions leads to the following flux and diffusion coefficients,
\begin{equation} \label{eq:HLLCcoeffs}
a^L = a^R = \frac{1}{2}\,,\qquad
d^s = \left(\frac{|\lambda^*|-|\lambda^s|}{2}\right)\chi^s + \frac{|\lambda^*|}{2}
\,.
\end{equation}
As we shall see later, the expressions in this degenerate case will be useful to obtain the correct singular limit in the HLLD solver.
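The $B_x\to 0$ coefficients of Eq. (\ref{eq:HLLCcoeffs}) can be sketched as follows (our names; the expressions assume $\lambda^L < \lambda^* < \lambda^R$ so that the $\chi^s$ denominators do not vanish):

```python
def hllc_coeffs(vxL, vxR, lmbL, lmbR, lmb_c):
    """HLLC flux/diffusion coefficients in the degenerate case Bx -> 0;
    lmb_c is the contact speed lambda^*."""
    def diff_coeff(vx, lmb):
        chi = -(vx - lmb_c) / (lmb - lmb_c)
        return 0.5 * (abs(lmb_c) - abs(lmb)) * chi + 0.5 * abs(lmb_c)
    return 0.5, 0.5, diff_coeff(vxL, lmbL), diff_coeff(vxR, lmbR)
```

Note that for a contact wave at rest ($\lambda^* = v_x^{L,R} = 0$) both diffusion coefficients vanish, so stationary tangential discontinuities are left untouched.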
\subsection{The UCT-HLLD scheme}
The HLLD (see \cite{Miyoshi_Kusano2005} and \cite{Mignone2007} for adiabatic and isothermal MHD, respectively) approximates the Riemann fan with a five-wave pattern that includes two outermost fast shocks propagating with speed $\lambda_L$ and $\lambda_R$, two rotational waves $\lambda^{*L}$ and $\lambda^{*R}$ separated, in the adiabatic case, by a contact wave in the middle moving at speed $\lambda^*$.
Across the contact mode, when $B_x\ne0$, the transverse components of magnetic field $B_t \equiv B_y,\, B_z$ are continuous.
For our purposes, we conveniently rewrite the HLLD flux for the induction system in the form (Eq. \ref{eq:RoeForm_Induction}) as
\begin{equation}\label{eq:hlld1}
\hat{F} = \frac{1}{2}\Big[ F^L + F^R
- \left|\lambda_L\right| \left(B^{*L}_t - B^L_t\right)
- \left|\lambda^{*L}\right|\left(B^{**}_t - B^{*L}_t\right)
- \left|\lambda^{*R}\right|\left(B^{*R}_t - B^{**}_t\right)
- \left|\lambda_R\right| \left(B^R_t - B^{*R}_t\right)\Big]
\end{equation}
where $F=v_xB_t - v_tB_x$ and we have assumed $B^{**R}_t = B^{**L}_t = B^{**}_t$.
The rotational modes are given by
\begin{equation}\label{eq:hlld_lambda*}
\lambda^{*L} = \lambda^* - \frac{|B_x|}{\sqrt{\rho^{*L}}} \,,\quad
\lambda^{*R} = \lambda^* + \frac{|B_x|}{\sqrt{\rho^{*R}}} \,,\quad
\end{equation}
where $\rho^{*s} = \rho^s(\lambda^s - v_x^s)/(\lambda^s - \lambda^*)$ and $\lambda^* = m_x^{\rm hll}/\rho^{\rm hll}$.
Here the suffix \quotes{${\rm hll}$} labels a specific component of the HLL intermediate state, obtained from the integral form of the Riemann fan:
\begin{equation}\label{eq:Uhll}
U^{\rm hll} = \frac{\lambda^R U^R - \lambda^L U^L + F^L - F^R}
{\lambda^R - \lambda^L} \,.
\end{equation}
From Eqs. 45 and 47 of Miyoshi \& Kusano \cite{Miyoshi_Kusano2005}, together with the expressions above, we rewrite the jumps of $B_t$ across the outermost fast waves ($s=L,R$) as
\begin{equation}\label{eq:By_chi}
B^{*s}_{t} - B^s_t = B^s_t\chi^s\,,
\qquad{\rm where}\qquad
\chi^s = \frac{(v^s_{x}- \lambda^*)(\lambda^s - \lambda^*)}
{(\lambda^{*s} - \lambda^s)(\lambda^{*s} + \lambda^s - 2\lambda^*)}
\,.
\end{equation}
The state in the $**$ region can be identified with the HLL average (Eq. \ref{eq:Uhll}) beyond the Alfv\'en modes:
\begin{equation}\label{eq:**}
B^{**}_t = \frac{\lambda^{*R}B^{*R}_t - \lambda^{*L}B^{*L}_t
+ F^{*L} - F^{*R}}
{\lambda^{*R} - \lambda^{*L}} \,,
\end{equation}
where $F^{*s}$ can be replaced with the jump conditions across the fast waves, i.e., $F^{*s} = F^s + \lambda^s(B^{*s}_t - B^s_t)$.
The dissipative terms in the HLLD flux (\ref{eq:hlld1}) can now be expressed as a linear combination of $B^s_t$ alone.
After some tedious but otherwise straightforward algebra, one finds that the coefficients needed in Eq. (\ref{eq:RoeForm_Induction}) can be written in the form
\begin{equation}\label{eq:UCT_HLLD_ad}
a^L = \frac{1 + \nu^*}{2}\,,\quad
a^R = \frac{1 - \nu^*}{2}\,,\quad
d^s = \frac{1}{2}(\nu^s - \nu^*)\tilde{\chi}^s
+ \frac{1}{2}\left(|\lambda^{*s}| - \nu^*\lambda^{*s}\right) \,,
\end{equation}
where $\tilde{\chi}^s = (\lambda^{*s} - \lambda^s)\chi^s$, while
\begin{equation}\label{eq:UCT_HLLD_nu}
\nu^s = \frac{|\lambda^{*s}| - |\lambda^s|}{\lambda^{*s} - \lambda^s}
= \frac{\lambda^{*s} +\lambda^s}{|\lambda^{*s}| + |\lambda^s|} \,,\qquad
\nu^* = \frac{|\lambda^{*R}| - |\lambda^{*L}|}{\lambda^{*R} - \lambda^{*L}}
= \frac{\lambda^{*R} + \lambda^{*L}}{|\lambda^{*R}| + |\lambda^{*L}|}\,.
\end{equation}
Note that the diffusion coefficients defined in Eq. (\ref{eq:UCT_HLLD_ad}) are well-behaved when $\lambda^{*s}\to\lambda^s$ which typically occurs in the limit of zero tangential field and $B^2_x\gtrsim \Gamma p$.
In this limit, in fact, $\tilde{\chi}^s \to (v_x^s - \lambda^*)/2$ and $\nu^s=\pm1$.
On the other hand, particular care must be given to the degenerate case $B_x\to 0$, in which the two rotational waves collapse onto the entropy mode: $\lambda^{*R},\lambda^{*L}\to\lambda^*$.
In this situation, $\tilde{\chi}^s \to (v_x^s - \lambda^*)$ remains regular but the coefficient $\nu^*$ given in Eq. (\ref{eq:UCT_HLLD_nu}) becomes ill-defined at stagnation points ($\nu^*\sim v_x/|v_x|$) and one should rather resort to a three-wave pattern in which the tangential field is discontinuous across $\lambda^*$.
This limit is embodied by the HLLC solver, Eq. (\ref{eq:HLLCcoeffs}), and can be recovered by setting $\nu^*=0$ in the expressions above.
In practice we switch to the degenerate case whenever the difference between the two rotational modes falls below a given tolerance:
\begin{equation}
\nu^* = \left\{\begin{array}{ll}
\displaystyle \frac{|\lambda^{*R}| - |\lambda^{*L}|}{\lambda^{*R} - \lambda^{*L}}
& \quad {\rm if} \;\; |\lambda^{*R} - \lambda^{*L}| >
\epsilon|\lambda^R-\lambda^L| \,,
\\ \noalign{\medskip}
0 & \quad {\rm otherwise} \,.
\end{array}\right.
\end{equation}
with $\epsilon = 10^{-9}$.
This concludes the derivation of the UCT-HLLD averaging scheme for adiabatic MHD.
A couple of remarks are worth making.
First, if the base Riemann solver is not the HLLD, the proposed emf averaging scheme can still be employed, provided that, for consistency, the contact and Alfv\'en velocities are locally redefined using Eq. (\ref{eq:hlld_lambda*}) and the expressions that follow.
Second, in the case of isothermal MHD our derivation still holds although the $\chi^s$ coefficients are different.
This case is discussed in \ref{app:UCT_HLLD_isothermal}.
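The degeneracy guard on $\nu^*$ described above translates directly into code; a minimal sketch (names ours), with the tolerance $\epsilon = 10^{-9}$ quoted in the text:

```python
EPS = 1.0e-9

def hlld_nu_star(lmb_aL, lmb_aR, lmbL, lmbR):
    """nu^* of Eq. (UCT_HLLD_nu), falling back to the HLLC value
    nu^* = 0 when the rotational modes lmb_aL, lmb_aR nearly
    coincide (the Bx -> 0 degenerate limit)."""
    if abs(lmb_aR - lmb_aL) > EPS * abs(lmbR - lmbL):
        return (abs(lmb_aR) - abs(lmb_aL)) / (lmb_aR - lmb_aL)
    return 0.0
```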
\subsection{The UCT-GFORCE scheme}
In its original formulation \cite{Toro_Titarev2006}, the Generalized First ORder CEntered (GFORCE) flux is obtained from a weighted average of the Lax-Friedrichs (LF) and Lax-Wendroff (LW) fluxes,
\begin{equation}\label{eq:GFORCE1D}
\hat{F} = \omega_g F^{\rm LW} + (1 - \omega_g)F^{\rm LF} \,,
\end{equation}
where $F^{\rm LW} = F(U^{\rm LW})$ and
\begin{equation}\label{eq:GFORCE_Fluxes}
U^{\rm LW} = \frac{U^L + U^R}{2}
- \frac{\tau}{2}(F^R - F^L) \,,\qquad
F^{\rm LF} = \frac{F^L + F^R}{2}
- \frac{1}{2\tau}\left(U^R - U^L\right) \,,
\end{equation}
are, respectively, the Lax-Wendroff state and the Lax-Friedrichs flux.
In Eq. (\ref{eq:GFORCE1D}), $\omega_g \in [0,1]$ is a weight coefficient usually chosen to satisfy monotonicity requirements.
This yields, according to \cite{Toro_Titarev2006},
\begin{equation}
\omega_g \le \frac{1}{1 + c_g} \,,
\end{equation}
where $0\le c_g\le 1$ is the Courant number.
The FORCE flux is recovered with $\omega_g = 1/2$ and is precisely the arithmetic mean of the Lax-Friedrichs and Lax-Wendroff fluxes.
In the original formulation $\tau=\Delta t/\Delta x$; here, in order to minimize the amount of numerical dissipation, we instead choose $\tau$ as the inverse of the local maximum signal speed, $\tau = 1/|\lambda_{\max}|$.
Specializing to a magnetic flux component, we re-write the GFORCE flux at a zone interface as
\begin{equation}\label{eq:GFORCE_Bt}
\hat{F}_x = -\overline{v}_tB_x
+ \frac{1}{2}B_t^L\left[\omega_g v_x^{\rm LW} + (1-\omega_g) v_x^L\right]
+ \frac{1}{2}B_t^R\left[\omega_g v_x^{\rm LW} + (1-\omega_g) v_x^R\right]
- (d^RB^R_t - d^LB_t^L) \,,
\end{equation}
which now closely relates to the form given by Eq. (\ref{eq:RoeForm_Induction}) with $a^L_x = a^R_x = 1/2$, provided that we redefine the transverse velocities as
\begin{equation}\label{eq:UCT_GFORCE_vt}
\overline{v}_t =
\omega_g\left(v_t^{\rm LW} - \frac{\tau_x}{2}\Delta v_t\,v_x^{\rm LW}\right)
+ (1-\omega_g) \left(\frac{v_t^L + v_t^R}{2}\right)
\end{equation}
and the diffusion coefficients as
\begin{equation}\label{eq:UCT_GFORCE_dLR}
d^{s} = \omega_g \frac{\tau_x}{2} v_x^{s} v_x^{\rm LW}
+ (1-\omega_g)\frac{|\lambda_x|}{2} \,.
\end{equation}
The Lax-Wendroff velocities can be obtained at zone interfaces as $v_k^{\rm LW} = m_k^{\rm LW}/\rho^{\rm LW}$, where the momentum components and density are calculated using the first of Eqs. (\ref{eq:GFORCE_Fluxes}).
Similar quantities are obtained at $y$- and $z$-interfaces by suitable index permutation.
At the practical level, the UCT-GFORCE scheme is therefore obtained by storing, during the Riemann solver call at any given interface, the transverse velocities $\vec{v}_t$ as well as $d^{s}$ with $s=L,R$.
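For a scalar conservation law, the GFORCE construction of Eqs. (\ref{eq:GFORCE1D})-(\ref{eq:GFORCE_Fluxes}) reads as follows (a sketch with our names; \texttt{flux} is the analytic flux function and $\tau = 1/|\lambda_{\max}|$ as adopted in the text):

```python
def gforce_flux(uL, uR, flux, lmb_max, omega_g):
    """GFORCE flux (Eq. GFORCE1D) for a scalar conservation law:
    weighted average of the Lax-Wendroff and Lax-Friedrichs fluxes."""
    tau = 1.0 / abs(lmb_max)
    FL, FR = flux(uL), flux(uR)
    u_lw = 0.5 * (uL + uR) - 0.5 * tau * (FR - FL)   # LW state
    F_lw = flux(u_lw)                                # LW flux
    F_lf = 0.5 * (FL + FR) - 0.5 * (uR - uL) / tau   # LF flux
    return omega_g * F_lw + (1.0 - omega_g) * F_lf
```

For linear advection $F(u) = au$ with $\tau = 1/|a|$, both the LW and LF fluxes collapse onto the exact upwind flux, so the result is independent of $\omega_g$.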
\section{Notations and general formalism for CT schemes}
\label{sec:notations}
The ideal MHD equations are characterized by two coupled sub-systems, one for the time evolution of the set of \emph{conservative} hydro-like flow variables (mass, momentum and energy densities denoted, respectively, with $\rho$, $\rho\vec{v}$ and $\altmathcal{E}$):
\begin{equation}
\pd{U}{t} + \nabla\cdot\tens{F} = 0,
\qquad{\rm where}\qquad
U = \left(\begin{array}{l}
\rho \\ \noalign{\medskip}
\rho \vec{v} \\ \noalign{\medskip}
\altmathcal{E}
\end{array}\right) \,,\quad
\tens{F} = \left(\begin{array}{c}
\rho \vec{v} \\ \noalign{\medskip}
\rho \vec{v}\vec{v} - \vec{B}\vec{B} + \tens{I}p_t \\ \noalign{\medskip}
(\altmathcal{E} + p_t)\vec{v} - (\vec{v}\cdot\vec{B})\vec{B}
\end{array}\right)^\intercal \,,\quad
\end{equation}
and an induction equation for the evolution of the magnetic field
\begin{equation}\label{eq:induction}
\frac{\partial\vec{B}}{\partial t} + \nabla\times\vec{E}=0,
\end{equation}
where the curl operator appears instead of a divergence. The electric field $\vec{E} = - \vec{v} \times \vec{B}$ is not a primary variable, as it depends on the flow velocity and the magnetic field, but its components have to be regarded as the fluxes for the magnetic field itself.
Here $\vec{v} = (v_x,\, v_y,\, v_z)$ is the fluid velocity vector, $p_t = p+B^2/2$ is the total (thermal + magnetic) pressure, while the total energy density adds up kinetic, thermal and magnetic contributions:
\begin{equation}
\altmathcal{E} = \frac{1}{2}\rho\vec{v}^2 + \frac{p}{\Gamma-1} + \frac{\vec{B}^2}{2} \,,
\end{equation}
where $\Gamma$ is the specific heat ratio for an adiabatic equation of state.
Due to the commutativity of analytical spatial derivatives, the induction equation (\ref{eq:induction}) implicitly contains the solenoidal condition for the magnetic field
\begin{equation}
\nabla\cdot\vec{B} = 0,
\end{equation}
which, if satisfied at $t=0$, must be preserved during the subsequent time evolution.
The peculiar structure of the MHD system and the existence of the above non-evolutionary constraint make it difficult to extend the methods developed for the Euler equations straightforwardly to MHD, especially for upwind schemes, where Riemann solvers have to be modified to account for the curl operator, and where numerical derivatives do not commute, so that spurious magnetic monopoles could arise.
\begin{figure}
\centering
\includegraphics[trim={90 10 50 10}, width=0.5\textwidth]{Slide1.jpg}%
\includegraphics[width=0.55\textwidth]{Slide2.jpg}
\caption{Left: positioning of MHD variables in the CT formalism.
Staggered magnetic field components (blue) are face-centered,
the electromotive force is edge-centered (red) while remaining hydrodynamical
quantities are located at the zone center (green).
Right: Top view of the intersection between four neighbor zones:
N, S, E and W indicate the four cardinal directions with respect to
the zone edge (here represented by the intersection between four
neighbor zones), $\altmathcal{R}_x(F_{ {\mathbf{y}_f} })$ and $\altmathcal{R}_y(F_{ {\mathbf{x}_f} })$ are
1-D reconstruction operators
applied to each zone face while $F$ denotes a generic flux component
computed during the $x-$ ($F\equiv -F^{[B_y]}_{ {\mathbf{x}_f} }$) or
$y-$ sweep ($F\equiv F^{[B_x]}_{ {\mathbf{y}_f} }$), see Eq. (\ref{eq:edge_E}).}
\label{fig:ct}
\end{figure}
Here we adopt a Cartesian coordinate system, with unit vectors $\hvec{e}_x=(1,0,0)$, $\hvec{e}_y=(0,1,0)$ and $\hvec{e}_z=(0,0,1)$, uniformly discretized into a regular mesh with coordinate spacing $\Delta x$, $\Delta y$ and $\Delta z$.
Computational zones (or cells) are centered at $(x_i,\, y_j,\, z_k)$ and delimited by the six interfaces orthogonal to the coordinate axis aligned, respectively, with $(x_{i\pm\frac{1}{2}},\, y_j,\, z_k)$, $(x_i,\, y_{j\pm\frac{1}{2}},\, z_k)$ and $(x_i,\, y_j,\, z_{k\pm\frac{1}{2}})$.
CT-based schemes for MHD are characterized by a hybrid collocation for primary variables, those to be evolved.
While flow variables are zone-centered, here labeled as $U_{ {\boldsymbol{c}} }$ where the $ {\boldsymbol{c}} $ subscript is a shorthand notation for $(i,j,k)$, magnetic fields have a staggered representation and are located at zone interfaces.
Numerical fluxes for flow variables are also collocated at these points, where the Riemann solvers are computed, whereas magnetic fluxes (the electric field components) are computed at zone edges, as shown in Fig. \ref{fig:ct}.
To simplify the notations, from now on these staggered electromagnetic quantities will be indicated as
\begin{equation}
\vec{B}_{ f } \equiv \left( \begin{array}{l}
B_{ {\mathbf{x}_f} } \\ \noalign{\medskip}
B_{ {\mathbf{y}_f} } \\ \noalign{\medskip}
B_{ {\mathbf{z}_f} } \end{array}\right)
= \left( \begin{array}{l}
B_{x,i+\frac{1}{2},j,k} \\ \noalign{\medskip}
B_{y,i,j+\frac{1}{2},k} \\ \noalign{\medskip}
B_{z,i,j,k+\frac{1}{2}} \end{array}\right) \,, \qquad
\vec{E}_{e} \equiv \left( \begin{array}{l}
E_{ {\mathbf{x}_e} } \\ \noalign{\medskip}
E_{ {\mathbf{y}_e} } \\ \noalign{\medskip}
E_{ {\mathbf{z}_e} } \end{array}\right)
= \left( \begin{array}{l}
E_{x,i,j+\frac{1}{2},k+\frac{1}{2}} \\ \noalign{\medskip}
E_{y,i+\frac{1}{2},j,k+\frac{1}{2}} \\ \noalign{\medskip}
E_{z,i+\frac{1}{2},j+\frac{1}{2},k} \end{array}\right) \,,
\end{equation}
where the subscripts $ {\mathbf{x}_f} $, $ {\mathbf{y}_f} $ and $ {\mathbf{z}_f} $ identify the spatial component as well as the face-centered staggered location inside the control volume, i.e., $ {\mathbf{x}_f} \equiv \{x,(i+\frac{1}{2}, j, k)\}$, $ {\mathbf{y}_f} \equiv \{y,(i, j+\frac{1}{2}, k)\}$, and $ {\mathbf{z}_f} \equiv \{z,(i, j, k+\frac{1}{2})\}$.
Likewise, the components and corresponding positions of the different edge-centered electric field components are labeled as $ {\mathbf{x}_e} \equiv \{x,(i, j+\frac{1}{2}, k+\frac{1}{2})\}$, $ {\mathbf{y}_e} \equiv \{y,(i+\frac{1}{2}, j, k+\frac{1}{2})\}$, and $ {\mathbf{z}_e} \equiv \{z,(i+\frac{1}{2}, j+\frac{1}{2}, k)\}$.
This subscript notation extends also to arrays and scalar quantities in general by discarding the spatial component (e.g. \quotes{$x$} or \quotes{$y$}) when unnecessary, e.g., $U_ {\mathbf{x}_f} = U_{i+\frac{1}{2}, j, k}$.
This should not generate confusion as its employment will be clear from the context.
We will also make frequent use of the backward difference operators $\Delta_x$, $\Delta_y$, and $\Delta_z$, defined as
\begin{equation}\label{eq:deltaOp}
\Delta_x Q_ {\boldsymbol{c}} \equiv Q_ {\boldsymbol{c}} - Q_{ {\boldsymbol{c}} -\hvec{e}_x} \,,\quad
\Delta_y Q_ {\boldsymbol{c}} \equiv Q_ {\boldsymbol{c}} - Q_{ {\boldsymbol{c}} -\hvec{e}_y} \,,\quad
\Delta_z Q_ {\boldsymbol{c}} \equiv Q_ {\boldsymbol{c}} - Q_{ {\boldsymbol{c}} -\hvec{e}_z} \,,\quad
\end{equation}
where $Q$ can be any quantity, here shown with the cell-centered representation $Q_{ {\boldsymbol{c}} }$. These $\Delta$ operators can be equivalently applied to face-centered $Q_{ f }$ or edge-centered $Q_{e}$ values.
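For a 1D sequence of values, the backward-difference operator amounts to the following trivial sketch (the function name is ours):

```python
def delta(Q):
    """Backward difference of Eq. (deltaOp) along a single index:
    returns Q[i] - Q[i-1] for i = 1, ..., N-1 (the first entry,
    which has no left neighbor, is omitted)."""
    return [q - qm for qm, q in zip(Q, Q[1:])]
```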
In the context of a finite-volume (FV) approach, conserved variables are evolved in terms of their volume (or zone-) averages $U_{ {\boldsymbol{c}} }$, implying a surface-averaged representation of the fluxes at zone interface, as required by direct application of Gauss' theorem:
\begin{equation}
\begin{array}{l}
\displaystyle \hat{F}_{ {\mathbf{x}_f} } = \frac{1}{\Delta y\Delta z}
\int \hvec{e}_x\cdot\tens{F}\big(U(x_{i+\frac{1}{2}},y,z,t)\big) \,dy\,dz \,,
\\ \noalign{\medskip}
\displaystyle \hat{F}_{ {\mathbf{y}_f} } = \frac{1}{\Delta z\Delta x}
\int \hvec{e}_y\cdot\tens{F}\big(U(x,y_{j+\frac{1}{2}},z,t)\big) \,dz\,dx \,,
\\ \noalign{\medskip}
\displaystyle \hat{F}_{ {\mathbf{z}_f} } = \frac{1}{\Delta x\Delta y}
\int \hvec{e}_z\cdot\tens{F}\big(U(x,y,z_{k+\frac{1}{2}},t)\big) \,dx\,dy \,.
\end{array}
\end{equation}
Conversely, magnetic field components $\vec{B}_{ f }$, having a staggered representation, are also interpreted as face-averages and are updated using a discrete version of Stokes' theorem.
The line-averaged electric field components $\vec{E}_{e}$ effectively behave as electromotive forces (emf), so the numerical fluxes for the magnetic field are commonly referred to as emf components in the literature:
\begin{equation}
\begin{array}{l}
\displaystyle \hat{E}_{ {\mathbf{x}_e} } = \frac{1}{\Delta x}
\int E_x(x,y_{j+\frac{1}{2}},z_{k+\frac{1}{2}},t) \,dx\,,
\\ \noalign{\medskip}
\displaystyle \hat{E}_{ {\mathbf{y}_e} } = \frac{1}{\Delta y}
\int E_y(x_{i+\frac{1}{2}},y,z_{k+\frac{1}{2}},t) \,dy\,,
\\ \noalign{\medskip}
\displaystyle \hat{E}_{ {\mathbf{z}_e} } = \frac{1}{\Delta z}
\int E_z(x_{i+\frac{1}{2}},y_{j+\frac{1}{2}},z,t) \,dz\,.
\end{array}
\end{equation}
The semi-discrete FV version of any CT numerical scheme is then the following:
\begin{equation}
\begin{array}{lcl}
\displaystyle \frac{dU_ {\boldsymbol{c}} }{dt} & = & \displaystyle -
\left( \frac{ \Delta_x \hat{F}_{ {\mathbf{x}_f} } }{ \Delta x }
+ \frac{ \Delta_y \hat{F}_{ {\mathbf{y}_f} } }{ \Delta y }
+ \frac{ \Delta_z \hat{F}_{ {\mathbf{z}_f} } }{ \Delta z }\right) \,,
\\ \noalign{\medskip}
\displaystyle \frac{dB_{ {\mathbf{x}_f} }}{dt} & = & \displaystyle -
\left( \frac{\Delta_{y} \hat{E}_{ {\mathbf{z}_e} }}{\Delta y}
-\frac{\Delta_{z} \hat{E}_{ {\mathbf{y}_e} }}{\Delta z}
\right) \,,
\\ \noalign{\medskip}
\displaystyle \frac{dB_{ {\mathbf{y}_f} }}{dt} & = & \displaystyle -
\left( \frac{\Delta_{z} \hat{E}_{ {\mathbf{x}_e} }}{\Delta z}
-\frac{\Delta_{x} \hat{E}_{ {\mathbf{z}_e} }}{\Delta x}
\right) \,,
\\ \noalign{\medskip}
\displaystyle \frac{dB_{ {\mathbf{z}_f} }}{dt} & = & \displaystyle -
\left( \frac{\Delta_{x} \hat{E}_{ {\mathbf{y}_e} }}{\Delta x}
-\frac{\Delta_{y} \hat{E}_{ {\mathbf{x}_e} }}{\Delta y}
\right)\,.
\end{array}
\end{equation}
Notice that no approximation has been made so far.
The condition
\begin{equation}
\frac{d}{dt} \left(
\frac{ \Delta_x B_{ {\mathbf{x}_f} } }{ \Delta x }
+ \frac{ \Delta_y B_{ {\mathbf{y}_f} } }{ \Delta y }
+ \frac{ \Delta_z B_{ {\mathbf{z}_f} } }{ \Delta z }\right) = 0
\end{equation}
is thus valid exactly, and at any time the discrete version of the solenoidal constraint is ensured \emph{to machine accuracy} (provided it holds for the initial condition).
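This machine-accuracy preservation is easy to verify numerically. The sketch below (pure Python, with our own names) sets up arbitrary staggered fields and an arbitrary edge emf on a small periodic 2D grid, performs one CT update, and checks that the discrete divergence is unchanged to round-off:

```python
import random

nx, ny, dx, dy, dt = 6, 5, 0.1, 0.2, 0.05
rnd = random.Random(42)

# Staggered 2D fields on a periodic grid: Bx[i][j] at (i+1/2, j),
# By[i][j] at (i, j+1/2), edge emf Ez[i][j] at (i+1/2, j+1/2).
Bx = [[rnd.random() for _ in range(ny)] for _ in range(nx)]
By = [[rnd.random() for _ in range(ny)] for _ in range(nx)]
Ez = [[rnd.random() for _ in range(ny)] for _ in range(nx)]

def divB(Bx, By):
    """Discrete divergence at cell centers (backward differences);
    negative Python indices provide the periodic wrap-around."""
    return [[(Bx[i][j] - Bx[i - 1][j]) / dx
             + (By[i][j] - By[i][j - 1]) / dy
             for j in range(ny)] for i in range(nx)]

div0 = divB(Bx, By)

# One CT update: dBx/dt = -Delta_y(Ez)/dy, dBy/dt = +Delta_x(Ez)/dx.
for i in range(nx):
    for j in range(ny):
        Bx[i][j] -= dt * (Ez[i][j] - Ez[i][j - 1]) / dy
for i in range(nx):
    for j in range(ny):
        By[i][j] += dt * (Ez[i][j] - Ez[i - 1][j]) / dx
```

The two backward differences of $E_z$ cancel exactly in the divergence, regardless of how the emf was computed.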
To second-order accuracy, a midpoint quadrature rule is typically used to evaluate, e.g., $\hat{F}_{ {\mathbf{x}_f} }$ with its point value obtained by means of a 1D Riemann solver at cell interfaces (the base scheme).
For higher than $2^{\rm nd}$ order schemes, $\hat{F}_{ {\mathbf{x}_f} }$ is obtained by suitable quadrature rules, see, e.g., \cite{Corquodale_Colella2011, Felker_Stone2018, Verma_etal2019}.
Another option is to use high-order finite-difference (FD) schemes for which primary variables are stored as point-values in the same positions imposed by the CT method, and where multi-dimensional averaging is not needed \cite{Londrillo_DelZanna2000, DelZanna_etal2007}.
However, for the sake of clarity and simplicity, in the following we limit our analysis to second-order schemes, so that FV and FD schemes basically coincide and the averaging operations are simply omitted in our notation.
\subsection{Approximate Riemann solvers for the base scheme}
CT schemes for MHD must be coupled to the Godunov-type method to solve the hyperbolic sub-system of Euler-like partial differential equations for $U_{ {\boldsymbol{c}} }$ (the base scheme).
The inter-cell fluxes $\hat{F}_{ {\mathbf{x}_f} }$, $\hat{F}_{ {\mathbf{y}_f} }$ and $\hat{F}_{ {\mathbf{z}_f} }$ are evaluated by solving a Riemann problem between left and right states reconstructed from the zone average to the desired quadrature point.
For the midpoint rule, left and right states can be obtained using one-dimensional upwind reconstruction techniques.
For instance, at an $x$-interface,
\begin{equation}
U^L_ {\mathbf{x}_f} = \altmathcal{R}^+_x\left(U_{ {\boldsymbol{c}} }\right),\quad
U^R_ {\mathbf{x}_f} = \altmathcal{R}^-_x\left(U_{ {\boldsymbol{c}} + \hvec{e}_x}\right)\, ,\quad
\end{equation}
where $\altmathcal{R}^\pm_x()$ is an operator in the $x$ direction giving the reconstructed value at the right ($+$) or left ($-$) interface with respect to the cell center, with the desired order of accuracy and possessing monotonicity properties.
Left and right states in the other directions are obtained similarly.
Reconstruction is best carried out on primitive or characteristic variables, as it is known to produce less oscillatory results.
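As a concrete illustration, here is a minimal Python sketch (our own, with illustrative names) of a minmod-limited linear reconstruction implementing $\altmathcal{R}^{\pm}_x$ to second order:

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smallest slope otherwise."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct_x(u):
    """Second-order limited linear reconstruction along x.

    Returns (uL, uR): left and right states at interface i+1/2, i.e.
    R^+_x applied to cell i and R^-_x applied to cell i+1.
    Boundary cells fall back to first order (zero slope).
    """
    du = np.zeros_like(u)
    du[1:-1] = minmod(u[1:-1] - u[:-2], u[2:] - u[1:-1])
    uL = u[:-1] + 0.5 * du[:-1]   # R^+_x (U_i)
    uR = u[1:]  - 0.5 * du[1:]    # R^-_x (U_{i+1})
    return uL, uR
```

On smooth monotone data the limiter is inactive and both states coincide at interior interfaces, recovering second-order accuracy.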
After the reconstruction phase, one needs to solve the Riemann problem, a procedure that in modern shock-capturing schemes for MHD is hardly ever achieved using exact nonlinear solvers.
Approximate Riemann solvers provide inter-cell fluxes generally written as the sum of a centered flux term and a dissipative term
\begin{equation}\label{eq:centered+dissipative}
\hat{F}_{ {\mathbf{x}_f} } = F_{ {\mathbf{x}_f} } - \Phi_{ {\mathbf{x}_f} } \,,
\end{equation}
where $F_{ {\mathbf{x}_f} }$ is the centered flux term while $\Phi_{ {\mathbf{x}_f} }$ is the (stabilizing) dissipative term.
Consider, for instance, the linearized Roe Riemann solver, based on the decomposition of variables into characteristics.
In this case the two terms are
\begin{equation}\label{eq:RoeFlux}
F_{ {\mathbf{x}_f} } = \frac{1}{2}\left(F_{ {\mathbf{x}_f} }^L + F_{ {\mathbf{x}_f} }^R\right)
\,,\quad
\Phi_{ {\mathbf{x}_f} } = \frac{1}{2}\tens{R}|\tens{\Lambda}|\tens{L}
\cdot\left(U^R_ {\mathbf{x}_f} - U^L_ {\mathbf{x}_f} \right) \,,
\end{equation}
where $F^{L,R}_{ {\mathbf{x}_f} } = \hvec{e}_x\cdot\tens{F}(U_ {\mathbf{x}_f} ^{L,R})$ are the left and right fluxes, $\tens{R}=\tens{R}(\overline{U}_{\rm roe})$ and $\tens{L}=\tens{L}(\overline{U}_{\rm roe})$ are the right and left eigenvector matrices defined in terms of the Roe average state $\overline{U}_{\rm roe}$ (note that $\tens{R}\tens{L}=\tens{L}\tens{R}=\tens{I}$) while $|\tens{\Lambda}| = {\rm diag}(|\lambda_1|, ...,|\lambda_k|)$ is a diagonal matrix containing the eigenvalues in absolute value.
A different averaging procedure is obtained in the case of HLL schemes \cite{HLL1983}, where the inter-cell numerical flux is expressed through a convex combination of left and right fluxes plus a diffusion term
\begin{equation}\label{eq:HLLFlux}
\hat{F}_{ {\mathbf{x}_f} } = \frac{\alpha^R_x F^L_ {\mathbf{x}_f} + \alpha^L_x F^R_ {\mathbf{x}_f} }
{\alpha^R_x + \alpha^L_x}
- \frac{\alpha^R_x\alpha^L_x(U^R_ {\mathbf{x}_f} - U^L_ {\mathbf{x}_f} )}
{\alpha^R_x + \alpha^L_x} \,,
\end{equation}
where $\alpha^R_x = \max (0, \lambda^R_ {\mathbf{x}_f} ) \ge 0$ and $\alpha^L_x = -\min(0,\lambda^L_ {\mathbf{x}_f} )\ge 0$, with $\lambda^R_ {\mathbf{x}_f} $ the rightmost (largest) characteristic speed and $\lambda^L_ {\mathbf{x}_f} $ the leftmost (smallest with sign, not in terms of its absolute value) characteristic speed.
The HLL flux can be derived from an integral relation \cite{Toro2009} and approximates the inter-cell flux with a single intermediate state when $\lambda^R_ {\mathbf{x}_f} \geq 0$ and $\lambda^L_ {\mathbf{x}_f} \leq 0$, while retaining pure upwind properties when the two speeds have the same sign, hence $\hat{F}_{ {\mathbf{x}_f} } \equiv F^L_ {\mathbf{x}_f} $ or $ \hat{F}_{ {\mathbf{x}_f} } \equiv F^R_ {\mathbf{x}_f} $.
The HLL flux is known to be quite dissipative compared to the Roe one, especially for second-order schemes, but the simplicity of its component-wise resolution, possibly combined with higher-order reconstruction, has received considerable attention since \cite{Londrillo_DelZanna2000}.
The simplest solver that does not require the full characteristic decomposition is the Rusanov one, or local Lax-Friedrichs, which simply replaces the eigenvalue matrix with $|\tens{\Lambda}| = \tens{I}|\lambda_{\max}|$, where $\lambda_{\max}$ is the maximum local spectral radius.
In this case $\Phi_{ {\mathbf{x}_f} } = |\lambda_{\max}|(U^R_ {\mathbf{x}_f} - U^L_ {\mathbf{x}_f} )/2$.
Notice that the Rusanov flux can also be derived as a particular case of the HLL one, when $-\lambda^L_ {\mathbf{x}_f} =\lambda^R_ {\mathbf{x}_f} = |\lambda_{\max}|$, hence $\alpha_x^L = \alpha_x^R = |\lambda_{\max}|$.
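To make the correspondence concrete, here is a small Python sketch (ours; a scalar Burgers flux is used purely for illustration) showing that the Rusanov flux is recovered from the HLL flux with $-\lambda^L = \lambda^R = |\lambda_{\max}|$:

```python
def hll_flux(uL, uR, f, lamL, lamR):
    """Scalar HLL flux with alpha_R = max(0, lamR), alpha_L = -min(0, lamL)."""
    aR = max(0.0, lamR)
    aL = -min(0.0, lamL)
    if aR + aL == 0.0:               # degenerate case: no waves
        return 0.5 * (f(uL) + f(uR))
    return (aR * f(uL) + aL * f(uR) - aR * aL * (uR - uL)) / (aR + aL)

def rusanov_flux(uL, uR, f, lam_max):
    """Rusanov (local Lax-Friedrichs) flux: Phi = |lam_max| (uR - uL) / 2."""
    return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(lam_max) * (uR - uL)

f = lambda u: 0.5 * u * u            # Burgers flux, f'(u) = u
uL, uR = 2.0, -1.0
lam = max(abs(uL), abs(uR))          # local spectral radius
F_hll = hll_flux(uL, uR, f, -lam, lam)
F_rus = rusanov_flux(uL, uR, f, lam)
```

The two fluxes agree identically for any left/right pair once the symmetric wave speeds are imposed.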
Other alternatives to the Roe solver, avoiding the full spectral decomposition but still relying on the knowledge of the characteristic speeds, are the HLLC \cite{Toro2009, Gurski2004, Li2005} and HLLD \cite{Miyoshi_Kusano2005} solvers, for a better resolution with respect to HLL of contact and Alfv\'enic jumps, respectively.
These solvers will be discussed in more detail in the next sections.
It is worth noticing that multidimensional Riemann solvers for the induction equation also allow the numerical flux to be decomposed into centered and dissipative terms, as shown in the work by Balsara, see \citep{Balsara_Kappeli2017, Balsara_Nkonga2017}.
The parabolic terms arising from the dissipative terms are equivalent to a physical conductivity which makes the discretization of the induction equations numerically stable.
In this context, an attractive approach to the Riemann problem is provided by the so-called HLLI solver, accounting for multiple intermediate characteristic waves \cite{Dumbser_Balsara2016,Balsara_Nkonga2017}.
\section{Flux and Dissipation terms}
\begin{itemize}[leftmargin=*]
\item
HLL:
By construction, the HLL interface flux satisfies
\[
\left\{\begin{array}{l}
\lambda_L(\vec{U}^{hll} - \vec{U}_L) = (\vec{F}^{hll} - \vec{F}_L)
\\ \noalign{\medskip}
\lambda_R(\vec{U}^{hll} - \vec{U}_R) = (\vec{F}^{hll} - \vec{F}_R)
\end{array}\right.
\]
Solving for $\vec{U}^{hll}$ and $\vec{F}^{hll}$ gives
\[
\vec{U}^{hll} = \frac{\lambda_R\vec{U}_R - \lambda_L\vec{U}_L
+ \vec{F}_L - \vec{F}_R}
{\lambda_R - \lambda_L} \,,\qquad
\vec{F}^{hll} = \frac{\lambda_R\vec{F}_L - \lambda_L\vec{F}_R
+ \lambda_L\lambda_R(\vec{U}_R-\vec{U}_L)}
{\lambda_R - \lambda_L}
\]
At an interface we evaluate $\vec{F}$ as
\[
\vec{F}_{i+\frac{1}{2}} = \left\{\begin{array}{ll}
\vec{F}_L & \quad{\rm if}\quad \lambda_L > 0 \\ \noalign{\medskip}
\vec{F}^{hll} & \quad{\rm if}\quad \lambda_L < 0 < \lambda_R
\\ \noalign{\medskip}
\vec{F}_R & \quad{\rm if}\quad \lambda_R < 0 \\ \noalign{\medskip}
\end{array}\right.
\quad\rightarrow\quad
\vec{F}_{i+\frac{1}{2}} = \frac{\alpha_R\vec{F}_L - \alpha_L\vec{F}_R
+ \alpha_L\alpha_R(\vec{U}_R-\vec{U}_L)}
{\alpha_R - \alpha_L}
\]
where $\alpha_L = \min(0, \lambda_L)$ and $\alpha_R = \max(0, \lambda_R)$.
Using the jump conditions:
\[
\vec{F}_{i+\frac{1}{2}} = \left\{\begin{array}{l}
\vec{F}_L
+ \min\left(0,\lambda_L\right)\left(\vec{U}^{hll} - \vec{U}_L\right)
+ \min\left(0,\lambda_R\right)\left(\vec{U}_R - \vec{U}^{hll}\right)
\\ \noalign{\medskip}
\vec{F}_R
+ \max\left(0,\lambda_R\right)\left(\vec{U}^{hll} - \vec{U}_R\right)
+ \max\left(0,\lambda_L\right)\left(\vec{U}_L - \vec{U}^{hll}\right)
\end{array}\right.
\]
By taking the arithmetic average of the two solutions, one can write the flux in a Roe-like form
\[\boxed{
\vec{F}_{i+\frac{1}{2}} = \frac{1}{2}\left[ \vec{F}_L + \vec{F}_R
- |\lambda_L|\left(\vec{U}^{hll} - \vec{U}_L\right)
- |\lambda_R|\left(\vec{U}_R - \vec{U}^{hll}\right)\right]
}\]
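A quick numerical sanity check (Python, with arbitrary illustrative numbers of our own) confirms that the boxed Roe-like form reproduces the standard HLL flux for a subsonic fan and the pure upwind flux otherwise:

```python
def hll_state_flux(UL, UR, FL, FR, lamL, lamR):
    """Intermediate HLL state and flux from the integral relations."""
    Uh = (lamR * UR - lamL * UL + FL - FR) / (lamR - lamL)
    Fh = (lamR * FL - lamL * FR + lamL * lamR * (UR - UL)) / (lamR - lamL)
    return Uh, Fh

def roe_like_hll(UL, UR, FL, FR, lamL, lamR):
    """Boxed Roe-like form of the HLL flux."""
    Uh, _ = hll_state_flux(UL, UR, FL, FR, lamL, lamR)
    return 0.5 * (FL + FR - abs(lamL) * (Uh - UL) - abs(lamR) * (UR - Uh))

UL, UR, FL, FR = 1.0, 0.3, 0.7, -0.2              # arbitrary scalar data
_, Fh = hll_state_flux(UL, UR, FL, FR, -1.5, 2.0)
F_sub = roe_like_hll(UL, UR, FL, FR, -1.5, 2.0)   # subsonic fan -> F^hll
F_up  = roe_like_hll(UL, UR, FL, FR,  0.5, 2.0)   # 0 < lamL < lamR -> F_L
```

The equivalence relies only on the jump conditions across $\lambda_L$ and $\lambda_R$, which the HLL state satisfies by construction.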
\item
HLLC: write the HLLC flux as
\[
\vec{F}_{i+\frac{1}{2}} = \left\{\begin{array}{ll}
\vec{F}_L + \alpha_L(\vec{U}^*_L - \vec{U}_L) & {\rm if} \quad \lambda^* > 0
\\ \noalign{\medskip}
\vec{F}_R + \alpha_R(\vec{U}^*_R - \vec{U}_R) & {\rm if} \quad \lambda^* < 0
\end{array}\right.
\]
where $\alpha_L = \min(0, \lambda_L)$ and $\alpha_R = \max(0, \lambda_R)$.
Using the jump condition across each wave we can also write
\[
\vec{F}_{i+\frac{1}{2}} = \left\{\begin{array}{l}
\vec{F}_L
+ \min\left(0,\lambda_L\right)\left(\vec{U}^*_L - \vec{U}_L\right)
+ \min\left(0,\lambda^*\right)\left(\vec{U}^*_R - \vec{U}^*_L\right)
+ \min\left(0,\lambda_R\right)\left(\vec{U}_R - \vec{U}^*_R\right)
\\ \noalign{\medskip}
\vec{F}_R
+ \max\left(0,\lambda_R\right)\left(\vec{U}^*_R - \vec{U}_R\right)
+ \max\left(0,\lambda^*\right)\left(\vec{U}^*_L - \vec{U}^*_R\right)
+ \max\left(0,\lambda_L\right)\left(\vec{U}_L - \vec{U}^*_L\right)
\end{array}\right.
\]
Taking the arithmetic average we write the HLLC flux in a Roe-like form:
\[\boxed{
\vec{F}_{i+\frac{1}{2}} = \frac{1}{2}\Big[ \vec{F}_L + \vec{F}_R
- |\lambda_L|\left(\vec{U}^*_L - \vec{U}_L\right)
- |\lambda^*|\left(\vec{U}^*_R - \vec{U}^*_L\right)
- |\lambda_R|\left(\vec{U}_R - \vec{U}^*_R\right)\Big]
}\]
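The equivalence of the upwind and averaged forms rests only on the jump conditions across the three waves; a Python sketch (ours, with arbitrary illustrative numbers) verifies it for a subsonic fan:

```python
def hllc_upwind(FL, FR, UL, UR, UsL, UsR, lamL, lamR, lam_s):
    """HLLC flux in upwind form (lam_s is the contact speed lambda^*)."""
    if lam_s > 0.0:
        return FL + min(0.0, lamL) * (UsL - UL)
    return FR + max(0.0, lamR) * (UsR - UR)

def hllc_roe_like(FL, FR, UL, UR, UsL, UsR, lamL, lamR, lam_s):
    """Boxed Roe-like (symmetric) form of the HLLC flux."""
    return 0.5 * (FL + FR
                  - abs(lamL)  * (UsL - UL)
                  - abs(lam_s) * (UsR - UsL)
                  - abs(lamR)  * (UR - UsR))

# States built so that the jump conditions hold across each wave:
lamL, lam_s, lamR = -2.0, 0.4, 1.5
UL, UsL, UsR, UR = 1.0, 1.3, 0.8, 0.5
FL  = 0.6
FsL = FL  + lamL  * (UsL - UL)    # jump across lamL
FsR = FsL + lam_s * (UsR - UsL)   # jump across the contact
FR  = FsR + lamR  * (UR - UsR)    # jump across lamR
F_up = hllc_upwind(FL, FR, UL, UR, UsL, UsR, lamL, lamR, lam_s)
F_av = hllc_roe_like(FL, FR, UL, UR, UsL, UsR, lamL, lamR, lam_s)
```

With $\lambda_L < 0 < \lambda^* $ both forms return the star flux $F^*_L$.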
Alternatively (CHECK again), in even more compact form,
\[
\vec{F}^c = \frac{1+{\rm sign}(\lambda^*)}{2}
\left[\vec{F}_L + \alpha_L(\vec{U}^*_L - \vec{U}_L)\right]
+ \frac{1-{\rm sign}(\lambda^*)}{2}
\left[\vec{F}_R + \alpha_R(\vec{U}^*_R - \vec{U}_R)\right]
\]
and therefore
\[\boxed{
\vec{F}^c = \alpha^*_L\vec{F}_L + \alpha^*_R\vec{F}_R
- \alpha^*_L\alpha_L\left(\vec{U}_L - \vec{U}^*_L\right)
- \alpha^*_R\alpha_R\left(\vec{U}_R - \vec{U}^*_R\right)
}
\]
where $\alpha^*_L = (1+{\rm sign}(\lambda^*))/2$, $\alpha^*_R = (1-{\rm sign}(\lambda^*))/2$.
In this form, the coefficients are
\[
\alpha_L = \frac{1+{\rm sgn}(\lambda^*)}{2} \,,\quad
\alpha_R = \frac{1-{\rm sgn}(\lambda^*)}{2} \,,\quad
\phi_{LR} = \alpha_L\lambda_L \left(\vec{u}^*_L - \vec{u}_L\right) +
\alpha_R\lambda_R \left(\vec{u}^*_R - \vec{u}_R\right)
\]
\item HLLD: write the flux as
\begin{equation}
\begin{array}{lll}
\displaystyle \vec{F}_x & = \displaystyle \frac{1}{2}\Big[ \vec{F}_L + \vec{F}_R &
- \left|\lambda_L\right| \left(\vec{u}^*_L - \vec{u}_L\right)
- \left|\lambda^*_{L}\right| \left(\vec{u}^{**}_L - \vec{u}^*_L\right) \\
& & - \left|\lambda_{c}\right| \left(\vec{u}^{**}_R - \vec{u}^{**}_L\right)
- \left|\lambda^*_{R}\right|\left(\vec{u}^*_R - \vec{u}^{**}_R\right)
- \left|\lambda_{R}\right| \left(\vec{u}_R - \vec{u}^*_R\right)\Big]
\end{array}
\end{equation}
\end{itemize}
For HLL-type solvers, the jump across a given wave may be written as, e.g.,
\begin{equation}
U^{*}_L - U_L = f(U_L, \lambda^*) \qquad{\rm where}\quad
\lambda^* = \lambda^*(U_L, U_R)
\end{equation}
At the contact wave there is no jump in the magnetic field, so
\begin{equation}\label{eq:HLLDFlux_By}
\vec{F}_x = \frac{1}{2}\Big[ \vec{F}_L + \vec{F}_R
- \left|\lambda_L\right| \left(\vec{u}^*_L - \vec{u}_L\right)
- \left|\lambda^*_L\right|\left(\vec{u}^{**} - \vec{u}^*_L\right)
- \left|\lambda^*_R\right|\left(\vec{u}^{*}_R - \vec{u}^{**}\right)
- \left|\lambda_R\right| \left(\vec{u}_R - \vec{u}^*_R\right)\Big]
\end{equation}
with $B^{**}_{yL} = B^{**}_{yR}$.
\newpage
\section{General Composition}
Let's start by writing the induction equation in components:
\begin{equation}
\begin{array}{l}
\displaystyle \pd{B_x}{t} + \pd{E_z}{y} - \pd{E_y}{z} = 0 \\ \noalign{\medskip}
\displaystyle \pd{B_y}{t} + \pd{E_x}{z} - \pd{E_z}{x} = 0 \\ \noalign{\medskip}
\displaystyle \pd{B_z}{t} + \pd{E_y}{x} - \pd{E_x}{y} = 0
\end{array}
\end{equation}
The general formula should look something like
\begin{equation}\label{eq:general_comp}
\begin{array}{lcl}
E_x &=&\displaystyle - \av{v_yB_z - v_zB_y} - \frac{\Delta_zB_y}{2}
+ \frac{\Delta_yB_z}{2}
= - \av{v_yB_z - v_zB_y} - \phi_z(B_y) + \phi_y(B_z)
\\ \noalign{\medskip}
E_y &=&\displaystyle - \av{v_zB_x - v_xB_z} - \frac{\Delta_xB_z}{2}
+ \frac{\Delta_zB_x}{2}
= - \av{v_zB_x - v_xB_z} - \phi_x(B_z) + \phi_z(B_x)
\\ \noalign{\medskip}
E_z &=&\displaystyle - \av{v_xB_y - v_yB_x} - \frac{\Delta_yB_x}{2}
+ \frac{\Delta_xB_y}{2}
= - \av{v_xB_y - v_yB_x} - \phi_y(B_x) + \phi_x(B_y)
\end{array}
\end{equation}
where the terms in $\av{.}$ represent some kind of average, while the other terms are diffusion terms.
The \emph{composition formula} assumes that the 1D flux can be written as
\begin{equation}
\hat{F} = a^LF^L + a^RF^R - (d^RB_t^R - d^LB_t^L)
\end{equation}
where $F = B_tv_n - B_nv_t$. We then compose the electric field at the edge as
\begin{equation}\label{eq:UCT_composition}
\boxed{
E_z = - \left[(av_xB_y)^W + (av_xB_y)^E\right]
+ \left[(av_yB_x)^S + (av_yB_x)^N\right]
+ \Big[(dB_y)^E - (dB_y)^W\Big]
- \Big[(dB_x)^N - (dB_x)^S\Big]
}
\end{equation}
where
\begin{itemize}
\item $a^{E,W}$, $d^{E,W}$, $\overline{v}_y$ are taken by averaging $x$-interface values;
\item $a^{N,S}$, $d^{N,S}$, $\overline{v}_x$ are taken by averaging $y$-interface values;
\end{itemize}
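As a sketch (ours, in Python, with illustrative scalar arguments), the composition formula for $E_z$ reads:

```python
def emf_uct_z(aW, aE, aS, aN, dW, dE, dS, dN,
              vxW, vxE, vyS, vyN, ByW, ByE, BxS, BxN):
    """E_z at a z-edge from the UCT composition formula.
    a*, d* and the velocities are averaged interface coefficients;
    B* are the staggered fields reconstructed to the edge."""
    return (-(aW * vxW * ByW + aE * vxE * ByE)
            + (aS * vyS * BxS + aN * vyN * BxN)
            + (dE * ByE - dW * ByW)
            - (dN * BxN - dS * BxS))

# Uniform data with a = 1/2: the diffusion terms cancel and
# E_z reduces to -vx*By + vy*Bx.
Ez = emf_uct_z(0.5, 0.5, 0.5, 0.5, 0.7, 0.7, 0.7, 0.7,
               2.0, 2.0, 3.0, 3.0, 1.0, 1.0, 4.0, 4.0)
```

The consistency condition $a^W + a^E = a^S + a^N = 1$ with matching $d$ coefficients guarantees that the smooth emf $-v_xB_y + v_yB_x$ is recovered for uniform flow.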
\newpage
\section{UCT-Flux}
Separate the smooth term from the diffusion term in the 1D fluxes:
\[
F = \hat{F} - \Phi \quad\rightarrow\quad
\Phi = \hat{F} - F = \frac{F_L + F_R}{2} - F
\]
where the fluxes of the induction equation are, during the $x$-sweep:
\begin{equation}
F_{i+\frac{1}{2}} = \left\{\begin{array}{l}
0 \\ \noalign{\medskip}
\hat{F}^{[B_y]} - \Delta_xB_y = -\hat{E}_z - \Delta_xB_y \\ \noalign{\medskip}
\hat{F}^{[B_z]} - \Delta_xB_z = \hat{E}_y - \Delta_xB_z
\end{array}\right.
\end{equation}
while during the $y$-sweep:
\begin{equation}
F_{j+\frac{1}{2}} = \left\{\begin{array}{l}
\hat{F}^{[B_x]} - \Delta_yB_x = \hat{E}_z - \Delta_yB_x \\ \noalign{\medskip}
0 \\ \noalign{\medskip}
\hat{F}^{[B_z]} - \Delta_yB_z = -\hat{E}_x - \Delta_yB_z
\end{array}\right.
\end{equation}
and the $z$-sweep:
\begin{equation}
F_{k+\frac{1}{2}} = \left\{\begin{array}{l}
\hat{F}^{[B_x]} - \Delta_zB_x = -\hat{E}_y - \Delta_zB_x \\ \noalign{\medskip}
\hat{F}^{[B_y]} - \Delta_zB_y = \hat{E}_x - \Delta_zB_y \\ \noalign{\medskip}
0
\end{array}\right.
\end{equation}
When averaging at an edge, following Eq. (\ref{eq:general_comp})
\[
\begin{array}{l}
E_x = \av{E_x} + \phi_y - \phi_z \\ \noalign{\medskip}
E_y = \av{E_y} + \phi_z - \phi_x \\ \noalign{\medskip}
E_z = \av{E_z} + \phi_x - \phi_y \\ \noalign{\medskip}
\end{array}
\]
\newpage
\section{Velocity decomposition}
Write the intercell flux by subtracting a second diffusion contribution:
\begin{equation}
F_{i+\frac{1}{2}} = \hat{F}_{i+\frac{1}{2}}
- \left(\Phi_{i+\frac{1}{2}} - \frac{\lambda}{2} \Delta U\right)
- \frac{\lambda}{2} \Delta U
= \hat{F}_{i+\frac{1}{2}} - \tilde{\Phi}_{i+\frac{1}{2}}
- \frac{\lambda}{2} \Delta U
\end{equation}
where $\lambda$ has yet to be specified.
In the case of a Roe solver, $\Phi = \tens{|A|}\cdot(U_R - U_L)/2$ and
\begin{equation}
F_{i+\frac{1}{2}} =
\frac{F_{i+\frac{1}{2},L} + F_{i+\frac{1}{2},R}}{2}
- \frac{1}{2}\Big(|\tens{A}|-\tens{I}\lambda\Big)\cdot\Delta U
- \frac{\lambda}{2}\Delta U
\end{equation}
where $|A| = R|\Lambda|L$, $\Delta U = U_R - U_L$.
At cell edges using the composition formula
\begin{equation}\label{eq:Ez1}
\begin{array}{lcl}
E_z &=& \displaystyle - \frac{(v_xB_y + \tilde{\Phi}_y)^W + (v_xB_y + \tilde{\Phi}_y)^E}{2}
+ \frac{(v_yB_x + \tilde{\Phi}_x)^S + (v_yB_x + \tilde{\Phi}_x)^N}{2}
\\ \noalign{\medskip}
& & \displaystyle + \frac{\av{\lambda}_S^N}{2} (B^E_y - B^W_y)
- \frac{\av{\lambda}_E^W}{2} (B_x^N - B_x^S)
\end{array}
\end{equation}
Here we give some guidelines on how $\lambda$ could be determined:
\begin{itemize}
\item
In the case of supersonic flow along one direction (say, $x$) we set $\lambda=0$ since we can recover the upwind state naturally:
\[
0 < S_L < S_R \quad\rightarrow\quad
\Phi_x \equiv \hat{\Phi}_x = F_{i+\frac{1}{2}} - \frac{F_L + F_R}{2}
= F_{L} - \frac{F_L + F_R}{2}
= \frac{F_L - F_R}{2}
\]
and the corner electric field reduces to the expected value:
\[
\quad\rightarrow\quad
E_z = - B^W_y\frac{v_x^{W,L} + v_x^{W,R}}{2}
+ \frac{1}{2}B^S_xv_y^{W,L}
+ \frac{1}{2}B^N_xv_y^{W,R} + \frac{1}{2}\Phi^y_W + \frac{1}{2}\Phi^y_E
\]
\item
In the more general case, one can choose $\lambda = |v_x|$ (during the $x$-sweep) or $\lambda = |v_y|$ (during the $y$-sweep), that is, the fluid velocity or the speed of the contact mode.
Then Eq. (\ref{eq:Ez1}) may also be rewritten as
\begin{equation}\label{eq:Ez2}
\begin{array}{lcl}
E_z &=& \displaystyle - \frac{1}{2}\Big[(v_xB_y)^E + (v_xB_y)^W\Big]
+ \frac{\overline{v}_x^{EW}}{2} (B^E_y - B^W_y)
- \frac{1}{2}\left(\tilde{\Phi}^N_x + \tilde{\Phi}^S_x\right)
\\ \noalign{\medskip}
& & \displaystyle
+ \frac{1}{2}\Big[(v_yB_x)^N + (v_yB_x)^S\Big]
- \frac{\overline{v}_y^{NS}}{2} (B^N_x - B^S_x)
+ \frac{1}{2}\left(\tilde{\Phi}^E_y + \tilde{\Phi}^W_y\right)
\end{array}
\end{equation}
\end{itemize}
\newpage
\section{HLLD}
Flux at cell interface for transverse components of $\vec{B}$ is written as (see Eq. \ref{eq:HLLDFlux_By}):
\[
\vec{F}_x = \frac{1}{2}\Big[ \vec{F}_L + \vec{F}_R
- \left|\lambda_L\right| \left(\vec{u}^*_L - \vec{u}_L\right)
- \left|\lambda^*_L\right|\left(\vec{u}^{**} - \vec{u}^*_L\right)
- \left|\lambda^*_R\right|\left(\vec{u}^{*}_R - \vec{u}^{**}\right)
- \left|\lambda_R\right| \left(\vec{u}_R - \vec{u}^*_R\right)\Big]
= \hat{F} - \phi
\]
At cell edge we need to evaluate
\begin{equation}
E_z = \frac{1}{2}\Big[ - (v_xB_y)^W - (v_xB_y)^E
+ (v_yB_x)^N + (v_yB_x)^S\Big]
- \phi_y + \phi_x
\end{equation}
where the diffusion terms are:
\begin{equation}\label{eq:hlld_phi1D}
\begin{array}{l}
2\phi_x = \left|\lambda_L\right| \left(B^{*W}_y - B^W_y\right)
+ \left|\lambda^*_L\right|\left(B^{**}_y - B^{*W}_y\right)
+ \left|\lambda^*_R\right|\left(B^{*E}_y - B^{**}_y\right)
+ \left|\lambda_R\right| \left(B^E_y - B^{*E}_y\right)
\\ \noalign{\medskip}
2\phi_y = \left|\lambda_L\right| \left(B^{*S}_x - B^S_x\right)
+ \left|\lambda^*_L\right|\left(B^{**}_x - B^{*S}_x\right)
+ \left|\lambda^*_R\right|\left(B^{*N}_x - B^{**}_x\right)
+ \left|\lambda_R\right| \left(B^N_x - B^{*N}_x\right)
\end{array}
\end{equation}
computed during an $x$- and $y$-sweep, respectively.
By working out the explicit dependences from the 1D solver we try to write the diffusion terms as
\[
\boxed{
\phi_x = d_RB^R_{y,z} - d_LB^L_{y,z} \,,\qquad
\phi_y = d_RB^R_{x,z} - d_LB^L_{x,z}
}
\]
Using script \#03 one finds:
\begin{equation}\label{eq:By_chi}
B^{*}_{y_L} - B_{y_L} = B_{y_L}\chi_L \qquad{\rm where}\quad
\chi_L = -\frac{\rho_L(v_{x_L}- \lambda_c)(v_{x_L} - \lambda_L)}
{(\lambda_L-\lambda_c)(v_{xL} - \lambda_L)\rho_L+ B_x^2}
\end{equation}
Using $\rho_L(\lambda_L-v_{xL}) = \rho^*_L(\lambda_L-\lambda_c)$ (Eq. 43 of Miyoshi \& Kusano 2005) and $B_x^2 = \rho^*_L(\lambda^*_L-\lambda_c)^2$ (Eq. 51 of MK05) we have
\begin{equation}\label{eq:chi}
\begin{array}{ll}
\chi_L &=\displaystyle -\frac{\rho_L(v_{x_L}- \lambda_c)(v_{x_L} - \lambda_L)}
{-\rho_L^*(\lambda_L-\lambda_c)^2 + B_x^2}
= -\frac{\rho_L(v_{x_L}- \lambda_c)(v_{x_L} - \lambda_L)}
{\rho_L^*[-(\lambda_L-\lambda_c)^2 + (\lambda^*_L-\lambda_c)^2]}
= \frac{(v_{x_L}- \lambda_c)(\lambda_L - \lambda_c)}
{[-(\lambda_L-\lambda_c)^2 + (\lambda^*_L-\lambda_c)^2]}
\\ \noalign{\medskip}
&=\displaystyle \frac{(v_{x_L}- \lambda_c)(\lambda_L - \lambda_c)}
{(\lambda^*_L-\lambda_L)(\lambda^*_L + \lambda_L - 2\lambda_c)}
\end{array}
\end{equation}
Likewise one can find $\chi_R$.
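The two expressions for $\chi_L$ can be checked numerically under the MK05 relations (Python sketch, with illustrative numbers of our own chosen to satisfy those relations):

```python
import math

# Pick a left state consistent with MK05:
#   rho_L (lam_L - vx_L) = rho*_L (lam_L - lam_c)   (Eq. 43)
#   Bx^2  = rho*_L (lam*_L - lam_c)^2               (Eq. 51)
lam_c, lam_L, rho_L, rho_sL, Bx = 0.2, -1.5, 1.0, 1.44, 0.6
vx_L = lam_L - rho_sL * (lam_L - lam_c) / rho_L
lam_sL = lam_c - abs(Bx) / math.sqrt(rho_sL)

# Original expression for chi_L ...
chi_orig = -rho_L * (vx_L - lam_c) * (vx_L - lam_L) / (
    (lam_L - lam_c) * (vx_L - lam_L) * rho_L + Bx**2)
# ... and the factored form after the substitutions:
chi_fact = (vx_L - lam_c) * (lam_L - lam_c) / (
    (lam_sL - lam_L) * (lam_sL + lam_L - 2.0 * lam_c))
```

Both expressions agree to machine accuracy whenever the chosen state honors the two MK05 identities.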
The $**$ state can be identified with the HLL average beyond the Alfv\'en modes:
\begin{equation}\label{eq:**}
B^{**}_y = \frac{\lambda^*_RB^*_{y_R} - \lambda^*_LB^*_{y_L} + F^*_L - F^*_R}
{\lambda^*_R - \lambda^*_L}
\end{equation}
where, in order to avoid computing $F^{*}_L$ and $F^*_R$ we use the jump conditions,
\[
\begin{array}{l}
\displaystyle F^*_L = F_L + \lambda_L(B^*_L - B_L) \\ \noalign{\medskip}
\displaystyle F^*_R = F_R + \lambda_R(B^*_R - B_R)
\end{array}
\]
Since $B^*_t \propto B_t$ and $B^{**}_t = f(B^{*}_{tL},B^{*}_{tR})$, the dissipative terms (\ref{eq:hlld_phi1D}) can therefore be written as a combination of linear terms,
\begin{equation}\label{eq:hlld_phi2D}
\begin{array}{l}
\phi_x = d^E_xB^E_y - d^W_xB^W_y
\\ \noalign{\medskip}
\phi_y = d^N_yB^N_x - d^S_yB^S_x
\end{array}
\end{equation}
I now discuss the degeneracies:
\begin{itemize}
\item $B_x\to0, B_t \ne 0 $.
In this limit we have only 3 waves:
\[
\lambda = u + \left(-c_f,\, 0,\, c_f\right)
\]
since $c_s \to c_a \to 0$ (this corresponds to case I in Roe \& Balsara 1996).
The $**$ state is apparently ill-defined, but the HLLD solution remains regular and should reduce to that of the HLLC solver, with a jump in the transverse components $B_t$.
$\chi_L$ and $\chi_R$ remain bounded:
\[
\lim_{B_x\to0}\chi_L = -\frac{v_{xL} - \lambda_c}{\lambda_L - \lambda_c}
\]
\item $B_t\to0, B_x \ne 0 $.
In this limit we have only 5 waves:
\[
\lambda = \left\{\begin{array}{lll}
\displaystyle u + \left(-a,\, -b_x,\, 0,\, b_x,\, a\right)
& {\rm if} \quad b_x = B_x/\sqrt{\rho} < a
& \qquad ({\rm Case}\quad III) \\ \noalign{\medskip}
\displaystyle u + \left(-b_x,\, -a,\, 0,\, a,\, b_x\right)
& {\rm if} \quad b_x = B_x/\sqrt{\rho} > a
& \qquad ({\rm Case}\quad IV)
\end{array}\right.
\]
\begin{itemize}
\item
In case III (as in Roe \& Balsara 1996), $c^2_f = a^2$, $c_s^2 = b_x^2$ while $\alpha^2_f = 1$, $\alpha^2_s = 0$: we do not have any problem, since $c_f$ and $b_x$ are distinct (weak magnetic field).
\item
In case IV (as in Roe \& Balsara 1996), $c^2_f = b_x^2$, $c_s^2 = a^2$ while $\alpha^2_f = 0$, $\alpha^2_s = 1$ (strong magnetic field).
Here the denominator for $\chi$ (Eq. \ref{eq:chi}) vanishes since $\lambda^*_\alpha\to \lambda_\alpha$ and we switch to HLL.
\end{itemize}
\end{itemize}
Using the MAPLE script \#02b we write the flux as
\[
F = a_LF_L + a_RF_R - (d_RB_R - d_LB_L)
\]
In order to deal with the singularity when $B_x\to0$, let's rearrange Eq. (\ref{eq:hlld_phi1D}) as
\[
2\phi_x = -|\lambda_L|B_{yL} + B^*_{yL} (|\lambda_L| - |\lambda^*_L|)
+ B^{**}_{y}(|\lambda^*_L| - |\lambda^*_R|)
+ B^*_{yR} (|\lambda^*_R| - |\lambda_R|)
+ |\lambda_R|B_{yR}
\]
Using (script \#02b):
\begin{equation}
\phi_x = d_RB_{yR} - d_LB_{yL}\,,
\qquad{\rm where}\qquad\left\{\begin{array}{l}
\displaystyle d_L = \frac{1}{2}\Big[
|\lambda^*_L| - |\lambda_L| - \nu(\lambda^*_L-\lambda_L)\Big]\chi_L
+ \frac{1}{2}|\lambda^*_L| - \frac{1}{2}\nu\lambda^*_L
\\ \noalign{\medskip}
\displaystyle d_R = \frac{1}{2}\Big[
|\lambda^*_R| - |\lambda_R| - \nu(\lambda^*_R-\lambda_R)\Big]\chi_R
+\frac{1}{2}|\lambda^*_R| - \frac{1}{2}\nu\lambda^*_R
\end{array}\right.
\end{equation}
The first term in square brackets can be re-written in a more convenient way to avoid singular behavior using Eq. (\ref{eq:chi}):
\[
\begin{array}{l}
\displaystyle (|\lambda^*_L| - |\lambda_L|)\chi_L
= (\lambda^*_L - \lambda_L)\frac{\lambda^*_L + \lambda_L}
{|\lambda^*_L| + |\lambda_L|}\chi_L
= \frac{(v_{x_L}- \lambda_c)(\lambda_L - \lambda_c)}
{(\lambda^*_L + \lambda_L - 2\lambda_c)}
\frac{ \lambda^*_L + \lambda_L}
{|\lambda^*_L| + |\lambda_L|}
\\ \noalign{\medskip}
\displaystyle (\lambda^*_L - \lambda_L)\chi_L
= \frac{(v_{x_L}- \lambda_c)(\lambda_L - \lambda_c)}
{(\lambda^*_L + \lambda_L - 2\lambda_c)}
\end{array}
\]
Here we used the identity
\[
\frac{|a| - |b|}{a - b} = \frac{a + b}{|a| + |b|}
\]
At a zone edge we use a composition formula similar to Del Zanna (2007):
\begin{equation}
\boxed{
E_{z, i+\frac{1}{2}, j+\frac{1}{2},k} =
-\frac{1}{2} \left[(\bar{v}_xB_y)^W + (\bar{v}_xB_y)^E\right]
+\frac{1}{2} \left[(\bar{v}_yB_x)^S + (\bar{v}_yB_x)^N\right]
+\frac{1}{2} (\phi_x^S + \phi^N_x)
-\frac{1}{2} (\phi_y^W + \phi^E_y)
}
\end{equation}
where the $d$ coefficients are taken as arithmetic (or upwinded) averages, while the L/R states are reconstructed from the staggered fields.
\subsection{About the ** state}
The definition of the Alfv\'en waves follows directly from: i) assuming constant density across them; ii) constant normal velocity across the Riemann fan; iii) using the jump conditions for the transverse components of magnetic field and velocity:
\[
\left\{\begin{array}{l}
\displaystyle (B^{**}_L - B^*_L)\lambda^*_L = (B^{**}_L - B^*_L)\lambda_c
-B_x(v^{**}_L - v^*_L)
\\ \noalign{\medskip}
\displaystyle (\rho^{**}_Lv^{**}_L - \rho^*_Lv^*_L)\lambda^*_L
= (\rho^{**}_Lv^{**}_L - \rho^*_Lv^*_L)\lambda_c
- B_x(B^{**}_L - B^*_L)
\end{array}\right.
\quad\Longrightarrow\quad
\left\{\begin{array}{l}
\displaystyle (B^{**}_L - B^*_L)(\lambda^*_L-\lambda_c) = -B_x(v^{**}_L - v^*_L)
\\ \noalign{\medskip}
\displaystyle (v^{**}_L - v^*_L)(\lambda^*_L - \lambda_c)
= - \frac{B_x}{\rho^*_L}(B^{**}_L - B^*_L)
\end{array}\right.
\]
Inserting the second into the first and applying the same arguments for the right state,
\[
(\lambda^*_L - \lambda_c)^2 = \frac{B_x^2}{\rho^*_L}\,,\qquad
(\lambda^*_R - \lambda_c)^2 = \frac{B_x^2}{\rho^*_R}\,,
\]
From which one finds:
\[
\left\{\begin{array}{l}
\displaystyle \lambda^*_L = \lambda_c - \frac{|B_x|}{\sqrt{\rho^*_L}}
\\ \noalign{\medskip}
\displaystyle \lambda^*_R = \lambda_c + \frac{|B_x|}{\sqrt{\rho^*_R}}
\end{array}\right.
\qquad\rightarrow\qquad
\left\{\begin{array}{l}
\displaystyle \sqrt{\rho^*_L} = \frac{|B_x|}{\lambda_c - \lambda^*_L}
\\ \noalign{\medskip}
\displaystyle \sqrt{\rho^*_R} = \frac{|B_x|}{\lambda^*_R - \lambda_c}
\end{array}\right.
\]
The ** state is written as:
\[
\begin{array}{ll}
\displaystyle B_y^{**}
&=\displaystyle \frac{\lambda^*_R B^*_{yR} - \lambda^*_L B^*_{yL} + F^*_L - F^*_R}
{\lambda^*_R - \lambda^*_L}
= \frac{ B^*_{yR}(\lambda^*_R-\lambda_c) - B^*_{yL}(\lambda^*_L-\lambda_c)
+ B_x(v^*_{yR} - v^*_{yL})}
{\lambda^*_R - \lambda^*_L}
\\ \noalign{\medskip}
&=\displaystyle \frac{\sqrt{\rho^*_L}B^*_{yR} + \sqrt{\rho^*_R}B^*_{yL} +
\sqrt{\rho^*_R\rho^*_L}(v^*_{yR} - v^*_{yL})s}
{\sqrt{\rho^*_R} + \sqrt{\rho^*_L}}
\end{array}
\]
with $s={\rm sign}(B_x)$.
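The equivalence between the HLL-average form of $B^{**}_y$ and the closed form above can be spot-checked numerically (Python sketch with arbitrary illustrative star states; we use $F^* = \lambda_cB^*_y - B_xv^*_y$, i.e. the $B_y$ flux with normal velocity equal to $\lambda_c$ in the star region):

```python
import math

# Arbitrary star states and speeds consistent with the Alfven-wave relations
# (illustrative numbers only):
rho_sL, rho_sR, Bx, lam_c = 1.21, 2.25, -0.9, 0.3
By_sL, By_sR, vy_sL, vy_sR = 0.4, -0.7, 1.1, 0.2
lam_sL = lam_c - abs(Bx) / math.sqrt(rho_sL)
lam_sR = lam_c + abs(Bx) / math.sqrt(rho_sR)

# HLL average beyond the Alfven modes, with F* = lam_c B*_y - Bx v*_y:
F_sL = lam_c * By_sL - Bx * vy_sL
F_sR = lam_c * By_sR - Bx * vy_sR
By_ss_hll = (lam_sR * By_sR - lam_sL * By_sL + F_sL - F_sR) / (lam_sR - lam_sL)

# Closed form in terms of the star densities:
s = math.copysign(1.0, Bx)
By_ss = (math.sqrt(rho_sL) * By_sR + math.sqrt(rho_sR) * By_sL
         + math.sqrt(rho_sL * rho_sR) * (vy_sR - vy_sL) * s) \
        / (math.sqrt(rho_sL) + math.sqrt(rho_sR))
```

The two evaluations coincide to roundoff for either sign of $B_x$.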
The diffusion term appears as
\[
(|\lambda^*_L| - |\lambda^*_R|)B^{**}_y =
-\frac{|\lambda^*_L| - |\lambda^*_R|}{\lambda^*_L - \lambda^*_R}
\Big(\lambda^*_RB^*_{y_R} - \lambda^*_LB^*_{y_L} + F^*_L - F^*_R\Big)
\]
where
\begin{equation}
\nu \equiv \frac{|\lambda^*_R| - |\lambda^*_L|}{\lambda^*_R - \lambda^*_L}
= \frac{\lambda^*_R + \lambda^*_L}{|\lambda^*_R| + |\lambda^*_L|}
\end{equation}
Setting $q = 1/\sqrt{\rho}$ one can expand the two expressions as
\begin{equation}
\nu = \frac{ |\lambda_c + q^*_R|B_x||
- |\lambda_c - q^*_L|B_x||}
{ |B_x|(q^*_R + q^*_L)}
= \frac{2\lambda_c + |B_x|(q^*_R - q^*_L)}
{|\lambda_c + |B_x|q^*_R| + |\lambda_c - |B_x|q^*_L|}
\end{equation}
This expression has different limits, depending on which quantity ($B_x$ or $\lambda_c$) tends to zero faster:
\begin{itemize}
\item In the limit $B_x\to0$ ($\lambda_c\ne0$):
\[
\lim_{B_x\to0}\nu = \frac{\lambda_c}{|\lambda_c|}
\]
\item In the limit $\lambda_c\to0$ ($B_x\ne0$):
\[
\lim_{\lambda_c\to0}\nu = \frac{q^*_R - q^*_L}{q^*_R + q^*_L}
\]
\end{itemize}
\newpage
\section{GFORCE Scheme}
For 1D conservation law the GFORCE scheme is written as
\begin{equation}
F^{\rm GFORCE} = \omega F^{LW} + (1-\omega)F^{LF}
\end{equation}
where
\[
F^{LF} = \frac{F_L + F_R}{2} - \frac{1}{2\tau}(U_R-U_L) \,,\qquad
F^{LW} = F(U^{LW}) \quad{\rm with}\quad
U^{LW} = \frac{U_L + U_R}{2} - \frac{\tau}{2}(F_R - F_L)
\]
where $\tau = \Delta t / \Delta x$.
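As a sanity check of these definitions, a Python sketch for linear advection $f(u)=au$ (ours; a typical choice in the literature, not enforced here, is $\omega = 1/(1+c)$ with $c$ the Courant number):

```python
def gforce_flux(uL, uR, a, tau, omega):
    """GFORCE flux for linear advection f(u) = a*u, with tau = dt/dx.
    omega = 0 recovers Lax-Friedrichs, omega = 1 recovers Lax-Wendroff."""
    f = lambda u: a * u
    F_lf = 0.5 * (f(uL) + f(uR)) - 0.5 / tau * (uR - uL)
    u_lw = 0.5 * (uL + uR) - 0.5 * tau * (f(uR) - f(uL))
    return omega * f(u_lw) + (1.0 - omega) * F_lf

uL, uR, a, tau = 1.0, 0.2, 2.0, 0.4
F_lw = gforce_flux(uL, uR, a, tau, 1.0)   # pure Lax-Wendroff
F_lf = gforce_flux(uL, uR, a, tau, 0.0)   # pure Lax-Friedrichs
```

Intermediate values of $\omega$ blend the dispersive LW flux with the diffusive LF flux, which is the mechanism exploited by the UCT-GFORCE composition below.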
\paragraph{2D Formulation.}
At an edge we construct the electric field similarly keeping in mind that the equations for the magnetic field are written as
\begin{equation}
\begin{array}{l}
\displaystyle \pd{B_x}{t} + \pd{E_z}{y} = 0 \qquad (F = E_z) \\ \noalign{\medskip}
\displaystyle \pd{B_y}{t} - \pd{E_z}{x} = 0 \qquad (F = -E_z)
\end{array}
\end{equation}
For the LF electric field we have
\begin{equation}\label{eq:Ez_LF}
E^{LF}_z = \frac{1}{2}\Big[ - (v_xB_y)^E - (v_xB_y)^W + (v_yB_x)^N + (v_yB_x)^S\Big]
+ \frac{1}{2\tau} (B^E_y - B^W_y)
- \frac{1}{2\tau} (B^N_x - B^S_x)
\end{equation}
For the Lax-Wendroff state we have (at the edge):
\[
B_x^{LW} = \frac{B^N_x + B^S_x}{2} - \frac{\tau_y}{2}(E^N_z - E^S_z)
\,,\qquad
B_y^{LW} = \frac{B^E_y + B^W_y}{2} + \frac{\tau_x}{2}(E^E_z - E^W_z)
\]
Suppose that $\vec{v}^{\rm LW}$ is obtained.
Then a 2D emf can be constructed as
\[
E^{\rm LW}_{z_e} = \left(\frac{B_x^N + B_x^S}{2}\right)v_y^{\rm LW}
-\left(\frac{B_y^E + B_y^W}{2}\right)v_x^{\rm LW}
-\frac{\tau_yv_y^{\rm LW}}{2}\left(E_z^N - E_z^S\right)
-\frac{\tau_xv_x^{\rm LW}}{2}\left(E_z^E - E_z^W\right)
\]
or, writing the electric fields explicitly,
\[
\begin{array}{ll}
E^{\rm LW}_{z_e} = \displaystyle \left(\frac{B_x^N + B_x^S}{2}\right)v_y^{\rm LW}
-\left(\frac{B_y^E + B_y^W}{2}\right)v_x^{\rm LW}
&\displaystyle -\frac{\tau_yv_y^{\rm LW}}{2}\left[(B_xv_y)^N-(B_xv_y)^S\right]
+\frac{\tau_yv_y^{\rm LW}}{2}\left[(B_yv_x)^N-(B_yv_x)^S\right]
\\ \noalign{\medskip}
&\displaystyle -\frac{\tau_xv_x^{\rm LW}}{2}\left[(B_xv_y)^E-(B_xv_y)^W\right]
+\frac{\tau_xv_x^{\rm LW}}{2}\left[(B_yv_x)^E-(B_yv_x)^W\right]
\end{array}
\]
\paragraph{1D Formulation.}
Alternatively, one can operate at a zone face and build the LW state of the transverse field.
Here the fluxes are computed as:
\[
\hat{F}_x = \omega\hat{F}^{\rm LW}_x + (1-\omega)\hat{F}^{\rm LF}_x
= \omega\left[\left(\frac{B_y^L + B_y^R}{2}
- \frac{\tau_x}{2}(F_x^R - F_x^L)\right)v^{\rm LW}_x
- B_xv_y^{\rm LW} \right]
+
(1-\omega)\left[ \frac{F_x^L + F_x^R}{2}
- \frac{|\lambda_x|}{2}(B_y^R - B_y^L) \right]
\]
\[
\hat{F}_y = \omega\hat{F}^{\rm LW}_y + (1-\omega)\hat{F}^{\rm LF}_y
= \omega\left[\left(\frac{B_x^L + B_x^R}{2}
- \frac{\tau_y}{2}(F_y^R - F_y^L)\right)v^{\rm LW}_y
- B_yv_x^{\rm LW} \right]
+
(1-\omega)\left[ \frac{F_y^L + F_y^R}{2}
- \frac{|\lambda_y|}{2}(B_x^R - B_x^L) \right]
\]
where $F_x = B_yv_x - B_xv_y$ and $F_y = B_xv_y - B_yv_x$.
Using the MAPLE script (we work out just the LW contribution, since the LF flux is straightforward):
\[
\hat{F}^{\rm LW}_x = -\left(v_y^{\rm LW} - \frac{\tau_x}{2}\Delta v_yv_x^{\rm LW}\right)B_x
+ \left(\frac{\tau_xv_x^L + 1}{2}v_x^{\rm LW}\right) B_y^L
- \left(\frac{\tau_xv_x^R - 1}{2}v_x^{\rm LW}\right) B_y^R
\]
\[
\hat{F}^{\rm LW}_y = -\left(v_x^{\rm LW} - \frac{\tau_y}{2}\Delta v_xv_y^{\rm LW}\right)B_y
+ \left(\frac{\tau_yv_y^L + 1}{2}v_y^{\rm LW}\right) B_x^L
- \left(\frac{\tau_yv_y^R - 1}{2}v_y^{\rm LW}\right) B_x^R
\]
Following the guidelines given after Eq. (\ref{eq:UCT_composition}) we compute at an $x$-interface:
\[
\begin{array}{l}
\displaystyle \overline{v}_y = \omega\left(v_y^{\rm LW} - \frac{\tau_x}{2}\Delta v_yv_x^{\rm LW}\right)
+(1-\omega)\left(\frac{v_y^L + v_y^R}{2}\right)
\\ \noalign{\medskip}
a = 1/2
\\ \noalign{\medskip}
\displaystyle d^{L,R} = \omega\left(\frac{\tau_xv_x^{L,R}}{2}v_x^{\rm LW}\right)
+ (1-\omega)\frac{|\lambda_x|}{2}
\end{array}
\]
while, at the $y$-interface:
\[
\begin{array}{l}
\displaystyle \overline{v}_x = \omega\left(v_x^{\rm LW} - \frac{\tau_y}{2}\Delta v_xv_y^{\rm LW}\right)
+(1-\omega)\left(\frac{v_x^L + v_x^R}{2}\right)
\\ \noalign{\medskip}
a = 1/2
\\ \noalign{\medskip}
\displaystyle d^{L,R} = \omega\left(\frac{\tau_yv_y^{L,R}}{2}v_y^{\rm LW}\right)
+ (1-\omega)\frac{|\lambda_y|}{2}
\end{array}
\]
LW velocities are computed as
\[
v_y^{\rm LW} = \frac{(\rho v_y)^{\rm LW}}{\rho^{\rm LW}} \,,\quad
(\rho v_y)^{\rm LW} = \frac{(\rho v_y)^L + (\rho v_y)^R}{2}
- \frac{\tau_x}{2}\left[(\rho v_yv_x - B_yB_x)^R
- (\rho v_yv_x - B_yB_x)^L\right]
\]
Another possibility is to use the linear form of the equations, assuming $F(U) = AU$, to compute the LW state.
\paragraph{Choice of Courant number.}
In 1D, the global time step is determined from
\[
\Delta t = C_a\frac{\Delta x}{|\lambda_{\max}|} \quad\rightarrow\quad
\frac{\Delta x}{\Delta t} = \frac{|\lambda_{\max}|}{C_a}
> \frac{|\lambda|}{C_a} > |\lambda|
\]
where $\lambda$ is the local value.
The global time step is determined from the condition
\[
\Delta t = N_dC_a\frac{\Delta x}{\max(\lambda_x + \lambda_y + \lambda_z)}
\]
\newpage
\section{UCT-HLLI}
Using Eq. (30) of Dumbser \& Balsara (2016):
\[
F = \frac{\lambda_RF_L - \lambda_LF_R}{\lambda_R-\lambda_L}
+ \frac{\lambda_R\lambda_L}{\lambda_R-\lambda_L}(U_R - U_L)
- \varphi\frac{\lambda_R\lambda_L}{\lambda_R-\lambda_L}
R_*\delta_*L_*(U_R - U_L)
\]
\[
\Delta U_k = (L_k\cdot\Delta U)R_k
\]
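A minimal sketch of this flux, assuming the intermediate eigenvector matrices $R_*$, $L_*$ and the selector $\delta_*$ are supplied by the caller (all names are illustrative):

```python
import numpy as np

def hlli_flux(FL, FR, UL, UR, lamL, lamR, Rs, Ls, delta, phi=1.0):
    """HLLI-type flux: the HLL flux plus an anti-diffusive correction
    acting on the intermediate characteristic fields (phi=0 gives HLL)."""
    dU = UR - UL
    coeff = lamR * lamL / (lamR - lamL)
    F_hll = (lamR * FL - lamL * FR) / (lamR - lamL) + coeff * dU
    # project dU onto the selected intermediate fields and remove
    # the corresponding share of the HLL diffusion
    correction = phi * coeff * (Rs @ (delta * (Ls @ dU)))
    return F_hll - correction
```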
\newpage
\section{Energy Correction}
The energy flux could be corrected as
\[
F_{E} \to F_E - (\vec{E}\times\vec{B})_{i+\frac{1}{2}}
+ \frac{1}{2}\Big[ (\vec{E}\times\vec{B})_{i+\frac{1}{2}, j+\frac{1}{2}}
+ (\vec{E}\times\vec{B})_{i+\frac{1}{2}, j-\frac{1}{2}}\Big]
\]
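As a one-corner sketch of this correction at a given $x$-interface (scalar arguments, illustrative names):

```python
def poynting_correction(FE, ExB_face, ExB_plus, ExB_minus):
    """Replace the face-centered Poynting term in the energy flux with the
    average of the two corner values at (i+1/2, j+1/2) and (i+1/2, j-1/2)."""
    return FE - ExB_face + 0.5 * (ExB_plus + ExB_minus)
```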
\section{Numerical benchmarks}
\label{sec:numerical_benchmarks}
In what follows we compare different emf-averaging schemes in terms of accuracy, robustness and dissipation properties.
Our selection includes:
\begin{itemize}
\item
the arithmetic averaging, given by Eq. (\ref{eq:Arithmetic});
\item
the CT-Contact emf averaging, given by Eq. (\ref{eq:UCT_CONTACT});
\item
the CT-Flux emf, given by Eq. (\ref{eq:CT_Flux});
\item
the UCT-HLL scheme following our new composition formula, Eq. (\ref{eq:emf2D}) with coefficients given by Eq. (\ref{eq:HLLcoeffs}).
Extensive numerical testing (including several additional tests not shown here) has demonstrated that our formulation of the UCT-HLL scheme yields essentially equivalent results to the original formulation (Eq. \ref{eq:UCT_HLL2}).
\item
the newly proposed UCT-HLLD scheme, given by Eq. (\ref{eq:emf2D}) together with Eq. (\ref{eq:UCT_HLLD_ad}) and Eq. (\ref{eq:UCT_HLLD_nu});
\item
the novel UCT-GFORCE scheme, defined by Eq. (\ref{eq:emf2D}) with transverse velocity and diffusion coefficients given by Eq. (\ref{eq:UCT_GFORCE_vt}) and (\ref{eq:UCT_GFORCE_dLR}).
\end{itemize}
During the comparison we will employ the same base scheme for all emf averaging methods.
The base scheme is chosen to be either the $2^{\rm nd}$-order Strong Stability-Preserving (SSP) Runge Kutta scheme \cite{Gottlieb_etal2001} with piecewise linear reconstruction or the $3^{\rm rd}$-order Runge-Kutta time-stepping with the $5^{\rm th}$-order monotonicity-preserving spatial reconstruction \cite{Suresh_Huynh1997}, first introduced in the context of relativistic MHD flows by \cite{DelZanna_etal2007}.
Although both base schemes are second-order accurate (reconstructions are applied direction-wise), the latter has reduced dissipation properties when compared to the former.
Unless otherwise stated, an adiabatic equation of state with specific heat ratio $\Gamma=5/3$ is adopted.
The interface Riemann solver is either the Roe solver of \cite{Cargo_Gallice1997} or the HLLD solver of \cite{Miyoshi_Kusano2005}, depending on the test, while the CFL number is $C_a = 0.4$ in 2D and $C_a = 0.3$ in 3D, unless otherwise stated.
\subsection{Field Loop Advection in two and three dimensions}
As a first test, we consider the advection of a weakly magnetized field loop in both 2D and 3D.
In the limit of a pressure-dominated plasma, the magnetic field is essentially transported as a passive scalar and the upwind properties of any multidimensional scheme can be easily inspected.
In the 2D version, computations are carried out on the rectangle $x\in[-1,1]$ and $y\in[-1/2,1/2]$ using both the $2^{\rm nd}$ and $3^{\rm rd}$ order base-schemes covered by a uniform grid of $128\times 64$ zones.
The initial condition consists of a medium with constant density and pressure, $\rho = p = 1$ while the magnetic field is initialized through the $z$-component of the vector potential:
\begin{equation}\label{eq:fl_Az}
A_z(x,y) = \left\{\begin{array}{ll}
A_0(R-r) & \quad{\rm if}\quad r < R \\ \noalign{\medskip}
0 & \quad{\rm otherwise}\,,
\end{array}\right.
\end{equation}
where $A_0 = 10^{-3}$, $R = 0.3$ while $r=\sqrt{x^2 + y^2}$.
The velocity is constant and equal to $\vec{v} = 2\hvec{e}_x + \hvec{e}_y$ so that the system is uniformly advected along the main diagonal.
Periodic boundary conditions are imposed on all sides and the Roe Riemann solver is used in the base scheme for all computations.
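The initialization can be sketched as follows, where the staggered field is obtained by differencing the node-centered potential so that the discrete divergence vanishes to machine accuracy (grid layout and function names are illustrative):

```python
import numpy as np

def init_field_loop(nx=128, ny=64, A0=1e-3, R=0.3):
    """Staggered B from the curl of A_z for the 2D field loop:
    Bx = dAz/dy lives on x-faces, By = -dAz/dx on y-faces."""
    x = np.linspace(-1.0, 1.0, nx + 1)       # node coordinates
    y = np.linspace(-0.5, 0.5, ny + 1)
    dx, dy = x[1] - x[0], y[1] - y[0]
    Xn, Yn = np.meshgrid(x, y, indexing="ij")
    r = np.sqrt(Xn**2 + Yn**2)
    Az = np.where(r < R, A0 * (R - r), 0.0)  # node-centered potential
    Bx = (Az[:, 1:] - Az[:, :-1]) / dy       # Bx =  dAz/dy
    By = -(Az[1:, :] - Az[:-1, :]) / dx      # By = -dAz/dx
    return Bx, By, dx, dy
```

With this layout the discrete divergence, built from the same four node values of $A_z$, cancels exactly up to roundoff.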
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.98\textwidth]{fl2D_mapsRK2.eps}
\caption{\footnotesize Magnetic energy (normalized to the maximum initial
value) maps for the field loop test using the linear scheme at $t=2$.
The six panels show the results obtained with different e.m.f. averaging
schemes, indicated in the upper left portion of the panel.
\label{fig:fl2D_mapsRK2}}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.98\textwidth]{fl2D_mapsRK3.eps}
\caption{Same as Fig. \ref{fig:fl2D_mapsRK2} but for RK3 time stepping and
MP5 reconstruction.
\label{fig:fl2D_mapsRK3}}
\end{figure*}
Results are shown in Fig. \ref{fig:fl2D_mapsRK2} and \ref{fig:fl2D_mapsRK3} for the $2^{\rm nd}$-and $3^{\rm rd}$-order schemes, respectively.
The amount of numerical diffusion, mostly discernible from the smearing of the loop edges, is primarily determined by the choice of the reconstruction scheme.
The smearing is greatly reduced with a higher than $2^{\rm nd}$-order reconstruction.
However, the choice of the averaging scheme shows striking differences in the shape of the loop.
Arithmetic averaging performs the worst, showing large-amplitude oscillations already visible with the linear scheme and corrupting the loop shape even more when a higher-order reconstruction is employed.
Some oscillations are also present in the CT-Flux scheme while the remaining averaging methods show oscillation-free behavior, thus indicating a sufficient amount of dissipation.
\begin{figure*}
\centering
\includegraphics[width=0.45\textwidth]{fl2D_decayRK2.eps}%
\includegraphics[width=0.45\textwidth]{fl2D_decayRK3.eps}
\caption{\footnotesize Magnetic energy decay for the 2D field loop test using
the $2^{\rm nd}$ (left) and $3^{\rm rd}$ (right) order scheme.
The different curves have been normalized to the initial energy and correspond
to Arithmetic (black), CT-Contact (green), CT-Flux (cyan), UCT-HLL (orange),
UCT-HLLD (red) and UCT-GFORCE (purple).
\label{fig:fl2D_decay}}
\end{figure*}
A more quantitative analysis is provided in Fig. \ref{fig:fl2D_decay} where we plot the total integrated magnetic energy as a function of time for the selected schemes.
The decay provides a measure of the scheme dissipation properties but not necessarily of its stability.
Indeed, arithmetic averaging exhibits the smallest decay rate while, at the second-order level, CT-Contact and CT-Flux provide the optimal level of dissipation.
On the other hand, with a higher-order reconstruction, the newly proposed UCT-HLLD scheme provides the least amount of dissipation while still ensuring stability of the integration, whereas CT-Contact yields slightly more diffusive results.
\begin{figure*}[!ht]
\centering
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_arithmetic_RK2.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_ct_contact_RK2.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_ct_flux_RK2.eps}
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_uct_hll_RK2.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_uct_hlld_RK2.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_uct_gforce_RK2.eps}
\caption{\footnotesize Volume rendering of the magnetic energy at $t=1$ for the 3D
field loop test using the $2^{\rm nd}$-order scheme.
The panel order is the same used for Fig. \ref{fig:fl2D_mapsRK2} and the
corresponding emf-averaging scheme is reported in the top left corner of
each panel.
While the color-scale has been chosen to be the same for all plots,
the maximum and minimum values are reported below the legend.
\label{fig:fl3D_mapsRK2}}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_arithmetic_RK3.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_ct_contact_RK3.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_ct_flux_RK3.eps}
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_uct_hll_RK3.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_uct_hlld_RK3.eps}%
\includegraphics[trim=0 20 180 20, width=0.3\textwidth]{fl3D_uct_gforce_RK3.eps}
\caption{\footnotesize Same as Fig. \ref{fig:fl3D_mapsRK2} but for RK3
time stepping and MP5 reconstruction.
\label{fig:fl3D_mapsRK3}}
\end{figure*}
Computations have been repeated using a three-dimensional configuration as described in \cite{GS2008} (see also \cite{MigTze2010} and, more recently, \cite{Minoshima_etal2019}).
The domain is now chosen to be $x,y\in[-1/2,1/2]$ and $z\in[-1,1]$ with periodic boundary conditions and uniform flow velocity $\vec{v}=(1,\, 1,\, 2)$.
The initial magnetic field can be obtained by rotating the original 2D frame by an angle $\gamma$ around the $y$ axis.
The relation between the unrotated coordinates $(x',y',z')$ and the actual computational coordinates $(x,y,z)$ is given by
\begin{equation}
\left\{\begin{array}{l}
\displaystyle x' = x\cos\gamma + z\sin\gamma \\ \noalign{\medskip}
\displaystyle y' = y \\ \noalign{\medskip}
\displaystyle z' = -x\sin\gamma + z\cos\gamma \,.
\end{array}\right.
\end{equation}
Coordinates in the primed frame must satisfy periodicity in the range $x'/L'_x\in[-1/2,1/2]$, $y'/L'_y\in[-1/2,1/2]$.
This is achieved by modifying
\begin{equation}
x' \leftarrow x' - L'_x\,{\rm floor}\left(\frac{x'}{L'_x} + \frac{1}{2}\right) \,,\qquad
y' \leftarrow y' - L'_y\,{\rm floor}\left(\frac{y'}{L'_y} + \frac{1}{2}\right) \,,
\end{equation}
where $L'_x = 2/\sqrt{5}$, $L'_y = 1$.
We then define the vector potential in the primed frame using Eq. (\ref{eq:fl_Az}) with $x'$ and $y'$ used as arguments.
The inverse transformation is applied in order to recover the magnetic vector potential in the rotated frame:
\begin{equation}
\vec{A} = A'_z(-\sin\gamma,\, 0,\, \cos\gamma) \,.
\end{equation}
We choose $\gamma$ so that $\tan\gamma = 1/2$.
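A sketch of the coordinate mapping and periodic wrap described above (names are illustrative):

```python
import numpy as np

def unrotated_coords(x, y, z, gamma=np.arctan(0.5)):
    """Map computational (x, y, z) to the unrotated frame and apply the
    periodic wrap with L'_x = 2/sqrt(5), L'_y = 1."""
    Lxp, Lyp = 2.0 / np.sqrt(5.0), 1.0
    xp = x * np.cos(gamma) + z * np.sin(gamma)
    yp = np.asarray(y, dtype=float)
    xp = xp - Lxp * np.floor(xp / Lxp + 0.5)   # wrap into [-L'_x/2, L'_x/2)
    yp = yp - Lyp * np.floor(yp / Lyp + 0.5)   # wrap into [-L'_y/2, L'_y/2)
    return xp, yp
```

The vector potential $A'_z(x', y')$ of the 2D problem is then evaluated at these wrapped coordinates.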
Figures \ref{fig:fl3D_mapsRK2} and \ref{fig:fl3D_mapsRK3} show volume renderings of magnetic pressure (for different emf) at $t=1$ obtained, respectively, with the $2^{\rm nd}$- and $3^{\rm rd}$-order base schemes and a resolution of $64\times64\times128$ zones.
Our results agree with the 2D expectations showing a very similar trend.
Arithmetic averaging still yields insufficient dissipation leading to severe distortions and oscillations which are amplified when switching from the $2^{\rm nd}$- to the $3^{\rm rd}$-order scheme.
This eventually leads to the formation of an unstable checkerboard pattern (top left panel of Fig. \ref{fig:fl3D_mapsRK3}) and the disruption of the loop.
Modest fluctuations are also visible with the CT-Flux scheme, although the integration (with both the $2^{\rm nd}$- and $3^{\rm rd}$-order schemes) remains stable.
The decay of magnetic energy, shown in the top panels of Fig. \ref{fig:fl3D_decay}, confirms the same trend already established in the 2D case although discrepancies between schemes are less pronounced in the 3D case.
As pointed out by \cite{GS2008}, the component of magnetic field along the loop cylindrical axis should be zero analytically.
At the numerical level, however, this is verified only at the truncation level of the scheme as indicated by the bottom panels in Fig. \ref{fig:fl3D_decay}, where we plot the error $\av{|B'_z|}/B_0$ for the different schemes.
The CT-Contact and CT-Flux schemes yield larger errors (apart from arithmetic averaging, which yields the worst results) and discrepancies become more evident with the $3^{\rm rd}$-order scheme.
Overall, the three UCT averaging schemes (UCT-HLLD, UCT-GFORCE and UCT-HLL) perform best.
It is worth pointing out that the RK time stepping adopted here avoids the complexities, generally inherent to corner-transport-upwind schemes \cite{GS2008,MigTze2010}, in the calculation of the interface states, since primitive variables need not be evolved through a separate normal predictor step and no balancing source term is required in a fully conservative evolution scheme.
\begin{figure*}[!h]
\centering
\includegraphics[width=0.45\textwidth]{fl3D_decayRK2.eps}%
\includegraphics[width=0.45\textwidth]{fl3D_decayRK3.eps}
\caption{\footnotesize \emph{Top panels}: Magnetic energy decay (normalized to
the initial value) as a function of time for the 3D field loop test.
Left and right panels correspond to the $2^{\rm nd}$ and $3^{\rm rd}$ order
schemes while color (reported in the legend) keeps the same convention used
in Fig. \ref{fig:fl2D_decay} (arithmetic averaging has been omitted due to
large errors).
\emph{Bottom panels}: normalized evolution of the axial component of magnetic
field ($B'_z$) for selected schemes.
\label{fig:fl3D_decay}}
\end{figure*}
\subsection{Magnetized Current Sheet}
We now consider a particularly interesting configuration where the amount of numerical dissipation introduced by either the base scheme or the emf-averaging procedure (or both) is crucial in determining the system evolution.
The computational domain is initially filled with plasma at rest ($\vec{v}=0$) having uniform density $\rho = 1$ and a Harris current sheet is used for the magnetic field:
\begin{equation}
\vec{B}(y) = B_0\tanh\left(\frac{y}{a}\right)\hvec{e}_x\,,
\end{equation}
where $B_0=1$ and $a=0.04$ is the current sheet width.
An equilibrium configuration is constructed by counter-acting the Lorentz force with a thermal pressure gradient,
\begin{equation}
p(y) = \frac{B_0^2}{2}\left(\beta + 1\right) - \frac{B_x(y)^2}{2} \,,
\end{equation}
where $\beta = 2p_\infty/B_0^2 = 10$ is the initial plasma-beta parameter.
The equilibrium magnetic field is perturbed with
\begin{equation}
\delta\vec{B} = \epsilon B_0\left[
-\frac{1}{2}k_y\sin\left(\frac{1}{2}k_yy\right)\cos\left(k_xx\right)\hvec{e}_x
+ k_x\cos\left(\frac{1}{2}k_yy\right)\sin\left(k_xx\right)\hvec{e}_y
\right]\,,
\end{equation}
where $k_x = 2\pi/L_x$, $k_y = 2\pi/L_y$ while $\epsilon = 10^{-3}$ is the initial amplitude.
In order to fulfill the divergence-free condition to machine accuracy, we differentiate the vector potential $\delta A_z = \epsilon B_0\cos(k_yy/2)\cos(k_xx)$ in order to produce the desired perturbation.
We employ a rectangular box defined by $x\in[-1,1]$ and $y\in[-\frac{1}{2},\frac{1}{2}]$ with periodic boundary conditions in the $x$-direction and reflective conditions at the top and bottom boundaries.
We carry out two sets of computations at the resolution of $128\times64$ zones using the $2^{\rm nd}$-order base scheme with the Riemann solver of Roe (first set) and the HLLD solver (second set) at cell interfaces.
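The equilibrium and perturbation can be sketched as follows (illustrative names; in the actual setup the staggered field is obtained by differencing $\delta A_z$):

```python
import numpy as np

# Harris current sheet parameters from the text
B0, a, beta, eps = 1.0, 0.04, 10.0, 1e-3
kx, ky = 2.0 * np.pi / 2.0, 2.0 * np.pi / 1.0   # Lx = 2, Ly = 1

def equilibrium(y):
    """Magnetic field and pressure in exact force balance."""
    Bx = B0 * np.tanh(y / a)
    p = 0.5 * B0**2 * (beta + 1.0) - 0.5 * Bx**2
    return Bx, p

def perturbation(x, y):
    """delta B = curl(dAz e_z) with dAz = eps*B0*cos(ky*y/2)*cos(kx*x)."""
    dBx = -0.5 * eps * B0 * ky * np.sin(0.5 * ky * y) * np.cos(kx * x)
    dBy = eps * B0 * kx * np.cos(0.5 * ky * y) * np.sin(kx * x)
    return dBx, dBy
```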
We begin our discussion by pointing out that, in absence of a physical resistivity, the previous (unperturbed) equilibrium is a stationary solution of the ideal MHD equations and any dissipative process should be absent.
In practice, however, the discretization process introduces a numerical viscosity/resistivity which allows the current sheet to reconnect to some extent.
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{cs_maps_pb_roe.eps}
\caption{\footnotesize \emph{Left}: thermal pressure maps for the current sheet
problem at $t=30$ using the different emf-averaging schemes reported
in each panel.
Magnetic field lines are over-plotted.
Results have been produced using the $2^{\rm nd}$-order base scheme
with the Roe Riemann solver and a resolution of $128\times64$ zones.
\emph{Right}: magnetic energy (normalized to its initial value)
as a function of time.
The dotted orange line corresponds to the UCT-HLL scheme with resolution
four times larger.
The color convention (reported in the legend) is the same one adopted in
previous plots.
\label{fig:cs_maps_roe}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{cs_maps_pb_hlld.eps}
\caption{\footnotesize Same as Fig. \ref{fig:cs_maps_roe} but this
time using the $2^{\rm nd}$-order base scheme with the HLLD Riemann solver.
\label{fig:cs_maps_hlld}}
\end{figure*}
Thermal pressure maps are shown for different emf methods on the left side of Fig. \ref{fig:cs_maps_roe} at $t=30$ using the base scheme with the Roe solver.
The plot on the right side shows the corresponding volume-integrated magnetic energy as a function of time.
Magnetic reconnection takes place more rapidly for the UCT-HLL emf, followed by UCT-GFORCE, Arithmetic, CT-Flux and CT-Contact, eventually leading to the formation of a large magnetic island located across the vertical boundaries.
The rate at which field dissipation occurs depends on the amount of numerical viscosity and resistivity: more dissipative schemes will trigger reconnection events earlier.
Results obtained with the UCT-HLLD scheme, in fact, show that the amount of dissipation is considerably reduced and the layer remains more stable, as one would expect for an ideal system.
This conclusion is also supported by a high-resolution run ($512\times256$ zones) with the UCT-HLL method (dotted orange line on the right side) indicating that magnetic field dissipation takes place at later times.
When the base Riemann solver is switched to HLLD (Fig. \ref{fig:cs_maps_hlld}), no significant change is found for the UCT schemes.
However, the solution obtained with the CT-Contact (and, to a lesser extent, with the CT-Flux) emf-averaging is now considerably different, bearing closer resemblance with the UCT-HLLD scheme.
The magnetic energy now remains nearly constant not only for the UCT-HLLD scheme (red curve) but also with the CT-Contact emf (green curve).
This apparently odd behavior may be understood by inspecting the amount of numerical dissipation inherited by the CT-Contact (or CT-Flux) scheme from the base 1D solver.
When the Roe Riemann solver is employed, contributions to the diffusion term are given by jumps in magnetic field \emph{and} thermal pressure when sweeping along the $y$-direction.
These contributions enter in the momentum flux \emph{and} the induction system as well.
Conversely, with the 1D HLLD Riemann solver, dissipation terms are proportional to the jump in magnetic field only and this contribution is confined to the electric field alone.
The CT-Contact scheme will therefore carry a different amount of numerical viscosity depending on which 1D Riemann solver is selected.
Conversely, the UCT-HLLD introduces the same amount of numerical viscosity regardless of the 1D base Riemann solver.
From the discussion after Eq. (\ref{eq:UCT_HLLD_nu}), the order of magnitude of the dissipation term is $\tilde{\chi} \approx (v_x - \lambda^*) \approx O(\epsilon)$ and thus smaller when compared to the Roe dissipation matrix.
This test clearly substantiates that the choice of the emf averaging scheme is as crucial as the choice of the interface Riemann solver in the evolution of magnetized systems.
\subsection{Orszag Tang Vortex}
The Orszag-Tang is a standard numerical benchmark in the context of the ideal MHD equations and although an exact solution does not exist, its straightforward implementation has made it an attractive numerical benchmark for inter-scheme comparison.
The problem consists of a doubly-periodic square domain, $x,y \in[0,2\pi]$ with uniform density and pressure, $\rho = 25/9$ and $p=5/3$ with velocity and magnetic field vectors given by
\begin{equation}
\vec{v} = -\sin(y)\hvec{e}_x + \sin(x)\hvec{e}_y \,,\qquad
\vec{B} = -\sin(y)\hvec{e}_x + \sin(2x)\hvec{e}_y \,.
\end{equation}
Although most numerical schemes give comparable results at time $t = \pi$, the subsequent evolution has been discussed by only a few authors (see, e.g., \cite{Balsara1998, Lee_Deane2009, Mignone_etal2010_ho, Waagan_etal2011}).
Here we carry out such an investigation by considering a later evolutionary time, $t=2\pi$, and by comparing different sets of emf-averaging schemes at grid resolutions of $128^2$ or $256^2$ zones.
We employ both the $2^{\rm nd}$- and $3^{\rm rd}$-order base schemes using the HLLD Riemann solver.
As we shall see, the choice of the numerical method can appreciably impact the evolution of the system at this later time.
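The initial condition amounts to the following (cell-centered sketch; in practice the staggered field would be assigned through a vector potential):

```python
import numpy as np

def orszag_tang_init(N=128):
    """Initial state of the Orszag-Tang vortex on [0, 2pi]^2 at cell centers."""
    s = (np.arange(N) + 0.5) * 2.0 * np.pi / N
    x, y = np.meshgrid(s, s, indexing="ij")
    rho = np.full((N, N), 25.0 / 9.0)
    p = np.full((N, N), 5.0 / 3.0)
    vx, vy = -np.sin(y), np.sin(x)
    Bx, By = -np.sin(y), np.sin(2.0 * x)
    return rho, p, vx, vy, Bx, By
```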
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.92\textwidth]{ot_maps_RK2_128.eps}\\
\caption{\footnotesize Pressure map distributions for the Orszag-Tang vortex
at $t=2\pi$ using the $2^{\rm nd}$-order scheme with a grid resolution of
$128\times 128$ zones.
The top (bottom) row shows the results obtained with
Arithmetic, CT-Contact and CT-Flux (UCT-HLL, UCT-HLLD and UCT-GFORCE) schemes.
\label{fig:ot_maps_RK2_128}}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.92\textwidth]{ot_maps_RK3_256.eps}\\
\caption{\footnotesize Same as Fig. \ref{fig:ot_maps_RK2_128} but for
the $3^{\rm rd}$-order scheme with $256\times 256$ grid zones.
\label{fig:ot_maps_RK3_256}}
\end{figure*}
The initial vorticity distribution spins the fluid clockwise leading to the steepening of density perturbations into shocks around $t \sim 0.8$.
The dynamics is then regulated by multiple shock-vortex interactions leading, by $t\sim 2.6$ to the formation of a horizontal current sheet at the center of the domain.
Here magnetic energy is gradually dissipated and the current sheet twists leading, at $t=2\pi$ to the structures observed in Fig. \ref{fig:ot_maps_RK2_128} and \ref{fig:ot_maps_RK3_256} for the $2^{\rm nd}$-order and $3^{\rm rd}$-order base schemes with resolution of $128^2$ and $256^2$ zones, respectively.
The most noticeable difference lies at the center of the computational domain where the formation of a magnetic island (an O-point) can be discerned when using the Arithmetic, CT-Contact or UCT-HLLD averaging schemes while it is absent from the other solvers.
The presence of the central island may be attributed to the amount of numerical resistivity that can trigger tearing-mode reconnection episodes across the central current sheet, resulting in a final merging in this larger island.
For sufficiently low numerical dissipation, all schemes should eventually exhibit such a feature.
This may appear in contradiction with what is required for the magnetized current sheet test, where the initial equilibrium was expected to be stable in ideal MHD.
Here, however, the situation is highly dynamic with the central current sheet undergoing a fast thinning process (induced by converging shock fronts), and it is known that only in the presence of a sufficiently high local Lundquist number (i.e. low numerical dissipation in the ideal MHD case of this test) the tearing instability is expected to develop on the \emph{ideal} (Alfv\'enic) timescales, see \cite{Landi_etal2015} and \cite{Papini_etal2019}.
In our computations we found that Arithmetic averaging, CT-Contact and UCT-HLLD show the formation of the central island for the two resolutions considered here with both the $2^{\rm nd}$- and $3^{\rm rd}$-order schemes.
A quantitative comparison is given in Table \ref{tab:ot_compare}, where we list the values of $\mu_p = p_{\max}/\av{p}$ for different computations.
Here $p_{\max}$ is the maximum pressure value in the region $4\pi/10 < x,\, y < 6\pi/10$ while $\av{p}$ is the average value in the entire computational domain.
For $\mu_p \lesssim 2$ no island is formed, while for $\mu_p\gtrsim2$ the island extent is roughly proportional to $\mu_p$.
CT-Contact and UCT-HLLD perform similarly yielding the smallest amount of dissipation while retaining numerical stability (no negative pressure has been encountered).
The UCT-HLL scheme is the most diffusive scheme showing the formation of the central O-point only with $256^2$ zones and MP5 reconstruction.
The UCT-GFORCE scheme performs similarly to the CT-Flux average and it is superior to the UCT-HLL scheme.
Note that the value of $\mu_p$ increases by more than $\sim 30\%$ when doubling the resolution for all methods except for arithmetic averaging which, on the other hand, yields insufficient dissipation as witnessed by several pressure fixes with the $3^{\rm rd}$-order scheme at the highest resolution employed.
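The diagnostic itself can be sketched as (illustrative names):

```python
import numpy as np

def island_indicator(p, x, y):
    """mu_p = p_max / <p>: maximum pressure inside the central box
    4pi/10 < x, y < 6pi/10 over the domain-averaged pressure."""
    box = (x > 0.4 * np.pi) & (x < 0.6 * np.pi) \
        & (y > 0.4 * np.pi) & (y < 0.6 * np.pi)
    return float(p[box].max() / p.mean())
```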
\begin{table}
\centering
\begin{tabular}{ lccccccc }
\hline
Scheme & Res. & Arithmetic & CT-Contact & CT-Flux & UCT-HLL & UCT-HLLD & UCT-GFORCE \\
\hline
RK2+Lin & $128^2$ & 2.16 & 2.49 & 1.40 & 1.21 & 2.54 & 1.25 \\ \noalign{\smallskip}
RK2+Lin & $256^2$ & 2.78 & 2.78 & 2.27 & 1.44 & 2.90 & 2.13 \\ \noalign{\smallskip}
RK3+MP5 & $128^2$ & 2.52 & 3.45 & 2.09 & 1.30 & 3.33 & 2.03 \\ \noalign{\smallskip}
RK3+MP5 & $256^2$ & 2.89 & 4.29 & 3.29 & 3.75 & 4.31 & 3.87 \\
\hline
\end{tabular}
\caption{\footnotesize Normalized pressure maximum $\mu_p=p_{\max}/\av{p}$ for
the Orszag-Tang vortex at $t=2\pi$ using selected CT averaging techniques
(columns) and different base-schemes and resolution (rows).
An island forms only when $\mu_p \gtrsim 2$.}
\label{tab:ot_compare}
\end{table}
Finally, it is worth mentioning that the employment of the Roe Riemann solver (instead of HLLD) in the $3^{\rm rd}$-order base scheme led to integration failures with the Arithmetic and CT-Flux averaging schemes.
No sign of numerical instability was discerned with the other emf-solvers.
\subsection{Three-Dimensional Blast Wave}
To assess the robustness of the proposed averaging schemes in a strongly magnetized plasma, we now analyze the blast wave problem in three dimensions.
Despite its simplicity, the blast wave problem is a particularly effective benchmark to test the solver's ability to handle MHD wave degeneracies parallel and perpendicular to the field orientation.
Our configuration recalls the original paper of Balsara \& Spicer \cite{Balsara_Spicer1999}, where the test was first introduced, and consists of a unit cube filled with constant density $\rho_0=1$ and pressure $p_0=0.1$, threaded by a uniform magnetic field
\begin{equation}
\vec{B} = B_0\left( \sin\theta\cos\phi,\,
\sin\theta\sin\phi,\,
\cos\theta \right) \,,
\end{equation}
where $B_0 = 100/\sqrt{4\pi}$, $\theta = \pi/2$ and $\phi=\pi/4$.
The plasma $\beta$ in the ambient medium is therefore $\beta \approx 2.5\times 10^{-4}$ making this test particularly challenging.
A sphere with radius $r=0.1$ is filled with a much higher pressure, $p_1 = 10^3$, and the system is evolved until $t=0.01$.
We adopt an adiabatic equation of state with specific heat ratio $1.4$ and set outflow boundary conditions everywhere.
The MHD equations are solved on the unit cube $[-1/2,1/2]^3$ using the $2^{\rm nd}$-order base scheme with the Roe Riemann solver and different emf-averaging schemes with a resolution of $192^3$ zones.
As in the original paper by \cite{Balsara_Spicer1999}, in order to avoid the occurrences of negative pressures\footnote{No scheme preserves energy positivity without energy correction for this test, not even with a minmod limiter.}, the total energy density after each step has to be locally redefined by replacing, in the magnetic term, the zone-centered magnetic field (updated using a standard Godunov-step) with the arithmetic average of the staggered fields, $E \leftarrow E - (\vec{B}_{ {\boldsymbol{c}} }^2 - \overline{\vec{B}}_f^2)/2$ where, e.g., $\overline{B}_x = (B_{ {\mathbf{x}_f} } + B_{ {\mathbf{x}_f} -\hvec{e}_x})/2$.
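In one dimension, the correction reduces to the following sketch (array layout and names are illustrative):

```python
import numpy as np

def energy_correction_1d(E, Bc, Bxf):
    """Replace the cell-centered magnetic term in the total energy with the
    face-average of the staggered field.  Bxf[i] is the field at interface
    i+1/2, so zone i is bounded by faces Bxf[i] and Bxf[i+1]."""
    Bavg = 0.5 * (Bxf[1:] + Bxf[:-1])    # average of the two bounding faces
    return E - 0.5 * (Bc**2 - Bavg**2)
```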
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{blast_compare.eps}
\caption{\footnotesize Results for the 3D blast wave problem at $t=0.01$
using $192^3$ grid zones.
In the four leftmost panels we show 2D colored maps at $z=0$ of density and
log pressure (top), specific kinetic energy and magnetic energy (bottom)
obtained with the UCT-HLLD scheme.
In the top right panel we plot the magnetic energy densities along the
main diagonal ($x_d=x\sqrt{2}$) at $z=0$ obtained with different emf solvers
(solid colored lines reported in the legend).
The dotted black line indicate the solution obtained with the
UCT-HLLD solver at twice the resolution.
In the bottom right panel we plot 1D slices of density along the vertical
axis.
The color pattern is the same already used in other figures.
\label{fig:blast_compare}}
\end{figure*}
The four leftmost panels of Fig. \ref{fig:blast_compare} show 2D maps, taken as $xy$ slices at $z=0$, of various gas-dynamical quantities obtained with the UCT-HLLD solver.
The explosion is delimited by an outer fast forward shock and the presence of a magnetic field makes the propagation highly anisotropic by compressing the gas in the direction parallel to the field.
In the perpendicular direction the outer fast shock becomes magnetically dominated with very weak compression.
Results reproduced with the other emf schemes are also very similar.
A more careful comparison, shown in the top rightmost panel, reveals some differences in the magnetic energy plot along the main diagonal in the $z=0$ plane.
Here a dip is formed around $x_d \approx 0.2$, where $x_d$ is the distance from a point on the diagonal to the coordinate origin.
The dip becomes more pronounced as the numerical dissipation of the scheme is reduced.
Indeed, UCT-HLL yields the highest minimum followed by UCT-GFORCE, CT-Contact, CT-Flux and UCT-HLLD.
This trend is confirmed at twice the grid resolution ($384^3$ zones, dotted black line), leading to the formation of an even deeper dip.
Finally, in the bottom rightmost panel we compare the density profiles along the $z$-axis for different schemes, showing only minor differences.
\subsection{Kelvin-Helmholtz Instability}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{kh_growth.eps}
\caption{\footnotesize
Growth rates, measured as $\delta v_y = (\max(v_y) - \min(v_y))/2$, for the
2D KH instability.
\emph{Top panels:} $\delta v_y$ for the $2^{\rm nd}$- and $3^{\rm rd}$-order
schemes using the different emf solvers at the resolution of $128\times 256$ zones.
\emph{Bottom panels:} $\delta v_y$ for the $2^{\rm nd}$- and $3^{\rm rd}$-order
schemes using $256\times 512$ zones.
\label{fig:kh_growth}}
\end{figure*}
The Kelvin-Helmholtz instability (KH) is driven by the relative motion between two fluids.
In the presence of a magnetic field aligned with the flow direction, the instability is typically reduced by the stabilizing action of magnetic tension.
In the following test, we consider a 2D Cartesian domain $x\in[0,1]$, $y\in[-1,1]$ initially set up to contain a velocity shear layer,
\begin{equation}
v_x = \frac{M}{2}\tanh(y/a) \,,
\end{equation}
where $M$ is the sonic Mach number while $a=0.01$ is the shear width.
We normalize velocities to the speed of sound ($\rho=1$, $p=1/\Gamma$), while the magnetic field is aligned with the flow direction, $\vec{B} = B_0\hvec{e}_x$, with the field strength parametrized using the Alfv\'en velocity, $B_0 = v_A\sqrt{\rho}$.
A random perturbation is applied to seed the instability,
\begin{equation}
v_y = \epsilon M\exp\left[-(y/20a)^2\right] \,,
\end{equation}
where $\epsilon\in [-10^{-5},10^{-5}]$ is a random number.
The boundary conditions are periodic in the $x$-direction while a reflective boundary is applied at $y=\pm1$.
We tune the parameters by choosing $M=1$, $v_A = 1/2$ and by integrating the MHD equations until $t=50$ using the HLLD Riemann solver in the base scheme.
This choice of parameters makes the test particularly severe since the configuration is only weakly unstable.
The maximum growth rate that can fit in the computational box, indeed, has been found by repeating the linear analysis of Miura \& Pritchett (1982) \cite{Miura_Pritchett1982} yielding ${\rm Im}(\omega a/c_s) \approx 0.395\times 10^{-2}$, where $\omega$ is the growth rate.
Computations are repeated using two different grid resolutions, $N_x = 128$ and $N_x = 256$ ($N_y=2N_x$) so that the shear width is resolved on $\sim 4$ and $\sim 8$ zones, respectively.
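The initial state can be sketched as follows (illustrative names; the random seed is fixed here only for reproducibility):

```python
import numpy as np

def kh_init(y, M=1.0, a=0.01, vA=0.5, Gamma=5.0 / 3.0, rng=None):
    """Shear-layer initial state for the KH test in sound-speed units,
    with a random vy perturbation localized around the layer."""
    rng = np.random.default_rng(0) if rng is None else rng
    rho = np.ones_like(y)
    p = np.full_like(y, 1.0 / Gamma)
    vx = 0.5 * M * np.tanh(y / a)
    eps = rng.uniform(-1e-5, 1e-5, size=y.shape)   # random amplitude
    vy = eps * M * np.exp(-(y / (20.0 * a))**2)
    Bx = vA * np.sqrt(rho)                         # field along the flow
    return rho, p, vx, vy, Bx
```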
Fig. \ref{fig:kh_growth} shows the perturbation growth, measured as $\delta v_y = (\max(v_y) - \min(v_y) )/2$, as a function of time for the $2^{\rm nd}$ and $3^{\rm rd}$-order base schemes and the selected emf averaging solvers.
Perturbations grow linearly until the system turns into a nonlinear phase around $t\sim 37$.
At low resolution, the UCT-HLL scheme fails to evolve into a fully developed unstable state, the UCT-GFORCE scheme yields a reduced instability growth, and only with the CT-Contact, CT-Flux and UCT-HLLD schemes do perturbations follow a linear amplification phase in closer agreement with the analytical prediction.
Here we find ${\rm Im}(\omega) \approx 0.35$ and ${\rm Im}(\omega) \approx 0.31 - 0.33$ for the $2^{\rm nd}$- and $3^{\rm rd}$-order schemes, respectively.
Growth rates are slightly lower when switching to higher-order reconstruction owing, at this coarse resolution, to the excitation of short-wavelength perturbation modes with smaller growth rates.
Better convergence is achieved at twice the resolution where, for the $2^{\rm nd}$-order scheme, we find $0.38\lesssim {\rm Im}(\omega) \lesssim 0.40$ whereas, for the $3^{\rm rd}$-order method, values move closer yielding a reduced inter-scheme dispersion with $0.38\lesssim {\rm Im}(\omega) \lesssim 0.39$.
The converged growth rate, at the resolution of $512\times1024$, is ${\rm Im}(\omega) \approx 0.393$.
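The quoted values of ${\rm Im}(\omega)$ can be obtained from a log-linear fit of $\delta v_y(t)$ over the linear phase; a minimal version of such a fit (shown here on synthetic data, not on actual simulation output) is:

```python
import numpy as np

def growth_rate(t, dvy, window=(10.0, 35.0)):
    """Least-squares slope of log(dvy) within the linear-growth window,
    i.e., an estimate of Im(omega)."""
    m = (t >= window[0]) & (t <= window[1])
    slope, _ = np.polyfit(t[m], np.log(dvy[m]), 1)
    return slope
```

Applied to $\delta v_y = (\max(v_y)-\min(v_y))/2$ sampled over time, the slope directly estimates the growth rate compared against the linear-theory value.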
Finally, in Fig. \ref{fig:kh_maps}, we show colored maps of $B^2/\rho$ for the $2^{\rm nd}$-order (low resolution, top panels) and the $3^{\rm rd}$-order (high resolution, bottom panels) computations at $t=50$.
Note that no sign of instability is visible at low resolution for the UCT-HLL and UCT-GFORCE schemes.
On the other hand, the instability is fully developed and comparable results are obtained at twice the resolution with the $3^{\rm rd}$-order base scheme for all emf methods.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.9\textwidth]{kh_maps.eps}
\caption{\footnotesize
Final evolutionary stage ($t=50$) for the 2D Kelvin-Helmholtz test.
Colored maps show $B^2/\rho$ for selected emf with the $2^{\rm nd}$-order
scheme using $128\times 256$ zones (top) and the $3^{\rm rd}$-order
scheme with $256\times 512$ grid points (bottom).
\label{fig:kh_maps}}
\end{figure*}
\subsection{Magnetorotational Instability in the Shearing-Box Model}
As a final application we compare the different emf solvers by investigating the nonlinear evolution of the magneto-rotational instability (MRI) in the shearing-box approximation model.
The implementation of the shearing-box equations (which provides a local Cartesian description of a differentially rotating disk) for the PLUTO code may be found in \cite{Mignone_etal2012}.
The initial background state consists of a uniform density distribution, $\rho_0=1$, and a linear velocity shear $\vec{v} = -q\Omega x\hvec{e}_y$.
Here $\Omega=1$ is the orbital frequency while $q=3/2$ (typical of a Keplerian profile) gives a local measure of the differential rotation.
A net magnetic flux threads the computational domain and it is initially aligned with the vertical direction, $\vec{B}=B_0\hvec{e}_z$ with strength
\begin{equation}
B_0 = c_s\sqrt{\frac{2\rho_0}{\beta}} \,.
\end{equation}
We choose $c_s=4.88$ (isothermal sound speed) and $\beta=8\times 10^3$ so that approximately one most-unstable MRI wavelength fits in the vertical direction \cite{Pessah_etal2007, Bodo_etal2011}.
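For concreteness, these parameters imply the following field strength and vertical Alfv\'en speed (a plain numerical check, not code from the paper):

```python
import math

# Shearing-box parameters quoted in the text.
rho0, Omega, q = 1.0, 1.0, 1.5
cs, beta = 4.88, 8.0e3

B0 = cs * math.sqrt(2.0 * rho0 / beta)   # B0 = cs sqrt(2 rho0 / beta)
vA = B0 / math.sqrt(rho0)                # vertical Alfven speed
print(f"B0 = {B0:.5f}, vA = {vA:.5f}")   # -> B0 = 0.07716, vA = 0.07716
```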
The shearing-box equations are solved on a 3D Cartesian box with $x,y\in[-2L,2L]$ and $z\in[-L/2,L/2]$ with periodic boundary conditions in the vertical and azimuthal ($y$) directions while shearing-sheet conditions are imposed at the $x$-boundaries.
Integrations are carried out at a resolution of $144\times144\times 36$ zones for $\approx 100$ rotations ($t_{\rm stop} = 628$) using the Roe Riemann solver with an isothermal equation of state.
We employ the orbital advection scheme to subtract the linear shear contribution from the total velocity so that the MHD equations are evolved only in the residual velocity, thus giving a substantial speedup of the algorithm, see \cite{Mignone_etal2012}.
\begin{figure*}[!ht]
\centering
\includegraphics[width=0.33\textwidth]{sb3D_ct_contact_RK2.eps}
\includegraphics[width=0.33\textwidth]{sb3D_uct_hlld_RK2.eps}
\includegraphics[width=0.33\textwidth]{sb3D_uct_hll_RK2.eps}
\caption{\footnotesize Three-slice cut of $B_z$ at $t = 628$ for the shearingbox
test problem, using the CT-Contact (left), UCT-HLLD (middle) and
UCT-HLL (right) schemes.
\label{fig:sb_maps}}
\end{figure*}
\begin{figure*}[!h]
\centering
\includegraphics[width=0.9\textwidth]{sb_compare.eps}
\caption{\footnotesize Maxwell stresses (normalized to $\rho_0c_s^2$) as
a function of time for the 3D shearingbox model.
In the left panel we compare five different averaging schemes at the nominal
resolution of $144^2\times36$ zones while in the right panel the comparison
is repeated with results obtained at twice the resolution with UCT-HLL and
UCT-HLLD.
Different colors correspond to different emf-averaging schemes
(as reported in the legend).
\label{fig:sb_growth}}
\end{figure*}
The initial transient phase is accompanied by an exponential growth of the magnetic field followed by a transition to a nonlinear turbulent state.
The vertical component of magnetic field at $t=628$ is shown in Fig. \ref{fig:sb_maps}, comparing the results obtained with the CT-Contact, UCT-HLLD and UCT-HLL schemes.
The maps indicate a qualitatively larger amount of fine-scale structure with the former two schemes with respect to the latter.
A more quantitative measure is provided by the plots of the volume-integrated Maxwell stress $w_{xy} = -\av{B_xB_y}$ (normalized to $\rho_0 c_s^2$) as a function of time shown in the left panel of Fig. \ref{fig:sb_growth}.
The time-averaged stress value, in the range $\Omega t/2\pi \in[10,100]$, is $\overline{w}_{xy}\sim 0.21$ for UCT-HLLD, $\sim 0.20$ for the CT-Contact and CT-Flux schemes.
Lower values, respectively equal to $\sim 0.07$ and $\sim 0.10$, are found when using the UCT-HLL or UCT-GFORCE methods.
Previous studies (see, e.g., \cite{Bodo_etal2011}) indicate that stresses increase with resolution and should eventually converge as the mesh spacing becomes sufficiently fine.
In this sense, lower values of $w_{xy}$ imply larger numerical diffusion.
This conclusion is supported by computations carried out at twice the resolution ($288\times288\times 72$) using the UCT-HLL and UCT-HLLD emf-averaging schemes, for which the stresses are plotted in the right panel of Fig. \ref{fig:sb_growth}.
In this case the time-averaged value of $w_{xy}$ increases to $\sim 0.10$ (for UCT-HLL) and to $\sim 0.27$ (for UCT-HLLD).
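The diagnostic used in this comparison can be sketched as follows (array and function names are chosen here; the actual analysis pipeline is not part of the paper):

```python
import numpy as np

def maxwell_stress(Bx, By, rho0=1.0, cs=4.88):
    """Volume-averaged Maxwell stress w_xy = -<Bx By>, normalized to rho0 cs^2."""
    return -np.mean(Bx * By) / (rho0 * cs**2)

def time_averaged_stress(times, w_xy, Omega=1.0, orbit_range=(10.0, 100.0)):
    """Average w_xy(t) over a window expressed in orbits, Omega t / (2 pi)."""
    orbits = times * Omega / (2.0 * np.pi)
    m = (orbits >= orbit_range[0]) & (orbits <= orbit_range[1])
    return float(np.mean(w_xy[m]))
```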
\section{Summary}
\label{sec:summary}
The systematic construction and comparison of averaging schemes to evaluate the electromotive force (emf) at zone edges in constrained transport MHD has been the subject of this work.
The upwind constrained transport (UCT) formalism, originally developed by \cite{Londrillo_DelZanna2004}, has been reconsidered under a more general perspective where the edge-averaged electric field can be constructed using the information available from 1D face-centered, component-wise Riemann solvers.
This approach offers enhanced flexibility allowing new upwind techniques to be incorporated in CT-MHD schemes at the modest cost of storing transverse velocity, weight coefficients for the fluxes and diffusion terms for the magnetic field.
Four popular schemes, namely Arithmetic, CT-Contact, CT-Flux and UCT-HLL, together with two novel algorithms have been presented and compared in terms of accuracy, robustness and dissipation properties.
Among the newly introduced schemes, the UCT-HLLD and UCT-GFORCE schemes build into the UCT framework the proper combination of upwind fluxes derived, respectively, from the HLLD Riemann solver of \cite{Miyoshi_Kusano2005} and the GFORCE scheme of \cite{Toro_Titarev2006}.
Through a series of 2D and 3D numerical tests and benchmark applications, our conclusions can be summarized as follows:
\begin{itemize}
\item
The choice of emf averaging procedure at zone edges can be as crucial as the choice of the Riemann solver at zone interfaces of the underlying base scheme.
This becomes particularly true in problems where the magnetic field has a dominant role on the system dynamics.
\item
Averaging schemes with insufficient dissipation (arithmetic averaging) may easily corrupt the solution leading to spurious numerical artifacts or to the occurrences of nonphysical values such as negative pressures.
\item
The simple recipe of doubling the dissipation term (e.g. CT-Flux and CT-Contact), suggested by the need of recovering the proper directional biasing for grid-aligned configurations, still represents an efficient and valid option, although not strictly compliant with the UCT formalism.
\item
The newly proposed UCT-HLLD scheme presents low-diffusion and excellent stability properties when used in conjunction with the HLLD Riemann solver as well as the Roe solver.
The amount of numerical dissipation is comparable to, or even lower than, that of the CT-Contact scheme, with the advantage that UCT-HLLD can be extended beyond $2^{\rm nd}$-order schemes.
\item
The UCT-GFORCE scheme, also introduced here for the first time, has dissipation properties intermediate between the UCT-HLL and UCT-HLLD (or CT-Contact) averaging schemes.
\end{itemize}
Contrary to non-UCT schemes, where the amount of numerical dissipation is inherited from the base Riemann solver applied at zone interfaces, our formulation yields a single-valued, continuous numerical flux function with stable upwind properties along each direction.
Whether this is an advantage or not should be discerned for the particular application at hand.
Although our results have been presented in the context of $2^{\rm nd}$-order schemes in which the Riemann solver is applied at cell interfaces, our formulation can be naturally extended to higher-order finite-volume or finite-difference schemes.
Obviously, the component-wise Riemann solvers employed dimension-by-dimension as in the present work, in spite of their simplicity, may not be the optimal choice (as opposed to truly multidimensional solvers) in situations requiring unstructured triangular or geodesic meshes to treat geometrically complex MHD flows \cite[e.g.][]{Balsara_Dumbser2015,Balsara_etal2019}.
Forthcoming works will extend this formalism to the relativistic case as well, along the lines of \cite{DelZanna_etal2003,DelZanna_etal2007,Mignone_Bodo2006,Mignone_etal2009}, aiming at improving on the simple HLL choice which is nowadays the adopted standard \cite[e.g.][]{Porth_etal2019}.
\section{ACKNOWLEDGEMENTS}
{
This work was supported in parts by
NSFC (61872250, 62132021, U2001206, U21B2023, 62161146005),
GD Talent Plan (2019JC05X328),
GD Natural Science Foundation (2021B1515020085),
DEGP Key Project (2018KZDXM058, 2020SFKC059),
National Key R\&D Program of China (2018AAA0102200),
Shenzhen Science and Technology Program (RCYX20210609103121030, RCJC20200714114435012, JCYJ20210324120213036),
and Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ).}
\section{Discussion and Future Work}
\input{figures/failure}
We have presented an RL-based method to jointly learn grasp and motion planning for high-DOF grippers.
We advocate the use of Interaction Bisector Surface to characterize the fine-grained spatial relationship between the gripper and the target object. We found that IBS is surprisingly effective as a state/observation representation of deep RL since it well informs the fine-grained control of each finger with spatial relation against the target object. Together with a few critical designs of the learning model and strategy, our method learns high-quality grasping with smooth reaching motion.
\rz{
Our method has the following limitations:
\begin{itemize}
\item Our RL model adopts success signal in execution as well as Q1 metric for measuring grasp quality and does not explicitly define the naturalness of a grasp. Therefore, there is the case that a generated grasp is successful but looks unnatural. For example, some successful grasps could have an unbent finger that looks implausible; see Figure~\ref{fig:failure}(a).
\item The other reward we use is devised to avoid collision during the reach-and-grasp process. This makes our method unable to learn picking up a flat shape lying on the table. See Figure~\ref{fig:failure}(b) for an example.
\end{itemize}
}
\rz{As future work, we would like to conduct further investigation on the following four aspects:
\begin{itemize}
\item To further improve the performance of our method, we can try more complicated feature encoding other than the current PointNet and MPLs, and further, take more dynamic information of each frame such as velocity into account.
\item To generate more natural grasps, it is necessary to investigate grasp quality metrics that can better reflect grasp naturalness. This could be learned from human grasping datasets such as ContactPose~\cite{brahmbhatt2020contactpose} .
\item To make our method be able to grasp flat-shaped objects, we can relax the collision constraint and even utilize the collision with the environment to help lift the object and achieve a successful grasp as in~\cite{eppner2015exploitation} .
\item To be able to conduct real robot implementation, we need to study how to perform sim-to-real policy transfer, overcoming the domain gap between simulated observation and real visual perception, as well as the gap between simplified static environment and real dynamic scenes.
\end{itemize}
}
\section{Introduction}
Robotic grasping is an important and long-standing problem in robotics. It has been drawing increasingly broader attention from the fields of computer vision~\cite{saxena2010vision}, machine learning~\cite{kleeberger2020survey} and computer graphics~\cite{liu2009dextrous,pollard2005physically}, due to the interdisciplinary nature of its core techniques.
Traditional methods typically approach grasping by breaking the task into two stages: static grasp synthesis followed by motion planning of the gripper. In the first stage, a few candidate grasp poses are generated. The second stage then plans a collision-avoiding trajectory for the robotic arm and gripper to select a feasible grasp and execute it. The main limitation of such a decoupled approach is that the two stages are not jointly optimized which could lead to suboptimal solutions.
An integrated solution to grasp planning and motion planning, \rz{which is often referred to as the \emph{reach-and-grasp problem}}, remains a challenge~\cite{wang2019manipulation}.
A few works have attempted integrated grasp and motion planning by formulating a joint optimization problem. The advantage of an integrated planner is that motion planning and grasp planning impose constraints on each other. However, these existing methods still rely on pre-sampled grasp candidates over which a probabilistic distribution for selection is computed, making it highly reliant on the quality of the candidates. Wang et al.~\shortcite{wang2019manipulation} introduce online grasp synthesis to eliminate the need for a perfect grasp set
and grasp selection heuristics. Nevertheless, such an approach optimizes over a discrete set of grasp candidates, which limits the grasping space explored.
Reinforcement learning (RL)~\cite{sutton2018reinforcement} models offer a counterpoint to the planning paradigm. Rather than optimizing for grasp selection and motion planning, the idea is to use closed-loop feedback control based on sensory observations so that the agent can dynamically update its strategy while accumulating new observations. More recent advances of RL allow for continuous, high-dimensional actions which are especially suitable for continuous exploration of \rz{reach-and-grasp} planning. Albeit offering promising solutions, the sample efficiency issue of RL hinders its application in highly complex control scenarios, such as dexterous grasping of a high-DOF robotic hand (e.g., a 24-DOF five-fingered gripper).
We argue that the main cause of the limitation above is the lack of an \rz{effective} representation of observations. Indeed, even with deep neural networks as powerful function approximators, it is still too difficult to fit a function mapping raw
sensory observations (camera images) to low-level robot actions (e.g., motor torques, velocities, or Cartesian motions).
Therefore, learning complicated control of high-DOF grippers calls for an informative representation of \rz{the intermediate states during the reaching-and-grasping process.}
Such representation should well inform the RL model about \rz{the \emph{dynamic interaction} between the gripper and the target object.}
In this work, we advocate the use of \emph{Interaction Bisector Surface (IBS)} for representing gripper-object interaction in RL-based \rz{reach-and-}grasp learning. IBS, computed as the Voronoi diagram between two geometric objects, was originally proposed for indexing and recognizing inter-protein relation in the area of biology~\cite{kim2006interaction}. In computer graphics, it has been successfully applied in characterizing fine-grained spatial relations between 3D objects~\cite{zhao2014IBS,hu2015interaction}.
We found that IBS is surprisingly effective as a state/observation representation in learning high-DOF \rz{reach-and-}grasp planning.
Gripper-object IBS well informs \rz{the global pose of the gripper and }the fine-grained \rz{local} control of each finger with spatial relation against the target object, making the map from observation to action easier to model.
\rz{For different initial configurations of the gripper, i.e., different relative poses to the object, our method is able to produce different motion sequences and form different final grasps, as shown in Figure~\ref{fig:teaser}.
Moreover, during the reaching-and-grasping process, the dynamic change of relationship between the gripper and object can be well-reflected by the corresponding IBS, which enables our method to deal with moving objects with dynamic pose change, going beyond static object grasping of most previous works.
In addition, as IBS is defined purely based on the geometry of the gripper and the object without any assumption of the object semantics or affordance, our method can generalize well to objects of unseen categories.
To speed up the computation of IBS for efficient training and testing, we propose a grid-based approximation of IBS as well as a post-refinement mechanism to improve accuracy. Empirical studies show that the approximation is accurate enough to capture the important interaction information and fast enough to enable online computation at an interactive frame rate.
To capture richer information of interaction, we propose a combination of local and global encoders for multi-level feature extraction of IBS, based on its segmentation corresponding to the different components of the gripper.
}
We adopt Soft Actor-Critic (SAC)~\cite{haarnoja2018soft}, an off-policy model, as our RL framework.
Aside from the core design of state representation, we introduce two other critical designs to make our model more sample efficient and easier to train. To learn collision-avoiding finger motions, we impose finger-object contact information as a constraint of RL. A straightforward option is to design the reward to punish finger-object penetration. This can greatly complicate model training. Thus, our \emph{first key design} is to enhance the standard scalar Q value into a vector storing finger-wise contact information. Such disentangled Q representation provides a more informative dictation on learning contact-free motions.
To deal with the continuous action space in grasp planning,
\rh{we opt to learn from imperfect demonstrations synthesized offline with heuristic planning, from different initial configurations to final grasp poses generated by GraspIt!~\cite{miller2004graspit}.}
The demonstration grasping trajectories, possibly containing gripper-object penetration,
are stored in the experience replay buffer~\cite{mnih2015human} of SAC.
Our \emph{second key design} is to bootstrap the learning with a bootstrapping replay buffer containing imperfect demonstrations possibly with collision.
We then inject the replay buffer with an increasing amount of experiences sampled from the currently learned policy and rectified to be collision-free.
This forms an enhanced replay buffer based on which a gradually refined policy can be learned.
Such a double replay buffer scheme helps learn a strong control planning model fairly efficiently.
\rz{Our method generates high-quality dexterous grasps for unseen complex shapes with smooth grasping motions.
Furthermore, our method can dynamically adjust and adapt to object movement during grasping, thus allowing to grasp moving objects.
When compared to other baseline methods, our method consistently achieves a higher success rate on different datasets.
Our method is also robust to partial observations of target objects.
Our contributions include:
\begin{itemize}
\item A novel state representation for learning \rz{reach-and-grasp} planning based on gripper-object interaction bisector surface, along with an accurate approximation for fast training.
\item A combination of local and global encoders for multi-level feature extraction of IBS to capture richer information of gripper-object interaction.
\item A new vector-based representation of Q value encoding not only regular rewards but also finger-wise contact information for efficient learning of contact-free grasping motion.
\item A double replay buffer mechanism for learning collision-avoiding grasping policy from imperfect demonstrations.
\end{itemize}
\section{Method}
\input{figures/ibs_sampler}
\subsection{IBS sampler}
The IBS is essentially the set of points equidistant from two sets of points sampled on the scene and the gripper, respectively.
The computation of exact IBS requires the extraction of the Voronoi diagram, which is time-consuming. To trade-off between efficiency and accuracy, we compute IBS only within a given range and discretize the space to obtain an approximation of IBS, as shown in Figure~\ref{fig:ibs_sampler}.
More specifically, IBS is only computed within the sphere which is located at the \rz{centroid} of the palm with a radius of $r$. The bounding box of the sphere is discretized into a $k^3$ grid for IBS point sampling. For each of the grid points, i.e., the center of each grid cell, we compute its distances to both the scene $d_s$ and the gripper $d_g$, and the points having $\delta = d_g-d_s = 0$ are IBS points.
To find such set of points, we store the $\delta$ values on \rz{ grid cells} and extract the zero-crossing points on the fly.
\rz{Note that to accelerate the whole process, the $\delta$ values on only a partial set of cells around the exact IBS are computed in a region growing manner.
More specifically, we first find the cell that is closest to the middle position of the line connecting the centroid of the foreground object in the scene and the palm, and compute its $\delta$ value. Then the computation grows outwards with the neighboring cells until sufficient grid cells that share the same sign have been found.}
\rz{Figure~\ref{fig:ibs_sampler} shows the sampling process of IBS in two different states. We can see that the grid is divided by IBS into two parts, i.e., one with points closer to the scene (in red color) and the other with points closer to the gripper (in blue color). Only the $\delta$ values on the grid cells close to the exact IBS are computed to get the initial set of sampled IBS points (highlighted with solid colors).
Note that such set of sampled IBS contains two layers of grid points which has redundant information, thus we only keep the layer corresponding to the gripper as our IBS point set (the solid layer in blue).}
\rz{Since the initial set of sampled IBS points are grid centers and the approximation accuracy is highly affected by the grid resolution, we further refine the locations of these points to make them approach the exact IBS.}
For each point $p$, we first locate its nearest points on scene $p_s$ and gripper $p_g$. Without loss of generality, let us assume that the distance $d_g$ between the point to the gripper is larger than the distance $d_s$ between the point and the scene. We then need to move $p$ towards $p_g$ to make it closer to the exact IBS: $p' = p + \Delta_p \times \overrightarrow{pp_g}$. The detail of how to derive an appropriate $\Delta_p$ can be found in the supplementary material. Once the location of point $p$ is updated, we update its nearest points on the scene and on the gripper.
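A hedged sketch of the full sampling-plus-refinement pipeline is given below. It replaces the region-growing pass with brute-force KD-tree queries over the whole grid, and uses a simple symmetric half-step toward the equidistant surface instead of the exact $\Delta_p$ derived in the supplementary material:

```python
import numpy as np
from scipy.spatial import cKDTree

def unit(v, eps=1e-12):
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

def sample_ibs(scene_pts, gripper_pts, center, r=0.2, k=40, n_refine=3):
    """Grid-based IBS approximation with a simple refinement step.
    Keeps the gripper-side layer of cells where delta = d_g - d_s changes sign."""
    scene_tree, grip_tree = cKDTree(scene_pts), cKDTree(gripper_pts)
    lin = np.linspace(-r, r, k)
    gx, gy, gz = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) + np.asarray(center)
    delta = (grip_tree.query(pts)[0] - scene_tree.query(pts)[0]).reshape(k, k, k)
    # gripper-side cells (delta < 0) with at least one scene-side neighbor
    # (np.roll wraps at the grid boundary; any spurious boundary cells are
    #  pulled onto the bisector by the refinement below)
    near = np.zeros_like(delta, dtype=bool)
    for ax in range(3):
        near |= (delta < 0) & (np.roll(delta, 1, axis=ax) >= 0)
        near |= (delta < 0) & (np.roll(delta, -1, axis=ax) >= 0)
    ibs = pts.reshape(k, k, k, 3)[near]
    # refinement: step each point halfway toward the equidistant surface
    # (a simple symmetric choice; the paper derives its own step size Delta_p)
    for _ in range(n_refine):
        d_s, i_s = scene_tree.query(ibs)
        d_g, i_g = grip_tree.query(ibs)
        target = np.where((d_g > d_s)[:, None], gripper_pts[i_g], scene_pts[i_s])
        ibs = ibs + 0.5 * np.abs(d_g - d_s)[:, None] * unit(target - ibs)
    return ibs
```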
\rz{To gauge how close the sampled IBS is to the exact IBS as well as the effectiveness of our refinement method, we measure the IBS approximation error using the Chamfer distance between the points sampled from the grids and those sampled on the exact IBS surface.
Figure~\ref{fig:ibs_resolution} shows how the Chamfer distance changes during the reach-and-grasp process before and after the refinement under different grid resolutions $k = 20$, $40$, and $80$.
We can find several interesting properties.
Firstly, approximation error is relatively stable during the reach-and-grasp process for each setting and gets slightly higher towards the end as the IBS surface becomes more complex.
Secondly, approximation error drops significantly after the refinement
for different grid resolutions, which shows the effectiveness of the refinement process.
Thirdly, approximation error generally increases with the decrease of grid resolution, especially when no refinement is involved.
However, the computation time increases with the grid resolution. To obtain a good balance, we set grid resolution $k=40$ and sample $n=4096$ points for IBS approximation.}
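The approximation error above is the symmetric Chamfer distance between two point sets; a minimal version (using one common convention, the sum of mean nearest-neighbor distances in both directions) is:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between (N,3) and (M,3) point sets."""
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
    return D.min(axis=1).mean() + D.min(axis=0).mean()
```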
\input{figures/ibs_resolution}
\subsection{State and action representation}
\label{sec:state}
\paragraph{State representation.}
Finding an informative representation for a given state is the key to guiding the movement of the gripper towards a successful grasping of the object. Our key observation is that the IBS between the scene and gripper together with the gripper configuration provide rich information about the intermediate state.
For gripper configuration, we use a (6+18)-DOF Shadow Hand in all our experiments, where the first 6-DOF encodes the global orientation and position of the gripper and the remaining 18-DOF encodes the joint angles.
\rz{To better describe the local context around the gripper to guide its configuration change, we set the origin of the world coordinate to the centroid of the palm, as the setting in the IBS sampling process, to encode the spatial interaction features between the gripper and the scene.
We found that our method gets similar performance when using the centroid of the object as the origin of the world coordinate.
}
For each point $p$ on the sampled IBS, we store the following information as illustrated in Figure~\ref{fig:ibs_encode}:
\begin{itemize}
\item coordinate $ c = (x,y,z) \in R^3$
\item distance to the scene $d_s^p \in R$
\item unit vector pointing to the nearest point $p_s$ on the scene $v_s ^p\in R^3$
\item indicator of whether $p_s$ is located on the foreground object $b_s^p \in \{0,1\}$
\item distance to the gripper $d_g^p \in R$
\item unit vector pointing to the nearest point $p_h$ on the gripper $v_g^p \in R^3$
\item\rz{one-hot indicator of the gripper component that $p_g$ belongs to $c_g^p \in \{0, 1\}^6$ }
\item value defined on $p_g$ indicating which side of the gripper it located on $a_g^p \in [-1, 1]$
\end{itemize}
Here, $a_g^p = n_p \cdot d_{up}$ is the dot product of the normal direction $n_p$ of point $p_g$ on gripper in rest pose and the upright direction $d_{up}$ perpendicular to the palm and pointing outwards.
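Concatenated, the attributes listed above give a $3+1+3+1+1+3+6+1 = 19$-dimensional feature per IBS point. A sketch of the packing (function and argument names are chosen here):

```python
import numpy as np

def ibs_point_feature(c, d_s, v_s, b_s, d_g, v_g, comp_id, a_g, n_components=6):
    """Pack one IBS point's attributes into a single 19-D feature vector:
    coordinate (3), scene distance (1), scene direction (3), object flag (1),
    gripper distance (1), gripper direction (3), component one-hot (6), side (1)."""
    one_hot = np.zeros(n_components)
    one_hot[comp_id] = 1.0
    return np.concatenate([c, [d_s], v_s, [b_s], [d_g], v_g, one_hot, [a_g]])
```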
\input{figures/ibs_encode}
\paragraph{Action representation.}
\rz{The action consists of two parts. The first part} is defined as the gripper configuration change, which is also of (6+18)-DOF. For each single action, we restrain the change of each parameter within $[-0.25cm, 0.25cm]$ for the global translation \rz{of the entire gripper}, and within $[-0.025, 0.025]$ in radians for \rz{both the global rotation angles of the entire gripper and the local rotation angle of each joint.}
\rz{The other part is the stop action, defined as a termination value that indicates the probability of terminating the planning process.
The exact termination mechanism is provided in Section~\ref{sec:reward}.}
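The per-step clamping described above can be sketched as follows (the action layout, translation first and rotations/joints after, is an assumption made for illustration):

```python
import numpy as np

def clamp_action(raw):
    """Clamp one (6+18)-DOF configuration-change action to the stated ranges:
    global translation within [-0.25cm, 0.25cm], global rotation and joint
    angles within [-0.025, 0.025] rad. The stop action (termination value)
    is handled separately and is not part of this vector."""
    a = np.asarray(raw, dtype=float).copy()
    a[:3] = np.clip(a[:3], -0.25, 0.25)        # global translation (cm)
    a[3:] = np.clip(a[3:], -0.025, 0.025)      # global rotation + 18 joints (rad)
    return a
```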
\subsection{Network and reward design}
\label{sec:reward}
\paragraph{Network architecture.}
Figure~\ref{fig:actor} shows the network architecture of our grasp planner (actor network).
For the sampled IBS, both local and global features are extracted to concatenate with the feature extracted from the gripper configuration, and then the concatenated feature is passed to \rz{ two other MLPs } to get the predicted action, \rz{one for the gripper configuration change and the other for the terminal value.}
We use PointNet~\cite{qi2017pointnet} for both local and global IBS encoder, and the global encoder takes the whole IBS as input while the local encoder takes each IBS component corresponding to different gripper components as input.
\input{figures/actor}
\paragraph{Reward function.}
\rz{
The reward function needs to reflect the quality of an executed action.
As our final goal is to perform a successful grasping of the given object in the scene at the end of the planning,
we first define a \emph{grasp reward} function $R_g$ to measure the grasp quality when the whole process is terminated.
Moreover, to further encourage a more natural grasp pose with more gripper components taking part in the grasping and avoiding collision with the scene during the whole process, we define another \emph{reaching reward} function $R_c^i$ per gripper component $g_i$ to provide guidance for each intermediate step.
We measure the grasp quality from two aspects. First is the commonly used execution success, which we use the success signal $S$ obtained from the Pybullet simulator when performing the final grasp.
\rh{We consider the grasp to be successful if and only if the object can be lifted up by more than 0.2m, as in previous work~\cite{xu2021adagrasp}.}
More details about the simulator setup and how to perform the grasp can be found in the supplementary material.
However, this sparse boolean value cannot provide enough guidance for high-DOF grasping, thus we complement it with another well-known geometric measure $Q_1$~\cite{ferrari1992planning}.
But as the traditional $Q_1$ measure can only be computed when the gripper touches the object without any collision and the computation becomes unstable during the training, we adopt the generalized $Q_1$ measure proposed in~\cite{liu2020deep} instead.
As those two grasp quality metrics are only computed when the reach-and-grasp task is completed, to encourage a quick convergence, we set a negative reward $r_f$ for each intermediate step.
So the grasp reward function is defined as:
\[ R_g=
\begin{cases}
\omega_s S + \omega_q Q_1, & \text{if the task is completed};\\
r_f, & \text{otherwise}.
\end{cases} \]
where we set $\omega_s = 150$, $\omega_q = 1000$, and $r_f = -3$ in all our experiments.
To encourage the contact between the gripper and the object, the planning process terminates only if a randomly sampled value is smaller than the terminal value and, in addition, at least two gripper components contact the object in the current step.
To determine whether a gripper component $g_i$ is contacting with the object, we would like to ensure that there are enough points on the gripper component that are close enough to the object while not colliding with the scene.
So we first count the number $m_i$ of IBS points determined together by the inner side of the gripper component $g_i$ ($c_g^p=i, a_g^p \geq 0$) and the object ($b_s^p = 1$) with distance to the object smaller than a given threshold $\delta_d = 0.5cm$ ($d_s^p < \delta_d$), i.e., IBS points with $b_s^p = 1, c_g^p=i, a_g^p \geq 0, d_s^p < \delta_d$.
Then we further check if any of the IBS points determined by the gripper component $g_i$ (i.e., IBS points with $c_g^p=i$) is on the inner side of the scene by computing the angle between the corresponding vector $v_s^p$ pointing from the IBS point to the nearest point on the scene and the normal $n_s^p$ of the nearest point $p_s$. If the angle is smaller than $90$ degrees, we consider the IBS point to be on the inner side of the scene, and count the number $n_i$ of such IBS points. If $n_i \geq \delta_n$, we consider the gripper component to be colliding with the scene.
Therefore, if the gripper component $g_i$ is not colliding with the scene ($n_i < \delta_n$) and there are enough contact points ($m_i \geq \delta_m$), we consider the gripper component to be contacting the object.
Thus we define the reaching reward function $R_c^i$ per gripper component $g_i$ as follows, to encourage a more effective reaching-and-grasping process with more contacting but fewer colliding points:}
\[ R_c^i=
\begin{cases}
-100, & n_i \geq \delta_n \ (\text{collides with the scene});\\
R_{\text{contact}}^i, & \text{otherwise}.
\end{cases} \]
\[ R_{\text{contact}}^i =
\begin{cases}
\min\{\eta, m_i\}, & m_i \geq \delta_m \ (\text{enough contact});\\
0, & \text{otherwise}.
\end{cases} \]
In all our experiments, we set $\delta_n = \delta_m = 3$ and $\eta = 40$.
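Putting the two cases together, the per-component reward can be sketched as follows (thresholds and cap as in the text; `n_i` and `m_i` are assumed to be pre-computed from the IBS points):

```python
def contact_reward(n_i: int, m_i: int,
                   delta_n: int = 3, delta_m: int = 3, eta: int = 40) -> float:
    """Per-component reaching reward R_c^i.

    n_i: number of IBS points indicating collision with the scene.
    m_i: number of close, non-colliding contact points.
    """
    if n_i >= delta_n:           # component collides with the scene
        return -100.0
    if m_i >= delta_m:           # enough contact points: capped bonus
        return float(min(eta, m_i))
    return 0.0
```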
\subsection{Network training}
\label{sec:training}
To train the network, we adopt the well-known off-policy method Soft Actor-Critic~\cite{haarnoja2018soft} and make some small modifications to the Q-network to make the training more effective.
SAC has two networks, i.e., the policy network (also known as the actor network) and the Q-network (also known as the critic network). The policy network outputs a Gaussian distribution $\pi(*|s;\theta)$ to sample an action for an input state $s$, where $\theta$ are the parameters of the network. The Q-network outputs the evaluation value $Q(s,a;\Phi)$ for a given state $s$ and action $a$, where $\Phi$ are the parameters of the network. SAC uses an additional target Q-network to calculate the target value for the temporal difference (TD) update, whose parameters are denoted by $\Phi^\prime$.
A transition is denoted as a tuple ${\{s, a, R, s^\prime, d\}}$, where $R$ is the reward and $d$ indicates whether state $s^\prime$ is a terminal state.
All the transitions will be stored in a replay buffer $D$ and those two networks are trained by the data sampled from $D$.
The key change to the original SAC networks is that instead of outputting a single scalar value to estimate the reward, we let the Q-network output a vector $(Q_g ,Q_c^0, \dots , Q_c^5)$ to estimate both $R_g$ and $R_c^i$. Accordingly, the reward $R$ in each experience is a vector $(R_g ,R_c^0, \dots , R_c^5)$. Note that only $R_g$ is accumulated, while $R_c^i$ is computed per step.
We found that this change made the training more stable and prevented the collision more effectively than directly combining all rewards together.
The loss function for training the Q-network is then determined by temporal difference (TD) update:
\begin{equation}
L_Q(\Phi)= (Q_g(s,a;\Phi)-y_g(R_g,s^\prime,d))^2+
\sum_{i=0}^5(Q_c^i(s,a;\Phi)- \lambda R_c^i)^2
\end{equation}
with $\lambda =0.25$ balancing the two types of rewards and the target value $y_g$ for $R_g$ defined as:
\begin{equation}
y_g(R_g,s^\prime,d)=R_g+\gamma(1-d)
[Q_g(s^\prime,{\widetilde{a}}^\prime;\Phi^\prime)
-\alpha \log\pi({\widetilde{a}}^\prime|s^\prime;\theta)],
\end{equation}
where ${\widetilde{a}}^\prime\sim\pi(*|s^\prime;\theta)$ is the sampled action.
$\gamma = 0.99$ is the discount factor, and $\alpha $ is the temperature parameter that can be adjusted automatically to match an entropy target in expectation, to balance exploring the environment and maximizing reward.
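A minimal numpy sketch of this vector-valued critic loss for a single transition; the Q-network outputs, next-state quantities, and sampled-action log-probability are assumed to be given, and no autograd is involved:

```python
import numpy as np

GAMMA, LAMBDA = 0.99, 0.25  # discount factor and reward-balance weight

def critic_loss(q_pred, r_g, r_c, q_g_next, log_pi_next, done, alpha):
    """TD loss on Q_g plus direct regression of the contact entries.

    q_pred:      (7,) network output (Q_g, Q_c^0, ..., Q_c^5).
    r_g, r_c:    grasp reward and per-component contact rewards.
    q_g_next:    target network's Q_g at (s', a~').
    log_pi_next: log-probability of the sampled next action.
    done:        1.0 if s' is terminal, else 0.0.
    """
    q_pred = np.asarray(q_pred, dtype=float)
    # Bootstrapped, entropy-regularized target for the grasp entry only.
    y_g = r_g + GAMMA * (1.0 - done) * (q_g_next - alpha * log_pi_next)
    loss = (q_pred[0] - y_g) ** 2
    # Contact entries regress lambda * R_c^i directly (single-step rewards).
    loss += np.sum((q_pred[1:] - LAMBDA * np.asarray(r_c, float)) ** 2)
    return loss
```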
For the policy network, in order to avoid self-collision of the gripper, i.e., the collision among different gripper components, we add a self-collision loss $L_{\text{self}}$ adapted from~\cite{liu2020deep} to the original loss:
\begin{equation}
L(\theta) = L_{Q}(\theta) + \omega L_{\text{self}}(\theta),
\end{equation}
where $\omega = 100$ is the parameter to balance the two loss terms. The original loss function $L_{Q}(\theta)$ and the self-collision loss $L_{\text{self}}$ for training the policy network are defined as:
\begin{equation}
L_Q(\theta)=-[Q_g(s,\widetilde{a}(s;\theta))+\sum_{i=0}^{5}Q_c^i(s,\widetilde{a}(s;\theta))-\alpha \log\pi({\widetilde{a}}|s;\theta)],
\end{equation}
\begin{equation}
L_{\text{self}}(\theta) = \sum_{i=1}^{L} \sum_{j=1}^{N} \max (D(p_j(s,\widetilde{a}(s;\theta)), H_i(s,\widetilde{a}(s;\theta))), 0)
\end{equation}
where $\widetilde{a}\sim\pi(*|s;\theta)$ is the sampled action based on the current state and network parameters, $L$ the number of gripper links, $N$ the number of points $p_j$ sampled from each link, $H_i$ the convex hull of each link after performing action $\widetilde{a}$ on state $s$, and $D$ the signed distance from a point to a convex hull. More details about the self-collision loss $L_{\text{self}}$ can be found in~\cite{liu2020deep}. The reason why we add the self-collision loss directly to the policy network objective instead of defining a corresponding reward function is that the self-collision loss is differentiable and thus can be optimized directly.
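The inner $\max(D, 0)$ accumulation of the self-collision loss can be sketched as follows; the signed distances $D(p_j, H_i)$ are assumed to be pre-computed for each link and each sampled point:

```python
def self_collision_penalty(signed_dists) -> float:
    """Sum of max(D, 0) over all links and sampled points.

    signed_dists: nested iterable, one row per link, each entry the
    signed distance of a sampled point to another link's convex hull
    (positive inside the hull, i.e., in self-collision).
    """
    return sum(max(d, 0.0) for link in signed_dists for d in link)
```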
\input{figures/demonstration}
\paragraph{Training with demonstration.}
Note that the searching space of the action is extremely large and to make the training more efficient, we adopt the popular training with demonstration strategy.
To generate the demonstrations, as shown in Figure~\ref{fig:demonstration}, we first generate a valid grasp pose for the given object and move the gripper away from the object along the direction pointing from the object center to the palm center until the distance reaches $d = 20cm$. We then reset the gripper configuration by setting all joint angles to zero and add random Gaussian noise, scaled based on the rotation limit of each joint, to obtain a random initial configuration. Given this initial configuration, we generate a motion sequence towards the corresponding final grasp pose and use it as the demonstration: we simply first move the gripper to the final position and then transit to the final grasping configuration using linear interpolation.
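The two-phase demonstration synthesis (translate first, then interpolate the joints) can be sketched as follows; the configuration layout used here (3 position entries followed by joint angles) is a simplification of the full (6+18)-DOF parameterization, and the frame counts are arbitrary:

```python
import numpy as np

def demo_trajectory(init_cfg, final_cfg, n_move=10, n_close=10):
    """Imperfect demonstration: move to the final position, then
    linearly interpolate the joint angles to the grasp configuration."""
    init_cfg = np.asarray(init_cfg, dtype=float)
    final_cfg = np.asarray(final_cfg, dtype=float)
    frames = []
    # Phase 1: translate to the final position, keeping initial joints.
    for t in np.linspace(0.0, 1.0, n_move):
        cfg = init_cfg.copy()
        cfg[:3] = (1 - t) * init_cfg[:3] + t * final_cfg[:3]
        frames.append(cfg)
    # Phase 2: interpolate joints towards the grasping configuration.
    for t in np.linspace(0.0, 1.0, n_close):
        cfg = final_cfg.copy()
        cfg[3:] = (1 - t) * init_cfg[3:] + t * final_cfg[3:]
        frames.append(cfg)
    return frames
```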
Note that the demonstrations generated in this way are imperfect, since the gripper may collide with the scene during the process. So unlike previous methods that usually use imitation learning to clone perfect demonstrations, we store the generated imperfect demonstrations in a bootstrapping replay buffer and use reinforcement learning only.
In more detail, we use two replay buffers, one for demonstration data with maximal size set to be $n_d = 5.0\times 10^4$ and the other for self-exploration data with maximal size set to be $n_s = 1.0\times 10^5$.
Before training, we always fill up the demonstration buffer and keep counting the total number $n_t$ of data generated in those two ways.
The probability of sampling data from the demonstration buffer is set to be $n_d/n_t$. Thus, more demonstration data will be sampled in the beginning to guide the network to learn a good initial policy quickly and hence speed up the training process.
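The buffer-sampling rule can be sketched as follows; the buffer entries stand in for transition tuples, and `n_t` is the running count of all transitions generated so far:

```python
import random

def sample(demo_buf, self_buf, n_t, rng=random):
    """Draw one transition, from the demonstration buffer with
    probability n_d / n_t and from self-exploration otherwise."""
    n_d = len(demo_buf)
    if rng.random() < n_d / n_t:   # more demo samples early in training
        return rng.choice(demo_buf)
    return rng.choice(self_buf)
```

Since `n_t` only grows while the demonstration buffer stays bounded, the demonstration-sampling probability decays over training, shifting the data mix towards self-exploration.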
Following~\cite{vecerik2017leveraging}, these samples are included in the update of both the actor and the critic.
\section{Overview}
Given the input scene segmented into foreground object and background context and the initial configuration of the gripper, our goal is to output a sequence of collision-free actions which move the gripper towards the object to perform a successful grasping. We train a deep reinforcement learning model in which the state is given by the gripper-object interaction and the actions are designed as the gripper configuration changes. Figure~\ref{fig:overview} gives an overview of one step of the planned reaching-and-grasping motion.
Our method starts by extracting an informative representation of the interaction between the given object and the current gripper configuration. A set of local and global features are extracted from the given state to predict the action that changes the configuration of the gripper so that it moves closer to the given object and forms a better grasping. The updated configuration after applying the predicted action is then passed through the same pipeline to predict the subsequent action.
\paragraph{Interaction extraction.}
We use IBS~\cite{zhao2014IBS} to represent the interaction.
Since the computation of accurate IBS is time-consuming, we design an IBS sampler to obtain a discretized and simplified version of IBS in a much more efficient manner. The set of sampled IBS points is then moved to be closer to the exact IBS.
\paragraph{State encoding.}
\rz{The gripper configuration is encoded by its global position and orientation as well as its local joint angles, with (6+18)-DOF.
Based on the part components of the gripper model, i.e., five fingers and one palm, we adopt a multi-level representation of IBS inspired by the work of Zhao et al.~\shortcite{zhao2017character}.
More specifically, for each sampled IBS point, we first find its nearest points on
the scene and the gripper, and then encode it with a set of information, consisting of its own coordinate, the component labels of those two nearest points as well as the spatial relationship relative to each nearest point.
Therefore, the components of the gripper model, including five fingers and one palm, naturally induce a segmentation of the IBS based on the association between the gripper and the IBS. }
We then combine local features extracted separately for each IBS segment and global feature for the entire IBS to form a multi-level description of gripper-object interaction.
\paragraph{Action prediction.}
To predict the action given the current state, we design
a policy network that takes both local and global features from the current configuration \rz{and outputs the configuration change of the gripper and a termination value.
The policy network is trained via reinforcement learning \rz{using the Soft Actor-Critic method~\cite{haarnoja2018soft}}.}
To avoid gripper-scene collision and self-collision of the gripper during the reaching-and-grasping process,
we design \rz{a new vector-based representation of the Q value, encoding not only regular rewards but also finger-wise contact information for efficient learning of collision-free grasping motion.
To further accelerate the training, we generate a set of imperfect demonstration data to bootstrap the learning and to converge to the final policy in a more efficient and effective manner.}
\section{Related Work}
Robotic grasping has a large body of literature. Existing approaches can generally be classified into analytical and data-driven methods.
Analytical (or geometric) approaches analyze the shape of the target object to synthesize a suitable grasp~\cite{sahbani2012overview}. Data-driven (or empirical) approaches based on machine learning are gaining increasing attention in recent years~\cite{bohg2013data,kleeberger2020survey}.
\input{figures/overview1}
\rz{\paragraph{Analytical robotic grasping.}
With known object shapes, analytical methods \lm{search for grasp poses that maximize a certain grasp quality metric, and they can mainly be classified into discrete sampling-based techniques and continuous optimization techniques. Some sampling-based techniques search in the space of grasp poses~\cite{miller2000graspit,miller2004graspit}, while others search for contact points or contact areas on object surfaces and then search for collision-free grasp poses that realize the given set of contact points or areas~\cite{chen1993finding,7812687,9120282,9626457}. Compared to sampling-based techniques, continuous optimization techniques plan grasp poses by optimizing differentiable losses~\cite{kiatos2019grasping,maldonado2010robotic}, which is more efficient. Recent work~\cite{liu2020deep} proposes a differentiable grasp quality metric, which can be used by both continuous optimization and deep learning methods.
For high-DOF grippers, however, sampling-based techniques are computationally costly due to the large search space, and continuous optimization techniques for high-DOF grasp planning are apt to find locally optimal grasp poses.}
Moreover, analytical methods are difficult to generalize to incomplete or unknown objects.
}
\paragraph{Learning-based robotic grasping.}
Learning-based methods are typically split into supervised learning and reinforcement learning.
For supervised learning, grasp annotations can be collected either by humans~\cite{depierre2018jacquard}, with simulation~\cite{mahler2016dex}, or through real robot tests~\cite{levine2018learning}. Supervised grasp learning can be categorized as discriminative or generative depending on whether the grasp configuration is the input or output of the learned model.
While discriminative approaches sample grasp candidates and rank them using a neural network~\cite{mahler2018dex},
generative approaches directly generate suitable grasp poses~\cite{morrison2018closing}.
Early learning-based works mainly focus on generating grasps for target objects with low-DOF grippers (such as parallel-jaw grippers)~\cite{saxena2007robotic,gualtieri2016high}. Such works learn to regress grasp quality or to predict grasp success~\cite{mahler2016dex,inproceedingsDexNetTwo,fang2018mtda,lu2020multifingered,lu2020planning,van2019learning}, but still need sampling-based techniques to search for better grasps. Liu et al.~\shortcite{liu2019grasp,liu2020deep} use deep neural networks to directly regress high-DOF grasps based on the input of voxels or depth images.
\paragraph{Reinforcement learning for robotic grasping.}
Deep reinforcement learning (RL) has been shown as a promising and powerful technique to automatically learn control policies by trial and error. Based on raw sensory inputs, dexterous grasping behaviors can be performed. A comparative study of RL-based grasping methods is given in~\cite{quillen2018deep}.
QT-Opt~\cite{kalashnikov2018qt} learns various manipulation strategies with dynamic responses to disturbances.
Song et al.~\shortcite{song2020grasping} present an RL-based closed-loop 6D grasping of novel objects with the help of human demonstrations. The learned policy can operate in dynamic scenes with moving objects.
Rajeswaran et al.~\shortcite{rajeswaran2017learning} show that model-free RL can effectively scale up to complex
manipulation tasks with a high-DOF hand.
Mandikal and Grauman~\shortcite{mandikal2020dexterous} introduce an approach for learning dexterous grasps, and the key idea is to embed an object-centric visual affordance model within a deep reinforcement learning
loop to learn grasping policies that favor the same object regions favored by people. \rz{Andrychowicz et al.~\shortcite{andrychowicz2020learning} explore RL for dexterous in-hand manipulation by reorienting a block. Starke et al.~\shortcite{starke2019neural} present a good example of conditioning motion behavior on the geometry of the surrounding 3D environment. Ficuciello et al. propose methods that use RL to search for good grasp poses \shortcite{ficuciello2016synergy} as well as good grasp motion trajectories \shortcite{monforte2019multifunctional} in a synergies subspace; these methods need good initial parameters to reduce the search space, while finding such initial parameters requires additional imitation learning or human effort.
To overcome this limitation, Ficuciello et al.~\shortcite{ficuciello2019vision} introduce a visual module to predict initial parameters given visual information of objects. The main weakness of their methods is that the learning process must be repeated for each object to achieve a good grasp.
}
Training RL models in the real environment is usually prohibitive since it requires a large number of trials and errors.
A straightforward approach is to train them in a simulation environment and then transfer the learned policy to the real world with sim-to-real techniques~\cite{peng2018sim,james2019sim}.
Our work trains an RL model for dexterous high-DOF grasping.
We focus on how to train such complicated
planning policies with the help of an effective representation of gripper-object interaction
and leave the issue of sim-to-real transfer for future work.
\paragraph{Grasp representation.}
Existing grasping models have adopted various representations to describe the shape of the target object to be grasped, such as voxels~\cite{varley2017shape}, depth images~\cite{viereck2017learning}, multi-view images~\cite{collet2010efficient}, or geometric primitives~\cite{aleotti20123d}.
Some works represent a grasp with Independent Contact Regions (ICRs)~\cite{roa2009computation,fontanals2014integrated}.
These regions are defined such that if each finger is positioned on its corresponding contact region, a force-closure
grasp~\cite{nguyen1988constructing} is always obtained, independently of the exact location of each finger.
Our work opts to characterize the interaction between the gripper and the object using the Interaction Bisector Surface (IBS)~\cite{zhao2014IBS}, which captures the spatial boundary between two objects. It can provide a detailed and informative interaction representation, with both geometric and topological features extracted on the IBS. Hu et al.~\shortcite{hu2015interaction} further combined IBS with the Interaction Region (IR), which
is used to describe the geometry of the surface region on the object corresponding to the interaction, to encode more
geometric features on the objects.
Pirk et al.~\shortcite{pirk2017understanding} build a spatial and temporal representation of
interactions named interaction landscapes.
\rh{Karunratanakul et al.~\shortcite{karunratanakul2020grasping} propose an implicit representation to encode hand-object grasps by defining the distance to the human hand and to objects for points in space, and then use it to generate the grasping hand pose for a given static shape.
In contrast, our method utilizes constantly updated IBS as state representation to plan the dynamic motion of the gripper for the object reach-and-grasp task.
}
\section{Results and Evaluation}
We first explain the experiment setup, the dataset we used, and then evaluate our method both qualitatively and quantitatively.
\subsection{Data preparation}
\rz{We first adopt the dataset provided by Liu et al.~\shortcite{liu2020deep}, which consists of 500 objects collected from four datasets.
We use the objects from the KIT Dataset \shortcite{kasper2012kit} and the GD Dataset \shortcite{kappler2015leveraging} as training data, and test on objects from the YCB Dataset~\shortcite{calli2017yale} and the BigBIRD Dataset~\shortcite{singh2014bigbird}.
We then further test our method on more objects from ContactPose Dataset \shortcite{brahmbhatt2020contactpose} and from 3DNet Dataset~\shortcite{wohlkinger20123dnet} when compared to baseline methods.
To generate the demonstrations to guide the training of our network, we randomly select 100 grasp poses for each object in the training set from the pose set associated with that object, and synthesize imperfect reach-and-grasp motion as described in Section~\ref{sec:training}.
Note that since each object is placed on the table in our setting, we first filter out invalid grasp poses that collide with the table.
To generate our self-exploration data, we need to sample the initial configuration of the gripper.
For each object, we use its center to create a sphere with radius $r = 20 cm$, and sample points on the upper hemisphere as the origin of the local coordinate system of the gripper and rotate the gripper to make its palm face the object center \rh{and its thumb point upwards}.
As different initial configurations will lead to different reach-and-grasp results, to remove the bias of such initialization, we set a fixed set of initial configurations for each object to test.
The initialization details can be found in the supplementary material.
}
\input{figures/gallery}
\subsection{Qualitative results}
\label{sec:qualitative}
\rz{Figure~\ref{fig:gallery} shows a gallery of results obtained with our method, where we show the initial input configuration of the gripper and four sampled frames during the approaching process, with the final grasp pose on the right. Note that each example is shown from a view selected to demonstrate the motion sequence more clearly, and we further add a purple curve to visualize the moving trajectory together with the sampled frames.
When inspecting these results, we see that the method is able to handle various shapes given different initial configurations of the gripper.
For example, for the small shark model shown in the first row, our method can adjust the gripper pose and joints precisely to pinch the model, while for the model shown in the second row, although the gripper starts from the position close to the table, our method is able to generate the moving sequence with the final grasp on the top of the model to avoid collision with the table. Our method is also able to deal with objects with more complex geometry. For example, to grasp the binoculars shown in the third row and the elephant shown in the fourth row, those four fingers tend to move together to form the gripping grasp with the thumb, while for the pitcher shown in the last row, fingers spread widely to better wrap its top.
}
\input{figures/diff_init}
Moreover, Figure~\ref{fig:diff_init} shows examples of results where we fix the shape and plan the grasping with different initial gripper configurations. \rz{For each example, we show four different initial configurations on the aligned hemisphere with purple dots on the left, and then their corresponding final grasp poses on the right.}
We see that various grasp poses can be generated even for the same shape when given different initial configurations. \rz{For example, for the elephant model shown in the first row, some final grasp poses tend to cover the head of the elephant while some others prefer wrapping the back of the model.
Similar results can also be observed for the shoe model shown in the second row.
For these models with far more complex geometry than the models we have in the training set, the final grasp poses obtained by our method can adapt well to their shapes while avoiding collision with the table at the same time.
Overall, our method is quite robust and can successfully plan grasping for different shapes with various geometries when given different initial gripper configurations.}
\input{figures/train_demo}
\rz{Note that all the objects tested in our experiments are unseen objects from different datasets, which shows that our method generalizes well to other objects instead of just memorizing the grasp poses shown in the demonstrations. To further justify this, Figure~\ref{fig:train_demo} compares the grasp pose synthesized by our method with the demonstration grasp, both starting from the same initial configuration.
We can see that even starting from the same initial configuration, our method ends up with quite different and generally more natural grasp poses based on the learned policy.
}
\subsection{Quantitative evaluation}
\label{sec:quantitative}
\rz{To quantitatively evaluate our method, we first conduct ablation studies to justify several key design choices of our method, and then we provide comparisons to several baseline methods to show the superiority of our method.
\paragraph{Ablation studies.}
As explained in Section~\ref{sec:reward}, our goal is to learn a reach-and-grasp planner with high-quality final grasps as well as low penetration during the whole process, thus we use metrics to measure the final grasp and process penetration, respectively.
For the final grasp, we first compute the success rate $S$ among the whole testing set, where whether each final grasp is
successful or not is tested in the simulator. Then for all successful grasps, we compute the
average generalized $Q_1$~\cite{liu2020deep}
as a complementary metric to get a more detailed grasp quality measure.
For the process penetration, we further compute the percentage of the testing objects, of which the penetration of each frame during the whole reach-and-grasp process is smaller than a given threshold $\tau$, where $\tau = 1mm, 2mm$ and $3mm$.
Using IBS as the dynamic state representation is the key contribution of our method. Based on this, we further propose to use demonstrations, the vector-based representation of Q value, and a combination of the local and global encoder to boost the training and improve the performance.
To show the importance of all those design choices, we use the simplest version of our method as the baseline, denoted as ``IBS-G", which only uses a global encoder for IBS and a standard scalar Q value
and is trained without demonstrations, and then gradually add those key components one by one in the ablation studies to show how performance is boosted accordingly.
To further justify the superiority of IBS as the state representation, we also compare different versions of our method with either RGBD images or point clouds as state representation.
The evaluation results of the ablation studies are shown in Table~\ref{tab:ablation_studies}.
\input{figures/tab_ablation_studies2}
\textit{Importance of demonstration.} Training with demonstrations is crucial to make our network be able to learn meaningful policy.
We can see in Table~\ref{tab:ablation_studies} that without demonstration, the baseline ``IBS-G" cannot learn any meaningful policy to perform valid grasping, so the evaluation metrics cannot be reported, due to the high complexity of our problem and the correspondingly large search space.
When trained with demonstration, denoted as ``+ Demo'', the network is able to learn a reasonable policy, although the performance is not yet satisfactory, with only a $28.7\%$ success rate. Moreover, for most of those successful grasps, the penetration is higher than in the other settings.
\textit{Importance of vector-based representation of Q value.}
Making the Q-network output a vector instead of a single scalar value provides better control of each individual gripper component.
To justify the benefit of such a design, we further enable this feature in our method and report the results in the third row of Table~\ref{tab:ablation_studies}, denoted as ``+ Q-Vec".
We see that compared to the results using a scalar Q value, the performance gets highly boosted, including both success rate and penetration avoidance.
The main reason is that when the grasp quality measure and the contact measure are combined into one single value, it is hard for the network to learn which one caused the change of the value, while the vector-based representation provides clearer guidance for the network to learn from.
\textit{Importance of multi-level encoder.} To further show the importance of the multi-level encoder used for feature extraction of IBS, i.e., using both global and local encoders and then concatenating the features together,
we further add the local encoder to get the full version of our method, and the result is shown in the fourth row of Table~\ref{tab:ablation_studies}, denoted as ``+ IBS-L".
We can see that the performance is better than other settings, which shows that the component-based IBS partition and the corresponding local encoder can help get more information and give better control of those gripper components.
\textit{Importance of IBS as state representation.} The introduction of IBS to represent the intermediate state during the whole planning process and encode the interaction between the gripper and the scene is one of our key contributions. To show the importance of our IBS encoding, we compare our method to two alternative ways of interaction representation, i.e., using either RGBD images or point clouds of the scene and the gripper, while all the other design choices are the same to our method, including using demonstrations and vector-based representation of Q value.
For the RGBD image representation, we use visual information captured by a simulated hand-mounted camera in its upright-oriented pose, which translates together with the hand but does not rotate. The visual input consists of an RGBD image and the corresponding segmentation information, where each pixel can belong to the object, the gripper, or the background.
We then replace the encoder with similar architecture to that of the network used in~\cite{Jain-ICRA-19}.
More details about this baseline can be found in the supplementary material.
For point cloud representation, \rz{we use the scene point cloud and gripper point cloud directly as input.
For the features to be encoded, we remove the features that contain interaction information from the list of features defined for IBS points in Section~\ref{sec:state} and keep all the remaining ones, i.e., the coordinate $c_s \in R^3$ and foreground/background indicator $b_s \in \{0,1\}$ for each scene point, and the coordinate $c_g \in R^3$, gripper component indicator $c_g \in \{0, 1\}^6$, and gripper side indicator $a_g \in [-1, 1]$ for each gripper point.
The encoder is similar to that we used for IBS-G.
More details about this baseline can also be found in the supplementary material. }
The corresponding performances are shown in the last two rows of Table~\ref{tab:ablation_studies}, denoted as ``RGBD image" and ``Point cloud", respectively. Note that our method performs poorly with either setting.
We think the main reason is that they cannot provide enough spatial information between the object and the gripper when the demonstrations provided are imperfect and with large variations in initial configurations and final grasp poses, while the IBS we used provides a more informative representation.
}
\rz{\paragraph{Comparison to baselines.}
Strategies for the reach-and-grasp task can be roughly divided into two categories, one synthesizes the grasp first and then plans the transfer from the initial configuration to the final grasp pose and the other optimizes the whole process directly.
To give more detailed comparisons to previous methods, we organize the baseline methods according to those two different types of strategies and analyze the results separately.
\input{figures/tab_comp_mp.tex}
For the first type of baselines, we compare the final grasps obtained using our method to those synthesized using the method in~\cite{liu2020deep} and GraspIt!~\cite{miller2004graspit}.
For the final grasp pose generated via GraspIt!, we select the one closest to the initial gripper configuration from a pre-sampled set of candidate grasp poses, based on the distance metric proposed in~\cite{di2008posedis}.
To further generate the whole reach-and-grasp process, we use the same motion planning method~\cite{kavraki1996PRM} from the Open Motion Planning Library (OMPL)~\cite{sucan2012open} to plan a collision-free moving trajectory from the initial configuration to the global pose of the final grasp.
We then execute the whole process in our dynamics simulator to see whether the object can be successfully grasped in the end.
In more detail, once the gripper in the rest pose reaches its final position and orientation following the planned trajectory, we use the interface provided by the simulator to move the fingers towards the target joint states until they contact the scene.
When all fingers stop moving, we use the simulator to execute the grasping test.
\rh{Accordingly, we can compute the success rate for different stages.
The first one is ``final grasp'', where we test whether the gripper with the given final configuration can successfully grasp and lift the object in the dynamics simulator.
The second one is ``motion plan'', where we check whether the given planner can find a feasible trajectory to transit the gripper from the initial configuration to the final configuration.
The third one is ``overall'', where we consider the whole process to be successful if and only if both final grasp and motion plan succeed.
}
Table~\ref{tab:comp_mp} reports the success rates of the final grasp, the motion plan, and the overall process executed in the dynamics simulator.
Note that the success rates are computed for different settings.
``Avg" refers to the average success rate among all testing samples, while ``Top-1" refers to the average success rate over all testing objects, where each object is tested with a fixed set of initial configurations and is considered successfully grasped if any of those configurations leads to a successful grasp.
We can see that using the final grasp obtained via our method yields consistently better results.
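The two settings can be made concrete with a small sketch; the per-trial data layout here is hypothetical, not the paper's actual bookkeeping.

```python
# Illustrative computation of the "Avg" and "Top-1" success rates.
# trials[obj] holds one boolean per tested initial configuration.
def avg_and_top1(trials):
    outcomes = [o for runs in trials.values() for o in runs]
    avg = sum(outcomes) / len(outcomes)                              # over all samples
    top1 = sum(any(runs) for runs in trials.values()) / len(trials)  # per object
    return avg, top1

print(avg_and_top1({"mug": [True, False, False], "box": [False, False, False]}))
```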
\input{figures/comp_mp}
\rz{Figure~\ref{fig:comp_mp} shows some visual comparisons of those methods.
For each example, we show the final grasp synthesized by each method on the left and several key frames of the reach-and-grasp process with the offline planned trajectory in purple on the right.
From the results, the method of Liu et al.~\shortcite{liu2020deep} cannot handle the tabletop well: it tends to generate grasp poses with fingers touching the table that do not form a valid grasp, especially for the flat-shaped objects shown in the first row.
Although GraspIt! generates many more successful grasps, it often fails to plan a collision-free path for the gripper from the given initial configuration to the final pose. Even with a successfully planned path, the gripper configuration is likely to be changed during the reaching process to avoid collision with the tabletop, which leads to a lower overall success rate; note how the final grasp pose after the reaching process differs from the planned final pose shown in the fifth row.
In contrast, the final grasp pose generated by our method takes the tabletop into consideration during the whole reaching process. On the one hand, our method thus achieves a much higher success rate for the final grasp; on the other hand, even when the same motion planning method is used instead of the motion planned by our own method, we still obtain the best overall success rate.
}
\input{figures/tab_dataset.tex}
\input{figures/comp_baseline}
For the second type of baselines, we compare the whole reach-and-grasp process obtained via our method against a heuristic primitive-based grasping method.
Inspired by the work of \cite{della2019learning}, we adopt three grasping primitives, ``Pinch'', ``Top'', and ``Lateral'', and for each testing initial configuration we select the closest primitive, based on the geodesic distance on the sphere, to execute the corresponding grasp.
Note that the ``Top'' and ``Pinch'' primitives share the same start gripper configuration, and we choose the one that achieves the higher performance for the final result.
More details about this primitive-based method (PBM) can be found in the supplementary material.
}
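The primitive-selection rule can be illustrated as follows; the anchor directions assigned to each primitive are hypothetical placeholders, and the paper's actual parametrisation may differ.

```python
import math

# Hypothetical anchor directions for the three primitives on the unit sphere;
# "Top" and "Pinch" share an anchor since they share the same start configuration.
PRIMITIVES = {"Pinch": (0.0, 0.0, 1.0), "Top": (0.0, 0.0, 1.0), "Lateral": (1.0, 0.0, 0.0)}

def geodesic(u, v):
    # Great-circle distance between unit vectors: arccos of the clamped dot product.
    d = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(d)

def closest_primitive(init_dir):
    return min(PRIMITIVES, key=lambda name: geodesic(init_dir, PRIMITIVES[name]))

print(closest_primitive((1.0, 0.0, 0.0)))  # Lateral
```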
\rz{Table~\ref{tab:dataset} reports the comparisons of the ``Avg'' and ``Top-1'' overall success rates in the simulator on four different datasets. We can see that our method outperforms the primitive-based method on all datasets.
The main reason is that the reaching path of each primitive grasp is fixed and cannot adapt well to the varying geometry of different objects, while our method keeps targeting the object with the guidance of the encoded IBS.
Figure~\ref{fig:comp_pbm} shows some visual comparisons between our method and PBM.
Note how the final grasp poses of the primitive-based method deviate from the object due to the shape complexity.
For example, neither of the final grasps of PBM shown in the first and third examples touches the object.
One interesting aspect of our method we would like to highlight is that, when executing the reach-and-grasp process in the simulator, the pose of the target object may change due to collisions with the gripper; our method is still able to dynamically adapt to the new pose, follow the moving object, and grasp it successfully. See the grasp result of the bottle shown in the fourth row of Figure~\ref{fig:comp_pbm}.
Besides adapting to dynamic changes of the target object, our method is also robust to partial observations.
Examples can be found in the supplementary materials.
}
\section{Introduction}
An ultracold atom gas in the presence of disordered environments is becoming
a subject of increasing experimental and theoretical research activity.
Generally, one would like to understand how the condensation and the
superfluid properties of ultracold gases are influenced
by a spatially random force on the atoms.
In some experiments,
the random potential is created by optical means to show
its effects on the transport
properties of a Bose gas \cite{Aspect,Inguscio,Ertmer}.
In most others, however, one
must rather face the reality of
unavoidable external random forces which are
induced either by the roughness of a
dielectric surface \cite{Perrin}, by the magnetic field
along wires with current
irregularities \cite{Schmiedmayer}, or by different localized atomic species
\cite{Sengstock}. Furthermore, recent theoretical results on
the impact of randomness on bosons in lattices
are reviewed in Refs.~\cite{Krutitsky,Lewenstein}.
Most theoretical
studies of a 3D disordered Bose gas are limited to calculations up to
second order in the random potential.
In these works, the weak disorder induces only small corrections to the
condensate depletion, the superfluid density \cite{HM},
the collective
excitations and their damping \cite{Giorgini},
and the condensation critical point
\cite{Lopatin}. The extension to strong disorder has so far been analysed only
numerically in Ref.~\cite{Giorgini2}.
More recently, an analytical mean-field study \cite{Pelster}, which takes into
account higher-order corrections,
has shown the possibility
of having a transition from a superfluid phase to a Bose glass phase where
the spatial long-range correlations have completely disappeared.
In this paper, we address the issue of the influence of strong disorder
at zero temperature for a finite correlation length \cite{Kobayashi,Timmer}.
The random potential follows a Gaussian distribution and is said to be
uncorrelated in the case where all Fourier components contribute
equally to the randomness,
while it is correlated when the influence of the Fourier components
falls off for wavenumbers larger than the inverse correlation length $\xi$.
As encountered in experiments \cite{Schmiedmayer},
we choose a Lorentzian correlation
function. Usually, this correlation length turns out to be much bigger
than the healing length and thus affects the condensate properties.
In order to simplify this physical problem, we assume that all particles
occupy the same quantum state, for which the macroscopic wave-function obeys
the Gross-Pitaevskii equation in the presence of an external spatially
random force.
In order to solve this stochastic nonlinear differential equation,
we apply the random phase approximation (RPA) \cite{Pines} and take
the ensemble average over all possible realisations of the associated
potential. In the clean case without a random potential,
this gapless and conserving approximation has been successfully
used in the context of calculating
the collective excitations at finite temperatures
\cite{Reidl,Fliesser} and in kinetic theory \cite{condenson}.
In our case with disorder, we obtain the particle density distribution
beyond the lowest order expansion in the random potential.
With this we extend
the seminal work of Ref.~\cite{HM} to strong disorder, which has
the consequence
that the superfluid phase disappears
by a first-order transition. We show how the critical value
of the disorder strength, for which this transition
occurs, depends on the correlation length.
\section{Momentum Distribution}
We start from the Gross-Pitaevskii equation at zero temperature written in
Fourier space. Defining the Fourier components according to
$\psi(\vc{r})=\sum_{\vc{k}}e^{i\vc{k}.\vc{r}}\alpha_{\vc{k}}/\sqrt{V}$,
we get in units with $\hbar =1$:
\begin {eqnarray}
\label{EVOLV}
\left(i\frac{\partial}{\partial t}-\frac{\vc{k}^2}{2m}\right)\alpha_\vc{k}&=&
\sum_{\vc{q}}U_{\vc{q}}\alpha_{\vc{k}-\vc{q}}\nonumber \\
&& + \frac{g}{V}\sum_{\vc{k'},\vc{q}}\alpha^*_{\vc{k'}}
\alpha_{\vc{k}+\vc{q}}\alpha_{\vc{k'}-\vc{q}} \, ,
\end{eqnarray}
where the random potential $U_{\vc{q}}$ follows a Gaussian distribution
$\langle U_{\vc{q}}U_{\vc{q'}}\rangle=\delta_{\vc{q},-\vc{q'}}R(\vc{q})/V$.
The quadratic amplitude is assumed to be Lorentzian $R(\vc{q})=R/(1+\xi^2 \vc{q}^2)$,
i.e.~it is correlated below the wavenumber $1/\xi$.
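For illustration, one realisation of such a potential can be drawn on a 1D grid by weighting independent Gaussian Fourier amplitudes with the square root of the Lorentzian spectrum; the grid, units, and overall normalisation below are purely illustrative.

```python
import numpy as np

def lorentzian_disorder(n, length, R, xi, seed=0):
    """Draw one realisation of a Gaussian random potential whose Fourier
    amplitudes are weighted by the Lorentzian spectrum R(q) = R/(1 + xi^2 q^2)."""
    rng = np.random.default_rng(seed)
    q = 2 * np.pi * np.fft.fftfreq(n, d=length / n)   # grid wavenumbers
    weight = np.sqrt(R / (1.0 + (xi * q) ** 2))       # square root of the spectrum
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    # Taking the real part of the inverse FFT gives a real-valued sample;
    # the normalisation convention here is illustrative only.
    return np.fft.ifft(weight * noise).real * np.sqrt(n)

U = lorentzian_disorder(256, 50.0, R=1.0, xi=2.0)
print(U.shape)  # (256,)
```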
We assume that a macroscopic fraction of the
condensate moves with a velocity $\vc{k_s}/m$.
In order to derive a dynamic dielectric function for the total fluid,
we define the bilinear combination
\begin{eqnarray}
\label{RHO}
\rho_{\vc{k},\vc{q}}=\alpha^*_{\vc{k}}\alpha_{\vc{k}+\vc{q}} \, .
\end{eqnarray}
It represents an excitation of momentum $\vc{q}$ created from
a particle which releases its momentum from $\vc{k}+\vc{q}$ to
$\vc{k}$. In particular, it allows the definition of
the Fourier component of the density fluctuation:
$\rho_{\vc{q}}=\sum_{\vc{k}} \rho_{\vc{k},\vc{q}}=
\rho^*_{\vc{-q}}$. The dynamic evolution of (\ref{RHO})
following from (\ref{EVOLV}) is given by:
\begin{eqnarray}
\label{he}
\hspace*{-30mm}
i\frac{\partial}{\partial t}\rho_{\vc{k},\vc{q}}
&=&(\epsilon_{\vc{k}+\vc{q}}
-\epsilon_{\vc{k}})
\rho_{\vc{k},\vc{q}}+
\\ \nonumber
\sum_{\vc{q'}}
(U_{\vc{q'}}&+&\frac{g}{V} \sum_{\vc{k'}}\alpha_\vc{k'}^*\alpha_{\vc{k'+q'}})
(\alpha^*_{\vc{k}}\alpha_{\vc{k+q}-\vc{q'}}-
\alpha^*_{\vc{k}+\vc{q'}}\alpha_\vc{k+q}) \, .
\end{eqnarray}
The technical details for implementing the
random phase approximation in the clean case can be found either in
Ref.~\cite{Pines} or in Ref.~\cite{condenson} where
a finite-temperature Bose gas has been considered with
$\rho_{\vc{k},\vc{q}}$ being a quantum operator. Here, we shall briefly
repeat this procedure by
considering random variables instead of operators.
We treat the quartic terms in (\ref{he})
within a factorisation procedure
and use the property that
$\langle \rho_{\vc{k},\vc{q}} \rangle =0$ for $\vc{q}\not=0$ in
a homogeneous gas.
For any quartic term,
we approximate
\begin{eqnarray}\label{fact}
&& \hspace*{-10mm}\alpha^*_{\vc{k_1}}\alpha^*_{\vc{k_2}}
\alpha_{\vc{k_3}}\alpha_{\vc{k_4}}
- \langle \alpha^*_{\vc{k_1}}\alpha^*_{\vc{k_2}}
\alpha_{\vc{k_3}}\alpha_{\vc{k_4}} \rangle \simeq
\nonumber \\ &+&
\langle |\alpha_{\vc{k_1}}|^2 \rangle \delta_{\vc{k_1},\vc{k_3}}
\alpha^*_{\vc{k_2}}\alpha_{\vc{k_4}}
\nonumber \\
&+&\langle |\alpha_{\vc{k_2}}|^2 \rangle
\delta_{\vc{k_2},\vc{k_4}}
(1+\delta_{\vc{k_1},\vc{k_2}}\delta_{\vc{k_3},\vc{k_4}})
\alpha^*_{\vc{k_1}} \alpha_{\vc{k_3}}
\nonumber \\
&+&
\langle |\alpha_{\vc{k_1}}|^2 \rangle
\delta_{\vc{k_1},\vc{k_4}}
(1-\delta_{\vc{k_1},\vc{k_2}}-\delta_{\vc{k_3},\vc{k_4}})
\alpha^*_{\vc{k_2}}\alpha_{\vc{k_3}}
\nonumber \\
&+&
\langle |\alpha_{\vc{k_2}}|^2 \rangle
\delta_{\vc{k_2},\vc{k_3}}
(1-\delta_{\vc{k_1},\vc{k_2}}-\delta_{\vc{k_3},\vc{k_4}})
\alpha^*_{\vc{k_1}}\alpha_{\vc{k_4}} \,
\end{eqnarray}
which avoids double counting. For example,
in the case of $\vc{k_1}=\vc{k_2}$ and
$\vc{k_3}\not=\vc{k_4}$
the approximation reduces to only two terms, which is important for
contributions involving the macroscopic component $\vc{k_s}$
of the condensate. Since the
average over the
quartic term in Eq.~(\ref{fact}) applied in (\ref{he})
involves components with
a total transfer momentum $\vc{q}\not=0$,
it does not contribute for a homogeneous gas.
Through this procedure,
the RPA keeps among all terms those combinations
involving products of off-diagonal terms $\rho_{\vc{k'},\vc{q}}$
and averaged diagonal ones $n_\vc{k''}=\langle |\alpha_{\vc{k''}}|^2 \rangle $
for all possible values of $\vc{k'}$
and $\vc{k''}$,
and neglects all other combinations.
Therefore, we remove contributions which are bilinear in
$\rho_{\vc{k'},\vc{q'}}$
for $\vc{q'}\not=\vc{q},\vc{0}$.
In this way, we
obtain the linear integral equation for $\vc{q} \not= \vc{0}$:
\begin{eqnarray}
\label{rho4}
\left[i\frac{\partial}{\partial t}
-(\epsilon_{\vc{k}+\vc{q}}-\epsilon_{\vc{k}})\right]
\rho_{\vc{k},\vc{q}}=
\nonumber \\
(U_{\vc{q}}+\frac{g \rho_{\vc{q}}}{V})
(n_{\vc{k}}-n_{\vc{k+q}})
+\frac{g\rho_{\vc{q}}}{V}( n'_{\vc{k}}-n'_{\vc{k+q}})
\nonumber \\
+ (1-\delta_{\vc{k},\vc{k_s}}-\delta_{\vc{k},\vc{k_s}-\vc{q}})
\!\!\!\! \sum_{\vc{k'}\not=
\vc{k_s},\vc{k_s}-\vc{q}}
\frac{g\rho_{\vc{k'},\vc{q}}}{V}
n_{\vc{k_s}}
\, .
\end{eqnarray}
Here $n'_{\vc{k}}=(1-\delta_{\vc{k_s},\vc{k}})n_{\vc{k}}$ refers
to the disordered part of the condensate, which consists of all its parts
that are not at the wavenumber $\vc{k_s}$.
Note at this stage that $\rho_{\vc{k},\vc{q}}$ is still
a random variable. Thus, we should still take
the ensemble average over
any non-vanishing combination like
$\langle U_{-\vc{q}} \rho_{\vc{k},\vc{q}} \rangle$
and $\langle \rho_{\vc{k'},-\vc{q}} \rho_{\vc{k},\vc{q}} \rangle$
and solve the resulting equations. Equivalently, here we directly solve
Eq.~(\ref{rho4}) for $\rho_{\vc{k},\vc{q}}$
and perform the disorder average at a later stage.
In order to make the link with the dielectric formalism, let us
assume for the moment that the potential has a temporal dependence
of the form
$U_{\vc{q}}(t)=\exp(-i\omega t)U_{\vc{q},\omega}$. Then the solution is
of the form $\rho_{\vc{k},\vc{q}}(t)=
\exp(-i\omega t)\rho_{\vc{k},\vc{q},\omega}$ as well.
Under these conditions, we find a solution:
\begin{eqnarray}\label{sol}
\lefteqn{\rho_{\vc{k},\vc{q},\omega}=
\frac{
(U_{\vc{q},\omega}+ 2g \rho_{\vc{q},\omega}/V)
(n_{\vc{k}}-n_{\vc{k+q}})}{\omega+i0_+ -
(\epsilon_{\vc{k}+\vc{q}}-\epsilon_{\vc{k}})}}
\nonumber \\ \nonumber
&&\times
\frac{
(\omega+i0_+ -
\frac{\vc{k_s}.\vc{q}}{m})^2-(\frac{\vc{q}^2}{2m})^2
}
{(\omega+i0_+ -
\frac{\vc{k_s}.\vc{q}}{m})^2-(\frac{\vc{q}^2}{2m})^2+
\frac{g n_{\vc{k_s}}\vc{q}^2}{mV}
(\delta_{\vc{k},\vc{k_s}}+\delta_{\vc{k},\vc{k_s-q}})}
\nonumber \\
\end{eqnarray}
which is similar to Ref.~\cite{condenson}.
The Fourier component of the density fluctuations can
therefore be written in the form:
\begin{eqnarray}\label{resp}
\rho_{\vc{q},\omega}= \chi(\vc{q},\omega)U_{\vc{q},\omega} \, ,
\end{eqnarray}
with the susceptibility
\begin{eqnarray}\label{K}
\chi(\vc{q},\omega)=
\frac{V}{2g}\left[\frac{1}{{\cal K}(\vc{q},\omega)}-1\right]\, .
\end{eqnarray}
The dynamic dielectric function ${\cal K}(\vc{q},\omega)$ defined
by (\ref{K}) for the
total fluid can be decomposed into \cite{condenson}
\begin{eqnarray}
\label{kal}
{\cal K}(\vc{q},\omega)
&=&{\cal K}_n(\vc{q},\omega) \nonumber \\
&&\hspace*{-5mm}+\frac{-\frac{2g n_{\vc{k_s}}}{V}
\frac{\vc{q}^2}{m}}
{\left(\omega+i0_+ -
\frac{\vc{k_s}.\vc{q}}{m}\right)^2-\left(\frac{\vc{q}^2}{2m}\right)^2
+\frac{g n_{\vc{k_s}} \vc{q}^2}{mV}}\, ,
\end{eqnarray}
where we obtain for the disordered part of the fluid:
\begin{eqnarray}
\label{kn}
{\cal K}_n(\vc{q},\omega)=
1- \frac{2g}{V}\sum_{\vc{k}}
\frac{n'_{\vc{k}}-n'_{\vc{k+q}}}{
\omega+i0_+ -
\frac{\vc{k}.\vc{q}}{m}-\frac{\vc{q}^2}{2m}} \, ,
\end{eqnarray}
(cf. also \cite{condenson,Graham}).
In this way, we recover the same expression for
the susceptibility function $\chi(\vc{q},\omega)$ as in
Refs.~\cite{Fliesser,Graham}.
In the special case of a time-independent potential
$U_{\vc{q},\omega}=U_{\vc{q}}\delta_{\omega,0}$ which is of interest here, the
solution (\ref{sol}) can be used to define a self-consistent
relation that allows the calculation of $n'_{\vc{k}}$. At first, combining
Eqs.~(\ref{sol})--(\ref{kn}), we get the response function
for a particle of the condensate to be excited with momentum $\vc{q}$:
\begin{eqnarray}
\label{respc}
\rho_{\vc{k_s},\vc{q}}=
n_\vc{k_s}\frac{U_{\vc{q}}}{\left(i0_+ -
\frac{\vc{k_s}.\vc{q}}{m}-
\frac{\vc{q}^2}{2m}\right){\tilde{{\cal K}}}(\vc{q})}\, .
\end{eqnarray}
For $\omega=0$, the screening factor ${\tilde{\cal{K}}}(\vc{q})$
defined by (\ref{respc}) for any external force
acting on the condensate particles is:
\begin{eqnarray}\label{kalc}
&&\hspace*{-9mm}{\tilde{{\cal K}}}(\vc{q})=
\nonumber \\
&&\hspace*{-9mm}\frac{-(\frac{\vc{k_s}.\vc{q}}{m})^2+(\frac{\vc{q}^2}{2m})^2}
{{\cal K}_n(\vc{q},0)\left[
{\epsilon^B_{\vc{q}}}^2-( \frac{\vc{k_s}.\vc{q}}{m})^2\right]
- \left[{\cal K}_n(\vc{q},0)-1 \right]\frac{2g n_{\vc{k_s}}\vc{q}^2}{mV}}
\, .
\end{eqnarray}
In the simple special case $n'_\vc{k}=0$, i.e.~${\cal K}_n(\vc{q},0)=1$ and
$\vc{k_s}=0$, this simplifies to:
\begin{eqnarray}
\label{resp1}
{\tilde{{\cal K}}}(\vc{q})=\frac{1}{1+(4mgn_0/V)/\vc{q}^2} \, .
\end{eqnarray}
This formula shows that the random force acting on the condensate particles
is screened
for momenta below the inverse of the healing length, which here plays
the role of the Debye length for the condensate.
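Writing the healing length as $\xi_h$ with $1/\xi_h^2 = 4mgn_0/V$, the screening factor becomes $\tilde{\cal K}(\vc{q})=1/\bigl(1+1/(q\xi_h)^2\bigr)$, which a few sample values make explicit (the reduced units are illustrative):

```python
def screening(q, xi_h=1.0):
    """Static screening factor 1 / (1 + 1/(q*xi_h)^2): random forces with
    momenta well below the inverse healing length are strongly suppressed."""
    return 1.0 / (1.0 + 1.0 / (q * xi_h) ** 2)

for q in (0.1, 1.0, 10.0):
    print(q, screening(q))
```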
Let us now notice that:
$\langle \rho^*_{\vc{k_s},\vc{q}} \rho_{\vc{k_s},\vc{q}} \rangle
=\langle |\alpha_{\vc{k_s}}|^2 |\alpha_{\vc{k_s +q}}|^2 \rangle
= n_{\vc{k_s}}n'_{\vc{k_s}+\vc{q}}$ for $\vc{q}\not= 0$ whereas the
fluctuations of the macroscopic $\vc{k_s}$ component around
the average are small in the thermodynamic limit.
Using this relation in (\ref{respc}) and (\ref{kalc}),
we arrive at the following non-linear integral
equation for $n'_{\vc{k_s+q}}$:
\begin{eqnarray}
\label{eqn}
&& \hspace*{-11mm}n'_{\vc{k_s}+\vc{q}}=
\nonumber \\
&& \hspace*{-11mm}\frac{R(\vc{q})(\epsilon_{\vc{k_s-q}}-\epsilon_{\vc{k_s}})^2 n_\vc{k_s}/V}
{|{\cal K}_n(\vc{q},0)[
{\epsilon^B_{\vc{q}}}^2-(
\frac{\vc{k_s}.\vc{q}}{m})^2]
-
({\cal K}_n(\vc{q},0)-1)\frac{2g n_{\vc{k_s}}}{V}
\frac{\vc{q}^2}{m}|^2} \, ,
\end{eqnarray}
where $\epsilon^B_{\vc{q}}=\sqrt{c_B^2 \vc{q}^2 +
(\vc{q}^2 / 2m)^2}$
denotes the Bogoliubov excitation energy and
$c_B=\sqrt{g n_{\vc{k_s}}/ m V}$ is the sound
velocity in the absence of disorder.
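As a side illustration of the two regimes of $\epsilon^B_{\vc{q}}$ (in reduced units with $m=c_B=1$, chosen only for this sketch):

```python
import math

def bogoliubov(q, m=1.0, c=1.0):
    """Bogoliubov dispersion sqrt(c^2 q^2 + (q^2/(2m))^2): linear (phonon-like)
    for q -> 0, quadratic (free-particle-like) for large q."""
    return math.sqrt((c * q) ** 2 + (q * q / (2 * m)) ** 2)

print(bogoliubov(0.01), bogoliubov(100.0))
```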
The nonlinearity comes from the fact that the dielectric function
${\cal K}_n(\vc{q},0)$
depends via Eq.~(\ref{kn}) on $n'_{\vc{k_s+q}}$.
Let us first consider the case of weak disorder where we can approximate
${\cal K}_n(\vc{q},0) \simeq 1$ and recover
the second-order result for the disordered components of the condensate
\cite{HM}:
\begin{eqnarray}
n'_{\vc{k_s}+\vc{q}}=
\frac{R(\vc{q})(\epsilon_{\vc{k_s-q}}-\epsilon_{\vc{k_s}})^2 n_\vc{k_s}/V}
{\left[
{\epsilon^B_{\vc{q}}}^2-\left(\frac{\vc{k_s}.\vc{q}}{m}\right)^2\right]^2} \, .
\end{eqnarray}
This formula is regular provided the Landau stability criterion
$\epsilon^B_{\vc{q}}>|\vc{k_s}.\vc{q} / m|$ is satisfied.
An increase of $\vc{k_s}$ would
increase the disordered part of the fluid until
a singularity is reached, leading to an instability.
For $\vc{k_s}=0$, we recover the weak-disorder result
for this part \cite{HM}:
\begin{eqnarray}
\sum_\vc{q} n'_{\vc{q}}= \sum_\vc{q}
\frac{R(\vc{q})n_{\vc{0}}/V}
{[
\epsilon_{\vc{q}}+2gn_{\vc{0}}/V]^2}
=R \frac{m^{3/2}}{4\pi^2}\sqrt{\frac{V}{gn_0}} n_0 \, ,
\end{eqnarray}
where the closed form on the right holds for uncorrelated disorder, $R(\vc{q})=R$.
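Converting the momentum sum into an integral involves the radial integral $\int_0^\infty q^2\,dq/(q^2+b^2)^2=\pi/(4b)$; a quick numerical check of this identity (the quadrature parameters are illustrative):

```python
import math

def radial_integral(b, dq=1e-2, qmax=2000.0):
    """Midpoint quadrature of the radial integral I(b) = int_0^inf q^2/(q^2+b^2)^2 dq,
    which analytically equals pi/(4 b)."""
    total, q = 0.0, dq / 2
    while q < qmax:
        total += q * q / (q * q + b * b) ** 2 * dq
        q += dq
    return total

b = 2.0
print(radial_integral(b), math.pi / (4 * b))  # both close to 0.3927
```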
The superfluid fraction $n_s$ is defined as the part of the fluid which
is moving at a velocity $\vc{k_s}/m$ and is found by calculating the total
momentum of the gas:
\begin{eqnarray}
\vc{P}=\sum_{\vc{k}} \vc{k} n_{\vc{k}}
=\vc{k_s} n_s \, .
\end{eqnarray}
In the limit of small velocity $\vc{k_s}/m$, we get the expression:
\begin{eqnarray}
n-n_s=\frac{1}{3}\sum_\vc{k} \left(\vc{k}.\frac{\partial}{\partial \vc{k_s}}
n_{\vc{k_s+k}}\right)\biggr|_{\vc{k_s}=0} \, .
\end{eqnarray}
Noticing that ${\cal K}_n(\vc{q},0)$ with the solution $n'_{\vc{k}}$
is an even function of $\vc{k_s}$, we deduce from Eq.~(\ref{eqn}):
\begin{eqnarray}\label{hms}
n-n_s=\frac{4}{3}(n - n_0) \, .
\end{eqnarray}
This relation is identical to the one in Ref.~\cite{HM}
but is here found to remain valid in the RPA for the more
general case of strong disorder.
\begin{figure}[t]
\scalebox{0.5}{
\includegraphics{rpaxi0.eps}}
\caption{Clean part of the condensate fraction $n_\vc{0}$
as a function of $R^*$ in the case of uncorrelated disorder in the RPA model
(full curve) and in the HM model (dotted curve) and the corresponding
superfluid fraction $n_s$
(dot-dashed curve in RPA and dashed curve in HM model).
}
\label{fig1}
\end{figure}
\begin{figure}[t]
\scalebox{0.5}{
\includegraphics{rpaxi1.eps}}
\caption{Same as for Fig.1 but for correlated disorder with $\xi^*=1$.
}
\label{fig2}
\end{figure}
\begin{figure}[t]
\scalebox{0.5}{
\includegraphics{rpaxi10.eps}}
\caption{Same as for Fig.1 but for correlated disorder with $\xi^*=10$.
}
\label{fig3}
\end{figure}
\begin{figure}[t]
\scalebox{0.5}{
\includegraphics{xir.eps}}
\caption{Critical value for the disorder intensity $R^*_c$ as a function of
the correlation length $\xi^*$.}
\label{fig4}
\end{figure}
\section{Phase transition}
Equation (\ref{eqn}) is solved numerically by an iterative procedure
as a function
of the reduced dimensionless parameters for the disorder strength
$R^*=R m^{3/2}/\sqrt{4\pi^2gn/V}$
and the correlation length $\xi^*= \sqrt{4m g n/V} \xi $. We have looked
for the stable solution that minimizes the total energy of the system.
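Schematically (and not as the actual numerics of this paper), such an iteration can be organised as a fixed point for the clean condensate fraction; for simplicity the sketch below sets ${\cal K}_n=1$, keeps only the feedback of $n_0$ in the Bogoliubov denominator, and uses illustrative reduced units.

```python
import math

def solve_condensate(Rstar, xistar, n=1.0, m=1.0, g=1.0, tol=1e-10):
    """Toy fixed-point iteration in the spirit of the text: the clean
    condensate fraction n0 feeds back into the Bogoliubov denominator.
    K_n is set to 1 and units are reduced, so this is only a sketch."""
    def depleted(n0):
        # quadrature of (1/2 pi^2) * int q^2 R(q) n0 / (q^2/2m + 2 g n0)^2 dq
        dq, qmax, s, q = 1e-2, 60.0, 0.0, 5e-3
        while q < qmax:
            eps = q * q / (2 * m)
            Rq = Rstar / (1 + (xistar * q) ** 2)
            s += q * q * Rq * n0 / (eps + 2 * g * n0) ** 2 * dq
            q += dq
        return s / (2 * math.pi ** 2)
    n0 = n
    for _ in range(200):
        new = max(n - depleted(n0), 0.0)
        if abs(new - n0) < tol:
            return new
        n0 = new
    return n0

print(solve_condensate(Rstar=0.1, xistar=1.0))
```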
The homogeneous part of the
condensate $n_0$ and the superfluid fraction $n_s$
are plotted in
Figs. 1--3 as a function of the reduced disorder strength $R^*$ and are compared
with the results obtained by HM.
The transition is determined as the highest value $R^*=R^*_c$ for
which the condensate fraction $n_0$ is non-zero so that spatial
coherence is preserved over the entire space.
This point corresponds to the situation where $n_0$ has an infinite derivative
with respect to $R^*$. As a consequence of relation (\ref{hms}),
also the superfluid density $n_s$
has there an infinite derivative.
For uncorrelated disorder, i.e.~$\xi^*=0$, we notice only a tiny difference
with respect to the second-order theory until we reach the critical value,
at which the disordered part of the
condensate amounts to about 30 percent. For higher values of $\xi^*$
this difference becomes more pronounced, but a smaller fraction of the
disordered part of the fluid is needed
in order to reach the transition. Fig.~4 shows that the
critical value $R_c^*$ increases as a function of the correlation length,
which is understandable from the fact that for larger $\xi^*$ the high spatial
frequency components of the disordered part of the condensate are smaller
(cf.~Eq.~(\ref{eqn})).
\section{Conclusions}
The random phase approximation has been applied to the
Gross-Pitaevskii equation in the presence of a random potential
in order to describe a strongly disordered Bose gas at zero temperature.
This approximation goes beyond a previous second-order calculation and predicts
a first-order phase transition from a superfluid phase to a non-superfluid phase.
The critical value of the disorder intensity for this transition
depends strongly on the correlation length. Nevertheless, our model
fails to describe the properties of the non-superfluid phase.
A possible explanation is that the assumption
of a unique wave function for every particle
excludes
the possibility of having fragmented condensates for
strong disorder, which could be necessary in such a phase.
\section*{Acknowledgements}
This work was supported by the SFB/TR 12 of the German Research Foundation (DFG).
Furthermore, PN acknowledges support from the German AvH Foundation,
from the Belgian
FWO project
3E050202, and from
Junior Fellowship F/05/011 of the KUL Research Council.